Abstract
One of the essential problems of feature-based visual servoing is computing the inverse feature Jacobian, which relates changes in image features to changes in robot position. Neural networks can approximate this inverse Jacobian, and they also make it easy to use other forms of vision input to position the robot. The vision system is primarily responsible for reducing the dimensionality of the input, which lowers the computational load on the system. In this paper we develop a system that uses neural networks both to encode the image and to generate control signals. In our system, image dimensionality can be reduced in four ways: feature extraction, averaging compression, vector quantization, and principal component expansion. We demonstrate that neural networks can perform both image analysis and control of a vision-guided robot with little loss of accuracy compared to feature-based extraction.
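The pipeline the abstract describes, compressing an image with principal components and then regressing control signals from the compressed features, can be sketched as follows. This is a minimal illustration under assumed dimensions and synthetic data, not the paper's implementation; the variable names (`positions`, `features`, etc.) and the use of a linear least-squares layer in place of a trained network are assumptions for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "images": 64-pixel vectors whose content depends linearly on a
# 2-DOF robot position, plus noise (a stand-in for a real camera view).
n_samples, n_pixels, n_dof = 200, 64, 2
positions = rng.uniform(-1, 1, size=(n_samples, n_dof))
mixing = rng.normal(size=(n_dof, n_pixels))
images = positions @ mixing + 0.01 * rng.normal(size=(n_samples, n_pixels))

# Principal-component expansion: project each image onto the top-k
# principal directions of the data to reduce its dimensionality.
k = 8
mean = images.mean(axis=0)
_, _, vt = np.linalg.svd(images - mean, full_matrices=False)
features = (images - mean) @ vt[:k].T          # shape: (n_samples, k)

# A minimal single-layer mapping (linear least squares) from compressed
# features to robot positions -- the role played by the control network.
w, *_ = np.linalg.lstsq(features, positions, rcond=None)
pred = features @ w
mean_abs_error = np.abs(pred - positions).mean()
```

Because the synthetic image-to-position relation here is linear, the top principal components capture it almost exactly and the recovered positions are accurate; the paper's neural network plays the analogous role for the nonlinear real-camera mapping.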
