Abstract
This work compares the performance of three neural network models in predicting the intermediary pose of a robot end effector for visual servoing tasks. Robotic applications in complex workspaces benefit from non-contact sensing technologies such as vision. Visual feedback control of a two-camera robotic system combines the global visibility of a fixed camera guiding the robot with the local visibility of an end-effector-mounted camera, achieving convergence for pick-and-place tasks. Neural networks replace the control law for visual guidance when the target is initially outside the field of view of the eye-in-hand camera. Visual features collected by the eye-to-hand camera, together with the robot pose, form the input to three network types: the Multilayer Perceptron Neural Network (MLPNN), the Radial Basis Function Neural Network (RBFNN), and the Elman Neural Network (ENN). The robot moves to the predicted pose, which is favorable for switching to Image-Based Visual Servoing (IBVS) and limits the number of discrete events. Simulation studies and experiments with an ABB robot are performed to draw conclusions regarding network performance.
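To illustrate the learning setup the abstract describes, the following is a minimal sketch, not the authors' implementation: a multilayer perceptron maps eye-to-hand image features concatenated with the current robot pose to a predicted intermediary end-effector pose. The feature dimensions, network sizes, and the synthetic training data are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation):
# an MLP regressor predicts an intermediary 6-DOF pose from eye-to-hand
# image features plus the current robot pose.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Assumed input: 8 image-feature coordinates (e.g., 4 point features seen by
# the fixed eye-to-hand camera) concatenated with a 6-DOF robot pose.
X = rng.uniform(-1.0, 1.0, size=(2000, 8 + 6))
# Assumed target: the 6-DOF intermediary pose from which IBVS can take over.
# A synthetic placeholder mapping stands in for real training data here.
y = 0.5 * X[:, 8:14] + 0.1 * np.tanh(X[:, :6])

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                   max_iter=2000, random_state=0)
mlp.fit(X, y)

# At run time, the robot would be commanded to the predicted pose before
# the controller switches to image-based visual servoing (IBVS).
predicted_pose = mlp.predict(X[:1])
print(predicted_pose)
```

An RBFNN or ENN variant would replace only the regressor in this sketch; the input/output interface (image features plus pose in, intermediary pose out) stays the same across the three compared models.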