Abstract
This paper integrates several novel intelligent concepts to perform scene analysis, hand-eye coordination, and object manipulation, realizing a concrete working robot named COERSU. First, a robust tuner based on genetic algorithms (GA) is presented to optimize the early visual processing. Then, several architectures of the adaptive neuro-fuzzy inference system (ANFIS), multi-layer perceptron (MLP), and k-nearest neighbor (KNN) classifiers are compared for scene analysis and object recognition. Finally, new methods for eye-to-hand visual servoing based on neuro-fuzzy approaches are detailed and compared with relative visual servoing, a new method developed by the authors.
The theoretical model, mathematical framework, and convergence criteria for our visual servoing techniques are also provided. The experiments show that the hybrid intelligent methods converge to the accuracy of relative visual servoing while outperforming it in speed. Snapshots of experimental results from COERSU in a table-top scenario, manipulating soft objects (e.g., fruit and eggs), are provided to validate the methods.
