Abstract
This paper presents an image-based visual servoing strategy for the autonomous navigation of a holonomic mobile robot from its current pose to a desired one, both specified only through images acquired by the on-board central catadioptric camera. This kind of vision sensor combines lenses and mirrors to enlarge the field of view. The proposed visual servoing does not require any metric information about the three-dimensional viewed scene and is mainly based on a novel geometric property, the auto-epipolar condition, which occurs when two catadioptric views (current and desired) are related by a pure translation. This condition can be detected in real time in the image domain by observing when a set of so-called disparity conics have a common intersection. The auto-epipolar condition and the pixel distances between the current and target image features are used to design the image-based control law. Lyapunov-based stability analysis and simulation results demonstrate the parametric robustness of the proposed method. Experimental results are presented to show the applicability of our visual servoing in a real context.