Abstract
For mobile robotics, head-mounted gear in augmented reality (AR) applications, and computer vision, it is essential to continuously estimate the egomotion and the structure of the environment. This paper presents the system developed in the SmartTracking project, which integrates visual and inertial sensors in a combined estimation scheme. The sparse structure estimation is based on the detection of corner features in the environment. Starting from a single known position, the system can move into an unknown environment. The vision and inertial data are fused, and the performance of the Unscented Kalman filter and the Extended Kalman filter is compared for this task. The filters are designed to handle asynchronous input from the visual and inertial sensors, which typically operate at different and possibly varying rates. Additionally, a bank of Extended Kalman filters, one per corner feature, is used to estimate the position and quality of structure points and to include them in the structure estimation process. The system is demonstrated on a mobile robot executing known motions, so that the estimated egomotion in an unknown environment can be compared to ground truth.
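To make the asynchronous fusion concrete, the following is a minimal sketch under stated assumptions, not the SmartTracking implementation: the class name AsyncEKF, the toy position-velocity state, and all noise values are hypothetical. Each timestamped measurement first triggers a prediction to its own arrival time, so visual and inertial inputs at different and possibly varying rates can be processed in strict time order. In a real visual-inertial filter the inertial data would typically drive the process model; here both sensors are treated as measurement sources purely for brevity.

```python
import numpy as np

class AsyncEKF:
    """EKF over a toy state [p (3), v (3)] fed by timestamped inputs."""

    def __init__(self, x0, P0, q=1e-3):
        self.x = x0        # state estimate, shape (6,)
        self.P = P0        # state covariance, shape (6, 6)
        self.t = 0.0       # timestamp of the last processed input
        self.q = q         # process-noise intensity (hypothetical value)

    def predict_to(self, t):
        """Propagate state and covariance forward to timestamp t."""
        dt = t - self.t
        F = np.eye(6)
        F[:3, 3:] = dt * np.eye(3)               # constant-velocity model
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.q * dt * np.eye(6)
        self.t = t

    def update(self, z, H, R):
        """Standard EKF correction with measurement z = H x + noise."""
        y = z - H @ self.x                       # innovation
        S = H @ self.P @ H.T + R                 # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ H) @ self.P

# Two sensors with different, varying rates; measurements are simply
# processed in timestamp order, predicting to each one's own time.
ekf = AsyncEKF(np.zeros(6), np.eye(6))
H_cam = np.hstack([np.eye(3), np.zeros((3, 3))])  # camera: observes position
H_imu = np.hstack([np.zeros((3, 3)), np.eye(3)])  # inertial: reduced here to
                                                  # a velocity measurement
events = [                                        # (time, H, z, noise var)
    (0.010, H_imu, np.array([0.10, 0.0, 0.0]), 1e-2),
    (0.033, H_cam, np.array([0.002, 0.0, 0.0]), 1e-4),
    (0.020, H_imu, np.array([0.11, 0.0, 0.0]), 1e-2),
]
for t, H, z, r in sorted(events, key=lambda e: e[0]):
    ekf.predict_to(t)                             # handle asynchronous arrival
    ekf.update(z, H, r * np.eye(3))
print(ekf.x)
```

The per-feature filter bank described in the abstract could follow the same pattern, with each corner feature's small EKF advanced to the image timestamp before its measurement is applied.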
