Abstract
Vision-based place recognition is becoming an increasingly viable component of navigation systems for autonomous robots and personal aids. However, attaining robustness to variations in environmental conditions—such as time of day, weather, and season—and in camera viewpoint remains a major challenge. Featureless, sequence-based place recognition techniques have demonstrated promise, but often rely on long image sequences, manually tuned parameters, and exhaustive sequence-match searching over multiple locations and image scales. In this paper, we address these deficiencies by implementing a condition-invariant, sequence-based place recognition algorithm suitable for networked environments, such as city streets, and for routes with lateral platform shift, such as multi-lane roads. We achieve this capability by augmenting the traditional 1D image database with a directed graph that describes the branching of contiguous sections of imagery at intersections. A particle filter is then used to efficiently explore these paths, as well as various lateral positions synthesized by rescaling imagery. Our proposed approach eliminates manual tuning of sequence-length parameters, improves localization on branched routes, improves overall place recognition accuracy and coverage, and reduces computational requirements. We evaluated the new method against the original SeqSLAM and SMART algorithms on two day–night, road-based datasets and a summer–winter train dataset, where it attained superior precision–recall performance and coverage in all environments. Together, these contributions represent a significant step towards a robust, near parameter-free, condition- and viewpoint-invariant visual place recognition capability for vehicles and robots.
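To make the core idea concrete, the following is a minimal illustrative sketch of the mechanism the abstract describes: a database of contiguous image sequences linked by a directed graph at intersections, explored by a particle filter whose particles also carry a lateral-offset hypothesis expressed as an image rescaling factor. All names, the example graph, and the propagation/resampling details are assumptions for illustration, not the authors' implementation.

```python
import random

# Directed graph over contiguous sections of database imagery: edges describe
# which section can follow which at an intersection (illustrative topology).
GRAPH = {
    "A": ["B", "C"],   # section A forks into B or C at an intersection
    "B": ["D"],
    "C": ["D"],
    "D": [],           # end of the mapped network
}
SECTION_LEN = {"A": 50, "B": 40, "C": 40, "D": 30}  # frames per section


class Particle:
    """One localization hypothesis: a position on the graph plus a lateral shift."""

    def __init__(self, section, index, scale, weight=1.0):
        self.section = section  # which graph node (image sequence)
        self.index = index      # frame index within that section
        self.scale = scale      # lateral-offset hypothesis via image rescaling
        self.weight = weight    # updated elsewhere from an image-match score


def propagate(p):
    """Advance a particle one frame, following a random branch at forks."""
    p.index += 1
    if p.index >= SECTION_LEN[p.section]:
        successors = GRAPH[p.section]
        if successors:
            p.section = random.choice(successors)  # explore one branch
            p.index = 0
        else:
            p.index = SECTION_LEN[p.section] - 1   # clamp at route end


def resample(particles):
    """Weight-proportional (multinomial) resampling of the particle set."""
    weights = [p.weight for p in particles]
    picks = random.choices(particles, weights=weights, k=len(particles))
    return [Particle(p.section, p.index, p.scale) for p in picks]


# Initialization: spread particles over all sections and a few lateral scales,
# so branched paths and lane offsets are all represented as hypotheses.
particles = [Particle(s, 0, sc) for s in GRAPH for sc in (0.9, 1.0, 1.1)]
for p in particles:
    propagate(p)
```

In a full system, each particle's weight would be updated from a condition-invariant image comparison between the query frame and the database frame at `(section, index)`, rescaled by `scale`; the sketch omits that scoring step and shows only how the graph removes the need for an exhaustive search over every location and scale.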