Abstract
Representing the scene appearance with a global image descriptor (BoW, NetVLAD, etc.) is a widely adopted choice for addressing Visual Place Recognition (VPR). The main reasons are that appearance descriptors can be made largely invariant to radiometric and perspective changes, and that their compactness allows them to scale to large environments. However, addressing metric localization with such descriptors (a problem called Appearance-based Localization, or AbL) yields much poorer accuracy than techniques that exploit observations of 3D landmarks, which represent the standard for visual localization. In this paper, we propose ALLOM (
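To make concrete what "VPR with a global image descriptor" means in practice, the following is a minimal sketch (not the paper's ALLOM method): place recognition reduces to nearest-neighbor retrieval over L2-normalized descriptors. The function names, the 4096-dimensional descriptors (a common NetVLAD output size), and the random placeholder data are all illustrative assumptions.

```python
# Minimal sketch of VPR as global-descriptor retrieval (illustrative only;
# descriptor extraction, e.g. with NetVLAD or BoW, is assumed to happen elsewhere).
import numpy as np

def l2_normalize(x: np.ndarray) -> np.ndarray:
    """Normalize descriptors row-wise so dot products equal cosine similarity."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def recognize_place(query_desc: np.ndarray, db_descs: np.ndarray) -> int:
    """Return the index of the database image whose global descriptor
    is most similar to the query descriptor."""
    q = l2_normalize(query_desc[None, :])   # shape (1, D)
    db = l2_normalize(db_descs)             # shape (N, D)
    sims = db @ q.T                         # cosine similarities, shape (N, 1)
    return int(np.argmax(sims))

# Hypothetical usage with random placeholder descriptors (D = 4096).
rng = np.random.default_rng(0)
database = rng.standard_normal((1000, 4096)).astype(np.float32)
query = database[42] + 0.05 * rng.standard_normal(4096).astype(np.float32)
print(recognize_place(query, database))     # expected: 42
```

Note that this retrieval step only identifies *which* place is seen; it returns no metric pose, which is precisely the accuracy gap between AbL and landmark-based localization that the abstract describes.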
