Abstract
As one of the most widely used types of robots, ground robots play a crucial role in inspection, exploration, rescue, and other applications. In recent years, advancements in light detection and ranging (LiDAR) technology have made sensors more accurate, lightweight, and cost-effective. Consequently, researchers increasingly integrate LiDAR with other sensors, such as inertial measurement units (IMUs) and cameras, for simultaneous localization and mapping (SLAM) studies, providing robust technical support for ground robots and expanding their application domains. Public datasets that incorporate multiple sensors and diverse scenarios are essential for advancing SLAM technology in ground robots. However, existing datasets for ground robots are typically restricted to flat-terrain motion with 3 degrees of freedom (DOF) and cover only a limited range of scenarios. Although handheld devices and unmanned aerial vehicles (UAVs) exhibit richer and more aggressive movements, their datasets are predominantly confined to small-scale environments due to endurance limitations. To fill these gaps, we introduce M2UD, a multi-modal, multi-scenario, uneven-terrain SLAM dataset for ground robots. This dataset contains a diverse range of highly challenging environments, including cities, villages, open fields, long corridors, plazas, underground parking lots, and mixed scenarios. Additionally, it presents extreme conditions such as darkness, smoke, snow, and dust. The aggressive motion and degradation characteristics of this dataset not only pose challenges for testing and evaluating existing SLAM methods but also advance the development of more advanced SLAM algorithms. To benchmark SLAM algorithms, M2UD provides smoothed ground truth localization data obtained via real-time kinematic (RTK) positioning and introduces a novel localization evaluation metric that considers both accuracy and efficiency.
Additionally, we utilize a high-precision millimeter-level laser scanner to acquire ground truth maps of two representative scenes, facilitating the development and evaluation of mapping algorithms. We select 12 localization sequences and 2 mapping sequences to evaluate several classical LiDAR and visual SLAM algorithms, verifying the dataset's usability. To facilitate its use, the dataset is accompanied by a suite of development kits, including tools for data transformation, timestamp alignment, and ground truth smoothing. The dataset and related videos are available at https://yaepiii.github.io/M2UD/.
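Benchmarking SLAM trajectories against RTK ground truth, as described above, typically reduces to an absolute trajectory error (ATE) computation after rigid alignment. Below is a minimal sketch of that standard procedure, assuming trajectories are given as time-synchronized N×3 position arrays and using classical Kabsch/SVD alignment; the function name and data layout are illustrative assumptions, not the dataset's actual toolkit API or the paper's proposed accuracy-and-efficiency metric.

```python
import numpy as np

def ate_rmse(gt_xyz: np.ndarray, est_xyz: np.ndarray) -> float:
    """RMSE of position error after optimal rigid alignment (Kabsch).

    gt_xyz, est_xyz: (N, 3) arrays of time-synchronized positions.
    """
    # Remove translation by centering both trajectories
    gt_c = gt_xyz - gt_xyz.mean(axis=0)
    est_c = est_xyz - est_xyz.mean(axis=0)
    # Cross-covariance between the centered point sets
    H = est_c.T @ gt_c
    U, _, Vt = np.linalg.svd(H)
    # Reflection correction so the result is a proper rotation (det = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    # Rotate the estimate onto the ground truth and measure residuals
    aligned = est_c @ R.T
    return float(np.sqrt(np.mean(np.sum((aligned - gt_c) ** 2, axis=1))))
```

In practice, tools such as the open-source `evo` package implement this evaluation (including timestamp association), which is why the dataset's timestamp-alignment utilities matter for fair benchmarking.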
