Abstract
Existing 3D reconstruction algorithms face challenges in real-time performance and robustness in complex environments, limiting their applicability in engineering. This paper proposes a tightly coupled LiDAR-visual-inertial measurement unit (IMU) fusion method tailored to indoor settings. The method extracts and integrates geometric features from visual odometry and employs a spatial hashing-based incremental voxel structure to deeply fuse LiDAR point clouds with visual features. This design improves computational efficiency while enabling real-time state estimation and robust mapping under varying lighting and texture conditions. Experiments on public and self-collected datasets demonstrate that the method achieves accurate pose estimation and 3D reconstruction, ensuring reliable performance in complex environments.
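The core idea of a spatial hashing-based incremental voxel structure can be illustrated with a minimal sketch. This is not the paper's implementation; the voxel size, class name, and methods below are illustrative assumptions. Each 3D point is mapped to an integer voxel key, and only the voxels touched by new points are updated, which keeps incremental insertion and lookup cheap:

```python
from collections import defaultdict

# Hypothetical default voxel edge length in meters; the paper's actual
# resolution is not specified here.
VOXEL_SIZE = 0.1

def voxel_key(point, voxel_size=VOXEL_SIZE):
    """Map a 3D point to its integer voxel coordinates (the hash key)."""
    return tuple(int(c // voxel_size) for c in point)

class IncrementalVoxelMap:
    """Minimal spatial-hash voxel map: a dict from voxel key to stored points."""

    def __init__(self, voxel_size=VOXEL_SIZE):
        self.voxel_size = voxel_size
        self.voxels = defaultdict(list)

    def insert(self, points):
        """Incrementally insert points; only the touched voxels are updated."""
        for p in points:
            self.voxels[voxel_key(p, self.voxel_size)].append(p)

    def query(self, point):
        """Return points stored in the voxel containing `point` (O(1) lookup)."""
        return self.voxels.get(voxel_key(point, self.voxel_size), [])
```

In a tightly coupled pipeline, such a map would hold both LiDAR points and triangulated visual features under a shared key, so correspondences for state estimation reduce to constant-time hash lookups rather than tree traversals.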
