Abstract
Visual SLAM can provide localization and mapping for intelligent vehicles. In practical applications, however, complex environments significantly degrade localization accuracy and real-time performance. To address these challenges, this paper proposes a dynamic-scene visual SLAM method that fuses image segmentation and feature tracking. The method uses an object detection model to quickly identify dynamic regions within each image and actively removes unstable dynamic feature points. A clustering algorithm then partitions the map points and efficiently extracts feature-rich regions, avoiding interference from dynamic objects. Simultaneously, an optical flow algorithm continuously tracks the static feature-rich regions in subsequent images, mitigating the influence of dynamic pixels and achieving efficient feature point extraction. Experimental results demonstrate that, in dynamic environments, the proposed method improves localization accuracy by 70% and reduces per-frame feature extraction time by 45% compared with the ORB-SLAM2 system. In most cases it also outperforms DS-SLAM, confirming that the proposed method effectively improves both localization accuracy and the real-time performance of feature extraction for visual SLAM in dynamic scenes.
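The dynamic-feature removal step described above can be illustrated with a minimal sketch: feature points that fall inside bounding boxes returned by the object detector are discarded before pose estimation. This is not the paper's implementation; the function name `filter_dynamic_features` and the `(x_min, y_min, x_max, y_max)` box format are assumptions for illustration.

```python
# Illustrative sketch (not the authors' code) of rejecting feature points
# that lie inside detected dynamic-object regions.

def filter_dynamic_features(keypoints, dynamic_boxes):
    """Keep only keypoints that lie outside every dynamic bounding box.

    keypoints:     list of (x, y) pixel coordinates
    dynamic_boxes: list of (x_min, y_min, x_max, y_max) detector boxes (assumed format)
    """
    def in_box(pt, box):
        x, y = pt
        x0, y0, x1, y1 = box
        return x0 <= x <= x1 and y0 <= y <= y1

    # A point survives only if no dynamic box contains it.
    return [pt for pt in keypoints
            if not any(in_box(pt, b) for b in dynamic_boxes)]

# Example: one box covering a moving object; two of four keypoints fall inside it.
kps = [(10, 10), (50, 60), (120, 40), (200, 200)]
boxes = [(40, 30, 130, 80)]
static_kps = filter_dynamic_features(kps, boxes)
# static_kps == [(10, 10), (200, 200)]
```

The surviving static points would then be clustered into feature-rich regions and tracked across frames with optical flow, as the abstract outlines.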