Visual Localization and Dense Map Construction in Dynamic Scenes

Huipeng Li, Zhaolan He, Kadan Xie, Cong Xue

Abstract

The key technology for autonomous navigation of mobile devices such as robots lies in vision-based Simultaneous Localization and Mapping (SLAM), which has become increasingly mature in idealized static environments. In dynamic scenes, however, existing instance-semantics-based visual SLAM methods suffer from over-segmentation when separating dynamic objects, while deep learning methods increase system runtime. This article therefore proposes a visual SLAM scheme for dynamic scenes. Dynamic regions in an image are detected by combining the fractal dimension with the epipolar constraint; all feature points inside the detected dynamic regions are removed, a back-end model for dynamic scenes is constructed, and a more complete dense point cloud map is built. This enables the visual SLAM system to perform navigation, localization, and obstacle avoidance accurately in real time.
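
The abstract only names the two detection cues; a minimal sketch of how they could be computed, assuming an OpenCV/NumPy setting with hypothetical thresholds (this is not the authors' implementation), might look like the following: a box-counting estimate of the fractal dimension for a candidate image region, and an epipolar-distance test that flags matched feature points violating the epipolar constraint as candidate dynamic points.

```python
# Illustrative sketch only: box-counting fractal dimension of an edge map and
# an epipolar-constraint outlier test for matched feature points.
# All parameter values (e.g. the 1-pixel threshold) are hypothetical.
import numpy as np
import cv2


def box_counting_dimension(edges: np.ndarray) -> float:
    """Estimate the box-counting (fractal) dimension of a binary edge map."""
    # edges: 2-D boolean array, e.g. cv2.Canny(gray, 100, 200) > 0
    sizes = 2 ** np.arange(1, int(np.log2(min(edges.shape))))
    counts = []
    for s in sizes:
        h, w = edges.shape
        # Partition the region into s x s boxes and count boxes touching an edge.
        boxes = edges[: h - h % s, : w - w % s].reshape(h // s, s, w // s, s)
        counts.append(max(int(np.count_nonzero(boxes.any(axis=(1, 3)))), 1))
    # Dimension is the slope of log(count) versus log(1 / box size).
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope


def epipolar_outlier_mask(pts1: np.ndarray, pts2: np.ndarray,
                          thresh: float = 1.0) -> np.ndarray:
    """Flag points in the second frame lying far from their epipolar lines."""
    # pts1, pts2: (N, 2) float arrays of matched feature coordinates, N >= 8.
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    # Epipolar line in image 2 for each point of image 1: l' = F x.
    lines = cv2.computeCorrespondEpilines(pts1.reshape(-1, 1, 2), 1, F).reshape(-1, 3)
    a, b, c = lines[:, 0], lines[:, 1], lines[:, 2]
    dist = np.abs(a * pts2[:, 0] + b * pts2[:, 1] + c) / np.sqrt(a ** 2 + b ** 2)
    # Large point-to-line distances indicate candidate dynamic features.
    return dist > thresh
```

In such a pipeline, regions whose fractal dimension and epipolar-violation ratio both exceed chosen thresholds would be treated as dynamic, and all feature points inside them discarded before pose estimation and dense mapping.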
