Abstract
Foggy environments pose significant challenges to autonomous driving because attenuation and backscattering degrade the performance of LiDAR-camera fusion-based perception systems. In this study, we introduce FogFusion, a novel 3D object detection network designed to operate effectively under foggy conditions through a synergistic camera-LiDAR fusion approach. Our method integrates a Depth Completion network with Fog Convolution (DCFC) to generate virtual point clouds that densify the original sparse LiDAR data. These enhanced point clouds are then processed using a Flexible Cylindrical Voxel (FCV) encoding scheme. To ensure robust multi-modal feature integration, we employ a Cylindrical Fusion Module (CFM) during fusion. Experimental evaluations on the KITTI and KITTI-C datasets show that FogFusion improves detection performance in foggy conditions by at least 3.32% over the baseline model and surpasses advanced 3D object detection models. These results highlight FogFusion’s potential to significantly enhance the environmental perception capabilities of autonomous vehicles operating in foggy weather.
