Abstract
Vision-based multi-task learning (V-MTL) has significantly advanced autonomous driving. However, state-of-the-art (SOTA) V-MTL methods still suffer from limited receptive fields and unsatisfactory feature extraction, which makes them insufficient for handling fast scale variation, frequent illumination variation, and partial occlusion, especially in tunnels, on skyways, and on highways. To address these issues, this work proposes a novel ghosting fusion network, termed GF-Net, for autonomous driving. Specifically, a receptive field-aware parallel atrous ghosting module is presented to enlarge the receptive field and capture contextual information at low computational cost. A spatial-aware dynamic feature fusion module is developed to integrate features generated with different receptive fields and to suppress conflicting spatial information. Comprehensive experiments on a well-known autonomous driving benchmark demonstrate the superiority of GF-Net over SOTA approaches. In addition, real-world tests under various challenging conditions confirm the robustness and accuracy of GF-Net.
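The two ideas in the abstract can be illustrated with a small sketch: parallel atrous (dilated) branches enlarge the receptive field at fixed kernel cost, and a per-pixel weighted sum can fuse the resulting multi-receptive-field features. The dilation rates and the softmax weighting below are illustrative assumptions only; the paper's actual module configuration is not shown in this excerpt.

```python
import numpy as np

def atrous_receptive_field(kernel_size: int, dilation: int) -> int:
    """Receptive field of a single atrous (dilated) convolution layer.

    The effective kernel extent is dilation * (kernel_size - 1) + 1,
    so larger dilation rates see wider context at the same FLOP cost.
    """
    return dilation * (kernel_size - 1) + 1

# Hypothetical parallel branches with increasing dilation rates.
branch_rfs = [atrous_receptive_field(3, d) for d in (1, 2, 4)]
print(branch_rfs)  # [3, 5, 9]

def spatial_softmax_fusion(features: np.ndarray) -> np.ndarray:
    """Fuse per-branch feature maps with per-pixel softmax weights.

    features: (branches, H, W) array. A real fusion module would learn
    the weighting logits; here the features themselves serve as logits,
    purely to illustrate the weighted-sum structure.
    """
    logits = features - features.max(axis=0, keepdims=True)  # numerical stability
    weights = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
    return (weights * features).sum(axis=0)  # shape (H, W)

fused = spatial_softmax_fusion(np.random.rand(3, 8, 8))
```

Because the per-pixel weights sum to one, the fused map is a convex combination of the branch features at every spatial location, which is one common way such dynamic fusion schemes suppress conflicting responses.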
