Abstract
To address blurred environmental perception and degraded recognition accuracy of intelligent vehicles in foggy conditions, a fog recognition and classification algorithm based on adversarial domain adaptation is proposed, supporting intelligent, safe, and efficient autonomous driving in foggy environments. First, a training set and a road fog image test set are constructed from data collected by intelligent vehicles together with existing datasets, and domain adaptation is used to reduce the distribution differences between them. An adversarial learning architecture is then introduced: a deep feedforward network is built, and a feature extractor produces feature vectors from the samples. These features are fed into both a label predictor and a domain classifier, which output the predicted class labels and domain labels, respectively. Finally, the domain classifier is optimized through a gradient reversal layer, and the cross-entropy loss on the predicted labels is used to further optimize the parameters of the feature extractor and label predictor. Experimental results show that classification accuracy reaches up to 98.88% on the training sets HAZYA, HAZYB, and HAZYC and 92.34% on the test set, significantly outperforming ResNet-18 and LeNet-5. The proposed algorithm can therefore effectively classify fog images for environmental perception in autonomous driving scenarios.
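The core mechanism the abstract describes, a gradient reversal layer (GRL) that trains the domain classifier adversarially against the feature extractor, can be sketched with a toy scalar example. All parameter values and function names below are illustrative assumptions, not from the paper:

```python
# Minimal sketch of a gradient reversal layer (GRL).
# Forward pass: identity. Backward pass: gradient multiplied by -lambda,
# so the feature extractor ascends the domain loss while the domain
# classifier descends it.

def grl_backward(grad, lam=1.0):
    """Backward pass of a GRL: flip the sign of the incoming gradient."""
    return -lam * grad

# Toy one-parameter "feature extractor" f = w * x and
# "domain classifier" d = v * f with a squared-error domain loss.
w, v = 0.5, 2.0          # illustrative parameters
x, target = 1.0, 0.0     # one sample and its domain target

f = w * x                # feature (GRL forward pass leaves f unchanged)
d = v * f                # domain prediction
loss = (d - target) ** 2

# Hand-computed gradients of the domain loss.
dloss_dd = 2 * (d - target)
grad_v = dloss_dd * f                 # domain classifier: normal gradient
grad_f = dloss_dd * v                 # gradient flowing back to the features
grad_w = grl_backward(grad_f) * x    # GRL flips the sign for the extractor

# grad_v and grad_w have opposite signs: the classifier is pushed to
# reduce the domain loss, the extractor to increase it (adversarial).
print(grad_v, grad_w)  # → 1.0 -4.0
```

In the full method, the same reversed gradient flows into a convolutional feature extractor, while the label predictor's cross-entropy loss back-propagates normally, so the learned features become both class-discriminative and domain-invariant.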
