Abstract
Vehicle detection in tunnel traffic plays a crucial role in enhancing traffic safety, optimizing flow, and improving rescue efficiency. However, in complex tunnel environments, vehicles may be partially occluded by objects in front of them or lie too far from the camera, leading to missed detections and false positives. In addition, detection networks must satisfy stringent real-time requirements. To address these issues, this paper proposes Tunnel-YOLO (you only look once), a vehicle detection method for tunnels. Its enhanced feature extraction module captures fine-grained target features and retains detailed information about occluded vehicles from both global and local perspectives. To suppress interference from irrelevant regions, the SimAM (simple attention mechanism) module is introduced to dynamically reweight spatial regions, allowing the model to focus automatically on key areas. In addition, the BiFPN-Concat (bidirectional feature pyramid network) fusion method integrates features at different scales, improving the efficiency of information transfer between feature maps. Experimental results show that the proposed Tunnel-YOLO model achieves a mean average precision (mAP) of 91.1% for vehicle detection on the Zhengzhou Urban Tunnel Comprehensive Management and Maintenance Center data set, 13 percentage points higher than that of the baseline YOLOv5s model. Tunnel-YOLO effectively improves the accuracy and efficiency of vehicle detection in tunnel environments while maintaining a small model size and low computational cost to meet real-time requirements. The source code of this study is available at: https://github.com/xlw222/Tunnel.git.
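For context, SimAM as described in its original publication is a parameter-free attention module that scores each activation with an energy-based function and gates the feature map through a sigmoid. The sketch below illustrates that general mechanism in NumPy; the `e_lambda` regularizer value and the (C, H, W) tensor layout are illustrative assumptions, not details given in this abstract, and the paper's actual integration into Tunnel-YOLO may differ.

```python
import numpy as np

def simam(x, e_lambda=1e-4):
    """Parameter-free SimAM-style spatial attention (illustrative sketch).

    x: feature map of shape (C, H, W).
    Returns a gated feature map of the same shape.
    """
    c, h, w = x.shape
    n = h * w - 1  # number of "other" neurons per channel
    mu = x.mean(axis=(1, 2), keepdims=True)          # per-channel mean
    d = (x - mu) ** 2                                # squared deviation
    v = d.sum(axis=(1, 2), keepdims=True) / n        # per-channel variance estimate
    # Inverse-energy score: distinctive activations get larger scores
    e_inv = d / (4.0 * (v + e_lambda)) + 0.5
    gate = 1.0 / (1.0 + np.exp(-e_inv))              # sigmoid gating in (0, 1)
    return x * gate
```

Because the gate lies strictly between 0 and 1, the module rescales activations without adding any learnable parameters, which is consistent with the abstract's emphasis on keeping the model small.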
