Abstract
Traffic-sign condition assessment is an important responsibility for transportation agencies. This paper presents a framework that leverages mobile sensors, a machine vision camera, and a Global Positioning System (GPS) receiver for real-time traffic-sign condition assessment. An artificial intelligence (AI) application, integrated into a Robot Operating System (ROS) framework, processes the image and GPS data streams. The system uses the YOLOv8 computer vision algorithm for traffic-sign detection and segmentation, supported by ByteTrack and a custom correction algorithm for object tracking. Segmenting traffic signs from surrounding objects enables shape- and color-based damage assessment, and distance and speed from the GPS are used to calculate the stopping sight distance and identify severe obstructions. The study demonstrated the approach by integrating these components into a single framework capable of real-time segmentation, classification, and geo-referencing of damaged traffic signs at regular traffic speeds. The XGBoost machine learning algorithm, chosen for its accuracy and fast prediction time, performed the second-stage damage classification. This work validates the effectiveness of AI and mobile sensor technology for traffic-sign condition assessment as a scalable, cost-effective solution for transportation agencies. Future work will broaden the model to detect a wider range of traffic-sign types and conditions and validate the framework in real-world environments.
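The abstract states that GPS-derived speed and distance feed a stopping-sight-distance calculation used to flag severe obstructions. The paper's exact formulation is not given here, so the following is a minimal sketch assuming the standard AASHTO stopping-sight-distance formula (brake-reaction distance plus braking distance, metric units); the function name and default parameters (2.5 s reaction time, 3.4 m/s² deceleration, the AASHTO design defaults) are illustrative.

```python
def stopping_sight_distance(speed_kmh, reaction_time_s=2.5, decel_ms2=3.4):
    """Stopping sight distance in metres for a given speed in km/h.

    AASHTO formulation: SSD = 0.278 * V * t_r + 0.039 * V^2 / a,
    where V is speed (km/h), t_r is brake-reaction time (s),
    and a is driver deceleration (m/s^2).
    """
    reaction_distance = 0.278 * speed_kmh * reaction_time_s   # distance covered before braking
    braking_distance = 0.039 * speed_kmh ** 2 / decel_ms2     # distance covered while braking
    return reaction_distance + braking_distance

# At 50 km/h with default parameters: 34.75 m + ~28.68 m ≈ 63.4 m
print(round(stopping_sight_distance(50), 1))
```

In a framework like the one described, a sign whose visibility range (estimated from the geo-referenced detection) falls below this threshold at the vehicle's current GPS speed would be flagged as a severe obstruction.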