Abstract
Driver stress is recognized as a significant contributing factor in traffic crashes owing to its negative effect on driving performance. However, existing studies have relied on drivers' physiological data to predict stress levels, which can be difficult in practice because the required data collection can interfere with driving performance. To bridge this gap, this study proposes a fusion framework for real-time driver stress level prediction using urban street view (USV) image data and vehicle kinematic data. Specifically, the framework uses the Deeplabv3+ model to extract semantic information from USV images to capture driving environment features. These environment features are then combined with vehicle kinematic features as input, and the framework outputs driver stress levels. Multiple machine learning techniques were applied to improve the framework's predictive performance. To verify the proposed framework, a real-world experiment was conducted in Nanjing, China. The results showed satisfactory driver stress level prediction performance, with F1 scores and G-means all exceeding 95.7%. Furthermore, collecting environment features from USV images was found to be more cost-effective than physiological data collection. Given that the required data are easy to collect via in-vehicle sensors, the proposed framework is expected to be applicable in Level 2 Advanced Driver Assistance Systems for real-time driver stress level prediction, reducing potential crashes caused by human error.
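The feature-fusion step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class list, array shapes, and kinematic variables are assumptions, and a real pipeline would obtain the segmentation map from a trained Deeplabv3+ model rather than simulating it.

```python
import numpy as np

# Hypothetical semantic classes for street-view segmentation (assumed, not
# taken from the paper): road, sidewalk, vehicle, pedestrian, vegetation.
N_CLASSES = 5

def env_features(seg_map, n_classes=N_CLASSES):
    """Driving-environment features: fraction of pixels per semantic class.

    seg_map is an HxW integer array of per-pixel class IDs, as a Deeplabv3+
    model would produce after an argmax over class logits.
    """
    counts = np.bincount(seg_map.ravel(), minlength=n_classes)
    return counts / seg_map.size

def fuse(seg_map, kinematics):
    """Concatenate environment features with vehicle kinematic features
    to form the input vector for a stress-level classifier."""
    return np.concatenate([env_features(seg_map), kinematics])

# Stand-in data: a random segmentation map and illustrative kinematic
# readings (e.g. speed, longitudinal acceleration, yaw rate).
rng = np.random.default_rng(0)
seg = rng.integers(0, N_CLASSES, size=(64, 64))
kin = np.array([12.3, 0.8, -0.1])

x = fuse(seg, kin)
print(x.shape)  # (8,) -> 5 class proportions + 3 kinematic features
```

The fused vector would then be passed to any standard classifier (the abstract mentions applying multiple machine learning techniques) and evaluated with class-imbalance-aware metrics such as F1 score and G-mean.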
