Abstract
Accurate traffic accident risk prediction is crucial for enhancing urban road network efficiency and safety, and has attracted increasing attention in traffic forecasting research. However, existing models often struggle to capture the global spatial-temporal correlations and spatial heterogeneity of traffic accident risk data, and they are sensitive to data sparsity, especially in fine-grained prediction tasks. In this paper, we propose a novel spatial-temporal deep neural network, the channel attention-level spatial-temporal convolutional neural network (CA-STNet), to address these challenges. Specifically, to mitigate the impact of data sparsity on prediction performance, we design a hierarchical spatial-temporal feature learning framework that captures coarse-grained and fine-grained traffic accident risk characteristics, fuses risk features across scales through a feature transformation matrix, and employs a weighted loss function to address the zero-inflated issue. To better capture the spatial-temporal correlations and channel heterogeneity of traffic accident risk data, a channel-independent self-attention unit dynamically captures global spatial-temporal correlations, while an inter-channel attention unit quantifies and adjusts the importance of different channel features in the spatial dimension. Experimental results on two real-world traffic accident datasets show that the proposed model outperforms benchmark models in predictive accuracy. The source code is available at https://github.com/MrYning/CA-STNet.
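The zero-inflated issue mentioned above arises because most region-time cells record no accidents, so an unweighted loss lets a model score well by predicting zero everywhere. A minimal sketch of one plausible weighting scheme in NumPy (the weight values and the exact functional form are assumptions for illustration, not the paper's definition):

```python
import numpy as np

def weighted_mse_loss(pred, target, zero_weight=0.05, nonzero_weight=1.0):
    """Weighted MSE that down-weights the many zero-risk cells.

    The weights (0.05 / 1.0) are hypothetical; CA-STNet's actual
    weighting scheme may differ.
    """
    weights = np.where(target > 0, nonzero_weight, zero_weight)
    return float(np.sum(weights * (pred - target) ** 2) / np.sum(weights))
```

Because non-zero cells dominate the weighted average, errors on actual accident occurrences contribute far more to the loss than errors on the abundant zero cells.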
