Abstract
The evolution of cellular vehicle-to-everything (C-V2X) technology has expanded from basic communication functions to enhancing real-time situational awareness for safety and mobility applications. High-resolution roadside sensors, such as light detection and ranging (LiDAR) and computer vision systems, are crucial for generating accurate trajectories of vehicles, pedestrians, and other road users, enabling connected vehicle (CV) safety and mobility applications. However, existing surveillance cameras, while capable of covering large areas, often struggle with diverse object scales, aspect ratios, and viewing perspectives, which complicates accurate traffic detection and tracking. The limited resolution and surveillance-oriented design of these cameras further hinder the effectiveness of contemporary deep learning-based detection and tracking models such as YOLOv8, Mask R-CNN, and DeepSORT. This paper introduces a tool that uses digital surface models and a grid-based, pixel-by-pixel line-of-sight simulation to evaluate the detection and positioning performance of CCTV cameras under varying environmental and configuration scenarios. The proposed framework provides comprehensive insights for optimizing camera placement and configuration, thereby improving the integration and performance of C-V2X applications. Two limitations of the proposed model are also identified: 1) both horizontal and vertical fields of view are assumed to be no larger than 180°, and 2) the effect of camera intrinsic parameters is not comprehensively modeled. Such tools are crucial for achieving complete situational awareness and maximizing the benefits of CV technologies even with limited CV penetration. The proposed model is evaluated with 3D terrain and LiDAR data collected at a freeway site in the DataCity Smart Mobility Testing Ground in New Brunswick, NJ, U.S.
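The abstract's grid-based line-of-sight idea can be illustrated with a minimal sketch: sample points along the straight ray between a camera and a target cell over a digital surface model (DSM) raster, and declare the target occluded if the terrain rises above the sight line at any sample. This is an assumed, simplified formulation, not the paper's actual implementation; the function name, mounting heights, and sampling density are all illustrative choices.

```python
import numpy as np

def has_line_of_sight(dsm, cam_rc, tgt_rc, cam_height=10.0,
                      tgt_height=1.5, n_samples=100):
    """Check whether a camera cell can see a target cell over a DSM grid.

    dsm        : 2D array of surface elevations (meters), one value per cell
    cam_rc     : (row, col) of the camera cell
    tgt_rc     : (row, col) of the target cell
    cam_height : camera mounting height above the surface (assumed value)
    tgt_height : target height above the surface (assumed value)
    """
    r0, c0 = cam_rc
    r1, c1 = tgt_rc
    z0 = dsm[r0, c0] + cam_height   # camera eye elevation
    z1 = dsm[r1, c1] + tgt_height   # target point elevation
    # Sample intermediate points along the straight sight line.
    for t in np.linspace(0.0, 1.0, n_samples)[1:-1]:
        r = int(round(r0 + t * (r1 - r0)))
        c = int(round(c0 + t * (c1 - c0)))
        sight_z = z0 + t * (z1 - z0)
        if dsm[r, c] > sight_z:     # terrain blocks the ray
            return False
    return True
```

Running such a check for every cell in a candidate camera's field of view yields a visibility map, which is the kind of per-pixel output the proposed framework uses to compare placement and configuration scenarios.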
