Abstract
The deployment of surveillance systems in public and private settings is increasingly coupled with intelligent, real-time video anomaly detection for proactive threat assessment. A framework is proposed for real-time abnormal-behavior detection in streaming video based on Dynamic Graph Neural Networks (DGNNs). Unlike traditional frame-based models, the DGNN is not restricted to a static set of nodes; instead, it treats each frame as a graph whose nodes denote detected physical objects such as people and vehicles. In contrast to prior methods with static or slowly refreshed graphs, the DGNN performs frame-wise graph updates in real time using motion, pose, and interaction cues, supported by a lightweight spatio-temporal GCN optimized for live anomaly detection. The model achieved strong results, recording an accuracy of 99.65% with correspondingly high precision, recall, and F1-score. A false-negative rate of 0.35% and a false-positive rate of 0.03% underscore its ability to discriminate normal from anomalous behaviors. ROC and precision-recall AUC scores approached 1.0000 in almost all categories, attesting to the system's robustness even on subtle anomaly classes. TensorRT optimization was then applied to ensure real-time processing, yielding an inference latency below 200 milliseconds per segment. The DGNN system therefore offers a scalable, low-latency solution capable of real-time detection and classification of diverse, complex real-world abnormal behaviors, and a novel approach to smart public-safety systems with minimal dependence on large annotated datasets.
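The per-frame graph construction and message passing described above can be sketched minimally as follows. This is an illustrative NumPy example only: the proximity-based edge rule, the `radius` threshold, and the layer dimensions are assumptions for demonstration, not the paper's actual implementation, which builds edges from motion, pose, and interaction cues.

```python
import numpy as np

def build_frame_graph(objects, radius=50.0):
    """Build an adjacency matrix for one frame. Nodes are detected
    objects; here we (hypothetically) connect objects whose centroids
    lie within `radius` pixels of each other."""
    pos = np.array([o["centroid"] for o in objects], dtype=float)
    n = len(objects)
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and np.linalg.norm(pos[i] - pos[j]) < radius:
                adj[i, j] = 1.0
    return adj

def gcn_layer(feats, adj, weights):
    """One graph-convolution step with symmetric normalization:
    H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weights, 0.0)

# Three detected objects: two interacting nearby, one distant.
objects = [{"centroid": (0.0, 0.0)},
           {"centroid": (10.0, 0.0)},
           {"centroid": (500.0, 500.0)}]
adj = build_frame_graph(objects)
feats = np.ones((3, 4))        # per-node features (motion/pose cues)
out = gcn_layer(feats, adj, np.eye(4))
```

Rebuilding `adj` on every incoming frame is what makes the graph "dynamic": as objects appear, move, or leave, the node set and edge set change per frame rather than being fixed in advance.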
