Abstract
Athletic injury risk prediction faces two core problems: insufficient dynamic fusion of multimodal time-series data and the difficulty of modeling an individual's long-term physiological adaptation. This paper hypothesizes that a framework integrating biomechanical modeling with personalized adaptive learning can significantly improve the accuracy and practicality of injury risk prediction in dynamic sports scenarios. To test this hypothesis, we propose a dynamic prediction framework based on Spatio-Temporal Graph Neural Networks (ST-GNN) and federated learning. Wearable devices synchronously collect multimodal signals, including Inertial Measurement Unit (IMU), surface Electromyography (sEMG), and Photoplethysmography (PPG) data, which are mapped onto a dynamic human skeleton graph. The ST-GNN extracts spatio-temporal biomechanical features, while a federated learning framework with a meta-learning strategy enables personalized model adaptation for individual athletes, addressing data heterogeneity and the cold-start problem. The system is optimized for edge deployment through mixed-precision quantization and layer fusion. Experimental results show that biomechanical feature extraction based on the dynamic graph neural network reduces the mean squared error of knee-angle prediction to 0.82 within 50 training rounds; multimodal fusion combined with an adaptive attention mechanism achieves an injury-warning AUC (Area Under the Curve) of 0.89; and Anterior Cruciate Ligament (ACL) injury-warning accuracy reaches 0.83 after 100 training iterations. This study provides a complete, low-latency pipeline from data collection to edge decision-making, advancing the application of wearable devices in medical-grade sports health management.
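To make the ST-GNN idea concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of one spatio-temporal graph convolution step over a human skeleton graph. Per-joint feature channels stand in for fused multimodal signals (IMU / sEMG / PPG); all shapes, joint counts, and kernel values are assumptions chosen for the example.

```python
import numpy as np

def normalized_adjacency(A):
    # Symmetrically normalize A + I (the standard GCN renormalization trick),
    # so spatial mixing averages over each joint and its skeleton neighbors.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def st_gcn_step(X, A_norm, W, kernel):
    # X: (T, N, C) = time frames x skeleton joints x feature channels.
    # Spatial step: propagate joint features along skeleton edges,
    # then project channels C -> D with weight matrix W.
    spatial = np.einsum("ij,tjc,cd->tid", A_norm, X, W)
    # Temporal step: simple 1-D smoothing convolution over time,
    # applied independently per joint and channel.
    k = len(kernel)
    pad = k // 2
    padded = np.pad(spatial, ((pad, pad), (0, 0), (0, 0)), mode="edge")
    out = np.zeros_like(spatial)
    for t in range(spatial.shape[0]):
        out[t] = sum(kernel[i] * padded[t + i] for i in range(k))
    return np.maximum(out, 0.0)  # ReLU nonlinearity

# Toy skeleton: 3 joints in a chain (e.g. hip - knee - ankle).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3, 4))        # 50 frames, 3 joints, 4 channels
W = rng.normal(size=(4, 8)) * 0.1      # hypothetical channel projection
H = st_gcn_step(X, normalized_adjacency(A), W, kernel=[0.25, 0.5, 0.25])
print(H.shape)  # (50, 3, 8): per-frame, per-joint biomechanical features
```

Stacking several such layers (with learned temporal kernels rather than the fixed smoothing kernel above) yields the spatio-temporal feature maps that downstream components, such as the attention-based fusion and injury-warning head, would consume.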
Keywords
