Abstract
Effective analysis of biomechanical motion data is crucial for predicting and preventing athletic injuries. Recent advancements in wearable sensor technologies offer extensive multimodal physiological datasets, yet integrating these complex data streams into accurate, interpretable predictive models remains challenging. In this work, we propose the BioSensor-Transformer, a novel Transformer-based deep learning architecture that explicitly incorporates biomechanical constraints and multimodal sensor integration to predict injury risk during dynamic human movements. The model fuses data from inertial measurement units (IMUs), electromyography (EMG), and plantar pressure sensors, regularized by biomechanical constraints that enforce joint stability, smooth muscle activation, and energy efficiency. Extensive experimental validation on benchmark-aligned synthetic datasets demonstrates that the BioSensor-Transformer significantly outperforms state-of-the-art models, achieving superior predictive accuracy (AUC: 0.94, F-AUC: 0.83) while producing physiologically plausible outputs. Moreover, the model offers enhanced interpretability through physiologically grounded attention mechanisms, yielding clear, actionable insights for clinicians. The proposed framework bridges the gap between high predictive accuracy and biomechanical realism, offering substantial improvements for real-world injury prevention and rehabilitation applications.
