Abstract
Team Situation Awareness (TSA) is critical for effective decision-making in complex domains, but it is difficult to measure and predict in real time. This study explores whether overall TSA and its three levels (i.e., perception, comprehension, and projection) can be predicted and classified using machine learning and eye tracking data. Data were collected from 35 UAV teams (70 participants in total) performing multitasking scenarios; eye tracking data were recorded continuously, and TSA was assessed using the Situation Awareness Global Assessment Technique (SAGAT). Multiple regression and classification models were trained using seven eye tracking metrics. Results showed that kNN and XGBoost classifiers achieved strong performance across TSA levels, reaching up to 90% accuracy and an F1-score of 0.95 for comprehension-level TSA. Regression models yielded moderate R² values, suggesting the need for further refinement. These findings demonstrate the feasibility of using eye tracking and machine learning to model TSA in real time, laying the foundation for adaptive decision-support systems that monitor and respond to TSA breakdowns in high-risk environments.
