Abstract
Against the backdrop of digital transformation in education, the integration of Mixed Reality (MR) and Artificial Intelligence (AI) has fostered a new “virtual-physical symbiotic” learning environment. However, current educational tools often fail to capture learners’ dynamic needs and the characteristics of 3D immersive scenarios, resulting in a disconnect between virtual character interactions and the teaching process. Existing approaches face distinct limitations: rule-based models struggle with the dynamics of three-dimensional spaces; collaborative filtering algorithms overlook spatial contextual features; and deep learning models lack joint modeling of emotion and context. These shortcomings reflect insufficient multi-source data integration and weak situational awareness. To address these challenges, this study proposes a virtual character recommendation method tailored to immersive learning environments. The model is composed of a transformation layer, a self-adversarial data generation layer, an embedding representation layer, and a virtual character prediction layer. It achieves dynamic matching between virtual characters and 3D interactive scenarios by incorporating multi-source data standardization, intelligent agent-based game simulation for scenario data generation, semantic vectorization of character features, and Long Short-Term Memory (LSTM)-attention fusion. This research marks the first application of MR spatial computing and self-adversarial learning to educational character recommendation, offering a technical framework to tackle adaptation challenges in immersive-scene recommendation. The proposed approach contributes both theoretical innovations and practical guidance for the advancement of intelligent education.
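To make the prediction layer concrete, the following is a minimal PyTorch sketch of one plausible reading of the LSTM-attention fusion described above: an LSTM encodes a sequence of scene/interaction embeddings, additive attention pools the hidden states into a context vector, and a linear head scores candidate virtual characters. All class names, dimensions, and the candidate-set size are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CharacterPredictor(nn.Module):
    """Hypothetical LSTM-attention prediction layer (illustrative only):
    encode an interaction sequence, attend over time steps, score characters."""

    def __init__(self, embed_dim=128, hidden_dim=256, num_characters=50):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)        # additive attention scorer
        self.head = nn.Linear(hidden_dim, num_characters)

    def forward(self, x):
        # x: (batch, seq_len, embed_dim) sequence of scene/character embeddings
        h, _ = self.lstm(x)                          # (batch, seq_len, hidden_dim)
        w = torch.softmax(self.attn(h), dim=1)       # attention weights over time
        ctx = (w * h).sum(dim=1)                     # attention-weighted context
        return self.head(ctx)                        # logits over candidates

# Usage: score 50 candidate characters for 2 sequences of 10 interaction steps
model = CharacterPredictor()
logits = model(torch.randn(2, 10, 128))              # shape (2, 50)
```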