Abstract
To address inaccuracy and data fragmentation in educational assessment systems, this study proposes a novel computational framework integrating federated learning (FL) with knowledge distillation. By modelling students as edge clients and institutional servers as central aggregators, the framework enables distributed collaborative training while preserving data privacy. Crucially, we optimize the FL aggregation strategy through a hierarchical knowledge distillation mechanism, in which local models are guided by distilled global knowledge to enhance parameter efficiency. Experimental validation demonstrates a 10.3% improvement in assessment accuracy over baseline methods, alongside a 17% reduction in communication overhead. The proposed model scales efficiently to large-scale educational data analysis, with potential applications extending to scientific data fusion scenarios in IoT and edge computing environments.
