Abstract
Explainable Artificial Intelligence (XAI) refers to methods and techniques that make AI models more transparent, helping users understand how outputs are generated. XAI algorithms are considered valuable in educational research, supporting outcomes such as student success, trust, and motivation; their potential to enhance transparency and reliability in online education systems is particularly emphasized. This study systematically analyzed educational research using XAI systems from 2019 to 2024, following the PICOS framework, and reviewed 35 studies. Methods used in these studies, such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), explain individual model decisions, enabling users to better understand AI models. This transparency is believed to increase trust in AI-based tools, facilitating their adoption by teachers and students.
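To illustrate the kind of explanation SHAP produces, the sketch below computes exact Shapley values for a toy linear model by enumerating feature coalitions (the `shap` library approximates this efficiently for real models). The feature names, weights, and the zero-valued baseline are illustrative assumptions, not taken from the reviewed studies.

```python
from itertools import combinations
from math import factorial

# Hypothetical student-success model: a weighted sum of three features.
# Feature names and weights are invented for illustration only.
WEIGHTS = {"attendance": 2.0, "quiz_score": 3.0, "forum_posts": 1.0}

def model(x):
    """Linear model: prediction is the weighted sum of feature values."""
    return sum(WEIGHTS[f] * v for f, v in x.items())

def shapley_values(x, baseline=0.0):
    """Exact Shapley values via enumeration of all feature coalitions.

    Each feature's value is its weighted average marginal contribution
    over every subset of the remaining features; "absent" features are
    replaced by the baseline value.
    """
    features = list(x)
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                present = set(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = model({g: x[g] if g in present or g == f else baseline
                                for g in features})
                without_f = model({g: x[g] if g in present else baseline
                                   for g in features})
                total += weight * (with_f - without_f)
        phi[f] = total
    return phi

student = {"attendance": 1.0, "quiz_score": 0.5, "forum_posts": 2.0}
print(shapley_values(student))
```

For a linear model the Shapley values reduce to each weight times the feature's deviation from the baseline, and they sum to the difference between the prediction and the baseline prediction; this additivity is what lets a teacher read off how much each feature pushed an individual prediction up or down.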
