Abstract
Research on AI in education has expanded rapidly, producing numerous first-order meta-analyses. These studies, however, are limited by inconsistent findings, heterogeneous methods, variable quality, and overlapping primary studies, so a second-order meta-analysis is needed to obtain a more reliable estimate of AI's effects. This study examined the overall effect of AI applications on student outcomes, including academic achievement and higher-order thinking skills. We included first-order meta-analyses published between 2020 and 2025, covering primary studies published from 1993 to 2024. Nineteen meta-analyses retrieved from electronic databases, involving a total of 58,702 participants, were analyzed. Using a random-effects model, we found a statistically significant moderate mean effect size (ES = .67, 95% CI [.55, .78]), indicating that AI technologies meaningfully contribute to student learning. Moderator analysis revealed that education level, education field, and publication bias status influenced the variability of AI's effects on student outcomes. Effect sizes were robust across AI types and learning outcome types. Meta-regression showed that sample size and publication year did not predict effect sizes, whereas the number of primary studies did. These findings highlight the need for informed AI integration, strengthened pedagogical and institutional capacity, and evidence-based strategies to ensure meaningful improvements in student learning.
