Abstract
One-on-one tutoring is highly effective but remains difficult to scale. Large Language Models (LLMs) offer a potential solution for providing scalable, personalized instruction. Commercial publishers are already integrating AI-driven tutoring into courseware, yet open-source textbooks lack the funding to provide such interactive support. This study compared two LLM-based tutor designs intended to provide AI functionality for open-source textbooks: a structured textbook-integrated tutor and a flexible standalone tutor. Participants (N = 79) were randomly assigned to one of three conditions — textbook-integrated tutor, standalone tutor, or textbook-only control — and completed a learning task followed by a posttest and user-experience surveys. No significant differences emerged in quiz scores, engagement, or satisfaction. However, participants in the standalone-tutor and textbook-only conditions reported higher effort than those using the integrated tutor. While no short-term learning advantage was found, these findings highlight design trade-offs and suggest that future research should explore extended interventions, embedded delivery, and within-subject comparisons.