Abstract
Following the launch of Ask.SMILE, a generative AI web application designed to evaluate the cognitive level of questions asked, 2,559 educators generated 25,973 question-feedback sets over a three-month period, an average of more than 10 questions per participant. Analyses revealed a significant improvement in question quality from initial submissions to later stages. Specifically, participants progressed from Level 1 questions (simple recall) to Level 5 questions (creative and evaluative), based on Bloom’s Taxonomy. However, participants who began with lower-quality questions showed varied progression paths to higher levels. To analyze the data, a Linear Mixed Model (LMM) was used to account for both fixed and random effects, revealing a statistically significant improvement in question quality over time (p < .01). ANOVA results further confirmed the significance of question stage on quality, and a Least Squares Means (LSMEANS) analysis indicated a precise and significant improvement in later-stage questions. Additionally, survey responses highlighted participants’ appreciation for the ongoing development of AI-driven educational tools like Ask.SMILE. Several implications and future research directions are explored, including the potential to expand the platform’s features beyond question evaluation.
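The LMM analysis described above can be sketched as follows. This is a minimal illustration, not the study's actual analysis: the data are synthetic, and all column names and effect sizes are hypothetical. It models question quality (Bloom level, 1-5) with a fixed effect of question stage and a random intercept per participant, mirroring the fixed/random-effects structure the abstract describes.

```python
# Hypothetical sketch of a Linear Mixed Model analysis like the one
# described in the abstract. Data are synthetic; "participant", "stage",
# and "quality" are illustrative names, not from the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for pid in range(60):                        # 60 synthetic participants
    intercept = rng.normal(0, 0.5)           # participant-level random effect
    for stage in range(1, 11):               # 10 questions per participant
        # assumed true improvement of 0.2 Bloom levels per stage
        quality = 1.5 + 0.2 * stage + intercept + rng.normal(0, 0.8)
        rows.append({"participant": pid, "stage": stage,
                     "quality": float(np.clip(quality, 1, 5))})
df = pd.DataFrame(rows)

# Fixed effect: stage; random intercept: participant.
model = smf.mixedlm("quality ~ stage", df, groups=df["participant"])
result = model.fit()
print(result.summary())
```

A positive, significant coefficient on `stage` in this setup corresponds to the abstract's finding that question quality improves across stages (p < .01).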