Abstract
As generative AI tools become more commonly used in qualitative research, their role in supporting thematic analysis (TA) calls for critical exploration. This study compares the thematic outputs of four AI tools—ChatGPT-4o, QInsights, ATLAS.ti AI, and MAXQDA AI Assist—against a validated human-coded analysis of K–12 STEM teacher data. Using Braun and Clarke’s six-phase TA framework, we examined how each tool’s outputs aligned with, extended, or diverged from human-generated themes. Findings showed partial convergence across tools, no hallucinated content, and substantial time savings. While none fully replicated the depth of human interpretation, all captured relevant insights. This comparison offers a deeper epistemological understanding of AI integration into qualitative inquiry, as well as practical guidance for researchers considering its thoughtful application in qualitative analysis.
