Abstract
The emergence of generative artificial intelligence (AI) presents a transformative moment for qualitative research, offering profound opportunities for enhancing the scale and efficiency of analysis while raising critical questions about methodological integrity, ethics, and the role of the researcher. This editorial introduces a special collection of papers that collectively map this nascent and dynamic landscape. Moving beyond speculation, the collection provides tangible methods, critical reflections, and robust frameworks for integrating AI into qualitative inquiry. The papers explore four key areas: the development of practical, AI-augmented methodological roadmaps for tasks like thematic analysis; the nuanced experience of human-AI collaboration in both data analysis and collection; the urgent need for ethical and epistemological guardrails to guide responsible use; and novel applications of AI in the service of meta-research. A clear consensus emerges from this collection: the future of qualitative research with AI is not one of replacement, but of thoughtful and critical augmentation. The qualitative researcher, with their capacity for deep interpretation, ethical judgment, and reflexive engagement, remains indispensable. This special issue provides a foundational dialogue for a community navigating this new frontier, aiming to harness the computational power of AI without sacrificing the humanistic depth that defines the discipline.
Keywords
Introduction
The rapid emergence of generative artificial intelligence (AI) has created a pivotal moment for nearly every field of human inquiry, and qualitative research is no exception. This technological wave brings with it a current of profound opportunity, promising to enhance the scale, speed, and scope of our work. Yet, it also carries an undercurrent of apprehension, raising critical questions about methodological integrity, ethical responsibility, and the irreplaceable role of the human researcher. It is at this critical juncture that we present this special issue, a collection of papers that, together, map the nascent and dynamic landscape of AI in qualitative research. This collection moves beyond speculation to offer tangible methods, critical reflections, and robust frameworks, providing a foundational dialogue for a community navigating this new frontier. The overarching consensus that emerges is clear: the future of qualitative research with AI is not one of replacement, but of thoughtful and critical augmentation. The qualitative researcher remains indispensable, and the central challenge is to harness AI’s power without sacrificing the humanistic depth that defines the discipline.
From Process to Practice: AI-Augmented Methodological Roadmaps
A significant contribution of this collection is its provision of concrete, systematic guidance for integrating AI into the “engine room” of qualitative analysis. This signals a maturation from ad-hoc experimentation to methodical application. Several papers offer practical roadmaps for reimagining traditional workflows. Anakok et al. (2025) provide a direct and accessible translation of Braun and Clarke’s foundational six-phase thematic analysis into a “Generative AI-Assisted Thematic Analysis” (GATA) workflow, demonstrating how AI can be systematically integrated into one of the field’s most widely used methods. Similarly, Katz et al. (2024) propose a novel “Extract, Embed, Cluster, and Summarize” (EECS) workflow for inductive codebook generation. Their work on a large corpus of student evaluations showcases AI’s power to manage scale and produce results that mirror human-led analysis, validating the utility of these new methods.
These principles are powerfully illustrated in the work of Barrera et al. (2025), who offer a real-world account of integrating tools like ATLAS.ti for coding and DeepL Pro for translation in two large-scale, multilingual public health projects. Their paper provides a step-by-step guide to implementing a “human-in-the-loop” process, where every AI-generated output is subject to systematic human review to resolve code redundancy and capture cultural subtleties. Crucially, they also make a vital call to update reporting standards like the Consolidated Criteria for Reporting Qualitative Research (COREQ) to ensure transparency and accountability in AI-assisted research. The collective contribution of these papers signals a significant shift in the discourse. The question is no longer if AI can be used for tasks like thematic analysis, but how it can be done systematically, transparently, and rigorously. These papers are building the first generation of standardized, replicable AI-assisted qualitative methods, moving the field from possibility to procedure.
The Researcher's Experience: Navigating the Human-AI Collaboration
As we move from process to practice, the researcher’s experience of this new human-AI collaboration becomes central. The collaborative autoethnography by Al-Fattal and Singh (2025) offers a powerful, reflective account of this dynamic. Their comparative reflections on manual versus GAI-assisted thematic analysis crystallize the central tension of this evolving landscape: the trade-off between the efficiency and scalability offered by AI and the deep, context-rich nuance that remains the hallmark of human-driven inquiry. While manual analysis provided context-rich insights, it was time-consuming; conversely, GAI-assisted analysis offered speed but lacked interpretative depth, often producing superficial findings.
This theme of collaboration is extended in a novel direction by Nardon et al. (2025), who explore the use of AI image generation during reflective interviews. In their work, AI is not merely an analytical tool but an active “third agent” in the data collection process itself—a helper, motivator, and facilitator, but also a potential distractor and influencer. This contribution suggests a paradigm shift. If AI is an active agent in the interview, it is no longer just analyzing data; it is helping to create it. The AI’s “misinterpretations” or “biases” become part of the participant’s reflective process, generating new insights and fundamentally altering the research encounter. This has profound implications for ethics and reflexivity, as our frameworks must now account for AI’s role not just in analysis, but in data genesis.
Critical Guardrails: The Imperative for Ethical and Epistemological Frameworks
With these new possibilities come profound responsibilities. A strong thread running through this collection is the urgent call for critical frameworks and ethical guardrails to guide the responsible use of these powerful tools. Cheah (2025) proposes a concrete ethical and methodological framework for AI-augmented netnography, tackling the complex challenges of privacy, consent, and algorithmic bias in digital research. This high-level theoretical engagement is complemented by the work of Foley et al. (2025), who challenge the community to develop a “critical imagination.” They urge researchers to scrutinize how AI tools, particularly in literature reviews, shape the very foundations of our knowledge systems by privileging certain sources (e.g., Global North, English-language) while making others invisible.
Providing a crucial lens for this entire discussion, Nicmanis and Spurrier (2025) introduce a vital distinction between “Small-q” (positivist) and “Big-Q” (interpretivist) research paradigms. Their approach-based model argues that the appropriateness of AI is not universal but is contingent on the fundamental values and goals of the research. This framework helps clarify that the question is not simply if we should use AI, but how and why it aligns with our specific methodological and epistemological commitments. It reframes the central question from a technical one (“Which tool should I use?”) to an epistemological one (“Do the values embedded in this tool align with the values of my research paradigm?”), providing the necessary language to navigate the practical trade-offs experienced by researchers on the ground.
AI as a Mirror: Expanding the Frontier to Qualitative Meta-Research
Pushing the boundaries even further, this collection demonstrates a novel application of AI in the service of meta-research: turning its analytical lens back onto our own field. Yildirim et al. (2025) demonstrate this potential by creating a custom GPT to systematically assess the reporting quality of 75 qualitative studies in the tourism literature against the COREQ checklist. Their finding—that reporting was often inadequate, especially regarding method and theory—illustrates AI’s capacity for large-scale methodological auditing. This application represents a significant expansion of AI’s role. It can serve not only as a tool for conducting primary research but also as a mechanism for enforcing and evaluating standards across the field, potentially becoming a key part of the infrastructure of academic quality control and methodological reflection.
Conclusion: The Indispensable Human and the Future of Augmented Inquiry
Taken together, these papers present a vibrant, multifaceted, and critical conversation. They reveal a clear consensus: the future of qualitative research with AI is not one of replacement, but of thoughtful and critical augmentation. The qualitative researcher, with their capacity for deep interpretation, ethical judgment, and reflexive engagement, remains indispensable. The challenge before us is to navigate the evolving landscape with a spirit of open inquiry, to harness the computational power of these new tools without sacrificing the humanistic depth that is the soul of our discipline. This special issue does not offer final answers but instead opens a crucial and timely dialogue. We invite the reader to join this conversation, to explore the potential and navigate the challenges as we collectively shape the future of qualitative inquiry in the age of intelligence, both human and artificial.
