Abstract
We argue that placing a generative artificial intelligence model between qualitative researchers and their subjects of inquiry fundamentally displaces and distorts the conditions under which meaningful and epistemologically sound knowledge can emerge. We further argue that such mediation raises ethical implications that are incompatible with the principles of responsible research that underpin business and society scholarship.
In their article in this journal, Anis and French (2023) advocate that qualitative researchers “embrace” generative artificial intelligence (GenAI) tools, such as ChatGPT, to produce more efficient, explicatory, and equitable research outcomes. Their article, published shortly after the release of ChatGPT, was shaped by early impressions. We now have the advantage of more experience with the technology—experience that leads us to a markedly different conclusion. Specifically, we caution against the deliberate act of placing a GenAI tool between researchers and their subjects of inquiry and warn against the instrumentalist view that its uptake is a simple matter of balancing utility with researcher oversight. Doing so not only threatens the essence of qualitative inquiry but also undermines the principles of responsible research, which motivate business and society scholarship. Qualitative inquiry is an inherently humanistic scholarly pursuit that AI, by means of its algorithmic functions, can neither enhance nor enrich. Accordingly, we argue against its embrace in qualitative research.
Why Qualitative Researchers Should Not Embrace GenAI
Our focus here is on generative AI, a class of systems that draws on statistical patterns encoded during training to generate new content (text, speech, and images). As a by-product of their training on large datasets, these systems are proficient in a range of natural language processing tasks. In qualitative research, GenAI developers, qualitative data analysis software providers, and some academics are promoting GenAI tools to conduct interviews, generate participant responses, and augment or automate data analysis.
Although often presented as neutral and objective productivity-enhancing tools, these systems produce untrustworthy outputs. Beyond the well-documented concerns about scientific integrity—including biases, opacity, privacy, and informed consent—they raise deeper ethical and epistemological risks that strike at the core of qualitative research and the broader project of business and society scholarship. Guided by Tsui and McKiernan’s (2022) principles of responsible research, we expose these risks, showing that GenAI is anything but the efficient, explicatory, and equitable technology that Anis and French (2023) claim it to be.
Efficiency Is the Wrong Scientific Value
Anis and French (2023) advocate for GenAI based on its ability to scale and automate time-intensive research processes to gain efficiencies that purportedly exceed human capabilities. However, they do not question whether efficiency constitutes an appropriate scientific value. We argue that it is not. Elevating efficiency to the status of a scientific value fosters a narrow, instrumentalist conception of research—one that privileges expediency, performance metrics, institutional imperatives, and career aspirations over fundamental scientific responsibilities, such as generating valid knowledge, upholding research integrity, and minimizing social harm (Tsui & McKiernan, 2022). These responsibilities can only be fulfilled through sustained and engaged scholarship, and they are diminished when efficiency and its associated outputs are treated as proxies for value.
Nowhere is the logic of efficiency more detrimental than in qualitative research, which produces knowledge through necessarily slow, embodied, and relational practices. What is often dismissed as the “drudgery” of qualitative research—building and maintaining social relations, conducting fieldwork, and the close, iterative reading of empirical materials—is not a bottleneck in need of automation, but essential immersion in a research setting. Qualitative research is time- and labor-intensive precisely because it requires meaningful engagement with people’s lived experiences to discern what matters, to whom, and why.
To delegate to GenAI tasks such as identifying patterns and themes risks abandoning core sense-making activities and ceding epistemic and scientific responsibilities to decontextualized algorithms. GenAI use detaches researchers from the social world they study and reconfigures research participants as disembodied “data objects”: passive entities that exist “out there” ready to be “mined” and “processed” for research ends. This erodes the ethical obligation to engage with participants with respect, care, and integrity, along with the scholarly commitment to generating meaningful knowledge in service of business and society (Brammer et al., 2022).
Fabricating, Not Explicating
Anis and French (2023) argue that GenAI has “immense explicatory value” when applied to large and ambiguous multilayered data. However, such arguments overlook a fundamental technological limitation: these tools cannot interpret, elucidate, or explicate meaning in data because they do not comprehend the social world. While GenAI produces responses that are expeditious and human-like, this should not be conflated with depth, understanding, or theoretical insight. Its outputs are fabrications: probabilistic approximations of human text that are devoid of meaning. Accordingly, GenAI requires “humans in the loop”—that is, researchers must verify, correct, and reproduce outputs, rendering any explicatory benefits or efficiency gains illusory.
Not only are the explicatory claims overstated, but they also distort the nature of qualitative inquiry and the conditions under which ethically and epistemologically sound knowledge can emerge. Human interpretation, on which qualitative data analysis depends, is neither neutral nor mechanical; it is a situated and reflexive practice of meaning-making shaped by the researcher’s positionality, research objectives, and onto-epistemological commitments. To replace or augment human agency with decontextualized algorithmic systems is to misconstrue qualitative inquiry as a process of data extraction and representation, displacing the interpretive foundations of meaning-making when researchers work with and theorize about others’ lived experiences.
Exploitative, Not Equitable
Anis and French (2023) hold out the hope that underprivileged researchers will, with the assistance of GenAI, be able to upgrade from purely empirical to theoretical work. However, as we have outlined, GenAI is not capable of theoretical work. More critically, such techno-optimistic framing obscures the ways in which GenAI facilitates epistemic exploitation under the guise of technological empowerment. Rather than democratizing knowledge production, GenAI is a global production network that exploits workers, appropriates intellectual property, and consumes vast natural resources. This “extraction machine” (Muldoon et al., 2024) also captures researchers across the Global North and South within its system, monetizing their labor and data.
Qualitative research, despite its onto-epistemological diversity, is guided by a commitment to methodological integrity grounded in trustworthiness, transparency, and context-sensitive engagement with data. These principles support the production of knowledge that is equitable, ethically responsible, and epistemologically sound. Yet the use of GenAI displaces these principles at every stage of the research process. GenAI systems cannot access or participate in the social processes that produce data (e.g., interviews and participant observations). Instead of analysis, they generate untraceable outputs that make it impossible to establish an audit trail; and instead of interpretation, they lack the contextual understanding, reasoning, and reflexivity—essential human acts—necessary to make sense of, and theorize about, social phenomena.
Way Forward and Conclusion
In arguing against the embrace of GenAI, we are not rejecting technological innovations. Instead, our position is that knowledge production is fundamentally a social activity (Bechky & Davis, 2025) shaped by the norms of its institutions. How we conduct research is itself a normative act, one that signals what we value, whom we serve, and what kind of knowledge we seek to produce. For qualitative researchers, the challenge lies less in competing with GenAI’s supposed efficiencies than in reaffirming the distinctive contributions of human-centered inquiry. Demarcating the boundaries between algorithmic outputs and qualitative findings cannot be left to commercial developers, but must be actively set and maintained by scholarly communities to ensure that reflexivity, context, and social responsibility remain central to our teaching, mentoring, and scholarship.
In insisting on the distinctiveness of qualitative research, we can use this technological moment to reclaim the essence of qualitative inquiry: its attentiveness to lived experience, its openness to ambiguity, and its role in fostering reflexivity and social change. This is more than a methodological stance: it is a commitment to science as a public good, one that resists instrumentalization and instead aims to deepen collective understanding about our social world. This is a scholarly conversation that business and society scholars are well placed to lead, and we urge this community not only to participate but to shape the terms of engagement. As we have suggested, this commences with refusing to cede epistemic responsibilities to technological solutions and reaffirming scholarly practices that promote the production of meaningful, socially engaged knowledge.
Acknowledgements
The authors gratefully acknowledge Frank de Bakker and Simon Pek for their insightful feedback on previous versions of this article.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
