Abstract
This position paper argues against the categorical rejection of generative Artificial Intelligence (genAI) in qualitative analysis, as proposed by Jowsey et al. in their open letter published in Qualitative Inquiry. While their concerns are framed as methodological, the authors’ position largely rests on philosophical assumptions that risk becoming dogma. In this short response I contend that prohibiting genAI on metaphysical grounds stifles debate and innovation in qualitative analysis.
I rarely enter debates. I am no great authority on anything, and I prefer doing practical work to spending time in heated exchanges; here, however, I would like to speak as one of the first social scientists to have explored the use of Large Language Models (a term I usually prefer to genAI) for qualitative analysis (De Paoli, 2024). I found the letter rejecting the use of generative Artificial Intelligence (genAI) in qualitative analysis (Jowsey et al., 2025), which circulated on social media and was published in Qualitative Inquiry, quite controversial, and I decided to pen a few reflections. These are my own reflections on why we should oppose the rejection of genAI in qualitative analysis.
While on the surface the letter may appear to be about methods and methodology (methodological concerns, as the authors say), in fact, it is more about stating a set of philosophical assumptions that are presented as dogma, and it is also, to an extent, a rhetorical exercise.
Point 1 of the letter states the following:
GenAI as simulated intelligence is incapable of meaning-making.
This reminded me right away of the Chinese room thought experiment proposed by Searle. It is not methodology; it is a philosophical opposition between two ideas: on one side, that machines can think (Turing, 1950) if they can convince us that they do; on the other, that machines cannot think, since they simply parrot symbols back at us (Searle, 1980).
Rejecting a solution to a problem (using genAI for qualitative analysis) solely because it conflicts with certain philosophical assumptions feels rather reductive. I could understand if the arguments were framed as a form of disagreement based on solid empirical evidence, thus discouraging the use of genAI in analysis on this ground (e.g., Mehta et al., 2025). But presenting it as an absolute prohibition based on a metaphysical premise, over practical considerations, risks shutting down a productive conversation and potentially also some innovation in our qualitative processes.
A human-centric perspective clearly transpires from Point 2 of the letter:
Qualitative research should remain a distinctly human practice.
This reflects a worldview (the cogito ergo sum doctrine) where humans are positioned at the center of meaning-making and everything else is secondary (things present-at-hand, Heidegger would say). Again, this is not primarily about methods; it is about philosophy. Of course, methods are grounded in philosophical assumptions, and we can legitimately disagree with one set or another. I disagree with positivism, for example, but I would never declare publicly that positivism should be rejected and kept away from researching society. The same goes for individualism. I simply disagree and try to show that other ways are possible.
This statement (Point 2 of the letter) effectively asserts that any post-humanist approach to qualitative science or anything that tries to move away from human-centric views should be rejected outright. Such a position closes the door on exploring how technology might extend, augment, or even challenge human practices in productive ways, including in qualitative analysis. It assumes that the essence of qualitative research is inseparable from human subjectivity, rather than considering whether new directions might offer complementary perspectives or novel forms of interpretation. Framing this as an absolute rather than as a debate risks turning a methodological conversation into a metaphysical dogma.
What is quite interesting is also the use of power and the rhetorical argument:
We write as 419 experienced qualitative researchers from 32 countries, to reject the use of generative artificial intelligence (GenAI) applications for Big Q Qualitative approaches.
This statement is typical of a post-humanist (or non-modern) approach to science, as described by Latour (1987). It shows what he would call a collective of human and textual actants that give the claim its authority and power. The claim that we should reject genAI is not true by some objective and universal nature; its truthiness is constructed through rhetorical and social scaffolding (we are many important scholars!). The leading authors use their authority in the field (which nobody denies!) to exercise power over others, seeking to block off the anti-programs of other scholars who (in their scientific curiosity) attempted something that evidently goes against their “program of action.” However, I would like to suggest that against any exercise of power and authority like this one, we should nurture a space for dissenting and diverse views. Moreover, what the authors of the letter fail to realize is that there are multiple programs and anti-programs in play.
For example, something that goes unacknowledged is that in the early days of attempting qualitative analysis with genAI, most contributions came from computer scientists. In that area, doing qualitative analysis was a computational problem within Natural Language Processing, not a problem of method, and most of the solutions simply ignored key aspects of methodological rigor in social science (Friese, 2025, has written about this). There was a lack of understanding of social science approaches, with studies confusing codes with themes, for example. When a few social scientists started working on qualitative analysis with genAI, methodological issues were brought back to the fore as an anti-program to engineering, top-down approaches. Computer scientists will keep working on these topics whether we like it or not, and they will not publish in social science journals. If we reject the work of social scientists in this area, we could easily end up seeing qualitative analysis transformed into just a top-down engineering process.
I suppose there has to be an acceptance that new innovations make new things possible, and that different ways of working need to co-exist. This does not mean that two analyses are equal in all their aspects. I sometimes find myself discussing the use of AI for qualitative analysis with colleagues, and to explain my view I employ the analogy of a piece of furniture; let’s imagine a chest of drawers. The main function of a chest of drawers is to store clothes, such as socks and underwear. Obviously, it may have other functions, like organizing things at home, and it may also have aesthetic, symbolic, and spatial functions (like where things sit in a room).
A chest of drawers can, of course, be created by an expert artisan, starting from scratch, entirely by hand (with some tools), carefully chiseling the wood to create a beautiful piece with sophisticated inlays, made entirely of solid oak. It is the art of woodwork that probably only a human trained in the craft can realize. However, a chest of drawers can also be produced by automation and workers in a factory in Sweden, using plywood, with no frills. In the factory there will be quality control in place to ensure socks and underwear can be stored properly, and that nuts and bolts screw into the right places so the furniture is safe and stable. The appearance of the final industrial piece will still be pleasing, although more standardized than unique. As far as the primary function (storing clothes) goes, these two chests of drawers offer the same. Obviously, there are differences; the handmade piece will be more beautiful and much more aesthetically appealing, and it will perhaps last longer. It may also be a symbol of wealth, something that shows visitors how wealthy a person or family is. Yet not everybody can afford to buy a costly chest of drawers (small organizations may not have the resources to pay for months of analysis), nor does everybody have the time to wait months for the artisan to create it (research time frames are often very tight in domains like design or health care). Somebody with money for only one significant purchase this year may also decide it would be better spent on something else, like going on holiday instead of buying furniture (why wait months for a qualitative analysis if I can run a questionnaire?). Should we then suggest that we need to reject the industrial furniture? That the only true furniture is the handmade kind, which only a few people can afford and wait for? Are the qualities of storing socks and underwear something that only artisan labor can capture?
I think it would be more productive to accept that both pieces are valid and valuable; they offer a common main function, but they may have other important differences (including more or less reflexivity, this is clear!), whose acceptance and use depend on situations, contexts, resources, and, after all, power.
Finally, on Point 3 of the letter:
The established manifold harms of GenAI, especially to the environment and workers in the Global South.
This is certainly a very important point deserving attention (UN Environment Program, 2025); however, many of the things around us entail harm and exploitation, including the shoes and clothes we all buy and wear or the food we consume (e.g., Notarnicola et al., 2017). Still, a recent analysis showed, for example, that AI data centers use a relatively small amount of water compared to many other industries, including those related to our leisure (Masley, 2025). The social and environmental impacts of genAI are a matter for a broader political discussion, which we certainly need to have, but they are a non sequitur in a methodological discussion.
Concluding Thoughts
I wrote this letter as a sincere attempt to ensure that we frame the discussion around using genAI for qualitative analysis as a healthy debate rather than a polarized or dogmatic confrontation. The use of genAI in qualitative research and analysis is here to stay, and while it is fundamental that we maintain our ground on established ways of performing methods, we also need to make sure novel processes are given the space to mature, be examined critically, and contribute meaningfully to the field of qualitative inquiry. Scrutiny of the use of genAI is important, as is the need to adhere to methodological rigor and standards. We can achieve this only if we commit to engaging with these emerging tools openly, reflexively, and collaboratively, even if some of us take a deeply critical view of them.
Footnotes
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
