Abstract
We write as qualitative researchers to respond to the open letter opposing the use of generative artificial intelligence (GenAI) in reflexive qualitative research (Jowsey et al., 2025a). We believe that rejecting the technology in its entirety risks closing off methodological evolution, misunderstanding current practices, and isolating qualitative research from broader epistemic developments. We therefore advocate for a position grounded in critical engagement, methodological literacy, and ethical responsibility. GenAI, like any other technology, including computer-assisted qualitative data analysis (CAQDAS) software, when used critically and under close researcher leadership and control, can serve as a legitimate analytic support within reflexive qualitative inquiry.
We write as qualitative researchers to respond to the open letter opposing the use of generative artificial intelligence (GenAI) in reflexive qualitative research, which was published in Qualitative Inquiry (Jowsey et al., 2025a).
The open letter calls for the categorical exclusion of GenAI from reflexive qualitative research. While we acknowledge the ethical and socio-technical concerns that motivate this position, we respectfully contest the conclusion that GenAI is inherently incompatible with meaning-based methodologies.
We believe that rejecting the technology in its entirety risks closing off methodological evolution, misunderstanding current practices, and isolating qualitative research from broader epistemic developments. See our detailed commentary below.
Furthermore, a blanket rejection of GenAI in qualitative research can reinforce a problematic binary framing between pro- versus anti-AI, or ethical versus non-ethical positions, which may risk marginalizing in-between efforts that develop, design, and apply GenAI in ethical and responsible ways. Our stance is to refuse to side with either an absolutist anti-AI position (i.e., an outright rejection of all potential uses of GenAI) or an uncritical pro-AI enthusiasm (i.e., accepting that anything goes as long as it increases efficiency).
We believe in the diversity and flexibility of qualitative research practices where researchers can engage with GenAI while remaining reflexive, ethical, and responsible. We therefore advocate for a position grounded in critical engagement, methodological literacy, and ethical responsibility. GenAI, like any other technology, including computer-assisted qualitative data analysis (CAQDAS) software, when used critically and under close researcher leadership and control, can serve as a legitimate analytic support within reflexive qualitative inquiry.
We therefore support
continued methodological development exploring GenAI as a reflexive analytic aid, with methodological groundings and critical evaluation of its capabilities and limitations;
maintenance of human interpretive and reflexive agency in all analytic decisions, positioning GenAI as a tool or an assistant under researcher control rather than an autonomous agent;
transparency and reflexivity in the design, implementation, monitoring, evaluation, and reporting of GenAI-supported analyses; and
collective action to reduce environmental and labor harms that targets big tech companies rather than individual qualitative researchers who critically and reflexively experiment with, apply, or use GenAI in their work.
We invite colleagues from across disciplines and methodological traditions to participate in constructive dialogue about how qualitative research can engage productively and responsibly with emerging computational tools.
COMMENTARY: In Support of Researcher-Led and Responsible Use of Generative AI in Reflexive Qualitative Research
In the commentary accompanying the open letter, Jowsey et al. (2025a, p. 1) explain their outright rejection of generative artificial intelligence (GenAI) by providing three primary reasons:
1. GenAI, as simulated intelligence, is incapable of meaning-making.
2. Qualitative research should remain a distinctly human practice.
3. GenAI causes established and manifold harms, especially to the environment and to workers in the Global South.
We share many of the values motivating this position, including the commitment to reflexive, interpretative qualitative research and concern for environmental sustainability. However, we propose a different approach to GenAI’s emergence in qualitative research. While Jowsey et al. (2025a) advocate for categorical prohibition, we advocate for a responsible use of GenAI grounded in critical engagement, methodological literacy, and ethical responsibility.
In this commentary, we respond in detail to each of these reasons. Our position rests on three considerations.
GenAI Need Not Displace Human Interpretation
Jowsey et al. (2025a) argue that GenAI should be excluded from reflexive qualitative research because it “cannot make meaning of the language” or genuinely understand the world (p. 1). While we agree that GenAI operates through statistical pattern-matching rather than meaning comprehension, our question is different: Can qualitative researchers make use of this technology as a supportive tool in interpretive work?
We propose that this question deserves careful consideration rather than outright rejection. In what follows, we outline two areas that may provide additional nuance to the current debate.
The critique also overlooks the work that has gone into designing local AI systems, establishing secure Application Programming Interface (API) connections to institutional or self-hosted systems, and integrating GenAI into existing CAQDAS environments, as is the case with NVivo and MAXQDA, or in the research reported by De Paoli (2024) and Katz et al. (2024). These measures, while not perfect, allow data to be processed locally, prevent participant data from being transmitted to commercial companies’ cloud services, and offer additional layers of control that help mitigate data-security and privacy risks (Davison et al., 2024).
The goal is careful oversight, not automation without reflection. In this context, the AI model operates as another source of knowledge, not as an unchecked agent. Even though the models lack the ontological embeddedness in lived, socio-historical worlds that inform human interpretation, their output can still inform analysis, because they are trained on vast and socially-situated corpora (Krähnke et al., 2025). It all depends on how researchers use this new technology. This leads to the second point that deserves nuanced consideration.
Beyond “Small Q” Mechanization
Toward Reflexive Human-AI Collaboration
In the open letter, Jowsey et al. (2025a) claim that the uncritical use of GenAI threatens the interpretive foundations of qualitative research. Yet the authors do not address what critical use would look like in practice. Their response largely overlooks a different strand of emerging work: current scholarship by qualitative researchers who are actively examining how human–AI collaboration can be carried out responsibly and in line with established epistemologies (Chubb, 2023; Friese, 2025; Hayes, 2025; Hoffmann et al., 2025; Izani & Voyer, 2024; Krähnke et al., 2025; MacGeorge, 2025; Morgan, 2026; Nguyen-Trung, 2025; Nguyen-Trung & Nguyen, 2026; Perkins & Roe, 2024; Schäffer & Lieder, 2023; Thominet et al., 2024; Walsh & Pallas-Brink, 2023).
In this kind of work, meaning-making remains anchored in human judgment, not machine autonomy. GenAI does not interpret; it supports the researcher’s interpretive work by generating associations, contrasts, or textual pointers that can be examined, confirmed, or dismissed. Analyzing with GenAI is far from entering a single prompt and receiving a finished analysis. Instead, engagement is iterative, reflexive, and abductive.
Engaging with LLMs can open an alternative possibility of interpreting data. These models can surface contrasts, thematic variations, and subtle connections that could broaden analytic perspectives. When used through dialogue and iteration, they contribute to interpretive triangulation, this time not among data sources but between human judgment and machine suggestions. Human expertise remains central, guiding the acceptance, rejection, or refinement of model output within theoretical, contextual, and ethical boundaries. Without engaging with such work, Jowsey et al.’s critique risks positioning GenAI as an abstract threat rather than considering how researchers are already negotiating its use within established interpretive traditions.
Rejecting GenAI on the grounds that it cannot independently interpret misrepresents both how the kinds of approaches outlined above function and how researchers are applying them.
Navigating a Field in Transition
We acknowledge that much of the literature mentioned above is very recent, and that methodological guidance, standards, and good practices for reflexive qualitative research with GenAI are still developing. Dialogue on these issues is complicated by the fact that peer review has not yet fully caught up with the technology. Reviewers themselves are only now developing AI literacy, and some studies have been published in which GenAI is applied with methods that are either incomplete or poorly justified. These early publications can then be taken as representative, even when they do not reflect emerging best practice. This is not a matter of assigning fault; it reflects a field in transition, where both researchers and reviewers are still acquiring the skills needed to evaluate GenAI-supported inquiry—and will need to keep doing so as LLMs continue to evolve. Nor do we suggest that the second group of literature—those based on iterative and abductive dialogues—exhausts all the possible ways of making meaningful use of GenAI assistance in Big Q analysis. We expect there are many other approaches yet to be invented, and further experimentation and application are needed to shape what reflexive qualitative research with GenAI will look like.
As methodological training develops and more scholars gain direct experience with LLMs, the collective capacity to judge quality will increase. The appropriate response is therefore to deepen engagement, improve standards, and continue examining how GenAI can support reflexive interpretation rather than dismissing the technology on the basis of early or inadequate applications. We thus do not recommend the use of GenAI in reflexive qualitative research without thorough training in research methodologies or social theories. With GenAI, qualitative researchers should continue to shape their analysis, to consider their own positionality, to choose and justify appropriate analytical tools, and to take responsibility for the final interpretation and its communication. Thus, we advocate for a position whereby we keep our minds open to such possibilities with GenAI technologies rather than ruling them out a priori.
Reflexive Qualitative Analysis Is Relational and Distributed
The claim that reflexive qualitative analysis must be “exclusively human” rests on two key misunderstandings.
This framing may also give the impression that all scholars working within these traditions categorically reject the use of GenAI in qualitative research. We see tension between the open letter’s call for outright refusal and the more flexible, non-prescriptive stance put forward by the developers of reflexive thematic analysis. Braun and Clarke (2022, p. 43) emphasize that “Reflexive TA offers a particular orientation to, and form of, TA. That, however, as we have just noted, doesn’t mean there is just one way to do reflexive TA. . . Indeed, one of the key advantages of reflexive TA is that it offers researchers a lot of flexibility.” Given that Braun and Clarke are themselves co-authors of the open letter, this raises an unresolved tension: it remains unclear why such flexibility is no longer seen as methodologically compatible with engaging GenAI, despite their long-standing insistence that there is not just one way to practice reflexive analysis.
Hitch (2024) similarly points out that
Braun and Clarke (2021a) have always asserted that reflexive thematic analysis is intended to be a flexible rather than a prescriptive approach to qualitative analysis. As a pragmatist with a strong commitment to implementation, I would encourage anyone considering the use of AI in reflexive thematic analysis to also contemplate who we serve as qualitative researchers. (p. 602)
We agree with this sentiment and respectfully reject the claim that using GenAI in qualitative data analysis necessarily renders qualitative research non-reflexive. While we acknowledge that GenAI itself cannot be reflexive (Jowsey et al., 2025a), we argue that researchers who use GenAI can be.
According to Whitaker and Atkinson (2021), reflexivity is “a fundamental and inescapable feature of all research, in the natural and social sciences alike” (p. 3). We agree with this position and argue that reflexivity is not determined by the technical design of a tool but by the researcher’s capability and responsibility to reflect on discipline, methodology, textual representation, and positionality. Reflexivity is far from a binary situation: AI is not a button that turns our reflexivity on and off like a light bulb.
With AI, researchers still need to consciously maintain and deepen their critical reflection, assessing how worldviews, knowledge, biases, experiences, materials, tools, and technologies influence knowledge production. As with any other technological tool (be it online meeting applications, digital data collection tools, or CAQDAS software), qualitative researchers who engage with GenAI can, and should, maintain reflexivity when they design, develop, and adapt their digital research workflow in ways that “intentionally [consider] the choice of digital tools and spaces in meaningful and reflexive ways” (Paulus & Lester, 2024, p. 622).
Beyond the “Exclusively Human” Subject
The Claim That Reflexive Qualitative Analysis Must Be “Exclusively Human” Rests on a Narrow Understanding of Meaning-Making
Many established theoretical traditions within qualitative inquiry recognize interpretation as relational and distributed across human and non-human elements—including texts, tools, environments, discourses, and technologies. The open letter’s argument presupposes that the only valid model of qualitative interpretation is one where meaning is generated exclusively within an isolated human consciousness. This excludes established concepts such as
Assemblage thinking (Deleuze & Guattari, 1987).
Distributed cognition (Hutchins, 1995).
Posthumanist knowledge practices (Barad, 2007; Braidotti, 2013).
Sociomaterial entanglements in research (Fenwick et al., 2011; Orlikowski, 2007).
These traditions position meaning-making as relational, co-constructed across human and non-human agents, tools, discourses, and environments and predate generative AI. They do not deny the human researcher’s ethical responsibility but allow for cognitive scaffolding, external triggers, and mediated thinking. An AI assistant in this context remains a non-agentic cognitive artifact that supports human reasoning, not an interpretive subject. Below, we summarize the four mentioned concepts:
Assemblage Thinking
Assemblage thinking conceptualizes any social phenomenon—including qualitative analysis—as an assemblage of diverse elements (people, tools, environments, ideas) that come together and temporarily form a functional whole (Deleuze & Guattari, 1987). Rather than meaning residing solely in an individual’s mind, meaning emerges from the relations among heterogeneous parts of an assemblage. In Deleuze and Guattari’s terms, agency is distributed across a socio-material network: human action requires material interdependencies and networks of discursive devices. A researcher analyzing interview data is never “alone” in meaning-making—they are part of an assemblage with the interview transcripts, the coding scheme, the ambient environment, perhaps a software tool or an AI assistant. The connections among these components are fluid and can reconfigure; an assemblage is not static, but “dynamic, heterogeneous . . . constantly shifting, evolving, and adapting based on the interactions of its components.”
Distributed Cognition
Distributed cognition proposes that cognitive processes are not bounded by an individual mind, but distributed across people, artifacts, and time (Hutchins, 1995). In Edwin Hutchins’ classic example, navigating a ship is a cognitive task accomplished by a system of navigators, maps, instruments, and culturally learned procedures—not by a single navigator’s brain alone. Applying this to qualitative research, we see data analysis as a team effort between humans and their tools. A codebook, for instance, is an external memory structure that “stores” definitions and examples of codes, thus offloading cognitive effort from the researcher’s mind into a shared artifact. Likewise, when qualitative collaborators discuss emerging themes around a whiteboard covered in sticky notes, the cognition is happening in the interaction of multiple brains and the material layout of notes that hold ideas in place. The fundamental unit of analysis, as Hutchins would say, is the entire “collection of individuals and artifacts and their relations to each other in a particular work practice” (Rogers & Ellis, 1994).
Posthumanist Knowledge Practices
Posthumanist and new materialist scholars argue that knowledge is not an exclusively human product—it emerges from the intra-action (in Barad’s term) of humans with non-human agencies, where the line between knower and known blurs (Barad, 2007; Braidotti, 2013). Karen Barad’s concept of agential realism insists that we must account for the material role in how knowledge comes to be. She rejects the idea that we simply reflect an external reality or construct it wholly socially; instead, our practices (including using instruments, technologies, and our bodies) participate in bringing forth phenomena. “We don’t obtain knowledge by standing outside the world; we know because we are of the world. We are part of the world in its differential becoming,” Barad explains, emphasizing that non-humans (devices, organisms, matter) are active participants in the production of knowledge.
Sociomaterial Entanglements in Research
The sociomaterial approach in fields like organization studies and education research likewise posits that the “social” (human intentions, interactions, interpretations) and the “material” (tools, technologies, physical spaces) are constitutively entangled in practice (Fenwick et al., 2011; Orlikowski, 2007). Wanda Orlikowski argues that we cannot adequately understand everyday work (or research) if we ignore the material components: “everyday organizing is inextricably bound up with materiality,” and the social and the material should be treated as constitutively entangled rather than as separate domains.
In other words, the activities of qualitative research (data collection, coding, sensemaking) are jointly shaped by social and material forces. Tara Fenwick and colleagues similarly describe any human activity as a product of complex entanglements of human and non-human actors, shaped by both social and material forces.
Notably, sociomaterial theorists often adopt a flat ontology: rather than treating technology as a mere backdrop or tool, they grant human and non-human elements equal analytic standing. Furthermore, Orlikowski warns that as technology becomes ubiquitous, it tends to vanish from view; sociomaterial analysis counters this by deliberately making technology visible to examine how it shapes knowledge production.
The Risk of Epistemological Generalization
If the open letter were to discuss only the process of meaning-making as defined by Braun and Clarke (2022), we could merely note that this represents a relatively narrow epistemological stance, but not object to it in principle. However, the authors of the open letter extend this understanding beyond reflexive thematic analysis to encompass all forms of reflexive qualitative research. This generalization is problematic, because Braun and Clarke’s position is grounded in a specific epistemological framework—namely, an interpretivist–constructionist orientation that locates meaning-making primarily within the human researcher’s interpretive activity.
Other qualitative traditions, however, conceptualize meaning not as a product of individual human interpretation but as emergent from dynamic relations among heterogeneous human and non-human entities—people, tools, technologies, discourses, and environments. By conflating Braun and Clarke’s human-centered model of meaning-making with the epistemic grounds of all reflexive qualitative inquiry, the open letter overlooks this pluralism.
We argue that human reflexivity does not require solitude. As the positions described above show, it can involve external stimuli, conceptual provocations, dialogic prompts, or “thinking with” artifacts, texts, theories—and potentially AI outputs—while maintaining human interpretive agency.
Researchers routinely think with theory, with writing, with sensory environments, with conceptual tools, and with peers. In this context, a GenAI system can be understood as an additional sociomaterial artifact, an entity that can be instructed to contribute prompts, rephrasings, or contrasts as part of a dialogue around the research texts. Its involvement does not negate human reflexivity; rather, it expands the space of dialogic engagement. What remains essential is that interpretive agency and accountability stay with the researcher.
Placing GenAI within the category of “assisting, not replacing” aligns it with established analytic supports such as memos, mapping, search tools, computational querying, and collaborative discussion. Its presence does not threaten reflexivity; instead, it can contribute to productive interpretive provocation when guided by methodological awareness.
Ethical and Environmental Concerns Require Governance and Harm-Reduction, Not Prohibition
The environmental footprint of data infrastructures and the labor exploitation embedded in some AI supply chains require urgent attention. These issues should not be minimized. However, a categorical prohibition of GenAI within qualitative research is unlikely to produce meaningful change and may instead hinder the discipline’s ability to contribute to responsible governance.
The open letter treats AI’s environmental footprint as uniquely unacceptable, but it fails to contextualize those harms within broader debates on proportionality, mitigation, and responsible innovation (e.g., International Energy Agency [IEA], 2025; Xiao et al., 2025). To evaluate the environmental ethics of AI in research, we must compare it to the existing digital practices of social scientists.
Contextualizing Academic Energy Use
For a social scientist, it is perhaps most relevant to compare an hour of intensive LLM use for text analysis with a standard academic activity, such as participating in a 1-hour video call. Estimates for the energy consumption of video streaming vary, but a widely quoted 2020 IEA study puts the energy cost of streaming video at approximately 77 Wh per hour, depending on the device, and excluding the network costs of uploading the user’s own camera feed (Barker, 2025).
In comparison, the energy costs for an hour of LLM analysis—based on an initial upload of texts followed by 10 to 20 subsequent queries—are strikingly similar. Josh (2025) estimates that uploading 200 pages of text (approx. 100k tokens) and querying it with a model like GPT-4o consumes up to 40 Wh. Subsequent queries are significantly lighter; Google reports that an average query on Gemini consumes roughly 0.24 Wh (Crownhart, 2025). Consequently, the total energy estimate for an intense hour of LLM-assisted analysis falls in roughly the same ballpark as a standard one-hour Zoom call (Josh, 2025).
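For readers who wish to check this arithmetic, the comparison can be reproduced in a few lines of Python. The constants below are simply the published estimates cited in this section (Barker, 2025; Crownhart, 2025; Josh, 2025), not measurements of our own, and the session shape (one upload plus 10 to 20 follow-up queries) is the illustrative scenario described above:

```python
# Back-of-envelope comparison of the energy figures cited above.
# All constants are the published estimates quoted in the text,
# not our own measurements.

WH_INITIAL_UPLOAD = 40.0   # ~200 pages (~100k tokens) into a GPT-4o-class model (Josh, 2025)
WH_PER_QUERY = 0.24        # average Gemini query, as reported by Google (Crownhart, 2025)
WH_VIDEO_CALL_HOUR = 77.0  # ~1 hour of video streaming, 2020 IEA estimate (Barker, 2025)

def llm_session_wh(n_queries: int) -> float:
    """Total energy for one document upload plus n follow-up queries, in Wh."""
    return WH_INITIAL_UPLOAD + n_queries * WH_PER_QUERY

for n in (10, 20):
    wh = llm_session_wh(n)
    print(f"{n} queries: {wh:.1f} Wh "
          f"({wh / WH_VIDEO_CALL_HOUR:.0%} of a 1-hour video call)")
```

Under these assumptions, a session of 10 to 20 queries lands in the low-to-mid 40s of Wh, roughly half to three-fifths of the hour-long video call, consistent with the "same ballpark" conclusion above.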
Placing Resource Consumption in Perspective
While data centers undeniably consume significant water and energy, existing research indicates that this consumption, while non-trivial, remains significantly lower than that of other major sectors. When placed in a global context, the water withdrawals of data infrastructures are a fraction of those of agriculture or municipal systems (e.g., Gabbatiss, 2025; International Renewable Energy Agency [IRENA], 2025; Ritchie et al., 2022). Notably, a data center’s water usage is typically less than 10% of the local municipal supply. This comparison does not excuse AI’s footprint, but it places it in a realistic policy landscape.
Moving Toward Responsible Governance and Use
The appropriate response to these findings is not categorical abstinence, but targeted intervention and harm reduction. Qualitative researchers should advocate for and adopt sustainable practices, such as prioritizing models hosted in renewable-powered data centers, utilizing efficiency-focused architectures (e.g., small language models), and demanding transparency regarding resource-intensive deployments. By engaging with these technologies rather than rejecting them, the field can contribute to the growing body of work on reducing AI’s footprint (e.g., Li et al., 2025).
The ethical stance presented in the open letter’s third reason effectively blocks any serious discussion of such dialogic, reflexive, or cognitively distributed approaches. By deciding that GenAI must be dismissed on moral grounds, the authors avoid engaging with the possibility that researcher-directed, dialogic AI could function responsibly within sustainable and ethically governed systems. Too often, their moral rejection becomes a methodological one; limited familiarity with AI methods can lead to the use of simplistic prompts and poor results, which are then taken as confirmation that AI cannot be used to support qualitative analysis (Jowsey et al., 2025b; Nguyen & Welch, 2025). This circular logic prevents the exploration of how qualitative researchers with the right epistemic grounding could build iterative, human-led analyses that avoid both uncritical automation and extractive scaling.
Conclusion
Responsible engagement positions qualitative researchers to shape practices and policies rather than withdraw from them. The question before us is not whether harm exists, but how methodological communities can contribute to its mitigation.
Reflexive qualitative research is an evolving set of practices. Its core commitments—to contextuality, situatedness, interpretation, and ethical responsibility—are not inherently threatened by GenAI. When carefully constrained, researcher-led, and epistemically informed, GenAI can function as an analytic aid that complements rather than supplants human understanding. A categorical rejection risks foreclosing methodological innovation, narrowing the field of inquiry, and removing qualitative voices from critical conversations about technological development.
Signed
Susanne Friese, Kien Nguyen-Trung, Stephen Powell, David Morgan, Silke F. Heiss, Kai Dröge, Franz Breuer, Kyle Bower, Nguyen Nhu Anh Le, Sundara Kashyap Vadapalli, Thomas Hormaza Dow, Jarg Bergold, Paul Marsden, Leo Gürtler, Burkhard Schäffer, Jonas Wibowo, Wolfgang Beywl, Fabio Roman Lieder, Christopher Horne, Oliver Bidlo, Frank-Thomas Naether, Stefan Aufenanger, Christian Funk, Gesa Pult, Joshua Eisenstat, Stephen Gourlay, Lorenzo Salomon Cardenas, Maik Arnold, Matus Adamkovic, Roehl Sybing, Jules Sherman, Soul Seated Journey, Huynh Thi Hau, Thai Tran Van Hanh, Jörg Schwarz, Anea Klein, Christine Neubert, Evelyn Funk, Dean McDonnell, Hartmut Reinke, Doris Klappenbach-Lentz, Nguyen Le Hoai Anh, Elitsa Uzunova, Marion Dirksmeier, Franziska Wächter, Daniel Rode, Lidiany Cerqueira, Salvador Bustamante Aragonés, Pham Huynh Thuy Uyen, Miguel Alejano Saquimux Contreras, Lina Franken, Ryan Baker, Tran Quoc Khai, Anes Felipe Zambrano, Thorsten Dresing, Margrit Schreier, Stefano De Paoli, Aneta Heinz, Verena Butschkau, Peter Stegmaier, Marco Galle, Nicole Weydmann, Peo Prieto Martin, Marcus Nolden, Christiane Hof, Christian Arndt, Susanne von Jan, Jenni Burt, Anja Weiß, Nick Baker, Barbara A Reed, Sonja Kind, Niels Peek, Stefan Rädiker, Sarah J. Pearsall, Samantha Hurst, Lee Mager, Franjo Pehar, Stefan Kaufman, T.H. Anh Nguyen, Valerie Futch Ehrlich, Jay Delaune, Thomas S. Eberle, Tanja Kistler, Lucas M. Seuren, Flavio R. Durón González, David C. Coker, Zubair Barkat, Gareth Davey, Rebat Kumar Dhakal, Zeynel Amac, Nihan Albayrak, Anew McNeill, Nieky van Veggel, Aaron Hoy, Peter Slattery, Florentina Scârneci-Domnişoru, Brigitte Gasser, Khanh Nguyen-Xuan, Ali Teymoori, Arno Simons, Antony Bryant, Mohammad Hossein Jarrahi, Ravneet Somal, Rachel Shanks, Richard James Smith, Alexander K Saeri, Gregory Hadley, Goele Scheers, Md Omar Faruk, Werner Vogd, Jonathan Harth, Charlie Constance, Kakali Bhattacharya, Dominic Cassidy.
Supplemental Material
Supplemental material (sj-xlsx-1-qix-10.1177_10778004261429393) for “Beyond Binary Positions: Making Space for Critical and Reflexive GenAI Integration in Qualitative Research” by Susanne Friese, Kien Nguyen-Trung, Steve Powell, and David L. Morgan in Qualitative Inquiry.
Declaration of Conflicting Interests
The authors declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: Dr Kien Nguyen-Trung and Prof. David L. Morgan declare that they have no conflicts of interest. Dr. Susanne Friese and Dr. Steve Powell disclose their roles as co-founders of QInsights.ai and Causal Map, respectively. Both platforms utilize generative AI technology to support the process of qualitative data analysis. Dr. Friese’s role is the outcome of a thirty-year career working with, teaching, and writing about Computer-Assisted Qualitative Data Analysis (CAQDAS) software, alongside extensive methodological research into the intersection of AI and qualitative inquiry. Her advocacy for the integration of Artificial Intelligence is grounded in early experimentation during 2023, where she identified the limitations of existing AI implementations in CAQDAS and the specific need for tools that support "collaborative, dialogic analysis" rather than simple automation. The decision to develop software was a direct response to these methodological gaps, driven by a commitment to push the boundaries of qualitative research methods. A record of this foundational work and methodological exploration—all of which pre-dates the commercial release of the software—is publicly available at
. Peer-reviewed publications based on these early writings are forthcoming in 2026. Dr. Powell is engaged in scholarly qualitative research for evaluation, and his motivation to develop tools for the research community stems from similar methodological commitments. His research record is available via his
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Footnote
The open letter was shared through a link to a Google document, which could be read openly. A link to a Google Form was embedded in that document, allowing readers to add their names as signatories. We circulated it via social media, professional networks, and academic email listservs. For the full details of endorsers, see
.
