Abstract
Qualitative researchers, particularly those researching business and society topics, should embrace artificial intelligence (AI) to conduct efficient, explicatory, and equitable research but also exercise caution to avoid its pitfalls.
In November 2022, OpenAI released ChatGPT, a chatbot with open access and cutting-edge language processing capabilities that sparked a flurry of debates, including in academic circles. Weeks after its release, many journals, schools, and universities raised concerns about research authorship and plagiarism. While the ethical concerns about the use of artificial intelligence (AI) for academic purposes are valid and solutions to address such concerns (as well as their consequences) are in the offing, we argue in favor of using AI, specifically for qualitative research. AI assistance can make qualitative research more efficient, explicatory, and equitable. However, scholars should also avoid the pitfalls of AI-assisted qualitative research.
What is AI-Based Natural Language Processing?
AI-based natural language processing (NLP) uses linguistics and machine learning to comprehend, interpret, and produce human-style language. OpenAI’s ChatGPT is a particular application of NLP in a chatbot format that enables users to have conversations with AI. AI can carry out complex language processing tasks such as text generation, language translation, and even answering questions. However, AI also has limitations that arise from the training data it uses and its limited capacity to interpret tacit knowledge. Depending on the quality of its training data, AI frequently makes factual errors and produces biased results. Furthermore, it lacks a thorough grasp of the nuances of human language and an understanding of the physical and social world. For example, when ChatGPT was asked “There are five birds on a branch. If you shoot one of them off the branch, how many are left on the branch?,” it answered “four birds,” failing to understand that the remaining four birds would fly off. Despite these critical drawbacks, AI offers several benefits to qualitative researchers.
Why Qualitative Researchers Should Embrace AI
AI can help overcome certain disadvantages associated with small sample sizes in qualitative research. Small data samples often fail to offer insights into varied experiences and views over time and space. While technology has improved the speed and capacity of collecting large data sets, qualitative analysis has remained a slow and labor-intensive process. Scholars can use AI to overcome these disadvantages and make qualitative research more efficient, explicatory, and equitable.
Efficient
Business and society scholars often encounter a plethora of unstructured qualitative data, which presents a significant challenge in terms of management and analysis. For instance, a researcher mapping changes in media discourse on sustainability over a decade will find the volume of information overwhelming to analyze. Popular methods such as content analysis and sentiment analysis capture some information, but that is often limited to the frequency of phrases or the emotional tone, respectively. AI, by contrast, can understand the text and highlight patterns as defined by the researcher’s interpretative grid. For instance, AI can underscore text signifying political, social, and cultural issues. Instead of the researcher reading large stretches of uninteresting text, AI can act as an extension of the researcher’s ability to read and identify meanings present in the data. This enables the researcher to focus on the more interpretative aspects of research, such as refining codes, building conceptual connections, and theorizing. As a result, research becomes more efficient because thinking, deliberating, and developing an interpretative repertoire become the primary tasks of the researcher, who is freed from the more time-consuming work that AI can take care of.
Explicatory
AI is limited by its inability to possess a human-level understanding of the social world (Dreyfus, 1992). Its limitations include a lack of common sense, an inability to learn from experience, and a lack of contextual understanding of social and cultural nuances. However, creative use of these limitations can in fact enhance the explicatory potential of analysis. For instance, AI often fails to understand complex human-generated text that has multiple layers of meaning, such as dog whistles, sarcasm, and metaphor. While AI can correctly read literal text and code it as per the researcher’s predefined coding scheme, complex cases can be creatively filtered out. The cases that do not fit into the coding scheme due to ambiguity and complex language are insightful cases that AI can identify for explication. This can be achieved by adding a predefined pseudo-code, say “failure cases,” to the original coding scheme. Once the AI is run on the data, all such cases of algorithmic failure are identified and tagged as “failure cases.” These cases can then be separately interpreted by the researcher. By identifying algorithmic failures, AI can enhance the explicatory power of the research (for details, see Munk et al., 2022). This method has immense explicatory value when applied to critical socio-political discourses, such as social media or political speeches, in which language is often ambiguous and multilayered and the data sets are large.
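The failure-case workflow described above can be sketched in code. In this minimal illustration, a simple keyword matcher stands in for the AI coder (a real application would use an NLP model or a large language model); the coding scheme, keywords, and segment texts are hypothetical. Segments that match no code, or several conflicting codes, are routed to the predefined “failure case” pseudo-code for separate human interpretation:

```python
# Sketch of the "failure cases" coding workflow.
# The keyword matcher below is a hypothetical stand-in for an AI/NLP
# coder; the codes and keywords are illustrative, not a real scheme.

CODING_SCHEME = {
    "political": ["election", "policy", "government"],
    "social": ["community", "inequality", "protest"],
    "cultural": ["tradition", "identity", "ritual"],
}
FAILURE_CODE = "failure case"  # predefined pseudo-code for ambiguous text

def code_segment(text: str) -> str:
    """Assign one code from the scheme, or tag the segment as a failure case."""
    matches = [code for code, keywords in CODING_SCHEME.items()
               if any(kw in text.lower() for kw in keywords)]
    # Ambiguous segments (multiple codes) and unclassifiable segments
    # (no code) are tagged for separate interpretation by the researcher.
    return matches[0] if len(matches) == 1 else FAILURE_CODE

segments = [
    "The government announced a new election policy.",
    "Local rituals reinforce a shared identity.",
    "A policy that divides the community along lines of identity.",
]
coded = {s: code_segment(s) for s in segments}
failures = [s for s, c in coded.items() if c == FAILURE_CODE]
```

Here the third segment touches political, social, and cultural codes at once, so it lands in `failures` — exactly the kind of multilayered text the researcher would then interpret by hand.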
Equitable
AI can be immensely useful to researchers from marginalized backgrounds with limited social/cultural capital. This is because AI can help researchers overcome limitations of language prowess and of the forms and styles of academic convention, as well as their lack of access to resources and mentorship. For instance, in the context of caste, Guru (2002, p. 5003) contends that social sciences have “harboured a cultural hierarchy dividing it into a vast, inferior mass of academics who pursue empirical social science and a privileged few who are considered the theoretical pundits with reflective capacity which makes them intellectually superior to the former.” The division of empirical and theoretical work between the privileged and the underprivileged often arises from differential access to resources in academic settings. Underprivileged scholars are often dependent on their privileged colleagues for resources, and being relegated to empirical work without much theorizing results in slower career growth. While this is a systemic issue that cannot be solved with technological interventions alone, AI has the potential to empower marginalized researchers by making them more resourceful and independent. Such independence can enable researchers to conduct meaningful and critical research of their choosing. In turn, AI can help bring in the varied perspectives of underrepresented scholars and further their inclusion in mainstream academia.
Avoiding the Pitfalls
While the use of AI can make qualitative research more efficient, explicatory, and equitable, two pitfalls need to be underscored. First, AI is a tool to enhance a researcher’s capabilities, not to replace her. AI cannot be accorded ownership or authorship of research. In other words, AI cannot be treated as an independent and objective interpreter of the social field. The task of interpretation must always remain with human researchers. AI may be used to automate the identification of patterns and trends in data, but the researcher should retain control by designing the interpretative repertoire. The values and assumptions of the researcher ultimately influence the process and the research produced (Lincoln et al., 2011). By keeping interpretative control with the researcher, concerns about ownership, authorship, and the positionality of the research can be alleviated.
Second, AI carries the inherent biases present in its training data. Therefore, various tasks performed during the research can be expected to be influenced by the dominant ideas, beliefs, and attitudes prevalent in society. For instance, an AI system used by Amazon to classify and select successful job applicants was found to discriminate against female applicants by ignoring phrases such as “women’s chess club captain” as evidence of leadership (Stahl et al., 2023, p. 11). The system penalized such phrases for not matching the characteristics of successful candidates in the training data, which was dominated by male applicants. Rather than relying on AI’s linguistic capabilities alone, the researcher needs to counter such inherent biases when designing the interpretative grid, by specifically identifying such situations and giving them importance in the analysis. This is especially important for researchers doing critical work that challenges social norms and seeks to change social structures.
To conclude, by being cognizant of the pitfalls of AI, qualitative researchers can use AI to analyze large-scale qualitative data, conduct insightful research, and empower marginalized researchers.
Acknowledgements
We would like to express our gratitude to Hari Bapuji and Frank de Bakker for their meticulous and insightful feedback provided on previous versions of this article.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
