Abstract
Drawing on our experience developing a visual polyvocal narrative of the immigration system in Canada and Brazil, we explore the role of artificial intelligence (AI) image generation as a tool for supporting interview participants in articulating their experiences. We found that the AI image generation process supported participants’ ability to reflect on and express their experiences. However, there were several challenges due to technological limitations and inherent biases embedded in the AI, which resulted in unsatisfactory images and repeated image generation attempts. We came to conceptualize the AI image generation tool as a third agent in the interview process, facilitating access to artistic expression yet introducing content into the conversation. We identified five primary roles played by the AI image generation tool in the interview process: Helper (supported the image generation process), Distractor (shifted attention from the topic of study to prompt engineering), Motivator (motivated participants to better articulate their vision), Influencer (introduced content into the conversation), and Facilitator (facilitated reflection and sensemaking). We discuss avenues for maximizing the benefits of AI image generation in interviewing and mitigating its challenges. We contribute to a growing body of research on reflective and arts-based interventions in interviewing by illustrating the role new technologies can play in advancing the potential of interview-based research.
Introduction
The rapid development of artificial intelligence (AI) technologies is reshaping the landscape of qualitative research and providing opportunities for innovative approaches to exploring complex human experiences. This study examines the potential of using AI image generation in reflective interviewing. Reflective interviewing recognizes that the interview interaction is not neutral and seeks to support participants in reflecting on and making sense of their experiences (Nardon et al., 2021).
Many approaches have been used to support reflection. For instance, Imaginative Metaphor Elicitation (IME) allows participants to explore their experiences in non-threatening ways and facilitates introspection and self-awareness by making the experience tangible and accessible (Nardon & Hari, 2021). Arts-based research methods provide a non-verbal, expressive platform for participants to convey their experiences (Vacchelli, 2018). Arts-based approaches have gained prominence for their ability to facilitate inclusivity and elicit richer data in qualitative research (Leavy, 2017). By minimizing reliance on the written word, visual arts-based methods allow participants to engage more fully and share their stories in ways that transcend linguistic and cultural barriers (Goldstraw et al., 2020). AI image generation emerges as a promising extension of these approaches, enabling participants who may feel apprehensive about or resistant to traditional artistic expression to create visual representations of their experiences in a more accessible and participatory manner.
We draw on our experience developing a visual polyvocal narrative of the immigration system in Canada and Brazil (https://migrationvoices.org), in which we interviewed 20 individuals with various linguistic and cultural backgrounds across the two countries and co-developed an image of their experience with the support of OpenAI’s DALL-E. We found that this methodological innovation supported participants in expressing their emotions and experiences, creating space for introspection and self-awareness. Yet, the process also required careful reflexivity to address the power dynamics the AI image generation tool brought into play, which sometimes introduced biases into the conversation.
We contribute to the emerging body of work exploring the potential of AI tools in qualitative research by conceptualizing AI not merely as a tool but as an agent, introducing a third layer of subjectivity into the research process alongside participants and researchers. We describe the different roles the AI image generation tool plays in the interview process and provide researchers with suggestions to maximize the benefits of this tool while mitigating its challenges.
This paper is organized as follows: we start with a brief discussion of arts-based research as a tool to facilitate inclusivity and support reflection and the potential role of AI image generation tools. We then present our project, drawing on immigration narratives collected through AI image generation. We discuss the many roles AI played in our project and conclude by discussing the potential and challenges of using AI image generation in the context of qualitative interviews.
Arts-Based Research and AI Image Generation
Researchers are increasingly questioning the assumption that participants can verbally articulate their experiences (Aldridge, 2017) and propose using interview tools to facilitate reflection (Nardon et al., 2021). Arts-based research is the umbrella term for an array of methodologies that entail artistic expression (Leavy, 2017), including visual art (e.g., drawing, collage, painting, sculpture, installations, photography), literary techniques (e.g., poetry), performance (e.g., dance and theatre), folk art (e.g., quilts), and new media (e.g., video, zines) (Knowles & Cole, 2008). Arts-based research recognizes that art is a form of knowledge (Leavy, 2017) and allows participants to use visual arts to represent their experiences (Collings et al., 2022). Using arts-based methods to minimize the emphasis on the written word opens possibilities for sharing more diverse representations (Goldstraw et al., 2020). Alongside increased inclusivity, visual methods have the potential to elicit insights that cannot be expressed verbally (Gauntlett, 2007). Arts-based methods facilitate knowledge creation and discovery, illuminating the importance of multiple ways of knowing (Bagley & Castro-Salazar, 2012). Thus, visual methods can be strategically employed to gain more detailed insights (Flaherty & Garratt, 2023), elicit discussions on sensitive issues (Vacchelli, 2018), and increase inclusivity in research participation (Collings et al., 2022).
It is a common occurrence that participants experience apprehension, discomfort, or resistance in response to creating something artistic (Copeland & Agosto, 2012). AI image generation emerges as an alternative that increases the accessibility of artistic creation, enabling users to create high-quality visual content without possessing traditional artistic skills (Oppenlaender, 2022; Vimpari et al., 2023). AI text-to-image generation systems allow users to generate images based on a text description, referred to as the prompt (Vimpari et al., 2023). These systems, such as OpenAI’s DALL-E, are advertised as offering possibilities to generate varied images without requiring technical skills, making generative AI more attractive and democratizing artistic creation (Arikan & Aram, 2022; Van Wynsberghe, 2021).
Technological innovations have historically expanded creative processes. Generative art systems can now act as creative agents in collaboration with users (Arikan & Aram, 2022). The image generation process is guided by prompt engineering, a skill that involves crafting effective textual inputs (Oppenlaender, 2022). Crafting effective inputs relies on a general understanding of how AI models are trained, which helps users mitigate the biases and limitations of the system, such as the over-representation of some cultures and contexts and the under-representation of others (Lucy & Bamman, 2021). This interaction involves iterative refinement, where users adjust prompts until the desired image is achieved, often requiring the user to curate the output. Researchers using AI tools in support of research have found that AI tools are highly sensitive to small changes in prompts (Tabone & de Winter, 2023) and require researchers’ constant critical engagement and dialogue with the tool (Chubb, 2023).
While generative art systems like DALL-E offer innovative possibilities, they also present limitations and ethical challenges. First, these technologies are inherently opaque, which can lead to user alienation and privilege those with access to AI expertise (Oppenlaender, 2022). Second, there are concerns about the biases and stereotypes embedded in AI-generated content, as these systems often reflect inequalities present in their training data, leading to potential misinterpretations and offensive or inaccurate images (Lucy & Bamman, 2021). Third, there are concerns that the data used in training these tools infringes copyright and intellectual property laws (Appel et al., 2023). Fourth, there are concerns regarding the environmental impacts of AI (Dhar, 2020) and AI-based artistic practices (Jääskeläinen et al., 2022) as both the technology and its use significantly contribute to carbon emissions and high energy consumption. Despite these challenges, generative art technologies have the potential to democratize artistic processes, allowing individuals without traditional artistic skills to create visual content, which is particularly valuable for research methodologies employing art-based approaches (Oppenlaender, 2022; Vimpari et al., 2023). It is important to acknowledge that AI technology is rapidly evolving, and researchers and technologists are exploring strategies for reducing computational resources (Le Goff, 2023), optimizing energy use (Li et al., 2024), and mitigating AI challenges.
Extant research has explored the roles of AI in various contexts relevant to research and arts-based reflective interventions. Within the research context, AI has been explored as a tool for data analysis. Chubb (2023) explored the potential of ChatPDF to transform arts-based journey maps into vignettes and found that the tool can support researchers by freeing time for more critical engagement with the results of the research. Similarly, Tabone and de Winter (2023) explored the potential of ChatGPT to support qualitative data analysis and found it to be a valuable complementary tool if used critically. Likewise, Hitch (2024) found that ChatGPT took on the role of the ‘other’ for researchers adopting a reflective thematic analytical approach, where researchers developed and evolved their analysis along with ChatGPT. Scholars have called attention to AI’s role as a tool used to enhance researchers’ capabilities rather than replace them, making qualitative research more equitable, efficient, and explicatory (Anis & French, 2023). AI has also been found to support artistic processes. For example, Yang and collaborators (2022) explored the roles of AI when collaborating with humans to construct science-fiction stories. By proposing the next paragraph in a human-written text, the AI provided new details and unexpected plot developments, which were frequently considered core inspirations for subsequent writing. In this study, we explore the roles of AI image generation in the interview process, as discussed next.
Understanding Immigration Through AI-Image Generation
This project originated from our desire to create a polyvocal visual narrative of the immigration system in Canada and Brazil to raise awareness and propose solutions to immigrant inclusion in the workplace and society. We started from the assumption that individuals in various locations within the wide migratory ecosystem—including migrants and their families, community organizations, educational institutions, government bodies, employers, and the community at large—have knowledge and perspectives that can support the creation of inclusive narratives. Further, we wanted to provide participants in the system with an opportunity to reflect more deeply on their thoughts and experiences and contribute a visual element to the narrative.
Interview Protocol
We drew on reflective interviewing approaches (Nardon et al., 2021), which focus on providing participants with an opportunity to reflect on their situations. Specifically, we used Imaginative Metaphor Elicitation (IME) as an interviewing technique (Nardon & Hari, 2021), combined with an arts-based perspective to facilitate sensemaking and articulation of experiences. Metaphors allow understanding of one thing in terms of another (Lakoff & Johnson, 1980) and supply flexibility and expansion in language to express ideas (Weick, 1979). Metaphors have been used to support participants’ reflection in organizational development (Heracleous & Jacobs, 2008), coaching (Dunbar, 2017; Hunt, 2009), and research (Tosey et al., 2014). Metaphor elicitation in interviews allows participants to safely discuss sensitive issues (Nardon & Hari, 2021). Metaphor elicitation was used to make the artistic process safe and minimize the potential for offensive or unpleasant experiences when dealing with sensitive material. We invited participants to develop a metaphor to depict their experience, giving participants complete control over the metaphor creation process (Nardon & Hari, 2021), as we viewed participants as experts in their own lives (Nardon et al., 2021) and as knowledge holders (Lenette, 2019).
We started our interview by asking participants about their experiences. This portion of the interview reflects traditional open-ended interviews where the researcher strives to develop a rapport with participants while learning about their experiences. This section of the interview explored three questions: (1) Tell me a little bit about your position in the immigration system; (2) From where you are, what do you know about immigrant integration in [Canada or Brazil]; and (3) And what would you like others to know? We then worked with participants to create a metaphor to capture what they wanted others to know about their immigration experience using clean language questions – questions free of judgment, assumptions, and presuppositions (Cairns-Lee et al., 2021). Following this process, we then moved to the next section of the interview, where we engaged in developing a unique artistic image with the support of DALL-E. We started this process by summarizing the knowledge captured and inviting participants to consider an image that illustrates their message. The interaction with Heather, an immigrant, illustrates this process (Image 1).
Image 1. Heather’s rosebush.

Researcher: …we’re talking a little bit about how, on the outside, Canada seems like an easier place to immigrate to, but there are hidden challenges along the way. This idea that there are still struggles, of course, when you get to Canada: finding housing, finding jobs, finding doctors. However, when it comes to social dynamics, it is a nice place because it is quite multicultural and more diverse. Also, this idea that Canada is sort of nicer on the outside and on the inside, there are challenges. All of this is like what?

Heather: Huh. Let me try to think of a metaphor. I’m trying to think of what would look good on the outside but quite challenging once you get inside yet still navigable and delicious.

Researcher: What are some images that come to mind when we’re thinking about something with an exterior and also an inside?

Heather: Well, so I thought of a rosebush. Okay, so the roses are pretty on the outside, but underneath, it’s thorny. Still, it is navigable.
We then worked together to articulate this image further through successive questioning, inviting participants to elaborate on their metaphor, as illustrated in the example below, where Frankie, an international student, uses the image of a rollercoaster as her metaphor (Image 2).

Image 2. Frankie’s rollercoaster.

Researcher: When you imagine this rollercoaster, do you see color? Does this rollercoaster have a specific color?

Frankie: Dark blue? Or dark green.

Researcher: Okay, and whereabouts is this roller coaster?

Frankie: In my mind there is always—this roller coaster is always located in a park. But it is dark. Yeah, it’s night. Okay, it’s night; it is not in daylight. It’s night. But you can see the crowd. There are people, the sound, the noise of people, like a crowd. And you can see the light of the other instruments and people shouting! Wow, we are like surprised. We are scared, and it's exciting!

Researcher: An image of a roller coaster. It’s got its ups and downs. It’s dark blue, dark green, located in the park at night, and you can see the crowd, and you can see the lights from other rides, and there are people shouting. And is there anything else about this image?

Frankie: Also, I always see some stars, the sign of the stars, under the roller coaster, under the body of the roller coaster.
Once the metaphor was imagined and articulated, we worked with ChatGPT to help create a prompt for AI art generation. We provided ChatGPT with the prompt: “Think as an expert and refine and elaborate the prompt below for better image generation to be given to DALL-E and give me that prompt please [description of participants’ metaphor].” We then worked with participants to ensure the prompt expressed their vision and entered the new prompt into DALL-E to develop the image. Following the AI’s first image output, we worked together with participants to refine the image until it was to their satisfaction. The interview concluded by asking participants, “What would you like to call this image?” and “What would you like others to know about this image?”
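For researchers who wish to script this two-step procedure, the pipeline can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors’ instrument: it assumes OpenAI’s Python SDK (v1-style client), and the function and model names (`build_refinement_request`, `generate_image`, `gpt-4o`) are our own hypothetical choices.

```python
# Sketch of the two-step pipeline: (1) a chat model refines the participant's
# metaphor into an image prompt; (2) DALL-E renders the refined prompt.
# Helper names and model choices are illustrative, not from the study.

REFINEMENT_TEMPLATE = (
    "Think as an expert and refine and elaborate the prompt below for better "
    "image generation to be given to DALL-E and give me that prompt please: "
    "{metaphor}"
)

def build_refinement_request(metaphor_description: str) -> str:
    """Assemble the meta-prompt sent to the chat model from the participant's metaphor."""
    return REFINEMENT_TEMPLATE.format(metaphor=metaphor_description)

def generate_image(client, metaphor_description: str) -> str:
    """Refine the prompt with a chat model, then render it with DALL-E.

    `client` is assumed to be an `openai.OpenAI` instance; returns the image URL.
    """
    refined = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model; the study used ChatGPT
        messages=[{"role": "user",
                   "content": build_refinement_request(metaphor_description)}],
    ).choices[0].message.content
    image = client.images.generate(model="dall-e-3", prompt=refined,
                                   n=1, size="1024x1024")
    return image.data[0].url
```

In an interview setting, the researcher would show each generated image to the participant, elicit revisions to the metaphor description, and call `generate_image` again until the participant is satisfied, mirroring the iterative refinement described above.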
Data Collection and Analysis
Participant Demographics and Interview Details.
As shown in Table 1, this study involves the analysis of 20 interviews, 13 conducted in Canada and 7 in Brazil, with a diverse population of immigrants, immigrant children, international students, and people working with the immigrant population. Of the 20 interviews, 10 were conducted in Portuguese and 10 in English. The dataset includes participants of 10 different nationalities: Brazil (7), Iran (3), Venezuela (2), Canada (2), Ghana (1), Paraguay (1), Haiti (1), South Korea (1), India (1), and Afghanistan (1). The duration of the interviews varied from 28 min to 1 h and 46 min, with an average duration of approximately 54 min. The number of images generated per interview ranged from 1 to 16. Most participants (13 out of 20) generated five or fewer images. Of those, eight were satisfied with the first image produced. Seven participants produced between 6 and 16 images. Among these, three cases stand out, with participants generating 10, 11, and 16 images, respectively. This variation in the number of images generated may reflect differences in participants’ interaction with the technology, visual preferences, or specific expectations regarding the generative art process.
Following each interview, the researchers took detailed process notes, which were shared during weekly discussions with the research team. These discussions informed best practices with the AI image generation tool and supported continuous learning for the research team. The data were analyzed using an inductive, grounded approach, in which initial analyses informed further data collection (Charmaz, 2014; Locke, 2001). We engaged in an iterative process of theorizing and coding the data (Gioia et al., 2013), exploring how AI supports or hinders participants in creating imaginative metaphors. We were guided by the overarching research question: What role does AI image generation play in the interview process?
The authors read the transcripts multiple times and repeatedly engaged with the dataset to inform regular discussions on coding schemes and ensure a consensus on emerging codes. In addition, the memos we recorded throughout the data collection and analysis supported our understanding of this method’s influence on participants and AI’s role in qualitative research employing visual methods, which we discuss below.
The AI Image Generation Tool as an Active Agent in the Interview Process
AI-Art Generation Roles in Interviewing.
AI as a Helper
Utilizing AI image generation as a visual method invites participants to partake in arts-based research virtually or in person without requiring the artistic abilities, materials, or time such protocols typically demand. Using AI as a helper was our main motivation as we started this project, and it fulfilled this role in many instances. For example, Cameron expressed satisfaction with the drawing support:

Cameron: I wouldn’t know how to draw this one. Because I do draw stuff, I do paint, but I wouldn’t know how to do that one.
Visual arts-based research has already been recognized for its ability to be more inclusive (Collings et al., 2022), transcend linguistic and cultural barriers (Goldstraw et al., 2020), and allow for more diverse representations to be shared (Gauntlett, 2007; Goldstraw et al., 2020). AI-generated art has also been recognized to allow individuals without traditional artistic skills to create visual content (Vimpari et al., 2023). In addition, using AI image generation allowed us to conduct interviews online, including with participants who would otherwise be unable to take the time and resources to attend the interview in person.
However, it is important to note that AI image generation depends upon the writing of prompts, which involves crafting effective textual inputs, thus privileging those with language proficiency and access to AI expertise (Oppenlaender, 2022). In our case, the researcher played the role of prompt engineer, which increased the role of the interviewer in the knowledge construction, in contrast with traditional visual arts-based research where the participant has complete control over their creations.
AI as a Distractor
The process of representing participants’ metaphors through AI-generated images was, at times, laborious and frustrating due to technological shortcomings and inherent biases in the AI knowledge base (Bianchi et al., 2023). For example, Eva envisioned a metaphor in which the world is a house and the foreigner a friend. In her vision, the people and houses were represented as inhabiting the Earth. DALL-E generated an image of people outside of the world. Through the refinement process, DALL-E generated new modified images aligned with the concept of elements inhabiting the Earth. However, it still produced images of people outside the Earth, as illustrated in Image 3 (The world is a house).
In addition to these technical shortcomings, AI images also reflected stereotypes and biases in their way of representing concepts, values, people, and cultures, which was at times challenging given our population of diverse cultures, genders, sexualities, and races that have been historically marginalized. For example, August wanted to express love and empathy while in another person’s shoes. DALL-E represented love by adding a traditional man-and-woman couple or family to the image, as displayed in Image 4 (Stepping on another’s shoes).
In some situations where the images were not satisfactory, the participant and researcher became focused on prompt engineering to generate better images rather than the immigration experience, which was the primary goal of the interview. These distractions were likely caused by the opacity of these technologies, biases, stereotypes embedded in AI-generated content, and inequalities in their training data (Lucy & Bamman, 2021).
AI as a Motivator
Sometimes, however, the generation of unsatisfactory images motivated participants to articulate their experiences better, providing more details and fully elaborating on their images. For example, when DALL-E generated a group of primarily white people, August was able to clarify her image further:

August: Okay, we got the British people (laughs)…What I was thinking… maybe just a bunch of diverse people sitting at a table and eating a bunch of foods, breaking bread together. [...] because the standing one is looking like they’re ending war or something.
Through the iterative process of generating new images, participants could add specificity to their metaphors and extract additional meaning from what they created, as demonstrated by Frankie when she was reviewing an image depicting a group of people on a rollercoaster ride.

Frankie: These people are shouting, and they are trying to express their energy and their feelings now that they are in the process of immigration. You need to expand your energy to find your balance and fix it.
In line with previous research (Oppenlaender, 2022), the creative process was more productive and satisfactory when participants remained in control of the process and were willing to go with the flow and embrace unexpected and emergent directions.
AI as an Influencer
The introduction of incorrect or unsolicited representations brought new content into the interview interaction. Sometimes this new content created opportunities to explore new meaning in the co-creation process and productively added layers of meaning. For example, Dara proposed an image of a woman running in a field of flowers towards a tree near a stream. Although the AI incorrectly depicted the woman in the stream itself, an unexpected detail, this shift in the character’s position allowed Dara to extract helpful new meanings, as illustrated below.

Dara: Is she in the river?

Researcher: She is walking in the river. Do you want us to change it?

Dara: No, I think it makes sense. I found it beautiful [...], and I think it even makes a lot more sense that I’m in the stream because it’s something that’s flowing. And I'm walking in something that's flowing, you know? There’s the sun, that’s warm, and I’m walking through something that’s also cool. So, this reinforces the experience, the spirituality in it. [...] This really moved me, walking in the stream.
At the same time, these introductions may have compromised the authenticity of the data, highlighting the importance of being reflexive about the authorship of the interview data (Cairns-Lee et al., 2021). There were cases in which beautiful images were generated, and it is possible that participants, excited by the creative potential, unconsciously complied with assumptions or presuppositions introduced by the AI image generator.
AI as a Facilitator
We found that AI-image generation facilitated deeper reflection during the interview process by supporting participants in visualizing their ideas and co-inspiring new meanings. Loran expressed this support:

Loran: It’s something very cool because it feels like we can see, sort of materialize, everything we think, you know? Because it’s something in your head, but you feel like you’ve never seen it, you’ve never looked at it, you know?
The generation of the image was very meaningful to Loran. She felt validated and even emotional.

Loran: That’s really it! I swear to you, because I even received a little pendant for my graduation, […] It’s a little pendant that looks like that tree, that tree of life, you know, like the one that represents family and everything. […] And this drawing is just like that, you know? I got it from my college friends because they said this represents a lot of who I am […] I even got emotional now. [Loran starts crying with a smile on her face]…And I think this picture, I’m going to paint it, I’m going to print it to keep in my room… It was surprising; it was incredible.
At the same time, the process both represented what participants already knew and invited them to reflect more deeply to articulate their experience visually. For example, Elizabeth noticed elements in her generated image that reminded her of experiences she had not discussed previously.

Elizabeth: One thing I find really interesting is seeing the Canadian flag, or like the hockey players are, also on the other side of the image, which kind of reminds of me stepping into the Ghanaian culture as a Canadian-born citizen.
When coupled with Imaginative Metaphor Elicitation (IME), the AI-image generation enabled participants to convey meaning even when the experiences were challenging to articulate, empowering participants. Working with participants to co-create artistic images assisted by AI supports a more meaningful coproduction of knowledge, shifting the power imbalance often found in traditional social science research (Anyan, 2013; Hoffman, 2007; Nardon & Hari, 2021).
Discussion and Conclusion
Our goal in this article was to explore the potential role of AI image generation as a complement to reflective interviewing. We found that AI image generation has promising opportunities for creative expression in qualitative research but also challenges. On the one hand, the AI image generation tool provided participants with an accessible way of producing artwork (Vimpari et al., 2023) and supported the sensemaking process by generating images that invited conversation and prompted deeper reflection. On the other hand, AI brought its biases and stereotypes into the conversation and introduced new content into the interview interaction. These insertions sometimes provided an opportunity for reflection and meaning making but, at other times, may have introduced unwanted biases in the interview process. These findings highlight the multiple roles played by the AI image generation tool in the interview process, emphasizing the need for a flexible but reflexive iterative approach when engaging with AI, care in transforming the experience into metaphors before engaging in art generation, and the importance of observing ethical principles, as discussed below.
As discussed in this paper, the AI image generation tool acts as a third agent in the interview process with the potential to play different roles. Some roles support participants and the interview interaction (helper, motivator, facilitator), while others are disruptive and potentially counterproductive (distractor and influencer). By conceptualizing the role of AI image generation as a third agent in the interview process, we illustrate how the AI image generation process added new layers of content and meaning to the dialogue. This aligns with Yang and collaborators’ (2022) conceptualization of AI’s role in storytelling, where it creates unexpected plots and brings new elements to the co-creative process. While the addition of new content may foster deeper or new reflections, enhancing participants’ sensemaking, unsatisfactory results may lead to a fixation on the tool and away from the creative process. Thus, it is critical that we, as researchers, observe the role AI is playing in the interaction and intervene when needed, highlighting the importance of reflexive researcher engagement (England, 1994). Remaining reflexive while working with the AI image generation tool is essential to steer the AI away from a distracting role and towards a motivating or facilitating one. Overall, we agree with Chubb and colleagues’ (2022) argument that AI should be an enabler of new methods, and its primary role is to assist, not replace, human creativity. Effective collaboration with the AI agent can enhance qualitative inquiry when the AI and human agents’ power and autonomy are balanced (Jiang et al., 2023) and supported by critical reflection (Glinka, 2022).
We urge researchers to carefully consider the nature of the image being generated. We found it essential to engage in metaphorical thinking before the image-generation process. Using metaphors enables participants to communicate their experiences through concrete descriptions that enhance their sense of empowerment and self-awareness (Nardon & Hari, 2021). In addition, metaphors allow participants to discuss sensitive issues safely and less threateningly by engaging in what Hunt calls a “subject-object move” (2009, p. 14), where the discussion is centered on a character or object external to the participant. Metaphors become increasingly vital for the AI image generation process, as metaphorical images (e.g., rollercoaster, rosebush) are less vulnerable to offensive biases than images that directly depict the experience (e.g., the precarious conditions of an immigrant). However, when developing these metaphors with participants, we found that participants with a fixed vision of a desired image had a more challenging creative experience when facing AI limitations or biases than those with a more exploratory attitude. Our study corroborates Yang and colleagues’ (2022) arguments that participants’ expectations and attitudes may influence their creative experience.
As with any reflective interview process, it is important to follow ethical principles in the research design, including allowing participants time to think, developing a relationship of trust, inviting reflection, and supporting the identification of solutions when participants are focused on problems (Nardon et al., 2021). We suggest that using AI image generation to support reflective interviewing requires additional ethical considerations. First, researchers need to be reflexive about the increased role of the researcher and the AI in the co-creation process to ensure that the participant remains the main driver of the creative process, paying particular attention to unwanted biases introduced by the AI tool (Viberg et al., 2024). Informing participants about the AI’s capabilities, how images are created, and the potential for bias (Bianchi et al., 2022), and inviting them to explore, modify, and reject images, may help alleviate undue AI dominance. Second, researchers should adopt an ethics of care approach, including being sensitive to participants’ experiences and navigating unintended content introduced by the AI (Ganguli et al., 2022; Yang et al., 2022). It is critical to ensure that participants feel safe expressing their thoughts and emotions and in control of the creative process. An ethics of care also implies cultural sensitivity and awareness of how AI represents cultural elements in ways that may perpetuate cultural stereotypes or biases (Tomasev et al., 2025).
In this paper, we explored the role of AI image generation in interviews to support reflection and sensemaking. AI offers many benefits for qualitative researchers, including flexibility and accessibility. As with any qualitative research interview, it is critical that researchers remain reflexive and attentive to the needs of research participants and to the ways in which the interview dynamics affect participants and the data being produced. We hope our exploration encourages other researchers to examine how AI can support qualitative studies through collaborative efforts between the participant, the researcher, and the AI tool. Given the growing prominence of AI tools and their rapid evolution, we call for a deeper and continuing dialogue about processes that help us benefit from new technological advancements while ensuring that research participants remain active agents in the sensemaking process.
Acknowledgments
We are grateful for the support of the following research assistants through different elements of this process: Mariana Ramos, Henrique Quintiliano, Willian Almeida, Elinam Havor-Nutogo. We are grateful to Hannah Johnston for artificial intelligence advice and to Ali Arya for helpful comments in earlier drafts of this article. Above all, we are grateful to all participants who shared their stories with us.
Statements and Declarations
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: We are grateful for the financial support provided by the Carleton International Research Seed Grant (IRSG), The Office of the Vice-President (Research and International), Carleton University; The Centre for Research on Inclusion at Work, Sprott School of Business; The Sprott Undergraduate Summer Research Experience; Global Research Internships, Mitacs; Universidade Federal do Paraná (UFPR), Pró-Reitoria de Extensão e Cultura (PROEC); Fundação Araucária de Apoio ao Desenvolvimento Científico e Tecnológico do Paraná (FA) (Araucária Foundation to Support the Scientific and Technological Development of Paraná); and Secretaria da Ciência, Tecnologia e Ensino Superior do Paraná (SETI) (Secretariat of Science, Technology and Higher Education of Paraná).
Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
