Abstract
As synthetic visuals produced by Generative Artificial Intelligence (GenAI) proliferate across media and research contexts, they raise urgent epistemological, ethical, and methodological questions for ethnography. This commentary extends debates on GenAI's role in communication and knowledge production by focusing on ethnographic concerns of representation, reflexivity, and visual ethics. GenAI imagery, often photorealistic but nonindexical, unsettles long-standing conventions of co-presence, negotiated authorship, and situated seeing. This commentary proposes the 3C Framework of Contextuality, Consent, and Criticality, intended to guide future engagements with GenAI synthetic visual media in ethnographic research. Grounded in foundational ethnographic values, the framework is heuristic rather than prescriptive, offering a starting point for ethical experimentation and positional reflection.
Keywords
GenAI imagery and cultural research
From handheld photography to virtual reality (VR) fieldwork, research has always evolved alongside visual technologies. Today, GenAI is one such technology claiming a significant presence, shaping cultural terrains in ways that demand fresh modes of engagement. In this commentary, I use “synthetic” to describe visual media produced through GenAI-enabled computational synthesis rather than direct capture of a physical referent, making them nonindexical in the semiotic sense. Unlike photographs or videos, GenAI synthetic images’ pixels are algorithmically generated, even when they appear photorealistic. Trained on massive corpora of existing human-produced data, GenAI generates one-of-a-kind synthetic outputs by extracting patterns and associations across language, visuals, and other modalities. The development of multimodal models like OpenAI’s Contrastive Language-Image Pre-Training (CLIP) further boosted this generative power by enabling AI to integrate visual and linguistic representations in a shared embedding space, an approach described as supporting a more holistic, if not fully “human-like,” understanding of visual content (Radford et al., 2021). As GenAI systems are used to operationalize interpretive labor at scale, they increasingly automate tasks such as visual data annotation and classification (Karjus, 2025).
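The shared-embedding idea behind models like CLIP can be illustrated with a toy sketch: images and texts are mapped into the same vector space, and cosine similarity scores how well a caption matches an image. The vectors below are hand-made stand-ins for illustration only, not outputs of any real model, which would learn such embeddings through contrastive pre-training on image-text pairs.

```python
import numpy as np

def normalize(v):
    """Project a vector onto the unit sphere so dot products equal cosine similarity."""
    return v / np.linalg.norm(v)

# Hypothetical embeddings: an "image" vector and candidate captions,
# all living in the same 4-dimensional shared space.
image_embedding = normalize(np.array([0.9, 0.1, 0.0, 0.2]))
caption_embeddings = {
    "a village market scene": normalize(np.array([0.8, 0.2, 0.1, 0.1])),
    "a mountain landscape":   normalize(np.array([0.0, 0.9, 0.3, 0.1])),
    "a crowded city street":  normalize(np.array([0.1, 0.0, 0.9, 0.4])),
}

# Cosine similarity is the dot product of unit vectors; the highest-scoring
# caption is the model's "reading" of the image in the shared space.
scores = {text: float(image_embedding @ vec) for text, vec in caption_embeddings.items()}
best_caption = max(scores, key=scores.get)
print(best_caption)  # the caption geometrically closest to the image vector
```

The point of the sketch is only geometric: once both modalities occupy one space, "understanding" reduces to proximity, which is also why such systems inherit whatever associations their training data encodes.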
Ethnographic research, rooted in immersive fieldwork and sustained observation, aims to understand living cultural practices from the perspective of the natives (Malinowski, 1922/2014). Geertz (1973) described it as an interpretive practice that reveals cultural meanings through “thick description.” Drawing on Hall’s (1997) theorisation of representation, situating AI-generated imagery within ethnographic research is crucial because these visuals operate as cultural representations embedded in systems of shared meaning. The visuals do not merely illustrate culture but actively participate in its production through prompts, datasets, and social imaginaries. Ethnography, with its emphasis on first-hand experience and relational interpretation, offers conceptual tools for reflecting on how synthetic visuals move through cultural contexts. At a time when data is increasingly generated rather than collected, ethnographic sensibilities become vital for unpacking the epistemological and ethical shifts under way.
Recent scholarship reflects growing concern and curiosity over GenAI’s role in knowledge production. Helm et al. (2024) frame synthetic data as a disruptive force that reshapes what counts as legitimate knowledge while obscuring the ethical and political dimensions of data generation. De Seta et al. (2024) offer a methodological response by illustrating what they call “synthetic ethnography,” proposing field devices like creative participation and speculative modeling to engage with generative systems. In this commentary, GenAI imagery in ethnography refers to synthetic visuals produced by researchers or participants that can serve various roles depending on the research objectives, such as visualizing field experience or facilitating participation, rather than ethnographic studies of AI systems themselves. While cross-disciplinary discussions on AI ethics and visual production have been growing, there is still limited guidance on integrating GenAI imagery in ethnography-specific contexts. This commentary addresses this gap by focusing on GenAI visuals and proposing a framework to support reflexivity, positionality, and ethical decision-making in ethnographic practice.
Situating GenAI imagery in ethnography: context, consent, and criticality
To understand the ethical and epistemological stakes of GenAI imagery in ethnography, it is important to recognize the visual traditions and methodological shifts that precede it. Visual ethnography has long emphasized co-presence, indexicality, and situated vision, with the camera once regarded as an extension of the ethnographer’s witnessing authority (Pink, 2007). Early work privileged observational realism, as in Bateson and Mead's (1942) Balinese Character, where photography was used as a documentary tool to analyze cultural behaviors. Co-creating ethnofiction films with their subjects, Rouch and Morin (1961) demonstrated how staging and improvisation could reveal cultural truths beyond pure observational documentation. MacDougall (1998) similarly advanced a reflective, poetic visual style that foregrounds subjectivity and ethical engagement. These foundational works already gestured toward collaborative, speculative, and performative visual practices, qualities now resurfacing in conversations around synthetic imagery. GenAI imagery could arguably be a type of ethnofiction, a speculative relation-making device, even as it complicates the ethnographer’s authorial position by dispersing curatorial agency across opaque algorithmic processes. This partial surrender of visual authorship raises new questions about how we engage with GenAI imagery in ethnographic practices.
In addition to visual ethnography, digital ethnography has pushed the boundaries of what counts as “being there” in fieldwork. As researchers followed people into online forums and virtual spaces, they challenged the assumption that fieldwork must be confined to physical sites. These interventions showed that presence and reality can be negotiated within digital mediation. Online spaces, now central to how people build relationships and form identities, can no longer be overlooked by ethnographers (Hallett and Barber, 2014). Projects like The Asthma Files (Fortun et al., 2014) model participatory infrastructures of knowledge-making, emphasizing evolving modes of analysis. Digital ethnography is thus not only about studying the digital, but also about experimenting with how ethnographic knowledge is created and circulated (Pink et al., 2016). This broader understanding of fieldwork as distributed, mediated, and co-constructed provides groundwork for thinking about how GenAI imagery might be situated within ethnographic practices.
Within this lineage, photorealistic and nonindexical GenAI imagery appears both continuous and disruptive. Such images function as simulacra in the Baudrillardian sense: they refer not to reality but to other signs and statistical patterns (Baudrillard, 1994). While GenAI visuals are often criticized for being “not real,” we should keep in mind that all images are mediated, curated, and sometimes “fictional,” including ethnographic ones. Traditionally, the ethnographer holds the power to frame and contextualize visual representations, making reflexivity and positionality essential to the ethics of visual anthropology. What GenAI introduces may be a new kind of anxiety over agency: part of that curatorial power is now outsourced to algorithmic inference.
Ethnography is relational and dialogic, grounded in co-presence and the co-construction of meaning. GenAI systems, by contrast, are statistical rather than interpretive: they generate based on probability distributions, not cultural understanding. As Natale et al. (2025: 3–4) state, “Western-centric technical culture tends to be universalized, while other perspectives are the subject of ‘othering’ and stereotyping” and GenAI “should not be thought of as universal, but instead imagined and implemented as diverse and culturally inclusive.” Such dynamics challenge the ethnographer’s role as an interpreter of lived realities and risk eroding the space for ambiguity, contradiction, and co-presence that ethnographic work values. If fieldnotes become prompts and encounters are simulated, ethnographic complexity may be reduced to universalized representations. The goal is not to romanticize traditional methods but to ask what is lost when interpretation is deferred to a predictive model.
Positioning GenAI imagery within visual and digital ethnographic practices shows that it is neither entirely alien nor entirely familiar. It inherits a legacy of mediated representation but disrupts long-held assumptions about agency, presence, and meaning-making. While its participatory potential may echo ethnographic ideals, its automated and opaque processes demand new ethical scrutiny. Many participatory claims in AI design risk becoming a form of “participation-washing,” where inclusion is superficial, performative, and ultimately reinforces existing power asymmetries rather than challenging them (Sloane, 2024; Sloane et al., 2022). This synthetic and obscured form of participation makes GenAI image-making a precarious tool for ethnographic research.
The resources for confronting this precarity lie in three interconnected pillars: context, consent, and criticality (3C). Ethnography treats meaning as situated, framing culture as context that emerges from lived practices and shared interpretations. As Geertz put it: “Believing, with Max Weber, that man is an animal suspended in webs of significance he himself has spun, I take culture to be those webs, and the analysis of it to be therefore not an experimental science in search of law but an interpretive one in search of meaning” (Geertz, 1973: 311).
These webs are co-constructed through sustained engagement with participants, where informed consent is an ongoing negotiation that acknowledges the asymmetries of power inherent in research relationships. As Madison (2005: 9) notes, “critical ethnography is always a meeting of multiple sides in an encounter with and among the Other(s), one in which there is negotiation and dialogue toward substantial and viable meanings that make a difference in the Other’s world.” To engage with otherness ethically, reflexivity serves as “an essential process for informing developing and shaping positionality” (Holmes, 2020: 2), and positionality shapes how researchers navigate the politics of representation, a task central to criticality. Denzin frames critical qualitative inquiry as an effort “to understand how power and ideology operate through and across systems of discourse, cultural commodities, and cultural texts” (2017: 12), and urges scholars “to come together … to experiment with traditional and new methodologies, with new technologies of representation” (2017: 15). This orientation mirrors precedents in visual ethnographic practices, such as ethnofiction (e.g., Jean Rouch’s films) or arts-based engagement ethnography (Goopy and Kassan, 2019), where artistic representations are explicitly paired with interpretative dialogue to open up, rather than obscure, cultural meaning. Bringing the 3C principles to bear on the use of GenAI imagery is not only consistent with ethnographic commitments but also essential for interrogating and balancing the new politics of representation, since “the whole endeavor of collecting images, categorizing them, and labeling them is itself a form of politics” (Crawford and Paglen, 2021: 1113).
Uses and risks of GenAI imagery
Similar to visual creation in art-based methods, GenAI imagery can play diverse roles at multiple stages of ethnographic research. It may act as a cognitive partner during ideation by helping visualize abstract concepts, probe local cultural imaginaries, or test representational boundaries. In fieldwork, it can be a collaborative device, enabling participants to render the “invisible,” such as memories, dreams, and futures, into shareable visuals. Post-fieldwork, it may serve as a representational tool to evoke atmospheres, construct composite scenes, or convey patterns that extend past individual moments. Beyond these uses, GenAI has also been discussed as a means to automate or optimize stages of the research process, though such efficiency should not come at the expense of the “key formative elements in becoming—and being—a researcher or scientist” (Sætra, 2025: 2).
The nonindexical and synthetic nature of GenAI images can, in some cases, become an ethical strength, though this potential must always be weighed against the wider ethical and political implications of the generative systems from which such images emerge. Creating virtual identities, for example, offers an alternative to traditional anonymisation that protects participant privacy while still enabling visual expression (Kamelski and Olivos, 2025). In such cases, the absence of indexicality is desired rather than dismissed. Precisely because these images are synthesized, they may open up ethical ways of representing marginalized or vulnerable individuals, offering creative fictions that gesture toward lived realities. For example, works such as Felipe Rivas San Martín’s A Non-Existent Queer Archive (2022) and Aik Beng Chia’s Return to Bugis Street (2024) series both use GenAI to create queer counter-archives of absent histories and erased cultural memory. Such practices open empathetic windows into pasts that were never visually recorded; in Chia’s case, they also protect community members from being re-identified or retraumatized through direct documentation.
These possibilities, however, are accompanied by significant risks. While GenAI offers convenience, similar outcomes could often be achieved through existing qualitative or visual methods without the massive energy consumption tied to GenAI data centers. The infrastructures sustaining generative models depend on intensive resource extraction that “could soon consume as much energy as entire nations,” raising urgent environmental and ethical concerns (Crawford, 2024: 693). Such conditions require us to ask not only what GenAI makes possible, but also at what environmental, cultural, and epistemic cost: whose esthetics are being represented, whose cultures are being depicted, and who holds the right to narrate these stories? Moreover, commercial models, trained mostly on Western cultural and esthetic norms, risk reinforcing stereotypes and marginalizing non-Western perspectives. Using GenAI in ethnography does not release researchers from reflexive responsibility; if anything, it demands more contextual understanding of both research subjects and how GenAI operates. As Messeri and Crockett remind us, “increasing productivity does not guarantee an improved understanding of the world” (2024: 55). The visual abundance that GenAI enables must be approached through critical interpretation and guided by ethical frameworks.
A framework for future engagement: the 3C framework
Because of its nonindexical, synthetic nature and its versatility in ethnographic research, GenAI imagery requires proactive experimentation alongside critical engagement with its epistemological and ethical challenges. The value of a guiding framework, then, is not to impose rigid protocols in anticipation of a predetermined future, but to support reflexive, context-sensitive, and ethically grounded decision-making. Technologies “can be assessed only in their relations to the sites of their production and use” (Suchman et al., 2017: 404), and GenAI is no exception, as its meanings and implications depend on the practices that take shape around it. Rather than offering definitive answers, this framework invites self-questioning and dialogue, encouraging creative engagement grounded in the values of relational and situated ethnography (Figure 1). While it cannot ensure ethical outcomes in all contexts, it offers a shared vocabulary for navigating this shifting landscape with care and intention.

Figure 1. 3C model (contextuality–consent–criticality) framework for the ethical integration of GenAI imagery in ethnographic research.
Contextuality
Contextuality anchors GenAI images in the cultural and relational dynamics that shape their meaning. As visual methods are relational practices developed through the social dynamics of fieldwork (Pink, 2007), GenAI images should be examined for how they engage with local visual practices and ethical sensibilities. Contextuality urges ethnographers to embed GenAI usage within the specific cultural, social, and technological contexts of their research, reflecting not only on what the image shows but also on how and why it was produced in the first place, including prompt assumptions, dataset scope, and audience. Another consideration is how audiences will interpret and respond. When visuals straddle the boundary between ethnographic record and artistic or fictional creation, they may evoke different emotions than direct encounters with participants’ lives. Rather than avoiding this tension, researchers can learn from ethnofiction precedents to frame such works and disclose their conditions of creation, purposes, and limits. This safeguards privacy and enhances interpretation by inviting critical engagement.
Consent
Consent reframes participation as an ongoing negotiation that recognizes power asymmetries and the representational stakes of synthetic media. GenAI images are not standalone artifacts but relational actors within knowledge production. Building on contextuality, consent must go beyond legal compliance to also consider how participants wish to be represented and whether they are offered real choices for co-creation or refusal. Consent also includes transparency: participants should know when GenAI is used, how images are generated, and for what purposes. In some cases, synthetic imagery can enable consent more ethically, such as through anonymised avatars or fictional composites that protect identity while still enabling expression. But these choices must be carefully designed, and researchers need to stay attentive to the new representational politics brought about by algorithmic synthesis. This involves examining how representational politics are shaped not only by image content but also by the political economy of GenAI systems: their ownership structures, data governance, and whether prompts or generated images are fed back into corporate models. These factors influence whose esthetics, narratives, and biases are embedded in the technology, and thus how participants’ identities and experiences are mediated.
Criticality
Criticality asks researchers to actively interrogate how technology shapes meaning, visibility, and authority in their work. To engage GenAI critically is to remain attuned to the layers of power and politics embedded in the technology. These systems are trained on massive datasets that reflect dominant cultural values. While platforms like Midjourney have made significant improvements in accuracy and inclusivity, such progress remains uneven, and overcorrections can introduce new forms of bias. Without ongoing critical engagement, synthetic visuals may contribute to what Messeri and Crockett describe as “scientific monocultures” (2024: 49), where one mode of seeing and knowing becomes dominant at the expense of diversity and reflexive insight. Criticality requires researchers to treat synthetic imagery not as empirical evidence but as a site of interpretation, something to be questioned, contextualized, and reflexively situated.
Conclusion
With its nonindexicality and strong performance in photorealism, GenAI carries risks such as bias, representational harm, and the erosion of reflexive understanding, yet it also opens new possibilities for ethnography. When approached critically by researchers, GenAI images can serve as catalysts for participant storytelling, as enablers of counter-narratives, and as speculative devices for (re)imagining the past and the future. The 3C framework offers a way to harness this potential by embedding context, consent, and criticality into every stage of engagement. It can guide project design, inform participatory workshops, and serve as a practical, actionable ethical anchor in communication with review boards. As GenAI imagery reshapes visual production and circulation within existing and entrenched media power structures, our task is not to defend “the real” against “the fake,” but to cultivate relationships, responsibilities, and representations that matter.
Acknowledgement
I would like to thank Professor Vineeta Sinha, whose wisdom and insights inspired the early draft of this commentary. I am also deeply grateful to the editor and anonymous reviewers for their thoughtful feedback and sustained support, which have been invaluable in shaping this article into its present form.
Funding
The author received no additional financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data availability
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
