Abstract
This article explores how AI-generated content reconfigures struggles over authenticity in witness media. Media witnessing is traditionally understood as a relational field of practice involving the performance through media of testimony to oppression and violence, where the testimony must both be genuine and carry democratic weight. As AI-generated content circulates in and around global conflicts, concern deepens about misinformation, given AIGC’s perceived lack of an “indexical” relationship to reality. This article acknowledges but goes beyond the misinformation frame, instead focusing on two recent cases to show how multimodal AI may represent human experiences of conflict in nuanced and ambivalent ways, unsettling contemporary assumptions about media witnessing and the mechanisms through which it happens. We present a comparative analysis of two controversies around AI-generated content that foreground authenticity debates, informed by STS, witnessing and journalism studies research. These controversies demonstrate that, by subordinating indexicality to iconic or analogical representation, AI-generated content resists established witnessing norms, yielding questions about media functions, ethics, and power relations.
Introduction
In October 2024, Hurricane Helene struck the USA’s southeastern states, causing widespread devastation. As Americans began to interpret the disaster’s implications, images of a drenched, lifejacket-clad toddler clutching a puppy—sobbing with fear, alone in a kayak amid floodwaters—circulated on social media. Media commentators quickly pointed out that these images were AI-generated, and therefore fake (Growcoot, 2024). Meanwhile, conservatives shared and politicised them as a metaphor for the disaster’s human cost, framed as evidence not of climate change, but that the Biden-Harris administration had abandoned Americans to nature (Klee, 2024). Several dismissed objections that the toddler was a visual fabrication and not “real,” arguing instead that she testified to something greater: genuine human suffering. This brief example showcases both the uncertain relationship between AI-generated media, truth and reality, and how such uncertainty can be explored, exploited and contested through discourses of authenticity. In this article, we understand this and similar cases as controversies that reveal and advance longstanding struggles over authenticity in ways that are inflected by the uncertainty and contestation surrounding AI-generated content (AIGC).
In an era where AIGC circulates freely alongside indexical content, these controversies highlight how societies negotiate GenAI’s impact on information and media environments by rearticulating existing debates about authenticity. Whether created and shared in good or bad faith, multimodal AIGC intensifies the contemporary digital chaos of “information disorder” (Wardle and Derakhshan, 2017), as AI-generated text, images, video and audio converge with bots and digital platforms to “blur the line between fact and fiction” (Park and Nan, 2026: 1502; Shoaib et al., 2023). In bad faith, “deepfake” audiovisual fabrications can be used for misinformation and manipulation: political actors can circulate “slopaganda” to denigrate political opponents (Bond and Joffe-Block, 2025); deepfakes can constitute “false witnessing” (Gregory, 2022) by anonymous social media actors, as seen in AI-generated media content celebrating (and heroising) the callous deportation practices of US Immigration and Customs Enforcement (ICE; Koebler, 2025). As WITNESS co-founder Gregory (2022) stresses, the mere existence of AI manipulation casts doubt on photojournalistic and artistic efforts to witness conflicts and atrocities by undermining belief in and verification of their authenticity. Indeed, contemporary academic and journalistic discourses about misinformation position synthetic content as necessarily inauthentic in terms of provenance, and uniformly problematic (Hicks et al., 2024). Meanwhile, the case studies we discuss below at least suggest the possibility that AIGC can be deployed meaningfully in media witnessing contexts. Nonetheless, these apparently good-faith cases have sparked significant controversies over authenticity; it is these controversies that are the focus of our paper.
In what follows, we aim to explore how controversies over AIGC (re)configure debates over authenticity in the context of witness media. Witnessing means testifying to human crises and their impact, to evoke responsibility typically from those physically removed (Frosh and Pinchevski, 2009; Peters, 2001). It diverges from simply reporting or documenting events “by making the moral claim to care,” a practice that transcends professional journalism’s claims to eye-witness (Chouliaraki, 2010: 305). Media witnessing is a relational field of practice where testimony to suffering and violence is “performed in, by, and through the media,” involving not only care but trust—for example in authenticity (Ashuri and Pinchevski, 2009; Frosh and Pinchevski, 2009: 1). Witness media are technologies used to mediate witnessing, problematising whose authenticity is mediated, and how. Authenticity is a fundamental component of witnessing practices that, through social deliberation, determines the legitimacy of actors, practices and testimonies about real events. However, judgements about authenticity are not binary decisions that can be reduced to the presence or absence of synthetic media content; nor can they be made in abstract or general terms. Informed by science and technology studies (STS), witnessing scholarship, and media studies, we instead ask what authenticity does—that is, how it is deployed by actors in a controversy to claim or deny the authority of a text, image or account. We examine these debates through recent controversies over AI-generated witnessing content. While we recognise that AIGC is multimodal and that singular images are not representative of GenAI models as such, our focus is rather on how AI-generated images circulate and are discussed online.
In other words, our interest is not in the particularities of visual generators (Jiang and Li, 2025), but in the way that authenticity controversies articulate configurations of contested and competing sociotechnical relations (Suchman, 2012, 2023), including media and platform practices, regulatory efforts, companies, bodies, and discourses.
Controversy is a useful method for examining debates around GenAI because controversies both interrupt and demonstrate emerging phenomena, while advancing deliberations around the issues they raise (Callon et al., 2009). By controversies we mean public instances of “debates that relate to science and technology,” in this case GenAI (Venturini and Munk, 2021: 2). Such debates concern both the anticipated dangers of technologies, and the process through which “credible and legitimate characterisation” of such danger is determined, including the extent to which issues are technical or social and who may debate them (Callon et al., 2009: 18–25). GenAI controversies in witness media are significant for authenticity debates overall: here, authenticity gets negotiated through the prism of technologies, yet witnessing practices require not only content veracity but believability and recognition. Instead of following accounts of AIGC as “partial authenticity” (Jiang and Li, 2025), we contend that AIGC controversies reveal what Haraway (2013/1991) termed “situated knowledges” regarding authenticity—culturally-specific, incomplete, and positional frameworks for understanding the world. Haraway’s concept is useful here because it reveals the frames of reference that correspond to knowledge systems, and questions how these frameworks imply real-world accountability (Andrews, 2011). Following Haraway, we consider that AIGC controversies articulate situated understandings of authenticity, and thus also inauthenticity, with unresolved implications for acceptable media practices, authorities, and responsibilities.
We therefore examine two controversial media events that enact and exemplify the sociotechnical reconfiguration of authenticity accompanying AIGC. We do not predetermine these cases as witnessing a priori, as this is the basis of controversy; instead, we explore how these controversies demonstrate the politics of witnessing and reconfigure authenticity in nuanced ways. To start, we survey existing research into how authenticity has evolved alongside technological change in witness media. Next, we discuss and compare two controversies, analysing what authenticity is and does for whom, and how. Firstly, we examine the “post-photography” essay 90 Miles, in which photojournalist Michael Christopher Brown (Brown and Blockbird, 2023) used GenAI to represent Cuban refugees’ passage to the US; this represents a controversy about mediation methods and ethics particular to the witnessing field. Secondly, we analyse the All Eyes On Rafah protest image that circulated on Instagram Stories in late May 2024 amid Israel’s ongoing genocide in Gaza, which represents a controversy about witnessing modes and functions across multiple fields. In the first, confidentiality and iconicity are contrasted against evidence as witnessing priorities; the second case reveals how indexicality and affectivity enact tensions between different witnessing communities. These controversies show that reconfigurations of authenticity are not neutral but serve specific communities and interests, bearing critical implications for information politics and economies worldwide.
Witness media and the evolution of authenticity
Authenticity is generally understood as a degree of coherence between appearance and reality or behaviour and nature (Trilling, 1972). As a contemporary discourse on media, authenticity is often framed technically, in terms of authentication and evidence (Burton et al., 2023): whether objects bear an “indexical” relationship or “physical trace” to real events, places or people (Messaris, 1997: 12). Yet authenticity is not an intrinsic property of people or things but a spectrum of normative value, a judgement produced through the social and institutional rules of specific cultures (Umbach and Humphrey, 2018). For example, Serazio (2023) situates authenticity-as-provenance within museums’ historical practices for verifying items’ originality: a judgement essential to the institutional logic of the museum, but one that has evolved in response to changing sociotechnical norms. Authenticity is also something seen, felt, or embodied, for example as a sensory aesthetic tied to evolving platform affordances in contemporary creator and influencer cultures or, historically, the moral “fidelity” of acting in line with one’s beliefs (Banet-Weiser, 2012; Serazio, 2023: 5–7; Taylor, 1991). These competing poles of physical and affective integrity are described by Shifman (2018) as external and internal authenticity. In this vein, Burton et al.’s (2023: 11–12) account of “algorithmic authenticity” describes it as a “methodological” concept enacted and determined through “pattern-matching, performativity, authentication, and political subjecthood.” Such accounts highlight the entwined roles of social and technical actors, interlocutors and audiences in performing or recognising, and so producing, authenticity, which “is always already relational: mediated and shaped in the realm of the other” (Burton et al., 2023: 15). Authenticity is therefore not static, but an active dimension of social and technical life that has evolved alongside shifts in witness media.
Central to these evolutions is the role of the image, and the technologies, institutions and practices which produce, sustain, and authorise its authenticity. Since at least World War II, the photographic image has dominated the witnessing field as an evidentiary medium that conveys authenticity through “indexicality” (Messaris, 1997). Grounded in the emergence of modern scientific instruments and practices (Peters, 2001: 716) that presume an objective “view from nowhere” (Haraway, 1997), photographic technologies became central to bridging the epistemic gap between claims made by witnesses and audience belief (Felman and Laub, 1992) and the authenticity claims of journalistic witnessing. Over the last two decades, mobile digital technologies and social media platforms diminished journalistic control over witnessing as news organisations became reliant on user-generated content, especially in conflict zones (Bruns, 2018). Digital technologies enable “citizen witnesses” to supply journalists with photos, audiovisual recordings and verbal accounts, or act “as mediators of their own suffering” (Chouliaraki and Al-Ghazzi, 2022: 650). In turn, journalists may now resist citizen witnessing by using verification as a “boundary marker” for authenticity, as Nilsson and Simonsen (2025) found in visual news about the Ukraine war. Digitality therefore recasts authenticity contests by highlighting vertical and horizontal struggles for power and recognition among different mediators—especially those with mortal interest in being seen and heard—and, consequently, moral questions around ambivalence and suspicion (Chouliaraki, 2015; Chouliaraki and Al-Ghazzi, 2022).
Drones, computer vision and other automated technologies also introduce tensions between human subjects, witnessing, and authenticity (Richardson, 2024). In response, media witnessing increasingly incorporates open-source investigations, in which organisations like Bellingcat and Airwars deploy algorithms, metadata, photographic multimedia, public documents, satellite imaging and social media to document and verify human rights abuses (Koettl et al., 2019). New concepts such as “data witnessing” typify these emergent practices, enabling organisations to compile collective witnesses by rendering “actors, relations, events, spatiality, temporality and activity as data” (Gray, 2019: 985), which in turn require new verification standards and practices (Ford and Richardson, 2023). This co-evolution of witnessing practices with emerging technologies challenges the apparent boundaries between technical and social dimensions of authenticity. Where television provoked debates over the active or passive status of audiences as distant witnesses (Frosh and Pinchevski, 2009), digital interfaces offer quasi-public spaces for user expressions of shared emotions, memorialisation and testimony (Papailias, 2016). Finding that news organisations mobilise affect to designate truth and construct “amateur footage as authentic,” Chouliaraki (2015: 1363) argues that authenticity is inherently ambivalent: enacted through journalistic mediators’ selective ascription of authenticity to certain (Western) testimonies and not others (e.g. Arabic combatants) through “affective attunement” that regulates, signifies, and stratifies “each life lost within a continuum of lives-worth-living.” These contemporary evolutions problematise the “authenticity models” engaged by witnesses, technical and social mediators and audiences, and what they do in media witnessing conducted across platforms and technologies.
As our brief genealogy shows, witness media technologies have prompted continual evolutions in media witnessing and authenticity. Enter AIGC, which mediates witnessing in ways that are difficult to account for within established frameworks. Despite being generated probabilistically by machine learning models, AIGC may be experienced and deliberated as media witnessing by news organisations and journalists who contingently remediate imagery; audiences who receive it as (in)authentic; and users who circulate it on digital platforms. In this way, GenAI technologies revive long-standing tensions around authenticity in the witnessing field, especially relationships between evidence, testimony, and false witnessing (Peters, 2001), and the validity of affective practices, for example “clicktivism.” The question is what these shifts mean for the nature and work of authenticity in the context of witness media. In the two case studies we will now compare, we consider how GenAI technologies disturb media witnessing as a field already in transition, both by introducing to existing media structures a new technology that “[transcends] its traditional role as a medium” (Laba, 2024: 1600) and by shaking up authenticity.
Controversy 1: 90 Miles
Our first controversy represents a struggle over the functions, practices and ethics of the witnessing field. In April 2023, acclaimed photojournalist Michael Christopher Brown launched a new project: a “post-photography” visual essay depicting “historical events and realities of Cuban life that have, since [the Castro revolution in] 1961, motivated Cubans to cross the 90 miles of ocean separating Havana from Florida” (Brown and Blockbird, 2023). An experienced, award-winning National Geographic and former Magnum photojournalist whose iPhone photojournalism of the 2011 Libyan Revolution resulted in the acclaimed text Libyan Sugar, Brown described the project as an experiment regarding the role of AIGC in “reportage illustration” and a provocation “for image-based storytellers who care about reality and truth” (Brown and Blockbird, 2023; Terranova, 2023). After generating the images and uploading them to AirLab, a digital platform for “AI reporting,” Brown and his digital curator, Blockbird, described their intention to sell the project images as NFTs (supposedly singular digital tokens whose ownership is controlled with blockchain technology; Trautman, 2022), donating a tenth of profits to Cuban refugees.
90 Miles depicts scenes of poverty and distress: portraits of pensive or depressed individuals sitting languidly or labouring in tobacco factories; street scenes of overcrowding, protests, riots and explosions; military leaders; families in rooms with peeling paint; distressed groups supporting each other in armpit-deep ocean waters or trying to stay afloat on capsized or overcrowded boats; a partially submerged car on a beach, surrounded by refugees and their belongings. PetaPixel (Growcoot, 2023) reported that Brown had “spent years as a documentarian, dedicating and risking everything to capture real stories in the most candid and pure way possible” and keeping a list of documentary subjects to which he could not gain photographic access. Having made documentary projects about Cuba before, Brown told Blind Magazine (Terranova, 2023) that he had “heard incredible stories of those who escaped to the USA” and “explored documenting the story in a variety of ways while on the ground.” Illustrating one way that witness media collides with questions of authenticity, Brown realised that refugees needed “secrecy and trust” to depart safely, so “any documentation” might have threatened their confidentiality; he determined photographic “access was impossible.” Instead, he strove to visualise his subjects with GenAI in ways that circumvented algorithmic bias, including, for example, generations of historical incidents like the 1994 sinking of a tugboat carrying 41 refugees en route to the US (Growcoot, 2023).
Brown had disclosed the AIGC status of his images in both metadata and the final project (Growcoot, 2023; Terranova, 2023). Nevertheless, once shared on Instagram, the images prompted what Blind Magazine described as an “emotional response” in the documentary photography community. One respondent commented, “ . . . using AI to tell a story you weren’t there to document is one thing in itself,” but it was “selling those AI generated images” that incurred a loss of respect. Another stated that “the rationalization of these tools and this project is murky, and I think it’s antithetical to the sincere spirit of the medium” (Terranova, 2023). As reported in PetaPixel (Growcoot, 2023), another called him “unethical” and the project “disturbing.” Some photography researchers concluded the controversy demonstrated how AIGC “cannot replace human photojournalists in certain situations” (Bournousouzi, 2023). ZEKE Magazine (Ayotte, 2024) reported that the project raised questions about the ethics of representing suffering people synthetically when this could diminish believability: “People who are victims of violence are the ones who are most affected—often they need the photo of their situation to be believed, especially if the photo is being used as evidence.”
Yet Blind Magazine also reported another documentarian’s point that “what illustrators do” involves imagination, and interpretive formats can afford important space for ethical witnessing. Illustrators “take information and they condense it, and some is factual, and some is imaginary,” representing “an opportunity” to reflect on events in new ways (Terranova, 2023), as seen in the impressionistic but no less accurate or truthful accounts of war documentary illustrators Joe Sacco or Molly Crabapple. Brown himself confessed to sharing others’ concerns about AIGC but being interested in how the technology may be leveraged in cases with access issues “to tell stories. . . that might generate empathy and awareness of important issues” (Terranova, 2023). Elsewhere, he commented that “people generally don’t want to be voiceless but often desire to be faceless”; ZEKE Magazine pondered how AIGC could support witnessing by offering a means of visualisation in cases where “there is no light” (Ayotte, 2024). In representing community hardships and suffering in Cuba, the 90 Miles controversy closely resembles another where Amnesty International came under fire for using AIGC to illustrate “the grave human rights violations committed during the 2021 [Colombian] National Strike without endangering anyone who was present” (Koebler, 2023). It also provides a telling demonstration of authenticity negotiations in the witnessing field, where photojournalists have historically strived to mediate testimonies in ways that are sensitive to victims’ vulnerability yet with remuneration for their own labour and exposure.
Welcoming debate, Brown felt he was “exploring the possibilities of the AI medium as an artist,” and the series was “storytelling” rather than journalism: “something more similar to say, a film” which is “based on a true story [rather] than actual documentation” (Growcoot, 2023). Criticisms of his work represent documentarians and photojournalists putting indexicality—the representation of real events, places and people—on a pedestal as a principle of witnessing. These criticisms often compared AIGC with photography, such as when ZEKE Magazine claimed that the series transgressed Sontag’s (1977: 92) maxim that “the painter constructs, the photographer discloses.” This discourse of disclosure signifies photojournalistic witnessing in terms of indexical representation and evidence, where something is authentic if its origins and history are traceable and proven. From this view, Brown’s use of AIGC to represent culturally specific bodies made his images counter-indexical, even though he worked “against the algorithm”: the (lack of) diversity of bodies in training data and guardrails directly influences and politicises outputs (Gillespie, 2024), while models’ visual aggregation ultimately flattens and generalises physical specifics (Laba, 2024). Here, AIGC manifests and transmits structural ambivalence about the relationship between images and the real world, undermining the notion that authentic representation correlates neatly to indexicality.
However, Sontag’s appearance in such critiques provides a useful opportunity for analysis, as 90 Miles shows how contemporary frames for identifying and interpreting seemingly “photographic” visual content encounter friction with AIGC. Sontag’s (1977) own “definition” points out that the line between art and (documentary) photography is discursive and political: photography is a medium, and photographers’ practices “disclose” only insofar as they mechanically capture partial, atemporal and relational images of physical objects. Furthermore, photographs remain constructed objects in the sense that they almost always require post-hoc sense-making, and can be ethically “aggressive” in capturing likenesses. Bracketing the question of whether frameworks for photographic analysis and interpretation are appropriate to AIGC as witness media, the discursive, political and constructed nature of photographic practice highlights three further dimensions of the controversy: the acceptable limits of (1) intervention and (2) interpretation in witnessing, the latter by extension problematising (3) the boundary between photojournalism and art.
Firstly, photographic witnessing involves an ethical question around whether to represent crises photographically without any intervention (even in cases where harm is unfolding). Photography may offer indexical authenticity but problematises the ethics of non-intervention, and thus authenticity as moral fidelity, by neglecting or deferring action for those whose vulnerability is being documented (Sontag, 1977). Brown professed that his visualisations arose directly from real events that he could not capture photographically for fear of risking sources’ confidentiality, and indeed, reporting on authoritarian contexts is challenging precisely because witnessing practices may be constrained by contradictory political and national security interests (Kozol, 2014). As such, Brown’s use of AIGC for source protection could be read as moral fidelity between a witness-mediator and his subjects, despite its counter-indexicality.
Secondly, photojournalism and witnessing harbour unresolved debates over the limits of acceptable interpretation in documentary. If photojournalists offer indexical images accompanied by little to no narrative or interpretive context, they may defer interpretation of images to remediating actors like news agencies (Gynnild, 2017). This increases the risk of facts being misrepresented or images being misused, ultimately frustrating the authenticity-as-provenance of witnessing content. News and witnessing organisations may use iconicity and symbolism in visuals of events precisely so that they may have greater potential for political resonance and responsivity (Messaris, 1997; Mortensen et al., 2017). This may involve capturing images that convey human emotions, are personified through representative individuals, or come to be seen as emblematic of events after remediation over time, all of which occur in 90 Miles. Given Brown’s disclosure of AIGC and embrace of storytelling, we can infer that he intended the 90 Miles images to be iconic, rather than indexical, invoking symbolic and behavioural dimensions of authenticity. In turn, the ambivalence of the AIGC in 90 Miles highlights how divergent dimensions of authenticity may operate simultaneously to evidence, signify, and enact in different degrees (e.g. photorealism as aesthetic and medium, Hausken, 2024), though these functions remain normatively contested in the witnessing field. Ultimately, authenticity’s multidimensionality problematises “the sincere spirit of the [photojournalistic] medium” (Terranova, 2023) as an unresolved question.
Finally, 90 Miles was positioned as art rather than photojournalism, exemplified by Brown and Blockbird’s (2023) plan to mint the 90 Miles images as NFTs. As a seasoned photojournalist operating in the capacity of a self-proclaimed artist, Brown thus invoked debates around art (e.g. singular paintings) in relation to photographs, whose provenance may be frustrated by reproduction, reinterpretation and recontextualisation, as Walter Benjamin (1969) famously argued and as visual news agency practices can demonstrate (Gynnild, 2017). Despite the producers’ transparency, detractors implied their financial motive conflicted with the altruistic aims of media witnessing, suggesting moral inauthenticity via insincerity (Trilling, 1972). This debate over whether autonomous human labour in mediated witnessing and photojournalism is due compensation in relation to GenAI aligns with unresolved questions around AI generation in automated journalism (Montal and Reich, 2017).
90 Miles enacts and intensifies ongoing debates about the authenticity of different witness media practices, and their relationship to photojournalism and art as professions. Cumulatively, these debates reflect norms and practices that have sedimented over a century in photojournalistic witnessing only to experience re-contestation following the emergence of GenAI as an ambivalent medium—and thus an emerging modality that disrupts existing norms. We can observe that Brown’s project configures authenticity symbolically if not morally, in the sense that he strove to execute an honest, iconic representation instead of photographic witnessing. However, the dynamics of this configuration raise questions about the intent underpinning witnessing content production; the relationship between subjects and mediating witnesses, including subjects’ consent; the role of interpretation and iconicity in media witnessing; and the relationship between photojournalism and other witnessing practices.
Controversy 2: “All Eyes on Rafah”
Our second controversy is a vertical struggle regarding appropriate forms of mediated witnessing, between diverse sociotechnical actors with varying levels of power who operationalise authenticity differently. It arose around the All Eyes On Rafah image that circulated on Instagram Stories in May 2024 in protest against Israel’s arguably genocidal assault on Gaza, which has claimed at least 60,000 Palestinian lives to date since the Hamas attacks of 7 October 2023 (Khatib et al., 2024; Yussuf et al., 2024). In February 2024, Israel proposed an offensive in Gaza’s southernmost city, Rafah, after having surreptitiously pushed thousands of evacuees southward to this supposed “safe zone.” Anticipating the imminent humanitarian catastrophe, director of the WHO Office for the Occupied Palestinian Territories (OPT) Dr Rick Peeperkorn warned that “all eyes are on Rafah” (Fletcher, 2024). Despite ICJ instructions to desist, Israel attacked western Rafah refugee camps in al-Mawasi and Tal al-Sultan on 26 and 28 May, ostensibly targeting Hamas commanders (Al Jazeera, 2024; Davies and BBC Arabic, 2024; Shamim, 2024). Airstrikes ignited fires throughout the camps, causing hundreds of “predictable” civilian casualties; the attack was immediately decried as a “monstrous atrocity” by senior UN officials, including the High Commissioner for Human Rights, who observed that “there is literally no safe place in Gaza” (UN News, 2024).
Following the attacks, graphic photographs and videos of headless or dismembered bodies and victims burning alive in the camps spread on social media platforms (Al Jazeera, 2024; Berger and Harb, 2024). Despite this ample evidence, one of the most widely-circulated media objects relating to Rafah was an AI-generated protest image. Unlike the 90 Miles images, it was not photorealistic: it depicted, as if from an aerial perspective, a refugee encampment stretching into the distant horizon in a desert valley encircled by (incongruously) snow-capped mountains, featuring in the foreground the slogan “All eyes on Rafah” as block text composed from tent roofs. No Palestinians are represented, nor any specific aspects of the suffering and trauma in Rafah; instead, simply an empty, clean camp under a blue sky. Originally created by a user called Zila AbKa in the pro-AI image generation Facebook group “Prompters Malaya,” the image was then de-watermarked and circulated by user Shahv4012 as an “Add Yours” sticker on Instagram Stories (a personal daily audiovisual blog), allowing other users to share it on their own stories in a memetic style (Koebler, 2024). The slogan had become a mobilising protest call in the weeks since Peeperkorn’s statement, and the generated image came to be shared some 47 million times by users, including prominent celebrities like actress Nicola Coughlan and supermodel Bella Hadid (Davies and BBC Arabic, 2024; Shamim, 2024). As such, it triggered a significant controversy over authenticity: could a synthetic image constitute an authentic engagement with real suffering?
Based on its lack of indexical representation of specific victims or actual conditions in Rafah following the Israeli attacks, numerous commentators felt All Eyes On Rafah was fundamentally inauthentic. In calling for witnesses to Rafah’s suffering while not actually providing them, the image was seen to both obfuscate conditions in Rafah and frustrate Palestinians’ efforts to testify to their oppression. As SBS News (Staszewska, 2024) reported, activists questioned the image’s abstraction of “real and often distressing footage from Gaza.” In their view, it failed “to inform people of what is happening on the ground,” in comparison to “videos or photos by Palestinian journalists, many of whom are risking their lives to document the war.” An Al Jazeera explainer (Shamim, 2024) observed that, especially in the wake of the May attack, “Rafah looks nothing like that: Its skies are grey with smoke from Israeli bombs and there are no orderly rows of tents – many are smouldering after being bombed with their occupants still inside, and debris is scattered between them.”
One of Shamim’s (2024) sources felt that “the image undermines Palestinian testimony and lived experience . . . because Palestinians have for decades asked the world to see them and believe them.” Another piece from CNN (Asmelash, 2024) contemplated whether the image was “slacktivism” like the black memorial squares posted on Instagram following George Floyd’s 2020 murder (see also Horn, 2024). Echoing scholarship on the difficult ambiguity of distinguishing between witnessing and spectatorship (Chouliaraki, 2006), Asmelash quoted a source who described the image’s virality as “performative” because “bearing witness is still a passive act” of spectatorship, and the image prioritised “the audience rather than Rafah, creating a distance between the viewers and the victims.” Specifically, arguments that the All Eyes On Rafah image was too abstracted to be authentic mobilised authenticity in terms of content provenance and indexicality, as well as moral fidelity to the sovereign yet vulnerable eyewitnesses and victims of the violence. On these terms, both the image itself and social media users’ engagement with it were positioned as inauthentic: performative, insincere, and shallow.
All Eyes on Rafah represents an authenticity controversy precisely because it intersects with the politics of media witnessing in sociotechnical contexts. Gaza remains a conflict where aid workers and local journalists have been increasingly targeted and silenced (Committee to Protect Journalists, 2025). As online environments have been polluted with false claims about the conflict (Górka, 2024), Palestinian journalists’ online footage has been treated with suspicion by Anglophone news outlets whose coverage of attitudes to the conflict has framed protests as mere spectacle (Brown, 2024; Matich, 2025). It is certainly true that false AIGC about Gaza has circulated on social media and in stock image banks (Spaggiari et al., 2024), yet so too have Palestinian journalists risked and lost their lives in efforts to report the reality of the conflict (Al Jazeera, 2025). Another of Asmelash’s (2024) sources noted that the image’s sanitised aesthetics and use of the keyword “Rafah,” rather than the more policed terms “Palestine” or “Gaza,” enabled it to both circumvent moderation algorithms and become “more palatable to some viewers than real photos of Gaza, which are graphic and often show blood, dead bodies and violence.” They also queried why so many shared the image, acknowledging that the genocide may evoke feelings of powerlessness in distant audiences; sharing the image may thus serve efforts to raise awareness or to demand that others “don’t look away.”
An important dimension of this controversy is therefore its mediated context. The image circulated on Instagram, a platform environment with algorithmic filters and community policies that moderate via rules and protocols which are themselves highly contested (Cobbe, 2021). As embedded war reporting regulates whose dead may be grieved, and when and how (Butler, 2005), so do platforms moderate violent and traumatic content like that which Palestinians uploaded from Rafah (Chouliaraki, 2015). Similarly, GenAI models have guardrails intended to curb unethical, illegal, or extreme outputs (Akheel, 2025). These limits make models unlikely to reproduce the violent imagery emerging from Rafah, and Instagram unlikely to permit it, regardless of its factuality. However, it does not follow that the image was misinformative. Rather than being machines with intentions, much less with the specific intention “to mislead” that misinformation framing may suggest (e.g. Hicks et al., 2024), GenAI models are participatory systems with users who may “disappear in the system’s figuration as an object,” yet remain integral to its function (Suchman, 2012: 55). We must not, therefore, discount the everyday social media users who remediated the All Eyes On Rafah image 47 million times, nor the citizen witnessing content from Rafah they may have seen before sharing the AIGC. Instead, users may have perceived the image, as a media object, to be authentic to the extent that it expressed their collective outrage at the conflict, which Shifman (2018) considers “internal authenticity”—authenticity as internal, affective integrity, rather than “external” indexical integrity. Social media users therefore appear as tertiary witnesses to the conditions in Rafah who strove to bear witness not through the indexicality of the image, but through its metatextuality and affectivity.
On its own, the image fails to represent Rafah or Palestinians, but its remediation demonstrates actors’ collective expression of belief in Palestinians’ testimonies and suffering where political, media, and sociotechnical structures constrain other forms of attention and political action. It represents the “embodied collectivity” (Pantti, 2013) that Mortensen (2015) argues is central to “connective witnessing.” Specifically, it demonstrates “connective witnessing as the collective of media users turned into media producers” (Mortensen, 2015: 1398), where the image as media event signifies collective expression in practice rather than content-specific visual indexicality to the conditions in Rafah. Rather than indexing specific events or lived experiences in Rafah, the mass sharing of the image might be read as an expression both of a collective need to affirm the necessity of bearing witness and of a collective discomfort with the conditions of possibility for witnessing on platforms such as Instagram.
We may therefore take the metatextuality and media context of this AIGC seriously. Instagram users’ participatory remediation through adding the image to their Stories demonstrates solidarity, paradoxically both authentically enacting audience testimony and participating in the image’s inauthenticity. The slogan’s call for seeing relates ironically to the image’s inert scene, where nothing is happening; we can infer that audiences’ seeing is happening elsewhere on social media, while the unseeing image serves as a symbol expressing belief and calling for recognition of eyewitnesses’ content in general, even as Palestinians’ suffering continues. Though, as detractors argued, the image’s generality demonstrates disempowered groups’ invisibility in generative systems (Gillespie, 2024), emphasising user remediation rather than eyewitnesses’ experience, the media event also includes Instagram users’ rally around a shared message, and thus their “internal authenticity” (Shifman, 2018: 178), which is here informed by rather than constitutive of “connective witnessing” (Mortensen, 2015). All Eyes On Rafah therefore demonstrates users’ “situated knowledge” of authenticity in terms of embodied rather than indexical practice, in contrast to media commentators’ and activists’ emphasis on real-world representation or witnessing in the journalistic strands of the debate. The controversy thus stretches the limits of indexical authenticity as a lens for understanding what matters about All Eyes On Rafah, even as it highlights the way the cumulative circulation of AIGC can enact authentic expressions of shared feeling. In a digital platform context where algorithmic and political frames moderate the flow of any content regarding Rafah, this kind of affective witnessing (Richardson and Schankweiler, 2019) demonstrates the centrality of embodied authenticity in social media practices.
This case therefore demonstrates the discursive, political work of authenticity and the political ambivalence of AIGC as a medium.
To recall our earlier discussion, authenticity may refract into dimensions of provenance, symbolism, behaviour, and morality. The tension in All Eyes On Rafah between (non)provenance, expression, and ethics—to see or to believe—tests the dominance of provenance as the key dimension of authenticity in media witnessing. So too does it challenge the dominance of institutional actors who lay claim to witnessing. Moreover, it demonstrates that provenance represents a dimension of authenticity which may, through discourses of its opposite (Hameleers and Minihold, 2022), be used to police or silence other, more metaphorical witnessing. This configuration thus spans the indexical (in)authenticity of AIGC; the generalised aesthetics and political context that stymie political responsivity; the ambivalence of witnessing across multiple platforms; and social media users’ participatory expression of faith despite distance from the conflict. Here, we do not seek to normatively evaluate the case, but to foreground the embodied and normative dimensions whereby authenticity relates and regulates—and where, for some participants in the media event resulting from the sharing of the image, testimony to shared feeling may have been the end in itself.
Conclusion
Under the dominant view of authenticity as indexicality or provenance, many scholars position AIGC as inauthentic for its irrationality (Stoljar and Zhang, 2024), sycophancy and statistical aggregation (Hicks et al., 2024). Yet as we have seen, authenticity is neither static in nature nor settled in practice. Sontag (1977: 23) once argued that objectivist discourses about authentic media overshadow the role of critique in understanding the world: “Photography implies that we know about the world if we accept it as the camera records it. But this is the opposite of understanding, which starts from not accepting the world as it looks.”
We must therefore problematise judgements of AIGC’s inauthenticity for their human-centrism, whether in instituting human standards to “benchmark” AI outputs or in expecting that “authenticity can only be created by humans,” though humans also engage in memetic behaviour (Jiang and Li, 2025: 3). Problematising the apparent inauthenticity of AIGC does not mean foregrounding a presumption of authenticity that GenAI configurations may either enshrine or corrupt, but rather asking what enactments of authenticity do.
Authenticity controversies such as those concerning 90 Miles and All Eyes On Rafah invite us to take seriously the ironies, motives, perspectives and constraints of those who use GenAI to “witness” conflict. Unlike traditional forms of witnessing, these media events involve iconic—symbolic and analogical—rather than indexical representations. Both media events are enacted by actors striving to testify beyond the constraints of contemporary sociotechnical systems, the former eschewing photography in light of confidentiality issues and the latter more broadly resisting the situated perspectives of both journalistic and algorithmic media systems. Both controversies reconfigure authenticity socially rather than technically, challenging existing witnessing boundaries.
At a field-specific scale, 90 Miles reflects controversy over acceptable witnessing practices and functions, showcasing the complexities of authenticity and witnessing’s unresolved practical and ethical quandaries. Dimensions of authenticity that have to date only quietly vied for dominance among mediators in the witnessing field are laid bare as ethical tensions surface between confidentiality, symbolism and indexical evidence. The crux of this controversy is that Brown had used AIGC to tell a story about a place and people he had not directly captured on camera, and had sought to monetise the result. Though the images are counter-indexical, the controversy centres on the nature and status of witnessing labour in relation to GenAI, including the extent to which said labour may be photojournalistic or interpretive, its economic worth and the extent to which witness-mediators should prioritise indexicality or subject protection. As it pertains to the configuration of media witnessing in practice, it demonstrates a practical boundary struggle over authenticity in the witnessing field, where photojournalism and indexicality may conflict with other practices and ethical principles.
Similar themes are extended but reorganised at an international scale in All Eyes On Rafah, where affective witnessing and spectatorship converge. Here, discourses of authenticity as indexicality prioritise media that are constrained not ethically but technically and politically. This second controversy concerns the legitimacy of the image itself, leaving its broader technical configuration underexplored, even though the media event and its millions of shares both protest and invite reflection on that configuration. It focuses on how the image failed to indexically represent its context despite being analogical content that proved transmissible in context; as such, it represents a symbolic struggle over authenticity in general, between actors with more or less power. This struggle transcends the witnessing field, involving debates about the status of ordinary social media users as tertiary witnesses, indexicality’s limits in a highly regulated media context and the legitimacy of affective witnessing as a means of highlighting and resisting regulation of what is itself a controversial conflict.
These controversies therefore reveal the political tensions and complexities of authenticity in the GenAI era. They demonstrate that media witnessing practices can mobilise different dimensions of authenticity simultaneously, and that GenAI configures such practices in nuanced and ambivalent ways according to the capacities and motives of “witnesses” or GenAI users. AIGC therefore unsettles contemporary assumptions about authenticity as a technical property of visual witness media, instead suggesting what Jestrovic (2008) describes as “hyper-authenticity,” where authenticity is co-constructed by the users, texts and beholders in question as a multidimensional value. Hyper-authenticity raises questions about what authenticity means in different contexts, for different media communities, ultimately problematising configurations of GenAI use. This necessitates case-oriented analysis of communities and their discourses, goals, needs and sociotechnical contexts. Rather than presuming authenticity is fixed or agreed upon, we must ask: what does authenticity do for whom, and at whose expense?
Footnotes
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was funded by the Australian Government through the Australian Research Council’s Centre of Excellence for Automated Decision Making and Society (ADM+S) [CE200100005].
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
