Abstract
The article explores the growing use of artificial intelligence (AI) in the entertainment industry, focusing on the impact on screen and audio performers. AI technologies enable the creation of synthetic performances, raising ethical concerns regarding identity manipulation, consent, and performer rights. Ethical AI should prioritize human-centric values such as transparency, accountability, and fairness, but many principles remain abstract and difficult to implement. Examining the intersection of acting and AI – in terms of working practice and the discursive narratives used to situate that work – highlights significant issues about responsible AI use, the mechanics of media production, and the management of digitised identities. It stands as a useful model through which to explore and advance concepts of responsible AI practice and rhetoric. The article examines fraudulent and exploitative uses of performer likenesses, how synthetic performance is enhancing celebrity brand identities, and evolving regulatory measures governing performer intellectual property (IP) across the U.S. and UK media industries. Performers are increasingly negotiating licensing and legal agreements to protect their likenesses, but current protections primarily benefit well-known stars over other performers. The article underscores the need for systemic frameworks to ensure responsible AI use, emphasizing the critical role performers play as both data sources and cultural agents influencing public trust in AI technologies.
Introduction
This article explores the increasing use of artificial intelligence (AI) in the entertainment industry, focusing on the impact on screen and audio actors. The ability to create synthetic performances from AI tools, along with other advanced digital technologies, raises important questions of ethics and responsible use and helps frame ongoing debates. Individual identity is becoming subject to practices of synthetisation in both innovative and exploitative ways. The article discusses deepfakes and fraudulent practices that alter performer images and sounds without consent, new digital media content which enhances existing brand identities, and evolving contractual negotiations between actors and industry over historical and future uses of their performance IP. Examining the intersection of acting and AI – in terms of working practice and the discursive narratives used to situate that work – highlights significant issues about responsible AI use, the mechanics of media production, and the management of digitised identities.
I first situate the intersection between screen and/or audio performance and AI within existing contexts of responsible AI, and then within the commercial production of digital humans and that field's accompanying calls for urgent regulation and governance. Successful social, political, and commercial discourse depends on strong public trust in the application of advanced technologies capable of constructing new digital identities. The AI-driven digital replication of performer labour illustrates the complex regulatory pressures on emerging technologies, where actors increasingly serve as a visible ‘social contract’ between society and technology. My exploration of responsible AI and performer labour focuses here on American and UK contexts that reflect “vectors of the global popular” (Acland, 2020, p.68) in their commonalities of media production practice. I explore four operational pathways and cultural narratives around AI and synthesised individuals: 1. Service- or identity-based digital humans; 2. Fraudulent or exploitative use of performer identity; 3. Licensing agreements and legal rights around star performers; 4. Ecosystems surrounding celebrity digital brand narratives. These narratives help shape our understanding of responsible AI legally, technologically, culturally, and morally.
Synthetic performance in context: The power of AI narratives and evaluating frameworks of responsible AI
Hollywood, typical of global mainstream entertainment industries, is a crucial intersection of practical and discursive AI uses, serving as both an industrial structure and a storytelling apparatus. Its screen representations have shaped perceptions of AI for decades and are significant in presenting thematic concerns around AI and influencing public imaginaries about technological futures (Brammer, 2022). The value of the narratives constructed around AI, from “works of science fiction and corporate marketing by big tech firms to subtler storytelling told by scholars and public intellectuals” (Chubb et al., 2022, n.p), has been widely acknowledged. Sartori and Bocca identify that dominant AI narratives often adopt utopian or dystopian tones, characterising the technology through hopes and fears (2022, p.446). Cave et al. emphasise the importance of these narratives in forming sociotechnical imaginaries, noting that “narratives of intelligent machines matter [because] they shape the backdrop against which AI systems are developed, interpreted and assessed” (2020, p.7–8). The impact of such representational discourse on cultural knowledge of technology is significant. The UK House of Lords’ Select Committee on Artificial Intelligence criticized popular AI portrayals in its 2018 report, stating that the image of “artificial intelligence in popular culture is lightyears away from the often complex and more mundane reality”, leading to unduly negative public views that have “largely been created by Hollywood depictions and sensationalist, inaccurate media reporting” (2018, p.22–23).
My work shifts focus from sensationalist screen narratives to the practical management of synthesised individuals in media entertainment. It examines how AI affects performers in media production and accompanying discourse (including marketing materials, journalism, industry reports, and trade union campaigns), investigating how the promotion and politics of performer labour and celebrity identity mould understanding and applications of AI technologies. Since humans are “reticent to adopt techniques that are not directly interpretable, tractable and trustworthy” (Barredo Arrieta et al., 2020, p.82), the demand for ethical AI works to increase the acceptance, uptake and use of AI systems. Examples and narratives that demonstrate models of responsibility become important in this process, whereas cases of misuse or exploitation present negative perceptions that must be overcome. The AI systems aligned most convincingly with human values can inspire and create trust. Responsible AI is therefore inherently defined from a perspective that is “human-centric and society-grounded” (Dignum, 2019, p.5).
With its focus on people-centric data and its position within complex, established systems of media production, commercialisation and cultural meaning-making, synthetic media performance serves as a microcosm through which to explore some of the competing pressures, systems and perspectives informing ideas of responsible and ethical AI. The relationship between media and generative AI is relatively underexplored, despite the disruptive impact of generative AI models and systems on real-world domains like media generation (Kenthapadi et al., 2023). Discussions about regulatory frameworks around AI rarely include media, and when they do, tend to focus on elements associated with news media and journalism, including disinformation, data and AI literacy, and issues of diversity, plurality, and social responsibility (Porlezza, 2023). Other aspects of media production, especially those that place actors and other performers at the centre, are therefore often excluded from emergent conversations, and the concerns that arise from this impact can be side-lined.
Core principles of responsible AI include accountability, responsibility, and transparency, along with fairness, inclusivity, trust, privacy, sustainability, and explainability in AI design and application. However, these principles, while widely accepted, have been identified as universalised, abstract and vague, with the potential for weak implementation (Schiff et al., 2020). They derive from an “ideal model” that overlooks the socio-economic context, which is shaped by a plurality of contextual values, power dynamics, and material conditions (Gianni, Lehtinen & Nieminen, 2022, p.2). Acknowledging this obliges responsibility to extend beyond a system's operation: continuous chains of responsibility must link the system's behaviour to responsible actors, requiring the informed participation and commitment of all stakeholders (Dignum, 2019, p.2).
To achieve responsible AI, the technology must be situated not merely as a series of computational engineering challenges, but as a social object and part of socio-technical relations that must be operated holistically, taking ethical, legal, societal and economic implications into account. This creates a “many hands problem” whereby responsibility is distributed across a plurality of professional disciplines, potentially in a muddled way (Schiff et al., 2020). Possibly because of this plurality, there remains an assumed focus on designers and users/audiences as the key stakeholders who determine the boundaries of ethical and responsible use, acting as bookends to the workflows and interfaces (Barredo Arrieta et al., 2020; Gianni, Lehtinen & Nieminen, 2022; Schiff et al., 2020). Whilst it is broadly true in many contexts that the focus of AI is “not about imitating humans but providing humans with tools and techniques” (Dignum, 2019, p.5), this does not account for those moments where the aim of AI is precisely to imitate humans – as synthetic performance aims to do. Therefore, using the intersection between real performers and the generation of synthetic media performance to explore the boundaries around responsible and ethical use of AI intervenes in existing discussions in two primary ways.
First, it situates the application within the underexplored sector of the media entertainment industries, mapping how those core and sub-principles of responsible AI are manifest in, or absent from, current practice. This sector is a multifaceted socio-technical system that encompasses and manages multiple stakeholders, workers, machines, institutions, and economic and legal concerns, so it illustrates aspects of the “many hands problem” but also demonstrates how certain pluralities may work effectively together. Second, it expands the idea of human-led AI beyond the developer and the user by placing human identity and performance as tangible data resources used in AI applications. This complicates the belief that most AI systems operate under the mandate of people or corporations that already have legal personhood, “which is sufficient to [address] potential legal issues around the actions and decisions of AI systems” (Dignum, 2019, p.2). Instead, I consider the inadequacy of existing frameworks to protect human performers within AI-driven ecosystems, and how parallel frameworks drawn from celebrity rights can construct new protections. In this context, whilst under-theorised and under-protected, actors and performers are, through their encounters with generative AI, at the coalface of determining what responsible AI looks like in practice.
Service and identity: Generative AI, digital humans and the removal of performer voices
Generative artificial intelligence (GAI) systems create content by analysing diverse training data and prompts, synthesising inputs like text, images, and videos to generate new visual, audio-visual or audio media. Performer-centred outputs may use data from a single performer or multiple sources to create new digital content, independent of the original performer's work. Data may come from an actor or musician's own archive (via agreed licensing or from material in the public domain) or from current projects where performers know their work will be used as a digital data source for GAI and other technological systems. This data includes images, movements, gestures, vocal intonations, sounds, and audio-visual examples of behaviour and emotions. The resulting outputs may be wide-ranging in terms of use-function and modes (still images, moving images, spoken-word media, and programmable digital assets) but share an aim to create a believable, often interactive, approximation of human identity, action or behaviour. Outputs range from simple two-dimensional deepfakes with limited appeal to intricate three-dimensional interactive digital human assets used across various platforms. These complex digital creations often integrate GAI tools with other advanced technologies like body scanning, motion capture, game engines, and rendering capabilities, involving intermediaries such as VFX artists, computational designers, body doubles, agents, and legal representatives. In broader technological and industrial landscapes, institutional organizations and agendas (such as Hollywood studios or consumer-driven frameworks) significantly influence the responsible or irresponsible use of performer-centred AI.
The 2023 Writers Guild of America (WGA) and Screen Actors Guild - American Federation of Television and Radio Artists (SAG-AFTRA) strikes in Hollywood exposed the increasing impact and future concern of GAI on the entertainment industry. Concerns over unregulated AI use dominated the narrative, with SAG-AFTRA demanding provisions “to protect human-created work and require informed consent and fair compensation when a ‘digital replica’ is made of a performer, or when their voice, likeness, or performance will be substantially changed using AI” (SAG-AFTRA, 2023). The rapid development of GAI tools may also bypass traditional media organisational structures, operating within unclear or ill-defined regulatory frameworks. For example, AI studios and software developers can sell or distribute synthesized performances directly to the open market without monitoring use or obtaining the original performer's permission.
Such concerns became visible in 2024, when OpenAI launched GPT-4o, a new Large Language Model capable of voice, text, and vision communication. The demo voice resembled Hollywood star Scarlett Johansson, who had played a similar role as a sophisticated chatbot in the film Her (2013); reportedly a favourite film of Sam Altman, OpenAI's chief executive. Although OpenAI denied cloning her voice, Johansson claimed that Altman had previously attempted to hire her in this role as “he felt by my voicing the system, I could help consumers feel comfortable with the seismic shift concerning humans and AI” (quoted in Naughton, 2024) – a position that reflects the notion that human performance, presence and values are central in achieving trust in AI technologies. Under the intense public scrutiny that followed, “out of respect for Ms Johansson” or perhaps because “she is famous and has expensive lawyers” (ibid), OpenAI ceased use of the voice. As well as illustrating the value of human presence in establishing boundaries for new technologies, the case also points to the credibility of vocal replication in AI and the influence of public standing (in terms of both persona and fame) on determining appropriate AI use. As a narrative, it remains ethically and legally imprecise, standing outside clear regulatory frameworks – it is still unknown whether cloning or impersonation occurred, or what action Johansson's team took – but it also directs attention towards the important intersection between the value of identity (Johansson herself) and of service function (the chatbot tool).
Issues surrounding digital humans and AI are closely connected, with industry reports highlighting the pivotal role that AI plays in the construction of digital human interaction. In interactive cloud platform Zegocloud's 2023 whitepaper, digital human applications are categorised into “identity-based” and “service-based” types. Identity-based digital humans emphasize independent identities and personalities “for entertainment and social purposes”, such as “digital avatars of real people in virtual worlds” (2023, p.3,8). Service-based digital humans focus on functionality, “replac[ing] real human services” in content production as multimodal AI assistants (ibid). However, using real screen or audio performers as bases for these systems blurs these boundaries, revealing pluralities in managing digitized identities and digital servitude. Performer-led AI outputs occupy a complex space between replicating individual identities and producing generic content, demonstrating the overlapping roles and challenges AI and technology present in this field. With its inherently pluralistic values and functions, acting is therefore a difficult but useful profession to integrate into AI and other digital systems. Actors create artistic work, but under particular conditions, including those that may assign the idea of ultimate creative power elsewhere – for example in the form of a director or editor. Actors provide a service, where their product – a given performance – is “inseparable from his or her physical [or vocal] attributes” (King, 1986, p.158). Furthermore, the strata of performers that can be culturally and economically defined as recognisable brand identities – stars – offer that service (or product) “centred on their presence rather than their performances” (ibid, p.165).
As seen with Johansson and OpenAI, voices and voice actors face pressures from the integration of GAI systems in audio content production. Realistic voice-cloning software uses actors’ performative data to create new audio content, producing synthetic performances that replicate human services without necessarily crediting a single actor, and are often anonymized or amalgamated from multiple actors. And yet actors are typically hired for their unique interpretations of text, emphasizing their individual personalities. Synthesised voiceovers for audiobooks, video games, and digital assistants may seem to fit a ‘service-based’ agenda but inherently rely on ‘identity-based’ values. Entertainment industry practices increasingly use identity branding and licensing of individualised IP to give performers more control over their synthesised work; however, this strategy only benefits those who can leverage their acting and performance style into a brand identity (Thomas, 2024). Emergent labour agreements are negotiating this dichotomy, evident in a 2024 deal between SAG-AFTRA and AI voiceover studio Replica, which protects voiceover performers licensing their voices for video games. This arrangement applies only to the use of AI in digital replication (re-creating the voice of a credited or identifiable actor), not to performers whose work is used within a GAI dataset to create “synthetic actors that bear no resemblance to real performers” (Maddaus, 2024).
AI-driven workflows under ‘service-based’ agendas often obscure crucial performer labour, illustrating the potential to evade the moral and legal scrutiny necessary to protect these workers. Instead, human performer labour becomes disguised – technologically and conceptually – as a non-human part of the apparatus by tech companies, rendering it absent from wider conversations. This invariably side-steps performers (as intermediaries/workers) in the discussion around responsible adaptation of technological advancement, in favour of user transparency and responsibility. But older parallels exist for the powerlessness of actors in the face of technology. In relation to photography, Jane Gaines argued that “the threat of the machine is the threat of the loss of the legal subject… totally missing in the creative act… where contenders for authorial contribution [performers] are written out of the process” (1992, p.66).
Rhetoric that removes performers as a creative element necessary for AI systems remains prevalent. When questions of AI and synthetic performance first garnered attention in 2019, then SAG-AFTRA President Gabrielle Carteris questioned Michael Petricone of the US Consumer Technology Association on how new technologies might harm performer identity as individual intellectual property. Carteris explained that digital replication would make performers vulnerable since an actor's identity is their product and “if someone… suddenly controls it and makes it their property, that is devastating” (SAG-AFTRA, 2019). Petricone's response ignored the human element and compared the situation to the introduction of home recording technology in the 1980s, whereby “the industry's fear of the record button on a VCR” was reconciled with audiences’ love of the play button (ibid). By likening actors to VCR hardware, Petricone removed performers from the technological workflow, transforming them into consumer tech objects rather than acknowledging them to be an active workforce crucial to emerging technologies.
This rhetoric persists despite actors highlighting AI integration in industrial practice through SAG-AFTRA's actions. Technology entrepreneurs continue to narrate their work without acknowledging performer contributions. In a 2023 interview, Greg Cross, the CEO of Soul Machines (a digital platform specialising in AI-driven digital humans), described the company's sense of the digital human process through “creator ecosystems” characterised by financiers, engineers, product design, technological expertise and “the consumer base” of users and fans (Xero, 2023). Absent from his account were the contributions made by any performer identity or labour that would be central to creating service-based digital humans like ‘Kai’ or ‘Vesper’ (who helps users practice for job interviews and plan trips), or identity-based digital humans like ‘Digital Jack Nicklaus’ and ‘Digital Mark Tuan’, based on and developed in collaboration with, respectively, the legendary American golfer and the contemporary K-Pop singer.
Despite limited technological narratives, there are broader acknowledgments that AI and digital humans are becoming embedded in media and technological practices, with real humans negotiating these frontiers. Performer unions like SAG-AFTRA promote acceptance within defined boundaries rather than calling for removal, emphasising that technology and performance must cohere to develop beneficial terms for performers, allowing them to engage with new opportunities (Campione, 2024). As such, there is a widespread call for new, more effective regulatory systems to be developed alongside the emergence of synthetic performance, digital humans and AI; Matilde Pavis observes that much of “the creative economy is ill-equipped to adapt to the changes brought by AI systems to their industry” (2020, p.2) and that “experts have been actively looking for solutions to control Deepfakes since their emergence in 2017” (2021, p.974). Zegocloud's report, while focused on mapping the digital human landscape, emphasises the urgent need for industry regulation and governance, with the authors noting that advances in the digital human industry complicate social and human governance, citing fraud, deception, and ethical challenges as key issues (2023, p.29, 31). This stance, unlike that of CTA's Michael Petricone, acknowledges that technological regulation alone is insufficient. Human performance and performer voices must be considered to address appropriation, integration, emerging power dynamics, and the role of ethics in assessing these developments.
In Greg Cross's interview, the entrepreneur aligns user satisfaction and industry growth with moral ideas of truth and honesty, emphasising integrity as key to a satisfactory customer experience. For Soul Machines, this relies on a transparency whereby replication is overtly acknowledged and digital creations announce their replicated status: “Hi, I am the digital X, not the real X” (Xero, 2023). Such a stance is indicative of wider industrial strategies designed to establish trust in the cultural consumption of technological apparatus. A performer's individuality – their labour, their sense of self, or their brand image – and an ethical recognition of these by digital creators indicate a responsible technological field more broadly. As a UK newspaper noted regarding the Scarlett Johansson/OpenAI debate: “If Scarlett Johansson can’t bring the AI firms to heel, what hope for the rest of us?” (Naughton, 2024). This highlights a trichotomy between output (virtual identity), technology (AI systems), and society (ethical order), with performers as the crucial middle ground – not creators or users, but workers and data sources – who reveal and reconcile these aspects.
Fraud and exploitation: Misusing performer labour and workforces
AI tools have increasingly been used to exploit performer labour and identity, notably through visual deepfakes that make use of star performers. Examples include face-swapping stars in viral clips, TikTok's ‘DeepTomCruise’, political satire, and deepfake porn mimicking leaked celebrity sex tapes. These AI-generated deepfakes have fuelled discussions about AI as a tool of fakery and fraudulence, raising significant cultural and political questions about the authenticity of what audiences see, and reinforcing public understanding in line with Sartori and Bocca's “dystopian” AI narratives. A notable example occurred during the 2024 New York Met Gala, when fake AI images of celebrities like Katy Perry and Rihanna circulated on X (formerly Twitter), further framed by extensive accompanying media reports of the fakery. Although not inherently negative, these images highlighted the ease of spreading believable but false media created and shared in real time. Whilst fake, the images broadly conformed to existing audience expectations of those celebrities’ star images. Perry herself had to clarify that the images were fake on X, even to her own mother, as widely reported by major news outlets (Rufo, 2024). This situation underscores the challenges in distinguishing real from AI-generated content.
As Christopher Holliday argues, deepfakes use the recognisable faces of Hollywood stars “to exhibit the representative possibilities of Deepfakes as sophisticated technology of illusion… where the furore over the digital manipulation of the pro-filmic performer” reflects broader cultural anxieties about digital media manipulation (2021, p.900). Deepfake content epitomises misleading media distortion that “sits squarely within the contemporary era of manipulated global media in a ‘post-truth’ climate” (ibid, p.903), raising concerns about technology's ability to create and circulate “relatively convincing fake news footage” and harmful pornographic content (Popova, 2020, p.367). In relation to deepfake porn, Popova notes that platforms like PornHub and Reddit have “grouped deepfakes together with revenge pornography under the headings of ‘non-consensual pornography’” (ibid) to create an internal regulatory framework to ban such content.
Both Holliday's and Popova's studies of early deepfake content push the analysis towards user communities rather than industrialised practice around a creative performer workforce. In doing so, the cultural narrative shifts into a more utopian tone, highlighting the creative and playful nature of fan communities. Popova contends that deepfakes are generated by audiences, not celebrities or commercial producers, and are intended for specialized, niche communities, often clearly marked as fake (2020, pp.369–70). Holliday characterises the playful face-swapping texts of independent online video artists as part of “the fan-made fixing of Hollywood stardom”, typical of ‘take two’ cinephilia (2021, p.912). Similarly, the Met Gala AI images of Katy Perry were framed as relatively harmless – fake rather than fraudulent – through language labelling them “just an internet prank” (Calfee, 2024), Perry humorously identifying the fakes on Instagram, and the platform branding them as altered photos, which enabled the post to remain accessible (katyperry, 2024).
Instances of misuse abound, though, especially in audio content, where AI-created impersonations or performances may be significantly more profitable, increasing precarity for lesser-known voice actors. These uses are complex, making regulatory processes challenging. Legal frameworks able to link identity ownership to original performances and conditions of employment, together with cultural ideas of appropriate worker treatment, are key to addressing this. Two 2024 UK examples highlight the scope and difficulties in identifying, negotiating, and trusting AI-generated performances in commercial workflows. They also reveal the need to separate case details from hyperbole around AI as a sole nefarious threat, and show how narratives of trust (AI or otherwise) are framed: contextualising AI use within the wider schemes in which it sits is crucial. The first example involves a wholly fraudulent case within a larger enterprise of digital deception. The second is ingrained in existing creative industry infrastructures, raising ethical questions about employer/employee relationships and appropriate media production techniques. Responsible AI use involves not only the software itself but also how its practice is structured and communicated, with rhetoric and usage defining appropriate and responsible parameters.
In 2024, UK television presenter Liz Bonnin was involved in a work-based scam in which audio and visual GAI tools were used to impersonate her and fraudulently negotiate a lucrative endorsement deal. Bonnin is well known to UK audiences for her television appearances, mainly on wildlife and environmental programmes for the BBC. In contrast to the deepfakes discussed above, the agenda here was not play or entertainment but financial gain, misleading a company through celebrity impersonation; as Bonnin commented after the incident was resolved, “it's not fun… but a violation on both our [hers and the company's] parts” (Gecsoyler, 2024). Widely reported as an example of the dangers of AI (“after firm tricked by AI-generated voice” (ibid)), the case illustrates a multifaceted identity fraud that exploited celebrity identity as a marker of authenticity, in which GAI is but one element. An unknown party approached Howard Carter, CEO of Incognito – a company specialising in eco-friendly insect repellent – through social media. As Carter had previously attempted to engage Bonnin in an advertising deal, when a Facebook profile purporting to be the star contacted him, he interpreted it as Bonnin's legitimate follow-up.
Through WhatsApp voice messages, phone calls, and emails, the fake Liz Bonnin agreed to a deal “as a favour, provided it did not involve her main agency” (Goldbart, 2024) and provided a bank account for a £20,000 payment. AI software generated the fake voice, and an unlicensed image of Bonnin was used for a visual advertisement. Bonnin and her management discovered the fraud only after seeing the poster in circulation and issued a cease-and-desist notice to Incognito, who complied. This was a complex industrial deception involving fraudulent emails, digital contracts, and misleading information from charities associated with Bonnin, all working to make the deal appear legitimate. However, the focus remained directed towards AI and the risk the technology posed to trust and reputation. Bonnin's management highlighted this misuse of AI as a “worrying trend for the creative industries”, stressing that lawmakers and social media platforms are too slow to respond (ibid). Mainstream media reporting emphasised AI over the extensive identity fraud, and high-profile cases like this contribute to the ongoing mythification of AI as a nefarious intellectual entity rather than a technological instrument. It is through this type of cultural narrative and communication that scholars have come to argue for a reinterpretation that “secularise[s] AI from the ideological status of ‘intelligent machine’ to one of ‘knowledge instruments’” (Pasquinelli & Joler, 2021, p.1263).
Whilst the impact of irresponsible AI use is keenly defined through cases of performer reputation and fraudulent economic practice, elsewhere it has been characterised as a threat to a precarious acting workforce, driving concerns about eliminating performer labour from creative media. These concerns contributed to the 2023 SAG-AFTRA strikes, with guild President Fran Drescher emphasising the threat to media performers by arguing that “the entire business model has been changed by streaming, digital, AI… If we don’t stand tall right now… we are all in jeopardy of being replaced by machines and big business” (Drescher quoted in Earl, 2023). This fear materialised again in 2024 when actress Sara Poyzer lost a work offer from the BBC. Primarily employed in voice overs and theatrical work, Poyzer shared a screenshot on social media of an email from a production company to her representatives, stating, “Sorry for the delay – we have had the approval from the BBC to use the AI-generated voice so we won’t need Sara anymore” (Sara Poyzer, 2024).
The post has become a prominent example of the GAI-performer debate, with 2.3 million views on X, nearly 3000 retweets, and hundreds of comments from actors sharing similar experiences and reduced employment opportunities. The example characterises three significant elements defining discussions around AI and human performance. First, it is a literal manifestation of the threat SAG-AFTRA warned of in the machine-led replacement of the human performer workforce. Second, the brusque language of the email disregards the emotional impact of rescinding that employment and devalues the livelihood and labour of this workforce. Third, it implicates the BBC, a trusted media institution, in using AI in a way deemed harmful and irresponsible. The BBC topped a 2023 YouGov poll on trust (Smith, 2023) and describes its own core values as “trust, respect, and accountability” (BBC.com). The incident suggested a conflict between the BBC's trusted image and the commercial benefits of using GAI content.
Following Poyzer's post, the BBC released a statement to mitigate reputational damage, reframing the issue around responsible AI use. They confirmed replacing Poyzer with GAI-created voice content but explained it was for a documentary about a terminally ill contributor who could no longer speak, claiming this method best represented the contributor's voice per their family's wishes and that the use of AI was to be clearly labelled (Kanter, 2024). This shifted the discourse from worker exploitation to a moral issue of representing selfhood, focusing on an ordinary vulnerable person rather than a celebrity. In doing so, it also downplayed the economic realities for performers working in the industry, unable to leverage values of notable individualism into protection. As a relatively unknown voiceover actress, Poyzer's self-presentation was limited to individual social media posts, unable to counter the larger media narrative that shifted her case from an economic threat to a broader societal merit issue. Unfortunately, this example also highlights systemic industrial practices that shift the focus of responsible and ethical AI use away from the human acting workforce, which is integral to the media industry, and instead emphasise a wider notion of the important ‘human element’.
Image and performance: Legal frameworks and licensing agreements around star performers
Managing synthesised performance is becoming increasingly complex as uses, workflows and cross-platform synergies expand around digital technologies, virtual humans and the integration of artificial intelligence tools within practices. Legislative and regulatory acts may focus on one example, such as the deepfake, but in practice can currently offer only limited protections, in part due to the expanding digital media landscape that performers sit within. Where calls for broad change have been addressed, outcomes such as the resolution of SAG-AFTRA's 2023 industrial action offer protections only to those individual performers able to prove that their own identity, performance and IP has been replicated. The directive towards control and responsibility thus shifts away from wider corporate responsibility towards notions of self-management and control over image on an individualised basis, through specifically negotiated licensing agreements. This implicitly benefits those with the most negotiating power and networks, such as the highest strata of performers – stars – and as SAG-AFTRA's negotiating chief conceded, “celebrity type performers will benefit from provisions that are negotiated” (Campione, 2024).
It is therefore often emergent, star-led individual deals that help to define and ground appropriate boundaries of use and compensation around the further integration of AI systems into media production. On the one hand, this focus on celebrity-driven deals pushes a neoliberal agenda of individualised power, self-protectionism and management of the self over more systemic frameworks that can benefit a wider precarious workforce (like Sara Poyzer). But on the other hand, because of the pluralistic nature of star images and work as a form of intellectual property – and its long history in media production – this is also working to articulate the multifaceted aspects of image replication and recirculation that need to be considered more widely.
Mathilde Pavis (2020, 2021) explores significant issues arising from AI's impact on performers in the UK, focusing on the Copyright, Designs and Patents Act 1988 and the intellectual property rights known as “performers’ rights”. Pavis identifies that performers’ rights have international standing potentially able to address the global challenges of AI-produced content, but would require substantive revisions to be fully useful. Currently, under performers’ rights, UK intellectual property frameworks protect only the recording of a performance and replications derived from that recording, not the performance itself, which can be imitated without consequence (Pavis, 2021, p.990). AI-generated media reproduces performances without copying the original recording, so currently falls outside performers’ rights protections. Through her analysis of this emerging context, Pavis identifies a cycle that articulates what is at stake for professional performers with the introduction of performance synthetisation. With AI replication standing outside performers’ rights, performers are excluded from contractual agreements that credit or license synthetisation of their work, preventing appropriate financial gains. Without contracts, performers cannot “authorize [and] commercialise the synthetisation of their own performance effectively” (Pavis, 2020, p.3). Thus, creating pathways to the right to protection, contractual agreements, and commercialisation is central to models of responsibly managing synthesised individual performances. Tellingly, one sign that Liz Bonnin's licensing agreement was false was the stipulation that it take place outside her standard agency-led negotiations.
Whilst Pavis's work reveals that performers’ rights lack the breadth to cover all necessary aspects, elements of image rights could be used to fill this gap. In a UK context, Pavis cites the case of British footballer Wayne Rooney, who successfully terminated an image rights representation agreement made when he was 17 (2021, p.978); a victory based on his right to exploit his own image and enhance his earning capacity (Lynam et al., 2021, p.1736). This non-digital precedent again points to the high bargaining capacity of well-known public figures to set boundaries of appropriate protections, and to a requirement to consider the intersection between ‘image’ and ‘performance’ as a way forward. As such, the synthetisation of star figures is useful because the process must be determined through existing economic and legal systems where image, performance and other identity-based markers have traditionally been managed synchronically. Jane Gaines characterises Hollywood's twentieth century star system as “a story of how individuals attempted through contract negotiation to wrest control over their lives as well as their image product” (1992, p.37), and such long-standing contexts of media stardom offer a further framework to experiment with performance and rights amidst evolving technology, helping define workable parameters of responsibility and use. In the US, performers and public personalities are protected by the Right of Publicity, which safeguards against the misappropriation of their name, image, and likeness (NIL) for profit, and is well-established in celebrity business practices. Rights of Publicity protections are managed through licensing or contractual agreements and may include use restrictions, exclusivity, or perpetuity clauses. However, much like the UK system, this is not an infallible structure either, as it is executed state by state with varying levels of protection (Roesler & Hutchinson, 2020).
Additionally, the Right of Publicity can conflict with the US Constitution's First Amendment, which allows the use of a person's NIL for news, education, and some entertainment purposes. Rights of Publicity protection is therefore enacted only when clear commercial exploitation (away from the celebrity) can be identified.
Frameworks of responsible AI therefore require the ability to negotiate and manage the highly complex and pluralistic contexts around commercialisation, image replication, performance rights, and the ownership and reuse of identity-based intellectual property that media stardom embodies. Within celebrity rights, selfhood and iconicity are valued and protected alongside performance and image. Stardom (and star status) can be defined as the development of a personal monopoly on a “carefully developed persona” through which a professional “manages his own labor commodity in a market” (Gaines, 1992, p.33). To have a fully protectable image-property, performers must cultivate “a public persona with secondary meaning (requiring public recognition)” (ibid p.98). This allows stars to circulate without losing their distinct property quality, with “their meta-meaning surviving different embodiments” (King, 1986, pp.164–165). A star's performance, inherently structured through personality, can only be impersonated - not replicated - reinforcing the star's uniqueness (ibid p.166).
The protections afforded by copyright and privacy laws, alongside image and intellectual property rights, define a star's rights diversely, encompassing performative labor, acts and acting, appearance, gestures, promotional work, likeness and iconicity, the performance of self and brand, voice, and name. As economic and cultural entities, stars negotiate between representational power (of the image), operational power (of their own agency) and consumer power (of the recognisability and value of their iconicity for audiences). In the context of AI technologies, this balance can define frameworks of responsible use, where stars possess the bargaining power, along with an authenticity and stability of image, to meaningfully “negotiate with new technologies, texts, audiences and markets” (Thomas, 2019, p.453). Examples of licensed celebrity-based AI-driven content created through well-established positions of trust, consent, control and compensation between individual performers, their employed intermediaries (agents, managers), and external stakeholders suggest emergent avenues in responsible AI development.
Crucial here are the details included in licensing agreements, and how these details are positioned and circulated amidst the wider context of a star's “meta-meaning”. In star contracts that have become public discourse, boundaries of use and attempts at futureproofing are being drafted into agreements, and particular narratives are being woven around individual stars that reflect their already established (and trusted) star image. In 2023 – while discourse around the threat of generative AI to media creativity was notably accelerating – it was widely reported, accompanying the promotion of John Wick 4, that its star Keanu Reeves had added a clause to his film contracts prohibiting his screen performance from being digitally altered in post-production without his consent. This clause had been in place since the early 2000s, responding to earlier shifts in digital post-production editing, but had garnered little attention. Yet in 2023, it was widely recirculated and framed as Reeves taking a visible ethical stance on AI. This position was often constructed in journalistic discourse as mirroring Reeves’ own highly developed and recognisable star image of realism and heroic moral certainty, with one interview observing, “That's one thing we can say about Keanu Reeves: In a world of fakes and frauds, he's fighting for what's real” (Watercutter, 2023).
Stars with less defined images may approach AI integration differently. In 2022, reports revealed that, at 91, James Earl Jones, the veteran character actor famed for voicing Star Wars’ Darth Vader, had signed over rights to use archival records of his voice to Disney and Lucasfilm. The studios hired Respeecher, a Ukrainian voice-cloning start-up, to use AI technology to create new Darth Vader vocals for the Disney+ series Obi-Wan Kenobi from Jones's original material. The case did not generate significant discussion on the extended ownership of Jones’ voice, coming prior to the AI concerns highlighted by SAG-AFTRA's and the WGA's 2023 industrial action. Instead, the agreement was framed through alternative ideas of technologically driven ethical and authentic action. The focus was on preserving the Star Wars franchise legacy, with the curation of Jones's voice standing as an important stewardship of the character (Bevan, 2022). It also emphasised Respeecher as a symbol of moral heroism, with the company's practices continuing despite the 2022 Russian invasion of Ukraine. Here, synthetic voice technology represented resilience against tyranny, where the generation of any voice – even a digital voice – was a symbolic act of strength (Breznican, 2022). Placed within narratives of global conflict and the cultural power of franchise entertainment, star performance and image rights become secondary.
Beyond these individual cases though, newer scaled-up practices are emerging in the entertainment industry, potentially shifting from individualized self-management to more systemic operations that better understand the value of licensing and managing the synthetic asset. In March 2024, Creative Artists Agency (CAA), one of the pre-eminent entertainment and sporting agencies responsible for representing thousands of clients in the industry, announced a strategic partnership with American AI company Veritone to develop a synthetic media vault. This repository is designed to securely store intellectual property including a talent's name, image and likeness, digital assets and metadata such as synthetic counterparts, digital scans and voice recordings (Christopherson & Metzner, 2024). CAA defined their investment through an increased responsibility to re-shape the entertainment industry, supporting and protecting artists while integrating AI responsibly into opportunities across the entertainment landscape. This collaboration between the technical and entertainment industries aims to create a secure solution that ethically and financially safeguards performers, enabling them to “monetize their likeness with peace of mind” (ibid). The talent agency-led initiative addresses the challenge performers face in authorizing and commercializing AI-created synthetic performances, provided they are represented by CAA – an affordance by no means open to all. This large-scale connected model demonstrates a promising, if still limited, approach to ethically managing identity and performance-based AI in the industry.
Brand narratives and transmedia ecosystems: Integrating star image and managing plurality in synergistic media markets
Oversight of the fuller enterprise that AI-generated content and performance sits within is important. Moving away from the specifics of individual contractual agreements to consider a more complex media ecosystem enhances understanding of how the parameters of responsible AI use are being negotiated and set through the work of performers and media producers. To explore this, the final section looks at one example of star-driven AI – Jennifer Lopez's digital replication for Virgin Voyages in 2023 – to illustrate how issues of identity management, performer rights, and the connected networks that finance and create AI outputs come together in an effective and successful way for all parties. It considers how the intersection of three established media systems – celebrity brand identity, transmedia cross-promotional campaigns, and digital image/VFX production – connects a variety of partners, stakeholders, audiences and practitioners in service of building large-scale strategies around consumer digital environments.
In 2023, Virgin Voyages, a cruise travel subsidiary of Virgin Group, launched an AI-driven marketing campaign featuring American screen and music star Jennifer Lopez. This campaign centred on the playful concept of ‘Jen AI’, an AI-generated interactive digital version of Lopez. Lopez authorized Virgin Voyages to use her likeness for a ‘Jen AI’ custom invite tool available on the Virgin Voyages website (virginvoyages.com) for a limited time. Access to Jen AI was restricted to the website and required user registration, including trip details and contact information. The tool allowed cruise customers to book and then send customised invites to family and friends in the form of ‘Jen AI’ - an audio-visual interface based on the star. This tool was developed by VML (formerly VMLY&R), with deepfake visuals by digital design agency Deeplocal and voice replication by SpeakUnique. Filmed footage of Lopez on a Virgin cruise ship provided the necessary data for creating realistic digital versions of her. Despite its limited availability, the tool generated significant impact, with advertising trade press and Deeplocal reporting between 1.4 and 2 billion media impressions, 25,000–30,000 custom invites created, and a notable increase in bookings (Deeplocal.com; WPP.com).
The rhetoric of responsibility, transparency and authenticity permeated the narratives situating Jen AI and Virgin Voyages' partnership with Lopez. This was designed to alleviate fears around AI for both performer and public, with an emphasis on the careful and respectful management of Lopez's valuable image and identity. VMLY&R's creative directors stressed the importance of respecting Lopez's professional boundaries, ensuring her likeness was used appropriately and with her consent; that “we’re not using a celebrity's image in a way that disrespects them or their brand” and that “these guard rails are built into the platform” (LBBonline.com). The restricted access to Jen AI ensured controlled interactivity where “customers can’t get too mischievous” (Hall, 2023), maintaining the project's integrity around Lopez's image and allowing output monitoring to measure success.
A short promotional video accompanied the tool's launch, showcasing a glamorous Lopez on a Virgin cruise ship, then revealed to be a digital clone controlled ‘backstage’ by Virgin employees in humorous scenarios. Tonally, this fictionalised behind-the-scenes account also framed the project for wider public consumption through ideas of play and transparency in its comic exploration of the processes of digital replication. From a holistic perspective – from star to consumer – the campaign was explicitly designed to shift the conversation around AI from fear to entertainment, with the producers commenting “Everyone is talking about gen AI - but a lot of that talk tends to be negative and quite frankly… scary. So, we decided to put a Virgin twist to it and keep it playful and inviting” (LBBonline.com). In stage-managing this shift around AI from threat to fun through legitimate and recognisable channels - comic performance and the established brands of Lopez and Virgin - this idea of ‘playfulness’ is rendered more transparently than, for example, the ‘harmless’ Met Gala deepfake images, where the public may gain some enjoyment but remain unsure of who is creating and controlling AI-created images and for what other purpose.
Responsibility is therefore implied through projects not sitting in isolation, but how they sit within ecosystems of transmedia production, branding, licensing, endorsements, and commercialisation, all of which cohere to reinforce narratives of responsibility, engagement, and trust. Sitting within a saturated travel market (still negatively impacted by the Covid-19 pandemic), Virgin Voyages wanted this campaign to capture attention, foster engagement, and drive bookings by leveraging excitement around generative AI to distinguish them in the marketplace and enable easy consumer data collection (wpp.com). The use of a celebrity like Lopez added a humanising element, making the technology and data collection more palatable to audiences.
Lopez's involvement was also tied to her existing endorsement deal with Virgin Voyages (since 2021), which promoted her beauty range, JLo Beauty, on board and in retail outlets. As the company's official brand ambassador, assigned the named role of ‘Chief Entertainment and Lifestyle Officer’, Lopez was promoted as helping to shape cruise programming, emphasising entertainment, fitness, wellness, and beauty. Extending the market for her own products benefited Lopez's economic value and her consolidation of digital consumer spaces, such as social media. The promotional video released to YouTube announcing her 2021 endorsement deal saw Lopez chatting on FaceTime with Virgin Group founder and figurehead Richard Branson, making use of the communication platform and its inbuilt ‘fun’ digital filters. AI-driven digital twins may be seen as the logical next step along the path of celebrity marketing that integrates digital replication and AI systems within other digital frameworks such as online environments and virtual economies. Promotional trade press observes that celebrity digital success “hinges on authenticity, strategic audience targeting, and embracing innovation [where these figures become] master storytellers who craft engaging content, build strong partnerships, and adapt to the ever-changing digital landscape” (engaged.social, 2024). Jen AI and Virgin Voyages are but one part of Lopez's own synergistic digital entrepreneurship and brand storytelling skill.
As with any celebrity brand endorsement deal, there is a cultural interplay too, seen in the alignment between qualities of star image and company brand values that mutually reinforce each other. Lopez's multifaceted aspirational image of the independent, self-made female entrepreneur who combines ultra-feminine glamour and urban toughness - appearing to age gracefully across music, film and beauty - was used by Virgin Voyages to rebrand the traditionally older activity of cruising. They defined Lopez not as a ceremonial ‘Godmother’ figure (as their brand ambassadors had been named in the past) but as ‘Lifestyle Officer’; a dynamic creator helping to shape cruising ideology. This included an increased focus on entertainment, fitness, wellness and beauty, and influence over an exclusive high-end Limitless Voyage cruise in 2023 (virginvoyages.com) designed to speak to and empower women. This relationship was defined through conceptual frameworks of star-driven “trust” and “connection” (LBBonline.com). Even the comic ‘Jen AI behind-the-scenes’ described above worked in service of the overall Lopez brand, providing important performative space for her to demonstrate talent as she impersonated the backstage Virgin staff. In the framing of the Jen AI project as responsible AI, the cultural storytelling of Lopez's stardom (and that of others) is symbolically integrated into company narratives and technological agendas. Lopez explicitly aligns individual artistry with social intent in her own claims that her “artistic and social mission is to empower, inspire and entertain” (quoted in Fernandez, 2022), and that this can be observed through the work with Virgin Voyages. Therefore, Jen AI, JLo Beauty products and a high-end female-orientated cruise are all imbued with ideas of social responsibility. At every level, this digital campaign is positioned as a visible flagship of responsible agency, ethics and empowerment – from the creative technologists to the star to the online users.
Conclusion: Connected models for responsible synthetic media performance
Through the human-centred nature of AI-driven synthetic media performance, human appearance, voice, labour and issues of data use, image rights and consent are being embedded in worldwide production practices and resulting media outputs. This field helps ground AI as a social object, with performers as powerful and visible elements of the sociotechnical imaginary through which AI systems are developed, interpreted and assessed (Cave et al., 2020, p.8). Their precarious presence within these systems offers an avenue to advance ideas of responsible and ethical AI. All strata of performers stand as human elements connecting operational systems and media models, existing – to varying degrees of profitability and ethical treatment - within complex technological, legal, economic, and cultural frameworks. Mapping how media performers have been deployed by producers using AI technologies extends the dialogue beyond traditional boundaries of responsibility conceived between developers and users to focus on an under-examined middle ground where human performance and identity stand as valuable but exploitable raw data. This sits within convergent structures of media production, circulation, regulation and capital that reflect the requirement to situate responsible AI in pluralistic systems of competing values, interests, material conditions and power asymmetries (see Gianni, Lehtinen & Nieminen, 2022, p.2).
Emerging practices in synthesized performance emphasize individual responsibility and branding for performers in digital and AI-driven environments. This has fostered useful boundaries and negotiations that are pathways for responsibly integrating AI systems into workflows and consumer landscapes, despite directing the broader performative workforce (non-stars) to cultivate individuality as a mode of labour protectionism against the disruption of AI tools. Not least, it positions the importance of the individual data asset and the effective management of digitised identities as a central question, even if it has not yet solved this on an equitable footing for all. This focus on celebrification is not surprising given stardom's labour power and discursive value (Fortmueller, 2021; Turner, 2014). Known, individualised performers – stars, celebrities and other public figures – have long straddled positions between the human, the virtual, the social, the industrial and the symbolic that have also become necessary to the cultural narrative of AI.
Through the context of synthesized media performance, there are indications of what responsible AI looks like. There should be ethical recognition and use of an individual identity, defined as selfhood, performance, labor or image, or a combination of these. Those that ignore, misuse or disavow the ideology of the original individual in their pursuit of digital human data assets exploit and eliminate the human element for their own gains. Identity-based projects that belong to a broader ecosystem of licensed operation and production (like Jen AI) cohere to reinforce socio-cultural positions of responsibility, engagement and trust. Those that do not (like the fake Liz Bonnin or the Met Gala deepfakes) ultimately reinforce AI as a threatening force. Creating effective pathways to the right to protection, to contractual and licensing agreements, and to the potential for performers to benefit from the commercialization of synthetic performance derived from their work is central to models of responsibly managing performer-based AI outputs, as CAA appears to be implementing. Boundaries of responsible use are determined by applied practice and the dissemination of that practice to publics through different forms of cultural rhetoric (from Star Wars to the BBC).
The experiences of actors examined here point towards a series of interconnected actions that will aid the practical application of responsible and ethical AI.
First, ensure the human voice is heard in the middle of the workflow, not just at the beginning (engineer) or the end (user). Performers must be empowered as a creator class in synthesized media production and credited as creative workers rather than assumed to be a ‘natural data resource’ to be mined and exploited. Collaboration must be balanced across sectors: between the entertainment and computational industries, but also between media producers (e.g., studios) and creative workers (e.g., actors).
Second, further recognize a model of generative AI media production in which the middle ground between technological developer and consumer use is acknowledged as complex; where ‘data’ is driven by and designed from human endeavor, identity and labor. To do this, it is crucial to actively separate the functions of ‘service’ and ‘identity’ in planning discussions, or to consider more fully what the impact of aligning these two often competing ideals would be.
Third, situate solutions of identity-based data management as systemic industrial practice, not individualized self-management, but recognize that those with the most bargaining power (as valuable star brands) have the ability to influence emerging frameworks around boundaries of appropriate use, accessibility and developing audience engagement and trust that can be adopted more widely. Fourth, establish more effective legal and regulatory frameworks of protection able to deal with the pluralistic elements and qualities that identity synthetization must negotiate – from linking data ownership with an actual performance (not its recording) and other ill-defined conditions of employment to the complex standing of performers as economic, cultural and data-based entities.
Through such actions, synthetic media performance has the distinct potential to stand as a model of human-centered, socially responsible AI, although it is far from this at the current juncture. In engaging more fully with performer labor and celebrity identity beyond a focus on dystopian/utopian narratives of AI, this field has the foundations to be seen as accountable – forging a continuous chain of responsibility and informed participation across its decision-making mechanisms; transparent – with methods of data collection, training, storage and access that can be monitored, and making clear all stakeholders and their interests; and sustainable, fair and inclusive – aiding, not eliminating, an already-precarious workforce.
Footnotes
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Ethical statement
An ethics review was unnecessary for this study as it did not involve directly collecting data from human participants. It uses an archival methodology based on information freely available in the public domain as an open source of data.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
