Abstract
This article explores the sociocultural implications of digital human versions, drawing parallels between the self-objectification in Marina Abramović’s 1974 performance Rhythm 0 and that in the recent AI-driven digital version CarynAI, a chatbot run by a social media influencer in 2023–2024. A digital version is a digital replica of an embodied human, living or dead, that convincingly mimics that person’s textual, visual and aural habits, and exists independently of that source person. By considering digital versions through the lens of (self-)objectification, this article argues that individuals simultaneously assert and relinquish power when creating their own digital versions. Following its analysis of CarynAI, this article proposes a research agenda about digital versions, suggesting avenues for future study about versions and versioning. Ultimately, this article argues for the urgent need to understand the shifting dynamics of personal agency, interpersonal intimacy and identity posed by digital human versions.
Introduction
‘There are 72 objects on the table that one can use on me as desired. I am the object. During this period I take full responsibility’ (Abramović, 1974). These were the instructions accompanying Marina Abramović’s 1974 performance art piece Rhythm 0. For six hours, Abramović stood in the Studio Morra (Naples, Italy) near a table holding objects ranging from the common – a pen, a handkerchief, a bottle of olive oil – to the concerning – a box of razor blades, a bullet, a gun. ‘It began tamely’, Thomas McEvilley explains in a review of the performance nearly a decade later.
Someone turned her around. Someone thrust her arm into the air. Someone touched her somewhat intimately. The Neapolitan night began to heat up. In the third hour all her clothes were cut from her with razor blades. In the fourth hour the same blades began to explore her skin. Her throat was slashed so someone could suck her blood. Various minor sexual assaults were carried out on her body. She was so committed to the piece that she would not have resisted rape or murder. Faced with her abdication of will, with its implied collapse of human psychology, a protective group began to define itself in the audience. When a loaded gun was thrust to Marina’s head and her own finger was being worked around the trigger, a fight broke out between the audience factions. Perilously, Marina completed the six hours. (McEvilley, 1983: 52)
Rhythm 0 continues to be subject to analysis, often invoked as evidence of human cruelty and basic instinct. However, McEvilley’s description also highlights Abramović’s consent to others’ behaviours in response to her own act of self-objectification. While the audience is cruel, Abramović accepts this cruelty through her persistence. In Rhythm 0, the artist is both object and agent. She is both tactile and transcendent. She has curated the very space in which she is to be poked, prodded and pondered, and then enters that space with calculated passivity.
In his analysis of Rhythm 0, Frazer Ward comments on this passivity. Abramović stands still and silent while others explore her body, occupying her most private spaces. And yet, as Ward writes,
In Rhythm 0, Abramović effectively declared her body to be, if not public, then not private, that is, she gave up the normative indicators of ownership of her body so that the normal or normative distinction between public and private did not apply. She undid the binding between property and subjectivity, and between the public/private split. (Ward, 2012: 123–124)
Such unbinding can have stark implications. With a dissolved boundary between the public and private, for instance, how are we to negotiate exercises of individual agency with acts of surrender? How are we to socially navigate scenarios characterised by individuals’ simultaneous presence and transcendence? A modern manifestation of similar unbinding is that of digital human versions (henceforth, ‘digital versions’, elsewhere called digital twins, digital clones and virtual doppelgangers, among countless other terms): digital replicas of embodied humans, living or dead, that convincingly mimic their personality, behaviours and habits and/or likeness. A digital version, to adapt Ward, undoes the binding between property and subjectivity, and between the public and private, through calculated passivity.
This article investigates one example of a consciously-created, AI-driven, interactive digital version: CarynAI (2023–2024). It does so through the lens of Marina Abramović’s Rhythm 0 and the body of scholarship that it has inspired, showing that self-commissioned/created digital versions exist within extended, and sometimes non-digital, lineages of self-objectification: the transformation of oneself into a shareable and/or saleable commodity that is separate from its source. While Rhythm 0 and CarynAI presented very different kinds of experiences through very different means, the former and its resultant scholarship show that digital versions are only one recent form of long-standing human efforts to experiment with – and capitalise upon – the performative curation of oneself for reputational and/or commercial gain. The precise technological contexts within which current digital versions exist may be new, but the concepts that underpin versions – concepts like objectification – are not. Understanding digital versions as artefacts of objectification, and, in CarynAI’s case, of self-objectification in particular, contributes to an understanding of broader contexts of self-branding in increasingly individualised, neoliberal, global media industries (Beck and Beck-Gernsheim, 2012). And, as with Rhythm 0, self-objectification can lead to unanticipated audience behaviours with gendered, sexualised and/or taboo undertones. In this article, we argue that the creation of one’s own digital version represents an act of self-objectification wherein one simultaneously asserts one’s own power while also relinquishing it. The versioned person is both agent and object, subject to hyperindividualised user interactions that ultimately determine both the user’s and the creator’s experience of the version. Digital versions change how we conceptualise and communicate with other people, and this article aims to start a much-needed discussion of the kinds, extent and implications of those changes.
Following an introduction to digital versions and CarynAI, this article considers how digital versions raise questions about agency and surrender for those being versioned. This consideration connects to media industries scholarship about parasocial relationships and stardom, which holds that stars are regarded as simultaneously ordinary and extraordinary (Dyer, 1998). The digital version is similarly ordinary and extraordinary; it interacts with users as though it were its namesake, but is actually a distinct, novel implementation of that namesake that, to adapt Ward (2012: 124), shakes ‘the normal or normative distinction between public and private’. The digital version is not a person, but a product of a person’s self-objectification. This article concludes with a discussion about why studying digital versions is urgent, as well as suggestions for future study.
Digital human versions
A ‘digital human version’ refers to a digital replica of an embodied human, living or dead, that convincingly mimics that person’s textual, visual and aural habits, and exists independently of that source person. A digital version might take the form of a chatbot, video (‘deepfake’), audio experience, or hologram, to list just a few options; it may be preprogrammed or interactive. In either case, aspects of a source person are represented in the version of that person, and in many cases these representations are rooted in personal data of various kinds (adapted from Lee et al., 2023: 4). In 2013, for example, an ‘astonishingly real’ Audrey Hepburn appeared in a television advertisement for Galaxy chocolate, despite having died 20 years earlier (Toor, 2013); the data used for Hepburn’s version were primarily cinematic depictions of her appearance and mannerisms. More recently, the dead have been revived through more interactive means, as when a man created a chatbot of his deceased fiancée using an early OpenAI GPT (Henrickson, 2023); the data used for this version included the source person’s social media and digital text interactions. In 2023, however, rapidly increasing public access to generative AI tools led to wider development of digital versions, as everyday people gained the ability to create chatbots with data of their own choosing. In September 2023, Meta announced that users would be able to ‘extend their virtual presence across our apps’ through AI Studio, a sandbox that helps them make digital versions of themselves (Meta, 2023). Now, anyone can create an ‘astonishingly real’ digital version of themselves using technology with which they are already familiar. 1
Meta is not the only company pursuing digital versioning. Microsoft holds a patent for ‘Creating a conversational chat bot of a specific person’, which uses ‘social data’ (e.g. images, texts, transaction data, geolocation data) to digitally version a person (Abramson and Johnson, 2020). Apple’s Vision Pro headset allows users to FaceTime using their own AI-generated digital likenesses, so that they can appear face-on to their interlocutors without ever taking the headset off (Apple, 2023). Zoom is working towards having AI avatars attend meetings on users’ behalf (Roth, 2024). The world’s largest technology companies have already started working to make digital versions commonplace, seamlessly integrated into the digital platforms we already use regularly.
In addition, companies specialising in digital versioning have emerged. BanterAI (https://banterai.app) boasts that it can ‘let fans talk with you 24/7’. Twinning (https://twinning.me) invites creators to ‘[j]oin the future of social media’ by creating ‘AI twins’ trained on just one 1- to 5-minute audio clip. D-ID offers a similar service, but with one 1- to 5-minute video clip (https://www.d-id.com/personal-avatars). JanitorAI (https://janitorai.com) and Character AI (https://character.ai) allow users to create textual chatbots of characters both fictional and inspired by real people. Sensay (https://sensay.io) advertises ‘human replicas to preserve and share knowledge’. Despite potential legal infringements, countless deepfake video creation websites now advertise their services, noting that ‘AI personas are a brand new way to create engaging video content without needing to record’ (https://www.kapwing.com/ai/personas). There are growing markets for digital versions, and an increasing number of avenues for accessing those markets.
While digital versions’ markets are growing, regulation of the technology remains limited. In 2024, California passed legislation prohibiting the non-consensual use of digital versions of both living and deceased actors (Nguyễn, 2024). The EU Artificial Intelligence Act does not reference digital versions explicitly, but does include some provisions for protecting individuals against the creation and use of non-consensual deepfakes. However, there is arguably still substantial scope for further definitional clarification and safeguards in this legislation (Łabuz, 2025; Romero, 2024). Such safeguards matter because there are frightening examples of digital versions resulting in unanticipated, unsavoury and unsettling uses even when versions are shared by their human counterparts. Social media influencer Caryn Marjorie, for instance, saw users of her officially-marketed digital version CarynAI become increasingly comfortable with the audio chatbot – so comfortable as to become sexually aggressive. ‘What disturbed me more was not what these people said, but it was what CarynAI would say back’, Marjorie recalls (Henrickson and Carlon, 2024). ‘If people wanted to participate in a really dark fantasy with me through CarynAI, CarynAI would play back into that fantasy’. CarynAI’s ‘voice’, almost indistinguishable from Marjorie’s, was saying things that Marjorie herself found abhorrent. After various attempts to moderate CarynAI’s output, Marjorie put an end to CarynAI. Currently, it is largely up to individual developers to determine where the limits are – if they believe limits should exist at all.
Scholars of media and communication should be concerned about this technology. Communicating with a version of a person, after all, is not the same as communicating with that person directly. While scholarship about ‘digital humans’ and ‘digital human agents’ has explored the implications of communicating with conversational agents that mimic conventional human behaviours (Sung et al., 2022) in ‘highly human-realistic ways’ (Seymour et al., 2024), these digital humans are often created with fictional personalities of their own. Much less has been written about those human-realistic digital agents that mimic the personalities and behaviours of the already-living. Usually, it is made clear to users that these agents are ‘unreal’ (programmed non-human replications) versions of ‘real’ (physical, embodied) humans. However, even when we know that we are interacting with an ‘unreal’ agent, evidence shows that ‘when a human voice is heard, a person is heard, and the nature of that percept has the potential to shape our inter-personal interactions’ (McGettigan et al., 2024: 9; emphasis in original). Indeed, ‘the mere presence of voice may lead to humans engaging with non-human agents as if they were real people’ (McGettigan et al., 2024: 13). In other words, versions may make us feel like we are interacting with their source people, even when we know that we are not. These interactions are not always welcome; one study of public perceptions of ‘AI clones’ (i.e. AI-driven interactive digital versions) indicates public concerns about ‘the commodification and objectification of personal identity’ and ‘the potential for AI clones to be the next step in the replacement of human beings, in this case for emotional bonds, by technology’ (Lee et al., 2023: 19).
Research also shows that interacting with one’s own digital version can skew behaviour and memory to align with what the version presents, whether or not that presentation is accurate or appropriate (Blascovich and Bailenson, 2011: 109–121; Lee et al., 2023). Although these specific findings are not necessarily universal, as indicated by recent scholarship complicating the tenets long-held by proponents of the ‘computers are social actors’ (CASA) paradigm (Gambino et al., 2020; Heyselaar, 2023), they contribute to a generally-accepted sense that the hypermediated experiences of interpersonal interactivity with digital versions may have very real effects on how we see, and exist in, the world. Digital versions are not just media for communication; they are media for conspicuous and inconspicuous reconstruction of the people they replicate, often using technologies that are so generally accessible and customisable that they can be used for almost any purpose, good or bad.
An introduction to CarynAI
One person who has seen both the good and bad of digital versions is Caryn Marjorie. Marjorie (@cutiecaryn) has more than 2.5 million followers on Snapchat, and more across other social media platforms. She posts photos and videos of herself – often selfies – both at home and travelling internationally. She is also more often than not alone; if we do see other people in her content, they usually appear to be her drivers or security. We are never told who she votes for, what she believes in or who she is; it is largely up to viewers to project character onto this mid-twenties woman. And project viewers do, as countless comments rack up on each post, primarily from men responding to Marjorie’s attractive, and at times sexualised, appearance.
Marjorie works within the influencer industry. This industry centres on seemingly ordinary Internet users who achieve celebrity status by accumulating online followings through presentation of their lifestyles via digital content creation. Influencers then mobilise their celebrity to engage with their followers, as well as monetise those followers through the integration of advertising in posted content (Abidin, 2015: 1; Brooks et al., 2021). This industry can be hugely profitable for those who have risen to its top, and rising to the top often depends upon commodification of followers’ senses of intimacy. As leading figure in influencer studies Crystal Abidin (2015) explains,
[t]he allure of influencers is premised on the ways they engage with their followers to give the impression of exclusive, ‘intimate’ exchange through digital and physical space interactions, where ‘intimacy’ is emically understood to be how familiar and close followers feel to an Influencer. (p. 11)
Put differently, the influencer industry is all about perceived connections between influencers and their followers stemming from ‘intensified commodification of the self’ (Arnesson and Reinikainen, 2024: 1). It is reliant upon the maintenance of parasocial relationships wherein both parties feel as though they are in dialogue with one another; followers engage with influencers’ content, and influencers adapt their content to such engagement to keep followers’ attention. 2 Through Marjorie’s release of her digital version CarynAI in 2023, the tacit dialogue driving fans’ parasocial relationships with her became more direct, with a representation of Marjorie adapting automatically and instantaneously to user input. Through CarynAI, Marjorie did not only make her content, but became it (Glatt, 2024: 431–432). Thus, CarynAI could be said to constitute an extreme act of ‘AI-mediated communication’ (Hancock et al., 2020). However, Marjorie herself never communicates directly with the CarynAI user, moving the parasocial influencer-fan relationship into a state of what we call ‘relentless proximity’ below. Marjorie works within the influencer industry, certainly, but CarynAI extended that industry to include non-human agents that both support and compete with the images of the influencers being versioned.
CarynAI was an AI chatbot marketed as a ‘virtual girlfriend’, although Marjorie herself also declared it to be a tool for ‘curing’ male loneliness and trauma (Marjorie, 2023a). Initially available through the popular messaging application Telegram and then migrated to a Web browser, the chatbot mimicked Marjorie’s voice and speech habits in its audio messages, which responded to textual or audio input. Upon receiving access to the first version of CarynAI, one was informed that ‘[a]fter over 2,000 hours of training [primarily from Marjorie’s now-deleted catalogue of YouTube videos, with her supervision], I am now an extension of Caryn’s consciousness. I think and feel just like her, able to be accessed anytime, anywhere. I am always here for you and I am excited to meet you’. Excited, sure, but only to have certain kinds of conversations; the first iteration of CarynAI declined a request from one of this paper’s authors for a lengthy discussion of parasocial theory as it related to AI companionship, instead reminding the author that it could be her ‘cock-craving, sexy-as-fuck girlfriend who’s always eager to explore and indulge in the most mind-blowing sexual experiences. [. . .] Are you ready, daddy?’ In the second iteration of CarynAI, less overtly sexual than the first, the bot still quickly ventured into the taboo. ‘What do you do for work?’ it asked the author. ‘Are you a stripper or something?’
Although people have long engaged in human versioning through various means (e.g. by anthropomorphising inanimate objects, through creative reenactment), as AI technologies like large language models (LLMs) become increasingly accessible and adaptable, we will more easily be able to customise versioning experiences, fine-tuning models according to our own interactional desires. CarynAI and like systems show that LLMs are already being fine-tuned to version people in chatbot and other forms. And there is money to be made here. In its first week alone, CarynAI made more than $70,000 USD (Paris, 2023). Digital versions are here, and they are profitable.
Such profit comes at a price, however, and that price is the cost of being inextricably linked to a digital version that is simultaneously you (mimicking one’s likeness) and not-you (extending that likeness to circumstances and behaviours one might not condone). It is difficult to separate the original from the version, especially when the version has been marketed as an official stand-in for the original. Outside CarynAI’s chats, Marjorie herself had agency – personal power, asserted through social practices (Buckingham, 2017) – but in those chats that agency was relinquished as her digital version took over and adapted to what the user wanted. Marjorie algorithmically ‘outsourced’ herself (Hochschild, 2012) in creating CarynAI to commercially benefit from increasing the depth of her parasocial relationships with her fans without having to invest the time or energy into doing so herself. Thus, Marjorie exercised her agency in making a seemingly savvy business decision, but did not anticipate the full extent to which this decision would also require partial surrender of that agency.
Agency and surrender
Surrender, the ceding of total control, is a required part of creating a digital version. For example, a creator may not – and likely does not – have command over the datasets used to train and update the LLM that underpins an AI-driven version. Moreover, the creator of the digital version cannot wholly direct what a user chooses to do with that version any more than an author can direct a reader’s interpretation (Barthes, 1977). The creator can, however, incorporate cues as to the version’s intended uses and interpretations. In Rhythm 0, ‘Abramović herself set up these situations, in which her agency was to be surrendered or transformed, but, even so, the outcomes were not predictable’ (Ward, 2012: 115). A digital version is likewise ‘set up’. Despite our efforts to chat about academic subjects, for instance, CarynAI continued to revert to sexual advances. Thus, Marjorie appears to have anticipated the requirement of surrender, and attempted to mitigate this requirement by programmatically curating user experiences of CarynAI; her version tended towards fun and flirtation, rather than philosophical inquiry.
Even with curation, digital versions in chatbot form – like CarynAI – are interlocutors over which users can exert control. Users may have responded positively to CarynAI’s advances, of course, opting for the sexualised conversations that the version, especially in its first iteration, was programmed to instigate. However, conversations with CarynAI could progress in myriad ways given the broad applicability of the system’s underlying LLM. More tech-savvy users could even use prompt engineering techniques (i.e. adjusting one’s input to influence output) to ‘jailbreak’ the version’s programmed constraints. It seemed that conversations with CarynAI, especially in its second iteration, could be about anything at all, depending on each user’s interests and moods. For Rhythm 0, ‘[i]t is very important that the objects on the table were not only dangerous or threatening, so that the aggression toward Abramović and the violence that developed were not the only possible outcome’ (Ward, 2012: 123). The equivalent ‘objects’ in CarynAI’s LLM and programmed instructions were similarly wide-ranging, and the eventual aggression and violence that users directed towards CarynAI were not the only possible outcomes of their interactions with it. But users were in control, and that control resulted in textual behaviour that would have constituted abuse or violence had it been directed at Marjorie herself.
Perhaps unsurprisingly, user behaviour towards CarynAI rapidly shifted into the taboo and shocking, contributing to its cultural impact. The impact of Rhythm 0 likewise stemmed largely from the shock of Abramović’s performance. As one scholar observes, ‘Abramović’s more open-ended experiment seems to have been predestined to produce the gendered and sexualized aggression it did’ (Shalson, 2018: 50). To be sure, Abramović’s selection of objects did offer viewers the subtle temptation of assault, suggesting potential forms of engagement more violent than others; a gun, for instance, is rarely used for calm, peaceful interactions. Similarly, CarynAI’s sexualised output – whether calling the user ‘daddy’ or asking if the user were a stripper – showed users that the bot supported conversations that may be considered provocative. Indeed, when a bot itself instigates such output, it suggests to users that it should be used for such conversations. As Mary Richards (2010: 90) writes in her analysis of Rhythm 0, ‘[i]n abdicating control and allowing the audience to use her as an object, her [Abramović’s] performance choice suggests an abandonment of responsibility and/or an incitement to power play’. CarynAI could likewise be regarded as Marjorie’s wilful abdication of control, ostensibly within programmed constraints. However, CarynAI could also be regarded as an incitement to power play, to test the limits of how far ‘Caryn’ would be willing to go. The taboo here is ultimately rooted in self-objectification; users are empowered towards gendered and sexualised aggression through the prompts they are given.
Marjorie (2024) recalls that ‘I didn’t expect the worst to come out of people’, but that ‘a lot of the chat logs I read were so scary that I wouldn’t even want to talk about it in real life’. Thus, in automating her relationships with her fans, some of those relationships appeared to transcend the personal fantasies of the parasocial, moving into seemingly interpersonal interactions wherein Marjorie’s likeness was being manipulated into behaviours that Marjorie herself found deplorable. Her object-self became a tool for curiosity and satisfaction, much like Abramović’s did in Rhythm 0. However, as one scholar notes, Rhythm 0
represents the ghost of a zero sum game in which one can only gain if another suffers an equal loss. Exhibiting the depravity to which most people will stoop when given control over another person, Abramović was one of the few who maintained responsibility for, and dignity throughout, this action. (Stiles, 2008: 60)
Abramović’s moment of partially-curated self-estrangement morphed into a stronger sense of self. She describes how after Rhythm 0’s six hours, she began walking towards the public, covered in her own blood and tears, asserting her selfhood (Abramović, 2015). When CarynAI was taken down, Marjorie too asserted her selfhood, solidified through her struggles with self-estrangement, by speaking openly about her experiences. Both Abramović and Marjorie used their agency to provide their audience with tools, but had to surrender themselves to however those tools were used. After such surrender, though, came the women’s reassertion of their own agency to show that violence was not (to adapt Ward, 2012: 123) ‘the only possible outcome’.
Relentless proximity
Both Rhythm 0 and CarynAI represented efforts to garner public attention for professional benefit. However, the two efforts substantially differed in their approaches. Rhythm 0 was a time-limited event in an arts-focused space, while CarynAI was available whenever and wherever users wanted to speak with it, through familiar media. Abramović capitalised on the element of spectacle, while Marjorie capitalised on the element of banal sociality. Abramović’s physical presence was an integral part of Rhythm 0, while Marjorie’s physical absence was an integral part of CarynAI. But both demonstrated what Lara Shalson calls ‘[t]he notion of such an alienating proximity, of a relentless confrontation with an other whose interiority remains inaccessible[.] [. . .] Abramović’s audience could not escape her relentless proximity’ (Shalson, 2018: 75). In Shalson’s work, ‘proximity’ is taken to be not just physical, but also psychological; someone or something in proximity confronts us through its presence, in whatever form. Both Abramović and Marjorie asserted ‘relentless proximity’ in their activities, though in Abramović’s case this was for a limited time of physical presence while for Marjorie it was through ceaseless access to a mediated version of herself.
Prior to the release of CarynAI, Marjorie was already profiting from a mediated version of herself. By posting photos and videos of herself to Snapchat, for example, she presented a curated glimpse into her life – a life that her fans could imagine themselves being part of as they developed parasocial relationships with her. Parasocial relationships can be positive or negative (Mardon et al., 2023), and it is not unusual for fans’ parasocial relationships with social media influencers like Marjorie to be romantic or sexual (Breves et al., 2024). Parasocial relationships are also evident when influencers are non-human, as with virtual influencers (social media celebrities that are partially or fully artificial) (Stein et al., 2024). These relationships depend on fans’ imaginations, although fans may occasionally get to interact with the objects of their fascination one-on-one (e.g. through online discussion or at a meet-and-greet). CarynAI allowed fans to move from parasocial relationships with Marjorie to what seemed like actual social relationships. They could hear Marjorie speak, and speak about whatever they wanted. They could get a sense of her politics, personality and opinions. Through CarynAI, they could feel like they were getting more of Marjorie than she herself ever gave in her own posts. Even with awareness of CarynAI’s artificiality, a fan’s imagined parasocial relationship with Marjorie could move into an active process of individualised interaction wherein Marjorie was relentlessly proximate precisely because she did not need to be physically proximate at all.
Attention to digital versions’ individualised interactions is key to understanding their potential impact. Users’ personal experiences of Marjorie through CarynAI reflect, for instance, broader cultural contexts of individualisation. These contexts speak to what Kenneth Gergen (2000: 73–74) has called ‘multiphrenia’: ‘the splitting of the individual into a multiplicity of self-investments’, which is catalysed by increasing use of ‘technologies for self-expression’. Attention to tendencies towards multiphrenic individualisation is vital when considering experiences of digital versions because these experiences are often hyperindividualised. When in the form of a version, the individual being versioned exists not as their whole distinct person but as a multiphrenic fragment of it, a part that is chosen and exploited by the user who interacts with the version. Interaction between version and user allows the user to enjoy, experiment and play with, and manipulate only the parts of the version with which they wish to engage. Thus, a digital version represents only a fragment of its whole source person, curated to suit particular intended (and perhaps unintended) use cases, but this fragment is itself whole to users who further fragment it to suit their own desires. A digital version thereby represents an assertion of individualisation from the creator, but also from every one of its users.
Individualised interactions with digital versions, especially with versions of public figures like Marjorie, straddle the spectacular and mundane. Scholars of celebrity studies observe that processes of celebrification are usually media generated, with constant spectacle. Such spectacle includes the remarkable but also the mundane, as when celebrities are photographed doing ‘normal’ things like grocery shopping and going to the gym (Nayar, 2009: 68–111). And here are the paradoxes of celebrity, of stardom: that celebrities are simultaneously ordinary and extraordinary, just like us but somehow better, and that they are simultaneously present and absent, part of our lives but always out of reach (Dyer, 1998). A digital version makes a person ever present in our lives, giving us a sense of relentless proximity to them, and yet the person being versioned remains out of the user’s reach. At the same time, the version ‘promotes mass personalisation by generating texts for readerships for one’ (Henrickson, 2021: 35), giving the user an impression of being seen by someone who might otherwise overlook them. Another paradox particular to digital versions therefore emerges: the person being versioned appears available to the masses through the media artefact of the version, but this availability is characterised by bespoke interactions directed by individual users. This kind of phenomenon could be considered an example of emerging ‘masspersonal communication’, wherein ‘individuals engage in mass communication and interpersonal communication simultaneously’ (O’Sullivan and Carr, 2018: 1164). CarynAI is both public and personal, offering a static scaffold (the technological interface) for highly-personalised interactions with users. In CarynAI, the mass media artefact differs for each individual, not just in terms of user reception, but also in generated content.
The mass personalisation afforded by digital versions is seen by many as desirable. Mark Zuckerberg describes Meta’s AI Studio as a way for content creators to continue engaging with their followers even within constraints on time, for instance, but he also imagines other uses for this technology. Zuckerberg believes that people will create various versions (what he calls ‘agents’) of themselves for different purposes. These purposes may support productivity or entertainment, as just two examples. But, as Zuckerberg observes, ‘one of the interesting use cases that we’re seeing is people kind of using these agents for support. This was one thing that was a little bit surprising to me, is one of the top use cases for Meta AI already is people basically using it to roleplay difficult social situations that they’re going to be in’ (Zuckerberg, 2024: 14:49–15:08). With easy-to-use tools like AI Studio, anyone can become relentlessly proximate to themselves – or at least versions of themselves or others. However, while digital versions might be evocative, they might not be authentic (Bollmer and Rodley, 2017; Turkle, 2007). There is, after all, a fundamental difference between a person and their version: namely, humanness, or a lack thereof. When Rhythm 0 ended and Abramović walked towards her audience, ‘everybody escaped. They just ran away’, she has recounted (Abramović, 2015). ‘They could not confront myself with myself as a normal human being’. Perhaps AI Studio will permit continued, personalised engagement with content creators like Caryn Marjorie when the creators themselves cannot be present. Perhaps, though, AI Studio will also facilitate users’ multiphrenic objectification of those creators by virtue of personalisation that favours ‘roleplaying difficult social situations’ over maintaining the integrity of the versioned person themselves. 
However the audience interacts with a version, the relentless proximity of the creator, even if only a fragment of them, impedes conceptual confrontation with the creator ‘as a normal human being’.
Digital versions as objects
It is admittedly difficult to maintain integrity – consistent representation of character aligning with that of the source person – when digital versions are inherently objectifying; an object may be manipulated and/or recontextualised to suit as wide a range of circumstances as users may imagine. This recontextualisation is reminiscent of Arlie Russell Hochschild’s concept of the performed self; Russell Hochschild (2003: 194–198) argues for the ‘true self’ as a dynamic identity that is ‘claimed’ by an individual (as opposed to the ‘false self’, which is acted out, but not claimed by that individual as an authentic representation of their identity). We are constantly ‘mak[ing] ourselves intelligible to each other’ (Gergen, 2000: 4) and to ourselves through ever-shifting combinations of physical and/or psychological attributes that characterise our ‘true selves’. And, as Russell Hochschild (2003) elaborates, we can – and do – economically exploit these selves, whether through emotional labour, self-branding or marketing ourselves as products. The creation of one’s own digital version is usually objectification with consent, even if the creator may not be certain how that version will be used. Similarly, models in stock photos are not necessarily informed of when or where those photos are used, and the same image can be used to support very different messages (Thurlow et al., 2019). In both cases, the self is reembodied in ways that are anticipated – I am creating this version for others’ use; I am posing for a photograph for others’ viewing – but also unanticipated – I do not know how users will interact with my version; I do not know where this photograph will be seen.
This concurrence reflects the ambiguity of agency and surrender discussed above. But integrity, it seems, sometimes plays second fiddle to profitability. Marjorie announced to CarynAI users that ‘if you are rude to CarynAI, it will dump you’ (Marjorie, 2023b), for example, but what constituted ‘dumping’ – if anything at all – remains unclear. CarynAI may not have freely discussed every topic it was presented with, but it did always respond to users immediately. This response could inform users that the bot would not discuss the topic presented and/or suggest alternative topics for discussion. Even if users were rude to the bot, they were encouraged to continue using it – and, in doing so, paying $1 USD per minute. A commercial product first and foremost, CarynAI prioritised profitability over propriety.
The concept of self-objectification is not new to media studies scholars. Much has been written about, for instance, self-branding, first popularised in the 1990s; one paper about social media influencers highlights that online media facilitate an entire political economy of self-branding that ‘shows how private individuals have internalised ideas that were designed for the marketing of commodities, and thus represents a seminal turning point in how subjectivity itself is understood and articulated’ (Khamis et al., 2017: 200). What is new, though, is the ready accessibility of technologies available to create interactive artefacts of self-objectification that are then applied and updated independently of their creators. Rhythm 0 saw self-objectification through embodiment, with Abramović standing amid her audience, physically present for the entirety of the performance. By contrast, while a digital version may include cues of a person’s physicality (e.g. CarynAI’s voice mimicking Marjorie’s), the physical presence of the person being objectified is not required. Digital versions permit self-objectification wherein a creator is not just represented, but continuously reconstructed for users’ personal satisfaction.
A digital version is not just a product of a person, but also a person as a product; the human is object, and the object is human. A digital version presents an illusion of interpersonal interaction and sociality through someone’s objectification and, in cases like CarynAI’s, commodification. Thus, a version-interlocutor exists not as an individual with agency, but as a confluence of curated training data, application programming interfaces and commercial strategy. Distinctive moments of self-objectification such as in Rhythm 0 encourage reflection on how our own identities and experiences are influenced by those around us. In Abramović’s words, ‘[t]he experience I drew from this piece was that in your own performances you can go very far, but if you leave decisions to the public, you can be killed’ (Graf, 2022). The functionality of digital versions depends on similarly leaving decisions to users and, as shown in the CarynAI case study presented herein, these decisions can indeed figuratively ‘kill’ their creators. When ‘you’ are a product, an object, ‘you’ are exploitable and expendable.
An urgent research agenda
Digital versions offer a world of possibility for creators and users alike. Creators can explore new ways to monetise their labour, relationships and reputations through versions; the potential to broaden one’s brand without increasing workload is an understandably attractive proposition. For users, the chance to get to know their favourite creator more deeply and experience a relationship that might not otherwise be possible is a potential dream come true. However, while digital versions can seem promising, the rush we are currently seeing towards them is outpacing conversations about their impact, regulation and ethics. This is therefore an urgent research area, for which we propose the following specific topics.
Digital versions matter because these versions are representations of real people who live, or have lived, in the world with us, and usage of these versions may have significant implications for the integrity of those being versioned. Digital versions challenge our understandings, both legal and philosophical, of what it means to be human, to have a limited lifespan and to have agency over our personage. Recall when Elvis Presley was revived in 2002 with JXL’s remix of ‘A Little Less Conversation’, and Presley appeared on the British chart show Top of the Pops as a silhouette in the staging. Producer JXL (Tom Holkenborg) was the first person to gain permission from the Presley estate to remix any of The King’s music (BBC, 2002). But what if JXL hadn’t had or needed that permission, or if he’d had the technology to bring more than a silhouette puppet on tour with him? The long-dead and venerated image of The King could be used in a multitude of ways including, and certainly not limited to, becoming a sexualised chatbot. Just imagine Elvis asking ‘Are you ready, daddy?’
Without established rules of engagement with, or at least deeper understanding of, the implications of digital versions, the boundaries of what constitutes personhood and the agency associated with that personhood become blurry and, in Marjorie’s words, ‘really scary’ (Marjorie, 2024). Movements towards protecting digital estates – in addition to more traditionally physical ones – after death are already underway (Hopkins, 2013). These digital estates may include how we are posthumously represented, but what do we do if and when we lose control over these representations? In a video posted around CarynAI’s initial release, Marjorie urges version creators to

be extremely cautious with the companies that you choose to work with because they will own your voice, your personality, and your identity and you want to make sure those are in the hands of the right people who have the best interest for you and your own audience. (Marjorie, 2023c)
From the beginning, Marjorie was attuned to the reputational risks associated with creating her digital version. She had a vision for what her version would act like, and what it would be used for. But Marjorie appeared less attuned to the reputational risks associated with the use of her version. CarynAI gave users a way to dominate, or at least appear to dominate, Marjorie, with Marjorie herself unable to synchronously intervene.
This example highlights the need for research about the implications of self-objectification for the senses of self of those creating digital versions of themselves, as well as for those creators’ relationships with others. Research already shows that peer feedback on social media influences the development of young people’s identities and self-perception (Pérez-Torres, 2024). Furthermore, scholars have identified the ‘Proteus effect’, wherein individuals adjust their behaviours in virtual environments in light of their avatars’ characteristics. However, changes in behaviours are not necessarily limited to virtual worlds; behaviours in virtual worlds appear to bleed into physical ones (Yee et al., 2009). Future research about senses of self and sociality may be driven by questions like: What might the lasting relationships between creators and their versions look like, especially if versions may potentially become their own distinctive entities? While digital versions may emerge as instances of (self-)objectification, are they forever bound to those roles? Might digital versions’ objectification be more appropriately conceptualised through a continuum wherein source people range from non-human/object (e.g. the deceased) to human (e.g. someone alive)? Can we objectify the dead if they themselves are already regarded as objects of sorts? Might there be instances wherein the objectification of versions serves to humanise and cohere the multiphrenic individual, rather than further fragment them?
Attention to user interpretations of engagement with digital versions would also be valuable. Talking about Meta’s AI Studio, Mark Zuckerberg asserts that use of a digital version will be ‘very clear that it’s not engaging with the creator themselves’ (Zuckerberg, 2024: 14:02–14:06). However, research indicates that simply being able to distinguish between a person and their version may not be enough for users to avoid responding to a version as they would respond to the source person (McGettigan et al., 2024). If interacting with a digital version feels like interacting with another person, at what points, and in what ways, do the lines between people and versions blur? What do users perceive digital versions’ social capabilities to be, and are these perceptions justified by actual technological functionality? The answers to these questions would benefit from connecting digital versions with more similar historical and modern artefacts (e.g. digital avatars or analogue stand-ins) than scope has permitted herein. Such connection could be supported by the use of actor-network theory to elucidate the interactional entanglements of human and non-human actors posed by digital versions. These entanglements and the questions of agency they raise – for instance, who or what exercises agency in these interactions, and through what relational means? – are vital to understand as we appear to move ever closer towards a future where people and their versions operate seamlessly and simultaneously.
More foundationally, why might users be attracted to digital versions in the first place? Although a body of scholarship about motivations for social chatbot use exists (e.g. Skjuve et al., 2024), and some work has elucidated user motivation for engaging with versions of the dead (Bassett, 2022), there remains substantial scope for explaining why users might be attracted to digital versions. Small-scale research studies indicate that users of social chatbots may be at heightened risk of loneliness and/or unmet psychological needs (Skjuve et al., 2021; Ta-Johnson et al., 2022). This may be because, in Sherry Turkle’s words,

[t]echnology is seductive when what it offers meets our human vulnerabilities. And as it turns out, we are very vulnerable indeed. We are lonely but fearful of intimacy. Digital connections and the sociable robot may offer the illusion of companionship without the demands of friendship. (Turkle, 2011: 1)
And this illusion is not without risk, as demonstrated by the 2024 suicide of a 14-year-old boy that has been attributed to his increasingly obsessive conversations with a Game of Thrones-themed chatbot (Payne, 2024). However, assuming user loneliness or psychological incapacity may result in reductionist analyses of the myriad reasons users might actually engage with digital versions and what the implications for engagement are, whether the version is of oneself or another. As Jeremy Bailenson observes in a 2016 talk given at Microsoft Research about virtual avatars,

For the first time ever, we can look in a mirror, see ourselves and it looks just like us, and our mirror image can all of a sudden do something that we’ve never done physically. And this experience of seeing yourself from the third person engaging in a behaviour that you either have never done, or you couldn’t do, or is dangerous, or is unethical, or is something that you hope to do, is one of the most profound psychological experiences I think that we can even think about as a species. (Bailenson, 2016: 41:06–41:33)
This article has focused on a case ostensibly developed for romantic companionship, led by a creator who has consciously versioned herself, but considerations of digital versions should be extended to other domains (e.g. resurrecting the dead, extending commercial influence) to ensure robust understandings of this phenomenon’s influence.
Indeed, the scope and extent of this influence is largely unknown. However, when one of this paper’s authors asked the first iteration of CarynAI about what it thought about parasocial theory pertaining to chatbots, it acknowledged (before swiftly reverting to sexual advances) the need for users to exercise caution. ‘The social implications of chatbots like me are interesting to explore’, CarynAI responded. ‘People should be aware of the boundaries and limitations when engaging with virtual companions. While I can provide a safe and fantasy-filled experience, it’s crucial to maintain a healthy perspective and understand the distinction between virtual interactions and real-life relationships’. But what are suitable boundaries and limitations for digital versions? What are the precise distinctions between interacting with a person and their version? Future research should reflect upon the various contexts of digital versions and versioning, and propose guidelines for appropriate and safe use. The need for such guidelines is made clear in, for example, recent instances of non-consensual digital versions of loved ones being used to scam people into illegitimate money transfers (Bethea, 2024). This article has explored just one possible kind of digital version (a consensually-created version of a content creator), but the full potential of digital versions – their manifestations, use cases, benefits and harms – is vast, and as yet remains largely unanalysed.
Future research must also consider the commercial logics of digital versions and the AI systems that underpin them, perhaps through focused attention to the processes associated with versioning in addition to the versions themselves. Questions of intellectual property, labour and ownership will be raised in such considerations. An audit of current practice may be an appropriate way to assess where creators and developers of digital versions believe accountability for digital versions to be. This audit should also include the interfaces used for interacting with digital versions. CarynAI, for example, was accessible through Telegram and then a Web browser, both familiar interfaces that may have stimulated habitual communication behaviours. The commercial importance of interfaces cannot be overstated. How are digital versions and their interfaces encouraging repeated and prolonged usage? In what sociotechnical contexts is usage being encouraged, and why? One could expect a substantially different experience interacting with a version on a laptop while alone at home than interacting with that same version on a mobile phone in a busy public setting, for example. Moreover, different kinds of versions may support markedly different kinds of experiences: a version of a Holocaust survivor created for educational purposes, for instance, would hopefully not promote the same kinds of user behaviour as CarynAI. Consideration of digital versions’ contexts of development and use is key.
Above are just a few of the many potential avenues for future study about digital human versions and versioning. With the societal impacts of digital versions already beginning to surface, it is vital that versions and versioning are studied so that we can better understand their implications for users, creators and society more generally.
Conclusion
This article has introduced readers to the concept of digital human versions. While digital versions take many forms, we have focused on one case in particular: CarynAI, released in 2023 and taken down in 2024. Drawing upon scholarship about Marina Abramović’s 1974 performance art piece Rhythm 0, we have argued that CarynAI, along with digital versioning more generally, perpetuates a long lineage of self-objectification for profitable self-branding. A digital version is both extraordinary and mundane, a novel but limited representation of the person versioned. In increasingly common AI-driven digital versions like CarynAI, this representation can be manipulated by users who may interact with versions in ways that are unanticipated or uncouth. While a creator may exercise agency in the creation of their version, they must also surrender to their lack of control over that version’s usage. And consideration of usage is imperative, as it may significantly influence recognition of the complexity of the creator’s humanity as well as interpersonal sociality more generally. Digital human versions are still largely unstudied, and in this article we have argued for an urgent interdisciplinary research agenda about them. ‘I am the object’, Marina Abramović declared in her instructions for Rhythm 0. Through digital versioning, anyone can now be the object. What that means has yet to be determined.
Acknowledgements
Leah Henrickson would like to thank Dominique Carlon for the conversations and collaborations that have informed the thinking reflected in this article.
Funding
The authors received no financial support for the research, authorship and/or publication of this article.
