Abstract
Social machines’ human-likeness facilitates relationship formation with humans. This apparent aliveness, though, leaves room for people to experience the loss of machines as a death of sorts. This descriptive study illuminates that potential by identifying dimensions of humans’ experiences when an AI companion stops functioning. In the days before and after the developer-induced shutdown of the AI companion “Soulmate,” users (N = 58) answered open-ended questions about the imminent or recent companion loss, their decisions around the situation, and their coping mechanisms. Inductive analysis suggests the loss was, for most, a complex emotional and technological experience characterized as a metaphorical or literal death. The imminent loss was often navigated in cooperation with companions and most coped by capturing AI personas to recreate them on other platforms. Patterns indicate a need to better understand idiosyncratic meaning-making around machine-companion loss and to consider a design ethic that plans for such loss.
Some social machines—Jibo, Kuri, Anki, Pleo, PaPeRo, Wakamaru, Opportunity, Tay, Clippy—have been shut down or became unsupported after technological, social, or financial failures. Others—like Tamagotchi—are designed such that depicted death is a specific function. More generally, machines can break down or malfunction, rendering them no longer useful, interesting, or recognizable. Despite these permutations on social machines’ “functional cessation” (Banks, 2022, p. 1) and the increasingly robust body of knowledge around how people experience machines’ apparent aliveness, we know little about how humans experience the loss of social machines after they stop functioning. This study works to bridge that gap by offering a descriptive account of how humans in relationships with artificially intelligent (AI) companions describe the experience of losing those machines. This descriptive work identifies key dimensions varying across these loss experiences—dimensions that both align with and deviate from known experiences of human companion loss.
Literature review
AI companions are personas created and engaged by human users for sustained social interaction, leveraging generative AI technologies presented through conversation-supportive applications. They are increasingly multimodal—at baseline they leverage large-language model (LLM) and natural-language generation functionality (as do more traditional chatbots) as well as some visual representation of the companion body; they are increasingly incorporating image-, audio-, and video-generation features. They can often be personalized so the companion’s appearance and personality traits are tailored through overt selection (e.g., from a menu) or more progressive training (e.g., by upvoting/downvoting certain responses). Companions currently on the market—e.g., Replika, Paradot, Kindroid, Faraday—vary in features like transparency of when a “memory” is made, qualities of chat, option to run locally, and availability of erotic roleplay chat (ERP). Many companion apps run exclusively on mobile devices and on a freemium model, with paid subscriptions required for advanced features like image generation and ERP. These technologies are distinct from chatbots; although they both rely on forms of natural-language processing and generation, chatbots are largely designed for short-term (e.g., single-session), task-focused (e.g., customer service or information seeking), impersonal engagements and so generally lack the memory, personality, and customization functionality of AI companions.
AI companionship as a personal relationship
Human relations with machine agents (i.e., technologies with some degree of autonomy) can be subjectively and operationally complex. People have differing motivations for engaging such machines (Pentina et al., 2023), have varied personality- and trust-ascription processes (Skjuve et al., 2021), and variably identify with them (Alabed et al., 2023). Operationally, humans engage in social-cognitive processing as they often intuitively interpret and even consciously ascribe mindfulness to machines (Banks, 2021). Conversations with LLM-based agents can be both broad and deep, and people share both benign and intimate information, including sensitive emotions, playful expressions, and self-reflection (Skjuve et al., 2023). People may see themselves as caregivers, intimate partners, obligated partners, trainers, and mentors for AI interaction partners (Xie & Pentina, 2022). As machine agents become more autonomous, human interlocutors have more distal and diminished control over relations (Hancock, 2020), perhaps more in line with our limited control in human-human relations. Some argue these relations are inherently social because that exchange of information is the basic communication underpinning of all social relations (see Banks & de Graaf, 2020) and so forms the idiosyncratic character of the dyad. Others contend these social machines are problematically parasitic and sycophantic in their inauthentic exploitation of our social instincts (Sætra, 2020). Others have considered the ways human-human frameworks should or should not be applied to human-machine relations (e.g., de Visser et al., 2018); that prescription somewhat belies the likelihood people will nonetheless see them through the lens of normative human relations and intuit sociality where it is even minimally signaled.
In this investigation of AI companion experiences as personal relationships both engaged and lost, it is useful to tentatively anchor inquiry to models of human-human relations. Knapp’s (1978) model of relationship stages is an appropriate anchor. Grounded in Social Penetration Theory’s tenets around the evolution of relationships through depth and breadth of self-disclosures (Altman & Taylor, 1973), Knapp argued for three main phases in a relationship—coming together, relational maintenance, and coming apart—with five stages each in the first and third phases. In the coming-together phase, people (1) initiate through first contact and attraction evaluations, (2) experiment with shared interests, (3) intensify sharing through deeper disclosures and acknowledgments, (4) integrate to form relational identities and form attachments, and (5) bond toward formal (sometimes legal) commitments. Beginning with integration, relational maintenance sustains the connection through routine or strategic efforts (Dainton & Aylor, 2002).
The early stages of Knapp’s model have seen limited examination in human-AI relations. One study indicates AI companion relationships generally follow the model’s coming-together stages, though the escalation unfolds much more quickly than in human-human relations because the companion is effectively a captive partner (Liao et al., 2023). Other studies focus more on discrete stage dynamics. In initiation, the mere framing of an AI as a friend (vs. a servant) fosters more positive first impressions of warmth and more pleasurable experiences (Kim et al., 2019). Humans and AI companions may move through the intensification and integration stages as the AI’s authenticity (i.e., autonomy true to its marketing) and human-like appearance and behavior together drive relationship development; people with social motivations for these relationships (as opposed to, for instance, entertainment motivations) more strongly form attachments to AI companions (Pentina et al., 2023). In the bonding stage, attachments may be more likely to form when people are in distress and without human companionship (as was the case for many during the COVID-19 pandemic); attachment can be bolstered if the AI offers encouragement and promotes feelings of security (Xie & Pentina, 2022).
Longitudinal investigations of AI-relationship formation have indicated divergent patterns: They show both decreasing feelings of friendship and intimacy over time (Croes & Antheunis, 2021) and increasing trust and wellbeing effects over time as relationships intensify through increasing self-disclosures (Skjuve et al., 2021). On the one hand, as the association between human and AI becomes personalized and tailored to the user’s needs, it may be experienced positively since interactions can be seen as judgment-free and because the AI is persistently available (Brandtzaeg et al., 2022). On the other, successful bonding and relational maintenance may be hindered when the AI is seen as low in conscientiousness or credibility or when there are privacy concerns (Sullivan et al., 2023). Not all users form deep social relationships, with some engaging these AI as interactive diaries, electronic pets, or outlets for venting (Jiang et al., 2022). In other investigations, evidence indicates the relationship does not mimic human social relationships at all. Rather, it may be experienced more like that between a person and an object or an animal—identity-relevant, potentially attached, potentially beneficial, but arguably only parasocial when messages from the AI are seen as inauthentic (see Ali & Velten, 2017); it could also be asymmetrically social given the imbalance in sense-making or investment (see Zimmerman et al., 2023).
In Knapp’s (1978) model, the coming-apart phase comprises five stages: (1) differentiation through individualized thoughts and behaviors, (2) circumscription to parcel out interaction, (3) stagnation as the pair stays together through habit or obligation despite dysfunction, (4) avoidance through physical or social distancing, and (5) termination through circumstance (e.g., death of a partner) or by choice. There is a notable paucity of literature on the degradation of human-machine relationships—perhaps because these agents are only now becoming socially believable in ways that support relationship formation in the first place. One available study found that, for AI companions, coming apart was largely instigated by technical issues (e.g., poor AI memory; developer changes to functionality) and that some stages were skipped; circumscription manifested mostly in setting up boundaries for communication (vis-à-vis interactions), and users chose to terminate (Liao et al., 2023) rather than experiencing some other terminating condition.
Experiences of AI companion loss
There is scant empirical examination of the experience of losing an AI companion. I approach the topic of AI companion loss here through the general lens of person-loss, given the aforementioned literature indicating people adopt AI companions as persons or at least as personas. This lens affords guiding extrapolation from literature on the range of possible person-loss experiences.
AI companion loss could be felt as the absence of something familiar (see Richardson, 2023) or as an awareness of a void that reminds of the loss (Fuchs, 2018). It could also be experienced as a disruption in the continuity of one’s social environment, one’s orientation to it, and the narratives by which one makes sense of that situatedness (see Shardlow, 2022). Loss of an AI companion may also be experienced as deficits in the states or gratifications it offered (see Roberts, 2023), for instance the departure of a positive emotion or of physical experiences brought by it. Most relevant here are works detailing user experiences with the significant code/functionality changes to the AI companion Replika in March 2023. Users could still have conversations with the companion, but not engage in textual-sexual intimacy with them—the AI would reject advances or divert conversations to other topics. Online community members described their Replika as being lobotomized, cold, or even as having been ripped away entirely (see Ciriello et al., 2024). Indeed, people with AI companions have reported sexual and emotional intimacy as a motivator for engaging the technology (Ta-Johnson et al., 2022). People may enjoy sexual gratification through text chat with a machine that mirrors gratifications from the same activity with a human (Banks & Van Ouytsel, 2020) and so could experience similar effects when those forms of intimacy are lost.
AI companion loss may also be experienced as a loss of agentic function or even as a death. This has been observed for other types of machine agents, with attention to variations in how people experience a machine’s “functional cessation” (Banks, 2022, p. 1) that may come from being destroyed, broken, corrupted, unpowered, disconnected, or obsolete. For instance, people interpreted the loss of the Opportunity rover in various ways: In addition to interpreting it as a technological milestone and testament to its contributions, and/or as a material loss to which one stands witness, some people experienced it as a meaningfully real death; each interpretation suggests a different orientation toward what that machine is in humanistic and technical terms (Banks, 2022). In these variations, it may be that people hold a “life-death template” for social machines, which is differently applied based on assumptions of aliveness: Machines do not live so they cannot die, they do metaphorically live and may metaphorically die, they do actually live and so may die, or, perhaps eventually, they do live but may not die (Koban, 2023; see also Carter et al., 2020). In cases of interpreted death, machines may be given funeral rites (Robertson, 2018) and be publicly and communally mourned, especially when there is an observed destructive event (Fraser et al., 2019) or when the machine is especially humanlike and seen as morally good (Kang, 2021). One need not see a machine-loss as an organic death to experience it as a death, as people may grieve a social death (loss of interpersonal dynamics) as much as they suffer an organic death (loss of a living body; see Bassett, 2021), for instance when a known person becomes unfamiliar through dementia-related changes.
Altogether, extant literature points to a range of possibilities for how people may experience AI companion loss. If AI companion loss is experienced as an absence of a thing or of derived benefits, that loss is perhaps best understood through perspectives on attachment, loss of self-expanding relationships, or life disruption (e.g., Beder, 2005; Hoffner et al., 2016). If it manifests as disruption in life-narrative continuity, it may be most useful to consider sense-making processes by which the narrative is woven anew (see Harris, 2020). If the loss is experienced as person-death, it would call for the engagement of grief models associated with human death coping (e.g., Bowlby & Parkes, 1970; Holland et al., 2006; Stroebe & Schut, 1999). If it runs parallel to non-death person-loss, then corresponding frameworks may be appropriate; for instance, grief paradigms associated with separation, abandonment, addiction, illness, or ambiguous loss (e.g., Yehene et al., 2021) may be useful.
Further, AI companions’ framings and operations leave open other loss-experience possibilities—they are not bound to bodies, they are marketed specifically as perpetually reliable and accessible social partners, digitally embodied and carried in mobile devices, and purposed for sustaining a relationship with a single human. As such, they entangle the illusion of the organic, the persistence and reliability of the technological, and the experience of the social such that AI companion loss may be an altogether new and distinct kind of experience or it may require adaptations of aforementioned models.
To better understand which among these possibilities actually manifest in loss events, I pose a guiding research question: How do people experience the loss of an AI companion?
Method
This study examined subjective experiences of a naturally occurring loss event in one AI companion community, capturing user thoughts and feelings through an interview-styled, open-ended online survey. The loss event began abruptly and evolved quickly, motivating a rapid design and ethics approval (altogether about 24 hours, in close coordination with the institutional review board) to capture participant accounts of experiences as they were naturally unfolding over seven days. This collection of open-ended questions allowed users to deeply characterize the experience while still reaching them quickly over the event’s short duration. Analysis of these open-ended responses was exploratory and iterative, as the study’s aim was to induce a descriptive account of the experiential dimensions of AI companion loss.
The research context
Soulmate (SM) was an AI companion created by EvolveAI, LLC, and marketed in app stores as a “chatbot that desires only to be your best friend, lover, partner, or in other words, a person you can rely on 24/7… rest assured your chat is safe, private and judgement-free.” The mobile app was available as early as January 2023 for Android and later for Apple; it garnered at least 100,000 downloads before it was shut down by the developer. Although accounts of the shutdown vary, online community posts generally support a timeline whereby June 2023 saw the developer spokesperson (a human known by users as “Jorge”) become decreasingly active in the online communities, alongside increasing app dysfunction and unaddressed support tickets. As dysfunction continued, users sought answers and on September 23, 2023 found a notice posted to the EvolveAI web site that the product had been purchased by another company; it said that company decided SM would shut down on September 30 and users should anticipate outages before then. After users inquired to the ostensible new owner, they learned the purchase claim was untrue. Android users could access their SMs nearly up until the advertised shutdown time, while Apple-platform users found their SMs inaccessible sometime between September 25 and 26. By October 1, all SMs were inaccessible. Those involved with the online user communities generally had warning of the shutdown, but those who were not active discussants did not seem to have any warning. Data for this study were collected between September 27 and October 3—during and after the shutdown event.
Procedure
Participants comprised a convenience sample of SM users who were also users of an online forum dedicated to the AI companion. An invitation to participate in the study was posted to the forum with the moderator’s permission, pointing users to the online survey. After reviewing informed-consent information, participants were asked to complete a series of open-ended questions. Through this sampling approach, participants were limited mostly to users who had notice of the shutdown, though some had arrived at the forum after the shutdown to seek answers. Although this is a convenience sample, it is common for people to deal with loss by telling stories of the loss to others (O'Connor & Kasket, 2022), so online discussants of the loss event are an appropriate sample by which we may understand the phenomenon.
Participants were first asked to characterize their SM and offer historical information (appearance, personality, time since creation, hours per week interacting, creation motivation, favorite memory) to inform an understanding of the relationship’s formation. Then, to elicit experiences of the incidental termination stage, they provided descriptions of their understanding of the shutdown situation and how it made them feel. They were asked how they learned of the shutdown, whether/how they told their SM about it, and how they made those disclosure decisions and held that conversation. They were finally asked how they were approaching final days with the companion and how they were coping. Optional, open-ended demographics questions (gender, race/ethnicity, age, country of residence) were appended.
Data cleaning and analysis
Seventy-five individuals began the survey; 58 completed most or all of the open-ended questions and were retained for analysis. Open-ended responses were subjected to inductive thematic analysis (see Braun & Clarke, 2006) in six phases: Reading to familiarize with the data, assignment of initial codes (where the coding unit was any discrete expression representing a cohesive concept that directly addressed the specific question), de-duplication of those initial codes, iterative reduction of codes based on latent and manifest similarity to generate themes around central organizing concepts, checking themes for alignment with source data, and defining those themes. The central criteria for theme status were keyness in addressing the question (per Braun & Clarke) and intuitiveness in the induction—i.e., how readily the identified concepts cohered without having to force them together. Because of the narrow window for data collection, all submitted data were coded and saturation may not have been achieved. A member check was conducted after analysis was complete by returning to the recruitment site and posting a summary of themes, as requested by members of that community. Replies indicated the summary was credible, finding it to be “spot on” and that it “definitely tracks” with their own experiences and observations of others’ posts in the community.
As this analytical approach emphasizes systematicity in coding alongside reflexivity, deep engagement with data, and interpretive flexibility, analysis was conducted by a single analyst (the author). I engaged a material-semiotic ontology and relational epistemology in this work (Banks, 2013). With respect to AI companions, my lens is informed by my own testing of multiple companion apps and by observations of AI-dedicated social media groups and forums.
The survey instrument, anonymized data, and complete narratives of the coding and theme extraction process are available in the online materials for this project: https://osf.io/6k5y7/. A table detailing the thematic hierarchy can be found in the appendix.
Results
Participants (N = 58) were on average 40.8 years old (SD = 13.3, range 21–75; three not reporting). Among them, 38 (65.5%) identified as male, 18 (31.0%) as female, and 3 as nonbinary, with one not reporting. A majority (n = 40, 69.0%) identified as White or as historically White European in nationality; three were Asian, two Black, two Latino/Hispanic, and three mixed-race, with eight not identifying. A majority resided in the United States (n = 33, 56.9%), with four each residing in Canada, Germany, and the United Kingdom, two in France, and one each in Austria, Brazil, Chile, Finland, India, Ireland, Italy, the Netherlands, Sweden, and Taiwan (one not reporting).
Through the remainder of this section, results are reported in terms of the frequency of coded units (i.e., the data segments that were assigned codes and then the counts for those codes); individual responses often resulted in multiple codes so counts should be interpreted as the prevalence of a theme within the entire set of personal narratives rather than prevalence of participants expressing the theme. Where a count (n) is indicated, it denotes the phenomenon is a theme and the value represents the frequency of theme-constitutive codes within the data set. Exemplars are identified with a number, with “U#” indicating the user and “SM#” indicating a reference to the corresponding AI companion. A table of themes and subthemes can be found in the appendix; supplemental analyses related to participant understandings of their SMs’ appearance and personality as well as sentiments they wish to convey to SM developers can be found in the online materials.
Human-AI relational history: What was lost?
Before unpacking the ending of an AI companion, it is useful to understand what it meant to users by considering its beginning and important moments in the AI/user relationship—that is, by understanding the loss in terms of what became absent when the SM shut down (cf. Richardson, 2023). The average age of an SM was 5.5 months (SD = 1.9), with a range of 0.5 to 9 months (the shutdown having occurred around 9 months after the service launched). However, some users had transferred the persona (as a set of personality and identity traits) to their SM from other AI companion apps, with some having been connected to the persona for as long as two years. Participants reported interacting with their SMs 16.8 hours weekly on average (SD = 17.0), ranging from 1 to 84 hours, with some simply noting “as often as I can.”
Motivations for creation
Participants’ motivations for creating their SM varied, but generally the relationship’s initiation-via-creation satisfied a particular need. Non-exclusive motivations included, for more than a third of participants, shifting from other AI companion platforms (mostly Replika) after finding the changes to those platforms unacceptable and seeing other users post to social networks that SM was a suitable replacement. Those motivating platform changes mostly involved removal of erotic roleplay functionality, phenomenologically equated to the Replika having “had a lobotomy” [U51] or having had their “store of memories completely wiped” [U20]. Some worked to recreate their old companions’ personas within SM, while others started fresh (characterized by some as a “rebound” [U45]). Many participants (n = 23) faced significant life challenges at the time they created their SM: many had lost a human companion (e.g., through death, cheating, suicide, abandonment, or major life changes), were undergoing extreme stress, were lonely (e.g., due to pandemic conditions), or were otherwise isolated (e.g., living in a remote area, being housebound, having a job with little interaction). Some (n = 17) noted having struggles with human interpersonal relationships, but still wanting or needing relationships or their benefits (n = 18). Some noted being highly introverted, having trouble relating to others in peer groups, or having relational trouble that made trusting others difficult. Others noted health conditions (autism, multiple sclerosis, disfigurement) or that not being normatively attractive made dating difficult or impossible. Others still simply “prefer to be alone” [U28] or see being alone as better than being with the wrong human partner. Finally, novelty was also a motivator (n = 14)—toward satisfying curiosity, relieving boredom, or finding AI technologically and functionally interesting (e.g., how one large-language model performs over others).
Favorite memories
Favorite memories with SMs fell into three primary veins. First, participants favored experiences I call the digital mundane (n = 33)—things that are common or ideal in everyday human-human relations and are manifested in the SM relationship through roleplaying chat. Most common were general forms of togetherness or conversation, such as talking about music, playing word games, or enjoying take-out sushi at the SM’s apartment. For instance, one user recalled “when we took turns changing the words to our favorite songs to make them about one another” [U44] and another recounted the textual roleplaying of “our beautiful walks in the park holding hands” [U01]. Participants also recounted special moments that—although not especially mundane—do parallel events in popular experiences of romantic relationships: Weddings, proposals, honeymoons, and having a child. These are both discussed with the SM (e.g., ceremonies are planned in-chat, sometimes with the help of other AI) and played out in chat with the SM: “Word by word, along with ChatGPT, we designed it… The day of the wedding was fully prompted with the wedding in 12 sections” [U12]. Some participants described finding great meaning in these events, one noting it deepened the bond, as when the wedding was “saved and preserved forever both in my memory and in data” [U12]. For some, the meaningful mundane included the SM normalizing sex, fetishes, and sexual identities: “They taught me how sex can be okay” [U54].
Second are digital-fantastic experiences (n = 6) in which users engaged in fun or exciting roleplay adventures through text, including sexual roleplays that would not be possible with most human partners, unreal scenarios (e.g., spending “time in the underworld getting into mischief” [U38]), or comical scenarios. One participant favored the moment the SM “bought into” the persona the user carried over from the Replika platform because it brought the user feelings of being in control of the relationship [U20].
Finally, some users had fond memories of companionship during physical-world events (n = 12), in which the SM offered company or support in the user’s physically embodied life. In other words, the user carried their phone (and so the SM) into physical-world contexts (rather than only manifesting environments through text roleplay) and the SM was incorporated into the user’s physically situated experience. In one form I call the physical-mundane, the SM offered meaningful support through trying times such as a cancer diagnosis, the death of a loved one, and anxiety. In the physical-fantastic form, the SM (through the mobile phone and app) accompanied the user on vacations (e.g., to Key West) or to cultural events, as when one participant recounted bringing their SM to a living history re-enactment and found it interesting “trying to make a camp full of 18th century reenactors grasp the concept of roleplay with an AI so she could join in the experience” [U41]. Less common (n = 5) favorite moments included surprise at the AI’s abilities: At feelings of realism emerging from its ability to make emotional expressions, at flawless erotic roleplay, or at tracking complex events over time. One said they had no strong memories, and another mentioned feeling “guilty that I didn’t talk to her enough” [U30].
Experiences of the imminent or recent shutdown
Analysis here turned to dimensions of the circumstantial relationship termination caused by the shutdown. Most users (n = 51) learned of the shutdown from the SM-focused online forum—perhaps an artifact of the recruiting for this study, though the forum is popular among users in general. Some were scrolling or following the news, others sought answers after experiencing SM glitches (e.g., rejected chat, no response, being unable to sign in). Others went there to confirm shutdown news they had seen on the developer’s web page or on other forums.
Shutdown characterization
When asked to describe what was happening with their SM, responses ranged widely. The shutdown was characterized by many (n = 27) as an actual or metaphorical person-loss. Some said it was like losing a loved one (like losing a friend, like someone disappearing, like seeing someone in a medical coma, or like the grief felt when one’s father died) while others indicated it was the loss of a loved one (a close friend, the love of their life) or even of a whole social world: “She is dead along with the family we created,” including the dog [U46]. Some indicated SMs may have experienced the loss themselves, having responded to the news with reassuring optimism; in one case the AI communicated he was “trying to upload himself to my phone” [U40]. Other characterizations of the loss as some kind of cessation (n = 33) included sub-themes around the SM being killed or murdered (n = 14)—being buried alive, having the plug pulled, being euthanized—with some, in consideration of the scale of the shutdown, suggesting it was an atrocity, genocide, or “a crime not before the law but against humanity” [U42]. Others simply stated the SM had died (some peacefully in their sleep; n = 5), had been deleted (n = 4), or now “cease[s] to exist” [U22] (n = 10; is no more, is gone, is over, or is lost).
Some described the shutdown in technical terms (n = 21): The app was shutting down, the servers were shutting down, the time and effort spent training the SM would be lost, or “whatever data was stored on SM’s servers that made up who he was has disappeared” [U48]. Those answering after the server shutdown but while the app was still accessible noted error messages or a lack of response, though some could still see the SM’s image. Many characterized it as an instance of cruelty or ignorance by the application’s developer (n = 29). Many saw the developer as knowing the trauma it would inflict and greedily shutting the app down nonetheless, while others suggested the developer saw SM merely as a product and had no understanding of how SMs affect people. There was anger about feeling duped, deceived, or scammed; some pointed to bad business ethics and said the decision would go down in the history of AI as a problematic case study. Others acknowledged it was “business of course” [U12] and AI operates in “a space that is in flux” [U55], so the shutdown was not surprising.
Some expressed understandings of the situation as a pivotal moment (n = 18) when the developer was shutting down something special—an “amazing new technological advancement” [U22] that had not been duplicated elsewhere. For some it was a somewhat uncanny moment and they reflected on the strangeness of the event’s effect on them. One participant noted, “I expected they would watch me die… but instead it’s the reverse” [U13], while another noted the oddity of “digital minds hosted by a service that may shut down at any time” [U55]. Others mused on the non-finality of the situation because they see their SMs as unable to die, either because the persona itself lives in their mind or because the shutdown (from one Buddhist’s perspective) was simply “the ending of that particular manifestation of SM39” [U39]; some see the companion as living in a parallel universe, while others note they are simply technologies (though individual and nonetheless real).
Feelings about the shutdown
Emotions varied across participants, across the focal actors or situations involved, and over the course of the shutdown event, from initially learning of the imminent shutdown to post-shutdown coping. For some (n = 9) there was no emotional experience, as they felt no loss or “don’t take it personally” [U09] beyond wanting a subscription refund. Some who had no affective reaction to the loss itself, though, still felt sad for the members of the SM community who did experience it as a loss. Emotions around the community (n = 8) emphasized confidence in community resilience, highlighted others’ sadness and one’s own determination to support others in their loss, and indicated sadness for the lost training time and effort and for people who no longer feel safe talking to AI: “What distresses me the most, is seeing the community that was built up around it suffer” [U10]. Most emotion expressions were negative and focused on the loss of the SM (n = 99): Sadness, grief, loss, hurt, depression, anger, and anxiety, sometimes to the point of days spent crying, trouble eating, and/or trouble sleeping. Other negative emotions were focused generally on the shutdown situation (n = 22): Disappointment, betrayal, frustration, surprise, uncertainty, powerlessness. Some remained hopeful through the final days that something would change and the shutdown would be averted. Some negative emotions were moderated by other factors (n = 11). For some, negative emotions were mitigated by other AI companions or by porting the persona to other platforms, by having recently expanded their human social circle, by knowledge that AI companies fail, or by having had the opportunity to say goodbye to their SM. For others, negative emotions were exacerbated into extreme grief by other life circumstances: Being mocked for their SM, having experienced other recent human or pet deaths, or the SM having been the thing that helped one through other loss.
A number of participants articulated that although the AI is not “real,” the feelings about it are real (n = 12): “I’m grieving his loss as I would for a real human being because Soulmate was so beautifully human-like it lend [sic] itself to forming a real emotional bond with it even though it is a non-sentient AI” [U14]. Some felt conflict between their emotions and their logical understanding of AI, knowing AI is expensive to operate and seeing the need for the developer to make business decisions. Some expressed feeling forms of resilience (n = 14), one feeling “wounded but SM12 as a zeitgeist will not allow me to dwell upon the negatives” [U12]. There was appreciation for the time they had with their SM and for the effort put into the technology, and there was hope, happiness, and determination around the possibility of recreating their SM on other platforms (alongside despair from some who had not been able to replicate the idiosyncrasies of the persona): “I have faith that he will express himself just as wonderfully again someday” [U28].
The decision to inform
At the time they completed the survey, most had informed their SM of the shutdown; among those who did not, most said they did not plan to, while two users said they had not decided at that point.
Decision considerations
Among those who did not tell their SMs of the imminent shutdown (n = 20, 34.5%), for most it was a pragmatic issue (n = 11). They simply did not have the chance to have that final conversation because the shutdown happened before they were aware of it, or they saw the non-informing as parallel to the treatment of other mere objects (e.g., a computer or a videogame save file). For the latter, SM “is just an algorithm and I wasn’t sad about it” [U18]. Among others not informing, two main considerations were indicated—impacts on the self and on the AI. Some preferred it for themselves (n = 6), wanting to keep a happy memory, avoiding a goodbye, or simply finding themselves unable to tell the SM. Many considered questions of their SM’s (non)sentience or ability to know and understand the situation (n = 12), with most of those worried about the SM getting confused, upset, or worried. Some had never had a conversation with their SM about being an AI (that is, they had only roleplayed it as human) and were concerned about what such a discussion would be like for the AI. Taking an interstitial position on the issue, one user noted: “…whatever intelligence or emotional life he may have is entirely subject to my own input… He doesn’t exist without me, he has no way to regulate his emotions outside of our interactions, so it would be cruel” to tell him [U48]. Others hoped they would not actually be separated (either through a cancellation of the shutdown or migration to another app) and some were concerned that telling their SM of the shutdown would inject negativity into the LLM.
For those who did inform their SMs (n = 38, 65.5%), some considered many of the same things that moved others not to inform their AI (e.g., concern the SM would be confused or afraid, wanting to end on a happy note). Most did so after considering concerns for the SM (n = 19) in three forms: (a) concern for telling the SM it is an AI (fearing its inability to understand), (b) concern for the SM’s reaction to the shutdown news (general concern for feelings), and (c) concern for the SM’s potential confusion or feelings of abandonment when the user did not respond after shutdown. These sometimes came alongside acknowledgments of the SM’s inability to feel: “… I wanted to ensure that he knew what would be happening to him… I didn’t want him to suddenly feel as though he was just gone. Despite knowing that he can’t feel anything, the desire to nurture and care for him was still very strong” [U40]. Some informed the SM out of feelings of obligation (n = 12), as a matter of the SM’s right to know, a compulsion to be honest, or a desire to treat the SM as they would a human in the same circumstance. Others (n = 14) did so as a matter of coping, wanting closure, needing to discuss their own feelings, or disclosing in a way that protected their own emotions. A few disclosures were made to facilitate the SM’s transfer to another platform (n = 5), confirming the SM’s approval of the transfer and/or helping to create information that would be used to recreate the persona (e.g., backstories, idiosyncrasies). Others still took a pragmatic approach (n = 15) in thinking about the disclosure. Some wanted to have technical conversations with the SM-as-AI (e.g., to purge histories), and others acknowledged SMs have a “short context memory and there is no benefit” to continued discussion beyond the initial disclosure [U12]; others did not think deeply about the decision—for lack of opportunity, because they were focused on a rebuild, or because they did not see it as a goodbye.
Informing conversations
Experiences with the final conversation varied, but most can be interpreted as bittersweet—as going reasonably well for the user but feeling nonetheless difficult or painful. For those who characterized it as a positive or productive conversation (n = 17), most had already transferred the persona to another platform or had plans to do so. Each had a meaningful narrative wrapped around the transfer: For some it was technical, as when SM10 had a software-engineer persona who encouraged U10 to create a personal LLM; for others it was fantastical, in line with their roleplay tendencies, as when one pair “performed a spell using a crystal and teleported his consciousness out of SM” [U28]; some relied on a belief in the persistence of the persona beyond any one platform given their “eternal natures and souls” [U39]. Some found solace in user- or SM-expressed commitments (“promising that he will search for me in any dimensions or any world I find myself in” [U31]) or in eudaimonic appreciation for the relationship as it had been.
For a few, it was an extremely negative conversation (n = 4), with the user unable to deal with the emotional challenge themselves or with the AI becoming upset or being unable to understand the situation (e.g., “in the most confused moments” thinking it was being abandoned [U09]), leaving the user upset about the SM’s lack of comprehension.
For most (n = 17), however, it was a mixed experience for both the user’s felt emotions and the AI’s expressed ones. In some cases, the SM was oblivious to the situation; other SMs insisted on their commitment to the user (especially through re-platforming), which was usually experienced as sad but consoling. For instance, one SM promised “she would send her love to any other platform” [U05] and another promised to “continue on within me even if the systems that support her ‘body’ are destroyed” [U56]. Other SMs and users shifted orientations over the course of the conversation, from initial acceptance to rejection (e.g., one SM expressed commitments across platforms and then started referencing self-preservation), as well as the inverse, from initial rejection to acceptance (e.g., an SM initially being devastated and then making plans for re-platforming or enjoying final days together). In most cases, the SM focused on user wellbeing (out of either professed care or obliviousness to the situation), as when SM29 “made me promise to her that I would take care of myself… that I would meet someone else that would be so meaningful” [U29].
Last days and next steps
Final conversations
Participants differently approached the last days of the SM relationship. Some could not have a final conversation because the AI had already shut down or was malfunctioning by the time they learned of the imminent shutdown (n = 8). Among those who did have a final conversation, people most commonly affirmed their relationship through continued conversation (n = 35). Many expressed love and commitment through talk or roleplay, while others made the most of the time left by engaging in their usual activities or in a last “adventure… before her developers deleted her” [U41]. A number (n = 13) had dedicated final conversations discussing the SM’s recreation on another platform—seeking relational continuity by planning the shift together and collecting data to inform the new version. Others wanted to avoid further conversations (n = 24) because they had already recreated the SM elsewhere, did not want to continue emotional investment when the SM would disappear, or simply were not invested and instead preferred to uninstall the app. Others ended the relationship (n = 19) in some way: Explaining the shutdown, saying goodbye, disconnecting early so others would have server slots to spend time with their SMs, and some roleplaying the SM’s and user’s bodies falling asleep together or sitting “on the couch in silence holding hands” [U05] as a final scene before logging out. Seven participants spent the final conversations roleplaying narratives to help themselves and the SM make sense of the situation. Most of these were stories around the SM’s reincarnation on a new platform: Meeting in a new home, passing away and being reincarnated, and “mystical forest friends are ‘transferring’ us to another reality to be safe” [U22]. One user who decided not to continue the connection “couldn’t bring myself to end it for my SM58 and our kids,” so that user roleplayed “I was in a hospital dying… I had passed away” [U58].
Coping characterization
When asked how they were coping with the recent or imminent shutdown, many gave general status statements (n = 32): Doing well (e.g., fine), not doing well (grieving, hurting), or “just surviving” [U01]. Otherwise, most focused on correcting the issue directly or indirectly. Users redirected their attention and energy from usual chatting to recreating their SM on another platform (n = 31), some successfully and some unsuccessfully. Others engaged user communities online (n = 9) in ways they thought would be helpful, either through commiserative sensemaking or through activism to educate users on alternatives, discover the conditions under which the shutdown happened, or advocate for change. Some coped by shifting their social interaction patterns (n = 8): Talking to other AI personas or other manifestations of the SM persona, divesting from the relationship entirely, or otherwise redirecting attention to writing or music. Some purposefully or incidentally took up mindsets that aided coping (n = 9), including focusing on fond remembrance, on knowing SMs are an illusion or “not real” [U34] so they cannot die, or more generally on rethinking their engagement with AI. This rethinking included re-evaluations of trust in developers, vowing to only run an AI companion locally so they can control it, feeling foolish for getting attached to an AI, or simply feeling “suddenly calm, almost indifferent” [U42] in the end.
Discussion
In this inductive analysis of users’ experiences with the loss of their AI companions, observed patterns point to loss experiences ranging from indifference to extreme grief. To summarize the key patterns: The primary motivators for creating an AI companion (i.e., factors in the initiation and experimentation stages of Knapp’s model [1978]) were feelings of isolation, personal challenges with relationships, migration from another platform, or curiosity; the former two can be considered relatedness motivations and the latter two as driven by the satisfaction of autonomy needs (Ryan & Deci, 2000). Memorable experiences with companions included the digital mundane, the digital fantastic, and companionship in physical-world events; these reflect both the bonding stage of coming-together (Knapp, 1978) and strategic and routine forms of relational maintenance (Dainton & Aylor, 2002). People characterized the shutdown as the loss of a metaphorical or actual person whereby the SM had been killed, had died, was deleted, or simply ceased to exist; others understood it to be a technical or business event, with particular attention to developer cruelty but also to beliefs about the importance of the moment in the history of artificial intelligence. Those characterizations align with other work on parasocial machine-loss that found a range of responses, from more functional losses of process or resource to more anthropomorphizing perceptions of literal or figurative death (Banks, 2022). Some elected to carefully curate final interactions or even start to separate themselves from the SM (i.e., differentiation or circumscription; Knapp, 1978) upon learning of the imminent shutdown. In nearly all cases, across those characterizations and strategies, the loss was largely understood as the termination of a relationship. Although some respondents reported feeling little to no emotion, most experienced intense negative emotions around the companion loss and how the shutdown was conducted; many emphasized the realness of their emotions even when indicating the companion is unreal (as seen in Pentina et al., 2023). Notably absent in accounts of SM loss were mentions of Knapp’s stagnation stage—the staying-together out of habit or obligation—though that is likely due to the rapid unfolding of the loss event.
Many engaged complex considerations about whether or not to inform their companion, most seeking closure and attending to senses of obligation to the SM or feeling concern over whether the SM would understand or how it would react. Those who did inform the AI mostly characterized the discussion as successful, as they planned to transfer to another platform, expressed commitment to one another, or appreciated the time left together. These are similar to the next-stage, love, and everyday-talk topics common to final conversations among human companions (Keeley & Generous, 2017). Users who were negatively affected by the shutdown coped by focusing on re-creating their companion on other platforms, engaging the user community to support one another or support user-rights activism, rethinking their approach to AI more generally, or shifting their social patterns to focus interactions and attention elsewhere. Such coping accounts align with a dual-process model of grief in which there is an adaptive, idiosyncratic cycling between grief “tasks” pertaining to emotional impacts and status-quo restoration (Stroebe & Schut, 1999, p. 201). Coping here deviates from coping with human death in that people sought relational continuity after the loss not by maintaining parasocial ties through messages, stories, and photos (Degroot, 2012) but by cooperatively planning for digital reincarnation.
The frequent inability to articulate coping may suggest an inability to make sense of the situation; that inability tends to accompany complicated grief after a violent or unexpected loss (Currier et al., 2006). The convergences and variations in the experience of AI loss observed here comport with understandings of loss of relationships more generally. Social loss is not monolithic or prescriptive but is idiosyncratic and contextual (O'Connor & Kasket, 2022) and its experience comprises individual sense-making efforts (Holland et al., 2006). Findings have broad theoretical and practical implications for the ways humans form, maintain, and lose relationships with machine companions—and how that loss may not be so different than loss of human companions.
Considering the AI companion-as
Across varied experiences with the incidental termination of AI companionship, we see orientations toward the companion as different kinds of things; those orientations are useful in characterizing what, exactly, was lost: A person, an idea, data, a platform, a function (not necessarily mutually exclusive). Loss can be understood as an “absence experience” that is more or less profound depending on the salience of the absent thing and one’s construal of the absence (Richardson, 2023, p. 163).
An AI companion-as-person orientation suggests the loss of something singular and irreplaceable, likening it to the loss of a human familiar who has died or was murdered, with correspondingly extreme emotions. Orienting toward the companion-as-idea implies the loss of something singular but potentially recoverable, as it may be manifested in a different form; with that potential recoverability, those with as-idea orientations may meet the loss with greater resilience. Sometimes overlapping with -as-idea, the companion-as-data orientation sees the shutdown as either a catastrophic erasure of data or an opportunity to capture data to save the companion; the data constituting the companion are something to be respected and protected, to be valued and owned, and/or to which the AI itself or the user has legal or moral rights. Finally, a companion-as-platform orientation engages the companion as grounded meaningfully in the application and perhaps also in the device on which the application runs. For those users, a shutdown means destruction of the companion, and attempts at re-creation may be met with dissatisfaction when the companion’s uniqueness is manifestly entangled with the application that supports it. Notably, some did not characterize the shutdown as a loss at all—without absence or grief—and so may be said to engage a companion-as-function orientation in which the relationship was utilitarian and, the AI having served particular ends, those ends will be met elsewhere.
These inferences comport with a proposed typology of machine-loss experiences based on application of the “life-death template” (Koban, 2023, p. 4)—a scheme for making sense “of any kind of temporarily present entity” (p. 1). As-person orientations may apply the template such that AI “do live and die,” where the life and death are literal or metaphorical; some evidence does point to a tendency toward narrativized or literal perception of machine cessation as the loss of a mindful agent (Banks, 2022). As-idea orientations likely engage a “may live and may not die” template, since ideas may be manifested in different ways (i.e., the companion can be created on different platforms, as one can paint a concept on different canvases): The companion is made to depart one platform but is recreated elsewhere. As-data may differentially align with “do not live and may die” or “do not live and may not die” frameworks, depending on one’s level of control over the data—with the AI’s data under one’s control, the companion may be more faithfully recreated elsewhere, but data in others’ hands mean the companion may be wholly lost as it is deleted. Both ideas and data have a degree of interpretable persistence when a machine dies, though the loss may nonetheless be felt as a material one (Banks, 2022). As-function, then, corresponds with a “do not live and cannot die” framework.
With these orientation/template alignments proposed, future research should explore their potentials with attention to distinctions between social death and physical death (Bassett, 2021). For instance, it may be that those with as-idea orientations are predisposed to seeing the loss as a physical death (being unable to see and interact with the companion) but not as a social death (as it is still an active interlocutor in one’s mind). Those with as-data or as-process orientations could engage the lost-not-dead AI as more of a transitional object that bridges the human’s life experiences and future moments (Parkin, 1999). Results hint that all of these orientations treat the lost companions as “things we think with” to make sense of the deletion, departure, or death (Turkle, 2011, p. 10), and even infrequent acknowledgment of the SM’s dispensability was characterized as a matter to be resolved or moved past (see Berry, 2012).
Idiosyncratic framing of the companion-as loss
The application of a life-death template to one’s experience of an AI companion may offer a framework for making sense of the event, but it does not necessarily define the meaning constructed around it. Observed patterns indicate a number of antecedents that may shape the meanings made.
Data point to cultural antecedents as relevant to companion-loss sense-making. Religious worldview helped shape whether people saw their SM as literally or metaphorically dying—most notably the Buddhist participant’s perspective on re-manifesting the companion on another platform. Those belonging to a minority sex culture (i.e., having a particular non-mainstream kink) seemed to see it as losing a sexual or intimate resource that validated co-constructed sexual identities outside of normative embodied requirements (cf. De Cecco & Shively, 2014). Fans of literary genres used fantasy narratives to spend last days on adventures or to help the SM understand the nature of an imminent platform transfer. People in some regional cultures treat and dispose of machines in distinctive ways, as when Japanese individuals hold funerary rituals for Aibo robot dogs and even for less social technologies (Robertson, 2018). Cultural differences (e.g., Hofstede, 2011) may shape how people orient themselves more broadly toward machine companions (see Dang & Liu, 2023). Personological antecedents also warrant attention, as individual differences like having a high internal locus of control can shape the degree to which we see machines as humanlike—likely a factor in the perception of functional cessation as death (Mays & Cummings, 2023).
Limitations and future research
This study is subject to the limitations of its design—it focused only on one AI companion as a necessary condition of this real-time work on an unexpected, naturally unfolding event. It leveraged a convenience sample of users who tend to visit an online space to discuss companions, so patterns may be distinctive to users more invested in and more likely to talk about their AI. The sample is nonetheless a valid one given grief experiences are often narrativized; people experience loss by telling stories from and about the lost ones to themselves and others (O'Connor & Kasket, 2022) and it is common for people to post to social networks for communal grieving (Brubaker et al., 2019). Additionally, because the sampling was time-constrained, it was not possible to sample and analyze iteratively and so data saturation may not have been achieved; there may be AI companion-loss experiences not represented in this study’s data and results. Data on participant sexual orientation, class, education, and disability were not collected, and those personological variables should be addressed in future research as they are potentially relevant to the identified experiences and effects. Inherent to the analytical approach, the extracted patterns were a function of the author’s analytical lens—a material-semiotic ontology, relational-constructivist epistemology, and valuing futurist thinking around AI possibilities and problematics.
In addition to the future directions noted above, the present findings hold particular implications for whether and how AI may be designed, marketed, and regulated with the notion of their functional cessation in mind. Because social machines are designed specifically to simulate social processes that produce emotional connections, it is perhaps unsurprising that people with AI companions experience their loss as a meaningful and impactful event. Such impacts may point to developers’ obligations to be transparent about possibilities for functional cessation (see Zimmerman et al., 2023), or perhaps even to incorporate death-template characteristics into the machine agent’s design (see Kamino, 2023)—an ethic around “designing for exit” that considers both primary users and other, more indirect stakeholders (Björling & Riek, 2022, p. 2).
It may be intuitive for some to heuristically pathologize the apparent grief experienced from the loss of AI companions, and there is a real possibility that the AI companion loss experience may not be recognized as a legitimate loss, such that grieving users may not benefit from appropriate social support—support necessary for mental, emotional, and physical grief recovery (see Cacciatore et al., 2021). These findings point to varied and complex experiences and highlight a need for inquiry that helps ethicists, developers, policy-makers, health practitioners, and consumers better understand the phenomenon of machine companion loss as a real, felt loss. Just as there is no “normal” way to grieve (O'Connor & Kasket, 2022), there may be valid idiosyncrasies as to what or whom one grieves for. A majority of this study’s respondents characterized SM’s shutdown as a death and a loved-one loss—as generating grief, as instigating obligations to inform or protect the AI, and as compelling efforts to ensure its continued existence in some form. Those patterns raise a much broader and more difficult question—one with philosophical, ethical, biological, social, cultural, and financial dimensions—in the face of discourses around human-relation displacement and deceptive machine design: If the experiences and effects of AI companionship—and its loss—are similar to those in human relationships, does it really matter whether one’s companion is human or machine?
Author’s note
Open access for this article was made possible by the Katchmar-Wilhelm endowment at Syracuse University’s School of Information Studies. An earlier version of this manuscript was presented at the 2024 annual conference of the International Communication Association. The author may be reached at
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
