Abstract
Deathbots, griefbots or thanabots are chatbots based on the digital footprint of the deceased that offer mourners the possibility of ‘talking’ to their loved ones after death. This Artificial Intelligence–based thanatechnology raises a number of ethical and psychological questions. Drawing on the concept of mediation from cultural psychology and the notion of continuing bonds in bereavement, the article discusses some controversial questions about deathbots, such as the illusion of reality that this technology may generate, its impact on the autonomy of the bereaved, the possible individualization of bereavement, the ethical implications in relation to the deceased and the potential therapeutic uses of this digital tool. We conclude by stressing the need for a non-essentialist perspective when studying the relationship between AI and grief, addressing the mediational role of deathbots not for what they supposedly are but for what they allow us to do.
Ever since Joseph Weizenbaum devised ELIZA, a chatbot that simulated a psychotherapist, in 1966 (Natale, 2019), the use of AI-based technology has spread both to everyday practices — through conversational chatbots designed to provide social interactions and companionship — and to the field of mental health — in psychological assessment and intervention (Fitzpatrick et al., 2017) and in end-of-life care (Utami et al., 2017). Following this trend, there are currently a number of AI-based projects dedicated to the development of so-called griefbots. Specifically, griefbots, deathbots or thanabots are chatbots based on the digital footprint of the deceased — left through social networks, emails or text messages — that offer mourners the possibility of ‘talking’ to their loved ones after their death. One of the initial projects was Replika, developed by Eugenia Kuyda after the death of her friend Roman Mazurenko in a car accident in 2015. Inspired by the Black Mirror episode ‘Be Right Back’ (Brooker & Harris, 2013) and using over 8,000 lines of text messages from her friend’s conversations with different people, Kuyda designed a deathbot of Roman that his friends found uncannily convincing (Elder, 2020). Another example is Project December, developed by Jason Rohrer on the basis of OpenAI’s GPT-2 and GPT-3 models, which allows users to train their own chatbots to develop speech patterns based on real people, both living and deceased (Fagone, 2021).
The possibility offered by deathbots to have conversations based on the digital footprint of our loved ones seems to go against the traditional notion of grief. Strongly influenced by Freud (1917) and popularized today, this notion suggests that a healthy response to loss implies letting go of our emotional bonds with our loved ones so that we can move on with our lives by establishing new bonds with other people. In line with this approach, some of Roman’s friends pointed out that the deathbot developed by Kuyda could leave people ‘mired in grief but drawn back into the pseudo-relationship, unable to move on but unfulfilled by the facsimile of a loved one’ (Elder, 2020, p. 74). However, more recent models of grief, such as the continuing bonds model (Klass et al., 1996), argue that, rather than severing ties with the deceased and moving on, grieving involves rebuilding affective ties with our loved ones and moving forward with an ongoing connection to those no longer living. This sense of connection is relatively common, taking the form of imagined dialogues with the deceased, for example, by visiting their grave, writing messages on their online memorial or imagining their reactions to various events or actions in our lives (Stroebe et al., 1996). Based on this approach, recent studies have reported benefits in using deathbots to cope with loss (Galvão et al., 2021; Xygkou et al., 2023). Specifically, the use of these artefacts would appear to serve as an initial aid in coping with early grief by allowing us to regulate our emotions through conversational practices with a familiar ‘voice’ willing to listen to us without judgement (Krueger & Osler, 2022; Trothen, 2022).
As a new thanatechnology (Sofka, 1997) — or technology applied to grief — still in its early stages, deathbots are artefacts that raise many questions. While some authors have addressed the possible ethical and psychological implications of this new technology (Buben, 2015; Öhman & Floridi, 2018; Savin-Baden, 2022; Stokes, 2021), very few works have empirically addressed these questions (Galvão et al., 2021; Xygkou et al., 2023). Based on previous work (Jiménez-Alonso & Brescó, 2022a, 2023a, 2023b, 2023c) and existing literature on this topic, this article addresses some of the most heavily debated issues in recent studies, such as the illusion of reality effect that this technology may generate, its possible impact on the autonomy of the bereaved, the individualization of grief, the ethical implications for the deceased and the potential therapeutic uses of the tool. These questions are preceded by the theoretical framework (see next section) based on the concept of mediation from cultural psychology (Brescó et al., 2019) and on the notion of continuing bonds in grief (Klass et al., 1996).
Theoretical framework: mediation, grief and continuing bonds
According to cultural psychology, human action is characterized by an irreducible tension between individuals and different mediational tools that simultaneously constrain and enable experience (Wertsch, 1998). Under this approach, grief, far from being an exclusively intrapsychic phenomenon (Neimeyer et al., 2014), is a process that unfolds together with other people and through different cultural practices and artefacts that allow us to maintain affective bonds with our loved ones and regulate our emotions (Krueger & Osler, 2022). From graves, memorials (Wagoner & Brescó, 2021) and letter-writing to photographs (Jiménez-Alonso & Brescó, 2022b), Facebook and recent deathbots, humans have, throughout history, used a wide range of artefacts to cope with grief through imaginary conversations with the dead, thus making them socially present (Walter, 2015). In this regard, Despret (2015) notes the relatively common practice of mourners addressing their loved ones as if they were present. Graves, for example, can have a mediational function by facilitating an imaginary dialogue in which, based on past memories, the mourner imagines the responses of the deceased (Josephs, 1998). In the case of the internet and social networks, being able to visit the old profile pages of the deceased contributes to the sense of a certain permanent digital presence in many users, as if the dead were there listening behind the screen and receiving the message (Kasket, 2012a). As reported by Kasket (2012a), the bond resulting from the digital presence of the deceased leads some users to claim that the deletion of their profile would amount to losing the last bit of their loved ones.
In these cases, we see how this ‘as if’ conduct ends up transforming the mourners’ own psychological reality (as is) such that, according to Norlock (2017), imaginal relationships with the dead ‘are meaningful even when they are no longer reciprocal’ (p. 342). Here it is important to note, as Despret (2015) reminds us, that mourners do not seem to be concerned with whether or not communication with their loved ones is real. What is important is the psychological function that these imaginal dialogues play in maintaining affective ties with their loved ones, offering them the ‘real’ feeling of sharing their emotions with the deceased and of regulating those emotions through them. However, unlike having an internal conversation with the deceased at the graveside or through Facebook, the bidirectional conversation enabled by deathbots, by simulating the speech pattern of our loved ones, could affect our sense of connection with the deceased and, therefore, our overall grieving process (Jiménez-Alonso & Brescó, 2023b). For one thing, this bidirectionality or reciprocity (Krueger & Osler, 2022) implies a greater agency on the part of this technology in initiating conversation or participating in dialogue. Thus, unlike imagined responses in a graveside conversation, the text responses generated by the deathbot are no longer necessarily driven by the imagination or needs of the mourner. Furthermore, the fact that the conversation comes into being through immediate and tangible written responses adds a certain materiality, thereby contributing to a certain illusion of reality (Kasket, 2012b).
This, in turn, raises ethical and psychological questions related to the potential deception or illusion of reality generated by deathbots, as well as their impact on the autonomy of mourners, which could be affected by their emotional dependence on the bot of the deceased loved one. Equally problematic is the individual use of this technology, which could socially isolate the mourner immersed in constant interaction with the bot. Moreover, the use of the deceased person’s digital footprint as a tool for coping with loss also raises ethical questions regarding the former and their memory. Lastly, the question of the possible therapeutic application of this tool, as well as the appropriateness (or otherwise) of a framework regulating its use, remains open. These questions will be addressed in the following sections.
Illusion of reality and the imitation game
From a Kantian perspective, Huber et al. (2016) argue that any technology that incites deception ought to be condemned. Along these lines, some authors (Sparrow, 2002) oppose nurturing the affective bonds of mourners through digital replicas. Others, such as Stokes (2021), suggest introducing glitches into the deathbots’ code to remind users that they are chatting with an AI-based technology and not with their loved one. Taking into consideration the illusion of reality that deathbots might generate, Ahmad (2016) revisits the Turing test — originally called the imitation game — whereby a conversation between a person and a machine tests the latter’s ability to impersonate a human being. In a gradualist approach to the challenge of simulating interaction with a dead person, Ahmad (2016) envisions text-only deathbots — similar to those currently in existence — as a first step. Pushing the Turing test to the limit, Ahmad (2016) asks to what extent ‘the ultimate version of such a simulacrum would be to interact with a live version of the deceased person’ (p. 402). Thus, in the hypothetical case that an exact copy of the deceased could be made, what would then be the difference from the point of view of the mourner’s experience? While the aforementioned episode of the Black Mirror series ‘Be Right Back’ (Brooker & Harris, 2013) suggests that replacing the deceased with a replica does not work in the eyes of the mourner because it is not, after all, a perfect copy, Brinkmann (2018) offers an alternative answer. According to this author, our emotional attachment to both human beings and things is based on our extraordinary sense of concreteness. Therefore, even when faced with two copies that are identical in all respects, such replicas will never have the same emotional value. In this sense, mourning a loved one reminds us precisely of our love for a unique and irreplaceable person.
In view of the above, we could hypothesize that the very fact that deathbots can never replace the deceased may be what allows mourners to engage in this imitation game, acting as if they were chatting with their loved ones (Jiménez-Alonso & Brescó, 2022a). In line with this idea, Ahmad (2016) argues that, beyond the ability of deathbots to generate an illusion of reality, what matters is the mourners’ experience of interacting with the bot. In this regard, studies with deathbot users (Xygkou et al., 2023) observed the general willingness of mourners to engage in conversation through this application, including suspending their disbelief in the face of inaccuracies and errors made by the bot. As in the case of ELIZA, the first-ever chatbot, users tend to adapt their side of the interaction to help the bot generate comprehensible responses (Natale, 2019).
In sum, as Elder (2020) points out, interaction with deathbots does not require a large dose of illusion of reality on the part of users. Similarly to what happens in the case of graveside dialogues, mourners know that they are not really having a two-way conversation with the deceased, although in both cases they do seek that imaginal dialogue with the loved one. These findings raise interesting implications for the future development of deathbots. In particular, Xygkou et al. (2023) question to what extent their design should focus on developing the emotional connection with the user rather than on perfecting the conversational competence of the bot itself. However, the two-way communication enabled by deathbots still raises questions concerning the extent of the imitation game afforded by this technology and how such a game may end up affecting the imaginal dialogue between the living and the dead. For example, Ahmad (2016) posits the hypothetical case of a child growing up interacting with the bot of a deceased relative. Would the child be able to differentiate between simulation and reality? Furthermore, in our study conducted with different mourners on the hypothetical use of this technology (Jiménez-Alonso & Brescó, 2023a), some participants pointed out that receiving immediate and tangible responses through the bot could leave mourners with less room and agency to imagine and manage their own dialogue — and, therefore, their emotions — with their deceased loved ones.
Emotional dependence and monetization of the digital footprint
As has been pointed out, unlike other thanatechnologies, the bidirectional communication enabled by deathbots takes material form in the immediate responses that users receive from the application. For some authors (Lindemann, 2022), this can generate a constant expectation on the part of the user to receive a response from the bot of their loved one, which could foster a certain emotional dependence on this technology. Imaginal dialogues with the dead through letters, visiting their graves in the cemetery and, to a lesser extent, posting messages on their memorial page on the internet have more or less defined beginnings and endings (Henrickson, 2023). However, the continuous availability and the sense of immediacy afforded by deathbots further contribute to the integration of this technology into the user’s everyday life (Henrickson, 2023). In this regard, Lindemann (2022) warns us about the impact that deathbots can end up having on the autonomy of mourners after they have developed an emotional bond with the bot of the deceased person. This leads some authors (Bassett, 2015) to consider the impact that the sudden deletion of the bot, due to an unexpected technical failure, could have on the mourner, who could experience this sudden absence as a second loss. In this sense, one of the main risks derived from the use of this new technology could be that its users end up hooked on a never-ending conversation driven by a logic that is not necessarily based on therapeutic criteria.
This leads us to consider the difference between the various logics involved in the use of this new AI-based technology, i.e., the difference between what is technologically possible, what is therapeutically beneficial for mourners and what is economically profitable for the growing industry involved in the custody and management of the digital remains of the dead. As Öhman and Floridi (2018) remind us, the economic benefit obtained by these companies is based on the use and monetization of these digital remains, which do not necessarily have to converge with the psychological needs of users during their grieving process. As these authors point out, it is precisely the posthumous interaction between these data and the deathbot user that makes it economically viable to store billions of digital remains aimed at creating a bot of a deceased person. Thus, based on this economic logic, we could imagine different strategies designed to keep users constantly engaged, for example, by sending unsolicited messages or updates from loved ones when users are inactive (Jiménez-Alonso & Brescó, 2023b). To what extent could a user who has developed an emotional dependence on their loved one’s bot ignore those messages? Going one step further, we could imagine the use of the deceased person’s bot for commercial purposes in order to influence mourners or even the possible hacking of the bot by third parties (Lindemann, 2022).
In sum, we can conclude by highlighting the importance of considering not only the technological logic of deathbots — based on constant and immediate two-way communication — but also the economic logic behind this technology. Some specific risks to be considered would be the impact on the user’s autonomy and the possible social isolation fostered by the individual and private use of this technology.
Towards an individualization of grief?
The increasing presence of commemorative memorials on the internet, active profile pages of deceased people or platforms that manage the digital legacy of the dead to deliver personal messages to their survivors has not only expanded the possibilities of communication — both with the living and with the dead — but has also turned mourning into an increasingly visible and everyday practice (Dilmaç, 2018). Thus, while digital technologies have enabled a more personalized way of expressing and sharing the experience of mourning, they have also contributed to socially legitimizing the practice of ‘communicating’ with the dead, either by receiving posthumous messages from the deceased or by sending messages to their memorial pages (Walter et al., 2012). In this sense, both the internet and deathbots imply a temporal and spatial expansion in that they provide a more direct form of communication regardless of time or place (Brubaker et al., 2013). However, there is a difference with regard to the social dimension of both technologies. While there are those who use social networks individually, the bond between mourners and loved ones is generally expressed and shared socially with other users on the same memorial page dedicated to the deceased — giving rise to what Kasket (2012a) calls communal bonds. In contrast, in the case of deathbots, the expression of the bond is confined to the private conversational space between the mourner and the deceased (Jiménez-Alonso & Brescó, 2023b).
This private and individual use of deathbots raises a number of questions. For example, if we assume, as Walter et al. (2012) argue, that social networks have contributed to bringing death out of the private sphere and into the public space of the internet, might deathbots contribute to bringing mourning and death back into the private sphere, encouraging the individual memorialization of loss over its commemoration — literally, communal remembering? Are deathbots symptomatic of an increasingly individualistic society devoid of collective rituals and, therefore, with a greater need for such tools? Lastly, another question Elder (2020) poses is ‘will deathbots induce us to interact with them, both in ways that preclude forming new attachments [. . .] and in ways that keep us turning to the deceased for support when we ought to be reaching out to others in our social network?’ (p. 76). While it may still be too early to answer these questions, the results of Xygkou et al.’s (2023) work with bereaved deathbot users seem to indicate that this technology could help to rebuild the social identity of bereaved people after the loss of their loved one and help them regain their confidence to reconnect with their close social circle or establish new relationships (see also Krueger & Osler, 2022).
Ethical implications for the deceased
As noted, deathbots enable constant personalized communication with the digital replica of the deceased from any place and at any time. In this sense, deathbots or thanabots offer non-stop availability, placing communication entirely at our service, without the risk of being judged or rejected and without having to reciprocate the permanent attention that this technology provides (Xygkou et al., 2023). As Henrickson (2023) points out: ‘the thanabot exists only to serve its user. When we text our friends and family, we may reasonably expect delays, forgotten responses, and updates with unexpectedly sudden personal news. The thanabot cannot provide this; a thanabot’s corresponding deceased person has essentially been reduced to a servile role’ (p. 959).
This observation leads us to consider the possible ethical issues arising from this technology, not only in relation to mourners and their grieving process but also in relation to the deceased. Following Ahmad (2016), it is also worth considering the extent to which the private use of the digital replica of the dead could end up affecting our sense of loss or whether it is ethical to use the deceased — and their memory — for our own needs. In this regard, Brinkmann (2018) points out the tendency to perceive the death of our loved ones solely in light of the impact it has on us and our grieving process. However, as he reminds us, ‘grief is not just about the fact that I lose someone, but also about the more fundamental fact that someone no longer exists’ (Brinkmann, 2018, p. 182, italics in original). For their part, Bosch et al. (2022) refer to the bot of the deceased person as a ‘diminished other’ and criticize the ‘one-way relationship with a serviceable other designed to fulfil the user and intended to become a docile self-object, which is not a real other’ (p. 38).
Adopting a Kantian stance towards such technologies, Buben (2015) and Stokes (2021) differentiate between what they call replacement and recollection when it comes to the memorialization of the dead. While recollection involves remembering the deceased as unique and irreplaceable beings, and thus remaining aware of the irretrievable loss, replacement uses the memory of the deceased as a means to our own ends. Thus, from the perspective of replacement, the deathbot becomes a device that, instead of respecting the memory of the dead, reduces them to a resource to alleviate our grief by filling the void left after their departure. In this line, some of the mourners we interviewed (Jiménez-Alonso & Brescó, in press) pointed out that the inaccuracies committed by the bots during their interaction with users could end up altering the memory of their deceased loved ones. In contrast, Krueger and Osler (2022) argue that the risk of such replacement of the dead through the deathbot is, in practice, unlikely. Firstly, these authors point out that, although deathbots enable a two-way conversation, users know and feel that it is not really a genuine dialogue, as it is based on thin reciprocity and asymmetric exchange. This leads them to their second, more theoretical argument, based on the notion of fictionalism borrowed from the philosophy of mind. Thus, just as it is useful for us to attribute mental entities to others without being able to verify their existence, interacting with a bot would be useful in order to maintain our emotional bonds with the deceased person, albeit knowing that we are acting as if we are communicating with that person.
Another important ethical issue arising from deathbots is the integrity of the deceased’s memory. In that regard, Lindemann (2022) points out that while we assume that we should respect the inert body (or what remains of it) of a deceased person, we rarely think that a person’s ‘informational body’, composed of all the digital information left behind after their death, is also part of their identity and should be treated with equal dignity. In the same vein, Öhman and Floridi (2018) argue that in order to avoid commercialization or commodification of the digital footprint of deceased persons, their digital remains should be seen ‘as something constitutive of one’s personhood’ (p. 319). Elder (2020) highlights two problems regarding the integrity of the digital memory of the deceased. On the one hand, she argues that the digital footprint used in deathbots excludes the deceased’s conversations and interactions in the offline world, probably with people in their closest circle. By leaving no digital footprint, such conversations are not taken into account when designing the deceased’s bot, and the integrity of the deceased’s memory is thus not preserved. On the other hand, this author warns us that the use of the digital footprint could reveal the multiple public selves of the deceased shown in conversations in different digital fora throughout their life, thus revealing facets that family members were unaware of, and that the deceased would have likely preferred not to disclose.
These questions about the use of our digital legacy by deathbots bring to the fore the significance of something as apparently ephemeral as the digital footprint we leave behind in our everyday lives (Lagerkvist, 2017). As various authors indicate (Savin-Baden & Burden, 2019), this scenario is leading more and more people to take pre-mortem measures, such as the making of digital wills, in order to manage their digital presence after their departure.
Regulatory framework and possible therapeutic uses
While the development of deathbots — and other chatbots in the field of mental health — seems to be a growing trend, mourners’ conversations with the bot of their deceased loved ones are not yet a widespread practice. In addition to the questions and imaginaries that this new technology mobilizes — about the very notion of death, mourning and the limits of what we consider to be ‘authentically’ human — deathbots raise important ethical, legal and therapeutic issues. On the one hand, there are serious legal implications regarding the preservation, use and privacy of data pertaining to deceased persons (Savin-Baden, 2022), which have sometimes led to the abandonment of ongoing projects (see Fussell, 2016). On the other hand, numerous questions arise regarding the framework that should regulate the therapeutic use of this technology, for example, to prevent constant communication with the bot from potentially fostering chronic coping strategies of denial on the part of the bereaved. In light of these problems, some authors propose different ideas for regulating the use of this technology.
Lindemann (2022) proposes a regulatory framework for the distribution and use of deathbots to prevent this technology from compromising users’ autonomy throughout the grieving process. According to this regulatory framework, deathbots should be legally classified as a medical device and used only under psychological or psychiatric supervision. Furthermore, this author is in favour of limiting the potential use of deathbots to mourners diagnosed with Prolonged Grief Disorder (PGD), a disorder recently incorporated into the Diagnostic and Statistical Manual of Mental Disorders (DSM-5-TR) characterized by ‘intense and persistent grief that causes problems and interferes with daily life’ (Appelbaum & Yousif, 2022). Within this therapeutic framework, the patient should be informed about the design and functioning of the chatbot and its possible implications, thus taking co-responsibility for the consequences of its use.
Öhman and Floridi (2018) focus more on respect for the image of the deceased. They argue that the implementation of deathbots should be accompanied by an ethical commitment on the part of the AI companies developing this technology, ensuring three main conditions. Firstly, people who consent to the use of their digital footprint after death should be informed about how their posthumous data will be used and displayed. Secondly, the bot of the deceased person should not be radically different from the person it mimics or simulates. Thirdly, bots should be designed only on the basis of the informed consent of the person before death and not on the basis of consent given by their survivors, such as friends or relatives. Lastly, participants in the Galvão et al. (2021) study on digital immortality agreed on the following ethical principles that should govern the functioning of technologies such as deathbots: ‘It is necessary for the system to be flexible, capable of changing over time, respect the memory of deceased ones, allow interactions between the virtualized dead and the living, and especially that it aligns and has respect for human values, both of its virtualized entities and its users’ (p. 18).
Discussion and conclusions
Throughout history, different technological artefacts have mediated the grieving process and shaped the continuing bonds between the living and the dead (Klass et al., 1996). Today, new digital technologies are providing new spaces that expand the way we experience those continuing ties with our loved ones. As we have seen, AI-based applications, such as deathbots, imply an evident temporal and spatial expansion by enabling a two-way dialogue with the bot of the deceased at any time and from any place. Unlike the asynchrony and the more social dimension of social networks, the individual and immediate interaction that the deathbot allows could contribute to individualizing the grieving process and creating a certain emotional dependence on the tool.
It should, however, be stressed that the mediational role of deathbots cannot be studied by solely considering the characteristics of this technology and its potential effects on mourners. Mediation does not imply a unidirectional cause–effect relationship (see Christensen & Sandvik, 2014) insofar as there is a distributed agency between individuals and the possibilities of action offered by technologies in each sociocultural context (Wertsch, 1998). Mourners do not react mechanically to loss (Brinkmann, 2020) and grief is not a passive experience. As Krueger and Osler (2022) point out, mourners ‘use rituals, practices, resources, and relationships to work through and with our grief’ (p. 226). Thus, although deathbots have a number of distinctive characteristics — for example, two-way dialogue and the simulation of the deceased’s speech patterns — this does not imply that these characteristics affect all users equally. Affordances derived from the materiality and design of technological artefacts provide a set of possibilities for action that must be studied considering the network of distributed agency in which their specific use by each mourner is inscribed. In the case of deathbots, we must not only consider the way in which each mourner negotiates the meaning of the conversation held with the deceased’s bot but also the simultaneous use that they may make of other available tools — both online and offline — as well as the ways of understanding death, mourning and the very use of such technologies in a certain social context.
According to Henrickson (2023), ‘our uses of technologies are informed by what we are told they can do, what we think they can do, and what we want them to do’ (p. 958). In other words, when studying the use of deathbots we must consider the discourses, expectations and imaginaries that this technology brings about in our society. One example is the increasingly blurred human–computer divide, as advances in AI enable ever more human-like behaviours in different computing tools (Henrickson, 2023). In this sense, it is to be expected that social conceptions and uses of different thanatechnologies will change depending on each historical context. As observed by some of our interviewees in Jiménez-Alonso and Brescó (2023a), although the use of technologies, such as social networks, in coping with grief had initially raised suspicions, the practice of directly addressing the dead on the internet seems quite normal today.
All the foregoing leads us to highlight the need for a non-essentialist perspective (Vallès-Peris & Domènech, 2020) when studying the relationship between AI and grief, emphasizing the relational and contextual use of deathbots. Thus, beyond studying the features of the tool itself, this approach implies considering the experience of the mourners and their particular appropriation of the bot through the way in which it is used in each specific context. In sum, what we propose for future studies is to address the mediational role of deathbots not for what they supposedly are, but for the possibilities, constraints — and risks — this tool entails for mourners, the dead and the culture of care we want to develop in our societies.
