Abstract
This letter highlights the emerging practice of employing digital clones of deceased individuals in grief care, addressing both their potential therapeutic benefits and the ethical and legal complexities they raise. While such technologies may offer novel avenues for bereavement support, their adoption may require robust posthumous consent mechanisms and regulatory oversight to ensure respect for dignity and privacy.
Dear Editor,
I commend Khosravi and Azar's systematic review of mental health chatbots 1 and Coghlan et al.'s ethical exploration of conversational AI in mental health applications. 2 Building on these discussions, I highlight an emerging yet underexplored domain: the use of digital clones of the deceased for grief care. While such applications hold promise for addressing bereavement, they also introduce unprecedented ethical and legal challenges. With the advancement of generative AI, new commercial services have rapidly emerged and grown, utilizing conversational AI and deepfake technologies to create digital clones that simulate the speech and behavior of deceased individuals by training on their digital traces—such as emails, social media posts, or voice recordings. 3 These services claim therapeutic benefits, allowing users to converse with a virtual representation of their loved ones to process grief.
However, the creation of digital clones often involves the use of personal data without explicit consent from the deceased, raising concerns about privacy and personality rights. While family members may authorize these services, their decisions might conflict with the deceased's values or preferences. Coghlan et al. discussed the importance of autonomy in AI deployment, a principle that should extend to posthumous contexts. To ensure ethical usage, robust consent mechanisms, such as opt-in systems recorded in wills, may be critical.
Digital clones may also misrepresent the deceased, producing statements or behaviors inconsistent with their beliefs, which echoes concerns raised by Khosravi and Azar regarding chatbot accuracy and reliability, particularly in sensitive contexts like mental health. Such misrepresentations not only harm the dignity of the deceased but also risk emotional harm to grieving individuals, undermining the intended therapeutic purpose. Additionally, there is an urgent need for regulatory oversight. The European Union's AI Act includes transparency requirements for AI-generated content, such as deepfakes. However, these measures alone may be insufficient to protect the rights of the deceased. Drawing on Coghlan et al.'s emphasis on balancing justice and transparency, regulations should combine sector-specific laws addressing digital clones with broader frameworks like the AI Act. As grief care increasingly intersects with AI, it is imperative to ensure that these technologies do more good than harm. Further interdisciplinary research and proactive regulation are necessary to establish ethical boundaries while enabling responsible innovation.
Footnotes
Conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This article was funded by the 2025 Research Fund of the Seoul National University Asia-Pacific Law Institute, donated by the Seoul National University Foundation.
1. Khosravi M and Azar G. Factors influencing patient engagement in mental health chatbots: A thematic analysis of findings from a systematic review of reviews.
2. Coghlan S, Leins K, Sheldrick S, et al. To chat or bot to chat: Ethical issues with using chatbots in mental health.
3. Iwasaki M. Digital cloning of the dead: Exploring the optimal default rule.
