
This editorial appears in the February 2025 issue of the American Journal of Bioethics.
In June 2024, NPR News reported the story of Sun Kai, who created an AI-driven avatar of his dead mother (voice, image, and likeness) with which he converses daily. Another company, Super Brain, offers a similar service, which it calls a “resurrection” service. These are but two examples of a burgeoning industry, driven by advances in large language models and generative artificial intelligence.
In their target article in this issue, Iglesias et al. consider whether these “digital doppelgängers” could plausibly help to meet some of the aims, or bring about some of the goods, associated with human lifespan extension. Their thoughtful discussion of this topic highlights, among other things, the importance of identifying and elucidating the underlying reasons why an individual or their loved one might desire lifespan extension. In particular, the authors take seriously the possibility that the underlying reason for caring about lifespan extension is not that it extends one’s biological life but rather that it enables one’s self or person to continue on through time. The relevant insight this brings out is that digital doppelgängers may help us achieve some of the goods of lifespan extension, without themselves constituting a form of lifespan extension, if they can extend one’s person-span.
Rather than addressing the controversial question of whether a digital doppelgänger could ever really constitute an extension of one’s person, our focus will be on the authors’ related, and seemingly less controversial, claim that a digital doppelgänger may secure some of the goods associated with lifespan extension regardless of whether it extends one’s person-span. In particular, Iglesias et al. suggest that a digital doppelgänger might secure the satisfaction of aims “associated with one’s life projects, legacy, or impact, and certain aspects of one’s interpersonal relationships”.
Considering the relational goods associated with lifespan extension leads the authors to reflect on the potential value of digital doppelgängers as a means of supporting those experiencing bereavement. Digital doppelgängers of this kind, colloquially known as “grief bots,” are large language models (LLMs) trained on recordings of texts, interviews, and conversations with a deceased individual (gathered prior to the individual’s death) to function as chatbots or AI voices that can interact with bereaved loved ones in a manner that resembles the deceased person’s own.
Grief bots have already been, and continue to be, developed for, marketed to, and used by the bereaved. Although this has led philosophers, bioethicists, and clinicians working with grievers to begin to consider the potential value and harms of grief bots, the significance of their use and (potential) wider uptake remains an open question. We suggest that there are important and unappreciated reasons why this question may not have a definitive or satisfactory answer.
A world in which the bereaved typically engage with grief bots would be a world with different grievers than ours. As one of us has argued, grief is a constructive process in that it is through the process of grieving that we come to understand—and also to partially determine—what is lost. Consequently, a world in which grief bots are widely used would be one in which we understand, conceptualize, and relate to the losses associated with bereavement differently. That is to say: a characteristic feature of human life—both the losses that we experience and our relationship to them—would be transformed by the widespread use of grief bots. And so, in turn, would we.
Over the last decade, significant attention has been paid to the challenges that transformative experiences pose for rational choice. Transformative experiences challenge standard decision-making procedures because we cannot know, in advance of having them, what they will be like or how they might change us, our preferences, or our values. But this attention has been focused on transformations at the level of the individual. The development and widespread uptake of any new technology that can change how humans, as a species, understand and relate to what they value poses an analogous challenge that deserves recognition: once humans, and human life, are changed in transformative ways by our use of technologies (such as infinitely extended digital doppelgängers of ourselves and our loved ones), the standards by which we will evaluate the significance of the changes we have undergone may differ from the standards by which we would evaluate them now. And, crucially, we cannot know in advance how we (and our evaluative standards) will be different. That is to say: when trying to evaluate the significance of (potentially) transformative technologies like digital doppelgängers or grief bots, merely weighing expected utility will be insufficient.
In light of our discussion, we end by emphasizing the importance of recognizing that how we (not just bioethicists, but humans in general) understand the relationship between digital doppelgängers and the persons they are based on matters a great deal. Not only will it influence whether, how, and to what extent grief bots are engaged with, but it also has the potential to shape how grievers construct, understand, and relate to loss. That is to say: how bioethicists, those developing and marketing grief bots, and society in general talk about and view the relationship between digital doppelgängers and the individuals they are based on has important consequences for how grief and human lives transform over time.