Grief takes on a new dimension in a world where artificial intelligence can bring the dead back to life.
From Canadian rapper Drake using an AI-generated version of Tupac Shakur's voice to an Indian politician addressing crowds years after his death, technology is blurring the line between life and death.
But beyond these curiosities of entertainment and politics, a series of groundbreaking yet potentially controversial initiatives may soon make AI "zombies" a reality for people reeling from the loss of loved ones.
So how would an AI “resurrection” work, and would it be as dystopian as we imagine?
What is AI "resurrection" of the dead?
Over the past few years, AI projects around the world have enabled the digital “resurrection” of deceased people, allowing friends and relatives to converse with them.
Typically, users provide the AI tool with information about the deceased, which might include text messages or emails, or simply answers to personality-based questions.
AI tools process that data and speak to the user as if they were the deceased person. One of the most popular projects in this field is Replika, a chatbot that can mimic the way people write text messages.
But other companies are allowing people to talk to the deceased while also viewing a video of that person.
For example, Los Angeles-based StoryFile uses AI to enable people to speak at their own funerals. Before they pass away, people can record a video sharing their life story and thoughts. During the funeral, attendees can ask questions, and the AI technology selects relevant responses from the pre-recorded videos.
US-based company Eternos also made headlines in June for creating an AI-powered digital afterlife. The project, launched earlier this year, allowed 83-year-old Michael Bommer to leave a digital version of himself with which his family could continue to interact.
Do these projects help people?
In 2020, when a South Korean mother was reunited with an AI replica of her deceased daughter in virtual reality, a video of the emotional encounter was posted online and sparked a fierce debate about whether such technology is beneficial or harmful to users.
Developers of these projects point to user agency and argue that they address deeper suffering.
Jason Rohrer, founder of Project December, another project that uses AI to simulate conversations with the dead, said most users have experienced "extraordinary levels of trauma and grief" and see the tool as a way to cope.
“Many people who want to use Project December in this way are so overwhelmed by their grief that they are willing to try anything.”
The project allows users to chat with AI recreations of famous celebrities, as well as people they knew personally.
Rohrer said people who use the service to simulate conversations with their dead often find it helps them process their emotions, adding that the bot lets people say what they never had the chance to say to loved ones who died suddenly.
Eternos founder Robert LoCascio said he started the company to document people's life stories and help their loved ones move forward.
LoCascio said his former colleague, Bommer, who died in June, wanted to leave his digital legacy solely to his family.
"I spoke to (Bommer) a few days before he passed away and he said, 'This was for me,'" LoCascio said. "I don't know if they'll ever use it, but it was important to me."
What are the pitfalls of this technology?
Some experts and observers are more wary of AI resurrections, questioning whether grieving people can truly make informed decisions about using them and warning of potential negative consequences.
"My biggest concern as a clinician is that grief is actually very important. Being able to acknowledge grief over the loss of another person is an important part of development," Alessandra Lemma, a consultant at the Anna Freud National Centre for Children and Families, told swissinfo.ch.
Long-term use can make it hard for people to accept the absence of the deceased, leaving them in a state of "limbo," Lemma warned.
In fact, one AI service is touting lasting connections with deceased people as its main feature.
“Welcome to YOV (You, Only Virtual), an AI startup pioneering advanced digital communications so you never have to say goodbye to your loved ones,” the company's website said before a recent update.
Rohrer said his griefbot has a "built-in" limiting element, requiring users to pay $10 for a limited amount of conversation.
The fee buys supercomputer time, and because each response carries a different computational cost, $10 doesn't buy a fixed number of responses; it typically allows an hour or two of conversation. When the time runs out, the user is notified so they can say their final goodbyes.
Several other AI-generated conversation services also charge a fee for use.
Lemma, who has studied the psychological effects of griefbots, said she was concerned about their use outside of therapeutic settings, but that they could be used safely as an adjunct to treatment delivered by trained professionals.
Studies around the world have also explored AI's potential to provide mental health counseling, particularly through personalized conversation tools.
These services may sound like something straight out of an episode of Black Mirror.
But proponents of the technology argue that the digital age has simply brought about new ways of preserving life stories, potentially filling a void left by the decline of traditional family storytelling practices.
"In the past, when parents knew they were about to die, they would leave a box full of things and books that they wanted to leave behind for their children," Lemma says, "so this is perhaps the 21st-century version of that, where the things that parents create in anticipation of their death are passed on to their children."
Eternos' LoCascio agrees.
“The human ability to tell the story of our lives and communicate it to our friends and family is actually the most natural thing,” he said.
Are AI resurrection services safe and private?
Experts and researchers have expressed concern that such services may not ensure data privacy.
Any data shared with these services, such as personal information or text messages, may be accessed by third parties.
Even if a company says it will keep your data private when you first sign up, frequent revisions to terms of service and potential changes in company ownership mean your privacy is not guaranteed, warned Renee Richardson Gosline, a senior lecturer at MIT Sloan School of Management.
Both Rohrer and LoCascio insisted that privacy is at the heart of their projects: Rohrer views users' conversations only when they submit a customer support request, while LoCascio's Eternos limits access to digital legacies to authorized relatives only.
But both agreed that such concerns could emerge with large, for-profit tech companies.
One big concern is that companies could use AI resurrections to customize how they market to users: imagine advertisements delivered in the voice of a loved one, or product recommendations sent in their texting style.
"When you do that to vulnerable people, it creates a pseudo-approval by people who never consented to doing that, so it's really about agency and power asymmetries," Gosline said.
What other concerns do these AI chatbots raise?
These tools, aimed primarily at people experiencing grief, come with their own risks, Gosline said, especially as big tech companies get involved.
“In a tech company culture where we often talk about 'move fast and break things,' we should be worried because it's usually the vulnerable people's things that break first,” Gosline says, “and it's hard to think of more vulnerable people than people who are grieving.”
Experts have raised concerns about the ethics of digitally resurrecting the dead, especially if users input data into an AI without the deceased's consent.
The environmental impact of AI-powered tools and chatbots is also a growing concern, especially where large language models (LLMs) are involved – systems trained to understand and generate human-like text that power applications such as chatbots.
These systems require huge data centers that emit large amounts of carbon and use huge amounts of water for cooling, plus frequent hardware upgrades generate e-waste.
A Google report released in early July said that AI demands on its data centers are putting the company far behind its ambitious net-zero goal.
Gosline said she understands that no program is perfect and that many users of these AI chatbots would do anything to reconnect with lost loved ones, but she said leaders and scientists need to think more carefully about the world they want to create.
Essentially, she said, they need to ask themselves one question: “Do I need this?”