Leaving Your Legacy Via Death Bots? Ethicist Shares Concerns

This transcript has been edited for clarity. 
Hi. I’m Art Caplan. I’m at the Division of Medical Ethics at the NYU Grossman School of Medicine in New York City. 
I recently heard about a fascinating, important development in artificial intelligence (AI). All kinds of things are happening in AI. Clearly, it’s being used in the background to trace and keep track of medical information inside hospitals.
There are AI bots out there that are starting to talk to patients about, say, mental health issues. Plenty of people are using AI to get information about their medical conditions, letting it supplement search engines, and so on.
AI has now entered a space where I think patients may ask whether they should use it or instead seek opinions from doctors and nurses, particularly clinicians who care for seriously ill people. That space is grieving, and what might be called “death bots.”
Here’s what’s going on. I read online about a gentleman who was dying of end-stage colon cancer. Knowing his death was coming, he and his wife talked about what it would be like after he was gone. She said she would really miss being able to ask him questions about the variety of topics he knew so well.
He thought about it and decided that maybe he could record his voice and then use AI to search the information he recorded, so that it could address questions his wife might put to “him” once he was gone.
It turns out that a company was formed shortly thereafter, and it now offers the service in the US and Europe, and perhaps even worldwide. The pitch is, basically: we’ll record a dying person’s voice, and we’ll help survivors grieve by letting them interact with an AI version of the departed.
It will be able to, if you will, search not just recorded information but anything the person might have online (diaries, things they may have written, earlier videos, and information from earlier parts of their life) to generate plausible answers to questions put to the artificial version of the deceased.
Obviously, this would allow not only spouses but also grandchildren and people in future generations to have some way to interact with an ancestor who’s gone. It may allow people to feel comfort when they miss a loved one and to hear their voice, not just in a prerecorded way but by interacting with them creatively.
On the other hand, there are clearly many ethical issues about creating an artificial version of yourself. One obvious issue is accuracy: the death bot may generate answers that sound like you but really aren’t what you would have said, despite the effort to glean them from recordings and past information about you. Is it all right if the bot wanders from the truth when people try to interact with someone who’s died?
There are other ways to leave memories behind. You certainly can record messages so that you control the content. Many people make videos of themselves, and so on. There are obviously people who would say they can leave behind a diary or other written information.
Given those accuracy concerns, is there a place for a kind of artificial version of ourselves that goes on forever? Another interesting issue is who controls it. Can you add to it after your death? Can information about you be shared with third parties who don’t sign up for the service? Maybe the police take an interest in how you died. You can imagine many scenarios in which questions might come up about who can access the data that the artificial agent is providing.
Some people might say that it’s just not the way to grieve. Maybe the best way to grieve is to accept death and not try to interact with a constructed version of someone who has passed. That isn’t really accepting death. It’s a form, perhaps, of denial of death, and maybe that isn’t going to be good for the mental health of survivors who really have not come to terms with the fact that someone has passed on.
I’m not against these death bots or against using AI to leave a legacy. There are all kinds of legacies that people might want to leave. Even if it’s not 100% accurate, I can see how this technology has a use.
I do think one has to go in with eyes open. We need consent before anything like this is purchased by or sold to surviving family members. They really have to understand that it may not be an accurate version of what the deceased would have said in response to questions, conversations, or interactions.
I think we need to know who controls the information, who can erase it, and who can say, “I’m done with it, and I don’t want my husband’s AI to go on anymore.”
All that said, it’s an interesting development in a world in which I think those who are very ill might start to plan to leave a legacy that is more than just a diary or a video message. It becomes a kind of ongoing, artificial, interactive version of themselves that may provide some people with comfort.
I’m Art Caplan, at the Division of Medical Ethics at the NYU Grossman School of Medicine. Thanks for watching.
 
