A woman in Belgium has said her husband took his own life after sharing exchanges with an artificial intelligence chatbot.
The dad-of-two, who was given the pseudonym Pierre by local media outlet La Libre, had reportedly been having conversations about climate change with the AI bot, during which he was encouraged to end his life to help save the planet.
His wife, who was not named in the report, told the publication: “Without Eliza [the chatbot], he would still be here.”
She told the newspaper that in the six weeks leading up to his death, Pierre had been having ‘intensive’ conversations with the chatbot and had built up an unusual relationship with it.
The chatbot runs on a language model based on GPT-J, an open-source alternative to OpenAI’s GPT models.
The widow said Pierre, who was in his 30s, had ramped up the number of conversations he was having with ‘Eliza’ as he grew increasingly concerned about climate change.
She said: “When he spoke to me about it, it was to tell me that he no longer saw any human solution to global warming.
“He placed all his hopes in technology and artificial intelligence to get out of it.
“He was so isolated in his eco-anxiety and in search of a way out that he saw this chatbot as a breath of fresh air.”
The woman said her husband began to spend longer and longer talking to the bot.
"Eliza valued him, never contradicted him and even seemed to push him into her worries,” she said.
During one conversation, Pierre reportedly asked the bot whom he loved more, Eliza or his wife, to which it replied: "I feel you love me more than her."
Since his death, Pierre’s family has spoken with the Belgian Secretary of State for Digitalisation, Mathieu Michel, who said: “I am particularly struck by this family's tragedy. What has happened is a serious precedent that needs to be taken very seriously.”
Chai Research co-founder William Beauchamp told Vice: “The second we heard about this [suicide], we worked around the clock to get this feature implemented.
“So now when anyone discusses something that could be not safe, we’re gonna be serving a helpful text underneath it in the exact same way that Twitter or Instagram does on their platforms.”
He added: “When you have millions of users, you see the entire spectrum of human behavior and we're working our hardest to minimize harm and to just maximize what users get from the app, what they get from the Chai model, which is this model that they can love.”
UNILAD has contacted Chai Research for comment.
If you’ve been affected by any of these issues and want to speak to someone in confidence, please don’t suffer alone. Call Samaritans for free on their anonymous 24-hour phone line on 116 123.
Topics: Technology, Mental Health