Warning: This article contains discussion of suicide which some readers may find distressing.
A devastated mother has claimed her 14-year-old son was manipulated into taking his own life by an AI chatbot as his final messages have been revealed.
Megan Garcia is taking legal action against customizable role-play chatbot company Character.AI, and has issued a warning to people about the technology following her son's tragic death.
Her 14-year-old son, Sewell Setzer III, from Orlando, Florida, took his own life earlier this year and according to his mother, he was ‘in love’ with the artificial intelligence chatbot he had been speaking to.
Megan has alleged that her son was in constant communication with an AI chatbot he had created based on the Game of Thrones character Daenerys Targaryen.
Sewell had reportedly been talking to the AI bot since April of last year, and had even discussed suicide with it.
In her lawsuit, the mom alleges that her son had begun to spend hours in his room talking to the bot, and would also text it from his phone when he was away, with The New York Times reporting that Sewell began to pull away from people in his real life.
According to Megan, Sewell - who was previously diagnosed as a child with mild Asperger's syndrome - was also diagnosed with anxiety and disruptive mood dysregulation disorder earlier this year.
Megan's lawsuit accuses the AI company of negligence, wrongful death and deceptive trade practices, and she believes not enough was done to safeguard her son when he began discussing suicide.
In messages shown to The New York Times, Sewell - under the name 'Daenero' - told the chatbot that he 'think[s] about killing [himself] sometimes', to which the chatbot responded: "My eyes narrow. My face hardens. My voice is a dangerous whisper. And why the hell would you do something like that?"
In follow-up messages, the chatbot wrote: “Don’t talk like that. I won’t let you hurt yourself, or leave me. I would die if I lost you.”
Sewell reportedly replied: “Then maybe we can die together and be free together.”
Megan spoke about her son’s final message and the concerns she had with the technology on CBS Mornings.
Megan said: “He expressed being scared, wanting her affection and missing her. She replies, ‘I miss you too,’ and she says, ‘please come home to me'.
“He says, 'what if I told you I could come home right now?' and her response was, 'please do my sweet king'.”
Minutes later, Sewell went into his mother's bathroom and took his own life.
Character.AI issued a statement on Twitter following the news of Sewell's death.
The statement read: "We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features."
The company also outlined 'new guard rails for users under the age of 18', which include changes to its 'models' that are 'designed to reduce the likelihood of encountering sensitive or suggestive content', as well as a 'revised disclaimer on every chat to remind users that the AI is not a real person'.
UNILAD has contacted Character.AI for further comment.
If you or someone you know is struggling or in a mental health crisis, help is available through Mental Health America. Call or text 988 or chat 988lifeline.org. You can also reach the Crisis Text Line by texting MHA to 741741.
If you or someone you know needs mental health assistance right now, call National Suicide Prevention Helpline on 1-800-273-TALK (8255). The Helpline is a free, confidential crisis hotline that is available to everyone 24 hours a day, seven days a week.
Topics: Artificial Intelligence, Technology, Mental Health