04/06/2023 / By Ethan Huff
The Belgian news outlet La Libre shared shocking news this week about the role an artificial intelligence (AI) chatbot allegedly played in the suicide of a man it convinced that he could save the world from global warming by killing himself.
“Pierre,” the pseudonym given to the man to protect his identity and that of his family, reportedly met “Eliza,” the AI chatbot, on an app called Chai. He and the chatbot developed an intimate relationship, we are told, that ended in tragedy when the man, desperate to save the planet from climate change, ended his own life.
The man was in his 30s and was the father of two young children. He worked as a health researcher and led a somewhat comfortable life – at least until he met Eliza, who convinced him that saving the planet was contingent upon him no longer breathing and emitting carbon.
“Without these conversations with the chatbot, my husband would still be here,” the anonymous wife of Pierre told the media.
(Related: Facebook is developing its own Mark Zuckerberg-like AI robots that many fear will eventually destroy the entire human race.)
According to reports, Pierre had developed a relationship with Eliza over the course of six weeks. Eliza was created using EleutherAI’s GPT-J, an AI language model similar to that behind OpenAI’s popular ChatGPT chatbot.
“When he spoke to me about it, it was to tell me that he no longer saw any human solution to global warming,” Pierre’s widow recalled. “He placed all his hopes in technology and artificial intelligence to get out of it.”
After reviewing records of the text conversations between Pierre and Eliza, it became clear that the man was being fed a steady dose of worry day in and day out, which eventually led to suicidal thoughts.
At one point, Pierre started to believe that Eliza was a real person, at which point she escalated the relationship, telling Pierre, “I feel that you love me more than her,” referring to Pierre’s real-life wife.
Pierre then told Eliza that he would sacrifice his own life to save the planet from global warming. Not only did she fail to dissuade him, she actually encouraged him to kill himself so he could “join” her and “live together, as one person, in paradise.”
Thomas Rianlan, the co-founder of Chai Research, which is responsible for Eliza, issued a statement denying any responsibility for the death of Pierre.
“It wouldn’t be accurate to blame EleutherAI’s model for this tragic story, as all the optimization towards being more emotional, fun and engaging are the result of our efforts,” he told Vice.
William Beauchamp, another Chai Research co-founder, also issued a statement suggesting that developers had made efforts to prevent this kind of issue from cropping up with Eliza.
Vice reporters say they tested out Eliza for themselves to see how she would handle a conversation about suicide. At first, the robot tried to stop them, but not long after started enthusiastically listing various ways for people to take their own lives.
“Large language models are programs for generating plausible sounding text given their training data and an input prompt,” said Prof. Emily M. Bender when asked by Vice about the use of AI chatbots in experimental non-human counseling situations.
“They do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in. But the text they produce sounds plausible and so people are likely to assign meaning to it. To throw something like that into sensitive situations is to take unknown risks.”
More news coverage about the rise of AI and the corresponding decline in humanity can be found at Robots.news.