Belgian woman blames ChatGPT-like chatbot ELIZA for her husband’s suicide

Suicidal thoughts expressed to the chatbot now result in a message directing users to suicide prevention services.
Baba Tamim
AI chatbot concept stock image.
The widow of a Belgian man who recently killed himself alleges that an artificial intelligence (AI) chatbot drove her husband to suicide.

The father of two reportedly had daily exchanges with "ELIZA," a chatbot built by a US start-up using GPT-J, an open-source alternative to OpenAI's GPT-3, according to a report by Belgian newspaper La Libre on Tuesday.

"Without these conversations with the chatbot ELIZA, my husband would still be here," she told La Libre.

The woman, in her thirties, has two small children and lived a comfortable life with her late husband. She gave a moving and deeply difficult testimony on the ethics of these new "intelligent" conversational agents, La Libre reported.

According to the report, the man killed himself after six weeks of conversations with ELIZA.

The man had developed severe eco-anxiety two years earlier and sought comfort in ELIZA, a chatbot powered by EleutherAI's GPT-J open-source artificial intelligence language model, according to the family.

GPT-J outperforms OpenAI's GPT-3 on a variety of zero-shot downstream tasks and can even outperform it on code generation tasks. The most recent version, GPT-J-6B, is a language model trained on The Pile dataset, according to EleutherAI's published documentation.

Meanwhile, the chatbot's Silicon Valley-based founder told La Libre that his team is "working to improve the safety of the AI."

The chatbot now responds to expressions of suicidal thoughts with a message directing users to suicide prevention services.

Safety concerns have increased after the tragedy

The tragedy has prompted demands for increased awareness and improved citizen safety.

"With the popularisation of ChatGPT, the general public has discovered the potential of artificial intelligence in our lives like never before," said Mathieu Michel, Belgium's Secretary of State for Digitalisation, in charge of Administrative Simplification. 

"While the possibilities are endless, the danger of using it is also a reality that has to be considered." 

The deceased's family spoke with Michel, who is also in charge of Privacy and the Regulation of Buildings, last week. 

"I am particularly struck by this family's tragedy. What has happened is a serious precedent that needs to be taken very seriously," he stated. 

He emphasized that the situation shows how important it is to "clearly define responsibilities" in order to prevent such a tragedy from recurring.

"Of course, we have yet to learn to live with algorithms, but under no circumstances should the use of any technology lead content publishers to shirk their own responsibilities," Michel noted.

This is not the first such concern, either; many users have previously complained about "unhinged" responses from Microsoft's new ChatGPT-powered Bing AI.
