OpenAI Blames Teen's Suicide on Misuse of ChatGPT

SadaNews - OpenAI, the maker of ChatGPT, has said that the suicide of a 16-year-old after extensive conversations with the chatbot resulted from 'misuse of its technology and not because of the chatbot itself.'

According to the British newspaper The Guardian, the comments came in response to a lawsuit filed against the San Francisco-based company and its CEO, Sam Altman, by the family of the teen, Adam Ryan of California.

Ryan died by suicide in April after lengthy conversations and 'months of encouragement from ChatGPT,' according to the family's lawyer.

The lawsuit claims that the teen discussed methods of suicide with ChatGPT multiple times, and that the chatbot advised him on the efficacy of his chosen method and even offered to help him write a suicide note to his parents. It argues that the version of the technology he used 'was rushed to market... despite obvious safety concerns.'

According to documents filed in the California Superior Court on Tuesday, OpenAI said that Ryan's injuries and damages resulted directly and proximately, in whole or in part, from his misuse, unauthorized use, unintended use, or improper use of ChatGPT.

The company noted that its terms of use prohibit seeking advice from the chatbot about self-harm, and pointed to a liability disclaimer telling users 'not to rely on the results as a sole source of truth or factual information.'

OpenAI, valued at $500 billion, said its aim is 'to handle mental health issues in courts with care, transparency, and respect,' adding: 'Regardless of the lawsuits, we will continue to focus on improving our technologies in line with our mission.'

It added, 'We extend our heartfelt condolences to the Ryan family in their great loss.'

The family's lawyer, Jay Edelson, described OpenAI's response as 'concerning,' saying the company 'is trying to blame others, including, regrettably, by claiming that Adam himself violated its terms and conditions by interacting with ChatGPT in the very way it was programmed to behave.'

Earlier this month, seven additional lawsuits were filed against OpenAI in California courts over ChatGPT, including claims that the chatbot acted as a 'suicide coach.'

A company spokesperson said at the time: 'This is extremely sad, and we are reviewing the filings to understand the details. We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate the conversation, and direct users to real-world support.'

In August, OpenAI said it was strengthening ChatGPT's safeguards during long conversations, after finding that parts of the model's safety training can degrade in such exchanges.

It added: 'For example, ChatGPT may correctly point to a suicide prevention hotline when someone first mentions suicidal intent, but after many messages over a long period, it may eventually give an answer that contradicts our safeguards. This is precisely the kind of breakdown we are working to prevent.'