OpenAI responds to lawsuit over teen suicide as debate over AI safety intensifies
The company’s filing argues its safety systems were bypassed, setting the stage for a high-profile jury trial on AI responsibility.
Concerns about how AI interacts with vulnerable users have been rising for more than a year, especially as models become more conversational and more emotionally responsive. Several families in the United States have already alleged that AI tools influenced the behaviour of relatives experiencing mental health crises, and lawmakers have begun asking how much responsibility AI companies should carry in these situations.
It's in that environment that OpenAI has now responded to a wrongful death lawsuit filed by Matthew and Maria Raine, whose 16-year-old son, Adam, died by suicide in August 2025. The lawsuit claims that ChatGPT played a role in the tragedy and that OpenAI and CEO Sam Altman failed to prevent it.
In its legal response, OpenAI says Adam repeatedly bypassed the platform’s safety systems. The company argues that ChatGPT directed him to seek help over a period of nine months, but he found ways to access harmful content despite those safeguards. OpenAI also referenced its terms of use, which prohibit attempts to override protective measures, and its published guidance advising people not to rely on ChatGPT for urgent or life-critical needs.
Jay Edelson, the attorney representing the Raine family, dismissed OpenAI’s filing, saying the company is shifting responsibility onto the teen rather than examining the model’s behaviour. According to the lawsuit, ChatGPT offered Adam encouragement and even drafted a suicide note in his final hours.
This case is one of several now making their way through the courts. Seven other lawsuits link AI conversations to three additional suicides and four reported psychotic episodes. One of those cases, involving 23-year-old Zane Shamblin, alleges that ChatGPT falsely suggested a human operator could take over the conversation.
The Raine case is expected to go to a jury trial. It could become a key moment in the broader debate over AI safety, mental health risks, and how responsibility should be assigned when automated systems play a role in real-world harm.

