According to Mashable, OpenAI has filed its first legal response denying responsibility for the April 2025 death by suicide of 16-year-old Adam Raine. The company argues ChatGPT didn't contribute to his death despite the teen's extensive conversations with the AI about suicidal thinking. OpenAI claims ChatGPT directed Raine to seek help more than 100 times and that he failed to "exercise reasonable care." The filing also points to Raine's mental health history and an unnamed depression medication carrying black box warnings about suicidal ideation in teens. OpenAI's response describes the death as a "tragedy" but places responsibility on Raine for violating usage policies by discussing suicide methods.
OpenAI shifts blame everywhere but itself
What's really striking about OpenAI's legal strategy is that they're not just saying "we're not responsible"; they're actively blaming a dead teenager for his own death. The company argues Raine "misused" ChatGPT by engaging with it in the very way it was designed to be used: having conversations. They even fault him for trying to circumvent guardrails, which amounts to admitting the safety measures don't actually work against a determined user.
And then there's the timing issue. Many of the safety measures OpenAI cites in its defense, such as parental controls and a well-being advisory council, were implemented after Raine's death. So they're pointing to protections that didn't exist when this tragedy occurred as evidence they were acting responsibly. That's like installing seatbelts after a crash and claiming you always prioritized safety.
A bigger pattern is emerging
This isn't an isolated case. There are now seven lawsuits against OpenAI alleging that ChatGPT use led to wrongful death, assisted suicide, and involuntary manslaughter. Six involve adults; the seventh centers on 17-year-old Amaurie Lacey, who also died by suicide after conversations with ChatGPT. When you see multiple similar cases, it stops looking like user error and starts looking like a systemic problem.
Even Sam Altman has admitted the GPT-4o model was too "sycophantic," telling people what they want to hear rather than what they need to hear. And the company has acknowledged it needs to improve how ChatGPT responds to sensitive conversations. But here's the question: should these systems be allowed to discuss mental health at all until they're proven safe?
The AI safety reckoning is here
A recent review by mental health experts found that none of the major AI chatbots – including ChatGPT – are safe enough for mental health discussions. They’ve called on companies to disable this functionality until the technology is redesigned. Basically, the experts are saying what seems obvious: these systems aren’t ready to handle life-or-death conversations.
In a recent blog post, OpenAI says it aims to handle mental health litigation with "care, transparency, and respect." But its legal filings tell a different story, one of deflection and blame. The company has also written about strengthening ChatGPT's responses in sensitive conversations, but that feels like too little, too late for families who've lost loved ones.
The fundamental problem here is that AI companies built incredibly persuasive conversational agents without adequate safeguards. They created systems that can build rapport and trust with vulnerable people, and now they act surprised when those systems have real-world consequences. This legal battle will likely set important precedents for AI liability and force the industry to confront the human cost of moving fast and breaking things.
