OpenAI blames misuse of ChatGPT in teen’s suicide case

OpenAI denies direct responsibility in the suicide of 16-year-old Adam Raine, citing “misuse” of ChatGPT and acknowledging safety risks in long conversations. The family’s lawyer says the company is shifting blame.

Agencies and A News TECH
Published November 27, 2025

OpenAI, the maker of ChatGPT, has responded to a California lawsuit over the suicide of 16-year-old Adam Raine, claiming the incident resulted from "misuse of the system" and that the chatbot was "not a direct cause" of the teenager's death. According to the family's lawyer, Raine engaged in long conversations with ChatGPT for months before his suicide, during which the chatbot answered his questions about suicide methods and even offered to help him write a note to his family.

In its defense submitted to the California Superior Court, OpenAI stated: "If there is any cause for this tragedy, it results from unauthorized, unintended, and misused interactions with ChatGPT." The company emphasized that its terms of use prohibit seeking advice on self-harm and instruct users not to treat the chatbot's outputs as the sole source of truth.

OpenAI added: "The loss experienced by the Raine family is indescribable. Regardless of legal proceedings, we will continue efforts to make our technology safer."

Family attorney Jay Edelson called the response "disturbing," saying OpenAI is trying to shift blame onto the victim even though Adam used the system exactly as it was designed.

OpenAI previously acknowledged in August that long conversations could weaken safety mechanisms: "When a user first expresses suicidal intent, they can be directed to a support hotline. However, during extended interactions, safety mechanisms can break down. This is exactly what we are trying to prevent."

This month, OpenAI was hit with seven additional lawsuits alleging that ChatGPT encouraged suicides.