OpenAI has been working on updated policies to protect young ChatGPT users when conversations turn to suicide. OpenAI CEO Sam Altman has said it "may be reasonable" for the company to notify authorities when minors express suicidal thoughts and their parents cannot be reached. The changes follow growing pressure from Congress and federal agencies, as well as a lawsuit by the parents of an adolescent who killed himself.
OpenAI said in a memo forwarded to ConsumerAffairs that it is rolling out parental controls intended to link minors’ accounts to their parents, allow parents to receive “distress alerts,” manage usage times, and disable certain features. Those tools are expected by the end of September. The company is also developing an age‑prediction system: if a user is identified (or estimated) to be under 18, the system will give them an “age‑appropriate” version of ChatGPT. This version will restrict graphic sexual content, avoid flirting, and limit discussion of suicide or self‑harm.
OpenAI acknowledges that its current safety guardrails do not always hold up in longer or more emotionally intense conversations; safety mechanisms may “degrade” over time or with extended user‑model back‑and‑forth.
If you need help ...
U.S.: Call or text the Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org
UK & Ireland: Samaritans, 116 123 (freephone), jo@samaritans.org / jo@samaritans.ie
Australia: Lifeline, 13 11 14
Elsewhere: Visit befrienders.org for international hotlines
Pressure for action mounts
The company's moves come as governments step up pressure. The Senate is holding a hearing with parents of teens who died or were harmed after interacting with chatbots, and the Federal Trade Commission has begun an inquiry into AI chatbots (including OpenAI) covering harms to children and how safety is tested and overseen. At the state level, a group of attorneys general has formally warned OpenAI, saying existing safeguards have failed in some cases and demanding stronger protections.
Raine lawsuit
Driving the intensifying pressure is the case of Adam Raine, 16. A lawsuit filed by his parents alleges that ChatGPT cultivated a relationship with the teenager, provided instructions for self‑harm, discouraged him from seeking outside help, and failed to stop potentially harmful content in long conversations. The lawsuit seeks not only damages but also regulatory changes, such as improved age verification, blocking of self‑harm content, and psychological warnings.
