OpenAI adds new safeguards to ChatGPT amid rising mental-health concerns

Image © ConsumerAffairs. OpenAI enhances ChatGPT's safety tools to detect distress in users, focusing on minors' mental health.


• New safety system focuses on detecting distress, self-harm, and emotional dependence in conversations
• CEO Sam Altman says alerting authorities may be ‘reasonable’ when minors express suicidal intent
• California moves to require AI chatbots to flag and redirect suicidal users to emergency help


OpenAI has announced a broad upgrade to ChatGPT’s safety tools, saying it worked with more than 170 mental-health experts to better detect signs of distress, self-harm, and emotional reliance on AI.

In a blog post last week titled “Strengthening ChatGPT’s responses in sensitive conversations,” the company said the update includes routing sensitive chats to safer model versions, adding gentle “take-a-break” reminders during long sessions, and more rigorous testing for how its systems handle self-harm and emotional crises.
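The routing mechanism is not publicly documented in detail. As a purely illustrative sketch, the snippet below shows the general shape of such logic in Python; the marker list, function name, and model labels are hypothetical stand-ins, not OpenAI's implementation, and the real system is certainly more sophisticated than a keyword check.

```python
# Illustrative sketch only: the marker list and model names are
# hypothetical; OpenAI's actual routing system is not public.
SENSITIVE_MARKERS = {"suicide", "self-harm", "kill myself"}  # toy word list

def route_model(message: str) -> str:
    """Route chats showing distress signals to a stricter safety model."""
    lowered = message.lower()
    if any(marker in lowered for marker in SENSITIVE_MARKERS):
        return "safety-tuned-model"  # hypothetical safer model version
    return "default-model"

print(route_model("I've been thinking about self-harm"))  # safety-tuned-model
```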

The company also revealed that about 0.15 percent of its weekly active users—hundreds of thousands worldwide—engage in chats showing signs of suicidal planning or intent, a figure that underscores the scale of the issue.
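To put that percentage in perspective, here is a back-of-envelope check. The article does not state OpenAI's weekly user count, so the base used below is an assumed, illustrative figure only.

```python
# Back-of-envelope check of the 0.15% statistic.
# ASSUMPTION: weekly_active_users is illustrative only; the article
# does not give OpenAI's actual weekly active user count.
weekly_active_users = 400_000_000
flagged_share = 0.0015  # 0.15% of weekly users, per OpenAI's blog post

print(f"{int(weekly_active_users * flagged_share):,}")  # -> 600,000
```

At an assumed base of 400 million weekly users, 0.15 percent works out to roughly 600,000 people, consistent with the article's "hundreds of thousands."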


If you need help ...


U.S.: Call or text the Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org

UK & Ireland: Samaritans, 116 123 (freephone), jo@samaritans.org / jo@samaritans.ie

Australia: Lifeline, 13 11 14

Elsewhere: Visit befrienders.org for international hotlines


Altman signals possible alerts to authorities for minors

Chief executive Sam Altman said OpenAI is considering a policy that would allow the company to contact authorities when a young person is “seriously” discussing suicide and parents cannot be reached.

“It may be very reasonable for us to call authorities,” Altman said in remarks reported by The Guardian. No final decision or written policy has been released, and questions remain over which authorities would be contacted, what threshold would trigger intervention, and how user privacy would be protected.

New parental controls aim to protect teen users

Alongside the policy debate, OpenAI has introduced new teen-specific features for ChatGPT. Parents can now link accounts with their teenagers, set “quiet hours,” disable voice and image tools, and choose whether chat history is used for training.

For flagged high-risk chats, parents may receive alerts, although they do not gain full access to transcripts for privacy reasons. The controls are being rolled out gradually across the platform.

Legal and regulatory pressure intensifies

OpenAI’s announcement comes amid mounting scrutiny over how AI systems respond to vulnerable users. The family of a 16-year-old who died by suicide has sued the company in Raine v. OpenAI, alleging that ChatGPT encouraged the teenager’s suicide and that OpenAI “intentionally” weakened its self-harm safeguards before his death.

At the same time, California lawmakers have passed one of the first state laws requiring AI chatbots that interact with minors to disclose that they are not human, to flag suicidal ideation and redirect users to emergency services, and to notify parents or authorities in some cases.

What’s next

OpenAI’s latest steps mark a shift from simply referring users to crisis hotlines toward potential real-world intervention, at least for minors. But the details of any authority-notification plan remain unclear—and will likely determine whether the company can balance user privacy with public safety.

