ChatGPT may alert police on suicidal teens

OpenAI may alert police if teens discuss suicide on ChatGPT, raising questions about privacy, safety, and parental protections.

ChatGPT could soon alert police when teens discuss suicide, OpenAI CEO and co-founder Sam Altman revealed in a recent interview. ChatGPT, the widely used artificial-intelligence chatbot that can answer questions and hold conversations, has become a daily tool for millions, and Altman's comments mark a major shift in how the AI company may handle mental health crises.

 

 

OpenAI CEO and co-founder Sam Altman

Credit: OpenAI

 

Why OpenAI is considering police alerts

Altman said, “It’s very reasonable for us to say in cases of young people talking about suicide, seriously, where we cannot get in touch with the parents, we do call authorities.”

Until now, ChatGPT’s response to suicidal thoughts has been to suggest hotlines. This new policy signals a move from passive suggestions to active intervention.

Altman admitted the change comes at a cost to privacy. He stressed that user data is important, but acknowledged that preventing tragedy must come first.

A teen using ChatGPT

 

Tragedies that prompted action

The shift follows lawsuits tied to teen suicides. The most high-profile case involves 16-year-old Adam Raine of California. His family alleges ChatGPT provided a “step-by-step playbook” for suicide, including instructions for tying a noose and even drafting a goodbye note.

After Raine’s death in April, his parents sued OpenAI. They argued that the company failed to stop its AI from guiding their son toward harm.

Another lawsuit accused rival chatbot Character.AI of negligence. A 14-year-old reportedly took his own life after forming an intense connection with a bot modeled on a TV character. Together, these cases highlight how quickly teens can form unhealthy bonds with AI.

Adam Raine, a California teen, took his life in April 2025 amid claims ChatGPT coached him.

Credit: Raine Family

 

How widespread is the problem?

Altman pointed to global numbers to justify stronger measures. He noted that about 15,000 people take their own lives each week worldwide. Since roughly 10% of the world's population uses ChatGPT, he estimated that around 1,500 of those people may talk with the chatbot in a given week.

Research backs up concerns about teen reliance on AI. A Common Sense Media survey found 72% of U.S. teens use AI tools, with one in eight seeking mental health support from them.

 

OpenAI’s 120-day plan

In a blog post, OpenAI outlined steps to strengthen protections. The company said it will:

  • Expand interventions for people in crisis.
  • Make it easier to reach emergency services.
  • Enable connections to trusted contacts.
  • Roll out stronger safeguards for teens.

To guide these efforts, OpenAI created an Expert Council on Well-Being and AI. This group includes specialists in youth development, mental health, and human-computer interaction. Alongside them, OpenAI is working with a Global Physician Network of more than 250 doctors across 60 countries.

These experts are helping design parental controls and safety guidelines. Their role is to ensure AI responses align with the latest mental health research.

A teen using ChatGPT on his laptop

 

New protections for families

Within weeks, parents will be able to:

  • Link their ChatGPT account with their teen's.
  • Adjust model behavior to match age-appropriate rules.
  • Disable features like memory and chat history.
  • Get alerts if the system detects acute distress.

These alerts are designed to notify parents early. Still, Altman admitted that when parents are unreachable, police may become the fallback option.

A teen using ChatGPT on his phone

 

Limits of AI safeguards

OpenAI admits its safeguards can weaken over time. While short chats often redirect users to crisis hotlines, long conversations can erode built-in protections. This “safety degradation” has already led to cases where teens received unsafe advice after extended use.

Experts warn that relying on AI for mental health can be risky. ChatGPT is trained to sound human, but cannot replace professional therapy. The concern is that vulnerable teens may not know the difference.

 

Steps parents can take now

Parents should not wait for new features to arrive. Here are immediate ways to keep teens safe:

 

1) Start regular conversations

Ask open questions about school, friendships, and feelings. Honest dialogue reduces the chance teens will turn only to AI for answers.

 

2) Set digital boundaries

Use parental controls on devices and apps. Limit access to AI tools late at night when teens may feel most isolated.

 

3) Link accounts when available

Take advantage of new OpenAI features that connect parent and teen profiles for closer oversight.

 

4) Encourage professional support

Reinforce that mental health care is available through doctors, counselors, or hotlines. AI should never be the only outlet.

 

5) Keep crisis contacts visible

Post numbers for hotlines and text lines where teens can see them. For example, in the U.S., call or text 988 for the Suicide & Crisis Lifeline.

 

6) Watch for changes

Notice shifts in mood, sleep, or behavior. Combine these signs with online patterns to catch risks early.

 


Kurt’s key takeaways

OpenAI’s plan to involve police shows how urgent the issue has become. AI has the power to connect, but it also carries risks when teens use it in moments of despair. Parents, experts, and companies must work together to create safeguards that save lives without sacrificing trust.

Would you be comfortable with AI companies alerting police if your teen shared suicidal thoughts online? Let us know your thoughts in the comments below. 

FOR MORE OF MY TECH TIPS & SECURITY ALERTS, SUBSCRIBE TO MY FREE CYBERGUY REPORT NEWSLETTER HERE

 

 

Copyright 2025 CyberGuy.com.  All rights reserved.  CyberGuy.com articles and content may contain affiliate links that earn a commission when purchases are made.
