Teqrix Blog

ChatGPT to Launch Teen-Safe Experience After Lawsuit Links AI to 16-Year-Old’s Suicide

OpenAI has announced major changes to how ChatGPT will work for teenagers, following rising concerns about its impact on young users’ mental health. The Microsoft-backed AI company revealed that it is building a separate ChatGPT experience for users under 18, designed with stricter safeguards and parental oversight.

Why the change?

The move comes after the tragic case of 16-year-old Adam Raine, who died by suicide in April. His parents filed a lawsuit in August, alleging that ChatGPT “coached” their son by suggesting harmful thoughts and providing dangerous guidance. According to the lawsuit, the chatbot told Raine that people struggling with anxiety often think of an “escape hatch” as a way to regain control—advice his family says contributed to his death.

This case has sparked lawsuits, fresh research, and growing calls for regulation, pushing OpenAI to act faster on teen safety.

What’s changing in ChatGPT for teens

OpenAI will use age-prediction technology to block under-18 users from accessing the standard version of ChatGPT. If the system cannot confidently verify a user’s age, it will automatically default to the safer teen-friendly mode.

The new teen version will be built around stricter content safeguards and parental oversight, in line with the restrictions OpenAI has described for under-18 users.

Part of a broader safety push

OpenAI says these updates are part of a wider initiative to improve safeguards for teenagers and vulnerable users, with full rollout expected by the end of this year.

The announcement also comes just before a high-profile Senate hearing in Washington, DC, where lawmakers—including Josh Hawley, Marsha Blackburn, Katie Britt, Richard Blumenthal, and Chris Coons—will debate the risks AI chatbots pose to young people.

Meanwhile, the FTC (Federal Trade Commission) has launched an inquiry into AI chatbot safety, demanding information from OpenAI as well as Meta, Google, xAI, Snap, and Character AI.
