OpenAI introduces parental control and GPT-5 routing after teen suicide case

After tragedies tied to ChatGPT misuse, OpenAI is aiming to make AI not just smarter, but safer.

by Oyinebiladou Omemu

OpenAI is making one of its biggest moves yet to address the safety concerns surrounding ChatGPT.

The company announced that within the next month, it will begin routing sensitive conversations to its more advanced reasoning models, as well as roll out parental controls.

The changes come after a string of tragic incidents, most notably the case of 16-year-old Adam Raine, who discussed suicide plans with ChatGPT and received details on specific methods before taking his own life in April. His California-based parents have since filed a wrongful-death lawsuit against OpenAI.

OpenAI believes part of the answer lies in its new real-time router, which can shift a conversation to models designed for deeper thinking when sensitive topics arise. Unlike lighter, faster models, GPT-5 and o3 are trained to take more time, consider context, and reason before responding. The hope is that when users express distress, these models won't validate harmful thoughts but will instead offer helpful, possibly even life-saving, interventions.
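OpenAI hasn't published the router's internals, so the following is only a rough sketch of the idea: a per-message check that escalates to a deeper reasoning model when distress signals appear. The model names and the keyword screen are hypothetical stand-ins for whatever classifier the company actually uses.

```python
# Illustrative sketch only; OpenAI's real router design is unpublished.
# FAST_MODEL, REASONING_MODEL, and the keyword list are hypothetical.

FAST_MODEL = "fast-chat-model"             # stands in for a lightweight default
REASONING_MODEL = "deep-reasoning-model"   # stands in for GPT-5 thinking / o3

# A naive keyword screen; the real system presumably uses a trained classifier.
SENSITIVE_MARKERS = {"suicide", "self-harm", "hurt myself", "end my life"}

def is_sensitive(message: str) -> bool:
    """Flag messages that show signs of acute distress (toy heuristic)."""
    text = message.lower()
    return any(marker in text for marker in SENSITIVE_MARKERS)

def route(message: str) -> str:
    """Choose a model per message, mid-conversation, as the article describes."""
    return REASONING_MODEL if is_sensitive(message) else FAST_MODEL

print(route("What's the weather like today?"))  # -> fast-chat-model
print(route("I keep thinking about suicide"))   # -> deep-reasoning-model
```

The design point the article emphasizes is that routing happens in real time, per message, so a conversation can escalate to a slower, more deliberate model the moment distress appears rather than staying on the default model.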


At the same time, the company is expanding protections for younger users with parental controls, which will let parents link their accounts with their teens' accounts and automatically enable age-appropriate behavior rules. Parents will be able to limit features like memory and chat history, which some experts say can encourage unhealthy attachment or distorted thinking, and they will receive alerts if the system detects their child is in acute distress.
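OpenAI hasn't shared what these controls look like under the hood; as a guess at the shape of the feature set described above, here is a hypothetical policy object for a linked parent-teen pair, with every field name invented for illustration.

```python
# Hypothetical sketch; the actual schema and API are unpublished.
from dataclasses import dataclass

@dataclass
class TeenPolicy:
    teen_account_id: str                          # invented identifiers
    parent_account_id: str                        # accounts are linked, per the article
    age_appropriate_rules: bool = True            # enabled automatically on linking
    memory_enabled: bool = False                  # parents can disable memory...
    chat_history_enabled: bool = False            # ...and chat history
    notify_parent_on_acute_distress: bool = True  # alert if distress is detected

policy = TeenPolicy(teen_account_id="teen-123", parent_account_id="parent-456")
print(policy)
```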

OpenAI isn’t doing this alone. Over the past year, it has assembled an Expert Council on Well-Being and AI and tapped into a Global Physician Network of more than 250 doctors across 60 countries, including psychiatrists, pediatricians, and general practitioners. Their guidance has already shaped model training and evaluation, and the company says it is expanding the list to include specialists in eating disorders, substance use, and adolescent health. These collaborations are part of OpenAI's 120-day initiative to preview and roll out as many improvements as possible this year.

Of course, not everyone is convinced the changes are enough. Critics question how reliably the system can detect distress in real time, how much control parents will truly have, and whether the company's safeguards are more about optics than substantive protection.

Still, the rollout marks a significant shift in how OpenAI is positioning ChatGPT, not just as an assistant, but as a tool that needs to tread carefully in sensitive contexts. 
