OpenAI announced Tuesday that it has deployed a behavioural analysis system across ChatGPT consumer plans globally to estimate user age and automatically restrict content for minors. The move marks a shift from relying solely on self-reported birthdates to actively monitoring user behaviour to decide which safeguards an account receives.

"We’re rolling out age prediction on ChatGPT to help determine when an account likely belongs to someone under 18, so we can apply the right experience and safeguards for teens," the company stated in its announcement.


How It Guesses Your Age

The model silently scans a specific set of signals to decide whether an account belongs to a minor. According to the official announcement, the system analyses “how long an account has existed, typical times of day when someone is active, usage patterns over time, and a user’s stated age.” It effectively looks for the digital fingerprints of a teenager, such as logging in during school hours or using distinct slang. If those signals contradict your stated age, or if the AI lacks confidence, it defaults to the more restrictive experience just to be safe.
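
OpenAI has not published how the model weighs these signals, so the sketch below is purely illustrative: the AccountSignals fields mirror the publicly listed inputs, while the weights, the 90-day cutoff, and the 0.5 threshold are hypothetical stand-ins. It only demonstrates the stated behaviour of defaulting to the restrictive teen experience when the prediction is uncertain.

```python
from dataclasses import dataclass


@dataclass
class AccountSignals:
    account_age_days: int        # how long the account has existed
    active_hours: list[int]      # typical hours of day the user is active (0-23)
    stated_age: int | None       # self-reported age, if any


def likely_minor_score(signals: AccountSignals) -> float:
    """Combine the publicly listed signals into a rough 0-1 'under 18' score."""
    score = 0.0
    if signals.account_age_days < 90:
        score += 0.2                                  # very new account, little history
    if any(8 <= h <= 14 for h in signals.active_hours):
        score += 0.3                                  # activity during typical school hours
    if signals.stated_age is not None and signals.stated_age < 18:
        score += 0.5                                  # stated age still counts as a signal
    return min(score, 1.0)


def choose_experience(signals: AccountSignals) -> str:
    """Default to the restrictive teen experience when the model is unsure."""
    if likely_minor_score(signals) >= 0.5 or signals.stated_age is None:
        return "teen_safeguards"
    return "standard_adult"


# A month-old account, active mid-morning, with no stated age falls back to safeguards.
print(choose_experience(AccountSignals(30, [9, 10, 16], None)))  # -> teen_safeguards
```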

What Gets Restricted?

Any account identified as a minor loses access to specific high-risk categories immediately. The new filters aggressively stop the generation of graphic violence, gore, and any depictions of self-harm. The safety measures go beyond physical danger to mental health, blocking requests that promote unhealthy dieting or extreme beauty standards. Even role-play gets a downgrade; the system now prohibits sexual or romantic scenarios for anyone deemed under 18.
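
As a rough illustration of how such a policy could be expressed, the snippet below encodes the restricted categories as a simple allow/deny check. The category names and the is_allowed helper are our own stand-ins, not OpenAI's actual moderation taxonomy.

```python
# Categories the article describes as blocked for minor-flagged accounts
# (names are illustrative, not OpenAI's internal labels).
MINOR_BLOCKED_CATEGORIES = {
    "graphic_violence",
    "gore",
    "self_harm_depiction",
    "unhealthy_dieting_promotion",
    "extreme_beauty_standards",
    "sexual_or_romantic_roleplay",
}


def is_allowed(request_category: str, is_minor_account: bool) -> bool:
    """Minor-flagged accounts lose access to the high-risk categories outright."""
    return not (is_minor_account and request_category in MINOR_BLOCKED_CATEGORIES)


print(is_allowed("sexual_or_romantic_roleplay", is_minor_account=True))   # False
print(is_allowed("sexual_or_romantic_roleplay", is_minor_account=False))  # True
```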

The Goal: “Treating Adults Like Adults”

OpenAI argues that tighter controls for teens are actually the key to expanding freedom for everyone else. The company states that accurately identifying minors allows them to “treat adults like adults and use our tools in the way that they want.” This predictive model clears the path for future updates that will let verified users access adult mode features currently deemed too risky for a general audience, ensuring that strict safety rails for kids don’t ruin the experience for grown-ups.

How to Fix Incorrect Flags

Algorithms get things wrong, and some adults will inevitably get caught in the dragnet. If you find yourself locked out of standard features, you have to prove you are an adult through a third-party service called Persona. You will need to upload a government ID or take a live selfie to confirm your age. To address privacy worries, OpenAI promises that “personal information collected for verification is deleted after the check is complete,” meaning your ID doesn’t stay on their servers.
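
OpenAI hasn't documented the integration, so the minimal sketch below only dramatizes the flow described above: a hypothetical PersonaCheck result returns an estimated age, and the uploaded documents are discarded once the decision is made, mirroring the stated deletion promise. The names and fields are invented for illustration and do not reflect Persona's real API.

```python
from dataclasses import dataclass, field


@dataclass
class PersonaCheck:
    """Hypothetical stand-in for a completed third-party verification result."""
    estimated_age: int
    uploads: list[bytes] = field(default_factory=list)  # ID photo and/or live selfie

    def delete_uploads(self) -> None:
        """Discard the documents once the age decision has been made."""
        self.uploads.clear()


def unlock_adult_experience(check: PersonaCheck) -> bool:
    try:
        return check.estimated_age >= 18   # restore the standard experience if 18+
    finally:
        check.delete_uploads()             # per OpenAI, data is deleted after the check


result = unlock_adult_experience(PersonaCheck(estimated_age=27, uploads=[b"id-photo"]))
print(result)  # -> True
```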
