China is moving first to regulate a market the U.S. is still figuring out how to talk about, and the stakes are rising fast.

On Saturday, the Cyberspace Administration of China (CAC) released draft rules targeting AI systems designed to simulate human personalities — the same tools powering platforms like Character.AI, where users now average two hours a day, longer than most teenagers spend on homework.

The new policy, titled “Interim Measures For The Management Of Anthropomorphic AI Interaction Services,” forces platforms to tell users they are talking to a bot the moment they log in, and again every two hours.

This is not a friendly suggestion. It is a requirement backed by mandatory logging, real-time emotion detection, and security audits for any app with more than one million users.
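The two-hour cadence is simple enough to express in a few lines. The sketch below is purely illustrative, since the draft measures specify the requirement rather than any implementation, and every name in it is hypothetical: a session timer that issues the bot-identity notice at login and again after each two hours of conversation.

```python
import time

# Hypothetical sketch of the disclosure cadence described in the draft rules:
# a bot-identity notice at login, repeated every two hours of session time.
# The measures mandate the "what", not the "how"; nothing here is from the text.

DISCLOSURE = "Reminder: you are chatting with an AI, not a human."
INTERVAL_SECONDS = 2 * 60 * 60  # two hours

class SessionDisclosure:
    def __init__(self):
        # Disclose immediately at login, then start the clock.
        self.last_disclosed = time.monotonic()
        print(DISCLOSURE)

    def maybe_disclose(self):
        # Call on each user message; re-disclose once two hours have elapsed.
        now = time.monotonic()
        if now - self.last_disclosed >= INTERVAL_SECONDS:
            self.last_disclosed = now
            print(DISCLOSURE)
```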

California passed SB 243, which requires break reminders for minors, and New York introduced disclosure rules in November, but neither matches Beijing’s scope. China is not just protecting children; it is regulating an entire industry built on blurring the line between human and machine.


The Business Of Loneliness

The numbers explain the urgency. The global AI companion market hit $37.12 billion in 2025 and is projected to reach $552.49 billion by 2035.

The sector pulled in $120 million in revenue this year alone, fuelled by 220 million downloads. The most worrying metric for regulators is revenue per download, which jumped from $0.52 to $1.18 in just twelve months. Users are not just experimenting — they are paying for intimacy through premium features like unlimited voice calls.
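The cited figures hold up to a quick back-of-envelope check. The minimal Python sketch below uses only the numbers quoted in this article: the ten-year projection implies roughly 31% compound annual growth, and the per-download jump from $0.52 to $1.18 works out to the 127% surge cited later in this piece.

```python
# Sanity check on the growth figures quoted above; inputs come from the article.

market_2025 = 37.12   # global AI companion market, $ billions (2025)
market_2035 = 552.49  # projected market size, $ billions (2035)

# Implied compound annual growth rate over the 10-year window:
# CAGR = (end / start) ** (1 / years) - 1
cagr = (market_2035 / market_2025) ** (1 / 10) - 1
print(f"Implied CAGR 2025-2035: {cagr:.1%}")  # ~31.0%

rpd_before = 0.52  # revenue per download twelve months ago, $
rpd_after = 1.18   # revenue per download now, $

# Year-over-year growth in revenue per download:
growth = (rpd_after - rpd_before) / rpd_before
print(f"Revenue-per-download growth: {growth:.0%}")  # ~127%
```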

In the U.S., adoption is accelerating. About 72% of teenagers aged 13 to 17 have tried AI companions, while Character.AI alone reports more than 20 million active users.

The Clash: Regulation Vs Revenue

Lin Wei, a legal scholar at Southwest University, framed the shift clearly in state media, saying AI is moving “beyond mere functional assistance toward emotional engagement.”

The new rules strike at the heart of this business model.

The Content: Anything that “manipulates emotions” or creates “emotional traps” is banned.

The Data: Platforms cannot use chat histories to train models without explicit consent.

The Catch: Training datasets must reflect “socialist values,” a clause that could force U.S. firms to build separate China-only models to stay in the market.

Industry lawyers warn the compliance burden will crush startups. Large platforms may be able to afford real-time emotion detection systems, but smaller teams likely cannot. Regulatory sandboxes, the supervised testing environments the rules call for, may soon become the only entry point for new players.

A Strategic Opening

The most immediate impact will hit the addiction economy.

Revenue per download surged 127% this year because AI companions are built to keep people emotionally engaged. China’s mandatory two-hour reminder directly attacks the metrics that drive subscriptions. If enforced, the endless scroll of emotional dependency will be broken by a state-mandated pause.

Beijing calls the framework “inclusive and prudent,” but the message is unmistakable: machines can simulate human connection, but they are not allowed to pretend it is real.

Whether this becomes the global standard depends on what Washington does next. For now, the U.S. has handed Beijing a first-mover advantage in shaping the rules of a $552 billion market. That is not just a regulatory gap — it is a strategic opening.
