France and Malaysia joined India over the weekend in launching investigations into Grok, Elon Musk's AI chatbot, after some users exploited the tool to create sexualized images of women and children. The coordinated crackdown represents one of the most serious regulatory actions yet against generative AI misuse.
The alarm was first raised on December 28, when Grok generated and posted an image depicting two young girls, estimated to be between 12 and 16 years old, in sexualized clothing after being prompted by a user. Following widespread criticism, the chatbot itself apologized when prompted to, calling the content a violation of ethical standards and potentially of U.S. laws on child sexual abuse material.
But regulators aren't accepting apologies. India's IT ministry issued an order Friday demanding X take action within 72 hours or risk losing "safe harbour" protections that shield platforms from legal liability. The directive specifically targets content that is "obscene, pornographic, vulgar, indecent, sexually explicit, paedophilic, or otherwise prohibited under law."
France went further. The Paris prosecutor's office told Politico it will investigate the proliferation of sexually explicit deepfakes on X, after three government ministers reported "manifestly illegal content." Under French law, such violations carry penalties of up to two years in prison and €60,000 in fines. Malaysia's Communications and Multimedia Commission has opened its own probe, citing "serious concern" over AI-generated indecent content involving women and minors.
Musk responded Saturday with a warning. "Anyone using Grok to make illegal content will suffer the same consequences as if they uploaded illegal content," he posted. X's safety team added that accounts would be permanently suspended and cases referred to law enforcement.
The problem is that shifting blame to users doesn't explain why the platform allowed this content in the first place. And the timing couldn't be worse for xAI. The Trump administration integrated xAI into federal workflows earlier this year, signing an 18-month contract that authorizes the chatbot for official government business, despite a coalition of more than 30 consumer advocacy groups urging the government to block Grok over its lack of safety testing.

The regulatory pressure is mounting. Under the EU's Digital Services Act, X is classified as a Very Large Online Platform, which requires heightened standards to mitigate systemic harms such as deepfakes and child safety risks; breaches can draw fines of up to 6% of global turnover. X was recently hit with a €120 million ($140 million) fine for breaching online content rules, and another violation could push regulators past warnings and into structural intervention.
The bigger question is whether embedding image-generation tools directly into social platforms can ever be made safe at scale. xAI positioned Grok as more permissive than competitors like ChatGPT and Claude, even launching "Spicy Mode" last summer to allow partial nudity and sexually suggestive content. That positioning worked until users figured out how to bypass the guardrails entirely.
Now, three governments are signalling that "we tried" isn't a defence. If platforms can't reliably prevent models from producing illegal content on demand, authorities may start mandating feature suspensions or requiring third-party safety audits before AI tools can operate publicly.
How X and xAI respond will likely determine not just the outcome of these probes, but how much freedom AI companies retain to self-regulate going forward.