Apple has long marketed the App Store as a tightly controlled ecosystem where privacy, safety, and strict review rules protect users from harmful apps. But lately, that reputation has been under pressure.

Not long ago, reports surfaced about a fake crypto app slipping through review and allegedly wiping out users’ savings. Now, a new report suggests Apple was quietly dealing with another serious issue behind the scenes, one involving AI, deepfakes, and the Grok app from xAI.

According to a report by NBC News, Apple privately threatened to remove the Grok app from the Apple App Store earlier this year after users generated sexualised deepfakes of women and children. The details reportedly came to light through a letter Apple sent to US senators, revealing that while the company stayed publicly silent during the controversy, it had internally found both X and Grok in violation of its App Store guidelines.

At the height of the issue, social platform X was reportedly flooded with AI-generated explicit images involving non-consenting adults and minors. Lawmakers had written to Apple CEO Tim Cook, urging the company to suspend X and Grok from the store over the spread of abusive content. Behind the scenes, Apple contacted xAI and demanded a clear plan to improve content moderation.

What followed was a back-and-forth between Apple’s App Review team and the developers. Apple rejected an initial Grok update, saying the changes “didn’t go far enough.” A second round of submissions for both apps was reviewed, with Apple noting that X had “substantially resolved” its violations while Grok “remained out of compliance.” Apple reportedly warned that the app could be removed entirely if further fixes weren’t made. Only after additional changes did Apple approve the latest version, describing it as “substantially improved.”

In response, X’s Safety account publicly stated that xAI has “extensive safeguards” in place to prevent misuse, including prompt filters, real-time monitoring, and frequent model updates to stop users from generating non-consensual explicit deepfakes.

The situation highlights a growing tension for Apple. Its rules clearly require moderation of user-generated content, yet AI tools like Grok make enforcement far more complex than it is for traditional apps. At the same time, Apple reportedly removed dozens of smaller deepfake apps from the store during the same period, raising questions about consistency in enforcement.

For Apple, this isn’t just about one app. It’s about whether its famously strict App Store policies can keep up with the messy, fast-moving reality of generative AI, and whether those rules apply equally, no matter how powerful the developer behind the app might be.
