Meta to leverage EU users' data for AI model training
Meta wants your Facebook posts to teach its AI how to talk like a local.
If you’ve ever posted a funny comment on Facebook or a caption in your native slang on Instagram, Meta’s AI might soon be studying it. Not in a creepy, sci-fi way—but as part of a move to train its growing lineup of AI models using your public content. And for the first time, European regulators are letting it happen.
Starting this week, Facebook, Instagram, WhatsApp, and Messenger users across the EU will get notifications explaining that their public content, including posts and comments, as well as their interactions with Meta AI, may be used to train Meta's AI models, according to the company.
The company also said users can object to their data being used by submitting an opt-out request. Private messages and content from users under 18 remain off-limits.
This kind of data collection isn’t new. Meta and competitors like OpenAI and Google already train their models using public data in the U.S. and other regions. But Europe has always been a tougher market, thanks to the EU’s General Data Protection Regulation, which requires strict data transparency and consent rules.
Last year, Meta tried to roll out a similar data collection effort in the EU but was forced to pause after backlash from privacy advocates and regulators. Now, the company is back with a revised approach—and this time, the European Data Protection Board (EDPB) has approved the plan.
The timing follows the launch of its Meta AI assistant last month for European users after several delays and the recent release of Meta’s most advanced AI yet—Llama 4 Scout and Maverick. While this public data policy isn’t about Llama 4 specifically, it’s part of a bigger push to train smarter, more localised AI systems.
According to Meta, using European public data will help its models better understand regional languages, context, and culture—something that’s hard to achieve using English-only datasets scraped from elsewhere.
Still, many users are wary—and for good reason. Meta’s past is marked by major data privacy failures, including the infamous Cambridge Analytica scandal, which led to a $5 billion fine and a global backlash.
This time, Meta is following the compliance playbook: up-front notices, opt-out requests, and a public paper trail.