Meta CEO Mark Zuckerberg weighed changes to how the company studies the social impact of its products after internal research sparked public backlash and legal scrutiny, newly unsealed court documents show.

In a September 15, 2021 email to top executives, including then-COO Sheryl Sandberg and former UK Deputy Prime Minister Nick Clegg, Zuckerberg said the recent scrutiny had prompted him to reassess whether Meta’s social research efforts were benefiting the company or creating more challenges.

“Recent events have made me consider whether we should change our approach to research and analytics around social issues,” Zuckerberg wrote in an email marked “privileged and confidential.”

The message was sent one day after The Wall Street Journal published a report based on internal Meta documents from whistleblower Frances Haugen, revealing that the company's research found Instagram worsened body image issues for many teen girls. One widely cited finding showed that 32% of teen girls said Instagram made them feel worse about their bodies when they already felt bad.

Emails emerge in New Mexico lawsuit

The email was unsealed this week as part of a lawsuit filed by New Mexico Attorney General Raúl Torrez, which accuses Meta of deceptively portraying its platforms as safe for teens while being aware of harmful design choices that allegedly contributed to addiction and enabled child predators.

In the complaint, the attorney general’s office argued that Meta’s failure to disclose harms identified by its own research “would have corrected the misleading and deceptive nature of its public statements proclaiming its platforms ‘safe.’”

Meta spokesperson Andy Stone said in a statement that the company “is proud of our continued commitment to doing transparent, industry-leading research,” adding that Meta has used those insights to introduce measures like teen accounts with built-in protections and parental controls.

Opening statements in the New Mexico case are expected next week, with similar lawsuits pending in California.

Zuckerberg: Apple avoids criticism by “lying low”

In the email thread, Zuckerberg also suggested that Meta’s rivals face less scrutiny by doing less proactive research into the social harms of their platforms.

“Apple, for example, doesn’t seem to study any of this stuff,” Zuckerberg wrote. “They’ve taken the approach that it is people’s own responsibility what they do on the platform… This has worked surprisingly well for them.”

He argued that Meta’s more aggressive reporting of child sexual abuse material (CSAM) made the company appear worse by comparison, even if the behaviour wasn’t necessarily more prevalent on its platforms.

“When Apple did try to do something about CSAM, they were roundly criticised for it,” Zuckerberg added, referencing Apple’s controversial 2021 proposal to scan iCloud photos for abuse material, a move that was later shelved following backlash from privacy advocates.

Zuckerberg also pointed to YouTube, X (formerly known as Twitter), and Snap, writing that they appeared to take a similar “low-profile” approach to researching and publicising platform harms, either by choice or due to limited resources.

Despite considering changes, Zuckerberg defended Meta’s work on social impact research, arguing that the company was being punished for transparency.

“I think we should be commended for the work we do to study, understand, and improve social issues on our platforms,” he wrote. “Unfortunately, the media is more likely to use any research… to say we’re not doing everything we can rather than that we’re taking these issues more seriously than anyone else.”

Other executives on the thread largely agreed that internal research remained necessary, even with the risks of leaks and backlash.

Former VP of Central Products Javier Olivan wrote: “Leaks suck, and will continue to happen… Given that, is it still worth trying to understand these issues? I think it is the responsible thing to do.”

David Ginsberg, then VP of product, choice, and competition, added that the work was essential for building better products, “separate and aside from any societal issues goals.”

Meta debated restructuring sensitive research teams

Internal discussions later explored several options, including centralising teams working on sensitive topics, tightening access controls, or, in the most extreme scenario, outsourcing some research entirely. Ultimately, Meta opted to centralise research teams rather than dismantle them.

Product executive Guy Rosen described the exercise as “preliminary/discretional,” but executives decided to announce organisational changes ahead of Instagram head Adam Mosseri’s congressional testimony, amid concerns that the news would leak and make the company appear evasive.

Meta has since said it continues to study sensitive issues such as teen well-being and online safety.

A broader reckoning for Big Tech

The newly unsealed emails offer rare insight into how Meta’s leadership weighed transparency, public scrutiny, and internal research following the 2021 whistleblower revelations, a moment that reshaped global conversations about social media, teen mental health, and platform accountability.

Zuckerberg closed his original email by suggesting that repeated leaks may help explain why other tech companies avoid deep internal research into social harms altogether: “This may be part of why the rest of the industry has chosen a different approach towards these issues.”

As Meta heads into another major courtroom battle, the documents underscore a central tension facing Big Tech: whether studying and publicly acknowledging platform harms ultimately protects users or simply exposes companies to greater legal and reputational risk.
