A $10 billion AI startup that supplies training data to companies like OpenAI, Anthropic, and Meta is facing a wave of lawsuits over how it collects and handles sensitive worker data. 

According to a report by The Wall Street Journal, at least seven class-action lawsuits have been filed against Mercor in recent weeks following a third-party data breach that allegedly exposed contractor information. 

The lawsuits claim the breach included highly sensitive material, ranging from recorded job interviews to facial biometric data and screenshots of workers’ computers. 

What the lawsuits allege 

One class-action suit filed in Northern California claims Mercor collected extensive applicant data — including background checks — and shared it with partners in violation of federal regulations. 

Plaintiffs allege that Mercor’s practices go beyond standard hiring processes. According to the suit, the company monitored contractors’ computers, used recorded candidate interviews to train AI models, and may have trained client systems on materials owned by other companies. 

In one account cited in the lawsuit, a plaintiff alleged that workers were encouraged to use real company data in tasks, provided it was slightly altered or anonymised. When the plaintiff tried to avoid including sensitive details, reviewers reportedly pushed back, criticising the work as too vague. 

Another contractor alleged he encountered financial models and prompts that appeared to contain proprietary information, including what the lawsuit describes as “pre-project metadata, hidden defined names, institutional data-terminal markers, real lender or counterparty names, irregular numeric precision, and other features that raised serious provenance questions.” 

Mercor responds 

Mercor has denied the allegations. 

“We strongly dispute the speculative claims in these lawsuits and look forward to presenting the facts at the appropriate time and place,” the company said in a statement. 

It added that it “take[s] the privacy of our customers, contractors, employees and those we interview very seriously” and that it complies with relevant laws and regulations. 

The company also said it acted quickly to address the breach, noting that “we are conducting a thorough investigation with leading third-party forensics experts and are communicating directly with affected stakeholder groups as we have findings.” 

How Mercor gathers data 

The case is drawing attention to how AI companies source the data used to train models. 

The Journal reports that Mercor previously attempted to buy work materials from individuals on LinkedIn, including documents those individuals did not necessarily own the rights to. Online postings also suggested the company offered payments for personal-finance files and even Google Maps histories. 

Workers also described a system of continuous monitoring. Contractors are required to install tracking software called Insightful, which captures screenshots of their computers during work sessions. 

One lawsuit alleges that this software recorded activity across hundreds of applications, including personal accounts, and that workers were not “clearly informed” of the extent of the monitoring. 

Mercor said it informs workers that screenshots may be taken during billing hours and instructs them to use only work-related applications while the software is active. 

Clients pull back as scrutiny grows 

The fallout is already affecting Mercor’s relationships. 

Meta has paused its work with the company and is investigating the incident, according to a spokesperson cited by the Journal. Anthropic declined to comment, while OpenAI did not respond to requests for comment.

The situation highlights growing tension in the AI industry, where companies are under pressure to secure large volumes of high-quality data to train increasingly advanced models. 
