Security researcher Chaofan Shou discovered on March 31 that Anthropic had accidentally included a 60MB source-map file in its Claude Code package on npm, the public registry where developers download software updates. The file allowed anyone to reconstruct nearly 2,000 files containing 500,000 lines of internal TypeScript code.
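The mechanism is worth spelling out: source maps, the `.map` files that bundlers emit for debugging, may embed the complete original source text in an optional `sourcesContent` array, aligned index-for-index with the `sources` array of file names. A minimal sketch of how such a file yields the original tree (file names here are illustrative, not from the actual leak):

```python
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str) -> list[str]:
    """Reconstruct original source files from a JavaScript source map.

    If the optional `sourcesContent` array is present, each entry holds
    the verbatim text of the corresponding file in `sources`, so the
    original files can simply be written back out.
    """
    data = json.loads(Path(map_path).read_text())
    sources = data.get("sources", [])
    contents = data.get("sourcesContent") or []
    written = []
    for name, text in zip(sources, contents):
        if text is None:
            continue  # this entry's content was stripped from the map
        # Drop bundler prefixes such as "webpack://pkg/src/a.ts"
        rel = name.split("://", 1)[-1].lstrip("/")
        dest = Path(out_dir) / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(text)
        written.append(str(dest))
    return written
```

Nothing about this is exotic, which is why mirrors appeared within hours: anyone who downloaded the npm package before it was fixed could run a script like this against the bundled map file.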

Within hours, developers mirrored the complete codebase on GitHub.

Unannounced Features Anthropic Already Built

The leaked source shows features Anthropic built but never announced. Claude Code can apparently review its own work sessions to learn from mistakes, run in background mode while you're not actively using it, and accept remote commands from phones or browsers. The code also reveals how the command-line tool works internally, the agent architecture behind it, and what tools Anthropic uses for development.

"Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in an emailed statement to Bloomberg. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

Second Major Leak in Seven Days

This is the second time in seven days that Anthropic has accidentally exposed internal materials. Last week, Fortune reported that the company had made thousands of files publicly accessible, including drafts referencing an unreleased model known internally as "Mythos" or "Capybara."

The exposed roadmap shows that Anthropic is working on AI that autonomously handles longer tasks, remembers context between conversations, and coordinates with other AI agents. These features matter to enterprise customers as Anthropic prepares for its reported IPO at a $380 billion valuation.

For competing AI companies, the leak amounts to a free engineering course in building production coding assistants. Despite takedown notices from Anthropic, the code remains available in GitHub mirrors.
