Anthropic wants your conversations to power Claude (unless you say no)

The change should help improve the model's safety and make Claude better at things like coding, analysis, and reasoning.

by Oyinebiladou Omemu

A new line has been drawn in the AI market, and this time it’s not about model size or benchmark scores but about your chats.

Anthropic, the company behind Claude, is making a big shift in how it treats user data. For the first time, it’s asking regular users to let their conversations and coding sessions be used to train its models. And unless you say otherwise by September 28, 2025, your chats will become part of Claude’s brain.

Until now, Anthropic stood apart from competitors by promising that consumer chats wouldn’t be used for training. Prompts and responses were deleted after 30 days, unless they triggered policy violations, in which case they could be kept for up to two years. That window is now closing: for users who allow training, chats will be retained for up to five years.

There are some caveats, though. Enterprise customers are exempt from these changes, much like the carve-out OpenAI gives its corporate clients. Everyone else, from Claude Free to Pro to Max, faces a choice: opt out, or let their data fuel the next generation of Claude. And if you don’t decide by the deadline, the system enables data sharing by default.

According to Anthropic’s blog post, the change should help improve the model’s safety and make Claude better at things like coding, analysis, and reasoning. But the truth is more complicated: every company building large language models faces the same hunger for fresh, real-world data, and nothing beats millions of natural conversations for making an AI more capable, more accurate, and more competitive against giants like OpenAI and Google.

Of course, how the choice is presented matters, and the rollout itself has raised concerns. Existing users are met with a large “Accept” button for the new terms, while the toggle granting training permission sits smaller, lower down, and quietly switched on by default. It’s easy to imagine how many people will click through without noticing they’ve just agreed to five years of data retention.

In the end, Anthropic’s decision isn’t surprising. Building smarter, safer AI requires more than just big servers; it needs the human conversations that actually happen when people use the product. But the speed and subtlety of this shift show just how quickly user expectations around privacy are being rewritten.
