
Shelly Palmer: Anthropic's privacy pivot

Users must opt out by Sept. 28.

Greetings from Candlewood Lake. Yesterday, Anthropic quietly dropped a bombshell. Unless users explicitly opt out by Sept. 28, it will use consumer chat data to train future AI models. This is a stunning reversal from Anthropic's previous position as the privacy-first alternative to ChatGPT.

Previously, Anthropic automatically deleted user conversations after 30 days. Under the new policy, conversations from users who don't opt out will be retained for five years.

The new policy affects all consumer tiers: Claude Free, Pro, and Max users, plus those using Claude Code. Importantly, business customers using Claude for Work, Claude Gov, Claude for Education, or API access through services like Amazon Bedrock remain unaffected.

This creates a clear two-tiered privacy system where enterprise customers get protection while consumers become training data.

Anthropic frames the change around improving "model safety" and helping future Claude models "improve at skills like coding, analysis, and reasoning." The company emphasizes user choice and the ability to change settings at any time.

This is total nonsense, of course. In reality, training AI models requires vast amounts of high-quality conversational data, and accessing millions of Claude interactions will provide exactly the kind of real-world content that can improve Anthropic's competitive positioning against rivals like OpenAI and Google.

This isn't happening in isolation. Google recently announced a similar opt-out policy for Gemini, set to take effect on Sept. 2. That policy is similarly broad, covering user-uploaded files, photos, videos, and even screenshots that users ask questions about. The entire industry is converging on the same strategy: make data collection the default and require users to actively opt out.

If your company uses Claude, review your access method immediately. Consumer accounts now default to data sharing. Enterprise accounts maintain privacy protections, but at significantly higher cost. And you'll probably want to let your workforce know that they must properly configure their personal AI accounts, since anyone using Claude on a personal device could accidentally feed sensitive company data into the training pipeline.

To opt out today, go to Settings > Privacy. In the Privacy settings area, you'll see "Help improve Claude." Toggle it off. Accept the terms. You're done.

The deadline is Sept. 28, 2025. After that date, users must make their selection to continue using Claude. I think we should consider this a preview of coming industry standards. Privacy-by-default will quickly transition to privacy-by-choice, with the burden shifting to users to protect their own data. -s

P.S. Yesterday I added a feature that lets you explore our stories in ChatGPT with one click. Yes, I see the irony of announcing an AI feature in a story about AI companies harvesting user data. (Original source links are still there for the privacy-conscious.) Love it? Hate it? Useful? Useless? As always, your thoughts and comments are both welcome and encouraged. -s


About Shelly Palmer

Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN and writes a popular daily business blog. He's a bestselling author, and the creator of the popular, free online course, Generative AI for Execs. Follow @shellypalmer or visit shellypalmer.com.
