Anthropic on Monday announced on-demand chat recall for Claude. Ask it to "find what we discussed about _____," and it will search your previous conversations and pull relevant context into a new thread. No persistent memory profile quietly building in the background. Just explicit search when you need it.
The feature is rolling out to Max, Team, and Enterprise subscribers first, and it works across web, desktop and mobile. Toggle it under Settings → Profile → "Search and reference chats." Once enabled, Claude only retrieves past chats when you specifically ask. Want to exclude a chat from future searches? Delete it. That's your entire privacy model.
Pro tip: Searches either span all your non‑project chats or stay scoped to a single project.
OpenAI took the opposite path in April. ChatGPT’s memory, when enabled, references all your past conversations and your saved memories by default. It is persistent unless you disable it. Claude’s approach is explicit recall on request. That trades convenience for control.
I like ChatGPT’s always‑on memory. It saves time. Claude’s “ask to search” model is cleaner for governance, but it asks me to remember to remember. For me, it’s a trade-off. For my enterprise clients, the control story almost always wins.
Importantly, conversation history creates switching costs. The more institutional knowledge inside one model’s logs, the harder it becomes to move. This is the biggest "gotcha" in the chat memory playbook. With that in mind, here’s where I see the impact:
Productivity. Multi‑week projects get easier. You avoid recap prompts and copy‑paste archaeology. Start with a target of 10-20 percent less “context reload” time and validate it with a pilot.
Governance. “Search on demand” is simpler to explain to legal and compliance than profile‑style memory. Fewer surprises. Clearer defaults.
Privacy posture. Claude for Work data is not used to train Anthropic’s models by default. That helps, but you still need retention and access policies because conversation history is work product.
Before You Deploy Claude Chat Memory at the Enterprise Level
- Set the default. Off for sensitive teams. Document when to enable it. Corporate AI Ops should own the toggle and the exceptions process.
- Enforce project hygiene. Use Projects for client and campaign separation so searches stay scoped and you avoid cross-contamination.
- Create deletion SOPs. If a conversation should never be resurfaced, delete it immediately. Add this step to incident and eDiscovery runbooks.
- Update the data map. Treat chat history as governed content. Define retention, legal hold, DSAR, and export processes. Cite your enterprise terms in policy.
- Measure the win. Track time saved on context reloads, multi-session completion rates, and manual recap prompts. If you can’t measure improvement, keep it off.
Expect wider availability beyond Max, Team, and Enterprise and rapid adoption in research, code review, and marketing ops. Procurement will push for clearer retention controls and exports. If Anthropic ships admin‑level org defaults and reporting, I would expect higher enterprise stickiness.
In practice, conversation history is a new data moat. Exploit the productivity, govern the risk, and do what you can to avoid getting trapped in a single vendor’s archive.
As always, your thoughts and comments are both welcome and encouraged. -s
Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN and writes a popular daily business blog. He's a bestselling author, and the creator of the popular, free online course, Generative AI for Execs. Follow @shellypalmer or visit shellypalmer.com.