Claude Memory: What It's Storing About You and 5 Things to Delete

Claude has been quietly building a profile on you since March. Here's the 10-minute audit for HR managers, paralegals, accountants, and other professionals who handle sensitive data.

Go open Claude right now, click the profile icon, and go to Settings → Memory. Read what’s in there.

If you’ve been using Claude since March — even the free tier — there is almost certainly a paragraph of notes sitting in that panel that Claude has been quietly building about you from every chat you’ve had. Your role. Your company (if you named it). Projects you’re working on. The names of people you’ve described. Preferences you stated once and forgot about. Client situations you mentioned in passing.

If you’re in HR, legal, healthcare, finance, or any profession where you handle other people’s private information, that profile is the other shoe in the “Is ChatGPT confidential?” conversation that started in February with the Heppner ruling. For most people, it just hasn’t dropped yet.

This is the 10-minute audit. Run it this week.

What Claude Memory Actually Is

Anthropic launched Claude’s persistent memory feature in August 2025, initially for Claude Max and Team subscribers only. By October it was available to all paid tiers. On March 3, 2026, Anthropic dropped the paywall entirely — free users got the feature too. Which means if you’ve been using any version of Claude since roughly early March, memory has been on by default, and Claude has been synthesizing a profile of you from every conversation.

The feature works like this:

  • It reads your chats and writes notes about you. Not the full transcripts — summarized facts. “User is a paralegal working on a product liability case.” “User prefers concise bullet-point answers.” “User’s team of six reports to a partner named Sarah.”
  • Those notes persist across every future conversation. When you start a new chat, Claude already has that memory loaded. You don’t have to re-explain yourself.
  • You can view, edit, pause, or delete any of it. Settings → Memory. It’s all editable text.
  • It can import your memory from ChatGPT, Gemini, and Grok. Anthropic added that feature in March 2026 — a small transfer tool in Settings.

That sounds like a productivity win, and for casual use it is. “Remember I’m building a cookbook in Spanish” — great. For professional use on anything involving other people’s data, it’s a quiet liability that most people haven’t audited yet.


Why the Memory Audit Matters, Specifically

Two things happened in the last sixty days that changed the stakes.

1. U.S. v. Heppner (Feb 10, 2026): A federal judge in New York ruled that AI conversations with consumer tools like Claude are not protected by attorney-client privilege. If you’ve been describing client matters to Claude, those chats are discoverable. Now extend that logic: if Claude has synthesized those chats into a running memory profile, the memory profile itself is discoverable.

2. The “supply chain risk” designation (Feb 27, 2026): The U.S. Secretary of Defense designated Anthropic a “supply chain risk to national security” under 10 USC §3252 — the first time that statute has been used against an American company. This is a national-security framing, not a privacy framing, but it signals that federal regulators are treating AI company data with a seriousness consumer users generally don’t expect.

A third thing matters specifically for the memory feature: Anthropic’s default data retention for training-enabled accounts jumped from 30 days to 5 years in 2025. That’s roughly a 6,000% increase. If your memory profile is used in model training — and by default on consumer tiers, it may be — it lives on Anthropic’s servers for five years. Even if you delete the chat, the synthesized memory may not be purged on the same timeline.

For HR managers auditing discrimination cases, paralegals drafting motions, accountants reviewing client books, therapists taking session notes, and anyone with compliance obligations, this is the moment to audit. Not panic — audit. Because the feature is actually useful, once you know what to keep and what to delete.

The 10-Minute Audit

Do this now. It takes ten minutes and you can do it while a meeting is running.

Step 1 (1 minute) — Find the setting

In Claude (web or app), click your profile icon in the bottom-left, then Settings. You’ll see a left-side menu. Look for Memory. In some builds this is under Capabilities → View and edit memory. Same thing.

When you click it, you’ll see a panel called something like “Memory summary” with a block of text Claude has built. That’s your profile. Read it.

Step 2 (2 minutes) — Read what’s there, slowly

The first read-through is going to be weirder than you expect. You’ll find one of two patterns:

  • Benign: notes about your writing style, tools you use, ongoing personal projects. If that’s all, you’re fine.
  • Concerning: references to client situations, coworker names, company internal information, financial details, or anything you wouldn’t want a subpoena to find. Flag every line.

Don’t edit yet. Just read.

Step 3 (3 minutes) — Delete the concerning lines

Claude lets you delete memory lines individually. Highlight the line (or click the small X next to it, depending on your build), and remove it.

The five categories most worth deleting for professionals:

  1. Specific client names, matter names, or case identifiers. If memory says “User is working on the Acme v. Johnson matter” — delete.
  2. Coworker or employee names paired with sensitive context. “User’s report Melissa is underperforming” — delete.
  3. Financial figures, account details, or internal numbers. “Company revenue is $4.2M” — delete.
  4. Health or medical information about anyone. “User’s patient has diabetes” — delete without exception. HIPAA does not care that Claude synthesized it.
  5. Negotiation positions, litigation strategy, or any “what I’m trying to achieve” information in an active matter. “User is trying to settle the case under $200K” — delete.

Keep the benign: preferred writing style, tools you use, topics you’re interested in, language preferences.
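If you paste the memory panel’s text into a local file first, a short script can pre-flag the lines worth deleting before you start clicking. This is a rough sketch, not a compliance tool: the categories mirror the five above, but every pattern here is an illustrative heuristic you’d tune to your own clients, colleagues, and matter-naming conventions.

```python
import re

# Illustrative deny-list patterns for the five categories above.
# Rough heuristics only; tune them to your own naming conventions.
FLAG_PATTERNS = {
    "case/matter identifier": re.compile(r"\bv\.\s+\w+|\bmatter\b|\bcase\b", re.I),
    "financial figure":       re.compile(r"\$\s?\d[\d,]*(\.\d+)?\s*[KMB]?", re.I),
    "health information":     re.compile(r"\bpatient\b|\bdiagnos|\bmedicat", re.I),
    "negotiation position":   re.compile(r"\bsettle\b|\bnegotiat|\boffer\b", re.I),
    "ssn-like number":        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def audit_memory_lines(text: str) -> list[tuple[str, str]]:
    """Return (category, line) pairs for memory lines worth deleting."""
    flagged = []
    for line in filter(None, (l.strip() for l in text.splitlines())):
        for category, pattern in FLAG_PATTERNS.items():
            if pattern.search(line):
                flagged.append((category, line))
                break  # one flag per line is enough to act on
    return flagged
```

Running it over a copied memory panel leaves benign lines (“User prefers concise bullet-point answers”) unflagged and surfaces lines like “User is trying to settle the Acme matter under $200K” with the category that caught them. The deletion itself still happens by hand in Settings → Memory.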

Step 4 (2 minutes) — Decide on your ongoing posture

You now have three choices, in Settings → Memory:

  • Keep memory on. Convenient for casual chats. Best for you if your work doesn’t touch other people’s private information.
  • Pause memory. Claude stops adding new memories but keeps what’s already there. Good middle ground if you want the existing context but want to stop adding to it.
  • Turn memory off entirely and Clear all. Resets Claude to “remembers nothing” state. Cannot be undone. Best for regulated professions — paralegals, HR handling active investigations, therapists, compliance officers.

The posture most people should probably default to: Pause, plus a review of your remaining entries every two weeks. You keep the useful context, and you don’t accumulate new material while you’re working on sensitive matters.

Step 5 (2 minutes) — Turn off training (if you haven’t already)

Separate from memory: Anthropic may use your conversation data to train Claude. This is configurable at Settings → Data & Privacy Controls. Find “Allow training on your data” or similar wording and confirm it’s off.

This is a one-time change. Do it now. The difference is between a memory that’s yours and one that may inform Claude’s next model training cycle across five years of retention. Your call. Mine is off.

Consumer Claude vs. Claude for Work vs. API: Know Your Tier

This part is the single most common mistake professionals make — assuming their paid Claude plan is “business-grade.” It isn’t.

Tier                      Training on your data by default   Memory feature available   Business-grade privacy
Claude Free               Possible — opt out in settings     Yes (since March 2026)     No
Claude Pro                Possible — opt out in settings     Yes                        No
Claude Max                Possible — opt out in settings     Yes                        No
Claude Team               Possible — opt out in settings     Yes                        Still consumer tier
Claude Enterprise         No (contractually excluded)        Yes, admin-controlled      Yes
API with zero-retention   No                                 Controlled by your app     Yes

Here’s the part that surprises most people: Claude Team is still a consumer product from a data-handling standpoint. It has the same training-data posture as Free and Pro. If you’re at a firm using Claude Team to handle client matters, you don’t have the privacy floor you probably assumed. For that, you need Claude Enterprise or a zero-retention API deployment.
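For teams weighing the API route, it helps to see what “no memory” means mechanically. A minimal sketch, assuming the general shape of Anthropic’s Messages API (the model name and field values here are illustrative): every request carries its full context in the `messages` list you send, so the only “memory” is whatever you re-send yourself.

```python
# Sketch of statelessness at the API layer: each request to the Messages
# API contains the entire conversation. Nothing you omit is remembered
# server-side, and under a zero-retention agreement the request itself
# isn't stored either. Model name and prompts are illustrative.

def build_request(history: list[dict], new_message: str) -> dict:
    """Each call rebuilds the full conversation from scratch."""
    return {
        "model": "claude-sonnet-4-5",
        "max_tokens": 1024,
        # The only context the model sees is what you pass here.
        "messages": history + [{"role": "user", "content": new_message}],
    }

first = build_request([], "Summarize this employment-law memo: ...")
# A follow-up must re-send the prior turns itself; there is no
# server-side profile to fall back on.
followup = build_request(
    first["messages"] + [{"role": "assistant", "content": "Summary: ..."}],
    "Now shorten it to three bullets.",
)
```

That explicitness is the privacy property: if client context never goes into `messages`, it never reaches the model at all, which is the opposite of a consumer account silently synthesizing a profile across chats.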

What Should Go In Memory (and What Should Never)

Let’s be specific, because the temptation to use memory for everything is real.

Safe to keep in memory:

  • Your professional role and general area (“User is a paralegal, primarily employment law”)
  • Your writing preferences (“User prefers shorter, bullet-pointed answers”)
  • Your tools (“User works in Clio, not Westlaw”)
  • Your long-term projects that involve no third-party confidential data (“User is writing a novel about 1970s jazz”)
  • Language preferences and pronouns

Should never go in memory, even once:

  • Client names, matter numbers, case captions
  • Patient names, conditions, medications
  • Employee names paired with performance issues or legal matters
  • Negotiation positions on active deals
  • SSNs, account numbers, tax IDs, or any regulated identifier
  • Third-party financial information
  • Attorney work product (legal strategy, theories of the case)
  • Anything a subpoena response would have to disclose

The rule I use with the lawyers I know: if the sentence you’re about to paste would appear in a deposition exhibit, it doesn’t go into Claude memory. Full stop.

What This Means for You

If you’re a paralegal, attorney, or legal assistant: The Heppner ruling applies to the memory profile the same way it applies to chat transcripts. Turn memory off for any tool use that touches an active matter, and run the audit above on every Claude account you touch. If your firm is on Claude Team, flag to your managing partner that Team is consumer-tier — you may need an Enterprise migration.

If you’re in HR: Anything related to terminations, investigations, or performance management should never touch a memory-enabled consumer account. Claude Enterprise with admin controls is the right tier. If you’re running HR on Claude Pro or Team, audit this week. Delete every line of memory that names a real employee.

If you’re in healthcare: Memory-enabled consumer Claude is not HIPAA-compliant. The tool that is — for clinical questions — is OpenEvidence (HIPAA BAA, NPI-verified, no third-party training). For general-purpose work, you need Enterprise or API with zero-retention.

If you’re in finance or accounting: SEC and SOX rules treat AI-synthesized data the same as any other third-party disclosure. If you’ve been using Claude for client work on Pro or Team, audit this week. Training should be off. Memory should be purged of any client identifying information. Consider Enterprise.

If you’re a therapist or counselor: Memory should be off. Not paused — off. Any client content that’s been synthesized needs to be deleted, and you should document that the deletion happened (the memory reset is permanent but your professional documentation should still reflect the action).

If you’re a general knowledge worker without regulated data obligations: The memory feature is genuinely useful. Keep it on, run the audit once a quarter, and stay out of the habit of naming specific people or companies in your chats.

The bottom line: Memory is one of the genuinely good features Anthropic has shipped in the last year. It’s also the feature most likely to produce a discoverable document you didn’t know you’d made. The audit takes ten minutes. The alternative is explaining to a regulator in 2027 why Claude’s memory profile referenced your client by name.

The One-Sentence Decision Framework

If you’re not sure whether to keep memory on, off, or paused, here’s the sentence that answers it:

“Would I be comfortable with everything in this memory panel being handed to opposing counsel, my state board, or a regulator, with two weeks’ notice, in 2028?”

If yes, keep it on. If no, pause or clear. It really is that simple.

Go do the audit. Ten minutes.

