Is ChatGPT Confidential? The Judge Said No — Here Are 5 Rules

Is ChatGPT confidential? A federal judge ruled no — your chats can be used in court. Plain-English breakdown and 5 things to stop typing today.

A friend emailed me last week. He was in an ugly spot with his old employer over a non-compete, and he’d spent the weekend typing the whole story into ChatGPT — dates, what he did, what he probably shouldn’t have done, and a draft separation agreement he was planning to send. He wanted to know if I thought it sounded fair.

It didn’t sound fair. It sounded like evidence.

A federal judge in New York ruled in February that the exact thing he did — venting to a consumer AI chatbot before talking to a lawyer — doesn’t have any of the legal protection he assumed it did. The ruling has now made it into mainstream coverage, and lawyers on X are using the phrase “vibe lawyering” to describe what your coworkers are probably doing with their lunch breaks. If you’ve ever typed something into ChatGPT, Claude, Gemini, or Copilot that you wouldn’t want read aloud in court, this is the piece for you.

Here’s what was actually decided, what it means for non-lawyers, and the five things you should stop typing into AI today.

What the Judge Actually Ruled

The case is United States v. Heppner. It was decided on February 10, 2026, by Judge Jed S. Rakoff in the Southern District of New York. Rakoff is one of the most widely respected federal judges in the country — when he writes a first-of-its-kind opinion, other courts read it.

Here’s what happened. A man named Bradley Heppner got hit with a grand jury subpoena related to a fraud investigation. He hired a lawyer. His lawyer gave him materials and walked him through the case. Then, without telling his lawyer, Heppner went home and fed a bunch of that information into the consumer version of Anthropic’s Claude. He asked Claude to research legal issues, outline his defense strategy, and generate documents. Over time he generated 31 AI-written documents about his own case.

Then the FBI searched his home and took everything. The 31 Claude documents ended up in the prosecutor’s hands.

Heppner’s lawyers argued in court that those 31 documents should be thrown out — that they were covered by attorney-client privilege, or at least the work-product doctrine, and the government shouldn’t be allowed to use them.

Judge Rakoff said no. On three grounds:

  1. Claude isn’t your lawyer. Attorney-client privilege only protects communication between a client and their attorney. Claude is not an attorney. Anthropic, the company that runs Claude, is a third party. When you share confidential information with a third party, you’ve broken the seal of confidentiality.
  2. Anthropic’s terms let them hand over your data. Consumer Claude’s privacy policy allows Anthropic to disclose user content in response to legal process. A subpoena, court order, or search warrant can pull your chats.
  3. Work-product protection didn’t apply either. That doctrine covers materials prepared at an attorney’s direction in anticipation of litigation. Heppner did this on his own. His lawyer hadn’t asked him to use Claude.

Rakoff did leave one important door open. If your lawyer specifically directs you to use an AI tool as part of your legal research — like they’d direct an accountant or an interpreter — the analysis might come out differently. Lawyers call this a “Kovel arrangement.” But that’s not what most people are doing when they open ChatGPT on a Tuesday night.

Wait — What About the OpenAI 20 Million Logs Thing?

If this felt like it came out of nowhere, here’s the other case that’s been rattling around in the background.

In January 2026, a different SDNY judge affirmed an order requiring OpenAI to turn over 20 million random ChatGPT conversation logs to the plaintiffs in the big New York Times-led copyright lawsuit. Earlier, in May 2025, a magistrate judge had already ordered OpenAI to preserve every ChatGPT conversation — affecting over 400 million users worldwide. That preservation order ended in September 2025, but the 20-million-log sample is still being produced.

The logs are being de-identified before they’re handed over — names and phone numbers scrubbed. But “de-identified” isn’t the same as “private.” Researchers have repeatedly shown that enough context in a long chat thread can re-identify a specific person.
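The gap between scrubbing direct identifiers and actual anonymity is easy to demonstrate. The toy sketch below is purely illustrative — the name, phone pattern, and regexes are assumptions of mine, not OpenAI's actual de-identification pipeline. It strips a name and phone number from a chat excerpt, yet leaves quasi-identifiers (job title, firm size, city) that together could single out one person:

```python
import re

def scrub_direct_identifiers(text: str) -> str:
    """Crudely remove obvious direct identifiers: a known name and
    US-style phone numbers. This mimics what 'de-identified' often
    means in practice -- and shows what it leaves behind."""
    text = re.sub(r"\bJane Doe\b", "[NAME]", text)                  # named party
    text = re.sub(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", "[PHONE]", text)  # phone numbers
    return text

chat = ("I'm Jane Doe, call me at 555-867-5309. I'm the only forensic "
        "accountant at a 12-person firm in Duluth, and I'm disputing "
        "my non-compete.")

scrubbed = scrub_direct_identifiers(chat)
print(scrubbed)
# The name and phone are gone, but 'forensic accountant',
# '12-person firm', and 'Duluth' survive -- and together they may
# describe exactly one person.
```

That residue is the re-identification risk researchers keep flagging: the scrub removes what a regex can see, not what a determined reader can infer.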

None of this has anything to do with Heppner’s criminal case. But both stories point at the same truth: your ChatGPT conversations are not sealed in a vault. They can be preserved under court order, they can be produced in civil discovery, they can be seized under a search warrant, and — per Heppner — they can be used against you when they surface.

So Is ChatGPT Actually Confidential?

Let’s answer the question directly. No. Not in the legal sense of the word.

The word “confidential” in a legal context means something specific: communication that can’t be compelled as evidence because it’s protected by privilege (attorney-client, spousal, clergy-penitent, doctor-patient) or by work-product doctrine. None of those apply to a chatbot.

What ChatGPT is, for most users, is private from other humans looking over your shoulder. OpenAI doesn’t sell your chats. Random strangers can’t read them. Your boss can’t sign into your account from their desk. That’s a real kind of privacy, and it’s the kind most people are thinking of when they ask “is ChatGPT confidential?”

But it is not the kind of privacy that holds up when a subpoena or a search warrant shows up. And the Heppner ruling is the first federal decision to spell that out.

Your Data Rights, By AI Provider (April 2026)

Here’s roughly where each major tool stands today. This can change, so always check current settings.

| Provider / Tier | Uses your chats to train? | Data retention | Holds up against a subpoena? |
| --- | --- | --- | --- |
| ChatGPT Free / Plus | Yes, unless you opt out | 30 days after delete | No |
| ChatGPT Team / Enterprise | No (default) | Admin-controlled | Better, but still no privilege |
| Claude Free / Pro | No (default) | 30 days after delete | No |
| Claude Team / Enterprise | No (default) | Zero-retention option available | Better, but still no privilege |
| Gemini (Google) | Yes in personal, no in Workspace | Up to 72 months in Activity | No |
| Copilot (Microsoft 365) | No (when used inside M365) | Tenant-controlled | Better in enterprise tenant |

Two things to notice. First, even enterprise tiers with zero retention aren’t “privileged” — they just make the company’s lawful response to a subpoena slower and cleaner. Second, the gap between free and enterprise isn’t about hiding the chats from the world. It’s about who holds the keys and how much control your IT team has.

The 5 Things to Stop Typing Into ChatGPT Today

Lawyers at Gibson Dunn, Proskauer, Dorsey, and a handful of other big firms have been publishing memos since Heppner came down. The advice converges on roughly these five buckets. Read them even if you think they don’t apply to you.

1. Your own employment disputes

If you’re in the middle of a non-compete fight, an unpaid-wages claim, a harassment situation, or any kind of “I think I may have messed up at work and I’m trying to get ahead of it” conversation — do not type that into ChatGPT. A lawyer at the firm Manatt went viral on X in early April with exactly this example: a friend pasted an entire employment dispute, including a description of things he’d done in violation of his non-compete, into ChatGPT and asked for a separation agreement. He now has a written record of his own liability sitting on a third party’s server.

Ask a human lawyer first. Same for any conversation you’re having with your own boss about a dispute. Keep it out of the bot.

2. “Am I in trouble for doing X?” questions

If you find yourself typing anything that starts with am I liable for, can I get sued for, is this a crime, could this be insider trading, can I be fired for, is this discrimination — close the tab. Those questions are the kind where the act of asking them creates a paper trail that shows you knew there was a risk. Which is exactly the kind of thing a plaintiff’s lawyer wants to find in discovery two years later.

3. Confidential client or customer data

This one is for HR managers, paralegals, financial advisors, accountants, consultants, and anyone else who handles data about other people. Don’t paste a client’s SSN, medical history, termination paperwork, or confidential negotiation position into a consumer chatbot. Even if your own conversations aren’t discoverable in a case you’re involved in, your client’s information just left your firewall. Your firm’s malpractice insurer has thoughts about this.

4. Medical details tied to a claim or lawsuit

If you’re filing a disability claim, an injury lawsuit, a workers’ comp case, or any kind of insurance dispute — don’t dump your full medical situation into ChatGPT to try to figure out your odds. Your doctor’s files are covered by HIPAA. Your ChatGPT conversation about your doctor’s files is not.

5. Ongoing investigation details

This is the Heppner rule in plain English. If you are under any kind of investigation — grand jury subpoena, SEC inquiry, EEOC complaint, IRS audit, internal HR review at your company — do not use consumer AI to organize your defense, summarize the facts, or draft your response. Even if your lawyer knows you’re doing it. The only way this is arguably safe is if your lawyer has explicitly directed you to use a specific AI tool as part of their strategy, with a paper trail showing that direction. That’s rare and it’s not the default.

What This Means for You

If you’re a regular ChatGPT user who mostly asks for recipes and trip planning: You’re fine. This ruling isn’t about your day-to-day questions. It’s about the specific moment when you’re tempted to treat the chatbot like a confidential advisor for a legal problem. Don’t do that.

If you’re in HR: Your AI tools need to be part of your company’s AI policy, not a shadow workaround. Stop drafting termination letters, PIPs, and investigation memos in consumer ChatGPT. If your company hasn’t rolled out ChatGPT Enterprise or Microsoft Copilot for M365, that’s a conversation to have with IT and legal this week. Also — disable the meeting-transcription AI bots on calls where you’re discussing individual employees. Those recordings can be subpoenaed too.

If you’re a paralegal or legal ops person: You already knew not to share client info with outside parties. Add consumer AI to that list. If your firm wants to use AI for drafting or research, push for an enterprise deployment with attorney-direction protocols. Any AI use should be logged and directed by an attorney — not independent. That’s the lane where Rakoff left the door open.

If you’re a small business owner: Two actions this week. First, audit which AI tools your team uses and whether any of them are consumer tiers holding customer data. Second, have a one-pager of “don’t type this into AI” rules and actually send it around. If you’re ever in litigation, “we had a written policy” is an enormously better place to stand than “we didn’t think about it.”
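For the audit step, even a rough script is better than nothing. This sketch is a starting point under stated assumptions — the patterns, categories, and sample text are illustrative, not a substitute for a real DLP tool — and it scans exported chat text for the kinds of content the memos above flag:

```python
import re

# Illustrative risk patterns only; a real DLP tool covers far more.
RISK_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "liability question": re.compile(
        r"\b(am i liable|can i get sued|is this a crime)\b", re.I),
    "dispute keyword": re.compile(
        r"\b(non-compete|severance|termination|eeoc|subpoena)\b", re.I),
}

def flag_risky_lines(chat_text: str) -> list[tuple[int, str]]:
    """Return (line_number, category) pairs for lines matching a risk pattern."""
    hits = []
    for i, line in enumerate(chat_text.splitlines(), start=1):
        for category, pattern in RISK_PATTERNS.items():
            if pattern.search(line):
                hits.append((i, category))
    return hits

export = """Help me plan a trip to Lisbon.
My SSN is 123-45-6789, does that matter for the claim?
Am I liable if I ignored my non-compete?"""

for line_no, category in flag_risky_lines(export):
    print(f"line {line_no}: {category}")
# line 2: ssn
# line 3: liability question
# line 3: dispute keyword
```

The point isn’t the code — it’s that a twenty-minute scan tells you whether the “don’t type this into AI” memo is closing the barn door before or after the horse left.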

If you’ve already done the thing: Don’t panic. In most cases it won’t matter. But if you’re in a dispute that could turn into litigation — or you already know you are — talk to a lawyer before you do anything else with that chat thread. Don’t delete it either. Destroying evidence after you know it might be relevant is a separate problem from creating the evidence in the first place.

The bottom line: ChatGPT is a useful tool, not a confidential advisor. The Heppner ruling didn’t change the nature of AI. It just wrote down in federal caselaw what was always true — that when you share something with a chatbot, you’re sharing it with a company, and a company can be compelled to hand it over. Treat every prompt like a postcard. If you wouldn’t write it on the back of a postcard and mail it, don’t type it into the bot.

One more thing. There’s a counter-case worth knowing about. On the same day Rakoff ruled in Heppner, a federal court in Michigan decided Warner v. Gilbarco, where a pro se plaintiff’s ChatGPT chats were protected as work product. The difference: Warner didn’t have a lawyer at all, so there was no privilege to waive. His AI use was his litigation strategy.

So the safest ways to use a consumer chatbot for anything legal-adjacent are to have no legal problem at all — or, like Warner, no lawyer whatsoever. Which is exactly the reverse of what most people assume.

