The real question isn’t whether AI will replace paralegals. The research is pretty clear on that — it won’t. A Goldman Sachs report from 2023 estimated that AI could automate up to 44% of legal tasks, but “tasks” and “jobs” are different things, and paralegals do about six jobs at once depending on the hour.
The real question is whether your current ChatGPT habits could get you fired before AI even has a chance to replace you.
On February 10, 2026, a federal judge in New York ruled in United States v. Heppner that documents someone typed into a consumer AI tool are not protected by attorney-client privilege or the work-product doctrine. The ruling was aimed at a criminal defendant who fed his lawyer’s strategy notes into Claude. But every legal ethicist who’s written about it since — Gibson Dunn, Proskauer, Dorsey, the New York State Bar — has pointed at the same group as the one most exposed: paralegals and legal support staff.
If that’s you, this piece is the short version of what twelve firm ethics memos are trying to tell you.
Quick Reality Check: Will AI Replace Paralegals?
Short answer: no.
Slightly longer answer: AI won’t replace paralegals, but paralegals who use AI well will replace paralegals who don’t. The data bears this out. According to the Legal Industry Report 2024, about 24% of law firms have adopted legal-specific AI tools — and firms that have adopted them are paying their AI-fluent paralegals more, not firing them. Your job security doesn’t depend on avoiding AI. It depends on using it without torching your career or your firm’s malpractice coverage.
That’s the whole point of this piece. Not don’t use AI. Use it well. The five things below are the habits that will sink you if you don’t change them.
The Heppner Rule, in One Paragraph
Before the list, you need to understand why the ground just moved. Here’s the shortest possible version:
In U.S. v. Heppner, Judge Jed Rakoff held that a consumer AI tool is a third party. When you feed privileged information into a third party, you waive privilege. The only narrow exception is if an attorney specifically directs your AI use as part of their strategy — a so-called Kovel arrangement. If you are using AI on your own initiative, at your desk, on a Tuesday afternoon, without a paper trail of direction from a supervising attorney, you are not in the protected lane.
ABA Formal Opinion 512 (July 29, 2024) is explicit that supervising lawyers are responsible for how nonlawyers — that’s you — use generative AI. Your partner’s license is on the line when you paste client data into a public chatbot. They don’t always know they should be training you on this. You’ll have to protect both of you.
Now the five things.
1. Don’t Paste Discovery Documents Into ChatGPT for a “Quick Summary”
This one is on the short list of examples that legal ethicists specifically flagged when writing about Heppner. A paralegal has a deposition transcript, a batch of client emails, or a stack of discovery PDFs. The document is 300 pages long. The attorney wants a summary by 5 p.m. The paralegal opens ChatGPT, uploads the PDF, asks for a chronological summary of key facts.
Every single thing about that workflow is a problem.
- The document is privileged or work product.
- ChatGPT is a third party.
- Anything uploaded to a consumer tier can end up in training data, preserved under a court order, or produced in a different case’s discovery.
- The attorney didn’t direct this specific tool use. Privilege is waived for that content.
If you need to summarize discovery, use a firm-approved enterprise AI tool — Relativity aiR, Everlaw AI, CoCounsel, Spellbook, or whatever your firm licensed. Those tools have zero-retention contracts, SOC 2 audits, and ethics walls. If your firm hasn’t licensed one, flag it to your supervising attorney with this exact sentence: “Given the Heppner ruling, we don’t have a privilege-safe way to do this right now — can we talk about tools?”
That sentence, in writing, is the single best career move you can make this month.
2. Don’t Draft Motions or Briefs in Consumer ChatGPT
This is the one that sends paralegals to r/paralegal with “I just got called into the managing partner’s office” stories. The workflow: a paralegal is asked to put together a first draft of a motion, a memo, or a brief. They open ChatGPT Plus, describe the case, and ask for a draft with citations.
Two problems, both severe.
First, hallucinated citations. Legal AI watchers have lost count of the sanctions orders. In one widely reported case, a California court fined an attorney $10,000 because 21 of the 23 cases cited in his appeal brief didn’t exist — ChatGPT made them up. If that hits your firm through something you drafted, it doesn’t matter that a lawyer signed the filing. You wrote it. Your boss will remember.
Second, privilege. You’re describing the client’s position to a third party. Even if you strip names, a sophisticated adversary’s e-discovery team will match fact patterns. The Heppner ruling said these chats are discoverable. The preserved logs from the OpenAI 20-million-sample order — a separate Jan 2026 ruling in the NYT copyright case — prove those chats actually get handed over.
Safe alternative: use a firm-approved drafting tool that runs on private infrastructure. If you must use a consumer tool for structure or style ideas, use it for generic language only. Never paste the facts of your case.
3. Don’t Run Legal Research Through ChatGPT Without Verification
“Can you find me three cases that support this argument?” is the most tempting ChatGPT prompt in the paralegal world. It’s also the fastest path to a sanctions hearing for your attorney.
ChatGPT’s ability to cite real cases has improved. It still hallucinates. Even when the case names are real, the holdings are often wrong or inverted. The standard ethics memos all say the same thing: any citation that came from a generative AI tool must be pulled from a primary source (Westlaw, Lexis, PACER, Bloomberg Law) and read in full before it goes in anything billable.
The right workflow:
- Use ChatGPT (or ideally, Lexis AI / Westlaw Precision / CoCounsel, which are trained on verified case law) for issue-spotting only
- Independently verify every citation in a primary source
- Read the full opinion — not just the AI’s summary
- Shepardize or KeyCite to confirm the case is still good law
- Document your verification trail. Firms are increasingly asking for proof of AI citation verification in the billing narrative
The paralegals keeping their jobs are the ones who are faster at AI-assisted research and more rigorous about verification than paralegals who did everything manually five years ago.
4. Don’t Summarize Client Communications in ChatGPT
This one sneaks up on people. A client has sent a long email thread. You want to write your supervising attorney a briefing. You paste the client’s emails into ChatGPT and ask for a summary.
You just uploaded client confidences to a third party. It doesn’t matter that your intent was to brief your boss. Per Heppner, the disclosure has happened. Anthropic’s consumer Claude terms and OpenAI’s consumer ChatGPT terms both reserve the right to disclose user content in response to legal process. Your client can now, in theory, sue for breach of confidentiality — and under Model Rule 1.6, your firm has a duty-to-safeguard problem regardless of what the client does.
Two safe alternatives:
- Use your firm’s enterprise AI. Most Microsoft 365 Copilot tenants, ChatGPT Enterprise instances, and Claude for Business instances are under zero-retention agreements. Your IT team can confirm in five minutes.
- Summarize it yourself. That was the job before AI. If that’s slower than AI, that’s an argument for your firm to license a proper tool. Not an argument for using the free one.
5. Don’t Use ChatGPT to Draft Your Firm’s Own Internal Documents Without Direction
Here’s the subtle one. You might think, “Okay, I won’t use it for client stuff. But for firm internal stuff — an engagement letter template, a conflict-check workflow, a staff training memo — that’s fair game, right?”
It depends. If the document references any specific client matter, any firm confidential strategy, or any privileged analysis, you’re back in the same problem space. Even a “template” engagement letter, if it references fact patterns from live matters, can leak information.
The safe lane: use AI only for truly generic, nonconfidential work. A CLE summary that anyone could write from publicly available sources? Fine. A summary of the state’s new paralegal certification requirements? Fine. A template that’s been sanitized of all specific matters and approved by your ethics counsel? Fine. Everything else should be running through an approved tool with a paper trail.
The Kovel Card: Your Only Real Protection
There’s one exception to all of this, and paralegals should know about it because it’s the only way to use consumer or public AI tools with something close to legal protection in an active matter.
It’s called a Kovel arrangement. The name comes from a 1961 case (United States v. Kovel) that held that an accountant working at a lawyer’s direction can fall within attorney-client privilege, like an interpreter. Rakoff specifically left the door open for AI tools to be treated the same way — if the supervising attorney directs the AI use in writing as part of legal strategy.
What a Kovel-adjacent AI workflow looks like in practice:
- Written engagement or internal memo from the attorney stating they are directing specific AI tool use as part of representation
- The AI tool has appropriate security (usually enterprise, not consumer)
- The paralegal’s role is as the attorney’s agent, not as an independent user
- All outputs are treated as attorney-client privileged material and are not shared outside the representation
If you’re doing serious AI-assisted legal work without that framework, you’re not protected. Not after Heppner.
What This Means for You
If you’re a paralegal at a small firm without an AI policy: You’re the most exposed. Push the ethics memo up the chain. Share this article. Suggest a 30-minute CLE at the next all-staff meeting. The ABA’s Opinion 512 made this your senior partner’s responsibility, but they might not know that yet — helping them find out is a career move, not a risk.
If you’re a paralegal at a big firm that has an AI policy: Read it again this week. Most firm policies were written before Heppner. They probably say “don’t put client data into consumer AI.” They probably don’t say anything about the Kovel carve-out or what to do when you need to use AI for research. Ask your practice management team for a 2026 update.
If you’re a paralegal and your partners are themselves copy-pasting client emails into ChatGPT: This is the most uncomfortable conversation in this article. You might need to have it anyway. The framing: “I noticed we’ve been pasting privileged material into ChatGPT — post-Heppner, that could expose us in discovery. Can we talk about safe workflows?” You’ll be glad you raised it if there’s ever a malpractice claim that looks back at 2026.
If you’re studying to become a paralegal: AI literacy is now table stakes. Not “can you prompt ChatGPT” literacy — “can you explain to a partner why a specific AI tool is or isn’t safe for a specific task” literacy. That’s the paralegal who gets hired in 2026. Our Legal Professionals AI course covers the Heppner framework and the ABA Opinion 512 obligations specifically.
The bottom line: Will AI replace paralegals? No. Will AI get some paralegals fired? Yes — the ones who get into discoverable trouble with privilege-protected material because they didn’t know the rules had changed. Don’t be in that group. The five habits above are the ones to break this week.
You’re not competing against AI. You’re competing against the 76% of paralegals whose firms haven’t adopted AI tools yet — and the 24% whose firms did, but whose paralegals never learned to use them safely. Be the third group. The one who knows exactly what they can paste, what they can’t, and why.
Sources:
- SDNY First-of-its-Kind Ruling: AI-Generated Documents Are Not Privileged — O’Melveny
- AI Privilege Waivers: SDNY Rules Against Privilege Protection for Consumer AI Outputs — Gibson Dunn
- ABA Formal Opinion 512 — American Bar Association (July 2024)
- Generative AI Tools: ABA Formal Opinion 512 — National Conference of Bar Examiners
- How AI Use Can Lead to an Unintentional Waiver of Privilege — Spellbook
- Will AI Replace Paralegals? — Clio Blog
- Will AI Replace Paralegals and Legal Assistants? — MyCase
- What AI Means for the Paralegal Profession — Blackstone Career Institute
- Loose AI Prompts Sink Ships: How Heppner Shook the Legal Community — NYSBA
- OpenAI Must Turn Over 20 Million ChatGPT Logs — Bloomberg Law
- The Perils of Privilege Waivers Through AI — Duane Morris LLP