A Federal Judge Just Said Your AI Chats Aren’t Confidential
Here’s Why That’s Both Right and Completely Wrong
If you’ve ever typed something sensitive into ChatGPT—a legal question, a contract clause you didn’t understand, a memo from your lawyer, a summary of a dispute you’re dealing with—a federal judge just said that might not be protected anymore. And depending on what you pasted, you might have destroyed protections you didn’t even know you had.
First, Think About The World We Already Live In
Think about how you handle legal issues today. You get a letter from opposing counsel, or your GC sends you an update on pending litigation, or you’re dealing with an employee situation that might turn into a lawsuit. You type up your version of what happened so you can send it to your lawyer. You Google some of the legal terms to understand what you’re dealing with. You email your lawyer from your personal Gmail or your company Outlook. Maybe you hop on a Zoom call to talk it through.
At every step, your information is flowing through third-party platforms. Google is processing your search queries. Gmail is storing your privileged emails. Zoom is carrying your privileged calls. And here’s the thing most people don’t realize: AI is now embedded in all of these tools. Gemini is reading your Gmail and summarizing your emails. Copilot is analyzing your Outlook attachments. Zoom’s AI companion is transcribing your attorney calls. Slack’s AI is indexing your messages. You probably didn’t opt into most of this. Many of these features were turned on by default.
Every one of these platforms has terms of service that allow them to collect, process, and in some cases disclose your data. And no one—no court, no bar association, no one—has ever suggested that using Gmail or Zoom for privileged communications waives your legal protections. The legal profession runs on these tools. Privilege survives.
But a federal judge just ruled that if you take that same information and type it into ChatGPT or Claude—asking the AI to explain a legal concept, help you understand a contract, or break down what your lawyer’s memo means—that might be different. That might be a “disclosure to a third party” that destroys your privilege.
The Case Everyone Is Talking About
Bradley Heppner, a former CEO charged with $150 million in securities fraud, hired a major law firm (Quinn Emanuel) and then did what a lot of people would do: he opened Anthropic’s AI chatbot Claude and started researching his legal situation on his own. He explored legal theories, researched potential defenses, and generated about 31 documents. Then he sent them to his lawyers.
The FBI raided his house, found the AI documents on his devices, and the government wanted them. His lawyers said they were protected. On February 10, 2026, Judge Jed Rakoff in the Southern District of New York ruled: not protected. Hand them over. The written opinion came out February 17. It’s the first federal ruling to address this question directly.
The ruling involves two separate legal protections. I’m going to explain both, because understanding the difference matters for what you should actually do.
Protection #1: Attorney-Client Privilege
Attorney-client privilege is the rule that says your private communications with your lawyer are protected. The other side in a lawsuit can’t force you to turn them over. The government can’t read them. It’s one of the most fundamental protections in the legal system. But it only works if certain conditions are met: you need an actual attorney on the other end, the communication needs to be for the purpose of legal advice, and it needs to be kept confidential.
The court said Heppner’s AI interactions failed every one of those conditions. Claude isn’t a lawyer—it can’t form an attorney-client relationship with anyone. Its own terms say it doesn’t provide legal advice, so you can’t claim you were seeking legal advice from it. And the consumer version’s privacy policy says Anthropic can collect your inputs, train its models on them, and disclose them to government authorities. No confidentiality, no privilege.
On these specific facts, the court’s reasoning is solid. Heppner wasn’t talking to a lawyer. He was using software. And you can’t create privilege retroactively—making documents on your own and then sending them to your lawyer doesn’t make the documents privileged after the fact.
But Judge Rakoff left the door open. He said that if the platform did have confidentiality protections—like an enterprise agreement—and the person was acting on their lawyer’s instructions, the analysis could flip entirely. Essentially: use the right platform and have your lawyer direct the work, and AI interactions might be protected.
Here’s the Part That Should Worry You
Heppner didn’t just type his own thoughts into Claude. He took information his lawyers had given him and pasted it into the chatbot. The court found that by putting privileged attorney-client communications into a consumer AI tool, he may have destroyed the privilege on the original communications from his lawyers—not just the AI-generated documents.
Read that again.
Your lawyer sends you a strategy memo. You paste it into ChatGPT because you want to understand what it’s saying—maybe the legal language is confusing, maybe you want it simplified. Under this ruling, you may have just blown up the privilege on your lawyer’s actual advice to you. The original communication. Gone. Because you used a chatbot to read it.
Now, does that make sense? Think about what people actually do when they’re dealing with a legal issue. You write up what happened—a timeline, a statement, the details you think your lawyer needs to know. You Google unfamiliar terms. You might ask a trusted friend or your CFO what they think. You email your notes to your lawyer through Gmail. Your thought process is embedded in every step of that workflow, and every step runs through a third-party platform.
Using AI is the same activity. When you open Claude and type “what does ‘breach of fiduciary duty’ mean” or “help me organize what happened in chronological order” or “explain my lawyer’s memo to me in plain English,” you’re doing what people have always done: trying to understand your situation so you can work with your attorney effectively. You’re not “disclosing” privileged information to a third party. You’re using a tool to participate in your own legal matter.
Here’s a test. Your lawyer sends you a privileged memo. You upload it to Google Translate because you think better in another language. You just submitted the entire contents of a privileged communication to a third-party platform whose terms allow data collection and disclosure. Under the Heppner reasoning, that’s a waiver. But nobody believes that, because everyone understands that using a translation tool is just using a tool—not sharing secrets with Google.
Paste that same memo into ChatGPT and say “explain this in plain English”—same purpose, same intent, different tool—and suddenly it’s a waiver? The people most exposed by this reasoning are the ones who need the most help understanding their lawyers. That’s not a framework that protects the attorney-client relationship. It’s one that penalizes people for trying to participate in it.
This creates a doctrinal inconsistency that the opinion doesn’t address. Every time you send a privileged email through Gmail, you voluntarily put that communication on Google’s servers. Google stores the full text. Google’s AI now reads it, summarizes it, generates responses based on it. Google’s privacy policy allows use and disclosure. Under the Heppner reasoning, that’s a voluntary disclosure to a third party without confidentiality guarantees—the same act, the same terms, the same AI processing. But no court has ever called it waiver. The legal system treats email platforms as tools. The ruling treats chatbots as third-party recipients. That distinction has no doctrinal basis—it’s a difference in interface, not in law.
Protection #2: Work Product
The second protection is called work product. It’s separate from privilege and works differently. Work product protects the materials that get created when someone is preparing for litigation—the research, the notes, the strategy documents, the drafts. The strongest version, “opinion work product,” protects an attorney’s mental impressions—their strategic thinking about the case. The basic principle, from a Supreme Court decision called Hickman v. Taylor, is that the legal system breaks if one side can just take the other side’s preparation and use it against them.
Heppner lost on work product because, in his court’s jurisdiction, these materials generally need to be prepared by a lawyer or at a lawyer’s direction. His lawyers admitted they never told him to use Claude. He did it entirely on his own. So the court said: not protected.
The court framed work product’s purpose narrowly—as protecting “lawyers’ mental processes.” But the original Supreme Court decision said something broader: it’s about preventing the other side from borrowing “the wits of the other side.” That’s not limited to lawyers. It’s about whether your opponent gets to freeload off your preparation—whoever did the preparing.
And think about what Heppner was actually doing. He’s facing massive criminal charges. He sits down with AI and works through the law, tests defense theories, tries to figure out how the government’s case might work. He’s preparing for litigation. If he’d done the exact same thinking with a yellow legal pad—five drafts, testing ideas, crossing things out, starting over—those notes would almost certainly be protected. His strategic choices and mental impressions would be embedded in every page. Using AI to do that same thinking—prompting, revising, trying a different angle, discarding one approach and testing another—is the same cognitive process. The tool is different. The thinking is identical.
But the court didn’t look at it that way. It asked where the data went instead of what the person was doing. There’s already a federal court that took the opposite approach: in Tremblay v. OpenAI (N.D. Cal. 2024), a judge classified attorneys’ ChatGPT prompts as the highest level of protected work product, because the prompts reflected the lawyers’ strategic thinking. The key difference is that Tremblay involved lawyers and Heppner involved a client acting on his own. But the line between those two is thinner than it looks—especially when a client is doing AI research at their lawyer’s direction, which is already happening.
How courts resolve this going forward—do we ask “what was the person doing?” or “where did the data go?”—is going to shape AI and legal protection for the next decade.
What the Court Got Wrong About How AI Actually Works
What the court thinks “training on your data” means vs. what’s actually happening.
The court cited the fact that consumer Claude “trains on user data” as a key reason there’s no confidentiality. The implication is that Anthropic receives your inputs, reads them, and holds onto them—like a person you handed a document to. That’s not what’s happening. Not even close.
When an AI model is “trained” on your data, it doesn’t save your data anywhere. Training is a mathematical process that adjusts billions of numerical values inside the model. Your input nudges the model’s general patterns in some infinitesimal way, but the actual content of what you typed—your words, your questions, your documents—gets dissolved. Think of it like pouring a cup of water into the ocean. The ocean changes in some unmeasurable way. Nobody is fishing that specific cup back out.
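If you want intuition for why the text “dissolves,” here’s a toy sketch. This is purely illustrative—no real model trains like this, and every name here (`train_step`, the four-number “model”) is invented for the example. The point it demonstrates is structural: the input is encoded into numbers, the weights shift by a hair, and the text itself is thrown away.

```python
# Toy illustration of a single "training step" on a tiny four-weight model.
# The user's input nudges the weights; the input itself is never stored.
import random

random.seed(0)  # fixed seed so the example is reproducible
weights = [random.uniform(-1, 1) for _ in range(4)]  # the entire "model"

def train_step(user_input: str, lr: float = 0.001):
    # Encode the text as numbers (real models use tokenizers and embeddings)
    encoded = [ord(c) % 7 / 7 for c in user_input[:4]]
    encoded += [0.0] * (4 - len(encoded))
    # Nudge each weight slightly toward the input's pattern
    for i in range(4):
        weights[i] += lr * (encoded[i] - weights[i])
    # The function returns nothing; user_input is discarded, not saved

before = list(weights)
train_step("privileged memo text")
after = list(weights)

# The weights moved by at most lr * 2 — an imperceptible nudge — and
# nothing in the program retains the original text.
print(max(abs(a - b) for a, b in zip(before, after)))
```

Scale that up from four weights to hundreds of billions, and from one input to billions, and you get the cup-of-water-in-the-ocean picture: the model changed, but no one can fish your words back out of it.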
If you asked a future version of Claude “what did Bradley Heppner type about his legal defense,” it would have absolutely no idea. It can’t retrieve his inputs. It can’t reproduce his documents. AI researchers have found that models can occasionally memorize fragments of highly unusual or frequently repeated training data, but 31 documents from one user in a dataset of billions? The chance of meaningful reproduction is essentially zero.
This matters because the whole concept of privilege waiver assumes that when you share information with a third party, that third party has it—they can remember it, repeat it, be called to testify about it. After AI training, nobody has it. No person at Anthropic read it. No system can reproduce it. The “third party” can’t be brought into court and asked what you said.
Now, waiver doctrine doesn’t technically require the third party to be able to reproduce what you shared—it turns on whether your disclosure was voluntary and inconsistent with maintaining confidentiality. But that standard evolved for a world of human recipients, where disclosure to a third party meant a person now possessed your information. AI creates a category the doctrine never contemplated: a process that ingests information without any person or system meaningfully receiving it. The court applied the old framework without acknowledging that the underlying assumptions don’t map onto this technology.
There’s an important distinction here: training is separate from data storage. Anthropic does store your actual conversation logs on its servers for a period of time, and those stored conversations could be subpoenaed. That’s a real risk. But the court treated training itself as an independent reason to destroy confidentiality—and that’s treating a mathematical process that makes your data less accessible as if it were the same as handing someone a copy of your documents.
The opinion completely ignores temporary chats.
Both Claude and ChatGPT have temporary or incognito modes where the conversation isn’t saved to your account and, according to the platforms, isn’t retained the way normal conversations are. The opinion doesn’t address this at all. If Heppner had used a temporary chat, there likely would have been nothing on his devices for the FBI to find and nothing on Anthropic’s servers to hand over.
So what then? Could the government force him to describe from memory what he asked the AI? In a criminal case, the Fifth Amendment almost certainly prevents that. In a civil case, he could theoretically be asked about it in a deposition—but that testimony would sound like: “I was trying to figure out whether a certain defense might work, so I asked something along these lines, and the AI said something like this.” That’s not describing a conversation with another person. That’s someone describing their own thought process about their own case. Which is exactly what work product exists to protect.
Under the Heppner framework, a court might say: it doesn’t matter that the record is gone—the waiver happened when you typed it. But that would mean the information has vanished, no one has it, the AI can’t reproduce it—and you’ve still permanently lost the right to protect your own thinking because you once typed it into a text box. That result is difficult to square with the purpose of waiver doctrine, which exists to address the risk that a third party will use or disclose what you shared—not to penalize the act of using a particular tool.
Google Search is the same thing in a different box.
When you Google “elements of securities fraud,” your query goes to Google’s servers. Google stores it. Their terms allow use and disclosure. Under the Heppner reasoning, that’s a disclosure without confidentiality. But no one has ever argued that Googling legal questions waives anything. And in 2026, Google has AI built into search—when you type a legal question, an AI model processes your query and writes you a natural language answer. Functionally, that’s the same thing ChatGPT does. The only difference is that one looks like a search bar and the other looks like a chat window. A difference in interface design shouldn’t determine whether you keep your legal protections.
The Subscription Tier Trap
If you’re thinking “I pay for the premium version and I turned off training, so I’m fine”—you’re not. Turning off the training setting means your inputs won’t be used to improve future versions of the model. That’s all it does. The privacy policy—which is what the court actually looked at—still allows the platform to hand your data to the government. Toggling a setting is not the same as having a confidentiality agreement.
Here’s the distinction that actually matters: consumer AI is a personal productivity tool. Enterprise AI is an evidentiary environment. They look identical—same interface, same models, same text box. But the legal infrastructure underneath is completely different. Consumer plans run on terms of service that don’t guarantee confidentiality. Enterprise plans run on negotiated commercial agreements with contractual confidentiality protections, restrictions on data use, and limits on disclosure. One is a convenience. The other is a legal architecture. And right now, most companies are running sensitive work through the convenience version.
Here’s something almost nobody knows: Anthropic’s Team plan, at $30 per user per month, is actually governed by their Commercial Terms—the same legal category as Enterprise. It’s not a consumer product, even though it’s sitting right next to the consumer plans on the pricing page. Whether those terms would actually hold up in court to preserve privilege is untested. But it’s a fundamentally different legal position than a $200/month consumer subscription with zero confidentiality.
Companies that get this right now gain a real competitive advantage: they can move faster with AI without creating litigation exposure. Companies that don’t are building a discoverable record of every sensitive question anyone on their team has ever asked a chatbot.
But there’s a harder version of this problem. Enterprise plans can cost $50,000 or more per year. If your legal protections depend on which AI subscription you can afford, we’ve built a system where well-resourced companies get to use AI with legal protection and everyone else is exposed. That includes your employees dealing with personal legal issues on the same AI tools they use at work. That includes the small business owner who can’t afford enterprise pricing. That includes anyone who doesn’t have a legal team telling them which text box is safe and which one isn’t. That’s a gap that should make everyone uncomfortable.
What You Should Do Right Now
Before the specifics, a quick vocabulary check—because these terms get thrown around and most people use them interchangeably, but they mean different things.
Not privileged means the protection never existed in the first place—the communication didn’t meet the legal requirements.
Waived means the protection existed but you destroyed it by doing something inconsistent with keeping it confidential—like pasting your lawyer’s memo into a consumer chatbot.
Discoverable means the other side in litigation can demand it and you have to hand it over. The Heppner ruling implicates all three, and each creates a different kind of exposure.
Here’s a scenario to make this concrete. You’re in the middle of an acquisition. Your GC sends you a memo flagging potential antitrust exposure in the target’s pricing practices. You paste that memo into ChatGPT and ask it to summarize the key risks so you can brief the board. Under Heppner, you may have just waived privilege over your GC’s original analysis—and opposing counsel in any future litigation over that deal could demand both the AI conversation and the underlying memo. Your board-level strategy session just became someone else’s evidence.
If you’re an executive or founder: Audit how your company is using AI today. If anyone on your team is using consumer ChatGPT or Claude for anything that touches legal strategy, regulatory questions, HR disputes, or litigation preparation, those conversations are potentially discoverable by the other side. That means opposing counsel in a lawsuit could demand them. Get your team on enterprise agreements with real confidentiality protections, or create a clear policy about what can and can’t go into consumer AI tools.
If you’re a GC who thinks your current AI policy covers this: It probably doesn’t. Most corporate AI policies say something like “do not enter confidential information into AI tools.” That’s a start, but it doesn’t define what counts as confidential, it doesn’t distinguish consumer from enterprise platforms, it doesn’t address the waiver risk when employees paste your legal memos into a chatbot, and it doesn’t account for the AI features now embedded in the email and collaboration tools your entire organization already uses. A policy that says “don’t put sensitive stuff in ChatGPT” while Gemini is reading every email in your company’s Gmail isn’t a policy. It’s a false sense of security.
If you’re personally dealing with a legal matter: Do not paste your lawyer’s communications, case-specific details, or identifying information about your legal situation into consumer AI tools. If you want to research legal concepts, ask general questions in general terms—“what is a breach of fiduciary duty” is fine, “here’s what my company did, what’s my exposure” is not. The friendly chat interface feels private. It’s not.
If you work with a lawyer: Ask them whether they have a policy on AI and privilege. If they don’t, they should. And if they’re using consumer AI for work on your case—their own research, their own strategic analysis—ask what platform they’re on and what the terms say about confidentiality. A different federal court has already recognized that lawyers’ AI prompts are protected work product (Tremblay v. OpenAI), but that argument is much stronger on an enterprise platform than on consumer terms.
If you’re a lawyer reading this: Tell your clients, in writing, that consumer AI is not confidential. Put it in your engagement letter. Warn them that pasting your communications into a chatbot could destroy your privilege, not just theirs. If you’re directing clients to use AI, document that direction—it could be the difference between protected and discoverable.
━━━
🧠 Bottom line: This is a district court ruling. Not binding on other courts. But it’s from one of the most prominent federal courts in the country, from a well-respected judge, and it will be cited in litigation everywhere. Within twelve months, expect plaintiff firms to start adding AI-prompt production requests to standard discovery. This is coming.
The reasoning treats chatbots as a special category of disclosure—while ignoring that AI is already embedded in the email, search, video, and collaboration tools the entire profession uses every day without anyone suggesting privilege is waived. That doctrinal inconsistency will have to be resolved. Until it is, the instability itself is the risk. And even where privilege isn’t technically waived, these AI interactions may still be independently discoverable.
Consider this your written warning. AI is now part of your litigation surface area. Every prompt you type could end up on the other side’s exhibit list. Treat it accordingly.
━━━
Executive Summary: What This Means for You
• Consumer AI use touching legal issues = privilege waiver risk.
• AI-generated documents and logs are potentially discoverable—even if privilege isn’t waived.
• Your internal AI policy likely doesn’t address privilege waiver.
• Enterprise contracts with confidentiality protections change the legal analysis.
• Toggling “training off” does not create confidentiality.
• Opposing counsel will test this aggressively in discovery. Plan for it now.
━━━
📬 If you’re a founder, executive, or GC trying to figure out how to use AI without accidentally creating a discovery nightmare, that’s the work I do.
Let’s talk before your prompts become someone else’s evidence.
Book a Legal Risk Consult: https://calendly.com/analaw/consult
⸻
🤖 Subscribe to AnaGPT
Every week, I break down the latest legal developments in AI and tech—minus the jargon. Whether you’re a founder, creator, or lawyer, this newsletter will help you stay two steps ahead of the lawsuits.
➡️ Forward this post to someone working on AI. They’ll thank you later.
➡️ Follow Ana on Instagram @anajuneja
➡️ Add Ana on LinkedIn @anajuneja

