Your Private ChatGPT Just Leaked Your Trade Secrets — Now What?
Prompt injection is the new insider threat, and the law is about to get messy.
Imagine this.
You’re a mid-sized startup doing cool stuff in biotech or fintech or whatever. You just launched your shiny new “internal AI assistant” — basically a ChatGPT clone trained on all your private files: engineering specs, HR policies, product roadmaps, the whole kitchen sink.
It’s working. People love it. Your ops team is more efficient. Your engineers are asking it to write code. You’re finally feeling like one of those “AI-native” companies VCs won’t shut up about.
And then:
Your CTO gets a DM. From a security researcher. On Twitter (X? still not calling it that).
“Hey, I was able to extract some unreleased code from your internal chatbot. Prompt injection worked. Just a heads up.”
Panic. Slack chaos. Lawyers summoned. (Hi.)
⸻
Wait, what just happened?
Prompt injection. Basically: someone tricks your chatbot into ignoring its instructions and spilling secrets.
It’s like social engineering, but instead of hacking a human, you’re hacking the AI’s brain. And yeah — it works. There are public examples. There are GitHub repos. There are Discord channels dedicated to this.
If your AI was trained on sensitive data, that data can now leak with the right phrasing. One spicy prompt at a time.
And here’s the kicker: most “internal” AIs aren’t really internal. They’re usually:
Running on OpenAI or AWS,
Wrapped in some no-code UI,
Shared in a Slack channel with 200 people (and 3 interns), and
Built with vibes, not actual access controls.
⸻
Is this legally… theft?
Maybe.
To bring a trade secret case under U.S. law (Defend Trade Secrets Act or DTSA), you have to show:
You had a trade secret (✅),
You tried to protect it (🤷♀️), and
Someone took it improperly (🔥).
Prompt injection falls in a gray zone. Is it “hacking”? Or just “clever prompting”? Courts haven’t answered that yet. But here’s what will matter:
Did you lock down access?
Did you tell your vendor “don’t leak our IP”?
Did you monitor what people were asking the chatbot?
Did you actually treat your data like a secret?
If not? Even if you were technically “hacked,” you might still be blamed for leaving the front door wide open.
⸻
Who’s screwed?
Let’s break this down:
The prompt attacker? Probably anonymous and broke.
OpenAI or Anthropic? Protected by their TOS. (They’re not paying you.)
Your company? Might be liable to clients, investors, or regulators.
You, as legal or ops? Definitely losing sleep.
Oh, and if you leaked client data, patented code, or unannounced IP? You might’ve just nuked:
Trade secret protection,
Future patents, and
Your investors’ confidence in your security practices.
⸻
It gets worse (the IP twist)
Did your chatbot train on third-party content? Stack Overflow posts? GitHub code? Some engineer’s Notion dump of proprietary vendor docs?
Congrats — you may have just built an IP time bomb.
Because now your model could be:
Spitting out copyrighted code,
Generating derivative works,
Or violating NDAs — without anyone noticing until it’s too late.
⸻
So what do you actually do?
If you’re deploying AI internally:
Audit what’s going into your models. Don’t feed it crown-jewel data without guardrails.
Control who can prompt it, and what kinds of prompts are allowed.
Map your infrastructure. “Internal” isn’t a vibe — it’s a security setting.
Read your vendor contracts. If they disclaim all liability, you’re the one holding the bag.
Run fire drills. If this leaked tomorrow, who’s in charge of cleanup?
If you’re a lawyer advising companies on this:
Update your NDAs and IP clauses. Boilerplate won’t cut it here.
Push clients to classify what’s feeding their LLMs.
Treat prompt injection like an actual threat. Not a hypothetical.
⸻
Final thought
Your AI isn’t just a helpful assistant. It’s a high-speed gossip machine with no sense of context and a perfect memory.
It can also be a hostile witness, a negligent employee, or an unintentional leaker.
And no, saying “but it was internal!” won’t save you in court.
⸻
📬 If you’re a founder, GC, or IP counsel building with AI and want to talk about protecting your crown jewels from leaking through a chatbot, I’m here. Reach out.
⸻
🤖 Subscribe to AnaGPT
3x a week [MWF], I break down the latest legal in AI, tech, and law—minus the jargon. Whether you’re a founder, creator, or lawyer, my newsletter will help you stay two steps ahead of the competition.
➡️ Forward this post to someone working in/on AI. They’ll thank you later.
➡️ Follow Ana on Instagram @anajuneja
➡️ Add Ana on LinkedIn @anajuneja