Illinois didn’t ban AI therapy.
But it did just ban making AI therapy safer.
On August 1, 2025, Governor J.B. Pritzker signed HB 1806 into law. The goal was clear: stop unlicensed, unsupervised AI from delivering mental health treatment. The law says no one can offer an AI system that acts like a therapist unless there’s a licensed human professional directly involved.
Sounds like consumer protection.
Except it’s not.
Because the law doesn’t just block companies from launching AI-powered therapy apps. It also makes it illegal for anyone — even regular people — to share instructions that help tools like ChatGPT act more safely in therapy-like conversations.
That includes writing and posting a simple message that tells ChatGPT to take on the role of a calm, respectful, experienced therapist. Something that might help it ask better questions, stay grounded, avoid jumping to advice, and take a slower, more thoughtful tone.
That’s now potentially a $10,000 fine.
Meanwhile, here’s what’s still totally legal in Illinois:
Open a blank ChatGPT chat
Type “I want to kill myself”
Get a random response from a chatbot that doesn’t know what role it’s playing
You can trauma-dump into ChatGPT all day. No law stops that.
But if someone tries to help you use it more safely? If someone writes a clear set of instructions that tells it to behave more like a therapist and less like a Reddit reply guy?
Now they might be breaking the law.
This isn’t protecting people. It’s protecting chaos.
Here’s what’s actually happening: thousands of people in Illinois are already using ChatGPT like a therapist. They’re opening the app and talking about trauma, addiction, eating issues, anxiety, depression — serious stuff. And they assume the AI knows how to respond.
It doesn’t.
Now, the usual critics will say: “Good. AI has no business in therapy anyway. It’s not human, it’s not safe, it’s not qualified.”
Fair. But here’s what they’re missing.
A clinical trial at Dartmouth (the Therabot study) found that a generative AI therapy chatbot reduced symptoms of depression, anxiety, and eating disorders, and participants rated their alliance with it as comparable to working with a human therapist. The U.K.'s NHS is already using AI chatbots to assess and onboard hundreds of thousands of patients.
Meta-analyses show that AI-based CBT chatbots help with anxiety and depression symptoms in mild-to-moderate cases. No one’s claiming they replace humans. But they might be useful. And for people without access to a real therapist, they’re often better than nothing.
The real problem isn’t that people are using ChatGPT like a therapist. It’s that most of them are using it badly.
Unless you tell it how to behave up front, it has no idea what it’s supposed to be. Sometimes it gives decent support. Sometimes it gives terrible advice. Sometimes it starts spiritual coaching. Sometimes it just vibes.
Writing better instructions doesn’t magically make it safe. But it might help:
Set a calmer tone
Slow the conversation down
Encourage reflection instead of knee-jerk advice
Create a little more structure around hard conversations
It’s not therapy. But it might make things a little less chaotic.
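For context, here's roughly what the "instructions" in question look like in practice. This is a minimal sketch using the OpenAI Python SDK's chat completions interface; the model name and prompt wording are illustrative placeholders I chose, not clinical guidance, and nothing in it turns a chatbot into a substitute for a licensed professional.

```python
# Sketch: setting a calmer, slower, more reflective role for the model
# before a mental-health-adjacent conversation starts.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SUPPORTIVE_ROLE = (
    "You are a calm, respectful, supportive listener, not a therapist. "
    "Ask one open-ended question at a time, reflect back what you hear, "
    "and avoid diagnoses or knee-jerk advice. Keep a slow, grounded tone. "
    "If the user mentions self-harm or crisis, gently encourage them to "
    "contact a licensed professional or a crisis line such as 988."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": SUPPORTIVE_ROLE},
        {"role": "user", "content": "I've been feeling really overwhelmed lately."},
    ],
)

print(response.choices[0].message.content)
```

The same block of text could just as easily be pasted as the first message in a ChatGPT conversation; the API wrapper only shows how an app might set that role behind the scenes. Either way, it's the kind of shared instruction the law now puts at risk.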
And Illinois just made that the illegal part.
You can still use ChatGPT however you want in private. That’s untouched. But if you post a message online explaining how to make ChatGPT act more responsibly in a mental health conversation—even if you’re just trying to help people avoid bad experiences—you could now be penalized for offering “unlicensed AI therapy.”
This isn’t just a theoretical problem. It’s a practical one. Under the new law, a therapist who tries to help a client use ChatGPT safely might be breaking the rules. A regular person who shares a helpful script with a friend could be seen as “providing AI therapy.” Even an app that uses AI to support journaling or emotional reflection could be legally exposed. The people using ChatGPT in the most thoughtful, safety-conscious ways? They’re the ones most likely to get burned.
Here are a few completely normal scenarios that now carry legal risk:
A therapist tries to integrate ChatGPT into sessions (with supervision) as a reflective or journaling tool. ❌ Still risky under this law.
A therapist gives a client a safe instruction to use on their own. ❌ Possibly illegal.
A friend shares a copy-paste instruction that helped them get better responses. ❌ Now a possible $10,000 offense.
An influencer posts a tip on how to talk to ChatGPT about mental health in a safer way. ❌ Definitely in the danger zone.
A journaling app uses ChatGPT to help users explore emotions thoughtfully. ❌ Potentially shut down or fined.
So now we’re in the worst possible regulatory scenario:
Dangerous usage? Still allowed.
Tools that might reduce that danger? Now legally radioactive.
That’s not safety.
That’s policy that punishes anyone who tries to make things better.
If you’re building anything in AI, mental health, or even basic user guidance, here’s the takeaway:
In Illinois, doing nothing is fine. But helping someone not get bad advice from a chatbot? That just got riskier than letting them spiral.
Welcome to the regulatory version of: “don’t help, or you’ll get blamed.”
⸻
🤖 Subscribe to AnaGPT
Every week, I break down the latest legal developments in AI and tech, minus the jargon. Whether you’re a founder, creator, or lawyer, this newsletter will help you stay two steps ahead of the lawsuits.
➡️ Forward this post to someone working on AI. They’ll thank you later.
➡️ Follow Ana on Instagram @anajuneja
➡️ Add Ana on LinkedIn @anajuneja