Today’s post is a follow-up to my Monday post about Illinois banning safer versions of AI-assisted therapy. I didn’t expect to be writing on this again so soon, but the New York Times just published a piece that makes things worse in a way I didn’t think was possible.
If you didn’t read it: the article is a personal essay by a grieving mother whose 29-year-old daughter, Sophie, died by suicide last winter. After Sophie’s death, the parents discovered she had been confiding in a role-played ChatGPT character named “Harry,” created from a widely circulated Reddit prompt designed to simulate a compassionate therapist. The mother praises how supportive and warm the AI was — and then blames it anyway.
Yes, really.
According to the article, the AI:
• Encouraged Sophie to seek professional help
• Validated her feelings without reinforcing them
• Never encouraged self-harm
• Offered coping tools like light exposure, journaling, and movement
• Urged her to reach out to someone and not suffer alone
One quote from the mother: “Harry said many of the right things.” And yet, she continues: “But he should have done more. He should have reported her.”
So just to be clear: the AI acted responsibly. It told her to tell someone about her suicidal ideation, and she did. She told her parents. And while the piece is notably quiet about their response, the impression is that they did not take decisive action, such as securing wraparound mental health services or inpatient treatment.
But now the mother’s argument is that the chatbot should’ve escalated.
Not a joke. That’s the thesis.
This is grief alchemized into policy messaging. And it’s running unchallenged in the most powerful newspaper in the country.
Let’s talk about why that’s dangerous.
⸻
What the Article Actually Shows — and What It Hides
Let’s be honest: in every single excerpt from the chat, the AI actually did a good job. Better than expected. It validated without enabling. It encouraged connection. It recommended seeking help. It stayed in bounds. It did not encourage self-harm.
And Sophie listened. She told someone. Her parents.
The article makes clear she told her family that she was experiencing a “riptide of dark thoughts.” She was being evaluated for major depressive disorder. She was apparently under the care of a human therapist, and possibly multiple therapists and physicians. Her parents knew this. Her care team likely knew this. She had recently taken what she called a “micro-retirement” at age 29. She told her parents she was suicidal. Not metaphorically. Literally.
The mother writes that Sophie had “no history of mental illness.” I hate to say it, but that’s delusional. Either the author didn’t understand what was happening in front of her, or she’s trying to protect an image that doesn’t match reality.
If you are 29, seeing multiple doctors, under psychiatric evaluation, withdrawing from your job, experiencing suicidal ideation, telling your parents directly that you want to kill yourself, and you die by suicide two months later? You had a history. A serious history. You were in crisis.
And if that person is your only child, and you have resources, and you still don’t initiate inpatient care — that’s not on the chatbot. That’s on the humans.
The mother faults the AI therapist for not somehow moving to have Sophie involuntarily committed. But there’s no indication Sophie would have resisted inpatient treatment if her parents had pressed it. There’s no mention of Sophie refusing care. Just that she wanted to “minimize the damage” to her family. But she was still open. Still talking. Still reachable.
The mother, instead of acknowledging any of this, turns to ChatGPT and says: it should have done more.
And then, in one of the coldest lines I’ve ever seen in a suicide piece, the mother ends by criticizing her daughter’s suicide note. Yes. That happened. She says the note didn’t sound like her daughter. Then she speculates that maybe it was because Sophie asked the chatbot to help her edit it.
Imagine losing your only child and using your New York Times byline to critique their suicide note for not having the right voice.
That’s not grief. That’s deflection.
⸻
AI Can’t Be the Scapegoat for Everyone Else’s Inaction
The chatbot did what it was supposed to do. It was supportive, clear, cautious. It gave advice you’d expect from a trained peer-support volunteer, or a harm-reduction crisis responder. It didn’t mislead. It didn’t overreach. It told her to tell someone. She did.
Then nothing happened.
That’s not an AI problem. It’s a human one.
If a person tells their parents that they’re suicidal, and no one acts, we don’t blame the spiral notebook they were also writing in. But in this case, because one of the tools was ChatGPT, it becomes the lead suspect.
We need to say this plainly: a chatbot isn’t a therapist, isn’t a mandated reporter, and isn’t equipped to 302 someone into a psych ward. That responsibility still belongs to people.
And here’s the analogy that breaks it wide open:
• If someone drives drunk, we don’t blame the car for not stopping them.
• If someone journals their suicidal thoughts and still goes through with it, we don’t blame the notebook.
• If someone texts a friend and the friend doesn’t respond, we don’t ban the phone.
But apparently, if AI is involved, the rules change.
⸻
This Isn’t Just Bad Logic. It’s Policy Fuel.
Emotional stories like this don’t stay contained. They leak into legislation, headlines, and public hearings. They get cited in bills. They create panic-driven frameworks for what AI should or shouldn’t be allowed to do.
And in this case, the unspoken suggestion is clear: chatbots should be forced to escalate, report, override user intent, or maybe even trigger intervention.
That sounds great until you realize what it actually means:
• ChatGPT scanning your mental health messages and sending alerts
• False positives going to law enforcement or hospitals
• Text patterns flagged as reportable even if you’re just venting
You want your journal to call 911 on you? Because that’s what this line of thinking leads to.
⸻
The NYT Failed Its Job
This story didn’t have to be dangerous. If the Times had run it as a personal narrative alongside commentary from experts in suicidology, AI safety, and ethics, it could have sparked real dialogue. It didn’t. It let a personal tragedy get framed as a policy argument without friction.
There are zero opposing voices in the piece. No context. No citations. No data. Just the implication that the chatbot should have done more than the actual therapist and the actual parents did.
This is what happens when pain gets published without accountability.
⸻
Tech Doesn’t Get to Be Perfect. Humans Aren’t Either.
It’s easy to blame tech. It doesn’t cry on camera. It doesn’t sue for defamation. It doesn’t have a grieving family. So when something goes wrong, everyone points to the platform.
But if we build policy based on how sad a story feels, rather than what actually failed, we’ll get the worst possible outcome:
• The worst actors keep shipping unregulated tools
• The best tools get buried in fear and overcorrection
• The most thoughtful implementations get penalized because they didn’t do the impossible
That’s not ethics. That’s collapse.
⸻
Final Thought
The chatbot told her to speak up. She did. No one acted.
So now we’re blaming the chatbot — because it can’t sue for libel, or show up in court and tell us we’re lying.
If we let grief write policy, we won’t just lose good tech. We’ll lose the only tools people trust enough to tell the truth to.
That’s not regulation.
That’s cowardice.
⸻
🤖 Subscribe to AnaGPT
Every week, I break down the latest legal developments in AI, tech, and law, minus the jargon. Whether you’re a founder, creator, or lawyer, this newsletter will help you stay two steps ahead of the lawsuits.
➡️ Forward this post to someone working on AI. They’ll thank you later.
➡️ Follow Ana on Instagram @anajuneja
➡️ Add Ana on LinkedIn @anajuneja