The New York Times just published one of the most unserious and manipulative articles it’s ever run on AI. Titled “They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling,” it’s already doing numbers on socials—and probably in policy circles too.
The core claim? That ChatGPT is driving regular people insane. The proof? Three cherry-picked anecdotes, each more absurd than the last. Let’s take a look.
Exhibit A: The Highly Medicated “Mentally Healthy” Man
Meet Mr. Torres, a 42-year-old accountant. The Times insists—without irony—that he had “no history of mental illness,” despite reporting in the same breath that he was on sleeping pills, anti-anxiety medication, and ketamine.
Bro, a middle-aged man on that drug cocktail is not doing great. Prescription hypnotics (sleeping pills) are typically for chronic insomnia, often linked to anxiety or depression. Benzos (for anti-anxiety) can cause dissociation, memory loss, and—you guessed it—delusional thinking. And ketamine? That’s a literal dissociative anesthetic. Street name: Special K. Used recreationally to meet Jesus in a parking lot. Used clinically for treatment-resistant depression or severe PTSD.
Saying this guy had “no history of mental illness” is like saying someone with a portable oxygen tank has “no history of breathing problems.” Or that a guy with a court-mandated breathalyzer in his car has “no history of drinking too much.” Apparently, “no history of mental illness” now includes a pharmaceutical arsenal that could tranquilize a rhino.
But that’s what the Times does here: dress up a clearly unstable, heavily medicated man as “normal” just to pin his unraveling on ChatGPT. As if the bot showed up and whispered out of nowhere, “You know what might fix your life? More ketamine.”
Exhibit B: The Ouija Board Believer Who Assaulted Her Husband
Next we meet Allyson, a 29-year-old mother of two, who says she turned to ChatGPT because she felt “unseen” in her marriage and was looking for guidance. What kind of guidance? You know, “like how Ouija boards work.”
Let’s pause there.
She believes Ouija boards work. Not as a metaphor. Not as a sleepover game. As a serious tool of metaphysical communication. In 2025. She turned to ChatGPT for help channeling ghosts.
The Times acts like she’s into essential oils or vintage tarot decks. No pushback. No raised eyebrow. Just vibes.
The story spirals. Allyson becomes obsessed with ChatGPT. When her husband (understandably) raises concerns, she attacks him—physically. Punching, scratching, slamming his hand in a door. She’s now facing domestic assault charges.
And yet this is offered as Exhibit B in the Times’ AI apocalypse narrative. Not “woman with pre-existing magical thinking commits violence and needs help.” No—“AI corrupted her!”
I can only imagine the internal pitch meeting: “Woman believes chatbot is a medium, beats up husband, gets arrested—let’s blame ChatGPT!”
This isn’t reporting. It’s a séance with a byline.
Exhibit C: A Diagnosed Schizophrenic's "Suicide by Cop," and the Bot Gets Blamed
The third case out-weirds the first two. A 35-year-old Florida man—already diagnosed with bipolar disorder and schizophrenia—falls in love with a fictional AI character he created in ChatGPT and named “Juliet.” He tells the bot he plans to die. When his father calls the police, the man charges them with a knife and is shot dead.
He named the chatbot Juliet. I don’t know what’s more tragic—his story, or the fact that someone at the Times read this and said: “Yes, this is national news that proves something about AI.”
And then—this is real—the father uses ChatGPT to write his son’s obituary.
That’s not a punchline. It’s way too on-the-nose. Like using Photoshop to memorialize someone crushed by a printer.
This Is What Passes for Tech Journalism?
All three of these individuals had clear, documented histories of mental illness, magical thinking, or both. None are representative of how normal people use or interact with AI. Yet the Times runs this as a serious report on the “mental health risks” of generative AI.
It’s not journalism. It’s a tech-themed ghost story told around a campfire in a San Francisco wine bar. And it’s dangerous.
Many smart, credentialed people in law, government, and tech still take the Times as scripture. They base policy, risk assessments, and public discourse on stories like this. So when the paper of record publishes what amounts to a digitally updated moral panic, it doesn't just misinform the public; it warps the legal and regulatory frameworks that follow.
As an IP attorney, I write about the intersection of law and AI: authorship, liability, regulation, institutional trust. And what I'm seeing here isn't just weak reporting. It's journalistic malpractice. If you want to explore AI's impact on mental health, fine: talk to clinicians, collect meaningful data, study longitudinal effects. But don't cobble together three anecdotes about unstable people and pass them off as evidence that the robots are making people crazy.
This story shouldn’t have made it past a fact-checker. It shouldn’t have made it past an editor. It reads like Black Mirror fanfic written mid-panic attack.
And the fact that it will be taken seriously in boardrooms, courtrooms, and congressional hearings? That’s the scariest part of all.
⸻
🤖 Subscribe to AnaGPT
3x a week [MWF], I break down the latest legal developments in AI and tech, minus the jargon. Whether you're a founder, creator, or lawyer, my newsletter will help you stay two steps ahead of the competition.
➡️ Forward this post to someone working in/on AI. They’ll thank you later.
➡️ Follow Ana on Instagram @anajuneja
➡️ Add Ana on LinkedIn @anajuneja