The AI Safety Lie Just Got Exposed
From Both Directions at Once
One company ignored a shooter. The other won’t arm a drone.
Guess which one’s in trouble.
This week exposed how completely broken the accountability framework around AI actually is. If you’re building on these platforms, using them, or just trusting that someone is making sure they’re safe — what happened should change that assumption.
OpenAI Had the Red Flags. It Sat On Them.
On February 10, an 18-year-old killed eight people in Tumbler Ridge, British Columbia — her mother, her half-brother, five students, and an educational assistant — before killing herself. One of the worst mass shootings in Canadian history.
OpenAI’s automated systems flagged the shooter’s ChatGPT account in June 2025 for violent content. The company banned it. According to the Wall Street Journal, about a dozen employees debated whether to contact police. Some pushed to report. Leadership said no.
No law required OpenAI to call anyone. So it didn’t.
The shooter made a second account and kept going. OpenAI didn’t catch it until police publicly identified her. The company has since told the Canadian government that under its new protocols, the account would have been referred to law enforcement. That’s not an apology. That’s an admission the old system failed.
Anthropic Drew Two Lines. The Pentagon Wants Them Gone.
Anthropic makes Claude, the only frontier AI model on the Pentagon's classified networks. It asked for two restrictions: no mass surveillance of Americans, and no fully autonomous weapons.
The Pentagon wants both removed. Earlier this week, Defense Secretary Pete Hegseth told CEO Dario Amodei that if Anthropic doesn't agree to unrestricted "all lawful purposes" usage by 5 p.m. on Friday, February 27, he would invoke the Defense Production Act — a Korean War-era statute never before used to coerce a software company — and label Anthropic a "supply chain risk."
Anthropic refused. Amodei, on the record: "These threats do not change our position: we cannot in good conscience accede to their request." He added: "Those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security."
Pentagon Undersecretary Emil Michael responded by calling Amodei a “liar” with a “God-complex.” On X. In public.
Elon Musk’s xAI already signed a deal with no restrictions. Google is reportedly close. Anthropic is the last holdout.
But something is happening on the other side. Over 300 Google and OpenAI employees signed an open letter titled “We Will Not Be Divided,” urging their companies to hold the same red lines. Google DeepMind’s Chief Scientist Jeff Dean publicly backed Anthropic, posting that mass surveillance “violates the Fourth Amendment” and is “prone to misuse for political or discriminatory purposes.” Retired Air Force General Jack Shanahan said “painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end.”
The Pentagon’s strategy of isolating Anthropic is backfiring inside the very companies it’s trying to recruit.
Same Week. Opposite Consequences.
OpenAI had behavioral red flags from a user who later committed mass murder, and it chose not to act. It gets to write a letter to Canada promising to do better.
Anthropic is maintaining two safety guardrails and refusing to remove them. It’s being threatened with blacklisting, forced compliance under a wartime statute, and a public smear campaign from a senior Pentagon official.
AI safety in 2026 is not enforced by law. It’s enforced by whichever CEO is willing to lose money. And the system is designed to make sure they stop.
What the Law Actually Says (And Doesn’t)
There is no U.S. or Canadian federal law requiring AI companies to report dangerous user behavior to law enforcement. OpenAI's "credible and imminent" threshold is an internal policy it wrote for itself. No statute compelled it to act. No statute penalizes it for doing nothing.
Compare that to industries where this was solved decades ago. Therapists have a legal duty to warn under Tarasoff v. Regents, 1976. Banks must file Suspicious Activity Reports — over 370,000 in 2024 alone. AI companies? Zero mandatory reporting obligations.
The counterargument is real: mandatory reporting could mean users get police at their door over a creative writing prompt. But the current system isn’t cautious balance — it’s nothing. There’s a wide margin between “report every edgy prompt” and “watch someone rehearse gun violence for days and shrug.”
On the Anthropic side, the "supply chain risk" designation is a label historically reserved for foreign adversaries like Huawei and Kaspersky. Anthropic's safety restrictions exist only in contract language. The only thing standing between Claude and unrestricted military deployment is one company's willingness to take the hit.
Where the Lawsuits Land
OpenAI didn’t just ignore the shooter’s account. It flagged it, reviewed it, evaluated the risk, and made a judgment call not to report. That sequence creates real legal exposure. Under tort law, when you voluntarily undertake a duty — even one the law doesn’t require — and perform it negligently, you can be liable for the harm that follows. That’s the undertaker doctrine.
OpenAI built a content moderation system. It flagged violent content for human review. Once its employees were staring at gun violence scenarios and debating whether to call the police, the company was making an affirmative safety judgment. If the families of those victims file a wrongful death suit — and I'd be surprised if they don't — that internal deliberation becomes the centerpiece of discovery. Every email, every Slack message, every policy memo.

Proving causation won't be simple — plaintiffs would need to show a report would have changed the outcome, and that's a high bar. But OpenAI's own diligence may become the evidence that it fell below the standard of care it set for itself. That's the kind of fact pattern that survives a motion to dismiss.
On the Anthropic side, the supply chain risk designation wouldn't just kill a Pentagon contract. Claude is embedded across the enterprise market — through Anthropic directly, through Amazon Bedrock, through Palantir. If the Pentagon pulls the trigger, every company that uses Claude and does business with the federal government would need to certify Anthropic's technology is nowhere in their stack. Not startups. Fortune 500 companies, defense contractors, financial institutions, every federal vendor in between.

If you're a GC at any of those companies, the question is simple: does your vendor agreement account for a government-imposed supply chain designation, and what's your exit plan if Claude becomes toxic to your federal business?
The Bipartisan Alarm
Republican Senator Thom Tillis — not running for reelection, nothing to gain — said the Pentagon is handling this “unprofessionally” and that Anthropic is “trying to do their best to help us from ourselves.” Democrat Elissa Slotkin said at a hearing: “The average person does not think we should allow weapons systems to get into war and kill people without a human being overseeing that.”
When senators from opposite parties agree without coordinating, that’s a signal.
What Needs to Happen
Congress needs a federal AI reporting standard — defined thresholds modeled on SARs in banking. Not “report everything.” A real standard with real consequences.
Procurement policy needs a safe harbor for companies maintaining safety guardrails. “Keep restrictions and lose your contract” isn’t a choice. It’s coercion.
And safety architecture for military AI cannot live in contract language one administration can rip up. Codify it or accept it doesn’t exist.
In 2026, AI safety isn’t law. It’s a bet one company is making with its own money — and the whole system is built to make sure that bet doesn’t pay off.
━━━
Sources include the Wall Street Journal, Bloomberg, CBC News, the Associated Press, Axios, NBC News, CNN, CBS News, NPR, CNBC, Forbes, Lawfare, and the New York Times. Some details — particularly regarding OpenAI’s internal deliberations — rely on anonymous sourcing. Statements from OpenAI, Anthropic, and Pentagon officials are from on-the-record communications, public letters, and social media posts. Shooting facts confirmed by the RCMP.
━━━
📬 If you’re a founder, executive, or GC trying to figure out how to use AI without accidentally creating a compliance nightmare, that’s the work I do.
Let's talk before you're the next one under scrutiny.
Book a call with me here: https://calendly.com/analaw/consult
⸻
🤖 Subscribe to AnaGPT
Every week, I break down the latest legal developments in AI and tech, minus the jargon. Whether you're a founder, creator, or lawyer, this newsletter will help you stay two steps ahead of the lawsuits.
➡️ Forward this post to someone working on AI. They’ll thank you later.
➡️ Follow Ana on Instagram @anajuneja
➡️ Add Ana on LinkedIn @anajuneja

