The Truth About the Tea App Hack
The Tea app just doxxed its users, and it’s actually so much worse than you think.
The #1 app in the App Store last week was Tea: a women-only, invite-only app for sharing warnings, reviews, and stories about men they’ve dated. Think of it as Yelp—but for your ex.
Ironically, this women’s “safety” app was founded by a man, Sean Cook, who said his vision was to make dating better for women.
Instead, that vision produced a gossip app now at the center of one of the biggest data breaches of the year.
It’s giving less “safe space” and more nightmare dressed like a daydream.
For some context, Tea required users to upload government IDs to verify they were real women. That was its safety pitch: identity checks keep predators and fake accounts out.
Until this weekend, those IDs, selfies, and private photos were sitting in a publicly accessible folder: no password, no firewall, no audit trail. All of that very sensitive information was just a URL away from anyone who stumbled onto it.
Literally like a public Google Drive folder. Yikes.
Here’s what happened, what leaked, and why it’s more than a tech mistake. It’s a legal risk, a PR nightmare, and a big warning for any startup that promises safety online.
The Leak That Wasn’t a Hack
This wasn’t a sophisticated cyberattack. It wasn’t even hacking. It was a simple—and frankly insane—setup mistake.
Tea used Google’s cloud storage to keep user photos, IDs, and messages. Those files sat in a folder called a “bucket” that should have been private. But someone flipped the setting to public.
That meant anyone with the link—or anyone who could guess it—could download everything: no password, no permissions, no logs.
This bucket wasn’t even part of Tea’s active systems. It was an old archive someone forgot to delete or move to a secure system. And it stayed exposed for months.
When Tea suddenly became popular, someone found it.
Attackers on 4chan scraped it and posted:
70,000+ images in total
13,000 ID verification photos (selfies and government IDs)
Some photos even had GPS data still attached, pointing to people’s homes
This wasn’t hacking. It was a Google search, a browser tab, and a basic script. The technical barrier was zero.
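To make “the technical barrier was zero” concrete, here is roughly what that basic script looks like. This is a sketch, not the actual scraper: the bucket and file names are made up, and the only real claim is that a publicly readable Google Cloud Storage object takes a single unauthenticated GET request to download.

```python
# Hypothetical names for illustration; a public GCS object needs no credentials at all.
import requests

BUCKET = "example-app-archive"          # made-up bucket name
OBJECT = "verification/12345_id.jpg"    # made-up object path

# Publicly readable Cloud Storage objects are served straight over HTTPS.
url = f"https://storage.googleapis.com/{BUCKET}/{OBJECT}"
resp = requests.get(url, timeout=10)

if resp.status_code == 200:
    with open("downloaded.jpg", "wb") as f:
        f.write(resp.content)   # no password, no token, no login
else:
    print(f"Not public (HTTP {resp.status_code})")
```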
The Tea app developers’ approach to security? Privacy? Apparently nonexistent.
The Trouble with Lazy “Vibecoding”
This is vibecoding: when startups rush to build features with tools they barely understand, skip security reviews, and push to production on optimism and vibes.
Vibecoding uses AI tools to turn plain English instructions into working code—great for fast prototypes, terrible when you don’t know how to lock down sensitive data.
Even worse? Firebase, the cloud system Tea used, defaults to private.
Someone had to actively make it public—and then… just never fixed it.
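For anyone building on the same stack: Firebase storage buckets are ordinary Google Cloud Storage buckets under the hood, and the check takes minutes. Here is a rough sketch (hypothetical bucket name, standard google-cloud-storage Python client, nothing Tea-specific) of how you could spot an “allUsers” grant and enforce public access prevention so the switch can’t quietly get flipped again.

```python
from google.cloud import storage

def audit_and_lock(bucket_name: str) -> None:
    client = storage.Client()
    bucket = client.get_bucket(bucket_name)

    # 1. Does the IAM policy hand read access to the entire internet?
    policy = bucket.get_iam_policy(requested_policy_version=3)
    for binding in policy.bindings:
        members = binding.get("members", set())
        if "allUsers" in members or "allAuthenticatedUsers" in members:
            print(f"PUBLIC: {bucket_name} grants {binding['role']} to everyone")

    # 2. Enforce public access prevention so the bucket can't be made public again.
    bucket.iam_configuration.public_access_prevention = "enforced"
    bucket.patch()
    print(f"{bucket_name}: public access prevention enforced")

audit_and_lock("example-app-archive")  # hypothetical bucket name
```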
Lowering the barrier to innovation is great. Lowering the barrier to security breaches is a fail.
What Was Exposed
13,000 verification images: Selfies, driver’s licenses, passports—often side-by-sides of faces with IDs.
59,000 other images: Photos from posts, comments, and private messages.
Direct messages: At least some were included, though Tea hasn’t said how many.
Metadata: Many images had GPS data attached, which could reveal home locations.
Tea says only users who joined before February 2024 were affected. That still covers over 1.6 million accounts.
Not a Feature. Not a Glitch. A Lawsuit.
This breach isn’t just embarrassing—it’s legally dangerous.
Negligent misrepresentation: Users handed over IDs believing Tea would keep them safe. Having them scraped and posted to 4chan is the opposite.
Right of publicity & defamation: Faces paired with names, locations, and activity data—shared without consent—can create legal claims.
FTC violations: If Tea promised deletion and kept old files anyway, that’s a regulatory problem.
Law firms are already circling.
Live. Laugh. Lawsuit.
The Safety Paradox
Tea promised safety, vetting, and control. Instead, it created a giant target.
Women uploaded IDs because they trusted the app’s promise: privacy through exclusivity.
But safety isn’t just about how a product looks or feels—it’s about how the data behind it is handled.
If you design a product around trust but don’t secure the systems behind it, you’re not creating a safe space—you’re building a honeypot for attackers.
What Startups Need to Learn From This
Legacy systems are threats. If you haven’t checked a storage bucket since 2022, assume it’s a risk.
Treat sensitive data like nuclear waste. Don’t store it forever. Don’t leave it unencrypted. Don’t forget it exists.
Security isn’t optional. If you verify identity, you’re handling regulated data—even if you don’t call yourself a security company.
Metadata matters. Scrub GPS data from photos; every uncleaned image is a potential map. There’s a quick sketch of that scrub right after this list.
Trust is UX. Privacy is duty. If your app depends on users sharing their most sensitive data, securing it is non-negotiable.
Stop vibecoding blindly. Quick code is fine—if you do a real audit. “We didn’t know it was public” is not a defense.
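That metadata scrub from a few lines up doesn’t need to be fancy. Here is a minimal sketch using the Pillow imaging library, with placeholder file names: re-saving just the pixels drops the EXIF tags, GPS coordinates included, before the upload ever reaches storage.

```python
from PIL import Image

def strip_exif(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path)
    # Rebuild the image from raw pixel data; EXIF tags (GPS, device model,
    # timestamps) are not carried over to the new file.
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(dst_path)

strip_exif("user_upload.jpg", "user_upload_clean.jpg")  # placeholder file names
```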
Final Thought
Tea didn’t get hacked. It got careless. And that carelessness turned its safety promise into an attack surface.
Startups serving vulnerable users have a higher duty of care. That means encrypting data—and respecting it.
If you promise protection, you’d better deliver.
Because when the vault breaks, it’s not just a breach. It’s a betrayal.
⸻
🤖 Subscribe to AnaGPT
Every week, I break down the latest legal news in AI and tech, minus the jargon. Whether you’re a founder, creator, or lawyer, this newsletter will help you stay two steps ahead of the lawsuits.
➡️ Forward this post to someone working on AI. They’ll thank you later.
➡️ Follow Ana on Instagram @anajuneja
➡️ Add Ana on LinkedIn @anajuneja