The Copyright Office Thinks ChatGPT Wrote This. It Didn’t.
It just took 40 prompts, three rewrites, and one existential crisis about a comma.
If you’ve used GPT-4 to outline a screenplay, Claude to build a product roadmap, or Midjourney to mock up a pitch deck, congratulations: you may have just invalidated your own copyright.
That’s effectively the current position of the U.S. Copyright Office. Across three major releases (the March 2023 AI Registration Guidance, the January 2025 Copyrightability Report, and the May 2025 Training Data Report), it’s built a policy regime where authorship is up for debate the moment AI enters the process. If a generative model “contributes too much,” your legal claim to ownership slides from yes, to maybe, to flat-out no.
It’s like telling Taylor Swift she doesn’t own Cruel Summer because Jack Antonoff laid down the synths. The framework pretends creative control doesn’t count unless you did every part by hand, as if you were still typesetting novels by lamplight.
I’m an intellectual property lawyer. I represent artists, developers, companies, and creators using AI to do serious, original work. And I’m watching the Copyright Office build a framework that’s not just outdated—it’s legally incoherent and totally divorced from how people actually create anything today.
The Copyright Office Has Invented Vibes Law
The Copyright Office begins with a rule no one disputes: only humans can be authors. Fine. No one’s arguing GPT-4 should win a Grammy.
But its application of that rule collapses on contact with reality. If the model “contributes too much”—whatever that means—the human supposedly loses authorship entirely. That treats generative AI like a co-author instead of what it actually is: a tool.
Anyone who’s actually used these systems knows the process isn’t “click and post.” It’s recursive. Messy. Directed. You prompt, tweak, discard, rewrite. You regenerate 20 times to get one usable paragraph. You adjust tone, enforce style, restructure flow. That is authorship. Just not 1870 authorship.
Instead of giving us a usable legal test, the Copyright Office tosses around phrases like “meaningful human control,” “sufficient input,” and “selection and arrangement.” They sound thoughtful—until you try to apply them.
How many prompts is “meaningful”?
What if you iterate 40 times but only keep 20 words?
Does picking the best version count? Or do you have to manually retype it, like a copyright baptism?
No answers. Just vibes. And in law, vibes create chaos.
The Real Risks Are Already Here
Without a clear rule, here’s where we’re headed:
Infringers argue that AI involvement invalidates your copyright.
Opposing counsel demands prompt logs to question how “creative” your input really was.
Plaintiffs get hit with fraud claims for failing to disclose what model they used.
Courts issue contradictory rulings while everyone guesses what’s protected.
It’s giving: “You didn’t re-record the masters, so they’re not yours.”
The Copyright Office Has Never Prompted Anything In Its Life
Even a simple prompt—“write me a breakup song”—doesn’t come from nowhere. With memory on, GPT-4 might be pulling from prior chats, uploaded writing samples, detailed style instructions, and 50 hours of prior interaction. You’re not getting randomness. You’re getting your own voice reflected back.
Some users upload hundreds of pages of personal writing so the model can mirror their style. Others iterate a paragraph 30 or 40 times for pacing, rhythm, or tone. Some build structured templates and layer in constraints just to get the AI to stop overexplaining everything.
If someone else types the same prompt—with no uploads, no memory, no voice—they get generic trash. The model didn’t “create” the final work. The user trained it to sound like them. Sometimes explicitly. Sometimes implicitly. Over weeks.
If I’ve built a GPT that writes like me—using my material, my constraints, and my edits—who’s the author of that voice?
There’s a Fix. The Copyright Office Just Won’t Take It.
The standard isn’t complicated:
If a person directs what gets said and how it gets said—even using AI to get there—they are the author.
It’s simple. It’s workable. And it’s fully consistent with how courts have already handled computer-assisted and collaborative authorship for decades.
No one’s asking for AI to own anything. But the humans using it shouldn’t lose their rights just because they used a better tool.
The Stakes Are Bigger Than Creators
This isn’t just a creative fight. It’s a commercial one.
If companies can’t rely on copyright protection for AI-assisted work, they won’t license it. Investors won’t fund it. Courts won’t enforce it. And creators—especially independent ones—will be forced back into inefficient, expensive, purely manual workflows just to stay inside the shrinking borders of “safe” copyright.
The Copyright Office was supposed to bring clarity. Instead, it built a minefield.
If they don’t fix this, we’ll spend the next decade litigating what counts as “real” authorship—while generative tools reshape every creative industry in real time.
And when that happens, don’t say we didn’t warn you. Some of us were already screaming about it back when GPT still thought Reputation was Taylor’s most personal album. Which, to be clear—it wasn’t.
ChatGPT helped me write this article. Obviously memory was on—there are too many Taylor Swift references, and Chat knows me All Too Well.
⸻
🤖 Subscribe to AnaGPT
3x a week [MWF], I break down the latest legal developments in AI and tech—minus the jargon. Whether you’re a founder, creator, or lawyer, my newsletter will help you stay two steps ahead of the competition.
➡️ Forward this post to someone working in/on AI. They’ll thank you later.
➡️ Follow Ana on Instagram @anajuneja
➡️ Add Ana on LinkedIn @anajuneja