OpenAI Just Showed Us How People Really Use ChatGPT
Forget the endless hype about what AI can do. OpenAI just published the first serious study on what people are actually doing with ChatGPT. Not the outputs. Not whether the answers are any good. Just the prompts—the raw questions and instructions typed in.
This is the first mirror we’ve had on human behavior with AI, and the reflection is not what you’d expect.
What the Study Looked At (and What It Didn’t)
• It only measured prompts. What people typed. Not how good the answers were.
• It only covered consumer plans. Free, Plus, Pro. No Enterprise or Education accounts, no logged-out users.
• It sorted prompts into categories. Writing, Coding, Practical Guidance. And whether users were mostly asking for advice or asking for finished work.
Think of it as a usage map, not a performance review. It tells us where people are steering the model, not whether it drove well.
Work Is Losing Ground to Personal
Non-work use rose to 73%. Work dropped to 27% in mid-2025.
At work, “Writing” is the top task—but mostly editing. Two-thirds of those writing prompts ask ChatGPT to polish, translate, or summarize text humans already wrote. Less “write my draft,” more “fix my draft.”
Personal use is exploding. People lean on ChatGPT as a daily tutor, translator, and explainer.
So the consumer app is becoming more personal than professional. The heavier-duty work is shifting into enterprise accounts, coding copilots, and integrations that this study didn’t track.
Why Coding Looks Small
Coding shows up as ~4% of prompts. That doesn’t mean developers abandoned AI. It means they left the consumer chat window.
This matters because “AI for coding” has been one of the loudest headlines. Social media and tech press have painted coding as the killer use case—copilots writing apps, AI replacing junior developers. So it’s striking that in this dataset, coding looks small.
Serious work happens in:
• Copilots inside code editors like VS Code.
• Automated agents that run sequences of tasks.
• APIs (digital doorways) that connect ChatGPT directly to business systems.
That traffic doesn’t show here. So the 4% figure is about one interface, not the whole reality.
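For a concrete picture, here is a minimal sketch of what that API traffic can look like, assuming the official `openai` Python SDK and a made-up ticket-summarization task (the model name and prompt are illustrative). The request goes straight from a business system to the model and never touches the consumer chat window this study measured.

```python
# Minimal sketch: a business system calling the model over the API,
# assuming the official `openai` Python SDK and a hypothetical
# support-ticket summarization task. Nothing here shows up in the
# consumer chat logs the study analyzed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; use whatever your plan provides
    messages=[
        {"role": "system", "content": "You summarize customer support tickets."},
        {"role": "user", "content": "Summarize: customer can't log in since Tuesday; password reset emails never arrive."},
    ],
)

print(response.choices[0].message.content)
```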
Who’s Driving Usage: Age Matters
Almost half of all prompts came from users under 26:
• Younger users blur personal and work, mix homework with hobbies, and experiment more.
• Older users log fewer prompts and keep work and personal separate.
The charts tilt young—and that shapes what we see.
Is Work Undercounted?
Yes, but only partly. The study excluded whole buckets of workplace use. OpenAI says 80% of the Fortune 500 have ChatGPT accounts tied to corporate emails, but only ~5% have true Enterprise contracts. Global business users are around 600,000—tiny compared to ~33 million U.S. businesses. So enterprise use exists, but penetration is still single-digit.
Logged-out prompts are missing too, and the classification system sometimes blurs “personal” vs. “work.”
Context helps: McKinsey (2025) found almost all large firms report some generative AI use, but only ~1% call themselves “mature.” BCG (2025) found only 36% of employees feel trained; even 5–10 hours of training multiplies adoption.
So yes, the study undercounts work—but hidden enterprise licenses don’t secretly outweigh the personal tilt. Many businesses still rely on personal accounts for work, which means plenty of real work still shows up in the consumer logs.
Asking vs. Doing
People use ChatGPT more for asking than doing. By mid-2025, consumer prompts split: 52% Asking, 35% Doing, 13% Expressing.
At work, “Doing” rises to 56%, but a third of those are Writing→Editing. Even at work, ChatGPT is often an editor, not a first drafter.
This shows most people see it as a tutor, explainer, or thought partner—not yet as a worker producing finished products.
Prompt Length: Why It Matters
Users’ prompts are short. Almost all under 250 characters. Most under 50.
That’s a problem. Short prompts are vague prompts. Compare “summarize this contract” with “summarize this contract in five bullet points, flag any clause that shifts liability to my client, and note any missing termination terms”: the second tells the model what a good answer looks like. Most people aren’t giving ChatGPT enough to work with—which is why most outputs feel shallow.
The longer and clearer the instructions, the more powerful the output. Yet most users are keeping prompts short—and leaving capability on the table.
Why This Matters
Most news write-ups will stick to: “Personal dominates. Writing is top. Coding is small.” All technically true—but surface-level.
The deeper reality:
• Consumer usage isn’t the whole story. Enterprise use exists but is still thin compared to the consumer wave.
• Work writing is mostly editing. Productivity won’t jump until more people let AI do the first draft.
• Coding isn’t disappearing—it moved.
• Prompts are short. Most users aren’t giving AI the clarity it needs or using it to its full potential.
• Training is weak. Companies turned it on, but didn’t teach people to use it.
Why You Should Care
This isn’t trivia. It’s a roadmap of adoption. If most people keep prompts short and shallow, you gain an edge by teaching your team to go deeper. If work is shifting into enterprise tools, early adopters of training and integration will outpace dabblers. And as personal use explodes, customers will expect instant AI help as the norm—ready or not.
Bottom line: OpenAI’s new study is a mirror on how people actually use ChatGPT. It shows consumer users lean personal and young. The bigger workplace story is happening offstage—in enterprise accounts, copilots, and integrations. Until people learn to give longer, clearer prompts, they’ll keep using a jet engine like it’s a bicycle bell.
⸻
🤖 Subscribe to AnaGPT
Every week, I break down the latest in AI, tech, and law—minus the jargon. Whether you’re a founder, creator, or lawyer, this newsletter will help you stay two steps ahead of the lawsuits.
➡️ Forward this post to someone working on AI. They’ll thank you later.
➡️ Follow Ana on Instagram @anajuneja
➡️ Add Ana on LinkedIn @anajuneja

