AI Power Users Do 5 Things Differently — 1.4M Chats Prove It

UT Austin and KPMG analyzed 1.4M real workplace AI interactions. The top 5% share 5 specific habits — and they're all learnable in an afternoon.

Researchers at UT Austin and KPMG just analyzed 1.4 million real workplace AI interactions. Not surveys. Not opinions. Actual usage data from 2,597 employees over eight months.

The finding that matters: only about 5% of users consistently get dramatically better results from AI. Not because they use it more. Because they use it differently.

The study was published in Harvard Business Review in March 2026, and it might be the most useful piece of AI research for regular people that’s come out this year. Because it doesn’t just say “AI is powerful” — it shows exactly what the people getting the most out of it are doing that everyone else isn’t.

What the Study Actually Found

The researchers tracked every AI interaction these 2,597 employees had — prompts, responses, follow-ups, everything. Then they looked for patterns that separated the most effective users from everyone else.

The core finding: the gap between routine and sophisticated AI use isn’t about the prompts themselves. It’s about patterns of engagement — how people frame problems, guide the AI’s reasoning, and apply AI across their work.

In plain language: it’s not what you type. It’s how you think about what you’re typing.

The 5 Behaviors That Separate the 5%

Based on the study’s findings, here are the specific things power users do differently:

1. They Treat AI as a Thinking Partner, Not a Search Engine

This is the single biggest difference. Most people use AI the way they use Google — type a question, get an answer, move on.

Power users have conversations. They ask the AI to assume a role. They give it a perspective to work from. They say things like “You’re a senior marketing strategist. I’m launching a product for budget-conscious parents. What am I missing in this campaign plan?”

That framing doesn’t just change the answer — it changes the quality of reasoning the AI applies to the problem.
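If you ever move from a chat window to an API, role framing maps directly onto the message structure most chat-style LLM services use. This is a minimal sketch using the common OpenAI-style chat format; the function name and structure are illustrative, not something from the study.

```python
# Sketch: role framing as a chat message list, in the common
# OpenAI-style format. No network call is made here -- this only
# shows where the "persona" goes relative to the actual question.

def role_framed_messages(role: str, task: str) -> list[dict]:
    """Build a chat payload that sets a perspective before asking."""
    return [
        # The system message establishes the perspective the model
        # reasons from -- the "You're a senior marketing strategist" part.
        {"role": "system", "content": f"You are {role}."},
        # The user message carries the actual task.
        {"role": "user", "content": task},
    ]

messages = role_framed_messages(
    "a senior marketing strategist",
    "I'm launching a product for budget-conscious parents. "
    "What am I missing in this campaign plan?",
)
```

The same two-part structure works in a chat window, too: state the role in your first sentence, then ask the question.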

2. They’re Extremely Specific About What Success Looks Like

Regular users type: “Write me an email.”

Power users type: “Write a follow-up email to a client named Sarah who hasn’t responded in 5 days. Tone: warm but direct. Under 80 words. Include a suggestion to meet Thursday. Don’t use corporate buzzwords.”

The study found that top users are “very clear about what they’re asking for and what it will look like when they get it.” They define the output before the AI generates it — length, tone, format, audience, constraints.

This isn’t about longer prompts. It’s about clearer ones.

3. They Iterate Instead of Accepting the First Response

The biggest behavioral gap: power users push back. They don’t accept the first draft. They say “this is close, but the tone is too formal” or “good structure, but the second point is wrong — here’s why.”

The study calls this “intentional iteration” — a deliberate cycle of generating, evaluating, refining, and regenerating. Most people accept the first output or give up. The 5% treat every first response as a draft, not a final answer.

This is where the “thinking partner” framing becomes practical. You wouldn’t accept a colleague’s first draft without feedback. Don’t accept AI’s either.
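The generate-evaluate-refine cycle is simple enough to sketch as a loop. Here `ask_model` is a placeholder for whatever model call you use (a stub stands in for it so the code runs as-is); the loop structure, not the API, is the point.

```python
# Sketch: "intentional iteration" -- generate, evaluate, refine,
# regenerate. `ask_model` is a placeholder for any LLM call.

def iterate(ask_model, task: str, critiques: list[str]) -> str:
    """Treat each response as a draft; feed back one critique per round."""
    draft = ask_model(task)
    for critique in critiques:
        # Push back instead of accepting the first output.
        draft = ask_model(
            f"Here is your previous draft:\n{draft}\n\n"
            f"Revise it. Feedback: {critique}"
        )
    return draft

# A stub stands in for a real model so the loop is runnable here.
calls = []
def fake_model(prompt: str) -> str:
    calls.append(prompt)
    return f"draft v{len(calls)}"

final = iterate(
    fake_model,
    "Write a follow-up email to a client named Sarah.",
    ["The tone is too formal", "Cut it to under 80 words"],
)
```

Two critiques means three model calls: one first draft, two revisions. In a chat window, the equivalent is simply replying to the draft instead of starting a new conversation.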

4. They Ask the AI to Show Its Reasoning

Power users don’t just ask for answers. They ask for explanations. “Walk me through your reasoning.” “Why did you choose that approach over the alternatives?” “What assumptions are you making?”

This does two things: it catches errors before they become problems, and it teaches you how the AI thinks — which makes you better at directing it next time.

The study found that requiring the model to explain itself produced more accurate and useful outputs than simply asking for the output alone.

5. They Apply AI to Their Most Complex Work

Regular users save AI for simple tasks — drafting emails, summarizing articles, answering quick questions. The 5% do the opposite. They push AI on their hardest problems — strategic analysis, complex writing, multi-step projects, decision-making.

The study found a reinforcing cycle: “Iteration enables ambition, ambition drives strategic tool choice, and repeated success reinforces engagement.” In other words, the more you push AI on hard problems, the better you get at using it, which makes you push it on even harder problems.

Why Most People Stay at Level 1

The study’s most uncomfortable finding: the gap isn’t about access or intelligence. It’s about habits.

Most employees have the same AI tools as the top 5%. They have the same level of general education. They even have similar job roles. The difference is behavioral — how they approach the interaction.

The researchers put it this way: most users treat AI as a shortcut. Power users treat it as a collaborator. The shortcut approach is faster for simple tasks but hits a ceiling fast. The collaborator approach takes slightly more effort upfront but produces dramatically better results on anything complex.

What This Means for You

Here’s the practical translation:

Stop accepting the first response. Every time you use AI, follow up at least once. “Make this more specific.” “The tone is wrong — try again with [this tone].” “Good start, but address [this concern] too.”

Tell AI who to be. Before your question, set the role: “You’re a financial analyst with 10 years of experience” or “You’re a teacher explaining this to a 12-year-old.” Role framing changes everything about the quality of the response.

Be specific about what you want back. Don’t say “summarize this.” Say “give me 3 bullet points, each one sentence, focused on action items. Skip the background.”

Ask it to explain itself. “Why did you recommend that?” or “What’s the main risk with this approach?” This catches errors and teaches you how to work with AI better.

Use AI for your hard problems, not just the easy ones. The email draft is fine. But try giving AI your most complex project — with context, constraints, and a clear definition of success. The results might surprise you.
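The habits above compose naturally into a single prompt: role first, then the task, then explicit success criteria, then a request to show reasoning. Here is one hedged way to sketch that as a reusable template; the field names are illustrative, not from the study.

```python
# Sketch: one prompt template combining role framing, explicit
# success criteria, and a request for reasoning. Field names are
# illustrative choices, not part of the study.

def build_prompt(role: str, task: str, constraints: list[str],
                 ask_reasoning: bool = True) -> str:
    """Assemble a role + task + constraints prompt as plain text."""
    lines = [f"You are {role}.", task, "Requirements:"]
    lines += [f"- {c}" for c in constraints]
    if ask_reasoning:
        # Asking for reasoning catches errors before they ship.
        lines.append("Walk me through your reasoning before the final answer.")
    return "\n".join(lines)

prompt = build_prompt(
    "a financial analyst with 10 years of experience",
    "Review this budget forecast for hidden risks.",
    [
        "3 bullet points, one sentence each",
        "Focus on action items",
        "Skip the background",
    ],
)
```

Pasting the resulting text into any chat interface gives you behaviors 1, 2, and 4 in one message; behaviors 3 and 5 are about what you do with the response afterward.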

The Study in Numbers

Total interactions analyzed: 1.4 million
Users studied: 2,597
Study duration: 8 months
“Power users” (top tier): ~5% of total users
Published in: Harvard Business Review, March 2026
Conducted by: UT Austin McCombs School of Business + KPMG

The Bigger Picture

This study validates something that anyone who’s used AI seriously already suspected: the tool isn’t the bottleneck. You are.

ChatGPT, Claude, Gemini — they’re all remarkably capable. The difference in output quality between a casual user and a power user using the exact same tool is enormous. And that difference comes down to five learnable behaviors, not talent, not technical knowledge, not access.

The 5% aren’t smarter. They just approach the conversation differently.

And that’s a skill you can learn in an afternoon.


