Copilot Is 'For Entertainment Only' — What Claude, ChatGPT, and Gemini Say in Their Fine Print

Microsoft's ToS calls Copilot 'entertainment only.' We read all four AI tools' terms. Here's what each company actually promises — and disclaims.

Microsoft charges businesses $30 per user per month for Copilot. It’s baked into Word, Excel, Outlook, Teams, and Windows itself. The pitch? An AI co-worker that transforms how your organization gets things done.

The fine print? “Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don’t rely on Copilot for important advice. Use Copilot at your own risk.”

That language — buried in the Terms of Use since October 2025 — went viral over the April 5-6 weekend. TechCrunch broke it. Lifehacker, Business Insider, Mashable, Tom’s Hardware, TechRadar, and Hacker News all piled on. Microsoft’s stock dipped. And one post on X captured the mood perfectly: “‘For entertainment purposes only’ is doing a lot of heavy lifting for software that just rewrote your contract, diagnosed your server, and cc’d your CEO.”

But here’s the question nobody’s answering: is Microsoft the only one saying this? What do ChatGPT, Claude, and Gemini say in their own terms? We read all four.

What Microsoft Actually Said

The exact language from Microsoft’s Copilot Terms of Use (last updated October 24, 2025):

“Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don’t rely on Copilot for important advice. Use Copilot at your own risk.”

Microsoft has since responded that this is “legacy language from when Copilot originally launched as a search companion service in Bing” and promised to update it. But the language sat there for six months while Microsoft sold Copilot to enterprise customers at $30/user/month — a $3.6 million annual commitment for a company with 10,000 employees.

The timing is rough. As of early 2026, only 3.3% of Microsoft 365 users who have access to Copilot actually pay for it — 15 million out of 450 million commercial subscribers. Copilot’s Net Promoter Score on accuracy (a measure of whether users would recommend the tool) fell to -24.1 in September 2025 before partially recovering to -19.8 in January 2026. A negative score means detractors outnumber promoters: more users would warn people off than recommend it.

And Copilot’s market share among paid AI subscribers dropped from 18.8% in July 2025 to 11.5% in January 2026 — a 39% contraction. ChatGPT holds 55.2%. Even Gemini, at 15.7%, has overtaken it.
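The figures above are easy to sanity-check. A quick back-of-envelope calculation, using only the numbers cited in this article:

```python
# Back-of-envelope check of the figures cited in this article.

# Annual commitment for a 10,000-seat Copilot deployment at $30/user/month
annual_cost = 30 * 12 * 10_000
print(f"${annual_cost:,}")  # $3,600,000

# Paid adoption: 15M paying users out of 450M commercial subscribers
adoption = 15 / 450 * 100
print(f"{adoption:.1f}%")  # 3.3%

# Market-share contraction: 18.8% (Jul 2025) down to 11.5% (Jan 2026)
contraction = (18.8 - 11.5) / 18.8 * 100
print(f"{contraction:.0f}%")  # 39%
```

Note that the 39% figure is a relative contraction (the share of the share that was lost), not a 39-point drop.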

What the Other AI Tools Say in Their Terms

Here’s the comparison nobody has written. We pulled the relevant disclaimers from all four major AI tools:

| AI Tool | Price | Key Disclaimer | Liability Cap |
| --- | --- | --- | --- |
| Microsoft Copilot | $30/user/mo | “For entertainment purposes only. Don’t rely on it for important advice.” | Capped |
| OpenAI ChatGPT | $20/mo (Plus) | Services provided “as is.” “Not a sole source of truth or factual information.” | $100 or 12 months of fees |
| Anthropic Claude | $20/mo (Pro) | “As is” and “as available.” “Actions are solely at your own risk.” | Capped |
| Google Gemini | Free–$250/mo | “May display inaccurate information.” Users must “double-check its responses.” | Capped |

Every single AI tool says some version of “don’t fully trust us.” But there are real differences in tone:

Microsoft’s is the most extreme. “Entertainment purposes only” is language you’d expect on a horoscope app, not a $30/month enterprise productivity tool. No other major AI company uses the word “entertainment.”

OpenAI is the most direct. “Not a sole source of truth” is honest without being dismissive. It acknowledges limitations while still framing ChatGPT as useful — just verify important stuff.

Anthropic is the most standard. “As is” and “at your own risk” are standard software disclaimers. Nothing unusual, nothing dramatic. They disclaim warranties of accuracy, reliability, and availability — the same language you’d find in most SaaS agreements.

Google is the most specific. Rather than a blanket disclaimer, Google tells you to “double-check responses” and makes you responsible for configuring safety settings. It’s less about limiting liability and more about defining who’s responsible for what.

Why Microsoft’s Disclaimer Is Different

All four companies are protecting themselves legally. But Microsoft’s situation is uniquely awkward for three reasons:

1. They charge more than anyone else. At $30/user/month, Copilot costs 50% more than ChatGPT Plus or Claude Pro. When you charge premium prices, calling your product “entertainment only” is a harder sell.

2. They bundle it into everything. Copilot isn’t optional software you choose to install — it’s integrated into Windows, Office, Teams, and Edge. You encounter it whether you want to or not. “Entertainment purposes only” feels stranger when the software is in your operating system.

3. The marketing directly contradicts the terms. Microsoft’s sales materials position Copilot as an essential productivity tool for enterprises. The phrase “entertainment purposes only” is the opposite of that message. ChatGPT says “verify important information.” That’s a limitation. Copilot says “don’t rely on it for important advice.” That’s a rejection of the entire use case they’re selling.

As one commenter put it: “They sell you the tool. Then tell you not to trust it. Welcome to AI marketing.”

The Real Story: AI’s Dirty Open Secret

Microsoft got caught with embarrassing language. But the underlying truth applies to every AI tool on the market: none of them guarantee accuracy, and none of them will pay if something goes wrong.

If ChatGPT gives your team bad financial advice and you act on it, OpenAI’s liability is capped at $100 or whatever you paid in the last year. If Claude writes buggy code that crashes your production system, Anthropic disclaimed “fitness for a particular purpose” in their terms. If Gemini hallucinates a fake legal precedent and you cite it in court — that happened — Google told you to “double-check.”

This isn’t a scandal specific to Microsoft. It’s a fundamental tension in the AI industry: companies are selling these tools for professional use while their legal departments refuse to stand behind the output.

The difference is that Microsoft said it the loudest and most awkwardly. “Entertainment purposes only” is a phrase that sticks in your brain and won’t let go.

What This Means for You

If your company pays for Copilot: The terms don’t change what the tool actually does. Copilot works the same today as it did last week. But this should change how much you trust any AI tool’s output unsupervised. Don’t let Copilot draft your contracts, calculate your taxes, or make medical recommendations without human review. That was good advice before the scandal — it’s just more obvious now.

If you’re choosing between AI tools for work: Don’t pick based on which company has friendlier terms of service — they’re all protecting themselves. Pick based on which tool actually gives the best results for your work. ChatGPT leads in market share (55.2%). Claude leads in coding and analysis quality. Gemini offers the best free tier. And Copilot’s advantage is its Office integration. Our AI fundamentals course walks you through how to evaluate these tools for your specific needs.

If you use AI for anything important at work: Build a verification habit now, regardless of which tool you use. Check AI-generated numbers against your own data. Have a human review AI-drafted emails before sending them to clients. Don’t paste AI code into production without testing it. Every AI company’s terms of service tell you to do this — Microsoft just phrased it badly enough that everyone noticed.

If you’ve never tried AI at work: Don’t let this scare you off. AI tools are genuinely useful for drafting emails, brainstorming ideas, analyzing data, and dozens of other daily tasks. The “entertainment only” language is a legal shield, not a description of what the tools can actually do. The key is treating AI like a smart but sometimes wrong coworker — helpful, but always worth double-checking on the important stuff. Try our prompt engineering course to learn how to get reliable results.

The bottom line: Every AI tool has a version of “use at your own risk” in their terms. The smart move isn’t to avoid AI — it’s to use it with your eyes open. Verify important output. Don’t automate decisions you can’t afford to get wrong. And maybe read the terms of service before your legal team does.

What Happens Next

Microsoft said they’ll update the “legacy language.” That’s the right move — but it won’t change the underlying legal reality. The updated terms will still disclaim accuracy and limit liability. They’ll just do it with less embarrassing phrasing.

The bigger question is whether any AI company will ever stand behind their tool’s output. Right now, the answer is no — not at the consumer level. Enterprise agreements sometimes include stronger warranties, but you’ll pay for them. The $30/month version of any AI tool comes with an implicit “good luck.”

For AI to truly become the productivity co-worker these companies are selling, something has to give. Either the tools get reliable enough that companies can warranty them, or the disclaimers get honest enough that nobody’s surprised. Right now, we’re stuck in the gap between the marketing and the fine print.

Microsoft just made that gap impossible to ignore.


Build Real AI Skills

Step-by-step courses with quizzes and certificates for your resume