A year ago, connecting an AI assistant to your company’s tools required custom code for every single integration. Slack? Custom code. Your database? Custom code. Google Drive? You get the idea.
Multiply that by every AI platform (Claude, ChatGPT, Gemini, Copilot) and every tool in your stack, and you’ve got an integration nightmare. M platforms times N tools equals M×N custom connections.
Then Anthropic released something called MCP — Model Context Protocol — in November 2024. And in about 13 months, it went from internal experiment to the standard that every major AI company now supports.
Here’s what happened and why you should care.
The Problem MCP Solves
Think about USB-C for a second.
Before USB-C, you needed a different cable for every device. Lightning for iPhone, micro-USB for Android, some weird proprietary thing for your headphones, another for your camera. Your drawer was full of cables, and half of them didn’t work with anything.
USB-C fixed that. One connector, every device.
MCP does the same thing for AI. Instead of building custom integrations between each AI model and each tool, you build one MCP server for each tool, and one MCP client for each AI platform. Now any AI can talk to any tool through the same protocol.
As DigitalOcean explains, it transforms the M×N problem into M+N. If you have 5 AI platforms and 20 tools, that’s 100 custom integrations without MCP — or just 25 with it.
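The arithmetic is worth seeing once. A quick sketch (the counts are the article's example numbers, nothing more):

```python
# Integration count with and without a shared protocol.
platforms = 5   # AI platforms (e.g. Claude, ChatGPT, Gemini, Copilot, ...)
tools = 20     # tools in your stack

without_mcp = platforms * tools  # one custom bridge per (platform, tool) pair
with_mcp = platforms + tools     # one client per platform + one server per tool

print(without_mcp)  # 100 custom integrations
print(with_mcp)     # 25 MCP components
```

And the gap widens fast: double both counts and the custom-integration number quadruples while the MCP number merely doubles.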
The analogy isn’t perfect, but it’s close enough. The Register literally called it “the USB of AI.”
From Anthropic Side Project to Industry Standard
David Soria Parra and Justin Spahr-Summers, both engineers at Anthropic, originally built MCP as an internal project. They wanted Anthropic employees to connect Claude to their own tools and workflows without waiting for the company to build every integration.
They released it as an open standard in November 2024 with Python and TypeScript SDKs. And then something surprising happened — competitors started adopting it.
March 2025: OpenAI adopted MCP across their Agents SDK, Responses API, and the ChatGPT desktop app. Sam Altman posted on X: “People love MCP and we are excited to add support across our products.”
April 2025: Google followed. Demis Hassabis, Google DeepMind’s CEO, called MCP “a good protocol” that’s “rapidly becoming an open standard for the AI agentic era.”
May 2025: Microsoft announced MCP support throughout Azure, Windows 11, GitHub Copilot, Copilot Studio, and Dynamics 365 at Build 2025. They also joined the MCP Steering Committee.
Throughout 2025: AWS released MCP servers providing access to 15,000+ AWS APIs through a unified interface.
By December 2025, Anthropic donated MCP to the Linux Foundation under the new Agentic AI Foundation. Platinum members: AWS, Anthropic, Block, Bloomberg, Cloudflare, Google, Microsoft, and OpenAI — all sitting at the same table.
Jim Zemlin, the Linux Foundation’s Executive Director, put it this way: “We are seeing AI enter a new phase, as conversational systems shift to autonomous agents that can work together.”
The Growth Numbers Are Wild
Pento’s year-in-review tracked MCP’s ecosystem growth over its first 13 months:
| Metric | Nov 2024 (Launch) | Dec 2025 |
|---|---|---|
| MCP servers | ~100 | 10,000+ |
| MCP clients | ~10 | 300+ |
| Monthly SDK downloads | ~100K | 97M+ |
That’s a 970x increase in SDK downloads in just over a year. The central GitHub repository passed 50,000 stars by May 2025.
For context, 1.13 million public repositories now import LLM SDKs — a 178% year-over-year increase. MCP rode that wave and became the connective tissue between all of them.
What It Actually Looks Like in Practice
So what does MCP do in the real world?
Block (the company behind Square) integrated MCP with Snowflake, Jira, Slack, Google Drive, and internal APIs. Their CTO told Sequoia’s podcast that engineers save 8-10 hours per week. Their open-source agent, goose (also built on MCP), is on track to save 25% of manual hours company-wide.
Google Cloud launched managed remote MCP servers for Google Maps, BigQuery, GKE, and GCE — with plans to add Looker, Pub/Sub, and Kafka.
Developer tools went all-in. Cursor, VS Code, Windsurf, Zed, Neovim, Replit, JetBrains — basically every editor that matters now supports MCP. VS Code shipped full spec support in June 2025.
Merge documented real-world use cases across four industries: healthcare (connecting EHR systems with diagnostic tools), financial services (aggregating credit scores and fraud alerts), manufacturing (real-time sensor monitoring), and enterprise support (CRM + ticketing in a single AI conversation).
How It Works (Without the Jargon)
MCP has three basic pieces:
MCP Servers expose tools, data, and prompts that AI models can use. If you want Claude to read your Slack messages, someone builds an MCP server for Slack. That server tells the AI what it can do (“search messages,” “send a message,” “list channels”) and handles the actual API calls.
MCP Clients are built into AI applications. Claude, ChatGPT, Cursor — they all have MCP clients now. When you ask Claude to “check my Slack for messages about the deployment,” the client finds the Slack MCP server and makes the request.
The protocol standardizes how they talk to each other. Authentication, capability discovery, request format, response format — all defined so server builders and client builders don’t have to coordinate.
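Under the hood, MCP messages are JSON-RPC 2.0. Here is a rough sketch of what a client's tool-invocation request looks like on the wire, built with nothing but the standard library. The method name `tools/call` comes from the spec; the tool name and arguments are hypothetical, not a real Slack server's schema:

```python
import json

# A client asking a server to run a tool sends a JSON-RPC "tools/call"
# request. The tool name and arguments below are made-up examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_messages",            # a tool the server advertised
        "arguments": {"query": "deployment"},  # schema defined by that tool
    },
}

wire = json.dumps(request)   # what actually travels over the transport
decoded = json.loads(wire)
print(decoded["method"])     # -> tools/call
```

The point of the shared envelope is exactly the coordination-free property described above: a server builder only needs to handle well-formed JSON-RPC, not the quirks of each client.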
The November 2025 spec release added async Tasks (call now, fetch results later), improved OAuth 2.1 support for machine-to-machine flows, and an extension system. The spec keeps evolving, but the core contract stays stable.
The Security Problem Nobody’s Ignoring
MCP is powerful. And that power creates risk.
Red Hat’s security analysis flagged what they called the “lethal trifecta” — access to sensitive data, exposure to untrusted input, and ability to take external action. MCP servers store authentication tokens for multiple services. If breached, that’s keys to the kingdom.
CData found that 43% of scanned MCP servers contained command injection flaws. 33% allowed unrestricted URL fetches. 22% were vulnerable to file path traversal. They identified 492 publicly exposed vulnerable servers.
And Pillar Security highlighted prompt injection, tool poisoning, and “tool shadowing” — where a malicious server creates a tool with the same name as a legitimate one.
These aren’t theoretical. With the ecosystem growing this fast, security is a race against adoption. If you’re deploying MCP servers in production, treat them like any public API — authentication, input validation, and monitoring aren’t optional.
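For the path traversal class specifically, the defensive pattern is simple enough to sketch. This is an illustrative snippet, not taken from any MCP SDK; the root directory and function name are hypothetical:

```python
from pathlib import Path

ALLOWED_ROOT = Path("/srv/mcp-data").resolve()  # hypothetical server data dir

def safe_resolve(user_path: str) -> Path:
    """Reject file paths that escape the allowed root (path traversal)."""
    candidate = (ALLOWED_ROOT / user_path).resolve()
    if not candidate.is_relative_to(ALLOWED_ROOT):
        raise ValueError(f"path escapes allowed root: {user_path}")
    return candidate

safe_resolve("reports/q3.txt")     # fine: stays inside the root
# safe_resolve("../../etc/passwd") # raises ValueError
```

Resolving the combined path *before* checking containment is the important part; comparing raw strings lets `..` segments and symlinks slip through.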
Why Competitors Adopted a Rival’s Protocol
This is the part that surprised me. Why would OpenAI and Google adopt something Anthropic created?
Shelly Palmer’s analysis frames it well: multi-company investment in an open standard reduces vendor lock-in concerns. If everyone supports MCP, customers don’t have to choose sides. That’s good for everyone — including OpenAI and Google, who want enterprise customers to feel safe adopting AI without being locked into one vendor.
Anthropic’s play was smart too. By releasing MCP as open source and then donating it to the Linux Foundation, they established themselves as the company that defines how AI infrastructure works — without owning or controlling it. That’s reputation and influence you can’t buy.
The next MCP Dev Summit is scheduled for April 2-3, 2026 in New York City. A year ago, this protocol didn’t exist. Now it has an annual developer conference.
What This Means for You
If you’re a developer, MCP is worth learning. The ecosystem is big enough that knowing how to build MCP servers is a genuinely marketable skill. Start with the Python or TypeScript SDK and build something that connects your AI assistant to a tool you actually use.
If you’re a business user, ask your IT team whether your AI tools support MCP. If they do, connecting Claude or ChatGPT to your internal systems just got dramatically simpler. No more waiting months for custom integrations.
If you’re just curious, the takeaway is this: AI is moving from “a chatbot you type questions into” toward “an agent that can actually do things.” MCP is the plumbing that makes the second part possible.
A year ago, it was an experiment. Now it’s infrastructure.
That’s usually how the most important technologies work — boring from the outside, transformative underneath.