Mar 2026 · Engineering · 9 min

MCP: The Protocol That Gave AI Agents a Nervous System

Every computing paradigm eventually produces a standard interface. USB-C unified hardware peripherals. REST unified web services. In the agentic AI era, that standard is the Model Context Protocol, and by early 2026, it has already won.

The problem MCP solved

Before MCP, every AI application that needed to talk to an external tool (a database, a code editor, an API) required a bespoke integration. If you wanted Claude to query Postgres and Slack and GitHub, you wrote three separate connectors, each with its own authentication flow, data formatting, and error handling. Multiply that by every model provider, and you get an M×N integration nightmare that does not scale.

```mermaid
graph LR
  subgraph "Before MCP: M×N integrations"
    direction LR
    C1[Claude] --- T1[Postgres]
    C1 --- T2[Slack]
    C1 --- T3[GitHub]
    G1[GPT] --- T4[Postgres]
    G1 --- T5[Slack]
    G1 --- T6[GitHub]
    Ge[Gemini] --- T7[Postgres]
    Ge --- T8[Slack]
    Ge --- T9[GitHub]
  end
  style C1 fill:#0a0a0a,color:#ededed,stroke:#555
  style G1 fill:#0a0a0a,color:#ededed,stroke:#555
  style Ge fill:#0a0a0a,color:#ededed,stroke:#555
```
```mermaid
graph LR
  subgraph "After MCP: M+N integrations"
    direction LR
    C2[Claude] --- MCP{MCP}
    G2[GPT] --- MCP
    Gm[Gemini] --- MCP
    MCP --- S1[Postgres Server]
    MCP --- S2[Slack Server]
    MCP --- S3[GitHub Server]
  end
  style MCP fill:#0a0a0a,color:#ededed,stroke:#555
```
3 models × 3 tools = 9 integrations → 3 + 3 = 6 components

Anthropic released MCP as an open-source specification in November 2024 to collapse that matrix into a single protocol. The core insight: separate the what (tools, data, prompts) from the who (model providers, clients). Any MCP-compliant client can talk to any MCP-compliant server. Write the connector once, use it everywhere.
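The scaling argument is plain arithmetic, and it is worth making concrete. A quick sketch (the numbers are illustrative, not a benchmark):

```python
def integrations_before(models: int, tools: int) -> int:
    # Pre-MCP: every model needs its own bespoke connector to every tool.
    return models * tools

def integrations_after(models: int, tools: int) -> int:
    # With MCP: each model ships one client, each tool one server.
    return models + tools

print(integrations_before(3, 3))   # 9 bespoke connectors
print(integrations_after(3, 3))    # 6 components
# The gap widens fast as the ecosystem grows:
print(integrations_before(10, 50), "vs", integrations_after(10, 50))  # 500 vs 60
```

At blog-post scale the savings look modest; at ecosystem scale (dozens of clients, thousands of servers) the M×N term is simply untenable.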

Architecture in sixty seconds

MCP follows a client-server model with three roles. Hosts are the user-facing applications (Claude Desktop, VS Code, an IDE plugin). Clients live inside hosts and maintain 1:1 connections with servers. Servers expose capabilities: tools agents can invoke, resources they can read, and prompts that provide templated workflows. Communication happens over JSON-RPC 2.0, and the protocol is bidirectional. Servers can request LLM completions back from the client through a feature called sampling.
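On the wire, everything is ordinary JSON-RPC 2.0. A minimal sketch of a tool invocation, assuming a server that exposes a hypothetical `run_query` tool (the `tools/call` method and the `name`/`arguments` params shape come from the MCP spec; the tool itself is invented here):

```python
import json

# JSON-RPC 2.0 request as an MCP client would send it.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "run_query",  # hypothetical tool name for illustration
        "arguments": {"sql": "SELECT count(*) FROM users"},
    },
}

# A matching success response: same id, a "result" member, no "error".
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "42"}]},
}

wire = json.dumps(request)
print(wire)
```

The transport underneath (stdio or HTTP) is invisible at this layer, which is exactly why one connector works everywhere.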

```mermaid
graph LR
  User((User)) --> Host
  subgraph Host["Host (Claude Desktop / VS Code)"]
    direction TB
    LLM[LLM Engine]
    ClientA["Client A"]
    ClientB["Client B"]
    ClientC["Client C"]
    LLM --- ClientA
    LLM --- ClientB
    LLM --- ClientC
  end
  ClientA <-->|"JSON-RPC 2.0<br/>stdio"| ServerA["MCP Server<br/>Filesystem"]
  ClientB <-->|"JSON-RPC 2.0<br/>HTTP"| ServerB["MCP Server<br/>Postgres"]
  ClientC <-->|"JSON-RPC 2.0<br/>HTTP"| ServerC["MCP Server<br/>GitHub"]
  ServerA -.-|"sampling"| ClientA
  style Host fill:none,stroke:#555
  style LLM fill:#0a0a0a,color:#ededed,stroke:#555
```
MCP host-client-server architecture; each client maintains a 1:1 connection with its server

The March 2025 specification revision introduced Streamable HTTP as the canonical transport layer, replacing the earlier HTTP+SSE approach. A single HTTP endpoint handles both POST requests and optional Server-Sent Events streaming, supporting everything from simple stateless tool calls to long-running bidirectional sessions. Each session gets a cryptographically secure ID, and servers can scale horizontally behind standard load balancers.
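Generating such a session ID takes one line with the standard library; a minimal sketch, where the 32 bytes of entropy is a reasonable default rather than a spec mandate:

```python
import secrets

def new_session_id() -> str:
    # A cryptographically secure, URL-safe identifier for a Streamable
    # HTTP session. token_urlsafe(32) draws 32 random bytes and
    # base64url-encodes them (43 characters, no padding).
    return secrets.token_urlsafe(32)

# The server mints the ID when the session starts; the client echoes it
# on every subsequent request to resume the same session.
print(new_session_id())
```

Because the ID is unguessable, session resumption does not itself become an attack surface, and any stateless load balancer can route on it.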

The capability surface

MCP servers expose three primitives. Tools are executable functions: run a SQL query, create a GitHub issue, send a Slack message. The 2025-06-18 spec added structured output schemas, so tools return typed, predictable data instead of free-form text, dramatically reducing context window waste. Resources are readable data: files, database records, screenshots. Resource linking borrows HATEOAS from REST: every response includes _links that let clients discover related resources. Prompts are dynamic, context-aware workflow templates that servers tailor to the current project state.
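Structured output means a tool ships with a schema for what it returns, not just what it accepts. A sketch of what such a declaration looks like, assuming the camelCase `inputSchema`/`outputSchema` field names from the 2025-06-18 spec (the `get_user` tool itself and its fields are invented for illustration):

```python
import json

# A tool declaration with both input and output schemas (JSON Schema).
tool = {
    "name": "get_user",  # hypothetical tool
    "description": "Look up a user by id",
    "inputSchema": {
        "type": "object",
        "properties": {"id": {"type": "integer"}},
        "required": ["id"],
    },
    "outputSchema": {
        "type": "object",
        "properties": {
            "id": {"type": "integer"},
            "email": {"type": "string"},
        },
        "required": ["id", "email"],
    },
}

# With the schema declared, the result carries typed data the client can
# validate, instead of a prose blob the model must re-parse.
result = {"structuredContent": {"id": 7, "email": "ada@example.com"}}
print(json.dumps(result))
```

The context-window savings come from exactly this: the model consumes two typed fields, not a paragraph of text describing them.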

| Primitive | Controlled by | Purpose | Example |
| --- | --- | --- | --- |
| Tools | Model | Executable actions | Run SQL query, send Slack message |
| Resources | Application | Readable data sources | File contents, DB schemas, screenshots |
| Prompts | User | Workflow templates | Code review checklist, deploy pipeline |
| Sampling | Server | Request LLM completions | Server asks client model to summarize |
| Elicitation | Server | Structured user input | Ask user to confirm before delete |

Then there is sampling, arguably the most powerful capability. It lets MCP servers request LLM completions from the client, meaning a server can reason about its own data without needing its own model access. The client retains full control over cost, model selection, and security. And with elicitation, servers can ask the user structured questions mid-workflow, turning one-shot tool calls into interactive conversations.
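A sampling request is just JSON-RPC flowing the other way, from server to client. A hedged sketch (the `sampling/createMessage` method name is from the MCP spec; the message content and token limit here are invented):

```python
# The server asks the host's model to reason over data the server holds.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {"type": "text", "text": "Summarize this changelog: ..."},
            }
        ],
        "maxTokens": 200,
    },
}

# The client, not the server, decides which model runs this, what it
# costs, and whether a human must approve it. The server never touches
# API keys or model configuration.
print(sampling_request["method"])
```

This inversion is what makes cheap, model-free servers viable: intelligence stays on the client side, behind the client's policy controls.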

Security: OAuth 2.1 and the trust model

Remote MCP servers authenticate using OAuth 2.1 with PKCE. The server acts as an OAuth Resource Server: it validates tokens but does not issue them. An external authorization server handles authentication, consent, and token issuance. This means enterprise identity providers like Okta, Auth0, and WorkOS plug in natively. The principle of least privilege is enforced at the protocol level: servers declare the minimum scopes they need, and clients can enforce human-in-the-loop approval for sensitive operations.
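PKCE itself is a small piece of cryptography: the client invents a secret verifier and sends only its hash up front, so an intercepted authorization code is useless without the secret. A minimal sketch of the S256 method from RFC 7636:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    # code_verifier: a high-entropy secret the client keeps locally.
    verifier = secrets.token_urlsafe(32)
    # code_challenge: base64url(SHA-256(verifier)) with padding stripped,
    # sent in the initial authorization request.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

verifier, challenge = make_pkce_pair()
print(challenge)
```

When the client later redeems the authorization code, it presents the verifier; the authorization server re-hashes it and checks the match before issuing a token.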

```mermaid
sequenceDiagram
    participant U as User
    participant C as MCP Client
    participant A as Auth Server<br/>(Okta / Auth0)
    participant S as MCP Server
    C->>A: Authorization request + PKCE
    A->>U: Consent prompt
    U->>A: Approve scopes
    A->>C: Access token (scoped)
    C->>S: Tool call + Bearer token
    S->>S: Validate token + check scopes
    S->>C: Tool result
    Note over C,S: Human-in-the-loop approval<br/>for sensitive operations
```
OAuth 2.1 authorization flow for remote MCP servers

Early deployments exposed real risks. Research by Equixly found command injection vulnerabilities in 43% of tested MCP implementations, and many servers launched without any authentication at all. The June 2025 spec tightened this with mandatory Resource Indicators that prevent malicious servers from obtaining overly broad access tokens. Security is no longer optional. It is structural.
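What a Resource Indicator check looks like in practice: the server verifies that the token's audience names this server and nobody else, so a token minted for one resource cannot be replayed against another. A minimal sketch, assuming JWT-style `aud` claims (RFC 8707 semantics; the URIs are hypothetical):

```python
def token_audience_ok(token_claims: dict, resource_uri: str) -> bool:
    # Reject any token not explicitly bound to this server's canonical
    # resource URI. "aud" may be a single string or a list of strings.
    aud = token_claims.get("aud", [])
    if isinstance(aud, str):
        aud = [aud]
    return resource_uri in aud

claims = {"sub": "user-1", "aud": "https://mcp.example.com"}
print(token_audience_ok(claims, "https://mcp.example.com"))   # True
print(token_audience_ok(claims, "https://evil.example.com"))  # False
```

A real server would of course verify the token's signature, issuer, and expiry as well; the point here is the audience binding that the June 2025 spec made mandatory.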

Adoption: the numbers that matter

97M+ Monthly SDK Downloads
5,800+ MCP Servers
300+ MCP Clients
150+ AAIF Member Orgs

MCP crossed 97 million monthly SDK downloads by early 2026. The ecosystem grew from roughly 100,000 server downloads in November 2024 to over 8 million by April 2025. OpenAI, Google DeepMind, Microsoft, and Hugging Face all adopted the protocol. VS Code shipped full MCP specification support. Atlassian built a remote MCP server for Jira and Confluence with Anthropic as launch partner, hosted on Cloudflare infrastructure.

Adoption timeline

Nov 2024 Anthropic open-sources MCP. Initial spec, Python + TypeScript SDKs.
Mar 2025 OpenAI adopts MCP across Agents SDK, Responses API, and ChatGPT. Streamable HTTP transport introduced.
Apr 2025 Google DeepMind confirms MCP support in Gemini. 8M+ server downloads reached.
Jun 2025 Major spec revision: structured tool outputs, OAuth 2.1, elicitation, resource linking.
Aug 2025 Microsoft announces Windows + Copilot + VS Code + Azure MCP integration at Build.
Sep 2025 Official MCP Registry launched. GitHub's Head of MCP joins steering committee.
Nov 2025 One-year anniversary spec: Tasks, Extensions Framework, PKCE mandatory, 97M SDK downloads.
Dec 2025 MCP donated to Agentic AI Foundation under the Linux Foundation. Vendor-neutral governance achieved.

In December 2025, Anthropic donated MCP to the newly formed Agentic AI Foundation under the Linux Foundation, ensuring vendor-neutral governance. This was not a symbolic gesture. With Anthropic, Google, Microsoft, OpenAI, and AWS as founding members, MCP became the first agentic protocol with genuine multi-vendor stewardship. The specification is now a community standard, not a company product.

Why MCP won

Three factors compounded. First, the protocol solved a real, immediate pain point (tool integration) that every developer building on LLMs had already hit. It was not speculative; it was practical from day one. Second, the architecture was simple enough to implement in a weekend but expressive enough for production use. Third, Anthropic played the adoption game correctly: open-source the spec early, get competitors to adopt it, then donate governance to a neutral foundation before anyone could fork.

| Dimension | Function Calling | MCP |
| --- | --- | --- |
| Architecture | Tools inside the app process | External servers via protocol |
| Vendor lock-in | Each provider has own schema | Provider-agnostic, universal |
| Discovery | Hardcoded at request time | Dynamic via `tools/list` |
| State | Stateless per call | Stateful sessions |
| Reusability | Coupled to one app | Build once, every client uses it |
| Scalability | M × N integrations | M + N integrations |

The result is that MCP is now table stakes. If your agent framework does not support MCP, it does not ship. The protocol did for AI tool integration what HTTP did for document transfer: it made the connection boring, so the applications on top could be interesting.

What comes next

The roadmap points toward server discovery via .well-known URLs, an established web standard that lets clients find and interrogate MCP servers without prior configuration. Think DNS for agent capabilities. Combine that with the registry and marketplace initiatives already in development, and MCP is evolving from a protocol into an ecosystem, one where agents can discover, evaluate, and connect to tools they have never seen before, at runtime, without human intervention.
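The mechanism is simple: well-known URIs (RFC 8615) live at a fixed path on the origin, so a client can derive the discovery endpoint from nothing but a server's base URL. A sketch of that derivation; note that the exact MCP path is still on the roadmap, so `/.well-known/mcp` below is a placeholder, not a finalized name:

```python
from urllib.parse import urlsplit, urlunsplit

def discovery_url(server_url: str, path: str = "/.well-known/mcp") -> str:
    # Well-known URIs are rooted at the origin, regardless of where the
    # server's API actually lives. The default path here is hypothetical.
    parts = urlsplit(server_url)
    return urlunsplit((parts.scheme, parts.netloc, path, "", ""))

print(discovery_url("https://tools.example.com/api/v1"))
# -> https://tools.example.com/.well-known/mcp
```

No registry lookup, no prior configuration: the URL alone is enough to start a capability negotiation.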

That is not tool integration. That is a nervous system.
