The MCP Discovery Problem: Why Every Installation Is a Gamble
The idea of an App Store for MCP integrations is overdue. But the real challenge is not building the marketplace - it is solving the discovery and compatibility problem that makes MCP adoption harder than it needs to be.
What Discovery Looks Like Today
Right now, finding MCP servers means searching GitHub, reading blog posts, or asking in Discord servers. There is no central registry. No standardized way to describe what a server does, what permissions it needs, or what clients it works with.
The de facto directories that exist - MCP Index, mcp.so, and Smithery - each have different coverage and no consistent metadata format. You find a promising server for database access, clone it, configure it, and discover it only works with Claude Desktop but not with your preferred client. That is 30-60 minutes you do not get back.
This is not a hypothetical inconvenience. With thousands of MCP servers now published across GitHub and package registries, the signal-to-noise ratio is poor. Many are abandoned after a few commits. Some require client-specific features that are not documented anywhere on the repo page.
The Transport Protocol Split
MCP supports three transport mechanisms, and not all clients support all three:
- stdio - runs a local process via stdin/stdout. Supported by every client. Works for tools that need direct system access.
- SSE (Server-Sent Events) - older HTTP-based remote transport. Deprecated in the MCP spec in favor of Streamable HTTP, but still widely deployed.
- Streamable HTTP - the current recommended remote transport. Not yet supported by all clients.
Claude Desktop, Cursor, Windsurf, and Cline each have their own config file formats and different levels of support for these transports. A server that works perfectly in Claude Desktop via stdio may require a bridge or wrapper to work in a different client via SSE. The MCP Bridge project exists specifically because of this fragmentation - it multiplexes multiple servers into a single interface so clients that only support one transport can reach servers that require another.
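To make the config fragmentation concrete, here is what registering the same stdio server looks like in Claude Desktop's config file. The server name and command are hypothetical; both Claude Desktop (`claude_desktop_config.json`) and Cursor (`.cursor/mcp.json`) use an `mcpServers` map of this shape, but the exact file locations and supported fields should be verified against each client's current documentation.

```json
{
  "mcpServers": {
    "example-db-server": {
      "command": "npx",
      "args": ["-y", "example-db-server"],
      "env": { "DB_URL": "postgres://localhost/dev" }
    }
  }
}
```

Cursor accepts a nearly identical block in a different file, while other clients diverge further - which is why per-client copy-paste templates belong in a registry rather than in each server's README.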
Feature-Level Incompatibility Is Worse
Transport is the obvious layer. Feature-level incompatibility is subtler and harder to debug.
The MCP spec defines four capability types: tools, resources, prompts, and sampling. Most servers implement tools. Fewer implement resources (expose files or data streams). Very few implement sampling (the server asking the client to run an LLM call). Most clients implement tools. Many do not implement resource subscriptions. Almost none implement sampling on the client side.
This means a server that exposes dynamic resources - say, a live file watcher that pushes updates - may connect successfully but silently fail to deliver updates in clients that do not support subscriptions. No error. No warning. Just missing functionality.
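The silent-failure case above is mechanically detectable if both sides declare what they support. The sketch below assumes illustrative capability shapes (loosely modeled on MCP's initialize handshake - the field names here are assumptions, not the spec) and flags features a server offers that the client cannot consume:

```python
# Sketch: detect silent capability gaps before wiring a server to a client.
# Capability shapes are illustrative assumptions, not the MCP spec itself.

def capability_gaps(server_caps: dict, client_caps: set[str]) -> list[str]:
    """Return human-readable warnings for features the server offers
    but the client cannot consume."""
    warnings = []
    resources = server_caps.get("resources")
    if isinstance(resources, dict) and resources.get("subscribe"):
        if "resourceSubscriptions" not in client_caps:
            warnings.append(
                "server pushes resource updates, but client never subscribes: "
                "updates will be silently dropped"
            )
    if server_caps.get("sampling") and "sampling" not in client_caps:
        warnings.append("server requests LLM sampling, which this client ignores")
    return warnings

# A file-watcher server that supports subscriptions, paired with a
# tools-only client:
gaps = capability_gaps(
    {"tools": {}, "resources": {"subscribe": True}},
    {"tools"},
)
```

Nothing in today's clients runs a check like this at install time; the connection simply succeeds and the updates vanish.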
A proper compatibility matrix would show:
| Client | stdio | SSE | Streamable HTTP | Tools | Resources | Resource Sub | Prompts | Sampling |
|---|---|---|---|---|---|---|---|---|
| Claude Desktop | Yes | Yes | Partial | Yes | Yes | No | Yes | No |
| Cursor | Yes | Yes | Yes | Yes | No | No | No | No |
| Windsurf | Yes | Yes | Partial | Yes | No | No | No | No |
| Cline | Yes | Yes | Yes | Yes | Yes | No | Yes | No |
This table is illustrative, not authoritative - which is exactly the problem. No one maintains an accurate, up-to-date version of this.
What Token Cost Means for Discovery
One thing an MCP registry should surface prominently is the token footprint of each server. A typical MCP tool definition costs 50-200 tokens for its name, description, and parameter schema. A server with 30 tools costs 1,500-6,000 tokens per request just to describe what it can do.
Research on dynamic toolsets shows that static tool definitions in context can use 10-100x more tokens than necessary. Progressive disclosure approaches - where the server only exposes relevant tools based on context - reduce this by 85-100x while maintaining accuracy. But you cannot choose the right approach if the registry does not tell you how many tools a server exposes or whether it supports dynamic tool filtering.
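The per-request cost is easy to estimate from a server's tool schemas alone. This sketch uses the common rough heuristic of ~4 characters per token (real tokenizers vary) and hypothetical tool definitions:

```python
# Back-of-envelope token cost of a server's tool schema, using the
# rough ~4 characters-per-token heuristic. Tool definitions are hypothetical.
import json

CHARS_PER_TOKEN = 4  # heuristic; actual tokenizers vary by model

def schema_token_estimate(tools: list[dict]) -> int:
    """Estimate tokens consumed just by describing the tools in context."""
    serialized = json.dumps(tools)
    return len(serialized) // CHARS_PER_TOKEN

tools = [
    {
        "name": f"query_table_{i}",
        "description": "Run a read-only SQL query against one table.",
        "inputSchema": {
            "type": "object",
            "properties": {"sql": {"type": "string"}},
            "required": ["sql"],
        },
    }
    for i in range(30)
]
estimate = schema_token_estimate(tools)  # every request pays this before any call
```

A registry that publishes `toolCount` and average schema size would let users run this arithmetic before installing, instead of discovering the overhead in their context window.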
What a Good MCP Registry Needs
Beyond compatibility, an effective registry needs:
- Security posture - MCP servers get access to your filesystem, API keys, and running processes. A registry should surface what permissions a server requests, whether it has been audited, and whether it uses sandboxing. There is currently no standard for expressing this.
- Usage signals - GitHub stars are a weak proxy for adoption. Download counts, active install estimates, and issue response time matter more for deciding whether a server is maintained.
- Configuration templates - copy-paste configs for each major client, not just raw JSON. The same server needs different configuration blocks for Claude Desktop, Cursor, and Windsurf.
- Version compatibility tracking - the MCP spec is still evolving. A server written against an older protocol version may fail silently with a newer client that drops backward compatibility. The registry should flag version drift before you install.
- Test coverage indicators - does the server include integration tests against actual client behavior, or just unit tests of internal logic?
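The configuration-templates item above is the most mechanizable: a registry holding one manifest per server could render each client's config block automatically. A minimal sketch, assuming a hypothetical manifest shape (both Claude Desktop and Cursor happen to use an `mcpServers` map, but verify against each client's current docs):

```python
# Sketch: generate a per-client config block from one registry manifest.
# Manifest fields and the client file layout are illustrative assumptions.
import json

def render_config(manifest: dict, command: str, args: list[str]) -> str:
    """Emit the JSON block a user pastes into their client config file."""
    block = {"mcpServers": {manifest["name"]: {"command": command, "args": args}}}
    return json.dumps(block, indent=2)

manifest = {"name": "example-mcp-server"}
config = render_config(manifest, "npx", ["-y", "example-mcp-server"])
```

Extending this with per-client quirks (different file paths, transport wrappers) is exactly the kind of maintenance burden that belongs in one registry rather than in every server's README.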
Why This Matters Now
The MCP ecosystem is growing fast. Anthropic, Linear, Atlassian, GitHub, and dozens of other companies now publish official MCP servers. The number of community servers is in the thousands. Without proper discovery and compatibility tooling, the ecosystem risks fragmenting into incompatible islands where the only reliable installation method is trial and error.
The cost of this fragmentation is measured in developer hours. If each developer wastes two hours per month debugging MCP compatibility issues, across tens of thousands of active MCP users, that adds up to millions of wasted hours per year. A well-maintained compatibility matrix and registry would eliminate most of that waste.
Example of what a machine-readable compatibility declaration could look like:

```json
{
  "name": "example-mcp-server",
  "version": "1.2.0",
  "mcpVersion": "2024-11-05",
  "transports": ["stdio", "sse"],
  "capabilities": {
    "tools": true,
    "resources": true,
    "resourceSubscriptions": false,
    "prompts": false,
    "sampling": false
  },
  "testedClients": ["claude-desktop-1.0", "cursor-0.44", "cline-2.1"],
  "toolCount": 12,
  "permissions": ["filesystem:read", "network:outbound"]
}
```
This would take an afternoon to implement as a spec extension. The ecosystem just needs someone to push for it as a standard.
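Once servers publish such a declaration, a client (or installer) can check compatibility before anything is downloaded. A minimal sketch - the manifest keys mirror the example above, while the client profile shape is an assumption for illustration:

```python
# Sketch: how a client could consume a compatibility declaration
# before install. Manifest and profile shapes are illustrative.

def check_compatibility(manifest: dict, client: dict) -> list[str]:
    """Return blocking problems; an empty list means 'should work'."""
    problems = []
    if not set(manifest["transports"]) & set(client["transports"]):
        problems.append("no shared transport: a bridge/wrapper is required")
    for cap, needed in manifest["capabilities"].items():
        if needed and not client["capabilities"].get(cap, False):
            problems.append(f"server uses '{cap}' but client does not support it")
    if manifest["mcpVersion"] > client["mcpVersion"]:
        # MCP spec revisions are dated (YYYY-MM-DD), so plain string
        # comparison orders them correctly.
        problems.append("server targets a newer protocol revision than the client")
    return problems

client = {
    "transports": ["stdio"],
    "capabilities": {"tools": True, "resources": True},
    "mcpVersion": "2024-11-05",
}
manifest = {
    "transports": ["stdio", "sse"],
    "capabilities": {"tools": True, "resources": True, "sampling": False},
    "mcpVersion": "2024-11-05",
}
problems = check_compatibility(manifest, client)
```

A registry running this check against published client profiles could replace the trial-and-error installation loop with a single pre-install verdict.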
Fazm is an open source macOS AI agent, available on GitHub.