I've spent the last several months telling people how MCP has transformed my development workflow. And it has. That's true. But I've been telling the highlight reel, and I think it's time for the director's cut - including all the scenes where everything falls apart and I'm swearing at my monitor.
Because nobody writes about this part. The MCP conversation in the AI community is split between breathless evangelists who make everything sound effortless and complete sceptics who dismiss it without trying it. The reality is somewhere in the middle, and it's messier than either side admits.
The Promise
On paper, MCP is elegant. A standardised protocol for connecting AI models to external tools and data sources. Build a server once, connect it to any compatible AI client. Universal pluggability. The USB analogy that everyone uses is genuinely apt.
And when it works, it's incredible. Claude reading my task board, querying my database, interacting with my browser, running test cases against my product - all through a standardised interface. The productivity gains are real. I've documented them in previous posts and I stand by every word.
But "when it works" is doing a lot of heavy lifting in that sentence.
The Sessions Nobody Talks About
Let me tell you about some of the sessions that didn't make it into the success stories.
I tried connecting the Pieces MCP server. Pieces is a developer tool for saving and organising code snippets, and their MCP integration sounded useful. The connection failed. Not with a helpful error message, mind you. It just didn't connect. The server process started, the log showed initialisation, and then... nothing. Claude couldn't see any tools. I spent an hour debugging configuration, trying different transport settings, checking versions. Never got it working. Moved on.
I tried the Semgrep MCP plugin for code analysis. The plugin wouldn't start. Not "started and failed" - it just wouldn't start. The process would launch and immediately exit with an error about a missing dependency that, as far as I could tell, was installed correctly. Three different installation approaches. Two hours. No result. Moved on.
The Task Board MCP server - my own server, the one I built and maintain - would occasionally show as connected but with no tools visible. The status light was green. The connection was established. But the tools list was empty. Restarting sometimes fixed it. Restarting sometimes didn't. The fix that eventually worked was fully removing the configuration, restarting Claude Code, re-adding the configuration, and restarting again. Nobody should have to do that.
These aren't isolated incidents; they're a pattern. MCP integrations are brittle. When they work, they work beautifully. When they don't, the debugging experience ranges from frustrating to impossible.
SSE Is Dead. Long Live Streamable HTTP.
Just when I'd got comfortable with the two MCP transport mechanisms - stdio for local servers and Server-Sent Events (SSE) for remote ones - the protocol evolved. SSE is being deprecated in favour of Streamable HTTP.
The technical reasoning is sound. SSE has limitations around bidirectional communication and connection management that Streamable HTTP addresses. But from a developer experience perspective, this means every MCP server built with SSE transport needs to be updated. Every tutorial that explains SSE setup is now partially outdated. Every integration that relies on SSE is on borrowed time.
This is the kind of change that's necessary for the protocol's long-term health but painful in the short term. Especially if you've already fought through the debugging to get SSE working and now get to do it all again with a different transport mechanism.
I don't recommend building a new MCP server on SSE in 2026. But if you're going to do it, at least know that you'll be migrating to Streamable HTTP before long.
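For context, the transport difference shows up directly in client configuration. Here's a sketch of what a Claude Code-style config might look like with one local stdio server and one remote Streamable HTTP server - the server names and URL are placeholders, and the exact field names (`type`, `url`) vary between clients and versions, so treat the shape as illustrative rather than authoritative:

```json
{
  "mcpServers": {
    "task-board-local": {
      "command": "node",
      "args": ["server.js"]
    },
    "task-board-remote": {
      "type": "http",
      "url": "https://example.com/mcp"
    }
  }
}
```

The migration pain is mostly server-side; on the client side it's often just swapping an SSE entry for an HTTP one like the second block above.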
Windows: The Forgotten Platform
Here's something that took me an embarrassingly long time to figure out. MCP servers on Windows require a "cmd /c" wrapper when launched from most AI clients.
On macOS and Linux, you can typically specify the server command directly - "node server.js" or "python server.py". On Windows, depending on how the AI client launches the process, you need to wrap it: "cmd", with args ["/c", "node", "server.js"]. Without this wrapper, the process silently fails to start, or starts but can't communicate with the parent process properly.
This isn't documented prominently anywhere. I found it buried in a GitHub issue after two hours of debugging why a perfectly working MCP server was refusing to connect on my Windows machine while the identical setup worked on my colleague's Mac.
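In config terms, the difference between platforms looks roughly like this - server names and paths are placeholders, and whether the wrapper is needed depends on how your particular client spawns processes:

```json
{
  "mcpServers": {
    "my-server-on-macos-or-linux": {
      "command": "node",
      "args": ["server.js"]
    },
    "my-server-on-windows": {
      "command": "cmd",
      "args": ["/c", "node", "server.js"]
    }
  }
}
```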
The MCP ecosystem has a noticeable macOS bias. Most of the documentation, examples, and tooling assume you're on a Mac. Windows isn't unsupported, exactly, but it's clearly not the primary development platform for most MCP server authors. If you're developing on Windows, expect to spend extra time on environment-specific issues that nobody else seems to be having.
The OAuth Reliability Problem
OAuth in MCP is a mess. I'm going to state that plainly because dancing around it helps nobody.
The flow is inherently awkward. An MCP server running in a terminal needs to authenticate with a web service. This means either launching a browser for an OAuth redirect, using the device authorisation flow, or pre-configuring tokens. Each approach has problems.
Browser redirects are fragile. The timing between the browser authentication and the MCP server's callback listener is sensitive. If the user takes too long to log in, the server times out. If the redirect URL doesn't match exactly, the whole flow fails silently. If there's a proxy or firewall in the way, nothing works and the error messages are useless.
Device authorisation flow is better but not universally supported. And pre-configured tokens expire, requiring manual rotation that defeats the purpose of having a seamless integration.
I've had sessions where OAuth worked perfectly for three days, then stopped working on the fourth day with no configuration change. The token had expired silently, and the error message was a generic "authentication failed" with no indication of what specifically failed or how to fix it.
Data going to wrong accounts was another fun discovery. During testing, an OAuth misconfiguration meant that requests were being authenticated as the wrong user. Not failing, mind you - succeeding, but against the wrong account's data. In a multi-tenant system, this is the kind of bug that makes your stomach drop. We caught it in testing, but it highlighted how careful you need to be with identity and authentication in MCP servers.
The DX Gap
The developer experience around MCP needs significant improvement. Here's my wish list:
Better error messages. "Connection failed" is not a helpful error. Was it a transport issue? An authentication issue? A version mismatch? A network problem? A configuration error? The current error reporting gives you almost nothing to work with.
A proper debugging mode. I want to be able to see every message being sent between the AI client and the MCP server, in real time, in a readable format. Currently, debugging MCP connections involves adding logging to the server, checking multiple log files, and trying to correlate timestamps between the client and server. There should be a built-in inspector tool.
Connection health monitoring. The "connected" status indicator is binary - connected or not. It doesn't tell you if tools are loaded, if authentication is valid, if the server is responsive. A richer health check would save hours of debugging sessions where everything looks connected but nothing works.
Standardised error recovery. When an MCP connection drops mid-conversation, the recovery is undefined. Some clients reconnect automatically. Some don't. Some require a full restart. The protocol should specify reconnection behaviour so that every client handles dropped connections consistently.
But Here's the Thing
After all of that - the failed connections, the deprecated protocols, the Windows pain, the OAuth nightmares, the invisible errors - I'm still using MCP every single day. I'm still building MCP servers. I'm still telling other developers to learn it.
Because when it works, it fundamentally changes what's possible with AI development tools. The difference between an AI that can only read your code and an AI that can read your code, check your task board, query your database, and interact with your browser is not incremental. It's categorical.
MCP in early 2026 feels like the early web. The concept is sound. The potential is enormous. The implementation is rough, the tooling is immature, and the documentation assumes you already know things that you don't. But the underlying idea - a standard protocol for giving AI access to external tools and data - is clearly right.
The question isn't whether MCP will succeed. It's whether the developer experience will improve fast enough to bring mainstream developers along before they give up in frustration. Right now, using MCP requires a high tolerance for debugging, a willingness to read GitHub issues instead of documentation, and the patience to restart things multiple times until they work.
That's not a sustainable barrier to entry. The protocol needs to mature, the tooling needs to improve, and the documentation needs to be written for people who haven't been following the spec since day one.
But if you can stomach the rough edges, the payoff is genuine. Just go in with your eyes open, your expectations calibrated, and your debugging patience fully charged. The reality of MCP in 2026 is messy, frustrating, occasionally brilliant, and absolutely worth the effort.
It's just not the seamless plug-and-play experience that the marketing suggests. Not yet.