
I Built an MCP Server for My SaaS Product - Here's What Nearly Broke Me

Paul Allington · 24 February 2026 · 9 min read

I'd been using MCP servers built by other people for weeks. MongoDB, Chrome, file systems - all pre-built, all working nicely. So when I decided to build an MCP server for Task Board, our project management tool, I figured it would be a weekend project. Connect the API, expose some tools, done.

It was not a weekend project. It was a 123-message debugging odyssey that tested my patience, my understanding of OAuth, and my ability to resist throwing my laptop out of the window.

Why Build One at All?

The reasoning was sound. Task Board is a Kanban-style project management tool that I built and maintain. I use it every day. Claude Code is the other tool I use every day. If Claude could read tasks directly from Task Board, update statuses, add comments, and move cards between columns, I'd eliminate the constant context-switching that was eating into my workflow.

Instead of copying a task description from Task Board, pasting it into Claude, explaining the context, then going back to Task Board to update the status when I'm done - Claude could just do all of that itself. Read the task, understand it, implement the fix, comment on the task, move it to done. One fluid workflow instead of six browser tabs.

I'd already built the Task Board REST API. How hard could it be to wrap it in an MCP server?

The First 40 Messages: False Confidence

The initial scaffolding went surprisingly well. Claude Code helped me set up the MCP server structure, define the tools, and wire up the API calls. Within a couple of hours I had something that looked complete. A proper MCP server with tools for listing boards, reading tasks, creating tasks, updating statuses, adding comments - the full set.

I configured it in Claude Code's MCP settings, restarted, and... it connected. Green light. Tools showing in the list. I felt genuinely clever for about fifteen minutes.

Then I tried to actually use one of the tools.

Nothing happened. The tool was there. Claude could see it. But when Claude tried to call it, it just... didn't work. No error message. No timeout. Just silence, followed by Claude politely suggesting we try a different approach.

This is where the 123-message journey began.

The OAuth Nightmare

The first real wall was authentication. Task Board uses OAuth for its API, which is perfectly sensible for a multi-tenant SaaS product. What's less sensible is trying to make OAuth work inside an MCP server that's being called from a CLI tool that has no browser.

OAuth assumes a browser. It assumes redirect URIs. It assumes a user sitting at a login screen, typing credentials, and being bounced back to the application. An MCP server running inside Claude Code has none of these things. It's a process running in a terminal. There's no browser to redirect to. There's no login screen to present.

I tried half a dozen approaches. Local callback server that listens on a port and catches the redirect. Device authorisation flow. Pre-generated tokens stored in environment variables. Each one worked in isolation and broke when integrated with the MCP transport layer.

The OAuth callback was particularly maddening. I'd get the authorisation URL, open it in a browser, log in successfully, and the callback would fire - but the MCP server had already timed out waiting for it. The timing was wrong. The transport connection would drop while the user was still authenticating.

I'll be honest with you, there were messages in that thread that I'm not proud of. Things like "omg why is this no longer working?!" and, after yet another approach failed and then mysteriously started working: "ok it's working! what the heck happened?!" That's not a metaphor for calm, methodical debugging. That's genuine bewilderment.

The Transport Layer: A Special Kind of Pain

MCP supports different transport mechanisms. The one I started with was stdio - standard input/output. Simple, works locally, no network involved. Except when it doesn't, which was roughly 40% of the time for reasons I still cannot fully explain.

The symptoms were maddening. The server would start. The connection would establish. The tools would appear in Claude Code's list. And then when you actually tried to use a tool, the message would vanish into the void. No error. No response. Just nothing.

I'd restart the server. It would work. I'd change nothing and restart again. It wouldn't work. I'd add a logging statement. It would work. I'd remove the logging statement. It would still work. I'd close and reopen the terminal. It would stop working.

If you've ever debugged a Heisenbug - a bug that changes behaviour when you try to observe it - you'll know this feeling. It's the software development equivalent of trying to catch smoke.

Eventually I narrowed it down to a buffering issue with how the stdio transport was handling the JSON-RPC messages. The fix was absurdly simple once I found it. But finding it took about thirty messages of increasingly creative profanity directed at my terminal.

The ChatGPT Detour

At some point during this process, I had the bright idea of connecting my MCP server to ChatGPT as well. If I was building this integration, I might as well make it work everywhere, right?

I spent a not-insignificant amount of time configuring the connection, reading OpenAI's documentation on MCP support, and getting everything wired up. Then I discovered that MCP integration in ChatGPT required a paid tier that I wasn't on.

That's an hour I'm not getting back. The documentation could have mentioned this requirement approximately fourteen pages earlier than it did. But lesson learned: check the pricing page before the integration guide.

User Identity: The Subtle Problem

Here's a problem that doesn't occur to you until you're deep into the implementation: when Claude calls the Task Board API through the MCP server, who is making the request?

In a normal API call, the user authenticates and the API knows who they are. In an MCP context, the AI is making the call on behalf of a user. But which user? The MCP server needs to know who it's acting as, because Task Board is multi-tenant - different users see different boards, different tasks, different data.

I ended up building a user identity override feature. The MCP server configuration includes the user's identity, and all API calls are made in their context. It sounds simple when I describe it in one sentence. It was not simple to implement correctly, especially around permission boundaries. You don't want an MCP server accidentally giving Claude access to boards the user shouldn't see.
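The essential shape of that identity override, as a minimal sketch with illustrative names (`UserContext`, the `X-Acting-User` header, and the config fields are assumptions, not Task Board's real schema), is: load the acting user from the server's configuration, check the permission boundary before any request leaves the process, and attach the identity to every call so the API can enforce tenancy server-side as well:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserContext:
    """Identity the MCP server acts as, loaded from server config."""
    user_id: str
    allowed_board_ids: frozenset[str]

def fetch_board(ctx: UserContext, board_id: str, api_get) -> dict:
    """Make an API call in the user's context, refusing any board
    outside their permission boundary before the request is sent."""
    if board_id not in ctx.allowed_board_ids:
        raise PermissionError(
            f"user {ctx.user_id} cannot access board {board_id}"
        )
    # api_get is an injected HTTP helper; the identity travels with
    # every request so the backend can re-check permissions too.
    return api_get(f"/boards/{board_id}",
                   headers={"X-Acting-User": ctx.user_id})
```

The belt-and-braces check (client-side refusal plus server-side enforcement) is deliberate: the MCP server should never be the only thing standing between Claude and another tenant's data.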

This is the kind of problem that pre-built MCP servers don't have to deal with. A file system MCP server runs as the local user. A MongoDB MCP server connects with whatever credentials you give it. But a multi-tenant SaaS MCP server has to handle identity properly, and the MCP specification doesn't have a lot of guidance on this.

The Anthropic Submission Guide

Once the MCP server was actually working - tools connecting, authentication flowing, user identity resolved - I wanted to submit it to Anthropic's MCP server directory. There's a submission guide. It has requirements.

Some of the requirements were straightforward. Documentation, tool descriptions, error handling. Standard stuff for any public-facing integration.

Some were less obvious. The way tools are named and described matters more than you'd expect, because Claude uses those descriptions to understand when to call each tool. A poorly described tool doesn't just look unprofessional - it means the AI won't use it correctly. I went through several rounds of refining tool descriptions to be precise enough for Claude to make good decisions about when to use each one.
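To make the difference concrete, here is an invented before-and-after (these are illustrative definitions, not Task Board's actual tools): the vague version tells the model almost nothing about when to use it, while the precise one states what it does, what it needs, and when it applies:

```python
# Vague: Claude has no way to know when this applies, or to what.
vague_tool = {
    "name": "update",
    "description": "Updates a task.",
}

# Precise: purpose, inputs, and when (not) to use it are all explicit.
move_task_tool = {
    "name": "move_task",
    "description": (
        "Move a Task Board task to a different column on its board. "
        "Use this after work on a task is finished (e.g. to 'Done'); "
        "do not use it to edit the task's title or description."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "task_id": {"type": "string",
                        "description": "ID of the task to move"},
            "column": {"type": "string",
                       "description": "Target column name, e.g. 'Done'"},
        },
        "required": ["task_id", "column"],
    },
}
```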

There's also the question of what your MCP server should and shouldn't be able to do. The submission guide is clear about security boundaries. Your MCP server shouldn't be able to do anything the user couldn't do through the normal interface. It shouldn't cache sensitive data unnecessarily. It should fail gracefully when permissions are insufficient.

All sensible. All requiring additional work that I hadn't budgeted for in my "weekend project" estimate.

Was It Worth 123 Messages?

Here's the thing though. After all that pain, the working product is genuinely transformative for my workflow.

I can now say to Claude: "Read the top priority task from the Sprint board, implement it, commit the code, add a comment to the task explaining what was done, and move it to the Done column." And Claude does it. All of it. Without me touching Task Board's UI.

The debugging sessions that used to involve me copying error logs, pasting them into Claude, explaining the context, and then manually updating the task - they now happen in a single conversation where Claude has full access to both the code and the task management system.

Was it worth the pain? Absolutely. Would I do it differently if I started again? Also absolutely. I'd start with the authentication model, not the tool definitions. I'd test the transport layer in isolation before building any business logic on top of it. And I'd check the ChatGPT pricing page before spending an hour on integration.

If you're considering building an MCP server for your own product, my advice is this: budget three times longer than you think it'll take, expect the transport and authentication layers to be where all the pain lives, and keep a log of your debugging sessions. Not for documentation purposes. For your therapist.

Want to talk?

If you're on a similar AI journey or want to discuss what I've learned, get in touch.

paul@thecodeguy.co.uk