
When Your Custom App Becomes a Markdown File

Paul Allington · 13 March 2026 · 7 min read

I spent weeks building TriageAgent. A proper .NET application. Web UI for configuration and monitoring. A background Runner service that polled for new tickets. Workers that spun up Claude Code CLI processes, fed them carefully constructed prompts, parsed the output, and posted findings back to Task Board. Dependency injection, hosted services, configuration files, logging, error handling - the full stack.

Then I asked Claude a simple question, and the answer made me feel a very specific kind of stupid.

What TriageAgent Actually Did

The concept was sound. Support tickets arrive in Task Board. TriageAgent picks them up automatically, investigates the relevant codebase, checks error logs, queries the database if needed, and posts back a structured triage report. I wrote about the general concept in an earlier post - the "teaching AI to do my job" one.

The implementation was genuinely complex. There was a PromptBuilder class that assembled investigation prompts based on the ticket type, the relevant project, and the available context. It would pull in project-specific instructions, database connection details, and a structured investigation framework. The Runner service managed the lifecycle of Claude Code processes - starting them, feeding them the prompt, capturing output, handling timeouts, dealing with failures. The Web UI let me monitor active investigations, review findings, and adjust the prompt templates.

It worked. Not perfectly - as I've written about before, autonomous triage has real limitations - but it worked well enough to be useful. I was genuinely proud of the architecture.

The Question

Claude Code had recently shipped a feature called skills - essentially markdown files that define reusable capabilities. A skill is a structured prompt with instructions, context, and tool access definitions. You store them in your project under .claude/skills/ and invoke them when needed.
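For anyone who hasn't seen one, a skill is startlingly small. A minimal sketch, assuming the current convention of a SKILL.md file with a short frontmatter header (the name and steps below are made up for illustration, not taken from a real project):

```markdown
---
name: summarise-changelog
description: Summarise recent changes in CHANGELOG.md into a short release note.
---

# Summarise Changelog

1. Read CHANGELOG.md and find the entries added since the last tagged release.
2. Group them under Features, Fixes, and Breaking Changes.
3. Output a release note of no more than ten lines.
```

That's the whole capability: a description so Claude knows when to use it, and prose instructions for what to do.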

I was curious about how skills compared to what I'd built, so I asked Claude a question that I expected would validate my approach: "If I were to recreate TriageAgent as a skill, what would the skill definition look like?"

I was expecting it to say something like "well, a skill couldn't handle the complexity of what TriageAgent does because..." followed by a list of reasons why my custom application was necessary.

That is not what happened.

The Answer

Claude analysed the TriageAgent codebase - all of it, the Runner, the PromptBuilder, the Web UI, the workers - and came back with an assessment that I can summarise as follows:

"TriageAgent is essentially a .NET orchestration layer around what Claude Code can already do natively with skills and MCP servers."

It then laid out the mapping:

- The PromptBuilder logic - all those carefully constructed investigation prompts, the project-specific instructions, the structured framework - was just the skill prompt.
- The Runner service that managed Claude Code CLI processes - skills invoke Claude Code directly, so that orchestration layer was redundant.
- The MCP server connections for Task Board and MongoDB - Claude Code already supports MCP natively, so my custom connection management was duplicating built-in functionality.
- The Web UI for monitoring - Claude Code's own output and the Task Board integration covered most of it.

The entire application - the Web UI, the Runner service, the workers, the PromptBuilder, the configuration system - could be collapsed into a single markdown file.

A. Single. Markdown. File.

Sitting With That for a Moment

I'll be honest with you, this was not a comfortable realisation. I'd spent real time and effort building TriageAgent. I'd designed the architecture carefully. I'd handled edge cases. I'd written error handling for failure scenarios. I'd built a Web UI, for goodness' sake.

And none of it was wrong, exactly. When I built TriageAgent, skills didn't exist yet. The MCP integration in Claude Code was less mature. The approach I took was reasonable given what was available at the time. It's not that I made a mistake. It's that the platform moved underneath me.

This is something nobody talks about with AI tooling: the "right way" to build things is changing so fast that code you wrote three months ago might be architecturally obsolete. Not deprecated, not legacy - obsolete. Replaced not by a better version of the same approach, but by a fundamentally different paradigm that makes your entire approach unnecessary.

What the Skill Actually Looks Like

The skill definition is a markdown file. It starts with a description of what the triage capability does. Then it specifies what MCP servers it needs access to - Task Board for reading tickets and posting findings, the project's database for data queries. Then it contains the investigation framework - the same structured approach that PromptBuilder was assembling programmatically, but written as prose instructions.

The investigation steps. The decision heuristics for when to dig deeper versus when to escalate to a human. The output format for triage reports. The project-specific context. All of it fits in a file you could read in five minutes.
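Sketched out - with hypothetical names and structure, since the real file contains project-specific detail I won't reproduce here - the triage skill looks something like this:

```markdown
---
name: triage-ticket
description: Investigate a new support ticket and post a structured triage report.
---

# Triage a Support Ticket

Tools: Task Board MCP server (read tickets, post findings) and the project
database MCP server (read-only queries).

## Investigation steps
1. Read the ticket and identify the affected project and feature.
2. Search the relevant codebase for the code paths involved.
3. Check recent error logs for matching stack traces.
4. If the data looks suspect, run read-only queries against the database.

## Escalation heuristics
- If the root cause is clear, propose a fix and state your confidence.
- If reproduction needs customer data, or the cause spans multiple systems,
  escalate to a human with whatever evidence you gathered.

## Output format
Post a comment to the ticket containing: summary, probable cause, evidence,
and a suggested next step.
```

Every section of that file maps directly onto a class or subsystem in the .NET application.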

All that .NET code was doing one thing: turning human-readable instructions into a format that could be fed to Claude Code. A skill file is already human-readable instructions that Claude Code understands natively. The entire translation layer was unnecessary.

The Broader Pattern

I think this is going to happen to a lot of AI-adjacent tooling over the next year. Developers are building custom orchestration layers, prompt management systems, agent frameworks, and workflow engines. Many of these are solving real problems. But the platforms underneath - Claude Code, the various AI coding assistants, the agent frameworks - are absorbing these capabilities natively.

It's the same pattern we've seen before in software. Developers build libraries to work around platform limitations. The platform absorbs the library's functionality. The library becomes unnecessary. The jQuery-to-native-browser-APIs pipeline, basically.

The difference is speed. That cycle used to take years. With AI tooling, it's taking months. TriageAgent went from "this is a genuinely useful custom application" to "this is a markdown file" in the space of a few weeks.

What I Actually Did

I haven't fully decommissioned TriageAgent yet. There are a few edge cases that the skill approach doesn't handle as cleanly - specifically around long-running investigations that need to survive process restarts, and some monitoring capabilities that I find useful during debugging. But I've started using the skill for most triage work, and the results are comparable.

The .NET application still exists. It still runs. But it's going the way of all code that's been superseded by a better approach: gradually unused, then eventually deleted when someone needs to clean up the repo.

The Lesson

Build for what exists today, not for what you think will exist tomorrow. But also: hold your architecture lightly. The tools are changing fast enough that rigid attachment to a particular implementation is a liability.

If TriageAgent had been a side project I'd tinkered with for a weekend, this wouldn't sting at all. The reason it stings slightly is that I invested real effort in it. And the lesson there is about calibrating investment to the stability of the platform you're building on.

When the platform is mature and stable - .NET, SQL Server, HTTP - build robust, well-engineered solutions. When the platform is evolving weekly - AI tooling in early 2026 - build the minimum viable thing, prove the concept, and be ready to throw it away when the platform catches up.

My custom .NET application became a markdown file. Yours might too. And honestly? That's not a failure. That's progress.

Want to talk?

If you're on a similar AI journey or want to discuss what I've learned, get in touch.
