
Replacing a Broken Legacy System in One AI Session

Paul Allington · 24 April 2026 · 9 min read

The brief arrived as these things usually do: a few sentences that sound simple and contain an entire universe of pain. "The message automation system doesn't work. Nobody understands it. It's complicated."

Three sentences. Three red flags. And the third one - "it's complicated" - is almost always code for "someone built something clever five years ago and then left the company."

I spent a bit of time looking at the existing system. Not long, because there wasn't much point. The architecture was unclear, the documentation was non-existent, and the business rules were buried in layers of abstraction that had been patched and extended until the original intent was thoroughly obscured. The kind of system where every developer who touched it added a workaround on top of a workaround, and now the whole thing is held together by the software equivalent of duct tape and optimism.

So I said what I'd been thinking from the start: "I'm wondering if we should just scrap it and build something new."

The Decision to Start Fresh

There's a school of thought that says you should never rewrite from scratch. It's a well-known argument - the existing system, however messy, contains years of accumulated business logic and edge case handling that you'll inevitably forget to replicate. And that argument is often correct.

But sometimes a system is so far gone that understanding it would take longer than replacing it. When nobody in the organisation can explain what it does, when the behaviour doesn't match what anyone expects, and when the technical debt is compounding faster than anyone can pay it down - sometimes the right answer is to start clean.

The difference this time was that I wasn't starting clean alone. I had Claude Code open in a terminal, and what happened over the next extended session was one of the most complete examples of AI-assisted development I've experienced.

Architecture First, Then Everything Else

I started by describing what the system needed to do. Not what the old system did - what the business actually needed. Message automation: define triggers (a user signs up, a payment fails, a membership lapses), define sequences of messages, define timing and conditions. Users should be able to see the whole flow visually - what triggers what, in what order, with what delays.

From that conversation, we worked out the full architecture. Data models for automation workflows, steps, triggers, and enrolment records. A repository layer for persistence. A service layer for the business logic. A background job engine to handle the actual sending. And a visual UI where non-technical users could build and manage these workflows without touching code.
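To make that concrete, here's a minimal sketch of what data models along those lines might look like. Every name and field here is my own illustration, not the actual schema from the project:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum


class TriggerType(Enum):
    # The trigger events described above
    USER_SIGNED_UP = "user_signed_up"
    PAYMENT_FAILED = "payment_failed"
    MEMBERSHIP_LAPSED = "membership_lapsed"


class StepType(Enum):
    EMAIL = "email"
    SMS = "sms"
    WAIT = "wait"


@dataclass
class Step:
    order: int
    step_type: StepType
    config: dict                      # e.g. a template id, or a wait duration
    delay: timedelta = timedelta(0)   # how long after the previous step


@dataclass
class Workflow:
    name: str
    trigger: TriggerType
    steps: list[Step] = field(default_factory=list)
    active: bool = False


@dataclass
class Enrolment:
    # One user's progress through one workflow
    workflow_name: str
    user_id: str
    enrolled_at: datetime
    current_step: int = 0
```

The point of spelling the models out first, as described above, is that the repository, service, and job layers can all be written against these shapes before any of them exist.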

All of this was specced through conversation before a single line of code was written. I was acting as product owner - describing behaviour, defining requirements, making decisions about scope. Claude was acting as the entire development team.

The Full Stack in One Session

What followed was methodical. Database models first - the foundation everything else sits on. Then repositories. Then service classes with the business logic for creating workflows, adding steps, evaluating triggers. Then the background job engine that would actually process enrolments and send messages on schedule.
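The repository-then-service layering can be sketched like this. Again, the class and method names are invented for illustration (with an in-memory dict standing in for real persistence), not taken from the actual codebase:

```python
class WorkflowRepository:
    """Repository layer: persistence behind a narrow interface.
    A real implementation would talk to a database; this one uses a dict."""

    def __init__(self):
        self._workflows = {}

    def save(self, workflow_id, workflow):
        self._workflows[workflow_id] = workflow

    def get(self, workflow_id):
        return self._workflows[workflow_id]


class WorkflowService:
    """Service layer: business logic for creating workflows and adding
    steps, with no knowledge of how they are stored."""

    def __init__(self, repo):
        self.repo = repo

    def create_workflow(self, workflow_id, name, trigger):
        workflow = {"name": name, "trigger": trigger,
                    "steps": [], "active": False}
        self.repo.save(workflow_id, workflow)
        return workflow

    def add_step(self, workflow_id, step_type, config):
        workflow = self.repo.get(workflow_id)
        workflow["steps"].append({"type": step_type, "config": config})
        return workflow
```

The background job engine then sits on top of the same service layer, pulling due enrolments and dispatching messages on schedule.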

Each layer built on the one before it. And because we'd talked through the architecture upfront, there was very little back-and-forth about design decisions during implementation. The conversation had already happened. Now it was execution.

I'll be honest with you - watching a full-stack application come together at this speed is still slightly surreal, even after months of working this way. Components that would normally take days of development were appearing in minutes. Not perfect, not production-ready without review, but structurally sound and functionally correct.

The Visual UI: Where Iteration Got Real

The backend was relatively straightforward. The UI was where things got interesting - and by interesting, I mean painful in the way that UI work always is.

The requirement was a visual workflow builder. Users needed to see their automation as a flow: trigger at the top, steps connected by arrows, each step showing its type (email, SMS, wait period) and configuration. Click a step to edit it. Drag to reorder. Visual connections showing the flow.

The first version worked but looked wrong. The step editors opened in awkward positions - overlapping the steps they were editing, or appearing off-screen on smaller monitors. The arrows connecting steps weren't quite right. The modal styling was inconsistent with the rest of the application.

This is the part of AI-assisted development that nobody writes about. The first 80% comes together fast. The last 20% - the UI polish, the edge cases, the "this looks slightly off on mobile" adjustments - still requires iteration. Lots of iteration.

I'd describe what was wrong: "The step editor modal is opening behind the workflow panel." Fix applied. "Now it's opening in the right place but the styling doesn't match the rest of the modals." Fix applied. "The close button is too small and the padding is inconsistent." Fix applied. Each round was quick, but there were a lot of rounds.

UI polish through conversation is effective but verbose. You're essentially describing visual problems in words, which is inherently imprecise. "It looks a bit cramped" could mean five different things. Learning to be specific - "the gap between the step title and the step type selector needs to be at least 16px" - makes the feedback loop much tighter.

The Enrolment Engine: Harder Than It Looks

Here's the thing though - the really complex part wasn't the UI. It was the enrolment logic.

When you create a new automation workflow, it's obvious what should happen for future users who match the trigger. They get enrolled, they enter the sequence, messages go out on schedule. Simple.

But what about existing users who already match the trigger? If I create an automation that triggers when someone's membership lapses, do I retroactively enrol everyone whose membership has already lapsed? Some of them lapsed yesterday. Some lapsed six months ago. Do they all get the same sequence? Do they start from step one, or do they get placed at the appropriate point in the sequence based on when their trigger condition was met?

This is the kind of business logic that makes automation systems genuinely difficult. It's not a technical problem - it's a product design problem. And it required actual decisions from me as the product owner, not just implementation from the AI.

We ended up with a configurable approach: when creating a workflow, you choose whether to enrol existing users who match the trigger. If yes, they start from step one with their enrolment timestamp set to when the workflow was activated, not when their trigger condition originally occurred. It's a pragmatic compromise - not perfect for every scenario, but sensible for the most common use cases.
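That compromise is simple enough to sketch. This is my own reconstruction of the behaviour described above, with invented names throughout; the real implementation will differ:

```python
from datetime import datetime, timezone


def backfill_enrolments(workflow, users, matches_trigger,
                        activated_at: datetime):
    """Retroactively enrol existing users when a workflow is activated.

    Only runs if the workflow was configured to enrol existing users.
    Everyone starts from step one, and enrolled_at is the workflow's
    activation time, not when each user's trigger condition originally
    occurred (someone who lapsed six months ago is treated the same as
    someone who lapsed yesterday).
    """
    if not workflow["enrol_existing"]:
        return []
    return [
        {
            "user_id": user["id"],
            "workflow": workflow["name"],
            "current_step": 0,           # everyone starts from step one
            "enrolled_at": activated_at, # activation time, not lapse date
        }
        for user in users
        if matches_trigger(user)
    ]
```

Future users don't go through this path at all; they're enrolled one at a time as their trigger events actually fire.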

The Pattern That Emerged

Looking back at that session, a clear pattern emerged that I've seen repeated across multiple projects since:

Phase one: product thinking. What does this need to do? Who's it for? What are the key workflows? This is entirely human-driven. I was making every product decision. The AI was helping me structure and articulate those decisions, but the vision was mine.

Phase two: architecture. Collaborative. I'd describe what I wanted, Claude would propose a structure, I'd push back or refine, and we'd converge on something solid. This is where the AI's breadth of knowledge is most valuable - it knows patterns and approaches I might not have considered.

Phase three: implementation. Primarily AI-driven. Once the architecture was agreed, the actual code generation was fast and required minimal guidance. The odd correction here and there, but broadly the implementation matched the spec.

Phase four: polish. Back to highly collaborative, lots of iteration. This is the slowest phase relative to the value it produces, but it's where the product goes from "technically works" to "actually usable."

What This Means for Legacy Systems

The old system had been in place for years. Multiple developers had tried to fix it, extend it, work around its limitations. Considerable time and money had been spent on a system that, by the client's own admission, didn't work.

The replacement was built in one extended session. It's cleaner, it's better documented, it's actually understandable by the people who need to maintain it, and - critically - it works.

I'm not suggesting this is universally applicable. This was a bounded system with clear requirements and a willing client. Not every legacy replacement is this straightforward. But the general principle holds: when the cost of understanding the old system exceeds the cost of building a new one, AI has dramatically reduced the cost side of that equation.

Legacy systems used to be untouchable because the effort to replace them was prohibitive. That calculation has changed. Not for everything - mission-critical financial systems and deeply integrated enterprise platforms are still complex enough that "just rebuild it" is dangerous advice. But for mid-complexity business applications where the original developers are long gone and the documentation never existed? The rebuild conversation is worth having now in a way it wasn't two years ago.

The Human in the Loop

One thing I want to be clear about: I wasn't sitting back watching the AI build an application. I was deeply involved in every decision. What triggers should be available? How should the visual flow be laid out? What happens when a user is enrolled in two workflows simultaneously? How do we handle message failures?

Every one of those questions required product judgement, not technical knowledge. The AI had the technical knowledge. I had the product vision and the understanding of what the client actually needed.

That dynamic - human as product manager, AI as development team - is the most productive pattern I've found. And it's the pattern I keep coming back to, project after project. The human decides what to build. The AI figures out how to build it. Both are essential. Neither is sufficient alone.

Want to talk?

If you're on a similar AI journey or want to discuss what I've learned, get in touch.
