Somewhere around the third month of using AI for daily development, I noticed I was spending the first ten minutes of every session re-explaining things. Not code. Not architecture. The rules. The unwritten knowledge that every project accumulates over time - the "we do it this way because..." stuff that lives in developers' heads and nowhere else.
"Use British English." "The settings page should speak the user's language, not developer language." "Static styles go in SCSS. Dynamic brand colours stay inline." "We call them boards, not channels."
I was repeating myself every single day. And I realised: if I'm saying the same thing to the AI at the start of every session, I should probably write it down.
That obvious, almost embarrassingly simple insight turned into the single most impactful practice in our AI development workflow.
The First Rules
It started small. A few lines in a markdown file in the project root. The file was called CLAUDE.md (later I learned this was actually a convention, but at the time I just thought it was a sensible name) and it contained maybe ten lines:
Project name. Tech stack. British English spelling. Don't use ViewBag. Use strongly typed models.
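That first file might have looked something like this — a hypothetical reconstruction, with the exact wording mine, built only from the rules just described:

```markdown
# CLAUDE.md

Project: Task Board — a Kanban tool for small teams.
Stack: Blazor and .NET. Styles live in SCSS.

- Use British English spelling in all code, comments, and UI copy.
- Do not use ViewBag. Use strongly typed models everywhere.
```

Ten lines, nothing clever. The point is that it exists at all.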
Nothing revolutionary. But the effect was noticeable immediately. Sessions started faster. I spent less time correcting. The AI's output was closer to what I wanted on the first try.
So I kept adding to it.
Rules Born From Frustration
The best rules in our context documents all started the same way: something going wrong, me getting annoyed, and then turning that annoyance into a documented rule so it wouldn't happen again.
The terminology rule came from the "channels vs boards" fiasco I've written about before. After correcting Claude for the dozenth time, I added an explicit terminology section. "Boards, not channels. Swimlanes, not columns. Workflow steps, not stages." The problem virtually disappeared overnight.
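In the document itself, that section is just a mapping from the AI's default vocabulary to ours. The table format here is illustrative; the terms are the real ones:

```markdown
## Terminology

| Say this       | Never this |
| -------------- | ---------- |
| boards         | channels   |
| swimlanes      | columns    |
| workflow steps | stages     |
```

A table works better than prose for this: the AI can't half-follow it, and a human reviewer can scan it in seconds.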
The CSS architecture rules came from a particularly maddening session where Claude kept putting styles in the wrong file. I added a section that maps style types to files: static styles in SCSS, component-specific styles use Blazor CSS isolation, and dynamic styles - specifically the club-branded colours in ClubRight that change per tenant - stay as inline styles. That last distinction is important because every AI instinct is to extract inline styles into a stylesheet. For most cases, that's good practice. For dynamically themed multi-tenant applications, it breaks everything.
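The resulting section reads roughly like this (file names are illustrative; the critical part is the explicit "do not extract" warning, because that is exactly what the AI will otherwise do):

```markdown
## CSS architecture

- Static styles: SCSS files only.
- Component-specific styles: Blazor CSS isolation (`Component.razor.css`).
- Dynamic club-branded colours: inline `style` attributes.
  Do NOT extract these into a stylesheet — they change per tenant
  at runtime, and extracting them breaks multi-tenant theming.
```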
The user-facing language rules came from building settings pages for The Code Zone. Claude generated a toggle labelled "Allow future diary check-in". Technically accurate. Completely wrong for the audience. The Code Zone's admin users are club managers, not developers. The setting should say "Users will scan on the welcome screen" - speaking the language of the person who'll actually be reading it.
I caught it, corrected it, and then added a rule: "Settings and user-facing labels should speak the user's language, not developer language. Describe what happens from the user's perspective, not the system's perspective."
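A concrete before/after pair makes a rule like this far easier for both the AI and human reviewers to apply than the principle alone — something along these lines:

```markdown
## User-facing language

Write settings and labels from the user's perspective, not the system's.

- Bad:  "Allow future diary check-in"
- Good: "Users will scan on the welcome screen"
```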
Every frustration became a rule. Every rule prevented the next frustration.
The Progressive Memory Pattern
What emerged over time was something I started thinking of as "progressive memory building". Each AI session would surface some new piece of project-specific knowledge - a naming convention, an architectural decision, a user experience principle - and we'd add it to the context document. The document grew organically, shaped by real problems rather than theoretical best practices.
I found myself routinely saying things like "add this to the style guide" or "update project memory with this rule" at the end of sessions. It became a habit, like writing tests or updating documentation. You don't finish a task until the lessons from that task are captured somewhere permanent.
The key word is permanent. In a human team, knowledge spreads through conversations, code reviews, and shared experience. It's imperfect but it accumulates. With AI, knowledge evaporates at the end of every session unless you explicitly write it down. The context document is the only mechanism for persistence. Treat it accordingly.
What a Mature Context Document Looks Like
After several months of progressive building, our context documents had grown into genuinely comprehensive project guides. Not massive - maybe two hundred lines for the most complex projects - but dense with the kind of knowledge that would take a new team member weeks to absorb.
A typical document now includes:
Project identity. What the product is, who uses it, the core value proposition. This sounds basic, but it prevents the AI from making assumptions. When Claude knows that Task Board is a Kanban tool for small teams, it makes different design decisions than if it's guessing.
Tech stack specifics. Not just "Blazor and .NET" but the specific patterns we use: which state management approach, which CSS methodology, which component library. The more specific, the less the AI has to guess.
Terminology. Every domain-specific term, mapped to what the AI might default to. This section alone probably saves us thirty minutes a week in corrections.
Architecture decisions. Where styles live. How components are structured. What goes in shared libraries versus feature-specific code. Which files should not be modified and why. These are the decisions that, once made, should be respected in every session - not re-debated because the AI doesn't know they were already settled.
User experience principles. How we write copy. What tone to use. Whether labels should describe the system's behaviour or the user's experience. These aren't coding decisions, but they affect every piece of UI the AI generates.
Known issues and fixes. Bugs that have been fixed and should not be re-introduced. CSS rules that have been carefully tuned. Performance optimisations that look unnecessary but are actually critical. This section is pure defensive documentation - protecting past work from future AI sessions.
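Put together, the skeleton of a mature document mirrors that list directly — the section names below paraphrase it, and any real file would flesh each one out with the project's own rules:

```markdown
# CLAUDE.md

## Project identity      (what it is, who uses it, why)
## Tech stack            (specific patterns, not just framework names)
## Terminology           (our words vs the AI's defaults)
## Architecture decisions (where things live; files not to touch)
## User experience principles (tone, copy, perspective)
## Known issues and fixes (defensive documentation of past work)
```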
Why This Is Actually Documentation
I want to push back on the idea that context documents are just an "AI thing". They're not. They're documentation. Good documentation. The kind of documentation that every project should have but rarely does.
Think about what's in our context files: architecture decisions, naming conventions, known issues, design principles, user experience guidelines. This is exactly the information a new developer would need on their first day. It's exactly the information that usually exists only in the heads of the senior team members. It's exactly the information that gets lost when someone leaves the company.
The AI forced us to write this down because it can't function without it. But the documentation is just as valuable for human developers. Every new team member who reads our context documents gets up to speed faster. Every code review is easier because the conventions are explicit rather than implicit.
AI didn't create a need for better documentation. It exposed the need that was always there.
The Compound Effect
Here's what I didn't expect: the improvement compounds. Each rule you add makes the AI's output marginally better. Each session that runs more smoothly because of good context documentation gives you more time to focus on actual development. Each new insight that gets captured makes the next session even smoother.
We're now at a point where the first output from Claude in a new session is noticeably better than it was three months ago - not because the model has improved (though it has), but because our context documents are more comprehensive. The AI starts each session with a better understanding of our projects, our conventions, and our expectations.
It's like compound interest for project knowledge. Small, consistent deposits of explicit rules and conventions, earning returns on every single AI interaction.
Practical Advice for Getting Started
If you're not already maintaining context documents for your AI-assisted projects, start today. Here's how:
Start with frustrations. What do you keep correcting? What does the AI keep getting wrong? Those corrections are your first rules. Write them down exactly as you'd say them to a junior developer.
Add rules when you make decisions, not after. When you decide that "settings labels should describe user experience, not system behaviour", add it to the context document immediately. Don't wait until you've corrected the AI three times. Capture the decision when it's fresh.
Be specific, not abstract. "Use clear naming" is useless. "Use boardId not workspaceId, use swimlane not column" is actionable. The AI needs concrete examples, not principles.
Review and prune regularly. Context documents can bloat if you're not careful, and every token in the document costs attention. Remove rules that are no longer relevant. Consolidate rules that overlap. Keep it as lean as possible while covering everything that matters.
Treat it like code. Version control it. Review changes. Discuss additions with your team. This is a living document that directly affects the quality of your AI-assisted output. It deserves the same rigour as any other part of your codebase.
The Meta-Lesson
The biggest thing I've learned from building project memory is that the value of AI in development isn't just about the model's capabilities. It's about the quality of context you provide. A mediocre model with excellent context will outperform a brilliant model with no context, every time.
We've become so focused on model comparisons - which LLM is smarter, which benchmark score is higher, which can handle more tokens - that we've undervalued the input side of the equation. The context document is your competitive advantage. It's the difference between an AI that generates generic code and an AI that generates code that belongs in your project.
Invest in your project memory. Update it religiously. Treat every corrected mistake as a rule waiting to be written. Because the AI can't remember yesterday, but with good documentation, it doesn't need to.