Architect by Profession, Programmer at Heart
AI Coding Assistants Struggle with Complex State. A Simple Diagram Fixed It.
I spent the holidays working on a hobby project, a little map editor. Think of it as brain gym, keeping the machine-room skills sharp.
I used an AI coding assistant, naturally. It handled most things well. But when I hit moderately complex state management (adding map points, selecting them, moving them, panning the viewport), the AI broke down.
It kept refactoring in circles: move code here, fix something there, break something else. I had Playwright tests and unit tests in place. I tried the latest models. I switched vendors. The hours passed, but nothing worked.
I mostly avoided looking at the code myself because I wanted to see how hard I could push the AI. Eventually, it stopped being fun.
So I drew a state diagram.
I used Mermaid, which defines diagrams as plain text (its state diagrams are essentially UML), because text felt more accessible to the AI than an image.

The Mermaid source:
stateDiagram-v2
    state nothing_selected
    state mouse_is_down
    state near_selected_point <<choice>>
    state is_point_selected <<choice>>
    [*] --> nothing_selected
    nothing_selected --> near_selected_point : mouse_down
    mouse_is_down --> panning : mouse_move
    panning --> panning : mouse_move
    panning --> is_point_selected : mouse_up
    is_point_selected --> nothing_selected : no_point_selected
    is_point_selected --> point_selected : has_selected_point
    mouse_is_down --> add_point : mouse_up
    add_point --> point_selected
    near_selected_point --> moving_point : is_within_selection_range
    near_selected_point --> mouse_is_down : not_within_selection_range
    moving_point --> moving_point : mouse_move
    moving_point --> point_selected : mouse_up
    point_selected --> near_selected_point : mouse_down
Not pretty, but it captures the essential logic. Once I had structured the problem in my mind, I copied the diagram source into my project and asked the AI to align the code with it.
The code worked immediately.
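Part of why a diagram like this helps is that it maps almost mechanically onto a transition table. Here is a minimal TypeScript sketch of that mapping (my illustration, not the project's actual code): the two <<choice>> nodes become boolean guards, and the transient add_point state is collapsed into the mouse_up handler.

```typescript
// States and events named after the Mermaid source above.
type State =
  | "nothing_selected"
  | "mouse_is_down"
  | "panning"
  | "moving_point"
  | "point_selected";

type Event = "mouse_down" | "mouse_move" | "mouse_up";

interface Guards {
  nearSelectedPoint: boolean; // is_within_selection_range
  hasSelectedPoint: boolean;  // has_selected_point
}

function transition(state: State, event: Event, g: Guards): State {
  switch (state) {
    case "nothing_selected":
    case "point_selected":
      // near_selected_point <<choice>>: start dragging the point, or start a press
      if (event === "mouse_down")
        return g.nearSelectedPoint ? "moving_point" : "mouse_is_down";
      return state;
    case "mouse_is_down":
      if (event === "mouse_move") return "panning";
      if (event === "mouse_up") return "point_selected"; // via add_point
      return state;
    case "panning":
      // is_point_selected <<choice>> on release; mouse_move keeps panning
      if (event === "mouse_up")
        return g.hasSelectedPoint ? "point_selected" : "nothing_selected";
      return state;
    case "moving_point":
      if (event === "mouse_up") return "point_selected"; // mouse_move keeps moving
      return state;
  }
}
```

Every mouse handler then reduces to `state = transition(state, event, guards)`, and each branch of the switch can be checked line by line against an arrow in the diagram.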
Why Did It Work?
I had already described my intent in plain language. The AI knew I wanted to select points, move them, pan the map. That wasn’t enough.
Knuth’s literate programming envisions code woven with natural language explanation. Careful abstractions and naming can make code self-documenting. But these assume the code already exists. What if you’re struggling to write it in the first place?
The diagram bridged that gap. It translated what I wanted (select, move, pan) into the technical primitives the code would use (mouse_down, mouse_up, mouse_move, state transitions). The AI could understand requirements. It could write code. But it couldn’t reliably connect the two on its own.
The diagram wasn’t documentation of intent. It was a translation of intent into implementation terms. Once that bridge existed, the AI could cross it.
Why This Matters Beyond Hobby Projects
This isn’t just about map editors. Any system with non-trivial state benefits from explicit modeling: order workflows, approval chains, multi-step wizards, document lifecycles.
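For instance, an order workflow can be captured in the same Mermaid notation (states and events invented here purely for illustration):

stateDiagram-v2
    [*] --> draft
    draft --> submitted : submit
    submitted --> approved : approve
    submitted --> draft : reject
    approved --> shipped : ship
    shipped --> [*]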
The common assumption is that AI makes diagrams obsolete. Why document when AI can figure it out? My experience suggests the opposite. AI reasons better when you give it structure. So do humans.
Diagrams aren’t a relic. They’re a shared language between you and your AI collaborator.
The old tools still work. Now they have a new audience.