<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://heinecke.com/blog/feed.xml" rel="self" type="application/atom+xml" /><link href="https://heinecke.com/blog/" rel="alternate" type="text/html" /><updated>2026-02-27T16:05:29+00:00</updated><id>https://heinecke.com/blog/feed.xml</id><title type="html">Hasko Heinecke, Enterprise IT Architect</title><subtitle>Various things that I find interesting, such as programming, IT architecture and some hobbies.</subtitle><entry><title type="html">Project Management in the Age of AI</title><link href="https://heinecke.com/blog/2026/02/27/project-management-age-of-ai.html" rel="alternate" type="text/html" title="Project Management in the Age of AI" /><published>2026-02-27T00:00:00+00:00</published><updated>2026-02-27T00:00:00+00:00</updated><id>https://heinecke.com/blog/2026/02/27/project-management-age-of-ai</id><content type="html" xml:base="https://heinecke.com/blog/2026/02/27/project-management-age-of-ai.html"><![CDATA[<p>A colleague of mine, a software developer, recently asked if they should move into project management because they’re worried about AI replacing them. Diversifying your skills is always a good idea, I said. Broadening your perspective, learning how projects are run, understanding stakeholder dynamics, all of that makes you more valuable professionally. But if you’re moving into project management because you think it’s safer from AI disruption than software development, then you might be mistaken.</p>

<h2 id="the-state-of-pm">The State of PM</h2>

<p>Project management as practiced today has devolved into mostly PMO work: maintaining artifacts (Jira boards, status reports, RAG dashboards) and facilitating rituals (standups, retrospectives, steering committees). The actual hard part of project management (monitoring risk indicators, judging when to trigger mitigation actions, and executing them in a controlled manner) is largely ignored by the discipline.</p>

<h2 id="the-falsification-principle">The Falsification Principle</h2>

<p>Karl Popper argued that science progresses not by confirming hypotheses but by <em>falsifying</em> them. You can never prove a theory correct; you can only fail to disprove it. Every surviving theory is one that hasn’t been killed yet. Confirmation is cheap. Falsification is informative.</p>

<p>Translate this to project management. A successful milestone tells you very little. Maybe your plan was right, maybe you got lucky, maybe you’re measuring the wrong thing. The watermelon metaphor is a running gag in project management: green outside, red inside. A failure that’s properly instrumented tells you exactly where your assumptions were wrong. That’s the information you need.</p>

<h2 id="two-modes-two-transformations">Two Modes, Two Transformations</h2>

<h3 id="business-as-usual">Business As Usual</h3>

<p>Kuhn called this “normal science”: puzzle-solving within an established paradigm. Most project management operates in this mode: the destination is known, the path is roughly understood, and the job is execution and control.</p>

<p>For BAU projects, the rituals and artifacts are overhead that must be optimized. AI is in a good position to do this. Scheduling, status tracking, risk register maintenance, stakeholder communication, action item tracking. These are structured, repeatable tasks that AI handles well. The BAU side of PM will be transformed. Not augmented, but automated.</p>

<p>Most PM training and certification (PMP, PRINCE2, SAFe) teaches the artifacts and the rituals, not the judgment. The profession doesn’t distinguish between BAU and R&amp;D, and applies the same tools to both. This is like using a ruler to measure the coastline of Britain. The tool works at one scale but is meaningless at another.</p>

<h3 id="rd-paradigm-shift-mode">R&amp;D: Paradigm Shift Mode</h3>

<p>The projects that create new capabilities, enter new markets, or change how work is done operate in paradigm-shift mode. Here, traditional project management is harmful. It treats deviation from plan as failure to be corrected, when deviation is the signal that carries information.</p>

<p>Treat the project plan as a hypothesis to be <em>tested</em>, not confirmed. Every iteration is designed to expose where the plan is wrong, as early and cheaply as possible.</p>

<p>This reframes the entire practice:</p>

<ul>
  <li><strong>Risk management</strong> isn’t about listing risks and assigning probabilities in a register. It’s about designing experiments that would <em>falsify</em> your key assumptions before you’ve committed too many resources.</li>
  <li><strong>Milestones</strong> aren’t progress markers. They’re falsification checkpoints: “If X hasn’t happened by this date, our core assumption about Y is probably wrong.”</li>
  <li><strong>Status reporting</strong> shouldn’t ask “are we on track?” It should ask “what have we learned that challenges our plan?”</li>
  <li><strong>A project that fails fast and cheaply</strong> has generated more value per Euro spent than one that succeeds slowly and expensively while never testing its assumptions.</li>
</ul>

<p>In R&amp;D, failure is the way to generate value. Success is just cost with a business case.</p>

<p>If you’ve done spike solutions, proof-of-concept phases, or risk-driven planning, you’re already doing some of this. The framework here makes it systematic and names the principle so that AI tools can be directed toward it.</p>

<h2 id="ais-role-in-each-mode">AI’s Role in Each Mode</h2>

<h3 id="bau-ai-as-executor">BAU: AI as Executor</h3>

<p>AI automates the PMO layer. Artifact generation, status aggregation, schedule optimization, meeting scheduling, action item tracking, stakeholder updates. The human PM becomes unnecessary for these tasks. What remains is exception handling and escalation judgment, and even those may be automatable as models improve.</p>

<h3 id="rd-ai-as-experiment-designer">R&amp;D: AI as Experiment Designer</h3>

<p>In paradigm-shift mode, AI transforms PM in a different way. Rather than automating rituals, AI helps design the experiments that test assumptions: given the current risks, what is the cheapest experiment that would disprove the most critical assumption? What unstated beliefs are embedded in the project plan? What would need to be true for the project to fail, and are those conditions being monitored? Are there weak signals across status updates, team communications, and external events that connect into an early warning?</p>

<p>The human’s role shifts to deciding which experiments promise the highest value and interpreting results in context. The experiments aim at constructive failure: learning that changes direction before expensive commitments are made.</p>

<h3 id="what-this-looks-like-in-practice">What This Looks Like in Practice</h3>

<p>A well-designed falsification experiment has a realistic chance of failure. Ideally, something close to 50%. This is a shock to current management incentives, where the goal is to design plans that succeed. But an experiment designed to succeed isn’t an experiment; it’s a demonstration. It doesn’t generate new information.</p>

<p>A company is building a global mobile app and needs to choose a technology: native development (iOS and Android separately), mobile web, or a cross-platform framework like Flutter. The team believes Flutter is the way to go. It promises one codebase for both platforms, faster delivery, and lower cost. But there are real risks. Flutter isn’t truly native, the framework may become obsolete (Xamarin, anyone?), and the user experience might be worst-of-both-worlds. A traditional PM would commit to Flutter based on the business case, build the full app, and discover the problems in production.</p>

<p>Popperian PM asks: what would need to be true for Flutter to be the wrong choice? And how can we find out before we’ve spent the full budget?</p>

<p>The experiment: build a minimum viable version of a key user flow in Flutter. Roll it out. Instrument it with telemetry. Gather user feedback. Better yet, if an existing native app already covers the same flow, run an A/B test: keep the existing version as A, build the Flutter version as B, instrument both, compare. Measure what matters: performance, user satisfaction, crash rates, development velocity, maintenance cost.</p>
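<p>The decision rule behind such an A/B comparison can be made explicit in code. This is a minimal sketch, not a real telemetry pipeline; the metric names and thresholds are assumptions chosen for illustration.</p>

```python
from dataclasses import dataclass

@dataclass
class VariantMetrics:
    """Aggregated telemetry for one variant (field names are illustrative)."""
    crash_rate: float       # crashes per 1,000 sessions
    p95_startup_ms: float   # 95th-percentile cold-start time
    satisfaction: float     # mean in-app rating, 1 to 5

def falsified(native: VariantMetrics, flutter: VariantMetrics,
              max_crash_ratio: float = 1.5,
              max_latency_ratio: float = 1.3,
              min_satisfaction_delta: float = -0.3) -> list[str]:
    """Return the assumptions the Flutter variant has falsified.

    An empty list means the experiment failed to disprove Flutter,
    which is the strongest evidence this design can produce."""
    failures = []
    if flutter.crash_rate > native.crash_rate * max_crash_ratio:
        failures.append("stability is comparable to native")
    if flutter.p95_startup_ms > native.p95_startup_ms * max_latency_ratio:
        failures.append("performance is acceptable on target devices")
    if flutter.satisfaction - native.satisfaction < min_satisfaction_delta:
        failures.append("users do not notice the difference")
    return failures
```

<p>Note the shape: the function returns falsified assumptions, not a pass/fail verdict. That keeps the experiment Popperian; a green result is only meaningful because the test had a real chance of coming back red.</p>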

<p>If it turns out that users don’t love the Flutter experience, or that performance on lower-end devices is unacceptable, toss it. The experiment cost a fraction of the full build and saved the company from a multi-year commitment to the wrong technology. If the results are comparable, go ahead with Flutter, now with evidence rather than belief.</p>

<p>AI’s role in this is not to make the Flutter-vs-native decision. It’s to help design the experiment: suggest what to measure, identify the user segments most likely to reveal problems, flag assumptions the team hasn’t tested (“you’re assuming consistent performance across device tiers, but have you tested on budget Android phones?”), and synthesize the results into a clear signal.</p>

<p>Each experiment builds upon the previous ones. The goal isn’t to delay delivery. Think of experiments like going to the gym regularly. You don’t want what goes in, you want what comes out.</p>

<h3 id="a-second-scenario-the-database-migration-that-wasnt">A Second Scenario: The Database Migration That Wasn’t</h3>

<p>Here, the experiment didn’t happen.</p>

<p>A team migrated from a commercial relational database to a document database. The rationale was sound on paper. The document model promised flexibility, the team wanted to move away from expensive licensing, and the new database was popular in the industry. The plan was to rearchitect the data model along the way.</p>

<p>Under time pressure, the rearchitecture didn’t happen. The team migrated the existing relational structures, including stored procedures, into the document database. Tables became collections with table-like schemas. Joins became application-level lookups. The result: performance problems, awkward query patterns, and a data layer that combined the constraints of a relational model with none of the benefits of a relational engine.</p>

<p>In hindsight, the core assumption was: “we can migrate our relational data model to a document database and rearchitect it under project timeline pressure.” That assumption was testable.</p>

<p>The experiment: take one representative module, something with complex stored procedures and relational joins, and migrate it to the document database with the full rearchitecture the team envisioned. Time-box it. Measure how long the rearchitecture takes, what performance looks like, and how much of the original logic survives unchanged. In parallel, migrate the same module to a relational open-source alternative as a control: same schema, same stored procedures, minimal rearchitecture needed.</p>

<p>If the document database migration takes three times longer than planned, or if the team ends up recreating relational patterns in a document store, that’s the falsification signal. The relational alternative becomes the pragmatic intermediate step: escape the licensing cost now, rearchitect toward a document model later when there’s time to do it properly.</p>

<p>If the document migration goes smoothly and the rearchitected module performs well, proceed with confidence.</p>

<p>The cost of this experiment would have been a few weeks of AI-assisted parallel work on one module. The cost of not running it was a full migration that delivered the wrong architecture.</p>

<h3 id="the-human-in-the-loop">The Human in the Loop</h3>

<p>In both scenarios, AI can design the experiment, help execute it (generate the test code, set up the telemetry, build the alternative implementation), and assess the results from the data. What AI cannot do (yet) is decide which experiments are worth running in the first place.</p>

<p>That decision requires understanding which risks are the biggest in the context of the larger picture: the business strategy, the organization’s capacity, the organization’s tolerance for delay, the political landscape around the decision. A document database migration might be technically risky but politically unchallengeable because a senior leader championed it. A mobile framework choice might be technically straightforward but strategically critical because it locks the company into a platform for five years. Knowing which risk to test first, and how to frame the experiment so that the results are actionable rather than ignored, is judgment that depends on context that no model currently maintains.</p>

<p>The project manager of the future is the person who looks at a plan and asks: “What are we betting on, and which bet would hurt the most if we’re wrong?” AI helps answer half of that. The other half is the human value add.</p>

<p>The experiment produces real data regardless of who designed it. If the Flutter A/B test shows poor performance on budget devices, that finding is valid whether a human or an AI chose to test for it. The risk is not that AI designs a bad experiment; it’s that AI designs an experiment that tests the wrong assumption, and the team draws confidence from a result that doesn’t address the real risk. That’s why choosing which assumptions to test remains a human judgment call. The validation is in the results, not the design.</p>

<h2 id="the-tragedy-of-agile">The Tragedy of Agile</h2>

<p>This has happened before.</p>

<p>The Agile Manifesto (2001) was designed for the situation we’re describing: uncertain direction, willingness to act, learning through iteration. “Responding to change over following a plan” is at its core. Build something small. See if reality confirms or falsifies your assumptions. Adapt.</p>

<p>Within a decade, Agile devolved into a cargo cult. Scrum ceremonies replaced waterfall milestones. Jira boards replaced Gantt charts. Story points replaced time estimates. The fundamental mindset never changed. The artifacts were adopted, the philosophy was discarded. Organizations built the runway and the control tower but never understood why the planes didn’t come.</p>

<p>The same fate awaits any attempt to introduce “constructive failure” or “falsification experiments” as a new methodology. If organizations adopt these ideas as rituals without internalizing the reasoning, we will get:</p>

<ul>
  <li>“Innovation sprints” that are just regular sprints with a different label</li>
  <li>“Experiment backlogs” maintained in Jira with the same obsessive tracking</li>
  <li>“Failure retrospectives” that quietly punish the people involved</li>
  <li>“Hypothesis-driven development” where the hypothesis is written after the fact to justify what was already decided</li>
</ul>

<p>Corporate culture, especially in large traditional enterprises, is structured against this. Bonuses are tied to delivery, not to learning. Performance reviews reward green dashboards, not falsified assumptions. There is no color for “we killed an assumption and saved the company money.”</p>

<p>Solving the incentive problem requires changes to governance, reporting, and compensation structures that go well beyond project management. What a project manager <em>can</em> do is frame experiments in terms executives already understand: a small, bounded cost now that reduces the probability of a large, unbounded cost later. The database migration scenario is a good example. A few weeks of parallel work on one module would have cost a fraction of what the failed full migration cost in performance remediation, team morale, and delayed delivery. That’s not a philosophical argument. It’s a budget line.</p>

<h2 id="why-this-time-is-different">Why This Time Is Different</h2>

<p>When Agile devolved into a cargo cult, the penalty was inefficiency and developer cynicism. Annoying, but survivable. Organizations that performed Agile without understanding it were still roughly competitive with organizations that didn’t adopt it at all.</p>

<p>The AI era changes this. AI gives every player the capacity to iterate faster, experiment cheaper, and learn more quickly from failure. Organizations that embrace constructive failure as a way of thinking will outpace those that merely perform it. The gap between “gets it” and “performs it” widens quickly, because the tool that amplifies good judgment also amplifies the cost of bad judgment.</p>

<p>The tragedy of Agile is a cautionary tale. Don’t introduce the new thing as a process. Introduce it as a way of thinking, and let the process emerge from that. Which is what Agile itself was supposed to be.</p>

<h2 id="advice-to-the-budding-project-manager">Advice to the Budding Project Manager</h2>

<p>If you’re a developer considering project management as a safer career path, think again. AI will transform PM at least as deeply as it will transform development. The artifact-and-ritual layer that constitutes most PM work today is more exposed to automation than code generation is. Switching from one AI-disrupted field to another is not a strategy.</p>

<p>But the picture isn’t bleak.</p>

<p>As AI automates the BAU side of PM, the field will shift. The proportion of projects that need a human project manager will shrink, but the projects that remain will be the interesting ones: high-uncertainty R&amp;D work where the direction is unclear, the assumptions are untested, and the value comes from learning fast. What’s left is the hard, cognitive work of guiding AI toward the right experiments, interpreting ambiguous results, navigating human dynamics under uncertainty, and making judgment calls that no model can make alone.</p>

<p>The value of management is proportional to the uncertainty. If your project is predictable, PM is overhead. If your project is uncertain, PM done properly is essential. The job requires thinking, not ceremony facilitation. It rewards curiosity, not compliance. It values the ability to ask “what would need to be true for this to fail?” over the ability to maintain a Gantt chart.</p>

<p>The advice, then, is not to avoid PM but to prepare for the PM that’s coming. And the single most important skill to develop is a sense of uncertainty.</p>

<p>This sounds simple. It is not. Most company cultures expect the expert to have answers, not questions. Saying “I don’t know” in a steering committee feels like failure. Consultants have built entire industries on projecting certainty, and organizations have learned to reward it. The result is that false certainty is everywhere: project plans that look solid because nobody tested the assumptions, risk registers full of low-probability items because admitting a high-probability risk would raise uncomfortable questions, technology choices that feel inevitable because a senior leader endorsed them.</p>

<p>Developing a sense of uncertainty means learning to notice when you feel certain and asking why. Is the certainty based on evidence or on consensus? On data or on authority? On tested assumptions or on untested ones that everyone has agreed to stop questioning? The best project managers, like the best scientists, are the ones who are most honest about what they don’t know.</p>

<p>From there, the practical skills follow:</p>

<ul>
  <li>Learn to think in hypotheses and falsification, not plans and milestones</li>
  <li>Build the skill of designing cheap experiments that retire expensive risks</li>
  <li>Understand human dynamics: when to push, when to listen, when to escalate</li>
  <li>Practice asking “what would need to be true for this to fail?” in every project meeting, even when the question is unwelcome</li>
</ul>

<p>So should my colleague move into project management? I believe the career move matters less than the skills you develop. The boundary between developer and project manager is dissolving. Both roles are converging on the same core: judgment under uncertainty. Pick whichever role gives you the most exposure to hard problems with unclear answers, and run toward them.</p>]]></content><author><name></name></author><category term="it-strategy" /><category term="ai" /><category term="agile" /><summary type="html"><![CDATA[A colleague asked if they should move into project management because they're worried about AI replacing developers. But if you're moving into PM because you think it's safer from AI disruption, you might be mistaken. AI will transform PM at least as deeply as it will transform development.]]></summary></entry><entry><title type="html">Space Data Centers: The First Megastructure?</title><link href="https://heinecke.com/blog/2026/02/15/space-data-centers.html" rel="alternate" type="text/html" title="Space Data Centers: The First Megastructure?" /><published>2026-02-15T00:00:00+00:00</published><updated>2026-02-15T00:00:00+00:00</updated><id>https://heinecke.com/blog/2026/02/15/space-data-centers</id><content type="html" xml:base="https://heinecke.com/blog/2026/02/15/space-data-centers.html"><![CDATA[<p>Recent news of data centers being built across the world, mini nuclear power plants being planned to power them, and rivers being diverted to cool them have sparked debates about where this is leading. The obvious answer: into space.</p>

<p>This reminded me of the pulp science fiction novels I used to read as a kid. The German series <em>Perry Rhodan</em> features a giant AI data center on the Moon called Nathan, after the wise man from Lessing’s Enlightenment-era play <em>Nathan the Wise</em>. Not the worst namesake. (Side note: I learned that n8n, the rather fashionable AI-enabled workflow platform, is specifically <em>not</em> pronounced “Nathan” but “en-eight-en.”)</p>

<p>In the decades between me reading those novels and now, computers did the opposite of what science fiction predicted. They shrank relentlessly. The idea of a giant computer being powerful became a punchline rather than a premise. In the 2012 film <em>Iron Sky</em>, the Moon Nazis’ secret weapon is a computer the size of a building, and a smartphone outperforms it. For a long time, it seemed like the future of computing was small.</p>

<p>Then AI happened, and suddenly we are back to building the biggest machines we can. Perry Rhodan’s Nathan begins construction in 2130 and takes 200 years to build. But life has started imitating fiction sooner than expected. Lonestar Data Holdings is working on the first real lunar data center. In March 2025, their “Freedom” payload successfully operated en route to the Moon, and they are planning multi-petabyte storage at Earth-Moon L1 by 2027. The CEO calls himself a “luna-tic.”</p>

<p>Meanwhile, Starcloud (formerly Lumen Orbit), a Y Combinator graduate partnered with Nvidia, launched a satellite carrying an Nvidia H100 GPU in November 2025. It was a hundred times more powerful than any GPU previously sent to space. They used it to train a small LLM on Shakespeare’s works. It spoke Shakespearean English. A marketing stunt, sure, but a cool one. Their actual ambition: a 5-gigawatt orbital data center.</p>

<h2 id="why-space-makes-sense">Why Space Makes Sense</h2>

<p>The appeal is straightforward. Solar energy in space is not filtered through kilometers of atmosphere and not intermittently blocked by inconvenient Earth rotation. It is available around the clock, in practically infinite amounts. The demand for AI computing power is enormous and growing, and our planet’s rivers and power grids are starting to feel the strain.</p>

<p>There are problems, of course. Micrometeorites (or not so microscopic ones) can pierce solar panels. Cooling is a real challenge: space is a vacuum, an excellent thermal insulator, and thermal energy can only be shed through radiators. To put that in perspective, Starcloud’s planned 5-gigawatt facility would need roughly 8 square kilometers of radiators, an area larger than Gibraltar. And then there is latency, the inescapable consequence of distance.</p>
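<p>The 8-square-kilometer figure passes a back-of-envelope check against the Stefan-Boltzmann law. The sketch below assumes radiators at roughly 300 K with emissivity 0.9, radiating from both faces; these are my assumptions for illustration, not Starcloud’s published design numbers.</p>

```python
# Rough radiator sizing for a 5 GW facility via the Stefan-Boltzmann law.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)
power_w = 5e9             # 5 GW of waste heat to reject
temp_k = 300.0            # assumed radiator temperature
emissivity = 0.9          # assumed surface emissivity
faces = 2                 # panels radiate from both sides

flux_w_m2 = faces * emissivity * SIGMA * temp_k**4   # W rejected per m^2 of panel
area_km2 = power_w / flux_w_m2 / 1e6
print(f"{area_km2:.1f} km^2")   # about 6 km^2, the same order as the quoted 8 km^2
```

<p>Hotter radiators shrink the area dramatically thanks to the fourth-power term, which is why radiator temperature is a central design parameter for any orbital data center.</p>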

<h2 id="the-lagrange-sweet-spot">The Lagrange Sweet Spot</h2>

<p>Low Earth orbits still have day/night cycles, residual atmosphere, and the growing problem of space debris. Even a lunar surface facility would experience a month-long day/night cycle, not ideal for solar-powered operations.</p>

<p>A better option: the Lagrange points L4 and L5 of the Earth-Moon system. These are gravitationally stable points where objects maintain their position relative to both Earth and Moon without constant fuel expenditure. They sit at roughly the same distance as the Moon, meaning a signal round-trip takes about 2.6 seconds. You would not want to guide a self-driving car from there. But for many AI workloads, quality of answers matters more than millisecond response times. A chatbot with that latency is conceivable, though perhaps not in customer service. We ought to think bigger than customer service anyway.</p>
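<p>The 2.6-second figure is simple light-travel arithmetic, assuming L4 and L5 sit at roughly the mean Earth-Moon distance:</p>

```python
# Round-trip signal delay to a point at the mean Earth-Moon distance.
C = 299_792_458           # speed of light in vacuum, m/s
distance_m = 384_400e3    # mean Earth-Moon distance, m
round_trip_s = 2 * distance_m / C
print(f"{round_trip_s:.2f} s")   # 2.56 s
```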

<p>These Lagrange points have interesting neighbors. Faint dust accumulations known as Kordylewski clouds were first reported there in 1961 and confirmed photographically between 2018 and 2022. Their cores are roughly 25,000 kilometers across. Our data centers would share the neighborhood with ghostly cosmic dust. There are worse views from an office window.</p>

<h2 id="science-fiction-got-there-first">Science Fiction Got There First</h2>

<p>The idea of computing infrastructure in space has been imagined many times. Asimov’s “The Last Question” from 1956 traces computing from room-sized machines to cosmic scale across billions of years. Google cites Asimov as inspiration for their Suncatcher project. Douglas Adams gave us Deep Thought, which spent 7.5 million years computing the answer to life, the universe, and everything, then designed its successor: Earth itself, a planet-sized computer.</p>

<p>The logical endpoint of “move to space for more energy” was articulated by Robert Bradbury in 1997 with the Matrioshka Brain: nested Dyson spheres around a star, each layer using the waste heat of the one below, capturing an entire star’s output for computation. Named after Russian nesting dolls. We are not building that tomorrow. But between the Shakespearean LLM in orbit and the multi-petabyte lunar storage, the direction is set.</p>

<p>China has already named their orbital computing project the Three-Body Computing Constellation, after Liu Cixin’s novel. ADA Space and Zhejiang Lab launched 12 satellites in May 2025, with plans for 2,800. When your satellite constellation is named after a science fiction trilogy about existential cosmic threats, you are either very confident or very literary. Possibly both.</p>

<h2 id="are-we-building-a-megastructure">Are We Building a Megastructure?</h2>

<p>So are space data centers the first megastructure we will build beyond Earth? Quite possibly. We have the demand. We have early proof of concept: a GPU training Shakespeare in orbit, a storage payload tested on the way to the Moon, a 12-satellite computing constellation already launched. And we have the James Webb Space Telescope sitting at Earth-Sun L2, roughly 1.5 million kilometers away. If we can park a telescope four times farther than the Earth-Moon Lagrange points, data centers at L4 or L5 are plausible.</p>

<p>There are, of course, real engineering challenges: radiation shielding, maintenance without a service crew, connection bandwidth, and the sheer logistics of construction at Lagrange-point distances. On bandwidth: NASA demonstrated 622 Mbps laser communication from the Moon in 2013, and with the abundant energy available at L4/L5, multi-gigabit links are feasible, though still orders of magnitude below terrestrial data center interconnects. These are engineering problems, not physics problems.</p>

<p>Will it happen exactly as I have sketched it here? I am no physicist or rocket scientist, so don’t quote me on the details. For a proper deep dive, I recommend Isaac Arthur’s excellent video on the topic: <a href="https://www.youtube.com/watch?v=iLNrYwx0th0">Space-Based Industry</a>.</p>

<p>But the trajectory seems clear. The science fiction I grew up reading is becoming engineering documentation. Perry Rhodan’s Nathan was scheduled for 2130. At this pace, reality might not wait that long.</p>

<p>To borrow from Prof. Károly Zsolnai-Fehér: What a time to be alive.</p>]]></content><author><name></name></author><category term="it-strategy" /><summary type="html"><![CDATA[Data centers are being built across the world, mini nuclear power plants planned to power them, rivers diverted to cool them. The obvious next step: into space. From a Shakespearean LLM trained in orbit to multi-petabyte lunar storage, the science fiction I grew up reading is becoming engineering documentation.]]></summary></entry><entry><title type="html">A Bug’s Tale</title><link href="https://heinecke.com/blog/2026/01/19/a-bugs-tale.html" rel="alternate" type="text/html" title="A Bug’s Tale" /><published>2026-01-19T00:00:00+00:00</published><updated>2026-01-19T00:00:00+00:00</updated><id>https://heinecke.com/blog/2026/01/19/a-bugs-tale</id><content type="html" xml:base="https://heinecke.com/blog/2026/01/19/a-bugs-tale.html"><![CDATA[<p>Some years ago I had the pleasure of investigating a production outage in a customer-facing application. I say pleasure because these investigations can be genuinely enjoyable. You do meaningful work under pressure. When you find the answer and fix it, everyone is satisfied. The pressure at the time was heavy, but the work itself was good.</p>

<p>The root cause was a configuration file that a Spring Boot application tried to load from an (internal) GitHub repository at startup. The application ran in the cloud. During high business load, the containers restarted. GitHub happened to be unreachable at that moment. The startup failed. Customers couldn’t access the service.</p>

<p>We found and fixed the cause. We documented the lesson: production assets must not depend on development resources like GitHub. That dependency was eliminated.</p>

<p>Then someone asked the natural follow-up question.</p>

<p>“What other critical applications in our landscape depend on less critical ones?”</p>

<h2 id="the-answers-you-get">The Answers You Get</h2>

<p>In most large enterprises, the answers sound like this:</p>

<ul>
  <li>“None that I know of.”</li>
  <li>“We shouldn’t have that.”</li>
  <li>“We need to ask the teams.”</li>
</ul>

<p>So questionnaires go out, answers get collected in Excel, presentations get made in PowerPoint. Perhaps a dashboard appears in PowerBI. Actions are defined and tracked.</p>

<p>This creates cost and friction because the basic information isn’t maintained at the source. The data quality is unreliable: manually collected rather than automatically discovered. And once the outage is fixed, it recedes into memory.</p>

<p>Then the next production outage happens. Different application. Different dependency chain. Same fundamental problem.</p>

<h2 id="the-database-solution">The Database Solution</h2>

<p>The obvious answer is to maintain a database with the dependencies. Add an inventory of all applications with their criticality levels. Query it: How critical is this application? How critical are its dependencies? Show me the mismatches.</p>
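<p>The mismatch query itself is simple once the data exists. A minimal sketch with a toy in-memory inventory (a real landscape would live in a CMDB or graph database; the application names are invented):</p>

```python
# Find critical applications that depend on less critical ones --
# the pattern behind the GitHub startup dependency described above.
CRITICALITY = {"low": 0, "medium": 1, "high": 2}

apps = {
    "checkout":    {"criticality": "high", "depends_on": ["config-repo", "payments"]},
    "payments":    {"criticality": "high", "depends_on": []},
    "config-repo": {"criticality": "low",  "depends_on": []},
}

def criticality_mismatches(apps):
    """Yield (app, dependency) pairs where the dependency is less
    critical than the application that relies on it."""
    for name, app in apps.items():
        for dep in app["depends_on"]:
            if CRITICALITY[apps[dep]["criticality"]] < CRITICALITY[app["criticality"]]:
                yield name, dep

print(list(criticality_mismatches(apps)))   # [('checkout', 'config-repo')]
```

<p>The query is trivial. Keeping the inventory truthful is the hard part, which is exactly the data quality problem described below.</p>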

<p>Many organizations have exactly this and suffer from persistent data quality problems.</p>

<p>Why? Maintaining it requires discipline most organizations struggle with. Applications change, teams restructure, dependencies shift. The database reflects reality only if someone updates it when reality changes.</p>

<p>But reliability requires more than process discipline. It requires structural support in the data model itself.</p>

<p>You need data types that distinguish “we checked and there are none” from “nobody has entered anything yet.” You need timestamps on every relationship, not just every asset. You need automatic staleness detection that flags entries older than some threshold. You need a process that prompts owners to reconfirm or update data when it ages.</p>

<p>This is infrastructure work that can be automated, but organizations rarely prioritize it. Without it, your dependency database is a collection of guesses with unknown freshness.</p>

<h2 id="two-different-questions">Two Different Questions</h2>

<p>Organizations confuse two questions.</p>

<p>“What do we have, and where is it deployed?” That’s inventory.</p>

<p>“What will break, and what depends on it?” That’s prediction.</p>

<p>Configuration Management Databases answer the first question well. Many have automatic discovery built in. They’re designed to maintain current state with decent data quality. That’s their job.</p>

<p>An architecture model should answer the second question. It should capture not just what exists, but what could exist, what should exist, and the dependencies between possible states.</p>

<p>One is a catalog of the present. The other is a map of risks and possibilities.</p>

<p>Most architecture repositories try to be both and end up being neither: too heavyweight to maintain as real-time inventory, too unreliable to trust as a forward-looking risk signal.</p>

<h2 id="what-works">What Works</h2>

<p>The organizations I’ve seen succeed maintain a clear division of labor. The CMDB tracks what exists right now. The architecture model captures criticality, strategic dependencies, and mismatches that represent risk. Risk platforms track the identified issues and their remediation. Portfolio management tools track the projects meant to fix them.</p>

<p>These are separate systems, but they reference the same entities. The cross-checking between them reveals data quality problems you’d miss if everything lived in one place.</p>

<p>This ecosystem approach requires active consistency management. Is it more work than a single tool? Yes. Does it produce reliable answers when someone asks “What will break if this fails?” Also yes.</p>

<h2 id="the-pattern">The Pattern</h2>

<p>Production outages reveal something about responsibilities.</p>

<p>Solution architects document the dependencies their services create. Enterprise architects oversee the dependencies that fall between services - the ones nobody owns until something breaks.</p>

<p>The job isn’t documenting what exists. It’s predicting what will fail.</p>

<p>The organizations that grasp this distinction stop having the same outage twice. They don’t wait for production to teach them where the risks are. They can answer “what depends on what, and are the criticality levels aligned?” before someone asks under pressure at 3am.</p>

<p>That’s not documentation. That’s architecture doing its job.</p>]]></content><author><name></name></author><category term="enterprise-architecture" /><category term="it-strategy" /><summary type="html"><![CDATA[Production outages reveal something about responsibilities. Solution architects document the dependencies their services create. Enterprise architects oversee the dependencies that fall between services - the ones nobody owns until something breaks.]]></summary></entry><entry><title type="html">AI Coding Assistants Struggle with Complex State. A Simple Diagram Fixed It.</title><link href="https://heinecke.com/blog/2026/01/11/ai-coding-assistants-diagrams.html" rel="alternate" type="text/html" title="AI Coding Assistants Struggle with Complex State. A Simple Diagram Fixed It." /><published>2026-01-11T00:00:00+00:00</published><updated>2026-01-11T00:00:00+00:00</updated><id>https://heinecke.com/blog/2026/01/11/ai-coding-assistants-diagrams</id><content type="html" xml:base="https://heinecke.com/blog/2026/01/11/ai-coding-assistants-diagrams.html"><![CDATA[<p>I spent the holidays working on a hobby project, a little map editor. Think of it as brain gym, keeping the machine-room skills sharp.</p>

<p>I used an AI coding assistant, naturally. It handled most things well. But when I hit moderately complex state management (adding map points, selecting them, moving them, panning the viewport), the AI broke down.</p>

<p>It kept refactoring in circles. Move code here, fix something there, break something else. I had Playwright tests and unit tests in place. I tried the latest models. I switched vendors. The hours passed but nothing worked.</p>

<p>I mostly avoided looking at the code myself because I wanted to see how hard I could push the AI. Eventually, it stopped being fun.</p>

<p>So I drew a state diagram.</p>

<p>I used <a href="https://mermaid.js.org/intro/syntax-reference.html">Mermaid</a>, essentially UML as code, because text felt more accessible to AI than a pure image.</p>

<p><img src="/blog/assets/img/state_transitions.png" alt="State transition diagram" /></p>

<p>The Mermaid source:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>stateDiagram-v2
    state nothing_selected
    state mouse_is_down
    state near_selected_point &lt;&lt;choice&gt;&gt;
    state is_point_selected &lt;&lt;choice&gt;&gt;

    [*] --&gt; nothing_selected
    nothing_selected --&gt; near_selected_point : mouse_down

    mouse_is_down --&gt; panning : mouse_move

    panning --&gt; panning : mouse_move
    panning --&gt; is_point_selected : mouse_up
    is_point_selected --&gt; nothing_selected : no_point_selected
    is_point_selected --&gt; point_selected : has_selected_point
    mouse_is_down --&gt; add_point : mouse_up

    add_point --&gt; point_selected

    near_selected_point --&gt; moving_point : is_within_selection_range
    near_selected_point --&gt; mouse_is_down : not_within_selection_range

    moving_point --&gt; moving_point : mouse_move
    moving_point --&gt; point_selected : mouse_up

    point_selected --&gt; near_selected_point : mouse_down
</code></pre></div></div>

<p>Not pretty, but it captures the essential logic. Once I had structured the problem in my mind, I copied the diagram source into my project and asked the AI to align the code with it.</p>

<p>The code worked immediately.</p>
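<p>For illustration, a diagram like this maps almost mechanically onto a transition table. The following Python sketch is hypothetical, not the editor’s actual code; the choice nodes become guard functions over a context:</p>

```python
# Illustrative transition-table reading of the diagram above (not the editor's
# actual code). Choice pseudo-states resolve immediately via guards on a context.

TRANSITIONS = {
    ("nothing_selected", "mouse_down"): "near_selected_point",
    ("mouse_is_down", "mouse_move"): "panning",
    ("panning", "mouse_move"): "panning",
    ("panning", "mouse_up"): "is_point_selected",
    ("mouse_is_down", "mouse_up"): "add_point",
    ("moving_point", "mouse_move"): "moving_point",
    ("moving_point", "mouse_up"): "point_selected",
    ("point_selected", "mouse_down"): "near_selected_point",
}

CHOICES = {
    "near_selected_point": lambda ctx: "moving_point" if ctx["near_point"] else "mouse_is_down",
    "is_point_selected": lambda ctx: "point_selected" if ctx["has_selection"] else "nothing_selected",
    "add_point": lambda ctx: "point_selected",  # automatic transition in the diagram
}

def step(state, event, ctx):
    """Apply one event, then resolve any intermediate choice states."""
    state = TRANSITIONS[(state, event)]
    while state in CHOICES:
        state = CHOICES[state](ctx)
    return state

ctx = {"near_point": False, "has_selection": False}
s = step("nothing_selected", "mouse_down", ctx)  # -> mouse_is_down
s = step(s, "mouse_move", ctx)                   # -> panning
s = step(s, "mouse_up", ctx)                     # -> nothing_selected
```

<p>Every legal move is one table lookup, and every illegal move fails loudly, which is exactly the structure the refactoring-in-circles code lacked.</p>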

<h2 id="why-did-it-work">Why Did It Work?</h2>

<p>I had already described my intent in plain language. The AI knew I wanted to select points, move them, pan the map. That wasn’t enough.</p>

<p>Knuth’s <a href="https://en.wikipedia.org/wiki/Literate_programming">literate programming</a> envisions code woven with natural language explanation. Careful abstractions and naming can make code self-documenting. But these assume the code already exists. What if you’re struggling to write it in the first place?</p>

<p>The diagram bridged that gap. It translated what I wanted (select, move, pan) into the technical primitives the code would use (mouse_down, mouse_up, mouse_move, state transitions). The AI could understand requirements. It could write code. But it couldn’t reliably connect the two on its own.</p>

<p>The diagram wasn’t documentation of intent. It was a translation of intent into implementation terms. Once that bridge existed, the AI could cross it.</p>

<h2 id="why-this-matters-beyond-hobby-projects">Why This Matters Beyond Hobby Projects</h2>

<p>This isn’t just about map editors. Any system with non-trivial state benefits from explicit modeling: order workflows, approval chains, multi-step wizards, document lifecycles.</p>

<p>The common assumption is that AI makes diagrams obsolete. Why document when AI can figure it out? My experience suggests the opposite. AI reasons better when you give it structure. So do humans.</p>

<p>Diagrams aren’t a relic. They’re a shared language between you and your AI collaborator.</p>

<p>The old tools still work. Now they have a new audience.</p>]]></content><author><name></name></author><category term="genai" /><category term="software-architecture" /><summary type="html"><![CDATA[I had already described my intent in plain language. The AI knew what I wanted. That wasn't enough. The diagram bridged the gap: it translated what I wanted into the technical primitives the code would use.]]></summary></entry><entry><title type="html">Software Systems No Longer Have to Die</title><link href="https://heinecke.com/blog/2026/01/03/software-systems-no-longer-have-to-die.html" rel="alternate" type="text/html" title="Software Systems No Longer Have to Die" /><published>2026-01-03T00:00:00+00:00</published><updated>2026-01-03T00:00:00+00:00</updated><id>https://heinecke.com/blog/2026/01/03/software-systems-no-longer-have-to-die</id><content type="html" xml:base="https://heinecke.com/blog/2026/01/03/software-systems-no-longer-have-to-die.html"><![CDATA[<p>Some well-established truths of software engineering are becoming less true as AI tools mature. One of them: technical debt always accumulates until a system that was originally well-designed, modular, and extensible turns into a legacy mess of undocumented, untested special cases, to the point where replacing it becomes more economically sensible than maintaining it.</p>

<p>One might think: this is normal. All things age and degrade over time until they have to be replaced. But this is not true of software. Yes, hardware breaks or becomes obsolete, but the software running on it can be transplanted, without loss, to different, newer hardware. Software is not material, it is pure information, which physics tells us is preserved under all circumstances. (Well, that is simplifying a bit to make a point.)</p>

<p>What breaks software is not physical degradation but complexity. Some of that complexity comes from new features and requirements. A lot of it, however, comes from technical debt: old code that was never restructured to accommodate and reflect the new requirements, features that were implemented without taking sufficient time to understand how they affect the system architecture, tests that were never written and cannot now protect from regressions into old errors, which then result in even more code to circumvent them without breaking everything else. This is undesired, accidental complexity. To compare with the physical degradation of material things, technical debt is the name for the degradation, the rot that affects software, even though the bits and bytes themselves do not decay.</p>

<h2 id="relentless-and-cheap">Relentless and Cheap</h2>

<p>AI has two properties that matter here: it is relentless and it is cheap, much more so than any human mind.</p>

<p>As a relentless agent, it will explore codebases and log files for obsolete code paths, untested edge cases, and simplification opportunities, explaining obscure properties of system behavior and asking smart questions to make them more explicit through either architectural choices or documentation. As it gradually drives test coverage towards 100%, it builds the foundation it needs to fix those ten thousand warnings your compiler spat out, the ones the human development team only scanned superficially and then discarded because the next feature deadline was approaching fast. It will analyze those priority 4 bug reports and feature requests that never made it into one of the sprints.</p>

<p>As a cheap agent, it scales. Running a hundred such agents doing all of those things at the same time on all parts of the codebase is viable where hiring another one hundred human developers was simply not economically possible. Does AI replace those one hundred developers? Not at all. They were never going to be hired in the first place. But now the work may actually get done.</p>

<h2 id="imperfect-tools-reliable-results">Imperfect Tools, Reliable Results</h2>

<p>One property that AI doesn’t have, as we have all learned by now, is perfection. Say “hallucination” with me. But that’s not the only cause of errors AI can make. When task complexity exceeds a certain limit that is specific to each model generation, thinking appears to break down and the quality seems to fall off a cliff. Some models have particular flaws, such as being notoriously bad at understanding geometry, which makes them bad at UI design tasks. These are just examples. Suffice it to say, AI tools are not immune to making mistakes.</p>

<p>However, software engineering as a discipline has come up with approaches to handle (human) error that extend to the new AI world. For example, Test-Driven Development is the practice of writing a test before writing or changing the actual code that implements a certain behavior. TDD has remained an ideal that rarely gets reached because tests are just one of those things that often get skipped under pressure, resulting in technical debt.</p>

<p>AI agents, being relentless and cheap, can work this way. And because a good test is binary (it succeeds completely or it fails totally), it can be assessed by algorithmic, non-AI code. With the right harness, an AI can build tests reliably, and it can modify code reliably without breaking tests. Now this doesn’t protect against all problems: some tests must be broken to progress. Tests do not protect against Heisenbugs (bugs that don’t appear in the presence of a test), and they don’t reliably discover undesired emergent system behaviors such as race conditions or resource contention. And of course, all components passing their tests doesn’t mean the system as a whole is correct. But tests go a long, very long way toward reliable engineering. And who knows, once the basic test coverage is high enough, we might proceed to task our AI tools with more sophisticated techniques, such as creating test harnesses and system proofs that discover some of those pesky difficult edge cases.</p>

<h2 id="software-can-outlive-its-creators">Software Can Outlive Its Creators</h2>

<p>So what does this leave us with? Well, we know that software systems can be maintained for very long periods of time. Consider the Voyager probes that were launched in 1977, almost 50 years ago. Despite running on literally decaying hardware, their software has remained operational to this day. I am not comparing the genius minds of the Voyager space mission teams to some AI coding tool you just downloaded from the Internet. The point stands: With sufficient care, a software system can survive a very long time. And even if your funding and staffing are not NASA’s, you can maintain and extend your systems for extended periods of time with the help of AI tools.</p>

<h2 id="the-migration-gamble">The Migration Gamble</h2>

<p>But is it worth it? Why not just build a new system or use a SaaS solution? Of course it depends. Current large-scale legacy systems often date back to the late 1990s, when Java entered the picture for enterprises. Some roots may reach back as far as the 1970s, with IBM System/360 being the original foundation. And your data analysts probably use Python libraries whose algorithms were first implemented in Fortran, then ported to C++, and now to Python. That is code refined over six decades. Whether 30+ years of compound expertise, tailored to your business, your customers, is worth salvaging is not for me to decide.</p>

<p>However, migrating to a new system (and really it will be a family of systems, with a plethora of integration points) always poses a significant business risk. Plenty of those migration projects fail or end up having yet another system accumulating technical debt next to, not instead of, the legacy. Often, they cost tens or hundreds of millions, take years to deliver, and delay much-needed business innovation. The very promise of cutting ties with the encumbering past and speedboating into a bright, innovative future ends up postponed, tied down by the process of migration and integration.</p>

<p>Now it can be argued that those tasks can also be accelerated and stabilized with AI tools, and that is true. However, there remains significant business risk. Just think of the change management in the business itself: How will your employees, your processes, your customers adapt? With legacy modernization now becoming a real alternative, why take that particular risk in the first place? I cannot give you the answer in your specific situation. But I can tell you that:</p>

<p>Software systems no longer have to die of accidental complexity due to accumulating technical debt.</p>

<h2 id="what-about-the-humans">What About the Humans?</h2>

<p>Does all this mean that the original team maintaining a system will be replaced with AI tools as well? As I’ve explained in my <a href="/blog/2025/12/28/ai-wont-replace-architects.html">article on plurality and AI</a>, I don’t think so. Constructive disagreement is the force that drives innovation, and AI has a hard time emulating thousands of humans doing just that. My advice: Foster a safe team culture where voicing disagreement is encouraged, especially when it is constructive. And of course, encourage your teams to leverage AI tools where they can. It will be very expensive not to.</p>]]></content><author><name></name></author><category term="enterprise-architecture" /><category term="genai" /><summary type="html"><![CDATA[Some well-established truths of software engineering are becoming less true as AI tools mature. One of them: technical debt always accumulates until replacing a system becomes more economically sensible than maintaining it.]]></summary></entry><entry><title type="html">AI Won’t Replace Architects: The Case for Plurality</title><link href="https://heinecke.com/blog/2025/12/28/ai-wont-replace-architects.html" rel="alternate" type="text/html" title="AI Won’t Replace Architects: The Case for Plurality" /><published>2025-12-28T00:00:00+00:00</published><updated>2025-12-28T00:00:00+00:00</updated><id>https://heinecke.com/blog/2025/12/28/ai-wont-replace-architects</id><content type="html" xml:base="https://heinecke.com/blog/2025/12/28/ai-wont-replace-architects.html"><![CDATA[<p>2026 may become the year of AI disappointment. Not because the technology fails. It will keep improving. But expectations are aimed wrong.</p>

<p>Adoption is out of control. You can’t <em>not</em> adopt AI right now. Every vendor promises transformation. Every consultant warns you’ll be left behind. But are you leveraging AI where it matters to your business?</p>

<p>To answer that, you need to understand AI’s core weakness.</p>

<h2 id="ten-billion-minds">Ten Billion Minds</h2>

<p>Soon there will be ten billion people on this planet. Each one is wired differently. Each has different experiences, different knowledge, different values shaped by different cultures and circumstances.</p>

<p>There are maybe a dozen large language models.</p>

<p>Ten billion minds versus a handful of models. That asymmetry matters.</p>

<h2 id="the-value-isnt-the-human-its-the-humans">The Value Isn’t the Human. It’s the Humans.</h2>

<p>AI already outperforms any single person at certain tasks. Summarizing documents. Finding patterns in data. Generating code. Writing first drafts. This will only accelerate.</p>

<p>So the case for humans can’t rest on what one person can do. That’s a losing argument.</p>

<p>The value is plurality. Diversity. Disagreement.</p>

<p>An LLM can simulate disagreement with itself, the same way one person can play devil’s advocate with their own thoughts, asking “what if?” But one mind, however sophisticated, cannot discover its true blind spots this way. You can use mental tools to find some. Not all.</p>

<p>You could pit Claude against Gemini against GPT, and surface some new perspectives. But that’s still three minds. Your enterprise has hundreds or thousands.</p>

<h2 id="the-architects-in-the-room">The Architects in the Room</h2>

<p>When you put five IT architects in a room, they will spend a day arguing about the shape of the boxes and arrows.</p>

<p>This drives everyone crazy. Project managers want decisions. Executives want progress.</p>

<p>But it is a good thing.</p>

<p>What looks like arguing about notation is surfacing assumptions. One architect sees a dependency the others missed. Another has seen this pattern fail at a previous company. A third raises a regulatory constraint no one considered. The disagreement isn’t inefficiency. It’s the mechanism by which blind spots get discovered, and filled in.</p>

<p>A single AI, however fast and capable, cannot replicate this. It has one perspective, one training set, one set of embedded assumptions. It can generate options quickly, but it cannot genuinely challenge itself the way multiple humans with different experiences can challenge each other.</p>

<h2 id="what-ai-will-actually-replace">What AI Will Actually Replace</h2>

<p>AI won’t replace your architecture team. It will replace:</p>

<ul>
  <li>The tedious documentation no one wanted to write</li>
  <li>The repetitive pattern-matching that burned senior hours</li>
  <li>The first draft that used to take three days</li>
  <li>The manual analysis that delayed decisions</li>
</ul>

<p>These are real gains. They free your architects to spend more time in that room, arguing productively. They amplify human judgment rather than replacing it.</p>

<p>The teams most at risk are the most homogeneous ones. Teams doing one thing by a strict rulebook. Teams optimized for efficiency over adaptability. Those are the tasks AI handles well.</p>

<p>Innovative teams that creatively disagree? They’re the ones driving value creation. AI makes them faster, more resourceful, not obsolete.</p>

<h2 id="the-quiet-and-the-disruption">The Quiet and the Disruption</h2>

<p>History is full of examples: small, homogeneous groups do well in times of quiet growth. They’re efficient. Aligned. Fast.</p>

<p>Then the quiet ends, and they fail spectacularly. They couldn’t see the disruption coming because everyone shared the same assumptions.</p>

<p>AI is one such disruption. But that’s not my main point.</p>

<p>My point is: foster constructive disagreement in your enterprise. Cultivate diversity of thought. Encourage your teams to challenge AI to its limits, not just accept its outputs. Let them argue with it. Let them argue with each other.</p>

<p>Only then do you get surprising ideas. Robust decisions. Things that one mind, artificial or human, couldn’t produce alone.</p>

<h2 id="the-competitive-advantage">The Competitive Advantage</h2>

<p>Companies that win in 2026 won’t be the ones with the best AI. Everyone will have access to roughly the same models.</p>

<p>The winners will be the ones who understand what AI can’t do: hold multiple genuinely different perspectives simultaneously. The ones who build teams that disagree well. The ones who use AI to amplify plurality rather than eliminate it.</p>

<p>Feel free to disagree, though. That’s kind of my point.</p>]]></content><author><name></name></author><category term="enterprise-architecture" /><category term="genai" /><summary type="html"><![CDATA[AI already outperforms any single person at certain tasks. So the case for humans can't rest on what one person can do. The value is plurality, diversity, disagreement. Your enterprise has thousands of minds. AI has one.]]></summary></entry><entry><title type="html">What a Year in Enterprise Architecture Looks Like</title><link href="https://heinecke.com/blog/2025/12/22/what-a-year-in-enterprise-architecture-looks-like.html" rel="alternate" type="text/html" title="What a Year in Enterprise Architecture Looks Like" /><published>2025-12-22T00:00:00+00:00</published><updated>2025-12-22T00:00:00+00:00</updated><id>https://heinecke.com/blog/2025/12/22/what-a-year-in-enterprise-architecture-looks-like</id><content type="html" xml:base="https://heinecke.com/blog/2025/12/22/what-a-year-in-enterprise-architecture-looks-like.html"><![CDATA[<p>People sometimes ask what Enterprise Architects actually do. The honest answer is: it depends on the week. Here is what 2025 looked like: strategy sessions that shape investment decisions for years, security scanners built in a weekend, architecture reviews that strengthen proposals, and AI tools rolled out to a thousand developers.</p>

<h2 id="strategy-work">Strategy work</h2>

<p>I was part of a large team working on a portfolio alignment program spanning multiple countries and hundreds of applications. My role was to bring the perspective of an IT service provider and the lens of IT architecture. The goal was to figure out which modernization investments would deliver value and which would preserve complexity under a new name. One data-driven recommendation, that a particular region’s portfolio was too risky to approve, might shape investment decisions for years. That is the high-leverage end of the job.</p>

<h2 id="technical-work">Technical work</h2>

<p>I also spent time in code repositories, not just reviewing architecture diagrams. As a proof of concept, I built a security vulnerability scanner using an AI coding agent. It found and helped fix nine vulnerabilities in one system, four of them critical. Total cost to scan ten repositories: about eight dollars. That is useful, but the real value comes when this scales to more vulnerability types and hundreds of repositories. Sometimes the most useful thing an architect can do is build a tool that others can run.</p>

<h2 id="trench-work">Trench work</h2>

<p>A lot of architecture is document review. Reading proposals, catching errors, pushing back when a whitepaper is more marketing than substance. Nobody celebrates this work, but without it, bad ideas propagate. I reviewed dozens of papers this year. Some were good. Some needed to be stopped.</p>

<h2 id="adoption-work">Adoption work</h2>

<p>I helped roll out Claude Code to over a thousand developers and presented it to several hundred colleagues at internal events. Watching people discover what AI coding tools can do, and then actually use them, was one of the highlights of the year.</p>

<h2 id="pattern-recognition">Pattern recognition</h2>

<p>After enough reviews, you start noticing what recurs. Tool proliferation: a new system for every regulation. Manual workarounds replacing integration. The tension between wanting to innovate and wanting to control. These patterns matter more than any single project.</p>

<h2 id="on-ai-and-architects">On AI and architects</h2>

<p>AI has changed how I work. I used it to scan for vulnerabilities, automate pull request reviews, analyze application portfolios, and write documentation faster. It made me more effective.</p>

<p>But AI replacing architects? Not yet. The judgment calls still require humans. Which assessment matters most. When to push back. How to deliver uncomfortable conclusions without losing the room. These are not problems a language model solves on its own.</p>

<h2 id="looking-forward">Looking forward</h2>

<p>2026 will bring new challenges. I am grateful to the many colleagues who made this year possible. You know who you are.</p>

<p>I’m wishing everyone a happy festive season, Merry Christmas, and a turn of the year full of hope.</p>]]></content><author><name></name></author><category term="enterprise-architecture" /><category term="ai" /><summary type="html"><![CDATA[People sometimes ask what Enterprise Architects actually do. The honest answer is: it depends on the week. Here is what 2025 looked like: strategy sessions that shape investment decisions for years, security scanners built in a weekend, architecture reviews that strengthen proposals, and AI tools rolled out to a thousand developers.]]></summary></entry><entry><title type="html">3 Signs Your Architecture Function Has Become a Bureaucracy</title><link href="https://heinecke.com/blog/2025/12/13/architecture-bureaucracy-signs.html" rel="alternate" type="text/html" title="3 Signs Your Architecture Function Has Become a Bureaucracy" /><published>2025-12-13T00:00:00+00:00</published><updated>2025-12-13T00:00:00+00:00</updated><id>https://heinecke.com/blog/2025/12/13/architecture-bureaucracy-signs</id><content type="html" xml:base="https://heinecke.com/blog/2025/12/13/architecture-bureaucracy-signs.html"><![CDATA[<p>Architecture functions are everywhere. Architects with titles and responsibilities. Review boards, documentation standards, maybe even an expensive repository tool.</p>

<p>And yet.</p>

<p>When a critical decision needs to be made (which platform to bet on, whether to build or buy, how to integrate an acquisition), do you actually ask your architects? Or do you find yourself going straight to vendors, trusted individuals, or your own gut?</p>

<p>If you’re a CTO or CIO and that question stings a little, keep reading. If you’re a Chief Architect wondering why leadership doesn’t loop you in on the important calls anymore, definitely keep reading.</p>

<p>Here are three signs that your architecture function has drifted from its purpose and become a bureaucracy.</p>

<h2 id="sign-1-youve-stopped-asking">Sign 1: You’ve stopped asking</h2>

<p>Not “people” in general — you. The CTO, the CIO, the executive who owns the architecture function. When did you stop consulting your own architects on important decisions?</p>

<p>Maybe you gave up expecting useful input. Maybe you know you won’t get an answer in time. Maybe the last few answers disappointed you: too theoretical, too risk-focused, too disconnected from what you’re actually trying to accomplish.</p>

<p>So you work around them. You go to vendors directly. You rely on that one trusted technical person who “gets it.” You make the call yourself.</p>

<p>This is rational behavior. But it’s also a sign that something is broken. If the person who owns the architecture function doesn’t rely on it, why does it exist?</p>

<h2 id="sign-2-architecture-has-become-the-excuse">Sign 2: “Architecture” has become the excuse</h2>

<p>In your next program review, notice what gets blamed when delivery is late, when scope is cut, when quality suffers.</p>

<p>“We’re waiting on architecture approval.”<br />
“Architecture won’t let us use that technology.”<br />
“The architecture review board is backed up.”</p>

<p>The function that was supposed to enable delivery has become the scapegoat for why it doesn’t happen.</p>

<p>Sometimes this is fair. Sometimes architecture genuinely is the bottleneck. But more often, “architecture” has become shorthand for “the bureaucracy we have to navigate.” The approval process. The documentation requirements. The standards compliance.</p>

<p>I’ve been on both sides of this. I’ve sidestepped architecture functions to get innovative projects done: invoked pre-approvals, found allies, moved fast before resistance could organize. And I’ve been the architect called in two weeks before go-live to “review” a project that was already committed. Integration gaps patched with manual processes. Error-prone workarounds. Poor user experience. And I couldn’t say no without becoming the reason for the delay.</p>

<p>That’s the rubber stamp trap. Approve garbage and share responsibility for the mess, or reject it and become the scapegoat. No-win.</p>

<h2 id="sign-3-all-the-overhead-none-of-the-answers">Sign 3: All the overhead, none of the answers</h2>

<p>This is where the contradiction becomes impossible to ignore.</p>

<p>You have architecture governance. You have regular architecture meetings. You have documentation templates, review processes, maybe a sophisticated repository tool. You’ve invested real money and real time in architectural maturity.</p>

<p>And yet you still can’t answer basic questions:</p>

<ul>
  <li>What systems hold customer data?</li>
  <li>What would break if we retired this legacy platform?</li>
  <li>What does our API landscape look like?</li>
  <li>Are we on track with our cloud migration, and how would we know?</li>
</ul>

<p>You have the trappings of architecture without the results. More artifacts than ever, but less usable knowledge.</p>

<p>The processes produce documents. The meetings produce decisions (or deferrals). The tools hold diagrams. But when someone needs to actually understand the landscape, to make a real decision about a real system, it’s still an ad-hoc scramble. Someone has to “know the right people” to get answers. And that doesn’t scale.</p>

<h2 id="what-went-wrong">What went wrong</h2>

<p>Architecture functions don’t become bureaucracies overnight. It happens gradually, usually with good intentions.</p>

<p>Someone decides that architecture needs to be “more mature.” Sometimes it’s internal ambition. More often, it’s regulatory pressure: DORA, NIS2, AI Act. Someone has to own the evidence, and EA seems like the natural home. So architecture gets governance. Standards. Review boards. Documentation requirements. Each individually reasonable. Collectively, they shift the function’s center of gravity.</p>

<p>The architects stop being asked “where should we go?” and start being asked “does this comply?” They become scribes, documenting decisions made elsewhere. They become police, enforcing standards they didn’t set for situations they don’t understand.</p>

<p>The ancient Greeks had a word for architect: <em>architékton</em>. It means senior builder. Not senior documenter. Not senior enforcer. Builder.</p>

<p>When your architecture function becomes scribes and police instead of senior builders, you get bureaucracy. The architects are busy (reviewing, documenting, governing) but they’re not building anything. They’re not shaping direction. They’re processing paperwork.</p>

<h2 id="the-cost">The cost</h2>

<p><strong>Unrealized vision.</strong> You have a strategy. A target state. A transformation you’re trying to drive. But if your architecture function can’t translate that vision into actionable guidance, it stays a PowerPoint. The organization defaults to local optimization, and the big picture never materializes.</p>

<p><strong>Shadow IT.</strong> Your best project managers learn to sidestep architecture. They find ways around the governance because the governance doesn’t help them deliver. This works until it doesn’t. Until the shortcuts create integration nightmares or security gaps.</p>

<p><strong>Talent churn.</strong> Good architects don’t want to be scribes and police. They want to solve hard problems and shape direction. If that’s not the job, they leave. But it’s not just architects. The project managers who keep hitting walls, the developers who can’t get answers, the people leaders stuck mediating between delivery and governance: they all burn out or move on. You’re left with the ones who are comfortable processing paperwork.</p>

<p><strong>Audit exposure.</strong> Regulators want evidence of architectural control. DORA, NIS2, AI Act: they all require documentation that reflects reality. If your architecture artifacts are disconnected from what’s actually running (or worse, fabricated to satisfy audits), that’s a finding waiting to happen.</p>

<h2 id="the-way-back">The way back</h2>

<p>If you recognize these signs, the fix isn’t more process. It’s not a better repository tool or a more rigorous review board.</p>

<p>The fix is clarity about what architecture is for.</p>

<p>Architecture is strategic. It’s about understanding where the organization needs to go and providing options to get there. Not documenting where you are. Not policing how you travel.</p>

<p>This requires architects who understand the business strategy, not just the technical standards. It requires leadership that asks architects for input <em>before</em> decisions are made, not rubber stamps after. It requires the judgment to know when existing standards apply and when you’re doing something the standards never anticipated.</p>

<p>I once worked with a Chief Architect who understood this. When we needed to transform how the organization approached cloud, he didn’t enforce existing standards that couldn’t accommodate what we were building. He helped us move fast, clear obstacles, and show results before skeptics could organize resistance. The transformation succeeded because architecture was a strategic ally, not a compliance checkpoint.</p>

<p>Here’s a simple test. Ask your Chief Architect: “What’s the most important technical decision we’ll face this quarter, and what options are you preparing?”</p>

<p>If they have a clear answer, you might be okay.</p>

<p>If they start talking about documentation backlogs and governance compliance, you don’t have an architecture function. You have a bureaucracy with architects in it.</p>]]></content><author><name></name></author><category term="enterprise-architecture" /><category term="transformation" /><summary type="html"><![CDATA[Architecture functions are everywhere. Architects with titles and responsibilities. Review boards, documentation standards, maybe even an expensive repository tool.]]></summary></entry><entry><title type="html">Functional programming with Elm, part 4</title><link href="https://heinecke.com/blog/2020/12/27/fp-with-elm-04.html" rel="alternate" type="text/html" title="Functional programming with Elm, part 4" /><published>2020-12-27T00:00:00+00:00</published><updated>2020-12-27T00:00:00+00:00</updated><id>https://heinecke.com/blog/2020/12/27/fp-with-elm-04</id><content type="html" xml:base="https://heinecke.com/blog/2020/12/27/fp-with-elm-04.html"><![CDATA[<p>In the <a href="/blog/2020/12/23/fp-with-elm-03.html">previous part of the series</a>, we implemented curved tracks and movement along them. This time, we will finally look at branching train lines.</p>

<p>We have already used a graph to store our layout, so we <em>can</em> have multiple connected tracks. We just have not made use of that so far. Let me show you the function that initializes the layout I have been using in the samples.</p>

<div class="language-elm highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">initialLayout</span> <span class="p">:</span> <span class="kt">Layout</span>
<span class="n">initialLayout</span> <span class="o">=</span>
    <span class="kt">Graph</span><span class="o">.</span><span class="n">empty</span>
        <span class="o">|&gt;</span> <span class="n">insertEdgeData</span> <span class="mi">0</span> <span class="mi">1</span> <span class="p">(</span><span class="kt">StraightTrack</span> <span class="p">{</span> <span class="n">length</span> <span class="o">=</span> <span class="mf">75.0</span> <span class="p">})</span>
        <span class="o">|&gt;</span> <span class="n">insertEdgeData</span> <span class="mi">1</span> <span class="mi">2</span> <span class="p">(</span><span class="kt">CurvedTrack</span> <span class="p">{</span> <span class="n">radius</span> <span class="o">=</span> <span class="mf">300.0</span><span class="o">,</span> <span class="n">angle</span> <span class="o">=</span> <span class="mf">15.0</span> <span class="p">})</span>
</code></pre></div></div>

<p>We are inserting edges so that vertex 0 connects to vertex 1, which then connects to vertex 2, with a piece of track stored on each edge. But now let’s add a branching track.</p>

<div class="language-elm highlighter-rouge"><div class="highlight"><pre class="highlight"><code>        <span class="o">|&gt;</span> <span class="n">insertEdgeData</span> <span class="mi">1</span> <span class="mi">3</span> <span class="p">(</span><span class="kt">StraightTrack</span> <span class="p">{</span> <span class="n">length</span> <span class="o">=</span> <span class="mf">75.0</span> <span class="p">})</span>
</code></pre></div></div>

<p>That was easy.</p>

<p>If we want to switch the track the train is using, we have to keep track of which of the possible connections is active. Remember how we defined the layout graph?</p>

<div class="language-elm highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">type</span> <span class="k">alias</span> <span class="kt">Layout</span> <span class="o">=</span>
    <span class="kt">Graph</span> <span class="kt">Int</span> <span class="p">()</span> <span class="kt">Track</span>
</code></pre></div></div>

<p>We specified that we don’t want to store any particular data for vertices. Now is the time to change that: there are different switch types, simple ones but also crossings, for example. Let’s define a type for switches and store it with the vertices in the layout graph. A switch is a list of <em>configurations of routes</em> that can be active, and “switching” means changing from one of the configurations to another. A <em>route</em> is a pair of vertices that determines from where to where the route leads.</p>

<div class="language-elm highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">type</span> <span class="k">alias</span> <span class="kt">Layout</span> <span class="o">=</span>
    <span class="kt">Graph</span> <span class="kt">Int</span> <span class="kt">Switch</span> <span class="kt">Track</span>

<span class="k">type</span> <span class="k">alias</span> <span class="kt">Switch</span> <span class="o">=</span>
    <span class="p">{</span> <span class="n">configs</span> <span class="p">:</span> <span class="kt">List</span> <span class="p">(</span><span class="kt">List</span> <span class="p">(</span> <span class="kt">Int</span><span class="o">,</span> <span class="kt">Int</span> <span class="p">))</span> <span class="p">}</span>
</code></pre></div></div>

<p>Let’s write a function that returns all the switches in the layout. We want all the vertices in the layout graph that have switch data.</p>

<div class="language-elm highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">switches</span> <span class="p">:</span> <span class="kt">Layout</span> <span class="o">-&gt;</span> <span class="kt">List</span> <span class="p">(</span> <span class="kt">Int</span><span class="o">,</span> <span class="kt">Switch</span> <span class="p">)</span>
<span class="n">switches</span> <span class="n">layout</span> <span class="o">=</span>
    <span class="kt">Graph</span><span class="o">.</span><span class="n">nodes</span> <span class="n">layout</span>
        <span class="c1">-- Convert from a list of pairs with a Maybe inside to a list of Maybes.</span>
        <span class="o">|&gt;</span> <span class="kt">List</span><span class="o">.</span><span class="n">map</span> <span class="p">(</span><span class="o">\</span><span class="p">(</span> <span class="n">vertex</span><span class="o">,</span> <span class="n">data</span> <span class="p">)</span> <span class="o">-&gt;</span> <span class="kt">Maybe</span><span class="o">.</span><span class="n">map</span> <span class="p">(</span><span class="o">\</span><span class="n">switch</span> <span class="o">-&gt;</span> <span class="p">(</span> <span class="n">vertex</span><span class="o">,</span> <span class="n">switch</span> <span class="p">))</span> <span class="n">data</span><span class="p">)</span>
        <span class="c1">-- Filter out the Nothings, the vertices that are not switches.</span>
        <span class="o">|&gt;</span> <span class="kt">Maybe</span><span class="o">.</span><span class="kt">Extra</span><span class="o">.</span><span class="n">values</span>
</code></pre></div></div>

<p>Finally, we need to add the switch information to the initial layout.</p>

<div class="language-elm highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">initialLayout</span> <span class="p">:</span> <span class="kt">Layout</span>
<span class="n">initialLayout</span> <span class="o">=</span>
    <span class="kt">Graph</span><span class="o">.</span><span class="n">empty</span>
        <span class="o">|&gt;</span> <span class="n">insertEdgeData</span> <span class="mi">0</span> <span class="mi">1</span> <span class="p">(</span><span class="kt">StraightTrack</span> <span class="p">{</span> <span class="n">length</span> <span class="o">=</span> <span class="mf">75.0</span> <span class="p">})</span>
        <span class="o">|&gt;</span> <span class="n">insertEdgeData</span> <span class="mi">1</span> <span class="mi">2</span> <span class="p">(</span><span class="kt">CurvedTrack</span> <span class="p">{</span> <span class="n">radius</span> <span class="o">=</span> <span class="mf">300.0</span><span class="o">,</span> <span class="n">angle</span> <span class="o">=</span> <span class="mf">15.0</span> <span class="p">})</span>
        <span class="o">|&gt;</span> <span class="n">insertEdgeData</span> <span class="mi">1</span> <span class="mi">3</span> <span class="p">(</span><span class="kt">StraightTrack</span> <span class="p">{</span> <span class="n">length</span> <span class="o">=</span> <span class="mf">75.0</span> <span class="p">})</span>
        <span class="o">|&gt;</span> <span class="n">insertData</span> <span class="mi">1</span> <span class="p">(</span><span class="kt">Switch</span> <span class="p">[</span> <span class="p">[</span> <span class="p">(</span> <span class="mi">0</span><span class="o">,</span> <span class="mi">2</span> <span class="p">)</span> <span class="p">]</span><span class="o">,</span> <span class="p">[</span> <span class="p">(</span> <span class="mi">0</span><span class="o">,</span> <span class="mi">3</span> <span class="p">)</span> <span class="p">]</span> <span class="p">])</span>
</code></pre></div></div>
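<p>With this layout, the <code class="language-plaintext highlighter-rouge">switches</code> function should find exactly one switch, at vertex 1. As a quick sanity check, here is what a call should evaluate to; <code class="language-plaintext highlighter-rouge">expectedSwitches</code> is a name introduced just for this illustration:</p>

```elm
expectedSwitches : List ( Int, Switch )
expectedSwitches =
    -- Vertex 1 is the only vertex carrying switch data in initialLayout,
    -- so `switches initialLayout` should equal this list.
    [ ( 1, Switch [ [ ( 0, 2 ) ], [ ( 0, 3 ) ] ] ) ]
```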

<h2 id="next">Next</h2>

<p>This was a bit hard. We are beginning to utilize the mechanisms of functional programming to make our program more concise. If you think that the <code class="language-plaintext highlighter-rouge">switches</code> function takes a lot of computation, keep in mind that it is a pure function: its result depends only on the layout and will never change unless the layout is replaced with another one. So we are free to compute it once and reuse the result wherever we need it, instead of calling the function again and again.</p>
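<p>One way to take advantage of that purity is to compute the switch list once, when the layout is set, and keep it alongside the layout. This is only a sketch; the <code class="language-plaintext highlighter-rouge">Model</code> and <code class="language-plaintext highlighter-rouge">setLayout</code> shown here are hypothetical, not part of the series’ actual code:</p>

```elm
-- Hypothetical model that caches the switch list next to the layout.
type alias Model =
    { layout : Layout
    , switches : List ( Int, Switch )
    }

-- Recompute the cached switch list only when the layout itself changes.
setLayout : Layout -> Model -> Model
setLayout newLayout model =
    { model | layout = newLayout, switches = switches newLayout }
```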

<p>In the next part, we will make the switches actually switchable.</p>

<div id="train-4"></div>
<script src="/j/assets/series/fp-elm/main-4.js"></script>

<script>var app = Elm.Main.init({ node: document.getElementById("train-4") })</script>]]></content><author><name></name></author><category term="programming" /><category term="elm" /><category term="functionalprogramming" /><category term="trains" /><summary type="html"><![CDATA[In the previous part of the series, we implemented curved tracks and movement along them. This time, we will finally look at branching train lines.]]></summary></entry><entry><title type="html">Functional programming with Elm, part 3</title><link href="https://heinecke.com/blog/2020/12/23/fp-with-elm-03.html" rel="alternate" type="text/html" title="Functional programming with Elm, part 3" /><published>2020-12-23T00:00:00+00:00</published><updated>2020-12-23T00:00:00+00:00</updated><id>https://heinecke.com/blog/2020/12/23/fp-with-elm-03</id><content type="html" xml:base="https://heinecke.com/blog/2020/12/23/fp-with-elm-03.html"><![CDATA[<p>In the <a href="/blog/2020/12/20/fp-with-elm-02.html">previous part of the series</a>, we implemented trains moving from one piece of track to the next. Our layout was still very constrained in that it only consisted of a series of straight tracks, and even though we used a graph to represent the layout internally, we did not yet utilize that power to implement branch lines. We will start with curves.</p>

<p>First, we need to put on our mathematician’s hat again. What does it mean to have a straight track or a curved track? For a straight track we already know part of the answer: Essentially, it has a length and nothing else. A curved track will have different specifications: The radius of the curve and the angle it covers. Its actual length can then be calculated using a function.</p>

<div class="sidenote">

  <p>Side note: Real railway curves are much more complicated than circle segments. Among other things, they ease into their final radius gradually so the passengers’ heads don’t jerk due to the sudden centrifugal force when the train enters the curve. We will ignore this for now, as it only makes the involved functions more complex without adding much of interest from a programmer’s point of view.</p>

</div>

<p>Elm provides <em>union types</em> to accommodate different variants of a data structure. They look like this:</p>

<div class="language-elm highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">type</span> <span class="kt">Track</span>
    <span class="o">=</span> <span class="kt">StraightTrack</span>
        <span class="p">{</span> <span class="n">length</span> <span class="p">:</span> <span class="kt">Float</span> <span class="c1">-- in m</span>
        <span class="p">}</span>
    <span class="o">|</span> <span class="kt">CurvedTrack</span>
        <span class="p">{</span> <span class="n">radius</span> <span class="p">:</span> <span class="kt">Float</span> <span class="c1">-- in m</span>
        <span class="o">,</span> <span class="n">angle</span> <span class="p">:</span> <span class="kt">Float</span> <span class="c1">-- in degrees, why not</span>
        <span class="p">}</span>
</code></pre></div></div>

<p>Note that this time, <code class="language-plaintext highlighter-rouge">Track</code> is not a type <em>alias</em> anymore but a proper type. Type aliases work just like find and replace in your favorite text editor: we could have written the whole data structure in the curly braces every time instead of the alias. A <em>type</em>, however, is its own thing, and here we tell Elm that it can have two wholly different shapes. The names <code class="language-plaintext highlighter-rouge">StraightTrack</code> and <code class="language-plaintext highlighter-rouge">CurvedTrack</code> serve as tags to identify which kind of data structure to expect.</p>

<p>To see how this works, have a look at the brand new <code class="language-plaintext highlighter-rouge">trackLength</code> function that calculates the value differently based on the type of track.</p>

<div class="language-elm highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">trackLength</span> <span class="p">:</span> <span class="kt">Track</span> <span class="o">-&gt;</span> <span class="kt">Float</span>
<span class="n">trackLength</span> <span class="n">track</span> <span class="o">=</span>
    <span class="k">case</span> <span class="n">track</span> <span class="k">of</span>
        <span class="kt">StraightTrack</span> <span class="n">s</span> <span class="o">-&gt;</span>
            <span class="n">s</span><span class="o">.</span><span class="n">length</span>

        <span class="kt">CurvedTrack</span> <span class="n">c</span> <span class="o">-&gt;</span>
            <span class="n">pi</span> <span class="o">*</span> <span class="n">c</span><span class="o">.</span><span class="n">radius</span> <span class="o">*</span> <span class="n">c</span><span class="o">.</span><span class="n">angle</span> <span class="o">/</span> <span class="mf">180.0</span>
</code></pre></div></div>
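<p>Plugging in the curve used in the samples (radius 300 m, angle 15°) gives a quick sanity check of the formula; <code class="language-plaintext highlighter-rouge">sampleCurveLength</code> is a name made up for this example:</p>

```elm
sampleCurveLength : Float
sampleCurveLength =
    -- pi * 300.0 * 15.0 / 180.0, i.e. roughly 78.54 m of arc
    trackLength (CurvedTrack { radius = 300.0, angle = 15.0 })
```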

<p>The <code class="language-plaintext highlighter-rouge">case ... of</code> syntax distinguishes the different variants of the <code class="language-plaintext highlighter-rouge">Track</code> type, and the variable <code class="language-plaintext highlighter-rouge">s</code> or <code class="language-plaintext highlighter-rouge">c</code> is bound to the data structure inside the specific variant. From there, it is easy to handle the two cases and return the correct number. We have already seen the <code class="language-plaintext highlighter-rouge">case ... of</code> syntax when we used <code class="language-plaintext highlighter-rouge">Maybe</code>. Indeed, <code class="language-plaintext highlighter-rouge">Maybe</code> is defined as follows: <code class="language-plaintext highlighter-rouge">type Maybe a = Nothing | Just a</code>, where <code class="language-plaintext highlighter-rouge">a</code> is a <em>type variable</em> that can be replaced with any type of our choosing.</p>
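<p>To make the connection explicit, here is the same pattern applied to a <code class="language-plaintext highlighter-rouge">Maybe Track</code>. The <code class="language-plaintext highlighter-rouge">describe</code> helper is made up for this example and is not part of the series’ code:</p>

```elm
describe : Maybe Track -> String
describe maybeTrack =
    case maybeTrack of
        Nothing ->
            "no track here"

        Just track ->
            -- trackLength handles both variants, as defined above.
            "a track of length " ++ String.fromFloat (trackLength track)
```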

<h2 id="next">Next</h2>

<p>In the next part we will finally implement branches in train tracks.</p>

<div id="train"></div>
<script src="/j/assets/series/fp-elm/main-3.js"></script>

<script>var app = Elm.Main.init({ node: document.getElementById("train") })</script>]]></content><author><name></name></author><category term="programming" /><category term="elm" /><category term="functionalprogramming" /><category term="trains" /><summary type="html"><![CDATA[In the previous part of the series, we implemented trains moving from one piece of track to the next. Our layout was still very constrained in that it only consisted of a series of straight tracks, and even though we used a graph to represent the layout internally, we did not yet utilize that power to implement branch lines. We will start with curves.]]></summary></entry></feed>