Before Anyone Called It AI
There was a moment when nothing was broken. Decisions were being made, work was moving, and the organisation was functioning as expected. What changed was not technology, but ownership. Long before automation entered the conversation, decisions began to arrive without a clear author — emerging from process and alignment rather than mandate — and when questions surfaced later, there was no obvious place to return them.
This shift was treated as maturity.
The organisation was complex. Decisions were interdependent. Clear ownership would have slowed things down. Distributed responsibility looked like sophistication, not risk.
Someone noticed — not as an opponent or dissident, but as part of their role.
Their work was to ensure decisions remained coherent as they moved across domains: that assumptions travelled with outcomes, that trade-offs were not lost in translation. They traced dependencies, reconciled rationales, and quietly repaired breaks in logic before those breaks became visible externally.
At first, this was necessary procedural work.
Over time, it became compensation.
Decisions increasingly arrived already justified. The reasoning existed, but it followed the outcome. The process could explain itself, even when no one could explain why this outcome — rather than another — had been chosen.
Responsibility had begun to migrate.
Not upward.
Not formally.
Laterally — toward those who made decisions intelligible rather than those authorised to make them.
This arrangement held until consequences stopped being local.
A decision made in one part of the organisation began to generate exposure elsewhere: contractual tension, regulatory attention, questions from partners. When these surfaced, there was no recognised owner to absorb them.
The issue was raised quietly.
A brief note was sent to the executive responsible for overall decision coherence. It did not allege failure or propose remedies. It stated that authority had become difficult to locate, that responsibility was being exercised without mandate, and that this pattern created downstream risk that could not be absorbed indefinitely.
The reply was reasonable.
The organisation was under pressure. Speed mattered. Complexity made clear ownership unrealistic. Existing governance processes were deemed sufficient.
Nothing in that response was improper.
And so the work continued.
The same people continued to translate decisions, absorb uncertainty, prevent escalation. From the outside, the organisation appeared stable.
Internally, something had shifted.
Responsibility was exercised without authority.
Authority remained with those increasingly distant from consequence.
This is the phase boards rarely see.
Metrics still look acceptable. Issues are handled. Nothing reaches the agenda.
When the first board-level question eventually arrives, it sounds procedural:
Who approved this?
On what basis was this decision taken?
Where is the record of the trade-off?
By then, the trail is thin.
There is process history, but no accountable author. Minutes, but no mandate. A rationale, but no one positioned to stand behind it.
The people who once could have answered have stopped compensating — not in protest, but by ceasing to carry responsibility they were never formally given.
When this is noticed, it is already hindsight.
The story will later be told as one of acceleration. Of systems moving faster than governance. Eventually, AI will be named.
But this is a retrospective convenience.
What failed happened earlier, without drama.
It failed when responsibility was redistributed without protection.
When ownership became impolite.
When formal leadership was informed — politely — and chose continuity over clarity.
Long before anyone called it AI, the organisation had already learned how to decide without deciding — and how to move forward without knowing who would be expected to account for the consequences.
Everything that followed merely made this visible.