On leading technology when the question has reopened
AI is reshaping how organisations operate — compressing what used to take teams, automating judgement calls, changing the economics of expertise. That shift has reopened a question most technical leaders thought was settled: what they are actually for. I know because it has reopened for me — more than once.
And as I watch how people around me respond, a set of patterns has become visible.
The buyer rebuilds their position around procurement. Evaluate vendors, manage the AI portfolio, own the spend. Every question answered with a platform name. The certainty lives in the vendor landscape — and the thinking has been delegated to it.
The vibe builder ships fast, demos often, lets AI generate the artefacts. Productive. Impressive in a standup. But unable to explain what was built if pressed, or to defend it when it breaks. The certainty lives in the output — and the thinking has been delegated to the model.
The depth-seeker goes deeper. Reads the papers, not the summaries. Learns the internals. Moves toward the frontier. It looks like the opposite of the first two. But the certainty lives in accumulated expertise — and the thinking has been delegated to it just the same.
The evangelist goes all in. Becomes the AI champion, sells the vision internally, builds momentum. The certainty lives in the hype itself — and the thinking has been delegated to enthusiasm.
The governor retreats into governance. Frameworks, committees, responsible AI charters. Busy without building. The certainty lives in process — and the thinking has been delegated to compliance.
The abstainer waits it out. Treats AI as hype, maintains their current position unchanged. The certainty lives in the belief that this, too, shall pass — and the thinking has been delegated to the status quo.
These are not types. They are tendencies — most leaders carry more than one, and one or two dominate. This list is not exhaustive. There will be others.
Six patterns. Six different expressions of confidence. Each one has delegated the thinking somewhere — to vendors, to the model, to expertise, to momentum, to process, to inertia. Each delegation feels like a choice. Each produces the same result: someone who has stopped questioning in exactly the place they should not have stopped.
Each has gone deep enough to feel sure. That feeling is where scrutiny stops.
The confidence trap is supposed to be trusting AI too much. The one that will actually catch technical leaders is subtler. It is the certainty itself — arrived at through different routes, but producing the same blind spot.
In an enterprise, confidence is currency. Uncertainty reads as weakness. But there is a distinction worth making.
Two kinds of uncertainty. One is not knowing things and not knowing which things you do not know. That is weakness. That is what each of these patterns has in common, underneath the performance of confidence.
The other kind is knowing precisely where the unknowns are.
The seams — the places where the system transitions, where assumptions have not been tested, where what AI does and what the organisation needs have quietly diverged. The seam between a model's output and a production decision. Between a pilot's performance and what happens at scale. Between what the system was designed to do and what it is actually being asked to do. These are the places that reward attention — and the first places certainty papers over.
The AI failures that will end careers are not the ones where someone said we do not fully understand this yet. They are the ones where someone was certain in exactly the place they should not have been.
The leaders who come through this period well will not be the best buyers, the fastest builders, the deepest technicians, or the most enthusiastic adopters. They will be the ones who held genuine uncertainty — and worked it. Who understood the seams — and let them breathe rather than papering over them with performed confidence. That is not hesitation. It is the steadiest position available.
The trap is not AI. It is the certainty.
Sheldon Kopp was a psychotherapist whose work was formative for me — a list of truths at the back of one of his books stayed on my wall through most of my growing up. One line that has only grown truer with time: all important decisions must be made on the basis of insufficient data. The one that followed left nowhere to hide: yet we are responsible for everything we do. No excuses will be accepted.
Uncertainty does not dissolve accountability. It is accountability — the real kind, not the performed kind.
Sthir, Maya, Karma asked what it means to build when the ground is shifting. This is the same question, one level up.
References and further reading
The piece that prompted this
de Bruin, E. (2026, May 4). The CTO's Extinction Event. The CTO Substack. ctosub.com
On critical thinking and AI
Lee, H-P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The Impact of Generative AI on Critical Thinking. CHI '25, Yokohama, Japan. ACM. doi.org
Enterprise AI adoption
Workplace Intelligence / Writer (2026). Enterprise AI Adoption in 2026. writer.com
On decisions and accountability
Kopp, S. (1972). If You Meet the Buddha on the Road, Kill Him! Science and Behavior Books. The Eschatological Laundry List.