Monthly Archives: October 2025

After the AI boom: what might we be left with?

Some argue that even if the current AI boom leads to an overbuild, it might not be a bad thing – just as the dotcom bubble left behind the internet infrastructure that powered later decades of growth.

It’s a tempting comparison, but the parallels only go so far.

The dotcom era’s overbuild created durable, open infrastructure – fibre networks and interconnects built on open standards like TCP/IP and HTTP. Those systems had multi-decade lifespans and could be reused for whatever came next. Much of the fibre laid in the 1990s still carries traffic today, upgraded simply by swapping out the electronics at each end. That overinvestment became the backbone of broadband, cloud computing, and the modern web.

Most of today’s AI investment, by contrast, is flowing into proprietary, vertically integrated systems rather than open, general-purpose infrastructure. The bulk of the money is going into incredibly expensive GPUs with a useful life of only 1–3 years: they become obsolete quickly and wear out under constant, high-intensity use. These chips aren’t general-purpose compute engines; they’re purpose-built for training and running generative AI models, tuned to the specific architectures and software stacks of a few major vendors such as Nvidia, Google, and Amazon.

These chips live inside purpose-built AI data centres – engineered for extreme power density, advanced cooling, and specialised networking. Unlike the general-purpose facilities of the early cloud era, these sites are tightly coupled to the hardware and software of whoever built them. Together, they form a closed ecosystem optimised for scale but hard to repurpose.

That’s why, if the AI bubble bursts, we could just be left with a pile of short-lived, highly specialised silicon and silent cathedrals of compute – monuments to a bygone era.

The possible upside

Still, there’s a more positive scenario.

If investment outruns demand, surplus capacity could push prices down, just as the post-dotcom bandwidth glut did in the early 2000s. Cheap access to this kind of compute might open the door for new experimentation – not just in generative AI, but in other high-compute domains such as simulation, scientific research, and data-intensive analytics. Even if the hardware is optimised for GenAI, falling prices could still make large-scale computation more accessible overall. A second-hand market in AI hardware could emerge, spreading access to powerful compute much more widely.

The supporting infrastructure – power grid upgrades, networking, and edge facilities – should remain useful regardless. And even if some systems are stranded, the talent, tooling, and operational experience built during the boom will persist, just as they did after the dotcom crash.

Without openness, the benefits stay locked up

The internet’s long-term value came not just from cheap capacity, but from open standards and universal access. Protocols like TCP/IP and HTTP meant anyone could build on the same foundations, without permission or platform lock-in. That openness turned surplus infrastructure into a shared public platform, unlocking decades of innovation far beyond what the original investors imagined.

The AI ecosystem is the opposite: powerful but closed. Its compute, models, and APIs are owned and controlled by a handful of vendors, each defining their own stack and terms of access. Even if hardware becomes cheap, it won’t automatically become open. Without shared standards or interoperability, any overbuild risks remaining a private surplus rather than a public good.

So the AI boom may not leave behind another decades-long backbone like the internet’s fibre networks. But it could still seed innovation if the industry finds ways to open up what it’s building – turning today’s private infrastructure into tomorrow’s shared platform.

Update: This post has received quite a lot of attention on Hacker News. Link to comments if you enjoy that sort of thing. Also, hi everyone 👋, I’ve written a fair bit of other stuff, on AI among other things, if you’re interested.

On “Team dynamics after AI” and the Illusion of Efficiency

This is one of the most important pieces of writing I’ve read on AI – and that’s not the kind of thing I say lightly. If you’re leading in a business right now and looking at AI adoption, it’s worth your full attention.

Duncan Brown’s Team dynamics after AI isn’t about model performance or the usual surface-level debates. It’s about the potential for AI to quietly reshape the structure and dynamics of teams – how work actually gets done.

He shows how the promise of AI enabling smaller teams (“small giants”) and individuals taking on hybrid roles can lead organisations to blur boundaries, remove friction and assume they can do more with less. But when that happens, you lose feedback loops and diversity of perspective, and start to erode the structural foundations that quietly hold alignment together and make teams effective.

He also points to something I’ve been saying for a while – that AI doesn’t necessarily make us more productive, it can just make us busier. More output, more artefacts, more noise – but not always more value.

Here lies the organisational risk. The system starts to drift. Decisions narrow. Learning slows. More artefacts get produced, but they create more coordination and interpretation work, not less. The subtle structures that keep context and coherence together begin to thin out. Everything looks efficient – right up until it isn’t.

A bit like what happened with Nike: they optimised for the short term and de-emphasised the harder, slower work that built long-term brand strength. It seemed to work at first, but the damage wasn’t visible until it was too late, and it’ll now take them years to build back.

It’s also written by someone who’s been deep in the trenches – leading engineering at the UK Gov’s AI incubator – so not your usual ill-informed AI commentator.

And as a massive Ian MacKaye/Fugazi fan and a lapsed skateboarder, it honestly feels like another me wrote it.

Essential reading. It’s a long read – get a brew and a quiet 15 minutes.

Why AI won’t work as a software development abstraction

The idea of LLMs as a new abstraction layer for software development keeps coming up. On the surface it sounds appealing: just as compilers turn source into binaries, AI could turn prompts into systems. You store the prompts, they become the source of truth, the AI generates the code, and the code just becomes an artefact.
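
To make that concrete, the proposed workflow is roughly a build step – a purely hypothetical sketch, where llm_generate stands in for whatever model API you’d actually call:

```python
# Hypothetical "prompts as source of truth" build step - a sketch of the
# proposed abstraction, not a real tool.
from pathlib import Path


def llm_generate(spec: str) -> str:
    """Stand-in for a model API call; imagine generated code coming back."""
    return f"# (code generated from a {len(spec)}-character prompt)\n"


def build(prompt_dir: str, out_dir: str) -> None:
    """The 'compiler' step: regenerate every code artefact from its prompt."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for prompt_file in sorted(Path(prompt_dir).glob("*.prompt")):
        spec = prompt_file.read_text()   # the prompt is the source of truth
        code = llm_generate(spec)        # generation plays the role of compilation
        (Path(out_dir) / f"{prompt_file.stem}.py").write_text(code)  # artefact
```

Edit the prompt, rerun the build, never touch the generated code – that’s the appeal.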

Let’s assume, for the sake of argument, that things like non-determinism and hallucination are solved. There is still a big problem.

Complexity.

Software is never static. Requirements change, and each change adds complexity. Even the best engineers in the world struggle with this – whole disciplines around refactoring, code composition and architecture exist to contain it, and still complexity piles up.

Unless we reach some form of AI superintelligence, well beyond anything today, AI will run into the same problems, probably faster. Entropy builds up, not down.

The only way I can think of around that would be to regenerate the entire codebase (or at least large parts of it) from prompts each time, like a compiler rebuilding from source.

However, that just hits another wall.

By my rough calculations, a mid-size 500k LOC codebase, with today’s LLMs and compute, would take days to regenerate and cost thousands.
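
The back-of-envelope version, for anyone who wants to argue with the numbers – every constant below is an illustrative assumption, not a measured figure:

```python
# Order-of-magnitude check on regenerating a codebase from prompts.
# All constants are illustrative assumptions, not measured figures.

LOC = 500_000          # mid-size codebase
OUT_TOK_PER_LOC = 10   # assumed output tokens per generated line of code
CTX_RATIO = 20         # assumed input (context) tokens per output token
PASSES = 5             # assumed regeneration attempts before it all works
TOK_PER_SEC = 50       # assumed sustained generation speed, single stream
USD_PER_M_IN = 3.0     # assumed input-token price, $ per million
USD_PER_M_OUT = 10.0   # assumed output-token price, $ per million

out_tokens = LOC * OUT_TOK_PER_LOC * PASSES  # 25M output tokens
in_tokens = out_tokens * CTX_RATIO           # 500M input (context) tokens
cost = in_tokens / 1e6 * USD_PER_M_IN + out_tokens / 1e6 * USD_PER_M_OUT
days = out_tokens / TOK_PER_SEC / 86_400     # generation-bound wall clock;
                                             # dependency order limits parallelism

print(f"~${cost:,.0f} and ~{days:.1f} days per full regeneration")
# -> roughly $1,750 and ~5.8 days with these assumptions
```

Tweak the constants however you like – the orders of magnitude are stubborn.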

Software development depends on feedback loops measured in seconds or minutes, not hours or days.

And this points to a natural physical law: processing information always carries an energy cost. You can’t avoid it, only shift it – in this case, from human cognitive effort to machine compute cycles. (Landauer’s principle makes the floor concrete: erasing even a single bit of information costs at least kT ln 2 of energy.) And today, the machine version would be far less efficient.

tl;dr You can’t beat the 2nd law of thermodynamics.

DORA 2025 AI-assisted dev report: Some Benefit, Most Don’t

The recent DORA 2025 State of AI-Assisted Software Development report suggests that, today, only a small minority of the industry are likely to benefit from AI-assisted coding – and more importantly, avoid doing themselves harm.

The report groups teams into seven clusters to show how AI-assisted coding is shaping delivery. Only two – 6 (“Pragmatic performers”) and 7 (“Harmonious high-achievers”) – are currently benefitting.

They’re increasing throughput without harming stability – no rise in change failure rate (CFR), i.e. they’re not seeing significantly more production bugs, which would otherwise hurt customers and create additional (re)work.

For the other clusters, AI mostly amplifies existing problems. Cluster 5 (Stable and methodical) will only benefit if they change how they work. Clusters 1–4 (the majority of the industry) are likely to see more harm than good – any gains in delivery speed are largely cancelled out by a rise in CFR, as the report explains.

The report shows 40% of survey respondents fall into clusters 6 and 7. Big caveat though: DORA’s data comes from teams already familiar with DORA and modern practices (even if not applying them fully). Across the wider industry, the real proportion is likely *half that or less*.

That means around three-quarters of the industry, and likely more, are not yet in a position to realistically benefit from AI-assisted coding.

For leaders, it’s less about whether to adopt AI-assisted coding, and more about whether your ways of working are good enough to turn it into an asset, rather than a liability.