Monthly Archives: October 2025

You’re probably listening to the wrong people about AI Coding

Unsurprisingly, there are a lot of strong opinions on AI assisted coding. Some engineers swear by it. Others say it’s dangerous. And of course, as is the way with the internet, nuanced positions get flattened into simplistic camps where everyone’s either on one side or the other.

A lot of the problem is that people aren’t arguing about the same thing. They’re reporting different experiences from different vantage points.

I’ve sketched a chart to illustrate the pattern I’m seeing. It’s not empirical, just observational – and before the camps start arguing about it: yes, reality is more nuanced than this. It’s still an oversimplified generalisation.

The yellow line shows perceived usefulness of AI coding tools. The blue line shows the distribution of engineering competence. The green dotted line shows what the distribution would look like if we went by how experienced people say they are.

Different vantage points

Look at the first peak on the yellow line. A lot of less experienced and mediocre engineers likely think these tools are brilliant. They’re producing more code, feeling productive. The problem is they don’t see the quality problems they’re creating. Their code probably wasn’t great before AI came along. Most code is crap. Most developers are mediocre, so it’s not surprising this group is enthusiastic about tools that help them produce more (crap) code faster.

Then there’s a genuinely experienced cohort. They’ve lived with the consequences of bad code and learnt what good code looks like. When they look at AI-generated code, they see technical debt being created at scale. Without proper guidance, AI-generated code is pretty terrible. Their scepticism is rational. They understand that typing isn’t the bottleneck, and that speed without quality just creates expensive problems.

Calling these engineers resistant to change is lazy and unfair. They’re not Luddites. They’re experienced enough to recognise what they’re seeing, and what they’re seeing is a problem.

But there’s another group at the far end of the chart. Highly experienced engineers working with modern best practices – comprehensive automated tests, continuous delivery, disciplined small changes. Crucially, they’ve also learned how to work with AI tools using those practices. They’re getting productivity gains without sacrificing quality. They’re also highly aware that typing is not the bottleneck, so they’re not quite as enthusiastic as our first cohort.

Interestingly, I’ve regularly seen sceptical experienced engineers change their view once they’ve been shown how you can blend modern/XP practices with AI assisted coding.

Why the discourse is broken

When someone from that rare disciplined expert group writes enthusiastically about AI tools, it’s easy to assume their experience is typical. It isn’t. Modern best practices are rare. Most teams don’t deploy to production multiple times per day. Most codebases don’t have comprehensive automated tests. Most engineers don’t work in small validated steps with tight feedback loops.

Meanwhile, the large mediocre majority is also writing enthusiastically about these tools, but they’re amplifying dysfunction. They’re creating problems that others will need to clean up later. That’s most of the industry.

And the experienced sceptics – the people who can actually see the problems clearly – are a small group whose warnings get dismissed as resistance to change.

The problem of knowing who to listen to

When you read enthusiastic takes on AI tools, is that coming from someone with comprehensive tests and tight feedback loops, or from someone who doesn’t know what good code looks like? Both sound confident. Both produce content.

When someone expresses caution, are they seeing real problems, or are they just resistant to change?

The capability perception gap – that green dotted line versus reality – means there are probably far fewer people with the experience and practices to make reliable claims than are actually making them. And when you layer on the volume of hype around AI tools, it becomes nearly impossible to filter for signal.

The loudest voices aren’t necessarily the most credible ones. The most credible voices – experienced engineers with rigorous practices – are drowned out by sheer volume from both the mediocre majority and the oversimplified narratives that AI tools are either revolutionary or catastrophic.

We’re not just having different conversations. We’re having them in conditions where it’s genuinely hard to know whose experience is worth learning from.

After the AI boom: what might we be left with?

Some argue that even if the current AI boom leads to an overbuild, it might not be a bad thing – just as the dotcom bubble left behind the internet infrastructure that powered later decades of growth.

It’s a tempting comparison, but the parallels only go so far.

The dotcom era’s overbuild created durable, open infrastructure – fibre networks and interconnects built on open standards like TCP/IP and HTTP. Those systems had multi-decade lifespans and could be reused for whatever came next. Much of the fibre laid in the 1990s still carries traffic today, upgraded simply by swapping out the electronics at each end. That overinvestment became the backbone of broadband, cloud computing, and the modern web.

Most of today’s AI investment, by contrast, is flowing into proprietary, vertically integrated systems rather than open, general-purpose infrastructure. Much of the money is being spent on incredibly expensive GPUs with a 1–3 year useful lifespan: they become obsolete quickly and wear out under constant, high-intensity use. These chips aren’t general-purpose compute engines; they’re purpose-built for training and running generative AI models, tuned to the specific architectures and software stacks of a few major vendors such as Nvidia, Google, and Amazon.

These chips live inside purpose-built AI data centres – engineered for extreme power density, advanced cooling, and specialised networking. Unlike the general-purpose facilities of the early cloud era, these sites are tightly coupled to the hardware and software of whoever built them. Together, they form a closed ecosystem optimised for scale but hard to repurpose.

That’s why, if the AI bubble bursts, we could just be left with a pile of short-lived, highly specialised silicon and silent cathedrals of compute – monuments from a bygone era.

The possible upside

Still, there’s a more positive scenario.

If investment outruns demand, surplus capacity could push prices down, just as the post-dotcom bandwidth glut did in the early 2000s. Cheap access to this kind of compute might open the door for new experimentation – not just in generative AI, but in other high-compute domains such as simulation, scientific research, and data-intensive analytics. Even if the hardware is optimised for GenAI, falling prices could still make large-scale computation more accessible overall. A second-hand market in AI hardware could emerge, spreading access to powerful compute much more widely.

The supporting infrastructure – power grid upgrades, networking, and edge facilities – will hopefully remain useful regardless. And even if some systems are stranded, the talent, tooling, and operational experience built during the boom will persist, as it did after the dotcom crash.

Without openness, the benefits stay locked up

The internet’s long-term value came not just from cheap capacity, but from open standards and universal access. Protocols like TCP/IP and HTTP meant anyone could build on the same foundations, without permission or platform lock-in. That openness turned surplus infrastructure into a shared public platform, unlocking decades of innovation far beyond what the original investors imagined.

The AI ecosystem is the opposite: powerful but closed. Its compute, models, and APIs are owned and controlled by a handful of vendors, each defining their own stack and terms of access. Even if hardware becomes cheap, it won’t automatically become open. Without shared standards or interoperability, any overbuild risks remaining a private surplus rather than a public good.

So the AI boom may not leave behind another decades-long backbone like the internet’s fibre networks. But it could still seed innovation if the industry finds ways to open up what it’s building – turning today’s private infrastructure into tomorrow’s shared platform.

Update: This post has received quite a lot of attention on Hacker News. Link to comments if you enjoy that sort of thing. Also, hi everyone 👋 – I’ve written a fair bit of other stuff on AI, among other things, if you’re interested.

On “Team dynamics after AI” and the Illusion of Efficiency

This is one of the most important pieces of writing I’ve read on AI – and that’s not the kind of thing I say lightly. If you’re leading in a business right now and looking at AI adoption, it’s worth your full attention.

Duncan Brown’s Team dynamics after AI isn’t about model performance or the usual surface-level debates. It’s about the potential for AI to quietly reshape the structure and dynamics of teams – how work actually gets done.

He shows how the promise of AI enabling smaller teams (“small giants”) and individuals taking on hybrid roles can lead organisations to blur boundaries, remove friction and assume they can do more with less. But when that happens, you lose feedback loops and diversity of perspective, and start to erode the structural foundations that quietly hold alignment together and make teams effective.

He also points to something I’ve been saying for a while – that AI doesn’t necessarily make us more productive, it can just make us busier. More output, more artefacts, more noise – but not always more value.

Here lies the organisational risk. The system starts to drift. Decisions narrow. Learning slows. More artefacts get produced, but they create more coordination and interpretation work, not less. The subtle structures that keep context and coherence together begin to thin out. Everything looks efficient – right up until it isn’t.

A bit like what happened with Nike: they optimised for the short-term and de-emphasised the harder, slower work that built long-term brand strength. It seemed to work at first, but the damage wasn’t visible until it was too late and it’ll now take them years to build back.

It’s also written by someone who’s been deep in the trenches – leading engineering at the UK Gov’s AI incubator, so not your usual ill-informed AI commentator.

And as a massive Ian MacKaye/Fugazi fan and a lapsed skateboarder, it honestly feels like another me wrote it.

Essential reading. It’s a long read – get a brew and a quiet 15 minutes.

Why AI won’t work as a software development abstraction

The idea of LLMs as a new abstraction layer for software development keeps coming up. On the surface it sounds appealing. Just as compilers turn source into binaries, AI could turn prompts into systems. You store the prompts as the source of truth, the AI generates the code, and the code itself becomes just an artefact.

Let’s assume, for the sake of argument, that things like non-determinism and hallucination are solved. There is still a big problem.

Complexity.

Software is never static. Requirements change, and each change adds complexity. Even the best engineers in the world struggle with this – whole disciplines around refactoring, code composition and architecture exist to contain it, and still complexity piles up.

Unless we reach some form of AI superintelligence, well beyond anything today, AI will run into the same problems, probably faster. Entropy builds up, not down.

The only way I can think of around that would be to regenerate the entire codebase (or at least large parts of it) from prompts each time, like a compiler rebuilding from source.

However, that just hits another wall.

By my rough calculations, regenerating a mid-size 500k LOC codebase with today’s LLMs and compute would take days and cost thousands.
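For the curious, here’s a minimal back-of-envelope sketch of where those rough numbers come from. Every figure in it – tokens per line, generation speed, chunk count, context size, prices – is an assumption for illustration rather than a measurement, but it shows how quickly you land in the “days and thousands” range:

```python
# Back-of-envelope estimate of regenerating a 500k LOC codebase from prompts.
# Every number below is an illustrative assumption, not a measurement.

LOC = 500_000
TOKENS_PER_LOC = 10                  # assumed average tokens per line of code
OUTPUT_TOKENS = LOC * TOKENS_PER_LOC # ~5M tokens of code to generate

GEN_SPEED_TOK_PER_S = 50             # assumed sustained generation speed, single stream
RETRY_FACTOR = 2                     # assumed overhead for retries and revisions

gen_seconds = OUTPUT_TOKENS * RETRY_FACTOR / GEN_SPEED_TOK_PER_S
print(f"Generation time: ~{gen_seconds / 3600:.0f} hours (~{gen_seconds / 86400:.1f} days)")

# Each regenerated chunk also needs its prompts, specs and dependency context re-read.
CHUNKS = 5_000                       # assumed number of files/chunks generated
CONTEXT_TOKENS_PER_CHUNK = 100_000   # assumed prompt + spec + context per chunk
INPUT_TOKENS = CHUNKS * CONTEXT_TOKENS_PER_CHUNK * RETRY_FACTOR

INPUT_PRICE_PER_M = 3.0              # assumed $ per million input tokens
OUTPUT_PRICE_PER_M = 15.0            # assumed $ per million output tokens

cost = (INPUT_TOKENS / 1e6) * INPUT_PRICE_PER_M + \
       (OUTPUT_TOKENS * RETRY_FACTOR / 1e6) * OUTPUT_PRICE_PER_M
print(f"API cost: ~${cost:,.0f}")
```

With these (generous) assumptions it comes out at roughly two to three days of generation and a few thousand dollars per rebuild – and that’s before any verification or fixing of what came out.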

Software development depends on feedback loops measured in seconds or minutes, not hours or days.

And this points to a natural physical law – processing information always carries an energy cost – you can’t avoid it, only shift it.

In this case, the cost shifts from human cognitive effort to machine compute cycles. And today, the machine version would be far less efficient.

tl;dr You can’t beat the 2nd law of thermodynamics.

DORA 2025 AI assisted dev report: Some Benefit, Most Don’t

The recent DORA 2025 State of AI-Assisted Software Development report suggests that, today, only a small minority of the industry are likely to benefit from AI-assisted coding – and more importantly, avoid doing themselves harm.

The report groups teams into seven clusters to show how AI-assisted coding is shaping delivery. Only two – 6 (“Pragmatic performers”) and 7 (“Harmonious high-achievers”) – are currently benefitting.

They’re increasing throughput without harming stability – that is, without an increase in change failure rate (CFR). In other words, they’re not seeing significantly more production bugs, which would otherwise hurt customers and create additional (re)work.

For the other clusters, AI mostly amplifies existing problems. Cluster 5 (“Stable and methodical”) will only benefit if they change how they work. Clusters 1–4 (the majority of the industry) are likely to see more harm than good – any gains in delivery speed are largely cancelled out by a rise in change failure rate, as the report explains.

The report shows 40% of survey respondents fall into clusters 6 and 7. Big caveat though: DORA’s data comes from teams already familiar with DORA and modern practices (even if not applying them fully). Across the wider industry, the real proportion is likely *half that or less*.

That would mean roughly three-quarters of the industry, or more, are not yet in a position to realistically benefit from AI-assisted coding.

For leaders, it’s less about whether to adopt AI-assisted coding, and more about whether your ways of working are good enough to turn it into an asset, rather than a liability.