“Typing is not the bottleneck” – illustrated

If you’ve followed me for a while, you’ll know how often I say this – especially since the rise of AI-assisted coding. Here’s an example.

This is a cumulative flow diagram from Jira for a real development team. It shows that work spends only around 30% of its time in the value-creation stage (Development). The other 70% is taken up by non-value-adding activities – labour- and time-intensive manual inspection steps such as code reviews, feature and regression testing, and work sitting in idle queues waiting to be reviewed, tested and released.
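
As a rough sketch of the arithmetic – with made-up stage times, not the team’s actual Jira data – flow efficiency is simply the time spent in value-adding stages divided by the total elapsed time:

```python
# Illustrative numbers only – not the team's actual Jira data.
# Flow efficiency = time in value-adding stages / total elapsed time.

stage_hours = {
    "Development": 24,              # the value-creation stage
    "Waiting for code review": 16,
    "Code review": 8,
    "Waiting for test": 16,
    "Feature / regression testing": 12,
    "Waiting for release": 4,
}

value_adding = stage_hours["Development"]
total = sum(stage_hours.values())

print(f"Flow efficiency: {value_adding / total:.0%}")  # 30% in this example
```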

This pattern is the norm. This example isn’t even a bad case (they at least look to be shipping to prod once or twice a week).

If you increase the amount of work in Development – whether by adding more devs or because of GenAI coding – you increase the amount of work going into all the downstream stages as well. More waiting in idle queues, more to test, bigger and riskier releases – most likely resulting in slower overall delivery.

It’s a bit like trying to drive faster on a congested motorway.

The solution is Continuous Delivery – being able to reliably and safely ship to production in tiny chunks, daily or even multiple times a day – one feature, one bug fix at a time. Not having to batch work up because testing and deployment have such a large overhead.

If your chart looks anything like this, you’ll need to eliminate most, if not all, of the non-development bottlenecks – turn the ratio on its head – otherwise you probably won’t see any productivity benefits from GenAI coding.

Is AI about to expose just how mediocre most developers are?

Most code is crap, most developers are mediocre. In the age of AI-assisted coding, that’s a problem for them and the industry.

Like many people, I’ve been of the belief that AI would not replace developers. I still don’t think it will – at least not directly. What it will do is change the economics of the profession by exposing just how much of it is built on mediocrity.

The uncomfortable truth is that most code is crap, and most developers are mediocre. AI-generated code is crap too, but it often matches – and is arguably better than – what many humans produce. When that level of work can be generated instantly, the market value of mediocrity starts to fall.

AI code is slightly less crap

I’ve been looking at codebases generated by Lovable, an AI tool that creates entire applications. I chose Lovable because its output is almost entirely AI-generated, with minimal human input, giving a clearer view of what AI produces. The output is not good, but it’s often no worse than what I’ve seen from human teams throughout my career.

In some ways, it’s better. You don’t get commented-out code left to rot, abandoned TODOs, or outdated and misleading comments. Naming is often better too. It can still be wrong or misleading, but you don’t see the same reliance on generic names like x, data, or processThing that mean nothing to the next person reading the code.

Where Lovable code is no better is in the underlying design. Deeply nested conditional logic, tightly coupled code, duplication, poor separation of concerns, and framework anti-patterns – the same structural flaws that sink most human-written systems.
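
To illustrate the kind of thing I mean – a hypothetical example, not actual Lovable output – here’s the same discount rule written first as deeply nested conditionals, then flattened into guard clauses:

```python
# Hypothetical example of the structural flaw described above – not real Lovable output.
from dataclasses import dataclass


@dataclass
class User:
    is_active: bool
    is_premium: bool


# Deeply nested conditionals: every new rule adds another level of indentation.
def discount_nested(total: float, user: User | None) -> float:
    if user is not None:
        if user.is_active:
            if total > 100:
                if user.is_premium:
                    return total * 0.8
                else:
                    return total * 0.9
            else:
                return total
        else:
            return total
    else:
        return total


# The same rules as guard clauses – each condition handled once, at a single level.
def discount_flat(total: float, user: User | None) -> float:
    if user is None or not user.is_active:
        return total
    if total <= 100:
        return total
    return total * (0.8 if user.is_premium else 0.9)


assert discount_nested(150, User(True, True)) == discount_flat(150, User(True, True)) == 120.0
```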

This isn’t surprising – AI is trained on the code that’s out there, and most of that code is crap.

Crap code is tolerated

Despite the fact most code is crap, the world keeps turning. I’ve seen many founders exit before the cracks in their technology had time to widen enough to hurt them. I’ve seen developers move on long before the consequences of their decisions landed. That time lag has allowed mediocrity to thrive.

Most organisations don’t even recognise what good engineering looks like. They treat software development as a commodity – a manufacturing production line – measured by how many features are shipped rather than whether the right outcomes are achieved. Few understand the value of investing in modern software engineering best practices and design – the things that make those outcomes sustainable.

Slow delivery, high defect rates, and spiralling maintenance costs are tolerated because they’re seen as normal – the way software has always been. The waste is staggering, but invisible to those who have never seen better. And until now, that delay in consequences has made it easier to live with.

With AI, consequences arrive sooner

AI removes that comfort zone. It speeds up the creation of code, but it also accelerates the arrival of the problems in that code. When you produce crap code faster, you hit the wall of maintainability much sooner.

For startups, more will now hit that wall before they reach an exit. In large enterprises or government departments, it could mean critical systems becoming unmaintainable years ahead of budget or replacement cycles.

For mediocre developers, AI is not a lifeline – it won’t make a poor engineer better. It’s matched the floor but not raised the ceiling. It simply lets them churn out crap code faster, so the consequences hit them and their teams sooner.

Mediocre developers are exposed

Mediocre developers – again, the majority of devs – may see themselves as experienced but are really just fast at producing code. For years, many have been able to pass as “senior” because they could churn out more code than less experienced colleagues, even if that code was crap.

With AI assistance becoming the norm, speed of output is no longer a differentiator. AI can match their pace and baseline quality (crap) so their supposed advantage disappears. And because the consequences of their bad code arrive much sooner, the cover that once let them move on or get promoted before their work collapsed under its own weight is gone. Their weaknesses are visible in real time, and their value to employers drops.

Many developers seem content to work in a factory fashion – spoon-fed Jira tickets, avoiding customers and the wider organisation, staying in their insulated bubble, and keeping their heads down just writing (crap) code. Those are the ones most at risk.

Why good engineers will only get more valuable

Good engineers apply modern best practices – automated testing, refactoring, small and frequent releases, continuous delivery – and design systems to stay adaptable under change. They pair this with a product mindset, making technical decisions in service of real user and business outcomes. It’s currently being labelled “product engineering” and talked about as the hot new thing, but it’s essentially agile software development as it was originally intended.

In the AI-assisted era, these aren’t just nice-to-have skills – they’re the only way to get meaningful benefit. Without them, AI simply helps teams create bad software faster.

Funnily enough, AI struggles with crap code just as much as humans do – if not more. That’s not surprising when you remember LLMs are trained on human output. They’re built to mimic human reasoning patterns, so in clean, well-structured code they can do well, but in messy, inconsistent codebases they stumble – sometimes worse than a human would – because they’re tripping over the same poor context we do.

The uncomfortable truth is that very few people in the industry can do this well. By my estimate, perhaps 15-20% of developers have deep, well-rounded engineering experience. Fewer than 10% have worked extensively with modern XP-style practices in a genuinely high-performing environment. Combine those skills with the ability to use AI coding tools effectively, and you might be looking at 5% of the industry currently.

A problem for the industry

Demand for genuinely good engineers will rise, but the supply is nowhere close to meeting it. AI will expose and devalue mediocre developers, yet it cannot replace the skills it reveals as missing. That leaves a gap the industry is not ready to fill.

In the short term, this could cause real delivery problems. AI in the hands of mediocre developers will accelerate the maintenance burden (“technical debt”). Some organisations may even retreat from using AI to help develop software if it becomes clear that it is making their systems harder – and more expensive – to maintain.

I do not know what the solution is. What happens next is unclear. It may take years for enough engineers to gain the depth of experience and the mindset needed to thrive in this environment.

In the meantime, the industry will have to navigate a period where the ability to tolerate mediocrity has fallen, the value of raw output has collapsed, and the expertise needed to replace it is in critically short supply.

No more comfort zone for mediocrity

One thing is certain: the comfort zone is gone. For decades, mediocrity could hide in plain sight, shielded by the slow arrival of consequences. AI removes that shield – and leaves nowhere to hide.

On Entitlement

I expect this won’t go down well, but I feel it needs to be said.

Firstly, bear with me – I want to start by talking about how I ended up in this industry.

I came out of uni heading nowhere. Meandered into a job as a pensions administrator. I was seriously considering becoming an IFA (an independent financial adviser), not out of passion or ambition, just because I didn’t have any better ideas.

Then I got lucky. Someone I knew started a startup – like Facebook for villages (before Facebook existed). I picked up coding again (I’d played around as a kid). From there I blagged a job at another startup as a content editor, writing articles about online shopping. Then I blagged a job at Lycos as a “Web Master”.

Right place, right time. I was lucky. I benefitted from the DotCom boom. I fell on my feet. I still pinch myself every day.

I think about teachers and nurses – low pay, long hours, no real choice about where or how they work. I think about other well-paid knowledge professions – doctors, lawyers, architects – years of education, working brutal hours, often in demanding environments.

Most of the places I’ve worked had food and drinks on tap. Ping pong tables. Games machines. I’ve never had to wear a suit. Most places were progressive, and while the industry doesn’t have a great reputation overall, it’s been far more accommodating of people from different backgrounds, genders, and sexual orientations than many others.

After a long bull run – which peaked post-Covid with inflated salaries and over-promotion – things feel like they are changing.

Being asked to go back into the office a couple of times a week. You can’t just fall into jobs like you used to. And GenAI, of course – currently upending the way we work. A paradigm shift far greater than anything I’ve seen in 25 years of my career.

What we had wasn’t normal. It wasn’t standard. It was unusually good.

We weren’t owed any of this.

We all just got lucky.

“Attention is all you need”… until it becomes the problem

This is an attempt at a relatively non-technical explainer for anyone curious about how today’s AI models actually work – and why some of the same ideas that made them so powerful may now be holding them back.

In 2017, a paper by Vaswani et al., titled “Attention is All You Need”, introduced the Transformer model. It was a genuinely historic paper. There would be no GenAI without it. The “T” in GPT literally stands for Transformer.

Why was it so significant?

“Classical” neural-network-based AI works a bit like playing Snakes & Ladders – processing one step at a time, building up understanding gradually.

Transformers allow every data point (or token) to connect directly with every other. Suddenly, the board looks more like chess – everything is in view, and relationships are processed in parallel. It’s like putting a massive turbocharger on the network.

But that strength is also its weakness.

“Attention” forces every token to compare itself with every other token. As inputs get longer and the model gets larger, the computational cost doesn’t just increase – it grows quadratically. Double the input, and the attention work roughly quadruples.
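
To make the quadratic growth concrete, here’s a minimal sketch of single-head self-attention in plain NumPy – random weights, no training, nothing like a production Transformer. The key point is the n × n score matrix: every token is compared against every other, so doubling the input length quadruples the number of comparisons.

```python
# Minimal single-head self-attention sketch in NumPy – illustrative only.
import numpy as np


def self_attention(x: np.ndarray, seed: int = 0) -> np.ndarray:
    """x has shape (n_tokens, d_model); returns the same shape."""
    _, d = x.shape
    rng = np.random.default_rng(seed)
    # In a real model these projections are learned; here they're just random.
    w_q, w_k, w_v = (rng.standard_normal((d, d)) for _ in range(3))
    q, k, v = x @ w_q, x @ w_k, x @ w_v

    # The (n_tokens x n_tokens) score matrix is where the quadratic cost lives:
    # every token is compared against every other token.
    scores = (q @ k.T) / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v


# Doubling the input length quadruples the number of pairwise comparisons.
for n_tokens in (1_000, 2_000, 4_000):
    print(f"{n_tokens:>5} tokens -> {n_tokens * n_tokens:>10,} pairwise scores")
```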

And throwing more GPUs or more data at the problem doesn’t just give diminishing returns – it can lead to negative returns. This is why, for example, a “mega-model” like ChatGPT 4.5 can perform worse than its predecessor, 4.0, in certain cases. Meta is also delaying its new Llama 4 “Behemoth” model – reportedly due to underwhelming performance, despite huge compute investment.

Despite this, much of the current GenAI narrative still focuses on more: more compute, more data centres, more power – and I have to admit, I struggle to understand why.

Footnote: I’m not an AI expert – just someone trying to understand the significance of how we got here, and what the limits might be. Happy to be corrected or pointed to better-informed perspectives.

GenAI Coding Assistant Best Practice Guides

A constantly updated list of guides and best practices for working with GenAI coding assistants.

These articles provide practical insights into integrating AI tools into your development workflow, covering topics from effective usage strategies to managing risks and maintaining code quality.

Importantly, the authors of all these articles state they are continually updating their content as they learn more and the technology evolves.

There are some books now available on this topic, but they tend to be out of date by the time they are published due to the fast pace of AI development.

Duolingo’s Gerald Ratner Moment?

Duolingo’s AI-first announcement, the backlash, and the backtrack reminded me of how Gerald Ratner destroyed his business overnight.

In April, Duolingo’s CEO, Luis von Ahn, announced a bold shift: the company would become “AI-first,” aiming to replace contractors with AI and making AI proficiency a key performance metric.

The announcement sparked immediate customer backlash. Duolingo’s social media feeds lit up with criticism, as users pushed back against job losses and what they saw as a decline in the quality of the product.

One thing Duolingo had been particularly good at was social media. Their accounts have massive followings, and the Duolingo Owl has become a well-known meme and a much-loved character.

Amid the backlash, they wiped their TikTok and Instagram feeds, replacing everything with cryptic messages. A core brand strength – suddenly gone. The content has since returned, but the damage to the brand was done. It only reinforced the sense that things were unravelling.

Not long after, Luis issued a very public backtrack.

It immediately reminded me of the Gerald Ratner story. In 1991, Ratner, then CEO of a successful UK jewellery chain (also called Ratners), famously joked that his products were “total crap”. The comment destroyed consumer confidence overnight. The business collapsed, and so did his career.

Gerald Ratner at the Institute of Directors, April 1991 – where he called his own products “total crap”

Similarly, Duolingo’s announcement has significantly shifted public perception. Since the AI-first statement, I’ve seen just as many articles and comments claiming Duolingo was never a good tool for learning languages in the first place as I have about the announcement itself (and the subsequent backtrack).

Users are also calling the new AI-generated courses “AI slop” and complaining about the synthetic voices. Maybe some of that is true – but I’d wager it’s being projected onto the old content too.

The key point here is that customer perception has shifted – and, like Ratners, potentially irreversibly.

It also didn’t help that, around the same time, CEO Luis von Ahn suggested in a podcast that schools might eventually serve primarily as childcare centres, with AI doing the teaching. One thing you don’t do is dunk on teachers – a group held in consistently high regard by the public.

Only last week I posted an article on the pitfalls of headcount-first transformations. I didn’t expect it to be so relevant so soon.

This is exactly the kind of outcome you get when you don’t put customers at the heart of your strategy. And when you treat technology as the strategy, rather than a tool to support it, you risk compounding the problem. If you don’t start with purpose, people, and the system around them – AI won’t fix it. It’ll just as likely make things worse.

Developers aren’t afraid of automation

Software developers are not against more automation in their work – quite the opposite.

This image is from the “Tech Manifesto” I put together when I was at 7digital, 12 years ago. One of the principles was: “We prefer not to do the same thing twice”.

The best engineers and teams automate everything that moves – tests, build and deployment, monitoring, alerting, infrastructure provisioning. They use rich IDEs with refactoring tools, code formatters, linters, and even, dare I say it, code generation (which has been around since long before GenAI, by the way).

It’s about reducing toil, eliminating waste, getting fast feedback, and making space to focus on the more meaningful and enjoyable parts of the job.

Things like understanding and solving real-world problems, turning ideas into working software, building useful things. Creating.

Exactly the parts GenAI still isn’t any good at.

Why headcount-led transformations fail

All the fear-mongering about AI taking jobs reminds me of something I’ve seen too often: when organisations go into org change with the goal to reduce headcount, it rarely ends well.

I’ve been part of these exercises. You cut people, but the costs come back in other forms – lost sales, reduced capacity, expensive contractors to plug the gaps. The result? Often a rapid series of transformations, each one trying to fix the damage caused by the last. Org transformation whack-a-mole.

A good industry-wide example was the trend to offshore software development a decade or so back. Sold as a way to cut costs, it often ended up costing more due to hidden overheads, coordination challenges, slow delivery and quality issues. Many organisations quietly reversed course over the following few years.

The reason it doesn’t work? Yes, organisations can be bloated – but that’s usually a *symptom of deeper inefficiencies, not the root cause*.
If you cut people without addressing those inefficiencies, the problems persist – or get worse, because now fewer people are left to deal with the same issues.

The best transformations I’ve seen start with the outcome.

Why do we exist? What are we here to do?

Then look at the system end to end – people, culture, process, communication, technology – and identify the pain points and bottlenecks.

Optimise systematically.

Yes, this can lead to restructuring. Roles change. Some may no longer be needed. But that happens as a consequence of tackling the root causes.

AI? It’s just a tool. It could help. It could just as easily get in the way. Technology is a *fourth-order concern* – purpose, people, and process come first.

If you don’t understand the root causes, if you don’t work from first principles, AI won’t save you. It’ll amplify your dysfunction.


Footnote: There are situations where a headcount-first approach is justified – but these are typically extreme, when an organisation is fighting for immediate survival.

GenAI coding: most teams aren’t ready

All the evidence I see continues to suggest that good engineering discipline is not just desirable, but essential when using GenAI for coding. But that’s exactly what the vast majority of software engineers – and teams – lack.

Take Test-Driven Development (TDD), for example. I keep hearing that one of the most effective ways to stay in control of GenAI output is to take a test-first approach (“Test-Driven Generation”, or TDG, as it’s becoming known) – and, based on experience, I agree. On one hand, I’m excited by the idea of a TDD renaissance. On the other, I saw something recently suggesting only around 1% of code is written that way, and anecdotally, most developers I speak to who say they know TDD don’t actually understand what it is. It’s a clear example of the skills gap we’re dealing with.
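
As an illustration of what that test-first loop looks like in practice, here’s a minimal sketch using pytest. The module and function names (myapp.slugs.make_slug) are hypothetical – the point is that the human writes the failing tests first, then asks the assistant for an implementation and lets the tests decide whether the generated code is acceptable.

```python
# Hypothetical test-first ("TDG") example: these tests are written by a human
# *before* any implementation exists. Module and function names are made up.
import pytest

from myapp.slugs import make_slug  # implementation to be generated next


def test_lowercases_and_hyphenates():
    assert make_slug("Hello World") == "hello-world"


def test_strips_characters_that_are_not_url_safe():
    assert make_slug("Rock & Roll!") == "rock-roll"


def test_collapses_repeated_separators():
    assert make_slug("a  --  b") == "a-b"


def test_rejects_blank_input():
    with pytest.raises(ValueError):
        make_slug("   ")
```

Only once these fail for the right reason do you ask the assistant to generate make_slug, regenerating or refactoring until they pass – the tests, not the model, define “done”.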

TDD aside, everything I see and hear on the ground suggests effective GenAI-assisted development also relies on having comprehensive automated tests and the ability to release frequently in small batches. Many teams have neither. Some have a few tests. Most can only release a few times a month because they rely on long, manual regression cycles – a consequence of their lack of automated test coverage.

The DORA research project suggests only ~19% of software teams globally have the kind of engineering practices in place to potentially capitalise on GenAI coding (their latest report also suggests GenAI coding is putting downward pressure on overall delivery performance, but that’s another story…).

I’m not convinced by arguments that GenAI will improve code quality (vs experienced engineers not using GenAI). The skills gap is part of the problem – but also, studies like GitClear’s earlier this year already show a significant drop in code quality linked to GenAI use.

At the very least, good practices will act as damage limitation.

GenAI coding could be a turning point. But most teams simply aren’t equipped to handle it. And unless that changes – quickly – which seems unlikely given how long these practices have existed without widespread adoption, we’re likely heading for a wave of poor-quality code, delivered at speed.

We need a rise in the voices of techno-realists

GenAI is the hypiest tech I’ve seen in my career – and that’s saying something. Because of all the noise it generates, we need to hear from more grounded, pragmatic voices.

Social media is dominated by extremes: those who see tech as the solution to everything, often without really understanding it – and those whose negativity leads them to dismiss it.

It’s great for engagement, but real progress will come from those in the middle – curious, thoughtful, and focused on outcomes.

In my mind, a techno-realist:

  • Is open-minded, but not easily sold
  • Is curious enough to dig in and understand how things actually work
  • Is conscious of their biases
  • Applies critical thinking
  • Works from evidence
  • Proves by doing
  • Understands that every decision involves trade-offs
  • Takes a systemic view – steps back to see the bigger picture and how things connect
  • Understands that tech is powerful – but not always the answer
  • Sees technology as a means to an end – never the end itself

Social platforms reward loud certainty, not nuanced thoughtfulness.

But these voices – the thoughtful ones – matter more than ever.

If this sounds like you, here’s how I suggest showing up as a techno-realist online:

  • Be polite and constructive – even when you strongly disagree
  • Call out the hype when you see it (but see point above)
  • Amplify grounded voices – like, repost, and comment on thoughtful posts and replies
  • Ask questions – seek to understand, not just to respond
  • Share what you’re learning – especially from real-world experience
  • Connect with and follow others who bring thoughtful, balanced perspectives

Let’s find each other – and make this mindset more visible 🙌

I’ve even added techno-realist to my LinkedIn profile 🫡