Faster horses, not trains. Yet

I’ve been trying to work out why recent advances in GenAI models don’t give me much of a wow factor, while others seem genuinely excited.

I use these tools constantly and have done so since GPT-4 was released over two years ago. I couldn’t imagine a world without them. In that sense, they already feel as transformative as the web. However, once a technology becomes ambient, the magic fades. That usually happens with genuinely useful technology: you get used to the improvements and stop noticing them. There’s some truth in that. But the more I’ve thought about it, the more I think there are deeper structural reasons why the experience has plateaued, for me at least.

The lossy interface

All meaningful work starts in a physical, social, constraint-filled environment. We reason with space, time, bodies, artefacts, relationships, incentives, and history. Much of this understanding is tacit. We sense it before we can explain it.

To involve a computer, that reality has to be translated into symbols. Text, files, data models, diagrams, prompts. Every translation step compresses context and/or throws information away. There is loss from brain to keyboard. Loss from keyboard to prompt. Loss from prompt to model. And loss again when the output comes back and has to be interpreted.

GenAI only ever sees what makes it across that boundary. It reasons over compressed representations of reality that humans have already filtered, simplified, and distorted.

Better models reduce friction within that interface, but they don’t change its dimensionality. In that respect it doesn’t really matter how “smart” the models get, or how well they do on the latest benchmarks. The boundary stays the same.

Because of that, GenAI works best where the world is already well-represented in digital form. As soon as outcomes depend on physical capacity, human coordination, or tacit knowledge, its leverage drops sharply.

That is why GenAI helps with slices of work, not whole systems. It is powerful, but fundamentally bounded.

Some real-world examples:

  • In software development, generating code hasn’t been the main bottleneck since we moved away from punch cards. The far bigger constraints are understanding the problem, communicating with stakeholders, working effectively with other people, designing the system, managing risks and trade-offs, and operating systems in complex social environments over time.
  • In healthcare, GenAI can assist with diagnosis or documentation, but outcomes are dominated by staff, facilities, funding, and coordination across complex human systems. Better reasoning does not create more nurses or hospital beds.

In both cases, GenAI accelerates parts of the work without shifting the underlying constraint.

Faster horses, not trains

In that respect, GenAI feels like faster horses rather than trains. It makes us more effective at things we were already doing: writing, coding, analysis, planning, and sense-making. But it operates on parts of systems rather than redefining the system itself.

Trains didn’t just make transport faster. They removed a hard upper bound on the movement of people and goods. Once that constraint moved, everything else reorganised around it. Supply chains, labour markets, cities, timekeeping, and even how people understood distance and work all changed. Railways were not just a tool inside the system, they became the system.

GenAI doesn’t yet do that. It works through a narrow, virtual interface and plugs into existing workflows. It improves articulation, synthesis, and local efficiency, but the real constraints on outcomes sit elsewhere.

What actually changed the world

A recent conversation reminded me of Vaclav Smil’s How the World Really Works, which I read last year. Smil’s work is useful here because it focuses on what’s actually driven the biggest changes in human life, and it isn’t information technology.

Smil highlights that modern civilisation rests on a small number of physical pillars: energy, food production (especially nitrogen), materials like steel and cement, and transport. Changes in these pillars are what led to the biggest transformations in human life. Information technology barely registers at that level in his analysis. He doesn’t deny its importance, but treats it as secondary, an optimiser of systems whose limits are set elsewhere.

Judged through that lens, GenAI doesn’t (yet) register as a civilisation-shaping force. It doesn’t produce energy, grow food, create new materials, or move mass. It operates almost entirely above those pillars, improving coordination, design, and decision-making around systems whose hard limits are set elsewhere.

That does not make it trivial. It explains why GenAI feels powerful and useful in practice. But it also explains why, so far, it looks closer to previous waves of information technology than to steam or electricity. It optimises within existing constraints rather than breaking them.

The big if

Smil’s framing doesn’t say GenAI cannot matter at an industrial scale. It says where it would have to show up.

GenAI becomes civilisation-shaping only if it materially accelerates breakthroughs in those physical pillars. Energy is the obvious one: new sources, better storage, cheaper generation. Materials are another: stronger, lighter, more abundant inputs. Then medicine, fertiliser, manufacturing. Things that change what the world can physically sustain.

This is where “superintelligence” comes in. If GenAI can explore hypothesis spaces humans cannot, design and run experiments, or compress decades of scientific iteration into years, resulting in major scientific breakthroughs, it moves from optimising within constraints to changing them.

What kind of change are we talking about?

This is why I don’t feel the same wow that others might, and why that doesn’t mean they’re wrong.

A lot of the current debate slides between different meanings of “fundamental.” When people talk past each other, it often comes down to this.

If, by fundamental, we mean web-scale change, reshaping workflows, collapsing skill barriers, and changing how work is organised and coordinated, then GenAI is already there. That is real. I feel it myself. I wouldn’t want to work without these tools now.

But if we mean the kind of change associated with the industrial revolution, the comparison most often made (longer lives, better health, radically different working conditions, step changes in material living standards), then what we have today does not qualify. Historically, those shifts followed from breaking physical constraints, not from better information or reasoning alone.

That gap explains both the excitement and the scepticism. The excitement is about what GenAI already does to knowledge work, and about its potential to go much further (superintelligence). The scepticism is about its current limits and whether that potential will ever be realised.

So my lack of wow isn’t indifference, and it isn’t denial. It’s a judgement about category.

GenAI feels as big as the web because, so far, that’s the right comparison. Whether it ever deserves to be spoken about in the same breath as steam or electricity depends on a future that is still conditional, and far from guaranteed.
