After the AI boom: what might we be left with?

Some argue that even if the current AI boom leads to an overbuild, it might not be a bad thing – just as the dotcom bubble left behind the internet infrastructure that powered later decades of growth.

It’s a tempting comparison, but the parallels only go so far.

The dotcom era’s overbuild created durable, open infrastructure – fibre networks and interconnects built on open standards like TCP/IP and HTTP. Those systems had multi-decade lifespans and could be reused for whatever came next. Much of the fibre laid in the 1990s still carries traffic today, upgraded simply by swapping out the electronics at each end. That overinvestment became the backbone of broadband, cloud computing, and the modern web.

Most of today’s AI investment, by contrast, is flowing into proprietary, vertically integrated systems rather than open, general-purpose infrastructure. Most of the money is being spent on extremely expensive GPUs with effective lifespans of only one to three years: they become obsolete quickly and wear out under constant, high-intensity use. These chips aren’t general-purpose compute engines; they’re purpose-built for training and running generative AI models, tuned to the specific architectures and software stacks of a few major vendors such as Nvidia, Google, and Amazon.

These chips live inside purpose-built AI data centres – engineered for extreme power density, advanced cooling, and specialised networking. Unlike the general-purpose facilities of the early cloud era, these sites are tightly coupled to the hardware and software of whoever built them. Together, they form a closed ecosystem optimised for scale but hard to repurpose.

That’s why, if the AI bubble bursts, we could just be left with a pile of short-lived, highly specialised silicon and silent cathedrals of compute – monuments to a bygone era.

The possible upside

Still, there’s a more positive scenario.

If investment outruns demand, surplus capacity could push prices down, just as the post-dotcom bandwidth glut did in the early 2000s. Cheap access to this kind of compute might open the door for new experimentation – not just in generative AI, but in other high-compute domains such as simulation, scientific research, and data-intensive analytics. Even if the hardware is optimised for GenAI, falling prices could still make large-scale computation more accessible overall. A second-hand market in AI hardware could emerge, spreading access to powerful compute much more widely.

The supporting infrastructure – power-grid upgrades, networking, and edge facilities – should remain useful regardless. And even if some systems are stranded, the talent, tooling, and operational experience built during the boom will persist, as they did after the dotcom crash.

Without openness, the benefits stay locked up

The internet’s long-term value came not just from cheap capacity, but from open standards and universal access. Protocols like TCP/IP and HTTP meant anyone could build on the same foundations, without permission or platform lock-in. That openness turned surplus infrastructure into a shared public platform, unlocking decades of innovation far beyond what the original investors imagined.

The AI ecosystem is the opposite: powerful but closed. Its compute, models, and APIs are owned and controlled by a handful of vendors, each defining their own stack and terms of access. Even if hardware becomes cheap, it won’t automatically become open. Without shared standards or interoperability, any overbuild risks remaining a private surplus rather than a public good.

So the AI boom may not leave behind another decades-long backbone like the internet’s fibre networks. But it could still seed innovation if the industry finds ways to open up what it’s building – turning today’s private infrastructure into tomorrow’s shared platform.

Update: This post has received quite a lot of attention on Hacker News. Link to comments if you enjoy that sort of thing. Also, hi everyone 👋, I’ve written a fair bit of other stuff on AI, among other things, if you’re interested.

5 thoughts on “After the AI boom: what might we be left with?”

  1. ~chris

Great read. Will the AI boom’s net benefit be a software model more than a physical infrastructure one, like your article suggests happened for the dotcom bubble?

Does MCP become the TCP equivalent in the dotcom comparison? It doesn’t matter if a better one emerges if there is enough drive behind it.

Then again, a bubble will leave loads of GPUs and data centres at a discount – maybe that will be the benefit? What else can we do that uses the same compute infra that would be prohibitively expensive without the AI part? Lmk so I can get into those 😉

    1. Rob Post author

MCP could well be, but by comparison the internet resulted in hundreds of open standards (TCP, IP, HTTP, TLS, DNS, SMTP/POP/IMAP, BGP, etc.); GenAI has produced a handful so far, with only MCP anywhere near widely adopted.

  2. Santiago

    Hey there! Very interesting, made me think. The best kind of blogpost.

    I partially agree, but not 100% because of two things that will be left behind even if the bubble pops: the weights and the techniques.

Future models will have checkpoints to start their training — an analogy closer to the evolution of the brain than to actual training. It’s like saying that AI DNA has improved quite a bit in a short time. Open models are downstream from proprietary ones, but they’re still being pushed along.

Even with the same models, techniques are emerging to make them faster, cheaper, smarter and more versatile just by using them differently during inference. Claude Code is a good example of this: it’s constantly getting better through improved techniques while running the same models.

    Both of these things are knowledge, either opaque knowledge in the form of weights or transparent knowledge in the form of techniques. Knowledge always remains.

    So while I agree with the general point — it’s a good one — I wouldn’t take it as far.

    1. Rob Post author

      I only partially agree with myself half the time! I could be and am happy to be wrong. I write to get stuff out of my head and the conversation (if anyone responds) is often the most interesting part.

  3. Pingback: Technology as Nature
