Why AI won’t work as a software development abstraction

The idea of LLMs as a new abstraction layer for software development keeps coming up. On the surface it sounds appealing: just as compilers turn source into binaries, AI could turn prompts into systems. The prompts become the source of truth; the AI generates the code, and the code is just a build artefact.

Let’s assume, for the sake of argument, that things like non-determinism and hallucination are solved. There is still a big problem.

Complexity.

Software is never static. Requirements change, and each change adds complexity. Even the best engineers in the world struggle with this – whole disciplines around refactoring, code composition and architecture exist to contain it, and still complexity piles up.

Unless we reach some form of AI superintelligence, well beyond anything today, AI will run into the same problems, probably faster. Entropy builds up, not down.

The only way I can think of around that would be to regenerate the entire codebase (or at least large parts of it) from prompts each time, like a compiler rebuilding from source.

However, that just hits another wall.

By my rough calculations, regenerating a mid-size 500k LOC codebase with today’s LLMs and compute would take days and cost thousands of dollars.
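Here’s the shape of that arithmetic as a Python sketch. Every constant in it (tokens per LOC, context per file, pricing, throughput, number of passes) is an illustrative assumption rather than a measurement, so treat the output as an order-of-magnitude figure:

    # Back-of-envelope estimate for regenerating a 500k LOC codebase
    # from prompts. Every constant is an illustrative assumption --
    # swap in your own model pricing and throughput figures.

    LOC = 500_000                   # codebase size
    TOKENS_PER_LOC = 10             # output tokens per generated line
    FILES = 5_000                   # assuming ~100 LOC per file
    CONTEXT_PER_FILE = 50_000       # prompt/spec tokens re-read per file
    PASSES = 3                      # agentic retries and refinement passes

    output_tokens = LOC * TOKENS_PER_LOC * PASSES      # ~15M generated
    input_tokens = FILES * CONTEXT_PER_FILE * PASSES   # ~750M read

    USD_PER_INPUT_TOKEN = 3 / 1_000_000    # rough frontier-model pricing
    USD_PER_OUTPUT_TOKEN = 15 / 1_000_000

    cost = (input_tokens * USD_PER_INPUT_TOKEN
            + output_tokens * USD_PER_OUTPUT_TOKEN)

    TOKENS_PER_SEC = 50   # one generation stream
    STREAMS = 10          # useful parallelism, limited by file dependencies

    hours = output_tokens / TOKENS_PER_SEC / STREAMS / 3600

    print(f"~${cost:,.0f} and ~{hours:.0f}h of raw generation per rebuild")
    # -> ~$2,475 and ~8h; add build/test/retry loops and you're into days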

Software development depends on feedback loops measured in seconds or minutes, not hours or days.

And this points to a natural physical law: processing information always carries an energy cost. You can’t avoid it, only shift it; in this case, from human cognitive effort to machine compute cycles. And today, the machine version is far less efficient.

tl;dr: You can’t beat the second law of thermodynamics.

2 thoughts on “Why AI won’t work as a software development abstraction”

  1. Matt Collins

    Nice article, Rob! It’s something I’ve been wondering about, too.

    Regarding the feasibility of having coding agents regenerate codebases, I guess we can reasonably expect the cost of doing so to reduce significantly over time.

    Suppose the cost of rebuilding a 500k LOC codebase comes down to tens of dollars. And suppose we only do it periodically rather than for every change?

    I wonder if that starts to look much more feasible?

    Something else I wonder about, though, is whether the resultant system would inevitably end up looking and/or behaving a bit differently each time and whether that would be a deal-breaker.

    (Also, the data and code would need to be kept in sync somehow. But perhaps the rewrites could avoid changing data schema.)

    1. Rob (post author)

      “Suppose” is doing a lot of heavy lifting there. Of course it becomes more feasible under those assumptions, but getting there would require a ~500x improvement in speed and cost.
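      A rough sanity check on that factor, using round, purely illustrative numbers rather than measured ones:

          # Sanity check on the ~500x figure; both scenarios use
          # round, illustrative numbers, not measurements.
          current_cost_usd = 5_000          # "thousands of dollars" per rebuild
          target_cost_usd = 10              # Matt's "tens of dollars"
          print(current_cost_usd / target_cost_usd)   # 500x on cost

          current_time_s = 2 * 24 * 3600    # "days" per rebuild
          target_time_s = 5 * 60            # a minutes-scale feedback loop
          print(current_time_s // target_time_s)      # 576x on speed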
