Why AI won’t work as a software development abstraction

The idea of LLMs as a new abstraction layer for software development keeps coming up. On the surface it sounds appealing. Just as compilers turn source into binaries, AI could turn prompts into systems. You store the prompts as the source of truth, AI generates the code, and the code becomes just a build artefact.

Let’s assume, for the sake of argument, that things like non-determinism and hallucination are solved. There is still a big problem.

Complexity.

Software is never static. Requirements change, and each change adds complexity. Even the best engineers in the world struggle with this – whole disciplines around refactoring, code composition and architecture exist to contain it, and still complexity piles up.

Unless we reach some form of AI superintelligence, well beyond anything today, AI will run into the same problems, probably faster. Entropy builds up, not down.

The only way I can think of around that would be to regenerate the entire codebase (or at least large parts of it) from prompts each time, like a compiler rebuilding from source.

However, that just hits another wall.

By my rough calculations, regenerating a mid-size 500k LOC codebase with today’s LLMs and compute would take days and cost thousands of dollars.
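
For a rough sense of where numbers like that could come from, here is a back-of-envelope sketch. Every figure in it – tokens per line, files, context fed per file, throughput, per-token prices – is an assumption for illustration, not a measured value.

```python
# Back-of-envelope estimate for regenerating a 500k LOC codebase from prompts.
# Every constant below is an assumption for illustration, not a measured figure.

LINES_OF_CODE = 500_000
TOKENS_PER_LINE = 10               # assumed average output tokens per line of code
FILES = 5_000                      # assumed ~100-line files
CONTEXT_TOKENS_PER_FILE = 200_000  # assumed spec + already-generated code fed per file

OUTPUT_TOKENS = LINES_OF_CODE * TOKENS_PER_LINE   # ~5M tokens generated
INPUT_TOKENS = FILES * CONTEXT_TOKENS_PER_FILE    # ~1B tokens read as context

GEN_TOK_PER_SEC = 50         # assumed sequential generation speed
PREFILL_TOK_PER_SEC = 2_000  # assumed prompt-processing speed
PRICE_IN_PER_M = 3.0         # assumed $ per 1M input tokens
PRICE_OUT_PER_M = 15.0       # assumed $ per 1M output tokens

hours = (OUTPUT_TOKENS / GEN_TOK_PER_SEC + INPUT_TOKENS / PREFILL_TOK_PER_SEC) / 3600
dollars = INPUT_TOKENS / 1e6 * PRICE_IN_PER_M + OUTPUT_TOKENS / 1e6 * PRICE_OUT_PER_M

# With these assumptions: roughly a week of sequential generation and a few
# thousand dollars in API cost per full rebuild.
print(f"~{hours / 24:.1f} days of sequential generation, ~${dollars:,.0f} in API cost")
```

The exact figures matter less than the shape of the problem: keeping the regenerated files consistent with each other means re-reading large amounts of context for every file, and that context cost dominates both the time and the bill.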

Software development depends on feedback loops measured in seconds or minutes, not hours or days.

And this points to a natural physical law – processing information always carries an energy cost – you can’t avoid it, only shift it.

In this case, it shifts from human cognitive effort to machine compute cycles. And today, the machine version would be far less efficient.

tl;dr You can’t beat the 2nd law of thermodynamics.
