More with less, or is it more with the same?

The crude clickbait narrative is that AI means job cuts, replacing roles. But when I look at how AI is actually being used in real organisations, it seems more likely to expand capacity than to reduce headcount. Many organisations may end up doing more with the same long before they can credibly do the same (or more) with less.

This thought started for me with an observation: AI is not substituting for whole roles; we’re getting micro-specialists that can do slices of work. In software you see agents for tests, code review, planning. Other sectors look much the same: legal teams using AI for drafting, sales teams for outreach, finance teams for reconciliation. Tools handle tasks, not outcomes, and someone still has to stitch the pieces together.

There are (at least) three forces I can think of that matter when asking whether organisations will genuinely be able to do more with less:

1. How automatable the work already is.
Where the work is rules-based, high volume, and low variation, AI may replace labour in the same way classic automation has. Think claims processing, simple customer support, structured back-office workflows. These functions already lived close to the automation frontier. AI just expands the frontier a bit.
This will reduce headcount, but mostly in places where headcount has been under pressure for decades anyway.

2. How much increased output the organisation can absorb.
Most professional work is not constrained by how fast someone types or drafts. It is constrained by coordination, sequencing, ambiguity, stakeholder alignment, and quality. Software is a good example. So is legal, consulting, product, sales. If you cut the number of lawyers because drafting is faster, you will simply overload the remaining lawyers with negotiation, risk, and client work.

3. The cost and consequences of mistakes.
In many industries, the limiting factor is not productivity, but risk. Healthcare, aviation, finance, law. Increased throughput also increases the risk surface area. If AI increases the probability or cost of an error, you cannot shrink the team. You often need more human oversight, not less.

If you put these together, the more likely outcome is this:

  • Some operational functions will shrink, but these were already at risk of automation.
  • Most knowledge work will shift toward more with the same, not less.
  • Some domains will accidentally create more with more, because oversight and correction absorb the gains.
