Monthly Archives: December 2024

Will the Generative AI bubble start deflating in 2025?

This is a copy of an article I wrote for Manchester Digital. You can read the original here and their full series here.

As we approach 2025, it’s been nearly two years since OpenAI’s GPT-4 launched the generative AI boom. Predictions of radical, transformational change filled the air. Yet these promises remain largely unfulfilled. Instead of reshaping industries, generative AI risks becoming an expensive distraction – one that organisations should approach with caution.

Incremental gains, not transformational change

Beyond vendor-driven marketing and claims from those with vested interests, there are still scant examples of generative AI being deployed with meaningful impact. The most widespread adoption has been in coding and office productivity assistants, commonly referred to as “copilots.” However, the evidence largely suggests that their benefits are limited to marginal gains at best.

Most studies on coding assistants report a modest boost in individual productivity. The findings are similar for Microsoft Copilot: a recent Australian Government study highlighted measurable but limited benefits.

Notably, the study also highlighted training and adoption as significant barriers. Despite the Office 365 suite being in widespread use for years, many organisations still struggle to use it effectively. Learning to craft clear and effective prompts for an LLM presents an even greater challenge: good results rely heavily on the ability to provide precise and well-structured instructions – a skill that requires both practice and understanding.

Busier at busywork?

These tools are good at helping with low-level tasks – writing simple code, drafting documents faster, or creating presentations in less time. However, they don’t address the underlying reasons for performing these tasks in the first place. There’s a real risk they could encourage more busywork rather than meaningful, impactful change. As the old adage in software development goes, “Typing is not the bottleneck.”

All in all, this is hardly the kind of game-changing impact we were promised. 

But they’ll get better, right?

Hitting the wall: diminishing returns

The initial promise of generative AI was that models would continue to get better as more data and compute were thrown at them. However, as many in the industry had predicted, there are clear signs of diminishing returns. According to a recent Bloomberg article, leading AI labs, including OpenAI, Anthropic, and Google DeepMind, are all reportedly struggling to build models that significantly outperform their predecessors.

Hardware may also be becoming a bottleneck. The chip maker Nvidia, which has been at the heart of the AI boom (and got very rich from it), is facing challenges with its latest GPUs, potentially further compounding the industry’s struggles.

Another exponential leap – like the one seen between GPT-3.5 and GPT-4 – currently looks unlikely.

At what environmental and financial costs?

The environmental impact of generative AI cannot be ignored. Training large language models consumes vast amounts of energy, generating a significant carbon footprint. With each new iteration, energy demands have risen exponentially, raising difficult questions about the sustainability of these technologies.

Additionally, current generative AI products are heavily subsidised by investor funding. As these organisations seek to recoup costs, customer prices will undoubtedly rise. OpenAI has already said they aim to double the price of ChatGPT by 2029.

Advice for 2025: Proceed with caution

Generative AI remains a promising technology, but its practical value is far from proven. It has yet to deliver on its transformational promises and there are warning signs it may never do so. As organisations look to 2025, they should adopt a cautious, focused approach. Here are three key considerations:

  1. Focus on strategic value, not busywork
    Generative AI tools can make us faster, but faster doesn’t always mean better. Before adopting a tool, assess whether it helps address high-impact, strategic challenges rather than simply making low-value tasks slightly more efficient.
  2. Thoughtful and careful adoption
    GenAI tools are not plug-and-play solutions. To deploy them effectively, organisations need to focus on clear use cases where they can genuinely add value. Take the time to train employees, not just on how to use the tools but also on understanding their limitations and best use cases.
  3. Avoid FOMO as a strategy
    As technology strategist Rachel Coldicutt highlighted in her recent newsletter, “FOMO Is Not a Strategy”. Rushing to adopt any technology out of fear of being left behind is rarely effective. Thoughtful, deliberate action will always outperform reactive adoption.

Is “computer says maybe” the new “computer says no”?

GenAI and quantum computing feel like they’re pulling us out of an era when computers were reliable: you put in inputs and got consistent, predictable outputs. Now? Not so much.

Both tease us with incredible potential but come with similar problems: they’re unreliable and hard to scale.

Quantum computing works on probabilities, not certainties. Instead of a clear “yes” or “no,” it gives you a “probably yes” or “probably no.”

Generative AI predicts based on patterns in its training data, which is why it can sometimes be wildly wrong or confidently make things up.

We’ve already opened Pandora’s box with GenAI and are having to learn to live with the complexities that come with its unreliability (for now, at least).

Quantum Computing? Who knows when a significant breakthrough may come.

Either way, it feels like we’re potentially entering an era where computers are less about certainty and more about possibility.

Both technologies challenge our trust in what a computer can do, forcing us to consider how we use them and what we expect from them.

So, is “computer says maybe” the future we’re heading towards? What do you think?

My restaurant anecdote: a lesson in leadership

I want to share a story I often use when coaching new leaders – a personal anecdote about a lesson I learned the hard way.

Back when I was at university, I spent a couple of summers working as a waiter in a restaurant. It was a lovely place – a hotel in Salcombe, Devon (UK), with stunning views of the estuary and a sandy beach. It was a fun way to spend the summer.

The restaurant could seat around 80 covers (people). It was divided into sections, and waiters worked in teams to cover each section.

I started as a regular waiter, but was soon promoted to a “station waiter.” In this role, I had to co-ordinate with the kitchen and manage the timing of orders for a particular section. For example, once a table finished their starters, I’d signal the kitchen to prepare their mains.

Being me, I wanted to be helpful to the other waiters. I didn’t want them thinking I wasn’t pulling my weight, so I’d make sure I was doing my bit clearing tables.

Truth be told, I also had a bit of an anti-authority streak – I didn’t like being told what to do, and I didn’t like telling others what to do either.

Then it all went wrong. I ordered a table’s main course before they’d finished their starters. By the time the mains were ready and sitting under the lights on the hotplate, the diners were still halfway through their first course.

If you’ve worked in a kitchen, you’ll know one thing: never piss off a chef.

I was in the shit.

In my panic, I told the other station waiter what had happened. Luckily, they were more quick-witted than me. They told me to explain to the head chef that one of the diners had gone to the toilet, and to keep the food warm.

So I did.

The head chef’s stare still haunts me, but I got away with it.

That’s when I realised what I’d been doing wrong. My section was chaotic. The other waiters were stressed and rushing around, and it was clear that my “helping” wasn’t actually helping anyone.

My job wasn’t to be just another pair of hands; it was to stay at my station, manage the orders, and keep everything running smoothly. I needed to focus on the big picture – keeping track of the checks, working with the kitchen, and directing the other waiters.

Once I got this, it all started to click. People didn’t actually mind being told what to do; in fact, it was what they wanted. They could then focus on doing their jobs without feeling like they also had to panic and rush around.

What are the lessons from this story?

The most common challenge I see with new leaders is the struggle to step out of their comfort zone when it comes to delegating and giving direction.

Leadership is about enabling, not doing. Your primary role isn’t to do the work yourself; it’s to guide, delegate, and create clarity so your team can succeed. Trying to do everything means you’ll miss the big picture and create confusion and stress.

It’s tempting to keep “helping” or to dive into the weeds because it feels safer. But that’s where things start to unravel – and where many new leaders experience their own “oh shit” moment.

And remember, giving direction doesn’t mean micro-managing; it’s about empowering. Set clear priorities, communicate expectations, then step back and allow people to do their jobs.

And yes, sometimes it’s OK to be quite directive – that clarity is often what people need most.

Are GenAI copilots helping us work smarter – or just faster at fixing the wrong problems?

Are GenAI copilots helping us work smarter – or just faster at fixing the wrong problems? Let me introduce you to the concept of failure demand.

The most widespread adoption of GenAI is copilots – Office 365 Copilot and coding assistants. Most evidence suggests they deliver incremental productivity gains for individuals: write a bit more code, draft a doc faster, create a presentation in less time.

But why are you doing those tasks in the first place? This is where the concept of failure demand comes in.

Originally coined by John Seddon, failure demand is the work created when systems, processes, or decisions fail to address root causes. Instead of creating value, you spend time patching over problems that shouldn’t have existed in the first place.

Call centres are a perfect example.

Most call centre demand isn’t value demand (customers seeking products or services). It’s failure demand, caused by unclear communication, broken systems, or unresolved issues.

GenAI might help agents handle calls faster, but the bigger question is: why are people calling at all?

The same applies to all knowledge work. Faster coding or document creation only accelerates failure demand if the root issues (e.g. unclear requirements, poor alignment, unnecessary work) go unaddressed.

Examples:

– Individual speed gains might mask systemic problems, making them harder to spot and fix, and reducing the incentive to do so.

– More documents and presentations could bury teams in information, reducing clarity and alignment.

– More code written faster could overwhelm QA teams or create downstream integration issues.

There’s already evidence that suggests this. The 2024 DORA Report (an annual study of engineering team performance) found that AI coding assistants marginally improved individual productivity but correlated with a downward trend in team performance.

The far bigger opportunity lies in asking:

– Why does this work exist?
– Can we eliminate or prevent it?

Unless GenAI helps address systemic issues, it risks being a distraction. While it might improve individual productivity, it could hurt overall performance.