GitClear’s latest report indicates GenAI is having a negative impact on code quality

I’ve just been reading GitClear’s latest report on the impact of GenAI on code quality. It’s not good 😢. Some highlights and then some thoughts and implications for everyone below (which you won’t need to be a techie to understand) 👇

Increased code duplication 📋📋

A significant rise in copy-pasted code. In 2024, within-commit copy/paste instances exceeded the number of moved lines for the first time.

Decline in refactoring 🔄 

The proportion of code that was “moved” (suggesting refactoring and reuse) fell below 10% in 2024, a 44% drop from the previous year.

Higher rate of code churn 🔥

Developers are revising newer code more frequently, with only 20% of modified lines being older than a month, compared to 30% in 2020 (suggests poor quality code that needs more frequent fixing).


If you’re not familiar with these code quality metrics, you’ll just need to take my word for it: they’re all very bad.

Thoughts & implications

For teams and organisations

Code that becomes harder to maintain (which all these metrics indicate) results in the cost of change and the rate of defects both going up 📈. As the GitClear report says, short-term gain for long-term pain 😫

But is there any short-term gain? Most good studies suggest the productivity benefits are marginal at best, and some even suggest a negative impact on productivity.

Correlation vs causation

Significant tech layoffs over the same period could also be a factor in some of the decline. Either way, code quality is suffering badly (and GenAI, at the very least, isn’t helping).

For GenAI

  1. Models learn from existing codebases. If more low-quality code is committed to repos, future AI models will be trained on that. This could lead to a downward spiral 🌀 of increasingly poor-quality suggestions (aka “Model Collapse”).
  2. Developers have been among the earliest and most enthusiastic adopters of GenAI, yet we’re already seeing potential signs of quality degradation. If one of the more structured, rule-driven professions is struggling with AI-generated outputs, what does that mean for less rigid fields like legal, journalism, and healthcare?

Building Quality In: A practical guide for QA specialists (and everyone else)

Introduction

I wrote this guide because I’ve struggled to find useful, practical articles to share with QA (Quality Assurance) specialists, testers and software development teams, for how to shift away from traditional testing approaches to defect prevention. It’s also based on what I’ve seen work well in practice.

More than that, it comes from a frustration that the QA role – and the industry’s approach to quality in general – hasn’t progressed as much as it should. Outside of a few pockets of excellence, too many organisations and teams still treat QA as an afterthought.

When QA shifts from detecting defects to preventing them, the role becomes far more impactful. Software quality improves, delivery speeds up, and costs go down.

This is intended as a practical guide for QA specialists who want to move beyond testing into true quality assurance. It’s also intended to be relevant for anyone involved in software development who cares about building quality in.

The QA role hasn’t evolved

Something similar happened to QA as it did with DevOps. At some point, testers were rebranded as QAs, but largely kept doing the same thing. From what I can see, the majority of people with QA in their title are not doing much actual quality assurance.

Inspection doesn’t improve quality, it just measures a lack of it.

Too often, QA is treated as the last step in delivery – developers write code, then chuck it over the wall for testers to find the problems. This is slow, inefficient, and expensive.

Unlike DevOps (which is a collection of practices, culture and tools, not a job title), I believe there’s still a valuable role and place for QA specialists, especially in larger orgs.

QA’s goal shouldn’t be just to find defects, but to prevent them by embedding quality throughout the development process – not just inspecting at the end. In other words, we need to build quality in.

The exponential cost of late defects and delivery bottlenecks

The cost of fixing defects rises exponentially the later they are found. NASA research confirms this (Stecklein et al., 2004, “Error Cost Escalation Through the Project Life Cycle”), but you don’t really need empirical studies to substantiate it – it’s pretty simple:

The later a defect is found, the more resources have been invested. More people have worked on it, and fixing it involves more rework – it’s easy to tweak a requirement early, but rewriting code, redeploying, and retesting is much more expensive. In production, defects impact users, sometimes requiring hotfixes, rollbacks, and firefighting that disrupts everything else. Beyond direct costs, there’s the cumulative cost of delay – the knock-on effect on future work.
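To make the shape of that escalation concrete, here’s a toy model in Python. The stage names and multipliers are purely illustrative assumptions for this sketch – they are not figures from the NASA study – but the “later is costlier” shape is the point:

```python
# Toy model of defect cost escalation. The multipliers below are
# illustrative assumptions, NOT figures from the NASA study: each
# later stage has more invested work to unwind.
STAGE_COST_MULTIPLIER = {
    "requirements": 1,    # tweak a sentence in a document
    "design": 5,          # rework a diagram and a few decisions
    "development": 10,    # rewrite code, re-review
    "testing": 50,        # fix, redeploy, retest
    "production": 150,    # hotfix, rollback, firefight, user impact
}

def fix_cost(stage: str, base_cost: float = 1.0) -> float:
    """Relative cost of fixing one defect caught at a given stage."""
    return base_cost * STAGE_COST_MULTIPLIER[stage]

for stage in STAGE_COST_MULTIPLIER:
    print(f"{stage:>12}: {fix_cost(stage):>6.0f}x")
```

Whatever the exact numbers in your organisation, the curve only bends one way.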

Late-stage testing isn’t just costly – it’s often the biggest bottleneck in delivery. Most teams have far fewer QA specialists/testers than developers, so work piles up at feature testing (right after development) and even more at regression testing. Without automation, regression cycles can take days or even weeks.

As a result, features and releases stall, developers start new work while waiting, and when bugs come back, they’re now juggling fixes alongside new development. It’s an inefficient and expensive way to build software.

The origins of Build Quality In

“Build Quality In” comes from lean manufacturing – the work of W. Edwards Deming and Toyota’s Production System (TPS). Their core message: inspection doesn’t improve quality – it just measures the lack of it. Instead, they focused on preventing defects at the source.

Toyota built quality in by ensuring that defects were caught and corrected as early as possible. Deming emphasised continuous improvement, process control, and removing reliance on inspection. These ideas have shaped modern software development, particularly through lean and agile practices.

Despite these well-established principles, QA and testing in many teams hasn’t moved on as much as it should have.

From gatekeeper to enabler

Quality assurance shouldn’t primarily be a late-stage checkpoint; it should be embedded throughout the development lifecycle. The focus must shift left. Upstream.

This means working closely with product managers, designers, BAs, and developers from the start and all the way through, influencing processes to reduce defects before they happen.

Unless you’re already working this way, it probably means working a lot more collaboratively and proactively than you currently are.

Be involved in requirements early

QA should be part of requirements discussions from the start. If requirements are vague or ambiguous, challenge them. The earlier gaps and misunderstandings are addressed, the fewer defects will appear later.

Ensure requirements are clear, understood and testable

Requirements should be specific, well-defined, and easy to verify. QA specialists should work with the team to make sure everyone is clear, and advise on appropriate automated testing to ensure it’s part of the scope.

Tip: Whilst there are some strong views on the benefit of Cucumber and similar acceptance test frameworks, I’ve found the Gherkin syntax very good for specifying requirements in stories/features, which makes it easier for developers to write automated tests and easier for anyone to take part in manual testing.
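As a sketch of what that looks like in practice, here’s a hypothetical story written in Gherkin-style Given/When/Then, alongside a plain Python test implementing the same scenario. The feature, the one-hour rule, and all the names are invented for illustration:

```python
from datetime import timedelta

# Hypothetical "password reset" story in Gherkin syntax. The feature
# and the one-hour expiry rule are invented for this example.
FEATURE = """
Feature: Password reset
  Scenario: Reset link expires after one hour
    Given a reset link issued 61 minutes ago
    When the user follows the link
    Then they are told the link has expired
"""

LINK_TTL = timedelta(hours=1)

def link_expired(age: timedelta) -> bool:
    """The testable rule the scenario above pins down."""
    return age > LINK_TTL

# Each Gherkin step maps onto a line of plain test code:
def test_reset_link_expires_after_one_hour():
    age = timedelta(minutes=61)      # Given
    expired = link_expired(age)      # When
    assert expired                   # Then

def test_reset_link_still_valid_within_one_hour():
    assert not link_expired(timedelta(minutes=59))
```

Frameworks like Cucumber can bind the Gherkin text to step functions directly, but even without them, a scenario written this way gives developers an unambiguous rule to automate and gives anyone a script for manual testing.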

If those criteria are not met, it’s not ready to start work (and it’s your job to say so). Outside of refinement sessions/discussions, I’m a fan of a quick Three Amigos (QA, Dev, Product) before a developer is about to pick up a new piece of work from the backlog.

Collaborating with developers

QA specialists and developers should collaborate throughout development, not just at the end. This means pairing on tricky areas and automated tests, being available to provide fast feedback (rather than always waiting for work to be moved to e.g. “ready to test”), and having open discussions about risks and edge cases. The earlier QA provides input, the fewer defects make it through.

Encourage effective test automation

QA should help developers think about testability as they write code. Ensure unit, integration, and end-to-end tests are part of the development process, rather than relying on manual testing later. Guide the team on the most suitable tests to implement (see the test pyramid and testing trophy). If a feature isn’t easily testable, that’s a design flaw to address early.
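To illustrate what “testability as a design flaw” can look like, here’s a minimal, hypothetical example: a discount rule that reads the system clock directly is hard to unit test, while the same rule with the time injected is trivial to test (the names and the rule are invented for illustration):

```python
from datetime import datetime, time

def happy_hour_discount_hard_to_test(price: float) -> float:
    # Hidden dependency on the real clock: tests can't control "now",
    # so this function only passes between 5pm and 7pm.
    now = datetime.now().time()
    return price * 0.9 if time(17) <= now < time(19) else price

def happy_hour_discount(price: float, now: time) -> float:
    # Same rule, but "now" is injected, so any test can pick the time.
    return price * 0.9 if time(17) <= now < time(19) else price

def test_discount_applies_during_happy_hour():
    assert happy_hour_discount(10.0, time(18, 0)) == 9.0

def test_no_discount_outside_happy_hour():
    assert happy_hour_discount(10.0, time(12, 0)) == 10.0
```

Spotting hidden dependencies like this early, while the code is still being written, is exactly the kind of input QA can give during development rather than after it.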

Get everyone involved with manual testing

Manual testing shouldn’t be a bottleneck owned solely by QA. Instead of being the sole tester, be the specialist who enables the team. Teach developers and product managers how to test effectively, guiding them on what to look for. (Note: the clearer the requirements, the easier this becomes – good testing starts with well-defined expectations). Having everyone involved in manual testing not only removes bottlenecks and dependencies, it tends to mean everyone cares a lot more about quality.

Embedding Quality into the SDLC

Most teams have a documented SDLC (Software Development Lifecycle). But too often, these are neglected documents – primarily there for compliance, rarely referred to and, at best, reviewed once a year as a tick-box exercise. When this happens, the SDLC fails to serve its actual intended purpose: to enable teams to deliver high-quality software efficiently.

An effective SDLC should emphasise building quality in. If it reinforces the idea that quality is solely QA’s responsibility and that the primary means of achieving it is late-stage testing, it’s doing more harm than good.

QA specialists should work to make the SDLC useful and enabling. This means collaborating with whoever owns it to ensure it focuses on quality at every stage and supports best practices that prevent defects early. It should promote clear requirements, testability from the outset, automation, and continuous feedback loops – not just a final sign-off before release. And importantly, it should be something teams actually use, not just a compliance artefact.

Shifting from reactive to proactive

There are far more valuable things a QA specialist can be doing with their time than manually clicking around on websites: performance testing, exploratory testing, reviewing static analysis, digging into common recurring support issues, accessibility. The list goes on and on. QA should be driving these conversations, ensuring quality isn’t just about finding defects, but about making the entire system stronger.

Quality is a team sport: Fostering a Quality culture

The role of QA specialists should be to ensure everyone sees quality as their responsibility, not something QA owns. I strongly dislike seeing developers treat testing as someone else’s job (did you properly test the feature you worked on before handing it over, or did you rush through it just to move on to the next task?)

Creating a quality culture means fostering a shared commitment to building better software. It’s about educating teams on defect prevention, empowering them with the right tools and practices, and making it easy for everyone to care about quality and be involved.

The value of modern QA specialists

I firmly believe QA specialists still have an important role in modern software teams, especially in larger organisations. Their role isn’t disappearing – but it must evolve faster. The days of QA as manual testers, catching defects at the end of the cycle, should be left behind.

The best QA specialists aren’t testers; they’re quality enablers who shape how software is built, ensuring quality is embedded from the start rather than checked at the end.

This isn’t just better for organisations and teams – it makes the QA role a far richer, more rewarding career. On multiple occasions I’ve seen QA specialists who embody this approach go on to become Engineering Managers, Heads of Engineering and other leadership roles.

The demand for people who drive quality and improve engineering practices isn’t going away. If anything, with the rise of GenAI-generated code – which, as the GitClear report above shows, is having a negative impact on code quality – it’s becoming more critical than ever.

Are we undervaluing the benefit of junior developers?

With the rise of GenAI coding assistants, there’s been a lot of noise about the supposed decline of junior developer roles. Some argue that GenAI can now handle much of the grunt work juniors traditionally did, making them redundant. But this view isn’t just short-sighted – it’s wrong.

For a start, organisations don’t hire juniors simply to offload repetitive tasks to cheaper staff. Juniors are primarily brought in to grow your own senior talent and reduce reliance on external hiring.

But more than that, junior developers contribute far beyond just writing code, and if anything, GenAI only highlights just how valuable they really are.

Developers only spend a small amount of time coding

As I covered in this article, developers spend surprisingly little time coding. It’s a small part of the job. The real work is understanding problems, solving problems, designing solutions, collaborating with others, and making trade-offs. GenAI might be able to generate some code, but it doesn’t replace the thinking, the discussions, and the understanding that go into good software development.

Typing isn’t the bottleneck. I’ve written about this before, but to reiterate – coding is only one part of what developers do. The ability to work through problems, ask the right questions, and contribute to a team is far more valuable than raw coding speed and perhaps even deep technical knowledge (go with boring, common technology and this is less of a problem anyway).

If coding isn’t the bottleneck, and collaboration, problem-solving, and domain knowledge matter more, then the argument against juniors starts to fall apart.

What juniors bring to the table

One of the best examples I’ve seen of this was when we started our Technical Academy at 7digital. One of our first cohort came from our content ingestion team. They’d played around with coding when they were younger, but had never worked as a developer. From day one, they added value – not because they were churning out lines of code, but because they were inquisitive, challenged assumptions, and made the team think harder about their approach. They weren’t bogged down in the ‘this is how we do things’ mindset. (It also helped that they had great industry and domain knowledge, which meant they could connect technical decisions to real business impact in ways that even some of our experienced developers struggled with).

This is exactly what people often under-appreciate about junior developers. In the right environment, curiosity and problem-solving ability are far more important than years of experience. A good junior can:

  • Ask the ‘stupid’ questions that expose gaps in understanding.
  • Challenge established ways of working and provoke fresh thinking.
  • Improve team communication simply by needing clear explanations.
  • Bring insights from other disciplines or domains.
  • Provide mentoring opportunities for other developers (e.g. to gain experience as a line manager/engineering manager).
  • Grow into highly effective engineers who understand both the tech and the business.

GenAI doesn’t replace the learning process

There’s also the issue of long-term talent development. If we cut off junior developer roles, where do our future senior engineers come from? GenAI might make some tasks easier, but it doesn’t replace the learning process that happens when someone grapples with real-world software development (one challenge, however, is ensuring junior devs don’t become over-reliant on GenAI and still develop fundamental problem-solving skills).

Good juniors add more value than we often realise. They bring energy, fresh perspectives, (and even sometimes, domain knowledge) that makes them valuable from day one. In the right environment, they’re not a cost – they’re an investment in better thinking, better collaboration, and ultimately, better software.

Rather than replacing junior developers, GenAI highlights why we need them more than ever. Fresh thinking, collaboration, and the ability to ask the right questions will always matter more than just getting code written.

And that’s precisely why juniors still matter.

A plea to junior developers using GenAI coding assistants

The early years of your career shape the kind of developer you’ll become. They’re when you build the problem-solving skills and knowledge that set apart excellent engineers from average ones. But what happens if those formative years are spent outsourcing that thinking to AI?

Generative AI (GenAI) coding assistants have rapidly become popular tools in software development, with as many as 81% of developers reporting that they use them (Developers & AI Coding Assistant Trends by CoSignal).

Whilst I personally think the jury is still out on how beneficial they are, I’m particularly worried about junior developers using them. The risk is they use them as a crutch – solving problems for them rather than encouraging them to think critically and solve problems themselves (and let’s not forget: GenAI is often wrong, and junior devs are the least likely to spot its mistakes).

GenAI blunts critical thinking

LLMs are impressive at a surface level. They’re great for quickly getting up to speed on a new topic or generating boilerplate code. But beyond that, they still struggle with complexity.

Because they generate responses based on statistical probability – drawing from vast amounts of existing code – GenAI tools tend to provide the most common solutions. While this can be useful for routine tasks, it also means their outputs are inherently generic – average at best.

This homogenising effect doesn’t just limit creativity; it can also inhibit deeper learning. When solutions are handed to you rather than worked through, the cognitive effort that drives problem-solving and mastery is lost. Instead of encouraging critical thinking, AI coding assistants short-circuit it.

Several studies suggest that frequent GenAI tool usage negatively impacts critical thinking skills.

I’ve seen this happen. I’ve watched developers “panel beat” code – throwing it into a GenAI assistant over and over until it works – without actually understanding why 😢

GenAI creating more “Expert Beginners”

At an entry-level, it’s tempting to lean on GenAI to generate code without fully understanding the reasoning behind it. But this risks creating a generation of developers who can assemble code but quickly plateau.

The concept of the “expert beginner” comes from Erik Dietrich’s well known article. It describes someone who appears competent – perhaps even confident – but lacks the deeper understanding necessary to progress into true expertise.

If you rely too much on GenAI code tools, you’re at real risk of getting stuck as an expert beginner.

And here’s the danger: in an industry where average engineers are becoming less valuable, expert beginners are at the highest risk of being left behind.

The value of an average engineer is likely to go down

Software engineering has always been a high-value skill, but not all engineers bring the same level of value.

Kent Beck, one of the pioneers of agile development, recently reflected on his experience using GenAI tools:

As Kent Beck put it in a Twitter post: “I’ve been reluctant to try ChatGPT. Today I got over that reluctance. Now I understand why I was reluctant. The value of 90% of my skills just dropped to $0. The leverage for the remaining 10% went up 1000x. I need to recalibrate.”

This is a wake-up call. The industry is shifting. If your only value as a developer is quickly writing fairly generic code, the harsh reality is that leaning too heavily on AI risks making you redundant.

The engineers who will thrive are the ones who bring deep understanding, strong problem-solving skills, and the ability to understand trade-offs and make pragmatic decisions.

My Plea…

Early in your career, your most valuable asset isn’t how quickly you can produce code – it’s how well you can think through problems, how well you can work with other people, how well you can learn from failure.

It’s a crucial time to build strong problem-solving and foundational skills. If GenAI assistants replace the process of struggling through challenges, learning from them (and from more experienced developers), and investing time to learn topics deeply, they risk stunting your growth – and your career.

If you’re a junior developer, my plea to you is this: don’t let GenAI tools think for you. Use them sparingly, if at all. Use them in the same way most senior developers I speak to use them – for very simple tasks, autocomplete, yak shaving. But when it comes to solving real problems, do the work yourself.

Because the developers who truly excel aren’t the ones who can generate code the fastest.

They’re the ones who problem solve the best.


The evidence suggests GenAI coding assistants offer tiny gains – real productivity lies elsewhere

GenAI coding assistants increase individual developer productivity by just 0.7% to 2.7%

How have I determined that? The best studies I’ve found on GenAI coding assistants suggest they improve coding productivity by around 5-10% (see the following studies, all published in 2024: The Impact of Generative AI on Software Developer Performance; the DORA 2024 Report; and The Effects of Generative AI on High-Skilled Work: Evidence from Three Field Experiments with Software Developers).

However, also according to the best research I could find, developers spend only 1-2 hours a day on coding activity – reading, writing, and reviewing code (see Today was a Good Day: The Daily Life of Software Developers, and the Global Code Time Report 2022 by Software).

In a 7.5-hour workday, that translates to an overall productivity gain of just 0.7% to 2.7%.
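That back-of-envelope calculation can be reproduced in a few lines of Python; the only inputs are the figures from the studies above:

```python
def overall_gain(coding_speedup: float, coding_hours: float,
                 workday_hours: float = 7.5) -> float:
    """Fraction of the whole workday saved if only coding gets faster."""
    return coding_speedup * (coding_hours / workday_hours)

low = overall_gain(0.05, 1.0)   # 5% speed-up, 1 coding hour/day
high = overall_gain(0.10, 2.0)  # 10% speed-up, 2 coding hours/day

print(f"{low:.1%} to {high:.1%}")  # prints "0.7% to 2.7%"
```

Even taking the most generous end of both ranges, the whole-day gain stays well under 3%.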

But even these figures aren’t particularly meaningful – most coding assistant studies rely on poor proxy metrics like PRs, commits, and merge requests. The ones that include more meaningful metrics, such as code quality or overall delivery, show the smallest – or even negative – gains.

And as I regularly say, typing isn’t the bottleneck anyway. The much bigger factors in developer productivity are things like:

  • being clear on priorities
  • understanding requirements
  • collaborating well with others
  • being able to ship frequently and reliably.

GenAI might slightly speed up coding activity, but that’s not where the biggest inefficiencies lie.

If you want to improve developer productivity, focus on what will actually make the most difference.

Best Practices for Meetings

Make Every Meeting Count

Too many meetings and ineffective meetings are highly wasteful. If a meeting is necessary, make sure it’s worth everyone’s time. Here’s a guide for how both attendees and organisers can ensure meetings are effective and productive (please feel free to steal).


For Attendees

Should You Even Be There?

The best meetings are the ones we don’t need to have! Before accepting an invite, ask yourself:

  • Does it have to be a meeting? Can the same outcome be achieved via an email, a shared document, or an async discussion?
  • Do I need to be there? Understand your role in the meeting and why your presence is necessary.

What to Expect When You Attend

  • A clear purpose, an agenda, and a desired outcome.
  • An understanding of why you need to be there.
  • The right to challenge if the above expectations aren’t met.

How to Show Up Effectively

  • Be on time, visible*, present, and engaged.
  • If you realise during the meeting that you don’t need to be there, politely excuse yourself – it’s not rude, it’s efficient.
  • When committing to actions, use commitment language: “I’ll do X by this time.”
  • If you can’t make it and you decline, explain why.

*cameras on unless a large team meeting like a town hall


For Organisers

Should It Even Be a Meeting?

Before scheduling, consider:

  • Is this truly necessary? Can the same outcome be achieved via an email, a shared document, or an async discussion?
  • Do you have a clear purpose? If not, don’t book it.
  • Can you meet the further guidance below? If not, rethink your approach.

Expect attendees to challenge if your meeting lacks clarity.

Get the Right People in the Room

  • Don’t invite people just because you’re unsure if they need to be there.
  • If decisions need to be made, fewer attendees are better.
  • Don’t invite people just for awareness – share the output instead.

Set the Right Duration

  • Can it be 30 minutes? 15 minutes? Avoid defaulting to an hour.
  • Adjust your calendar settings to default to shorter meetings (both Outlook and Gmail have options for this).

Choose the Right Time

  • Consider attendees’ working patterns.
  • Avoid disrupting deep work (engineers, designers). Best times are often after stand-ups or after lunch.

Facilitate Effectively

  • Keep the meeting focused and ensure it meets its objective.
  • Be inclusive – don’t let the loudest voices dominate.
  • Pre-reads: send any information you want to go over well before the meeting – don’t waste people’s time trying to read something as a group for the first time.
  • Be quorate or cancel if key people don’t turn up – don’t waste people’s time if you’ll end up needing to have the meeting again.

Summarise and Track Actions

The organiser is accountable for:

  • Sending a summary with key decisions, actions, owners, and deadlines.
  • Tracking and following up on agreed actions.

Additional Tips

Recurring Meetings

  • Use meeting notes to track progress.
  • All actions should have owners and dates.
  • Make sure tracking actions is part of the agenda.
  • Regularly review the cadence – is the meeting still needed? Do the right people attend?

For more, check out: Avoiding Bad Meetings and What to Do When You’re in One.

Will the Generative AI bubble start deflating in 2025?

This is a copy of an article I wrote for Manchester Digital. You can read the original here and their full series here

As we approach 2025, it’s been nearly two years since OpenAI’s ChatGPT launched the generative AI boom. Predictions of radical, transformational change filled the air. Yet these promises remain largely unfulfilled. Instead of reshaping industries, generative AI risks becoming an expensive distraction – one that organisations should approach with caution.

Incremental gains, not transformational change

Beyond vendor-driven marketing and claims from those with vested interests, there are still scant examples of generative AI being deployed with meaningful impact. The most widespread adoption has been in coding and office productivity assistants, commonly referred to as “copilots.” However, the evidence largely suggests that their benefits are limited to marginal gains at best.

Most studies on coding assistants report a modest boost in individual productivity. The findings are similar for Microsoft Copilot. A recent Australian Government study highlighted measurable, but limited benefits.

Notably, the study also highlighted training and adoption as significant barriers. Despite the Office 365 suite having been widely used for years, many organisations still struggle to use it effectively. Learning to craft clear and effective prompts for an LLM presents an even greater challenge: good results rely heavily on the ability to provide precise and well-structured instructions – a skill that requires both practice and understanding.

Busier at busywork?

These tools are good at helping with low-level tasks – writing simple code, drafting documents faster, or creating presentations in less time. However, they don’t address the underlying reasons for performing these tasks in the first place. There’s a real risk they could encourage more busywork rather than meaningful, impactful change. As the old adage in software development goes, “Typing is not the bottleneck.”

All in all, this is hardly the kind of game-changing impact we were promised. 

But they’ll get better, right?

Hitting the wall: diminishing returns

The initial promise of generative AI was that models would continue to get better as more data and compute were thrown at them. However, as many in the industry had predicted, there are clear signs of diminishing returns. According to a recent Bloomberg article, leading AI labs, including OpenAI, Anthropic, and Google DeepMind, are all reportedly struggling to build models that significantly outperform their predecessors.

Hardware looks like it may also be becoming a bottleneck. The GPU maker NVIDIA, which has been at the heart of the AI boom (and got very rich from it), is facing challenges with its latest GPUs, potentially further compounding the industry’s struggles.

Another exponential leap – like the one seen between GPT-3.5 and GPT-4 – currently looks unlikely.

At what environmental and financial costs?

The environmental impact of generative AI cannot be ignored. Training large language models consumes vast amounts of energy, generating a significant carbon footprint. With each new iteration, energy demands have risen exponentially, raising difficult questions about the sustainability of these technologies.

Additionally, current generative AI products are heavily subsidised by investor funding. As these organisations seek to recoup costs, customer prices will undoubtedly rise. OpenAI has already said it aims to double the price of ChatGPT by 2029.

Advice for 2025: Proceed with caution

Generative AI remains a promising technology, but its practical value is far from proven. It has yet to deliver on its transformational promises and there are warning signs it may never do so. As organisations look to 2025, they should adopt a cautious, focused approach. Here are three key considerations:

  1. Focus on strategic value, not busywork
    Generative AI tools can make us faster, but faster doesn’t always mean better. Before adopting a tool, assess whether it helps address high-impact, strategic challenges rather than simply making low-value tasks slightly more efficient.
  2. Thoughtful and careful adoption
    GenAI tools are not plug-and-play solutions. To deploy them effectively, organisations need to focus on clear use cases where they can genuinely add value. Take the time to train employees, not just on how to use the tools but also on understanding their limitations and best use cases.
  3. Avoid FOMO as a strategy
    As technology strategist Rachel Coldicutt highlighted in her recent newsletter, “FOMO Is Not a Strategy”. Rushing to adopt any technology out of fear of being left behind is rarely effective. Thoughtful, deliberate action will always outperform reactive adoption.

Is “computer says maybe” the new “computer says no”?

GenAI and quantum computing feel like they’re pulling us out of an era when computers were reliable. You put in inputs and get consistent, predictable outputs. Now? Not so much.

Both tease us with incredible potential but come with similar problems: they’re unreliable and hard to scale.

Quantum computing works on probabilities, not certainties. Instead of a clear “yes” or “no,” it gives you a “probably yes” or “probably no.”

Generative AI predicts based on patterns in its training data, which is why it can sometimes be wildly wrong or confidently make things up.

We’ve already opened Pandora’s box with GenAI and are needing to learn to live with the complexities that come with its unreliability (for now at least).

Quantum Computing? Who knows when a significant breakthrough may come.

Either way, it feels like we’re entering an era where computers are less about certainty and more about possibility.

Both technologies challenge our trust in what a computer can do, forcing us to consider how we use them and what we expect from them.

So, is “computer says maybe” the future we’re heading towards? What do you think?

My restaurant anecdote: a lesson in leadership

I want to share a story I often use when coaching new leaders – a personal anecdote about a lesson I learned the hard way.

Back when I was at university, I spent a couple of summers working as a waiter in a restaurant. It was a lovely place – a hotel in Salcombe, Devon (UK), with stunning views of the estuary and a sandy beach. It was a fun way to spend the summer.

The restaurant could seat around 80 covers (people). It was divided into sections and waiters would work in teams for a section.

I started as a regular waiter, but was soon promoted to a “station waiter.” The role involved co-ordinating with the kitchen and managing the timing of orders for a particular section. For example, once a table finished their starters, I’d signal the kitchen to prepare their mains.

Being me, I wanted to be helpful to the other waiters. I didn’t want them thinking I wasn’t pulling my weight, so I’d make sure I was doing my bit clearing tables.

Truth be told, I also had a bit of an anti-authority streak – I didn’t like being told what to do, and I didn’t like telling others what to do either.

Then it all went wrong. I ordered a table’s main course before they’d finished their starters. By the time the mains were sitting under the lights on the hotplate, the diners were still halfway through their first course.

If you’ve worked in a kitchen, you’ll know one thing: never piss off a chef.

I was in the shit.

In my panic, I told the other station waiter what had happened. Luckily, they were more quick-witted than me. They told me to explain to the head chef that one of the diners had gone to the toilet, and to keep the food warm.

So I did.

The head chef’s stare still haunts me, but I got away with it.

That’s when I realised what I’d been doing wrong. My section was chaotic. The other waiters were stressed and rushing around, and it was clear that my “helping” wasn’t actually helping anyone.

My job wasn’t to be just another pair of hands; it was to stay at my station, manage the orders, and keep everything running smoothly. I needed to focus on the big picture – keeping track of the checks, working with the kitchen, and directing the other waiters.

Once I got this, it all started to click. People didn’t actually mind being told what to do; in fact, it was what they wanted. They could then focus on doing their jobs without panicking and running around.

What are the lessons from this story?

The most common challenge I see with new leaders is struggling to step out of their comfort zone when it comes to delegation and giving direction.

Leadership is about enabling, not doing. Your primary role isn’t to do the work yourself; it’s to guide, delegate, and create clarity so your team can succeed. Trying to do everything means you’ll miss the big picture and create confusion and stress.

It’s tempting to keep “helping” or to dive into the weeds because it feels safer. But that’s where things start to unravel – and where many new leaders experience their own “oh shit” moment.

And remember, giving direction doesn’t mean micro-managing – it’s about empowering. Set clear priorities, communicate expectations, then step back and allow people to do their jobs.

And yes, sometimes it’s OK to be quite directive – that clarity is often what people need most.

Are GenAI copilots helping us work smarter – or just faster at fixing the wrong problems?

Are GenAI copilots helping us work smarter – or just faster at fixing the wrong problems? Let me introduce you to the concept of failure demand.

The most widespread adoption of GenAI is copilots – Office365 CoPilot and coding assistants. Most evidence suggests they deliver incremental productivity gains for individuals: write a bit more code, draft a doc faster, create a presentation in less time.

But why are you doing those tasks in the first place? This is where the concept of failure demand comes in.

Originally coined by John Seddon, failure demand is the work created when systems, processes, or decisions fail to address root causes. Instead of creating value, you spend time patching over problems that shouldn’t have existed in the first place.

Call centres are a perfect example.

Most call centre demand isn’t value demand (customers seeking products or services). It’s failure demand: caused by unclear communication, broken systems, or unresolved issues.

GenAI might help agents handle calls faster, but the bigger question is: why are people calling at all?
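To make the call-centre example concrete, here’s a toy sketch of splitting contact volume into value demand and failure demand. All the call reasons and categories are hypothetical, invented purely for illustration – real classification would come from listening to actual demand, as Seddon advocates.

```python
# Hypothetical call reasons, grouped by Seddon's distinction:
# value demand = customers asking for the service itself;
# failure demand = work caused by earlier failures of the system.
VALUE_REASONS = {"new_order", "upgrade", "renewal"}
FAILURE_REASONS = {"where_is_my_order", "bill_confusing", "website_error", "repeat_call"}

def classify(reason: str) -> str:
    """Tag a single contact as value or failure demand."""
    if reason in VALUE_REASONS:
        return "value"
    if reason in FAILURE_REASONS:
        return "failure"
    return "unknown"

# A made-up day of calls: most are chasing problems, not buying anything.
calls = ["new_order", "where_is_my_order", "bill_confusing", "renewal", "website_error"]

summary: dict[str, int] = {}
for reason in calls:
    kind = classify(reason)
    summary[kind] = summary.get(kind, 0) + 1

print(summary)  # failure demand outweighs value demand in this sample
```

The point of the sketch: making each call faster improves the `failure` count not at all – the leverage is in shrinking `FAILURE_REASONS` at the source.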

The same applies to all knowledge work. Faster coding or document creation only accelerates failure demand if the root issues – unclear requirements, poor alignment, unnecessary work – go unaddressed.

Examples:

– Individual speed gains might mask systemic problems, making them harder to spot and fix and reducing the incentive to do so.

– More documents and presentations could bury teams in information, reducing clarity and alignment.

– More code written faster could overwhelm QA teams or create downstream integration issues.

There’s already evidence which suggests this. The 2024 DORA Report (an annual study of engineering team performance) found AI coding assistants marginally improved individual productivity but correlated with a downward trend in team performance.

The far bigger opportunity lies in asking:

– Why does this work exist?
– Can we eliminate or prevent it?

Unless GenAI helps address systemic issues, it risks being a distraction. While it might improve individual productivity, it could hurt overall performance.