On Entitlement

I expect this won’t go down well, but I feel it needs to be said.

Firstly, bear with me – I want to start by talking about how I ended up in this industry.

I came out of uni heading nowhere. Meandered into a job as a pensions administrator. I was seriously considering becoming an IFA, not out of passion or ambition, just because I didn’t have any better ideas.

Then I got lucky.

Someone I knew started a startup – like Facebook for villages (before Facebook existed). I picked up coding again (I’d played around as a kid).

From there I blagged a job at another startup as an editor, writing articles about shopping. Then I blagged a job at Lycos as a “Web Master”.

Right place, right time. I was lucky. I benefitted from the DotCom boom. I fell on my feet.

I still pinch myself every day.

I think about teachers and nurses – low pay, long hours, no real choice about where or how they work.

I think about other well-paid knowledge professions – lawyers, architects – working brutal hours, often in toxic environments.

Most of the places I’ve worked had food and drinks on tap. Ping pong tables. Games machines. I’ve never had to wear a suit.

Most places were progressive, and while the industry doesn’t have a great reputation overall, it’s been far more accommodating of people from different backgrounds, genders, and sexual orientations than many others.

After a long bull run – which peaked post-Covid with inflated salaries and over-promotion – things feel like they are changing.

We’re being asked to go back into the office a couple of times a week. You can’t just fall into jobs like you used to.

And GenAI, of course – currently upending the way we work. A paradigm shift far greater than anything I’ve seen in 25 years of my career.

What we had wasn’t normal. It wasn’t standard. It was unusually good.

We weren’t owed any of this.

We just got lucky.

“Attention is all you need”… until it becomes the problem

This is an attempt at a relatively non-technical explainer for anyone curious about how today’s AI models actually work – and why some of the same ideas that made them so powerful may now be holding them back.

In 2017, a paper by Vaswani et al., titled “Attention is All You Need”, introduced the Transformer model. It was a genuinely historic paper. There would be no GenAI without it. The “T” in GPT literally stands for Transformer.

Why was it so significant?

“Classical” neural-network-based AI – the recurrent models that preceded Transformers – works a bit like playing Snakes & Ladders: processing one step at a time, building up understanding gradually.

Transformers allow every data point (or token) to connect directly with every other. Suddenly, the board looks more like chess – everything is in view, and relationships are processed in parallel. It’s like putting a massive turbocharger on the network.

But that strength is also its weakness.

“Attention” forces every token to compare itself with every other token. As inputs get longer and the model gets larger, the computational cost doesn’t just increase. It grows quadratically. Double the input, and the work roughly quadruples.
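
To make that concrete, here’s a toy sketch in Python (using NumPy – nothing like a production implementation) of the pairwise scoring at the heart of attention. The thing to notice is the n × n score matrix: it’s why doubling the number of tokens roughly quadruples the work.

```python
# Toy illustration of why self-attention cost grows quadratically with input length.
# Uses NumPy and reuses the raw embeddings as queries, keys and values for simplicity;
# real Transformers use learned projections, multiple heads, masking, etc.
import numpy as np


def self_attention(tokens: np.ndarray) -> np.ndarray:
    """tokens: an (n, d) array of token embeddings. Returns an (n, d) attended output."""
    n, d = tokens.shape
    queries = keys = values = tokens  # stand-ins for the learned Q/K/V projections

    # Every token is scored against every other token: an (n, n) matrix.
    # This is the quadratic part - double n and the matrix has four times as many entries.
    scores = queries @ keys.T / np.sqrt(d)

    # Softmax each row into attention weights, then mix the values accordingly.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ values


if __name__ == "__main__":
    for n in (512, 1024, 2048):
        _ = self_attention(np.random.randn(n, 64))
        print(f"{n:>4} tokens -> {n * n:>9,} pairwise scores")
    #  512 tokens ->   262,144 pairwise scores
    # 1024 tokens -> 1,048,576 pairwise scores (2x the tokens, 4x the work)
    # 2048 tokens -> 4,194,304 pairwise scores
```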

And throwing more GPUs or more data at the problem doesn’t just give diminishing returns – it can lead to negative returns. This is why, for example, some of the latest “mega-models”, like GPT-4.5, perform worse than their predecessors in certain cases. Meta is also delaying its new Llama 4 “Behemoth” model – reportedly due to underwhelming performance, despite huge compute investment.

Despite this, much of the current GenAI narrative still focuses on more: more compute, more data centres, more power – and I have to admit, I struggle to understand why.

Footnote: I’m not an AI expert – just someone trying to understand the significance of how we got here, and what the limits might be. Happy to be corrected or pointed to better-informed perspectives.

GenAI Coding Assistant Best Practice Guides

A constantly updated list of guides and best practices for working with GenAI coding assistants.

These articles provide practical insights into integrating AI tools into your development workflow, covering topics from effective usage strategies to managing risks and maintaining code quality.

Importantly, the authors of all these articles state they are continually updating their content as they learn more and the technology evolves.

There are some books now available on this topic, but they tend to be out of date by the time they are published due to the fast pace of AI development.

Duolingo’s Gerald Ratner Moment?

Duolingo’s AI-first announcement, the backlash, and the backtrack reminded me of how Gerald Ratner destroyed his business overnight.

In April, Duolingo’s CEO, Luis von Ahn, announced a bold shift: the company would become “AI-first”, aiming to replace contractors with AI and to make AI proficiency a key performance metric.

The announcement sparked immediate customer backlash. Duolingo’s social media feeds lit up with criticism, as users pushed back against job losses and what they saw as a decline in the quality of the product.

One thing Duolingo had been particularly good at was social media. Their accounts have massive followings, and the Duolingo Owl has become a well-known meme and a much-loved character.

Amid the backlash, they wiped their TikTok and Instagram feeds, replacing everything with cryptic messages. A core brand strength – suddenly gone. The content has since returned, but the damage to the brand was done. It only reinforced the sense that things were unravelling.

Not long after, Luis issued a very public backtrack.

It immediately reminded me of the Gerald Ratner story. In 1991, Ratner, then CEO of a successful UK jewellery chain (also called Ratners), famously joked that his products were “total crap”. The comment destroyed consumer confidence overnight. The business collapsed, and so did his career.

Gerald Ratner at the Institute of Directors, April 1991 – where he called his own products “total crap”

Similarly, Duolingo’s announcement has significantly shifted public perception. Since the AI-first statement, I’ve seen just as many articles and comments claiming Duolingo was never a good tool for learning languages in the first place as I have about the announcement itself (and the subsequent backtrack).

Users are also calling the new AI-generated courses “AI slop” and complaining about the synthetic voices. Maybe some of that is true – but I’d wager it’s being projected onto the old content too.

The key point here is that customer perception has shifted – and potentially, like Ratners, irreversibly.

It also didn’t help that, around the same time, CEO Luis von Ahn suggested in a podcast that schools might eventually serve primarily as childcare centres, with AI doing the teaching. One thing you don’t do is dunk on teachers – a group held in consistently high regard by the public.

Only last week I posted an article on the pitfalls of headcount-first transformations. I didn’t expect it to be so relevant so soon.

This is exactly the kind of outcome you get when you don’t put customers at the heart of your strategy. And when you treat technology as the strategy, rather than a tool to support it, you risk compounding the problem. If you don’t start with purpose, people, and the system around them – AI won’t fix it. It’ll just as likely make things worse.

Developers aren’t afraid of automation

Software developers are not against more automation in their work – quite the opposite.

This image is from the “Tech Manifesto” I put together when I was at 7digital, 12 years ago. One of the principles was: “We prefer not to do the same thing twice”.

The best engineers and teams automate everything that moves – tests, build and deployment, monitoring, alerting, infrastructure provisioning. They use rich IDEs with refactoring tools, code formatters, linters, and even, dare I say it, code generation (which has been around since long before GenAI, by the way).

It’s about reducing toil, eliminating waste, getting fast feedback, and making space to focus on the more meaningful and enjoyable parts of the job.

Things like understanding and solving real-world problems, turning ideas into working software, building useful things. Creating.

Exactly the parts GenAI still isn’t any good at.

Why headcount-led transformations fail

All the fear-mongering about AI taking jobs reminds me of something I’ve seen too often: when organisations go into org change with the goal of reducing headcount, it rarely ends well.

I’ve been part of these exercises. You cut people, but the costs come back in other forms – lost sales, reduced capacity, expensive contractors to plug the gaps. The result? Often a rapid series of transformations, each one trying to fix the damage caused by the last. Org transformation whack-a-mole.

A good industry-wide example was the trend to offshore software development a decade or so back. Sold as a way to cut costs, it often ended up costing more due to hidden overheads, coordination challenges, slow delivery and quality issues. Many companies quietly reversed course over the following years.

The reason it doesn’t work? Yes, organisations can be bloated – but that’s usually a *symptom of deeper inefficiencies, not the root cause*. If you cut people without addressing those inefficiencies, the problems persist – or get worse, because now fewer people are left to deal with the same issues.

The best transformations I’ve seen start with the outcome.

Why do we exist? What are we here to do?

Then look at the system end to end – people, culture, process, communication, technology – and identify the pain points and bottlenecks.

Optimise systematically.

Yes, this can lead to restructuring. Roles change. Some may no longer be needed. But that happens as a consequence of tackling the root causes.

AI? It’s just a tool. It could help. It could just as easily get in the way. Technology is a *fourth-order concern* – purpose, people, and process come first.

If you don’t understand the root causes, if you don’t work from first principles, AI won’t save you. It’ll amplify your dysfunction.




Footnote: There are situations where a headcount-first approach is justified – but these are typically extreme, when an organisation is fighting for immediate survival.

GenAI coding: most teams aren’t ready

All the evidence I see continues to suggest that good engineering discipline is not just desirable, but essential when using GenAI for coding. But that’s exactly what the vast majority of software engineers – and teams – lack.

Take Test-Driven Development (TDD) for example. I keep hearing that one of the most effective ways to stay in control of GenAI output is to take a test-first approach (“Test-Driven Generation”, or TDG, as it’s becoming known) – and I agree, based on experience. On one hand, I’m excited by the idea of a TDD renaissance. However, I saw something recently suggesting only around 1% of code is written that way. Anecdotally, most developers I speak to who say they know TDD don’t actually understand what it is. It’s a clear example of the skills gap we’re dealing with.
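
To make the test-first idea concrete, here’s a minimal sketch of what that workflow can look like, assuming pytest. The function name and pricing rules are made-up examples rather than anything from a real codebase: the human writes the failing tests first, the assistant is asked to generate an implementation, and the generated code is only accepted once the tests pass.

```python
# A minimal sketch of a test-first ("Test-Driven Generation") workflow, assuming pytest.
# The function and pricing rules below are made-up examples, not from any real codebase.
import pytest


def apply_discount(total: float, loyalty_years: int) -> float:
    """In the TDG workflow, this body is what you'd ask the assistant to generate.
    A hand-written version is included here only so the sketch runs on its own."""
    if total < 0:
        raise ValueError("total must be non-negative")
    rate = 0.10 if total > 100 else 0.0
    rate += 0.01 * min(loyalty_years, 5)  # 1% per year of loyalty, capped at five years
    return round(total * (1 - rate), 2)


# The tests come first, written by a human. They define what "correct" means;
# generated code is only accepted once they all pass.
def test_no_discount_for_small_orders():
    assert apply_discount(total=50.00, loyalty_years=0) == pytest.approx(50.00)


def test_ten_percent_off_orders_over_100():
    assert apply_discount(total=200.00, loyalty_years=0) == pytest.approx(180.00)


def test_loyalty_bonus_is_capped():
    assert apply_discount(total=200.00, loyalty_years=20) == pytest.approx(170.00)


def test_rejects_negative_totals():
    with pytest.raises(ValueError):
        apply_discount(total=-5.00, loyalty_years=0)
```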

TDD aside, everything I see and hear on the ground suggests effective GenAI-assisted development also relies on having comprehensive automated tests and the ability to release frequently in small batches. Many teams have neither. Some have a few tests. Most can only release a few times a month because they rely on long, manual regression cycles due to their lack of automated test coverage.

The DORA research project suggests only ~19% of software teams globally have the kind of engineering practices in place to potentially capitalise on GenAI coding (their latest report suggests GenAI coding is putting downward pressure on overall delivery performance, but that’s another story…)

I’m not convinced by arguments that GenAI will improve code quality (vs experienced engineers not using GenAI). The skills gap is part of the problem – but also, studies like GitClear’s earlier this year already show a significant drop in code quality linked to GenAI use.

At the very least, good practices will act as damage limitation.

GenAI coding could be a turning point. But most teams simply aren’t equipped to handle it. And unless that changes – quickly – which seems unlikely given how long these practices have existed without widespread adoption, we’re likely heading for a wave of poor-quality code, delivered at speed.

We need a rise in the voices of techno-realists

GenAI is the hypiest tech I’ve seen in my career – and that’s saying something. Because of all the noise it generates, we need to hear from more grounded, pragmatic voices.

Social media is dominated by extremes: those who see tech as the solution to everything, often without really understanding it – and those whose negativity leads them to dismiss it.

It’s great for engagement, but real progress will come from those in the middle – curious, thoughtful, and focused on outcomes.

In my mind, a techno-realist:

  • Is open-minded, but not easily sold
  • Is curious enough to dig in and understand how things actually work
  • Is conscious of their biases
  • Applies critical thinking
  • Works from evidence
  • Proves by doing
  • Understands that every decision involves trade-offs
  • Takes a systemic view – steps back to see the bigger picture and how things connect
  • Understands that tech is powerful – but not always the answer
  • Sees technology as a means to an end – never the end itself

Social platforms reward loud certainty, not nuanced thoughtfulness.

But these voices – the thoughtful ones – matter more than ever.

If this sounds like you, here’s how I suggest showing up as a techno-realist online:

  • Be polite and constructive – even when you strongly disagree
  • Call out the hype when you see it (but see point above)
  • Amplify grounded voices – like, repost, and comment on thoughtful posts and replies
  • Ask questions – seek to understand, not just to respond
  • Share what you’re learning – especially from real-world experience
  • Connect with and follow others who bring thoughtful, balanced perspectives

Let’s find each other – and make this mindset more visible 🙌

I’ve even added techno-realist to my LinkedIn profile 🫡

Start Up Security Basics Every Founder Should Know

You might think your startup is too small to be a target, and that only larger organisations are at risk. But attackers don’t work like that. They behave more like drive-by opportunists than trained assassins. They scan the internet to see what comes back, then probe for weaknesses. They spray phishing emails to see who bites. If your defences are weak, you’re low-hanging fruit.

One of the biggest threats today is ransomware – where attackers lock you out of your own systems and demand payment to unlock them. These attacks are widespread and often hit smaller companies simply because they’re easier targets.

Here are some practical, low-cost steps every founder should take – no deep tech knowledge needed:

🔐 Turn on two-factor authentication for all key accounts – email, cloud services, etc.

🔑 Use a password manager like 1Password or Bitwarden – never share passwords via Slack, email, or docs.

🔒 Limit access – only give people what they need. Avoid shared logins.

📬 Set up your email securely – Google Workspace and Microsoft 365 include spam and phishing protection, but you still need to enable sender validation (SPF, DKIM, DMARC) to stop attackers sending emails that pretend to be from your domain. There’s a small example of checking these records after this list.

🛡️ Use a web application firewall (WAF) – Cloudflare or AWS WAF can block common attacks before they reach your app.

💾 Back up your databases – and test that you can actually restore them.

🧊 Encrypt your databases – easy to enable in platforms like AWS or Azure.

🧪 Scan your code – GitHub and GitLab offer built-in code vulnerability scanning tools, even on free plans.

🔄 Keep third-party libraries and frameworks up to date – tools like GitHub Dependabot or Snyk are free or cheap and let you know when things need patching.

🧩 And finally: have a plan for what you’d do if a device is lost, an account is compromised, or your data is locked or leaked.
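
Here’s the small example mentioned above – a sketch for checking that your SPF and DMARC records are actually published. It assumes the third-party dnspython library and uses example.com as a placeholder for your own domain; DKIM keys are typically generated and published from within your email provider’s admin console, so they’re not covered here.

```python
# A small sketch for checking that sender-validation DNS records (SPF and DMARC)
# are published for your domain. Assumes the third-party dnspython library
# (pip install dnspython); "example.com" is a placeholder for your own domain.
# Typical records look something like:
#   example.com         TXT  "v=spf1 include:_spf.google.com ~all"
#   _dmarc.example.com  TXT  "v=DMARC1; p=quarantine; rua=mailto:reports@example.com"
import dns.resolver

DOMAIN = "example.com"  # placeholder - swap in your own domain


def txt_records(name: str) -> list[str]:
    """Return all TXT record strings for a DNS name, or [] if none are published."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []
    return [b"".join(record.strings).decode() for record in answers]


spf = [r for r in txt_records(DOMAIN) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{DOMAIN}") if r.startswith("v=DMARC1")]

print("SPF:  ", spf[0] if spf else "MISSING - anyone can spoof mail from your domain")
print("DMARC:", dmarc[0] if dmarc else "MISSING - receivers have no policy for failed checks")
```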

None of this is expensive or particularly complicated. But recovering from an attack will be.

The counterintuitive truth about trying to go faster (what I learnt about running)

Hopefully this is a useful analogy you can use if you’re struggling with a boss or manager who thinks the way to go faster is to push the team harder or cram in more work.

A few months ago, I took up running. At first, I improved steadily – each 5km a little quicker than the last. I assumed the way to keep getting faster was simple: run harder, push more.

But then recently, I hit a wall. My pace stopped improving. I finished every run exhausted. And no matter how much I tried to “dig deep”, I wasn’t getting anywhere.

So I did some research. It turns out running hard all the time doesn’t make you faster – it often slows you down. Improvement comes from running slower most of the time, staying in your “aerobic zone”, building endurance, recovering well, and only pushing occasionally.

Here’s the key point: it’s completely counterintuitive.

The analogy with running breaks down a bit here, but this counterintuitiveness is exactly why so many software teams – despite best intentions – end up underperforming.

The intuitive belief is that the path to delivering faster is to do more: write more code, skip meetings, avoid “distractions”, and stay heads-down. But just like me trying to sprint every run, it has the opposite effect.

Some common examples

  • Not spending enough time on discovery or analysis to “get going” faster – but ending up building the wrong thing and wasting time on re-work.
  • Skipping retrospectives or post-mortems – missing key opportunities to learn and improve, so mistakes get repeated.
  • Worrying that developers spend too much time collaborating – and believing solo work is more efficient, but ending up with bottlenecks, siloed knowledge, and poor decisions.

These instincts feel productive, but they’re often the root cause of slow, ineffective delivery.

Improvement doesn’t come from pushing harder. It comes from pacing well, working sustainably, and continuously improving the system you’re running in.

It’s often counterintuitive. But it’s true. Agile software development best practices have been around for decades, and the principles they were founded on even longer. Yet they’re still not common – because they go against intuition.

Sometimes, the way to go faster… is (quite literally in the case of running) to slow down.