Monthly Archives: June 2023

Why you shouldn’t lose sleep over the existential threat of AI

There is no existential threat from Artificial Intelligence any time soon, despite what the headlines might have you believe, so I’m going to try and explain why.

Why are we hearing so much about it then? Fear, uncertainty, and doubt (FUD) make for great headlines, sell papers, and generate advertising revenue on podcasts. However, if you dig a little deeper, the substance beneath is considerably less sensational.

Understanding the current state of AI

AI as we know it today predominantly falls under what’s known as Machine Learning (ML). There are other approaches in use, but the vast majority – including Large Language Models (LLMs) like ChatGPT and image generators like Midjourney – are based on ML principles.

ML is learning in the very loosest sense. It’s intelligence in the very loosest sense.

Machine Training would be a more accurate description. Algorithms are fed data, evaluated on their outputs, adjusted, and then fed more data until their results improve. This iterative process eventually creates models capable of some very impressive tasks. But they’re not ‘learning’ in the way we think of a child or a baby chimp discovering the world. They’re not generating new, novel insights or demonstrating any form of consciousness or understanding.
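The feed–evaluate–adjust loop described above can be sketched in a few lines. This is a minimal gradient-descent example; the data, learning rate, and function names are all illustrative, not drawn from any real system:

```python
# A minimal sketch of the "fed data, evaluated, adjusted" loop:
# gradient descent fitting the single-parameter model y = w * x.

def train(xs, ys, lr=0.01, steps=200):
    w = 0.0  # start with an uninformed model
    for _ in range(steps):
        # Evaluate: average gradient of the squared error over the data
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        # Adjust: nudge the parameter to reduce the error
        w -= lr * grad
    return w

xs = [1, 2, 3, 4]
ys = [2, 4, 6, 8]  # the true relationship is y = 2x
w = train(xs, ys)  # converges towards w = 2.0
```

No insight, no understanding – just repeated numerical adjustment until the outputs fit the data, which is the point being made above.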

Artificial General Intelligence (AGI) is no more than theory

ML isn’t remotely close to the kind of intelligence that could theoretically pose an existential threat to humanity. The fundamentals of ML have been around for at least 40 years and it’s taken us that long to get to a point where it has genuine, widespread practical applications.

As for AGI, there are currently no accepted theories for how it could even be achieved. There are plenty of ideas, but they remain hypothetical. Could machines become genuinely intelligent? Possibly. But no one knows for sure.

Predictions of when the “Singularity” (the point at which artificial intelligence surpasses human intelligence) will arrive are thus pure conjecture.

Ignore the FUD and focus on the real issues of AI

While ML-based AI is undeniably changing our lives, it is doing so in the same way computers have been since the invention of the pocket calculator. There are tasks at which computers already outperform us, like processing large amounts of data and performing complex calculations, but for the vast majority of what we consider to be human intelligence, they’re still light years away.

We’re no closer to a Terminator-style “Judgment Day” than when Alan Turing first started kicking around the idea of AI in the mid-20th century.

That’s not to say AI doesn’t present us with challenges. Job displacement, privacy concerns, potential misuse, and inherent biases are real and pressing issues we need to address. We’d be better off focusing on these tangible problems rather than worrying about hypothetical existential threats posed by AGI. Let’s redirect our energy to making sure that our use of AI is responsible, ethical, and beneficial for all.

On confirmation bias

I grow more convinced each day that one of our biggest battles, in our organisations and even society as a whole, is with Confirmation Bias.

Confirmation Bias is when we unconsciously look for, interpret, and remember information that backs up our own beliefs or values, and downplay information that doesn’t.

It’s all around us and has likely grown worse with the rise of social media. We create “filter bubbles” by following only what we like, and recommendation algorithms make this even easier to do.

Recent events like Covid, Brexit, and even Twitter’s rate limiting over the last few days show how people selectively use information to back their own views.

This also happens at work, especially in “them and us” cultures between teams. A common example I see is between commercial and development teams:

Development says, “Commercial sell new features without asking, make unreasonable demands, and don’t care about tech.”

Commercial says, “Development take too long, only care about the tech stuff and don’t care about the business being successful.”

In both cases, we tend to amplify the information that backs our view and ignore what doesn’t. This makes our biases stronger and the “them and us” gap bigger, which hurts open communication and cooperation (let alone being an unpleasant working environment).

So, what can we do?

First, have some humility. Realise that YOU are just as vulnerable to Confirmation Bias as anyone else. We like to think we’re more objective than others. We’re not. Get over it!

Second, show some empathy. Put yourself in the other person’s shoes. Engage positively and with an open mind. It’s amazing how many times I’ve had an “Ah ha” moment, and even apologised for how I acted when I better understood their viewpoint. Crucially, this also builds trust, which is vital in being able to work together to solve problems.

Lastly, burst your filter bubbles. Follow and read viewpoints you disagree with as well as ones you do. Be careful about opinions that don’t have evidence to back them up. And check that the evidence is reliable.

Challenging our biases can be tough, but it’s worth it. By doing so, we build stronger connections, foster better communication, and create more collaborative environments. And who knows, we might even change our minds along the way!

Quality is a team sport

I think it was Jamie Arnold who first introduced me to this phrase.

In engineering teams it’s – sadly – still all too common that Quality Assurance (QA) is the last step in the delivery process. Developers code, then throw it over the wall to QAs to test. Teams working this way typically have a high rate of failure and large release bottlenecks – features and releases pile up, waiting on the QAs. Developers pick up more new work whilst they’re waiting. Bugs come back and developers are now juggling bug fixes and new work.

It’s slow, inefficient and costly!

What I dislike the most is the cultural aspect – the implication that quality is the responsibility of QAs, not the developers who wrote the code.

Quality is a team sport. The most valuable role for QAs* is to ensure quality is baked into the entire end-to-end delivery process. This has become known as “shift-left” – QAs moving away from spending all their time at the end of the delivery lifecycle and focusing more on how we can “build quality in” throughout.

What does this look like in practice?

– QAs involved in requirements gathering and definition, making sure requirements are clear and well understood, and that we’ve considered how we’re going to test them (inc. automated tests).

– QAs ensure we’re following our agreed Software Delivery Lifecycle (SDLC) and the steps and controls we have in place to keep quality front of mind.

– QAs collaborate with developers to write automated tests; developers collaborate with QAs on mutation testing, compatibility testing, and performance testing.

– If there’s any manual testing required, everyone gets involved. QAs make sure everyone in the team is capable of doing it.
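As a concrete illustration of agreeing tests at requirements time, a requirement like “a discount can never produce a negative price” can be captured as an executable check before the feature is built. The function and rule below are entirely invented for the example:

```python
# Hypothetical requirement, expressed as code the whole team can read:
# "a discount can never produce a negative price".

def apply_discount(price: float, percent: float) -> float:
    # Clamp so an over-large discount can never push the price below zero.
    return max(price * (1 - percent / 100), 0.0)

def test_discount_never_negative():
    assert apply_discount(10.0, 150) == 0.0  # over-discount is clamped
    assert apply_discount(10.0, 25) == 7.5   # normal case
```

Written this way, the test is part of the requirement’s definition rather than an afterthought – quality baked in at the start, not inspected in at the end.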

It’s a much richer role for QAs, and far better for everyone!

Fewer, better, people

This is something I said in a talk on high performing teams recently that resonated with a few folks.

In my experience the most effective teams are small – between 3 and 5 members – and the most effective organisations are the ones that manage to stay small overall.

Why might this be? Fewer people streamlines communication: a 3-member team has 3 channels, a 5-member one has 10. The number of channels grows quadratically – n(n-1)/2 for n people – so every person you add multiplies the coordination overhead.
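The channel counts above follow the “handshake” formula – every pair of people is one channel. A one-line sketch (the function name is my own):

```python
# Communication channels in a team of n people: one per pair,
# i.e. n * (n - 1) / 2 -- quadratic growth as the team expands.

def channels(n: int) -> int:
    return n * (n - 1) // 2

# channels(3) -> 3, channels(5) -> 10, channels(10) -> 45
```

Going from 5 to 10 people more than quadruples the channels, which is why “just add more people” so often slows a team down.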

In small teams, alignment is more organic. Greater shared understanding fosters greater autonomy and more informed decision making.

“Better” is not just about technical expertise. Behaviours are just as important, if not more so (teamwork, communication, adaptability and so on).

In a high performing team, the whole is greater than the sum of its parts.

With an underperforming team, adding more people will most likely slow things down (it may not look like it at first because everyone is “busy”, but it will).

How can you stay small? Do less, better.