There is no existential threat from Artificial Intelligence any time soon, despite what the headlines might have you believe. In this post, I'll try to explain why.
Why are we hearing so much about it then? Fear, uncertainty, and doubt (FUD) make for great headlines, sell papers, and generate advertising revenue on podcasts. However, if you dig a little deeper, the substance beneath is considerably less sensational.
Understanding the current state of AI
AI as we know it today predominantly falls under what’s known as Machine Learning (ML). There are other concepts in use, but the vast majority – including Large Language Models (LLMs) like the one behind ChatGPT, and image generators like MidJourney – are based on ML principles.
ML is learning in the very loosest sense. It’s intelligence in the very loosest sense.
Machine Training would be a more accurate description. Algorithms are fed data, evaluated on their outputs, adjusted, and then fed more data until their results improve. This iterative process eventually creates models capable of some very impressive tasks. But they’re not ‘learning’ in the way we think of a child or a baby chimp discovering the world. They’re not generating new, novel insights or demonstrating any form of consciousness or understanding.
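To make that loop concrete, here’s a minimal sketch in Python of the “feed data, evaluate, adjust, repeat” cycle. It’s purely illustrative: the toy data, the two-parameter model, and the learning rate are all assumptions chosen for clarity, not how any production system is built.

```python
# A toy illustration of the "feed data, evaluate, adjust, repeat" loop.
# The model is just y = w * x + b; training nudges w and b to fit the data.

# Hypothetical training data: inputs paired with the outputs we want.
data = [(x, 2 * x + 1) for x in range(-5, 6)]

w, b = 0.0, 0.0          # model parameters, starting from a blank slate
learning_rate = 0.01     # how big each adjustment step is

for step in range(1000):
    # Evaluate: measure how wrong the current model is on the data,
    # and work out which direction to nudge each parameter.
    grad_w = grad_b = 0.0
    for x, target in data:
        prediction = w * x + b
        error = prediction - target
        grad_w += 2 * error * x / len(data)
        grad_b += 2 * error / len(data)

    # Adjust: move the parameters a small step to reduce the error.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # converges towards w=2, b=1
```

However long that loop runs, all it ever does is adjust numbers until the outputs line up with the examples. Nothing in it understands what those numbers mean.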
Artificial General Intelligence (AGI) is no more than theory
ML isn’t remotely close to the kind of intelligence that could theoretically pose an existential threat to humanity. The fundamentals of ML have been around for at least 40 years, and it’s taken us that long to get to a point where it has genuine, widespread practical applications.
As for AGI, there are currently no accepted theories for how it could even be achieved. There are plenty of ideas, but they remain hypothetical. Could machines become genuinely intelligent? Possibly. But no one knows for sure.
Predictions of when the “Singularity” (the point at which artificial intelligence surpasses human intelligence) will arrive are thus pure conjecture.
Ignore the FUD and focus on the real issues of AI
While ML-based AI is undeniably changing our lives, it is doing so in the same way computers have been since the invention of the pocket calculator. There are tasks at which computers already outperform us, like processing large amounts of data and performing complex calculations, but for the vast majority of what we consider to be human intelligence, they’re still light years away from matching us.
We’re no closer to a Terminator-style “Judgment Day” than we were when Alan Turing first started kicking around the idea of AI in the mid-20th century.
That’s not to say AI doesn’t present us with challenges. Job displacement, privacy concerns, potential misuse, and inherent biases are real and pressing issues we need to address. We’d be better off focusing on these tangible problems rather than worrying about hypothetical existential threats posed by AGI. Let’s redirect our energy to making sure that our use of AI is responsible, ethical, and beneficial for all.