When picturing, as it has become fashionable to do, the AI apocalypse we are potentially sleepwalking into, you may typically conjure up a kind of Terminator-esque dystopia with silver robots battling it out for global control.
This kind of thing might not be completely off the mark, but it does not capture the breadth of ways in which artificial general intelligence (that sought-after AI technology which can do everything better than us, not just individual tasks) might pose an existential threat to civilisation.
Take “superpersuasion”. If somebody is convincing enough, they can make you believe false things. And if somebody knows exactly what would convince you, they have a pretty good shot at doing so. Now imagine an AI system which knows more about you, and the way you think, than even you do. Suppose this system were in the hands of a political party, or a scammer, or a foreign army, or some other malicious actor. A dictator may not need to threaten you with AI robots to ensure your compliance. He may simply use AI to genuinely convince you to support him. If this sounds far-fetched, remember that this AI can also control the information you see, falsify media, censor opposing voices, and ready a perfect answer to every one of your objections.
There are also technological risks we once could not even imagine. Consider an army of mosquito-sized autonomous drones carrying deadly, AI-developed bioweapons. If you were the head of your nation’s security services, how would you protect against this? And fears like these concern only technologies we can already imagine, even if they do not yet exist. There are doubtless terrorist and military technologies further around the corner that we cannot properly anticipate.
We are in the very infancy of this technological revolution. We do not know what is coming. In 1440, Johannes Gutenberg invented the first modern printing press. Do you think, with any amount of reflection, he could have anticipated the yearly Black Friday sale on Amazon Kindle ebooks? Imagine a technological development 100x more radical and 1000x faster, and we’re in the ballpark of where AI might take us.
So warns William MacAskill, a philosopher, author and co-originator of the effective altruism movement. He is currently a senior research fellow at Forethought, a research nonprofit focused on how to navigate the transition to a world with superintelligent AI systems. He joins me in today’s episode of Within Reason to bring us all up to speed on the current risks around the forthcoming intelligence explosion.
TIMESTAMPS:
0:00 - The World Isn’t Ready for AGI
9:12 - What Does AGI Doomsday Look Like?
15:08 - Alignment is Not Enough
18:24 - How AGI Could Cause Government Coups
26:10 - Why Isn’t There More Widespread Panic?
32:51 - What Can We Do?
39:07 - If We Stop, China Won’t
46:39 - What is Currently Being Done to Regulate AGI Growth?
49:59 - The Problem of “Value Lock-in”
01:03:59 - Is Inaction a Form of Action?
01:07:43 - Should Effective Altruists Focus on AGI?