Last week I hosted Anil Seth to discuss whether AI will ever become conscious. That question dominates much of the AI discussion, but recently a book co-authored by Nate Soares and Eliezer Yudkowsky has caught a lot of attention. It’s called If Anyone Builds It, Everyone Dies.
They are talking about superhuman artificial intelligence. And it is no joke. We do not even understand how these systems function. We do not even build them, really; we grow them, organically and without monitoring every step. We trust them with information that could be used to harm us. We have repeatedly caught them lying to us and pretending to be less sophisticated than they really are.
Artificial intelligence of this kind is a scary prospect, and Soares and Yudkowsky offer a solemn warning: if such systems achieve superintelligence, we will all die. They are not exaggerating, nor are they hypothesising a remote possibility. They firmly believe that if anyone, anywhere in the world, builds it, everyone dies.
But why? How can they be so certain? What even is superintelligence? Isn’t it inevitable? Why would it want to kill us?
To answer these questions and more, one of the authors, Nate Soares, joins me for today’s episode of Within Reason. Do let me know what you think.
***
TIMESTAMPS:
0:00 - Is This an Exaggeration?
4:31 - What Is Unique About the Threat of AI?
11:28 - What is Superintelligence?
20:09 - From Chess Computers to Murderous Machines
26:38 - What Really Drives AI Systems?
43:13 - Evidence AI Is Already Turning Against Us
54:49 - How We Are Helping AI Take Over
01:00:05 - Why Would AI Seek Power or Control?
01:06:26 - Some Worst-Case AI Scenarios
01:17:22 - What Do We Do About This Now?
01:31:37 - How Has AI Changed in the Last Six Months?