Vincent Boulanin joins the podcast to explain the dangers of incorporating artificial intelligence in nuclear weapons systems.

Learn more about Vincent's work: https://sipri.org

Timestamps:
00:00 Introduction
00:55 What is strategic stability?
02:45 How can AI be a positive factor in nuclear risk?
10:17 Remote sensing of nuclear submarines
19:50 Using AI in nuclear command and control
24:21 How does AI change the game theory of nuclear war?
30:49 How could AI cause an accidental nuclear escalation?
36:57 How could AI cause an inadvertent nuclear escalation?
43:08 What is the most important problem in AI nuclear risk?
44:39 The next episode
Peter Wildeford discusses methods for forecasting AI progress and why he sees AI as neither a bubble nor a normal technology, covering economic effects, national security, cyber capabilities, robotics, export controls, and prediction markets.
Inria researcher Carina Prunkl discusses why AI evaluation struggles to keep pace with general-purpose systems, covering jagged capabilities, evaluations that miss real-world behavior, misuse risks, de-skilling, red teaming, and layered safeguards.
Li-Lian Ang from Blue Dot Impact discusses how to build a workforce that can defend against AI-driven risks, including engineered pandemics, cyber attacks, loss of jobs and human agency, and concentrated power, using a defense-in-depth framework suited to uncertain AI progress.