Vincent Boulanin joins the podcast to explain the dangers of incorporating artificial intelligence into nuclear weapons systems.

Learn more about Vincent's work: https://sipri.org

Timestamps:
00:00 Introduction
00:55 What is strategic stability?
02:45 How can AI be a positive factor in nuclear risk?
10:17 Remote sensing of nuclear submarines
19:50 Using AI in nuclear command and control
24:21 How does AI change the game theory of nuclear war?
30:49 How could AI cause an accidental nuclear escalation?
36:57 How could AI cause an inadvertent nuclear escalation?
43:08 What is the most important problem in AI nuclear risk?
44:39 The next episode
Maya Ackerman discusses human and machine creativity, exploring how creativity is defined, how AI alignment affects it, and the role of hallucination. The conversation also covers strategies for human-AI collaboration.
Adam Gleave, CEO of FAR.AI, discusses post-AGI scenarios, the risks of gradual disempowerment, defense-in-depth safety strategies, scalable oversight for detecting AI deception, and the challenges of interpretability, as well as FAR.AI's integrated research and policy work.
Beatrice Erkers discusses the AI Pathways project, focusing on approaches to maintaining human oversight and control over AI, including tool AI and decentralized development, and examines trade-offs and strategies for safer AI futures.