Annie Jacobsen joins the podcast to lay out a second-by-second timeline for how nuclear war could happen. We also discuss time pressure, submarines, interceptor missiles, cyberattacks, and concentration of power. You can find more on Annie's work at https://anniejacobsen.com

Timestamps:
00:00 A scenario of nuclear war
06:56 Who would launch an attack?
13:50 Detecting nuclear attacks
19:37 The first critical seconds
29:42 Decisions under time pressure
34:27 Lessons from insiders
44:18 Submarines
51:06 How did we end up like this?
59:40 Interceptor missiles
1:11:25 Nuclear weapons and cyberattacks
1:17:35 Concentration of power
Adam Gleave, CEO of FAR.AI, discusses post-AGI scenarios, risks of gradual disempowerment, defense-in-depth safety strategies, scalable oversight for AI deception, and the challenges of interpretability, as well as FAR.AI's integrated research and policy work.
Nate Soares discusses his book on the risks of superintelligent AI, arguing that current approaches make AI unpredictable and uncontrollable, and advocates for an international ban on research toward superintelligence.
Tom Davidson discusses the risks of AI-enabled coups, examining how advanced artificial intelligence could facilitate covert power grabs and undermine democratic processes, and outlining strategies to mitigate these emerging threats.