Annie Jacobsen joins the podcast to lay out a second-by-second timeline for how nuclear war could happen. We also discuss time pressure, submarines, interceptor missiles, cyberattacks, and concentration of power. You can find more on Annie's work at https://anniejacobsen.com

Timestamps:
00:00 A scenario of nuclear war
06:56 Who would launch an attack?
13:50 Detecting nuclear attacks
19:37 The first critical seconds
29:42 Decisions under time pressure
34:27 Lessons from insiders
44:18 Submarines
51:06 How did we end up like this?
59:40 Interceptor missiles
1:11:25 Nuclear weapons and cyberattacks
1:17:35 Concentration of power
Researcher Zak Stein discusses how anthropomorphic AI can exploit human attachment systems, the psychological risks it poses to children and adults, and ways to redesign education and cognitive-security tools to protect relationships and human agency.
Andrea Miotti, founder of Control AI, discusses the extreme risks posed by superintelligent AI and his case for a global ban on systems that could outsmart humans, touching on industry lobbying, regulatory strategy, public awareness, and citizen action.
Ryan Kidd of the MATS program joins The Cognitive Revolution to discuss AGI timelines, model deception risks, dual-use alignment, and frontier lab governance, and outlines MATS research tracks, talent needs, and advice for aspiring AI safety researchers.