In this podcast interview, Lucas and Ariel discuss the concepts of nuclear deterrence, hair-trigger alert, the potential consequences of nuclear war, and how individuals can help lower the risk of nuclear catastrophe.
Ryan Kidd of the MATS program joins The Cognitive Revolution to discuss AGI timelines, model deception risks, dual-use alignment, and frontier lab governance, and outlines MATS research tracks, talent needs, and advice for aspiring AI safety researchers.
Deric Cheng of the Windfall Trust discusses how AGI could transform the social contract, jobs, and inequality, exploring labor displacement, resilient work, new tax and welfare models, and long-term visions for decoupling economic security from employment.
Researcher Oly Sourbut discusses how AI tools might strengthen human reasoning, from fact-checking and scenario planning to honest-AI standards and better coordination, and explores how to keep humans central while building trustworthy, society-wide sensemaking.