In this podcast interview, Lucas and Ariel discuss nuclear deterrence, hair-trigger alert, the potential consequences of nuclear war, and how individuals can help lower the risk of nuclear catastrophe.
Researcher Oly Sourbut discusses how AI tools might strengthen human reasoning, from fact-checking and scenario planning to honest AI standards and better coordination, and explores how to keep humans central while building trustworthy, society-wide sensemaking.
Technical specialist Nora Ammann of the UK's ARIA discusses how to steer a slow AI takeoff toward resilient, cooperative futures, covering risks from rogue AI and competitive pressures alongside approaches such as scalable oversight, formal guarantees, secure infrastructure, and AI-supported bargaining.
David Duvenaud examines gradual disempowerment after AGI, exploring how economic power, political power, and property rights could erode, why AI alignment may become unpopular, and what forecasting and governance might require.