Timestamps:
00:00 Introduction
00:45 Categorizing risks from AI and nuclear
07:40 AI being used by non-state actors
12:57 Combining AI with nuclear technology
15:13 A human should remain in the loop
25:05 Automation bias
29:58 Information requirements for nuclear launch decisions
35:22 Vincent's general conclusion about military machine learning
37:22 Specific policy measures for decreasing nuclear risk