Tobias Baumann on Artificial Sentience and Reducing the Risk of Astronomical Suffering
Tobias Baumann joins the podcast to discuss suffering risks, artificial sentience, and the problem of knowing which actions reduce suffering in the long-term future.
Ryan Kidd of the MATS program joins The Cognitive Revolution to discuss AGI timelines, model deception risks, dual-use alignment, and frontier lab governance, and to outline MATS research tracks, talent needs, and advice for aspiring AI safety researchers.
Technical specialist Nora Ammann of the UK's ARIA discusses how to steer a slow AI takeoff toward resilient, cooperative futures, covering risks from rogue AI and competition to scalable oversight, formal guarantees, secure infrastructure, and AI-supported bargaining.
William MacAskill discusses his Better Futures essay series, arguing that improving the future's quality deserves equal priority to preventing catastrophe. The conversation explores moral error risks, AI character design, space governance, and ethical reasoning for AI systems.