Tobias Baumann on Artificial Sentience and Reducing the Risk of Astronomical Suffering
Tobias Baumann joins the podcast to discuss suffering risks, artificial sentience, and the difficulty of knowing which actions actually reduce suffering in the long-term future.