Connor Leahy on AI Safety and Why the World is Fragile
Connor Leahy from Conjecture joins the podcast to discuss AI safety, the fragility of the world, slowing down AI development, regulating AI, and the optimal funding model for AI safety research.
Learn more about Connor's work at https://conjecture.dev

Timestamps:
00:00 Introduction
00:47 What is the best way to understand AI safety?
09:50 Why is the world relatively stable?
15:18 Is the main worry human misuse of AI?
22:47 Can humanity solve AI safety?
30:06 Can we slow down AI development?
37:13 How should governments regulate AI?
41:09 How do we avoid misallocating AI safety government grants?
51:02 Should AI safety research be done by for-profit companies?

Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
Emilia Javorsky explores how AI can realistically aid cancer research, where current hype exceeds evidence, and what changes researchers, policymakers, and funders must make to turn AI advances into real clinical impact.
Researcher Zak Stein discusses how anthropomorphic AI can exploit human attachment systems, its psychological risks for children and adults, and ways to redesign education and cognitive security tools to protect relationships and human agency.
Andrea Miotti, founder of Control AI, discusses the extreme risks posed by superintelligent AI and makes his case for a global ban on systems that could outsmart humans, touching on industry lobbying, regulation strategies, public awareness, and citizen action.