Ajeya Cotra joins us to discuss how artificial intelligence could cause catastrophe.

Follow the work of Ajeya and her colleagues: https://www.openphilanthropy.org

Timestamps:
00:00 Introduction
00:53 AI safety research in general
02:04 Realistic scenarios for AI catastrophes
06:51 A dangerous AI model developed in the near future
09:10 Assumptions behind dangerous AI development
14:45 Can AIs learn long-term planning?
18:09 Can AIs understand human psychology?
22:32 Training an AI model with naive safety features
24:06 Can AIs be deceptive?
31:07 What happens after deploying an unsafe AI system?
44:03 What can we do to prevent an AI catastrophe?
53:58 The next episode
Emilia Javorsky explores how AI can realistically aid cancer research, where current hype exceeds evidence, and what changes researchers, policymakers, and funders must make to turn AI advances into real clinical impact.
Researcher Zak Stein discusses how anthropomorphic AI can exploit human attachment systems, the psychological risks this poses for children and adults, and ways to redesign education and cognitive-security tools to protect relationships and human agency.
Andrea Miotti, founder of Control AI, discusses the extreme risks posed by superintelligent AI and makes his case for a global ban on systems that could outsmart humans, touching on industry lobbying, regulation strategies, public awareness, and citizen action.