Ajeya Cotra joins us to discuss how artificial intelligence could cause catastrophe. Follow the work of Ajeya and her colleagues: https://www.openphilanthropy.org

Timestamps:
00:00 Introduction
00:53 AI safety research in general
02:04 Realistic scenarios for AI catastrophes
06:51 A dangerous AI model developed in the near future
09:10 Assumptions behind dangerous AI development
14:45 Can AIs learn long-term planning?
18:09 Can AIs understand human psychology?
22:32 Training an AI model with naive safety features
24:06 Can AIs be deceptive?
31:07 What happens after deploying an unsafe AI system?
44:03 What can we do to prevent an AI catastrophe?
53:58 The next episode
Luke Drago discusses the potential societal and economic impacts of AI dominance, including changes in workplace structures, privacy concerns, and the importance of taking career risks during technological transitions.
Basil Halperin discusses how financial markets and economic indicators, such as interest rates, can provide insights into AI development timelines and the potential economic impact of transformative AI.
Benjamin Todd discusses the evolution of reasoning models in AI, potential bottlenecks in compute and robotics, and offers advice on personal preparation for AGI, including skills, networks, and resilience, with projections through 2030.