Darren McKee joins the podcast to discuss how AI might be difficult to control, which goals and traits AI systems will develop, and whether there's a unified solution to AI alignment.

Timestamps:
00:00 Uncontrollable superintelligence
16:41 AI goals and the "virus analogy"
28:36 Speed of AI cognition
39:25 Narrow AI and autonomy
52:23 Reliability of current and future AI
1:02:33 Planning for multiple AI scenarios
1:18:57 Will AIs seek self-preservation?
1:27:57 Is there a unified solution to AI alignment?
1:30:26 Concrete AI safety proposals
Luke Drago discusses the potential societal and economic impacts of AI dominance, including changes in workplace structures, privacy concerns, and the importance of taking career risks during technological transitions.
Basil Halperin discusses how financial markets and economic indicators, such as interest rates, can provide insights into AI development timelines and the potential economic impact of transformative AI.
Benjamin Todd discusses the evolution of reasoning models in AI and potential bottlenecks in compute and robotics, and offers advice on personal preparation for AGI, including skills, networks, and resilience, with projections through 2030.