Jason Crawford joins the podcast to discuss the history of progress, the future of economic growth, and the relationship between progress and risks from AI. You can read more about Jason's work at https://rootsofprogress.org

Timestamps:
00:00 Eras of human progress
06:47 Flywheels of progress
17:56 Main causes of progress
21:01 Progress and risk
32:49 Safety as part of progress
45:20 Slowing down specific technologies?
52:29 Four lenses on AI risk
58:48 Analogies causing disagreement
1:00:54 Solutionism about AI
1:10:43 Insurance, subsidies, and bug bounties for AI risk
1:13:24 How is AI different from other technologies?
1:15:54 Future scenarios of economic growth
Maya Ackerman discusses human and machine creativity, exploring how creativity is defined, how AI alignment affects it, and the role of hallucination. The conversation also covers strategies for human-AI collaboration.
Adam Gleave, CEO of FAR.AI, discusses post-AGI scenarios, risks of gradual disempowerment, defense-in-depth safety strategies, scalable oversight for AI deception, and the challenges of interpretability, as well as FAR.AI's integrated research and policy work.
Beatrice Erkers discusses the AI Pathways project, focusing on approaches for maintaining human oversight and control over AI, including tool AI and decentralized development, and examines the trade-offs and strategies involved in steering toward safer AI futures.