Robin Hanson joins the podcast to explain his theory of grabby aliens and its implications for the future of humanity. Learn more about the theory here: https://grabbyaliens.com

Timestamps:
00:00 Introduction
00:49 Why should we care about aliens?
05:58 Loud alien civilizations and quiet alien civilizations
08:16 Why would some alien civilizations be quiet?
14:50 The moving parts of the grabby aliens model
23:57 Why is humanity early in the universe?
28:46 Couldn't we just be alone in the universe?
33:15 When will humanity expand into space?
46:05 Will humanity be more advanced than the aliens we meet?
49:32 What if we discovered aliens tomorrow?
53:44 Should the way we think about aliens change our actions?
57:48 Can we reasonably theorize about aliens?
53:39 The next episode
Maya Ackerman discusses human and machine creativity: how creativity is defined, how AI alignment impacts it, and the role of hallucination. The conversation also covers strategies for human-AI collaboration.
Adam Gleave, CEO of FAR.AI, discusses post-AGI scenarios, risks of gradual disempowerment, defense-in-depth safety strategies, scalable oversight for AI deception, and the challenges of interpretability, as well as FAR.AI's integrated research and policy work.
Beatrice Erkers discusses the AI pathways project, focusing on approaches to maintain human oversight and control over AI, including tool AI and decentralized development, and examines trade-offs and strategies for safer AI futures.