Anders Sandberg joins the podcast to discuss how big the future could be and what humanity could achieve at the limits of physics. Learn more about Anders' work: https://www.fhi.ox.ac.uk

Timestamps:
00:00 Introduction
00:58 Does it make sense to write long books now?
06:53 Is it possible to understand all of science now?
10:44 What is exploratory engineering?
15:48 Will humanity develop a completed science?
21:18 How much of possible technology has humanity already invented?
25:22 Which sciences have made the most progress?
29:11 How materially wealthy could humanity become?
39:34 Does a grand future depend on space travel?
49:16 Trade between proponents of different moral theories
53:13 How does physics limit our ethical options?
55:24 How much could our understanding of physics change?
1:02:30 The next episode
Maya Ackerman discusses human and machine creativity: how creativity is defined, how AI alignment affects it, and the role of hallucination. The conversation also covers strategies for human-AI collaboration.
Beatrice Erkers discusses the AI pathways project, focusing on approaches to maintaining human oversight and control over AI, including tool AI and decentralized development, and examines the trade-offs among strategies for safer AI futures.
Luke Drago discusses the potential societal and economic impacts of AI dominance, including changes in workplace structures, privacy concerns, and the importance of taking career risks during technological transitions.