Conversations with far-sighted thinkers.
Featuring researchers, philosophers, and experts on technology, existential risks, and our shared future.
Welcome to the Future of Life Institute Podcast, hosted by Gus Docker. Each week, we explore humanity's greatest challenges through conversations with leading researchers, philosophers, and experts.
This podcast examines the complex intersection of emerging technologies, existential risks, and our shared future. We dive into critical topics including artificial intelligence safety, biotechnology, and the long-term survival and flourishing of humanity.
Gus brings thoughtful curiosity to each interview, creating space for nuanced discussions that balance scientific precision with accessible insights. Join us as we explore how to navigate the unprecedented risks and opportunities of our time, and work toward a beneficial future for all life.
Latest episodes
Can Machines Be Truly Creative? (with Maya Ackerman)
Maya Ackerman discusses human and machine creativity, exploring how creativity is defined, how AI alignment affects it, and the role of hallucination. The conversation also covers strategies for human-AI collaboration.
View episode
From Research Labs to Product Companies: AI's Transformation (with Parmy Olson)
Parmy Olson discusses the transformation of AI companies from research labs to product businesses. The episode explores how funding pressures impact company missions, the role of personalities, safety challenges, and industry power consolidation.
View episode
Can Defense in Depth Work for AI? (with Adam Gleave)
Adam Gleave, CEO of FAR.AI, discusses post-AGI scenarios, the risks of gradual disempowerment, defense-in-depth safety strategies, scalable oversight for detecting AI deception, and the challenges of interpretability, as well as FAR.AI's integrated research and policy work.
View episode
How We Keep Humans in Control of AI (with Beatrice Erkers)
Beatrice Erkers discusses the AI pathways project, focusing on approaches to maintain human oversight and control over AI, including tool AI and decentralized development, and examines trade-offs and strategies for safer AI futures.
View episode
Breaking the Intelligence Curse (with Luke Drago)
Luke Drago discusses the potential societal and economic impacts of AI dominance, including changes in workplace structures, privacy concerns, and the importance of taking career risks during technological transitions.
View episode
What Markets Tell Us About AI Timelines (with Basil Halperin)
Basil Halperin discusses how financial markets and economic indicators, such as interest rates, can provide insights into AI development timelines and the potential economic impact of transformative AI.
View episode
Reasoning, Robots, and How to Prepare for AGI (with Benjamin Todd)
Benjamin Todd discusses the evolution of reasoning models in AI, potential bottlenecks in compute and robotics, and offers advice on personal preparation for AGI, including skills, networks, and resilience, with projections through 2030.
View episode
From Peak Horse to Peak Human: How AI Could Replace Us (with Calum Chace)
Calum Chace discusses the potential for AI to transform employment, exploring universal income, fully automated economies, AI-driven education, and the ethical challenges of attributing consciousness to machines.
View episode
How AI Could Help Overthrow Governments (with Tom Davidson)
Tom Davidson discusses the risks of AI-enabled coups, examining how advanced artificial intelligence could facilitate covert power grabs and undermine democratic processes, and outlining strategies to mitigate these emerging threats.
View episode
Preparing for an AI Economy (with Daniel Susskind)
Daniel Susskind discusses the differing perspectives of AI researchers and economists, the measurement of AI's economic impact, the influence of human values, the future of work, commercial incentives in AI, and changes in education.
View episode
Will AI Companies Respect Creators' Rights? (with Ed Newton-Rex)
Ed Newton-Rex discusses the challenges of AI models trained on copyrighted material, his resignation from Stability AI, and the future of creator rights and authenticity in an AI-driven world.
View episode
AI Timelines and Human Psychology (with Sarah Hastings-Woodhouse)
Sarah Hastings-Woodhouse discusses AI benchmarks, development trajectories, current and future capabilities, alignment issues, AGI plans, and the psychological impact of rapid technological change on long-term planning.
View episode