Conversations with far-sighted thinkers.
Featuring researchers, philosophers, and experts on technology, existential risks, and our shared future.
Welcome to the Future of Life Institute Podcast, hosted by Gus Docker. Each week, we explore humanity's greatest challenges through conversations with leading researchers, philosophers, and experts.
This podcast examines the complex intersection of emerging technologies, existential risks, and our shared future. We dive into critical topics including artificial intelligence safety, biotechnology, and the long-term survival and flourishing of humanity.
Gus brings thoughtful curiosity to each interview, creating space for nuanced discussions that balance scientific precision with accessible insights. Join us as we explore how to navigate the unprecedented risks and opportunities of our time, and work toward a beneficial future for all life.
Latest episodes
How to Rebuild the Social Contract After AGI (with Deric Cheng)
Deric Cheng of the Windfall Trust discusses how AGI could transform the social contract, jobs, and inequality, exploring labor displacement, resilient work, new tax and welfare models, and long-term visions for decoupling economic security from employment.
How AI Can Help Humanity Reason Better (with Oly Sourbut)
Researcher Oly Sourbut discusses how AI tools might strengthen human reasoning, from fact-checking and scenario planning to honest AI standards and better coordination, and explores how to keep humans central while building trustworthy, society-wide sensemaking.
How to Avoid Two AI Catastrophes: Domination and Chaos (with Nora Ammann)
Technical specialist Nora Ammann of the UK's ARIA discusses how to steer a slow AI takeoff toward resilient, cooperative futures, covering risks from rogue AI and competition to scalable oversight, formal guarantees, secure infrastructure, and AI-supported bargaining.
How Humans Could Lose Power Without an AI Takeover (with David Duvenaud)
David Duvenaud examines gradual disempowerment after AGI, exploring how economic and political power and property rights could erode, why AI alignment may become unpopular, and what forecasting and governance might require.
Why the AI Race Undermines Safety (with Steven Adler)
Former OpenAI safety researcher Steven Adler discusses governing increasingly capable AI, including competitive race dynamics, gaps in testing and alignment, chatbot mental-health impacts, economic effects on labor, and international rules and audits before training superintelligent models.
Why OpenAI Is Trying to Silence Its Critics (with Tyler Johnston)
Tyler Johnston of the Midas Project discusses applying corporate accountability to the AI industry, focusing on OpenAI's actions, including its use of subpoenas against critics, and the need for transparency and public awareness of AI risks.
We're Not Ready for AGI (with Will MacAskill)
William MacAskill discusses his Better Futures essay series, arguing that improving the future's quality deserves equal priority to preventing catastrophe. The conversation explores moral error risks, AI character design, space governance, and ethical reasoning for AI systems.
What Happens When Insiders Sound the Alarm on AI? (with Karl Koch)
Karl Koch discusses the AI Whistleblower Initiative, focusing on transparency and protections for AI insiders who identify safety risks. The episode explores current policies, legal gaps, and practical guidance for potential whistleblowers.
Can Machines Be Truly Creative? (with Maya Ackerman)
Maya Ackerman discusses human and machine creativity, exploring its definition, how AI alignment impacts it, and the role of hallucination. The conversation also covers strategies for human-AI collaboration.
From Research Labs to Product Companies: AI's Transformation (with Parmy Olson)
Parmy Olson discusses the transformation of AI companies from research labs to product businesses. The episode explores how funding pressures impact company missions, the role of personalities, safety challenges, and industry power consolidation.
Can Defense in Depth Work for AI? (with Adam Gleave)
Adam Gleave, CEO of FAR.AI, discusses post-AGI scenarios, risks of gradual disempowerment, defense-in-depth safety strategies, scalable oversight for AI deception, and the challenges of interpretability, as well as FAR.AI's integrated research and policy work.
How We Keep Humans in Control of AI (with Beatrice Erkers)
Beatrice Erkers discusses the AI Pathways project, focusing on approaches to maintaining human oversight and control over AI, including tool AI and decentralized development, and examines trade-offs and strategies for safer AI futures.