Conversations with far-sighted thinkers.
Featuring researchers, philosophers, and experts on technology, existential risks, and our shared future.
Featured episodes
Welcome to the Future of Life Institute Podcast, hosted by Gus Docker. Each week, we explore humanity's greatest challenges through conversations with leading researchers, philosophers, and experts.
This podcast examines the complex intersection of emerging technologies, existential risks, and our shared future. We dive into critical topics including artificial intelligence safety, biotechnology, and the long-term survival and flourishing of humanity.
Gus brings thoughtful curiosity to each interview, creating space for nuanced discussions that balance scientific precision with accessible insights. Join us as we explore how to navigate the unprecedented risks and opportunities of our time, and work toward a beneficial future for all life.
Latest episodes
Why the AI Race Undermines Safety (with Steven Adler)
Former OpenAI safety researcher Steven Adler discusses governing increasingly capable AI, including competitive race dynamics, gaps in testing and alignment, chatbot mental-health impacts, economic effects on labor, and international rules and audits before training superintelligent models.
Why OpenAI Is Trying to Silence Its Critics (with Tyler Johnston)
Tyler Johnston of the Midas Project discusses applying corporate accountability to the AI industry, focusing on OpenAI's actions, including subpoenas, and the need for transparency and public awareness regarding AI risks.
We're Not Ready for AGI (with Will MacAskill)
Will MacAskill discusses his Better Futures essay series, arguing that improving the future's quality deserves equal priority to preventing catastrophe. The conversation explores moral error risks, AI character design, space governance, and ethical reasoning for AI systems.
What Happens When Insiders Sound the Alarm on AI? (with Karl Koch)
Karl Koch discusses the AI Whistleblower Initiative, focusing on transparency and protections for AI insiders who identify safety risks. The episode explores current policies, legal gaps, and practical guidance for potential whistleblowers.
Can Machines Be Truly Creative? (with Maya Ackerman)
Maya Ackerman discusses human and machine creativity, exploring its definition, how AI alignment impacts it, and the role of hallucination. The conversation also covers strategies for human-AI collaboration.
From Research Labs to Product Companies: AI's Transformation (with Parmy Olson)
Parmy Olson discusses the transformation of AI companies from research labs to product businesses. The episode explores how funding pressures impact company missions, the role of personalities, safety challenges, and industry power consolidation.
Can Defense in Depth Work for AI? (with Adam Gleave)
Adam Gleave, CEO of FAR.AI, discusses post-AGI scenarios, risks of gradual disempowerment, defense-in-depth safety strategies, scalable oversight for AI deception, and the challenges of interpretability, as well as FAR.AI's integrated research and policy work.
How We Keep Humans in Control of AI (with Beatrice Erkers)
Beatrice Erkers discusses the AI pathways project, focusing on approaches to maintain human oversight and control over AI, including tool AI and decentralized development, and examines trade-offs and strategies for safer AI futures.
Breaking the Intelligence Curse (with Luke Drago)
Luke Drago discusses the potential societal and economic impacts of AI dominance, including changes in workplace structures, privacy concerns, and the importance of taking career risks during technological transitions.
What Markets Tell Us About AI Timelines (with Basil Halperin)
Basil Halperin discusses how financial markets and economic indicators, such as interest rates, can provide insights into AI development timelines and the potential economic impact of transformative AI.
Reasoning, Robots, and How to Prepare for AGI (with Benjamin Todd)
Benjamin Todd discusses the evolution of reasoning models in AI, potential bottlenecks in compute and robotics, and offers advice on personal preparation for AGI, including skills, networks, and resilience, with projections through 2030.
From Peak Horse to Peak Human: How AI Could Replace Us (with Calum Chace)
Calum Chace discusses the potential for AI to transform employment, exploring universal income, fully automated economies, AI-driven education, and the ethical challenges of attributing consciousness to machines.