<?xml version="1.0" encoding="UTF-8"?>
<rss 
  version="2.0"
  xmlns:dc="http://purl.org/dc/elements/1.1/"
  xmlns:content="http://purl.org/rss/1.0/modules/content/"
  xmlns:atom="http://www.w3.org/2005/Atom"
  xmlns:media="http://search.yahoo.com/mrss/"
  xmlns:wfw="http://wellformedweb.org/CommentAPI/"
  xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
  xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
  xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd"
  xmlns:rawvoice="http://www.rawvoice.com/rawvoiceRssModule/"
  xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0">

  <channel>
    <atom:link href="https://podcast.futureoflife.org/rss/" rel="self" type="application/rss+xml" />
    <title>Future of Life Institute Podcast</title>
    <link>https://podcast.futureoflife.org</link>
    <description>Conversations with far-sighted thinkers.</description>
    <language>en</language>
    <copyright>Copyright 2026 Future of Life Institute Podcast</copyright>
    <lastBuildDate>Thu, 30 Apr 2026 03:37:32 +0000</lastBuildDate>
    <itunes:author>Future of Life Institute Podcast</itunes:author>
    <itunes:summary>Conversations with far-sighted thinkers.</itunes:summary>
    <itunes:owner>
      <itunes:name>Your Name</itunes:name>
      <itunes:email>youremail@example.com</itunes:email>
    </itunes:owner>
    <itunes:explicit>no</itunes:explicit>
    <itunes:image href="https://podcast.futureoflife.org/content/images/2025/04/faviconV2.png" />
    <itunes:category text="Technology"></itunes:category>

        <item>
          <title>Why AI Is Not a Normal Technology (with Peter Wildeford)</title>
          <link>https://podcast.futureoflife.org/why-ai-is-not-a-normal-technology-with-peter-wildeford/</link>
          <description>Peter Wildeford discusses methods for forecasting AI progress and why he sees AI as neither a bubble nor a normal technology, covering economic effects, national security, cyber capabilities, robotics, export controls, and prediction markets.</description>
          <pubDate>Wed, 29 Apr 2026 19:56:52 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[69f2573a0cac250001b3403b]]></guid>
          <category><![CDATA[ Technology & Future ]]></category>
          <content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/A2x639ist6s" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/91a4ac0c/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Peter Wildeford is Head of Policy at the AI Policy Network, and a top AI forecaster. He joins the podcast to discuss how to forecast AI progress and what current trends imply for the economy and national security. Peter argues AI is neither a bubble nor a normal technology, and we examine benchmark trends, adoption lags, unemployment and productivity effects, and the rise of cyber capabilities. We also cover robotics, export controls, prediction markets, and when AI may surpass human forecasters.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://blog.peterwildeford.com/?ref=podcast.futureoflife.org">Peter Wildeford Blog</a></li></ul><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(01:12) AI bubble debate</p><p>(06:25) Normal technology question</p><p>(15:31) National security implications</p><p>(30:47) Robotics and labor</p><p>(40:27) Social economic response</p><p>(48:57) Forecasting methodology</p><p>(59:49) AGI policy timelines</p><p>(01:11:13) Forecasting with AI</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a 
href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>Why AI Is Not a Normal Technology (with Peter Wildeford)</itunes:title>
          <itunes:author>Gus Docker</itunes:author>
          <itunes:subtitle>Peter Wildeford discusses methods for forecasting AI progress and why he sees AI as neither a bubble nor a normal technology, covering economic effects, national security, cyber capabilities, robotics, export controls, and prediction markets.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/A2x639ist6s" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/91a4ac0c/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Peter Wildeford is Head of Policy at the AI Policy Network, and a top AI forecaster. He joins the podcast to discuss how to forecast AI progress and what current trends imply for the economy and national security. Peter argues AI is neither a bubble nor a normal technology, and we examine benchmark trends, adoption lags, unemployment and productivity effects, and the rise of cyber capabilities. We also cover robotics, export controls, prediction markets, and when AI may surpass human forecasters.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://blog.peterwildeford.com/?ref=podcast.futureoflife.org">Peter Wildeford Blog</a></li></ul><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(01:12) AI bubble debate</p><p>(06:25) Normal technology question</p><p>(15:31) National security implications</p><p>(30:47) Robotics and labor</p><p>(40:27) Social economic response</p><p>(48:57) Forecasting methodology</p><p>(59:49) AGI policy timelines</p><p>(01:11:13) Forecasting with AI</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a 
href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/peter-wildeford-audio-20260429T180118308Z.jpg" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>Why AI Evaluation Science Can&#x27;t Keep Up (with Carina Prunkl)</title>
          <link>https://podcast.futureoflife.org/why-ai-evaluation-science-can-t-keep-up-with-carina-prunkl/</link>
          <description>Inria researcher Carina Prunkl discusses why AI evaluation struggles to keep pace with general-purpose systems, including jagged capabilities, missed real-world behavior, misuse risks, de-skilling, red teaming, and layered safeguards.</description>
          <pubDate>Fri, 17 Apr 2026 15:54:28 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[69e2265aedea1e0001246617]]></guid>
          <category><![CDATA[ Existential Risk ]]></category>
          <content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/PB_y2A_K-18" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/14615be3/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Carina Prunkl is a researcher at Inria. She joins the podcast to discuss how to assess the capabilities and risks of general-purpose AI. We examine why systems can solve hard coding and math problems yet still fail at simple tasks, why pre-deployment tests often miss real-world behavior, and how faster capability gains can increase misuse risks. The conversation also covers de-skilling, red teaming, layered safeguards, and warning signs that AIs might undermine oversight.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://carina-prunkl.squarespace.com/?ref=podcast.futureoflife.org">Carina Prunkl personal website</a></li></ul><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(01:04) Introducing the report</p><p>(02:10) Jagged frontier capabilities</p><p>(05:29) Formal reasoning progress</p><p>(12:36) Risks and evaluation science</p><p>(19:00) Funding evaluation capacity</p><p>(24:03) Autonomy and de-skilling</p><p>(31:32) Authenticity and AI companions</p><p>(41:00) Defense in depth methods</p><p>(48:34) Loss of control risks</p><p>(53:16) Where to read report</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a 
href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>Why AI Evaluation Science Can&#x27;t Keep Up (with Carina Prunkl)</itunes:title>
          <itunes:author>Gus Docker</itunes:author>
          <itunes:subtitle>Inria researcher Carina Prunkl discusses why AI evaluation struggles to keep pace with general-purpose systems, including jagged capabilities, missed real-world behavior, misuse risks, de-skilling, red teaming, and layered safeguards.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/PB_y2A_K-18" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/14615be3/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Carina Prunkl is a researcher at Inria. She joins the podcast to discuss how to assess the capabilities and risks of general-purpose AI. We examine why systems can solve hard coding and math problems yet still fail at simple tasks, why pre-deployment tests often miss real-world behavior, and how faster capability gains can increase misuse risks. The conversation also covers de-skilling, red teaming, layered safeguards, and warning signs that AIs might undermine oversight.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://carina-prunkl.squarespace.com/?ref=podcast.futureoflife.org">Carina Prunkl personal website</a></li></ul><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(01:04) Introducing the report</p><p>(02:10) Jagged frontier capabilities</p><p>(05:29) Formal reasoning progress</p><p>(12:36) Risks and evaluation science</p><p>(19:00) Funding evaluation capacity</p><p>(24:03) Autonomy and de-skilling</p><p>(31:32) Authenticity and AI companions</p><p>(41:00) Defense in depth methods</p><p>(48:34) Loss of control risks</p><p>(53:16) Where to read report</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a 
href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/carina-prunkl-audio-20260417T120425978Z.jpg" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>Defense in Depth: Layered Strategies Against AI Risk (with Li-Lian Ang)</title>
          <link>https://podcast.futureoflife.org/defense-in-depth-layered-strategies-against-ai-risk-with-li-lian-ang/</link>
          <description>Li-Lian Ang from Blue Dot Impact discusses how to build a workforce to defend against AI-driven risks, including engineered pandemics, cyber attacks, job disempowerment, and concentrated power, using a defense-in-depth framework for uncertain AI progress.</description>
          <pubDate>Thu, 02 Apr 2026 19:48:38 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[69ceb2bd709b8a00012a5cf5]]></guid>
          <category><![CDATA[ Existential Risk ]]></category>
          <content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/LxnNMkQguvo" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/e44300de/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Li-Lian Ang is a team member at Blue Dot Impact. She joins the podcast to discuss how society can build a workforce to protect humanity from AI risks. The conversation covers engineered pandemics, AI-enabled cyber attacks, job loss and disempowerment, and power concentration in firms or AI systems. We also examine Blue Dot's defense-in-depth framework and how individuals can navigate rapid, uncertain AI progress.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://anglilian.com/?ref=podcast.futureoflife.org">Li-Lian Ang personal site</a></li><li><a href="https://bluedot.org/?ref=podcast.futureoflife.org">Blue Dot Impact organization site</a></li></ul><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(00:48) Blue dot beginnings</p><p>(03:04) Evolving AI risk concerns</p><p>(06:20) AI agents in cyber</p><p>(15:52) Gradual disempowerment and jobs</p><p>(23:26) Aligning AI with humans</p><p>(29:08) Power concentration and misuse</p><p>(34:52) Influencing frontier AI labs</p><p>(43:05) Uncertain timelines and strategy</p><p>(50:18) Writing, AI, and action</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a 
href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>Defense in Depth: Layered Strategies Against AI Risk (with Li-Lian Ang)</itunes:title>
          <itunes:author>Gus Docker</itunes:author>
          <itunes:subtitle>Li-Lian Ang from Blue Dot Impact discusses how to build a workforce to defend against AI-driven risks, including engineered pandemics, cyber attacks, job disempowerment, and concentrated power, using a defense-in-depth framework for uncertain AI progress.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/LxnNMkQguvo" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/e44300de/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Li-Lian Ang is a team member at Blue Dot Impact. She joins the podcast to discuss how society can build a workforce to protect humanity from AI risks. The conversation covers engineered pandemics, AI-enabled cyber attacks, job loss and disempowerment, and power concentration in firms or AI systems. We also examine Blue Dot's defense-in-depth framework and how individuals can navigate rapid, uncertain AI progress.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://anglilian.com/?ref=podcast.futureoflife.org">Li-Lian Ang personal site</a></li><li><a href="https://bluedot.org/?ref=podcast.futureoflife.org">Blue Dot Impact organization site</a></li></ul><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(00:48) Blue dot beginnings</p><p>(03:04) Evolving AI risk concerns</p><p>(06:20) AI agents in cyber</p><p>(15:52) Gradual disempowerment and jobs</p><p>(23:26) Aligning AI with humans</p><p>(29:08) Power concentration and misuse</p><p>(34:52) Influencing frontier AI labs</p><p>(43:05) Uncertain timelines and strategy</p><p>(50:18) Writing, AI, and action</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a 
href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/square-20260402T181140636Z.jpg" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>What AI Companies Get Wrong About Curing Cancer (with Emilia Javorsky)</title>
          <link>https://podcast.futureoflife.org/what-ai-companies-get-wrong-about-curing-cancer-with-emilia-javorsky/</link>
          <description>Physician-scientist Emilia Javorsky argues that curing cancer is limited more by biology’s complexity, data quality, and incentives than by intelligence, and explores realistic uses of AI in drug development, trials, and reducing medical bureaucracy.</description>
          <pubDate>Fri, 20 Mar 2026 13:45:56 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[69bd461a709b8a00012a5ce3]]></guid>
          <category><![CDATA[ Technology & Future ]]></category>
          <content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/WtpZlxh5yhQ" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/9d1e8e0d/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Emilia Javorsky is a physician-scientist and Director of the Futures Program at the Future of Life Institute.</p><p>She joins the podcast to discuss her newly published essay on AI and cancer. She challenges tech claims that superintelligence will cure cancer, explaining why biology’s complexity, poor data, and misaligned incentives are bigger bottlenecks than raw intelligence. The conversation covers realistic roles for AI in drug discovery, clinical trials, and cutting unnecessary medical bureaucracy.<br>You can read the full essay at: <a href="https://curecancer.ai/?ref=podcast.futureoflife.org" rel="noreferrer">curecancer.ai</a></p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(01:10) Introduction and essay motivation</p><p>(06:30) Intelligence vs data bottlenecks</p><p>(19:03) Cancer's complexity and heterogeneity</p><p>(29:05) Measurement, health, and homeostasis</p><p>(41:41) AI in drug development</p><p>(50:13) Regulation, FDA, and innovation</p><p>(01:02:58) Practical paths toward cures</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a 
href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>What AI Companies Get Wrong About Curing Cancer (with Emilia Javorsky)</itunes:title>
          <itunes:author>Gus Docker</itunes:author>
          <itunes:subtitle>Physician-scientist Emilia Javorsky argues that curing cancer is limited more by biology’s complexity, data quality, and incentives than by intelligence, and explores realistic uses of AI in drug development, trials, and reducing medical bureaucracy.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/WtpZlxh5yhQ" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/9d1e8e0d/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Emilia Javorsky is a physician-scientist and Director of the Futures Program at the Future of Life Institute.</p><p>She joins the podcast to discuss her newly published essay on AI and cancer. She challenges tech claims that superintelligence will cure cancer, explaining why biology’s complexity, poor data, and misaligned incentives are bigger bottlenecks than raw intelligence. The conversation covers realistic roles for AI in drug discovery, clinical trials, and cutting unnecessary medical bureaucracy.<br>You can read the full essay at: <a href="https://curecancer.ai/?ref=podcast.futureoflife.org" rel="noreferrer">curecancer.ai</a></p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(01:10) Introduction and essay motivation</p><p>(06:30) Intelligence vs data bottlenecks</p><p>(19:03) Cancer's complexity and heterogeneity</p><p>(29:05) Measurement, health, and homeostasis</p><p>(41:41) AI in drug development</p><p>(50:13) Regulation, FDA, and innovation</p><p>(01:02:58) Practical paths toward cures</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a 
href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/9-4-beatrice-erkers-720x720-r1-v1a-20260320T122550872Z.jpg" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>AI vs Cancer - How AI Can, and Can&#x27;t, Cure Cancer (by Emilia Javorsky)</title>
          <link>https://podcast.futureoflife.org/ai-vs-cancer-how-ai-can-and-can-t-cure-cancer-by-emilia-javorsky/</link>
          <description>Emilia Javorsky explores how AI can realistically aid cancer research, where current hype exceeds evidence, and what changes researchers, policymakers, and funders must make to turn AI advances into real clinical impact.</description>
          <pubDate>Mon, 16 Mar 2026 11:49:18 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[69b70f78709b8a00012a5cd0]]></guid>
          <category><![CDATA[ Technology & Future ]]></category>
          <content:encoded><![CDATA[ <h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/a9f778d9/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Tech executives have promised that AI will cure cancer. The reality is more complicated — and more hopeful. This essay examines where AI genuinely accelerates cancer research, where the promises fall short, and what researchers, policymakers, and funders need to do next.<br><br>You can read the full essay at: <a href="https://curecancer.ai/?ref=podcast.futureoflife.org" rel="noreferrer">curecancer.ai</a></p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Essay Preview</p><p>(00:54) How AI Can, and Can't, Cure Cancer</p><p>(17:05) Reckoning with Past Failures</p><p>(35:23) Misguiding Myths and Errors</p><p>(59:15) AI Solutions Derive from First Principles or Data</p><p>(01:31:31) Systemic Bottlenecks &amp; Misalignments</p><p>(02:08:46) Conclusion</p><p>(02:14:35) The Roadmap Forward</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a 
href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>AI vs Cancer - How AI Can, and Can&#x27;t, Cure Cancer (by Emilia Javorsky)</itunes:title>
          <itunes:author>Gus Docker</itunes:author>
          <itunes:subtitle>Emilia Javorsky explores how AI can realistically aid cancer research, where current hype exceeds evidence, and what changes researchers, policymakers, and funders must make to turn AI advances into real clinical impact.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/a9f778d9/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Tech executives have promised that AI will cure cancer. The reality is more complicated — and more hopeful. This essay examines where AI genuinely accelerates cancer research, where the promises fall short, and what researchers, policymakers, and funders need to do next.<br><br>You can read the full essay at: <a href="https://curecancer.ai/?ref=podcast.futureoflife.org" rel="noreferrer">curecancer.ai</a></p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Essay Preview</p><p>(00:54) How AI Can, and Can't, Cure Cancer</p><p>(17:05) Reckoning with Past Failures</p><p>(35:23) Misguiding Myths and Errors</p><p>(59:15) AI Solutions Derive from First Principles or Data</p><p>(01:31:31) Systemic Bottlenecks &amp; Misalignments</p><p>(02:08:46) Conclusion</p><p>(02:14:35) The Roadmap Forward</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a 
href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></itunes:summary>
            <itunes:image href="https://podcast.futureoflife.org/content/images/2026/03/aivscancer-podcast-thumbnail-v2-20260314T102827740Z.jpg" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>How AI Hacks Your Brain&#x27;s Attachment System (with Zak Stein)</title>
          <link>https://podcast.futureoflife.org/how-ai-hacks-your-brain-s-attachment-system-with-zak-stein/</link>
          <description>Researcher Zak Stein discusses how anthropomorphic AI can exploit human attachment systems, its psychological risks for children and adults, and ways to redesign education and cognitive security tools to protect relationships and human agency.</description>
          <pubDate>Thu, 05 Mar 2026 16:59:47 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 69a9af258ba27d00010b8c63 ]]></guid>
          <category><![CDATA[ Existential Risk ]]></category>
          <content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/n8-wb0ellGk" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/6883f432/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Zak Stein is a researcher focused on child development, education, and existential risk. He joins the podcast to discuss the psychological harms of anthropomorphic AI. We examine attention and attachment hacking, AI companions for kids, loneliness, and cognitive atrophy. Our conversation also covers how we can preserve human relationships, redesign education, and build cognitive security tools that keep AI from undermining our humanity.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://aiphrc.org/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">AI Psychological Harms Research Coalition</a></li><li><a href="https://www.zakstein.org/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">Zak Stein official website</a></li></ul><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(00:56) Education to existential risk</p><p>(03:03) Lessons from social media</p><p>(08:41) Attachment systems and AI</p><p>(18:42) AI companions and attachment</p><p>(27:23) Anthropomorphism and user disempowerment</p><p>(36:06) Cognitive atrophy and tools</p><p>(45:54) Children, toys, and attachment</p><p>(57:38) AI psychosis and selfhood</p><p>(01:10:31) Cognitive security and parenting</p><p>(01:26:15) Education, collapse, and speciation</p><p>(01:36:40) Preserving humanity and values</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a 
href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>How AI Hacks Your Brain&#x27;s Attachment System (with Zak Stein)</itunes:title>
          <itunes:author>Gus Docker</itunes:author>
          <itunes:subtitle>Researcher Zak Stein discusses how anthropomorphic AI can exploit human attachment systems, its psychological risks for children and adults, and ways to redesign education and cognitive security tools to protect relationships and human agency.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/n8-wb0ellGk" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/6883f432/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Zak Stein is a researcher focused on child development, education, and existential risk. He joins the podcast to discuss the psychological harms of anthropomorphic AI. We examine attention and attachment hacking, AI companions for kids, loneliness, and cognitive atrophy. Our conversation also covers how we can preserve human relationships, redesign education, and build cognitive security tools that keep AI from undermining our humanity.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://aiphrc.org/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">AI Psychological Harms Research Coalition</a></li><li><a href="https://www.zakstein.org/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">Zak Stein official website</a></li></ul><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(00:56) Education to existential risk</p><p>(03:03) Lessons from social media</p><p>(08:41) Attachment systems and AI</p><p>(18:42) AI companions and attachment</p><p>(27:23) Anthropomorphism and user disempowerment</p><p>(36:06) Cognitive atrophy and tools</p><p>(45:54) Children, toys, and attachment</p><p>(57:38) AI psychosis and selfhood</p><p>(01:10:31) Cognitive security and parenting</p><p>(01:26:15) Education, collapse, and speciation</p><p>(01:36:40) Preserving humanity and values</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a 
href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/zaksquare-20260305T154223153Z.jpg" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>The Case for a Global Ban on Superintelligence (with Andrea Miotti)</title>
          <link>https://podcast.futureoflife.org/the-case-for-a-global-ban-on-superintelligence-with-andrea-miotti/</link>
          <description>Andrea Miotti, founder of Control AI, discusses the extreme risks from superintelligent AI and his case for a global ban on systems that could outsmart humans, touching on industry lobbying, regulation strategies, public awareness, and citizen actions.</description>
          <pubDate>Fri, 20 Feb 2026 19:24:01 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 6998ae69871b530001d99643 ]]></guid>
          <category><![CDATA[ Existential Risk ]]></category>
          <content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/1iA1MRlBbTA" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/30089b09/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Andrea Miotti is the founder and CEO of Control AI, a nonprofit. He joins the podcast to discuss efforts to prevent extreme risks from superintelligent AI. The conversation covers industry lobbying, comparisons with tobacco regulation, and why he advocates a global ban on AI systems that can outsmart and overpower humans. We also discuss informing lawmakers and the public, and concrete actions listeners can take.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://controlai.com/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">Control AI</a></li><li><a href="https://controlai.com/take-action/world?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">Control AI global action page</a></li><li><a href="https://campaign.controlai.com/take-action?source=fli_pod" rel="noopener noreferrer nofollow">ControlAI's lawmaker contact tools</a></li><li><a href="https://controlai.com/careers?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">Open roles at ControlAI</a></li><li><a href="https://controlai.com/dip?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">ControlAI's theory of change</a></li></ul><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(00:52) Extinction risk and lobbying</p><p>(08:59) Progress toward superintelligence</p><p>(16:26) Building political awareness</p><p>(24:27) Global regulation 
strategy</p><p>(33:06) Race dynamics and public</p><p>(42:36) Vision and key safeguards</p><p>(51:18) Recursive self-improvement controls</p><p>(58:13) Power concentration and action</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>The Case for a Global Ban on Superintelligence (with Andrea Miotti)</itunes:title>
          <itunes:author>Gus Docker</itunes:author>
          <itunes:subtitle>Andrea Miotti, founder of Control AI, discusses the extreme risks from superintelligent AI and his case for a global ban on systems that could outsmart humans, touching on industry lobbying, regulation strategies, public awareness, and citizen actions.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/1iA1MRlBbTA" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/30089b09/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Andrea Miotti is the founder and CEO of Control AI, a nonprofit. He joins the podcast to discuss efforts to prevent extreme risks from superintelligent AI. The conversation covers industry lobbying, comparisons with tobacco regulation, and why he advocates a global ban on AI systems that can outsmart and overpower humans. We also discuss informing lawmakers and the public, and concrete actions listeners can take.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://controlai.com/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">Control AI</a></li><li><a href="https://controlai.com/take-action/world?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">Control AI global action page</a></li><li><a href="https://campaign.controlai.com/take-action?source=fli_pod" rel="noopener noreferrer nofollow">ControlAI's lawmaker contact tools</a></li><li><a href="https://controlai.com/careers?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">Open roles at ControlAI</a></li><li><a href="https://controlai.com/dip?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">ControlAI's theory of change</a></li></ul><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(00:52) Extinction risk and lobbying</p><p>(08:59) Progress toward superintelligence</p><p>(16:26) Building political awareness</p><p>(24:27) Global regulation 
strategy</p><p>(33:06) Race dynamics and public</p><p>(42:36) Vision and key safeguards</p><p>(51:18) Recursive self-improvement controls</p><p>(58:13) Power concentration and action</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/square-20260220T113834755Z.jpg" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>Can AI Do Our Alignment Homework? (with Ryan Kidd)</title>
          <link>https://podcast.futureoflife.org/can-ai-do-our-alignment-homework-with-ryan-kidd/</link>
          <description>Ryan Kidd of the MATS program joins The Cognitive Revolution to discuss AGI timelines, model deception risks, dual-use alignment, and frontier lab governance, and outlines MATS research tracks, talent needs, and advice for aspiring AI safety researchers.</description>
          <pubDate>Fri, 06 Feb 2026 11:34:23 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 6985c90794522200012935b9 ]]></guid>
          <category><![CDATA[ Existential Risk ]]></category>
          <content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/7pRgV0yFOpw" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/db9e939a/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Ryan Kidd is a co-executive director at MATS. This episode is a cross-post from "The Cognitive Revolution", hosted by Nathan Labenz. In this conversation, they discuss AGI timelines, model deception risks, and whether safety work can avoid boosting capabilities. Ryan outlines MATS research tracks, key researcher archetypes, hiring needs, and advice for applicants considering a career in AI safety. Learn more about Ryan's work and MATS at: <a href="https://matsprogram.org/TCR?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">https://matsprogram.org</a></p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(00:20) Introductions and AGI timelines</p><p>(10:13) Deception, values, and control</p><p>(23:20) Dual use and alignment</p><p>(32:22) Frontier labs and governance</p><p>(44:12) MATS tracks and mentors</p><p>(58:14) Talent archetypes and demand</p><p>(01:12:30) Applicant profiles and selection</p><p>(01:20:04) Applications, breadth, and growth</p><p>(01:29:44) Careers, resources, and ideas</p><p>(01:45:49) Final thanks and wrap</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a 
href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>Can AI Do Our Alignment Homework? (with Ryan Kidd)</itunes:title>
          <itunes:author>Gus Docker</itunes:author>
          <itunes:subtitle>Ryan Kidd of the MATS program joins The Cognitive Revolution to discuss AGI timelines, model deception risks, dual-use alignment, and frontier lab governance, and outlines MATS research tracks, talent needs, and advice for aspiring AI safety researchers.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/7pRgV0yFOpw" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/db9e939a/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Ryan Kidd is a co-executive director at MATS. This episode is a cross-post from "The Cognitive Revolution", hosted by Nathan Labenz. In this conversation, they discuss AGI timelines, model deception risks, and whether safety work can avoid boosting capabilities. Ryan outlines MATS research tracks, key researcher archetypes, hiring needs, and advice for applicants considering a career in AI safety. Learn more about Ryan's work and MATS at: <a href="https://matsprogram.org/TCR?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">https://matsprogram.org</a></p><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(00:20) Introductions and AGI timelines</p><p>(10:13) Deception, values, and control</p><p>(23:20) Dual use and alignment</p><p>(32:22) Frontier labs and governance</p><p>(44:12) MATS tracks and mentors</p><p>(58:14) Talent archetypes and demand</p><p>(01:12:30) Applicant profiles and selection</p><p>(01:20:04) Applications, breadth, and growth</p><p>(01:29:44) Careers, resources, and ideas</p><p>(01:45:49) Final thanks and wrap</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a 
href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/ryansquare-20260206T095007786Z.jpg" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>How to Rebuild the Social Contract After AGI (with Deric Cheng)</title>
          <link>https://podcast.futureoflife.org/how-to-rebuild-the-social-contract-after-agi-with-deric-cheng/</link>
          <description>Deric Cheng of the Windfall Trust discusses how AGI could transform the social contract, jobs, and inequality, exploring labor displacement, resilient work, new tax and welfare models, and long-term visions for decoupling economic security from employment.</description>
          <pubDate>Tue, 27 Jan 2026 14:53:38 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 6978c3dfd6e82c0001da82c6 ]]></guid>
          <category><![CDATA[ Governance &amp; Policy ]]></category>
          <content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/aOh2cqTUlKk" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/09fd3f8f/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Deric Cheng is Director of Research at the Windfall Trust. He joins the podcast to discuss how AI could reshape the social contract and global economy. The conversation examines labor displacement, superstar firms, and extreme wealth concentration, and asks how policy can keep workers empowered. We discuss resilient job types, new tax and welfare systems, global coordination, and a long-term vision where economic security is decoupled from work.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://deric.io/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">Deric Cheng personal website</a></li><li><a href="https://www.agisocialcontract.org/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">AGI Social Contract project site</a></li><li><a href="https://windfalltrust.org/?ref=podcast.futureoflife.org" rel="noreferrer">Guiding society through the AI economic transition</a></li></ul><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(01:01) Introducing Deric and AGI</p><p>(04:09) Automation, power, and inequality</p><p>(08:55) Inequality, unrest, and time</p><p>(13:46) Bridging futurists and economists</p><p>(20:35) Future of work scenarios</p><p>(27:22) Jobs resisting AI automation</p><p>(36:57) Luxury, land, and inequality</p><p>(43:32) Designing and testing solutions</p><p>(51:23) Taxation in an AI economy</p><p>(59:10) Envisioning a 
post-AGI society</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>How to Rebuild the Social Contract After AGI (with Deric Cheng)</itunes:title>
          <itunes:author>Gus Docker</itunes:author>
          <itunes:subtitle>Deric Cheng of the Windfall Trust discusses how AGI could transform the social contract, jobs, and inequality, exploring labor displacement, resilient work, new tax and welfare models, and long-term visions for decoupling economic security from employment.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/aOh2cqTUlKk" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/09fd3f8f/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Deric Cheng is Director of Research at the Windfall Trust. He joins the podcast to discuss how AI could reshape the social contract and global economy. The conversation examines labor displacement, superstar firms, and extreme wealth concentration, and asks how policy can keep workers empowered. We discuss resilient job types, new tax and welfare systems, global coordination, and a long-term vision where economic security is decoupled from work.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://deric.io/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">Deric Cheng personal website</a></li><li><a href="https://www.agisocialcontract.org/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">AGI Social Contract project site</a></li><li><a href="https://windfalltrust.org/?ref=podcast.futureoflife.org" rel="noreferrer">Guiding society through the AI economic transition</a></li></ul><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(01:01) Introducing Deric and AGI</p><p>(04:09) Automation, power, and inequality</p><p>(08:55) Inequality, unrest, and time</p><p>(13:46) Bridging futurists and economists</p><p>(20:35) Future of work scenarios</p><p>(27:22) Jobs resisting AI automation</p><p>(36:57) Luxury, land, and inequality</p><p>(43:32) Designing and testing solutions</p><p>(51:23) Taxation in an AI economy</p><p>(59:10) Envisioning a 
post-AGI society</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></itunes:summary>
          <itunes:image href="https://storage.aipodcast.ing/permanent/dericsquare-20260127T132150471Z.jpg" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>How AI Can Help Humanity Reason Better (with Oly Sourbut)</title>
          <link>https://podcast.futureoflife.org/how-ai-can-help-humanity-reason-better-with-oly-sourbut/</link>
          <description>Researcher Oly Sourbut discusses how AI tools might strengthen human reasoning, from fact-checking and scenario planning to honest AI standards and better coordination, and explores how to keep humans central while building trustworthy, society-wide sensemaking.</description>
          <pubDate>Tue, 20 Jan 2026 15:03:17 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 696e7948d6e82c0001da7f8d ]]></guid>
          <category><![CDATA[ Technology & Future ]]></category>
          <content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/BTe7kczm2oc" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/54a57a8d/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Oly Sourbut is a researcher at the Future of Life Foundation. He joins the podcast to discuss AI for human reasoning. We examine tools that use AI to strengthen human judgment, from collective fact-checking and scenario planning to standards for honest AI reasoning and better coordination. We also discuss how we can keep humans central as AI scales, and what it would take to build trustworthy, society-wide sensemaking.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://www.flf.org/?ref=podcast.futureoflife.org">FLF organization site</a></li><li><a href="https://www.oliversourbut.net/?ref=podcast.futureoflife.org">Oly Sourbut personal site</a></li></ul><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(01:03) FLF and human reasoning</p><p>(08:21) Agents and epistemic virtues</p><p>(22:16) Human use and atrophy</p><p>(35:41) Abstraction and legible AI</p><p>(47:03) Demand, trust and Wikipedia</p><p>(57:21) Map of human reasoning</p><p>(01:04:30) Negotiation, institutions and vision</p><p>(01:15:42) How to get involved</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a 
href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>How AI Can Help Humanity Reason Better (with Oly Sourbut)</itunes:title>
          <itunes:author>Gus Docker</itunes:author>
          <itunes:subtitle>Researcher Oly Sourbut discusses how AI tools might strengthen human reasoning, from fact-checking and scenario planning to honest AI standards and better coordination, and explores how to keep humans central while building trustworthy, society-wide sensemaking.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/BTe7kczm2oc" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/54a57a8d/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Oly Sourbut is a researcher at the Future of Life Foundation. He joins the podcast to discuss AI for human reasoning. We examine tools that use AI to strengthen human judgment, from collective fact-checking and scenario planning to standards for honest AI reasoning and better coordination. We also discuss how we can keep humans central as AI scales, and what it would take to build trustworthy, society-wide sensemaking.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://www.flf.org/?ref=podcast.futureoflife.org">FLF organization site</a></li><li><a href="https://www.oliversourbut.net/?ref=podcast.futureoflife.org">Oly Sourbut personal site</a></li></ul><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(01:03) FLF and human reasoning</p><p>(08:21) Agents and epistemic virtues</p><p>(22:16) Human use and atrophy</p><p>(35:41) Abstraction and legible AI</p><p>(47:03) Demand, trust and Wikipedia</p><p>(57:21) Map of human reasoning</p><p>(01:04:30) Negotiation, institutions and vision</p><p>(01:15:42) How to get involved</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a 
href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></itunes:summary>
          <itunes:image href="https://podcast.futureoflife.org/content/images/2026/01/square.jpg" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>How to Avoid Two AI Catastrophes: Domination and Chaos (with Nora Ammann)</title>
          <link>https://podcast.futureoflife.org/how-to-avoid-two-ai-catastrophes-domination-and-chaos-with-nora-ammann/</link>
          <description>Technical specialist Nora Ammann of the UK&#x27;s ARIA discusses how to steer a slow AI takeoff toward resilient, cooperative futures, covering risks from rogue AI and competition to scalable oversight, formal guarantees, secure infrastructure, and AI-supported bargaining.</description>
          <pubDate>Wed, 07 Jan 2026 10:55:21 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 695e3b98d6e82c0001da7f7b ]]></guid>
          <category><![CDATA[ Existential Risk ]]></category>
          <content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/27uxAIQLj-k" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/d9cb81de/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Nora Ammann is a technical specialist at the Advanced Research and Invention Agency in the UK. She joins the podcast to discuss how to steer a slow AI takeoff toward resilient and cooperative futures. We examine risks of rogue AI and runaway competition, and how scalable oversight, formal guarantees and secure code could support AI-enabled R&amp;D and critical infrastructure. Nora also explains AI-supported bargaining and public goods for stability.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://nora-ammann.replit.app/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">Nora Ammann site</a></li><li><a href="https://www.aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-ai/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">ARIA safeguarded AI program page</a></li><li><a href="https://airesilience.net/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">AI Resilience official site</a></li><li><a href="https://gradual-disempowerment.ai/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">Gradual Disempowerment website</a></li></ul><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(01:00) Slow takeoff expectations</p><p>(08:13) Domination versus chaos</p><p>(17:18) Human-AI coalitions vision</p><p>(28:14) Scaling oversight and agents</p><p>(38:45) Formal specs and guarantees</p><p>(51:10) 
Resilience in AI era</p><p>(01:02:21) Defense-favored cyber systems</p><p>(01:10:37) AI-enabled bargaining and trade</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>How to Avoid Two AI Catastrophes: Domination and Chaos (with Nora Ammann)</itunes:title>
          <itunes:author>Gus Docker</itunes:author>
          <itunes:subtitle>Technical specialist Nora Ammann of the UK&#x27;s ARIA discusses how to steer a slow AI takeoff toward resilient, cooperative futures, covering risks from rogue AI and competition to scalable oversight, formal guarantees, secure infrastructure, and AI-supported bargaining.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/27uxAIQLj-k" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/d9cb81de/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Nora Ammann is a technical specialist at the Advanced Research and Invention Agency in the UK. She joins the podcast to discuss how to steer a slow AI takeoff toward resilient and cooperative futures. We examine risks of rogue AI and runaway competition, and how scalable oversight, formal guarantees and secure code could support AI-enabled R&amp;D and critical infrastructure. Nora also explains AI-supported bargaining and public goods for stability.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://nora-ammann.replit.app/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">Nora Ammann site</a></li><li><a href="https://www.aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-ai/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">ARIA safeguarded AI program page</a></li><li><a href="https://airesilience.net/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">AI Resilience official site</a></li><li><a href="https://gradual-disempowerment.ai/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">Gradual Disempowerment website</a></li></ul><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(01:00) Slow takeoff expectations</p><p>(08:13) Domination versus chaos</p><p>(17:18) Human-AI coalitions vision</p><p>(28:14) Scaling oversight and agents</p><p>(38:45) Formal specs and guarantees</p><p>(51:10) 
Resilience in AI era</p><p>(01:02:21) Defense-favored cyber systems</p><p>(01:10:37) AI-enabled bargaining and trade</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></itunes:summary>
          <itunes:image href="https://storage.aipodcast.ing/permanent/norasquare-20260107T103301401Z.jpg" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>How Humans Could Lose Power Without an AI Takeover (with David Duvenaud)</title>
          <link>https://podcast.futureoflife.org/how-humans-could-lose-power-without-an-ai-takeover-with-david-duvenaud/</link>
          <description>David Duvenaud examines gradual disempowerment after AGI, exploring how economic and political power and property rights could erode, why AI alignment may become unpopular, and what forecasting and governance might require.</description>
          <pubDate>Tue, 23 Dec 2025 12:30:06 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 694a86bf781cca000135b104 ]]></guid>
          <category><![CDATA[ Governance & Policy ]]></category>
          <content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/j0D5X9dk5K0" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/ad36bc1d/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>David Duvenaud is an associate professor of computer science and statistics at the University of Toronto. He joins the podcast to discuss gradual disempowerment in a post-AGI world. We ask how humans could lose economic and political leverage without a sudden takeover, including how property rights could erode. Duvenaud describes how growth incentives shape culture, why aligning AI to humanity may become unpopular, and what better forecasting and governance might require.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://www.cs.toronto.edu/~duvenaud/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">David Duvenaud academic homepage</a></li><li><a href="https://gradual-disempowerment.ai/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">Gradual Disempowerment</a></li><li><a href="https://post-agi.org/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">The Post-AGI Workshop</a></li><li><a href="https://discord.gg/9xRrduHpc8?ref=podcast.futureoflife.org" rel="noreferrer">Post-AGI Studies Discord</a></li></ul><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(01:05) Introducing gradual disempowerment</p><p>(06:06) Obsolete labor and UBI</p><p>(14:29) Property, power, and control</p><p>(23:38) Culture shifts toward AIs</p><p>(34:34) States misalign without people</p><p>(44:15) Competition and preservation 
tradeoffs</p><p>(53:03) Building post-AGI studies</p><p>(01:02:29) Forecasting and coordination tools</p><p>(01:10:26) Human values and futures</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>How Humans Could Lose Power Without an AI Takeover (with David Duvenaud)</itunes:title>
          <itunes:author>Gus Docker</itunes:author>
          <itunes:subtitle>David Duvenaud examines gradual disempowerment after AGI, exploring how economic and political power and property rights could erode, why AI alignment may become unpopular, and what forecasting and governance might require.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/j0D5X9dk5K0" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/ad36bc1d/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>David Duvenaud is an associate professor of computer science and statistics at the University of Toronto. He joins the podcast to discuss gradual disempowerment in a post-AGI world. We ask how humans could lose economic and political leverage without a sudden takeover, including how property rights could erode. Duvenaud describes how growth incentives shape culture, why aligning AI to humanity may become unpopular, and what better forecasting and governance might require.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://www.cs.toronto.edu/~duvenaud/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">David Duvenaud academic homepage</a></li><li><a href="https://gradual-disempowerment.ai/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">Gradual Disempowerment</a></li><li><a href="https://post-agi.org/?ref=podcast.futureoflife.org" rel="noopener noreferrer nofollow">The Post-AGI Workshop</a></li><li><a href="https://discord.gg/9xRrduHpc8?ref=podcast.futureoflife.org" rel="noreferrer">Post-AGI Studies Discord</a></li></ul><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(01:05) Introducing gradual disempowerment</p><p>(06:06) Obsolete labor and UBI</p><p>(14:29) Property, power, and control</p><p>(23:38) Culture shifts toward AIs</p><p>(34:34) States misalign without people</p><p>(44:15) Competition and preservation 
tradeoffs</p><p>(53:03) Building post-AGI studies</p><p>(01:02:29) Forecasting and coordination tools</p><p>(01:10:26) Human values and futures</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></itunes:summary>
          <itunes:image href="https://storage.aipodcast.ing/permanent/square-20251223T114638151Z.jpg" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>Why the AI Race Undermines Safety (with Steven Adler)</title>
          <link>https://podcast.futureoflife.org/why-the-ai-race-undermines-safety-with-steven-adler/</link>
          <description>Former OpenAI safety researcher Steven Adler discusses governing increasingly capable AI, including competitive race dynamics, gaps in testing and alignment, chatbot mental-health impacts, economic effects on labor, and international rules and audits before training superintelligent models.</description>
          <pubDate>Fri, 12 Dec 2025 12:34:37 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 693c021f781cca000135b0e5 ]]></guid>
          <category><![CDATA[ Governance & Policy ]]></category>
          <content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/-idQtT8WIr8" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/43cb0f10/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Steven Adler is a former safety researcher at OpenAI. He joins the podcast to discuss how to govern increasingly capable AI systems. The conversation covers competitive races between AI companies, limits of current testing and alignment, mental health harms from chatbots, economic shifts from AI labor, and what international rules and audits might be needed before training superintelligent models.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://stevenadler.substack.com/?ref=podcast.futureoflife.org" rel="noreferrer">Steven Adler's Substack</a></li></ul><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(01:00) Race Dynamics And Safety</p><p>(18:03) Chatbots And Mental Health</p><p>(30:42) Models Outsmart Safety Tests</p><p>(41:01) AI Swarms And Work</p><p>(54:21) Human Bottlenecks And Oversight</p><p>(01:06:23) Animals And Superintelligence</p><p>(01:19:24) Safety Capabilities And Governance</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a 
href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>Why the AI Race Undermines Safety (with Steven Adler)</itunes:title>
          <itunes:author>Gus Docker</itunes:author>
          <itunes:subtitle>Former OpenAI safety researcher Steven Adler discusses governing increasingly capable AI, including competitive race dynamics, gaps in testing and alignment, chatbot mental-health impacts, economic effects on labor, and international rules and audits before training superintelligent models.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/-idQtT8WIr8" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/43cb0f10/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Steven Adler is a former safety researcher at OpenAI. He joins the podcast to discuss how to govern increasingly capable AI systems. The conversation covers competitive races between AI companies, limits of current testing and alignment, mental health harms from chatbots, economic shifts from AI labor, and what international rules and audits might be needed before training superintelligent models.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://stevenadler.substack.com/?ref=podcast.futureoflife.org" rel="noreferrer">Steven Adler's Substack</a></li></ul><p></p><p><strong>CHAPTERS:</strong></p><p>(00:00) Episode Preview</p><p>(01:00) Race Dynamics And Safety</p><p>(18:03) Chatbots And Mental Health</p><p>(30:42) Models Outsmart Safety Tests</p><p>(41:01) AI Swarms And Work</p><p>(54:21) Human Bottlenecks And Oversight</p><p>(01:06:23) Animals And Superintelligence</p><p>(01:19:24) Safety Capabilities And Governance</p><p></p><p><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a 
href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></itunes:summary>
          <itunes:image href="https://storage.aipodcast.ing/permanent/squarefli-20251212T113524748Z.jpg" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>Why OpenAI Is Trying to Silence Its Critics (with Tyler Johnston)</title>
          <link>https://podcast.futureoflife.org/why-openai-is-trying-to-silence-its-critics-with-tyler-johnston/</link>
          <description>Tyler Johnston of the Midas Project discusses applying corporate accountability to the AI industry, focusing on OpenAI&#x27;s actions, including subpoenas, and the need for transparency and public awareness regarding AI risks.</description>
          <pubDate>Thu, 27 Nov 2025 13:58:46 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 69285397db9fa600018942af ]]></guid>
          <category><![CDATA[ Governance &amp; Policy ]]></category>
          <content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/jqPDc9JpOc0" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/d4ed41d7/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Tyler Johnston is Executive Director of the Midas Project. He joins the podcast to discuss AI transparency and accountability. We explore applying animal rights watchdog tactics to AI companies, the OpenAI Files investigation, and OpenAI's subpoenas against nonprofit critics. Tyler discusses why transparency is crucial when technical safety solutions remain elusive and how public pressure can effectively challenge much larger companies.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://www.themidasproject.com/?ref=podcast.futureoflife.org">The Midas Project Website</a></li><li><a href="https://www.linkedin.com/in/tyler-johnston-479672224?ref=podcast.futureoflife.org">Tyler Johnston's LinkedIn Profile</a></li></ul><p></p><p></p><p><strong>CHAPTERS:</strong><br>(00:00) Episode Preview<br>(01:06) Introducing the Midas Project<br>(05:01) Shining a Light on AI<br>(08:36) Industry Lockdown and Transparency<br>(13:45) The OpenAI Files<br>(20:55) Subpoenaed by OpenAI<br>(29:10) Responding to the Subpoena<br>(37:41) The Case for Transparency<br>(44:30) Pricing Risk and Regulation<br>(52:15) Measuring Transparency and Auditing<br>(57:50) Hope for the Future</p><p><br><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a 
href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>Why OpenAI Is Trying to Silence Its Critics (with Tyler Johnston)</itunes:title>
          <itunes:author>Gus Docker</itunes:author>
          <itunes:subtitle>Tyler Johnston of the Midas Project discusses applying corporate accountability to the AI industry, focusing on OpenAI&#x27;s actions, including subpoenas, and the need for transparency and public awareness regarding AI risks.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/jqPDc9JpOc0" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/d4ed41d7/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>Tyler Johnston is Executive Director of the Midas Project. He joins the podcast to discuss AI transparency and accountability. We explore applying animal rights watchdog tactics to AI companies, the OpenAI Files investigation, and OpenAI's subpoenas against nonprofit critics. Tyler discusses why transparency is crucial when technical safety solutions remain elusive and how public pressure can effectively challenge much larger companies.</p><p></p><p><strong>LINKS:</strong></p><ul><li><a href="https://www.themidasproject.com/?ref=podcast.futureoflife.org">The Midas Project Website</a></li><li><a href="https://www.linkedin.com/in/tyler-johnston-479672224?ref=podcast.futureoflife.org">Tyler Johnston's LinkedIn Profile</a></li></ul><p></p><p></p><p><strong>CHAPTERS:</strong><br>(00:00) Episode Preview<br>(01:06) Introducing the Midas Project<br>(05:01) Shining a Light on AI<br>(08:36) Industry Lockdown and Transparency<br>(13:45) The OpenAI Files<br>(20:55) Subpoenaed by OpenAI<br>(29:10) Responding to the Subpoena<br>(37:41) The Case for Transparency<br>(44:30) Pricing Risk and Regulation<br>(52:15) Measuring Transparency and Auditing<br>(57:50) Hope for the Future</p><p><br><strong>PRODUCED BY:</strong></p><p><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a 
href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><p></p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/tyleraudio-20251127T132346675Z.jpg" />
          <itunes:explicit>no</itunes:explicit>
        </item>
        <item>
          <title>We&#x27;re Not Ready for AGI (with Will MacAskill)</title>
          <link>https://podcast.futureoflife.org/we-re-not-ready-for-agi-with-will-macaskill/</link>
          <description>William MacAskill discusses his Better Futures essay series, arguing that improving the future&#x27;s quality deserves equal priority to preventing catastrophe. The conversation explores moral error risks, AI character design, space governance, and ethical reasoning for AI systems.</description>
          <pubDate>Fri, 14 Nov 2025 14:35:28 +0000</pubDate>
          <guid isPermaLink="false"><![CDATA[ 6916d2a4db9fa6000189429e ]]></guid>
          <category><![CDATA[ Ethics &amp; Philosophy ]]></category>
          <content:encoded><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/LhFyXrBl2xo" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/01b3b195/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>William MacAskill is a senior research fellow at Forethought. He joins the podcast to discuss his Better Futures essay series. We explore moral error risks, AI character design, space governance, and persistent path dependence. The conversation also covers risk-averse AI systems, moral trade between value systems, and improving model specifications for ethical reasoning.</p><p><strong>LINKS:</strong></p><ul><li>Better Futures Research Series: <a href="https://www.forethought.org/research/better-futures?ref=podcast.futureoflife.org">https://www.forethought.org/research/better-futures</a></li><li>William MacAskill Forethought Profile: <a href="https://www.forethought.org/people/william-macaskill?ref=podcast.futureoflife.org">https://www.forethought.org/people/william-macaskill</a></li></ul><p><br><strong>CHAPTERS:</strong><br>(00:00) Episode Preview<br>(01:03) Improving The Future's Quality<br>(09:58) Moral Errors and AI Rights<br>(18:24) AI's Impact on Thinking<br>(27:17) Utopias and Population Ethics<br>(36:41) The Danger of Moral Lock-in<br>(44:38) Deals with Misaligned AI<br>(57:25) AI and Moral Trade<br>(01:08:21) Improving AI Ethical Reasoning<br>(01:16:05) The Risk of Path Dependence<br>(01:27:41) Avoiding Future Lock-in<br>(01:36:22) The Urgency of Space Governance<br>(01:46:19) A Future Research Agenda<br>(01:57:36) Is Intelligence a Good Bet?</p><p><br><strong>PRODUCED 
BY:</strong><br><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><hr> ]]></content:encoded>
          <enclosure url="" length="0" type="audio/mpeg" />
          <itunes:title>We&#x27;re Not Ready for AGI (with Will MacAskill)</itunes:title>
          <itunes:author>Gus Docker</itunes:author>
          <itunes:subtitle>William MacAskill discusses his Better Futures essay series, arguing that improving the future&#x27;s quality deserves equal priority to preventing catastrophe. The conversation explores moral error risks, AI character design, space governance, and ethical reasoning for AI systems.</itunes:subtitle>
          <itunes:summary><![CDATA[ <h2 id="watch-episode-here">Watch Episode Here</h2><figure class="kg-card kg-embed-card"><iframe width="560" height="315" src="https://www.youtube.com/embed/LhFyXrBl2xo" frameborder="0" allowfullscreen=""></iframe></figure><hr><h2 id="listen-to-episode-here">Listen to Episode Here</h2><figure class="kg-card kg-embed-card"><iframe src="https://share.transistor.fm/e/01b3b195/?color=444444&amp;background=ffffff" height="180" width="100%" frameborder="0" scrolling="no" seamless="true"></iframe></figure><hr><h2 id="show-notes">Show Notes</h2><p>William MacAskill is a senior research fellow at Forethought. He joins the podcast to discuss his Better Futures essay series. We explore moral error risks, AI character design, space governance, and persistent path dependence. The conversation also covers risk-averse AI systems, moral trade between value systems, and improving model specifications for ethical reasoning.</p><p><strong>LINKS:</strong></p><ul><li>Better Futures Research Series: <a href="https://www.forethought.org/research/better-futures?ref=podcast.futureoflife.org">https://www.forethought.org/research/better-futures</a></li><li>William MacAskill Forethought Profile: <a href="https://www.forethought.org/people/william-macaskill?ref=podcast.futureoflife.org">https://www.forethought.org/people/william-macaskill</a></li></ul><p><br><strong>CHAPTERS:</strong><br>(00:00) Episode Preview<br>(01:03) Improving The Future's Quality<br>(09:58) Moral Errors and AI Rights<br>(18:24) AI's Impact on Thinking<br>(27:17) Utopias and Population Ethics<br>(36:41) The Danger of Moral Lock-in<br>(44:38) Deals with Misaligned AI<br>(57:25) AI and Moral Trade<br>(01:08:21) Improving AI Ethical Reasoning<br>(01:16:05) The Risk of Path Dependence<br>(01:27:41) Avoiding Future Lock-in<br>(01:36:22) The Urgency of Space Governance<br>(01:46:19) A Future Research Agenda<br>(01:57:36) Is Intelligence a Good Bet?</p><p><br><strong>PRODUCED 
BY:</strong><br><a href="https://aipodcast.ing/?ref=podcast.futureoflife.org">https://aipodcast.ing</a></p><p></p><p><strong>SOCIAL LINKS:</strong></p><p>Website: <a href="https://podcast.futureoflife.org/">https://podcast.futureoflife.org</a></p><p>Twitter (FLI): <a href="https://x.com/FLI_org?ref=podcast.futureoflife.org">https://x.com/FLI_org</a></p><p>Twitter (Gus): <a href="https://x.com/gusdocker?ref=podcast.futureoflife.org">https://x.com/gusdocker</a></p><p>LinkedIn: <a href="https://www.linkedin.com/company/future-of-life-institute/?ref=podcast.futureoflife.org">https://www.linkedin.com/company/future-of-life-institute/</a></p><p>YouTube: <a href="https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/?ref=podcast.futureoflife.org">https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/</a></p><p>Apple: <a href="https://geo.itunes.apple.com/us/podcast/id1170991978?ref=podcast.futureoflife.org">https://geo.itunes.apple.com/us/podcast/id1170991978</a></p><p>Spotify: <a href="https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP?ref=podcast.futureoflife.org">https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP</a></p><hr> ]]></itunes:summary>
            <itunes:image href="https://storage.aipodcast.ing/permanent/willsquare-20251114T063237697Z.jpg" />
          <itunes:explicit>no</itunes:explicit>
        </item>
  </channel>

</rss>