Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting
Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating. You can read more about Holly's work at https://pauseai.info

Timestamps:
00:00 Pausing AI
10:23 Risks during an AI pause
19:41 Hardware overhang
29:04 Technological progress
37:00 Safety research during a pause
54:42 Social dynamics of AI risk
1:10:00 What prevents cooperation?
1:18:21 What about China?
1:28:24 Protesting AGI corporations