Anders Sandberg from the Future of Humanity Institute joins the podcast to discuss ChatGPT, large language models, and what he's learned about the risks and benefits of AI.

Timestamps:
00:00 Introduction
00:40 ChatGPT
06:33 Will AI continue to surprise us?
16:22 How do language models fail?
24:23 Language models trained on their own output
27:29 Can language models write college-level essays?
35:03 Do language models understand anything?
39:59 How will AI models improve in the future?
43:26 AI safety in light of recent AI progress
51:28 AIs should be uncertain about values
Emilia Javorsky explores how AI can realistically aid cancer research, where current hype exceeds evidence, and what changes researchers, policymakers, and funders must make to turn AI advances into real clinical impact.
Researcher Zak Stein discusses how anthropomorphic AI can exploit human attachment systems, its psychological risks for children and adults, and ways to redesign education and cognitive security tools to protect relationships and human agency.
Andrea Miotti, founder of Control AI, discusses the extreme risks from superintelligent AI and his case for a global ban on systems that could outsmart humans, touching on industry lobbying, regulation strategies, public awareness, and citizen actions.