Samuel Hammond joins the podcast to discuss how AGI will transform economies, governments, institutions, and other power structures. You can read Samuel's blog at https://www.secondbest.ca

Timestamps:
00:00 Is AGI close?
06:56 Compute versus data
09:59 Information theory
20:36 Universality of learning
24:53 Hard steps in evolution
30:30 Governments and advanced AI
40:33 How will AI transform the economy?
55:26 How will AI change transaction costs?
1:00:31 Isolated thinking about AI
1:09:43 AI and Leviathan
1:13:01 Informational resolution
1:18:36 Open-source AI
1:21:24 AI will decrease state power
1:33:17 Timeline of a techno-feudalist future
1:40:28 Alignment difficulty and AI scale
1:45:19 Solving robotics
1:54:40 A constrained Leviathan
1:57:41 An Apollo Project for AI safety
2:04:29 Secure "gain-of-function" AI research
2:06:43 Is the market expecting AGI soon?
Former OpenAI safety researcher Stephen Adler discusses governing increasingly capable AI, including competitive race dynamics, gaps in testing and alignment, chatbot mental-health impacts, economic effects on labor, and international rules and audits before training superintelligent models.
Tyler Johnston of the Midas Project discusses applying corporate accountability to the AI industry, focusing on OpenAI's actions, including its use of subpoenas, and the need for transparency and public awareness of AI risks.
Karl Koch discusses the AI Whistleblower Initiative, focusing on transparency and protections for AI insiders who identify safety risks. The episode explores current policies, legal gaps, and practical guidance for potential whistleblowers.