Carl Robichaud joins the podcast to discuss the new nuclear arms race, how much world leaders and ideologies matter for nuclear risk, and how to reach a stable, low-risk era. You can learn more about Carl's work here: https://www.longview.org/about/carl-robichaud/

Timestamps:
00:00 A new nuclear arms race
08:07 How much do world leaders matter?
18:04 How much does ideology matter?
22:14 Do nuclear weapons cause stable peace?
31:29 North Korea
34:01 Have we overestimated nuclear risk?
43:24 Time pressure in nuclear decisions
52:00 Why so many nuclear warheads?
1:02:17 Has containment been successful?
1:11:34 Coordination mechanisms
1:16:31 Technological innovations
1:25:57 Public perception of nuclear risk
1:29:52 Easier access to nuclear weapons
1:33:31 Reaching a stable, low-risk era
Former OpenAI safety researcher Stephen Adler discusses governing increasingly capable AI, including competitive race dynamics, gaps in testing and alignment, chatbot mental-health impacts, economic effects on labor, and international rules and audits before training superintelligent models.
Tyler Johnston of the Midas Project discusses applying corporate accountability to the AI industry, focusing on OpenAI's actions, including its use of subpoenas, and the need for transparency and public awareness regarding AI risks.
Karl Koch discusses the AI Whistleblower Initiative, focusing on transparency and protections for AI insiders who identify safety risks. The episode explores current policies, legal gaps, and practical guidance for potential whistleblowers.