Examines threats that could permanently curtail humanity's potential or cause human extinction. Includes nuclear warfare, engineered pandemics, climate catastrophe, unaligned AI, and other global catastrophic risks that threaten civilization's long-term survival.

Annie Jacobsen on Nuclear War - A Second-by-Second Timeline
Annie Jacobsen joins the podcast to lay out a second-by-second timeline for how nuclear war could happen.
Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating.
Roman Yampolskiy joins the podcast again to discuss whether AI is like a Shoggoth, whether scaling laws will hold for more agent-like AIs, evidence that AI is uncontrollable, and whether designing human-like AI would be safer than the current development path.
Carl Robichaud joins the podcast to discuss the new nuclear arms race, how much world leaders and ideologies matter for nuclear risk, and how to reach a stable, low-risk era.
Darren McKee joins the podcast to discuss how AI might be difficult to control, which goals and traits AI systems will develop, and whether there's a unified solution to AI alignment.
Dan Hendrycks joins the podcast again to discuss X.ai, how AI risk thinking has evolved, malicious use of AI, AI race dynamics between companies and between militaries, making AI organizations safer, and how representation engineering could help us understand AI traits like deception.
Steve Omohundro joins the podcast to discuss Provably Safe Systems, a paper he co-authored with FLI President Max Tegmark.
Tom Davidson joins the podcast to discuss how AI could quickly automate most cognitive tasks, including AI research, and why this would be risky.
Jason Crawford joins the podcast to discuss the history of progress, the future of economic growth, and the relationship between progress and risks from AI.
On this special episode of the podcast, Jaan Tallinn talks with Nathan Labenz about Jaan's model of AI risk, the future of AI development, and pausing giant AI experiments.
Joe Carlsmith joins the podcast to discuss how we change our minds about AI risk, gut feelings versus abstract models, and what to do if transformative AI is coming soon.
Dan Hendrycks joins the podcast to discuss evolutionary dynamics in AI development and how we could develop AI safely.