Why AIs Misbehave and How We Could Lose Control (with Jeffrey Ladish)
On this episode, Jeffrey Ladish from Palisade Research joins me to discuss the rapid pace of AI progress and the risks of losing control over powerful systems.
View episode
Ann Pace joins the podcast to discuss the work of Wise Ancestors.
View episode
David "davidad" Dalrymple joins the podcast to explore Safeguarded AI — an approach to ensuring the safety of highly advanced AI systems.
View episode
Connor Leahy joins the podcast to discuss the motivations of AGI corporations, how modern AI is "grown", the need for a science of intelligence, the effects of AI on work, the radical implications of superintelligence, open-source AI, and what you might be able to do about all of this.
View episode
Suzy Shepherd joins the podcast to discuss her new short film "Writing Doom", which deals with AI risk.
View episode
Andrea Miotti joins the podcast to discuss "A Narrow Path" — a roadmap to safe, transformative AI.
View episode
Tamay Besiroglu joins the podcast to discuss scaling, AI capabilities in 2030, breakthroughs in AI agents and planning, automating work, the uncertainties of investing in AI, and scaling laws for inference-time compute.
View episode
Ryan Greenblatt joins the podcast to discuss AI control, timelines, takeoff speeds, misalignment, and slowing down around human-level AI.
View episode
Anton Korinek joins the podcast to discuss the effects of automation on wages and labor, how we measure the complexity of tasks, the economics of an intelligence explosion, and the market structure of the AI industry.
View episode
Christian Ruhl joins the podcast to discuss US-China competition and the risk of war, official versus unofficial diplomacy, hotlines between countries, catastrophic biological risks, ultraviolet germicidal light, and ancient civilizational collapse.
View episode
Dan Faggella joins the podcast to discuss whether humanity should eventually create AGI, how AI will change power dynamics between institutions, what drives AI progress, and which industries are implementing AI successfully.
View episode
Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI.
View episode