Andrea Miotti on a Narrow Path to Safe, Transformative AI
Andrea Miotti joins the podcast to discuss "A Narrow Path" — a roadmap to safe, transformative AI.
View episode
Tamay Besiroglu joins the podcast to discuss scaling, AI capabilities in 2030, breakthroughs in AI agents and planning, automating work, the uncertainties of investing in AI, and scaling laws for inference-time compute.
View episode
Ryan Greenblatt joins the podcast to discuss AI control, timelines, takeoff speeds, misalignment, and slowing down around human-level AI.
View episode
Anton Korinek joins the podcast to discuss the effects of automation on wages and labor, how we measure the complexity of tasks, the economics of an intelligence explosion, and the market structure of the AI industry.
View episode
Christian Ruhl joins the podcast to discuss US-China competition and the risk of war, official versus unofficial diplomacy, hotlines between countries, catastrophic biological risks, ultraviolet germicidal light, and ancient civilizational collapse.
View episode
Dan Faggella joins the podcast to discuss whether humanity should eventually create AGI, how AI will change power dynamics between institutions, what drives AI progress, and which industries are implementing AI successfully.
View episode
Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI.
View episode
Annie Jacobsen joins the podcast to lay out a second-by-second timeline for how nuclear war could happen.
View episode
Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating.
View episode
Roman Yampolskiy joins the podcast again to discuss whether AI is like a Shoggoth, whether scaling laws will hold for more agent-like AIs, evidence that AI is uncontrollable, and whether designing human-like AI would be safer than the current development path.
View episode
Carl Robichaud joins the podcast to discuss the new nuclear arms race, how much world leaders and ideologies matter for nuclear risk, and how to reach a stable, low-risk era.
View episode
Darren McKee joins the podcast to discuss how AI might be difficult to control, which goals and traits AI systems will develop, and whether there's a unified solution to AI alignment.
View episode