
Brain-Like AGI and Why It's Dangerous (with Steven Byrnes)
On this episode, Steven Byrnes joins me to discuss brain-like AGI safety.
Discusses technological progress broadly and its implications for humanity's trajectory. Covers synthetic biology, space exploration, computation, emerging technologies, forecasting, and how science and innovation shape possible futures.
On this episode, Ege Erdil from Epoch AI joins me to discuss their new GATE model of AI development, what evolution and brain efficiency tell us about AGI requirements, how AI might impact wages and labor markets, and what it takes to train models with long-term planning.
In this special episode, we feature Nathan Labenz interviewing Nicholas Carlini on the Cognitive Revolution podcast.
On this episode, I interview Anthony Aguirre, Executive Director of the Future of Life Institute, about his new essay Keep the Future Human: https://keepthefuturehuman.ai. AI companies are explicitly working toward AGI and are likely to succeed soon, possibly within years. Keep the Future Huma...
On this episode, physicist and hedge fund manager Samir Varma joins me to discuss whether AIs could have free will (and what that means), the emerging field of AI psychology, and which concepts they might rely on.
On this episode, Jeffrey Ladish from Palisade Research joins me to discuss the rapid pace of AI progress and the risks of losing control over powerful systems.
Ann Pace joins the podcast to discuss the work of Wise Ancestors.
David "davidad" Dalrymple joins the podcast to explore Safeguarded AI — an approach to ensuring the safety of highly advanced AI systems.
Nick Allardice joins the podcast to discuss how GiveDirectly uses AI to target cash transfers and predict natural disasters.
Nathan Labenz joins the podcast to provide a comprehensive overview of AI progress since the release of GPT-4.
Connor Leahy joins the podcast to discuss the motivations of AGI corporations, how modern AI is "grown", the need for a science of intelligence, the effects of AI on work, the radical implications of superintelligence, open-source AI, and what you might be able to do about all of this.
View episodeNo matter your level of experience or seniority, there is something you can do to help us ensure the future of life is positive.