Why Building Superintelligence Means Human Extinction (with Nate Soares)
Nate Soares discusses his book on the risks of superintelligent AI, arguing that current approaches make AI unpredictable and uncontrollable, and advocates for an international ban on research toward superintelligence.