William MacAskill discusses his Better Futures essay series, arguing that improving the quality of the future deserves the same priority as preventing catastrophe. The conversation explores moral error risks, AI character design, space governance, and ethical reasoning for AI systems.
Maya Ackerman discusses human and machine creativity, exploring how creativity is defined, how AI alignment affects it, and the role of hallucination. The conversation also covers strategies for human-AI collaboration.
Beatrice Erkers discusses the AI Pathways project, focusing on approaches for maintaining human oversight and control over AI, including tool AI and decentralized development. The conversation also examines trade-offs and strategies for safer AI futures.