Christian Nunes joins the podcast to discuss deepfakes, how they impact women in particular, how we can protect ordinary victims of deepfakes, and the current landscape of deepfake legislation. You can learn more about Christian's work at https://now.org and about the Ban Deepfakes campaign at https://bandeepfakes.org
Timestamps:
00:00 The National Organization for Women (NOW)
05:37 Deepfakes and women
10:12 Protecting ordinary victims of deepfakes
16:06 Deepfake legislation
23:38 Current harm from deepfakes
30:20 Bodily autonomy as a right
34:44 NOW's work on AI
Here are FLI's recommended amendments to legislative proposals on deepfakes:
Former OpenAI safety researcher Stephen Adler discusses governing increasingly capable AI, including competitive race dynamics, gaps in testing and alignment, chatbot mental-health impacts, economic effects on labor, and international rules and audits before training superintelligent models.
Tyler Johnston of the Midas Project discusses applying corporate accountability to the AI industry, focusing on OpenAI's actions, including subpoenas, and the need for transparency and public awareness regarding AI risks.
William MacAskill discusses his Better Futures essay series, arguing that improving the future's quality deserves equal priority to preventing catastrophe. The conversation explores moral error risks, AI character design, space governance, and ethical reasoning for AI systems.