Show Notes
From Max Tegmark's Life 3.0 to Stuart Russell's Human Compatible and Nick Bostrom's Superintelligence, much has been written and said about the long-term risks of powerful AI systems. When considering concrete actions one can take to help mitigate these risks, governance and policy solutions become an attractive area of consideration. But what can anyone do in the present-day policy sphere to help ensure that powerful AI systems remain beneficial and aligned with human values? Do today's AI policies matter at all for AGI risk? Jared Brown and Nicolas Moës join us on today's podcast to explore these questions and why those concerned about AGI risk should be involved in present-day AI policy discourse.
Topics discussed in this episode include:
- The importance of current AI policy work for long-term AI risk
- Where we currently stand in the process of forming AI policy
- Why those worried about existential risk should care about present-day AI policy
- AI and the global community
- The rationality and irrationality around AI race narratives
Timestamps:
0:00 Intro
4:58 Why it’s important to work on AI policy
12:08 Our historical position in the process of AI policy
21:54 For long-termists and those concerned about AGI risk, how is AI policy today important and relevant?
33:46 AI policy and shorter-term global catastrophic and existential risks
38:18 The Brussels and Sacramento effects
41:23 Why is racing on AI technology bad?
48:45 The rationality of racing to AGI
58:22 Where is AI policy currently?
We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.