
AIAP: On the Long-term Importance of Current AI Policy with Nicolas Moës and Jared Brown



Show Notes

From Max Tegmark's Life 3.0 to Stuart Russell's Human Compatible and Nick Bostrom's Superintelligence, much has been written and said about the long-term risks of powerful AI systems. When considering concrete actions one can take to help mitigate these risks, governance- and policy-related solutions become an attractive area of consideration. But what can anyone do in the present-day policy sphere to help ensure that powerful AI systems remain beneficial and aligned with human values? Do today's AI policies matter at all for AGI risk? Jared Brown and Nicolas Moës join us on today's podcast to explore these questions and why those concerned about AGI risk should be involved in present-day AI policy discourse.

Topics discussed in this episode include:

- Why it's important to work on AI policy
- Our historical position in the process of AI policy
- How AI policy today is important and relevant for long-termists and those concerned about AGI risk
- AI policy and shorter-term global catastrophic and existential risks
- The Brussels and Sacramento effects
- Why racing on AI technology is bad
- The rationality of racing to AGI
- Where AI policy currently stands

Timestamps:

0:00 Intro

4:58 Why it’s important to work on AI policy

12:08 Our historical position in the process of AI policy

21:54 For long-termists and those concerned about AGI risk, how is AI policy today important and relevant?

33:46 AI policy and shorter-term global catastrophic and existential risks

38:18 The Brussels and Sacramento effects

41:23 Why is racing on AI technology bad?

48:45 The rationality of racing to AGI

58:22 Where is AI policy currently?

We hope that you will continue to join in the conversation by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

