
Max Tegmark and the FLI Team on 2020 and Existential Risk Reduction in the New Year



Show Notes

Max Tegmark and members of the FLI core team come together to discuss our favorite projects from 2020, what we've learned from the past year, and what we think is needed for existential risk reduction in 2021.

Topics discussed in this episode include:

- The team's favorite projects from 2020
- Lessons learned over the past year
- What is needed in 2021 to make progress on existential risk reduction

Timestamps:

0:00 Intro

00:52 First question: What was your favorite project from 2020?

1:03 Max Tegmark on the Future of Life Award

4:15 Anthony Aguirre on AI Loyalty

9:18 David Nicholson on the Future of Life Award

12:23 Emilia Javorsky on being a co-champion for the UN Secretary-General's effort on digital cooperation

14:03 Jared Brown on developing comments on the European Union's White Paper on AI through community collaboration

16:40 Tucker Davey on editing the biography of Victor Zhdanov

19:49 Lucas Perry on the podcast and Pindex video

23:17 Second question: What lessons do you take away from 2020?

23:26 Max Tegmark on human fragility and vulnerability

25:14 Max Tegmark on learning from history

26:47 Max Tegmark on the growing threats of AI

29:45 Anthony Aguirre on the inability of present-day institutions to deal with large unexpected problems

33:00 David Nicholson on the need for self-reflection on the use and development of technology

38:05 Emilia Javorsky on the global community coming to awareness about tail risks

39:48 Jared Brown on our vulnerability to low probability, high impact events and the importance of adaptability and policy engagement

41:43 Tucker Davey on taking existential risks more seriously and ethics-washing

43:57 Lucas Perry on the fragility of human systems

45:40 Third question: What is needed in 2021 to make progress on existential risk mitigation?

45:50 Max Tegmark on holding Big Tech accountable, repairing geopolitics, and fighting the myth of the technological zero-sum game

49:58 Anthony Aguirre on the importance of spreading understanding of expected value reasoning and fixing the information crisis

53:41 David Nicholson on the need to reflect on our values and relationship with technology

54:35 Emilia Javorsky on the importance of returning to multilateralism and global dialogue

56:00 Jared Brown on the need for robust government engagement

57:30 Lucas Perry on the need for creating institutions for existential risk mitigation and global cooperation

1:00:10 Outro

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

