Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI

Show Notes

Roman Yampolskiy, Professor of Computer Science at the University of Louisville, joins us to discuss whether we can control, comprehend, and explain AI systems, and how this constrains the project of AI safety.

Topics discussed in this episode include:

How impossibility results constrain computer science and AI safety research

Roman's results on the unexplainability, incomprehensibility, and uncontrollability of AI

Alignment as a subset of safety and control

Virtual worlds and AI alignment

AI security, air gapping, boxing, and oracle approaches

You can find FLI's three new policy-focused job postings here.

Have any feedback about the podcast? You can share your thoughts here.

Timestamps:

0:00 Intro

2:35 Roman’s primary research interests

4:09 How theoretical proofs help AI safety research

6:23 How impossibility results constrain computer science systems

10:18 The inability to tell if arbitrary code is friendly or unfriendly

12:06 Impossibility results clarify what we can do

14:19 Roman’s results on unexplainability and incomprehensibility

22:34 Focusing on comprehensibility

26:17 Roman’s results on uncontrollability

28:33 Alignment as a subset of safety and control

30:48 The relationship between unexplainability, incomprehensibility, and uncontrollability with each other and with AI alignment

33:40 What does it mean to solve AI safety?

34:19 What do the impossibility results really mean?

37:07 Virtual worlds and AI alignment

49:55 AI security and malevolent agents

53:00 Air gapping, boxing, and other security methods

58:43 Some examples of historical failures of AI systems and what we can learn from them

1:01:20 Clarifying impossibility results

1:06:55 Examples of systems failing and what these demonstrate about AI

1:08:20 Are oracles a valid approach to AI safety?

1:10:30 Roman’s final thoughts

Papers discussed in this episode:

On Controllability of AI

Unexplainability and Incomprehensibility of Artificial Intelligence

Unpredictability of AI

We hope that you will continue to join the conversation by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

