
Can AI Do Our Alignment Homework? (with Ryan Kidd)

Ryan Kidd of the MATS program joins The Cognitive Revolution to discuss AGI timelines, model deception risks, dual-use alignment, and frontier lab governance, and outlines MATS research tracks, talent needs, and advice for aspiring AI safety researchers.


Show Notes

Ryan Kidd is a co-executive director at MATS. This episode is a cross-post from "The Cognitive Revolution", hosted by Nathan Labenz. In this conversation, they discuss AGI timelines, model deception risks, and whether safety work can avoid boosting capabilities. Ryan outlines MATS research tracks, key researcher archetypes, hiring needs, and advice for applicants considering a career in AI safety. Learn more about Ryan's work and MATS at: https://matsprogram.org

CHAPTERS:

(00:00) Episode Preview

(00:20) Introductions and AGI timelines

(10:13) Deception, values, and control

(23:20) Dual use and alignment

(32:22) Frontier labs and governance

(44:12) MATS tracks and mentors

(58:14) Talent archetypes and demand

(01:12:30) Applicant profiles and selection

(01:20:04) Applications, breadth, and growth

(01:29:44) Careers, resources, and ideas

(01:45:49) Final thanks and wrap

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Website: https://podcast.futureoflife.org

Twitter (FLI): https://x.com/FLI_org

Twitter (Gus): https://x.com/gusdocker

LinkedIn: https://www.linkedin.com/company/future-of-life-institute/

YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/

Apple: https://geo.itunes.apple.com/us/podcast/id1170991978

Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

