EARadio

How to build a safe advanced AI (Evan Hubinger) | What's up in AI safety? (Asya Bergal)

July 20, 2021
Show Notes

Evan discusses several proposals for building safe advanced AI that are currently being researched at OpenAI and DeepMind. Asya then discusses some recent updates on AI safety work she's excited about.

Evan Hubinger is a research fellow at MIRI; before that, he was an AI safety research intern at OpenAI. His current work is aimed at solving inner alignment for iterated amplification. He was an author on "Risks from Learned Optimization in Advanced Machine Learning Systems," was previously a MIRI intern, designed the functional programming language Coconut, and has done software engineering work at Google, Yelp, and Ripple. He studied math and computer science at Harvey Mudd College.

Asya Bergal has a BA in computer science from MIT. Since graduating, she has worked as a trader and software engineer for Alameda Research and as a research analyst at Open Philanthropy. Most recently, she has been at AI Impacts, heading up their operations and working as a researcher.

This talk is from EA Student Summit 2020. A recording of the talk with the accompanying slides is available.