On deference and Yudkowsky's AI risk estimates

Ben Garfinkel

Effective Altruism Forum, June 19, 2022

Abstract

Eliezer Yudkowsky, a prominent figure in the field of artificial intelligence (AI) risk, has publicly expressed the view that misaligned AI has a virtually 100% chance of killing everyone on Earth. This article argues that people should be wary of deferring too much to Yudkowsky when it comes to estimating AI risk, as his track record of technological risk forecasting is at best fairly mixed and he has a tendency toward expressing dramatic views with excessive confidence. Several examples of Yudkowsky making predictions that turned out to be wrong or overconfident are presented, including his prediction of near-term extinction from nanotech, his prediction that his team would be able to develop AGI before 2010, and his confidence that AI progress would be extremely discontinuous and localized and not require much compute. The article concludes that, while Yudkowsky’s arguments for AI risk deserve serious attention, his confident and dramatic views should not be given undue weight. – AI-generated abstract