
Eliezer Yudkowsky

LessWrong

LessWrong Wiki, October 29, 2012

Abstract

This work presents several articles by Eliezer Yudkowsky, a researcher in the field of artificial intelligence. The articles focus on the challenges and importance of developing a Friendly AI, an AI that is aligned with human values and goals. Yudkowsky argues that such an AI could help reduce global risks and improve human well-being. He also discusses the difficulties in designing the features and cognitive architecture required to produce a Friendly AI. Finally, he proposes several possible solutions to these challenges, including the use of coherent extrapolated volition and timeless decision theory. – AI-generated abstract.
