
Friendly AI research as effective altruism

Luke Muehlhauser

Machine Intelligence Research Institute, June 5, 2013

Abstract

This article presents philosophical reasoning to support the idea that it is imperative for humanity to shape the development of the far future so as to achieve the best possible outcomes. It proposes that efforts should be focused on two primary strategies: mitigating existential risks (events that could lead to the extinction of humanity or the collapse of civilization) and producing positive trajectory changes on a global scale, such as preventing global catastrophes arising from technological advances. The article emphasizes the potential long-term negative consequences of failing to act effectively, proposes existential risk reduction as a global priority, and discusses the role of friendly AI research in aligning the development of AI with human values, thereby positively shaping the far future. – AI-generated abstract.
