
Conversation

Holden Karnofsky, Eliezer Yudkowsky, and Luke Muehlhauser

2014

Abstract

This heavily edited email conversation between Holden Karnofsky, Eliezer Yudkowsky, and Luke Muehlhauser explores the philanthropic value of far-future concerns. Karnofsky argues that the direct value of the far future is highly uncertain, and that near-term interventions through organizations like AMF are sounder investments. Yudkowsky challenges this position, emphasizing the catastrophic risk posed by artificial general intelligence (AGI) and the lack of other actors addressing this existential threat. Muehlhauser discusses the difficulty of communicating complex risks under epistemic uncertainty, and suggests that MIRI, an organization working on AGI safety, needs to optimize for non-expert understanding and engagement. Karnofsky concedes that he may be misunderstanding or undervaluing the risks, and agrees to investigate further. (AI-generated abstract.)
