
Ajeya Cotra on worldview diversification and how big the future could be

Robert Wiblin and Keiran Harris

80,000 Hours, January 19, 2021

Abstract

This podcast episode features an interview with Ajeya Cotra, a senior research analyst at Open Philanthropy, an organization that aims to use a large portion of its endowment to do good in the world. Cotra discusses Open Philanthropy’s worldview diversification approach to cause prioritization, arguing that its funding decisions should be influenced by philosophical considerations as well as empirical assessments of the effectiveness of different interventions. She surveys a range of potential worldviews, such as longtermism, near-termism, animal welfare, and basic science, and suggests that Open Philanthropy could be justified in investing in each to some degree despite not assigning a high probability to any single one. The episode also explores the potential size of humanity’s future. Cotra argues that while space colonization with biological humans may be challenging, space colonization with computers may be more feasible. She also discusses the simulation argument and the doomsday argument, two philosophical arguments that could be interpreted as suggesting that the size of humanity’s future is smaller than it might appear. Finally, Cotra discusses her work on AI timelines, a research project aimed at estimating when transformative AI might be developed. She describes her methodology and her conclusions, which suggest that transformative AI could arrive as soon as 30 to 40 years from now. – AI-generated abstract.
