Cost-effectiveness of foods for global catastrophes: even better than before?

David Denkenberger

Effective Altruism Forum, November 18, 2018

Abstract

As part of a Centre for Effective Altruism (CEA) grant, I have updated the cost-effectiveness of preparing for agricultural catastrophes such as nuclear winter (previous analysis here). Preparation largely involves planning plus research and development of alternate foods (roughly, those not dependent on sunlight, such as mushrooms, natural-gas-digesting bacteria, and food extracted from leaves). I have refined a Monte Carlo (probabilistic) model, built in the open-source software Guesstimate, that estimates uncertain results and incorporates an earlier model of the cost-effectiveness of artificial general intelligence safety (hereafter AI). A major change is broadening the routes to far-future impact beyond loss of civilization and non-recovery to include making other catastrophes more likely (e.g. totalitarianism) or worse values ending up in AGI. Additional changes include accounting for moral hazard, surveying global catastrophic risk (GCR) researchers for key parameters, and using better-behaved distributions in the AI model (which increases the estimated cost-effectiveness of AI by a factor of two).
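As a rough illustration of the Monte Carlo approach described above, the sketch below samples a few uncertain inputs from lognormal distributions and propagates them to a cost-effectiveness ratio, summarized by its median. Every parameter name, distribution, and value here is a hypothetical placeholder for exposition; the actual model is built in Guesstimate with different structure and inputs.

```python
import random

random.seed(0)  # reproducible illustration

def sample_cost_effectiveness(n=100_000):
    """Monte Carlo sketch: sample uncertain parameters and propagate
    them to a benefit-per-dollar estimate. All values are illustrative
    placeholders, not the model's actual inputs."""
    ratios = []
    for _ in range(n):
        # Placeholder uncertain inputs (lognormal keeps them positive):
        p_catastrophe = random.lognormvariate(-4.0, 1.0)  # chance of agricultural catastrophe
        p_avert = random.lognormvariate(-1.5, 0.5)        # chance preparation averts far-future loss
        cost_musd = random.lognormvariate(4.0, 0.7)       # preparation cost, $M
        far_future_benefit = p_catastrophe * min(p_avert, 1.0)
        ratios.append(far_future_benefit / cost_musd)
    ratios.sort()
    return ratios[len(ratios) // 2]  # median, a robust summary of a skewed distribution
```

Reporting a median (or full percentile range, as Guesstimate does) rather than a point estimate is the point of the probabilistic approach: heavy-tailed inputs make the mean alone misleading.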
