August 2019: Long-Term Future Fund grants and recommendations
Effective Altruism Funds, 2019
Abstract
Long-Term Future Fund (LTFF) grants and recommendations made in 2019, totaling $415,697, are described relating to existential risk. These included grants to HIPE to place a staff member within the UK government, to Stag Lynn to level up his technical AI safety skills, to Roam Research to improve their Workflowy-like note-taking tool, to Alexander Gietelink Oldenziel to study self-reference in complex systems, to Alexander Siegenfeld to characterize complex systems and their external interactions, to Soren Mindermann for an AI strategy PhD, to Miranda Dixon-Luinenburg to write EA-themed fiction addressing x-risk topics, to David Manheim for a multi-model approach to corporate and state actors relevant to existential risk mitigation, to Joar Skalse to upskill in ML so as to do productive AI safety research sooner than he otherwise would, to Chris Chambers to combat publication bias in science by promoting and supporting the Registered Reports journal format, to Jess Whittlestone to research the links between short- and long-term AI policy while skilling up in technical ML, and to Lynette Bye for productivity coaching to help effective altruists increase their impact. The LTFF did not end up funding the Center for Applied Rationality's work on helping people reason reliably and find high-impact work. – AI-generated abstract
