A case for AGI safety research far in advance
Effective Altruism Forum, March 26, 2021
Abstract
I posted My AGI Threat Model: Misaligned Model-Based RL Agent on the Alignment Forum yesterday. Among other things, it makes a case that misaligned AGI is an existential risk which can and should be mitigated by doing AGI safety research far in advance. That's why I'm cross-posting it here!
