Short-term AI alignment as a priority cause
Effective Altruism Forum, February 10, 2020
Abstract
In this post, I will argue that short-term AI alignment should be viewed as today’s greatest priority cause, whether or not you are concerned about long-term AGI risks. To do so, I will first stress that AIs are automating the collection, storage, analysis, and dissemination of information, and that they now do much of this far better than humans. Yet many of EA’s priority cause areas depend strongly on collecting, storing, analyzing, and disseminating quality information. As of today, an aligned large-scale AI would thus be a formidable ally for EA.
