AI alignment, philosophical pluralism, and the relevance of non-Western philosophy
EAGx Asia-Pacific, November 30, 2020
Abstract
How can we build (super)intelligent machines that are robustly aligned with human values? AI alignment researchers strive to meet this challenge, but currently draw upon a relatively narrow set of philosophical perspectives common in effective altruism and computer science. This could pose risks in a world where human values are complex, plural, and fragile. Xuan discusses how these risks might be mitigated by greater philosophical pluralism, describing several problems in AI alignment where non-Western philosophies might provide insight.