My Objections to "We’re All Gonna Die with Eliezer Yudkowsky"
March 21, 2023
Abstract
This article presents a critical analysis of Eliezer Yudkowsky’s pessimistic view of AI alignment, arguing that Yudkowsky significantly overstates the difficulty of aligning artificial general intelligence (AGI) with human values. The author, an AI alignment researcher, rebuts several of Yudkowsky’s specific arguments, including the claims that current AI approaches lack the generality needed for AGI and that the complexity of human values makes alignment intractable. The article argues that human values are not as complex as Yudkowsky suggests, and that current learning processes, such as deep learning, are capable of aligning AI with those values. The author emphasizes that deep learning is not analogous to fields like computer security, and that appeals to the vastness of mind space are misleading, since real-world intelligences are likely to occupy a much smaller, less diverse manifold. The author concludes that current AI development suggests alignment may be achievable with existing approaches, and that a more optimistic perspective on the field is warranted. – AI-generated abstract.
