[T]he evolution of superior intelligence in humans was bad for chimpanzees, but it was good for humans. Whether it was good or bad “from the point of view of the universe” is debatable, but if human life is sufficiently positive to offset the suffering we have inflicted on animals, and if we can be hopeful that in future life will get better both for humans and for animals, then perhaps it will turn out to have been good. Remember Bostrom’s definition of existential risk, which refers to the annihilation not of human beings, but of “Earth-originating intelligent life.” The replacement of our species by some other form of conscious intelligent life is not, in itself, impartially considered, catastrophic. Even if the intelligent machines kill all existing humans, that would be, as we have seen, a very small part of the loss of value that Parfit and Bostrom believe would be brought about by the extinction of Earth-originating intelligent life. The risk posed by the development of AI, therefore, is not so much whether it is friendly to us, but whether it is friendly to the idea of promoting wellbeing in general, for all sentient beings it encounters, itself included.
Peter Singer, The Most Good You Can Do: How Effective Altruism Is Changing Ideas About Living Ethically, New Haven: Yale University Press, 2015, p. 176