Summary of Eric Drexler’s work on reframing AI safety
Reducing Risks of Future Suffering, May 21, 2020
Abstract
This post contains a bullet point summary of Reframing Superintelligence: Comprehensive AI Services as General Intelligence. (I wrote this in 2017, so it does not necessarily refer to the most up-to-date version of Drexler’s work.)
I find Drexler’s work very interesting because he has a somewhat unusual perspective on AI. My take is that his ideas have some merit, and I like that he questions key assumptions. But I’m not sure I agree with all the details, and I think we should be much more uncertain about AI than his texts often (implicitly) suggest.
The key ideas are:
- He thinks AGI isn’t necessarily agent-like. Instead, we might build “comprehensive AI services” (CAIS) that are superintelligent but don’t act like an opaque agent.
- He thinks the usual concept of intelligence is misguided, and that AI is radically unlike human intelligence.
- He thinks humans might retain control of high-level strategic decisions.

In the following, I will summarise the chapters that I found most interesting.