The Precipice: Existential Risk and the Future of Humanity
London, 2020
Abstract
If all goes well, human history is just beginning. Our species could survive for billions of years – enough time to end disease, poverty, and injustice, and to flourish in ways unimaginable today. But this vast future is at risk. With the advent of nuclear weapons, humanity entered a new age, where we face existential catastrophes – those from which we could never come back. Since then, these dangers have only multiplied, from climate change to engineered pathogens and artificial intelligence. If we do not act fast to reach a place of safety, it will soon be too late.
Quotes from this work
safeguarding humanity’s future is the defining challenge of our time. For we stand at a crucial moment in the history of our species. Fueled by technological progress, our power has grown so great that for the first time in humanity’s long history, we have the capacity to destroy ourselves—severing our entire future and everything we could become. Yet humanity’s wisdom has grown only falteringly, if at all, and lags dangerously behind. Humanity lacks the maturity, coordination and foresight necessary to avoid making mistakes from which we could never recover. As the gap between our power and our wisdom grows, our future is subject to an ever-increasing level of risk.
In this way, existential risk was a highly influential idea of the twentieth century. But because there was one dominant risk, it all happened under the banner of nuclear war, with philosophers discussing the profound new issues raised by “nuclear ethics,” rather than by “existential risk.” And with the end of the Cold War, this risk diminished and the conversation faded. But this history shows that existential risk is capable of rousing major global concern, from the elite to the grass roots.
Losing our potential means getting locked into a bad set of futures. We can categorize existential catastrophes by looking at which aspects of our future get locked in. This could be a world without humans (extinction) or a world without civilization (unrecoverable collapse). But it could also take the form of an unrecoverable dystopia—a world with civilization intact, but locked into a terrible form, with little or no value.
the truth of an idea is only one contributor to its memetic potential—its ability to spread and to stick. But the more that rigorous and rational debate is encouraged, the more truth contributes to memetic success. So encouraging a culture of such debate may be one way we can now help avoid this fate.
To truly expand our potential, we would need a spacecraft to reach another star, then stop and use the resources there to build a settlement that could eventually grow into a new bastion of civilization. Such a trip requires four challenging phases: acceleration, surviving the voyage, deceleration and building a base of operations.
With entire galaxies receding beyond our reach each year, one might think this pushes humanity toward a grand strategy of haste—a desperate rush to reach the technologies of intergalactic travel as soon as possible. But the relative loss each year is actually rather slow—about one part in five billion—and it is this relative reduction in our potential that matters.
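The quoted rate can be checked with simple compounding. A minimal sketch, using the passage's own figure of about one part in five billion lost per year; the million-year horizon is an illustrative assumption, not from the text:

```python
# From the passage: each year we lose "about one part in five billion"
# of our potential to cosmic expansion.
ANNUAL_RELATIVE_LOSS = 1 / 5e9

def fraction_remaining(years: int) -> float:
    """Fraction of reachable potential left after compounding the annual loss."""
    return (1 - ANNUAL_RELATIVE_LOSS) ** years

# Hypothetical horizon: even after a million years, the cumulative loss
# is only about 0.02 percent of what is reachable today.
loss = 1 - fraction_remaining(1_000_000)
```

This is why the passage concludes that the loss does not force a strategy of haste: the relative reduction compounds so slowly that it stays negligible on any timescale relevant to human decision-making.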
A further reason some people avoid giving numbers is that they don’t want to be pinned down, preferring the cloak of vagueness that comes with natural language. But I’d love to be pinned down, to lay my cards on the table and let others see if improvements can be made. It is only through such clarity and openness to being refuted that we make intellectual progress.