[D]espite what our intuition tells us, changes in the world’s population are not generally neutral. They are either a good thing or a bad thing. But it is uncertain even what form a correct theory of the value of population would take. In the area of population, we are radically uncertain. We do not know what value to set on changes in the world’s population. If the population shrinks as a result of climate change, we do not know how to evaluate that change. Yet we have reason to think that changes in population may be one of the most morally significant effects of climate change. The small chance of catastrophe may be a major component in the expected value of harm caused by climate change, and the loss of population may be a major component of the badness of catastrophe.
How should we cope with this new, radical sort of uncertainty? Uncertainty was the subject of chapter 7. That chapter came up with a definitive answer: we should apply expected value theory. Is that not the right answer now? Sadly it is not, because our new sort of uncertainty is particularly intractable. In most cases of uncertainty about value, expected value theory simply cannot be applied.
When an event leads to uncertain results, expected value theory requires us first to assign a value to each of the possible results it may lead to. Then it requires us to calculate the average value of those results, weighted by their probabilities. This gives us the event's expected value, which we should use in our decision-making.
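In symbols (a standard textbook formulation, not notation taken from Broome's text): if an event may lead to results $r_1, \ldots, r_n$ with probabilities $p_1, \ldots, p_n$, and $V(r_i)$ is the value of result $r_i$, then

$$EV = \sum_{i=1}^{n} p_i \, V(r_i),$$

and expected value theory tells us to choose among events by comparing these sums.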
Now we are uncertain about how to value the results of an event, rather than about what the results will be. To keep things simple, let us set aside the ordinary sort of uncertainty by assuming that we know for sure what the results of the event will be. For instance, suppose we know that a catastrophe will have the effect of halving the world’s population. Our problem is that various different moral theories of value evaluate this effect differently. How might we try to apply expected value theory to this catastrophe?
We can start by evaluating the effect according to each of the different theories of value separately; there is no difficulty in principle there. We next need to assign a probability to each of the theories; no doubt that will be difficult, but let us assume we can do it somehow. We then encounter the fundamental difficulty. Each theory will value the change in population in its own units of value, and those units may be incomparable with one another. Consequently, we cannot form a weighted average of them.
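Schematically, the calculation we would like to perform is this (the notation is illustrative, not Broome's): given candidate theories $T_1, \ldots, T_m$, a credence $\Pr(T_j)$ in each, and a known outcome $o$ that theory $T_j$ values at $V_j(o)$,

$$EV(o) = \sum_{j=1}^{m} \Pr(T_j)\, V_j(o).$$

The summation is exactly where the trouble lies: it is well defined only if every $V_j(o)$ is measured on a single common scale.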
For example, one theory of value is total utilitarianism. This theory values the collapse of population as the loss of total well-being that will result from it. Its unit of value is well-being. Another theory is average utilitarianism. It values the collapse of population as the change in average well-being that will result from it. Its unit of value is well-being per person. We cannot take a sensible average of some amount of well-being and some amount of well-being per person. It would be like trying to take an average of a distance, whose unit is kilometers, and a speed, whose unit is kilometers per hour. Most theories of value are incomparable with one another in this way. Expected value theory is therefore rarely able to help with uncertainty about value.
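To make the dimensional clash explicit (the quantities here are illustrative only): give each theory a credence of one half, and write $W$ for a loss of total well-being and $\Delta\bar{w}$ for a change in average well-being per person. The attempted average

$$\tfrac{1}{2}(-W) + \tfrac{1}{2}\,\Delta\bar{w}$$

is as ill-formed as

$$\tfrac{1}{2}(100\ \mathrm{km}) + \tfrac{1}{2}(100\ \mathrm{km/h}).$$

Each term is meaningful on its own; the sum is not.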
So we face a particularly intractable problem of uncertainty, which prevents us from working out what we should do. Yet we have to act; climate change will not wait while we sort ourselves out. What should we do, then, seeing as we do not know what we should do? This too is a question for moral philosophy.
Even the question is paradoxical: it is asking for an answer while at the same time acknowledging that no one knows the answer. How to pose the question correctly but unparadoxically is itself a problem for moral philosophy.
John Broome, Climate Matters: Ethics in a Warming World (New York: W. W. Norton, 2012)