It’s important to think ahead, even to worry, about problems that have yet to materialize. But if we try to think too far ahead, we won’t be able to justify predictions about the overall balance of good versus bad outcomes. The best-known versions of “longtermism” hold that our decisions should be influenced by our expectations for the extremely long-term future—not just the next ten years or the next thousand years, but the next billion-plus years. If you think there's a one in a trillion chance that some action you take now will result in a trillion more happy lives a billion years in the future, that, on this view, is good reason to choose that action.

To take one extreme example, Bostrom estimates that our descendants could potentially experience 10^34 years of life. Crunching the numbers, he concludes: “Even if we give this allegedly conservative lower bound on the cumulative output potential of a technologically mature civilization a mere 1% chance of being correct, we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.” Similarly, in a research paper on the Effective Altruism forum, Jan M. Brauner and Friederike M. Grosse-Holz write, “It morally matters what happens in the billions of years to come. From this very long-term view, making sure the future plays out well is a primary moral concern.”

According to what I'll call the washout argument (partly following Greaves and MacAskill), such thinking is mistaken. There is nothing you could possibly do right now that you could be justified in believing will have a non-negligible overall positive influence on events more than a billion years from now. Estimates of the value or disvalue of any currently available actions based on guesses about what might be the case in a billion-plus years are always unjustified. Every action has too many complex possible causal consequences, both positive and negative. Therefore, on epistemic grounds, we should temporally discount, all the way to zero, the well-being of anyone who might exist in a billion years. We can make guesses about the total good or bad consequences of our actions for the hundred-year future, and maybe the thousand-year future, but not for the far-distant future.

In a forthcoming paper, I defend the washout argument in more detail, specifically targeting MacAskill’s “Strong Longtermism”, which seeks to make moral decisions with no temporal discounting over massive timescales. Today, I'll defend four versions of the washout argument against longtermism.

The Infinite Washout Argument

If the cosmos is infinite, our actions could have infinitely many good and bad effects, making their total expected utility incalculable.

This version of the argument assumes an infinite future: a future with no final reckoning point (such as the heat death of the universe) at which the total good and bad consequences of an action could be added up.

As I've argued elsewhere, it's rational to have a non-zero credence that any action you do will have infinitely many positive and negative consequences, echoing into an infinitely enduring post-heat-death universe. Raise your hand now and, plausibly, you start a never-ending ripple of effects—light particles reflecting differently off your hand, which then affect other particles which in turn behave differently, which then affect further particles, and so on forever, including in galaxies in the googolplex-year future that might fluctuate into existence through random chance, through the seeding of new cosmic inflations, or by some other process.

Suppose you attribute a mere 0.001% credence to such a cosmological model. (I think the credence should be much higher, but any non-zero credence will suffice for the present argument.) Suppose that you are now faced with a choice between Action A (say, donating $100,000 toward distributing anti-malarial drugs to children) with near-term expected value m, and Action B (say, setting that money on fire in order to burn down the house of a neighbor with an annoying dog) with near-term expected value n. The total expected value of Action A will be m + 0.00001 * (∞ - ∞), and the total expected value of Action B will be n + 0.00001 * (∞ - ∞). Because any finite quantity added to or multiplied by an infinite one is swamped by the infinitude, both totals reduce to the indeterminate ∞ - ∞. The expected value calculations give you no grounds to choose one over the other.
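
To make the collapse vivid, here is a minimal Python sketch (my own illustration, with made-up placeholder values for m and n): once the indeterminate ∞ - ∞ term enters the sums, the totals come out as NaN and cannot be ranked against each other.

```python
# A minimal sketch (my own illustration) of the expected-value comparison above,
# using IEEE floating-point arithmetic. The values of m and n are made up.
import math

credence = 0.00001   # the 0.001% credence in the infinitary cosmological model
m = 1000.0           # hypothetical near-term expected value of Action A
n = -50.0            # hypothetical near-term expected value of Action B

# Under the infinitary model each action has infinitely many good and infinitely
# many bad consequences, so its long-run term has the indeterminate form inf - inf,
# which IEEE arithmetic represents as NaN.
long_run = math.inf - math.inf   # nan

ev_a = m + credence * long_run   # nan
ev_b = n + credence * long_run   # nan

print(ev_a, ev_b)    # nan nan
print(ev_a > ev_b)   # False: NaN comparisons yield no ordering
print(ev_a < ev_b)   # False
```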

From Infinite to Finite Cases

Suppose the longtermist attempts to avoid troubles with infinitude by truncating, at the heat death of the universe, the range of consequences they're willing to consider. An action can be evaluated as overall good or bad based on its consequences between now and then, without having to think about further consequences. The next two arguments consider only finite consequences, on billion-year-plus time frames.

Longtermists such as William MacAskill and Toby Ord normally argue that it would be better for the billion-year-plus future if humanity does not destroy itself or permit large-scale catastrophes such as nuclear war. However, their arguments for these conclusions are card towers of breezy plausibility. In the next two arguments—the Dolphin Argument and the Nuclear Catastrophe Argument—I rebut their claims by constructing what I take to be equally plausible arguments in favor of human extinction or catastrophe. These two arguments are versions of what I call the Cluelessness Argument:

We cannot justifiably guess which actions will be more or less likely to have a positive effect after a billion years.

The Dolphin Argument

The most obvious solution to the Fermi Paradox is also the most depressing. The reason we see no evidence of extraterrestrials is that technological civilizations inevitably destroy themselves in short order. Suppose that it's inevitable that we wipe ourselves out in the next 10,000 years, with some extinction scenarios being more destructive and full of suffering than others. According to the Dolphin Argument, it would be better for us to extinguish ourselves peacefully now—for example by ceasing reproduction as recommended by antinatalists—than to hang on until we catastrophically extinguish ourselves.

Suppose that Earth with humans and other thriving species is worth X utility per year, and that Earth with no humans is worth X/100 utility per year (generously assuming that humans contribute 99% of the value to the planet). A planet damaged by a catastrophic human self-destructive event is worth an expected X/200 utility per year, since it's likely that destroying all humans would involve destroying many other valuable species as a side-effect, for example, dolphins. Catastrophic human self-destruction could also quite possibly render Earth uninhabitable for complex multicellular life far into the future. Running the numbers: If we catastrophically destroy ourselves in 10,000 years, the total billion-year value of life on Earth is approximately 5,000,000 * X. If we peacefully cease reproduction tomorrow, the total billion-year value is 10,000,000 * X. By this calculation, we should exit peacefully now.
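
For readers who want to check the numbers, here is a rough sketch of that calculation (my own illustration; X is normalized to 1 unit of utility per year, and the variable names are mine).

```python
# A rough sketch (my own illustration) of the back-of-the-envelope calculation
# above, with X normalized to 1 unit of utility per year.
X = 1.0
HORIZON = 1_000_000_000        # the billion-year window considered
YEARS_WITH_HUMANS = 10_000     # assumed time until catastrophic self-destruction

# Scenario 1: we catastrophically destroy ourselves after 10,000 years, leaving
# a damaged planet worth an expected X/200 per year for the rest of the window.
catastrophic_exit = YEARS_WITH_HUMANS * X + (HORIZON - YEARS_WITH_HUMANS) * (X / 200)

# Scenario 2: we peacefully cease reproduction now, leaving a human-free Earth
# worth X/100 per year for the whole window.
peaceful_exit = HORIZON * (X / 100)

print(f"{catastrophic_exit:,.0f}")   # 5,009,950 -- roughly 5,000,000
print(f"{peaceful_exit:,.0f}")       # 10,000,000
```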

Now, longtermists will emphasize that there's a chance our descendants survive far longer than 10,000 years, and that the consequences of such long-term survival might be so good that it's worth risking the dolphins.

However, counterbalancing that chance is another chance. Although technological civilizations might be rare and short-lived, it doesn't follow that civilizations in general are. Over very long time scales, increasingly intelligent species have tended to evolve on Earth. It's thus possible that a highly intelligent non-technological civilization will arise—the descendants of dolphins, perhaps—rich with love, play, and art, but less self-destructively Promethean than we humans are. Which is the likelier path to a billion-year-plus happy civilization on Earth: that we somehow manage to keep our collective fingers off the button for century after century for ten million consecutive centuries, or that some other biological clade finds a stable, happy, non-technological equilibrium? My bet is the latter.

Ah, but we might spread to other planets! Yes, but the argument repeats: If there's a 0.01% chance per century that our descendants in Star System X destroy themselves in a way that extinguishes valuable and much more durable life already in Star System X, then it would be best overall if we chose a quiet extinction now rather than risk such enormous disasters later.

To be clear: My aim with the Dolphin Argument is not to support antinatalism. It is to show that plausible longtermist reasoning can just as easily be deployed against human survival as in favor of it. We don't really know what's best for the billion-year-plus future, and utilitarian calculations over these time periods are more likely to mislead than enlighten.

The Nuclear Catastrophe Argument

Longtermists generally hold that we should take the risk of human extinction very seriously and devote considerable resources to reducing it. They also generally hold that nuclear war, though catastrophic, would not entirely wipe us out. Humans are a resourceful bunch and, if their estimates are correct, some are likely to survive. This suggests a straightforward way to teach our descendants to take existential risk seriously: Start a nuclear war.

Suppose that a nuclear war has a 5% chance of destroying humanity but that if it doesn't, for the next 10,000 years the survivors are more cautious about existential risk, reducing the risk of extinction from 2.0% per century to 1.9% per century. If risk per century is otherwise constant and independent, the chance that we survive 10,000 years would then be 14.0% with nuclear war versus 13.3% without nuclear war.
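
Here is a quick sketch of that survival calculation (my own illustration, using the 5%, 2.0%, and 1.9% figures assumed above and treating 10,000 years as 100 independent centuries).

```python
# A sketch (my own illustration) of the survival comparison above, treating
# 10,000 years as 100 independent centuries.
CENTURIES = 100

# No nuclear war: a constant 2.0% extinction risk per century.
p_survive_no_war = (1 - 0.020) ** CENTURIES

# Nuclear war: a 5% chance the war itself finishes us off; otherwise the
# chastened survivors face only a 1.9% extinction risk per century.
p_survive_war = 0.95 * (1 - 0.019) ** CENTURIES

print(f"{p_survive_no_war:.1%}")   # 13.3%
print(f"{p_survive_war:.1%}")      # 14.0%
```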

This is not so implausible, I think. People learn better from hard lessons than from abstract treatises about risks and benefits. We'd be like the teenage driver whose nearly fatal drunk driving accident scares them into sobriety.

Again, my aim here is not actually to support nuclear war! Rather, it is to display the ease with which longtermist styles of thinking can be marshaled for seemingly opposite conclusions. We should not put serious decision-theoretical weight behind such thinking.

The Negligibility Argument

Even if we could justifiably guess that some particular action is likelier to have good than bad consequences in a billion years, the odds of good consequences would be negligibly tiny due to the compounding of probabilities over time.

David Thorstad has argued that, for several related reasons, longtermist models tend to substantially overvalue the benefits of current efforts at existential risk reduction. The main problem is the compounding of probabilities over time.

What are the odds, really, that something you can do now would lead to, say, a trillion more happy people a billion years in the future? I submit that the odds are very small. After all, there’s a long chain of causation connecting your action to the beneficial result in the far future. Without going into specific details, let's suppose that there's something you could do now that, if things go right, would have that effect. Very optimistically, let's assume that in order for your action to have the billion-year positive consequences you desire, you'd have to get a little lucky every million years: Every million years, there's a 10% chance that the causal chain you started continues in the hoped-for positive direction.

On these assumptions, after a billion years there will be only a one in 10^1000 chance that your action is still effective—which is far less likely than a one in a googol chance. Even if we multiply that chance by a trillion possible happy descendants who wouldn't otherwise exist, the expected value is still far less than a googolth of a life. If we add another 10% chance of ten trillion happy lives that wouldn't otherwise exist, another 10% chance, conditional on that, of a hundred trillion happy lives, and so on, up to a maximum of, say, 10^50 lives between now and heat death, the expected value still remains well below a googolth of a happy life.
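
Here is a sketch of that estimate (my own illustration), done in log space since the raw probabilities are too small for ordinary floating-point arithmetic.

```python
# A sketch (my own illustration) of the compounding estimate above, computed in
# log space because the probabilities underflow ordinary floating-point numbers.
import math

PERIODS = 1_000        # one million-year period at a time, over a billion years
P_PER_PERIOD = 0.1     # 10% chance per period that the causal chain stays on track

log10_p_success = PERIODS * math.log10(P_PER_PERIOD)    # -1000.0: a one in 10^1000 chance
log10_payoff = math.log10(1_000_000_000_000)            # 12.0: a trillion extra happy lives
log10_expected_lives = log10_p_success + log10_payoff   # -988.0

print(log10_p_success)       # -1000.0
print(log10_expected_lives)  # -988.0, i.e., 10^-988 lives: far below a googolth (10^-100) of a life
```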

Against this minuscule expected benefit, we now ought to weigh the costs of longtermist thinking. I see three types of possible costs. There are cognitive opportunity costs: The time and cognitive effort we spend on thinking about billion-year-plus outcomes might be more productively spent on something else. There's risk of error: Given the difficulty of such long-term thinking, it's likely we will make mistakes. We risk messing up otherwise good decisions by adding erroneous longtermist considerations into our calculus, rather than constraining ourselves to the more realistically foreseeable future. Finally, longtermist thinking might have other risks for one's cognitive life, for example, by encouraging us to undervalue currently suffering people. If any one of these costs is non-trivial, it will outweigh the tiny expected benefit of longtermist calculations.

The Medium Term

We cannot productively make moral calculations about the impact of our actions in the billion-year future. At some point—a thousand years? a hundred thousand years? ten million years?—our ability to weigh the expected good versus the expected bad collapses either to zero or to so near zero as to not be worth the costs and risks of thinking in that time frame.

All this said, I think longtermists are generally right that we should be concerned about existential risks and think more about our descendants. Nothing wrong with that! But it should be grounded in humble and imperfect guesses about the next few hundred years, not in speculations concerning trillions of descendants in the billion-year future.

About the author

Eric Schwitzgebel

Eric Schwitzgebel is a professor of philosophy at the University of California, Riverside, and author of The Weirdness of the World, forthcoming in January 2024 from Princeton University Press.
