Ethics relies upon the recognition that other people matter. And it seems most principled to hold that how much others matter doesn't depend upon their location in space or time. If an evil genie offered you riches today in exchange for torturing an innocent population a billion years hence, it would obviously be wrong to accept the deal. Distant people matter too.

But ethics is harder to put into practice in real life than in stylized thought experiments where all the relevant effects of our actions are stipulated. If we can't possibly have any idea how, in real life, our actions would affect distant others, then there would seem less point to our even trying to take their interests into account. Along these lines, Eric Schwitzgebel recently offered three intriguing epistemic arguments for the conclusion that we should "temporally discount, all the way to zero, the well-being of anyone who might exist in a billion years." But his arguments rely too much on precisely the sort of naive instrumentalist reasoning that utilitarian moral philosophers have traditionally cautioned against. Prudent longtermists will not recognize themselves in his caricature of their view.

I'll suggest an alternative response, one that seeks to reconcile common sense with moral principle. Rather than discounting people, we should discount proposals that violate robustly-reliable rules on the basis of flimsy speculative reasoning. And we should do this precisely because it is the best method we have for helping everyone.

Ideal vs Approximate Guidance

I start with a guiding principle that I hope is uncontroversial:

Approximating Ideal Guidance: We should ideally want to perform the act that would be morally correct, given all the relevant facts. [Footnote 1: This is sometimes characterized as what an "ideal observer" would wish for us to do.] In the face of uncertainty, we should be guided by whatever considerations will allow us to best approximate the objectively morally correct decision, by choosing the option with the best ex ante prospect (aptly balancing possible up-sides against possible down-sides), given our available evidence. [Footnote 2: This is the general task of a decision theory: telling us how to combine particular goals and credences to yield an instrumentally apt or sensible choice, given various competing options.]

One way to approximate ideal guidance, given limited information, can be to maximize expected value. Suppose you have a headache, but someone has been messing with your medicine cabinet. Two identical pills remain, along with a note specifying that one is aspirin while the other contains arsenic—but not which is which. It would be ideal to take the pill that is actually aspirin in order to relieve your headache. But you don't have the information necessary to be able to do that reliably. Given how much worse it would be to accidentally take arsenic, it would be wisest to steer clear of both mystery pills. Taking either pill has a 50% chance of being slightly good, and a 50% chance of being extremely bad, resulting in an overall negative prospect.  Since doing nothing has a better prospect than taking a mystery pill (being neutral rather than negative in value), doing nothing is what best approximates ideal guidance in this situation.
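To make the comparison concrete, here is a minimal worked version, with values stipulated purely for illustration: say that relieving the headache is worth +1 and being poisoned is worth −1,000. Then the prospects are:

EV(take a mystery pill) = 0.5 × (+1) + 0.5 × (−1,000) = −499.5
EV(do nothing) = 0

Doing nothing has the better prospect, even though taking a pill is the only option that could possibly yield the ideal outcome of a relieved headache.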

Two important clarifications:

(1) As this case demonstrates, approximating ideal guidance is not the same thing as maximizing the probability of doing the ideal thing. [Footnote 3: Maximizing the probability of doing the ideal thing would have you take one of the pills hoping that it’s aspirin. The obvious problem with this is that it also maximizes your probability of doing the worst thing.] Sometimes a safe "second-best" option is better in prospect.

(2) The process of attempting to calculate expected values is not guaranteed to be your best way to evaluate prospects or approximate ideal guidance. If your attempted calculations are biased and error-prone, you may do better to rely on common-sense norms. For instance, maybe you think arsenic is a new brand of pain-relief medication rather than a deadly poison, in which case “not taking tampered medicine” would be a sane common-sense rule to follow. In such a case, prudently following the generally-reliable rule offers a better prospect, all things considered, than following one's unreliable calculations. [Footnote 4: Naive calculations supporting intuitively horrific actions, involving clear immediate harms that are "justified" on the basis of a speculative "greater good", seem especially unreliable and apt to be overridden by commonsense rules that prudently prohibit committing atrocities. See the various thought-experiments about butchering people for organs.]

Given this background principle of approximating ideal guidance—and our recognition that all other people, no matter how distant, matter morally—it follows that we can reasonably focus on near-term goods only if we can reasonably expect that doing so better serves the overall good. Epistemic arguments might influence how we pursue our moral goals; they can't rationally influence what our ultimate goals ought to be. So they cannot, in particular, undermine the principle that how much other people's interests matter does not depend upon their location in space or time.

Equipped with this background understanding, we are now in a position to assess Schwitzgebel's three arguments.

The Infinite Washout Argument

Schwitzgebel argues that we should give non-zero credence to any action having "infinitely many positive and negative consequences." But then everything washes out to having the exact same expected value of ∞ − ∞.

I'm not sure how best to deal with infinite ethics, but it surely isn't this. After all, Schwitzgebel's imagined extension of expected value reasoning—unlike the finite version—violates the Pareto principle: if outcome A is at least as good as an alternative B for everyone, and strictly better for some, then A is morally preferable to B. For example, suppose you start with an outcome that has both infinitely many happy people and also infinitely many unhappy people. If we number all the unhappy people, and are given the option to greatly benefit all of the evenly-numbered individuals (shifting them into the "happy" group), that would clearly be an improvement: better for those individuals, and worse for nobody. Schwitzgebel's approach implies that the value of the outcome remains unchanged (∞ − ∞). Since that violates the Pareto principle, it seems like it can't be the right way to evaluate outcomes with infinite populations.
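To see the clash in miniature, with values stipulated purely for illustration: suppose each happy person contributes +1 and each unhappy person contributes −1 to the total. Before the change, the total is a sum of infinitely many +1s and infinitely many −1s: ∞ − ∞. After the evenly-numbered individuals are shifted into the happy group, there remain infinitely many people in each group, so the total is again ∞ − ∞. On this accounting, nothing has improved; yet the second outcome is better for infinitely many people and worse for none, so the Pareto principle ranks it strictly better.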

If our ideal theory should include the Pareto principle, then we have grounds to reject Schwitzgebel's extension of expected value reasoning to the infinite case. Until we have a better understanding of how to sensibly incorporate infinite ethics into our reasoning, our best means of approximating ideal guidance may just be to bracket infinite cases, and set them aside. Our current tools don't work there. But we may still reasonably expect that the guidance we get by applying our tools to finite considerations is better (more closely approximating the ideal verdicts) than any alternative currently on offer. There's no reason to throw finite babies—even distant ones—out with the infinite bathwater.

The Cluelessness Argument

Next, Schwitzgebel claims that "We cannot justifiably guess which actions will be more or less likely to have a positive effect after a billion years."  Whereas longtermists "normally argue that it would be better for the billion-year-plus future if humanity does not destroy itself or permit large-scale catastrophes such as nuclear war," he takes there to be "equally plausible arguments in favor of human extinction or catastrophe."

I think that good judgment requires us to reject the extraordinary claim that anti-catastrophe and pro-catastrophe verdicts are "equally plausible". We should have a strong prior expectation that (e.g.) nuclear war is overall bad, and only revise this in light of robustly compelling arguments to the contrary. But there is nothing either robust or compelling about Schwitzgebel's arguments. Rather, what he offers is a speculative account of one possible future in which nuclear war turns out to be overall good.

This should not update our expectations at all, because we already know that it's possible for anything (even nuclear war) to be overall good. It just isn't likely. Crucially, Schwitzgebel doesn't offer any reason for us to judge his imagined scenario to be the most likely of all possible futures. He merely offers his personal opinion that it is "not so implausible", which is far short of what would be needed for his argument to actually work.

By contrast, the standard longtermist view—that catastrophes are bad, actually—is much more in line with common sense. Being a less extraordinary claim, it requires less evidence to justify. It's effectively just highlighting what we (should have) expected all along.  Some longtermists make more surprising claims, like that this is the most important century in human history. Such claims require robust argument. I think Holden Karnofsky's arguments to this effect meet this requirement, and warrant serious consideration. They remain contestable, but they're nothing like the handwavy nonsense that Schwitzgebel takes to constitute an "equally plausible" parody. That's my judgment, at least; readers are free to compare the arguments and draw their own conclusions.

Why Not Be Clueless?

At this point, Schwitzgebel responds that he agrees with common sense—nuclear war is bad!—but he just thinks we need to abandon the ambitions of longtermism. On his view, we can say that nuclear war is bad simply because it is bad in the near (or middle) term, never mind our cluelessness about the long term.

But this risks violating our principle of approximating ideal guidance. If it's really just as plausible that nuclear war is overall good as that it is overall bad (and to a comparable degree), then it seems we can no longer take our current opposition to it to be an accurate approximation of the ideal verdict. We might as well just flip a coin.

We should only epistemically discount some factor if we have reason to think that doing so serves to get us closer to the right answer—that is, better approximates ideal guidance. Action is instrumental: it aims to achieve some goal. Moral action aims to achieve moral goals. Ideal guidance would tell us how to actually achieve the correct moral goals. When faced with uncertainty, the best we can hope for is to approximate ideal guidance, in light of the available evidence about the likely and possible effects of our actions. But we have no reason to follow "guidance" that has literally no correlation with our ultimate moral aims. We shouldn't bother to follow the random "guidance" of a magic 8-ball, for example, because that doesn't help us to achieve our moral goals. It doesn't even approximate ideal guidance. Do commonsense judgments opposing nuclear war do any better? Schwitzgebel's imputed cluelessness suggests that they don't. If we want to secure a better future (without temporal restriction), we would do just as well to consult a magic 8-ball to decide whether or not to start a global nuclear war.

I think that judgment is mistaken—crazy, even. But to agree with me on this point requires believing that commonsense judgments here adequately approximate ideal guidance. We can reasonably bracket infinite cases on this basis, for example: we can think that we'll get closer to the right answer by not letting weird reasoning about infinite cases distract us. And we can similarly dismiss speculation about how "maybe nuclear war is good after all," if we're justified in a strong prior expectation to the contrary.  We can again expect that we'll get closer to the right answer by discounting such flimsy speculative reasoning. But we can't rationally ignore high stakes possibilities while simultaneously expecting that they would determine what we ideally ought to do. To justify setting something aside, we need to justifiably believe that this process gets us closer to the correct answer. We need to have a clue about the ideal view.

The Negligibility Argument

Finally, Schwitzgebel argues that "Even if we could justifiably guess that some particular action is likelier to have good than bad consequences in a billion years, the odds of good consequences would be negligibly tiny due to the compounding of probabilities over time."

This may often be true. But it is very extreme to claim that it must always be true, without exception—yet that is what Schwitzgebel's argument requires.

Consider his toy model: "Very optimistically, let's assume that in order for your action to have the billion-year positive consequences you desire, you'd have to get a little lucky every million years: Every million years, there's a 10% chance that the causal chain you started continues in the hoped-for positive direction. On these assumptions, after a billion years, there will be only a one in 10^1000 chance that your action is still effective."
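To unpack the arithmetic of this toy model: a billion years contains 1,000 million-year periods, and a 10% chance of staying on track in each period compounds to 0.1^1000 = 10^−1000, hence the "one in 10^1000" figure.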

We should never have 100% confidence in any particular model. Schwitzgebel regards the one he describes above as an "optimistic" model, so presumably he thinks a significant chunk of our credence should be reserved for less rosy prospects. What he neglects is that some non-trivial credence should also be reserved for more optimistic models. For example, we should include some non-trivial credence that the time of perils hypothesis is correct, and that if humanity can successfully navigate the coming challenges (such as AI alignment), then subsequent risk of existential disaster will eventually approach zero. I actually think there’s a good chance that if humanity survives the next few centuries, our descendants will eventually reach such a stable state of robust existential safety, for the sorts of reasons that Carl Shulman lays out here.

But you needn't share my optimism. Even if you merely allow 1% (or 0.1%, or any non-negligible) credence for this more optimistic model, the probability that averting an existential risk this century leads to better results after a billion years is vastly greater than Schwitzgebel imagines. The negligibility argument only goes through if the time of perils hypothesis warrants negligible credence. And Schwitzgebel has not established that.
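A rough illustration, using purely stipulative numbers: suppose you place 99% credence in the pessimistic toy model above, and just 1% in a time-of-perils model on which, once the coming challenges are navigated, the good effects persist with probability 0.1. Your overall probability that the benefit persists for a billion years is then about 0.99 × 10^−1000 + 0.01 × 0.1 ≈ 0.001. That is nowhere near negligible, and it is roughly 997 orders of magnitude greater than the pessimistic model alone would suggest.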

Conclusion: The Faint Voices of the Distant Future

In many cases, I agree with Schwitzgebel that "calculations over [the billion-year-plus future] are more likely to mislead than enlighten." In fact, this is precisely why I disagree with many of Schwitzgebel’s arguments: we should be very skeptical of flimsy speculative arguments that violate reasonable prior expectations. The claim that nuclear war is "plausibly" good is a big one, and requires more than arbitrary numbers to justify it. Such extraordinary claims require correspondingly robust supporting arguments and evidence. Calculations with made-up numbers lack the requisite robustness if the numbers in question could easily be so far off-target as to reverse the intended conclusion.

But there are some guiding principles that we should not lose track of. We should remember that reasonable guidance cannot be entirely unrelated to ideal guidance; it has to at least approximate the latter. The best available guidance will more closely approximate ideal guidance than any alternative method available to us. So if we're confident that nuclear war is (from our current epistemic position) worth opposing, it must be that we have grounds to expect that nuclear war would be overall bad from an ideal perspective. Since we have no grounds to doubt that distant people matter morally, it must be that we can reasonably take nuclear war to have a negative prospect, even while taking all times into account. [Footnote 5: At least until we're given a robustly compelling argument for thinking otherwise. I'm not advocating dogmatism here, just a sensible degree of skepticism about extraordinary claims, as a ward against gullibility.] Our expectations about the future are fallible and revisable. But we're not really clueless: some expectations are obviously more reasonable than others. In particular, it's more reasonable to expect nuclear war to be overall bad than overall good in its effects on sentient beings.

We should remember that common sense can guide us, often better than explicit calculations. If someone claims to have calculated that nuclear war is a good idea, you can reasonably expect that they have made a mistake. Part of what you are then expecting is that someone with more accurate numbers would reach a different verdict from theirs. This is similar to how, if someone claims to have produced a mathematical "proof" that 1 = 0, you can predict in advance that they must have made a mistake. Rather than being clueless, you are staking a claim to knowledge that is more robust than their first-pass calculations. We should only revise our expectations about the badness of nuclear war in light of truly compelling, comprehensively considered arguments. A hand-wavy paragraph or two with made-up numbers cannot possibly suffice.

Finally, we should remember that all models are fallible, and that we should distribute our credence across a wide range of them. In particular, we should assign non-negligible credence to the time of perils hypothesis. It doesn't even have to be the most likely scenario in order to establish that existential-risk mitigation is extremely worthwhile, in prospect—even more worthwhile than it would seem if one were to consider only the near and medium terms.

The voices of those around us ring loudly in our ears. With some effort, we can bring to mind the voices of the next few generations. But ideally, the longer term would also have a voice in our moral imaginations. It's a faint voice that might easily be drowned out by our own biases. [Footnote 6: But again, note that this is first and foremost a reason to discount speculative reasoning that lacks robust supporting evidence; not a reason to discount the interests of distant people as such.] This isn't unique to longtermism: any genuine moral consideration may be best ignored by those who would distort and manipulate it to their own ends. But I don't think we should ever want everyone to ignore a genuine moral consideration. Some may be more careful listeners and inquirers, and find a way to amplify the truth of the matter. So hope remains that we may better approximate ideal guidance if we listen, with care and consideration, to what the many faint voices of the distant future have to say.

About the author

Richard Yetter Chappell

Richard Yetter Chappell is an associate professor of philosophy at the University of Miami. He works on ethical theory, applied ethics, and the relation between the two. Chappell is the author of Parfit's Ethics, and lead author and editor of an open-access textbook on utilitarianism. He blogs about moral philosophy at Good Thoughts.
