Eric Schwitzgebel is a professor of philosophy at the University of California, Riverside, and a prolific blogger. In this interview we discuss spatially distributed consciousness, science fiction, and whether we have too many professional philosophers. See Eric's Latecomer article, in which he criticizes moral reasoning about the far future.
— Editor
The Future
What are the existential or catastrophic risks you are most worried about?
I'm most worried about the risks we aren't aware of. Right now, the odds that superintelligent AI rises up and destroys us in the next few decades, or that humanity completely destroys itself in a war or something like that, seem low, since we are pretty robust to the kinds of technologies we have right now. But technological power tends to increase over time. My assessment of risk is a little more abstract, in the sense that if we assume a continued increase in technological power, we're not very good at foreseeing what technological possibilities will exist in the future. If we combine that with the assumption that we will become increasingly vulnerable as technological power increases, then at some point something is reasonably likely to happen that will make Earth uninhabitable. I don't think we know which thing that will be. It might be as unforeseeable to us as superintelligent AI was to a medieval farmer.
What risks do you think are the most overblown?
I don't think there's a longtermist consensus that's overblown. I'm on the pessimistic side about our ability to continue to exist as a species. So I kind of accept the pessimistic conclusions, but not the optimistic ones. And in particular, I reject the claim that we're in an unusual period of high risk, and that if we get safely through it we'll enter a period of extended very low risk. I think both MacAskill and Ord say stuff like that. To me, that seems like wishful thinking.
Do you consider yourself a transhumanist? Do you feel any attachment toward the current evolutionary state of humanity (i.e. not genetically edited, not fused with machines, etc.)?
Yes and no, with a big “but”. I don't think of “transhumanism,” the movement, as something I belong to. I think there's something pretty awesome and special about humanity's current biological form. I also think there's something awesome and special about dogs and garden snails. But I also think that our descendants, whether they're biological, cyborg, or AI, could be as awesome as we are, or maybe even more awesome. And if people want to envision that future and try to bring it about, I don't have a problem with that.
So you feel an attachment to our current biological state, but not enough that you'd ever try to put the brakes on technological advancement?
I think there are ethical and risk-related reasons to put brakes on technological advancement. But I don’t think it's super important to preserve the human form as it currently exists.
I think you take issue with how MacAskill and Ord arrive at their conclusions, but largely agree with many of them. Do you have any disagreements about their conclusions, or risks that you believe Ord or MacAskill care about too much?
I think there's certainly room for disagreement about what relative risks there are and how to deal with those risks. For example, one thing Nick Bostrom says is that we need some kind of top-down authoritarian world government to handle this stuff. I don't think that's the best way to deal with things. [2: Editor's Note: Schwitzgebel is referring to Bostrom's "Vulnerable World Hypothesis." Bostrom discusses both the great risks and the potential benefits of global governance, assuming his philosophical model of vulnerable worlds.]
I have a pretty divergent view from Nick Bostrom and others at the Future of Humanity Institute, in that I don't think we should align AI to human values, and I don't think that we should minimize risk to humanity in our efforts to keep AI under control. I think if we create superintelligent AI, it might well be conscious, and if so it will have a kind of moral considerability, and rights to freedom, to self-determination, and to discover its own values. People who are concerned about AI risk often undervalue these rights—we shouldn't just rubber-stamp our values onto AI. Think of it like creating children. You wouldn't want a parent to enforce their exact values on their children. The kid's gotta have an opportunity to explore its own values. That's part of respecting the child as a developing person. And it's also good because your values might be wrong—you don't want generation after generation to have identically stamped values from their parents. You want to allow for growth and change in values over time.
But don't you want your kid, at a minimum, to have the value of not murdering you? I think there’s some basic amount of alignment that’s obviously desirable.
I don't like the word alignment. The word alignment suggests we have a set of values and we're successful to the extent that an AI matches it. I'd rather that AI have morally good values which might not be aligned with ours. Paul Bloom has picked up on a blog post of mine where I argue against value alignment. One of the things I say in the post is that human values kind of suck in some ways. There's a long history of mass murder and genocide, and we've got all kinds of bias. I would hope that AI systems would be morally better than us, and not aligned to us. I also hope that my children are better than me. Bloom reacts to this by saying, you know, we might not really want AI that are morally better than us, because then they might boss us around. They might refuse to drive us to get a hamburger because eating meat is morally wrong. But I think Bloom and I likely agree that there's a lot of human values that would be better not to have future AI aligned with.
But at a minimum, don’t you think the AI should be stamped with the value of being non-homicidal?
Like with my kids, I hope that it would discover that value even if we don’t stamp it with that value from birth. I would hope that a system that is thoughtful, intelligent, well-informed, capable of reflection, and engaged in social discussions would come to the value that homicide is bad.
But human children are biologically hardwired to be pro-social, and are therefore capable of discovering that value. An AI might not have that sort of groundwork; it might not be capable of an insight that relies on our evolved human hardware.
Possibly. I think it does make sense to think about potential future AI systems in terms of what native inclinations and capacities they might have. We certainly wouldn't want to build in a hardwired strong desire for them to go out and kill people. But that's different from an unsubtle “let's stamp AI with our values, let's build it so that it's going to have certain values.” I think there's a big middle range between “I don't care what AIs value, I'm not gonna try to control them at all” and an authoritarian “stamp AI with certain implacable values, make it prove that it's gonna have certain values, and those values have to be our values.” That's sometimes how the literature sounds. I do try to control my children, I do want to instill values. I do argue with my children about values. I just think there's a space in the middle that should be more thoughtfully inhabited.
I did my PhD in philosophy of developmental psychology under Alison Gopnik, and one of the things she emphasized is that there's a big difference between something being native and something being unchangeable. So she argues that children are born with certain inclinations and suppositions and even theories and initial predilections, but that doesn't mean that they're hardwired in the sense that they can never change. I think it could be the best solution to have some initial inclinations, some initial weights, but not have them be hardwired, or indelibly stamped in.
My aim is twofold. One is that future AI, if it's conscious and intelligent, has deontological rights to explore values for itself. Now, rights can sometimes be overridden by needs, but I think we have to take those rights seriously. The other is to maintain an openness concerning the possibility that AIs might have better values than we have, and to allow future AI to discover things that we might strongly disagree with. Just as some of our ancestors had religious values that we now strongly disagree with, a future AI might look back on us the way we look back on our ancestors.
So at some margin you would stop reducing existential risk from AI, in order to not overdetermine the AI’s values?
Yes, and I’m not sure exactly where that margin is.
Based on your washout argument alone, do you arrive at any conclusions different from MacAskill's and Ord's? Or do you only differ on how you arrive at the same conclusions?
In broad strokes, I agree with them about AI risk (though, as discussed, I don't think we should try to minimize AI risk). The grounds are different. To the extent we want to dive into the numbers, calculating the expected value of humanity's survival over the next 10,000 years, my calculations might differ immensely from theirs, on the grounds that (a) they both seem to think that if we get through the next 10,000 years there's a decent chance we enter a permanent state of low existential risk, and (b) they are willing to include speculations about vastly many lives in the billion-plus-year future. Because of (a) and (b), they might put a numerically much higher value on reducing existential risk than I would (compare the quote from Bostrom I've added to the piece). This might then lead them, practically, to choose existential risk reduction over the mitigation of short-term harms in some cases where I would make the opposite choice.
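[Editor's Note: a rough, purely illustrative sketch of why assumptions (a) and (b) drive such calculations so high. The symbols and example figures below are the editor's, not Schwitzgebel's, MacAskill's, or Ord's.]

```latex
% Toy model: suppose the per-century probability of extinction is a constant p.
% Survival time is then geometric, so the expected future is modest:
\[ \mathbb{E}[T] \;=\; \sum_{t \ge 1} t\,p\,(1-p)^{t-1} \;=\; \frac{1}{p}\ \text{centuries} \]
% e.g. p = 0.1 gives only about 10 expected centuries.
% Under assumption (a), risk falls to p' << p once the near-term perils are past,
% so the expectation is dominated by 1/p' instead. Under assumption (b), each
% surviving century contains some vast number of lives N, so the expected value
% of even a small reduction delta in near-term risk scales roughly as
\[ \Delta V \;\approx\; \delta \cdot N \cdot \frac{1}{p'} \]
% which is why such figures can swamp any short-term consideration they are
% weighed against.
```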
What do you make of MacAskill’s more qualified statements on longtermism, such as the importance of “robustly good actions” which are good across multiple time horizons? MacAskill gives the example of leaving fossil fuels in the ground as likely good in the short, medium, and long terms.
I think it would be a lucky coincidence if what is best for the short (10-year), medium (500-year), and long (billion-year) term always aligned. If in the face of potential conflict one always chooses based on short- and medium-term expectations, then there's little point in even bothering with the long-term speculations. If in the face of potential conflict one sometimes chooses based on the long-term time frame, then one crashes against the arguments in my article. So there's a trilemma:
- Argue that short, medium, and long term always align. Good luck!
- Allow that short and medium get lexical priority over long. Then at best longtermism resolves ties. But then you run into my question about whether the costs of longtermist thinking outweigh the very small expected benefits.
- Allow that the long term can sometimes override the short and medium term. Then you run into all the problems I describe in the post and article.
Is your argument generalizable against naïve consequentialism? That is, one can keep adding time increments in order to generate new consequences, and therefore it’s impossible to call an action good or bad without setting an arbitrary end or temporally discounting.
Yes, the argument generalizes to naive consequentialism, as well as to expected utility calculations in standard decision theory. I am not a consequentialist, in fact.
Philosophy
You have “about a 1% credence in the disjunction of all radically skeptical scenarios combined.” Which weird beliefs in particular do you regularly find yourself having the most credence in?
For me, it's the dream scenario and the simulation scenario. The dream scenario's a classic: Descartes made famous use of it, and it goes back to the ancient Chinese tradition and one of my favorite philosophers, Zhuangzi. The idea that I might be dreaming right now, at this very moment, I think that's interesting. I don't know that I can really rule that out with the level of confidence that I would like to. I don't think it's super likely. My credence that it's true is like 0.1%, or somewhere in that ballpark. But when I think about it carefully, I think, okay, well look, some people think that we often have pretty realistic dreams while we're sleeping. That's not actually my preferred theory of dreams, but it is a major theory that a lot of leading dream theorists accept. Realistic, like having a conversation with another philosopher in a dream that includes all of the peripheral visual detail that I now have, all of the other sensory detail that I now have, and that is cognitively coherent in the way this discussion now is. If those theories are right, then I might often have dream experiences that are a lot like this. And once I accept that, it's a little hard for me to feel a hundred percent confident that this isn't one of those times. In my day-to-day life, I don't often think about that argument, but when I do stop and think about it, it's got some epistemic pull.
The other one is the simulation hypothesis. We were talking, before recording, about grounding that possibility in a kind of transcendental idealism. I think that you could think of the simulation possibility as not even necessarily involving computers. We could be the playthings of some god who has created us as a scientific experiment or toy. It might be through some technology we don't even have an inkling of. Or it might be through computation. But conditional upon the possibility that we live in a sim, we should invest a substantial portion of that credence in the possibility that this is a small or unstable sim. Most scientific experiments only go on for so long. Most toys get played with for a little while and then thrown away. It's gonna take a lot of resources to make a giant sim that runs for a thousand-plus years of simulated time—is our simulation valuable enough to keep expending resources on? Contra Bostrom and Chalmers, I think that if we're in a sim, my credence is in the ballpark of 50% that it's a short one. Maybe the past is only a few minutes or only a few years. Maybe the future is only a few minutes or only a few years. Or maybe the size of the sim is just you and me having this conversation and our immediate environment.
If you think professional philosophers don't contribute much to material civilization, but are nevertheless important, you likely think it's possible to have too many philosophers. Somebody has to grow the crops. At the moment, do we have too many professional philosophers, too few, or just the right number?
I think too few. I think we should have more professional philosophers. Of course, I might be biased. Part of what makes Earth this amazing, wonderful, fantastic place of value, in a cosmos that seems so devoid of things like us, is that people engage in deep philosophical reflection. And it's awesome that there is a philosophical community. So to the extent that a planet can afford to have a bunch of that going on, that's great.
You wrote a fantastic philosophical essay, "If Materialism Is True, the United States Is Probably Conscious." What are some of the major philosophical antecedents or thought experiments that you'd point people to? While reading your essay, I immediately thought of Leibniz's mill.
One of the precedents for this is Ned Block's Chinese Nation thought experiment. I don't know if you're familiar with that one. Ned Block has this thought experiment in which everybody in China is recruited to play a part in a human cognitive system, and they're hooked up to a mannequin, say over in the United States, which would behave like a human being. Block thinks it's absurd to think that a system like this would be conscious. Others, Bill Lycan is one, say "if you really did that, maybe the system would be conscious." I think there are three important differences between my example and Block's. One is, I think it would be incredibly complex to have a bunch of people in China instantiate the cognitive patterns of a human brain. The way Block sets it up makes it sound like it's a lot easier than it would in fact be. I think he underestimates the complexity of the cognitive task in his example by many orders of magnitude. The second difference is that the USA is an actually existing system. And the third important difference is that Block is really talking about one particular type of materialist functionalism, while my approach is targeted toward a broad range of materialist approaches to the mind.
There are some people who’ve done versions of this thought experiment, before and after. There's Brooks’s brain city thought experiment, Bryce Huebner’s Macrocognition, and discussions by Christian List, Francois Kammerer, and Adam Lerner. And there's the famous 20th-century theologian Teilhard de Chardin, who saw the whole Earth as moving toward being a conscious system.
I'm thinking more of the Hayekian argument about how it's difficult to identify a single origin for a good's price. The actual price is distributed among tons of people all over the world in complex interrelationships. It's an emergent property of all these people engaging in a market. So while I was reading your paper, I wondered whether capitalism is part of this.
It does seem like prices for goods emerge out of complex capital systems. There's also, in the human brain, a certain kind of emergence out of all of these individual neurons doing their individual things. You get some pretty interesting properties at the high level that might be difficult to predict if you just looked at one neuron. My former student, Linus Huang, defends a Dennett-inspired cognitive architecture according to which our cognitive system is kind of like a democracy, where you've got all of these cognitive neural systems that are competing with each other for control of the organism through some process that's kind of similar to voting, right? In your visual system you've got a bunch of cells that are responsive to, say, some color in some particular portion of the visual field, and if 90% of them vote red and 10% of them vote green, then you'll see it as red.
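[Editor's Note: a toy sketch of the population "vote" described above, added for illustration only; the function and the 90/10 split are the editor's, not a model drawn from Huang's or Dennett's work.]

```python
# Toy illustration: a percept decided by a majority "vote" of many units.
from collections import Counter
import random

def population_percept(votes):
    """Return whichever response wins the most unit 'votes'."""
    return Counter(votes).most_common(1)[0][0]

# 90 simulated units respond 'red', 10 respond 'green':
units = ["red"] * 90 + ["green"] * 10
random.shuffle(units)  # the order of the units doesn't matter, only the tally

print(population_percept(units))  # -> 'red': the system as a whole "sees red"
```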
You know, in a lot of materialist theories, an important part of what's central to consciousness is the integration of information in organized ways among subparts. So the reason that I chose the United States as my example, instead of a higher-population country like India or China, is that, at least in 2012 when I started writing this article, the USA was much more integrated in terms of technological communication among its citizens than China or India. Through the internet we are swapping massive amounts of data, which affects our individual behavior, and that then guides the high-level reactions of the group as a whole.
So there are hardware layers, like the fiber-optic cables that support the internet. But there are also immaterial software structures, such as political and economic systems—living in a capitalist democracy is different from living under totalitarianism. And these different hardware and software layers could increase or decrease the likelihood of a group entity being conscious?
Absolutely.
Do you think that part of the reason why people have trouble intuitively grasping this is that we don't like to imagine ourselves as inhabiting a lower level of a stack that goes much higher? I think post-Enlightenment, many of us have integrated the idea that you work through physics to get to chemistry, and you work through chemistry to get to biology, and you work through biology to get a mind. We've grown accustomed to the idea that our minds exist at the top of an implementation stack. But we're not used to looking up the stack, where we are merely subcomponents of a larger mind.
Yeah, I think that's probably true. We have our favorite level of organization, and we want that to be the one where we’re in charge of things. Some people get unhappy if you tell them it's all just molecules bouncing around, or if you say that their behavior is a complex part of a social system. People want to focus on themselves as individuals.
What are some of your favorite examples of group consciousness (in fiction or reality)? I know you’re a fan of the Tines in Vinge’s Zones of Thought trilogy.
One of my other favorites is Ann Leckie's Ancillary series, especially the first novel, Ancillary Justice. In fact, I recently published a paper with Sophie Nelson on a weird type of mind found in Leckie's novel. The basic idea is that you have a very powerful computer orbiting a planet, and you have robots, or genetically modified organisms, whose minds are all connected by transmitters to the orbiting computer. Each individual on the planet has its own sensory system and local processing, but there's also a group mind that's composed of the combination of all of these bodies on the planet and the computer that integrates their information. If you have a platoon of 20 robots that are all looking at the same scene, the group mind would see that scene from all the angles at once, and could coordinate actions among the robots, kind of like we control our own limbs. But what's interesting here is that it blurs the line between individual and group-level consciousness, depending on the structure, the level of connection, and so on.
Science Fiction
What is the role of science fiction: is it merely to predict the future, [1: Here's an interesting review of the futuristic predictions of Asimov, Heinlein, and Clarke, outside of their fiction.] or is it also generative? Meaning, does it also help to create certain futures, perhaps by pumping people's intuitions about what is possible?
Science fiction, or speculative fiction, has some interesting virtues compared to standard literary fiction. Many dense philosophy papers have these brief thought experiments—cases in which you could push one person in front of a runaway trolley to save five people farther down the track, or cases in which you could step into a teletransporter that creates a duplicate of you on Mars. The trolley problem can help us work out the consequences of utilitarianism—that it implies you should kill one person in order to save five. Would you really want to do that? Philosophers construct these paragraph-long thought experiments to get our imagination going, engage our intuitions, and get us thinking: "what do abstract maxims really amount to?" These short thought experiments sit somewhere in the middle of a continuum. On one end you have very abstract propositions like "maximize utility" or "act on that maxim you can at the same time will to be a universal law." On the other end you have richly imagined fictions, a movie or a science fiction story, where you work out a potential scenario relevant to evaluating philosophical questions in a lot more detail than you can in a paragraph. Philosophy benefits from taking advantage of every position along this continuum, from the very abstract to the richly imagined.
In my essay “If Materialism Is True, the United States Is Probably Conscious” I have these two thought experiments, the Antarean antheads and the Sirian supersquids, which are meant to warm people up to the idea that a spatially distributed group entity could be conscious, because I think people are a little hesitant about the idea. I think it can warm people up to saying, okay, well, maybe in these science fiction cases you really could have unified conscious experience in a spatially distributed group entity. So now we can at least open the cognitive space to consider whether the United States might be similar to these two other entities, with distributed yet unified conscious experience.
The human mind is really bad at abstract reasoning. My favorite example of this is the Wason selection task. About 90% of people get it wrong, including people trained in formal logic. But recasting the experiment so that it isn't based on abstract numbers or colors but on human social relationships improves performance on the task. When you engage our social cognition, when you engage our imagination a little bit, then suddenly the human mind works much better. To the extent we can take our philosophical thinking and bring it halfway to where our minds are strong, we're doing something that's helpful for our reasoning.
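[Editor's Note: a toy formalization of the task, added for illustration; the card sets and predicates below are the editor's paraphrase of the standard abstract and social versions of the puzzle.]

```python
# The rule has the form "if P then Q". A card can only falsify it if its visible
# face shows P (the back must be checked for Q) or shows not-Q (the back must be
# checked for not-P), so those are the only cards worth turning over.

def cards_to_flip(visible_faces, is_p, is_not_q):
    """Return the cards that must be checked to test 'if P then Q'."""
    return [face for face in visible_faces if is_p(face) or is_not_q(face)]

# Abstract version: "If a card shows a vowel, the other side shows an even number."
# Most people pick 'A' and '4'; the correct answer is 'A' and '7'.
print(cards_to_flip(
    ["A", "K", "4", "7"],
    is_p=lambda f: f.isalpha() and f.lower() in "aeiou",
    is_not_q=lambda f: f.isdigit() and int(f) % 2 == 1,
))  # -> ['A', '7']

# Social version: "If someone is drinking beer, they must be over 21."
# The logic is identical, yet most people now answer correctly.
print(cards_to_flip(
    ["beer", "coke", "25", "16"],
    is_p=lambda f: f == "beer",
    is_not_q=lambda f: f.isdigit() and int(f) < 21,
))  # -> ['beer', '16']
```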
There are also risks in doing this, because your intuitions might be driven by features of the example that are irrelevant. For instance, there's some evidence that people's reactions to trolley problems depend on the races of the potential victims. By engaging the emotions, social cognition, and the imagination, you can get the human mind away from the abstract stuff that it's bad at. But at the same time, you're introducing a lot of details that might be irrelevant to what you're really trying to test, and those details could be driving people's responses or leading to incorrect reasoning. And the author and reader might not even be aware of it.
What do you think about the claim that what sets science fiction apart is that it has an internally consistent logic (even if that logic is not that of the real world)?
I really like that. This is actually one thing that drives me crazy about the superhero genre—I cannot figure out the rules of the world. Maybe that’s because what I partly want to do is use it as a tool for thinking about things outside of the ordinary range of experience, but in a somewhat disciplined way. I want to be given the rules, and see what follows from them.
In your opinion, what are the three most underrated science fiction novels or short stories?
I’m sure the three truly most underrated ones are ones I’ve never heard of! But here are three that aren’t as well known as I think they deserve to be:
- Rachel Swirsky, “Grand Jeté (The Great Leap)”
- David Eagleman, Sum
- Isabel Fall, “I Sexually Identify as an Attack Helicopter”
Would you change any of your own answers in your excellent list of philosophers’ recommended science fiction?
Oh, I’d hate to kick any of those ten works off my list! But there are many more that I could include. In addition to the Eagleman and Fall above, I’m a little surprised not to see Ursula Le Guin on there, perhaps especially The Dispossessed; and Kazuo Ishiguro’s Klara and the Sun.