The Future of Intelligence: An Interview with Steve Hsu

Stephen Hsu is a professor of physics at Michigan State University and the founder of several tech startups, including Genomic Prediction, Othram, and Superfocus. In this interview we discuss genetic engineering, the future of AI, and mitigating catastrophic risks.

This interview has been edited for organization, clarity, and concision. The full audio will be published shortly.

— Editor

What are you most worried about when it comes to genetic engineering? And what worries do you think are the most overblown?

SH:

I think among specialists, the most overblown concern is pleiotropy. The modal geneticist tends to overestimate the amount of pleiotropy in the genome. Often they don't have good intuitions for high-dimensional spaces, or they were told in a textbook that pleiotropy is significant. But all of our empirical tests show that pleiotropy is not that strong: you can make big gains that are pretty safe, and you can boost one trait without necessarily screwing up something else. The deep structure of our genetic architecture is different from what most people intuit from their biology education, mainly because genetic traits occupy a much higher-dimensional space than most people are used to thinking about.
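
Purely to illustrate the geometry being described here, the following is a minimal sketch with toy numbers (random effect sizes, numpy only, nothing from real genomic data): two independently drawn effect-size vectors, one per trait, become nearly orthogonal as the number of variants grows, which is the high-dimensional reason a big push along one trait's direction need not drag another trait with it.

```python
# Minimal sketch: in high dimensions, two independently drawn effect-size
# vectors (one per trait) are nearly orthogonal, so moving along one trait's
# direction has little projection onto the other. Effect sizes are random toy
# numbers, not real GWAS estimates.
import numpy as np

rng = np.random.default_rng(0)

for n_variants in (10, 100, 10_000):
    a = rng.normal(size=n_variants)  # hypothetical effect sizes, trait A
    b = rng.normal(size=n_variants)  # hypothetical effect sizes, trait B
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    print(f"{n_variants:>6} variants: cos(angle between traits) = {cos:+.3f}")

# The overlap shrinks roughly like 1/sqrt(n_variants), so with tens of
# thousands of variants the two traits are close to independent directions
# in genotype space.
```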

I think a very real concern is that we could go through an era of very strong inequality like we've never seen before. When you have selected or engineered humans that are very different from the wild type, and 90% of the world population is still wild type, you're going to have a breakaway elite. There was a period when aristocrats in England were quite a bit taller than ordinary people simply because they had better nutrition growing up, while the lower classes were nutritionally deprived. Imagine a guy in his evening jacket with a cane who's literally a head taller than his carriage driver. That guy could have attended Oxford and been educated on Euclid or Gauss, while his shoe shiner was illiterate. People often forget that this happened, but it was the norm in most countries just 100 or 200 years ago. One can imagine a future where the edited or selected people are tall, beautiful, athletic, and live to be 200 years old—at dinner they're discussing convex optimization of objective functions in complexified tensor spaces, while the server has no hope of ever understanding their discussion no matter how hard they study. In our recent past, inequality has been mitigated by nutrition and mass education, but it would become far more entrenched if the difference were purely genetic. We want to avoid a world where such inequalities exist, especially when the cause is genetic modification or genetic selection.

What does a good long-term future look like, in terms of a healthy way that humanity could be reconciled with genetic technology?

SH:

In the long run, we could improve humanity. People in the future could on average be smarter, much longer lived, nicer, more cooperative, and have lower rates of mental illness. I think this will be within our grasp technologically in the near future. I don't know if society will organize itself to achieve these things. Having worked in this field for 10 or 12 years, I think it's been established in published papers that we can predict complex traits from DNA. We built a company that can genotype embryos and select them based on this information, so I have no doubt that what I just described is a possible future humanity could achieve. Whether it will get there, I don't know.

If there are no downsides to moving the slider up, won’t everyone maximize all plausibly beneficial traits? If pleiotropy really is low, this seems to be a possibility.

SH:

There could be downsides. Imagine you become so smart that you can appreciate the beauty of category theory and quantum gravity, and you want to spend the rest of your life thinking about and developing those things, which are the most beautiful creations of the human mind, at the expense of having lots of kids or a trillion-dollar company. Academics might think being smart is one of the best things you can be, but I'm not sure that it is.

But let me explain why these are going to be constrained choices for quite a while if you're doing embryo selection. Suppose you're a billionaire with 300 frozen eggs from a donor, and you have surrogate mothers who will carry them to term, so you've outsourced the whole thing. You still only have 300 eggs, and you want to pick the best one. The one that's the smartest is not necessarily the one that has the best facial morphology, the best team-leader personality, or, you know, the broadest shoulders. If you have a finite number of embryos to choose from, you're going to be trading things off, even if you have perfect predictors. And then you could say, oh, in the next phase we won't do that: we'll have CRISPR and we'll edit in whatever direction we want. But there will probably always be a limit to the number of edits you're willing to make, because there's some off-target rate. So you'll still have a budget of edits, and you'll be a little leery of going way beyond that budget. And then you have to allocate those edits across quasi-independent traits. So I don't want to predict what happens.
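
A toy simulation of the constrained choice described above, under the simplifying (and purely illustrative) assumption that predicted scores for different traits are independent across a batch of 300 embryos: the embryo that ranks first on one trait is usually unremarkable on another, so even perfect predictors force trade-offs.

```python
# Toy embryo-selection trade-off: 300 embryos, traits modeled as independent
# standard-normal predicted scores (synthetic numbers, for illustration only).
import numpy as np

rng = np.random.default_rng(1)
n_embryos, n_trials = 300, 5_000

rank_on_other = []
both_best = 0
for _ in range(n_trials):
    trait_a = rng.normal(size=n_embryos)
    trait_b = rng.normal(size=n_embryos)
    best_a = trait_a.argmax()
    # Percentile of the trait-A winner when re-ranked on trait B.
    rank_on_other.append((trait_b < trait_b[best_a]).mean())
    both_best += best_a == trait_b.argmax()

print(f"Average trait-B percentile of the trait-A winner: {np.mean(rank_on_other):.2f}")
print(f"P(same embryo is best on both traits): {both_best / n_trials:.4f}")
# Roughly 0.50 and roughly 1/300: the trait-A champion is just an average
# draw on trait B, which is why a finite embryo batch forces weighting
# traits against each other.
```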

I think your point about constraints is well put, and a big piece of the puzzle. I know Gwern has also said that CRISPR-style gene editing is probably not as revolutionary as was first anticipated.

SH:

The reason is that we don't know exactly which edits to make yet. There are two problems holding CRISPR back. One is the off-target problem: how many edits can you really do safely without accidentally making edits you didn't want? That's a technological problem that the molecular biology jockeys are trying to fix right now. The second, which is a data problem and more in my field, is: which edits would you actually want to make? Even if you have a predictor for height, do you know the causality of each change you're going to make? The predictor may know that there is a SNP in this region of the genome which correlates with height and allows it to predict height. But we don't necessarily know exactly what edit to make, because in the neighborhood of that SNP are a bunch of other correlated SNPs, and the true causal one might be one or two SNPs over. That problem is still unsolved in computational genomics—we don't actually know which exact edits to make. It's likely to take a long time to solve because we need a lot of data.
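
A small simulation of this correlation-versus-causation point, with made-up linkage disequilibrium and effect sizes: a tag SNP that is merely correlated with the causal SNP predicts the trait almost as well, yet in this toy model the trait depends only on the causal SNP, so editing the tag would change nothing.

```python
# Sketch of why prediction tolerates linkage disequilibrium but editing does
# not. One causal SNP affects the trait; a nearby tag SNP is correlated with
# it (r ~ 0.9) but has no effect of its own. All parameters are invented.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

causal = rng.binomial(2, 0.3, size=n).astype(float)   # 0/1/2 copies of the causal allele
tag = causal.copy()                                    # nearby SNP in strong LD ...
flip = rng.random(n) < 0.10                            # ... but imperfectly correlated
tag[flip] = rng.binomial(2, 0.3, size=flip.sum())

trait = 0.5 * causal + rng.normal(size=n)              # only the causal SNP has an effect

print("corr(causal, trait) =", round(np.corrcoef(causal, trait)[0, 1], 3))
print("corr(tag, trait)    =", round(np.corrcoef(tag, trait)[0, 1], 3))
# The tag SNP is nearly as good a *predictor*, but in this model editing it
# would leave the trait untouched; only the causal SNP is worth editing.
```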

To clarify: With embryo selection you can rely on correlations between genotype and phenotype, while with CRISPR you have to go a lot deeper to actually find the causal connections?

SH:

That's exactly right—prediction only requires correlation; manipulation requires a causal map. And even for disorders we think of as simple, reducing risk can require a large number of edits. For instance, the best-known genetic variants that lead to elevated breast cancer risk are in the BRCA1 and BRCA2 genes. Those are typically single alleles with Mendelian properties. But most women in the population who get breast cancer don't have these BRCA mutations; only about one in a thousand women carries them. Most women who have a family history of breast cancer and high polygenic breast cancer risk are not carriers of any BRCA variant. The BRCA genes are just the tip of the iceberg: we discovered that these rare mutations have huge effects, but once we started processing the data, we realized that most people who are at high risk for breast cancer got an unlucky throw of the dice in a thousand different locations in the genome. Disease susceptibility is typically quite polygenic, and that's a point that hasn't been fully absorbed in biomedical science.
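
A minimal sketch of the contrast being drawn here, with invented frequencies and effect sizes: a rare large-effect variant gives a handful of carriers a big jump in risk, while most people in the high-risk tail get there by summing thousands of tiny effects in a polygenic score.

```python
# Toy contrast between a rare, large-effect (BRCA-like) variant and polygenic
# risk built from many tiny effects. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
n_people, n_snps = 20_000, 1_000

carrier = rng.random(n_people) < 0.001                 # ~1 in 1,000 carries the rare variant
genotypes = rng.binomial(2, 0.5, size=(n_people, n_snps))
weights = rng.normal(scale=0.02, size=n_snps)          # many tiny effects
polygenic = (genotypes - 1) @ weights                  # centred polygenic score

liability = polygenic + 3.0 * carrier                  # the rare variant adds a big jump
high_risk = liability > np.quantile(liability, 0.95)   # top 5% of modelled risk

print(f"Fraction of high-risk people who carry the rare variant: {carrier[high_risk].mean():.3f}")
# Nearly all carriers land in the high-risk tail, yet they are only a few
# percent of it: most high-risk people got an unlucky polygenic draw, the
# iceberg below the BRCA tip.
```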

So is that the reason why CRISPR hasn't had the radical impact that some have been worried about? There don't seem to be any rogue gene drives decimating insect populations or crazed terrorists releasing engineered pathogens yet.

SH:

There are many super well-funded startups developing CRISPR-related therapies. Imagine a situation where there is a Mendelian variant: it's a single place you need to edit, and we understand it because it's simple and has a big effect. Let's say that people with this weird variant have 10 times the risk of heart disease. You could apply CRISPR treatments to an embryo fairly easily, because there are only around 100 cells you need to edit. Or you could apply it to an adult who has a heart that weighs about a kilogram—there are a lot of cells in there, and you need to edit all of them. How are you going to inject the CRISPR viral agent into every cell in that heart? An eye problem like macular degeneration would be a little easier, since the eye isn't that big. I would conservatively estimate that there are billions of dollars invested in companies racing toward these kinds of solutions, but it's not easy. And these are just the finite, high-impact mutations where the genetic architecture is quite simple: A or B, and if you have A, I'll edit it to B for you. But even then, you may have to edit a billion cells or something. So it's non-trivial.

Now, as far as gene drives for mosquitoes, Kevin Esvelt, who's one of the most high-profile people thinking about this stuff, has been super careful about rolling this out. He's engaged in lots of public conversations, and he himself is worried about it. People are being deliberately careful. Now, could some hacker in his garage do it? Maybe. But I just don't think there's that much interest in, say, a gene drive for mosquitoes built by cool biohackers in Oakland. Maybe that'll happen. I don't know.

So are you worried at all about low-hanging fruit, in the sense that there are really serious risks that could be produced by a very simple edit?

SH:

Yeah, the lowest-hanging fruit for quasi-existential risk is something that's already front-page news, but it's very distorted by politics. It's gain-of-function research with viruses. I refer you to my podcast, where I interviewed Jeffrey Sachs about this. There is lots of dangerous gain-of-function research on viruses going on, which could lead to pandemics. We've signed treaties saying that we're not supposed to do this, but the treaties allow us to continue researching "defensive technologies," including super dangerous variants. There are huge numbers of high-BSL labs run by quasi-governmental or defense-related institutions doing this research with little transparency. The EA community should be picketing in front of these research labs, Congress, and the NIH, demanding more transparency. Genetically engineered viruses that could lead to pandemics don't get enough attention and are a huge risk. Is CRISPR a large part of gain-of-function work? That's a good question. It's not really my expertise, but I think even pre-CRISPR they had a good handle on manipulating virus genomes, since viruses are fairly simple.

In previous discussions, you've said that we should all have a society-wide discussion before allowing embryo selection for anything that isn't merely limiting downside, such as raising IQ. I think that's a really good point. But what do you mean by a society-wide discussion? Do you have an image in your head of what that would look like? You've also made the point that East Asian parents are much less disturbed by the possibility of genetic screening than many Western parents you've met. So on what scale should the conversations take place: communal, municipal, regional, or national? And if it's anything less than a global conversation, isn't there a game-theoretic problem: if one group decides to use the full power of the technology to increase upside (like intelligence), won't that group outperform, outcompete, and perhaps take over the groups that refrain?

SH:

There's a kind of platonic ideal of how it should work, perhaps something like the Vulcan Science Academy: everyone is smart and rational, there's a series of conferences to discuss the ramifications, and the findings are published for the other Vulcans to vote on. That's an idealized example that will never happen on Earth—did we ever have a conversation about nuclear weapons? Most Americans probably don't know the difference between a fission and a fusion bomb. The hope is to get as much information out as possible, and then democratically vote on it.

I think Asian people in general are less worried about genetic enhancement. Many are likely to say, “it seems like it's good to have more healthy people,” or “it's good to have generally smarter people.” They think it’s reasonable for parents to decide what they want to do and what risks they want to take with their kid.

I know you've suggested that one possible solution to genetic inequality is to have government subsidized IVF embryo screening and selection for everyone. But if there's a group within the population who doesn’t want to use it, wouldn’t that group have radically different genetics in a couple generations?

SH:

Yes, that probably will happen. Some Luddites will remain wild type on purpose, and there'll be a much larger group who can't afford it, perhaps in developing countries. Maybe everyone in Denmark or Singapore will be the new Eloi type, with 150 IQs, living to be 200 years old. Yeah, I think that's very possible.

And then it will kind of be the first conscious speciation event.

SH:

Yeah, absolutely. I've been aware since I was a kid that once we got to this technological level, it would be a really key inflection point in the history of our ape species. It's going to happen very fast compared to evolutionary or even civilizational timescales. Barring civilizational collapse, I would say that a couple hundred years from now, at most, we're going to be experiencing speciation. "Speciation" may be too strong a term, because I think these different subpopulations will be able to interbreed for a long time, but there will be more and more qualitative differences between groups.

People are often worried about the negatives that could result from pursuing the endless upside, but the downsides of not using embryo selection are all around us. I know you've talked about growing up with a disabled neighbor.

SH:

The reason people buy home insurance is the same reason people use genomic prediction: you get many more expected healthy life years with your child. If I only used genomic prediction technology to eliminate tail risk for a child and charged a thousand bucks, by any rational analysis of expected utility that would be totally reasonable. It's frustrating, because many bioethicists don't understand what insuring against tail risk really means.
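
A back-of-the-envelope version of the insurance argument, using entirely hypothetical placeholder numbers (not Genomic Prediction figures): even a modest reduction of a small probability of a severe outcome carries an expected value far above a $1,000 price.

```python
# Back-of-the-envelope expected-value sketch for screening out tail risk.
# Every number below is a hypothetical placeholder chosen for illustration.
p_severe = 0.01           # baseline chance of a severe heritable condition
risk_reduction = 0.5      # fraction of that risk removed by screening
years_lost = 30           # healthy life years lost if the condition occurs
value_per_year = 100_000  # dollars per healthy life year (a common policy range)

expected_years_gained = p_severe * risk_reduction * years_lost
expected_value = expected_years_gained * value_per_year

print(f"Expected healthy life years gained: {expected_years_gained:.2f}")
print(f"Expected value of screening: ${expected_value:,.0f} vs a $1,000 price")
# 0.15 expected years and ~$15,000 of expected value against a $1,000 cost:
# the same asymmetry that makes home insurance rational despite rare payouts.
```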

What are the largest problems in the contemporary neo-Darwinist account of evolution?

SH:

Yeah, the real problem in evolutionary theory is that early advocates over-steelmanned their position when arguing with religious people who didn't believe we could be evolved beings. A really significant loophole in evolutionary theory is the time scale: how much time is required to get from a random soup of molecules to a DNA replicator? What is the time scale to go from DNA replicator to a single-celled organism? What is the time scale to evolve an eye or a brain? The standard evolutionary biologist would say, "Oh, it's obvious, it takes x amount of time and we see it in the fossil record." But that's just one sample from all the places in the universe where life could have evolved—maybe we were just extremely lucky, and the time scales are way longer. Maybe the universe appears empty because it usually takes a very long time to evolve anything. Maybe it only happened once out of a hundred trillion planets. So we won't know until we find other planets where life evolved.

Why is intelligence such an important trait across machines and humans? It’s a huge cause for concern, but it's also the secret of human success on the planet.

SH:

It’s clear that the thing that differentiates humans from animals is our intelligence. And that's why we've taken over the planet, maybe for the worse, but that is why. And it's why we're able to exert a large effect on the environment, the planet, other animal species, et cetera. So it's natural for us to be concerned about the emergence of another, even more intelligent entity on the planet that might threaten our existence. I think that's quite natural. The conscious control or influence on events in the universe, as far as we can tell, arises primarily from intelligence. There's nothing else other than the brute laws of physics that are governing what happens in this universe. So if we meet an alien species that threatens to eat us or turn us into farm animals, they are probably an intelligent species. So I think it's reasonable to be concerned about intelligence.

Do you buy into the orthogonality thesis that these aliens who want to eat us could have radically different values while simultaneously having very high intelligence?

SH:

Absolutely. Sure. Why not? But I think one of the most interesting ramifications of radical intelligence increase is what it says about base reality. If you accept that we can create artificial systems that reason much better than humans, and you keep extrapolating, then the chance that we live in a simulation goes way up. If the laws of physics allow human beings, who have really only existed for a short period of time, to create really powerful artificial intelligences, then we are probably in a simulation. Today essentially 100% of the minds on our planet are evolved biological ones, but in a thousand years it might be that 50% of minds are in silico. Over the full lifetime of the universe, perhaps 99.99999% of minds will be artificial, versus a truly tiny set of biological ones. And some proportion of those minds will live in virtual worlds. Out of this huge ensemble of all the minds that will ever exist, it seems very unlikely that we are among the biologically evolved "wet" minds that exist in base reality. In all likelihood we're in a simulation, just from the fact that we created artificial minds on such a short time scale. That itself seems to be some sort of risk. Nick Bostrom popularized this idea, but I think it's been around for a very long time. It's more plausible now that AGI doesn't seem so far-fetched.

There has been a lot of research on producing intelligent machines safely, and it seems to be less ethically fraught than enhancing human intelligence. But do you see a relationship between AI and human intelligence production—are they correlated, or how do they influence one another? If we want to produce a human future, even if it's an altered human future, should we perhaps think about emphasizing human intelligence?

SH:

Among AI researchers I’ve spoken with, some have mentioned that if we had smarter humans, maybe the existential risk would be a little less from these AGIs. And maybe smarter humans would have a better chance of solving the alignment problem so that when the first AGIs are produced, they are better aligned with human flourishing than would otherwise be the case. So there is an overlap between the communities that are thinking about AGI and existential risk and the communities that are thinking about improving human intelligence. Gwern is an example, because he's written about both.

A lot of the work Genomic Prediction does is predicting phenotypes from genotypes based on machine learning analysis of huge amounts of data. AI improvements are therefore entangled with improving our understanding of genetics in general. Isn't it reasonable to assume that AI takeoff rates will be greater than takeoff rates for genetically enhanced human intelligence? In other words, for a while these two things are correlated, but they will come apart, and AI will become dramatically smarter than the smartest human.

SH:

Yes. Right now we're making heavy use of fairly dumb computers and dumb algorithms to push our science forward. It's very plausible to me that AI advances will lead to artificial minds that take over scientific research. We will become more and more reliant on inferences from, and computations done by, giant neural nets. The problem is that a neural net is similar to an evolved system: it may work well, but we don't really understand the information processing going on inside. So we could end up in a situation, relatively soon, in the next hundred years, where we basically hand off almost all science to these artificial minds that are much better at it. If we manage to align them, we may have years in which humans are becoming healthier and smarter than ever before while the AIs are also improving. I do agree with you that AI progress may outstrip progress in genomics. Alignment is going to be the determining factor in whether humans stay abreast of the machines, and for how long. I think what eventually happens is that human brains and machines interface, or perhaps even merge: something like Neuralink, and then perhaps mind uploads interacting with artificial minds in a virtual space.

Are you optimistic about solving the alignment problem? Or do recent advances, like GPT-4, demonstrate that human and machine intelligence are starting to come apart already?

SH:

Well, I don't know how much we can extrapolate from GPT-4, but I fundamentally don't think the alignment problem is solvable, in a rigorous sense, because AI is so complicated. As I was saying before, the process of training a giant neural net with a trillion connections is a little bit like evolving a brain. And you can't fully predict how that thing is going to behave. I think it's kind of crazy to think we could predict or control a much smarter being. Even if you manage to somehow embed human flourishing as a core value in the AI, you don't quite know how it's going to interpret human flourishing. It might decide that the kindest thing to do is upload all human beings into storage until it figures out what human flourishing looks like. And humans would be incapable of understanding its decision process in any detail. So I don't think it's solvable. Since Eliezer's "Death with Dignity" (editor's note: posted by Eliezer Yudkowsky on the 1st of April), it seems that he and many researchers have become Luddites, in the sense that they believe we can't align AGIs but we can forestall their appearance.

In some sense AGIs will be our descendants. You know, we're barely different from apes. So it's like, well, how much persistence do you want for these ape-like things? Or could we just move on to something better? I don't feel quite the same way as some of these guys about existential risk from AGIs. And again, if I accept that AGIs exist, and I am able to think in terms of billion year timescales, like a physicist, and not just a thousand year timescales, then I end up wondering “is this base reality?” And if it's not base reality, what do I care what is going to happen to the other ape-like things on this planet?

I think one value that you hold is value diversity. And it seems like solving the alignment problem may imply a real narrowing of values, while genetic engineering would have a better chance of preserving human value diversity. I have trouble imagining an AI fully understanding the diversity of human values. It could maybe cover the bottom of Maslow's hierarchy of needs, but toward the top, values start to diverge among human beings, and I don't know how it could cover all the different cases.

SH:

Yeah, I understand where you're coming from on that. But working every day with LLM technology in this new startup, during this renaissance that we're in, has definitely modified my priors about AI and AGI. What's non-trivial is that we've developed an automated process, neural net training of transformer architectures, that can create a map between human natural language and the fundamental concept space in which human minds operate. And it turns out it's only about a 10,000-dimensional space. The model develops its insight into that concept space by essentially reading everything ever written by humans. I think it will have a pretty good understanding of what humans like and don't like. Maybe it'll be very autistic and machine-like, but on the other hand, it will have deep, deep insight into human thinking and human emotions, because it will have read everything humans ever expressed. Its first purchase on concepts like molecules and black holes and schadenfreude will come entirely from human thoughts. So I think it won't lack understanding of those things. It may not care the way we care about some of those things, but it will not lack understanding of them.

About the author


Stephen Hsu is a physics professor, founder of Genomic Prediction, and host of Manifold.
