Stephen Hsu is a professor of physics at Michigan State University and the founder of several tech startups, including Genomic Prediction, Othram, and Superfocus. In this interview we discuss genetic engineering, the future of AI, and mitigating catastrophic risks.
This interview has been edited for organization, clarity, and concision. The full audio will be published shortly.
— Editor
What are you most worried about when it comes to genetic engineering? And what worries do you think are the most overblown?
SH:
I think among specialists, the most overblown concern is pleiotropy. The modal geneticist tends to overestimate the amount of pleiotropy in the genome. Often they don't really have good intuitions for high-dimensional spaces, or they were told in a textbook that pleiotropy is significant. But all of our empirical tests show that pleiotropy is not that strong, so you can have big gains which are pretty safe, or you can boost one trait without necessarily screwing up something else. The underlying deep structure of our genetic architecture is different from what most people intuit from their biology education, mainly because genetic traits occupy a much higher-dimensional space than most people are used to thinking about.
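One way to build intuition for the high-dimensional point (my illustration, not SH's): directions drawn independently at random in a high-dimensional space are nearly orthogonal, so two unrelated "trait directions" barely overlap. A minimal stdlib-only sketch:

```python
import math
import random

random.seed(0)

def random_unit_vector(dim):
    """Draw a direction uniformly at random on the unit sphere."""
    v = [random.gauss(0, 1) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def cosine(u, w):
    """Cosine similarity of two unit vectors (their dot product)."""
    return sum(a * b for a, b in zip(u, w))

# As the dimension grows, two random directions overlap less and less:
# |cosine| shrinks roughly like 1/sqrt(dim).
for dim in (3, 100, 10_000):
    u, w = random_unit_vector(dim), random_unit_vector(dim)
    print(dim, round(abs(cosine(u, w)), 3))
```

In a 10,000-dimensional space the overlap between two random directions is on the order of 1%, which is one way to see why moving along one trait axis need not drag another trait with it.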
I think a very real concern is that we could go through an era of very strong inequality like we've never seen before. When you have selected or engineered humans that are very different from the wild type, and 90% of the world population is still wild type, you're going to have a breakaway elite. There was a period when aristocrats in England were quite a bit taller than normal people just because they had better nutrition growing up, while the lower classes were nutritionally deprived. Imagine a guy in his evening jacket with a cane who's literally a head taller than his carriage driver. That guy could have attended Oxford and been educated on Euclid or Gauss, while his shoe shiner was illiterate. People often forget that this happened, but it was the norm in most countries just 100 or 200 years ago. One can imagine a future where the edited or selected people are tall, beautiful, athletic, and live to be 200 years old—at dinner they're discussing convex optimization of objective functions in complexified tensor spaces, while the server has no hope of ever understanding their discussion no matter how hard they study. In our recent past, inequality has been mitigated by nutrition and mass education, but it would become far more entrenched if the difference were purely genetic. We want to avoid a world where such inequalities exist, especially when the cause is genetic modification or genetic selection.
What does a good long-term future look like, in terms of a healthy way that humanity could be reconciled with genetic technology?
SH:
In the long run, we could improve humanity. People in the future could on average be smarter, much longer lived, nicer, more cooperative, with lower rates of mental illness. I think this will be within our grasp technologically in the near future. I don't know if society will organize itself to achieve these things. Having worked in this field for 10 or 12 years, I think it's been established in published papers that we can predict complex traits from DNA. We built a company which can genotype embryos and select embryos based on this information, so I have no doubt that what I just described is a possible future outcome that humanity could achieve. Whether it will get there, I don't know.
You’ve described something akin to “let a thousand flowers bloom” when it comes to the traits that people might select for. You’ve mentioned that an academic might choose the smartest kid, a beautiful person the most beautiful, and a strong person the strongest. But doesn’t this run up against a problem: if all these traits are independent, wouldn’t everyone just choose to move the IQ slider up to the maximum? And wouldn’t selecting aesthetic qualities, like eye color or height, become a Keynesian beauty contest of choosing whatever is most popular? So maybe a thousand flowers won’t bloom.
SH:
The examples I've given in the past are all based on real conversations I've had with trophy wives of billionaires and, you know, hedge fund masters of the universe. If you're the top hedge fund guy in the world, you often know you're not the smartest guy in the world. You employ people who are smarter than you, or you might talk with scientists who are smarter than you. And yet somehow you're super successful and you have the private jet and you fly in the scientists to talk with you. So what they consider important and valuable might be a little different from what Joe academic PhD thinks. And the same thing with their trophy wife. Who's having the greatest life? Is it the brainy guys in the library? I'm not sure. Did Kobe or Michael Jordan have a great life? Are they having better lives than, say, Noam Chomsky? So there really is a diversity of preferences.
If there are no downsides to moving the slider up, won’t everyone maximize all plausibly beneficial traits? If pleiotropy really is low, this seems to be a possibility.
SH:
So there could be downsides. Imagine you become so smart that you can appreciate the beauty of category theory and quantum gravity, and you want to spend the rest of your life thinking about and developing those things, which are the most beautiful creations of the human mind, at the expense of having lots of kids and a trillion-dollar company. Academics might think being smart is one of the best things there is, but I'm not sure that it is.
But let me just explain why these are going to be constrained choices for quite a while if you’re doing embryo selection. Let’s suppose you’re a billionaire with 300 frozen eggs from a donor, and you have surrogate mothers who will carry them to term. So you've outsourced the whole thing. But you still only have 300 eggs, and you want to pick the best one. And the one that's the smartest is also not necessarily the one that has the best facial morphology or the best team leader personality or, you know, the broadest shoulders. So these things, if you have a finite number of embryos to choose from, you're going to be trading things off. Even if you have perfect predictors, you're going to be trading things off. And then you could say, oh, in the next phase, we won't do that. We'll have CRISPR and we’ll edit in whatever direction we want. But there's probably going to always be a limit to the number of edits you're willing to make because there's some off target rate. So you still probably will have a budget of edits that you can make. And you'll be a little leery of going way beyond that budget of edits. And then you have to allocate those edits toward some quasi-independent different traits. So I don't want to predict what happens.
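The finite-batch constraint can be made concrete with a toy simulation (my sketch; the trait names, weights, and zero-pleiotropy assumption are invented for illustration): with a few hundred embryos and several independent polygenic scores, the top embryo on one trait is almost never the top embryo on another, so any choice is a trade-off.

```python
import random

random.seed(1)
N_EMBRYOS = 300
TRAITS = ["trait_a", "trait_b", "trait_c"]

# Independent standard-normal polygenic scores for each embryo
# (zero pleiotropy by construction, the most favorable case).
embryos = [{t: random.gauss(0, 1) for t in TRAITS} for _ in range(N_EMBRYOS)]

# The top embryo on each trait: with 300 draws these are almost
# always different embryos, so you cannot maximize everything at once.
best_per_trait = {
    t: max(range(N_EMBRYOS), key=lambda i, t=t: embryos[i][t]) for t in TRAITS
}
print(best_per_trait)

# A selection index makes the trade-off explicit: weight the traits,
# then pick the single embryo with the best weighted sum.
weights = {"trait_a": 0.5, "trait_b": 0.3, "trait_c": 0.2}

def index_score(i):
    return sum(weights[t] * embryos[i][t] for t in TRAITS)

chosen = max(range(N_EMBRYOS), key=index_score)
```

The weighted-sum index is the classic animal-breeding approach to multi-trait selection; the point of the sketch is only that even with perfect predictors and independent traits, a finite batch forces you to choose weights.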
I think your point about constraints is well put, and a big piece of the puzzle. I know Gwern has also said that CRISPR style gene editing is probably not as revolutionary as was first anticipated.
SH:
The reason for that is that we don't know exactly which edits to make yet. There are two problems holding CRISPR back. One is the off-target problem: how many edits can you really safely do without accidentally making some edits you didn't want to make? That's a technological problem that the molecular biology jockeys are trying to fix right now. The second, which is a data problem and more in my field, is: which edits would you actually want to make? Even though you have a predictor for height, do you know what the causality is for each of the changes you're going to make? The predictor may know that a SNP in this region of the genome correlates with height, which allows it to predict height. But we don't necessarily know exactly what edit to make, because in the neighborhood of that SNP are a bunch of other correlated SNPs, and the true causal one might be one or two SNPs over. That problem is still unsolved in computational genomics—we don't actually know how to make the edits, or even which exact edits to make. It's likely to take a long time to solve because we need a lot of data.
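The tag-versus-causal problem can be sketched with a toy simulation (my construction; the effect size, noise level, and linkage strength are invented): a SNP that is merely correlated with the causal variant predicts the phenotype almost as well, yet editing it would change nothing.

```python
import random

random.seed(2)

def simulate(n=10_000, ld=0.9):
    """Simulate a causal SNP, a correlated 'tag' SNP, and a phenotype."""
    rows = []
    for _ in range(n):
        causal = random.randint(0, 1)
        # The tag SNP copies the causal allele with probability `ld`
        # (a crude stand-in for linkage disequilibrium).
        tag = causal if random.random() < ld else 1 - causal
        # Only the causal allele actually affects the phenotype.
        phenotype = causal + random.gauss(0, 0.5)
        rows.append((causal, tag, phenotype))
    return rows

def corr(xs, ys):
    """Pearson correlation, computed by hand to stay stdlib-only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

causal, tag, pheno = zip(*simulate())
print("r(causal, phenotype):", round(corr(causal, pheno), 2))
print("r(tag, phenotype):   ", round(corr(tag, pheno), 2))
```

Both correlations come out high, so the tag SNP is perfectly usable for *prediction*; but an edit must hit the causal site, and correlation alone cannot tell you which of the two it is.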
To clarify: With embryo selection you can rely on correlations between genotype and phenotype, while with CRISPR you have to go a lot deeper to actually find the causal connections?
SH:
That's exactly right—prediction only requires correlation. Manipulation requires a causal map. And even for disorders that look simple, reducing risk can require a large number of edits. For instance, the best-known genetic variants that lead to elevated breast cancer risk are in the BRCA1 and BRCA2 genes. Those are typically single alleles with Mendelian properties. But most women in the population who get breast cancer don't have these BRCA mutations—only one in a thousand women carries them. Most women who have a family history of breast cancer and high polygenic breast cancer risk are not carriers of any BRCA variants. The BRCA genes are just the tip of the iceberg: we discovered those rare, huge-effect mutations first, but once we started processing the data, we realized that most people at high risk for breast cancer got an unlucky throw of the dice in a thousand different locations in the genome. Disease susceptibility is typically quite polygenic, a point that hasn't been fully absorbed in biomedical science.
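A toy numerical illustration of the polygenic point (my sketch, with invented effect sizes): summing a thousand tiny random effects puts the unlucky tail far from the average even though no single locus matters much.

```python
import random

random.seed(3)
N_LOCI, N_PEOPLE = 1000, 10_000

# Each person's "risk score" is the sum of 1,000 small +/-1 effects,
# i.e. a thousand independent dice rolls of negligible individual size.
scores = sorted(
    sum(random.choice((-1, 1)) for _ in range(N_LOCI)) for _ in range(N_PEOPLE)
)
top_1pct = scores[int(0.99 * N_PEOPLE)]
print("99th-percentile polygenic score:", top_1pct)
# No locus contributes more than 1 part in ~2,000 of the possible range,
# yet the top 1% sits roughly 2.33 * sqrt(1000) ≈ 74 above the mean.
```

This is the same arithmetic behind polygenic risk scores: the tail of a sum of many small effects is large, even though no single edit would move anyone out of it.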
So is that the reason why CRISPR hasn't had the radical impact that some have been worried about? There doesn’t seem to be any rogue gene drives decimating insect populations or crazed terrorists releasing engineered pathogens yet.
SH:
There are many super well-funded startups developing CRISPR-related therapies. Imagine a situation where there is a Mendelian variant: a single place you need to edit, and we understand it because it's simple and has a big effect. Let's say that people with this weird variant have 10 times the risk of heart disease. You could apply CRISPR treatments to an embryo fairly easily, because there are only around 100 cells you need to edit. Or you could apply it to an adult, who has a heart that weighs about a kilogram—there are a lot of cells in there, and you need to edit all of them. How are you going to get the CRISPR viral agent into every cell in that heart? An eye problem like macular degeneration would be a little easier, since the eye is not that big. I would conservatively estimate that there are billions of dollars invested in companies racing toward these kinds of solutions, but it's not easy. And these are just the finite, high-impact mutations where the genetic architecture is quite simple: A or B, and if you have A, I'll edit it to B for you. But even then, you may have to edit a billion cells or something. So it's non-trivial.
Now, as far as gene drives for mosquitoes, Kevin Esvelt, who's one of the most high profile guys thinking about this stuff, has been super careful about rolling this out. He's engaged in lots of public conversations, and he himself is worried about it, et cetera. People are being deliberately careful about it. Now, could some hacker in his garage do it? Maybe. But, you know, I just don't think there's that much interest. Like a gene drive for mosquitoes by cool biohackers in Oakland or something. You know, maybe that'll happen. I don't know.
So are you worried at all about there being low-hanging fruits in the sense of there are these really serious risks that just could be produced by a very simple edit?
SH:
Yeah, the lowest-hanging fruit for quasi-existential risk is something that's already front-page news, but it's very distorted by politics. It's simply gain-of-function research with viruses. I refer you to my podcast episode where I interviewed Jeffrey Sachs about this. There is lots of dangerous gain-of-function research going on with viruses, which could lead to pandemics. We've signed treaties saying that we're not supposed to do this, but the treaties allow us to continue research on “defensive technologies,” including super dangerous variants. There are huge numbers of high-biosafety-level (BSL) labs run by quasi-governmental or defense-related institutions that are doing this research with little transparency. The EA community should be picketing in front of these research labs, Congress, and the NIH demanding more transparency. Genetically engineered viruses that could lead to pandemics don't get enough attention and are a huge risk. Is CRISPR a large part of gain-of-function work? That's a good question. It's not really my expertise, but I think even pre-CRISPR they had a good handle on manipulating virus genomes, since they're fairly simple.
I know you've suggested that one possible solution to genetic inequality is to have government subsidized IVF embryo screening and selection for everyone. But if there's a group within the population who doesn’t want to use it, wouldn’t that group have radically different genetics in a couple generations?
SH:
Yes. That probably will happen. Some Luddites will remain the wild type on purpose, and there’ll be a much larger group who can’t afford it, maybe who are in developing countries. Perhaps everyone in Denmark or everyone in Singapore will be the new Eloi type with 150 IQs and they live to be 200 years old and whatever. Yeah, I think that's very possible.
And then it will kind of be the first conscious speciation event.
SH:
Yeah, absolutely. I think I was aware since I was a kid that once we got to this technological level, that this would be a really key inflection point in the history of our ape species. It’s going to happen very fast, compared to evolutionary or even human civilizational timescales. Barring civilizational collapse, I would say, a couple hundred years from now at most, we're going to be experiencing speciation. “Speciation” may be too strong a term, because I think these different subpopulations will be able to interbreed for a long time, but just more and more qualitative differences between groups.
People often worry about the negatives that could result from pursuing the upside, but the downsides of not using embryo selection are all around us. I know you've talked about growing up with a disabled neighbor.
SH:
The reason people buy home insurance is the same reason people use genomic prediction—you get many more expected healthy life-years for your child. If I only used genomic prediction technology to eliminate tail risk for a child, and charged a thousand bucks, by any rational analysis of expected utility that would be totally reasonable. It's frustrating, because many bioethicists don't understand what insuring against tail risk really means.
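The insurance arithmetic can be made concrete with a back-of-the-envelope sketch; every number here is an assumption for illustration, not a figure from the interview.

```python
# Assumed inputs (all hypothetical):
risk_reduction = 0.001         # screening averts a 1-in-1,000 severe outcome
life_years_lost = 40           # healthy life-years lost in that outcome
value_per_life_year = 100_000  # a commonly used dollar valuation, assumed here
cost = 1_000                   # the $1,000 screening fee mentioned above

# Expected benefit = probability averted * size of the loss.
expected_benefit = risk_reduction * life_years_lost * value_per_life_year
print(expected_benefit, expected_benefit / cost)
```

Under these assumptions the expected benefit is around $4,000, several times the fee. This is the same expected-utility calculation that makes home insurance rational even though most houses never burn down.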
What are the largest problems in the contemporary neo-Darwinist account of evolution?
SH:
Yeah, so the real problem in evolutionary theory is that early advocates over-steelmanned their position when arguing with religious people who didn’t believe we could be evolved beings. A really significant loophole in evolutionary theory is the time scale: how much time is required to get from a random soup of molecules to a DNA replicator? What is the time scale to go from DNA replicator to a single-celled organism? What is the time scale to evolve an eye or brain? Now the standard evolutionary biologist would say “oh it's obvious, it takes x amount of time and we see it in the fossil record.” But that’s just one sample from all the places in the universe where life could have evolved—maybe we were just extremely lucky, and the time scales are way longer. Maybe the universe appears empty, because it usually takes a very long time to evolve anything. Maybe it only occurred once out of a hundred trillion planets or something. So we won’t know exactly until we find other planets where life evolved.
Why is intelligence such an important trait across machines and humans? It’s a huge cause for concern, but it's also the secret of human success on the planet.
SH:
It’s clear that the thing that differentiates humans from animals is our intelligence. And that's why we've taken over the planet, maybe for the worse, but that is why. And it's why we're able to exert a large effect on the environment, the planet, other animal species, et cetera. So it's natural for us to be concerned about the emergence of another, even more intelligent entity on the planet that might threaten our existence. I think that's quite natural. The conscious control or influence on events in the universe, as far as we can tell, arises primarily from intelligence. There's nothing else other than the brute laws of physics that are governing what happens in this universe. So if we meet an alien species that threatens to eat us or turn us into farm animals, they are probably an intelligent species. So I think it's reasonable to be concerned about intelligence.
Are you optimistic about solving the alignment problem? Or do recent advances, like GPT4, demonstrate that human and machine intelligence is starting to come apart already?
SH:
Well, I don't know how much we can extrapolate from GPT-4, but I fundamentally don't think the alignment problem is solvable, in a rigorous sense, because AI is so complicated. As I was saying before, the process of training a giant neural net with a trillion connections is a little bit like evolving a brain ecologically. And you can't fully predict how that thing is going to behave. I think it's kind of crazy to think we could predict or control a much smarter being. Even if you manage to somehow embed human flourishing as a core value in the AI, you don't quite know how it's going to interpret human flourishing. It might decide that the kindest thing to do is upload all human beings into storage, until it figures out what human flourishing looks like. And humans would be incapable of understanding its decision process in any detail. So I don't think it's solvable. Since Eliezer's Death with Dignity,1 it seems that he and many researchers have become Luddites: they believe that we can't align AGIs but we can forestall their appearance.
1 Editor's note: this article was posted by Eliezer Yudkowsky on the 1st of April.
In some sense AGIs will be our descendants. You know, we're barely different from apes. So it's like, well, how much persistence do you want for these ape-like things? Or could we just move on to something better? I don't feel quite the same way as some of these guys about existential risk from AGIs. And again, if I accept that AGIs exist, and I am able to think in terms of billion year timescales, like a physicist, and not just a thousand year timescales, then I end up wondering “is this base reality?” And if it's not base reality, what do I care what is going to happen to the other ape-like things on this planet?
I think a value that you have is value diversity. And it seems like solving the alignment problem may imply a real narrowing of value, while genetic engineering would have a better chance of preserving human value diversity. I have trouble imagining an AI fully understanding the diversity of human values. It could maybe cover the bottom of Maslow's hierarchy of needs, but around the top it starts to diverge among human beings, and I don't know how it could cover all the different cases.
SH:
Yeah, I understand where you're coming from on that. But working every day with LLM technology in this new startup, during this renaissance we're in, has definitely modified my priors about AI and AGI. What's non-trivial is that we've developed an automated process, neural net training of transformer architectures, which can create a map between human natural language and the fundamental concept space in which human minds operate. And it turns out it's only about a 10,000-dimensional space. The model develops its insight into the space of concepts, its own concept space, by essentially reading everything ever written by humans. I think it will have a pretty good understanding of what humans like and don't like. Maybe it'll be very autistic and machine-like, but on the other hand, it will have deep, deep insight into human thinking and human emotions, because it will have read everything that humans ever expressed. Its first purchase on concepts like molecules and black holes and schadenfreude will come entirely from human thoughts. So I think it won't lack understanding of those things. It may not care the way we care about some of those things, but it will not lack understanding of them.