Human civilization is only a few thousand years old (depending on how we count). So if civilization will ultimately last for millions of years, it could be considered surprising that we’ve found ourselves so early in history. Should we therefore predict that human civilization will probably disappear within a few thousand years? This “Doomsday Argument” shares a family resemblance to ideas used by many professional cosmologists to judge whether a model of the universe is natural or not. Philosopher Nick Bostrom is the world’s expert on these kinds of anthropic arguments. We talk through them, leading to the biggest doozy of them all: the idea that our perceived reality might be a computer simulation being run by enormously more powerful beings.
Support Mindscape on Patreon.
Nick Bostrom received his Ph.D. in philosophy from the London School of Economics. He also has bachelor’s degrees in philosophy, mathematics, logic, and artificial intelligence from the University of Gothenburg, an M.A. in philosophy and physics from the University of Stockholm, and an M.Sc. in computational neuroscience from King’s College London. He is currently a Professor of Applied Ethics at the University of Oxford, Director of the Oxford Future of Humanity Institute, and Director of the Oxford Martin Programme on the Impacts of Future Technology. He is the author of Anthropic Bias: Selection Effects in Science and Philosophy and Superintelligence: Paths, Dangers, Strategies.
0:00:00 Sean Carroll: Hello everyone. Welcome to the Mindscape podcast. I’m your host, Sean Carroll and today’s guest is someone I had in mind right from the beginning as a wonderful guest for Mindscape, as soon as I started the podcast. It’s taken a while for us to work it out and get it to happen, but I’m very happy to have Nick Bostrom as the guest on today’s podcast. Nick, of course, is relatively famous in the public sphere as a philosopher because he’s one of the driving forces behind the simulation argument. The idea that it is more likely than not that we human beings, and anyone else in our observable universe, are actually simulated consciousnesses, part of a program being run on a computer, designed and maintained by a far more advanced civilization. But he didn’t start there. Nick did not first start with the simulation argument. He got there from his thinking in philosophy. Some of his first work was on the anthropic principle. Cosmologists of course, know the anthropic principle as trying to figure out the selection effects that we should impose on cosmological parameters, given the fact that we have to live in a part of the universe where human beings can live. But the anthropic principle is not just for cosmologists.
0:01:10 SC: There’s a famous version of it, or I should say, a famous application of it called the Doomsday argument that goes back to John Leslie, Richard Gott, and other people. The idea is our technological civilization is not that old, right? I mean, maybe 500 years old, a few thousand years old, depending on what you count as technological civilization. But the point is, let’s imagine you’re hoping that our civilization at its technological peak is gonna last for millions of years. And then you say, “Well, the population is only growing. So it’s actually extremely unlikely to find ourselves as people who live right at the beginning of our technological civilization.” And therefore, people like Leslie and Gott and others have argued, the probable lifespan of our civilization is not that long. It’s measured in thousands of years, not millions of years. So this seems a little bit presumptuous. How can we decide the future lifetime of our civilization without getting out of our armchairs in some sense? That’s the philosophical problem that people like Nick Bostrom and others have attached their thought processes to, and it leads us to think about what a typical observer is like, and therefore, as we’ll get to in the podcast, could typical observers actually be simulated agents rather than biological ones?
0:02:27 SC: This is a really fun podcast. I think it’s very important stuff. Nick is now at the Future of Humanity Institute at the University of Oxford, where he also very famously worries about artificial intelligence becoming super intelligent and doing bad things to the world. So we’ll talk a little bit about that, but mostly today, in this conversation, we’re talking about the philosophical underpinnings, about how to think about these problems. And I think that the conversation we have will be very helpful to all of us when we do so. So with that, let’s go.
[music]
0:03:15 SC: Nick Bostrom, welcome to the Mindscape Podcast.
0:03:16 Nick Bostrom: Thanks for inviting me.
0:03:18 SC: So we know that you’re very interested in the future of humanity; you’re literally the director of a place called the Future of Humanity Institute, which sounds like an awesome responsibility [chuckle] to have. But there are different ways in which one can approach this issue of the future of humanity. You can very specifically say, “Well, this technological change will have this certain impact.” And I know that you probably do that at the institute. But there’s another angle one can take, which is more strictly philosophical, and I know you come from a philosophy background: this angle that we can use general principles of reasoning and the very, very meager data that we have to make grandiose pronouncements about the possible future development of humanity. And what I’m thinking of are things like the Doomsday argument, which you didn’t originate, but which I know you’ve had a lot to say about. So for those of us in the audience who are not familiar with the Doomsday argument, why don’t you explain it a little bit?
0:04:12 NB: Yeah, okay. So for context, it falls within the broader category of anthropic reasoning about observation selection effects, which comes up in different areas, including the foundations of quantum physics and cosmology. These kinds of methodological questions become important there. But the Doomsday argument is one particular application of this style of reasoning, a controversial one. And at the end of the day, I’m rather doubtful about its soundness. But for what it’s worth, it goes something like this… Well, it might be easiest to explain it via an analogy. So let’s consider a simple analogy first. Suppose that there are two urns and they are filled with balls. And you know that in one of these two urns, there are 10 balls, numbered one, two, three, up through 10. And in the other urn, there are a million balls, numbered one through a million. Now, somebody flips a coin, selects one of these urns at random, and places it in front of you. And they ask, “What’s the probability that this urn in front of you has 10 balls?” So you say, “50%, right? That’s easy.”
0:05:42 SC: Mm-hmm.
0:05:44 NB: Now, let’s suppose you then reach into this urn and you pull out a ball and you look at it. It’s number seven. So then you gotta update, right? So you say, “It’s more likely that I would get ball number seven if there were only 10 balls in the urn than if there were a million balls in the urn.” So you just use Bayes’ theorem, you conditionalize on this evidence, and you get a posterior that overwhelmingly attaches probability to the 10-ball-urn hypothesis. So all of that is uncontroversial, that’s just elementary probability theory. Now, the Doomsday argument is the idea that you should apply similar reasoning to different hypotheses about how long the human species will survive, how many humans there will be in the future. So instead of these two different hypotheses about the urn, now consider two different hypotheses about the total number of people that will ever have been born. To keep things simple, let’s suppose there are only two hypotheses: either there will be 200 billion in total or 200 trillion in total.
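The two-urn update can be sketched numerically; this is a minimal illustration using the ball counts and the fair-coin prior from the example above (the function name is just for this sketch):

```python
# Bayes' theorem for the two-urn example: a 50/50 prior over the urns,
# updated on drawing ball number 7.
def posterior_small_urn(drawn_number=7):
    prior_small, prior_big = 0.5, 0.5
    # Likelihood of drawing the observed number from each urn
    # (zero if the urn doesn't contain that number at all).
    like_small = 1 / 10 if drawn_number <= 10 else 0.0
    like_big = 1 / 1_000_000
    evidence = prior_small * like_small + prior_big * like_big
    return prior_small * like_small / evidence

print(posterior_small_urn(7))  # roughly 0.99999: almost surely the 10-ball urn
```

Any drawn number above 10 settles the question outright, since only the million-ball urn contains it.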
0:07:00 NB: Now, corresponding to the 50% prior probability in the urn case, we’re now supposed to use some empirical prior about the different things that could cause humanity to go extinct. So you have some views about the risk of nuclear wars and meteors or whatnot. And maybe, let’s suppose, there’s a 10% chance based on those normal considerations that humanity will go extinct within the next century, by which point there will have been about 200 billion people, and a 90% chance that we’ll survive for a long time and, over millions of years, there will be 200 trillion. And corresponding to the idea of reaching into the urn and pulling out ball number seven, in the case of the Doomsday argument, you are supposed to consider your own birth rank, your own place in the sequence of all humans who have been born. And that is roughly number 100 billion. That’s more or less how many humans have come before you.
0:08:06 NB: And so, just as in the urn case, where pulling out such a low number is supposed to increase the likelihood that there are only a few balls in the urn, so similarly here, finding that you were born so early is supposed to increase the likelihood that the total number of humans will be 200 billion rather than 200 trillion. It would kind of make you more typical. If your number were roughly 100 billion out of 200 billion, there’s no big surprise about that. Whereas it would be surprising and extraordinary to think you were in the very earliest small fraction of one percent of all humans that will ever have been born. So that’s the structure of the Doomsday argument. I think the critical part of it is this idea that you should reason as if you were a random sample from the set of all humans that will ever have existed.
0:09:02 SC: That’s right. Yeah.
0:09:03 NB: But if you do accept that, then the rest follows pretty straightforwardly.
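Filling in the numbers from the discussion above, the birth-rank update under the self-sampling assumption can be sketched like this (the 10%/90% prior and the population totals are the illustrative figures from the conversation, not real estimates):

```python
# Doomsday-style update: condition a 10%/90% empirical prior on a birth
# rank of roughly 100 billion, assuming (self-sampling) that each birth
# rank from 1 to N is equally likely under a hypothesis with N humans.
BIRTH_RANK = 100e9

hypotheses = {
    "doom_soon": {"prior": 0.1, "total_humans": 200e9},
    "doom_late": {"prior": 0.9, "total_humans": 200e12},
}

weights = {
    name: h["prior"] / h["total_humans"]  # prior * P(rank | N) = prior * 1/N
    for name, h in hypotheses.items()
    if BIRTH_RANK <= h["total_humans"]    # the rank must exist under the hypothesis
}
evidence = sum(weights.values())
posterior = {name: w / evidence for name, w in weights.items()}
print(posterior)  # doom_soon climbs from 0.10 to roughly 0.99
```

The jump from a 10% to a roughly 99% credence in early extinction is exactly the striking shift the rest of the conversation interrogates.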
0:09:08 SC: But it does seem, just at a glance… To be very honest, let me put my own cards on the table. I have some opinions about this kind of reasoning, but they’re pretty mild, [chuckle] I go back and forth and I keep changing my mind, so I’m interested to see where we go here. But clearly, the point of view of the skeptic here is going to be, “How in the world can you reach any conclusions about the far-future fate of humanity without leaving your armchair?” It seems like you’ve sneaked in some innocent-seeming assumptions to do a lot of heavy lifting.
0:09:41 NB: Indeed, yes. So this was, I think, the universal reaction when the Doomsday argument was initially presented. It originated, I think, with a physicist, Brandon Carter, and was then written up properly by a philosopher, John Leslie, in the ’90s, and put into this Bayesian framework. And most people thought, “There’s gotta be something wrong with it. There’s just no way you could cut through all the fog of uncertainty about different future risks of wars, and new technologies, and all that, and derive this very striking conclusion from this seemingly very meager piece of evidence.” But then when it came to trying to explain what was wrong with the Doomsday argument, the agreement stopped. Many people were very confident that they knew precisely what was wrong with it, but they all seemed to have a different explanation. And many of those critiques and objections could quite easily be seen to be flawed if we tried to apply the same kind of critique in other cases. It would produce absurd results. So it’s actually quite tricky: if the Doomsday argument is false, it’s non-trivial to say why it’s false.
0:11:06 SC: Right. Well, maybe you can explain a little bit more… And you can be judgy, you can give your own opinions here, about this step that says, “I should reason as if I am a typical, randomly selected observer from the history of all humankind.” I think that’s clearly the place where we should interrogate what’s going on, to make sure we haven’t sneaked something in.
0:11:29 NB: Yeah, so you might wonder, why on earth would we think that? It’s not as if there were some time-traveling stork that picked up a human at random and then dropped them into the year you were born. So why would one believe something like that? Well, it seems that a methodological principle similar to that is kind of necessary in order to make a lot of perfectly reasonable inferences work out. So in your field of cosmology, these days, what I call “Big World hypotheses” are quite widely embraced. These would be hypotheses according to which the world is so big and locally varied that you would expect all possible observations to be made by some observer somewhere.
0:12:21 SC: Yeah.
0:12:23 NB: If for no other reason, then just because of local thermal fluctuations. You have Boltzmann brains, and you have all kinds of local circumstances that, just by random accident, would have differed. So if you have an infinite universe, or, to bracket the different issues that come up when we talk about infinity, just an enormously but still finitely large universe, then all these different theories that we might have about cosmology and about the values of different critical constants would predict that whatever you observe would in fact be observed by somebody.
0:13:01 SC: Yeah, I’ll just say very quickly, just so everyone knows, this is an open question in cosmology. The possibility is absolutely on the table that the universe is infinite and there’s an infinite number of observers of all different kinds, and there’s also a possibility on the table that the universe is finite and there are not that many observers. We just don’t know right now. So it’s important to keep that in mind.
0:13:22 NB: Right. We do think that, by looking through telescopes or building big accelerators, we can get some useful evidence that potentially could have some bearing on these questions. But without introducing something that looks a little bit like this typicality assumption, which I call “the self-sampling assumption”, the idea that you should think of yourself as though you were randomly sampled from some reference class, it’s very hard to see how those observational predictions that you could then test would be produced. But if you have something like that, then you could say that…
0:14:00 NB: The vast majority of observers are likely to not be these weird, rare, deluded, freak observers who happen to see some extremely unusual local fluctuation; most would be typical. And then the theory that matches what we observe would be one that said that what we observe actually reflects the typical conditions throughout the universe. And you can then kind of see how you would be able to connect these big-world theories to observation in a way that seems commonsensical. Alternatively, you could try to argue for this self-sampling assumption by thinking about a simpler thought experiment instead. So imagine that you start with an empty world, you have 100 rooms, cubicles, and in each of these 100 cubicles you create one observer. And then on the outside, you paint maybe 90 of these rooms red and 10 blue. And now you find yourself having been created in this world, you’re told about this whole setup, and you have to guess: What color is the room you are in?
0:15:23 NB: At some point you can exit the room and find out the true answer. But it seems very plausible that in this case, you should set your credence in your room being red equal to the fraction of all the observers that are in red rooms. This would match the betting odds: if everybody bet this way, you would maximize your expected winnings. And you can consider limiting cases. Instead of 90% of the rooms being red, what if 99% were? In the limit, if all the rooms were red, you could just use logic to deduce that your room is red, and it seems that the probability should gradually approach one as you move to situations more and more similar to the case where all the rooms are red. So there are these different scientific applications that seem to require the self-sampling assumption, and also these thought experiments where it seems intuitively plausible that that’s the correct way of reasoning.
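The betting version of the cubicle experiment is easy to check directly; this sketch assumes the majority color is red, matching the limiting cases just described:

```python
# 100 cubicles, one observer each, n_red of them painted red; every
# observer bets "my room is red". The fraction of winning bets equals
# the credence the self-sampling assumption recommends.
def fraction_winning(n_red=90, n_total=100):
    rooms = ["red"] * n_red + ["blue"] * (n_total - n_red)
    wins = sum(1 for color in rooms if color == "red")  # each observer bets "red"
    return wins / n_total

for n_red in (90, 99, 100):
    print(n_red, fraction_winning(n_red))  # 0.9, 0.99, 1.0: approaching logical certainty
```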
0:16:26 SC: I guess I’m on board with the second one, the thought experiment with the red rooms. The worry there is that it’s not a good analogy to the real world. I’m not sure I quite got the first one. Yes, I can imagine cosmological scenarios where there are many, many observers. There seems to be a little bit of an extra leap to say that I should assume I’m typical within them, or that there are some properties with respect to which I’m randomly sampled… Could you maybe say a little bit more about this? Are you claiming that there is empirical evidence for this kind of reasoning? Or just that it is the only logical thing to do?
0:17:05 NB: I’m saying that it looks as though one needs this methodological principle in order to get the result that our ordinary practice of trying to test these different theories makes sense. So let’s take something more specific: the temperature of the cosmic microwave background at this stage of cosmic evolution. What is it, 2.7 Kelvin or something?
0:17:30 SC: Yeah. 2.725.
0:17:33 NB: Right. So that’s what we think, and we think we have evidence for that, in that people have measured it and so on. There’s various observational data. But now consider a different theory that says the cosmic background is actually 3.1 Kelvin. So we think, “Okay, well, that’s a possible theory, but we have strong evidence against it.” But now suppose that we combine both of these hypotheses about the temperature of the background with the claim that the universe is so big that, whichever of these hypotheses is true, there will be some observers who observe either value, because locally there would have been some statistical fluctuation. In a sufficiently large universe, there will be some little bubble where the temperature just happens to be a bit lower or higher. And so it looks like these two claims together imply that there will exist some observers who will measure a value of 2.7 Kelvin and there will exist some observers who will measure a value of 3.1.
0:18:46 NB: Both the hypothesis that, in general, the average value is 2.7, and the hypothesis that the average value is 3.1, will predict that there exist these observers who make either observation. So then the question is: if they both say that those observations were made, what can we conclude from the fact that we’ve made this observation of 2.7? It seems to be perfectly consistent with both of these hypotheses, and yet we think, intuitively, it obviously favors the hypothesis of 2.7.
0:19:16 SC: Yeah. Okay.
0:19:18 NB: And so there I’m saying that if you add in the methodological principle that we should think our observation is most likely what an average observer, a typical observer, a random observer would see, then the probabilistic inference comes through and you can conclude, “Yeah, it’s possible the temperature is really 3.1 and we were just this very rare observer who saw something very unusual. But most likely, with overwhelming probability, the average temperature is 2.7, as indeed we observed.” ’Cause that’s what almost all observers would see if the real temperature were actually 2.7.
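This typicality inference has the same Bayesian shape as the urn example. In the sketch below, `eps`, the fraction of freak observers sitting inside a fluctuation region, is an illustrative assumption, not a measured quantity:

```python
# Under each big-world hypothesis, almost all observers see the average
# temperature, and a tiny fraction eps see the "wrong" value thanks to a
# local fluctuation. Treating yourself as a typical observer turns that
# into likelihoods you can update on.
def posterior_avg_is_27(eps=1e-12, prior_27=0.5):
    like_27 = 1 - eps   # P(I see 2.7 | average is 2.7)
    like_31 = eps       # P(I see 2.7 | average is 3.1)
    evidence = prior_27 * like_27 + (1 - prior_27) * like_31
    return prior_27 * like_27 / evidence

print(posterior_avg_is_27())  # overwhelmingly close to 1
```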
0:19:56 SC: I guess there’s a tiny bit of daylight here between this way of casting it and what I think is the traditional cosmologist’s way of casting it. And to be honest, let me be very clear: as a cosmologist, I know that typical cosmologists are not philosophically very sophisticated. They don’t think about these things too hard, they just get the right answer and move on with their lives. You’re casting it as if the universe were very, very large and there were lots of observers and we were a typical observer in that ensemble. Whereas I think most physicists would just say, let’s imagine we’re at a typical place in the universe, regardless of how many observers there are, even if the universe is not that much bigger than what we can observe within our horizon. Do you think that’s an important difference? Or does the math basically work out to be the same in those two justifications?
0:20:44 NB: Well, I think there is an important difference in principle, though for various applications it might come out to more or less the same result, if you think that observers are sufficiently uniformly distributed. But if you have a different application where you’re trying to evaluate different hypotheses about, say, where observers are… Like if you think about the galaxy, and where there would likely be planets conducive to the evolution of intelligent life and so on, then it could start to be important that you focus not on spatio-temporal regions, but actually on counting observers. It could be more likely that you would find yourself within a certain small region of space-time which is dense, say, in planets likely to create intelligent life, than within a much larger region of space-time where observers would be more scarce.
0:21:58 SC: Okay, to get back to the Doomsday argument, ’cause I know that philosophers like to list all the possibilities, but I wanna at some point actually get an argument that one is correct or incorrect. There is another counter-argument, not just saying we’re not typical observers. It says: well, if your two scenarios are 200 billion people or 200 trillion people in the history of humanity, then because there are more observers in one of those scenarios, I am more likely to be in that universe. I should give that theory a boost in my prior probability just because there are more observers there. And that sort of cancels out the fact that I am early on. And therefore, even though I’m unlikely to be early in that big universe, the universe itself is also more likely, and therefore I can say nothing about which of these two universes I live in.
0:22:53 NB: Yeah, that is one of the most important possible responses to the Doomsday argument. The initial idea that I explained earlier, that you should think of yourself as a random sample from all the observers that exist in some reference class, I call the self-sampling assumption. And what you just mentioned is what I would call the self-indication assumption. It’s roughly the idea that the very fact that you find yourself existing, that you have been born into the world, gives you evidence that a lot of observers probably exist in this world. In some sense, it’s as if there would then have been more slots for you to have been born into.
0:23:42 SC: Right.
0:23:42 NB: And as you say, it does turn out that if you accept this self-indication assumption, then that exactly cancels out the probability shift that the Doomsday argument says we should make in favor of there being fewer observers. It’s like you first register the fact, “Oh, I exist.” That gives me increased evidence that probably there will be a lot of people existing, by the self-indication assumption. Then you notice, “Oh, I’m really early. That makes it more likely that there will be fewer observers in the future.” But those two shifts exactly cancel out. So it has that neat feature: you get rid of the Doomsday argument in one fell swoop. And that might in fact be the strongest argument for the self-indication assumption. It does have some counterintuitive implications of its own, though. So there is what I call the “presumptuous philosopher” thought experiment.
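The cancellation is exact, which a few lines of arithmetic confirm (same illustrative Doomsday figures as before):

```python
# Self-indication boosts each hypothesis in proportion to its observer
# count N; self-sampling then multiplies by the 1/N likelihood of any
# particular birth rank. The two factors of N cancel, returning the
# original empirical priors.
priors = {"doom_soon": 0.1, "doom_late": 0.9}
totals = {"doom_soon": 200e9, "doom_late": 200e12}

sia_boosted = {h: priors[h] * totals[h] for h in priors}         # "more slots to be born into"
weights = {h: sia_boosted[h] * (1 / totals[h]) for h in priors}  # birth-rank likelihood 1/N
evidence = sum(weights.values())
posterior = {h: w / evidence for h, w in weights.items()}
print(posterior)  # back to {'doom_soon': 0.1, 'doom_late': 0.9}
```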
0:24:47 SC: Mm-hmm. I love that phrase, by the way. I use it in many, many talks and writings, ’cause I really think it captures something that we should be worried about. So please explain what it is.
0:25:00 NB: Well, it’s the idea that it seems somewhat of an open question at the moment whether the universe is finite or infinite, or, if it’s finite, just how big it is. We know it’s very big, but is it very, very big? Or very, very, very big?
[laughter]
0:25:20 NB: That seems like something you can’t just sit and answer in your armchair. You have to actually build cosmological models, and measure the expansion speed, and try to evaluate inflationary cosmologies. But if you accept the self-indication assumption, it does seem like you would have overwhelmingly strong grounds for concluding that, out of two hypotheses, the one which postulates that there are many orders of magnitude more observers is true. So suppose the universe is finite, but according to one hypothesis there are a trillion trillion observers, and on a rival hypothesis there are a trillion trillion trillion observers. And let’s suppose the physicists say, “Well, this is interesting. The super-duper symmetry considerations show that one of these two possibilities is true. And now we just need to run this experiment and it will definitively tell us which of these is correct.” And suppose you just need $20,000 to build this very simple machine, a trivial thing in comparison to…
[laughter]
0:26:36 NB: And then some philosopher says, “No, no, no, it’s not worth wasting $20,000. I can just tell you what the answer is. Of course it’s the trillion-trillion-trillion-observer hypothesis that’s true. That’s a trillion times more likely than the other one.” And that comes directly out of the self-indication assumption. So the objection is that we don’t really accept that. It just seems crazy, about as crazy as the idea that you could rule out hypotheses where the human species will last for a very long time just by reflecting on your birth rank. Here, instead, you’re ruling out hypotheses where there are slightly fewer, but still a huge number of, people in the universe, just by considering the fact that you exist. It also seems a little bit like an overreach.
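The presumptuous philosopher’s back-of-the-envelope arithmetic can be made explicit (the observer counts are the illustrative ones from the story):

```python
# Self-indication weighting of a 50/50 experimental prior over two
# finite-universe hypotheses whose observer counts differ by a factor
# of a trillion.
N_SMALL = 1e24   # a trillion trillion observers
N_LARGE = 1e36   # a trillion trillion trillion observers
prior = 0.5      # what the super-duper symmetry considerations leave open

weight_small = prior * N_SMALL
weight_large = prior * N_LARGE
posterior_large = weight_large / (weight_small + weight_large)
print(posterior_large)  # about 1 - 1e-12: "no need to spend the $20,000"
```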
0:27:24 SC: So, good. Just to be super-duper clear, ’cause I know that when I was learning this, it all confused me very quickly. The two options on the table are: give theories prior probabilities by how elegant or reasonable they seem, and then within each theory assume you’re a typical observer. And the other is: assume you’re a typical observer, but boost the prior probabilities of those theories that have a lot of observers, because observers like you are more likely to appear there. And you’re making the argument that either one of these seems to give us leverage over the world that goes beyond what we should be able to get sitting in our armchair. So, what is your recommendation? How should we think about these questions?
0:28:13 NB: Well, I’m kind of reluctant to embrace either of these. What we found a lot in the early literature on this, when people were starting to discuss the Doomsday argument, is that they would usually just pick one of these alternatives, or rather, usually they would reinvent it, not realizing that other people had done so, and then they would either ignore the argument on the other side or be oblivious to it. And they would just feel very pleased. So they might invent the self-indication assumption and say, “Ha, I’ve refuted the Doomsday argument,” but then not try to address, or even reflect on, the fact that it has these other counterintuitive implications of its own. So I think it’s unclear what we should do. It certainly seems worth exploring whether there is some third alternative. So you could say that the self-indication assumption says, “Think of yourself as a random sample from all possible observers,” whereas the self-sampling assumption says, “Think of yourself as a random sample from all actual observers that exist.”
0:29:26 SC: Yeah.
0:29:27 NB: But maybe what you should do is say, “Think of yourself as a random sample from all actual observers within some reference class,” which might not be identical to the set of all observers. Maybe you don’t need to think of yourself as randomly sampled from all observers, including ones that will exist far into the future if we manage to survive, but from a somewhat narrower class of observers. So for example, if you think about these future humans… They would be very different from us. Maybe they will be post-human. They’ll certainly be in an epistemically very different situation from ours. They will know, among other things, that the human species survived the 21st century. So maybe they will be so different that we shouldn’t think of ourselves now as a random sample from some set that includes them. Maybe they’re just too different from us, just as we don’t think of ourselves as a random sample from the set of all physical objects that includes rocks and windows and trees…
0:30:31 NB: Then you would block the Doomsday argument. You’d say, “Well, we can’t rule out these futures where there are gonna be a lot of future observers, as long as we say they are different from us in such a way that they fall outside our reference class.” You could try to go down a path like that. In my doctoral dissertation back in the ’90s, where I developed a theory of anthropics, I explored the possibility of whether you could relativize the reference class, so that different observers should use different reference classes, and think of themselves as if they were random samples from different reference classes depending on which observer it was and which time it was. And it might then be possible to avoid both the counterintuitive implications like the Doomsday argument and ones like the presumptuous philosopher that come if you accept the self-indication assumption.
0:31:32 SC: Well, what I’m tempted to think, and I haven’t really completely nailed this down myself yet, is that maybe it’s just a mistake to think of ourselves as typical observers in some class that is much bigger than me. [chuckle] In other words, I know a lot of things about my non-typicality already. Most people are not theoretical physicists. There are plenty of obvious ways in which I’m not a typical observer. And maybe I should judge cosmological scenarios on the likelihood that observers exactly like me would arise, but not go beyond that at all, and therefore draw no conclusions on the basis of how many alien life forms or post-humans might exist.
0:32:16 NB: That, I think, is too narrow. So let’s go back to the example of the cosmic background, whether it’s 2.7 or 3.1 Kelvin. On both of these hypotheses, we imagined that in a big enough world there would be some observers who would see 2.7 when they run their measurements.
0:32:42 SC: Yeah.
0:32:43 NB: And there would be some that would see 3.1. But if you only included in your reference class observers who were exactly like you, in the same mental state with the same evidence, then that would only include ones that saw 2.7, since that’s what you are seeing, right?
0:33:00 SC: Yeah, yeah.
0:33:01 NB: So in that case, it would be true on both of these theories, the 2.7 theory and the 3.1 theory, that 100% of all the observers in that reference class would be seeing 2.7.
0:33:18 SC: Well, right, but I’m suggesting that we can judge theories on the basis of the likelihood that they predict any observers that would see exactly what I already see… In other words, it’s sort of the old-evidence-versus-new-evidence issue. I don’t want to forget that I already know I’m an observer who sees the CMB at 2.7 degrees. I can judge theories on the basis of whether there should be people like me in them, but I can’t say that those people are typical.
0:33:49 NB: Right. So in this case, both of these theories predict, that there would be people seeing 2.7. In fact predicted with…
0:33:58 SC: Oh, I’m sorry. Yes. Right. So you’re comparing… Maybe I misunderstood. There’s sort of a small universe where the universe is 2.7 everywhere and a large universe in which the CMB is usually 3.1, but in some places it’s 2.7. Is that what we are comparing?
0:34:16 NB: Yeah, or compare two large universes where the average temperature in one is 2.7 and the average temperature in the other is 3.1, but both are large enough that there would be pockets of different temperatures.
0:34:31 SC: Right yeah.
0:34:32 NB: So both of those big-world theories predict that there will be some observers seeing 2.7 and some seeing 3.1. Where they disagree is about what the average observer, or the vast majority of observers, will see.
0:34:43 SC: That’s right. So I think the bullet I would like to bite is, in those cases where both universes are big enough that even though the averages are different there… It is very likely that an observer just like me exists in both of them. I cannot judge between them. That’s what I… I would actually accept that conclusion. That seems to be the least presumptuous thing I can do.
0:35:04 NB: Yeah. Except, I think the universe probably is like that. I think that all of these different finite-sized brains and brain states that humans could occupy are instantiated. And I still believe that we gain some useful information about the layout of our universe from doing astronomy.
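Carroll’s suggestion, that a reference class of “observers exactly like me” cannot discriminate between two big-world theories, can be illustrated with a toy calculation. This is only a sketch: the region counts and per-region probabilities below are made-up numbers, not anything from the conversation.

```python
def p_observer_exactly_like_me(n_regions, p_per_region):
    # Probability that at least one of n independent regions contains
    # an observer in exactly my mental state (seeing 2.7 K).
    return 1 - (1 - p_per_region) ** n_regions

# Two big-world theories: in one, observers seeing 2.7 K are common;
# in the other, they are rare pockets. Both worlds are huge.
theory_avg_27 = p_observer_exactly_like_me(10**12, 1e-6)
theory_avg_31 = p_observer_exactly_like_me(10**12, 1e-9)

# Both probabilities come out essentially 1, so the datum "someone
# exactly like me exists" cannot distinguish the two theories.
print(theory_avg_27, theory_avg_31)
```

Both theories assign near-certainty to an observer like Carroll existing somewhere, which is exactly the bullet he says he is willing to bite.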
0:35:30 SC: Yeah, no, I think that’s true. Actually, let me steer the conversation in that direction a little bit, because in the philosophy literature, we have these discussions about the future of humanity and Boltzmann brains and stuff like that. But the down-to-earth working cosmologists, some of them do appeal to anthropic reasoning to explain, for example, the value of the cosmological constant or other parameters that we observe, the fine-tuning of the fine structure constant that allows for atoms, chemistry, and life. Do you think that this kind of reasoning… Do you think that the working scientists who use that kind of anthropic reasoning are on the right track?
0:36:10 NB: Yeah, well, you have to look at it application by application, but the general idea is that you might initially be puzzled by the apparent fine-tuning of our universe. That there are a number of different parameters and constants that seem to have values that permit intelligent life to exist, obviously, but also to be such that, had the values been very different, no observers could have existed. It would just have been a highly diluted hydrogen gas, or some other degenerate state. I think it’s right to initially be struck by that, as like, “Huh, that’s kind of weird.”
0:36:52 SC: Yeah.
0:36:52 NB: And also to recognize that one possible explanation of that is that there is an ensemble of universes with a wide distribution of different parameter values instantiated within that ensemble. And then apply anthropic reasoning to say that this ensemble theory nevertheless predicts that what we should observe is this apparently very fine-tuned universe where the fine-tuning is kind of apparent in that the ensemble as a whole, ideally would not need to be very fine-tuned. But nevertheless all observers would find themselves in a universe that was fine-tuned. And that would then constitute an explanation for the fine-tuning that we seem to see. I think that line of reasoning is basically sound.
0:37:42 SC: Do you think that we can go beyond that to actually say… To make a calculation and a prediction? The thing that Steven Weinberg, for example, tried to do back in the ’80s with the cosmological constant was to say, “Let’s imagine there is a smooth distribution of values of the cosmological constant, and the value affects how many galaxies are produced. Therefore, let’s ask what an average observer might see.” And he predicted more or less the right value. Are you on board with… By the right value, I mean the value that was empirically discovered 10 years later. So do you think that’s a kind of valid inference?
0:38:19 NB: Yeah, so I think that there is a piece of methodology that is needed in order to go from some theory, about what exists, what galaxies there are out there and what planets and what observers. To go from some kind of objective model of the world to some observational prediction about what you or I are likely to see. And this is where the anthropics comes in as the little piece of methodology that tries to bridge that gap. In a sense, it’s a methodology about how to reason about indexical information. That is information that has to do with where you are, what time it is, who you are. As opposed to the kind of objective structure about what exists. And so that kind of methodology can then be used in combination with different hypotheses about the objective structures out there to derive different predictions that we can test.
0:39:33 SC: Can we make… Sorry.
0:39:35 NB: No, go ahead.
0:39:36 SC: Can we make… Aside from the cosmological constant and cosmological things, can we use this kind of reasoning in your view to reason about the existence of intelligent civilizations elsewhere? The Fermi paradox kind of thing?
0:39:54 NB: Yeah, well, so I’ve always thought that the Fermi paradox is not very paradoxical. We don’t see any signs of intelligent life, that’s true. But I don’t know what… For there to be a paradox, there has to be some sort of argument for X and then some other argument for not-X that both seem persuasive. And then we are left with this conflict that we don’t know how to resolve. But here it just seems that there is an argument for one side, we haven’t seen any aliens, but I’m not sure what the argument would be that we should be surprised about that.
0:40:38 NB: We certainly know that there are a lot of planets, but that doesn’t mean there was a high likelihood that aliens would result. Because there are a lot of steps between having a planet and having life, let alone intelligent life. And for all that we know, those steps might be very improbable, or there might be one improbable step. So maybe just getting even the simplest replicators required an astronomical coincidence. The right amino acids, maybe hundreds of them, might just have to have bumped into each other in precisely the right way to create something that could get self-replication going. For all that we know in biochemistry, it’s perfectly possible that that could be such an improbable step. Now, you might then say, “Well, it’s something we should be very reluctant to do, to postulate something that improbable, because we exist, and… ”
0:41:36 SC: [chuckle] I was gonna say that, yeah, go ahead.
0:41:36 NB: If that step were that improbable, then our existence would seem to conflict with that story. So that’s where the anthropics would come in. Well, if there were only one planet, and we existed on that planet, that would be evidence against the idea that there was some extremely improbable step in evolution or history. But if there are gazillions of planets, and only the ones where intelligent life arises are in the end observed, then it might not be so surprising that we find ourselves on a planet where intelligent life arose, even if there are some extremely improbable steps in going from one planet to intelligent life. If there are enough planets, it’s like enough lottery tickets. It’s not surprising that you win the lottery, even if every ticket has only a one in a million chance of winning, if you bought 10 million lottery tickets.
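The lottery analogy can be checked directly. A minimal sketch using exactly the numbers Bostrom quotes (ten million tickets, one-in-a-million odds per ticket):

```python
# Chance of at least one win across many independent long-shot tickets.
p_per_ticket = 1e-6          # one in a million per ticket
n_tickets = 10_000_000       # ten million tickets bought

p_no_win = (1 - p_per_ticket) ** n_tickets
p_at_least_one_win = 1 - p_no_win
print(round(p_at_least_one_win, 5))  # ~0.99995: winning is near-certain
```

The same arithmetic is what makes observing life on our own planet unsurprising even if each planet’s individual odds are tiny, provided there are enough planets.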
0:42:27 SC: Right. Okay. I’m actually very sympathetic to that point of view. I think that people tend to say, “Well, there’s a lot of planets and how small could the probability of life forming be?” and then the answer could be, it could be really, really small…
0:42:42 NB: Very small.
0:42:43 SC: Yeah, it could be really small.
0:42:44 NB: But I think… The Drake equation must be one of the most overhyped equations of all, because…
[laughter]
0:42:49 NB: It gives this appearance of some rigorous scientific grasp that you can use to calculate how many aliens there are. But then there are some parameter values in there that we are not just uncertain about; the value might be 1, a 100% probability, or it might be 10 to the power of minus a thousand, and we just have no clue.
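Bostrom’s point can be made concrete with a toy Drake-style calculation. Every factor value below is an illustrative placeholder, not an estimate; the only point is that a single factor with orders-of-magnitude uncertainty swamps all the others.

```python
def drake_estimate(f_life):
    """Toy Drake-style product N = R* * fp * ne * fl * fi * fc * L.

    Every value here is an illustrative placeholder. f_life, the
    probability that life arises on a habitable planet, is the wildly
    uncertain factor Bostrom is pointing at.
    """
    star_rate = 10        # star formation rate per year
    f_planets = 0.5       # fraction of stars with planets
    n_habitable = 2       # habitable planets per such system
    f_intelligent = 0.1   # life -> intelligence
    f_communicating = 0.1 # intelligence -> detectable signals
    lifetime = 1e4        # years a civilization keeps signaling
    return (star_rate * f_planets * n_habitable * f_life *
            f_intelligent * f_communicating * lifetime)

# Same equation, varying only the factor we truly have no clue about:
print(drake_estimate(f_life=1.0))     # ~1000 civilizations
print(drake_estimate(f_life=1e-30))   # ~1e-27: effectively zero
```

With every other factor fixed, the answer swings from “a galaxy full of civilizations” to “effectively none,” which is why the equation offers little rigor without a handle on its most uncertain terms.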
0:43:12 SC: But you seem… And I completely, 100% agree with that, but you do seem more sympathetic to the Doomsday-argument kind of reasoning than to the presumptuous philosopher that follows from the self-indication assumption, the boosting that we would give to theories with lots of observers. Is there some combination of Doomsday-like arguments and the empirical observation that there are no other highly technologically advanced aliens out there that lets us conclude, “Wow, we’re probably doomed, we’re probably gonna destroy ourselves within a few generations”?
0:43:46 NB: Well, it gets complicated. Just to be clear, I think there are aliens out there, I just think they are far away. So if the universe is infinite then certainly there would be aliens out there, but maybe not within our light-cone. Yeah, the Doomsday argument is counterintuitive and surprising, but maybe you could sort of persuade yourself to accept it. There is a thought experiment that has the same structure as the Doomsday argument, but maybe takes the counterintuitiveness one level up. So consider what I call the Adam and Eve thought experiment. Imagine that the world was created and that there were initially two people, Adam and Eve. And then whether there were gonna be additional people, whether there was gonna be a whole human race coming into existence at a later time, maybe depends on the choices that Adam and Eve make. And so let’s say that they have a… They might be tempted by carnal desire.
0:45:07 SC: They were, we know from the books that they were. So yeah.
0:45:10 NB: But let’s suppose that they thought, “If we do create a whole race of humans, that would be very bad. We were not supposed to do that. So we definitely would not wanna create billions of humans.” They might know that that would be the inevitable result if Eve has a child as a result of their sin. But they’re not sure. Obviously, they could carnally embrace and Eve might not get pregnant, right? Let’s suppose it’s 1 in 10 times that she would bear a child as a result, based on the normal facts about the human reproductive system. So they say, “Well, it’s not worth risking a 10% chance of creating this disaster. We are not gonna do it.” But now, let’s suppose there’s this snake that slithers up and whispers, “Well, okay. You’re either gonna become pregnant or not if you carnally embrace, but really think about what the chances are. If you do become pregnant, then you would be the first out of billions and billions of humans, an extraordinarily unusual position in the distribution of all observers. And by the self-sampling assumption, the probability that you would be among the first, if there were gonna be billions of humans, would be what? Well, one in several billion.”
0:46:41 SC: Yeah.
0:46:41 NB: Extremely improbable. So when you conditionalize on the fact that you were among the first three, you can basically disregard that hypothesis. And conversely, if she doesn’t get pregnant, it would be completely normal; you would be one of just two or three people, probability one.
0:47:00 NB: So updating on this, they could become extremely confident that this act would not produce what it seems, in theory, they should think has about a 10% chance of resulting. That seems counterintuitive. You could go further. Let’s suppose that Adam is a bit lazy. He doesn’t wanna go out hunting anything. And he thinks it would be really convenient if a wounded deer just happened to limp by the cave in the next 10 minutes… You know, save him a lot of effort. So then it seems he could form the firm intention that if a wounded deer does not appear, they will produce offspring, and that will then go on to create billions of people. And then suddenly, he would have strong evidence to be fully convinced that a wounded deer would appear. And it seems kind of hard to think that that would be the rational thing for him to believe. So I think those are maybe even more counterintuitive consequences of accepting that self-sampling assumption.
0:48:12 SC: Okay.
0:48:12 NB: More than [0:48:14] ____ reference class.
0:48:14 SC: And therefore we should not accept it?
0:48:18 NB: Well, it certainly would count against accepting it, right?
0:48:22 SC: Yeah.
0:48:23 NB: And then you have to look at what the alternatives are. If you don’t accept it, what do you do then? Do you accept the self-indication assumption? Well, then: presumptuous philosopher. Do you instead go down the path of relativizing the reference class, like I was alluding to earlier? Maybe. Although that also has its own possible problems. In general, I think it’s an area where there is still some murkiness and unclarity.
0:48:54 SC: Okay.
0:48:55 NB: I think it goes quite deep, these methodological questions. And we don’t get… It might be that one of these answers is correct, but I’m not sure we have understood enough yet to be justifiably very confident in any of them.
0:49:11 SC: Okay, that makes perfect sense. So in other words, for… Specifically for the Doomsday argument, it’s something we should think about, and maybe be worried about, and maybe found institutes to try to avoid Doomsday. But it’s not like a…
0:49:26 NB: Yeah. Good idea, independently. Yeah.
0:49:28 SC: Yeah, [chuckle] but you don’t think that we understand it well enough to just say, “Yes. That’s the correct conclusion”?
0:49:35 NB: That’s right.
0:49:37 SC: Okay. Very good. But I can’t let you get away without saying we can extend this idea of self-sampling, of typicality among the set of all observers. You have famously said, “Well, let’s include in the set of all observers that we might be typical among, observers that are being simulated by some higher-level intelligence and live their whole lives out in a computer.” So that leads us to the simulation argument. Is that right?
0:50:05 NB: Yeah.
0:50:06 SC: Why don’t you tell us what… Let’s imagine that there are people… It’s unlikely, but that there are people in the audience who don’t know what the simulation argument is.
0:50:18 NB: Well, the simulation argument tries to show that one of three possibilities obtains. And one of those is that there is a very strong convergence where virtually all civilizations at our current state of technological development go extinct before they reach technological maturity. So that’s one possibility, something that could be true. A second possibility is that there is a very strong convergence amongst all technologically mature civilizations, in that they all lose interest in creating computer simulations with conscious people, ancestor-simulations, if you want. And the third possibility is that we are almost certainly living in a computer simulation.
0:51:17 NB: So the argument involves some simple probability theory and stuff, but the basic idea is very possible to grasp just intuitively. Which is, suppose that the first possibility does not obtain, that at least some civilizations at our current stage eventually reach technological maturity, even if it’s just one in a thousand. And suppose the second possibility also does not obtain, but at least some reasonable fraction of those who do become technologically mature are still interested in using some non-trivial fraction of their resources to create ancestor-simulations. You can then show that there would be many, many more people like us living in simulations than would have lived in original history.
0:52:07 NB: Just because if you estimate the amount of compute power that a technologically mature civilization would have, and you compare that to estimates of the cost of creating simulations of conscious beings like humans, you just see that even by devoting a tiny fraction of 1% of their compute power for just one minute, they could create thousands and thousands of [0:52:28] ____ full of human history. And so if you reject the first two possibilities, you are then forced to conclude that the vast majority of people with our experiences are simulated. And then I claim, conditional on that, we should think we are probably one of the simulated ones.
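The counting step can be sketched numerically. Apart from the “one in a thousand” survival fraction mentioned above, every number below is an illustrative assumption, chosen deliberately small to show that even pessimistic fractions leave simulated observers in the majority.

```python
# Counting argument: simulated vs. non-simulated human-like histories.
n_civilizations = 1_000_000   # civilizations at our stage (assumed)
f_reach_maturity = 0.001      # "even if it's just one in a thousand"
f_run_sims = 0.01             # fraction of mature civs that bother (assumed)
sims_per_civ = 1_000_000      # ancestor-simulations each one runs (assumed)

real = n_civilizations
simulated = n_civilizations * f_reach_maturity * f_run_sims * sims_per_civ

frac_simulated = simulated / (simulated + real)
print(round(frac_simulated, 3))  # ~0.909: most observers are simulated
```

Because the number of simulations per mature civilization can be astronomically large, the product overwhelms the count of original histories unless one of the first two disjuncts holds, which is the structure of the argument.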
0:52:46 SC: Mm-hmm.
0:52:47 NB: And the anthropic stuff comes in only in this third, last step in going from, “Most people with our kinds of experiences are simulated,” to therefore, “We are probably simulated.”
0:53:01 SC: Right. And you buy this. This is one that you’re willing to stand behind, this argument?
0:53:07 NB: Yeah.
0:53:08 SC: Right. And so because of that, let me just push back on it a little bit. I’m very open-minded about this, I’m agnostic. I really don’t know. Can you say a little bit more about how we know how much compute power it takes to effectively simulate a reasonable consciousness? I mean, I can at least imagine that once we understand better what that would take, once we understand the efficiency of the brain, etcetera, we could argue that to truly simulate human consciousness requires almost as many atoms as a brain has, to order of magnitude. And you’re clearly assuming it takes a lot less.
0:53:52 NB: Yeah, a lot less. So we have to, I guess, first, be clear on what the success criterion is for having successfully created one of these ancestor-simulations. So it’s not that you would create a simulation that behaved exactly like the original, in as much as every microscopic behavior would be captured perfectly and you could use it as a perfectly reliable predictive model. Now, there might be all kinds of random stochastic events in human brains that, by a butterfly effect, eventually have big implications on our behavior.
0:54:31 SC: Yeah.
0:54:31 NB: Like maybe a single elementary particle moving one Planck length might, a day later, make you say something different than you would otherwise have said. But it’s rather that it’s close enough that you couldn’t tell from the inside whether you were the simulated one or the non-simulated one.
0:54:52 SC: Right.
0:54:54 NB: And so to do that, I think it would be sufficient to capture human mental phenomena at a computational level, more or less something that would have maybe neurons and synapses. And then the properties of a synapse might be represented by some reasonably sized vector, like a thousand values to represent a synapse. But certainly I don’t think it would go down to having to keep track of where every atom is at any given point in time, or anything. I think that would be vast overkill.
0:55:38 SC: Okay.
0:55:39 NB: There are different ways you could come at it as well. You could look, for example, at… Yeah, if you have estimates of the human brain’s processing power, estimates of, say, our sensory perception, like how high-resolution a screen has to be for us not to detect pixels… Another line of argument for why this could be feasible is that, in a sense, you would have to create not just the simulated brains, right, but also some sort of environment for them to experience. And so you might think that would be hard. But then you think, even our own humble little three-pound organic brains manage every night to create a kind of virtual reality simulation that sometimes seems pretty realistic to the person dreaming. And if they can do it, like, without training, then presumably a post-human civilization with planetary-sized nanocomputers would be able to do this without breaking a sweat.
0:56:53 SC: Yeah, no, I’m on board with the environment, I think you can trick people into thinking their environment is realistic with rather low amounts of sensory input, but it’s the brain and the connectome I’m less sure about. I mean, we have 85 billion neurons and they’re connected in complicated ways, and so… I guess I’m just a little wary when people… I think people leap a little bit too easily into imagining how easy it would be to simulate a human consciousness. I mean, one of the… I didn’t wanna bring this up, but one of the other ways that the argument could fail is if it’s just impossible to simulate human consciousness on a computer. I think that we’re both on the side that it shouldn’t be, but there are definitely people who would disagree, right?
0:57:35 NB: There are, yeah. So the simulation argument assumes what I call the substrate-independence thesis.
0:57:47 SC: Yeah.
0:57:48 NB: Which a lot of people accept, I think. In philosophy of mind and amongst computer scientists and physicists, I think the majority opinion would be that what’s necessary for conscious phenomena to arise is not that some specific material is being used, like carbon atoms, but rather that there is a certain structure of computation that has to be performed.
0:58:13 SC: Yeah.
0:58:15 NB: So the paper in which I presented the simulation argument just makes that assumption.
0:58:21 SC: Sure, okay.
0:58:22 NB: And then you can look for arguments for that elsewhere in the literature.
0:58:26 SC: Yeah, so I do still worry that simulating consciousness is harder than we think, even though it should be, in principle, possible. But the other worry I have is that if I take seriously some version of the self-sampling assumption, I just say some version because I’m unclear on what version it would be, not because it is intrinsically unclear, isn’t there a prediction then that you would make, that since it’s easier to do low-resolution simulations than higher-resolution ones, most observers should find themselves living in the lowest possible resolution simulations, the clunkiest versions of reality?
0:59:06 NB: Well, there are two sides to the equation. So there is the cost of a simulation, and other things equal, yes, the lower the cost of running a particular simulation, the more of those simulations you’d expect to be run. But the other side is the benefit. The people creating the simulations might have different reasons for creating them, and it might be that some of those reasons, maybe the most common reasons, would require something more than the minimal level of resolution. And then you would have most observers of our kind living in higher than minimal-resolution simulations.
0:59:49 SC: Yeah, maybe, I don’t know. I think that when we start doing these simulations, we’ll start doing them at pretty low resolution. It becomes all fuzzy to me once we think of the practicalities of actually doing this. So that’s why I am agnostic about it. But you would go so far as to say you think that we probably are in a simulation right now?
1:00:11 NB: I tend to punt on that question.
[pregnant pause]
[chuckle]
1:00:20 NB: It’s a trick that journalists sometimes have. By just saying nothing for a while, you usually get the subject to kind of say more than they wanted to say.
[chuckle]
1:00:28 NB: But I’m not falling for it.
1:00:32 SC: Well, okay. I mean it is… But… That’s fine, I will let you punt on it, but let’s get on the record the idea that we can’t punt forever, right? The idea behind these arguments is that there’s supposed to be a right or wrong version of them, and I am very happy to, sort of, punt provisionally. Say, “Well, we just don’t know yet,” but presumably, it’s knowable? I mean, how should we get better at this? How should we figure it out?
1:01:06 NB: Yeah, well. So I believe in the simulation argument, in the disjunction between these three hypotheses. Then the question arises, “How should we apportion our credence between these three alternatives?” And I mean, more than one of them could be true. And as a first cut, it seems we have quite a lot of uncertainty about these matters in general. So probably each one of them should have some non-trivial amount of probability. Beyond that though… I mean, that doesn’t imply they should each have exactly a third probability, or you might just think one of them deserves the lion’s share. That then becomes a less clear-cut issue, where the original simulation argument is silent and you need to bring in some additional evidence, or arguments, or speculations. Which, I think, yeah, one definitely wants to do. But then one takes a step beyond the original simulation argument.
1:02:13 SC: I guess, let me put it this way, are there predictions you would make on the basis of the hypothesis that we live in a simulation? Are there things that we should expect to see about reality, if that were true?
1:02:26 NB: Well, so if we conditionalize on the simulation hypothesis, that we are in a simulation, then does that have any observational consequences, any predictions following from that? And I think yes, but they are kind of probabilistic in nature. So we start with… I think there are certain possible observations that would be extremely low-probability otherwise, that at least become conceivable if we are in a simulation. Take something trivial: if you are in a simulation, you could imagine a window popping up in front of you at some point saying, “You’re in a simulation. Click here for more information.” [chuckle] That would be extremely strong evidence for the simulation hypothesis.
1:03:15 SC: Yeah.
1:03:17 NB: Other things, like say an afterlife, might seem more of a stretch if we are in a naturalistic world, not simulated. In a simulation, it seems like a perfectly natural thing that may or may not happen, depending on just how the simulation is set up and what the simulators have in mind. There would be no impediment to running the same mind repeatedly in different simulations or environments. Another type of implication would be via the simulation argument itself. So if the simulation argument is right that at least one of these three possibilities is true, then if we get evidence that the simulation hypothesis is true, that might then lower the probability of the other two hypotheses. They…
1:04:16 SC: Right.
1:04:16 NB: We would have already satisfied the requirement that at least one of them is true. And so the others might still be true, but there would be less reason to believe them. So the probability of those might go down. Then you might predict as well that further insight into, I don’t know, neuroscience, and hardware design, and nanotech, would tend not to reveal information suggesting that simulations are infeasible to build. That doesn’t follow with logical necessity, because you could imagine that the simulation would have a different physics than the universe in which the computer running the simulation exists. Nevertheless, I think that would be, other things equal, an implication in that direction. And there are a whole host of other, maybe more fanciful, possibilities as well that just seem kind of hard to reconcile with living in basement-level physical reality. In a simulation, you could imagine the simulators acting more or less like gods, able to intervene in and shape things in ways that might not make sense if you were thinking of the universe as just this blind equation of particles evolving according to some simple differential equations. But if you thought of there being, interacting with that system, some kind of intelligent, purposeful designer, whatever role they are playing, then it might not be so unlikely that there would be phenomena they might introduce into the world that would otherwise be kind of weird.
1:06:16 SC: Well, could we talk to them? Could we attract their attention somehow? You know, I mean, I figure typically when we do simulations of large-scale structure in the universe, we don’t necessarily pay attention to what every single particle in the mesh is doing, and likewise, they probably don’t pay attention to every single planet among the billions and billions in the observable universe. So would it be wise or possible to raise our hand a little bit and say, “Hey simulators, we’ve reached self-consciousness. Why don’t you say hi?”
1:06:46 NB: Yeah. I mean, the cost of keeping track of what we are doing would be, I think, small compared to the cost of running the simulation in the first place. So I think in a wide range of scenarios, they would easily be able to monitor and see whatever most significant things were coming out of human civilization. Possibly they might keep track of every thought. Then it depends on the purpose of the simulation, right? Like whether this information would be relevant for what they were doing or wishing for.
1:07:19 SC: Yeah. [chuckle] Well, it’s good to think about it. Whether or not I believe it, I do think that it is an option we should keep on the table, and maybe… I don’t know if it affects how we act in the world, but it definitely is something that cosmological thinkers should have as one of the things on the table. And it leads to… I think the last topic I wanna get to just very quickly is, you’ve been talking recently about artificial intelligence, and certainly I would imagine that if you grant that we could in principle simulate pretty convincingly a human or human-like intelligence, then why not have things that are similar, but different? Completely artificial intelligences, maybe ones that are much smarter than us. And so you wrote the provocatively titled book, Superintelligence. What should we… ‘Cause we’ve had some talks on the podcast already with people like Stuart Russell and Max Tegmark, but what should the person on the street keep in mind about the prospects for truly superintelligent AI?
1:08:30 NB: Yeah, I think it’s coming, unless we manage to destroy ourselves before then by some other means, which unfortunately cannot be completely ruled out. But since the book came out, I think progress in AI has been quite impressive, and things seem to be coming together. What the book really tries to do, though, is not so much describe the current state of play in AI or predict the timelines, there’s one or two chapters on that in the beginning, but the book really focuses on the dynamics that would arise if and when you do attain something comparable to a human level of general intelligence in machines. And I argue that you would probably then, fairly shortly thereafter, have superintelligence, things that radically outperform us across all cognitive domains. And if you think about the implications of having machine superintelligence, it really would be the last invention we’d ever need to make, inasmuch as the superintelligence would then be much better at inventing, and you would get a kind of telescoping of the future.
1:09:53 NB: So all of those science fiction-like technologies that humanity might produce if we had 20,000 years to work on the problem, maybe we’d have perfectly realistic virtual realities, colonies on Mars, cures for aging, and all kinds of other things that are consistent with the laws of physics, just very hard to do. If you had science and technology being developed on superintelligent timescales, then all of those things might happen quickly. So what otherwise might seem like a far future could happen quite shortly after you have superintelligence, and all in all, that seems to make this maybe the most important transition in all of human history.
1:10:44 NB: Such that if you think there’s even some reasonable probability this may happen in our lifetime, let’s say, then that should make it a very high priority to better understand it, and in particular to see whether there are things we could do to increase the chances that things will go well in this transition to the machine intelligence era. And so a lot of the work that I and other researchers here at the institute have been doing has focused on AI: working on some technical issues related to AI alignment, how to design algorithms that would make it possible to get arbitrarily smart systems to actually understand human goals and values and play some beneficial role. And also at the governance level, thinking about how the world might, assuming we solve the alignment problem, increase the chances that this powerful technology is used for some beneficial purpose, rather than to wage war against one another or break one another. And a bunch of ethical questions as well, that arise when we start to think about the possibility of advanced digital minds.
1:12:03 SC: We had an interesting conversation on the podcast with John Danaher about automation and what it means for employment going forward, and his argument was that robots will basically take over essentially all of our jobs and that’s a good thing. Do you think that superintelligent AI will help us solve the problem of needing to work for a living, is there sort of an economic and social structure angle to this kind of stuff?
1:12:32 NB: Yeah, yeah. That would happen, certainly, for all the jobs that are kind of functionally defined. So if you think about the job of being an Olympic athlete or something like that, so you’re Usain Bolt and you can run faster than everybody else and you get paid for doing that. It doesn’t directly follow that he would lose his job, even if we could create a bipedal machine that can run faster, because in a sense the task is intrinsically defined in terms of a human doing certain things.
1:13:18 NB: We might just have a basic preference for certain things being done by human. Just as some people pay extra for some product to be made by hand rather than by machine or be made by indigenous people or some favorite group, but setting aside those types of jobs then the others, yeah, I think once you have artificial general intelligence at a broadly human level, you could automate a whole bunch of them and the rest… When you also sort out the the robotics part of it, which I think, yeah, you’d be able to do most of the things that humans need their bodies to do fine motoric control and stuff with… You see, I think the AI part is kind of the key part in robotics as well…
1:14:11 SC: Yeah.
1:14:11 NB: And you could get by with a fairly clunky robotic body if you had enough intelligence to operate it.
1:14:20 SC: Well, yeah, I mean okay, there’s clearly many, many things we could talk about here, but let me just… I have a final question, fairly open-ended, so you can choose what direction you wanna go in, but you are the director of The Future of Humanity Institute, so I have to ask you about what you actually would predict for the future of humanity. What I would like to say is, what do you think things are gonna be like 50 years from now? But I’ll let you choose whatever timescale that you’re most comfortable talking about.
1:14:47 NB: Yeah, so we often try to, I think, separate out the timescale questions from the other questions. So some of the things one might be thinking about with respect to the future can be explored without making specific assumptions about when they will happen. You could, for example, say, if and when… You might say that there’s this set of technological capacities; they look like they are physically possible, and there’s some trajectory that eventually will lead there. We’re not sure how long it will take. But we might be able to say that if and when we do get those capabilities, here is a bunch of things they would enable us to do, and one might then think about how that would play out strategically, how it will interact with other things.
1:15:40 SC: Yeah, okay.
1:15:42 NB: So I find it, like, if you take some intermediate timescale, like a few decades, I find it very hard to predict what the world would be like then, because there are certain radically transformative technological developments that I think will happen. I don’t think that they will happen next year; I do think they will happen within the next 100 years. But if you’re asking me about 20 years, I’m kind of… [chuckle] I’m very unsure whether, say, the AI revolution will have happened by then or not. So I would be in an epistemic superposition regarding what the world would look like then. But that doesn’t mean I have no opinions; it just means that they kind of bifurcate into two broad classes of scenarios.
1:16:23 SC: Well, I think that’s fair. I mean I think let’s put the timescale aside, are there… What are the sort of conceptual changes you would most want to have people be appreciative of when they think about what the future will bring?
1:16:40 NB: Well, I guess the most important might be the [1:16:44] ____ one, but it is a topic about which it’s possible to think hard and do better or worse. I think traditionally the future has been more of a… It’s been a free-for-all, like you sort of feel… I think people feel they can just make stuff up. You can relax; it’s like the land of fantasy and fancy, and it’s almost like a projection screen where we can display our hopes and fears and tell some morality tale. But actually trying to get it right, I think, just hasn’t been the driver of much thinking until more recently.
1:17:26 NB: And now, over the last couple of decades, there has actually started to be developed a set of concepts that enable us to structure our thinking about the future, I think, in a much deeper and more insightful way. We don’t have all the pieces of it, but we have some important pieces. We’ve actually covered a few of them in our conversation. I think the simulation argument is one of these pieces, a clue; it doesn’t tell us exactly what will happen, but it narrows down the range of possibilities to three, if you accept it. There’s the Doomsday argument, which may or may not be sound, but if it were sound, certainly, that would be an important clue.
1:18:10 NB: We’ve talked about machine intelligence. If you think that that is going to happen, then that looks like a pivot point, and you could then broadly divide humanity’s future into pre- and post-superintelligence, and most of what happens pre-superintelligence might mainly be important insofar as it impacts how this transition to the machine intelligence era goes. But certainly, I’m over-simplifying here, but if you accepted something even vaguely like that, it would radically simplify the task of thinking about the future, because now, instead of almost anything possibly being really important and relevant to think about, it’s a much smaller set of developments that really could be pivotal in this sense. The concept of an existential risk, I think, is another one of these that helps us… It’s like a lens that brings out certain structural elements of the human condition and its future.
1:19:11 NB: Questions about whether there are extraterrestrials and stuff like that could be relevant as well. And so we’ve already covered a few of these, and there’s a bunch of other concepts and ideas and arguments like that, which together make it now almost the case, I think, that the hard thing is to conceive of even one coherent future that satisfies all of these constraints, or one strategic picture that tells us what we should do, that meets all of these criteria.
1:19:40 NB: So it’s not as if there’s this space where you could just make anything up and the difficult thing is finding some way to choose between them; now it’s more that there are so many constraints that it’s hard even to figure out one thing that fits them all, which I think is a big change compared to, say, futurism in, I don’t know, the ’70s and ’80s.
1:20:01 SC: But I think that the implicit message in what you’re saying is that to best prepare for the future, people should listen to the Mindscape Podcast, ’cause we’ve talked about many of these issues. [chuckle]
1:20:09 NB: It’s a good start, a good start, yes.
1:20:11 SC: It’s a very good start. Alright, Nick Bostrom, thanks so much for being on the podcast.
1:20:15 NB: Thank you, thank you Sean.
[music][/accordion-item][/accordion]
Nooooooo! I came to look for a word I didn’t understand (English is my second language) and the answer is:
“Well, I guess the most important might be the [1:16:44] ____ one”
NOOOOOOOOOOO! Pls pls whoever got it pls let me know what he said! 🙂
One thought that occurred to me when listening to this podcast is… “Who’s to say we are experiencing a high-resolution simulation?”.
Situation one… What we experience is “true,” but a simplified version. The real “reality” is much more complex and detailed than the one we experience. They couldn’t duplicate that with their hardware, so they had to settle for four space-time dimensions. (Maybe they have 100.)
Situation two… The computer simulation controls our thoughts, and simply instructs our brains to accept the lower resolution universe as a high-resolution one. Similar to how in a dream you accept reality as given, and things seem real, only later when you look back do you realize it made no sense.
Situation three… Our simulated universe is compressed like a zip file. Maybe the reason we have unknown superpositions that only reveal themselves in one state or another is a result of the compression algorithm throwing away information. Why keep all the quantum state information of every particle in memory when no one is looking?
Hernan Coronel, If I’m not mistaken I think he said “meta level”.
I got “metalevel”.
https://www.lexico.com/definition/metalevel
I have heard the “Doomsday Argument” before and think the “balls in an urn” analogy has a serious flaw, if you think time is real. That is to say, that time actually flows, that there is a difference between past, present, and future. If you believe this, then a better analogy would be: you have two urns in front of you, both with 10 numbered balls. One urn will be filled with an additional 10 balls and the other with an additional million balls. You pick a ball from one of the urns and it’s a seven, like in philosopher Nick Bostrom’s analogy. What have you learned about the urn you selected from? Nothing. It’s still 50/50 whether it’s the 20-ball or the million-ball urn.
The only way the “Doomsday Argument” is valid in showing the human race has a relatively short lifespan ahead of it is if time is more like a spatial dimension and everything that will ever happen already has and our perception of time flowing is an illusion.
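For concreteness, the arithmetic behind both versions of the urn example can be checked directly with Bayes’ theorem. The sketch below uses the numbers given above (a 50/50 prior over the two urns, 10 versus 1,000,000 balls in Bostrom’s original version); it is only an illustration of the probability calculation, not anything from the episode itself.

```python
# Bostrom's original urn example: two urns, equally likely a priori,
# one already holding 10 numbered balls and one already holding 1,000,000.
# You draw a ball at random and it is #7.
prior = 0.5                     # 50/50 prior over the two urns
like_small = 1 / 10             # P(draw #7 | 10-ball urn)
like_large = 1 / 1_000_000      # P(draw #7 | million-ball urn)

# Bayes' theorem: posterior odds are proportional to prior * likelihood.
post_small = (prior * like_small) / (prior * like_small + prior * like_large)
print(f"P(10-ball urn | drew #7) = {post_small:.6f}")  # ~0.99999

# The commenter's variant: both urns hold balls 1-10 *now*, and the extra
# balls are only added later. Drawing a #7 today is then equally likely
# under either hypothesis, so the posterior equals the 50/50 prior.
like_small_now = 1 / 10
like_large_now = 1 / 10
post_now = (prior * like_small_now) / (
    prior * like_small_now + prior * like_large_now
)
print(f"Commenter's variant: P = {post_now}")  # 0.5
```

So the two setups really do give different answers: drawing a low-numbered ball from an already-filled urn is strong evidence for the small urn, while drawing it before the urns diverge tells you nothing, which is exactly the distinction the comment is pressing on.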
As far as the Simulation Hypothesis, I think you need one more proposition, call it the zero-ith. And I think this one is fairly likely.
0. “The number of human-level civilizations in the universe is very small and maybe only one”
So many people argue that there must be a lot of advanced intelligence in the galaxy, or at least in the universe. I often see values for the Drake Equation variable fi (the fraction of life-bearing planets, fl, where intelligence evolves) of 0.1-1.0. I think this is ridiculous. While simple, prokaryotic life developing on a planet seems probable, advanced intelligence is more likely 1 in a trillion or worse.
Here are a few conditions for advanced intelligence to develop:
1) Development of eukaryotic cells or similar, to allow large, efficient, complex cells.
2) Development of multicellular animals.
3) Development of land animals. Dolphins and octopuses are intelligent, but with no fire there is no chemistry, no metallurgy.
4) Right kind of planet for stable evolution of life: a magnetic field to keep the atmosphere from being stripped off by the solar wind, for example.
5) Right size of planet and conditions for significant oceans and some land.
6) Stable planetary orbit to maintain a relatively stable environment.
7) Relatively stable planetary tilt. Is a large moon necessary to keep the planetary tilt stable enough to limit huge global temperature swings?
8) No major extinction events that kill off the intelligent species or its progenitors.
9) No supernovae nearby, so stars near the Galactic Center are out as potential hosts.
10) No really big asteroids during critical stages of development.
11) No sufficiently large supervolcanoes during critical stages of development. The Toba eruption almost got us.
12) Plate tectonics or similar to recycle materials and make metals available for use.
13) Right kind of animal to develop advanced intelligence. Some dinosaurs were smart, but no advanced intelligence developed in almost 200 million years of evolution. Some birds are highly intelligent, but even without humans, would they ever develop advanced intelligence? It seems there needs to be a good evolutionary reason for the expense of a really large brain.
If each one of these (and there are many more conditions) is 1 in 10, then advanced intelligence is one in ten trillion, and the zeroth postulate above seems likely.
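Taking the comment’s illustrative numbers at face value, the estimate is just a product of assumed-independent probabilities. A minimal check (the 13-condition count and the 1-in-10 odds per condition are the comment’s assumptions, not measured values):

```python
# Back-of-envelope from the list above: 13 independent conditions,
# each assumed to have a 1-in-10 chance of being met on a given planet.
n_conditions = 13
p_each = 0.1
p_advanced = p_each ** n_conditions  # 0.1^13 = 1e-13

print(f"P(advanced intelligence) ~ {p_advanced:.1e}")  # ~1.0e-13
```

That is, about one in ten trillion, matching the comment’s conclusion; of course the result is entirely driven by the assumed independence and the 1-in-10 figure for each condition.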
Nice discussion! I personally feel these arguments are more about epistemology than about reality itself; they say more about themselves and their own applicability than about the reality they are trying to explain. At the beginning of the interview there was a discussion of whether we should take a Bayesian or a frequentist standpoint on reality. Only a Bayesian approach can be made in this circumstance (there is no possible sample), and in this framework the question of which priors to use is fundamental to making any sense of the argument. This is where the urn analogy breaks down for me: the urns have a well-known distribution (50/50), but what is the probability distribution of the longevity and development of possible civilizations? Uniform, scale-free, fat-tailed, Gaussian? Any reasonable assumption will produce completely different results.
I am not sure I understand/agree with the argument given around the 41:30-42:30 mark. Assume that it is really very unlikely that life develops on our Earth, and due to slightly different boundary conditions it does not develop. Then intelligent civilizations might still develop with high probability on some planets in the universe. As long as we are not one of those civilizations, we would still not exist. The idea that if intelligent life did not develop on our Earth, we would end up being an alien civilization in a different planetary system seems to be needed to make Nick’s argument work. That premise would correspond to holding all the tickets in the lottery analogy. Any thoughts?
Here is my argument against Ancestors Simulation.
How will the Simulators get the details about, say what happened in 1960s or earlier?
No one had any means of recording everything; there were no DVDs or ubiquitous recordings, so how will they recreate those times without any data?
My problem with the Doomsday argument and similar arguments is that the future is not well-defined. There is an infinite number of possible futures (from our viewpoint). You can only make a judgment of the probability of being in some time slot if the class you’re considering is well-defined; in other words, it can only be done in hindsight. The longer humans have existed, the more confidence we can have in our continued survival, but at the outset we simply don’t know.
The Doomsday argument is equivalent to the urns and balls example where one urn has an unknown number of balls.
The simulation argument has a similar problem, in that even if we accept the necessary assumptions required, reasonable counter-arguments that use the same ‘average expectation’ form, such as Sean’s idea that the most common simulations would be the cheapest, crudest, lowest-resolution ones, can be dismissed by special-pleading ‘utility’ arguments that assume insight into the intent and motivation of the simulators. Not to mention that the simulators themselves would, by the same argument, have to conclude they were themselves simulations…
Sean’s suggestion that the most common simulation is likely to be the ‘cheapest’ also provides a counter to the Boltzmann Brain argument, which ignores the fact that, given the complexity of the human brain, of the random fluctuations that produce brains capable of some level of awareness, the overwhelming majority will not be competent, fully functioning brains complete with coherent memories, but severely disabled brains without coherent memories. So, under the Boltzmann Brain argument, the chances of you having the relatively coherent mental state you do are astronomically remote.
Even though I agree with you, Mr RIGHT, about the Boltzmann Brain, consider this: an overwhelming number of planets will not have life, due to the near statistical impossibility of life emerging by sheer chance, and yet we find ourselves on what could be the only planet in the universe with life 🙂
Excellent discussion!
When you asked Nick about what we could see in this world that seems to be a bit off, something to make you wonder if this stuff is real, my thought was that we have two huge theories about the universe, general relativity and quantum mechanics, each of which produces great results in its own area, but neither of which has the slightest idea about the other theory. Perhaps the designers of the simulation didn’t think they would need to go further?
Mike
Re the simulation hypothesis, perhaps evidence of low resolution can be found in consciousness? Given that consciousness seems to be non-continuous, with plenty of gaps. E.g., the phenomenon of change blindness, where we fail to perceive something that happens within the very field of view that we’re looking at. Perhaps the simulator does that to save cycles.
Also, I may have missed something here, but the simulation argument seems unaccountably deistic and anthropomorphic. Who’s running the simulation? And why do we assume we earthlings are the subject of it? And when will our overlords look at the test tube in which our simulation is running and think, “This one’s turning into rancid slime. Let’s pour it out and use it for something else.”
Anthropocentric, I meant, not anthropomorphic.