Greetings from bucolic Banff, Canada, where we’re finishing up the biennial Foundational Questions Institute conference. To a large extent, this event fulfills the maxim that physicists like to fly to beautiful, exotic locations, and once there they sit in hotel rooms and talk to other physicists. We did manage to sneak out into nature a couple of times, but even there we were tasked with discussing profound questions about the nature of reality. Evidence: here is Steve Giddings, our discussion leader on a trip up the Banff Gondola, being protected from the rain as he courageously took notes on our debate over “What Is an Event?” (My answer: an outdated notion, a relic of our past classical ontologies.)
One fun part of the conference was a “Science Speed-Dating” event, where a few of the scientists and philosophers sat at tables to chat with interested folks who switched tables every twenty minutes. One of the participants was philosopher David Chalmers, who decided to talk about the question of whether we live in a computer simulation. You probably heard about this idea long ago, but public discussion of the possibility was recently re-ignited when Elon Musk came out as an advocate.
At David’s table, one of the younger audience members raised a good point: even simulated civilizations will have the ability to run simulations of their own. But a simulated civilization won’t have access to as much computing power as the one that is simulating it, so the lower-level sims will necessarily have lower resolution. No matter how powerful the top-level civilization might be, there will be a bottom level that doesn’t actually have the ability to run realistic civilizations at all.
This raises a conundrum, I suggest, for the standard simulation argument — i.e. not only the offhand suggestion “maybe we live in a simulation,” but the positive assertion that we probably do. Here is one version of that argument:
- We can easily imagine creating many simulated civilizations.
- Things that are that easy to imagine are likely to happen, at least somewhere in the universe.
- Therefore, there are probably many civilizations being simulated within the lifetime of our universe. Enough that there are many more simulated people than people like us.
- Likewise, it is easy to imagine that our universe is just one of a large number of universes being simulated by a higher civilization.
- Given a meta-universe with many observers (perhaps of some specified type), we should assume we are typical within the set of all such observers.
- A typical observer is likely to be in one of the simulations (at some level), rather than a member of the top-level civilization.
- Therefore, we probably live in a simulation.
Of course one is welcome to poke holes in any of the steps of this argument. But let’s for the moment imagine that we accept them. And let’s add the observation that the hierarchy of simulations eventually bottoms out, at a set of sims that don’t themselves have the ability to perform effective simulations. Given the above logic, including the idea that civilizations that have the ability to construct simulations usually construct many of them, we inevitably conclude:
- We probably live in the lowest-level simulation, the one without an ability to perform effective simulations. That’s where the vast majority of observers are to be found.
Hopefully the conundrum is clear. The argument started with the premise that it wasn’t that hard to imagine simulating a civilization — but the conclusion is that we shouldn’t be able to do that at all. This is a contradiction, therefore one of the premises must be false.
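To see how lopsided the counting gets, here is a toy sketch; every number in it is invented purely for illustration:

```python
# Toy census of a simulation hierarchy: each capable civilization runs
# SIMS_PER_CIV simulations, and resolution runs out after DEPTH levels.
SIMS_PER_CIV = 1000   # assumed fan-out per capable civilization
DEPTH = 5             # assumed levels before sims get too coarse

civs_at_level = [SIMS_PER_CIV ** n for n in range(DEPTH + 1)]
total = sum(civs_at_level)
for level, civs in enumerate(civs_at_level):
    print(f"level {level}: {civs:.1e} civilizations ({civs / total:.4%} of all)")
```

Whatever the fan-out and depth, the bottom level swamps everything above it, which is exactly where the contradiction bites.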
This isn’t such an unusual outcome in these quasi-anthropic “we are typical observers” kinds of arguments. The measure on all such observers often gets concentrated on some particular subset of the distribution, which might not look like us at all. In multiverse cosmology this shows up as the “youngness paradox.”
Personally I think that premise 1. (it’s easy to perform simulations) is a bit questionable, and premise 5. (we should assume we are typical observers) is more or less completely without justification. If we know that we are members of some very homogeneous ensemble, where every member is basically the same, then by all means typicality is a sensible assumption. But when ensembles are highly heterogeneous, and we actually know something about our specific situation, there’s no reason to assume we are typical. As James Hartle and Mark Srednicki have pointed out, that’s a fake kind of humility — by asserting that “we are typical” in the multiverse, we’re actually claiming that “typical observers are like us.” Who’s to say that is true?
I highly doubt this is an original argument, so probably simulation cognoscenti have debated it back and forth, and likely there are standard responses. But it illustrates the trickiness of reasoning about who we are in a very big cosmos.
A few loosely correlated thoughts on the topic:
Hope I haven’t mentioned this here before, but anyone who’s seriously interested in the notion of living in a simulation should find a copy of Greg Egan’s “Permutation City” and study its ideas. Egan thought about this topic far more thoroughly than most people who pontificate about it, and long before they started.
Many people like to reference the Matrix films but don’t seem to have paid close attention to them – they need to think about the implications of the scene near the end of The Matrix Reloaded where Neo stops the squids, and the scene in the office of The Architect with all the monitors. I haven’t made any effort to count the level shifts in that scene, but Neo is at least 10 levels of simulation deep, and Morpheus’s “welcome to the real world” was badly mistaken. Also, a number of stories in the authorized related collection The Animatrix address the notion of “a glitch in the matrix”, i.e. what would count as credible evidence that our physical reality is not the base level. This is especially difficult because any physics adequate to describe us also implies logic and arithmetic, with all their paradoxes, inconsistencies, and/or incompletenesses, even at the base level.
As other commenters have remarked, we’ve already created our own ultralow-resolution universe simulations uncounted times, notably in the game The Sims and its variants. Worrying about the ethics of the operators of any simulation that we may be embedded in is far more speculative than worrying about the ethics of Sims players who “murder” every inhabitant of their games every time they select “quit and don’t save”. To a panpsychist like Chalmers, who believes that consciousness is imbued in, or adjacent to, everything, why is extinguishing a software individual in a real-life Intel/Microsoft PC not unethical? Is believing that we are in a simulation ourselves just an easy escape from that ethical responsibility?
Like a number of other philosophico-physico entertainments, the first step in considering that we may ourselves be in a simulation involves throwing out Occam’s Razor. If you want to do fairly serious philosophy, you might want to ask why discarding the Razor is any less worthless than beginning a deductive chain (e.g. axiomatic QFT) with a set of axioms that includes a contradiction.
Just in case, read Emphyrio (Jack Vance).
“If Sean’s favorite interpretation, Everettian Many-Worlds, is an accurate description, that’s truly guaranteed unpredictable, for there’re as many observers as there are results, with no way for the observers to correlate their results in advance.”
Yes. But if Everett was right, then you will of course have worlds where seeming ‘glitches’ continue to occur. I could come up with many scenarios based on quantum Russian roulette.
You set up your computer to record the random outcome of a quantum event, like spin up or spin down. If the result is up, record it and try again. If it is down, stop the experiment.
In some branch, there will be a computer that records 1 billion ups in a row. This could be followed by binary code for an order for one large pepperoni pizza in Russian.
All possibilities will actually occur in some tiny branch, so this kind of test cannot distinguish between reality and simulation.
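For concreteness, here is a minimal classical sketch of that protocol; a pseudorandom coin stands in for the quantum measurement, and the parameters are arbitrary:

```python
# Moe's stop-on-down experiment, simulated classically. In Everettian
# terms, a run of N consecutive ups occupies a branch of weight 2**-N.
import random

def run_experiment():
    """Flip until 'down'; return how many 'ups' preceded it."""
    ups = 0
    while random.random() < 0.5:  # probability 1/2 for 'up'
        ups += 1
    return ups

trials = [run_experiment() for _ in range(100_000)]
print(f"longest run of ups in 100,000 trials: {max(trials)}")
# The billion-up branch exists, with weight 2**-1_000_000_000:
print(f"billion-up branch weight: ~10^{-1_000_000_000 * 0.30103:.0f}")
```

In 100,000 trials the longest run is typically only around 17, which is just the point: the pizza-order branch exists, but with utterly negligible weight.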
Moe
Re Everett/Bryce “Many Worlds” – agreed. I recall that Eddington – who famously defended the Second Law as the only unchallengeable law – pointed out that there is a tiny possibility of some chance confluence of air molecules suspending a textbook in mid-air. In some “World” among an infinite ensemble the chance motion of molecules will allow a man to walk on water.
I much prefer the silly Simulation Universe idea, not because it makes sense (which it doesn’t) but because its origin (computer simulation or modelling) points the way to a plausible account of the universe (a digital, discrete model) which solves all sorts of problems and which also accords with the insights of some of the great theoreticians (e.g. Feynman’s chequer-board, or Wheeler’s it-from-bit). For me, the greatest theoretical breakthrough of the present century is the demonstration that Conway’s Game of Life is Turing-complete.
I should have credited Paul Rendell (February 2010) with the design of a Universal Turing Machine in Life.
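For anyone who wants to poke at that substrate directly, a complete Life update rule fits in a dozen lines (a minimal sketch; the glider is the standard five-cell pattern):

```python
# One generation of Conway's Life on an unbounded grid. Live cells are
# (x, y) tuples in a set; everything else is dead. Rule: birth on 3
# neighbours, survival on 2 or 3.
from itertools import product

def step(live):
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The glider translates itself one cell diagonally every four generations.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
print(sorted(g))   # same shape as `glider`, shifted by (1, 1)
```

Gliders are the “signals” out of which Rendell’s Turing machine is wired, so in a very literal sense this handful of lines is a universe whose physics supports computers.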
Logico…,
“For me, the greatest theoretical breakthrough of the present century is the demonstration that Conway’s Game of Life is Turing-complete.”
That is a very interesting fact. As is the fact that a “universal constructor” can be built which contains a Turing-complete computer, and which can build copies of itself. This is a small-scale example of the matter at hand, and one which may indicate that there is no need for a ‘knowledgeable being’ to be the creator of any particular simulation.
It may be more likely that simulations would be set up with the express goal of making many minor variations of copies of themselves, which would do the same, eventually “evolving” into various perfect simulation replicators (very analogous to life as we know it on our own planet).
GMcK:
Exactly. It’s clear that at no point in any of the movies is anything ever shown outside of the Matrix — even the grimy underground city, even the machine city; all are simply different subroutines of the Matrix, just like the mountain chalet with the ghosts.
(And, as a side note, the series is a not-at-all-veiled allegory of Christian Gnosticism. The Matrix itself, which, again, is outside everything shown in any scene, is God. Neo is Jesus.)
And the Matrix of the movie franchise is itself not much higher resolution. It’s video-game physics, cheat codes and all.
Physics as we observe it is of a radically higher resolution and complexity. As best we know, we’ve got access to everything from Planck scales out to an hundred billion light years and a span of at least a baker’s dozen billion years of history.
No simulation will ever be created in this universe that comes even close to that size. Even simulating that level of detail at merely continental scales would likely require so much hardware in such close proximity that it would long since have collapsed into a black hole and/or exploded like a supernova from the energy input.
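To put rough numbers on that (all values here are coarse assumptions, chosen only to show the orders of magnitude involved):

```python
import math

# Back-of-envelope: how many Planck volumes fit in a continental region?
PLANCK_LENGTH = 1.6e-35   # metres, approximate
side = 3.0e6              # metres: rough linear scale of a continent
height = 1.0e5            # metres: up through the atmosphere

cells = (side ** 2 * height) / PLANCK_LENGTH ** 3
print(f"Planck volumes at continental scale: ~10^{math.log10(cells):.0f}")
print("atoms in the observable universe, for comparison: ~10^80")
```

Tracking ~10^122 Planck-scale cells with hardware built from at most ~10^80 atoms is not a hardware upgrade away; it is ruled out by counting.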
So, if you want to propose Matrix-like simulations with super low resolution cartoon physics, sure…but there’s no reason to support such a conspiracy theory and overwhelming reason to reject it along with CIA dental implant mind rays. If you see a young Keanu Reeves at a restaurant and he flies off like Superman, sure…but you should also consider checking yourself into the nearest psychiatric ward in such a case.
…not to mention that all this again illustrates how far short the proposition that we live in such a simulation falls.
Cheers,
b&
James Cross (“The idea of asking why an advanced civilization might want to create our simulation is much like asking why did God create the universe the way it is. Substitute advanced civilization for God and it is the same question.”)
Thanks for your reply. I agree there are great similarities between those approaches, but as I see it, one difference is that some people who tend to accept the simulation hypothesis do so on semi-mathematical grounds (as in the argument outlined by Dr. Carroll) and therefore should be willing to consider semi-mathematical objections to that argument. One of my objections was that, to assume there are “likely” to be so many universe-simulations by higher civilizations that it is “likely” that ours is one of them, there has to be a compelling purpose/reason/advantage for making such finely-detailed simulations. I am saying the arguers cannot fall back on faith or the incomprehensibility of higher civilizations when trying to justify a probabilistic argument. Do you see the difference?
To put it more briefly, a belief that was obtained by a process of reasoning should be able to be removed by a process of reasoning.
Logicophilosophicus:
I have become overwhelmingly skeptical of this and similar propositions of late.
Let’s consider how this scenario might play out in a toy universe. We have a room filled with air and one book and a background gravitational field. We create this universe ex nihilo with the book stationary in the geometric center of the air and the molecules with whatever initial vectors you might care to propose — such as vectors sufficient to suspend the book.
That is, we’ve magicked the exact scenario into existence — never mind how, never mind the likelihood of such “randomly” occurring. We’ve made it occur.
How does this system evolve?
Well, first, we have to consider what it is that’s holding the book up. When we start the simulation, the book has no momentum, so the air must be exerting a force exactly equal to its weight on the bottom. That means that the vectors of the air molecules beneath the book are pointing up at the bottom of the book — and those molecules, with so little mass, are moving very fast, probably on the order of dozens of miles per hour.
The first molecules to reach the book impart their kinetic energy to the book and rebound back in the opposite direction at close to the same speed. And they slam into the molecules right behind them, with almost as much kinetic energy on both sides. The two “rows” rebound at about half the initial velocity — but chaotically, in all directions, especially since the underside of the book isn’t a perfectly flat surface and has already scattered the initial rebound.
Since this second wave doesn’t have enough remaining kinetic energy to counteract gravity, the book is now starting to fall — and we’re barely a measurable amount of time into the toy universe’s history. And the initial gust of wind supporting the book is already exhausted.
It becomes instantly clear that the only way to suspend the book on air is with a continuous blower…but there’s going to be all kinds of turbulence. If you want the book to just float rather than tumble wildly and careen about, you’re going to need insane levels of active feedback and control of air streams…
…and this is so far from the initial suggestion that I don’t think we need to take it any further.
Basically, the floating textbook thought experiment amounts to a proposal that entropy can spontaneously reverse dramatically at macroscopic scales and remain indefinitely in an incomprehensibly low state. That’s magic, pure and simple. Everything we’ve ever observed in the Universe contradicts such a claim, no matter what level of detail you wish to approach it.
We can even bring quantum indeterminacy into it. Since the probability wave for any given electron has a non-zero value at all locations in the Universe, the electron could literally be anywhere. Which means there’s a non-zero possibility that your body could spontaneously teleport to the other side of the room. Yet we also know that the probability waves, though they extend everywhere, are overwhelmingly point-like in their actual distribution…and we know what it takes to move those peaks around. A proposition that you can steer one of those peaks is again a claim of Sean’s favorite zilbot particle, and we’ve ruled that out. A claim that you can do so for an entire body…is a claim of magic, pure and simple.
Cheers,
b&
Daniel
You are right. I am not engaging with your points. This is not a personal insult; it is a conscious choice. There are countless ways to modify the probability arguments raised by Sean. Some ways of defining the problem don’t make sense, and the lack of mathematical consistency often doesn’t seem to deter the author. The bottom line is that this is not a subject where one can dispense with observables. So it really makes no difference how well you express your views, whether you are a good writer or a good communicator or whatever. You need to find a rigorous test to differentiate between natural and computational consciousness or you have made no progress at all. There may be ways to do that, but they would be based on the mathematics that I pointed you to in the video. If you want to come up with a different test then do so. I reiterate that I never claimed to have come up with a well-defined distinction between deterministic/computational randomness and natural/quantum-mechanical randomness; I just raised the possibility. Once again, no one writing in this blog is actually making progress: not Sean, and no one else either. The same applies to Bostrom and pretty much everyone else.
Ignacio
Moe:
But where’s the hardware to run all these simulations — let alone the energy to run them?
I keep thinking people aren’t understanding that computation isn’t some sort of Platonic ideal that doesn’t require any sort of physical resources. The exact opposite is the case; simulations need even more resources than whatever they’re simulating.
Cheers,
b&
Ignacio:
That’s why the simulation argument doesn’t even make it out of the starting blocks.
Everything we’ve ever observed says that the observable universe is an hundred billion light years across, a baker’s dozen billion years old, and that entire volume of spacetime is detailed down to Planck scales. We have good reason to suspect that our own observable universe is but a tiny portion of an even more unimaginably vast multiverse. We have even better reason to suspect that the observable universe itself is practically infinitely multifaceted as proposed by Everett in his Many-Worlds Interpretation.
The simulation argument says either that our understanding is correct but it’s all happening on some even bigger computer; or that we’re all dupes in some sort of conspiracy theory.
The simulation argument goes on to assert its likelihood with claims that the overwhelming majority of intelligent observers will be within simulations, and all those observers will be observing the same basic thing.
That claim immediately rules out physics being what we understand it to be, because there’s no way to even pretend to create a computer capable of simulating even the tiniest fraction of the observable universe using physics as it actually works.
And it also rules out the second possibility, because it means that, though we think we have access to the whole universe, we probably don’t actually even have physics-level access to microscopic scales and we’re being actively deceived into thinking we do — meaning we’re not only not typical, but probably can’t actually pull off real simulations ourselves (even if we might be deceived into thinking we can).
Your objections about the fundamental nature of randomness are so far removed from the actual overwhelming evidence at hand that I honestly have no clue why it would even occur to anybody to think that they’re significant.
Cheers,
b&
Ben Goren
I do not believe your arguments are enough to rule out the possibility that the distinction is meaningful. Once again, writing well is not a sufficient reason to believe your arguments. This is going to be pretty much it for me on this subject at least for now.
Ignacio
Ignacio:
Then you could trivially convince me.
All you’d have to do is provide evidence that we’re not actually making observations directly down to the subatomic level (with overwhelming hints of Planck scale) and out over hundreds of billions of light years and a baker’s dozen billion years into the past.
That is, provide evidence that NASA, CERN, and every other research agency is observing an illusion rather than reality.
Or, in the other direction, you could propose a realistic form of physics in which a computational device capable of simulating the observable universe as we observe it could be constructed.
Because, ultimately, that’s really what the simulation argument comes down to: either all our best observations are fatally flawed because of some technogeek steampunk conspiracy theory; or all of physics is true but only a small subset of some radically different and incomprehensibly more vast physics that still, somehow, has something we’d recognize as computers.
Of course, maybe your concerns about randomness could be the key to an observation that the cypherpunks have it right after all, despite what all the physicists think. But, if so, you still haven’t done anything to explain why randomness within the Matrix would be any different from randomness without. After all, why couldn’t the Matrix have its own real random source that it plugs directly into the simulation whenever it’s needed? Especially considering that’s how we ourselves deal with that problem.
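Concretely, that pattern is a one-liner in practice (a minimal sketch; os.urandom draws from the operating system’s hardware-backed entropy pool, outside the program’s own deterministic logic):

```python
# A simulation need not generate its own randomness algorithmically;
# it can inject entropy from outside the program. Here, os.urandom
# plays the role of the "real random source" plugged into the sim.
import os

def quantum_measurement():
    """Stand-in for an externally sourced random bit."""
    return os.urandom(1)[0] & 1

print([quantum_measurement() for _ in range(16)])
```

From inside the program, those bits are indistinguishable from anything the program could have computed for itself.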
Cheers,
b&
There’s another, parallel approach one can take — and that is to ask if the proposed simulation is honest or dishonest.
If honest, everything we observe is faithful to the simulation and vice-versa. All of the observable universe, at the very least, is being honestly simulated in every level of detail, from the CMB to Planck scales and everything between. But a computer capable of performing such a simulation can only operate in a universe with radically different physics from the one it is simulating — one in which quadrillion-plus-solar-mass computers don’t collapse into black holes and don’t explosively radiate waste heat, just for starters.
If dishonest, then the simulation is doing something, who-knows-what, to actively deceive us into thinking that the universe is as big and detailed as we observe it while it really isn’t. But not only does that make it radically unlikely that we’d be able to create our own detailed simulations, we’d have no way of telling the difference between that sort of simulation and the CIA using Martian Mind Rays to control us through our dental implants. Maybe that really is the case, but, as with any conspiracy theory, the very definition of insanity is to take that sort of thing seriously when you don’t have good positive evidence to support the claim. And that’s actual positive evidence, not merely “can’t-disprove-it” coincidences consistent with some interpretation of the notion.
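To put a number on the “honest” branch above: the Schwarzschild radius r_s = 2GM/c² of a quadrillion-solar-mass computer, using standard constants (the mass figure is just the round number from the argument):

```python
# Schwarzschild radius of a quadrillion-solar-mass machine: any computer
# of that mass packed tighter than this radius is inside its own horizon.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
LY = 9.461e15       # light year, m

M = 1e15 * M_SUN
r_s = 2 * G * M / c**2
print(f"r_s = {r_s:.2e} m = {r_s / LY:.0f} light years")
```

So the “honest” machine must span hundreds of light years just to avoid collapsing under its own gravity, before we even ask how its parts communicate.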
Cheers,
b&
Ignacio,
If you see my first comment, I am qualitatively in agreement with you. I interpreted the argument without labeling it a “simulation”, and the result is an argument for “minimal requirements for the existence of an observer.” I only disagreed with your “true randomness” point, since randomness is model-dependent, as I pointed out in the simple propositional/predicate example. It’s also on these grounds that I disagree with Ben Goren’s points (although I agree with his conclusions), as the universe our “simulation” is in could conceivably have a much larger set of physics than our own. There is no proof showing that our physics is the maximally complex, self-consistent set of physics for every consistent subset of those laws, though such a proof might be meaningless anyway since it would possibly be model-dependent.
Where I disagree with the simulation argument is semantics. A “simulation” entails that the physics are being modeled on some subset of physics in the universe running the simulation. At that point, the simulation is just another component of the base universe. If the “simulation” really creates observers, then they are also observers in the universe running the simulation. You can’t have simulation-dependent observers, the best you can have is a model of an observer.
To clarify this point, we can take the “converse” of the standard definition of simulation. Let’s define a simulation as a subset of the universe which could be deemed a simulation by a given observer, using the conventional definition, who would interpret it as a (mostly) consistent representation of the universe as they experience it. Clearly this is a much larger set than the set of “intentional simulations.” I see no reason to distinguish the two as different sets; they both satisfy the property that by some metric the “simulation” is a representation of some other facet of reality. The difference between an observer in a simulation and one who isn’t is the existence of another observer who judges that the primary observer is part of a simulation. So the whole argument is moot; it’s meaningless even for “simulations” that persist for long times and over large scales. An observer in the simulation is an observer in the base universe too. The argument then comes down to whether it’s an intentional simulation or not, and that’s not in the domain of science in my opinion.
Surely the (a?) false premise is
2. Things that are that easy to imagine are likely to happen, at least somewhere in the universe.
If we’re a simulation, our universe is likely much smaller than we think.
Civilizations likely don’t have the resources to simulate equally complex civilizations; the argument is that an interstellar civilization could run lots of planetary-scale simulations. But that means they have to pull the plug before the sim can go interstellar!
Daniel, we’re pretty much on the same sheet of music. I’d only note that…well, when you start phrasing it in the way you do, it becomes (as you note) very difficult to understand why the “simulation” label (which very strongly implies active intelligent design and engineering) is better than the various multiverse theories that Sean and his colleagues are investigating as part of their day jobs.
If we’re going to propose some superset of physics from which our physics and local Hubble Volume naturally and spontaneously evolve — yes, of course. It’s even a relatively safe bet these days.
But “simulation”? For such a proposal? Might as well instead propose a deistic god and slap an “Intel Inside” sticker on its forehead and call it a day….
Cheers,
b&
I agree, I don’t like the label “simulation.” If it’s used to imply that our physics, being emergent, isn’t the “true” underlying physics… so what? That could be true even if we’re not in a simulation.
Ben
You disbelieve Eddington’s argument that a textbook might (with some tiny probability) be held in the air by chance variations in the motions of gas molecules… Then you disbelieve in Statistical Mechanics, because it is precisely founded on the premise that a system will, given sufficient time, visit all possible states. Of course you know, and I know, that the time required to raise the levitating book from a day-to-day non-starter to a many-aeons near certainty is likely unattainable within the thermodynamically interesting lifespan of a single universe; but infinitely Many Worlds are just as effective as infinitely many aeons in turning vanishingly small possibilities into certainties; and that was the point.
Incidentally, you far underestimate the velocities of air molecules (hundreds rather than dozens of mph). But there’s no need to argue so indirectly. Suppose my levitating textbook weighs a pound, and measures 10 by 7 inches. The forces on the upper and lower surfaces are equal and opposite: 14 psi over 70 sq. in. comes to around 1000 pounds, getting on for half a ton. The difference required between the downward and upward forces is only 1 pound, i.e. a one-tenth-of-one-percent imbalance. Such is the power of large numbers that the probability of this imbalance is (as already stated) vanishingly small; but it doesn’t require all the air in the room to rush upwards at some moment (though that, too, must happen in the infinite timescale – Boltzmann, Gibbs, Poincaré…)
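Both numbers check out against textbook values (standard constants assumed; the book dimensions are the ones given above):

```python
import math

# RMS speed of a nitrogen molecule at room temperature: v = sqrt(3kT/m).
k = 1.38e-23            # Boltzmann constant, J/K
T = 293.0               # room temperature, K
m = 28 * 1.66e-27       # mass of an N2 molecule, kg
v_rms = math.sqrt(3 * k * T / m)
print(f"v_rms = {v_rms:.0f} m/s = {v_rms * 2.237:.0f} mph")   # ~500 m/s

# Force on each face of a 10 x 7 inch book at atmospheric pressure.
face_force = 14.7 * 10 * 7            # psi * sq.in. = pounds-force
print(f"force per face: ~{face_force:.0f} lbf")
print(f"imbalance to levitate a 1 lb book: {1 / face_force:.2%}")
```

So the required imbalance really is only about a tenth of a percent; the improbability lives entirely in sustaining it.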
Ignacio
On the subject of infinite possibilities, you should note that there are indeed “messages in the digits of pi”, assuming that they are truly random, or rather “normal” (in the technical number-theoretical sense). Take, for example, the King James Bible, rewritten in ASCII code. That precise sequence, like any other finite sequence of digits, must appear somewhere among the digits of pi (in fact it must appear an infinite number of times…)
Personally I am unconvinced that the digits of any number can be random – especially since the BBP formula (1995).
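The flavor of the claim is easy to check at small scale (a minimal sketch; it assumes the mpmath library is installed, and the target strings are arbitrary):

```python
# Hunting for short "messages" in the digits of pi. For a normal number,
# a k-digit string is expected to first appear around position 10^k, so
# the megabytes of KJV-in-ASCII sit unfathomably deep in the expansion.
from mpmath import mp

N = 100_000
mp.dps = N + 10                           # working precision in digits
digits = mp.nstr(mp.pi, N).replace(".", "")

for target in ["42", "1914", "999999"]:
    pos = digits.find(target)
    if pos >= 0:
        print(f"{target} first appears around digit {pos}")
    else:
        print(f"{target} not found in the first {N} digits")
```

(The famous run of six 9s, the “Feynman point,” shows up before digit 800, right on schedule for a plausibly normal number.)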
Frankly, I think the real ‘theory of everything’ is a theory of computer science, not particle physics.
Sean and Ben, ask yourselves this question: if you had the program for true artificial general intelligence, ran that program, and instructed your AI to ‘model all of reality’, wouldn’t it be true this program would be *more* powerful than the ‘core theory’ of particle physics?
I think the answer is a clear yes. The above putative program for the AI, when you ran it, could create *any* required vocabulary to fully understand reality at *any* level of abstraction (it’s a fully general intelligence, remember, so it could understand anything), whereas ‘core theory’ only lets you understand particle physics.
The conclusion is that the true ‘theory of everything’ is not a physics theory, but a computer science one: it is the artificial intelligence program capable of modeling all reality, such that it could create the vocab that would let you understand reality at any desired level of abstraction.
JimV
Thanks for your reply and I think I am following your argument.
My point is that much of the argument about simulations relies on the belief that the larger or container universe must obey the same rules and follow the same laws as our individual simulated universe. Hence the idea arises that there must be levels and that each lower level must have diminished resolution and detail.
If we are in a simulation we have no way to know whether the limitations of our simulation apply to the container universe or other simulations in it.
We also can’t know if there are really levels at all. The universe may be more like M. C. Escher’s Relativity where the stairs that seem to go down to a lower level lead to a higher level and the stairs from the higher level go up to a lower level.
James Cross, I agree somewhat with your point. That is, I consider it to be logically possible, but subjectively unlikely. Anyway, I agree that it would have to be considered in any attempt to rule out the logical possibility that we exist in a simulated universe. My own point is that if such cases are to be included, we have no basis in our current experience and understanding for assigning any objective probability to the simulation hypothesis, much less a “likely” one (as some people apparently have convinced themselves of), and if we are to extrapolate only from our own experience and omit unbounded speculation, we should consider it very unlikely. That is the sort of argument which I think most of us here who deprecate the hypothesis are trying to make.
For my part, I had the impression that simulation advocates were assuming a “standard” multiverse of universes with conservation laws similar to ours in order to make their probability argument, so I argued against it with that assumption. I need to be more careful in my assumptions, but in my defense I could not imagine that anyone would make a probability argument without that assumption.
Thanks for your reply.
What is all this hogwash about existence in a simulation? After all, nobody can deny that all existence is a brain in a vat. Well, there is that vat outside the brain… Hmm, could that vat be inside another brain? …