Greetings from bucolic Banff, Canada, where we’re finishing up the biennial Foundational Questions Institute conference. To a large extent, this event fulfills the maxim that physicists like to fly to beautiful, exotic locations, and once there they sit in hotel rooms and talk to other physicists. We did manage to sneak out into nature a couple of times, but even there we were tasked with discussing profound questions about the nature of reality. Evidence: here is Steve Giddings, our discussion leader on a trip up the Banff Gondola, being protected from the rain as he courageously took notes on our debate over “What Is an Event?” (My answer: an outdated notion, a relic of our past classical ontologies.)
One fun part of the conference was a “Science Speed-Dating” event, where a few of the scientists and philosophers sat at tables to chat with interested folks who switched tables every twenty minutes. One of the participants was philosopher David Chalmers, who decided to talk about the question of whether we live in a computer simulation. You probably heard about this idea long ago, but public discussion of the possibility was recently re-ignited when Elon Musk came out as an advocate.
At David’s table, one of the younger audience members raised a good point: even simulated civilizations will have the ability to run simulations of their own. But a simulated civilization won’t have access to as much computing power as the one that is simulating it, so the lower-level sims will necessarily have lower resolution. No matter how powerful the top-level civilization might be, there will be a bottom level that doesn’t actually have the ability to run realistic civilizations at all.
This raises a conundrum, I suggest, for the standard simulation argument — i.e. not only the offhand suggestion “maybe we live in a simulation,” but the positive assertion that we probably do. Here is one version of that argument:
1. We can easily imagine creating many simulated civilizations.
2. Things that are that easy to imagine are likely to happen, at least somewhere in the universe.
3. Therefore, there are probably many civilizations being simulated within the lifetime of our universe. Enough that there are many more simulated people than people like us.
4. Likewise, it is easy to imagine that our universe is just one of a large number of universes being simulated by a higher civilization.
5. Given a meta-universe with many observers (perhaps of some specified type), we should assume we are typical within the set of all such observers.
6. A typical observer is likely to be in one of the simulations (at some level), rather than a member of the top-level civilization.
7. Therefore, we probably live in a simulation.
Of course one is welcome to poke holes in any of the steps of this argument. But let’s for the moment imagine that we accept them. And let’s add the observation that the hierarchy of simulations eventually bottoms out, at a set of sims that don’t themselves have the ability to perform effective simulations. Given the above logic, including the idea that civilizations that have the ability to construct simulations usually construct many of them, we inevitably conclude:
8. We probably live in the lowest-level simulation, the one without an ability to perform effective simulations. That’s where the vast majority of observers are to be found.
Hopefully the conundrum is clear. The argument started with the premise that it wasn’t that hard to imagine simulating a civilization, but the conclusion is that we shouldn’t be able to do that at all. This is a contradiction; therefore one of the premises must be false.
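To see how lopsided the counting gets, here is a toy sketch, with entirely made-up numbers for the branching factor and the depth, just to illustrate the scaling:

```python
# Toy counting model for the simulation hierarchy. Level 0 is the
# top-level civilization; every civilization capable of simulating runs
# N simulations, and resolution runs out after D levels.
# N and D are made-up illustrative values, not estimates of anything.

N = 1000  # simulations run per capable civilization (assumed)
D = 5     # depth at which effective simulation becomes impossible (assumed)

populations = [N**level for level in range(D + 1)]  # civilizations per level
total = sum(populations)

for level, count in enumerate(populations):
    print(f"level {level}: {count:>16} civilizations, "
          f"{100 * count / total:8.4f}% of the total")

# With N = 1000 and D = 5, the bottom level holds about 99.9% of all
# civilizations, so a "typical" observer lives in a simulation that
# cannot itself run effective simulations.
```

The particular numbers don’t matter; any branching factor much larger than one concentrates almost all of the measure at the bottom level.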
This isn’t such an unusual outcome in these quasi-anthropic “we are typical observers” kinds of arguments. The measure on all such observers often gets concentrated on some particular subset of the distribution, one that might not look like us at all. In multiverse cosmology this shows up as the “youngness paradox.”
Personally I think that premise 1 (it’s easy to perform simulations) is a bit questionable, and premise 5 (we should assume we are typical observers) is more or less completely without justification. If we know that we are members of some very homogeneous ensemble, where every member is basically the same, then by all means typicality is a sensible assumption. But when ensembles are highly heterogeneous, and we actually know something about our specific situation, there’s no reason to assume we are typical. As James Hartle and Mark Srednicki have pointed out, that’s a fake kind of humility: by asserting that “we are typical” in the multiverse, we’re actually claiming that “typical observers are like us.” Who’s to say that is true?
I highly doubt this is an original argument, so probably simulation cognoscenti have debated it back and forth, and likely there are standard responses. But it illustrates the trickiness of reasoning about who we are in a very big cosmos.
James,
Since you ask:
It would appear that it’s a vat in a brain, but not a brain in a vat, primarily. But, just like a-turtle-below-a-turtle, here it’s a-vat-in-a-brain-in-a-vat-in-a-brain…, all the way down. Physics being a quantitative science, an important question which then naturally arises is whether there can be multiple vats in a single brain or not. The answer provided is in the affirmative. Which answer, BTW, also implies that there are multiple brains in a vat. And oh, yes, just one more proviso: both the brains and the vats are simulated entities.
Or at least, that’s my understanding of it all. [It took something like 4 days for me to wrap my head around the issue, but with your question, I finally got there!]
…Hmmm. … These are foundational questions; physicists not only think but also confer about such (or at least these) things.
–Ajit
[E&OE]
JimV
“For my part, I had the impression that simulation advocates were assuming a “standard” multiverse of universes with conservation laws similar to ours in order to make their probability argument, so I argued against it with that assumption.”
Yes, that is exactly what they are assuming. But once you raise the simulation argument at all, you have no reason to make any assumptions about the sorts of rules and laws that might apply in other simulations or in the larger universe.
James Goetz
Ultimately the brains and the vats exist inside our conscious representation of the world which is all we have access to anyway.
To answer many of the questions in the thread we must first find a viable model of reality that is simulatable. I think I have such a model, but I don’t know how far it can take us yet. The idea needs some more work, and maybe people here can extend it, but you can try it right now and see what it can do; it is based on simple JavaScript programs.
http://www.reality-theory.net/a.htm
http://fqxi.org/community/forum/topic/2451
Note: the gravity program does produce gravity now; this was solved during the past month.
“But once you raise the simulation argument at all then you have no reason to make any assumptions about the sorts of rules and laws that might apply in other simulations or in the larger universe.”
Here is a possible reason: a starting point for a search based on experience (e.g., I usually leave my keys on the dresser in the bedroom) is at least as good as any other, and you have to start somewhere in any search or inquiry. I see nothing wrong in making assumptions. Others may disagree with them, or conclusions may not follow from them, but without making trials and errors there would be no evolution. I would find it interesting, and a bit convincing, if the statement “it is likely that our universe is a simulation” followed from the assumption “there are an infinite number of universes similar to our own”; and conversely, since I do not think the conclusion is valid, I am somewhat convinced that our universe is not a simulation. There is no logical certainty for either conviction, but this is the way science works, it seems to me.
Logicophilosophicus,
I think whether the digits of pi are random or not depends on your definition of randomness. I think by the definition you are using, any computable number (even if irrational) would not be random, since an algorithm exists which can compute it. However, considering that there are countably infinitely many such strings of algorithmically defined digits, from the perspective of somebody who doesn’t know the number is pi, the string of digits is random. The question comes down to when one has sampled enough digits of pi to know that each digit uniquely satisfies the BBP formula. My intuition would be that for any finite sample of pi’s digits there are many (perhaps infinitely many) finite BBP-like formulae which compute pi up to the nth decimal place. The (n+1)th draw of a digit would then seem random from such a perspective, since one would not be able to determine which formula was the generating formula.
Clarification:
In my last post: the FQXi article says, in the last line of the abstract, “Gravity appears when lines meet.” While that does give a minute force, it does not reproduce Newton’s law at large distances (the article is from last year’s contest). The correct constraint, which I have found in the past month, is
if ( p == dt && p1 == dt)… where dt=d0/2
and that is implemented in the program shown, giving the correct Newton’s law.
Daniel K
I’m not sure what you mean. The BBP formula doesn’t just happen to agree with the digits of pi it has been checked against, surely. It generates any required (hexadecimal) digit of pi without calculating the previous digits – so via BBP pi fails Kolmogorov’s incompressibility criterion, I suppose, and is predictable rather than ‘random’. Of course it is trivially true – Babbage pointed this out 150 years ago – that there are infinitely many computational rules capable of generating any specified finite sequence of digits. But BBP generates any previously unspecified/uncalculated digit of pi – a different situation altogether.
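For concreteness, the digit-extraction trick looks like this in practice. This is a minimal floating-point sketch of the BBP formula (my own illustration, not code from anyone in this thread); float precision limits it to positions up to a few thousand:

```python
def pi_hex_digit(n):
    """Return the n-th hexadecimal digit of pi after the point (n >= 1),
    via the Bailey-Borwein-Plouffe formula; no earlier digits are computed."""
    def frac_series(j):
        # fractional part of sum_{k>=0} 16^(n-1-k) / (8k + j)
        s = 0.0
        for k in range(n):  # head terms: modular exponentiation keeps them small
            s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, t = n, 0.0       # tail terms shrink by a factor of 16 each step
        while (term := 16.0 ** (n - 1 - k) / (8 * k + j)) > 1e-17:
            t += term
            k += 1
        return s + t

    x = (4 * frac_series(1) - 2 * frac_series(4)
         - frac_series(5) - frac_series(6)) % 1.0
    return "0123456789abcdef"[int(16 * x)]

# First eight hex digits after the point: pi = 3.243f6a88... in base 16.
print("".join(pi_hex_digit(i) for i in range(1, 9)))  # -> 243f6a88
```

Digit n costs O(n) arithmetic and never touches digits 1 through n-1, which is exactly the ‘predictable rather than random’ property at issue.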
Daniel K
I should have said ‘demonstrates a property for an infinite string once considered (pre-BBP) analogous to the Kolmogorov complexity of a “random” finite string.’ As you can tell from my clumsiness with these concepts, I am not a mathematician. However, on philosophical grounds I believe the universe to be computational in nature, and therefore that the mathematics describing it is discrete and finite (hence my references to Feynman and Wheeler). For me, infinite strings (or, for that matter, a continuum of infinitesimal points) are at best potential. (And therefore all actual strings of numbers or digits are computable.) But that’s really just tangential to the present discussion. Mea culpa.
I have assembled my own ‘big picture’ of reality, dividing all knowledge at the top level into 27 core categories. Each knowledge domain is represented by a letter of the archaic Greek alphabet.
For each knowledge domain, I attempted to provide a primer of all knowledge in that area, by carefully selecting the best Wikipedia articles covering the most important concepts. Clicking on the name of each knowledge domain will take you through to my A-Z list of concepts in each area.
Link:
http://www.zarzuelazen.com/CoreKnowledgeDomains2.html
I believe my scheme represents the ‘theoretical minimum’ required to intellectually tackle ‘reality theory’. The Wikipedia articles nowadays are actually of very high quality, and when you group multiple related articles together, it’s quite surprising how comprehensive and effective the resulting ‘knowledge primer’ is.
I recommend that everyone interested in ‘reality theory’ check out my knowledge portal and make sure you are familiar with *all* the concepts listed. On average, I needed about 64 articles per knowledge domain to get an effective primer in the area. Skim through all of these, and I promise you will be ahead of 95% of all amateurs in the game.
Cheers!
“I highly doubt this is an original argument, so probably simulation cognoscenti have debated it back and forth”
Indeed. I independently arrived at the same argument, and wrote it down, a bit more than two years ago (by coincidence, I was reading From Eternity to Here at the time).
Another point: It is rather pointless to try to calculate the probability that we are living in a simulation if this assumes that the physics in the simulation is the same as the physics in the real universe in which it is running. Of course, there is no reason that this must be the case.
There is a great SMBC which addresses this topic.
Logicophilosophicus, my point is that pi is only not random if you know ahead of time that the digits the BBP formula produces are those of pi. If you are reading off one digit at a time without looking at the form of the formula producing it, then you have no way of determining exactly which number is being computed unless you let it run for infinite time. The only point I’m making is that if we abandon the assumption that you know the number a priori (i.e., its generating formula), you have no way of knowing with absolute certainty that any finite string that agrees with pi on all of its digits so far will continue to do so if run out forever. It’s perhaps a different notion of randomness than the one you are using, but one I think practically applies to “random numbers” as we experience them in measurement.
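As a toy illustration of that point (my own example): two different generating rules can agree with pi on a finite prefix of digits and then part ways, so a finite sample cannot pin down the generator.

```python
import math

# Two different "generating rules" that agree with pi on a finite prefix:
# pi itself versus the classical rational approximation 355/113.
print(f"{math.pi:.15f}")    # 3.141592653589793
print(f"{355 / 113:.15f}")  # 3.14159292035398...: agrees through 3.141592, then diverges
```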
Sean,
I don’t think we can easily imagine “creating” a simulated universe. What we can easily do is imagine a simulated universe; its creation is an entirely different and far less trivial matter. As such, I have a problem with the first premise in your argument.
It seems to me that some of you dear speculators about universal simulations should give some thought to the number of states and cycles needed for such simulations. To me it is rather like the discussion of free will as relating to invocations of the uncertainty principle. We do not need the latter, as free will is really a question of computability and information. Of course advocates of a simulated world may try to escape by saying that G, hbar, and c (and a GUT?) are just parameters chosen for this simulation, submerged in a grander universe with far greater total “information,” and maybe extensive dimensions, available.
In any case I cannot get very excited about this debate, since it is to me just a modern version of Descartes’ solipsism, which has always seemed to me to be a waste of those precious moments that we have on earth. Let us get back to saving the world and figuring out how the whole damn thing works!
When we simulate reality, we create things that are not alive. In other words, we model life with non-life. Organisms with code and hardware or whatever. This has to be, because we have never managed to make anything alive. We may simulate life to some extent, but that is the best we can manage.
So ‘simulation’ as a concept is constrained for us in this way.
It could be possible that a higher level entity might be able to create life, and confer that life into living beings for whom he had delineated a lower level reality than his own.
We (whoops….) would then be in a lower level reality, rather than a simulation, IMO.
Simon Packer said:
Not yet perhaps, but we seem to be very close. Craig Venter and his teams have recently managed to design and create a novel genome from scratch by chemical processes, insert the synthetic genetic material into a cell from which the original genetic material had been removed, and the result was a viable, fecund single cell organism that was a new species.
Note that they did not take various bits of existing DNA from living organisms and splice them together. They designed the DNA on a computer (lots of material experiments & trials involved of course) and then synthesized the DNA from chemical stocks. And to date they have already figured out what about 2/3 of the DNA in the genome they created does. There are no fundamental obstacles to continued progress in understanding genetic codes or creating life completely from scratch. No magic necessary, except the magic of the steady progression of science.
I have been participating in the Philosophy Talk blog, out of Stanford University since about 2012. Recently, I happened upon The Big Picture and am enjoying it immensely. Thanks for setting me straight on complexity. My previous view was that complexity compounds chaos and simplicity supports serenity. Now I get the error of my ways, after reading what you wrote on page 235. Guess I had it just about backwards? Or maybe, my observation was applicable to a much smaller world discipline, such as sociology? Anyway, thanks again.
darelle
I don’t think we are very close to making artificial life. At best, we are reprogramming the machinery nature already uses.
Venter has been accused of hype and showmanship by colleagues. To me, there seems to be a continuum between genetic editing and these claims to have made life from scratch. For one thing, oligonucleotides presently max out at 200 or so base pairs, and I believe PCR-type replication is used for long sequences, which requires existing cellular DNA to produce them. I am interested in how exactly the Venter genome was made: what processes were used, and which of them ultimately require a living template. Secondly, and obviously, a living host cell is required, natural or pre-modified from same.
I concede that even if my doubts about Venter using truly synthetic DNA from scratch are correct, it remains true that there is the potential for progress toward a fully synthetic genome. But regarding the host cell in this ‘chicken and egg’ genotype/phenotype situation, we are light years away from building a synthetic cell from scratch. So as for your statement that there are no insurmountable obstacles to artificial life, I disagree qualitatively as well as quantitatively, as it were.
Simon,
They are actually getting pretty close to making a completely novel 100% RNA replicase, which is an RNA molecule that can create copies of itself using only ribonucleotides. They already have one that can copy chains as long as itself (a polymerase), but it can’t yet copy itself (because it is too highly structured).
Although this may not exactly be artificial life, it is a strong first step. When they have one that can self-replicate, they can evolve new interacting chains that also get replicated if it can help the polymerase perform better.
It is not hard to imagine quickly evolving new functions by splitting and diluting the pools while spiking in random RNA chains, and it will only be a matter of time until rudimentary cells exist that are based on chemistry performed by only RNA molecules.
Sean
Really like your blog. Wanted to get your view on a theory. I call it the cricket theory of why we are alone in the multiverse. The most common score in cricket is a duck (zero), then 1, then 2, etc. (actually the next one is 4, not 3, but that is because we have boundaries, which is a digression). If there is a multiverse then we would expect this kind of distribution (is that Pareto?). So the vast majority of universes that have life (clearly this one is not completely barren) would have life occurring only once, by some freak accident, and only a small proportion would be teeming with life; therefore we are almost certainly alone. (A toy version of this counting is sketched below.)
What do you think?
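For what it’s worth, here is a toy version of that intuition; the Poisson assumption and the mean are entirely mine, just for illustration:

```python
import math

# Toy model (illustrative assumption only): the number of independent
# origins of life in a universe is Poisson-distributed with a small mean.
lam = 0.01                    # assumed mean number of origins per universe

p_alive = 1 - math.exp(-lam)  # P(universe is not completely barren)
for n in range(1, 5):
    p_n = math.exp(-lam) * lam**n / math.factorial(n)
    print(f"{n} origin(s): {100 * p_n / p_alive:.4f}% of life-bearing universes")

# With lam = 0.01, about 99.5% of life-bearing universes host life exactly
# once: the duck-then-1-then-2 shape, with no boundaries to spoil it.
```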
Moe
There seem to be interesting leads with ribozymes (there’s a good wiki article), though again I think (I might be wrong) they are synthesized at least indirectly and in part from once living matter, using a process that has been described as ‘directed evolution’, i.e. unnatural selection. (Well, I suppose we are natural…)
A lot hangs on your phrase ‘it is not hard to imagine’. Now that would be a long discussion…
Simon,
It might take a few decades, but in this case it actually is not hard to imagine at all. Getting to a system running on only simple chemicals instead of complex molecules will certainly happen. It is very low-hanging fruit.
Getting a system that doesn’t require intervention to dilute and mutate and evolve with the help of human selection, well, that would probably leave you with a system that needs millions of years to do interesting things. And you can’t really write a paper about something like that.
In any case, once this has been done with RNA, people will try with other, non-natural polymers. It will be expensive at first, but should lead to the same abilities, including cells coming close to qualifying as ‘life’.
Moe
We’ll see. It can be hard to predict with technology.
My thoughts are that, for example, silicon miniaturization worked out rather well, but nano-technology and manned space travel didn’t go anywhere much, so far at least. What can be imagined cannot always be realized, or not as easily as some thought.
It depends also on what you mean by a running system and whether at the end of the day it is in essence a partial re-run of what nature already does.
Personally, I’m not optimistic that major progress will be fast toward either a new type of life (widely held as such) built on new molecular machinery, or truly synthetic life built on essentially the chemistry we have uncovered in nature, if indeed either turns out to be achievable.
The next step in understanding the big picture (in contrast to pretending) is to differentiate, mathematically, between “multi-world interpretations” and “multi-word interpretations”; that is, we can account for “imaginary” or “poetic” worlds as “fiction.” More specifically, the “prior credence” that “one thing is not another” has two applications, complete and directional: complete opposites can only be discerned by priority, as in “before annihilation”; directional opposites can only be discerned as “prior distances” by “exclusive travelers”: one point is not another. Synonymously (decentralizing priority): instead of prioritizing zero (zero-centricity) we can prioritize self-multiplication: map the smallest “integer” cube in terms of its three “signature” distances, the square roots of one, two, and three (in any potential direction): one point is not another (aka diagonalization, disproving epistemic finitism).