Longtime readers know that I’ve made a bit of an effort to help people understand, and perhaps even grow to respect, the Everett or Many-Worlds Interpretation of Quantum Mechanics (MWI). I’ve even written papers about it. It’s a controversial idea and far from firmly established, but it’s a serious one, and deserves serious discussion.
Which is why I become sad when people continue to misunderstand it. And even sadder when they misunderstand it for what are — let’s face it — obviously wrong reasons. The particular objection I’m thinking of is:
MWI is not a good theory because it’s not testable.
It has appeared recently in this article by Philip Ball — an essay whose snidely aggressive tone is matched only by the consistency with which it is off-base. Worst of all, the piece actually quotes me, explaining why the objection is wrong. So clearly I am either being too obscure, or too polite.
I suspect that almost everyone who makes this objection doesn’t understand MWI at all. This is me trying to be generous, because that’s the only reason I can think of why one would make it. In particular, if you were under the impression that MWI postulated a huge number of unobservable worlds, then you would be perfectly within your rights to make that objection. So I have to think that the objectors actually are under that impression.
An impression that is completely incorrect. The MWI does not postulate a huge number of unobservable worlds, misleading name notwithstanding. (One reason many of us like to call it “Everettian Quantum Mechanics” instead of “Many-Worlds.”)
Now, MWI certainly does predict the existence of a huge number of unobservable worlds. But it doesn’t postulate them. It derives them, from what it does postulate. And the actual postulates of the theory are quite simple indeed:
- The world is described by a quantum state, which is an element of a kind of vector space known as Hilbert space.
- The quantum state evolves through time in accordance with the Schrödinger equation, with some particular Hamiltonian.
That is, as they say, it. Notice you don’t see anything about worlds in there. The worlds are there whether you like it or not, sitting in Hilbert space, waiting to see whether they become actualized in the course of the evolution. Notice, also, that these postulates are eminently testable — indeed, even falsifiable! And once you make them (and you accept an appropriate “past hypothesis,” just as in statistical mechanics, and are considering a sufficiently richly-interacting system), the worlds happen automatically.
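(For the curious, here is a minimal numerical sketch of what those two postulates amount to, in the smallest possible Hilbert space, a single qubit. The state, the Hamiltonian, and the evolution time below are invented purely for illustration; nothing beyond “a vector in Hilbert space, evolved by the Schrödinger equation” is being used.)

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential, used to build the evolution operator

# Postulate 1: the state of the world is a vector in Hilbert space.
# Here the "world" is just one qubit, starting in |up> (hbar = 1 throughout).
psi = np.array([1.0, 0.0], dtype=complex)

# Postulate 2: the state evolves via the Schrodinger equation,
# i d|psi>/dt = H |psi>, for some particular Hamiltonian H (chosen arbitrarily here).
H = np.array([[0.0, 1.0],
              [1.0, 0.0]], dtype=complex)

t = 0.3
U = expm(-1j * H * t)          # time-evolution operator exp(-i H t)
psi_t = U @ psi

print(np.abs(psi_t) ** 2)      # Born-rule probabilities for |up>, |down>
print(np.vdot(psi_t, psi_t))   # the norm is preserved: the evolution is unitary
```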
Given that, you can see why the objection is dispiritingly wrong-headed. You don’t hold it against a theory if it makes some predictions that can’t be tested. Every theory does that. You don’t object to general relativity because you can’t be absolutely sure that Einstein’s equation was holding true at some particular event a billion light years away. This distinction between what is postulated (which should be testable) and everything that is derived (which clearly need not be) seems pretty straightforward to me, but is a favorite thing for people to get confused about.
Ah, but the MWI-naysayers say (as Ball actually does), every version of quantum mechanics has those two postulates or something like them, so testing them doesn’t really test MWI. So what? If you have a different version of QM (perhaps what Ted Bunn has called a “disappearing-world” interpretation), it must somehow differ from MWI, presumably by either changing the above postulates or adding to them. And in that case, if your theory is well-posed, we can very readily test those proposed changes. In a dynamical-collapse theory, for example, the wave function does not simply evolve according to the Schrödinger equation; it occasionally collapses (duh) in a nonlinear and possibly stochastic fashion. And we can absolutely look for experimental signatures of that deviation, thereby testing the relative adequacy of MWI vs. your collapse theory. Likewise in hidden-variable theories, one could actually experimentally determine the existence of the new variables. Now, it’s true, any such competitor to MWI probably has a limit in which the deviations are very hard to discern — it had better, because so far every experiment is completely compatible with the above two axioms. But that’s hardly the MWI’s fault; just the opposite.
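(To see what such a testable deviation could look like in the simplest possible terms, here is a deliberately crude toy, not any real collapse model: evolve a qubit with the Schrödinger equation, then do the same evolution interrupted by occasional random Born-rule “hits.” The Hamiltonian, the collapse rate, and all the numbers are invented for illustration; the point is only that the collapse version damps the coherent oscillation, and damping of that kind is exactly what experiments can hunt for.)

```python
import numpy as np

rng = np.random.default_rng(0)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
dt, steps, gamma = 0.05, 200, 0.05   # time step, step count, per-step collapse probability (all invented)

# Exact one-step unitary for the toy Hamiltonian H = sigma_x (hbar = 1): U = exp(-i H dt).
U = np.cos(dt) * np.eye(2) - 1j * np.sin(dt) * sx

def run(with_collapses):
    """Return <sigma_z>(t) along one trajectory, with or without random 'hits'."""
    psi = np.array([1.0, 0.0], dtype=complex)          # start in |up>
    out = []
    for _ in range(steps):
        psi = U @ psi                                   # ordinary Schrodinger evolution
        if with_collapses and rng.random() < gamma:
            p_up = abs(psi[0]) ** 2                     # Born-rule hit onto |up> or |down>
            psi = np.array([1, 0] if rng.random() < p_up else [0, 1], dtype=complex)
        out.append(np.real(np.vdot(psi, sz @ psi)))
    return np.array(out)

pure = run(False)                                         # plain unitary evolution
avg  = np.mean([run(True) for _ in range(2000)], axis=0)  # collapse toy, averaged over runs

# The unitary oscillation keeps full amplitude; the averaged collapse version is visibly damped.
print(np.abs(pure[-63:]).max(), np.abs(avg[-63:]).max())
```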
The people who object to MWI because of all those unobservable worlds aren’t really objecting to MWI at all; they just don’t like and/or understand quantum mechanics. Hilbert space is big, regardless of one’s personal feelings on the matter.
Which saddens me, as an MWI proponent, because I am very quick to admit that there are potentially quite good objections to MWI, and I would much rather spend my time discussing those, rather than the silly ones. Despite my efforts and those of others, it’s certainly possible that we don’t have the right understanding of probability in the theory, or why it’s a theory of probability at all. Similarly, despite the efforts of Zurek and others, we don’t have an absolutely airtight understanding of why we see apparent collapses into certain states and not others. Heck, you might be unconvinced that the above postulates really do lead to the existence of distinct worlds, despite the standard decoherence analysis; that would be great, I’d love to see the argument, it might lead to a productive scientific conversation. Should we be worried that decoherence is only an approximate process? How do we pick out quasi-classical realms and histories? Do we, in fact, need a bit more structure than the bare-bones axioms listed above, perhaps something that picks out a preferred set of observables?
All good questions to talk about! Maybe someday the public discourse about MWI will catch up with the discussion that experts have among themselves, evolve past self-congratulatory sneering about all those unobservable worlds, and share in the real pleasure of talking about the issues that matter.
“…too obscure, or too polite…” I have watched many of your lectures, and it is most definitely the latter.
Sean,
It seems to me that you strawman the main objection a bit. Indeed, as you say
> MWI certainly does predict the existence of a huge number of unobservable worlds. But it doesn’t postulate them. It derives them, from what it does postulate.
However, this does not answer the objection that
>MWI is not a good theory because it’s not testable
if you phrase it the way the MWI opponents usually mean it:
> MWI is no more testable than shut-up-and-calculate-only-probabilities-are-real
or than any other interpretation favored by a particular MWI opponent, as long as that interpretation makes exactly the same predictions as orthodox QM.
You are certainly right that MWI does not need an extra collapse postulate; it comes out in many possible ways from using the L^2 norm for probability in conjunction with, say, unitarity, or some other equivalent experimentally justified assumption.
Unfortunately your “rightness” is rather hollow, because you still have no definitive experiment that would convince your opponent. And so the argument becomes philosophical rather than physical, as it cannot be resolved by the scientific method.
Needless to say, a QM formulation that led to a testable prediction beyond those derivable from the orthodox approach would be an exceedingly big deal. But one can hope.
My discomfort (“objection” is too confrontational) is somewhat related, so maybe this is an opportunity for me to learn.
After decoherence, the state in the Hilbert space transforms effectively into a classical probability distribution. We have certain probabilities, given by the Born rule, assigned for every possible outcome. Those possible outcomes no longer interfere. In all other cases where this situation occurs in science, we understand those different possibilities as potentialities, but we shy away from attributing to them independent “existence”.
Now, I am not too worried about ontological baggage. I suspect that in the present context ontology as we understand it cannot be made well defined. Rather, I am worried about the epistemological baggage: what does it buy you, declaring that all those potentialities actually “exist”? Is it more than a rhetorical move? And, why make that move here and not, for example, in the context of statistical mechanics?
Sean,
“Which saddens me, as an MWI proponent, because I am very quick to admit that there are potentially quite good objections to MWI, and I would much rather spend my time discussing those, rather than the silly ones.”
Why don’t you write a post on those, then? If you want to stimulate a discussion on the interesting/unresolved issues of MWI, then maybe you should try to explain (to the non-experts) what the *right* objections are, rather than writing about the *wrong* objections.
Best, 🙂
Marko
I’ll be the first to admit that I don’t understand MWI or quantum mechanics. What actually happens when a photon goes through a beam splitter and I observe it to activate a counter on one side but not the other? Does MWI actually answer this question, or leave it unanswered? The two postulates above don’t seem to address this question at all.
Although I agree that “it’s not testable” is not a good argument, this is a minor point in the article.
The main arguments are: it completely dissolves personhood, and assuming that everything (physically) possible exists trivializes the theory.
Of course personhood could be an illusion, but shouldn’t it be possible, at least in principle, to explain why we have the illusion? Or else doesn’t it undermine scientific inquiry?
The problem as I see it is that branches separate too fast and are too numerous for there to be consistent macroscopic objects at all.
Perhaps the MWI proponent could give such an explanation (are there some kinds of “macro-branches” on our scale? I heard that branch-counting is scale-relative) but at least that’s a sound objection.
Regarding the second aspect (the trivialization of the theory) perhaps the intuition underlying this objection is not too far from the problem of probabilities in MWI. If probabilities can be meaningfully assigned to branches, then the theory is not trivial; it is explanatory (e.g. we’ve observed such and such experimental outcomes more often so far because they really are more probable, or because we have a better chance of being in a branch where they happen). Without any probabilities, or with subjective probabilities solely grounded in past outcomes, the theory doesn’t seem explanatory anymore: everything happened this way because it did, end of story… That would really be a problem.
So again I would say that the article was tackling an important issue.
Moshe– I might be misunderstanding the question, but I’ll try. I think this is a case where the ontology does matter. In statistical physics, the theory says that there is some actual situation given by a microstate, but we don’t know what it is. So instead we work with probability distributions; they can evolve, and we can update them appropriately in response to new information. None of this changes the fact that there is a microstate, and it evolves (typically) deterministically once you know the whole state.
In QM the situation is just completely different. You don’t have a probability distribution over microstates, you have a quantum state. You use that quantum state to calculate the probability of experimental outcomes, but we aren’t allowed to think that the outcome we observe represents some truth that was there all along, but we just didn’t know. That’s what interference experiments (and Bell’s theorem etc) tell us. The quantum state isn’t just a probability distribution. See also the PBR Theorem (http://en.wikipedia.org/wiki/PBR_theorem) and David Wallace’s guest post (https://www.preposterousuniverse.com/blog/2011/11/18/guest-post-david-wallace-on-the-physicality-of-the-quantum-state/).
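(A toy two-path calculation makes the point, with made-up amplitudes for a balanced interferometer: if the quantum state were merely a catalogue of our ignorance about which path was “really” taken, probabilities would add; instead, amplitudes add and then get squared, and the cross term shows up in actual experiments.)

```python
import numpy as np

# Amplitudes to reach a detector via path A and via path B, with a relative phase phi.
# All numbers are invented; think of a balanced two-path interferometer.
phi = np.pi
amp_A = 1 / np.sqrt(2)
amp_B = np.exp(1j * phi) / np.sqrt(2)

# Quantum rule: add the amplitudes, then square.
p_quantum = abs(amp_A + amp_B) ** 2

# "Ignorance" rule: the particle really took one path, we just don't know which.
p_ignorance = abs(amp_A) ** 2 + abs(amp_B) ** 2

print(p_quantum, p_ignorance)   # 0.0 vs 1.0 at phi = pi: the interference term is physically there
```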
Now, of course you are welcome to invent a theory (a “psi-epistemic” model) in which the wave function isn’t the reality, but just a black box we use to calculate probabilities. Good luck — it turns out to be hard, and as far as we know there isn’t currently a satisfactory model. The Everettian says, Why work that hard when the theory we already have is extremely streamlined and provides a perfect fit to the data? (Answer: because people are made uncomfortable by the existence of all those universes, which is not a good reason at all.)
Great post Sean.
I put up a link (https://www.preposterousuniverse.com/blog/2014/06/30/why-the-many-worlds-formulation-of-quantum-mechanics-is-probably-correct/) from Preposterous Universe in the comments section underneath the article, as I thought its inclusion in a reply might be able to help some of the misinformed.
If we start with a minimalist, instrumentalist statement about QM, “QM is a procedure for calculating probabilities associated with experimental outcomes.”, do we obtain any new predictive power if we additionally interpret ψ as the ontological state of affairs, as opposed to just another part of the QM toolbox for calculating probabilities?
But I don’t think predictive power is necessary for MWI to be useful or interesting. If it motivates a new, unambiguous treatment of probability, that would be a significant contribution in and of itself.
Thanks, Sean. I suspect this is my own personal misunderstanding, so I don’t want to take too much of your time. Let me try just once again to state my confusion.
I have no problem believing in Everettian Quantum Mechanics, and I certainly see the appeal of getting everything from unitary evolution with no additional assumptions. So we don’t have any real disagreement. But, I am confused about the natural language description of the situation. So maybe it is about ontology, a concept I clearly have some trouble with.
I take it that the crucial part in taking different possibilities as actualities is not in the post-decoherence description. If the world was fundamentally stochastic, simply described by an evolution of a density matrix, not too many people would claim that the different possibilities are more than just potentialities, and most will agree that only one of them is realized. And, this is what I feel uncomfortable about — pre-decoherence it is certainly murky to discuss the world in classical terms and argue on what exists and what not. And post-decoherence we have a probability distribution, for which normally we only believe one situation is realized. At which point are we forced into believing that all branches co-exist?
(Independently, as I complained before, almost everything in physics has a continuous spectrum, so “branches” and worlds “splitting” must be only a metaphor.)
Regarding your recent message, you make an interesting point: “Now, of course you are welcome to invent a theory (a “psi-epistemic” model) in which the wave function isn’t the reality, but just a black box we use to calculate probabilities.”
I would emphasise that the minimalist “black-box” position (QM is how we calculate probabilities) is not equivalent to the psi-epistemic position. The psi-epistemic approach also includes an additional postulate: that there really is a hidden-but-specifiable, universally true, representable state of affairs. More typical positions (which I like to call ψ-positivism) would say that, while a mind-independent reality exists, and is consistent for all observers, it does not submit to a description involving an ontic state λ.
Sean,
Does the theory predict that these universes have been around the whole time, or that they are continually being “created”?
Also, how does this tie in with general relativity and spacetime? Are these universes embedded within a higher-dimensional spacetime, just stacked on top of one another? Does the energy content of each universe exert a gravitational force on all other universes?
If I prepare a quantum state, |psi> = 1/sqrt(3)|up> + sqrt(2/3)|down>, does that mean that there are twice as many universes with |down> as there are |up> ?
Moshe– Of course the world can perfectly well be said to be described by a density matrix, since any pure state determines a density matrix. The real question is, how does the density matrix evolve? We sometimes think of decoherence happening, off-diagonal elements disappearing, and the state “branching.” But that’s only for the reduced density matrix for some subsystem; the full density matrix obeys the unitary von Neumann equation, from which the above description can be derived.
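(Here is a minimal sketch of that distinction, with an invented system-environment coupling: the global state stays a single pure vector, while the reduced density matrix of the system alone has its off-diagonal terms suppressed. That suppression is all that “branching” means at this level.)

```python
import numpy as np
from functools import reduce

N, theta = 16, 0.8   # environment size and coupling angle, both invented for illustration

e0 = np.array([1.0, 0.0])                       # each environment qubit starts in |0>
e1 = np.array([np.cos(theta), np.sin(theta)])   # ...and ends up rotated if the system is |1>

E0 = reduce(np.kron, [e0] * N)   # environment record correlated with system |0>
E1 = reduce(np.kron, [e1] * N)   # environment record correlated with system |1>

# Global pure state after a (unitary) system-environment interaction:
# |Psi> = (|0>|E0> + |1>|E1>) / sqrt(2)
Psi = (np.kron([1.0, 0.0], E0) + np.kron([0.0, 1.0], E1)) / np.sqrt(2)

# Reduced density matrix of the system: trace out the environment.
Psi = Psi.reshape(2, 2 ** N)
rho_sys = Psi @ Psi.conj().T

print(rho_sys.round(4))            # off-diagonals are suppressed...
print(np.cos(theta) ** N / 2)      # ...down to (cos theta)^N / 2: small, but not exactly zero
```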
So you have a choice: you can believe all that, and take the “probability distribution” to be a measure on which branch you find yourself on, like a good Everettian. Or you can — do something else! Change the formalism in some way so that you get to say “but those other parts of the density matrix aren’t real things.” You could invent a hidden variable that points to one particular branch (as in Bohm), or you could explicitly change the dynamics so that the other branches actually aren’t there, or you could invent a completely new (and as-yet unspecified) ontology such that the density matrix simply provides a probability distribution over some different set of variables. But you have to do something — otherwise you’re just stamping your feet and insisting that some parts of the formalism are “real” and some are not, for no obvious reason.
Mulder– Whether the universes have been around forever is a somewhat ambiguous question. The theory isn’t ambiguous — it says there is a quantum state, which evolves deterministically. But the division into branches is a human convenience (like coarse-graining in statistical mechanics), and people argue about it.
Each branch describes its own independent spacetime; there’s no higher-dimensional spacetime, no stacking, no gravitational forces.
For the third question, see answer to the first question. It depends.
Mulder:
Thinking of the worlds as being literally created is rather misleading. If one knew the whole history of the universe (multiverse, actually) from beginning to end (let’s assume it has an end, for the moment), then all one would see is the unitary evolution (a sort of spinning) of the state describing the whole universe, with different small but macroscopic parts of it behaving as though a bunch of inaccessible but “real” worlds were constantly being created, as far as observers in those parts are concerned. Hence Everett’s “relative state” terminology.
And it does not mesh well with GR at all, but neither does any other formulation of QM. This is an open problem whose resolution requires the elusive Quantum Gravity, the current Holy Grail of High-Energy Physics.
As for the relative number of worlds, Sean’s other post and comments to it discuss this issue: https://www.preposterousuniverse.com/blog/2014/06/30/why-the-many-worlds-formulation-of-quantum-mechanics-is-probably-correct/ (TL;DR: there are various proposals but no consensus on how to count the universes, and counting/number is not even the right term.)
Sean,
I am more strident than you in defending the Everettian interpretation. Like you, I don’t buy the “it’s not testable!” counter-argument. In fact, unless I’m misunderstanding QM, even the “other worlds” of MWI are testable in a few different ways.
1. We can certainly create, observe, and test small-scale superpositions and quantum effects. “Many-worlds” are just big superpositions. These are superpositions that we are a part of (and which have decohered into mixed states). But they are conceptually the same. So when someone says “we can’t measure the other MWI branches”, I reply “we routinely measure multiple branches for electrons, photons, etc.”.
2. Decoherence suggests that the interference terms tend towards zero, so that what we’re left with is a mixed state. But those terms don’t go to exactly zero, so in principle you can do careful measurements and detect the effect of the other branches. In practice for large systems these terms may be so close to zero that we don’t have a hope of detecting them; but can we hold that against MWI? Moreover, it seems like we should be able to set up careful experiments where we drive the interference terms very close to zero, but still detectable. This ‘nearly-mixed-state’ would seem, to me, like an experimental proof that the other branches ‘really exist’. (A toy illustration of how fast those terms shrink appears after point 3 below.)
3. People get very concerned about how MWI intersects with consciousness. But, in principle, we could experimentally put humans into superpositions. The size-scale of the largest achieved superposition keeps getting bigger. We can imagine having the experimental finesse required to put a human into a superposition, and prove (via interference experiments, etc.) that they really are simultaneously evolving along multiple paths (I’m being loose with language: obviously what I mean is that they are a state evolving according to SE). After we ‘collapse’ them, they will only remember one history (even though we have data showing they were superposed). Even without being able to do the experiment, it points to another in-principle-testable aspect. As a thought experiment it helps differentiate interpretations. What do the other interpretations think will happen in such a case? (Do they really think you can’t have such a big superposition? If so, that’s an experimentally testable difference in the theories.) If people accept that human-sized superpositions can exist (in principle), then they’ve already accepted that MWI-style multiple branches can be real. At that point, extending the logic to the planet, galaxy, or universe seems straightforward.
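(The toy scaling promised in point 2 above: in a model where each of N environment particles gets rotated by a small angle depending on which branch it interacted with, the residual interference term is proportional to an overlap that shrinks exponentially with N. It is nonzero in principle, but hopeless in practice for macroscopic N. The angle and the values of N are made up.)

```python
import numpy as np

theta = 0.1   # per-particle rotation angle, invented for illustration
for N in (10, 100, 1_000, 10_000):
    # Overlap between the two branches' environment records, which sets the size
    # of the residual interference term in this toy model.
    print(N, np.cos(theta) ** N)
```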
Sorry, in “described by an evolution of a density matrix” I meant “described by an evolution of a probability distribution”, which unfortunately changed the meaning quite a bit.
Anyhow, I am not confident enough to have a real opinion on the reality of the wavefunction in the fully quantum regime, or whether this question makes sense.
But, I thought that the real force of the “many-worlds” interpretation of Everettian Quantum Mechanics is that you don’t have a choice, and this I don’t see. I see a plausible scenario of how you get classical probabilistic description out of QM, which is quite a bit! But I don’t see why you need to declare the alternatives as “real”, any more than you do for other classical probability theories.
For example, in the fully quantum regime you can take the view so nicely expressed here by Tom. If the question of “what is real” only makes a meaningful appearance post-decoherence, I think you never have a situation with co-existing worlds. And, if the many-worlds part of the interpretation depends on how you interpret the wave-function pre-decoherence, I think the inevitability claim is not that strong.
But anyhow, thanks for the discussion. I am probably missing some of your points, but it’s been useful for me nonetheless.
I think the responsibility for lack of acceptance of the MWI should lie with the proponents. They have not explained what is meant by the word “world”. Is this the world we live in, where we can touch various objects with our hands, or is it in our minds, or is it something we would never interact with? If the splitting takes place when the observer decides to make a measurement, then you are giving too much arbitrary power to human beings! If the splitting is already made in the heavens and the observer merely chooses branches, then it is too metaphysical. In either case there is not the slightest advance in our understanding of the world from MWI. At the end of the experiment, all the observers, regardless of race or religion, agree on the result!! That is the main reason why after 90 years of debate, there is no agreement. Why not frankly admit, following Bohr, that the quantum world is intrinsically probabilistic and unknowable in advance, and that every time we make a measurement we gain some additional knowledge of the system?
The topic of Philip Ball’s attack on MWI came up yesterday at Uncommon Descent, the fanatical and obnoxious Intelligent Design creationist blog, where Sean comes in for repeated insults and sneering. I went over there and demolished Philip Ball a little, more than Sean does here. I also quoted Sean a little, which enraged the natives, and waved some mathematics at them, as if I were shouting “Fire! Fire!” and as you might expect, the half-naked natives circled about me and shrieked “Fire bad! Fire burn!”
If you don’t know Uncommon Descent, it’s populated by ID creationists who hate scientists with a purple passion, and who engage in constant name-calling when real scientists occasionally pop in to set them straight. Their objections to MWI are all religiously motivated– “There’s that atheist idea! Kill it! Kill it in the head!”– ooh the whole blog is set up for hating on atheists, but their “scientific” complaints are the usual ignorant ones, pompously expressed by people using science jargon the meaning of which they don’t understand.
Sean, above, worries he is too polite to respond appropriately to misrepresentations of MWI. Uh, I’m not polite. It was about 10 of them against me, so they were badly outnumbered.
Oh and then I pushed their buttons by bringing up their widespread (no, really) belief in the Quantum Shroud of Turin, I kid you not, they all think the Shroud of Turin violates physical laws like the law of gravity (!!) and that the Shroud literally “unifies” Quantum Mechanics and General Relativity! I couldn’t make this stuff up.
You may go over there and lurk for entertainment value, to see what it means to be not polite, or to correct my math! But if you join in, expect a mountain of name-calling, ad hominems, and hypocrisy of the most un-self-aware kind. Just so you know.
Folks: no name-calling or gratuitous insults, please.
I don’t really see how those two postulates are sufficient for MWI. They don’t lay the groundwork for defining the notion of a “world.” How do you define a “world” without referring to some concept outside those postulates?
It’s not just the MWI that gets distorted and wrongly criticized. The majority of people don’t understand the scientific method. They don’t know the difference between a theory, a hypothesis, and a postulate. Without that foundation, it’s not surprising at all that Mr. Ball’s criticism is misplaced. You see the same kind of misplaced criticisms of Evolution, Cosmology, etc. that are rooted in these foundational gaps.
[Sorry for the duplicate post – I must have accidentally hit the Post button early.]
I don’t think any discussion of MWI is complete without giving due respect to the very serious and justifiable negative emotional response that many folks feel toward it. In many fields of science, this would not be appropriate, but with MWI it is.
There are two reasons why it is appropriate and necessary:
1. We don’t currently have a way to distinguish MWI from Copenhagen experimentally.
2. For many of us, the philosophical implications of MWI are extremely undesirable.
So, given #1, we have a way out of #2. And we’re going to take it unless you can prove otherwise.
I’ve read your blog for years now and love it. And this post doesn’t change that. It makes many good points and those points are valid. But, they are all scientific. You are making scientific replies to what is at its core a visceral emotional revulsion to the implications of MWI. I’m as scientific as any other reader of your blog, but in this case I have to sympathize with that negative reaction. I love the conceptual simplicity of MWI, and wish we could do away with the ugliness that is collapse. But, I very much hope we can someday prove that MWI is wrong, and I for one can understand why so many folks feel compelled to write articles against it.
Hi Sean
But there is no compelling reason WHY the many-worlds universe evolves according to postulate 2 – why doesn’t it evolve instantaneously so that all historical and future states exist already?
We need to slow the universe down, but we need postulates 1 and 2 pretty much intact.
So we need random jumps, with a “half-life” ~ Planck time (10^-43 s) – and then the universe does book-keeping with a unitary evolution of the entire state vector after each random jump.
There, no many-worlds, but a naturally evolving universe with superpositions and a speed limit.