3:AM Magazine (yes, that’s what it’s called) has a very good interview with Craig Callender, philosopher of physics at UC San Diego and a charter member of the small club of people who think professionally about the nature of time. The whole thing is worth reading, so naturally I am going to be completely unfair and nitpick about the one tiny part that mentions my name. The interviewer asks:
But there is nothing in the second law of thermodynamics to explain why the universe starts with low entropy. Now maybe it’s just a brute fact that there’s nothing to explain. But some physicists believe they need to explain it. So Sean Carroll develops an idea of a multiverse to explain the low entropy. You make this a parade case of the kind of ontological speculation that is too expensive. Having to posit such a huge untestable ontological commitment to explain something like low entropy at the big bang you just don’t think is worth it.
There is an interesting issue here, namely that Craig likes to make the case that the low entropy of the early universe might not need explaining — maybe it’s just a brute fact about the universe we have to learn to accept. I do try to always list this possibility as one that is very much on the table, but as a working scientist I think it’s extremely unlikely, and certainly it would be bad practice to act as if it were true. The low entropy of the early universe might be a clue to really important features of how Nature works, and to simply ignore it as “not requiring explanation” would be a terrible mistake, even if we ultimately decide that that’s the best answer we have.
But what I want to harp on is the idea of “ontological speculation that is just too expensive.” This is not, I think, a matter of taste — it’s just wrong.
Which is not to say it’s not a common viewpoint. When it comes to the cosmological multiverse, and also the many-worlds interpretation of quantum mechanics, many people who are ordinarily quite careful fall into a certain kind of lazy thinking. The hidden idea seems to be (although they probably wouldn’t put it this way) that we carry around theories of the universe in a wheelbarrow, and that every different object in the theory takes up space in the wheelbarrow and adds to its weight, and when you pile all those universes or branches of the wave function into the wheelbarrow it gets really heavy, and therefore it’s a bad theory.
That’s not actually how it works.
I’m the first to admit that there are all sorts of very good objections to the cosmological multiverse (fewer for the many-worlds interpretation, but there are still some there, too). It’s hard to test, it’s based on very speculative physics, it has a number of internal-consistency issues like the measure problem, and we generally don’t know how it would work. I consider these “work in progress” types of issues, but if you take them more seriously I certainly understand. But “wow, that sure is a lot of universes you’re carrying around” is not one of the good objections.
When we’re adding up our ontological commitments (i.e., the various beliefs about reality we are willing to hypothesize or even accept), the right way to keep track is not to simply add up the number of objects or universes or whatevers. It’s to add up the number of separate ideas, or concepts, or equations. There are an infinite number of integers, and there are only a finite number of integers between zero and a googol; that doesn’t make the former set somehow ontologically heavier. If you want to get fancy, you could try to calculate the Kolmogorov complexity of the description of your theory. A theory that can be summed up in fewer words wins, no matter how many elements are in the mathematical structures that enter the theory. Any model that involves the real numbers — like, every one we take seriously as a theory of physics — has an uncountable number of elements involved, but that doesn’t (and shouldn’t) bother us.
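(To make the Kolmogorov-complexity standard precise: for a fixed universal Turing machine U, the complexity of a description x is

K(x) = \min \{\, |p| : U(p) = x \,\},

the length in bits of the shortest program p that outputs x. The number of objects a description refers to never appears in this formula; only the length of the description itself does.)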
By these standards, the ontological commitments of the multiverse or the many-worlds interpretation are actually quite thin. This is most clear with the many-worlds interpretation of quantum mechanics, which says that the world is described by a state in a Hilbert space evolving according to the Schrödinger equation, and that’s it. It’s simpler than versions of QM that add a completely separate evolution law to account for “collapse” of the wave function. That doesn’t mean it’s right or wrong; but it doesn’t lose points because there are a lot of universes. We don’t count universes, we count elements of the theory, and this one has a quantum state and a Hamiltonian. A tiny number! (The most egregious version of this mistake has to belong to Richard Swinburne, an Oxford theologian and leading figure in natural theology, who makes fun of the many-worlds interpretation but is happy to admit a completely separate, unobservable, ill-defined metaphysical category into his ontology.)
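(To be concrete, the entire dynamical law of the Everettian theory fits on one line:

i\hbar \frac{d}{dt} |\psi(t)\rangle = \hat{H} |\psi(t)\rangle,

with the state |\psi\rangle and the Hamiltonian \hat{H} as the only moving parts. All of the “branches” are features of solutions to this single equation, not separate postulates.)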
The cosmological multiverse, while on much shakier empirical ground than the many-worlds interpretation, follows the same pattern. The multiverse is not a theory, it’s a prediction. You don’t start with the idea “hey, let’s add an infinite number of extra universes!” You start with our ideas of general relativity, and quantum mechanics, and certain plausible field content, and the multiverse comes out, like it or not. You can even get a “landscape of different vacua” out of very little theoretical input; Johnson, Randall and I showed that transitions between states with different numbers of macroscopic spatial dimensions are automatic in a theory with just gravity, vacuum energy, and an electromagnetic field, while Arkani-Hamed et al. showed that the good old real-world four-dimensional Standard Model coupled to gravity supports a landscape of different vacua that depends on the masses of the neutrinos. The point is that these very complicated cosmologies arise from very simple theories, and it’s the theories we should be judging, not their solutions.
The idea of a multiverse is extremely speculative and very far from established — you are welcome to disagree, or even better to ignore it entirely. But please disagree for the right reasons!
“I’m the first to admit that there are all sorts of very good objections to the cosmological multiverse (fewer for the many-worlds interpretation, but there are still some there, too).”
I find it amusing that you, as a cosmologist, believe this, but I, as someone who works on the foundations of quantum theory, see it the other way round. On my understanding, we have pretty good evidence for believing in inflation, and pretty good reasons for believing that the best way of implementing inflation involves a cosmological multiverse. Those “other universes” would be unambiguously real, in just the same sense that ours is.
On the other hand, the many-worlds multiverse is only compelling if you believe that the wavefunction should be treated ontologically, as a literal description of reality, but there are many compelling arguments that suggest it should be treated as an epistemic object, more like a probability distribution. The supposed “killer argument” that wavefunctions can interfere is not compelling, because it has been shown that interference can arise naturally in an epistemic interpretation. Therefore, I would say that the many-worlds multiverse rests on much shakier ground than the cosmological one, and that is even before we start to think about probabilities.
As far as I understand as a layman, hypotheses of this sort don’t just posit an ever-branching “tree” structure, but also the fact that one particular branch of the tree is “indexed” as “real” or “ours” (or whatever – it’s importantly unlike the other branches). The tree itself may not be complicated, but the fact that one particular branch of it is set apart from the others calls for explanation. Isn’t the explanation of that extra fact every bit as demanding as an explanation that does not involve an ever-branching tree structure? And if the extra fact is not addressed at all, it looks to me as if the tree itself is just non-explanatory “bloat”.
How big a role does Occam’s Razor actually play in cosmology in practice? Do physicists spend a lot of time measuring the complexities of their theories?
I think this is not a fair criticism. I also tend to think that what should count is not the number of ontological units (whatever that may mean), but the number of independent assumptions that goes into the theory. So, as far as the existence of other universes, or other branches of the wave function, is concerned, I agree with your sentiment. But you and others are trying something harder — to explain some features of our own universe, or our own branch of the wave function, in terms of some structure on the space of all possible worlds/branches. For that you need to know many things beyond the mere existence of other universes: e.g. what those other universes look like, what made them come into being, and even what questions make sense in this context. I think the amount of uncertainty in all these questions currently (and perhaps even in principle) makes the properties of all the other universes/branches (or their distribution, which amounts to the same thing) independent moving parts of the theory. In this context, I think it is not ridiculous to complain about excessive ontological baggage, or (if you don’t care for ontology, since you don’t know a sharp definition of “existing”) about the lack of “compression” in any multiverse-based explanation; these are basically isomorphic questions.
Moshe — I take your point, but I don’t think they are truly isomorphic. Basically you are making the (fair) criticism that we don’t know much about the multiverse, so there’s a certain vagueness in what the other regions are like, so you can get away with a lot. But that’s a criticism of our current state of theory-building, and would hopefully be a temporary condition; if we knew the underlying theory better, that vagueness would dissipate. So I still think it’s wrong to pinpoint the problem as one of ontological extravagance.
Craig — cosmologists do it all the time (as do all scientists), but usually just very informally.
Jeremy — it’s true that you do need that extra bit of information, and I would provisionally agree that it should be taken into consideration in one’s ontological accounting. But it’s just one fact, not an infinite number of universes, so it doesn’t count as all that much.
Matt — vive la différence!
Sean, I really appreciate your willingness to critique what I call “deontological arrogance”: the idea that it is more likely that a question is too corrupt to give a meaningful answer than that there is an explanation. This reminds me of the “North of the North Pole” answer that used to be one of the most common replies given to the layperson asking “what happened before the Big Bang?” Honest experts, I contend, recast this awkward layman’s question (which is sometimes simply dismissed as a corrupt chicken-and-egg recursion) into a fuller discussion of models that potentially avoid Big Bang singularities, or other proposals that may show time-like worldlines could extend beyond a 13.7-billion-year-old “initial state”.
Another example might be the heliocentric models of Copernicus or Kepler, which lacked an explanation as to why the Sun was at the center of the universe. It really took a Newtonian perspective to explain its prominence in the solar system in terms of gravitational mass. I can imagine Copernicans and Keplerians responding with deontological assurances, before Newton, that the Sun being at the center was a bald fact of our universe that need not be explained any more than the question of why the Earth had a single moon or a 23-degree axial tilt. While it was to some extent not really possible to answer the question of why heliocentrism “worked” until Newton, the question wasn’t really corrupt, and we have pretty good explanations for it today that aren’t just “the question is meaningless”.
I am not going to argue too hard for any ontological position, since in my current state of knowledge any question of ontology, as applied to regimes far from our daily experience, is ill-defined (or at least I cannot make sense out of it). But I do think that the criticism of epistemological extravagance is not all that different, and I tend to sympathize with it, and also tend to think that it is not just a matter of our current knowledge but more a matter of principle. For example, even in the context of simple toy models in which every concept is finite and calculable, it is not clear to me what a multiverse-based prediction might look like (e.g. what physical principle picks the measure, and what prevents me from picking another one).
Pingback: Does This Ontological Commitment Make Me Look Fat? – ScienceNewsX – Science News Aggregator
Sean: “By these standards, the ontological commitments of the multiverse or the many-worlds interpretation are actually quite thin. This is most clear with the many-worlds interpretation of quantum mechanics, which says that the world is described by a state in a Hilbert space evolving according to the Schrödinger equation, and that’s it…”
A theory is not just mathematics but also the interpretation needed to relate that mathematics to the real world. In the case of many-worlds, the interpretation is so excessive and ill-defined as to completely outweigh any benefit from having one fewer assumption about measurement.
Similarly with the early universe: one puzzle – why the entropy was low – is preferable to countless new questions concerning the properties and nature of the hypothetical unobservable portions of the multiverse which might somehow “explain” the low entropy.
In general, an explanation which introduces more questions than answers is not a good one. The goal of science is to explain more with less, not the other way around.
I have no idea why Callender thinks of the multiplicity of worlds as ontological extra baggage, but I think lots of folks who balk at the multiverse or the many-worlds interpretation of quantum mechanics are simply falling victim to the asparagus fallacy. (I’m glad I don’t like asparagus, because if I liked it, I’d eat it; and I can’t stand the stuff.) The notion that this world is, after all, the world is common sense, so it’s easy to privilege this belief even if you know that common-sense intuition isn’t really an argument in cosmology or physics. The universe (or multiverse) is under no obligation to respect our sensibilities, but it’s hard to process that fact. We claim to consider all alternatives equally; but when it comes to it, we find ourselves rejecting some options, not because they are more complicated, but simply because they are more foreign to our usual way of thinking.
Sean,
You’re creating a straw man argument. The quote explicitly states that the problem is the “huge untestable ontological commitment.” The “untestable” qualifier which you drop is crucial. Despite what you seem to think, everyone understands the concept of a simple, testable set of ideas leading to a multiverse scenario, and no one has a problem with this, philosophically or scientifically. Moshe explains the real issues better than I can.
Where this ends up is with the “work in progress” argument, and there you need to identify progress towards having a legitimate, testable framework. The question of whether there’s a healthy research program here making progress, or just a set of excuses for a failed one, is what the argument about the multiverse is actually about, not the naive straw arguments you prefer to debate.
Peter, despite what you seem to think, I explicitly mention testability as a legitimate problem right there in the post.
Sean,
Again, a straw argument. I’m not claiming you don’t acknowledge that testability is a legitimate problem. What I’m pointing out is that when you write:
“But what I want to harp on is the idea of “ontological speculation that is just too expensive.” This is not, I think, a matter of taste — it’s just wrong.”
you are ignoring the words immediately before and after the ones you quote. A fuller quote is:
“…the kind of ontological speculation that is too expensive. Having to posit such a huge untestable ontological commitment to explain something…”
The “the kind of” modifier which you drop is explained by the “huge untestable”, and you just drop the “untestable” part in order to be able to claim “it’s just wrong”.
Cosmos is absolute state of relative multiverse. -Aiya-Oba (Philosopher).
I think Occam’s “Razor” would be better understood as Occam’s “weighing scales” – with “unexplained mysteries” in one pan, balanced against “explained mysteries” in the other pan. The reason why we don’t want to multiply entities beyond necessity is that doing so usually adds more to the “unexplained mysteries” pan than is counterbalanced by the “explained mysteries” pan.
It’s easy to see that sheer “number of entities” is often irrelevant. For example, consider the missing-mass problem in galaxies and clusters. If we could explain it by positing twice, thrice – or a million times – as many atoms of the sort we are already familiar with, we wouldn’t hesitate to do so. The problem with positing “dark matter” is that we’re unfamiliar with it. It isn’t clear at all that we are balancing Occam’s “scales” in the right way.
There are often situations in which we clearly tip the balance the way we want. For example (in my opinion) positing prions explains some brain diseases by getting rid of more mystery than it adds. (I say “in my opinion”, because there are differences of opinion here, and it’s often hard to call.)
My main problem with “multiverse”-type theories is not that “entities” are multiplied beyond necessity, but that Occam’s “scales” may be tipped the wrong way.
By analogy, suppose I am a pre-Darwinian, and I have my own half-baked “theory of evolution”. It says that all life is descended from a single original life form, and that random variations occur, some of which die out, some of which survive. So far so good.
But instead of describing the mechanism of natural selection, this half-baked theory says every possible variation that can occur actually does occur – somewhere or other among the vast number of other planets in the universe. We on Earth just happen to inhabit a planet where giraffe necks and peacock tails are long.
It seems to me that this wouldn’t be an explanation worth the name. It tips the scales the wrong way. The real problem is not the “number of entities”, but the extra weight added to the “unexplained mysteries” pan of the scales.
Entropy is a measure of the number of available states in the system. At the Big Bang the universe was much, much smaller, so the number of available states was also much, much smaller; hence entropy started at a minimum. Now the universe is bigger, with many more available states, so the entropy can increase.
Also (and I admit this sounds crazy and probably is) if the entropy of the universe was at a minimum at t=0, wouldn’t the temperature of the universe be 0 Kelvin at the instant of the Big Bang?
Since I like beating dead horses, I still don’t see why a low-entropy early universe is all that strange if we interpret entropy as a measure of “possibility” (as opposed to probability, per se) or as a scaled number of configurations (this is all apart from the fact that there seem to be slightly disparate uses of the term entropy floating around out there to begin with, so you’d need to specify which one you are talking about). Complexity, on the other hand, does need some serious explaining. But I think that perhaps part of the problem is hinted at in one of your own slides from the FQXi conference that compares entropy and complexity. In fact, I’m still not really sure what that slide was meant to convey (no offense). From my standpoint, the graph of complexity on that slide is far more mysterious than the graph of entropy (why does complexity eventually decrease?). Of course, there is some intuitive sense of complexity built into this, but the only attempt I am aware of to clearly define what complexity is comes from Scott Aaronson.
As for the multiverse concept or many worlds or whatever, as numerous people have pointed out (including Jeffrey Barrett and David Albert as well as David Wallace), there are some issues underlying precisely what we mean by a “universe” or “world” that have yet to be worked out. Thus it seems a little premature to use the multiverse to solve a problem (that actually might not even be a problem in the first place) when its own foundation is not well-defined. It feels too much like a house of cards.
Hi Sean,
I think that you’re right that we should make a distinction between what the philosopher David Lewis called *quantitative* simplicity and what he called *qualitative* simplicity. Multiplying kinds of things in a theory seems a theoretical vice, whereas multiplying instances of a kind doesn’t seem so bad. As the intuition is usually put: when considering the “price” of positing the electron, no one objected that there would then be too many! So people typically think that quantitative simplicity is no theoretical virtue at all. And as you point out, on that measure multiverses and Everettian branches don’t count as bloated at all.
That said, I think that there are cases in the history of science where quantitative simplicity *did* count (and seemingly rightly). I’m no expert on this, but I know that some philosophers of science have written on cases where quantitative simplicity did function as a virtue, e.g., the number of particles (of a given kind) invoked in saving a conservation principle. Intuitively that seems like it could be right — don’t posit more than you need, whether instances or kinds. Perhaps it would be worth thinking about these cases where quantitative simplicity seems to matter and seeing whether anything remains of the “too fat” objection to Everett or multiverses. Surely if I have two hypotheses, H and H’, and they both predict the same phenomena, except that one posits multiverses and the other doesn’t, people would go for the one without. So quantitative parsimony must count for *something*?
The real question, of course, is need. And especially in the case of inflation, where one might get empirical predictions and experiments, it’s hard to make any in-principle objection. Maybe I should have distinguished between two different motivations for multiverses (I think Tim Maudlin recently did this in an interview I read after I provided my answers). One is a kind of “fine-tuning” motivation. I’ve definitely seen this style of motivation at physics conferences, as I’m sure you have. I’m just against people “knowing” a priori that the low entropy constraint is a big problem, for that closes off some possibilities. That’s my target; I hope the rest of the interview makes that clear. The other is a mechanism that explains some empirical phenomenon and has the by-product of predicting lots of pocket universes. The former seems a lot more controversial than the latter, which is part and parcel of successful science.
Craig — yeah, I tried to make clear that there was an interesting argument to be had (about what the low entropy of the early universe tells us), but that I wasn’t talking about that in this blog post, just nitpicking on a common but incorrect (in my view) application of parsimony.
I would be interested in knowing of an example where quantitative simplicity actually did count, especially if it were more than just a tiebreaker between two otherwise equivalent theories. I can’t think of any off the top of my head.
I think that if we could predict exactly the same phenomena with and without the multiverse (and with the same simplicity of equations and concepts), people would quite rightly tend to prefer the non-multiverse version. But I would argue that “extravagance of ontological commitment” would not be a good reason for doing so. The good reasons would be the other ones I mentioned — lack of testability, vagueness, ill-definedness, etc., if those indeed were worries in this hypothetical scenario. If they were not, I don’t think we’d be justified in rejecting the multiverse, other than on the grounds that we are prejudiced against things we can’t see.
Hi, Sean. In this post and on other occasions you express the view that the low entropy of the early universe is likely not just a historical accident but a clue toward some yet-unknown principle of how the universe works. I’m curious about the reasoning that led you to this opinion. I have to confess that, thinking about this question myself, I cannot find a sufficiently close analogy in what I know of how other known and tested physical principles have been discovered. Thus I personally tend to gravitate toward the “historical accident until proven otherwise” point of view. However, I’m genuinely interested in the details of the reasoning that brings someone to the opposite opinion.
Igor — well, there is a measure on the space of states, and defining entropy relies on that measure. Saying that the early universe had a low entropy is saying that it was in a very tiny region of the space of states. If we hadn’t observed anything about the universe, but were told what its space of states looked like, we would have expected it to be more likely high-entropy than low-entropy. The fact that it’s not suggests that there is possibly more going on than a universe just picked out of a hat. It’s not that the measure is some sort of “correct” probability distribution (it’s clearly not), but its failure suggests that we can learn something by trying to explain this feature of the universe.
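(For concreteness, the Boltzmann form of this statement is

S = k_B \ln \Omega,

where \Omega counts the microstates compatible with a given macrostate, defined using that measure; “low entropy” just means the early universe sat in a region of state space with a fantastically small \Omega compared to what is available.)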
It’s easy to find analogies. Say you’re on a beach where there are both light and dark grains of sand. If the grains are all mixed together, there’s nothing to be explained. If all the dark grains are piled in one particular location, you figure there is probably some explanation.
Sorry, Sean, but your argument falls short. It’s wrong because I don’t like it. Also, I’m sorry to inform you, but your ass looks fat in those jeans; your wife is lying to you. (Neither comment is meant to be a factual statement.)
Sean, does the multiverse theory conserve all of the theoretical truths embodied in relativity and in QM? Rovelli and Smolin don’t seem to think so (see this month’s edge.org interview with Rovelli). It seems to me that conserving facts we know to be true is not the same as failing to imagine new perspectives on those facts. It seems like criticizing a theory because it postulates ideas not anchored in previously successful theories is a valid criticism that encompasses a notion of excess ontological baggage. However, I am asking because I am not sophisticated enough to know if I am approaching this based on a valid interpretation of the postulates of MV/MW theory.
Sean, thanks for the explanation. I recognize this argument. But, for the record, I personally don’t find it convincing. Continuing the analogy: the concentration of dark grains of sand is, from our perspective, unusual because we can compare it to many other beaches where the two kinds of grains are mixed. This and other similar analogies don’t take into account that, on the other hand, we have only one universe.
aww man, somebody doesn’t like me.
As far as a multiverse is concerned, so many issues come up. I personally believe in the concept of a multiverse. From the perspective of our universe being finite, I can’t imagine there being “nothing” beyond the boundaries of our universe. That just doesn’t compute for me. From the perspective of our universe being infinite, vacuum energy doesn’t seem to work, and the whole concept of quantization into any sort of variance seems meaningless and impossible without some sort of boundary.
I think you are correct though, Sean. Ignoring the low entropy of the early universe without a good reason is not very bright. The change in entropy suggests many major implications about Nature. What physicist would not be compelled to try to explain this? I would argue that a changing state of entropy, which can be summed up as ‘dynamics’, is the foundation of Nature. Explaining why the universe started out with low entropy would be critical to understanding why dynamical systems (the universe) exist. I think Craig Callender is taking the stereotypical philosopher’s approach to all hard questions.