Philosophy and Cosmology: Slow Live-Blogging

Greetings from Oxford, a charming little town across the Atlantic with its very own university. It’s in the United Kingdom, a small island nation recognized for its steak and kidney pie and other contributions to world cuisine. What you may not know is that the UK has also produced quite a few influential philosophers and cosmologists, making it an ideal venue for a small conference that aims to bring these two groups together.

The proximate reason for this particular conference is George Ellis’s 70th birthday. Ellis is of course a well-known general relativist, cosmologist, and author. Although the birthday conference for a respected scientist is a well-established tradition, Ellis wanted a focused and interdisciplinary meeting that might actually be useful, rather than just a big party for all of his friends and collaborators. It’s to his credit that the organizers invited as many multiverse-boosters as multiverse-skeptics. (I would go for the party, myself.)

George is currently very interested in, and concerned by, the popularity of the multiverse idea in modern cosmology. He’s worried, as many others are (not me, especially), that the idea of a multiverse is intrinsically untestable, and represents a break with the standard idea of what constitutes “science.” So he and the organizing committee have asked a collection of scientists and philosophers with very different perspectives on the idea to come together and hash things out.

It appears as if there is working wireless here in the conference room, so I’ll make some attempt to blog very briefly about what the different speakers are saying. If all goes well, I’ll be updating this post over the next three days. I won’t always agree with everyone, of course, but I’ll try to fairly represent what they are saying.

Saturday night:

Like any good British undertaking, we begin in the pub. I introduce some of the philosophers to Andrei Linde, who entertains us by giving an argument for solipsism based on the Wheeler-DeWitt equation. The man can command a room, that’s all I’m saying.

(If you must know the argument: the ordinary Schrödinger equation tells us that the rate of change of the wave function is given by the energy. But for a closed universe in general relativity, the energy is exactly zero — so there is no time evolution, nothing happens. But you can divide the universe into “you” and “the rest.” Your own energy is not zero, so the energy of the rest of the universe is not zero either, and therefore it obeys the standard Schrödinger equation with ordinary time evolution. So the only way to make the universe real is to consider yourself separate from it.)
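
For the equation-minded, a compressed version of the argument (my own paraphrase, in schematic notation): the Wheeler-DeWitt equation says the total Hamiltonian annihilates the state of the universe, so nothing evolves; splitting off an observer restores an ordinary Schrödinger equation for everything else.

\[ \hat{H}\,\Psi[\mathrm{universe}] = 0 \quad\Rightarrow\quad \text{no time evolution;} \]

\[ \hat{H} = \hat{H}_{\mathrm{you}} + \hat{H}_{\mathrm{rest}} \quad\Rightarrow\quad i\hbar\,\partial_t\,\psi_{\mathrm{rest}} = \hat{H}_{\mathrm{rest}}\,\psi_{\mathrm{rest}}. \]

Time evolution reappears only relative to the piece you carved out — “you” are playing the role of the clock.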

Sunday morning: Cosmology

9:00: Ellis gives the opening remarks. Cosmology is in a fantastic data-rich era, but it is also coming up against the limits of measurement. In the quest for ever deeper explanation, increasingly speculative proposals are being made, which are sometimes untestable even in principle. The multiverse is the most obvious example.

Question: are these proposals science? Or do they attempt to change the definition of what “science” is? Does the search for explanatory power trump testability?

The questions aren’t only relevant to the multiverse. We need to understand the dividing line between science and non-science to properly classify standard cosmology, inflation, natural selection, Intelligent Design, astrology, parapsychology. Which are science?

9:30: Joe Silk gives an introduction to the state of cosmology today. Just to remind us of where we really are, he concentrates on the data-driven parts of the field: dark matter, primordial nucleosynthesis, background radiation, large-scale structure, dark energy, etc.

Silk’s expertise is in galaxy formation, so he naturally spends a good amount of time on that. Theory and numerical simulations are gradually making progress on this tough problem. One outstanding puzzle: why are spiral galaxies so thin? Probably improved simulations will crack this before too long.

10:30: Andrei Linde talks about inflation and the multiverse. The story is laden with irony: inflation was invented to help explain why the universe looks uniform, but taking it seriously leads you to eternal inflation, in which space on extremely large (unobservable) scales is highly non-uniform — the multiverse. The mechanism underlying eternal inflation is just the same quantum fluctuations that give rise to the density fluctuations observed in large-scale structure and the microwave background. The fluctuations we see are small, but at earlier times (and therefore on larger scales) they could easily have been very large — large enough to give rise to different “pocket universes” with different local laws of physics.

Linde represents the strong pro-multiverse view: “An enormously large number of possible types of compactification which exist e.g. in the theory of superstrings should be considered a virtue.” He said that in 1986, and continues to believe it. String theorists were only forced to take all these compactifications seriously by the intervention of a surprising experimental result: the acceleration of the universe, which implied that there was no magic formula that set the vacuum energy exactly to zero. Combining the string theory landscape with eternal inflation gives life to the multiverse, which among other things offers an anthropic solution to the cosmological constant problem.

Still, there are issues, especially the measure problem: how do you compare different quantities when they’re all infinitely big? (E.g., the number of different kinds of observers in the multiverse.) Linde doesn’t think any of the currently proposed measures are completely satisfactory, including the ones he’s invented. Boltzmann brains are a big problem.

Another problem is what we mean by “us,” when we’re trying to predict “what observers like us are likely to see.” Are we talking about carbon-based life, or information-processing computers? Help, philosophers!

Linde thinks that the multiverse shows tendencies, although not cut-and-dried predictions. It prefers a cosmological constant to quintessence, and increases the probability that axions rather than WIMPs are the dark matter. Findings to the contrary would be blows to the multiverse idea. Most strongly, without extreme fine-tuning, the multiverse would not be able to simultaneously explain large tensor modes in the CMB and low-energy supersymmetry.

12:00: Raphael Bousso talks about the multiverse in string theory. Note that “multiverse” isn’t really an accurate description; we’re talking about connected regions of space with different low-energy excitations, not some metaphysical collection of completely distinct universes. The multiverse is not a theory — need some specific underlying dynamics (e.g. string theory) to make any predictions. It’s those theories that are tested, not “the multiverse.” Predictions will be statistical, but that’s okay; everyone’s happy with statistical mechanics. “Even if you were pretty neurotic about it, you could only throw a die a finite number of times.” We do need to assume that we are in some sense typical observers.

The cosmological constant problem (why is the vacuum energy so small?) is an obvious candidate for anthropic explanation. String theory is unique at a deep-down level, but features jillions of possible compactifications down to four dimensions, each with different low-energy parameters. Challenges to making predictions: landscape statistics (how many vacua of each kind?), cosmological dynamics (how does the universe evolve?), and the measure problem (how do we count observers?). Each is hard!

For the cosmological constant, the distribution of values within the string landscape is actually relatively understandable: there is basically a uniform distribution of possible vacuum energies between minus the Planck scale and plus the Planck scale. Make the right vacua via eternal inflation, which populates the landscape. Our universe decayed from a previous vacuum, an event that must release enough energy to produce the hot Big Bang. That’s a beneficial feature of the multi-dimensional string landscape: “nearby” vacua can have enormously different vacuum energies.
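
A back-of-the-envelope version of why that uniform distribution helps (my arithmetic, not a slide from the talk): the anthropically allowed window for the vacuum energy is roughly 10^-120 in Planck units, so the fraction of vacua that land inside it is about

\[ P\big(|\rho_\Lambda| \lesssim 10^{-120}\,M_{\mathrm{Pl}}^4\big) \approx \frac{2\times 10^{-120}\,M_{\mathrm{Pl}}^4}{2\,M_{\mathrm{Pl}}^4} = 10^{-120}. \]

A landscape therefore needs far more than 10^120 vacua for any anthropically acceptable ones to exist at all; the oft-quoted string theory estimates (10^500 or so) clear that bar comfortably.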

The measure problem is trickier. In the multiverse, any interesting phenomenon happens an infinite number of times, so we need some way to regularize these infinities. It’s a problem for eternal inflation generally, not only for the string landscape. Bousso’s favorite solution is the causal patch measure, which only counts events in the causal past of a single worldline, not throughout a spacelike surface. In that measure, most observers see a vacuum energy whose characteristic timescale is comparable to the age of the universe they observe — that’s compatible with what we see, and directly solves the coincidence problem.
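
Roughly speaking (my gloss, not Bousso’s exact formulation): a positive vacuum energy defines a characteristic timescale, and causal-patch counting favors observers for whom that timescale matches the age of their universe,

\[ t_\Lambda \sim \sqrt{3/\Lambda} \sim t_{\mathrm{obs}}, \]

which is precisely the “why now?” coincidence we actually observe.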

Sunday afternoon: Philosophers’ turn

2:00: John Norton talks about the “Bayesian failure” of cosmology and inductive inference. (He admits off the bat that it’s kind of terrifying to have all these cosmologists in the audience.) Basic idea: the Bayesian analysis that cosmologists use all the time is not the right tool. Instead, we should be using “fragments of inductive logics.”

The “Surprising Analysis”: assuming that prior theory is neutral with respect to some feature (e.g. the value of the cosmological constant), we observe a surprising value, and then try to construct a framework to explain it (e.g. the multiverse). This fits in well with standard Bayesian ideas. But that should worry you! What is really the prior probability for observing some quantity? In particular, what if our current theory were not true — would we still be surprised?

We shouldn’t blithely assume that the logic of physical chances (probabilities) is the logic of all analysis. The problem is that this framework has trouble dealing with “neutral evidence” — almost everything is taken as either favoring or disfavoring the hypothesis. We should be talking about whether or not a piece of evidence qualifies as support, not simply calculating probabilities.

The disaster that befell Bayesianism was to cast it in terms of subjective degrees of belief, rather than support. A prior probability distribution is pure opinion. But your choice of that prior can dramatically affect how we interpret particular pieces of evidence.

Example: the Doomsday argument — if we are typical, the universe (or the human race, etc.) will probably not last considerably longer than it already has (or we wouldn’t be typical). All the work in that argument comes from assuming that observers are sampled uniformly. But the fact that 60 billion people have lived so far isn’t really evidence that 100 trillion people won’t eventually live; it’s simply neutral.
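
Here is a toy numerical version of the Doomsday argument (the hypotheses and numbers are mine, purely for illustration):

    # Toy Doomsday argument: two hypotheses about the total number of
    # humans who will ever live, equal priors, and our birth rank treated
    # as uniformly sampled -- the self-sampling step Norton objects to.
    N_SHORT = 200e9   # hypothesis A: 200 billion humans, ever
    N_LONG = 100e12   # hypothesis B: 100 trillion humans, ever
    rank = 60e9       # roughly how many humans have been born so far

    # Self-sampling likelihood: P(rank | N) = 1/N for rank <= N.
    # With equal priors, the posterior odds are just the likelihood ratio.
    posterior_odds = (1.0 / N_SHORT) / (1.0 / N_LONG)
    print(posterior_odds)  # 500.0 -- "doom soon" favored 500:1

    # Norton's point: the entire 500:1 comes from the uniform-sampling
    # assumption. Treat the birth-rank datum as neutral evidence instead,
    # and the equal priors are left exactly where they started.

On Norton’s view the uniform-sampling step is illegitimate, which is why he calls the rank datum neutral rather than weakly informative.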

Heretical punchline: cosmic parameters can’t be judged as “improbable,” so long as they’re consistent with theory and observation.

[David Wallace, during questions: Do you really mean to say that if we observed the stars in the sky spelling out the message “Oxford is better than Cambridge,” all we could say is “Well, it’s consistent with the laws of physics, so we can’t really conclude anything from that”?]

2:45: Simon Saunders talks about probability and anthropic reasoning in a multiverse. Similar issues as the last talk, but he’ll be defending a Bayesian analysis.

Sometimes we think of probability objectively — a true physical structure — and sometimes subjectively — a reflection of the credence we give to some claim.

Problems with anthropic arguments involve: linguistics (what is included in “observer” and “observed”?), theory (how do we calculate the probability of finding certain evidence given a particular theory?), and realism (why worry about what is observed?). On the latter point: do we conditionalize on the existence of human life, or the existence of some observers, or simply on the existence of conditions compatible with observers? Saunders argues for the latter: all we care about are physical conditions, not whether or not observers come into existence. Call this “taming” the anthropic principle.

Aside on vacuum energy: are we really sure it’s finely tuned? Condensed-matter analogues give a different set of expectations — maybe the vacuum energy just adjusts to zero after phase transitions.

For branches of the wavefunction, there exist formal axioms (e.g. Deutsch-Wallace) for evaluating the preferences of a rational observer which recover the conventional understanding of Copenhagen probabilities even in a many-worlds interpretation. For a classical multiverse, the argument is remarkably similar; for a fully quantum inflationary multiverse, it’s less clear.

4:00: Panel discussion with Alex Vilenkin, Wayne Myrvold, and Christopher Smeenk. It’s a series of short talks more than an actual discussion. Vilenkin goes first, and discusses — wait for it — calculating probabilities in the multiverse. The technical manifestation of the assumption that we are typical observers is the self-sampling assumption: assume we are chosen randomly from within some reference class of observers. The probability that we observe something is just the fraction of observers within this class that observe it. But how do we choose the class? Vilenkin argues that we can choose the set of all observers with identical information content. (Really all information: not just “I am human” but also “My name is Alex,” etc.) That sounds like a very narrow class, but in a really big multiverse there will be an infinite number of members of each such class. (This doesn’t solve the measure problem — still need to regulate those infinities.) In fact we should use the Principle of Mediocrity: assume we are typical in any class to which we belong, unless there is evidence to the contrary.
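
As a minimal sketch of the self-sampling rule (the reference class and tags here are invented for illustration):

    # Self-sampling assumption: the probability that we observe X equals
    # the fraction of observers in our reference class who observe X.
    def p_observe(reference_class, observes_x):
        """Fraction of a finite, already-regulated reference class observing X."""
        return sum(1 for obs in reference_class if observes_x(obs)) / len(reference_class)

    # Invented example: four observers tagged by the dark-matter candidate
    # their local physics would hand them.
    reference_class = ["axion", "axion", "axion", "WIMP"]
    print(p_observe(reference_class, lambda obs: obs == "axion"))  # 0.75

The hard part, as Vilenkin says, is that in a multiverse each class is infinite, so the fraction only makes sense after a regularization — which is the measure problem again.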

Myrvold is next up, and he chooses to respond mostly to John Norton’s talk. Most of the time, when we’re evaluating theories in light of evidence, we don’t need to be too fancy about our background analytical framework. The multiverse seems like an exception. More generally, theories with statistical predictions are tricky to evaluate. If you toss a coin 1000 times, any particular outcome is highly improbable. You have to choose some statistic ahead of time, e.g. the fraction of heads. Cosmological parameters might be an example of a case where we don’t know how to put sensible prior probabilities on different outcomes.
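
A quick illustration of the coin-toss point (my numbers, not Myrvold’s):

    # Any *particular* sequence of 1000 fair tosses has probability 2**-1000,
    # so "this exact outcome was improbable" cannot by itself disfavor the
    # fair-coin hypothesis. A statistic chosen ahead of time -- like the
    # fraction of heads -- is what carries evidential weight.
    from math import comb

    n = 1000
    p_any_exact_sequence = 0.5 ** n  # ~9.3e-302, true of every sequence

    # Probability that the number of heads lands in 450..550 for a fair coin:
    p_typical_fraction = sum(comb(n, k) for k in range(450, 551)) * 0.5 ** n
    print(p_typical_fraction)  # ~0.998 -- a fair coin almost always "passes"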

Smeenk wants to talk about how big a problem “fine-tuning” really is. Sometimes it’s bigger than others: when some parameter (e.g. the vacuum energy) is not just chosen from a hat, but gets various contributions about which we have sensible expectations, it’s giving up too much to simply take any observed value as neutral with respect to theory evaluation. He’s reacting against Norton’s prescription a bit. Nevertheless, we should admit that choosing measures/probability distributions in cosmology is a very different game from what we do in statistical mechanics, if only because we don’t actually have more than one member of the ensemble in front of us.

Sunday evening: “Ultimate Explanation”

After dinner we reconvene for a talk and a response. The talk is by Timothy O’Connor on “Ultimate Explanation: Reforging Natural Philosophy.” He reminds us that Newton insisted that he did not “feign hypotheses” — he concentrated on models that he claimed were deduced from the phenomena, and thought that any deeper hypothetical explanations had “no place in experimental philosophy.” The implication being that Newton would not have approved of the multiverse.

O’Connor says that a fully complete, “ultimate” explanation cannot possibly be attained through science. Nevertheless, it’s a perfectly respectable goal, as part of the larger project of natural philosophy.

He defines an “ultimate explanation” as something that involves no brute givens — “such that one could not intelligibly ask for anything more.” That’s not attainable by science. If nothing else, “the most fundamental fact of existence itself” will remain unexplained, even if we knew the theory of everything and the wave function of the universe. Alternatively, if we imagine “plenitude” — everything possible exists — it would still be possible to imagine something less than that, so a contingent explanation is still required.

We are led to step outside science and consider the idea of an ultimately necessary being — something whose existence is no more optional than that of mathematical or logical truths. We could endorse such an idea if it provided explanations without generating insoluble puzzles of its own, and if we thought we had considered an exhaustive list of alternatives, all of which fell short. Spinoza and Leibniz are invoked.

Note the peculiar logic: if a necessary being does not exist, it was not simply an optional choice; it must necessarily not exist. (Because if a necessary being is conceivable, it must necessarily exist. Get it?)

Punchline: science is independent of most (if not all) metaphysical claims. But that means it can’t possibly “explain” everything; there must be metaphysical principles/assumptions. Some of these might be part of the ultimate explanation of the actual world in which we live.

The response comes from Sir Martin Rees. He opens by quoting John Polkinghorne: “Your average quantum mechanic is no more philosophical than your average motor mechanic.” But maybe cosmologists are a bit more sympathetic. He then recalls that Dennis Sciama — who was the thesis advisor of George Ellis, Stephen Hawking, and Rees himself — was committed as a young scientist to the Steady State model of cosmology, primarily for philosophical reasons. He did give up on it when confronted with data from the microwave background, but it was an anguished abandonment.

Searching for “explanations,” we should recognize that different fields of science have autonomous explanatory frameworks. People who study fluid mechanics would like to understand turbulence, and they don’t need to appeal to the existence of atoms to do so — atoms do exist, and one can derive the equations of fluid mechanics from them, but their existence sheds no light whatsoever on the phenomenon of turbulence.

Note also that, even if there is an ultimate explanation in the theory-of-everything sense, it may simply be too difficult for our limited human minds to understand. “Many of these problems may have to await the post-human era for their solution.”

Continued at Day Two.

29 thoughts on “Philosophy and Cosmology: Slow Live-Blogging”

  1. Nice post – funny how things come full circle. For the early astronomers faith never presented a problem, indeed it was the reason for looking to the stars – ‘the heavens proclaim the glory of God’.

  2. Hearing GE rant against the multiverse at a conference earlier this year was definitely a highlight. Wish I was there to listen in now! I really appreciate your posting this in some detail.

  3. Cool conference, and fabulous to be getting the reports.

    But how come the conference’s gender balance is so skewed? Two women and 32 men, according to speaker list. Makes me want to postulate, of course, the existence of a very large number of similar conferences going on in parallel universes (or this one) which together sample the general population more plausibly.

  4. Pingback: Philosophy of Cosmology « Hyper tiling

  5. Go, John Norton! The problem with physics is that this counts as heretical at all: “Heretical punchline: cosmic parameters can’t be judged as ‘improbable,’ so long as they’re consistent with theory and observation.”

  6. Very interesting conference! What was the reaction among the scientists to Norton’s talk? Maybe it’s just your filtering of it, but his argument comes across as obviously wrongheaded in your post.

  7. I think scientists were skeptical (I know I was), but in fairness John had very little time to explain his ideas in any detail. I worry that similar arguments would make it impossible to do statistical mechanics, since we wouldn’t be allowed to invoke the principle of indifference over different microstates.

    Robert, it’s a shame that the gender balance is skewed compared to the balance within the population as a whole; however, it’s not significantly skewed with respect to the population of senior cosmologists and philosophers. So the problem is a lot deeper than simply with this conference.

  8. Pingback: Populär Astronomi - » Finns det andra universa därute?

  9. “… the United Kingdom, a small island nation recognized for its steak and kidney pie and other contributions to world cuisine.”

    Don’t forget to mention Chicken Tikka Masala!

  10. This post reminds me I’m not nearly as smart as I think I am. I did not understand a single damn thing, yet I read every word! Kinda like watching the news on TV in a hotel room in Italy… nice words… nice sounds… very relaxing… no idea what the guy is saying.

  11. I hope someone will tackle the real reason we have all those ridiculous claims in science now — the fact that while the number of REAL problems in physics gets smaller and they get tougher, the number of physicists keeps rising.

    One of the main goals of academic institutions is to keep producing new researchers. Since this is their mission and their source of income they will keep churning them out no matter what. They don’t care that most of those people won’t have a chance to contribute to REAL problems facing the field (unification of QM and gravity; advancement of SM) since those problems are exceptionally hard.

    So what are those new researchers coming into the field of theoretical physics supposed to do? They have to publish to advance their careers, so they invent alternative problems and work on those. The multiverse nonsense is a prime example.

    Normally nonsense like that would not pass peer review, but there has hardly been any progress in theoretical physics for years, and many reviewers are in the same position as those they review, so naturally the rules end up being relaxed.

  12. [David Wallace, during questions: Do you really mean to say that if we observed the stars in the sky spelling out the message “Oxford is better than Cambridge,” all we could say is “Well, it’s consistent with the laws of physics, so we can’t really conclude anything from that”?]

    This is precisely the point. You cannot say that any particular star pattern — for example the one spelling the above message — is more or less probable than any other unless you have a theory predicting the matter distribution, from which you can derive probabilities of various configurations. Isn’t it obvious?

    This is also how we humans evaluate such claims: the reason we find such a pattern unlikely is that we have a rough idea of how the matter distribution should look, based on our experience, and we use it to assign probabilities to various configurations. That rough idea, however, is not enough to ground a scientific claim; it has to be made into a consistent theory.

  13. Hmm, I’ve always wondered whether the Bayesian approach is the right one when dealing with the multiverse à la eternal inflation. On the other hand, it is a consistent approach (which doesn’t mean it is the best one, or the right one).

    A version of David Wallace’s question actually exists in real CMB physics: “Why do we have low power at the low multipoles of the CMB?”

    (Btw, one can d/l John Norton’s paper online: http://www.pitt.edu/~jdnorton/homepage/cv.html#Cosm_induction.)

    Eugene

  14. “…increases the probability that axions rather than WIMPs are the dark matter.”

    Sean, could you please elaborate on this point? I’m not familiar with this argument.

  15. “Heretical punchline: cosmic parameters can’t be judged as “improbable,” so long as they’re consistent with theory and observation.”

    Jeez, I thought only physicists would spout this kind of crap. [“It’s meaningless to talk about the initial conditions of the universe. That’s just how things are, blah blah blah….”]

    How dispiriting that philosophers can also talk like that.

  16. Pingback: Occam’s Machete » Blog Archive » Awesome State-Of-Cosmology Post

  17. Pingback: Philosophy and Cosmology: Day Two | Cosmic Variance | Discover Magazine

  18. I worry that similar arguments would make it impossible to do statistical mechanics, since we wouldn’t be allowed to invoke the principle of indifference over different microstates.

    From Norton’s paper,
    http://www.pitt.edu/~jdnorton/papers/Cosm_inductive.pdf
    I understand that one of the main points is that sometimes the principle of indifference cannot be turned into a probability distribution – but often it can, and I surmise that we can (eventually) formally state the conditions under which it can. In particular, since there is a mechanism which we understand from Hamiltonian mechanics, that a system does sample all of its allowed phase space, this won’t turn out to be a problem for statistical mechanics.

    Norton discusses briefly the example of two coins on a table. (Coins on a table already carry a lot of baggage, so let us be more abstract and talk about two instances of a two-valued physical variable, which could be heads or tails of a coin, or spin-up or spin-down of an electron.) When Norton talks of the coins, he points out that if our existing information says nothing more about the coins, then the support for two heads up, two tails up, and one heads, one tails are all equal. (Support is a technical term in his paper.) But equally, the support for HH, HT, TH, TT are also all equal. A single probability distribution cannot express this indifference.

    In the literature on probability theory, this outcome is taken to be paradoxical since it can be satisfied non-trivially by no probability measure. It should now be clear that there is no paradox. The paradox is generated by the presumption that a probability measure can represent evidential neutrality in the first place.

    I think it would be paradoxical indeed if we could assign a probability measure with just the information we have. We need additional assumptions in order to assign a probability measure. E.g., the H,T of the coins could represent the spins of the two electrons in the Helium 1S orbital, in which case we would have a probability distribution of probability[H,H] = probability[T,T] = 0. Or the coins could be fair tossed coins, in which case p[H,H]=p[T,H]=p[H,T]=p[T,T]. And so on. In the absence of information, we cannot know which of these probability distributions to pick.

  19. Pingback: Characterising Science and Beyond « Not Even Wrong

  20. “We shouldn’t blithely assume that the logic of physical chances (probabilities) is the logic of all analysis.”

    Physical chances don’t exist, and there is quite a lot of evidence suggesting that probabilities *are* the logic of all analysis (e.g. axiomatic derivations à la Cox and de Finetti), plus the fact that the probability calculus is bloody obvious and also works.

    “The problem is that this framework has trouble dealing with ‘neutral evidence’ — almost everything is taken as either favoring or disfavoring the hypothesis.”

    This can happen; it usually means the prior probabilities were badly assigned, i.e. the likelihood function is too sharp. Using probabilities is a necessary condition for good analysis, not a sufficient condition.

    – Brendon

  21. “This can happen, it usually means the prior probabilities were badly assigned.”

    It may not be possible to assign prior probabilities at all.

  22. Pingback: Philosophy and Cosmology: Day Three | Cosmic Variance | Discover Magazine

Comments are closed.
