Science

A Conversation on the Existence of Time

You know, other people talk a lot about time, too — it’s not just me. Here’s a great video from Nature, featuring a conversation between David Gross and Itzhak Fouxon about the existence of time. (Via Sarah Kavassalis.) Itzhak plays the role of the starry-eyed young researcher — he opens the video by telling us how he originally went into physics to impress girls, although apparently he has stuck with it for other reasons. Gross, of course, shared a Nobel Prize for asymptotic freedom, and has become one of the most influential string theorists around. David plays the role of the avuncular elder statesman (I’ve seen him be somewhat more acerbic in his criticisms) — but he’s one of the smartest people in physics, and his admonitions are well worth listening to. He gives some practical advice, but also advises young people to think big.

Unfortunately the video doesn’t seem to be embeddable, but you can go to the video page and click on the “David Gross” entry. (The others are good, too!)


You all know my perspective here — time probably exists, and we should try to understand it rather than replace it. But I’ll agree with David — let’s not ignore the more “practical” problems, but let’s not be afraid to tackle the big ideas either!


Has Fermi Seen New Evidence for Dark Matter?

Speaking of successful NASA/DOE collaborations, there’s an interesting new paper on astro-ph claiming that the Fermi gamma-ray satellite has found evidence for a gamma-ray excess in the vicinity of the galactic center — similar to what you might expect from high-energy electrons produced by annihilations or decays of dark matter.

The Fermi Haze: A Gamma-Ray Counterpart to the Microwave Haze
Authors: Gregory Dobler, Douglas P. Finkbeiner, Ilias Cholis, Tracy R. Slatyer, Neal Weiner

Abstract: The Fermi Gamma-Ray Space Telescope reveals a diffuse inverse Compton signal in the inner Galaxy with the same spatial morphology as the microwave haze observed by WMAP, confirming the synchrotron origin of the microwaves. Using spatial templates, we regress out pi0 gammas, as well as ICS and bremsstrahlung components associated with known soft-synchrotron counterparts. We find a significant gamma-ray excess towards the Galactic center with a spectrum that is significantly harder than other sky components and is most consistent with ICS from a hard population of electrons. The morphology and spectrum are consistent with it being the ICS counterpart to the electrons which generate the microwave haze seen at WMAP frequencies. In addition to confirming that the microwave haze is indeed synchrotron, the distinct spatial morphology and very hard spectrum of the ICS are evidence that the electrons responsible for the microwave and gamma-ray haze originate from a harder source than supernova shocks. We describe the full sky Fermi maps used in this analysis and make them available for download.

In English: if the dark matter is a weakly-interacting massive particle (WIMP), individual WIMPs should occasionally annihilate with other WIMPs, giving off a bunch of particles, including electron/positron pairs as well as high-energy photons (gamma rays). Indeed, searching for such gamma rays was one of the primary motivations behind the Fermi mission (formerly GLAST). And it makes sense to look where the dark matter is most dense, in the center of the galaxy. But it’s a very hard problem, for a simple reason — there’s lots of radiation coming from the center of the galaxy, most of which has nothing to do with dark matter. Subtracting off these “backgrounds” (which would be very interesting in their own right to galactic astronomers) is the name of the game in this business.
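The “spatial templates” language in the abstract is doing a lot of work, so here is a minimal toy sketch of the general idea, written by me for illustration and not taken from the authors’ pipeline: model the observed map as a linear combination of known emission templates, fit the coefficients by least squares, and look at what is left over. All the maps and numbers below are made up.

```python
import numpy as np

# Toy template regression: treat each sky pixel as a linear combination of
# known emission templates plus whatever residual ("haze") is left over.
# Everything here is a stand-in; the real analysis uses measured sky maps.
npix = 10_000
rng = np.random.default_rng(42)

templates = np.column_stack([
    rng.random(npix),   # stand-in for gas-correlated pi0 emission
    rng.random(npix),   # stand-in for ICS/bremsstrahlung tied to soft synchrotron
    np.ones(npix),      # isotropic background
])

# Fake "observed" map: the known components plus an extra bump we did not model.
true_coeffs = np.array([2.0, 1.5, 0.3])
x = np.linspace(-3.0, 3.0, npix)     # pretend coordinate toward the Galactic center
haze = np.exp(-x**2)                 # unmodeled excess peaking at the "center"
observed = templates @ true_coeffs + haze + 0.05 * rng.standard_normal(npix)

# Least-squares fit of the templates; the residual is the candidate excess.
coeffs, *_ = np.linalg.lstsq(templates, observed, rcond=None)
residual = observed - templates @ coeffs

print("fitted coefficients:", np.round(coeffs, 2))
print("mean residual near the center:", residual[np.abs(x) < 0.3].mean())
print("mean residual far from the center:", residual[np.abs(x) > 2.0].mean())
```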

But Doug Finkbeiner at Harvard has for a while now been suggesting that there was already evidence for something interesting going on near the galactic center — not in the form of high-energy photons, but in the form of low-energy photons. The so-called WMAP haze is alleged to be radiation emitted when high-energy electrons are accelerated by magnetic fields, producing low-energy photons (synchrotron radiation). And Finkbeiner and collaborators claim that a careful analysis of data from WMAP (whose primary mission was to observe the cosmic microwave background) reveals exactly the kind of radiation you would expect from dark-matter annihilations near the galactic center.

If that model is right, it gives us some guidance about what to look for in the gamma rays themselves, which Fermi is now observing. And according to this new paper, this is what we see.

Excess gamma rays from the galactic center, from Dobler et al.

That’s one of many images, and has been extensively processed; see paper for details. The new paper claims that there is an excess of gamma rays, and that it has just the right properties to be arising from the same population of electrons that gave rise to the WMAP haze. These much higher-energy photons arise from inverse Compton scattering — electrons bumping into photons and pushing them to higher energies — rather than synchrotron emission. So we’re not talking about gammas that are produced by dark-matter annihilations, but ones that might arise from electrons and positrons that are produced by such annihilations. The authors pointedly do not claim that what we see must arise from dark matter, or even delve very deeply into that possibility.

There have been speculations that the microwave haze could indicate new physics, such as the decay or annihilation of dark matter, or new astrophysics. We do not speculate in this paper on the origin of the haze electrons, other than to make the general observation that the roughly spherical morphology of the haze makes it difficult to explain with any population of disk objects, such as pulsars. The search for new physics – or an improved understanding of conventional astrophysics – will be the topic of future work.

That’s as it should be; whether or not the gamma-ray haze is real is a separate question from whether dark matter is the culprit. But on a blog we can speculate just a bit. So I’m going to go out on a limb and say: maybe it is! Or maybe not. But a wide variety of promising experimental techniques are attacking the problem of detecting the dark matter, and we’ll be hearing a lot more in the days to come.


How to Go After Dark Energy?

It’s well known that dark energy is a mystery — both for scientists, and apparently for funding agencies who are trying to figure out how best to learn more about this stuff that makes up about 73% of the energy of the universe. I haven’t been paying close attention to the ins-and-outs of this saga (there are more rewarding ways to give yourself an ulcer), but last I had heard the National Academy of Sciences had given very high priority to a satellite observatory meant to pin down the properties of dark energy. This was the JDEM idea — Joint Dark Energy Mission, where “joint” indicates a partnership between NASA and the Department of Energy. (They don’t always play well together, but the Fermi satellite is a notable recent success.)

Now, via Dan Vergano’s Twitter feed, I see a story in Nature News to the effect that things have become murky once again. The proposals got too expensive, so NASA turned to the European Space Agency for help, but ended up giving away things the DOE thought were in its domain; the DOE then threatened to take its toys and go home, giving up on the idea of a satellite altogether.

The story is complicated by disagreement over how important it is to measure the dark energy equation-of-state parameter, the number characterizing how quickly the energy density changes (if at all). It’s frequently said that “we know nothing” about dark energy, but that’s not true; we know that it’s smoothly distributed and nearly constant in density through time. We even have a very natural candidate for what it is: the vacuum energy. There is of course the problem that the vacuum energy is much smaller than it should be, but that problem is there whether it’s strictly zero or just really small. Other models still have that problem, and tend to add other fine-tunings on top. It would be great, and we would certainly learn a lot, if the dark energy were not simply vacuum energy; but right now we have no compelling reason to think it’s not, so it’s a bit of a long shot.
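For reference, here is the standard way the equation-of-state parameter enters; this is textbook cosmology, not anything specific to the mission debate.

```latex
% The equation-of-state parameter w relates pressure to energy density,
% and fixes how the density dilutes as the universe expands:
p = w\,\rho , \qquad \rho \propto a^{-3(1+w)} .
% Vacuum energy has w = -1, so its density stays exactly constant;
% any measured departure from w = -1 (or any time variation in w)
% would tell us the dark energy is something else.
```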


Talking About Time

I’m in the middle of jetting hither and yon, talking to people about the arrow of time. (Wouldn’t it be great if I had a book to sell them?) Right now, as prophesied, I’m at the Quantum To Cosmos Festival at the Perimeter Institute. They’re extremely on the ball over here, so every event is being recorded by the ultra-professional folks at TVO and made instantly available on the web. So here is the talk I gave on Saturday night — a public-level discussion of entropy and how it connects to the history of our universe.

Yes, that’s a pretty suave picture of me on the image capture. What can I say? I’m just one of those lucky folks with an effortless magic in front of the camera.

[Embedded video player: TVO recording of the Perimeter Institute talk]

If you prefer to get your talks about entropy unadulterated by voice and motion, and don’t mind a more technical presentation, I’ve put the slides from my recent Caltech colloquium online. These are aimed basically at grad students in physics, so there is an equation or two, and the caveats are spelled out more clearly. But the punchline is the same.



Spooky Signals from the Future Telling Us to Cancel the LHC!

A recent essay in the New York Times by Dennis Overbye has managed to attract quite a bit of attention around the internets — most of it not very positive. It concerns a recent paper by Holger Nielsen and Masao Ninomiya (and some earlier work) discussing a crazy-sounding proposal — that we should randomly choose a card from a million-card deck and, on the basis of which card we get, decide whether to go forward with the Large Hadron Collider. Responses have ranged from eye-rolling and heavy sighs to cries of outrage, clutching at pearls, and grim warnings that the postmodernists have finally infiltrated the scientific/journalistic establishment, that this could be the straw that breaks the back of the Enlightenment camel, and worse.

Since I am quoted (in a rather non-committal way) in the essay, it’s my responsibility to dig into the papers and report back. And my message is: relax! Western civilization will survive. The theory is undeniably crazy — but not crackpot, which is a distinction worth drawing. And an occasional fun essay about speculative science in the Times is not going to send us back to the Dark Ages, or even rank among the top ten thousand dangers along those lines.

The standard Newtonian way of thinking about the laws of physics is in terms of an initial-value problem. You specify the state of the system (positions and velocities) at one moment, then the laws of physics tell you how it will evolve into the future. But there is a completely equivalent alternative, which casts the laws of physics in terms of an action principle. In this formulation, we assign a number — the action — to every possible history of the system throughout time. (The choice of what action to assign is simply the choice of what laws of physics are operative.) Then the allowed histories, the ones that “obey the laws of physics,” are those for which the action is the smallest. That’s the “principle of least action,” and it’s a standard undergraduate exercise to show that it’s utterly equivalent to the initial-value formulation of dynamics.
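In symbols, and with the standard caveat that “least” really means “stationary”: the action is an integral of the Lagrangian over the history, and demanding that it not change under small deformations of the path reproduces the equations of motion.

```latex
S[x(t)] = \int_{t_1}^{t_2} L\big(x,\dot{x}\big)\,dt ,
\qquad
\delta S = 0
\;\Longrightarrow\;
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0 .
% For L = (kinetic energy) - (potential energy) this is just Newton's second law.
```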

In quantum mechanics, as you may have heard, things change a tiny bit. Instead of only allowing histories that minimize the action, quantum mechanics (as reformulated by Feynman) tells us to add up the contributions from every possible history, but give larger weight to those with smaller actions. In effect, we blur out the allowed trajectories around the one with absolutely smallest action.
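Slightly more precisely, every history contributes with the same magnitude but a different phase set by its action; histories near the stationary-action path add up coherently, while the rest cancel against each other, which is the blurring described above.

```latex
\mathcal{A}(\text{initial}\to\text{final})
\;=\; \sum_{\text{histories}} e^{\,i\,S[\text{history}]/\hbar} .
```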

Nielsen and Ninomiya (NN) pull an absolutely speculative idea out of their hats: they ask us to consider what would happen if the action were a complex number, rather than just a real number. Then there would be an imaginary part of the action, in addition to the real part. (This is the square-root-of-minus-one sense of “imaginary,” not the LSD-hallucination sense of “imaginary.”) No real justification — or if there is, it’s sufficiently lost in the mists that I can’t discern it from the recent papers. That’s okay; it’s just the traditional hypothesis-testing that has served science well for a few centuries now. Propose an idea, see where it leads, toss it out if it conflicts with the data, build on it if it seems promising. We don’t know all the laws of physics, so there’s no reason to stand pat.

NN argue that the effect of the imaginary action is to highly suppress the probabilities associated with certain trajectories, even if those trajectories minimize the real action. But it does so in a way that appears nonlocal in spacetime — it’s really the entire trajectory through time that seems to matter, not just what is happening in our local neighborhood. That’s a crucial difference between their version of quantum mechanics and the conventional formulation. But it’s not completely bizarre or unprecedented. Plenty of hints we have about quantum gravity indicate that it really is nonlocal. More prosaically, in everyday statistical mechanics we don’t assign equal weight to every possible trajectory consistent with our current knowledge of the universe; by hypothesis, we only allow those trajectories that have a low entropy in the past. (As readers of this blog should well know by now; and if you don’t, I have a book you should definitely read.)
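The source of the suppression is easy to see schematically (this is just the structure of the idea, not the detailed NN model): split the action into real and imaginary parts and watch what happens to the Feynman weight.

```latex
S = S_R + i\,S_I
\;\;\Longrightarrow\;\;
e^{\,iS/\hbar} = e^{\,iS_R/\hbar}\,e^{-S_I/\hbar} .
% The first factor is the usual pure phase; the second is a real damping factor.
% Histories for which S_I is large and positive are exponentially suppressed,
% no matter when along the history that contribution to S_I accumulates --
% which is why the effect looks nonlocal in time.
```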

To make progress with this idea, you have to make a choice for what the imaginary part of the action is supposed to be. Here, in the eyes of this not-quite-expert, NN seem to cheat a little bit. They basically want the imaginary action to look very similar to the real action, but it turns out that this choice is naively ruled out. So they jump through some hoops until they get a more palatable choice of model, with the property that it is basically impotent except where the Higgs boson is concerned. (The Higgs, as a fundamental scalar, interacts differently than other particles, so this isn’t completely ad hoc — just a little bit.) Because they are not actually crackpots, they even admit what they’re doing — in their own words, “Our model with an imaginary part of the action begins with a series of not completely convincing, but still suggestive, assumptions.”

Having invoked the tooth fairy twice — contemplating an imaginary part of the action, then choosing its form so as to only be relevant where the Higgs is concerned — they consider consequences. Remember that the effect of the imaginary action is non-local in time — it depends on what happens throughout the history of the universe, not just here and now. In particular, given their assumptions, it provides a large suppression to any history in which large numbers of Higgs bosons are produced, even if they won’t be produced until some time in the future.


Data on “Facts” and Facts on “Data”

A philosophy professor of mine used to like to start a new semester by demanding of his class, “How many facts are in this room?” No right answer, of course — the lesson was supposed to be that the word “fact” doesn’t apply directly to some particular kind of thing we find lying around in the world. Indeed, one might go so far as to argue that what counts as a “fact” depends on one’s theoretical framework. (Is “spacetime is curved” a fact? What if spacetime isn’t fundamental in quantum gravity?)

Nevertheless, people sometimes use the word. A recent post by PZ reminded me of how it comes up especially in arguments over evolution, which is occasionally accused of being “just a theory.” I’ve tried to make my own view clear — when we as scientists use these words, we shouldn’t pretend they have some once-and-for-all meanings that were handed down by Francis Bacon when he was putting the finishing touches on the scientific method. Rather, we should be honest about how they are actually used. “Theory,” in particular, isn’t cleanly separate from words like “law” or “hypothesis” or “model,” and doesn’t have any well-defined status on the spectrum from obviously false to certainly true. And “fact” — well, that’s a word scientists hardly use at all. We use words like “data” or “evidence,” but the concept of a “fact” simply isn’t that useful in scientific practice.

But you know what would really be useful here? Some facts! Or at least some data. There’s one repository of professional scientific communication that I know very well — SPIRES, the high-energy physics literature database run by SLAC. (My guess, or should I say hypothesis, is that any other field would turn up similar results.) I don’t know an easy way to search entire papers, but it’s child’s play to search the titles. So let’s ask it — how often do scientists (as represented by high-energy physicists) use the word “fact”?

find t fact or t facts
120 records

Okay, they clearly use the word sometimes. What about some competitors?

find t data
9909 records

Ha! Now that’s the kind of word scientists like to use. And the others?

find t evidence
4396 records

find t observation or t observations
10924 records

You get the picture. Scientists prefer not to talk about “facts,” because it’s hard to tell what’s a fact and what isn’t. Science looks at the data, and tries to understand it in terms of hypothetical models, which rise or fall in acceptance as new data are gathered and better theories are proposed. Just for fun:

find t theory
42285 records

find t model
45977 records

find t hypothesis
578 records

find t law
1293 records
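If you want to reproduce these counts today, SPIRES has since been folded into INSPIRE-HEP. A rough sketch is below; the endpoint, the query syntax, and the layout of the JSON response are my assumptions about the current service rather than anything documented in this post, so check the API documentation before leaning on it.

```python
import requests

# Hypothetical reproduction of the SPIRES-style title searches above, against
# INSPIRE-HEP (the successor to SPIRES). The endpoint and the shape of the JSON
# response are assumptions -- verify against the current API documentation.
API = "https://inspirehep.net/api/literature"

def title_count(term: str) -> int:
    """Assumed: 't <term>' searches titles, and the hit count sits at hits.total."""
    resp = requests.get(API, params={"q": f"t {term}", "size": 1}, timeout=30)
    resp.raise_for_status()
    return resp.json()["hits"]["total"]

if __name__ == "__main__":
    for term in ("fact", "data", "evidence", "observation",
                 "theory", "model", "hypothesis", "law"):
        print(f"{term:12s} {title_count(term)}")
```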

So I’m happy to say evolution is “true,” or is “correct,” but I’ll leave “facts” to Joe Friday.


A New Challenge to Einstein?

General relativity, Einstein’s theory of gravity and spacetime, has been pretty successful over the years. It’s passed numerous tests in the Solar System, scored a Nobel-worthy victory with the binary pulsar, and gets the right answer even when extrapolated back to the first one second after the Big Bang. But no scientific theory is sacred. Even though GR is both aesthetically compelling and an unquestioned empirical success, it’s our job as scientists to keep probing it in different ways. Especially when it comes to astrophysics, where we need dark matter and dark energy to explain what we see, it makes sense to put Einstein to the most stringent tests we can devise.

So here is a new such test, courtesy of Rachel Bean of Cornell. She combines a suite of cosmological data, especially measurements of weak gravitational lensing from the Hubble Space Telescope, to see whether GR correctly describes the behavior of large-scale structure in the universe. And the surprising thing is — it doesn’t. At the 98% confidence level, Rachel finds that general relativity is inconsistent with the data. I’m not sure why we haven’t been reading about this in the science media or even on other blogs — it’s certainly a newsworthy result. Admittedly, the smart money is still that there is some tricky thing that hasn’t yet been noticed and Einstein will eventually come through the victor, but this is serious work by a respected cosmologist. Either the result is wrong, and we should be working hard to find out why, or it’s right, and we’re on the cusp of a revolution.

Here is the abstract:

A weak lensing detection of a deviation from General Relativity on cosmic scales
Authors: Rachel Bean

Abstract: We consider evidence for deviations from General Relativity (GR) in the growth of large scale structure, using two parameters, γ and η, to quantify the modification. We consider the Integrated Sachs-Wolfe effect (ISW) in the WMAP Cosmic Microwave Background data, the cross-correlation between the ISW and galaxy distributions from 2MASS and SDSS surveys, and the weak lensing shear field from the Hubble Space Telescope’s COSMOS survey along with measurements of the cosmic expansion history. We find current data, driven by the COSMOS weak lensing measurements, disfavors GR on cosmic scales, preferring η < 1 at 1 < z < 2 at the 98% significance level.

Let’s see if we can’t unpack the basic idea. The real problem in testing GR in cosmology is that any particular kind of spacetime curvature can be a solution to Einstein’s theory — all you need are the right sources of matter and energy. So in order to do a real test, you need to have some confidence that you understand what is creating the gravitational field — in the Solar System it’s the Sun and planets, in the binary pulsar it’s two neutron stars, and in the early universe it’s radiation. For large-scale structure things are a bit less clear — there’s ordinary matter, and dark matter, and of course dark energy.

Nevertheless, even though there are some things we don’t know about dark matter and dark energy, there are some things we think we do know. One of those things is that they don’t create any “anisotropic stress” — basically, a force that pulls different sides of things in different directions. Given that extremely reasonable assumption, GR makes a powerful prediction: there is a certain amount of curvature associated with space, and a certain amount of curvature associated with time, and those two things should be equal. (The space-space and time-time potentials φ and ψ of Newtonian gauge, for you experts.) The curvature of space tells you how meter sticks are distorted relative to each other as they move from place to place, while the curvature of time tells you how clocks at different locations seem to run at different rates. The prediction that they are equal is testable: you can try to measure both forms of curvature and divide one by the other. The parameter η in the abstract is the ratio of the space curvature to the time curvature; if GR is right, the answer should be one.
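For the experts’ aside, here is one common way of writing it down (sign and normalization conventions differ from paper to paper, so treat this as schematic): the two potentials appear in the perturbed metric, and the parameter being tested is their ratio.

```latex
% Conformal Newtonian gauge, one common convention:
ds^2 = a^2(\tau)\left[-(1+2\psi)\,d\tau^2 + (1-2\phi)\,\delta_{ij}\,dx^i\,dx^j\right] ,
\qquad
\eta \equiv \frac{\phi}{\psi} .
% In general relativity with no anisotropic stress, \phi = \psi and therefore \eta = 1;
% that equality is what the data are testing.
```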

There is a straightforward way, in principle, to measure these two types of curvature. A slowly-moving object (like a planet moving around the Sun) is influenced by the curvature of time, but not by the curvature of space. (That sounds backwards, but keep in mind that “slowly-moving” is equivalent to “moves more through time than through space,” so the curvature of time is more important.) But light, which moves as fast as you can, is pushed around equally by the two types of curvature. So all you have to do is, for example, compare the gravitational field felt by slowly-moving objects to that felt by a passing light ray. GR predicts that they should, in a well-defined sense, be the same.
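Schematically, in units where c = 1, the two kinds of probe respond to different combinations of the potentials, which is why comparing dynamical and lensing measurements pins down η:

```latex
\ddot{\vec{x}} \simeq -\vec{\nabla}\psi
\quad \text{(slow matter: orbits, growth of structure)} ,
\qquad
\hat{\alpha} \propto \int \vec{\nabla}_{\!\perp}\,(\phi+\psi)\,d\ell
\quad \text{(light: gravitational lensing, integrated Sachs-Wolfe)} .
```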

We’ve done this in the Solar System, of course, and everything is fine. But it’s always possible that some deviation from Einstein shows up at much larger distance and weaker gravitational fields than we have access to in our local neighborhood. That’s basically what Rachel’s paper does, considering different measures of the statistical properties of large-scale structure and comparing them to the predictions of a phenomenological model of the gravitational field. A crucial role is played by gravitational lensing, since that’s where the deflection of light comes in.

And here is the answer: the likelihood, given the data, for different values of 1/η, the ratio of the time curvature to the space curvature. The GR prediction is at 1, but the data show a pronounced peak between 3 and 4, and strongly disfavor the GR prediction. If both the data and the analysis are okay, there would be less than a 2% chance of obtaining a result this discrepant if GR were correct. Not as good as 0.01%, but still pretty good.

[Figure: likelihood distribution for 1/η, from Bean (2009)]

So what are we supposed to make of this? Don’t get me wrong: I’m not ready to bet against Einstein, at least not yet. Mostly my pro-Einstein prejudice comes from long experience trying to come up with alternative theories of gravity that are simultaneously logically sensible and observationally consistent; it’s just very hard to do. But more generally, good scientists naturally have a strong suspicion of any claimed observational result that purports to overthrow an extremely well-established theory. That’s just common sense, not hidebound establishmentarianism; most such anomalies eventually go away.

But that doesn’t mean that you ignore anomalies; you just treat them with caution. In this case, there could be an unrecognized systematic error in the data set, or a subtle error in the analysis. Given 1:1 odds, that’s certainly where the smart money would bet right now. It’s also possible that the fault lies with dark matter or dark energy, not with gravity — but it’s hard to see how that could work, to be honest. Happily, it’s an empirical question — more data and more analysis will either reinforce the result, or make it go away. After all, some anomalies turn out to be frighteningly real. This one is worth taking seriously, to say the least.


Practicality and the Universe

This year’s Nobel Prizes in Physics have been awarded to Charles Kao, for fiber optics, and Willard Boyle and George Smith, for charge-coupled devices (CCD’s, which have replaced film as the go-to way to take pictures). Very worthy selections, which are being justly celebrated in certain quarters as a triumph of practicality. Can’t argue with that — as Chad says, things like the internet (brought to you in part by fiber-optic cables) and digital cameras (often based on CCD’s) affect everyone’s lives in tangible ways.

But they are also important for lovely impractical uses! When I hear “fiber optics” and “CCD’s” in the same breath, I am immediately going to think of the Sloan Digital Sky Survey (SDSS), which has provided us with the most detailed map we have of our neighborhood of the universe. Almost a million galaxies, and over 100,000 quasars, baby! How impractical is that?

Sloan telescope

The SDSS is a redshift survey, which means it’s not sufficient to just snap a picture of all those galaxies; you also want to measure their spectra (i.e., break down their light into individual frequencies) to see how much they have been shifted to the red by the cosmological expansion. And you just want the spectra of the galaxies, not the blank parts of the sky in between them. The Sloan technique was to drill a giant plate for each patch of sky, with one hole corresponding to the position of each galaxy to be surveyed. (There were a lot of plates.) This image is from the Galaxy Zoo blog.

Sloan plate

Then you want to bring that light down to the camera. You guessed it — fiber-optic cables. Thanks, Dr. Kao.

Sloan fibers

The camera in question was possibly the most complex camera ever built — thirty separate CCD’s, combining for 120 megapixels in total, all cooled to -80 degrees Celsius. Thanks, Drs. Boyle and Smith.

Sloan Camera

And the result is — well, it’s pretty, but it doesn’t materially affect your standard of living. It’s a map of our local neighborhood in the universe. Extremely useful if you’d like to understand something about the evolution of large-scale structure, for example to pin down the properties of dark matter and dark energy.

Sloan map of the universe

Also useful for providing a bit of perspective. It’s technological advances like those honored in this year’s Prize that make it possible for us insignificant sacs of organic matter to stretch our senses out into the universe and understand the much bigger picture of which we are a part.


Philosophy and Cosmology: Day Three

Back for the third and final day of the Philosophy and Cosmology conference in honor of George Ellis’s birthday. I’ll have great memories of my time in Oxford, almost all of which was spent inside this lecture hall. See previous reports of Day One, Day Two.

It’s become clear along the way that I am not as accurate when I’m trying to represent philosophers as opposed to physicists; the vocabularies and concerns are just slightly different and less familiar to me. So take things with an appropriate grain of salt.

Tuesday morning: The Case for Multiverses

9:00: Bernard Carr, one of the original champions of the anthropic principle, has been instructed to talk on “How we know multiverses exist.” Not necessarily the title he would have chosen. Of course we don’t observe a multiverse directly; but we might observe it indirectly, or infer it theoretically. We should be careful to define “multiverse,” not to mention “exist.”

There certainly has been a change, even just since 2001, in the attitude of the community toward the multiverse. Quotes Frank Wilczek, who tells a parable about how multiverse advocates have gone from voices in the wilderness to prophets. That doesn’t mean the idea is right, of course.

Carr is less interested in insisting that the multiverse does exist, and more interested in defending the proposition that it might exist, and that taking it seriously is perfectly respectable science. Remember history: August Comte in 1835 scoffed at the idea that we would ever know what stars were made of. Observational breakthroughs can be hard to predict. Rutherford: “Don’t let me hear anyone use the word ‘Universe’ in my department!” Cosmology wasn’t respectable. For what it’s worth, the idea that what we currently see is the whole universe has repeatedly been wrong.

So how do we know a multiverse exists? Maybe we could hop in a wormhole or something, but let’s not be so optimistic. There are reasons to think that multiverses exist: for example, if we find ourselves near some anthropic cutoff for certain parameters. More interesting, there could be semi-direct observational evidence — bubble collisions, or perhaps giant voids. Discovering extra dimensions would be good evidence for the theories on which the multiverse is often based.

The only current observational claims that might bear directly on multiverses are the giant voids and dark flows predicted by Laura Mersini-Houghton and collaborators.

Carr believes that the indirect evidence from finely-tuned coupling constants is actually stronger. Existence of planets requires a very specific relationship between strength of gravity and electromagnetism, which happens to exist in the real world. There is a similar gravity/weak tuning needed to make supernovae and heavy elements. Admittedly, many physicists dislike the multiverse and find it just as unpalatable as God. But ultimately, multiverse ideas will become normal science by linking up with observations; we just don’t know how long it will take.

9:45: George Ellis follows Carr’s talk with what we’ve been waiting for a while — a strong skeptical take on the multiverse idea.

There are lots of types of multiverses: many-worlds, separated by space or time, or completely disjoint. Anthropic arguments are what make the idea go. The project is to make the apparently improbable become probable.

The very nature of the scientific enterprise is at stake: multiverse proponents are proposing that we weaken the idea of scientific proof. Science is about two things: testability and explanatory power. Is it worth giving up the former to achieve the latter?

The abstract notion of a multiverse doesn’t get you anything; you need a specific model, with a distribution of probabilities. (Does Harry Potter exist somewhere in your multiverse?) But if there is some process that generates universes, how do you test that process? Domains beyond our particle horizon are unobservable. How far should we expect to be able to extrapolate? Into a region which, in principle, we will never be able to observe.

In the good old days we accepted the Cosmological Principle, and assumed things continued uniformly forever beyond our observable horizon. Completely untestable, of course. If all the steps in the extrapolation are perfectly tenable, extrapolations are fine — but that’s not the case here. In particular, the physics of eternal inflation (gravity plus quantum field theory, Coleman-de Luccia tunneling) has never been tested. It’s unknown physics used to infer an unobservable realm. Inflation itself is not yet a well-defined theory, and not all versions of inflation are eternal. We haven’t even found a scalar field!

There is a claim that a multiverse is implied by the fine-tuning of the universe to allow life. At best a weak consistency test. Can never actually do statistical tests on the purported ensemble. Another claim is that the local universe, if it’s inside a bubble, should have a slight negative curvature — but that’s easily avoided by super-Hubble perturbations, so it’s not a strong prediction. We could, however, falsify eternal inflation by observing that we live in a “small” (topologically compact) universe. But if we don’t, it certainly doesn’t prove that eternal inflation is right. Finally, it’s true that we might someday see signatures of bubble collisions in the microwave background. But if we don’t, then what? Again, not a firm prediction.

Ultimately: explanation and testability are both important, but one shouldn’t overwhelm the other. “The multiverse theory can’t make any prediction because it can explain anything at all.” Beware! If we redefine science to accommodate the multiverse, all sorts of pseudo-science might sneak inside the tent.

There are also political/sociological issues. Orthodoxy is based on the beliefs held by elites. Consider the story of Peter Coles, who tried to claim back in the 1990’s that the matter density was only 30% of the critical density. He was threatened by a cosmological bigwig, who told him he’d be regarded as a crank if he kept it up. On a related note, we have to admit that even scientists base beliefs on philosophical agendas and rationalize after the fact. That’s often what’s going on when scientists invoke “beauty” as a criterion.

Multiverse theories invoke “a profligate excess of existential multiplicity” in order to explain a small number of features of the universe we actually see. It’s a possible explanation of fine tuning, but is not uniquely defined, is not scientifically testable, and in the end “simply postpones the ultimate metaphysical question.” Nevertheless — if we accumulated enough consistency tests, he’d be happy to eventually become convinced.


Philosophy and Cosmology: Day Two

The previous post on the Philosophy and Cosmology conference in Oxford was growing to unseemly length, so I’ll give each of the three days its separate post.

Monday morning: The Case for Multiverses

9:00: We start today as we ended yesterday: with a talk by Martin Rees, who has done quite a bit to popularize the idea of a multiverse. He wants to argue that thinking about the multiverse doesn’t represent any sort of departure from the usual way we do science.

The Big Bang model, from 1 second to today, is as uncontroversial as anything a geologist does. Easily falsifiable, but it passes all tests. How far does the domain of physical cosmology extend? We only see the universe out to the microwave background, but nothing special happens at that boundary — conditions there look pretty uniform, suggesting that what we see inside extends pretty far outside. Could be very far, but hard to say for sure.

Some people want to talk only about the observable universe. Those folks need aversion therapy. After all, whether a particular distant galaxy eventually becomes observable depends on details of cosmic history. There’s no sharp epistemological distinction between the observable and unobservable parts of the universe. We need to ask whether quantities characterizing our observable part of the universe are truly universal, or merely local.

So: what values of these parameters are consistent with some kind of complexity? (No need to explicitly invoke the “A-word.”) Need gravity, and the weaker the better. Need at least one very large number; in our universe it’s the ratio of gravity to electromagnetic forces between elementary particles. Also need departure from thermodynamic equilibrium. Also: matter/antimatter asymmetry, and some kind of non-trivial chemistry. (Tuning between electromagnetic and nuclear forces?) At least one star, arguably a second-generation star so that we have heavy elements. We also need a tuned cosmic expansion rate, to let the universe last long enough without being completely emptied out, and some non-zero fluctuations in density from place to place.
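The “very large number” in question is the usual electromagnetic-to-gravitational force ratio between charged elementary particles; for a proton and an electron, plugging in standard values (my arithmetic, not a figure quoted in the talk):

```latex
\frac{F_{\rm EM}}{F_{\rm grav}}
= \frac{e^2/4\pi\epsilon_0}{G\,m_p\,m_e}
\approx \frac{2.3\times10^{-28}\ \mathrm{J\,m}}
        {(6.7\times10^{-11})(1.7\times10^{-27})(9.1\times10^{-31})\ \mathrm{J\,m}}
\approx 2\times10^{39} .
% Using two protons instead gives roughly 10^{36}; either way, gravity is
% absurdly weak, and that weakness is what allows long-lived stars and planets.
```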

If the amplitude of density perturbations were much smaller, the universe would be anemic: you would have fewer first-generation stars, and perhaps no second-generation stars. If the amplitude were much larger, we would form huge black holes very early, and again we might not get stars. But ten times the observed amplitude would actually be kind of interesting. Given an amplitude of density perturbations, there’s an upper limit on the cosmological constant, so that structure can form. Again, larger perturbations would allow for a significantly larger cosmological constant — why don’t we live in such a universe? Similar arguments can be made about the ratio of dark matter to ordinary matter.

Having said all that, we need a fundamental theory to get anywhere. It should either determine all the constants of nature uniquely, in which case anthropic reasoning has no role, or allow ranges of parameters within the physical universe, in which case anthropics are unavoidable.

10:00: Next up, Philip Candelas to talk about probabilities in the landscape. The title he actually puts on the screen is: “Calabi-Yau Manifolds with Small Hodge Numbers, or A Des Res in the Landscape.”

A Calabi-Yau is the kind of manifold you need in string theory to compactify ten dimensions down to four, picked out among all possible manifolds by the requirement that we preserve supersymmetry. There are many examples, and you can characterize them by topological invariants as well as by continuous parameters. But there is a special corner in the space of Calabi-Yau’s where certain topological invariants (Hodge numbers) are relatively small; these seem like promising places to think about phenomenology — e.g., models with three generations of elementary particles.

Different embeddings lead to different gauge groups in four dimensions: E6, SO(10), or SU(5). Various models with three generations can be found. Putting flux on the Calabi-Yau can break the gauge group down to the Standard Model, sometimes with additional U(1)’s.
