Don’t be surprised if you keep reading astronomy stories in the news this week — the annual meeting of the American Astronomical Society is underway in Washington DC, and it’s common for groups to announce exciting results at this meeting. Today there was a provocative new claim from Bradley Schaefer at Louisiana State University — the dark energy is evolving in time! (Read about it also from Phil Plait and George Musser.)
Short version of my own take: interesting, but too preliminary to get really excited. Schaefer has used gamma-ray bursts (GRB’s) as standard candles to measure the distance vs. redshift relation deep into the universe’s history — up to redshifts greater than 6, as opposed to ordinary supernova studies, which are lucky to get much past redshift 1. To pull this off, you want “standard candles” — objects that are really bright (so you can see them far away) and have a known intrinsic luminosity (so you can infer their distance from how bright they appear). True standard candles are hard to find, so we settle for “standardizable” candles — objects that might vary in brightness, but in a way that can be correlated with some other observable property, and therefore accounted for. The classic example is Cepheid variables, which obey a tight relationship between their pulsation period and their intrinsic brightness.
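The “infer their distance from how bright they appear” step is just the inverse-square law: for a candle of known luminosity L observed with flux F, the luminosity distance satisfies F = L/(4πd²). Here is a minimal sketch in Python; the luminosity and flux values are made up purely for illustration:

```python
import numpy as np

CM_PER_MPC = 3.086e24

def luminosity_distance_from_flux(flux_cgs, luminosity_cgs):
    """Invert the inverse-square law F = L / (4 pi d^2); returns d in Mpc."""
    d_cm = np.sqrt(luminosity_cgs / (4.0 * np.pi * flux_cgs))
    return d_cm / CM_PER_MPC

def distance_modulus(d_mpc):
    """mu = m - M = 5 log10(d / 10 pc), the astronomer's usual distance measure."""
    return 5.0 * np.log10(d_mpc * 1.0e6 / 10.0)

# Illustrative numbers only: a candle of luminosity 1e43 erg/s seen at 1e-13 erg/s/cm^2.
d = luminosity_distance_from_flux(1.0e-13, 1.0e43)
print(f"d ~ {d:.0f} Mpc, distance modulus ~ {distance_modulus(d):.1f} mag")
```

A standardizable candle adds one step in front of this: first predict L from the correlated observable (pulsation period, light-curve decline rate, and so on), then apply the same relation.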
Certain supernovae, known as Type Ia’s, have quite a nice correlation between their peak brightness and the time it takes for them to diminish in brightness. That makes them great standardizable candles, since they’re also really bright. GRB’s are much brighter, but aren’t nearly so easy to standardize — Schaefer used a model in which five different properties were correlated with peak brightness (details). The result? The best fit is a model in which the dark energy density (energy per cubic centimeter) is gradually growing with time, rather than being strictly constant.
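What “fitting the distance vs. redshift relation” buys you: in a flat universe, the luminosity distance is an integral over the expansion rate H(z), which in turn depends on how the dark energy density evolves. The sketch below is not Schaefer’s actual analysis; it assumes a flat universe, fiducial values for H0 and the matter density, and the common w0-wa parameterization of an evolving equation of state, just to show how a cosmological constant and a toy evolving model predict different distances:

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # assumed Hubble constant, km/s/Mpc
OMEGA_M = 0.3         # assumed matter density; dark energy makes up the rest

def E(z, w0=-1.0, wa=0.0):
    """H(z)/H0 for a flat universe with dark energy w(a) = w0 + wa*(1 - a)."""
    a = 1.0 / (1.0 + z)
    rho_de = (1.0 - OMEGA_M) * a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))
    return np.sqrt(OMEGA_M * (1.0 + z) ** 3 + rho_de)

def luminosity_distance(z, w0=-1.0, wa=0.0):
    """d_L in Mpc: (1 + z) * (c/H0) * integral_0^z dz' / E(z')."""
    comoving, _ = quad(lambda zp: 1.0 / E(zp, w0, wa), 0.0, z)
    return (1.0 + z) * (C_KM_S / H0) * comoving

for z in (1.0, 6.3):
    d_lambda = luminosity_distance(z)                      # cosmological constant
    d_evolving = luminosity_distance(z, w0=-1.3, wa=0.5)   # toy evolving dark energy
    print(f"z = {z}: Lambda {d_lambda:.0f} Mpc vs evolving {d_evolving:.0f} Mpc")
```

The fit works in reverse: given measured distance moduli for many bursts, you ask which dark energy history makes curves like these match the data best; going out to redshift 6 extends the baseline over which different histories can be distinguished.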
If it’s true, this is an amazingly important result. There are four possibilities for why the universe is accelerating: a true cosmological constant (vacuum energy), dynamical (time-dependent) dark energy, a modification of gravity, or something fundamental being missed by all us cosmologists. The first possibility is the most straightforward and most popular. If it’s not right, the set of theoretical ideas that physicists pursue to help explain the acceleration of the universe will be completely different than if it is right. So we need to know the answer!
What’s more, the best-fit behavior for the dark energy density seems to have it increasing with time, as in phantom energy. In terms of the equation-of-state parameter w, the best fit has w less than -1 (or close to -1, but with a positive derivative w’). That’s quite bizarre and unexpected.
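To see why an increasing density corresponds to w below -1, the continuity equation for a fluid with constant equation of state does all the work:

```latex
% Continuity equation for a fluid with p = w \rho in an expanding universe:
\dot{\rho} + 3 \frac{\dot{a}}{a} (\rho + p) = 0
\quad \Longrightarrow \quad
\rho \propto a^{-3(1+w)} .
% w = -1 gives a constant density (the cosmological constant);
% w < -1 makes the exponent positive, so the density grows as the universe expands.
```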
As I said, at this point I’m a bit skeptical, but willing to wait and see. Most importantly, the statistical significance of the finding is only 2.5σ (97% confidence), whereas the informal standard in much of physics for discovering something is 3σ (99% confidence). As a side worry, at these very high redshifts the effect of gravitational lensing becomes crucial. If the light from a GRB passes nearby a mass concentration like a galaxy or cluster, it can easily be amplified in brightness. I am not really an expert on how important this effect is, nor do I know whether it’s been taken into account, but it’s good to keep in mind how little we know about GRB’s and the universe at high redshift more generally.
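For what it’s worth, the reason lensing is dangerous for any standard candle is easy to state: magnification multiplies the observed flux, which biases the inferred distance low.

```latex
% A magnification \mu scales the observed flux of the burst:
F_{\rm obs} = \mu \, F_{\rm true}
\quad \Longrightarrow \quad
d_L^{\rm inferred} = \sqrt{\frac{L}{4 \pi F_{\rm obs}}} = \frac{d_L^{\rm true}}{\sqrt{\mu}} .
% A lensed burst therefore looks closer than it really is; if magnified bursts are
% over-represented at high redshift, that could masquerade as a shift in the
% inferred expansion history.
```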
So my betting money stays on the cosmological constant. But the odds have shifted, just a touch.
Update: Bradley Schaefer, author of the study, was nice enough to leave a detailed comment about what he had actually done and what the implications are. I’m reproducing it here for the benefit of people who don’t necessarily dip into the comments:
Sean has pointed me to this blog and requested me to send along any comments that I might have. His summary at the top is reasonable.
I’d break my results into two parts. The first part is that I’m putting forward a demonstration of a new method to measure Dark Energy, by using GRBs as standard candles out to high redshift. My work is all rather standard, with most everything I’ve done just following what has been in the literature.
The GRB Hubble Diagram has been in print since 2003, with myself and Josh Bloom independently presenting early versions in public talks as far back as 2001. Over the past year, several groups have used the GRB Hubble Diagram to start putting constraints on cosmology. This prior work has always used only one GRB luminosity indicator (various different indicators for the various papers) and for no more than 17 GRBs (neglecting GRBs with only limits).
What I am doing that is new is using much more data and directly addressing the question of how the Dark Energy changes. In all, I am using 52 GRBs, and each GRB has 3-4 luminosity indicators on average. So I’ve got a lot more data. And this allows for a demonstration of the GRB Hubble Diagram as a new method.
The advantages of this new method are that it goes to high redshift (it looks at the expansion history of the Universe from redshift 1.7 to 6.3) and that it is impervious to extinction. Also, I argue that there should be no evolution effects, as the GRB luminosity indicators are based on energetics and light travel time (which should not evolve). Another advantage is that we have the data now, with the size of the database to be doubled within two years by HETE and Swift.
One disadvantage of the GRB Hubble Diagram is that the GRBs are lower in quality than supernovae. Currently my median one-sigma error bar is 2.6 times worse when comparing a single GRB to a single supernova. But just as with supernovae, I expect that the accuracy of GRB luminosities can be rapidly improved. [After all, in 1996 I was organizing debates between the graduate students as to whether Type Ia SNe were standard candles or not.] Another substantial problem that is hard to quantify is that our knowledge of the physical processes in GRBs is not perfect (and certainly much worse than what we know for SNe). It is rational and prudent for everyone to worry that there are hidden problems (although I now know of none). A simple historical example is how Cepheids were found to have two types with different calibrations.
So the first part of my talk was simply presenting a new method for getting the expansion history of the Universe at redshifts up to 6.3. For this, I am pretty confident that the method will work. Inevitably there will be improvements, new data, corrections, and all the usual changes (just as for the supernovae).
The second part of my talk was to point out the first results, which I could not avoid giving. It so happens that the first results point against the Cosmological Constant. I agree with Sean that this second part should not be pushed, for various reasons. Foremost is that the result is only 2.5-sigma.
Both parts of my results are being cast against a background where various large groups are now competing for a new dedicated satellite.