Don’t be surprised if you keep reading astronomy stories in the news this week — the annual meeting of the American Astronomical Society is underway in Washington DC, and it’s common for groups to announce exciting results at this meeting. Today there was a provocative new claim from Bradley Schaefer at Louisiana State University — the dark energy is evolving in time! (Read about it also from Phil Plait and George Musser.)
Short version of my own take: interesting, but too preliminary to get really excited. Schaefer has used gamma-ray bursts (GRB’s) as standard candles to measure the distance vs. redshift relation deep into the universe’s history — up to redshifts greater than 6, as opposed to ordinary supernova studies, which are lucky to get much past redshift 1. To pull this off, you want “standard candles” — objects that are really bright (so you can see them far away) and have a known intrinsic luminosity (so you can infer their distance from how bright they appear). True standard candles are hard to find, so we settle for “standardizable” candles — objects that might vary in brightness, but in a way that can be correlated with some other observable property, and therefore accounted for. The classic example is Cepheid variables, which have a relationship between their oscillation period and their intrinsic brightness.
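The arithmetic behind “infer their distance from how bright they appear” is just the inverse-square law, packaged as the distance modulus. A minimal sketch in Python; the magnitudes are illustrative numbers, not data from any of the studies discussed here:

```python
def luminosity_distance_pc(m, M):
    """Distance in parsecs from the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10.0 ** ((m - M + 5.0) / 5.0)

# A standardizable candle calibrated to absolute magnitude M = -19.3
# (typical of a Type Ia supernova) observed at apparent magnitude m = 24
# comes out at roughly 4.6 billion parsecs.
distance = luminosity_distance_pc(24.0, -19.3)
```

A fainter appearance at fixed M means a larger d, so the whole game in a standard-candle measurement is calibrating M well.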
Certain supernovae, known as Type Ia’s, have quite a nice correlation between their peak brightness and the time it takes for them to diminish in brightness. That makes them great standardizable candles, since they’re also really bright. GRB’s are much brighter, but aren’t nearly so easy to standardize — Schaefer used a model in which five different properties were correlated with peak brightness (details). The result? The best fit is a model in which the dark energy density (energy per cubic centimeter) is gradually growing with time, rather than being strictly constant.
If it’s true, this is an amazingly important result. There are four possibilities for why the universe is accelerating: a true cosmological constant (vacuum energy), dynamical (time-dependent) dark energy, a modification of gravity, or something fundamental being missed by all us cosmologists. The first possibility is the most straightforward and most popular. If it’s not right, the set of theoretical ideas that physicists pursue to help explain the acceleration of the universe will be completely different than if it is right. So we need to know the answer!
What’s more, the best-fit behavior for the dark energy density seems to have it increasing with time, as in phantom energy. In terms of the equation-of-state parameter w, this corresponds to w less than -1 (or close to -1, but with a positive derivative w’). That’s quite bizarre and unexpected.
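For a constant equation of state w, the dark energy density scales with the scale factor a as rho proportional to a^(-3(1+w)), which makes the “phantom” behavior easy to see in a toy sketch (this assumes a constant w for simplicity; the best-fit model in the talk is time-dependent):

```python
def dark_energy_density(a, w, rho0=1.0):
    """Energy density vs. scale factor a for constant equation of state w:
    rho(a) = rho0 * a**(-3*(1+w)), normalized so rho = rho0 at a = 1 (today)."""
    return rho0 * a ** (-3.0 * (1.0 + w))

# w = -1 (cosmological constant): the density never changes.
# w < -1 (phantom): the density was lower in the past (a < 1) and keeps
# growing as the universe expands -- the behavior described above.
past_density = dark_energy_density(0.5, -1.1)
```

With w = -1.1, for instance, the density at half the present scale factor was only about 81% of today’s value, and it will keep increasing into the future.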
As I said, at this point I’m a bit skeptical, but willing to wait and see. Most importantly, the statistical significance of the finding is only 2.5σ (about 98.8% confidence), whereas the informal standard in much of physics for discovering something is 3σ (99.7% confidence). As a side worry, at these very high redshifts the effect of gravitational lensing becomes crucial. If the light from a GRB passes near a mass concentration like a galaxy or cluster, it can easily be amplified in brightness. I am not really an expert on how important this effect is, nor do I know whether it’s been taken into account, but it’s good to keep in mind how little we know about GRB’s and the universe at high redshift more generally.
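For reference, the translation from “n sigma” to a two-sided Gaussian confidence level is just the error function; a quick sketch:

```python
import math

def two_sided_confidence(n_sigma):
    """Probability that a Gaussian variable falls within +/- n_sigma
    of its mean: erf(n / sqrt(2))."""
    return math.erf(n_sigma / math.sqrt(2.0))

# 2.5 sigma corresponds to roughly 98.8% (two-sided); 3 sigma to about 99.7%.
c25 = two_sided_confidence(2.5)
c30 = two_sided_confidence(3.0)
```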
So my betting money stays on the cosmological constant. But the odds have shifted, just a touch.
Update: Bradley Schaefer, author of the study, was nice enough to leave a detailed comment about what he had actually done and what the implications are. I’m reproducing it here for the benefit of people who don’t necessarily dip into the comments:
Sean has pointed me to this blog and requested me to send along any comments that I might have. His summary at the top is reasonable.
I’d break my results into two parts. The first part is that I’m putting forward a demonstration of a new method to measure Dark Energy by using GRBs as standard candles out to high redshift. My work is all rather standard, with most everything I’ve done just following what has been in the literature.
The GRB Hubble Diagram has been in print since 2003, with myself and Josh Bloom independently presenting early versions in public talks as far back as 2001. Over the past year, several groups have used the GRB Hubble Diagram to start putting constraints on cosmology. This prior work has always used only one GRB luminosity indicator (various different indicators for the various papers) and no more than 17 GRBs (neglecting GRBs with only limits).
What is new is that I am using much more data and directly addressing the question of the change of the Dark Energy. In all, I am using 52 GRBs, and each GRB has 3-4 luminosity indicators on average. So I’ve got a lot more data. And this allows for a demonstration of the GRB Hubble Diagram as a new method.
The advantages of this new method are that it goes to high redshift (it looks at the expansion history of the Universe from redshift 1.7 to 6.3) and that it is impervious to extinction. Also, I argue that there should be no evolution effects, as the GRB luminosity indicators are based on energetics and light travel time (which should not evolve). Another advantage is that we have the data now, with the size of the database to be doubled within two years by HETE and Swift.
One disadvantage of the GRB Hubble Diagram is that the GRBs are lower in quality than supernovae. Currently my median one-sigma error bar is 2.6 times worse when comparing a single GRB to a single supernova. But just as with supernovae, I expect that the accuracy of GRB luminosities can be rapidly improved. [After all, in 1996, I was organizing debates between the graduate students as to whether Type Ia SNe were standard candles or not.] Another substantial problem that is hard to quantify is that our knowledge of the physical processes in GRBs is not perfect (and certainly much worse than what we know for SNe). It is rational and prudent for everyone to worry that there are hidden problems (although I now know of none). A simple historical example is how Cepheids were found to have two types with different calibrations.
So the first part of my talk was simply presenting a new method for getting the expansion history of the Universe from redshifts up to 6.3. For this, I am pretty confident that the method will work. Inevitably there will be improvements, new data, corrections, and all the usual changes (just as for the supernovae).
The second part of my talk was to point out the first results, which I could not avoid giving. It so happens that the first results point against the Cosmological Constant. I agree with Sean that this second part should not be pushed, for various reasons. Foremost is that the result is only 2.5-sigma.
Both parts of my results are being cast onto a background where various large groups are now competing for a new dedicated satellite.
Let’s see if I got this right?.. The GRB’s are closer to the big bang than our Galaxy… the local expansion field to the GRB must be where all the action is; this is where most of the Universe must be expanding greatest?.. Otherwise, our local field around our Galaxy, if it was expanding at the same rate, would mean that we would physically observe our closest neighbours, i.e. Andromeda, at a vastly greater redshift than what is observed.
So thus, the increasing expansion must be calibrated in some way to the appearance/increase of excess GRB’s?
The Luminosity Function must itself be tuned to the expansion rate, and therefore the light emanating from a part of the Universe that is expanding at a greater rate would have its light “stretched” locally, at the farthest location from the Milky Way, and could give the impression that its luminosity is greater than it actually is?
Science?
I’m recovering from CSL-1 infor..ma..tion…….:)
Ya Paul, I’m having a hard time accepting how this information can be read, considering the lensing that goes on. If from a distance information is to travel, how do we know that the fastest route is not being considered, while the influences along the way can hold this information?
Would this not probably be one of those stupid statements that we make sometimes? :) Galaxy formations as part of the larger expansion process create curvature parameter readings askew?
I dunno…..
Sean, Brad, others,
Do these and previous results from the GRB Hubble diagram conclusively rule out claims of photon-axion oscillation (which have been proposed to explain the SN results)?
My question for Brad (hope he’s still monitoring) relates to his statistical procedures, not the physics.
The GRB diagram presented above presents a “best fit” curve for magnitude vs. redshift. As both Dan and others attending the original meeting have already commented, the variability of the magnitude data is quite high, especially the next to largest redshift datum at z~5.xx.
I wonder if Brad fitted his data using linear ordinary least squares, linearizing the model in the variables (using log transformations, for example), or used a form of non-linear regression. Looking at the “details” of “calibrating the luminosity relations” provided in the link, I’d guess he used a linear fitting technique linearized by transformation in the redshift variable.
If so, I then wonder if Brad took advantage of the variability of the magnitude data in fitting his model. A well-studied fitting technique called variance weighted least squares is designed to use variance in the variable on the left-hand side of the model to inversely weight the importance of each data point in the fit according to its error of measurement, data points with high error contributing less to the fit and data points with low error contributing more.
Given the nature of the GRB data distribution, the sparse and highly variable data at high red shift will unduly influence the entire fit of a linear ordinary least squares regression. Those high-z points will have what statisticians call “high leverage” in the model. Weighted least squares fitting would address some of that problem, could definitely change the difference between the “cosmological constant” curve and the observed curve, and could also alter the calculated probability that the two curves differ by chance alone if only ordinary least squares fitting was used for the presented fits.
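For concreteness, here is what the variance-weighted fit Steve describes looks like in code; a self-contained sketch with made-up numbers, not the actual analysis:

```python
def weighted_linear_fit(x, y, sigma):
    """Fit y = a + b*x by weighted least squares, weighting each point
    by 1/sigma_i**2 so that poorly measured points carry little leverage.
    Returns (a, b) from the closed-form normal equations."""
    w = [1.0 / s ** 2 for s in sigma]
    S = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, x))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    delta = S * Sxx - Sx ** 2
    return (Sxx * Sy - Sx * Sxy) / delta, (S * Sxy - Sx * Sy) / delta

# Four precise points on y = 1 + 2x plus one wild point with a huge
# error bar: the weighting keeps the outlier from dragging the fit.
a, b = weighted_linear_fit([0, 1, 2, 3, 4],
                           [1, 3, 5, 7, 100],
                           [0.1, 0.1, 0.1, 0.1, 50.0])
```

An unweighted fit to the same data would be pulled far from (a, b) = (1, 2) by the outlier; the weighted fit recovers it almost exactly, which is the “leverage” point made above.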
It would also appear that the fitted log(L) – log(V) relationship shown in “calibrating the luminosity relations” violates the assumption of homogeneity of error variance that ordinary least squares fitting requires, so perhaps I’m wrong in assuming that this was the fitting technique used for the magnitude – z relationship.
Though I’m writing from ignorance of details that might have already been presented and of standard practices in handling such data in physics, I’d be interested in hearing comments or corrections from Dan, Sean, and the rest of the list.
Here are answers to the questions posed earlier:
I don’t know what the GRB Hubble Diagram has to say about photon-axion oscillations.
The effects of gravitational lensing are to magnify and demagnify the GRBs, making them appear brighter or dimmer than would be deduced from the GRB luminosity indicators. This will cause some ‘random’ noise in the vertical direction of the Hubble Diagram. This noise gets larger as we go to higher redshift, because the line of sight will pass near more and more galaxies. The same problem exists for supernovae, but for GRBs the effects will be larger due to the higher redshifts. Premadi & Martel (astro-ph/0503446) show that the effects don’t rise linearly: the 1-sigma scatter (in the distance modulus) in the Hubble Diagram is ~0.10 mag at z=1 and ~0.34 mag at z=5. This is generally smaller than my typical error, so the effect won’t dominate. Lensing conserves flux, and with the *average* magnification being unity there should simplistically be no effect on the observed shape of the Hubble Diagram. But with the distribution being slightly skewed, any offsets will depend on the numbers of GRBs (of which I have 52 [9 in the z>3 ‘bin’]) as described by Holz & Linder (2005, ApJ, 631, 678). The real story is more complex, as it depends on whether more GRBs just outside the limiting distance are magnified into the sample than GRBs just inside the limiting distance are demagnified out of the sample. The limits for inclusion in the sample of GRBs with redshifts are fuzzily known, so this calculation is not easy. Holz tells me that he would expect the effects to be small (perhaps 0.05 mag in the distance modulus at z~6) when comparing high and low redshifts, and such effects are not significant given my larger error bars. A real calculation is needed to be sure of all this.
Steve asked about whether I used ordinary least squares techniques for fitting the five calibration relations. As he points out, the error bars vary substantially from burst to burst, so some account must be made of this. I do so by minimizing the chi-square of the fit, as this has the natural variance in the denominator. Another point is that the observed scatter about the best-fit calibration relations is substantially larger than expected from the measurement errors on the indicators alone. This implies that there is some additional scatter, caused for example by gravitational lensing. This is no surprise, as for example we don’t know the best way to define the ‘variability’. I model this by adding in quadrature a constant plus the Premadi & Martel lensing scatter to the scatter from the measurement errors. I vary the constant until the best-fit reduced chi-square is unity. Another point is that I do the fit in log-log space. This is because (a) the expected relations are power laws from theory, (b) the error sources are multiplicative, and (c) the observed error distributions appear Gaussian in log-space. From the best-fit calibration curves, I get values of Log(L) for each measured indicator for each burst. Each will have an associated error bar, generally dominated by the systematic errors. Then I combine the various Log(L) values for each burst in a weighted average, to get a combined Log(L) which will then yield a distance modulus.
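The step of adding a constant in quadrature until the reduced chi-square is unity is a standard trick, and can be sketched in isolation. An illustrative example only, with residuals about an already-fixed model; the real analysis also folds in the lensing scatter term and the five calibration relations:

```python
import math

def reduced_chi2(residuals, sigma_meas, sigma_extra):
    """Reduced chi-square with an extra scatter term added in quadrature
    to each measurement error (degrees of freedom taken as N for simplicity)."""
    chi2 = sum(r ** 2 / (s ** 2 + sigma_extra ** 2)
               for r, s in zip(residuals, sigma_meas))
    return chi2 / len(residuals)

def fit_extra_scatter(residuals, sigma_meas, hi=10.0, tol=1e-8):
    """Bisect on sigma_extra until the reduced chi-square equals one."""
    if reduced_chi2(residuals, sigma_meas, 0.0) <= 1.0:
        return 0.0  # measurement errors already account for the scatter
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if reduced_chi2(residuals, sigma_meas, mid) > 1.0:
            lo = mid  # still overdispersed: inflate the errors further
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Residuals of 1.0 with quoted errors of 0.5 need sqrt(1 - 0.25) ~ 0.866
# of additional scatter to bring the reduced chi-square down to unity.
sigma_extra = fit_extra_scatter([1.0] * 10, [0.5] * 10)
```

Bisection works here because the reduced chi-square decreases monotonically as the extra scatter grows.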
As a layman I wonder, if you implicate Lagrange points, how would this change the way you see?
It was phrased earlier as information “skewed”, but held in context of that implication, might we have raised the bar?
You had to look at dark energy in context, a little differently, as it pervades the universe in that expansionary progress?
Brad, thanks for the additional details of your fitting procedures. Could you point us to a preprint of your work, all the slides of your presentation, or post the details of the Cosmological Constant (CC) “fit” and those of your fit to the GRB data? I’d like to see the equations and the constants (with standard errors) that were derived, as well as the formal test between observation and CC prediction.
In another issue, if theory predicts that scatter in the GRB magnitude estimates will increase with increasing redshift, it will require an enormous amount of high-redshift GRB data to reach the very low probability values that the physics community appears to demand for accepting the evidence for a significant difference between observation and CC predictions. Will such additional high-redshift GRB data become available in the foreseeable future? Do you believe that the substantial background gamma count really represents presently unresolved GRBs, implying that there is an enormous number of high-redshift GRBs just waiting for detection and measurement with improved techniques and continued observation? Is, in fact, the background gamma count higher than that predicted for black body radiation derived from the various big bang models and cosmological background radiation observations?
Questions, questions!
Sorry for my request for slides of the presentation, above. I see that those questions have already been addressed by Brad at http://www.phys.lsu.edu/GRBHD/details/ .
Pingback: Cycle Quark » Can Gamma Ray Bursts Be Used to Measure Dark Energy
Sean, how strong is the statistical evidence for dark energy from ONLY Type Ia SNe (with the latest data)? In astro-ph/0207347 (which probably includes data up to 2001 or so) they claim the evidence is only 2 sigma.
Recently astro-ph/0511628 claims that the latest SN data rule out all cosmologies.
So the question is: how strong is the statistical evidence for dark energy from the latest SN data ONLY?
Shantanu, newer data are certainly better: see astro-ph/0309368 or astro-ph/0402512. And they certainly don’t rule out all cosmologies; they’re perfectly consistent with ordinary LambdaCDM.
But supernovae by themselves are not extremely statistically significant indicators of the existence of dark energy; I don’t know how many sigma, but it’s just a few. That’s because you can come close to fitting them if you assume the universe is nearly empty of matter and highly spatially curved. And we know those things just aren’t true: we’ve measured the matter density from dynamics, and the CMB tells us that there is no appreciable spatial curvature. In a flat universe, the supernovae require dark energy at more than ten sigma.
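The flat-universe comparison can be made concrete with a small numerical integration of the Friedmann equation. A rough sketch, assuming constant w = -1 and H0 = 70 km/s/Mpc (illustrative only, not the actual supernova analysis):

```python
import math

C_KM_S = 299792.458  # speed of light in km/s

def luminosity_distance_mpc(z, omega_m=0.3, h0=70.0, steps=2000):
    """Luminosity distance (Mpc) in a flat universe with matter fraction
    omega_m and cosmological-constant fraction 1 - omega_m, via trapezoidal
    integration of dz / E(z), where E(z) = sqrt(omega_m*(1+z)**3 + omega_l)."""
    omega_l = 1.0 - omega_m

    def inv_e(zp):
        return 1.0 / math.sqrt(omega_m * (1.0 + zp) ** 3 + omega_l)

    dz = z / steps
    integral = 0.5 * (inv_e(0.0) + inv_e(z)) * dz
    integral += sum(inv_e(i * dz) for i in range(1, steps)) * dz
    return (1.0 + z) * (C_KM_S / h0) * integral

# At z = 1, the dark-energy universe (omega_m = 0.3) places a candle
# substantially farther away, hence fainter, than a flat matter-only
# universe with the same expansion rate today.
d_lcdm = luminosity_distance_mpc(1.0, omega_m=0.3)
d_matter = luminosity_distance_mpc(1.0, omega_m=1.0)
```

The observed supernova dimming matches the first case; once the CMB forces flatness, dark energy is the only way to account for it.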
A new take on matters: http://arxiv.org/abs/astro-ph/0601517
The authors delve into things, and cite Sean’s paper.
Pingback: The future of the universe | Cosmic Variance
If different colors reflect different light speeds, then why does Mars appear red when viewed from Earth and also when viewed from Mars? What is actually the speed of light, or for that matter the speed of sight, considering that light breaks up into the spectrum when passed through a prism? Is the speed of sight a significant variable previously ignored? Is temperature a variable to the speed of light? Are our measurements of star distances highly exaggerated because of the unknown extreme cold in outer space?
cox,
Your questions contain some myths that need to be addressed before they can be answered.
1) The speed of light is constant (the same) for all colors; the frequency and wavelength change, but not the speed.
2) There is no physical parameter called “speed of sight”. It is just the speed of light plus the processing of the light by the eye and nervous system. Not a meaningful parameter to consider.
3) Temperature does not have an effect on the speed of light. So there is a high degree of confidence that distances to stars are not exaggerated by this.
4) Mars is red because of the material it is composed of. Therefore it would look the same from Earth and Mars.
Hope that helps.
Pingback: Cosmology at Professor Cormac O’Raifeartaigh’s blog « Gauge theory mechanisms