Thursday (“today,” for most of you) at 1:00 p.m. Eastern, there will be a NASA Media Teleconference to discuss some new observations relevant to the behavior of dark energy at high redshifts (z > 1). Participants will be actual astronomers Adam Riess and Lou Strolger, as well as theorist poseurs Mario Livio and myself. If the press release is to be believed, the whole thing will be available in live audio stream, and some pictures and descriptions will be made public once the telecon starts.
I’m not supposed to give away what’s going on, and might not have a chance to do an immediate post, but at some point I’ll update this post to explain it. If you read the press release, it says the point is “to announce the discovery that dark energy has been an ever-present constituent of space for most of the universe’s history.” Which means that the dark energy was acting dark-energy-like (a negative equation of state, or very slow evolution of the energy density) even back when the universe was matter-dominated.
Update: The short version is that Adam Riess and collaborators have used Hubble Space Telescope observations to discover 21 new supernovae, 13 of which are spectroscopically confirmed as Type Ia (the standardizable-candle kind) with redshifts z > 1. Using these, they place new constraints on the evolution of the dark energy density, in particular on the behavior of dark energy during the epoch when the universe was matter-dominated. The result is that the dark energy component seems to have been negative-pressure even back then; more specifically, w(z > 1) = -0.8 (+0.6, -1.0), and w(z > 1) < 0 at 98% confidence.
Longer version: Dark energy, which is apparently about 70% of the energy of the universe (with about 25% dark matter and 5% ordinary matter), is characterized by two features — it’s distributed smoothly throughout space, and maintains nearly-constant density as the universe expands. This latter quality, persistence of the energy density, is sometimes translated as “negative pressure,” since the law of energy conservation relates the rate of change of the energy density to (ρ + p), where ρ is the energy density and p is the pressure. Thus, if p = -ρ, the density is strictly constant; that’s vacuum energy, or the cosmological constant. But it could evolve just a little bit, and we wouldn’t have noticed yet. So we invent an “equation-of-state parameter” w = p/ρ. Then w = -1 implies that the dark energy density is constant; w > -1 implies that the density is decreasing, while w < -1 means that it’s increasing.
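In equations (nothing here beyond the standard textbook bookkeeping; it is not specific to the new results), energy conservation ties the evolution of the density directly to w, and a constant w tells you how the density scales with the size of the universe:

```latex
% Energy conservation in an expanding universe, with H = \dot{a}/a the expansion rate:
\dot{\rho} = -3H(\rho + p) = -3H(1 + w)\,\rho ,
% so for constant w the density scales with the scale factor a as
\rho \propto a^{-3(1+w)} .
% Special cases: w = -1 gives \rho = \text{const} (vacuum energy / cosmological
% constant); w = 0 gives \rho \propto a^{-3} (matter); w = +1/3 gives
% \rho \propto a^{-4} (radiation).
```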
In the recent universe, supernova observations convince us that w = -1 ± 0.1; so the density is close to constant. But there are puzzles in the dark-energy game: why is the vacuum energy so small, and why are the densities of matter and dark energy comparable, even though matter evolves noticeably while dark energy is close to constant? So it’s certainly conceivable that the behavior of the dark energy was different in the past — in particular, that the density of what we now know as dark energy used to behave similarly to that of matter, fading away as the universe expanded, and only recently switched over to an appreciably negative value of w.
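To make that concrete, here is a toy back-of-the-envelope sketch (my own round numbers, assuming present-day fractions of roughly 0.3 matter and 0.7 dark energy and a constant w; this is an illustration, not the new paper's fit) of how quickly matter overtakes dark energy as you look back in redshift:

```python
# Toy illustration: matter dilutes as (1+z)^3, while dark energy with constant w
# dilutes as (1+z)^(3(1+w)), i.e. not at all for w = -1.

OMEGA_M, OMEGA_DE = 0.3, 0.7   # assumed present-day density fractions (round numbers)

def matter_to_de_ratio(z: float, w: float = -1.0) -> float:
    """Return rho_matter / rho_darkenergy at redshift z for constant w."""
    rho_m = OMEGA_M * (1.0 + z) ** 3
    rho_de = OMEGA_DE * (1.0 + z) ** (3.0 * (1.0 + w))
    return rho_m / rho_de

for z in (0.0, 0.5, 1.0, 2.0):
    print(f"z = {z}: ratio(w=-1) = {matter_to_de_ratio(z):.2f}, "
          f"ratio(w=-0.8) = {matter_to_de_ratio(z, w=-0.8):.2f}")
```

For anything close to w = -1, matter overtakes dark energy by around z ≈ 0.3–0.4, which is why supernovae at z > 1 probe the matter-dominated era.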
These new observations speak against that possibility. They include measurements of supernovae at high redshifts, back when the density of matter was higher than that of dark energy. They then constrain the value of w as it was back then, at redshifts greater than one (when the universe was less than half its current size). And the answer is … the dark energy was still dark-energy-like! That is, it had a negative pressure, and its energy density wasn’t evolving very much. It was in the process of catching up to the matter density, not “tracking” it in some sneaky way.
Of course, to get such a result requires some assumptions. Riess et al. consider three different “priors” — assumed behaviors for the dark energy. The “weak” prior makes no assumptions at all about what the dark energy was doing at redshifts greater than 1.8, and draws correspondingly weak conclusions. The “strong” prior uses data from the microwave background, along with the assumption (which is really not that strong) that the dark energy wasn’t actually dominating at those very high redshifts. That’s the prior under which the above results were obtained. The “strongest” prior imagines that we can extrapolate the behavior of the equation-of-state parameter linearly back in time — that’s a very strong prior indeed, and probably not realistic.
So everything is consistent with a perfectly constant vacuum energy. No big surprise, right? But everything about dark energy is a surprise, and we need to constantly be questioning all of our assumptions. The coincidence scandal is a real puzzle, and the idea that dark energy used to behave differently and has changed its nature recently is a perfectly reasonable one. We don’t yet know what the dark energy is or why it has the density it does, but every new piece of information nudges us a bit further down the road to really understanding it.
Update: The Riess et al. paper is now available as astro-ph/0611572. The link to the data is broken, but I think it means to go here.
Sean, you quoted “w(z > 1) = -0.8 (+0.6, -1.0), and w(z > 1) < 0 at 98% confidence.”
Are those 2-sigma errors, then? Otherwise I don’t see how the two statements could be consistent.
Ali–
The error bars are probably seriously non-Gaussian. In fact, I’d be surprised if they weren’t, given past experience with this.
As such, it’s entirely possible that the +0.6 on the w(z > 1) = -0.8 is a 1-sigma (68%) thing, and that that is consistent with w(z > 1) < 0 at 98% confidence.
Ali, I was just quoting from the paper (which I don’t believe is yet publicly available, although it’s been accepted by ApJ). I’m guessing that either those are 2-sigma errors, or they’re not Gaussian.
Arun, the results provide some more support for a cosmological constant, but there’s still room to play. SUSY is no obvious help, since it must be broken in the real world, leading to a vacuum energy at least 60 orders of magnitude greater than what we observe (absent miraculous cancellations).
In my 1998 copy of James Binney and Michael Merrifield’s “Galactic Astronomy,” there is this:
“A wide range of techniques have been applied to measuring the distance to the Virgo Cluster. In their extensive review of the subject, Jacoby et al. (1992) showed that the three methods with the smallest uncertainties (surface-brightness fluctuations, planetary-nebula luminosity function, and the Tully-Fisher relation) all provided consistent distance estimates of ~ 16 ± 1 Mpc. The one seriously conflicting measurement comes from the analysis of type Ia supernovae….the Virgo Cluster would have to lie at a distance of 23 ± 2 Mpc…As we have seen in § 7.3.3, some doubt has now been cast on the role of type 1a supernovae as standard candles”.
Probably months after this particular edition was printed, Perlmutter and others announced that the universe was expanding at an accelerating rate, using Type Ia supernovae as standard candles!
Clearly, the textbook was way behind the research! There must be quite a story in how Type Ia supernovae were successfully calibrated to be standard candles, and I hope one of the experts here will at some point go into the details.
For now, I’d settle for an answer to the question – what was the cause of the discrepancy in distance to the Virgo Cluster and how was it resolved?
Thanks in advance!
Arun —
Of course, textbooks are written a year or so before they’re published, so it’s not as close as all that.
However, it’s also entirely possible that SNe Ia could have a “wrong” distance to Virgo, while they were still standard candles good enough to measure the accelerating Universe. I’d have to go back and think a lot to figure out what the real story with SN distances to Virgo was in 1998.
Here’s the key, though: using SNe to measure the distance to Virgo requires knowing the *absolute* luminosity of a supernova. The discovery of the accelerating Universe did *not* require this; it only required that they be a standard candle. As long as they were always the same, we could measure Omega_M and Omega_L without actually knowing the true luminosity of an SN Ia! What we did (effectively) was compare the slope of the low-redshift and high-redshift supernovae.
If you want to use supernovae to measure H_0, you do need to know the absolute luminosity of a supernova, but we were able to measure the acceleration even without really knowing the current expansion rate. Indeed, in the fits that at least the SCP did (which is where I was), we had a parameter “script-M” which contained the joint effect of the supernova absolute magnitude and H0. We didn’t try to separate them out. Mathematically, it turns out that a “brighter absolute supernova” would cancel a “higher H0” perfectly, and vice versa.
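Schematically (my shorthand here, glossing over k-corrections and other details), the bookkeeping looks like this:

```latex
% Apparent magnitude of a standard candle at redshift z:
m(z) = M + 5\log_{10}\!\left[\frac{d_L(z)}{10\,\mathrm{pc}}\right],
\qquad
d_L(z) = \frac{c}{H_0}\, D_L(z;\,\Omega_M,\Omega_\Lambda),
% with D_L a dimensionless function of redshift and the density parameters.
% Folding the absolute magnitude M and H_0 into a single nuisance parameter,
m(z) = \mathcal{M} + 5\log_{10} D_L(z;\,\Omega_M,\Omega_\Lambda),
\qquad
\mathcal{M} \equiv M - 5\log_{10} H_0 + \text{const},
% so a brighter M and a larger H_0 shift \mathcal{M} in opposite directions and
% cancel; the fit for \Omega_M and \Omega_\Lambda uses only the shape of D_L(z).
```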
As such… we’re very sure that most SNe Ia make pretty good standard candles (good to 20% or so, good to 10% or so if you calibrate out a light curve decline rate), even if we don’t know the SN absolute luminosity that well. (Which nowadays we probably do, because even if we don’t have a good absolute measurement of it, we have a few good measurements of H0.)
-Rob
The likelihood is indeed non-Gaussian for the high redshift bin, which can lead to some funny looking statements if you’re not used to such things. There are several figures of likelihood histograms in the paper, and for non-Gaussian stuff I think it’s really best to look at the distribution to get an idea of what’s going on.
For the quoted “-0.8 (+0.6, -1.0)” number, those are “one-sigma” intervals defined by where the likelihood falls to 0.6 of its peak value (which is where one sigma is for a Gaussian). You get fairly similar numbers if you define things by looking at the FWHM or 68% contours or various other things you might think of. That w < 0 at 98% comes from integrating the full likelihood to get the CDF. It’s so close to the “one-sigma” number because the likelihood falls off very sharply as w increases.
There’s a table in the paper that also reports the 95% intervals, so there are a lot of statistics to contemplate. This mostly only matters for that high-redshift bin, though; the others are much closer to Gaussian.
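If it helps, here is a toy version of those two kinds of statements applied to a deliberately skewed likelihood (invented for illustration; the numbers won't match the paper's, but it shows how a "falls to exp(-1/2) of the peak" interval and a CDF-based confidence level fit together):

```python
import numpy as np

# A toy, deliberately skewed likelihood in w (invented for illustration;
# this is NOT the likelihood from the paper). It is a two-piece Gaussian:
# narrower above the peak at w = -0.8, wider below it.
w = np.linspace(-4.0, 1.0, 5001)
sigma = np.where(w > -0.8, 0.6, 1.0)
like = np.exp(-0.5 * ((w + 0.8) / sigma) ** 2)

# "One-sigma" interval: where the likelihood falls to exp(-1/2) ~ 0.61 of its
# peak (for a Gaussian this is exactly +/- 1 sigma).
inside = w[like >= np.exp(-0.5) * like.max()]
print(f"peak at w = {w[np.argmax(like)]:.2f}, "
      f"'1-sigma' interval: [{inside.min():.2f}, {inside.max():.2f}]")

# Probability that w < 0: integrate the normalized likelihood (treating it as
# a posterior with a flat prior) up to w = 0.
cdf = np.cumsum(like) / like.sum()
print(f"P(w < 0) = {cdf[np.searchsorted(w, 0.0)]:.2f}")
```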
I know it’s churlish to bring it up, but still, here goes: there is an alternative explanation of the data which involves no dark energy at all. It is simply that we are near the centre of a major inhomogeneity in the universe, and what the supernova data are measuring is the amount of spatial inhomogeneity of the universe. One can thereby fit the supernova data exactly with no cosmological constant or dark energy at all (that’s a theorem). Now this proposal is very unpopular for philosophical reasons – we would be near the centre of the universe, or at least of a large inhomogeneity in the universe, in this case; but it is surprisingly difficult to disprove it by any astronomical observations. It remains a possible explanation of the data.
The reason for mentioning this is that the existence of a cosmological constant of this small magnitude has been characterised by many as one of the greatest crises facing present-day theoretical physics, and has led to extravagances such as anthropic explanations in the context of multiverse proposals that cannot be observationally tested in any ordinary sense of the concept of “observational test”. Hence one should at least look at alternatives that avoid this problem, even if that involves being a little bit more open-minded about the geometry of the universe than is conventional.
Manual trackback from us at Populär astronomi: The dark energy that astronomers believe lies behind the universe’s acceleration seems to have remained much the same since the universe was young. New observations of distant supernovae with the Hubble telescope… […]
I have a fairly ignorant question from a non-astronomer:
I read (probably on wikipedia) that type Ia supernovae are caused by exploding white dwarfs that approach the Chandrasekhar limit by gaining mass.
I recall from planetary geology that white dwarfs form when sun-sized stars burn out, a process that takes about 9 Ga.
I have read in numerous places that the universe is only about 13.5 Ga in age.
If 9-10 Ga galaxies contain Type Ia supernovae, then one of the above “facts” must be wrong, since they predict that the oldest white dwarfs should not be older than about 4.5 Ga. So where have I screwed up?
Rob,
My understanding of the chain of reasoning is as follows:
Cepheids were used to calibrate the absolute magnitudes of nearby Type Ia supernovae, and then this calibration puts the Virgo Cluster too far away compared to all other measures.
This means at least one of the following:
1. The Cepheid distance scale has a problem.
2. The supernovae calibrated using Cepheids were unusual.
3. The nearby supernovae and those in the Virgo Cluster have different absolute magnitudes, which would cast doubt on using them as a standard candle.
Presumably something like the following is true – and here I’m really guessing – that by the time scale of the light curve and/or the spectrum one can bin Type Ia supernovae into classes, and the supernovae in each class have essentially a unique absolute magnitude.
Calling a press conference for something that has been determined with 98% confidence is quite the publicity stunt. That’s barely 2.5 sigma in Gaussian-speak. If particle physicists called the media in every time something was 2.5 sigma out … well, that was the problem with the Higgs pseudo-signal back in 2000, I seem to recall.
What’s the deal with the spectroscopic determinations, or lack of them?
PS Was that *the* George Ellis?
Lab Lemming — according to our best understanding of stellar evolution, white dwarfs are left behind by stars 8 solar masses and lighter. A star just under 8 solar masses lives (if memory serves) less than 100 million years.
As such, it’s possible to make a white dwarf very quickly after you form a bunch of stars — at least on cosmological time scales.
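A rough main-sequence lifetime scaling (take the exponent with a grain of salt; it's usually quoted somewhere between 2.5 and 3) gives the same ballpark:

```latex
% Rough main-sequence lifetime scaling for solar-type and heavier stars:
t_{\rm MS} \sim 10\,\mathrm{Gyr}\left(\frac{M}{M_\odot}\right)^{-2.5},
% so for a star near the top of the white-dwarf-producing range:
t_{\rm MS}(8\,M_\odot) \sim 10\,\mathrm{Gyr}\times 8^{-2.5} \approx 5\times 10^{7}\,\mathrm{yr},
% i.e. the first white dwarfs can appear within ~100 Myr of the first star formation.
```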
I have to go back and remember where I read this, but I think there have also been some studies that suggest that galaxies 1 Gyr after a starburst (starburst galaxies being those forming lots of stars right now) show chemical signatures of enhanced SNe Ia. If that’s right, that would suggest that indeed Type Ia supernovae are *more* common from the white dwarfs left behind by the rarer, more massive stars — and thus would potentially have a short average time to go from gas cloud to Chandrasekhar-mass star.
Arun — again, I don’t have knowledge of how many or what supernovae were found in the Virgo cluster at my fingertips, so I’d have to dig a bit to figure that out. However, your “bins” thing is almost right. Really, the light curve decay rate is a parameter that smoothly varies with supernova peak luminosity. Even without that correction, though, most Ia supernovae are consistent to 20%. There are a handful of outliers. This isn’t a showstopper, though. As long as you have a lot of them, it’s easy to identify the outliers. And, indeed, that’s the case with the supernovae used for cosmology. If you take the “low redshift” (z<.1 or some such) sets that come from Hamuy and Riess papers of 1998 and before, the dispersion of those supernovae around a Hubble expansion is small. This is the empirical evidence that supernovae are consistent. Add to that the fact that the high-redshift and low-redshift supernovae have consistent spectra, and we’re pretty sure we know what we’re doing.
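If it helps to see it written down, the decline-rate correction is schematically of the form below (this is the Δm15 parametrization; the SCP used an equivalent "stretch" factor, and I'm leaving the coefficient symbolic rather than quoting fitted numbers):

```latex
% Schematic decline-rate correction (coefficient b fitted empirically from the
% low-redshift sample; left symbolic here):
M_{\rm corr} = M_{\rm peak} - b\,\bigl[\Delta m_{15}(B) - 1.1\bigr],
% where \Delta m_{15}(B) is how much the B-band light curve fades in the 15 days
% after maximum. Slowly declining (small \Delta m_{15}) supernovae are
% intrinsically brighter; correcting everything to a fiducial decline rate is
% what tightens the ~20% scatter to ~10%.
```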
Thomas Dent — spectroscopic determination of supernovae at really high redshift is *hard*. It takes a lot of Hubble Space Telescope time, and even then the signals are often marginal. That’s probably the main reason some of them are lacking. I haven’t read the paper yet, but it may be that there wasn’t time to attempt confirmations for all of the supernovae. It may also be that they attempted it, but the signal was too crappy to see anything convincing.
-Rob
Does dark energy mean that Vacuum Abhors A Nature ?
George, you’re right that we could do away with dark energy by imagining that we lived at the center of a spherical inhomogeneity. (At least as far as supernovae and other kinematical tests go; I’m less sure about whether you could simultaneously fit structure formation.) But:
(1) That would actually be more surprising than a cosmological constant. Anthropic-type explanations would seem even more tempting in such circumstances.
(2) There’s no reason why such a configuration would give us something extremely close to w = -1, as we seem to be observing. It would be allowed, but so would any other value.
(3) The “biggest crisis” is really the fact that the vacuum energy is small, and zero would still count. An inhomogeneity wouldn’t solve that problem.
Sean: “This latter quality, persistence of the energy density, is sometimes translated as ‘negative pressure,’”
We needed an explanation for the “why of it,” and I was just wondering about the crossover point at the LHC? More on name.
Sean says,
“you’re right that we could do away with dark energy by imagining that we lived at the center of a spherical inhomogeneity. (At least as far as supernovae and other kinematical tests go; I’m less sure about whether you could simultaneously fit structure formation.)”
right. Such other tests still need careful consideration.
” But: (1) That would actually be more surprising than a cosmological constant. Anthropic-type explanations would seem even more tempting in such circumstances.”
What is surprising or not is a matter of opinion and philosophical stance. There is no physical experiment to say this is more surprising than that; and if there were, it would still not *prove* anything about the way the universe actually is – sometimes reality is indeed very surprising. So this is an example of how much of modern cosmology, despite the appearances, is philosophically rather than data driven. There is nothing wrong with that, but it should be acknowledged.
“(2) There’s no reason why such a configuration would give us something extremely close to w = -1, as we seem to be observing. It would be allowed, but so would any other value.”
And there is no reason why there should be a cosmological constant or quintessence or whatever with the observed values.
“(3) The “biggest crisis” is really the fact that the vacuum energy is small, and zero would still count. An inhomogeneity wouldn’t solve that problem. ”
Yes, but there used to be the assumption that something (supersymmetry?) would cause cancellations leading to an exact zero, while a value of 10^{-80} or so would require huge fine tuning. The zero would in some sense be more natural. Of course, again a philosophical argument.
What is actually happening in the way things are done at present is that the assumption of spatial homogeneity is put in by hand, and then used to derive an equation of state for “dark energy” that then follows from the astronomical data. A geometrical assumption is used to determine the physics that would lead to that desired geometrical result. So the question is, what independent test could there be of that supposed physics? Will it explain anything else other than the one item it was invented to explain?
Now you could claim of course that inflation would prevent any such inhomogeneities occurring (indeed I am surprised this was not on your list!). But inflation is a flexible enough subject that it can probably be varied enough to include such inhomogeneities. You can probably run the equations backwards to get a potential that will give the required result.
I like Ned Wright’s comment on his Cosmology Tutorial website:
So an 8 s.m. star ends up as a 1.4 s.m. white dwarf? I guess I was incorrectly assuming mass conservation throughout the star’s lifetime. Thanks for clearing that up.
Lab Lemming – There is conservation of mass/energy. A star becoming a white dwarf will throw off its outer layers to form a planetary nebula, leaving behind a core (mass loss). It will then gradually cool down and radiate away energy until it can no longer prevent gravitational collapse; it then becomes supported only by electron degeneracy pressure. After it is all said and done it will end up as an object of 1.4 s.m., so conservation of mass/energy is not violated. Here is a cool picture of a planetary nebula: http://antwrp.gsfc.nasa.gov/apod/ap061112.html
This is what I put on “The News of the Universe” part of my cosmology tutorial:
————–
NASA fails to produce new data on dark energy
16 Nov 06 – NASA held a press telecon today about dark energy, but neither the press release nor the images accompanying it contained any useful information. There was no paper about the data on the preprint server either.
————–
It might be a good idea to stop participating in press events where the PIO has been so totally successful in Preventing Information Outflow.
In any case, your w = -0.8 (+0.6, -1.0) for z > 1 is the only quantitative result available. It also cannot possibly be correct without caveats. Certainly the data must be consistent with w = 0 for z > 10. Or w could be 0 for z between 1.40 and 1.41.
I expect the analysis assumed a flat Universe, which is either faith-based following the prophet Guth, or a circular argument based on the consistency of all data with a flat lambda-CDM model which assumes w = -1. Then a correct statement of the significance of the results is that the set of all data has grown by a few percent and it is still all consistent with flat lambda-CDM.
I apologize ahead of time for my ignorance – I’m probably the equivalent of my mechanic buddy’s archnemesis – the “guy who thinks he knows more than he does”.
Anyhow, I am just throwing this out there for thoughts. I read the CNN article, and it spurred me to post here because I had recently had a traffic jam “brain storm”.
So here’s the resulting question/comment:
Has it ever been proposed that the big bang was closer to a massive “crystallization event” than an explosion? I ask because I recall watching, as a child, a supersaturated sugar solution “instantly” crystallize from a seed or major disturbance to the container.
I wondered then, what if Dark Matter is really just the core solution of everything? And, at some point billions of years ago a huge super hot solution of dark matter “soup” just had a huge “crystallization event” with matter as we know it falling out of solution and propelled away as it no longer mixed well with its parent material??
I wonder if there’d be “dark matter paths” towards the interior of the universe to replace the matter “dropping out” of that primordial solution?
ugh. I know… crazy talk. 🙂
I’m sorry. I don’t pretend to know anything concrete in this field, but figure even as a common joe – it may be a “brainstorming” idea worth at least a mention.
(PS. While I’m out here feeling self-conscious, I’ll add a follow-up thought that the “disturbance event” which triggered the mass crystallization originated in an adjacent universe/dimension?)
FWIW
Abe Miller