In honor of the Nobel Prize, here are some questions that are frequently asked about dark energy, or should be.
What is dark energy?
It’s what makes the universe accelerate, if indeed there is a “thing” that does that. (See below.)
So I guess I should be asking… what does it mean to say the universe is “accelerating”?
First, the universe is expanding: as shown by Hubble, distant galaxies are moving away from us with velocities that are roughly proportional to their distance. “Acceleration” means that if you measure the velocity of one such galaxy, and come back a billion years later and measure it again, the recession velocity will be larger. Galaxies are moving away from us at an accelerating rate.
But that’s so down-to-Earth and concrete. Isn’t there a more abstract and scientific-sounding way of putting it?
The relative distance between far-flung galaxies can be summed up in a single quantity called the “scale factor,” often written a(t) or R(t). The scale factor is basically the “size” of the universe, although it’s not really the size because the universe might be infinitely big — more accurately, it’s the relative size of space from moment to moment. The expansion of the universe is the fact that the scale factor is increasing with time. The acceleration of the universe is the fact that it’s increasing at an increasing rate — the second derivative is positive, in calculus-speak.
Does that mean the Hubble constant, which measures the expansion rate, is increasing?
No. The Hubble “constant” (or Hubble “parameter,” if you want to acknowledge that it changes with time) characterizes the expansion rate, but it’s not simply the derivative of the scale factor: it’s the derivative divided by the scale factor itself. Why? Because then it’s a physically measurable quantity, not something we can change by switching conventions. The Hubble constant is basically the answer to the question “how quickly does the scale factor of the universe expand by some multiplicative factor?”
If the universe is decelerating, the Hubble constant is decreasing. If the Hubble constant is increasing, the universe is accelerating. But there’s an intermediate regime in which the universe is accelerating but the Hubble constant is decreasing — and that’s exactly where we think we are. The velocity of individual galaxies is increasing, but it takes longer and longer for the universe to double in size.
Said yet another way: Hubble’s Law relates the velocity v of a galaxy to its distance d via v = H d. The velocity can increase even if the Hubble parameter is decreasing, as long as it’s decreasing more slowly than the distance is increasing.
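Here is a minimal numerical sketch of that intermediate regime, assuming a flat universe with roughly 27% matter and 73% vacuum energy (the split discussed later in this post) and units where H0 = 1:

```python
import numpy as np

# Flat universe with pressureless matter plus vacuum energy; the fractions
# are the rough 27% / 73% split quoted later in this post. Units: H0 = 1.
Om, OL = 0.27, 0.73

def hubble(a):
    """Hubble parameter from the Friedmann equation: H^2 = Om/a^3 + OL."""
    return np.sqrt(Om / a**3 + OL)

def accel_over_a(a):
    """a-double-dot over a: -(1/2) Om/a^3 + OL for matter plus Lambda."""
    return -0.5 * Om / a**3 + OL

def hubble_dot(a):
    """dH/dt = -(3/2) Om/a^3, from differentiating H^2 = Om/a^3 + OL."""
    return -1.5 * Om / a**3

for a in [0.5, 1.0, 2.0]:  # half the present size, today, twice the present size
    print(f"a = {a}: H = {hubble(a):+.3f}, "
          f"a''/a = {accel_over_a(a):+.3f}, dH/dt = {hubble_dot(a):+.3f}")

# At a = 1 the output shows a''/a > 0 (acceleration) while dH/dt < 0:
# individual galaxies speed up, yet the doubling time ln(2)/H of the
# universe keeps getting longer.
```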
Did the astronomers really wait a billion years and measure the velocity of galaxies again?
No. You measure the velocity of galaxies that are very far away. Because light travels at a fixed speed (one light year per year), you are looking into the past. Reconstructing the history of how the velocities were different in the past reveals that the universe is accelerating.
How do you measure the distance to galaxies so far away?
It’s not easy. The most robust method is to use a “standard candle” — some object that is bright enough to see from great distance, and whose intrinsic brightness is known ahead of time. Then you can figure out the distance simply by measuring how bright it actually looks: dimmer = further away.
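In code, that logic is just the inverse-square law; a toy illustration with made-up numbers, ignoring expansion effects like redshifting:

```python
import math

def distance_from_flux(luminosity, flux):
    """Invert the inverse-square law F = L / (4 pi d^2) to get d."""
    return math.sqrt(luminosity / (4 * math.pi * flux))

L = 1.0e36                     # intrinsic luminosity in watts (illustrative only)
for F in [1.0e-9, 2.5e-10]:    # observed fluxes in W/m^2 (illustrative only)
    print(f"F = {F:.1e}  ->  d = {distance_from_flux(L, F):.2e} m")
# Quartering the observed flux doubles the inferred distance:
# dimmer really does mean further away.
```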
Sadly, there are no standard candles.
Then what did they do?
Fortunately we have the next best thing: standardizable candles. One specific type of supernova, Type Ia, is very bright, and these explosions have approximately, but not quite, the same intrinsic brightness. Happily, in the 1990s Mark Phillips discovered a remarkable relationship between intrinsic brightness and the length of time it takes for a supernova to decline after reaching peak brightness. Therefore, if we measure the brightness as it declines over time, we can correct for this difference, constructing a universal measure of brightness that can be used to determine distances.
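A schematic of that standardization step, with a made-up linear correction standing in for the real Phillips calibration (the slope and pivot below are illustrative placeholders, not the actual fit):

```python
def standardized_peak(m_peak, delta_m15, slope=0.8, pivot=1.1):
    """Toy Phillips-style correction: faster-declining supernovae
    (larger delta_m15, the magnitude drop over 15 days) are intrinsically
    dimmer, so subtract a decline-rate term from the observed peak.
    The slope and pivot are illustrative, not the real calibration."""
    return m_peak - slope * (delta_m15 - pivot)

# Two supernovae at the same distance with different decline rates
# land on nearly the same standardized magnitude after the correction:
for m_peak, dm15 in [(24.3, 0.9), (24.6, 1.3)]:
    print(f"observed {m_peak}, decline {dm15} -> "
          f"standardized {standardized_peak(m_peak, dm15):.2f}")
```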
Why are Type Ia supernovae standardizable candles?
We’re not completely sure — mostly it’s an empirical relationship. But we have a good idea: we think that SNIa are white dwarf stars that have been accreting matter from outside until they hit the Chandrasekhar Limit and explode. Since that limit is basically the same number everywhere in the universe, it’s not completely surprising that the supernovae have similar brightnesses. The deviations are presumably due to differences in composition.
But how do you know when a supernova is going to happen?
You don’t. They are rare, maybe once per century in a typical galaxy. So what you do is look at many, many galaxies with wide-field cameras. In particular you compare an image of the sky taken at one moment to another taken a few weeks later — “a few weeks” being roughly the time between new Moons (when the sky is darkest), and coincidentally about the time it takes a supernova to flare up in brightness. Then you use computers to compare the images and look for new bright spots. Then you go back and examine those bright spots closely to try to check whether they are indeed Type Ia supernovae. Obviously this is very hard and wouldn’t even be conceivable if it weren’t for a number of relatively recent technological advances — CCD cameras as well as giant telescopes. These days we can go out and be confident that we’ll harvest supernovae by the dozens — but when Perlmutter and his group started out, that was very far from obvious.
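The heart of the search is simple image differencing. A bare-bones sketch with synthetic data (real pipelines also align the images, match the atmospheric seeing, and reject artifacts, none of which is attempted here):

```python
import numpy as np

rng = np.random.default_rng(0)
reference = rng.normal(100.0, 5.0, size=(64, 64))            # first epoch
new_image = reference + rng.normal(0.0, 5.0, size=(64, 64))  # weeks later
new_image[40, 17] += 80.0                                    # a transient appears

difference = new_image - reference
threshold = 5.0 * difference.std()             # crude 5-sigma detection cut
candidates = np.argwhere(difference > threshold)
print("candidate transients (row, col):", candidates)        # -> [[40 17]]
```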
And what did they find when they did this?
Most (almost all) astronomers expected them to find that the universe was decelerating — galaxies pull on each other with their gravitational fields, which should slow the whole thing down. (Actually many astronomers just thought they would fail completely, but that’s another story.) But what they actually found was that the distant supernovae were dimmer than expected — a sign that they are farther away than we predicted, which means the universe has been accelerating.
Why did cosmologists accept this result so quickly?
Even before the 1998 announcements, it was clear that something funny was going on with the universe. There seemed to be evidence that the universe was younger than its oldest stars. There wasn’t as much total matter as theorists predicted. And there was less structure on large scales than people expected. The discovery of dark energy solved all of these problems at once. It made everything snap into place. So people were still rightfully cautious, but once this one startling observation was made, the universe suddenly made a lot more sense.
How do we know the supernovae are not dimmer because something is obscuring them, or just because things were different in the far past?
That’s the right question to ask, and one reason the two supernova teams worked so hard on their analysis. You can never be 100% sure, but you can gain more and more confidence. For example, astronomers have long known that obscuring material tends to scatter blue light more easily than red, leading to “reddening” of stars that sit behind clouds of gas and dust. You can look for reddening, and in the case of these supernovae it doesn’t appear to be important. More crucially, by now we have a lot of independent lines of evidence that reach the same conclusion, so it looks like the original supernova results were solid.
There’s really independent evidence for dark energy?
Oh yes. One simple argument is “subtraction”: the cosmic microwave background measures the total amount of energy (including matter) in the universe. Local measures of galaxies and clusters measure the total amount of matter. The latter turns out to be about 27% of the former, leaving 73% or so in the form of some invisible stuff that is not matter: “dark energy.” That’s the right amount to explain the acceleration of the universe. Other lines of evidence come from baryon acoustic oscillations (ripples in large-scale structure whose size helps measure the expansion history of the universe) and the evolution of structure as the universe expands.
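The subtraction argument fits in a few lines:

```python
omega_total = 1.00     # total density in units of critical, from the CMB
omega_matter = 0.27    # ordinary plus dark matter, from galaxies and clusters
omega_dark_energy = omega_total - omega_matter
print(f"Dark energy fraction: {omega_dark_energy:.2f}")   # ~0.73
```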
Okay, so: what is dark energy?
Glad you asked! Dark energy has three crucial properties. First, it’s dark: we don’t see it, and as far as we can observe it doesn’t interact with matter at all. (Maybe it does, but beneath our ability to currently detect.) Second, it’s smoothly distributed: it doesn’t fall into galaxies and clusters, or we would have found it by studying the dynamics of those objects. Third, it’s persistent: the density of dark energy (amount of energy per cubic light-year) remains approximately constant as the universe expands. It doesn’t dilute away like matter does.
These last two properties (smooth and persistent) are why we call it “energy” rather than “matter.” Dark energy doesn’t seem to act like particles, which have local dynamics and dilute away as the universe expands. Dark energy is something else.
That’s a nice general story. What might dark energy specifically be?
The leading candidate is the simplest one: “vacuum energy,” or the “cosmological constant.” Since we know that dark energy is pretty smooth and fairly persistent, the first guess is that it’s perfectly smooth and exactly persistent. That’s vacuum energy: a fixed amount of energy attached to every tiny region of space, unchanging from place to place or time to time. About one hundred-millionth of an erg per cubic centimeter, if you want to know the numbers.
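You can check that number yourself, assuming dark energy is about 73% of the critical density and the Hubble constant is about 70 km/s/Mpc (all values rounded; cgs units):

```python
import math

H0 = 70 * 1.0e5 / 3.086e24   # 70 km/s/Mpc converted to 1/s
G = 6.674e-8                 # Newton's constant, cm^3 g^-1 s^-2
c = 2.998e10                 # speed of light, cm/s

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density, g/cm^3
rho_vac = 0.73 * rho_crit * c**2           # vacuum energy density, erg/cm^3
print(f"{rho_vac:.1e} erg/cm^3")           # ~6e-9: about 1e-8 erg per cc
```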
Is vacuum energy really the same as the cosmological constant?
Yes. Don’t believe claims to the contrary. When Einstein first invented the idea, he didn’t think of it as “energy,” he thought of it as a modification of the way spacetime curvature interacted with energy. But it turns out to be precisely the same thing. (If someone doesn’t want to believe this, ask them how they would observationally distinguish the two.)
Doesn’t vacuum energy come from quantum fluctuations?
Not exactly. There are many different things that can contribute to the energy of empty space, and some of them are completely classical (nothing to do with quantum fluctuations). But in addition to whatever classical contribution the vacuum energy has, there are also quantum fluctuations on top of that. These fluctuations are very large, and that leads to the cosmological constant problem.
What is the cosmological constant problem?
If all we knew was classical mechanics, the cosmological constant would just be a number — there’s no reason for it to be big or small, positive or negative. We would just measure it and be done.
But the world isn’t classical, it’s quantum. In quantum field theory we expect that classical quantities receive “quantum corrections.” In the case of the vacuum energy, these corrections come in the form of the energy of virtual particles fluctuating in the vacuum of empty space.
We can add up the amount of energy we expect in these vacuum fluctuations, and the answer is: an infinite amount. That’s obviously wrong, but we suspect that we’re overcounting. In particular, that rough calculation includes fluctuations at all sizes, including wavelengths smaller than the Planck distance at which spacetime probably loses its conceptual validity. If instead we only include wavelengths that are at the Planck length or longer, we get a specific estimate for the value of the cosmological constant.
The answer is: 10^120 times what we actually observe. That discrepancy is the cosmological constant problem.
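Here is the back-of-the-envelope version of that estimate; depending on exactly how the cutoff is imposed you get 10^120 give or take a few orders of magnitude:

```python
import math

hbar = 1.055e-27   # erg s
G = 6.674e-8       # cm^3 g^-1 s^-2
c = 2.998e10       # cm/s

l_planck = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-33 cm
E_planck = math.sqrt(hbar * c**5 / G)   # Planck energy, ~2e16 erg
rho_planck = E_planck / l_planck**3     # one Planck energy per Planck volume

rho_observed = 1.0e-8                   # erg/cm^3, from the previous answer
print(f"ratio ~ 10^{math.log10(rho_planck / rho_observed):.0f}")
```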
Why is the cosmological constant so small?
Nobody knows. Before the supernovae came along, many physicists assumed there was some secret symmetry or dynamical mechanism that set the cosmological constant to precisely zero, since we certainly knew it was much smaller than our estimates would indicate. Now we are faced with both explaining why it’s small, and why it’s not quite zero. And for good measure: the coincidence problem, which is why the dark energy density is the same order of magnitude as the matter density.
Here’s how bad things are: right now, the best theoretical explanation for the value of the cosmological constant is the anthropic principle. If we live in a multiverse, where different regions have very different values of the vacuum energy, one can plausibly argue that life can only exist (to make observations and win Nobel Prizes) in regions where the vacuum energy is much smaller than the estimate. If it were larger and positive, galaxies (and even atoms) would be ripped apart; if it were larger and negative, the universe would quickly recollapse. Indeed, we can roughly estimate what typical observers should measure in such a situation; the answer is pretty close to the observed value. Steven Weinberg actually made this prediction in 1988, long before the acceleration of the universe was discovered. He didn’t push it too hard, though; more like “if this is how things work out, this is what we should expect to see…” There are many problems with this calculation, especially when you start talking about “typical observers,” even if you’re willing to believe there might be a multiverse. (I’m very happy to contemplate the multiverse, but much more skeptical that we can currently make a reasonable prediction for observable quantities within that framework.)
What we would really like is a simple formula that predicts the cosmological constant once and for all as a function of other measured constants of nature. We don’t have that yet, but we’re trying. Proposed scenarios make use of quantum gravity, extra dimensions, wormholes, supersymmetry, nonlocality, and other interesting but speculative ideas. Nothing has really caught on as yet.
Has the course of progress in string theory ever been affected by an experimental result?
Yes: the acceleration of the universe. Previously, string theorists (like everyone else) assumed that the right thing to do was to explain a universe with zero vacuum energy. Once there was a real chance that the vacuum energy is not zero, they asked whether that was easy to accommodate within string theory. The answer is: it’s not that hard. The problem is that if you can find one solution, you can find an absurdly large number of solutions. That’s the string theory landscape, which seems to kill the hopes for one unique solution that would explain the real world. That would have been nice, but science has to take what nature has to offer.
What’s the coincidence problem?
Matter dilutes away as the universe expands, while the dark energy density remains more or less constant. Therefore, the relative density of dark energy and matter changes considerably over time. In the past, there was a lot more matter (and radiation); in the future, dark energy will completely dominate. But today, they are approximately equal, by cosmological standards. (When two numbers could differ by a factor of 10^100 or much more, a factor of three or so counts as “equal.”) Why are we so lucky to be born at a time when dark energy is large enough to be discoverable, but small enough that it’s a Nobel-worthy effort to do so? Either this is just a coincidence (which might be true), or there is something special about the epoch in which we live. That’s one of the reasons people are willing to take anthropic arguments seriously. We’re talking about a preposterous universe here.
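Numerically, the coincidence looks like this (densities in units of today’s critical density):

```python
rho_matter_today, rho_dark_energy = 0.27, 0.73

for a in [1e-3, 1e-1, 1.0, 1e1, 1e3]:   # early universe ... far future
    ratio = (rho_matter_today / a**3) / rho_dark_energy
    print(f"scale factor {a:7g}: matter / dark energy = {ratio:.3g}")
# The ratio sweeps through eighteen orders of magnitude over this range,
# yet happens to be of order one at a = 1 (today).
```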
If the dark energy has a constant density, but space expands, doesn’t that mean energy isn’t conserved?
Yes. That’s fine.
What’s the difference between “dark energy” and “vacuum energy”?
“Dark energy” is the general phenomenon of smooth, persistent stuff that makes the universe accelerate; “vacuum energy” is a specific candidate for dark energy, namely one that is absolutely smooth and utterly constant.
So there are other candidates for dark energy?
Yes. All you need is something that is pretty darn smooth and persistent. It turns out that most things like to dilute away, so finding persistent energy sources isn’t that easy. The simplest and best idea is quintessence, which is just a scalar field that fills the universe and changes very slowly as time passes.
Is the quintessence idea very natural?
Not really. An original hope was that, by considering something dynamical and changing rather than a plain fixed constant energy, you could come up with some clever explanation for why the dark energy was so small, and maybe even explain the coincidence problem. Neither of those hopes has really panned out.
Instead, you’ve added new problems. According to quantum field theory, scalar fields like to be heavy; but to be quintessence, a scalar field would have to be enormously light, maybe 10^-30 times the mass of the lightest neutrino. (But not zero!) That’s one new problem you’ve introduced, and another is that a light scalar field should interact with ordinary matter. Even if that interaction is pretty feeble, it should still be large enough to detect — and it hasn’t been detected. Of course, that’s an opportunity as well as a problem — maybe better experiments will actually find a “quintessence force,” and we’ll understand dark energy once and for all.
How else can we test the quintessence idea?
The most direct way is to do the supernova thing again, but do it better. More generally: map the expansion of the universe so precisely that we can tell whether the density of dark energy is changing with time. This is generally cast as an attempt to measure the dark energy equation-of-state parameter w. If w is exactly minus one, the dark energy is exactly constant — vacuum energy. If w is slightly greater than -1, the energy density is gradually declining; if it’s slightly less (e.g. -1.1), the dark energy density is actually growing with time. That’s dangerous for all sorts of theoretical reasons, but we should keep our eyes peeled.
What is w?
It’s called the “equation-of-state parameter” because it relates the pressure p of dark energy to its energy density ρ, via w = p/ρ. Of course nobody measures the pressure of dark energy, so it’s a slightly silly definition, but it’s an accident of history. What really matters is how the dark energy evolves with time, but in general relativity that’s directly related to the equation-of-state parameter.
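The standard general-relativity result is that a component with constant w has density proportional to a^(-3(1+w)), which makes the role of w easy to see:

```python
def density(a, w, rho0=1.0):
    """Energy density at scale factor a for a constant equation of state w:
    rho = rho0 * a^(-3(1+w))."""
    return rho0 * a ** (-3 * (1 + w))

for w in [0.0, -0.9, -1.0, -1.1]:
    print(f"w = {w:+.1f}: rho(a=2)/rho(a=1) = {density(2.0, w):.3f}")
# w =  0.0: matter, dilutes as 1/a^3
# w = -0.9: quintessence-like, slowly declining
# w = -1.0: vacuum energy, exactly constant
# w = -1.1: density grows as space expands (the dangerous case)
```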
Does that mean that dark energy has negative pressure?
Yes indeed. Negative pressure is what happens when a substance pulls rather than pushes — like an over-extended spring that pulls on either end. It’s often called “tension.” This is why I advocated smooth tension as a better name than “dark energy,” but I came in too late.
Why does dark energy make the universe accelerate?
Because it’s persistent. Einstein says that energy causes spacetime to curve. In the case of the universe, that curvature comes in two forms: the curvature of space itself (as opposed to spacetime), and the expansion of the universe. We’ve measured the curvature of space, and it’s essentially zero. So the persistent energy leads to a persistent expansion rate. In particular, the Hubble parameter is close to constant, and if you remember Hubble’s Law from way up top (v = H d) you’ll realize that if H is approximately constant, v will be increasing because the distance is increasing. Thus: acceleration.
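In equations: if H is strictly constant, then da/dt = H a gives exponential expansion, and the recession velocity of any one galaxy grows along with its distance. A tiny illustration:

```python
import math

H = 1.0    # constant Hubble parameter, arbitrary units
d0 = 1.0   # a galaxy's distance at t = 0

for t in [0.0, 1.0, 2.0]:
    d = d0 * math.exp(H * t)   # comoving galaxies ride along with a(t) ~ e^(Ht)
    v = H * d                  # Hubble's law: v grows because d grows
    print(f"t = {t}: d = {d:.2f}, v = {v:.2f}")
```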
If negative pressure is like tension, why doesn’t it pull things together rather than pushing them apart?
Sometimes you will hear something along the lines of “dark energy makes the universe accelerate because it has negative pressure.” This is strictly speaking true, but a bit ass-backwards; it gives the illusion of understanding rather than actual understanding. You are told “the force of gravity depends on the density plus three times the pressure, so if the pressure is equal and opposite to the density, gravity is repulsive.” Seems sensible, except that nobody will explain to you why gravity depends on the density plus three times the pressure. And it’s not really the “force of gravity” that depends on that; it’s the local expansion of space.
The “why doesn’t tension pull things together?” question is a perfectly valid one. The answer is: because dark energy doesn’t actually push or pull on anything. It doesn’t interact directly with ordinary matter, for one thing; for another, it’s equally distributed through space, so any pulling it did from one direction would be exactly balanced by an opposite pull from the other. It’s the indirect effect of dark energy, through gravity rather than through direct interaction, that makes the universe accelerate.
The real reason dark energy causes the universe to accelerate is because it’s persistent.
Is dark energy like antigravity?
No. Dark energy is not “antigravity,” it’s just gravity. Imagine a world with zero dark energy, except for two blobs full of dark energy. Those two blobs will not repel each other, they will attract. But inside those blobs, the dark energy will push space to expand. That’s just the miracle of non-Euclidean geometry.
Is it a new repulsive force?
No. It’s just a new (or at least different) kind of source for an old force — gravity. No new forces of nature are involved.
What’s the difference between dark energy and dark matter?
Completely different. Dark matter is some kind of particle, just one we haven’t discovered yet. We know it’s there because we’ve observed its gravitational influence in a variety of settings (galaxies, clusters, large-scale structure, microwave background radiation). It’s about 23% of the universe. But it’s basically good old-fashioned “matter,” just matter that we can’t directly detect (yet). It clusters under the influence of gravity, and dilutes away as the universe expands. Dark energy, meanwhile, doesn’t cluster, nor does it dilute away. It’s not made of particles, it’s some different kind of thing entirely.
Is it possible that there is no dark energy, just a modification of gravity on cosmological scales?
It’s possible, sure. There are at least two popular approaches to this idea: f(R) gravity, which Mark Trodden and I helped develop, and DGP gravity, by Dvali, Gabadadze, and Porrati. The former is a directly phenomenological approach where you simply change the Einstein field equation by messing with the action in four dimensions, while the latter uses extra dimensions that only become visible at large distances. Both models face problems — not necessarily insurmountable, but serious — with new degrees of freedom and attendant instabilities.
Modified gravity is certainly worth taking seriously (but I would say that). Still, like quintessence, it raises more problems than it solves, at least at the moment. My personal likelihoods: cosmological constant = 0.9, dynamical dark energy = 0.09, modified gravity = 0.01. Feel free to disagree.
What does dark energy imply about the future of the universe?
That depends on what the dark energy is. If it’s a true cosmological constant that lasts forever, the universe will continue to expand, cool off, and empty out. Eventually there will be nothing left but essentially empty space.
The cosmological constant could be constant at the moment, but temporary; that is, there could be a future phase transition in which the vacuum energy decreases. Then the universe could conceivably recollapse.
If the dark energy is dynamical, any possibility is still open. If it’s dynamical and increasing (w less than -1 and staying that way), we could even get a Big Rip.
What’s next?
We would love to understand dark energy (or modified gravity) through better cosmological observations. That means measuring the equation-of-state parameter, as well as improving observations of gravity in galaxies and clusters to compare with different models. Fortunately, while the U.S. is gradually retreating from ambitious new science projects, the European Space Agency is moving forward with a satellite to measure dark energy. There are a number of ongoing ground-based efforts, of course, and the Large Synoptic Survey Telescope should do a great job once it goes online.
But the answer might be boring — the dark energy is just a simple cosmological constant. That’s just one number; what are you going to do about it? In that case we need better theories, obviously, but also input from less direct empirical sources — particle accelerators, fifth-force searches, tests of gravity, anything that would give some insight into how spacetime and quantum field theory fit together at a basic level.
The great thing about science is that the answers aren’t in the back of the book; we have to solve the problems ourselves. This is a big one.
“The belief that we can’t live in a special place but we can live at a special time (which is what Sean’s “preposterous universe” boils down to) is exactly one of those theoretical prejudices that is better challenged by hard data. Fortunately, there are some people doing this.”
I’m very interested in this question myself (look for a paper on the arXiv within the next few weeks), but I don’t see how living at a special time can provide large-scale inhomogeneity.
“As for the rest of the issues you raise, I could stop to discuss each in detail, but actually – as I’ve just submitted a PhD thesis that deals with exactly this topic”
Can you send me a link to it?
” – I think I’d rather sit in the sunshine than argue on a blog.”
Of course, one can do both simultaneously. 🙂
Thank you for this FAQ Sean. I think I learned 10x more than I ever knew before about Dark Energy. I didn’t understand it all but I did understand most of it.
In particular I liked that you laid out the lines of evidence, skimmed through possible answers, and attempted to quantify the uncertainties involved.
@32. SteveB & @6. olsenjaynelson, I agree on all your major points.
http://www.cosmicsignals.wordpress.com, post 15 in particular.
I’ve looked far and wide for the pennies that have just dropped.
Thank you.
SUPERLUMINELLE
— James Ph. Kotsybar
The Universe is expanding,
Faster than the limit of light,
Beyond common understanding.
Cosmology is demanding.
Its study is by no means slight.
The Universe is expanding.
Physics’ heroes, quite outstanding,
Have applied their full mental might
Beyond common understanding.
There’s no point in reprimanding,
As we gaze out into the night,
The Universe is expanding.
The truth of fact is commanding.
Whatever is has to be right,
Beyond common understanding.
Einstein’s physics notwithstanding,
Much quicker than what we call bright,
The Universe is expanding
Beyond common understanding.
Please anyone, everyone, tell me where I am wrong here. Sean, in November Discover lists
the four forces: gravity, electromagnetism, the strong and weak nuclear forces. He challenges
us to find the fifth force. It seems to me that the fifth force is the preprogrammed pattern in
every dynamic system for its own maturation and regeneration. A living organism or an
evolving solar system follows a predetermined course. This preprogramming is as much
a force as the others. By adding gravity to this theorem, the universe is expanding as part of the preprogrammed evolution of the multiverse, so that the gravitational pull of other universes on the matter in our universe, on the way to its eventual demise and contribution to the rebirth of new universes, explains why matter is moving apart at an increasing rate toward the edge of our universe, possibly to combine with matter from other universes. At the very least, if gravity is one of the stronger of the four known forces, why do we not allow for the tremendous gravitational pull of the multiverse? A far simpler explanation than the elusive search for dark matter.
O.K. Beat up on me, student of the behavioral sciences that I am.
Sean – Excellent posting & discussion. Kudos also to Philip Helbig (hi!)
In very broad, simple strokes:
Since the universe has been expanding for a very long time and, I presume, the local effects of gravitation have increased the universal ‘clumpiness’ of matter (galaxies merge, etc.), it seems that it should be primarily the size of voids that is generally increasing.
Since any effect of gravitation that opposes expansion should be diminishing within the voids, might not the expansion of spacetime within the voids be accelerating – simply because of the diminishing long range effects of gravitation (as distances between clumps increase)?
Descriptions of the accelerating expansion found in popular journals such as Scientific American, Physics Today, and the above set of FAQ all state that distant supernovas with very large red-shift appear to be farther away than they would be if the expansion rate were constant or if there were a deceleration due to gravitation.
I think this is something that a simple-minded high school physics teacher should be able to understand, but I am having trouble seeing how that observation leads to that conclusion.
If one car or galaxy travels away from me with constant speed and another accelerates away from me (starting at the same time but with any smaller speed) then at some time the two objects will have equal speeds away from me and therefore equal red-shifts. But the accelerating object will have a smaller average speed than the first one. Doesn’t that mean that it must be closer to me at the moment when the speeds are equal?
Art Hovey – I’ve never studied physics, but during the past couple of years I did study the original research reports for which the Nobel prize was awarded. As I understand:
– The presumed-consistent peak emission luminosities of type Ia supernovae were used to estimate their distances, based on their observed apparent luminosities.
– These ‘new’ SNe distance estimations were compared with traditional distance estimates derived from standard cosmological models based on the redshift of their host galaxies’ light emissions.
– The more distant ‘high-z’ group of SNe observed (3-5 billion light years away) were determined to be more distant than the standard cosmological models had predicted. The nearer group of SNe observed did not exhibit conflicting distance estimates – the distances predicted by standard cosmological models were in general agreement with the SNe based distance estimates.
– To adjust the cosmological models’ estimates to match the SNe distance estimates for the ‘high-z’ SNe group, the researchers employed the models’ cosmological constant parameter and changed the normally positive deceleration parameter to a negative value, indicating ‘negative deceleration’, or acceleration.
To this innocent bystander, it would seem that it is the more ancient light emissions of distant galaxies that indicate the acceleration of expansion – not the more recently emitted light from nearer galaxies. I have been assured that this is just an artifact of the complex procedures employed by physicists in their analysis.
As I understand, the necessary observations of initial type Ia SNe peak emissions have not been made for more distant galaxies, so the relationship between distance and acceleration of expansion for the ‘edge’ or periphery of the observed universe has not been established – the ‘acceleration’ of expansion has only been determined from SNe that are 3-5 billion light years away, in the ‘middle’ of the observed universe.
I’m sure I just don’t properly understand the findings within the full context of current cosmological understanding…
As I understand, there are still several uncertainties in the analysis of type Ia supernovae luminosity.
One such uncertainty is the effect of metallicity on type Ia SNe peak emission luminosity, used to precisely derive distance in the studies that concluded the universe is accelerating.
Metallicity is the measure of heavier elements contained within the universe: it has generally increased as the universe developed, with each new generation of massive main-sequence stars.
It is currently presumed to have no effect, but the metallicity of more distant type Ia SNe would be different from that of nearer type Ia SNe, and its effect on their peak emission luminosity has not yet been determined. Please see: http://blogs.nature.com/news/2011/08/bright_supernova_one_of_the_ne.html#more
@66: It’s not that simple. The redshift (actually 1+z, where z is the redshift) gives us the ratio of the size of the universe now to its size when the light was emitted. To convert this into a distance (and there are many distances in cosmology, depending on how they are defined, though they all agree when the redshift is negligible), one needs to know how the universe has expanded since the light was emitted and what its geometry is. Both of these are determined by the cosmological parameters. So what one does is measure the luminosity distance (from the apparent magnitude and the presumed known or calculable absolute magnitude) and then compare it to predictions for various combinations of the cosmological parameters. In particular, the distance as a function of redshift is different for different combinations of parameters, so one determines the parameters by finding the best-fit curve to match the observations.
Since the Hubble constant, i.e. present rate of expansion, is the same in both cases, you need to think of a universe which is accelerating now as having expanded slower in the past than a comparison universe. Let’s compare the two for observing an object at a redshift of 1. That means that, in both cases, the light was emitted when the universe was half the size it is now. But because the universe which is accelerating now was expanding more slowly in the past, the object has to be farther away, since otherwise the light would have reached us too soon.
You might be getting confused by the (wrong) assumption that equal speeds implies equal redshifts. Equating redshift with speed like this can only be done at very small redshifts.
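A numerical version of that comparison (a sketch with round parameter values, assuming flat universes; not a real fitting code):

```python
import numpy as np

def luminosity_distance(z, Om, OL, H0=70.0, c=2.998e5, n=10000):
    """D_L = (1+z) * c * integral_0^z dz'/H(z') for a flat universe, in Mpc."""
    zs = np.linspace(0.0, z, n)
    Hz = H0 * np.sqrt(Om * (1.0 + zs) ** 3 + OL)
    dz = zs[1] - zs[0]
    integral = np.sum((1.0 / Hz[:-1] + 1.0 / Hz[1:]) / 2.0) * dz  # trapezoid rule
    return (1.0 + z) * c * integral

print("accelerating (Om=0.27, OL=0.73):",
      round(luminosity_distance(1.0, 0.27, 0.73)), "Mpc")
print("matter only  (Om=1.00, OL=0.00):",
      round(luminosity_distance(1.0, 1.00, 0.00)), "Mpc")
# The accelerating universe gives the larger distance to z = 1, so supernovae
# there look dimmer, which is what the two teams observed.
```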
Sorry guys, this is from a layman’s point of view:
It’s a matter of perspective.
On the large scale obviously the universe is larger than we can ever observe.
And if it were rotating, which it probably is, as everything does, then all the matter in the universe will eventually be pulled outward at an accelerating rate forming an immensely thinning outer shell by centrifugal force. Which is what I think we are observing.
On the small scale, as matter decomposes and as the universe expands, the distance between atoms increases and density goes down, eventually forming a predictably loosely defined, apparently steady state of steadily decreasing gas-like pressure independent of its composition (no pun intended). Hence the uniformity and persistence observation.
And as these intervening levels of “vacuums”, being virtually nothing, continue to become even less nothing, “they” are even less likely to interact with anything and become more and more undetectable. Which is what we are also observing.
The shoe is beginning to fit.
We need to do three things:
1. Measure the density of empty space around us as far out as we can and see if it does decrease at an increasing rate, implying an accelerating dispersion breaking the bonds of gravity between atoms.
2. Try to vector back all the paths these outer galaxies have traveled to try to locate a central point everything may have emanated from, and see if there is any uniform evidence of rotation about that point.
3. Then predict future positions of the outer galaxies and vacuum densities as a possible way of explaining both the reason the universe is expanding and the mystery of what dark matter is and what appears to be dark energy.
pete veslocki
70. Pete Veslocki – It seems to me you’re envisioning a universe still composed of plasma. I don’t think there’s any evidence for any spacetime expansion within galaxies, for example. In fact, galaxies seem to have been generally merging for most of the universe’s existence, and are expected to continue to do so – the Milky Way is generally expected to merge with the Andromeda galaxy in the next 3-4 billion years.
Since the general effect of gravitation is to localize matter and the general effect of universal expansion (as observed) is to expand intergalactic spacetime, universal expansion may not produce any decomposition of matter.
Extrapolating existing trends, one could project the future state of the universe to be ever more isolated galaxies of increasing size, or ‘island universes’, since an observer in each may not be able to detect the others’ existence.
However, this is also just the speculations of a lay person.
I tried to stay with this piece to the end, but my almost uncontrolled laughter at a point interfered. I got as far as “the coincidence problem”…. after numerous other “Jabberwocky” questions and probable, possible, maybe, but we’re not sure answers, as if a group of physicists and cosmologists got together and made all this up. No disrespect intended, but the maze of questions and “around unending corners” answers eventually seemed hilarious.
69. Phillip Helbig – I think there were some transcription errors as you recorded your thoughts in responding to Art Hovey. I think your explanation would be very interesting and hope you can restate or clarify. You stated:
“To convert this into a distance (and there are many in cosmology, depending on how they are defined (and which are all the same when the redshift is negligible), one needs to know how the universe has expanded since the light was emitted and what it’s geometry is. Both of these are determined by the cosmological parameters.”
BTW, the statement “one needs to know how the universe has expanded since the light was emitted and what it’s geometry is” would be more correctly stated as something like: ‘models must presume…’ since it is not definitively known how the universe has expanded and what it’s geometry is – thus the requirement to evaluate model results, correct?
Likewise the statement “Both of these are determined by the cosmological parameters” would be more correctly stated as ‘Both of these are specified by the cosmological parameters’, correct?
Thanks in advance.
What is meant by comoving objects in space?