Certain subsectors of the scientifically-oriented blogosphere are abuzz — abuzz, I say! — about this new presentation on Dark Energy at the Hubblesite. It’s slickly done, and worth checking out, although be warned that a deep voice redolent with mystery will commence speaking as soon as you open the page.
But Ryan Michney at Topography of Ignorance puts his finger on the important thing here, the opening teaser text:
Scientists have found an unexplained force that is changing our universe,
forcing galaxies farther and farther apart,
stretching the very fabric of space faster and faster.
If unchecked, this mystery force could be the death of the universe,
tearing even its atoms apart.
We call this force dark energy.
Scary! Also, wrong. Not the part about “tearing even its atoms apart,” an allusion to the Big Rip. That’s annoying, because a Big Rip is an extremely unlikely future for a universe even if it is dominated by dark energy, yet people can’t stop putting the idea front and center because it’s provocative. Annoying, but not wrong.
The wrong part is referring to dark energy as a “force,” which it’s not. At least since Isaac Newton, we’ve had a pretty clear idea about the distinction between “stuff” and the forces that act on that stuff. The usual story in physics is that our ideas become increasingly general and sophisticated, and distinctions that were once clear-cut might end up being altered or completely irrelevant. However, the stuff/force distinction has continued to be useful, even as relativity has broadened our definition of “stuff” to include all forms of matter and energy. Indeed, quantum field theory implies that the ingredients of a four-dimensional universe are divided neatly into two types: fermions, which cannot pile on top of each other due to the exclusion principle, and bosons, which can. That’s extremely close to the stuff/force distinction, and indeed we tend to associate the known bosonic fields — gravity, electromagnetism, gluons, and weak vector bosons — with the “forces of nature.” Personally I like to count the Higgs boson as a fifth force rather than a new matter particle, but that’s just because I’m especially fastidious. The well-defined fermion/boson distinction is not precisely equivalent to the more casual stuff/force distinction, because relativity teaches us that the bosonic “force fields” are also sources for the forces themselves. But we think we know the difference between a force and the stuff that is acting as its source.
Anyway, that last paragraph got a bit out of control, but the point remains: you have stuff, and you have forces. And dark energy is definitely “stuff.” It’s not a new force. (There might be a force associated with it, if the dark energy is a light scalar field, but that force is so weak that it’s not been detected, and certainly isn’t responsible for the acceleration of the universe.) In fact, the relevant force is a pretty old one — gravity! Cosmologists consider all kinds of crazy ideas in their efforts to account for dark energy, but in all the sensible theories I’ve heard of, it’s gravity that is the operative force. The dark energy is causing a gravitational field, and an interesting kind of field that causes distant objects to appear to accelerate away from us rather than toward us, but it’s definitely gravity that is doing the forcing here.
Is this a distinction worth making, or just something to kvetch about while we pat ourselves on the back for being smart scientists, misunderstood once again by those hacks in the PR department? I think it is worth making. One of the big obstacles to successfully explaining modern physics to a broad audience is that the English language wasn’t made with physics in mind. How could it have been, when many of the physical concepts weren’t yet invented? Sometimes we invent brand new words to describe new ideas in science, but often we re-purpose existing words to describe concepts for which they originally weren’t intended. It’s understandably confusing, and it’s the least we can do to be careful about how we use the words. One person says “there are four forces of nature…” and another says “we’ve discovered a new force, dark energy…”, and you could hardly blame someone who is paying attention for turning around and asking “Does that mean we have five forces now?” And you’d have to explain “No, we didn’t mean that…” Why not just get it right the first time?
Sometimes the re-purposed meanings are so deeply embedded that we forget they could mean anything different. Anyone who has spoken about “energy” or “dimensions” to a non-specialist audience has come across this language barrier. Just recently it was finally beaten into me how bad “dark” is for describing “dark matter” and “dark energy.” What we mean by “dark” in these cases is “completely transparent to light.” To your average non-physicist, it turns out, “dark” might mean “completely absorbs light.” Which is the opposite! Who knew? That’s why I prefer calling it “smooth tension,” which sounds more Barry White than Public Enemy.
What I would really like to get rid of is any discussion of “negative pressure.” The important thing about dark energy is that it’s persistent — the density (energy per cubic centimeter) remains roughly constant, even as the universe expands. Therefore, according to general relativity, it imparts a perpetual impulse to the expansion of the universe, not one that gradually dilutes away. A constant density leads to a constant expansion rate, which means that the time it takes the universe to double in size is a constant. But if the universe doubles in size every ten billion years or so, what we see is distant galaxies accelerating away — first they are X parsecs away, then they are 2X parsecs away, then 4X parsecs away, then 8X, etc. The distance grows faster and faster, which we observe as acceleration.
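The doubling argument above can be made concrete with a few lines of arithmetic. This is just a sketch of the exponential growth a(t) = a0 e^{Ht}, using the post’s illustrative ten-billion-year doubling time; none of the numbers are measurements.

```python
# Sketch: a constant expansion rate H gives exponential growth
# a(t) = a0 * exp(H t), so a fixed comoving separation doubles
# every t_double = ln(2)/H. Numbers are illustrative only.
import math

t_double = 10.0             # doubling time in Gyr (from the post's "every ten billion years or so")
H = math.log(2) / t_double  # constant expansion rate, 1/Gyr

def separation(d0, t):
    """Proper separation of two comoving galaxies after time t (Gyr)."""
    return d0 * math.exp(H * t)

d0 = 1.0  # initial separation, in units of "X parsecs"
for t in (0, 10, 20, 30):
    print(f"t = {t:2d} Gyr: separation = {separation(d0, t):.0f} X")
```

Running it reproduces the X, 2X, 4X, 8X progression in the text: the distance grows faster and faster, which we observe as acceleration.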
That all makes a sort of sense, and never once did we mention “negative pressure.” But it’s nevertheless true that, in general relativity, there is a relationship between the pressure of a substance and the rate at which its density dilutes away as the universe expands: the more (positive) pressure, the faster it dilutes away. To indulge in a bit of equationry, imagine that the energy density dilutes away as a function of the scale factor as R^{-n}. So for matter, whose density just goes down as the volume goes up, n=3. For a cosmological constant, which doesn’t dilute away at all, n=0. Now let’s call the ratio of the pressure to the density w, so that matter (which has no pressure) has w=0 and the cosmological constant (with pressure equal and opposite to its density) has w=-1. In fact, there is a perfectly lockstep relation between the two quantities:
n = 3(w + 1).
Measuring, or putting limits on, one quantity is precisely equivalent to measuring the other; it’s just a matter of your own preferences how you might want to cast your results.
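As a quick sanity check of the n = 3(w + 1) dictionary, here is a minimal sketch; the radiation case (w = 1/3, hence n = 4) is an extra textbook example not mentioned in the post.

```python
# The n = 3(w+1) dictionary between the equation-of-state parameter w
# and the dilution exponent n, where energy density scales as R^{-n}.
def n_from_w(w):
    """Dilution exponent n for a component with pressure/density ratio w."""
    return 3.0 * (w + 1.0)

def density(rho0, scale, w):
    """Energy density after the scale factor grows by `scale`: rho ∝ R^{-n}."""
    return rho0 * scale ** (-n_from_w(w))

cases = {"matter": 0.0, "radiation": 1.0 / 3.0, "cosmological constant": -1.0}
for name, w in cases.items():
    print(f"{name:22s} w = {w:+.2f}  ->  n = {n_from_w(w):.0f}")

# Doubling the scale factor: matter dilutes by 1/8, Lambda not at all.
print(density(1.0, 2.0, 0.0))    # matter
print(density(1.0, 2.0, -1.0))   # cosmological constant
```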
To me, the parameter n describing how the density evolves is easy to understand and has a straightforward relationship to how the universe expands, which is what we are actually measuring. The parameter w describing the relationship of pressure to energy density is a bit abstract. Certainly, if you haven’t studied general relativity, it’s not at all clear why the pressure should have anything to do with how the universe expands. (Although it does, of course; we’re not debating right and wrong, just how to most clearly translate the physics into English.) But talking about negative pressure is a quick and dirty way to convey the illusion of understanding. The usual legerdemain goes like this: “Gravity feels both energy density and pressure. So negative pressure is kind of like anti-gravity, pushing things apart rather than pulling them together.” Which is completely true, as far as it goes. But if you think about it just a little bit, you start asking what the effect of a “negative pressure” should really be. Doesn’t ordinary positive pressure, after all, tend to push things apart? So shouldn’t negative pressure pull them together? Then you have to apologize and explain that the actual force of this negative pressure can’t be felt at all, since it’s equal in magnitude in every direction, and it’s only the indirect gravitational effect of the negative pressure that is being measured. All true, but not nearly as enlightening as leaving the concept behind altogether.
But I fear we are stuck with it. Cosmologists talk about negative pressure and w all the time, even though it’s confusing and ultimately not what we are measuring anyway. Once I put into motion my nefarious scheme to overthrow the scientific establishment and have myself crowned Emperor of Cosmology, rest assured that instituting a sensible system of nomenclature will be one of my very first acts as sovereign.
The region of space bounded by the horizon will have local regions of vacua which are not unitarily equivalent. If the horizon length is very small then these local regions of different vacua are proportionately smaller. This is what gives rise to Gibbons-Hawking radiation, analogous to the Hawking radiation from black holes. The matter and radiation we observe in the universe may have been generated by this process, along with some sort of symmetry breaking process and a CP violation which gave rise to a dominance of matter over anti-matter. Of course this happened when the horizon distance was very small, which then inflated out to its current distance.
Lawrence B. Crowell
Lawrence, folks: Isn’t it true that the easy way to track the progress of light across the cosmos is to pretend that the scale of objects and speeds is shrinking, while the separations between galaxies (those stationary relative to the background radiation, the neo-standard of rest) stay the same? So, I imagine that c in my adjusted coordinate system is a function of time (the universal local time since the BB for any observer at that CBR standard), the inverse of the cosmic scale factor. Then I can track how it progresses. In a flat space this also allows us to find apparent sizes, since we imagine photons to continue in straight lines by symmetry. (So, think of photons coming from opposite sides of a galaxy at time t1; then, as the galaxy “shrinks,” they continue as before – the distant galaxy will appear as big as if it was still only as far away as when the light was emitted, as you might intuitively expect.)
Presumably one can adapt this to tardyons by making adjustments for relativistic transformation (so that a particle emitted by me at 0.6c will have proper velocity addition as it passes a galaxy receding at 0.001c etc., hence the galaxy finds it locally whizzing by at 0.6c – 0.00064c). IOW, the infinitesimal idealized velocity correction is gamma^(-2) times the simple Hubble factor.
Note that I meant “pretend” that galaxies are shrinking rather than imagine that they “actually are.” Hoyle and some others thought the two equivalent, but: suppose we imagine that “things” actually shrink with time, but that dynamical rules stay the same relative to the old standard. Then, a particle emitted and coasting through space would seem to speed up, since it would be passing shrinking length standards while keeping much of its original velocity (or at least, even if the net result was not a velocity increase, inhabitants could tell the difference).
The spacetime is not flat, even if the space is flat. The Ricci curvature terms are of the form R_{00} = a” – 3(a’)^2, where primes denote time derivatives and a is the FRW radius, which means that even if the space is flat, its extrinsic curvature with respect to how it is embedded in spacetime is not. So spacetime is not exactly flat. Galaxies are moving out not because they have a velocity as we normally think of it, but because the space they exist in is expanding outwards.
The Hubble factor H enters into the cosmological constant as
Λ = 3H^2(Omega)/c^2
where we can set Omega =~ 1. For a radius r = sqrt(3/Λ) the metric factor A = 1 – Λr^2/3 is zero in the line element
ds^2 = -Adt^2 + (1/A)dr^2 + r^2d(angular parts).
For this line element one can compute curvatures and the rest and find that the spacetime is not flat.
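To put numbers to the comment’s formulas: a short numerical sketch, assuming H0 = 70 km/s/Mpc (my illustrative value, not from the comment) and Omega = 1. Note that r = sqrt(3/Λ) then reduces to exactly c/H0.

```python
# Numerical check of the comment's formulas: Lambda = 3 H^2 Omega / c^2
# with Omega ~ 1, and the horizon radius r = sqrt(3/Lambda).
# H0 = 70 km/s/Mpc is an assumed illustrative value, not from the text.
import math

c = 2.998e8              # speed of light, m/s
Mpc = 3.086e22           # meters per megaparsec
H0 = 70e3 / Mpc          # Hubble rate, 1/s
Lam = 3.0 * H0**2 / c**2 # cosmological constant, 1/m^2 (Omega = 1)
r = math.sqrt(3.0 / Lam) # horizon radius where A = 1 - Lam r^2/3 vanishes

print(f"Lambda  ~ {Lam:.2e} 1/m^2")
print(f"horizon ~ {r / 9.461e24:.1f} billion light-years")  # equals c/H0
```

With these inputs, Λ comes out around 10^-52 per square meter and the horizon radius around 14 billion light-years, the familiar c/H0 scale.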
As it turns out it is not possible to find a quantum vacuum for a quantum field that is unitarily equivalent throughout the spacetime. As Wald points out for Unruh radiation, the departure between the Killing time and standard time is a measure of this inequivalence. This spacetime shares some features similar to the Rindler spacetime for an accelerated observer, and different observers will have different measures of time. The departures between these are involved with the lack of a single equivalent quantum vacuum for the cosmology. This is why the cosmological event horizon has a thermal temperature, which is very, very tiny. This is not to be confused with the CMB temperature, even though that may be ultimately due to the same physics during the inflationary phase.
This may sound rather odd, but it is not possible to define energy conservation in a cosmology, and there is really no universally defined concept of time either. One person’s time “here” is different from time “there.” The invariance of time translation is then not something which can be established globally. The generator of time translation by Noether’s theorem is the Hamiltonian (energy), and so this means that energy conservation can’t be established globally.
Lawrence B. Crowell
Actually, it is possible to define energy conservation, using the Hamiltonian formalism. Granted, I don’t know a whole lot about it, but supposedly it’s a valid construct there.
Lawrence,
You are right that the speed of light is a local effect, and thus it seems logical that it might slow near a black hole; but the expansion of the universe isn’t a local effect, presumably it is a universal effect, so wouldn’t the speed of light everywhere be increasing as the universe and the total space, local and otherwise, expands?
The speed of light is only a well defined quantity locally. When talking about the speed of light far away from the observer, there is no objective method of stating what that speed is. So, in essence, it all depends upon what arbitrary method of estimating the speed you choose, just as long as you’re aware that an observer at the location where you’re actually measuring the speed would always measure the same speed of light that we measure.
To paraphrase the statement about politics, “All physics is local.” The speed of light is something defined on a local frame. It applies on all local frames, but if you tried to measure the speed of light “there,” say by watching a pulse of light cross a nebula and illuminate the dust and gas as it goes, you might observe something different.
There is a lot of mathematical structure to this involving local charts on a space (what is called a manifold) and how on the overlap of these charts there exist transformation functions and connection coefficients. This extends to gauge theories, such as electromagnetism, as well. If in one chart a wave function has one phase and in the other it has a slightly different phase, then the change in this phase in a transformation between the charts determines fields. The transformation functions determine something called the vector potential A, which in an elementary setting can determine the magnetic field by B = curl A or the electric field E = -∂A/∂t. General relativity is similar to this, but the charts are not some internal phase but changes in the space or spacetime itself, which leads to curvatures — and curvatures define what we call gravity.
So everything is built up from local regions or charts, and in relativity theory this is a flat spacetime frame. If the region is small enough it is flat, or approximately so. This is where you define what time is by your clock, and where you measure the speed of light and observe the motion of bodies. Newton’s first law presaged this, for “A body remains in a constant state of motion unless acted on by a force” means one must observe physics from these inertial frames.
Then we come to the issue of cosmology, where we are trying to understand a global system, but one which is not globally flat or with a constant phase, and frankly does not even have a globally defined quantum vacuum. So we are saying “All physics is local,” and we are trying to understand a nontrivial global system — think globally, act locally.
There are a few things we might consider. The entire cosmology is probably given by a grand path integral or partition function. The initial and final points of the path integral should be related by some grand-global transformation function, similar to a development operator or e^{iS} that maps the initial configuration to the final one. We might of course suppose that this development is some grand product of such objects which define the foliation of spaces in what we locally call “time.” The initial configuration is a fine grained set of many vacua with some sort of superselection rule. The final state is hard to know, for we will never observe it. Yet we can make some reasoned guesses. The accelerated universe, known from SN Ia data and WMAP, means the universe will asymptotically approach a pure de Sitter cosmology. Further, Gibbons-Hawking radiation will cause the cosmological horizon to recede away to infinity. I am not considering phantom energy or big rip for various reasons. This means the target or attractor point in superspace for this is a Minkowski spacetime — a perfect void with zero mass-energy content inside.
The reason for thinking about the ultimate fate of the universe is that if the final point is a Minkowski space (at conformal infinity, with AdS/CFT etc), then this gives us an anchor we understand. A Minkowski space is one with a globally defined time, the speed of light is uniform everywhere, and so forth. This is the configuration of maximal entropy as well, for all information about the universe prior is completely lost, which is S_{max} in a Shannon-Khinchin sense. Then there is the initial state defined by a fine grained set of distinct and inequivalent vacua. All we need to do is find a grand transformation principle that links the two.
As a bit of a final side note, Christianity is based on the idea that light triumphs over dark. Hmmm…., given what we might suppose about the future of the universe, stars will wink out, things are absorbed into black holes, black holes quantum decay in 10^{100} years and so forth, and if “at infinity” the whole thing ends up in a perfect Minkowskian void, then I’d say darkness wins.
Lawrence B. Crowell
Jason, Lawrence,
I realize there are limits on what we can measure, but according to theory, if two galaxies are 100 million lightyears apart now, when the universe has expanded to twice its current size, are they still 100 million lightyears apart, since the lightyear is our most fundamental measure of interstellar and intergalactic space, or are they 200 million lightyears apart?
As I see it, both answers cause problems for Big Bang/Inflation theory. If they still appear 100 million lightyears apart, then how can we say the space is expanding, since our most basic ruler is expanding along with it? Would it even redshift if the speed of light increases along with the expansion?
On the other hand, if they are 200 million lightyears apart, that’s not expanding space, that’s increasing distance in stable space and this raises a series of issues. For one thing we would appear to be at the exact center of the universe. That would be on the list of internal problems. On the list of external problems, pre-existing space would be subject to quantum energy and the inflation stage in this sea of energy would burn everything to a crisp, to say the least.
As for inflation: why did it slow down to the observed rate of expansion? It would seem the inertia of this stage would need something more than observed gravity to brake it down to the current rate.
Lawrence,
People can be like bugs in the night. We head for the light. Sometimes it’s best to go with the herd. Sometimes it’s best to go the other way. Fortunately we have somewhat more brains than bugs do.
The galaxies at 100 million light years at time = T will be at more than 200 million light years at time = 2T, where here the time is measured by the Hubble relation and distance. That distance is then given by Cepheid variables or SN1 or … .
The rapid rate of expansion was stopped because we might think of the space itself as expanding. Now there are some problems with this, for this viewpoint runs into subtle problems with covariance.
As for “the dark,” well, we humans are afraid of the dark. It is a strange factor in our species. And if you have ever been in the wilderness at night it can be really dark and disorienting. Right now, with Hanukkah and Christmas and what was called Sol Invictus, the idea at the solstice was to build fires to try to bring back the light. Lighting menorahs or Christmas trees and so forth goes back to some ancient ideas and some psychological aspects of us two-legged biological misfits.
Lawrence B. Crowell
Lawrence,
That’s what I thought, and it does raise the question of what dimension of space lightspeed is measuring. The conventional Doppler effect isn’t about expanding space, it’s about increasing distance in stable space. The train is heading down the tracks; the tracks are not being stretched. The same would seem to apply here, in that lightspeed represents the tracks. So how is it that we say that “space” is expanding, when our most basic measure of space isn’t?
In a lot of cases, it’s the subtle details that are trying to tell us that some initial navigational errors have us off course and reviewing what has been done, before continuing to dig ourselves in further, is wise.
There are some definite motivational impulses at work here. I think that “seasonal affective disorder” is a natural psychological gearshift between fat and happy summer and downshifting for the climb through winter.
Measurements of distant galaxies show a Doppler shift, but in general it is a gravitational Doppler shift. For a black hole, a light source near the horizon will have a gravitational Doppler shift. With cosmology a similar redshifting occurs due to the curvature of spacetime.
When it comes to the halting of inflation, think of a balloon expanding as gas is forced into it. Now assume that this balloon is being inflated by a helium pressure tank. It inflates rapidly and then the valve is cut off so it stops or inflates at a much slower rate. The rapid exponential inflation of the universe pushed the nascent cosmology from a little sphere smaller than a nucleus to something maybe several meters in radius, and then as the Goldstone analogue of the Higgs field (here the inflaton) was absorbed the inflationary pressure stopped. The space or manifold then expanded onward from there. In part this is one reason that the observable universe may constitute one part in 10^{50} of the whole thing.
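The balloon analogy can be given a rough number via the standard measure of inflationary expansion, the e-fold count N = ln(a_final/a_initial). Taking the comment’s “smaller than a nucleus” as roughly 10^-15 m (my assumption) and “several meters” as 3 m:

```python
# E-fold count for the comment's "smaller than a nucleus to several
# meters" picture of inflation. The nuclear-scale starting size is an
# assumed round number, not from the comment.
import math

r_initial = 1e-15  # meters, roughly nuclear scale (assumption)
r_final = 3.0      # meters, "several meters" per the comment

N = math.log(r_final / r_initial)  # number of e-folds of expansion
print(f"N ~ {N:.0f} e-folds")      # roughly 36
```

About 36 e-folds, in the same ballpark as the 50-60 e-folds usually quoted as the minimum needed to solve the horizon and flatness problems.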
Lawrence B. Crowell
Lawrence, thanks a lot for your erudite discussions of cosmology issues. I see stuff about you on the Internet, but no clear main web page – let us know if you have one. I think that in the Robertson-Walker metric one can indeed track the progress of photons in the manner I stated (and sure, that is based on the idea of space expanding and not of ordinary velocity differences – so maybe I screwed up the issue regarding sub-c particle progression).
I wonder what you think of the thought experiment I posed in the thread “Thanksgiving” about hard containers impeding space contraction, such as #3,5,6 and a final thought in #35:
There are contradiction problems if you try to imagine what happens to all the bodies in expanding/contracting universes if some things are impeded by material barriers/obstructions and other things just move like dust in free fall.
I meant, things like space being filled with hard balls with little test masses in the middle: when the balls crunch together, symmetry implies that the central masses should stay in each center, yet from a local Newtonian approximation, the test masses farther from a “favored” central mass should continue moving toward it, etc, (IOW, it can’t be consistent.) And, even if we accept some consistent way in GR to keep things symmetrical if there is a uniform distribution of such extended objects, what about local variations, such as a large region of balls closer together than the rest? The behavior has to be consistent with Newtonian expectations at some extent of distribution, which brings up interesting questions of how the two realms transition.
Lawrence,
I have no trouble with redshift due to the “curvature of spacetime.” I disagree that it is due to objective recessional velocity.
I certainly never set out with the intention of questioning modern physics, since my original motivation was learning the givens in this transitory life. The point where I started to question how it is postulated was learning that for the universe to be as stable as it is, Omega had to be very close to, if not equal to 1. If the expansion of space is being balanced by the contraction of gravity, then it seemed logical to assume the universe as a whole is not expanding, as gravity would neutralize the expansion effect. So it seemed the most logical reason these two effects would be in equilibrium was that there was a convective system of sorts and they were opposite sides of a larger cycle.
Yes, the light of distant sources appears redshifted, but if this is due to the curvature of spacetime, then it is an effect similar to trying to walk up the down escalator. The space may be expanding, but it is also contracting into gravitational wells at the same time. The balloon has holes in it. The pressure of this expansion isn’t causing some larger expansion because we are not measuring the pressure lost to gravity.
To put this in the analogy of gravity as a ball on a rubber sheet: where there is no ball, the sheet is not flat but rises in gentle hills that, if bulldozed into those gravity wells, would yield flat space. Thus Omega=1.
Light is redshifted when it travels over these hills of spacetime curvature and the longer it travels on the hills and doesn’t fall into the wells, the more it is redshifted. Since this effect compounds, the further light travels, the faster the source appears to recede. Eventually it is redshifted enough that the source appears to recede at the speed of light and any source past this point is over the horizon line of visibility.
I put “curvature of spacetime” in quotes because it is the vortex of gravity which causes curvature, as well as contraction; but since the expansion effect is evenly distributed across space, it doesn’t “curve” around anything, it just expands. That’s why it is redshifted. To me, this is the cosmological constant, and the whole Big Bang/expanding-from-a-point scenario, with all the patchwork required to hold it together, is based on an incomplete picture.
The result is also a much less complex picture, as Inflation and Dark Energy are two factors that would be unnecessary.
If we were to further extend the analogy of a convective cycle, galaxies and the black holes at their center would be gravitational storms around the eye, with whatever falls in being ejected as charged particles out the poles. While the CMBR, up to 2.7K, would be the stable level of radiation in the atmosphere of space, below its dew point. Above this and it starts to condense back out as the most elemental forms of gravitational contraction and eventually what might be considered mass.
I think there is a bit of confusion. John Wheeler called gravitation geometrodynamics, in that it is the dynamics of space, which defines a foliation in spacetime. So in the case of a cosmology we might think of points in space as dynamically receding away from each other. Any particle in the space gets dragged along for the ride, and in fact that is exactly what is happening, for it is a form of frame dragging. It is in this way that an expanding universe is measured by a gravitational redshift.
The term geometrodynamics is confusing, for really general relativity determines curvatures and intervals involved with relationships between particles. General relativity does not really do this with points, for if it were to do this the theory would not be generally covariant, by specifying a point-by-point coordinate-dependent map. Yet we can’t help but use the idea of there being points moving away from each other.
The bit above about little gravitational wells, those are what galaxies are bound within. This is what keeps galaxies from flying apart due to the expansion of the universe.
Lawrence B. Crowell
Neil B. on Dec 12th, 2007 at 8:58 am
Lawrence, thanks a lot for your erudite discussions of cosmology issues. I see stuff about you on the Internet, but no clear main web page – let us know if you have one.
————–
No I don’t have one, though I am intending to do so. I am just a bit lazy at this time.
As for things impeding the expansion of space, in principle that can happen and there have been ideas about huge vacuum domain walls in the universe. I am not a partisan of these ideas, but very dense or massive objects can stop the expansion of space. In fact enough mass added to the universe, say by God, would halt the expansion of the universe and cause it to recollapse.
Lawrence B. Crowell
Lawrence,
This goes back to my point about why doesn’t the speed of light increase, as space is expanding? Evidence for expansion is the redshifting of lightwaves, yet the speed of light is stable, so it would seem space suffers from schizophrenia.
Possibly the problem is defining space in terms of its contents. We say space is curved because light passing a gravitational body is bent around it. On the other hand, energy and mass may be drawn into a gravity well, but the well doesn’t shrink because there is always new energy and mass being captured. Maybe the same is true for expanding space, in that while it may take a particular photon longer to cross intergalactic space, due to expansion/negative curvature/cosmological constant/constant waves of radiation from all sources and directions/whatever, the actual distance doesn’t increase, because this space, as defined by its contents, is also falling into these gravity wells. Sort of like it takes more steps to climb up the down elevator, but the floors are not actually moving apart.
Geometry assigns zero to the neutral dimension of geometric forms, ie. points, lines and planes, but that is actually a virtual dimension. Any number multiplied by zero is zero, so the real geometric zero would be empty space, not any particular point. While geometry defines space, it doesn’t create it. The absolute isn’t a point, it’s the potential for any point. Empty space.
There is a crucial difference between the speed of an object through space-time and the expansion of space-time itself.
The speed of light is determined locally.
If you compare the time taken for a radio signal to come from a spacecraft with that predicted from its orbital ephemeris, and then repeat the comparison when the signal has to pass close by the Sun, you find in the latter case an extra delay has crept in.
This delay might be thought to indicate that the speed of light has reduced, however in GR it is explained by an increase in radio-path length caused by the curvature of space-time around the Sun.
This question may have been lost upthread, so again: is it so that we can track the progress of light in the cosmos by the device of pretending that objects shrink, stay in the same place, and the speed of light decreases according to the scale factor? This leads to a specific rate of travel relative to cosmic time (the time ticked since the BB by the “stationary” galaxies, which presumably all see average isotropic CBR etc.). I wonder: would the difference between “space expanding” and “actual recession” be shown by whether relativistic velocity addition applied? I mean, if I shot a bullet away at 0.995c etc., and it passed by a galaxy receding by expansion at 0.01c, would the galaxy consider (by its own standards, kinetic energy equivalence, etc.) that bullet now to be going at 0.995 (relativistic minus) 0.001c = 0.99499, or only at 0.985c (or maybe I neglected further corrections, but which standard applies to that part of the result)?
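For what it is worth, here is what the special-relativistic composition law gives for the numbers in the question above. Whether SR composition is the right rule to apply across cosmological expansion is exactly the open issue being asked about, so this sketch settles nothing on that point; it only computes the two candidate answers.

```python
# Special-relativistic velocity subtraction, in units of c, for the
# bullet-past-a-receding-galaxy question. The 0.995 and 0.001 figures
# are the ones in the comment above.
def rel_subtract(u, v):
    """Speed of an object moving at u (units of c) as measured by an
    observer receding at v (units of c), via the SR composition law."""
    return (u - v) / (1.0 - u * v)

u, v = 0.995, 0.001
print(f"SR composition:    {rel_subtract(u, v):.5f} c")  # 0.99499
print(f"naive subtraction: {u - v:.5f} c")               # 0.99400
```

The SR answer, 0.99499c, matches the first figure in the comment; simple subtraction gives 0.994c.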
Oh, pardon me to anyone I or others offended with mere, uninformed “cocktail party physics” chatter like this about philosophically deep and contentious foundations, while they labor here on very specific and clearly stated physics problems like band structure in semiconductors, bunching and antibunching of photons, branching ratios for bottom mesons, etc. 😐
I second Barber’s comment. The speed of light is something measured in a local frame. We might better think of there being a spacetime curvature that on any local frame results in space being stretched out as measured by the local observer’s time.
The thing that is a bit strange is that we have the Einstein field equation
G_{ab} = -k T_{ab} + Λ g_{ab},
and if T_{ab} = 0 then we are left with G_{ab} = Λ g_{ab}. If Λ = 0 then, using the trace-reversed form, it is easy to show that R_{ab} = 0, but for Λ =/= 0 we have an Einstein space where the Ricci curvature is proportional to the metric. We then go off and move the Λ g_{ab} term to the source side and assign it an energy density and pressure and so forth as if it were a material momentum-energy tensor — when it really is not. The problem is that the vacuum is not a “fluid,” or once we start thinking that way we may have jumped into some sort of latter-day aether idea.
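For what it’s worth, the trace-reversal step can be spelled out (a sketch; the signs depend on conventions, and I write Λ for the cosmological constant). With T_{ab} = 0 the field equation reads
R_{ab} - (1/2) R g_{ab} = Λ g_{ab}.
Contracting with g^{ab} in four dimensions gives R - 2R = 4Λ, so R = -4Λ. Substituting back,
R_{ab} = Λ g_{ab} + (1/2)(-4Λ) g_{ab} = -Λ g_{ab},
so the Ricci curvature is indeed proportional to the metric (an Einstein space), and Λ = 0 forces R_{ab} = 0.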
This is a part of Sean’s critique of the idea of a negative pressure. The Einstein field equation has the form nabla(field) = source, common to all field equations. But really there is no source; instead, the space is Einsteinian. So there is not in the strict sense a negative pressure which “causes” the accelerated expansion of the cosmology.
Lawrence B. Crowell
A “dark misleading force” that could be explained given a cosmology based on a causal theory of quantum mechanics?
So one can think that describing enough details of a cause acting in addition to the forces to account for quantum wave, spin and entanglement could finally explain how matter can persist as atoms and molecules despite the forces acting within and upon it.
And then one could ask: might not such a causal quantum theory explain the large scale structure of the universe without the need for the dark matter that hasn’t been directly detected despite 20 years of experiments?
And then suppose a theory of such a cause that would act universally and nonlocally in addition to the forces could explain the close relationship between galaxy rotation described by Milgrom’s law and the acceleration in the expansion of the universe, as mentioned by Lee Smolin in his book The Trouble with Physics, pp 210-2?
If you are watching two objects traveling at relativistic velocities, an observer on one of those objects would observe the second moving at a velocity given by “boosting” to that frame. This results in the relativistic addition formula. If the two objects are moving at velocities u and v and you boost to the object with velocity u, you would see the second object move at
v’ = (v – u)/(1 – uv/c^2).
Now this is for special relativity, but for galaxies with modest values of z this works well enough. So if u = .8c and v = .9c then
v’ = (.9 – .8)c/(1 – .72) ≈ .36c.
So in the frame travelling at u = .8c you’d see the object travelling at v = .9c (relative to the original frame) moving with velocity about .36c. I’ll leave it as an exercise to consider the particular case above.
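The composition law is easy to check numerically, including for the bullet question upthread. A minimal sketch (velocities in units of c, pure special relativity, ignoring the expansion subtleties):

```python
def sr_subtract(v, u):
    """Velocity of an object moving at v (units of c), as seen from a
    frame boosted to velocity u: v' = (v - u) / (1 - u*v)."""
    return (v - u) / (1.0 - u * v)

# u = 0.8c, v = 0.9c: the boosted observer sees about 0.357c
print(sr_subtract(0.9, 0.8))

# The bullet at 0.995c passing a galaxy receding at 0.01c:
# the galaxy sees about 0.9949c, barely below 0.995c
print(sr_subtract(0.995, 0.01))
```

Note that for small u the relativistic answer differs only slightly from the naive subtraction (0.985c in the bullet example), which is why the question of which rule applies is a delicate one.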
The dark mysterious force is really a manifestation of spacetime being an Einstein space. In this case the Ricci curvature is equal to a constant times the metric. This is why there is a subtle issue with assigning the accelerated expansion to a negative pressure. To do so is to assign this to a source of gravitation or curvature, which is not entirely the case. The origin of this, to use that language which is not entirely appropriate, may be due to the quantum substructure of spacetime. The idea of a negative pressure suggests this as well, for there it is associated with the zero point energy of the vacuum. Though there are questions about the reality of this ZPE, our exploitation of it to model the cosmological constant, with its pressures and the rest, illustrates, I think, something very incomplete about our understanding of this problem.
Lawrence B. Crowell
Garth,
If it’s space expanding, it is local.
Lawrence,
Einsteinian space measures the medium, not the geometry. Spacetime may curve, but it’s the light we measure.
There’s a very simple result based on symmetry and Killing’s theorem that yields all cosmological redshifts (whether for photons or material particles) in one stroke. (See e.g. Wald’s General Relativity section 5.3)
The result of this is, for any object in free fall:
p_2 / p_1 = a(t_1) / a(t_2)
where p is the momentum of the object, measured by an observer at rest in the cosmological coordinates, and a(t) is the scale factor for the universe.
It makes no difference whether the object is a photon or a meteor, how far it travels, and what a(t) does between t_1 and t_2 (so long as it’s a smooth function that remains positive); in a homogeneous, isotropic universe this result will always hold exactly.
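Numerically the relation is trivial to apply; a quick sketch (the function name and sample numbers are my own, chosen for illustration):

```python
def comoving_momentum(p1, a1, a2):
    """Momentum at scale factor a2 of a freely falling object that had
    momentum p1 at scale factor a1, using p2/p1 = a(t1)/a(t2)."""
    return p1 * a1 / a2

# A photon emitted when the universe was half its present size:
# its momentum halves, so its wavelength (h/p) doubles, i.e. z = 1.
p2 = comoving_momentum(1.0, 0.5, 1.0)
print(p2)  # 0.5

# A slow meteor's peculiar momentum (p = mv) decays the same way.
```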
Killing’s theorem gives the relationship between the tangent to a geodesic and a direction that generates a symmetry. For example, suppose you are travelling along a great-circle route on the surface of the Earth, at a uniform speed. At any given latitude, theta, consider the component of your velocity that points east-west, and call it e(theta). It’s not hard to verify that:
e(theta) r(theta) = constant
where r(theta) is the radius of the circle of latitude. This in turn means that:
e_2 / e_1 = r(theta_1) / r(theta_2)
What’s more, this result will still hold if we replace the spherical surface of the Earth with any surface of revolution, however weird and complicated. So long as you follow a geodesic at a uniform speed, the component of your velocity in the direction that generates the rotational symmetry will be “red-shifted” exactly in inverse proportion to the radius of the circle of revolution you’re on.
In both cases, it’s the same simple underlying geometrical principle.
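The spherical case is easy to verify numerically: along any great circle traversed at unit speed, e(theta) * r(theta) is just the angular momentum about the z axis, which is conserved. A sketch (the tilt angle 0.6 is arbitrary):

```python
import math

def check_great_circle(alpha, n=8):
    """Sample e(theta) * r(theta) along the great circle
    p(t) = cos(t) A + sin(t) B on the unit sphere, traversed at unit
    speed, where A is the x-axis and B is tilted out of the equatorial
    plane by the angle alpha."""
    A = (1.0, 0.0, 0.0)
    B = (0.0, math.cos(alpha), math.sin(alpha))
    values = []
    for k in range(n):
        t = 2.0 * math.pi * k / n
        p = tuple(math.cos(t) * a + math.sin(t) * b for a, b in zip(A, B))
        v = tuple(-math.sin(t) * a + math.cos(t) * b for a, b in zip(A, B))
        r = math.hypot(p[0], p[1])           # radius of the latitude circle
        east = (-p[1] / r, p[0] / r, 0.0)    # unit vector pointing "east"
        e = sum(vi * ei for vi, ei in zip(v, east))  # east-west speed
        values.append(e * r)
    return values

# All sampled values agree: e*r equals the conserved z angular momentum.
print(check_great_circle(0.6))
```

The same check works for any surface of revolution; only the expressions for p, v, and r change.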