Certain subsectors of the scientifically-oriented blogosphere are abuzz — abuzz, I say! — about this new presentation on Dark Energy at the Hubblesite. It’s slickly done, and worth checking out, although be warned that a deep voice redolent with mystery will commence speaking as soon as you open the page.
But Ryan Michney at Topography of Ignorance puts his finger on the important thing here, the opening teaser text:
Scientists have found an unexplained force that is changing our universe,
forcing galaxies farther and farther apart,
stretching the very fabric of space faster and faster.
If unchecked, this mystery force could be the death of the universe,
tearing even its atoms apart. We call this force dark energy.
Scary! Also, wrong. Not the part about “tearing even its atoms apart,” an allusion to the Big Rip. That’s annoying, because a Big Rip is an extremely unlikely future for a universe even if it is dominated by dark energy, yet people can’t stop putting the idea front and center because it’s provocative. Annoying, but not wrong.
The wrong part is referring to dark energy as a “force,” which it’s not. At least since Isaac Newton, we’ve had a pretty clear idea about the distinction between “stuff” and the forces that act on that stuff. The usual story in physics is that our ideas become increasingly general and sophisticated, and distinctions that were once clear-cut might end up being altered or completely irrelevant. However, the stuff/force distinction has continued to be useful, even as relativity has broadened our definition of “stuff” to include all forms of matter and energy. Indeed, quantum field theory implies that the ingredients of a four-dimensional universe are divided neatly into two types: fermions, which cannot pile on top of each other due to the exclusion principle, and bosons, which can. That’s extremely close to the stuff/force distinction, and indeed we tend to associate the known bosonic fields — gravity, electromagnetism, gluons, and weak vector bosons — with the “forces of nature.” Personally I like to count the Higgs boson as a fifth force rather than a new matter particle, but that’s just because I’m especially fastidious. The well-defined fermion/boson distinction is not precisely equivalent to the more casual stuff/force distinction, because relativity teaches us that the bosonic “force fields” are also sources for the forces themselves. But we think we know the difference between a force and the stuff that is acting as its source.
Anyway, that last paragraph got a bit out of control, but the point remains: you have stuff, and you have forces. And dark energy is definitely “stuff.” It’s not a new force. (There might be a force associated with it, if the dark energy is a light scalar field, but that force is so weak that it’s not been detected, and certainly isn’t responsible for the acceleration of the universe.) In fact, the relevant force is a pretty old one — gravity! Cosmologists consider all kinds of crazy ideas in their efforts to account for dark energy, but in all the sensible theories I’ve heard of, it’s gravity that is the operative force. The dark energy is causing a gravitational field, and an interesting kind of field that causes distant objects to appear to accelerate away from us rather than toward us, but it’s definitely gravity that is doing the forcing here.
Is this a distinction worth making, or just something to kvetch about while we pat ourselves on the back for being smart scientists, misunderstood once again by those hacks in the PR department? I think it is worth making. One of the big obstacles to successfully explaining modern physics to a broad audience is that the English language wasn’t made with physics in mind. How could it have been, when many of the physical concepts weren’t yet invented? Sometimes we invent brand new words to describe new ideas in science, but often we re-purpose existing words to describe concepts for which they originally weren’t intended. It’s understandably confusing, and it’s the least we can do to be careful about how we use the words. One person says “there are four forces of nature…” and another says “we’ve discovered a new force, dark energy…”, and you could hardly blame someone who is paying attention for turning around and asking “Does that mean we have five forces now?” And you’d have to explain “No, we didn’t mean that…” Why not just get it right the first time?
Sometimes the re-purposed meanings are so deeply embedded that we forget they could mean anything different. Anyone who has spoken about “energy” or “dimensions” to a non-specialist audience has come across this language barrier. Just recently it was finally beaten into me how bad “dark” is for describing “dark matter” and “dark energy.” What we mean by “dark” in these cases is “completely transparent to light.” To your average non-physicist, it turns out, “dark” might mean “completely absorbs light.” Which is the opposite! Who knew? That’s why I prefer calling it “smooth tension,” which sounds more Barry White than Public Enemy.
What I would really like to get rid of is any discussion of “negative pressure.” The important thing about dark energy is that it’s persistent — the density (energy per cubic centimeter) remains roughly constant, even as the universe expands. Therefore, according to general relativity, it imparts a perpetual impulse to the expansion of the universe, not one that gradually dilutes away. A constant density leads to a constant expansion rate, which means that the time it takes the universe to double in size is a constant. But if the universe doubles in size every ten billion years or so, what we see is distant galaxies accelerating away — first they are X parsecs away, then they are 2X parsecs away, then 4X parsecs away, then 8X, etc. The distance grows faster and faster, which we observe as acceleration.
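The doubling picture is easy to sketch numerically (my own illustration, not from the post): with a constant expansion rate H, separations grow as d(t) = d0·exp(H·t), so the doubling time T = ln(2)/H is constant and the distance runs through X, 2X, 4X, 8X at equal time intervals.

```python
# Constant expansion rate => exponential growth of separations.
# All numbers here are illustrative, not measured values.
import math

def distance(d0, hubble_rate, t):
    """Separation after time t under a constant expansion rate."""
    return d0 * math.exp(hubble_rate * t)

T_double = 10.0                 # doubling time in Gyr (the post's "ten billion years or so")
H = math.log(2) / T_double      # constant expansion rate, units 1/Gyr

d0 = 1.0                        # "X" parsecs
snapshots = [distance(d0, H, t) for t in (0, 10, 20, 30)]
print(snapshots)                # X, 2X, 4X, 8X at 0, 10, 20, 30 Gyr
```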
That all makes a sort of sense, and never once did we mention “negative pressure.” But it’s nevertheless true that, in general relativity, there is a relationship between the pressure of a substance and the rate at which its density dilutes away as the universe expands: the more (positive) pressure, the faster it dilutes away. To indulge in a bit of equationry, imagine that the energy density dilutes away as a function of the scale factor as R^{-n}. So for matter, whose density just goes down as the volume goes up, n=3. For a cosmological constant, which doesn’t dilute away at all, n=0. Now let’s call the ratio of the pressure to the density w, so that matter (which has no pressure) has w=0 and the cosmological constant (with pressure equal and opposite to its density) has w=-1. In fact, there is a perfectly lockstep relation between the two quantities:
n = 3(w + 1).
Measuring, or putting limits on, one quantity is precisely equivalent to the other; it’s just a matter of your own preferences how you might want to cast your results.
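As a toy illustration of that dictionary (helper names are my own, assumed for the sketch), converting between w and n is a one-liner in each direction:

```python
# The post's relation n = 3(w + 1) between the dilution exponent n
# (density ~ R^{-n}) and the equation-of-state parameter w = p/rho.
def n_from_w(w):
    """Dilution exponent n given equation-of-state parameter w."""
    return 3.0 * (w + 1.0)

def w_from_n(n):
    """Equation-of-state parameter w given dilution exponent n."""
    return n / 3.0 - 1.0

print(n_from_w(0))     # matter (w = 0): n = 3, density ~ 1/volume
print(n_from_w(-1))    # cosmological constant (w = -1): n = 0, constant density
print(n_from_w(1/3))   # radiation (w = 1/3): n = 4
```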
To me, the parameter n describing how the density evolves is easy to understand and has a straightforward relationship to how the universe expands, which is what we are actually measuring. The parameter w describing the relationship of pressure to energy density is a bit abstract. Certainly, if you haven’t studied general relativity, it’s not at all clear why the pressure should have anything to do with how the universe expands. (Although it does, of course; we’re not debating right and wrong, just how to most clearly translate the physics into English.) But talking about negative pressure is a quick and dirty way to convey the illusion of understanding. The usual legerdemain goes like this: “Gravity feels both energy density and pressure. So negative pressure is kind of like anti-gravity, pushing things apart rather than pulling them together.” Which is completely true, as far as it goes. But if you think about it just a little bit, you start asking what the effect of a “negative pressure” should really be. Doesn’t ordinary positive pressure, after all, tend to push things apart? So shouldn’t negative pressure pull them together? Then you have to apologize and explain that the actual force of this negative pressure can’t be felt at all, since it’s equal in magnitude in every direction, and it’s only the indirect gravitational effect of the negative pressure that is being measured. All true, but not nearly as enlightening as leaving the concept behind altogether.
But I fear we are stuck with it. Cosmologists talk about negative pressure and w all the time, even though it’s confusing and ultimately not what we are measuring anyway. Once I put into motion my nefarious scheme to overthrow the scientific establishment and have myself crowned Emperor of Cosmology, rest assured that instituting a sensible system of nomenclature will be one of my very first acts as sovereign.
I generally concur with Jason’s statements here. The first law of thermodynamics is a statement about the conservation of energy. The change in total energy E is equal to the energy taken out as “work” W plus the energy accumulated as entropy times temperature
dE = -dW + TdS
and for certain systems one integrates the change of these quantities. One result for a closed system, which involves the exact differential employed, is that the entropy increases in time: dS/dt >= 0. When dS/dt = 0 the system has reached equilibrium.
In a statistical mechanical setting a system is partitioned into sets of macrostates, where a reshuffling of the microstates that compose it does not change it. If you define a system as existing in a macrostate, or equivalently a phase space volume V in a total volume V’ >= V, the entropy of the system is defined as
S = -k*log(V/V’) >= 0, since V/V’ <= 1,
Lawrence B. Crowell
[this cut off again, so here is the rest (I hope)]
where k is the Boltzmann constant. There are a couple of things which are apparent. How one defines this volume is subjective, and there is no clear procedure for doing this. However, the logarithm comes in to save our tails, for any “slop” in defining V results in only a small entropy change. How this is constructed is given by the H-theorem and has some connections with Bayesian statistics. If entropy increases it is clear that V increases in size. The system evolves through larger macrostates or phase space volumes. This means that the state of the system approaches the most probable, which occurs when the system reaches equilibrium.
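The point about the logarithm forgiving “slop” can be checked with a quick sketch (my own numbers, using the entropy formula as the comment writes it): rescaling the macrostate volume V by a factor f shifts the entropy by only k·log(f), which is tiny compared to S itself for a macroscopic phase-space fraction.

```python
# Sensitivity of S = -k*log(V/V') to "slop" in the choice of V.
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def entropy(V, V_total):
    """Entropy as defined in the comment; nonnegative since V <= V'."""
    return -k_B * math.log(V / V_total)

S = entropy(1e-30, 1.0)          # some small macrostate fraction of phase space
S_sloppy = entropy(2e-30, 1.0)   # same macrostate, volume misjudged by a factor of 2
print(abs(S - S_sloppy) / S)     # fractional change ~ log(2)/log(1e30), about 1%
```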
The entropy of the universe is a complicated subject. If we consider it to be a quantum wave function(al) then on a fine-grained scale there is no entropy. Similarly a macroscopic system is considered to be built from molecules that obey dynamical principles, but since there are so many of them we partition the system into coarse-grained states. In the same way the universe, which is a closed system, will on a coarse-grained descriptive level evolve to ever greater entropy. There is a further complicating matter: spacetime has a negative effective heat capacity. In a model universe with a black hole whose horizon temperature equals the background (e.g. CMB) temperature, if that black hole absorbs a photon or emits one by quantum radiance its thermodynamic state is removed from equal temperature with the background. So there is no stable equilibrium state for the universe, at least not locally and probably not in a finite time. The entire path integral of the universe may end up as flat Minkowski space, similar to how anti-de Sitter spacetime has Minkowski spacetime at conformal infinity. Since it appears recollapse does not happen, this will then be the final state of the universe as time -> infinity.
Lawrence B. Crowell
Lawrence,
You’re like an atomic clock to my sundial.
I will try to make some sort of statement here about attempting to theorize about physics or cosmology.
It is common for people to say that modern physics is an “epicycle on epicycle” endeavour and that maybe somebody from the outside can come in and knock the whole thing into a cocked hat. The often made statement about “scientific authority” and so forth is made to draw analogies with the pre-Copernicus state of affairs, or with what Galileo faced. Biblical creationists make similar gripes against the predominance of evolutionary theory in biology.
The conditions then in no way match conditions today. We live in a very different world from that of the late middle ages or the renaissance. The Church codified Aristotle’s physics and Ptolemy’s astronomy, or what might be called proto-physics, into its theology. It was in some sense a model building process. Yet the purpose of this was very different. The Aristotelian-Ptolemaic physical and astronomical systems were incorporated into theological canon in order to bolster a faith system. This faith system was beyond question, and is still so in some circles, and the extensions of this were also put beyond question. The whole system was founded on philosophical methods without empirical input, and was ultimately completely wrong. In effect science was starting from the ground floor.
This is very different from the revolution in physics at the turn of the 20th century. Classical mechanics was not found to be completely wrong, but rather incomplete or inadequate outside some domain of observation or experimentation. In the education of a physicist, Newtonian mechanics and its extension with Lagrangian and Hamiltonian mechanics are the groundwork. At the same time physics degrees do not first involve a study of Aristotle or Ptolemy. This carries on today. In order to understand contemporary physics and cosmology some familiarity with established physics is needed. In order to do research and publish papers a strong grounding in these is necessary. The conditions are very different from 400 years ago.
This is not to say there are not entrenched schools of thought, particularly in frontier research areas. You have communities of people determined to pursue a line of thought and investigation, and to out-argue competing schools of thought. The string theory vs loop quantum gravity debate suggests this sort of thing. People are people after all. Yet as the process continues eventually a single “picture” emerges as the new theory, particularly if it is upheld by observational or experimental results. As time goes on that theory becomes a canon of sorts in the field of physics.
So as things stand nobody who is completely outside of physics, who has never darkened the door of a physics classroom, or has never solved a Lagrangian problem or a wave equation or … , is going to suddenly jump on the scene with the brilliant answers to contemporary problems in physics.
Lawrence B. Crowell
Of course, I think every clock is its own dimension of time, that being why time is relative and not absolute.
Uh, no. First of all, “Punctuated Equilibrium” is very poor terminology. Equilibrium, by definition, is the final state to which a system eventually evolves. Without changing the system, it will never get out of it.
This is not the case at all with the ecological systems which Stephen Jay Gould was describing, which can undergo massive changes without any external change to the system. These systems aren’t in equilibrium at all between these revolutions: they are in metastable states. If they were in equilibrium, then even after an external forcing of the system (e.g. a meteor impact), they’d go right back to the previous state in some finite time: you’d have to make a permanent change to the system for the state which the system approaches to be changed, such as by adding a new constant energy source or some such.
Biological systems are, in fact, massively out of equilibrium. But, regardless, this is rather off topic. I wasn’t discussing anything at all like that. What I was talking about is that systems tend towards higher-entropy states, and there’s no going back. If you lower the entropy of one isolated system (e.g. by heating or cooling your home), you have to raise the entropy somewhere else even more than you lower the entropy of your system. So, overall, the entropy increases. Eventually, our universe will (as near as we can tell) approach a state where there is nothing but empty space (and vacuum fluctuations): there won’t be any more entropy increase that living organisms like ourselves can exploit. Every fusion reaction in a star, every fission reaction here on Earth, every new star that is born, every old star that dies, every galaxy that forms, every two galaxies that merge, all of these bring us one step closer to this eventual fate of nothing left but empty space. Granted, it will take an astronomically long time to get there, but it is moving in that direction nonetheless:
http://en.wikipedia.org/wiki/Heat_death
No, we don’t know that the age of the universe is finite. We know that the age of our region of the universe is finite, but our region of the universe could easily have been born from some other region, which could have been born from some other region in perpetuity.
It seems to me, John, that your entire reasons for thinking that there’s something wrong with the Big Bang theory are based upon misconceptions brought about by confusing or inaccurate language used to attempt to describe the science to lay people. Really, now, you have no good reason to suspect that many thousands of highly intelligent people who have dedicated their lives to understanding this are all wrong just because we have a hard time communicating the concepts without a great deal of mathematics to describe what we’re talking about.
Honestly, you simply do not have the tools at your disposal to properly evaluate any of the current theories on the nature of the universe. Above all, you need to learn what scientists are talking about in the first place before saying it’s all wrong, and you aren’t even close to understanding that. Every single one of the “problems” you brought up in your above post is nothing more than a misunderstanding of what the physics are actually describing. It’s not necessarily that you’re too stupid (though that certainly is a possibility), it’s that you aren’t even going by the true description of the physics, which is mathematical. Any description of a physical law or process can only be accurate if it is a mathematical description: this is the language of the universe, and if you don’t understand that language, then you simply aren’t going to be capable of understanding the physics we’re talking about.
That said, I might as well try to clear up a few of your misconceptions.
The expansion is evidence of curvature of space-time. Space can be flat while space-time is curved. If space is flat but space-time curved, it just means that the only non-zero elements of the Ricci tensor are the time-time and time-space elements, with all the space-space elements zero.
This basically means that when describing a spatially-flat universe we often end up using Cartesian coordinates for our spatial coordinates that are dependent upon time but independent of space (note: there are an infinite number of possible coordinate choices, time-dependent Cartesian coordinates are just one). If there is spatial curvature, our Cartesian-like coordinates also become dependent upon space as well as time.
One way of visualizing this is that a spatially-flat universe with space-time curvature might potentially be visualized as a rubber sheet that is being stretched with time, or as a raisin cake that is rising: at any given time Cartesian coordinates are perfectly good, but those coordinates change with time. That change can be described as the curvature.
An expanding universe with both curvature in space and in space-time might be visualized as a balloon that is being blown up: spatially, it’s the surface of a sphere, which is spatially curved, and the points are also moving away from one another, which is space-time curvature.
The two have nothing to do with one another, so I’m really not sure what your issue with this is. The expansion of space doesn’t change the laws of physics as space expands, so I don’t see why it’d change something that appears to be part of the fundamental laws of physics.
Well, it’s just incorrect to say that the expansion was ever faster than the speed of light (or slower for that matter). The units of expansion are inverse time. The units of speed are distance over time. So saying that the expansion was ever greater than the speed of light would be rather like saying that two miles is greater than three kilograms: it’s a nonsensical statement. When physicists talk about a superluminal expansion, they’re not really talking about an expansion that is faster than the speed of light. They’re using incorrect, misleading language to describe a rapid acceleration of the rate of expansion. This is just one among many ways where you really need to understand the math to understand what’s going on, and it isn’t an expansion faster than light.
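The dimensional point is easy to make concrete (my own sketch, with an assumed present-day value H0 = 70 km/s/Mpc): the expansion rate has units of inverse time, so it only becomes a speed once multiplied by a distance, v = H·d, and that recession speed exceeds c beyond the Hubble radius d = c/H without anything locally moving faster than light.

```python
# Units check: H is 1/time (km/s per Mpc), speed is distance/time.
# Comparing H to c directly is the "miles vs kilograms" mistake.
H0 = 70.0          # km/s per Mpc (assumed value) -- really units of 1/time
c = 299792.458     # km/s

hubble_radius = c / H0      # Mpc; the distance at which v = H0*d equals c
print(hubble_radius)        # roughly 4300 Mpc
print(H0 * 5000 > c)        # a point 5000 Mpc away "recedes" faster than c
```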
So? First of all, this is only because of the particular time that we are observing the universe. If we were observing it a few billion years ago, or a few billion years in the future, we would see quite different ratios. Secondly, there’s no reason to expect that we should be able to use electromagnetic radiation to observe most of the universe. Some particles just don’t have electric charge.
I didn’t say that. I don’t know what the right answers to some of these questions are. Nobody does. But just because we don’t yet know the right answers to all of these questions doesn’t mean that we can’t point out answers that are wrong. And yours are wrong. There are things that I’m pretty certain about (e.g. the existence of dark matter), due to the weight of the evidence, and many others which I have no idea about (e.g. the nature of dark energy), because the evidence just isn’t yet in. But some ideas are just nonsense, because they either are in direct contradiction to what we do know, or are in direct contradiction to themselves. Your ideas are in direct contradiction to what we do know (they aren’t well-formed enough to be self-contradicting, as near as I can tell).
Jason,
You are to be commended on your patience. Since part of my problem seems to be the disconnect between the math and the analogies being used, here is one I keep bringing up that might be clarified;
As I keep mentioning, my problem is relating this to the speed of light. Let’s say there are two sheets parallel to each other. One is being stretched out and the points are moving away from each other. The other remains stable. The redshifted light is the expanding sheet, while the stable sheet is the speed of light. So when the redshifted sheet is twice the original size, it takes light twice as long to cross between any two points as it did originally. Yes, the speed of light is a local effect, but it would seem that the stretching of the other sheet is a local effect as well, given that every place on it is being stretched. It would seem that if space is a fundamental dimension, logically we would use the most stable method of measuring it to determine its size, just as we use the most stable measures of atomic activity to measure time. So saying two points are twice as far apart, as measured by the speed of light, would seem to be increasing distance in stable space, not expanding space?
I realize I’m probably not going to understand whatever math formulae you think are necessary to properly explain this, and I admit that my scepticism towards the math being the reality, as opposed to modeling the reality, is deep, so I suppose this will be a difficult issue to resolve, but the effort does stretch the mind and that is healthy for everyone.
I will second that so-called superluminal expansion, such as in inflationary cosmology, does not mean things travelling faster than light. Locally the speed of light is still c = 299,792 km/sec. This only means that there are distinct frames on different regions where the so-called frame bundle differs by a large curvature on the spacetime.
Punctuated equilibrium is a misnomer. Equilibrium in biology is equal to death. If there is no Krebs cycle, no metabolism and no more ATP to phosphorylate peptides in kinase activity etc., then that organism is simply dead — and it is in equilibrium, or is on a monotonic course to equilibrium as it decays or is consumed by other organisms. Homeostasis is a more operative word in biology and it is a sort of analogue of the limit cycle or strange attractor in non-Hamiltonian physics. An ecosystem can be in homeostasis, evolutionarily stable, and it can be punctuated by some chaotic event — volcanoes, asteroid impacts, etc.
A part of JM’s confusion is that general relativity is built up from local “patches” that are glued together in a chart system. The whole space is built up this way, but certain things such as the speed of light are measured on a local patch which is small enough to be flat, or approximately so. There is considerable Riemannian geometry behind this, in fact it really is Riemannian geometry.
Lawrence B. Crowell
Lawrence,
Doesn’t the space between galaxies consist of innumerable “local patches,” that might appear on a small enough scale to be flat, yet wouldn’t these have to expand/curve to some small degree for the galaxies to be moving away from each other? Why wouldn’t this expansion also be “superluminal?”
At what point did the superluminal expansion of inflation become the non-superluminal expansion of the current expanding universe? Is there a theory for why this expansion ceased to carry the light along with it and could be measured against a stable speed of light?
If the patches can be made small enough the curvature becomes vanishingly small. You don’t notice any expansion or acceleration of objects away from you due to cosmological expansion right around you. It only becomes apparent when we look far out.
I must confess I don’t quite understand what you are asking in the second paragraph.
Lawrence B. Crowell
This analogy just makes no sense whatsoever. The speed of light can’t be thought of as a static sheet. The units are wrong.
What do you mean by stable space vs. expanding space? If space is expanding, then it’s not stable.
Lawrence,
Regarding the Pioneer anomaly…
You speculated that
A few years ago I attended an informal talk by John Anderson, a NASA scientist who first noticed the possible anomalous acceleration of the Pioneer spacecraft toward the sun when he analyzed their radio Doppler and ranging data. He and others have done an extensive analysis that take into account the particular characteristics of the spacecraft, along with all known plausible sources of the apparent acceleration. What they have found is that after adding up all these sources, they are insufficient to account for the anomaly.
Wikipedia has a short article about the subject, including some references like one of Anderson’s papers (also available on the arXiv). It mentions some of the problems with trying to observe the anomaly with later spacecraft (design issues, etc.).
What is puzzling is that both Pioneer spacecraft observed the same anomaly, and they both saw it begin at similar distances from the sun. It would be surprising to some people if by coincidence both spacecraft had malfunctions that conspired to produce the anomaly at similar radii, but such a coincidence is certainly possible. They used the same design, so an independent mission is needed to eliminate that source of systematic error.
You may be right. We’ll have to wait to find out…
Given that the Voyager spacecraft have now traveled farther, we should have independent confirmation of the Pioneer anomaly, if it’s real.
Lawrence,
Large distances are still made up of lots of small distances, so while the effect may be too small to detect at these short distances, it still must exist in them to become apparent at large distances, re; curvature. The speed of light isn’t “curved” because it doesn’t change over time.
If Inflation is superluminal expansion, then it is presumably carrying the light along with it, so it would seem that during the inflation stage, the speed of light did increase as the universe expanded. So at some point, the speed of light stabilized, while the universe continued to increase in size.
Jason,
If the distance between two objects is x lightyears and 10000 years from now, it is still x lightyears, that would imply a stable unit of distance. Currently metric distances are defined in terms of lightspeed, so it is certainly considered as defining a stable unit of distance.
That’s the point. We consider it as expanding because we measure it in terms of lightyears and assume the distance between objects is increasing in terms of lightyears. So that as the universe expands, it takes light longer to cross the space between them. If lightspeed is stable then this expansion isn’t expanding space, rather it is increasing distance in stable space.
Of course we have no other point in time to measure from. If we were actually able to compare measurements taken thousands of years apart, would these other sources appear further away, due to actual recession, or would they still seem to be at the same distances, but still redshifted? If the second, then redshift would be due to something other than recession, but we have no way of making these measurements.
John,
Superluminal expansion, as I have repeatedly explained, is a misnomer. Furthermore, velocity as a quantity only makes sense locally, and locally the speed of light never changes. Think about it for a second: does the far away expansion of space really have any effect on how we think of the behavior of light here on Earth?
This has always been true: at any point in space and time, you could (as a thought experiment) set up a little experimental apparatus to measure the speed of light at that location. Provided the experimental apparatus was small enough compared to the local curvature that it could be approximated as flat space-time, that apparatus would always measure the same value for the speed of light, no matter where or when the apparatus was set up, whether it was during inflation, or here on Earth, or inside the event horizon of a black hole. This is the real meaning of the constant speed of light in General Relativity: it’s constant with respect to measurement by any observer measuring it at their current location in space-time.
Nope. Describe a universe where all of the objects are moving away from one another at constant speed, use the Einstein equations to solve for the curvature, and you’ll find curvature that indicates that space is expanding. The curvature of space-time and the matter content of the universe are coupled, and cannot be separated from one another.
That the anomalous acceleration is occurring with the Pioneer craft and not the Voyagers indicates a device-dependent process. This is most often a sign of an instrumentation problem. I can’t say why the Pioneers are experiencing this. Differential scattering of light from their surfaces, outgassing, or … ? A part of the problem is that it is an uncontrolled experiment; the craft were not designed specifically for this purpose.
To be honest, if there were some physics involved with this the same deceleration should be observed in the motion of planets. This should produce a small deviation from Kepler’s laws of planetary motion as dictated by Newtonian mechanics. A planet is big enough that photon pressure can’t nudge the body, there is no outgassing and so forth. If this is due to a cosmological constant Λ ~ 10^{-56} cm^{-2} and if the acceleration is

a = (2Λ/3) r c^2,

for r the relative separation between two bodies, then there should be some orbital ephemeris deviation for the outer planets.
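A back-of-envelope check (my own numbers, plugging Saturn's orbit into the comment's formula a = (2Λ/3)·r·c², with Λ as quoted) shows how such a test would go, and that a cosmological-constant effect at this scale is many orders of magnitude below the reported Pioneer anomaly of about 8.7×10⁻¹⁰ m/s².

```python
# Estimate the comment's cosmological-constant acceleration at Saturn's
# orbit and compare with the reported Pioneer anomaly. Illustrative only.
Lambda = 1e-56        # cm^-2, as quoted in the comment
c = 2.998e10          # speed of light, cm/s
r_saturn = 1.43e14    # cm, roughly 9.5 AU (assumed for the sketch)

a = (2 * Lambda / 3) * r_saturn * c**2   # cm/s^2
a_SI = a / 100.0                         # convert to m/s^2
pioneer = 8.74e-10                       # reported anomalous acceleration, m/s^2
print(a_SI, pioneer, a_SI / pioneer)     # Lambda term is utterly negligible here
```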
The term superluminal is unfortunate, and to be honest I cringe a bit whenever I hear the word. I can only say that with curved spacetime the usual notions of distance, velocity and the rest are “deformed.” The universe has some four dimensional curvature, even if the spatial surface (or coordinate choice or spatial “slice”) is flat. As a result our usual ideas about space and velocity are changed. This is why we exist in a universe that extends some 70 billion light years out to the deionization limit, or the CMB, while the universe is 13.7 billion years in age. Particles on different frames have been comoved with their frames to give rise to this.
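A quick numerical sketch (standard flat LCDM with parameters I am assuming: H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7) shows how the comoving distance to the CMB can exceed c times the age of the universe: D = c·∫₀ᶻ dz′/H(z′), with H(z) = H0·√(Ωm(1+z)³ + ΩΛ).

```python
# Comoving distance to the CMB in a flat LCDM toy model (assumed parameters).
import math

H0 = 70.0            # km/s/Mpc
Om, OL = 0.3, 0.7    # matter and cosmological-constant fractions
c = 299792.458       # km/s

def H(z):
    """Hubble rate at redshift z for flat LCDM."""
    return H0 * math.sqrt(Om * (1 + z)**3 + OL)

# Midpoint-rule integration of c * dz / H(z) out to the CMB at z ~ 1090.
z_max, N = 1090.0, 200000
dz = z_max / N
integral = sum(dz / H((i + 0.5) * dz) for i in range(N))
D_mpc = c * integral          # comoving distance in Mpc
D_gly = D_mpc * 3.262e-3      # Mpc -> billions of light years
print(D_gly)                  # tens of Gly, versus c * 13.7 Gyr = 13.7 Gly of light travel
```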
Lawrence B. Crowell
Jason,
So we know gravity bends space, because it distorts the path of light and we know space is expanding because light from distant sources is redshifted.
The speed of light? That’s just a local effect. If a galaxy that appears 100,000 lightyears away now should appear 200,000 lightyears away in 20,000,000 years, that’s not increasing distance, no, that’s expanding space. Oh yes, I get it now! Not.
The speed of light is the most stable measure of space, in fact:
Speed of light set by definition
In 1983, the 17th Conférence Générale des Poids et Mesures adopted a standard value, 299,792,458 m/s for the speed of light. This in turn defines the length of a metre in terms of the speed of light, so that further refinements in the current experimental value of the speed of light would only refine the definition of a metre. (http://en.wikipedia.org/wiki/Speed_of_light)
It is considered the most basic measure of distance, so if something is farther away than it was, according to how long it takes light to travel from there, that is increasing distance, not expanding space.
That’s like saying, “That’s not a car, that’s a hatchback!” Why can’t it be both?
Jason,
Because if it is increasing distance in stable space, that would mean the space pre-exists the expansion. Sort of like the train moving away isn’t stretching the tracks, but moving along pre-existing tracks.
This poses serious problems for Big Bang theory, in that if space isn’t curved/expanded, then we would seem to be at the center of the universe, since other galaxies are all redshifted directly away from us, proportional to distance. It also poses serious problems for Inflation theory, because if space pre-exists, it’s subject to quantum fluctuation, if not other material and energy, and the superluminal expansion would result in a shockwave that would burn everything to a crisp.
If redshift is somehow a function of light crossing distance, then that we appear at the center isn’t a problem, as every point would appear to be the center. The math is the same, it’s just that gravity wells cancel out the effect, rather than the entire universe expanding.
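For what it’s worth, the “every point would appear to be the center” observation is correct for any uniform expansion, and a toy model makes it concrete: if velocities obey v = H·d in one frame, then the velocities measured relative to any chosen galaxy also obey v = H·d from that galaxy. A sketch with made-up numbers:

```python
# Toy 1-D uniform expansion: pick any galaxy as the observer, and the
# relative velocities of all the others still satisfy v = H * d,
# so every galaxy sees itself at the apparent center of the expansion.
H = 0.07                                   # arbitrary expansion rate
positions = [float(i) for i in range(-5, 6)]
velocities = [H * x for x in positions]    # Hubble-like law in one frame

for obs in range(len(positions)):
    x0, v0 = positions[obs], velocities[obs]
    for x, v in zip(positions, velocities):
        if x != x0:
            # relative velocity over relative distance is the same H
            assert abs((v - v0) / (x - x0) - H) < 1e-12
print("every observer measures the same Hubble law")
```

So redshift proportional to distance, by itself, does not single out our location; it is exactly what uniform expansion looks like from anywhere.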
No, John, it doesn’t work that way. The geometry of space-time is intimately connected to the matter that inhabits it. The Einstein equations describe the relationship between space-time and matter, and if we have matter that is moving apart, then space is necessarily expanding.
Jason,
Space, as I pointed out, is defined in terms of light speed. So if something is moving away, as measured in lightyears, that is not expanding space. That is increasing distance.
The difference is that if the markers on your stretching sheet are stretched as well, that would be expanding space. If it takes more markers to cover the stretched area, that is increasing distance.
Space and time (or spacetime) are the fields in gravitation, just as the electric and magnetic fields act on charged particles. There is a whole lot I could write about this, but I am not sure how far to go and I have limited time. In the ADM form of gravitation the metric of a spatial surface g_{ij} and its conjugate momentum variable pi^{ij} obey a commutator relationship similar to that of the electric and magnetic fields. The conjugate momentum is defined according to Gauss’ second fundamental form, or the extrinsic curvature of a manifold (space) embedded in a manifold of higher dimension. This is the bedrock for the loop quantum gravity theoretic approach.
There is no preexisting space, nor is there a background space or spacetime upon which general relativistic spacetime exists. This type of construction has problems with the general covariant basis of general relativity. This bi-metric approach only works approximately if the metric with curvatures has small deviations from the background.
I would recommend that JM do some reading of popularizations on this subject, and be sure to understand what the author is writing rather than what he previously thought. This forum appears to be caught up, to a degree, in the confusion you appear to have, and nobody can write the thousands of words required to enlighten you on this subject. General relativity is an old subject by this time, first written down 93 years ago. Of course the subject is rich and offers research problems, but when it comes to some of these basic questions there are really no mysteries. Some of these matters are well understood and have been for many decades.
Lawrence B. Crowell
Lawrence,
I understand this. My point is that space is defined by the speed of light, so if you say the result of expansion means it takes light longer to cross between two points of reference, then that is an increased distance, as measured by lightspeed as the standard measure of space. If this expansion resulted in an increase in the speed of light, so that it took the same amount of time to cross that space, then that would be expanding space, as measured by lightspeed.
Well, we are back to square one. The speed of light is defined locally. Look at an Eddington-Finkelstein diagram for a black hole. Outgoing light rays asymptotically peel away from near the cylindrical horizon and then curve out. So you might be tempted to say that the speed of light is slow near a black hole. But this is just a particular coordinate representation. There are others which appear different. The same is the case for light which traverses large distances in the universe. It is tempting to say that since the manifold they are moving on is spreading apart ever faster, light must be going faster. But this really is an abuse of language, or of relativity. If you were in a laboratory falling towards a black hole you would measure the speed of light to be the same, even within the fraction of a second before crossing the event horizon. Indeed, you would measure the same light speed inside the black hole. So there is no real slowing down of light, and by the same token there is no real speeding up of light due to the eternal inflationary expansion of the universe.
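The coordinate-dependence being described can be made concrete: in Schwarzschild coordinates the radial coordinate speed of light is c(1 − r_s/r), which goes to zero at the horizon, yet any local apparatus still measures c. A sketch (the Schwarzschild radius here is roughly the Sun’s, purely for illustration):

```python
# Radial coordinate speed of light, dr/dt = c * (1 - rs/r), in
# Schwarzschild coordinates.  The apparent "slowing" near the horizon
# is a coordinate artifact; a local measurement always gives c.
c = 299_792_458.0        # m/s
rs = 2.95e3              # ~ Schwarzschild radius of the Sun, m (illustrative)

for r_over_rs in (1.0001, 1.1, 2.0, 10.0, 1e6):
    r = r_over_rs * rs
    coord_speed = c * (1.0 - rs / r)   # what the coordinates suggest
    local_speed = c                    # what a local apparatus measures
    print(f"r = {r_over_rs:>8} rs : coord {coord_speed/c:.6f} c, local {local_speed/c:.1f} c")
```

The coordinate speed runs from nearly zero just outside the horizon to essentially c far away, while the locally measured value never changes, which is the same point made above about light in the expanding universe.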
Yeah, it’s kind of strange. It is a good thing we are not discussing Bell inequalities in quantum mechanics, for there things get really strange. Physics is that way. The deeper we probe into its foundations, the further it departs from our ordinary experience and the intuition based on it.
Lawrence B. Crowell
John,
There is no ruler out in space. None. The numbers we place on space-time are entirely arbitrary, human constructions. Now, placing numbers out there is a useful tool for human understanding of what nature is doing, but when we do it we understand that we are making one of many possible choices of labels to place upon space-time, and our interpretation of what’s going on necessarily depends upon those labels in some regards.
When we say that space is expanding, or that space is flat while space-time is curved, we are taking the most simple, straightforward labeling of space-time that can be used for our observable universe as a whole: FLRW coordinates. In the context of FLRW coordinates, these statements make perfect sense, and are a good physical description of what is going on: as time goes on, space expands (or, in general, contracts, though not in our region).
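Concretely, in FLRW coordinates “space expands” just means the scale factor a(t) grows, and an observed redshift is 1 + z = a(observed)/a(emitted). A minimal sketch, using a matter-dominated toy model a(t) ∝ t^(2/3) (assumed purely for illustration, not our actual dark-energy-dominated history):

```python
# FLRW redshift from the scale factor: 1 + z = a(t_obs) / a(t_emit).
# Toy matter-dominated model a(t) = t**(2/3), in arbitrary time units.
def a(t):
    return t ** (2.0 / 3.0)

t_emit, t_obs = 1.0, 8.0          # light emitted at t = 1, observed at t = 8
z = a(t_obs) / a(t_emit) - 1.0    # 8**(2/3) = 4, so z = 3
print(z)                          # wavelengths stretched by a factor of 4
```

Nothing here refers to galaxies moving *through* a pre-existing space; the redshift comes entirely from the ratio of scale factors, which is exactly the sense in which the labels of FLRW coordinates describe expanding space.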