Greetings from Paris! Just checking in to do a bit of self-promotion, from which no blog-vacation could possibly keep me. I’ve written an article in this month’s Scientific American about the arrow of time and cosmology. It’s available for free online; the given title is “Does Time Run Backward in Other Universes?”, which wasn’t my choice, but these happenings are team events.
As a teaser, here is a timeline of the history of the universe according to the standard cosmology:
- Space is empty, featuring nothing but a tiny amount of vacuum energy and an occasional long-wavelength particle formed via fluctuations of the quantum fields that suffuse space.
- High-intensity radiation suddenly sweeps in from across the universe, in a spherical pattern focused on a point in space. When the radiation collects at that point, a “white hole” is formed.
- The white hole gradually grows to billions of times the mass of the sun, through accretion of additional radiation of ever decreasing temperature.
- Other white holes begin to approach from billions of light-years away. They form a homogeneous distribution, all slowly moving toward one another.
- The white holes begin to lose mass by ejecting gas, dust and radiation into the surrounding environment.
- The gas and dust occasionally implode to form stars, which spread themselves into galaxies surrounding the white holes.
- Like the white holes before them, these stars receive inwardly directed radiation. They use the energy from this radiation to convert heavy elements into lighter ones.
- Stars disperse into gas, which gradually smooths itself out through space; matter as a whole continues to move together and grow more dense.
- The universe becomes ever hotter and denser, eventually contracting all the way to a big crunch.
Despite appearances, this really is just the standard cosmology, not some fairy tale. I just chose to tell it from the point of view of a time coordinate that is oriented in the opposite direction from the one we usually use. Given that the laws of physics are reversible, this choice is just as legitimate as the usual one; nevertheless, one must admit that the story told this way seems rather unlikely. So why does the universe evolve this way? That’s the big mystery, of course.
Inelastic collisions are completely conventional examples of macroscopic time-asymmetry: start with low-entropy initial conditions, and watch the entropy increase. (Inelastic collisions produce heat, sound, and stresses, all of which increase the entropy.) They have nothing to do with why the arrow of time “must” run the way it does, since they say nothing about why the entropy was low to begin with.
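To make the entropy bookkeeping concrete, here is a minimal numerical sketch. All the numbers (masses, velocities, ambient temperature) are invented for illustration: a perfectly inelastic collision conserves momentum, dumps the lost kinetic energy as heat Q, and generates at least Q/T of entropy.

```python
# Entropy produced in a perfectly inelastic collision.
# All inputs are illustrative, not from the discussion above.

m1, m2 = 2.0, 1.0          # kg
v1, v2 = 3.0, -1.0         # m/s
T_ambient = 300.0          # K

# Momentum conservation fixes the final common velocity.
v_final = (m1 * v1 + m2 * v2) / (m1 + m2)

# Kinetic energy lost to heat, sound, and deformation.
ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
ke_after = 0.5 * (m1 + m2) * v_final**2
q = ke_before - ke_after   # joules dissipated

# If the heat ends up in surroundings at T_ambient, the
# entropy increase is at least Q / T (second law).
delta_s = q / T_ambient
print(f"v_final = {v_final:.3f} m/s, Q = {q:.3f} J, dS >= {delta_s:.5f} J/K")
```

Running the collision "backwards" would require delta_s worth of entropy to spontaneously vanish, which is the asymmetry being described.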
What about wavefunction collapse or world-splitting (or whatever)? A particle seems more ordered than a wave (but maybe not), and I guess the larger system must be taken into account. In an Everett scenario, is the tree balanced on both sides?
What was Veneziano thinking?;)
And it’s Gabriele Veneziano. He is what we call a “super cosmologist” because he thinks “outside the box.” :)
No, sorry. I didn’t mean that this explains why the entropy of the Universe was low to begin with. But as far as I understand it, it doesn’t matter what the value of entropy is or was at any point.
What matters is that the entropy is higher now than it was in the past. All the interactions that we know of either increase entropy or conserve it. I guess what I don’t understand is what all the fuss is over. You have some initial state, like a nearly homogeneous and isotropic expanding Universe, it evolves according to the laws of physics, entropy increases, the end.
Given that our Universe obeys the laws of physics we know, I don’t understand what question you’re actually attempting to address about the arrow of time.
“You have some initial state, like a nearly homogeneous and isotropic expanding Universe,”
Why?
That’s the question.
Energy radiates across 13.7 billion lightyears. Gravitational structure coalesces from within a radius of a few hundred million lightyears. While the energy lost to radiation is diffused across much more area than the gases from which mass congeals, it is also radiating in from the same amount of area. Unless of course you assume it’s all just expanding into a void that doesn’t otherwise exist, from a point that has no boundaries. All the while measured against a lightspeed that remains stable, even as the very space it measures expands. It is a question.
Neil,
Someone correct me if I’m wrong, but my impression of what C2 meant was that it was a way to express the volume of energy: since light is a wave spreading out in all directions, it is the x axis by the y axis, not the speed of light multiplied by the figure of its own rate. I even forget whether I read this somewhere or it was just an assumption.
Tom,
That’s not an interesting scientific question. “Why” do we have our Universe? As in for what reason? Not a question science pretends it can answer.
If you meant “how” did we get to have an isotropic and homogeneous expanding Universe, well, the best mechanism we know about is inflation. Inflation sets up many of the initial conditions that we need in the big bang, including a flat, homogeneous, isotropic Universe with a spectrum of inhomogeneities, devoid of monopoles and topological defects, hot enough to bring about baryogenesis and nucleosynthesis. But that’s not an arrow of time question.
And with an initial amount of entropy, that either stays the same or increases over time. It’s not like it’s increased all that much, either: over the course of our 14 billion years here, entropy’s increased by roughly a factor of 2. But again, what I don’t get is this notion that you can run time backwards and all the laws of physics will still be time-symmetric. They aren’t: try to run a Sun backwards. Or perhaps simpler, try to fry an egg backwards in time, or drop a glass of water onto the ground backwards. You can’t do that in this Universe. And that’s what I don’t get — since we know this, what can we ask that’s interesting about the arrow of time?
Ethan, I think you’re missing the whole point: The laws of physics are time-symmetric (at least the most common processes are). The laws of physics do not say that an egg cannot unfry or a puddle of water cannot spontaneously gather into a glass. That’s exactly the question that Sean is trying to answer: how can a time-asymmetric macroscopic rule (entropy always increases) arise from time-symmetric microscopic rules?
The partial answer is initial conditions. If the initial state of a system has a very low entropy (compared with its highest possible entropy), then the system is overwhelmingly likely to evolve in a way that increases entropy. On the other hand, if the initial state has the maximum possible entropy, then it will evolve so that entropy decreases (slightly, for a very short period of time). But this partial answer just brings up a followup question: why was entropy low in the early universe?
Saying that it was lower in the early universe than it is now because entropy always increases is circular thinking, because the reason entropy always increases is because it started out low.
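The “partial answer” above is easy to see in a toy simulation. The sketch below is my own illustration (particle number, bin count, and evolution time are arbitrary choices): non-interacting particles start in one half of a box, a low-entropy initial condition, and the coarse-grained entropy rises toward its maximum even though the underlying free-flight dynamics is perfectly reversible.

```python
# Low-entropy initial condition -> entropy increase, under reversible dynamics.
# N particles start in the left half of a box of length L, with random
# velocities and reflecting walls. Coarse-grained (Shannon) entropy of the
# bin occupancies rises toward its maximum, log(BINS).
import numpy as np

rng = np.random.default_rng(0)
N, BINS, L = 5000, 20, 1.0

x = rng.uniform(0.0, L / 2, N)        # all particles in the left half
v = rng.normal(0.0, 1.0, N)           # random velocities

def coarse_entropy(pos):
    counts, _ = np.histogram(pos, bins=BINS, range=(0.0, L))
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

s_initial = coarse_entropy(x)

# Free flight for time t, with reflections off the walls at 0 and L:
# unfold onto a period-2L interval, then reflect back into [0, L].
t = 5.0
y = (x + v * t) % (2 * L)
x_final = np.where(y > L, 2 * L - y, y)

s_final = coarse_entropy(x_final)
print(f"S_initial = {s_initial:.3f}, S_final = {s_final:.3f}, "
      f"S_max = log({BINS}) = {np.log(BINS):.3f}")
```

Reversing every velocity at time t would march the entropy back down; the asymmetry lives entirely in the special initial condition, not in the dynamics.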
Daryl,
I’m sure I’m missing the whole point. Many common processes involving the laws of physics are time-symmetric, as you state. But many processes are not time-symmetric, as well, even on microscopic scales.
Consider any 3-body collision, for example, or neutron decay, or the process of successful recombination in the hydrogen atom. If I were to run these processes forwards in time, they would occur with a certain probability. If I were to run them backwards in time according to the same laws of physics, they occur with a different, specifically a smaller probability.
I’m arguing two things here, I suppose. First, the microscopic rules are not all time-symmetric, and we know exactly how they aren’t time symmetric. How, therefore, could you reasonably expect that the macroscopic results would be time-symmetric? Second, the entropy was “low” (by which you mean non-maximal) in the early Universe because the physics that created the big bang created a certain amount of entropy, which happens to be low compared to the maximal value. It isn’t a philosophical question, though. The amount of entropy produced in the big bang is a consequence of the laws of physics that we have, and it turns out to be far below the maximum possible value.
I guess I still don’t see why this is interesting.
Ethan
Ethan writes: But many processes are not time-symmetric, as well, even on microscopic scales.
No, that’s not true. There are violations of time-reversal in weak interactions, but there is no reason to believe that that violation is responsible for the second law of thermodynamics. It clearly is not.
Consider any 3-body collision, for example, or neutron decay, or the process of successful recombination in the hydrogen atom. If I were to run these processes forwards in time, they would occur with a certain probability. If I were to run them backwards in time according to the same laws of physics, they occur with a different, specifically a smaller probability.
No, that isn’t true.
First, the microscopic rules are not all time-symmetric, and we know exactly how they aren’t time symmetric.
That isn’t true, either. The microscopic rules governing ordinary interactions (which covers your examples of eggs frying and spilled water soaking into the ground) are time-symmetric.
Ethan: everything that Daryl said, plus: it isn’t true that inflation explains the initial conditions. In fact, it just makes the problem worse! All explained in SC and Chen’s paper on the arxiv.
Okay. I guess it just doesn’t make sense to me for a quantum theory of gravity to be unitary. Consider, for example, a patch inside the horizon of a region of de Sitter space: it has a finite number of degrees of freedom. But it evolves into a state that includes many now causally-disconnected patches, each of which has the exact same number of degrees of freedom. Since there are more such regions, it would appear that the total number of degrees of freedom has increased, in violation of unitarity.
So, why should a quantum theory of gravity retain unitarity? Any such theory that retains unitarity would require, for example, that each of the patches into which a de Sitter region evolves be different in some way from the original patch, such that they individually have fewer degrees of freedom.
And if you don’t have unitarity, it becomes really easy, it seems to me, to have inflation generate a low-entropy region: you start with a region with few degrees of freedom but high entropy, and if it evolves into a state with many degrees of freedom, it pretty much automatically ends up in a very low-entropy state. So why should quantum gravity be unitary?
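The counting behind that worry can be sketched roughly. This is only schematic arithmetic, not a real quantum-gravity computation, and the per-patch degree-of-freedom count N0 is an invented illustrative number: after n e-folds a comoving region contains on the order of e^(3n) Hubble-sized patches, so if each naively carries the same number of degrees of freedom as the original, the apparent total grows without bound.

```python
# Schematic counting for the de Sitter / unitarity tension described above.
# N0 (dof per horizon patch) is an arbitrary illustrative number.
import math

N0 = 1e10
for n in (1, 5, 10):
    patches = math.exp(3 * n)   # ~number of causally disconnected patches
    total = patches * N0        # naive total degrees of freedom
    print(f"e-folds = {n:2d}  patches ~ {patches:.3e}  naive dof ~ {total:.3e}")
```

A unitary theory would have to explain why this naive multiplication is wrong, e.g. why the later patches are not independent copies of the original.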
Doesn’t Inflation theory just explain how thermal equilibrium is possible within the constraints of the BBT timeline while breaking the fewest possible fundamental constants? I know this is Chinese water torture for some, but wouldn’t an infinite universe have a natural equilibrium?
Isn’t there a basic conflict between the first and second laws of thermodynamics: all energy is conserved, yet it devolves toward a thermal equilibrium?
What if a complete thermal equilibrium, given the level of energy present, is just not stable? The cosmic microwave background radiation seems to be the energy closest to equilibrium, but there is a definite phase transition at 2.7 K. Space doesn’t seem able to hold energy above that in a stable, uniform state, so it is constantly collapsing and expanding around equilibrium, creating a perpetual convective cycle.
Sean says it’s running backwards, but not in this universe.
Are not similar conditions created in particle collisions?
So are there not instances where such events within this universe could be considered “running backwards” and thus contributing to the dark energy? It would be by the assumption that such supersymmetric states can indeed be reached and lie “under” the existing universe.
Plato,
There are two sides to the cycle quite evident in this universe. Gravity contracts energy into structure, until such a point where it breaks down, blows up, radiates back out, expanding the space occupied, as it diminishes the degree of structure. Then the process of collapse starts again. The reason there is an eternal amount of usable energy is because it is not stable at complete thermal equilibrium, so we have these tidal forces pulling it in and thus heating it up, which expands it, so it pushes back out again. No need for a singularity as uncaused cause of low entropy state.
The microscopic laws of physics are time-symmetric because that is how we set up the equations. They run from time1 to time2, and then we can run them backwards from time2 to time1 and say they are the same. But what if the nature of time isn’t captured by time1 and time2? What if time2 is inherently different from time1? Time as expressed in equations is not derived from experiment, is it? It’s basically a clock. It may be a simplification that misses something critical which, if captured, would show the source of time asymmetry. Until we can run experiments backward and forward in time, we can’t prove that the microscopic laws are time-symmetric.
John,
Sometimes a picture is worth a thousand words.
You had to know that for every “entry” there is a previous entry in thought.
Sandy, yes, it is true that we use time-symmetric equations to describe microphysics, but that is because empirically we find that time-symmetric equations most accurately describe our observations at the microscopic level.
More specifically, you start an experiment by setting up a system in state A, let it run for T seconds, then check to see if the system is now in state B. Doing this repeatedly allows you to empirically determine a transition probability: the probability P(A,B,T) that the system will go from A to B in T seconds. Now suppose we instead put the system in state B first, wait T seconds, and then check to see if it is in state A. Doing this repeatedly lets us compute P(B,A,T).
Empirically, what we find is that for systems that are microscopic, involving only a very small number of particles, the two transition probabilities are the same: that is, P(A,B,T) = P(B,A,T). So at the microscopic level, anything that is possible, the reverse is also possible, and has the same probability.
However, if instead of looking at tiny systems with only a few particles, we look at macroscopic systems involving ~10^26 particles, then there are transitions (an egg frying, ice cubes melting in the hot sun, etc.) such that the process runs perfectly fine from state A to state B, but the reverse is never seen.
So the problem of how to explain how microscopic reversibility gives rise to macroscopic irreversibility is not simply an artifact of the way we set up our equations.
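A standard toy model that makes this concrete is the Ehrenfest urn (my choice of illustration, not something from the thread): N labeled balls sit in two urns, and each step one ball, chosen uniformly, is moved to the other urn. At the micro level, every allowed transition and its reverse both have probability 1/N, so P(A,B,T) = P(B,A,T) in exactly the sense above. Yet the macro variable, the number of balls in the left urn, started far from equilibrium, relaxes toward N/2 and essentially never returns.

```python
# Ehrenfest urn: reversible micro-moves, irreversible macro-behavior.
import random

random.seed(42)
N, STEPS = 1000, 20000

left = [True] * N    # low-entropy start: every ball in the left urn
n_left = N
counts = []

for _ in range(STEPS):
    i = random.randrange(N)   # pick a ball uniformly; moving it back
    left[i] = not left[i]     # later is exactly as likely as this move
    n_left += 1 if left[i] else -1
    counts.append(n_left)

print("start:", N, " end:", counts[-1],
      " mean of last 5000 steps:", sum(counts[-5000:]) / 5000)
```

The recurrence time back to the all-left state scales like 2^N steps, which for macroscopic N is why the reverse process is "never seen" even though no micro-rule forbids it.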
Jim Antoniadis, many thanks for your reply (#70) to my #59. I can certainly see the similarity between your intriguing model and mine in some respects.
For now I’ll just make one point, which may have a bearing on the spheres you mentioned. Although it doesn’t detract from your ideas, I guess the moral is that in the context of cosmic evolution one must be flexible in thinking about the shape and structure of regions, and even whether these are bounded or otherwise, because suitable transformations can turn everything around in all kinds of ways. (That’s not speculative, but routine physics!)
In my model, once the collapsing “late” universe is past the self-dual boundary (the critical density I referred to) and thus inside the black hole and approaching the Cauchy horizon inside, its rotation frame-drags it into a tremendously tightly wound spacetime “reel”, with each layer perhaps a Planck distance deep.
The emergent dual universe (i.e. our “young” universe) is then manifested by particles and radiation travelling around and tunneling between these layers. In other words it is “intertwined” with the late universe, which thus underpins it, so to speak. (To an observer *in* the dual universe, these layers appear as the minuscule “curled up” dimensions one reads about!)
It’s the cosmic equivalent of living in a warren of limestone caves, where the stone is made up of the stacked bodies of countless creatures that once swam free, perhaps in long vanished limestone caves themselves!
Getting to the point: some years ago, a variant Big Bang theory called the Ekpyrotic Model appeared. You’re probably familiar with it, and one can always pursue the references in that Wikipedia article. In summary, the idea was that a pair of parallel branes floating in a high-dimensional space triggered the Big Bang by “colliding” and bouncing apart. I thought at the time it seemed slightly contrived. For example, why were these branes parallel in the first place?
But the “leapfrogging duals” model I sketched is compatible (in essence) with this colliding brane scenario if one assumes that these two interacting branes are none other than the collapsing late universe, and its emergent dual (which arguably starts out parallel to the late universe, by its very nature).
The above-mentioned Wikipedia article also references related and more recent cyclic models, one of which involves orbifolds. As far as I can discern, these are foliated or (roughly speaking) “wrapped” objects akin to the “reel” referred to above; but the Wikipedia article on them is rather terse and formal and not very illuminating.
You see what I mean about needing flexibility (literally!) when thinking about branes and their interactions? They needn’t be like large sheets which happen to be parallel, clash together, and bounce apart. Their structure and “collision” can be more subtle, as summarized above.
P.S. in mentioning Fourier transforms I wasn’t simply name-dropping. Like similar transforms, it has a built-in uncertainty principle based on the product of variances.
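For what it’s worth, that variance-product statement is easy to verify numerically. A rough sketch, assuming a Gaussian test signal and an arbitrary grid of my own choosing: the spread of a signal in time and the spread of its Fourier transform in angular frequency obey sigma_t * sigma_w >= 1/2, with equality for a Gaussian.

```python
# Numerical check of the Fourier variance-product uncertainty relation.
import numpy as np

N, T = 4096, 40.0
dt = T / N
t = (np.arange(N) - N // 2) * dt

g = np.exp(-t**2 / 2.0)                 # Gaussian, sigma = 1

# Spread in time, weighted by |g|^2.
w_t = np.abs(g)**2
sigma_t = np.sqrt((w_t * t**2).sum() / w_t.sum())

# Spread in angular frequency, weighted by |G|^2.
G = np.fft.fft(g)
omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)
w_f = np.abs(G)**2
sigma_w = np.sqrt((w_f * omega**2).sum() / w_f.sum())

print(f"sigma_t = {sigma_t:.4f}, sigma_w = {sigma_w:.4f}, "
      f"product = {sigma_t * sigma_w:.4f}  (lower bound: 0.5)")
```

Squeezing the Gaussian in time (smaller sigma) fattens its transform by exactly the reciprocal factor, leaving the product pinned at the bound.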
Thank-you Daryl, that is helpful. I wonder, is saying something is reversible (the probability of it going from state A to state B is the same as it going from state B to state A) – is that the same as saying it can go either way in time?
So the basic question is – why do we have a constraint (increasing/decreasing entropy) at macroscopic scales that we don’t have at microscopic scales? Are we using entropy as a proxy for the arrow of time? Is this warranted?
Sandy,
It’s a political fact of life that you can’t question the rules and still play the game. Presumably this shouldn’t apply to science, since science is all about finding what the rules are in the first place, but the irony is that instead of negating this fact, the process often reinforces it: ideas which have withstood questioning are assumed to be objective truth and become the foundation of further enlightenment, which only reinforces their status.

Consider: in fifth-grade geometry, did they explain that zero is the center point on the graph sheet, or did they point out that zero would actually be a blank sheet of paper? Suffice to say, the center point has become enshrined as the zero point from which everything else is defined in terms of a four-dimensional coordinate system, with time represented as a dimensional projection similar to the three minimalist spatial dimensions. Since we record time as a series of events, this has instinctive logic, but formalizing it as objective reality leads to such discussions as: we can go both directions on the x, y, and z axes, so why can’t we go both directions on the t axis? Eventually this train of logic devolves into the current nonsense of multiple universes, worlds, etc., because the initial assumption, that since time can be modeled as a dimension it must be a dimension, is wrong.

Actually there are natives of South America who view the past as in front of them and the future behind. That is because their point of reference is the energy, not the observer, and the event occurs before the observation, so time travels toward them, with the future becoming the present, then the past. As I keep pointing out, this does represent an opposing direction of time, but using a different model is very much like speaking a different language: if your brain is not wired for it, it is just gibberish.
Plato,
Back up to what we really do know. We live on a very small spot in space, in a galaxy that is trillions of times larger than this spot, yet is itself a small spot very far from other such galaxies. We observe the light from these other galaxies to be redshifted in proportion to their distance from us, although there likely are other reasons for redshift. While the history of this argument is too long to describe, accept that I’m not one who thinks redshift is best explained by Doppler shift due to recessional velocity.

As I keep pointing out, it cannot logically be argued that space itself is expanding and still have a constant speed of light to measure it against. Either space really is expanding, but lightspeed would have to increase accordingly, in which case it wouldn’t appear to expand, since any source x lightyears away would always appear x lightyears away; or it isn’t expanding space, but an increasing distance of stable space (which is what the Doppler effect is anyway), and then we get back to the reason why it is claimed space is expanding, which is that we would otherwise be at the center of the entire universe, since every other galaxy is redshifted, in proportion to distance, directly away from us. Which is nonsense.
So, is there another explanation for redshift that does make sense? We do perceive gravity as curving the measure of space around a body, yet radiation climbs directly out of this gravity well. Could it be that radiation effectively has the opposite effect on space, expanding it just as mass contracts it? Since it would be a far more gradual effect, effectively hidden in the vicinity of a gravity field and with no point of reference around which it curves, the only viable proof would be the redshift of light from distant galaxies.

When the effect now described as proof of dark energy was first discovered by Perlmutter et al. in 1998, the surprise was that the supposed rate of expansion wasn’t being slowed by gravity, since the assumption was that it was all a consequence of the initial singularity and should be cooling off. On the other hand, suppose redshift is an optical effect on light crossing enormous expanses of space, with the effect compounded, so that earlier redshift is further redshifted the farther the light travels. Then the effect is multiplied and appears faster the farther away the source is, so that eventually the source appears to recede at the speed of light, beyond which it is no longer visible. In that case we have the signature of a cosmological constant that balances gravity, not the afterglow of a singularity with dark energy tacked on to explain the observation of continuing expansion. Since this would be an optical effect, the actual sources are no more receding than a gravitationally lensed object moves around in space because an intervening gravity field makes it appear that way. Therefore the energy needed to make these objects actually move away is unnecessary.
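To illustrate the arithmetic of this “compounded redshift” speculation only (this is a toy of the commenter’s conjecture, not standard cosmology, and the stretch rate k is an invented number): if each megaparsec of travel stretches the wavelength by the same small factor k, the redshift compounds as 1 + z = (1 + k)^d, which is nearly linear in distance nearby (Hubble-like) but grows exponentially far away, so the naive “recession velocity” cz eventually exceeds c.

```python
# Toy of the compounded-redshift conjecture described above.
# k (fractional stretch per Mpc) is an arbitrary illustrative rate.
import math

c = 299792.458          # speed of light, km/s
k = 2.4e-4              # invented stretch per Mpc

for d in (100, 1000, 5000, 20000):      # distance in Mpc
    z = (1 + k) ** d - 1                # compounded redshift
    v_apparent = c * z                  # naive Doppler reading
    flag = "(> c)" if v_apparent > c else ""
    print(f"d = {d:6d} Mpc  z = {z:9.4f}  cz = {v_apparent:12.1f} km/s {flag}")
```

At small d this mimics a linear Hubble law; whether such a mechanism is physically viable is a separate question the toy does not address.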
John,
Physically, the effect can be interpreted as an object moving from the “false vacuum” (where φ = 0) to the more stable “true vacuum” (where φ = v). Gravitationally, it is similar to the more familiar case of moving from a hilltop to a valley. In the case of the Higgs field, the transformation is accompanied by a “phase change”, which endows some of the particles with mass.
Spontaneous Symmetry Breaking
Keeping this in the context of the “current state of the universe” helps me in some regard to point out the “perspective of others” who believe that the valleys can contain entropic valuations based on that “earlier time” in the universe’s expression.
We do not disregard the phase changes.
G -> H -> … -> SU(3) x SU(2) x U(1) -> SU(3) x U(1)
Here, each arrow represents a symmetry-breaking phase transition where matter changes form, and the groups (G, H, SU(3), etc.) represent the different types of matter, specifically the symmetries that the matter exhibits; they are associated with the different fundamental forces of nature.
Unification of all the forces (including gravity) says there is an earlier time?
So we keep a “bird’s-eye view” on the “nature of the universe” from this perspective? Applying “theoretical bends” to the interpretations is just another way of exploring the potentials and realization of what has transpired from the beginning (where is that?).
There had to “exist a place” for this cyclical nature, to join the beginning and end? Sean says it’s not in this universe, and I am saying it is so. :)
On why the universe had such low entropy in the past…if there was a singularity, wouldn’t that be the ultimate in low entropy? All potential matter/energy united into one thing. Couldn’t you say there were no microstates, just one state?
If there was then a big bang, all initial relationships, time, and space are created… that is a lot of order as well, as the initial conditions for the whole universe are formed: all the “rules” of chemistry and physics. That too is a whole lot of order. If the question is WHY there was such low entropy in the past, that isn’t really a question that can be answered. We don’t know why anything.