The arrow of time is hot, baby. I talk about it incessantly, of course, but the buzz is growing. There was a conference in New York, and subtle pulses are chasing around the lower levels of the science-media establishment, preparatory to a full-blown explosion into popular consciousness. I’ve been ahead of my time, as usual.
So, notwithstanding the fact that I’ve disquisitioned about this at great length and with considerable frequency, I thought it would be useful to collect the salient points into a single FAQ. My interest is less in pushing my own favorite answers to these questions than in setting out the problem that physicists and cosmologists are going to have to somehow address if they want to say they understand how the universe works. (I will stick to more or less conventional physics throughout, even if not everything I say is accepted by everyone. That’s just because they haven’t thought things through.)
Without further ado:
What is the arrow of time?
The past is different from the future. One of the most obvious features of the macroscopic world is irreversibility: heat doesn’t flow spontaneously from cold objects to hot ones, we can turn eggs into omelets but not omelets into eggs, ice cubes melt in warm water but glasses of water don’t spontaneously give rise to ice cubes. These irreversibilities are summarized by the Second Law of Thermodynamics: the entropy of a closed system will (practically) never decrease into the future.
But entropy decreases all the time; we can freeze water to make ice cubes, after all.
Not all systems are closed. The Second Law doesn’t forbid decreases in entropy in open systems, nor is it in any way incompatible with evolution or complexity or any such thing.
So what’s the big deal?
In contrast to the macroscopic universe, the microscopic laws of physics that purportedly underlie its behavior are perfectly reversible. (More rigorously, for every allowed process there exists a time-reversed process that is also allowed, obtained by switching parity and exchanging particles for antiparticles — the CPT Theorem.) The puzzle is to reconcile microscopic reversibility with macroscopic irreversibility.
And how do we reconcile them?
The observed macroscopic irreversibility is not a consequence of the fundamental laws of physics, it’s a consequence of the particular configuration in which the universe finds itself. In particular, the unusual low-entropy conditions in the very early universe, near the Big Bang. Understanding the arrow of time is a matter of understanding the origin of the universe.
Wasn’t this all figured out over a century ago?
Not exactly. In the late 19th century, Boltzmann and Gibbs figured out what entropy really is: it’s a measure of the number of individual microscopic states that are macroscopically indistinguishable. An omelet is higher entropy than an egg because there are more ways to re-arrange its atoms while keeping it indisputably an omelet, than there are for the egg. That provides half of the explanation for the Second Law: entropy tends to increase because there are more ways to be high entropy than low entropy. The other half of the question still remains: why was the entropy ever low in the first place?
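To make the counting concrete, here is a toy sketch (the numbers are purely illustrative): N distinguishable particles in a box, where the “macrostate” is just how many sit in the left half, and the entropy is Boltzmann’s S = k log(number of microstates).

```python
# Toy illustration of Boltzmann counting: the macrostate "k particles in the
# left half of the box" is compatible with C(N, k) microstates, and
# S = k_B * ln(Omega). Even splits overwhelmingly dominate.
from math import comb, log

N = 100  # illustrative number of particles

for k in (0, 10, 50):
    omega = comb(N, k)    # microstates with exactly k particles on the left
    entropy = log(omega)  # entropy in units of k_B
    print(f"{k:3d} on the left: Omega = {omega:.3e}, S/k_B = {entropy:.1f}")

# The all-on-one-side state has Omega = 1 (S = 0); the even split has
# Omega ~ 1e29 (S/k_B ~ 67). "High entropy" just means "vastly more ways to be
# that way", which is why systems wander toward the even split and stay there.
```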
Is the origin of the Second Law really cosmological? We never talked about the early universe back when I took thermodynamics.
Trust me, it is. Of course you don’t need to appeal to cosmology to use the Second Law, or even to “derive” it under some reasonable-sounding assumptions. However, those reasonable-sounding assumptions are typically not true of the real world. Using only time-symmetric laws of physics, you can’t derive time-asymmetric macroscopic behavior (as pointed out in the “reversibility objections” of Loschmidt and Zermelo back in the time of Boltzmann and Gibbs); every trajectory is precisely as likely as its time-reverse, so there can’t be any overall preference for one direction of time over the other. The usual “derivations” of the second law, if taken at face value, could equally well be used to predict that the entropy must be higher in the past — an inevitable answer, if one has recourse only to reversible dynamics. But the entropy was lower in the past, and to understand that empirical feature of the universe we have to think about cosmology.
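One way to feel the force of the reversibility objection is with a toy model whose microscopic rule is exactly reversible, such as the Kac ring; the sketch below (ring size and marker fraction are arbitrary choices) shows the macroscopic color imbalance relaxing whether you run the very same rule forward or backward from the special ordered state.

```python
# The Kac ring: N balls on a ring, with a fixed random set of "marked" edges.
# Each step, every ball moves one site and flips color if it crosses a marked
# edge. The rule is deterministic and exactly reversible, yet the macroscopic
# imbalance decays in BOTH time directions from the special all-white state.
import random

random.seed(0)
N = 2000
balls = [1] * N                                      # special low-entropy state: all white
marked = [random.random() < 0.1 for _ in range(N)]   # fixed markers on ~10% of edges


def step(state, forward=True):
    """One deterministic step; the backward rule is the exact inverse of the forward rule."""
    new = [0] * N
    for i in range(N):
        flip = -1 if marked[i] else 1
        if forward:
            new[(i + 1) % N] = state[i] * flip       # clockwise, flip at marked edges
        else:
            new[i] = state[(i + 1) % N] * flip       # counterclockwise, flip at marked edges
    return new


def imbalance(state):
    return abs(sum(state)) / N                       # the macroscopic "order parameter"


for forward in (True, False):
    s = list(balls)
    trace = [imbalance(s)]
    for _ in range(50):
        s = step(s, forward)
        trace.append(imbalance(s))
    print("forward " if forward else "backward", [round(x, 2) for x in trace[::10]])

# Both runs relax from imbalance 1.0 toward ~0: the reversible rule, by itself,
# "predicts" higher entropy toward the past as well as the future. Only a
# low-entropy boundary condition breaks the tie.
```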
Does inflation explain the low entropy of the early universe?
Not by itself, no. To get inflation to start requires even lower-entropy initial conditions than those implied by the conventional Big Bang model. Inflation just makes the problem harder.
Does that mean that inflation is wrong?
Not necessarily. Inflation is an attractive mechanism for generating primordial cosmological perturbations, and provides a way to dynamically create a huge number of particles from a small region of space. The question is simply, why did inflation ever start? Rather than removing the need for a sensible theory of initial conditions, inflation makes the need even more urgent.
My theory of (brane gasses/loop quantum cosmology/ekpyrosis/Euclidean quantum gravity) provides a very natural and attractive initial condition for the universe. The arrow of time just pops out as a bonus.
I doubt it. We human beings are terrible temporal chauvinists — it’s very hard for us not to treat “initial” conditions differently than “final” conditions. But if the laws of physics are truly reversible, these should be on exactly the same footing — a requirement that philosopher Huw Price has dubbed the Double Standard Principle. If a set of initial conditions is purportedly “natural,” the final conditions should be equally natural. Any theory in which the far past is dramatically different from the far future is violating this principle in one way or another. In “bouncing” cosmologies, the past and future can be similar, but there tends to be a special point in the middle where the entropy is inexplicably low.
What is the entropy of the universe?
We’re not precisely sure. We do not understand quantum gravity well enough to write down a general formula for the entropy of a self-gravitating state. On the other hand, we can do well enough. In the early universe, when it was just a homogeneous plasma, the entropy was essentially the number of particles — within our current cosmological horizon, that’s about 10^88. Once black holes form, they tend to dominate; a single supermassive black hole, such as the one at the center of our galaxy, has an entropy of order 10^90, according to Hawking’s famous formula. If you took all of the matter in our observable universe and made one big black hole, the entropy would be about 10^120. The entropy of the universe might seem big, but it’s nowhere near as big as it could be.
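If you want to check those orders of magnitude yourself, here is a rough back-of-the-envelope sketch using the Bekenstein–Hawking formula S/k_B = 4 pi G M^2/(hbar c); the masses plugged in are round values assumed only to an order of magnitude.

```python
# Rough check of the entropy figures quoted above. For a Schwarzschild hole,
# A = 16*pi*G^2*M^2/c^4 and S = k_B*c^3*A/(4*G*hbar), i.e. S/k_B = 4*pi*G*M^2/(hbar*c).
import math

G, hbar, c, M_sun = 6.674e-11, 1.055e-34, 2.998e8, 1.989e30  # SI units


def bh_entropy(mass_kg):
    """Dimensionless entropy S/k_B of a Schwarzschild black hole of the given mass."""
    return 4 * math.pi * G * mass_kg**2 / (hbar * c)


sgr_a_star = bh_entropy(4e6 * M_sun)   # the supermassive hole at the galactic center
one_big_bh = bh_entropy(1e22 * M_sun)  # all the matter in the observable universe (very rough)

print(f"Sgr A*:             S/k_B ~ 10^{math.log10(sgr_a_star):.0f}")
print(f"one big black hole: S/k_B ~ 10^{math.log10(one_big_bh):.0f}")
# Gives roughly 10^90 and 10^121 with these assumed masses, consistent with
# the ~10^90 and ~10^120 quoted in the text.
```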
If you don’t understand entropy that well, how can you even talk about the arrow of time?
We don’t need a rigorous formula to understand that there is a problem, and possibly even to solve it. One thing is for sure about entropy: low-entropy states tend to evolve into higher-entropy ones, not the other way around. So if state A naturally evolves into state B nearly all of the time, but almost never the other way around, it’s safe to say that the entropy of B is higher than the entropy of A.
Are black holes the highest-entropy states that exist?
No. Remember that black holes give off Hawking radiation, and thus evaporate; according to the principle just elucidated, the thin gruel of radiation into which the black hole evolves must have a higher entropy. This is, in fact, borne out by explicit calculation.
So what does a high-entropy state look like?
Empty space. In a theory like general relativity, where energy and particle number and volume are not conserved, we can always expand space to give rise to more phase space for matter particles, thus allowing the entropy to increase. Note that our actual universe is evolving (under the influence of the cosmological constant) to an increasingly cold, empty state — exactly as we should expect if such a state were high entropy. The real cosmological puzzle, then, is why our universe ever found itself with so many particles packed into such a tiny volume.
Could the universe just be a statistical fluctuation?
No. This was a suggestion of Boltzmann’s and Schuetz’s, but it doesn’t work in the real world. The idea is that, since the tendency of entropy to increase is statistical rather than absolute, starting from a state of maximal entropy we would (given world enough and time) witness downward fluctuations into lower-entropy states. That’s true, but large fluctuations are much less frequent than small fluctuations, and our universe would have to be an enormously large fluctuation. There is no reason, anthropic or otherwise, for the entropy to be as low as it is; we should be much closer to thermal equilibrium if this model were correct. The reductio ad absurdum of this argument leads us to Boltzmann Brains — random brain-sized fluctuations that stick around just long enough to perceive their own existence before dissolving back into the chaos.
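Quantitatively, the standard estimate is that the probability of fluctuating downward from equilibrium by an amount ΔS scales like exp(-ΔS/k). So a fluctuation large enough to produce our entire low-entropy observable universe is suppressed, relative to one that produces just a lone self-aware blob and nothing else, by a double-exponentially enormous factor; that disparity is what gives the Boltzmann Brain reductio its teeth.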
Don’t the weak interactions violate time-reversal invariance?
Not exactly; more precisely, it depends on definitions, and the relevant fact is that the weak interactions have nothing to do with the arrow of time. They are not invariant under the T (time reversal) operation of quantum field theory, as has been experimentally verified in the decay of the neutral kaon. (The experiments found CP violation, which by the CPT theorem implies T violation.) But as far as thermodynamics is concerned, it’s CPT invariance that matters, not T invariance. For every solution to the equations of motion, there is exactly one time-reversed solution — it just happens to also involve a parity inversion and an exchange of particles with antiparticles. CP violation cannot explain the Second Law of Thermodynamics.
Doesn’t the collapse of the wavefunction in quantum mechanics violate time-reversal invariance?
It certainly appears to, but whether it “really” does depends (sadly) on one’s interpretation of quantum mechanics. If you believe something like the Copenhagen interpretation, then yes, there really is a stochastic and irreversible process of wavefunction collapse. Once again, however, it is unclear how this could help explain the arrow of time — whether or not wavefunctions collapse, we are left without an explanation of why the early universe had such a small entropy. If you believe in something like the Many-Worlds interpretation, then the evolution of the wavefunction is completely unitary and reversible; it just appears to be irreversible, since we don’t have access to the entire wavefunction. Rather, we belong in some particular semiclassical history, separated out from other histories by the process of decoherence. In that case, the fact that wavefunctions appear to collapse in one direction of time but not the other is not an explanation for the arrow of time, but in fact a consequence of it. The low-entropy early universe was in something close to a pure state, which enabled countless “branchings” as it evolved into the future.
This sounds like a hard problem. Is there any way the arrow of time can be explained dynamically?
I can think of two ways. One is to impose a boundary condition that enforces one end of time to be low-entropy, whether by fiat or via some higher principle; this is the strategy of Roger Penrose’s Weyl Curvature Hypothesis, and arguably that of most flavors of quantum cosmology. The other is to show that reversibility is violated spontaneously — even if the laws of physics are time-reversal invariant, the relevant solutions to those laws might not be. However, if there exists a maximal entropy (thermal equilibrium) state, and the universe is eternal, it’s hard to see why we aren’t in such an equilibrium state — and that would be static, not constantly evolving. This is why I personally believe that there is no such equilibrium state, and that the universe evolves because it can always evolve. The trick, of course, is to implement such a strategy in a well-founded theoretical framework, one in which the particular way in which the universe evolves is by creating regions of post-Big-Bang spacetime such as the one in which we find ourselves.
Why do we remember the past, but not the future?
Because of the arrow of time.
Why do we conceptualize the world in terms of cause and effect?
Because of the arrow of time.
Why is the universe hospitable to information-gathering-and-processing complex systems such as ourselves, capable of evolution and self-awareness and the ability to fall in love?
Because of the arrow of time.
Why do you work on this crazy stuff with no practical application?
I think it’s important to figure out a consistent story of how the universe works. Or, if not actually important, at least fun.
Very nice discussion!
I have what possibly is a very silly question, but still here it goes.
How does the concept of “arrow of time” appear in general relativity? There is this idea that time in GR is just a coordinate, without objective meaning. In a different coordinate system, the arrow of time would become the “arrow of down”, so to speak, and maybe even lose its directionality. Is the arrow of time diffeomorphism-invariant?
Apologies if it does not make much sense.
Greg,
Nah, just describe the time reversal of the black hole out to some thin shell just outside the black hole. Doesn’t change anything.
Jesse,
But that’s what we mean when we say “white hole”: the time reversal of a black hole. The only time reversal that makes any sense is the thermodynamic arrow of time: all fundamental physical laws are symmetric under time reversal.
Well, pure GR isn’t completely accurate, so it’s not useful to use in such arguments. Our understanding of quantum mechanics indicates that the properties of the black hole (e.g. inability for light to escape) are fundamentally thermodynamic in nature.
Jason wrote:
But that’s what we mean when we say “white hole”: the time reversal of a black hole. The only time reversal that makes any sense is the thermodynamic arrow of time: all fundamental physical laws are symmetric under time reversal.
It seems like you’re still not addressing my point, namely: “It would be entirely consistent with T-symmetry if each of the following were allowed solutions to a theory incorporating both GR and quantum effects: black holes with increasing entropy, black holes with decreasing entropy, white holes with increasing entropy, and white holes with decreasing entropy.” What did you think of the analogy with the time-reversed solar system? Do you agree that a pure GR description of a stable black hole shows neither increasing nor decreasing entropy, and that we need to analyze quantum field theory on curved spacetime (or quantum gravity) to derive the prediction of Hawking radiation and increasing entropy? Do you agree that this derivation probably does not bother to check whether it’s physically possible to have a black hole decreasing in entropy, since this would likely require an unrealistic low-entropy future boundary condition on the radiation? Do you agree that if a decreasing-entropy black hole was physically possible (even if totally unrealistic in our universe), then by T-symmetry an increasing-entropy white hole would also be possible? (and if it was possible, we could no longer use pure thermodynamics to explain why we don’t see any, although there might be other good reasons such as there being no natural process that would create one).
I am going to give my piece on this. For any phase space volume V the entropy is S = k log(V). Thermodynamics is nifty in that it saves a lot of hassle with those logarithms. This approach means that a coarse grained description with V and another with V’ will result in a small error due to the logarithm. The evolution of a system from one macrostate to another is statistically most likely to go from one macrostate to another with a larger volume. That is, the change in entropy will be positive. This likely plays a role in quantum gravity and the origin of time, for quantum gravity states do not have a unique correlation to classical spacetime variables. Further, if a quantum wave function of a universe exists, we are unable to assign a globally defined time variable to the entire superposition of metrics, or states whose configuration variables are these metrics.
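To see why the error from coarse graining is small: S’ – S = k log(V’/V), so even misjudging the phase space volume by a factor of two shifts the entropy by only k log 2, which is utterly negligible next to the ~10^23 k or more carried by any macroscopic system.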
A standard view of quantum gravity is the path integral, which Hawking and others Euclideanize. In this perspective quantum gravity is obtained up to the one-loop correction as the result of some steepest descent method. Effectively the total path integral
Psi[g, phi] = int DgDphi exp(iS[g, phi]),
which is often Euclideanized by the Wick rotation i —> 1, is written under a steepest descent method as
Psi[g, phi] =~ sum_{&g}int Dphi exp(I[g_0 + &g, phi] + iS'[g_0,phi])
which sums over paths that have small deviations from the expected path g_0, eg the classical spacetime, and the action is expanded into real and imaginary parts. The real part is the instanton component, with nabla S >> nabla I, reflecting the small quantum aspect of the spacetime — eg a WKB type of approximation. From this
I[g_0 + &g,phi] = I[g, phi] – Gam(loop),
where the loop part is the O(hbar) corrections from the &g content. The remainder of the spacetime content is on the tree level, which is essentially the classical spacetime. If we let &g = 0 then the above path integral is equal to that for a quantum field in any spacetime, and if we restrict our attention to flat spacetime this recovers the standard textbook quantum field theory.
There are two problems that we have in all generality. The first is that not all quantum states over a metric space as the configuration variable have a classical spacetime for that configuration variable. The other is that we have a counting problem. We really don’t know how to count states in general. This is why we are left doing saddle point integrations of the action around small quantum variations in a metric. We are stuck with effective theories. Can quantum states be counted in general? If we have a set of quantum states Psi(Y) over fields Y, then a process involving these fields is a time ordered product of these fields that enters into the path integral and acts on the vacuum state, weighted by the measure term (the exponent of the action etc), to return a value. Can we define a time ordered product of states where the field or configuration variable is space or spacetime?
I say no, and here is why. The Wheeler DeWitt equation Hpsi[g] = 0 may be converted to a Schrodinger equation if a harmonic oscillator term is introduced. Some work is done and the energy eigenvalues of this can provide a stationary phase e^{iEt} to the wave equation, which converts the WD equation into the SE. Now this SE obtains for each eigen-wave function(al) psi_n[g], and the “time” involved pertains specifically to that space. So if we have a superposition of states or entanglements, can we define a global “time”? The answer is no, for this would imply a coordinate dependent map between metrics, and general relativity is covariant, or as Wheeler loves to say, independent of coordinate descriptions. The operator i&/&t = iK_t is a Killing vector, which is unique to a spacetime, and is coordinate independent. So in trying to define a general time coordinate for all possible eigenstates we will commit a “crime” against general relativity.
So is making the assumption that two metrics are “close enough” too bad a crime to commit? Maybe not, and if we are careful it might be a “good” thing. If we make an assignment of a Killing vector, we impose an error determined by the difference in the two metrics, &g = g’ – g (& = delta). The Einstein field equation R_{ab} – 1/2Rg_{ab} = -k T_{ab} in the trace reversed form is R_{ab} = k(T_{ab} – 1/2Tg_{ab}). For a source free spacetime, T_{ab} = 0, the Ricci curvature is zero. In a source free region R_{ab} = 1/2Rg_{ab}, but under the assignment of two metrics in &g, assume that a small violation of the Einstein field equations means a nonzero Ricci curvature determined by a “potential,”
R_{ab} = nabla_a nabla_bV =/= 0
where the potential is a metric difference V = (g’ – g)_{ab}g^{ab}. (nabla = vector derivative etc) The perturbed vacuum Einstein field equation may then be written as
nabla_a nabla_bV = 1/2 nabla^2V &g_{ab}
or according to the difference in the metric
nabla_a nabla_b &g = 1/2 &g_{ab} nabla^2 &g.
When contracted on indices and integrated over a region volume in M^4 we find that
int_{vol} dv &g nabla^2 delta g = – int_{vol}dv nabla &g nabla &g = – int_{vol} dv (nabla g’ – nabla g)^2,
which is the source of the energy error functional &E_g = |nabla g’ – nabla g|^2. This is a coarse graining over quantum gravity states.
What this does is to start the process of coarse graining the quantum states of gravity. In doing so a set of states which are “close enough” to a classical spacetime g_c may be coarse grained around the state for g_c with the energy error function &E_g &T >= hbar/2. So let us assume that the universe is described by a set of states, and indeed a set of vacua, vacua which are not unitarily equivalent in the standard sense. The early universe was described by these states on a more fine grained level, or to use the macrostate analogy the states of the universe are sharp or have a high degree of “fidelity” or distinguishability and &E_g = 0 between all quantum states with a metric configuration variable. This means that &T —> infinity, or that time is so uncertain that …, well, to put it bluntly, there is no time. The process of a cosmology tunnelling out of the vacua is analogous I think to squeezed states or parametric amplification in quantum optics, which means that &E_g becomes larger and &T becomes smaller and the cosmology has a coarser grained description over quantum states.
Of course. The problem is that there’s no reason whatsoever to suspect that it is possible to produce a decreasing-entropy black hole.
On equilibrium: Equilibrium is not the stable state in general relativity. It is not difficult to see why. Suppose we have a black hole of mass M which has an event horizon at radius r = 2GM/c^2 and an area A = 4pi r^2. The entropy of a black hole is S = (k/4)A, for k a multiplied set of constants. Now if the black hole is in equilibrium it would mean that the temperature of the event horizon is equal to the background, say the universe at large at T = 2.7K. Such a black hole would have a mass about equal to the moon and the size of a pinhead or so. Now assume the black hole emits a quantum by the Hawking process so that its mass M —> M – &m, for &m
Lawrence,
I think you meant to say that a black hole at equal temperature to its surroundings is not in equilibrium. Yeah, now that I think about it, that seems to be correct. However, though your post appears to have been cut off, you have to bear in mind that it is necessary to consider the surroundings when deciding whether or not this equilibrium is stable: if the Hawking radiation from the black hole heats up the surroundings enough, then it may remain in equilibrium.
The post apparently got cut. To make it short the heat capacity of spacetime is negative. This means that, contrary to standard thermodynamics, high temperature means low entropy and vice versa. So a black hole whose horizon temperature is equal to the background temperature will not stay there. The quantum emission of a particle or the absorption of a particle means that the black hole will run away in that direction. So with spacetime thermodynamics equilibrium really does not exist.
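A quick numerical sketch of that runaway (the constants are standard; the moon-mass figure is approximate):

```python
# Hawking temperature T = hbar*c^3 / (8*pi*G*M*k_B): temperature FALLS as mass
# grows, which is the negative heat capacity at the root of the runaway.
import math

G, hbar, c, k_B = 6.674e-11, 1.055e-34, 2.998e8, 1.381e-23  # SI units


def hawking_T(mass_kg):
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)


M_moon = 7.35e22  # kg, roughly the Moon's mass
for M in (0.5 * M_moon, M_moon, 2 * M_moon):
    print(f"M = {M:.2e} kg  ->  T = {hawking_T(M):.2f} K")

# A hole near the 2.7 K background that absorbs a photon gets heavier, hence
# colder, hence absorbs even faster; one that emits gets lighter, hotter, and
# evaporates faster. Either way it runs away from equilibrium.
```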
Also as the universe expands the CMB temperature will over time approach absolute zero asymptotically. This is in the semi-classical sense, where there might turn out to be zeta-function condensate-like partition functions for some tiny terminal temperature, but that is conjectural. Black holes will then eventually fall below the horizon temperature of the universe, due to Gibbons-Hawking radiation from the cosmological horizon at r = sqrt{3/Lambda}. Here Lambda is the cosmological constant of the de Sitter spacetime, which our universe appears to approximate.
Lawrence B. Crowell
Jason wrote:
Of course. The problem is that there’s no reason whatsoever to suspect that it is possible to produce a decreasing-entropy black hole.
“Possible to produce” in the sense of it having a non-negligible possibility of occurring in our universe (which would not be true of any large decrease in entropy for a closed system), or “possible to produce” in the sense of it being an allowable solution to whatever the fundamental laws of physics are? My argument is just that it might be possible in the second sense–you haven’t offered any rigorous argument as to why we should be confident it wouldn’t be, and as I said, it’s unlikely that physicists were looking for such a solution when they calculated the behavior of Hawking radiation at the event horizon of a black hole (since such a solution would probably require imposing a strange low-entropy future boundary condition on this radiation).
Think back to my thought-experiment about simulating a black hole in an enormous computer simulation whose rules were based on whatever fundamental laws govern a black hole and the particles around it (including Hawking radiation). If we take some later state of the simulation, and reverse all the particle momenta as well as whatever else needs to be reversed to flip the arrow of time, then if we now run the simulation forward from this state, do you agree we’d see a reverse-entropy white hole? But now what happens if we perturb the initial state of some particles outside the event horizon–won’t this perturbation likely cause the arrow of time outside the horizon to flip back to the forward direction? Are you asserting that merely flipping the thermodynamic arrow of time outside the horizon would be sufficient to turn the object from a white hole to a black hole, so that the horizon would no longer be impenetrable from the outside and matter could fall in? I’m not so sure that’s how it works, the object might continue to behave as a white hole but with the entropy around it increasing. (And obviously if this method will produce a simulated white hole with entropy increasing, then one could simply reverse the momenta etc. of a later state of this new run to get a new simulation containing a decreasing-entropy black hole).
Actually, this suggests another question for Greg Egan, or anyone else here who’s knowledgeable about the mathematics of general relativity–in terms of pure GR, what’s the reason why it’s impossible to enter a white hole event horizon? Is it just that all the test particle trajectories happen to lead out, or is there some reason it would be impossible to add a new test particle trajectory leading in? (so that it would be impossible for an object that had previously been behaving as a white hole to suddenly start behaving as a black hole, as would be required for the perturbation in my thought-experiment to convert a white hole into a black hole) I know in the case of a black hole, if one wants to maintain a constant distance from the horizon (at a distance closer than it’s possible to orbit), then one must continually thrust away from the horizon–would the same be true for maintaining a constant distance from a white hole, or would it be reversed, so that one would have to continually thrust towards the horizon to keep from moving outwards? Does the white hole behave as a source of “anti-gravity”, in other words?
Lawrence Crowell wrote:
To make it short the heat capacity of spacetime is negative. This means that, contrary to standard thermodynamics, high temperature means low entropy and vice versa. So a black hole whose horizon temperature is equal to the background temperature will not stay there. The quantum emission of a particle or the absorption of a particle means that the black hole will run away in that direction. So with spacetime thermodynamics equilibrium really does not exist.
What about the situation of a black hole in a perfectly reflective box–wouldn’t all the photons radiated away as Hawking radiation fall back into the black hole, so an equilibrium would be reached? The abstract of this article seems to say you could have a stable thermal equilibrium in this case, for example, as does this page from the website of Piet Hut at the Institute for Advanced Study. And if you can have an equilibrium in a box, then whatever the temperature and other properties of the photons outside the horizon in the box, what is it that prevents you from duplicating this in an infinite universe and getting a solution where the black hole is at equilibrium with the radiation around it?
To enter a white hole horizon, you’d have to be travelling faster than light — just as you’d have to be travelling faster than light to cross outwards through a black hole’s horizon.
A horizon is a null surface, which means it’s generated by paths that light rays would follow. At any event in spacetime, you (or any massive test particle) can only be following a timelike worldline, which lies inside the light cone at that event. Every light cone comes in two pieces: one facing into the past, one facing into the future (with the definition being a matter either of convention or thermodynamics; there’s nothing in local spacetime geometry to tell you which is which).
At a point on the event horizon of a black hole, the future-pointing half of the light cone also points entirely inwards, into the hole (except for a sliver that remains exactly on the horizon, i.e. the cone is tangent to the horizon). Equally, the past-pointing half of the light cone points entirely outwards (except for that single tangent line), away from the hole. So if you’re at the horizon, you must have got there from the exterior, and you must be heading into the interior.
But as I said, there’s nothing in the local spacetime geometry to tell you which way is the future and which way is the past, so if we swap the roles of “future” and “past” in that description, we find that at the horizon of a white hole, you must have got there from the interior, and you must be heading into the exterior.
It’s really only those future/past labels that distinguish a black hole from a white hole.
No, you’d still have to exert thrust away from a white hole to keep still, because accelerations are unchanged by time-reversal. And if you stop thrusting and let yourself fall towards a white hole, you would always continue moving towards its horizon, but you would never pass through, you would just get asymptotically closer and closer. This is the time reverse of what happens to things that escape from close to a black hole (under inertial motion, by virtue of having a large initial outwards radial velocity): they were never inside the horizon, but if they were arbitrarily close to the horizon and moving away from it with sufficient speed they will eventually escape — but it will take a very long time if they were very close to the horizon.
Black holes are interesting constructs, but the primary real black holes are the vortexes at the center of galaxies. Much of the mass falling into galaxies gets radiated out prior to its falling into the vortex, and it would seem that what does fall in is ejected as charged particles out the poles. This would seem to be half of a convective cycle of collapsing mass and expanding energy/radiation. Essentially the eye of a hurricane. Could there be another half, where this radiation cools down to the point where it starts to condense back out as mass? One prediction of this theory would be a stable quantity of radiation in space, similar to moisture in the atmosphere, with a clear cut-off level, similar to the dew-point. Say a cosmic background radiation, up to the level of 2.7K.
Since gravity causes the metric of the dimensionality of space to contract, could radiation cause it to expand? Since there is no gravitational vortex around which this effect bends, it wouldn’t “curve” space, but it might manifest in other ways, such as red-shifting the spectrum of extremely distant light sources. This would be equivalent to a cosmological constant, which Einstein proposed to balance out the gravitational collapse. Surprisingly, this is what redshift appears to model, but since that isn’t accepted theory, dark energy has been proposed to fill in the very large blank.
http://www.plasmacosmology.net/
Greg Egan wrote:
But as I said, there’s nothing in the local spacetime geometry to tell you which way is the future and which way is the past, so if we swap the roles of “future” and “past” in that description, we find that at the horizon of a white hole, you must have got there from the interior, and you must be heading into the exterior.
It’s really only those future/past labels that distinguish a black hole from a white hole.
If there’s no difference in the local spacetime geometry, is there anything to prevent a single object from behaving both ways at different times, or even simultaneously? i.e. at some time test particles are departing from the singularity and crossing the event horizon in the outward direction, at another time (or the same time) test particles are entering the horizon from outside and falling into the singularity? This gets back to the question I was asking in my comment #110 about whether, by flipping the arrow of time for matter outside a simulated white hole (by introducing a perturbation in the initial state of a simulated run that in its unperturbed version would look like a white hole), you would then see the white hole itself flip and start to behave more like a black hole, with matter able to fall in from the outside. If this is in fact possible, then it would lend support to the idea that the only difference between a black hole and a white hole is the direction of the thermodynamic arrow.
Hi Sean,
It appears the same as Penrose’s argument in his Road to Reality. My question is: who was the first guy to give the answers to this FAQ? Another question is: who do you think is the most respected living guy besides yourself? :)
Thanks.
Jesse M. wrote:
As far as I can see, if you’re able to switch the thermodynamic/cosmological arrow of time in the external universe, you could have a single structure act as a black hole for some of the time and a white hole for some of the time. The hole carries with it an enduring distinguished direction in time: the direction in time that accompanies outwards passage through the horizon. If the thermodynamic arrow associated with that direction in the external universe changed, then you’d be entitled to call the hole by different names in the different epochs.
The “even simultaneously” is trickier; I think that depends on exactly where and when and in what coordinates you ask the question. The singularity inside a hole is actually spacelike — i.e. extended in a spatial direction, not in time — like the Big Bang or Big Crunch. When you fall into a BH, you don’t arrive at the singularity like you’re reaching the centre of the Earth; what you see is the space around you getting crushed to a point in two directions while expanding in a third, until at a certain time everything is destroyed. Conversely, everything that leaves a WH singularity could be said to emerge from it at the same time, in at least one set of coordinates.
Now the thing that’s really messy about white holes is that, at least under classical GR, you have no idea what’s going to emerge from the singularity. When a hole’s “acting as a black hole”, we’re happy to say that we can explain the states of matter at the singularity by knowing the history of what falls in. But when we’re treating the hole as a white hole, what is there to constrain the entropy, or internal thermodynamic arrows, of objects that the singularity spits out? It’s hard to see any clear resolution of that coming without quantum gravity. And it seems to be a bit of a cheat to say “The black holes we know about eat low entropy matter, whose entropy is increasing as it hits the singularity, therefore white holes will emit low entropy matter whose entropy is decreasing as it flies away from the singularity.”
Jesse, I have a lot of trouble figuring out what would happen in your simulation with regard to Hawking radiation, because the treatments of Hawking radiation that I have to hand all do global calculations that follow waves all the way from “past null infinity” to “future null infinity”, and at some stages even invoke the collapse that forms the black hole. I’m not competent to answer the question “Is there some local process dictated by the spacetime geometry alone that determines what the Hawking radiation just outside the horizon must be doing, irrespective of all boundary conditions?”
Zitron said: “How does the concept of “arrow of time” appear in general relativity? There is this idea that time in GR is just a coordinate, without objective meaning. In a different coordinate system, the arrow of time would become the “arrow of down”, so to speak, and maybe even lose its directionality. Is the arrow of time diffeomorphism-invariant?”
The laws of physics, including General Relativity and significantly Quantum Mechanics, are almost invariably reversible. CP symmetry has been observed to be violated, but only in certain weak decays (such as the neutral kaons mentioned above).
There is no “arrow of time” in General Relativity. Time in GR is conventionally treated as a “space-like” dimension, so locations in space-time are geometric coordinates. There are a number of geometries which satisfy the GR equations, but the first one developed was that of Schwarzschild. Since GR is satisfied by sets of positive and negative solutions and because the Schwarzschild “mirror” geometry satisfies the complete batch, one could argue Schwarzschild geometry to be the most complete geometric reflection of the concept.
The “arrow of time” is a thermodynamic issue…and a multiple-faceted issue at that, as can be seen by the divergence of point of view expressed on this thread.
A universe without an information paradox is a universe where all information is conserved and preserved…everywhere. A careful evaluation of the significance of that discovery and of what has been learned about the universe over the past century suggests that the Schwarzschild metric may, in the final analysis support more of the data than any other of the possible GR universal geometries.
Max Tegmark is well known for his theoretical treatment of the “multiverse concept”, yet Max makes it clear that there are certain known facts about the power spectrum, and certain initial conditions necessary during the big bang, which could demand a universe of finite mass and marginally closed space (as also depicted in the NASA “shape of space” diagram based on the WMAP results).
Ned Wright makes it clear that the standard model requires a certain specific density at the big bang…for the universe to have developed as it is observed. The density formula does not admit infinite values. Without being too exhaustive, it is pretty obvious we live in a universe where matter, energy (and according to the recent findings about the lack of an information paradox), entropy in all its forms is likewise conserved in the universe at large.
To understand the expanding and accelerating universe within the Schwarzschild geometry, it is only necessary to accept the idea that the negative sets of solutions to GR and the “second side of the universe” of Schwarzschild’s mirror geometry may not be mathematically and geometrically vestigial…and to conceptualize appropriately.
Gravity is a direct reflection of the momentum of GR and, in toto, gravity via black holes prevents a universal heat death. For gravity to accomplish this act of conservation it is only necessary that the universe be found as a quantum entity, in GR scale and with a Schwarzschild “two-sphere” marginally closed geometry.
I used two thermodynamic analogies on this thread. In one the fallen egg is made into an omelet, which is then eaten by a chicken and formulated into its original biological equivalent…with very minor differences. This analogy follows a continuously existing quasi-static universe where continuously existing complex information is the method of thermodynamically “re-assembling” the universe and everything in it perpetually.
The second analogy was the merry go round, in which events are observed to pass in two different “arrows of time” or directions simultaneously…without the presence of any inverse process.
However, filming and observing the dropping of an egg in forward and reverse is also worth some discussion. The fact that the egg is shattered does nothing to change the history of the egg…that a chicken made it, or that at one time it was a perfectly formed egg. In the film it seems incongruous to watch the egg come together and leap off the floor at 1G acceleration, but to insist that such behaviour would require an inverse thermodynamic process is unjustified, just as watching folks move in two directions on a merry go round would not indicate inverse process.
This is not the place to discuss the effects of scale and complexity on our perception of an arrow of time. However, the fact that such things as scale and particulate complexity affect our perception of an “arrow of time” connects the process of observation itself with the kind of existence (universe) we perceive and our feeling that time does indeed have a single process.
I smiled when someone remarked that as they sat in their chair writing, they possessed an “inertial” frame of reference. Sitting in a chair on the surface of the Earth is a non-inertial frame of reference, just as non-inertial a frame as a spacecraft accelerating at a constant 1G toward the stars. The way we confidently observe and interpret the universe is undoubtedly far from the way things really are.
Einstein made a bold step when he asserted the equivalence of non-inertial and gravitational frames of reference. Just as people in the spacecraft would observe an outside universe filled with relativistic effects, we here on earth as we undergo a constant gravitational acceleration also observe a universe which is, from our frame of reference, a relativistic product of our accelerated frame of reference.
In fact, we and the particles of which we are made, existing on 4D particulate event horizon surfaces as we do, are, with the Earth itself, relativistic effects. Our sense of motion, change, and the “arrow of time” are only products of the way we observe the cosmos at our present coordinates.
Sam,
Consider that as gravity collapses mass, radiation expands energy. Mass is composed of energy, which is constantly breaking down old forms and creating new ones, so consider the analogy of a movie projector, where the energy of the projector light is constantly going from previous frames to succeeding ones, while these frames go from being in the future to being in the past. Just as the energy of sunlight goes from previous days to succeeding ones, as particular days go from being in the future to being in the past. Just as the process of life is going on to future generations, as it is shedding old ones, while the frames of these individual lives go from being in the future to being in the past.
The arrow of time for process/energy, ie. the hands of the clock, is from past to future, while the events being recorded, the face of the clock, go from future to past. So the arrow of time for mass/form collapses, as the arrow of time for energy/process expands.
Of course these two effects interact, so that open forms which absorb more energy than they lose are expanding, such as a growing child or the warming morning, and it is as they peak and start to lose energy that they contract.
I feel like the little boy who wonders why the emperor is nude.
If one takes the uncertainty principle as a primitive and the unsolvability of the many-body problem as a given, it seems to me that microscopic irreversibility follows as the source of a quantum second law and the source of the arrow of time.
So I googled (“microscopic irreversibility” and “uncertainty principle”) and got very few hits. One by Karl Gustafson, and another by Huaiyu Zhu, convinced me that the answer is not so simple…but nonetheless plausible.
Any comments?
Greg Egan wrote:
The “even simultaneously” is trickier; I think that depends on exactly where and when and in what coordinates you ask the question. The singularity inside a hole is actually spacelike — i.e. extended in a spatial direction, not in time — like the Big Bang or Big Crunch. When you fall into a BH, you don’t arrive at the singularity like you’re reaching the centre of the Earth; what you see is the space around you getting crushed to a point in two directions while expanding in a third, until at a certain time everything is destroyed. Conversely, everything that leaves a WH singularity could be said to emerge from it at the same time, in at least one set of coordinates.
Well, suppose we take the perspective of an external observer hovering at some short distance above the horizon. In “pure” classical GR terms, is it possible for him to see both a steady stream of test particles passing him as they fall into the horizon, and a steady stream of test particles passing him as they emerge out of it, with each stream individually looking to him just like what he might see if he were hovering outside a normal black hole or a normal white hole?
If this is possible, it would be interesting to then consider what the outgoing stream would look like to someone riding along with one of the ingoing test particles, and vice versa. For example, for the observer outside the horizon, is there any finite time-interval T such that, if he labels the ingoing particle which is passing him at a particular moment “A”, and then labels the outgoing particle that is passing him T later “B”, that the worldlines of A and B would actually have crossed somewhere inside the horizon? If so, then if A and B both had clocks attached which appeared to be ticking forward at the moment each one was passing the external observer, then given the way the radial dimension becomes a time dimension once inside the horizon, would that mean A would have seen B’s clock ticking backwards at the moment their worldlines crossed? Also, if the external observer sees particles passing in discrete intervals rather than continuously–say, 1 ingoing particle per second and 1 outgoing particle per second–then would an ingoing particle pass an infinite or finite number of outgoing particles before it reached a) the horizon and b) the singularity? Obviously I’m not asking you to do any detailed calculations here, just wondering aloud (and I suppose the answer may be that the original idea of an external observer seeing a constant stream of both ingoing and outgoing particles is impossible for some reason).
Jesse, I have a lot of trouble figuring out what would happen in your simulation with regard to Hawking radiation, because the treatments of Hawking radiation that I have to hand all do global calculations that follow waves all the way from “past null infinity” to “future null infinity”, and at some stages even invoke the collapse that forms the black hole. I’m not competent to answer the question “Is there some local process dictated by the spacetime geometry alone that determines what the Hawking radiation just outside the horizon must be doing, irrespective of all boundary conditions?”
It may be outdated, but in this page on Hawking radiation by John Baez he says that no one has managed to figure out a local description:
On the other hand, Steve Carlip elaborates on the “heuristic” description Baez talked about above on this page, I’m not sure if this contradicts what Baez said about the lack of a local description of Hawking radiation or not.
I was happy to see that the explanation about Entropy did not use the “mess image”.
The article about the Second Law in Wikipedia (your link; people will see that there are “sub-links” about the Second Law in Wikipedia because the stuff is not so obvious and free of debate) is quite good in English (yet a bit disorganised because of frequent changes) but quite old-fashioned.
I tried to modify the French “Second law” article in Wikipedia, to no avail.
Once again, it is very important that people remember, with your example of the egg and omelet, that the Second Law does not say that an omelet is more “messy” than an egg, but that it is more likely to make an omelet from an egg than an egg from an omelet; to say this is because the egg is more “ordered” would be WRONG.
A video to illustrate (funny) : http://www.youtube.com/watch?v=CyySAAc_KNI
Jesse wrote:
No, it’s not possible.
To be precise about what I mean: suppose your observer is hovering above the horizon, and at some instant in time we declare that he is at spacetime event E. There is then a certain collection of possible worldlines that (a) intersect the hole’s singularity, (b) pass through E, and (c) escape to infinity. Now, there is nothing in the geometry itself that orders those three events along each worldline, but according to the observer’s own personal arrow of time, either all the worldlines will be coming from the singularity, or all of them will be heading into it. If he himself could see a mixture of cases, then nobody would ever have made the statement “Nothing can escape from a classical black hole”!
However, if you’re allowing the universe at large to contain systems with contradictory arrows of time, then if the objects travelling past the observer are undergoing complex processes (rather than being featureless particles), then there’s nothing [except the logistical issues of keeping such systems isolated and running in their chosen direction] to prevent some of the objects from “thinking” that they’re falling in and others that they’re emerging. If their arrows of time are tied to the particular epoch when they are very far from the hole, certainly some of these objects passing through E can be far from the hole at very different times than others — that’s just a matter of choosing their velocities in such a way that the “earlier” ones can catch up with the “later” ones (imposing a single arrow of time on the description there for the sake of clarity).
I’ll have to think a lot more about the Hawking radiation issue; I’ve read both pages you linked to (and the section in Wald that deals with Hawking radiation), but I still can’t figure out if the process really is independent of all assumptions about distant boundary conditions.
Greg wrote:
No, it’s not possible.
To be precise about what I mean: suppose your observer is hovering above the horizon, and at some instant in time we declare that he is at spacetime event E. There is then a certain collection of possible worldlines that (a) intersect the hole’s singularity, (b) pass through E, and (c) escape to infinity. Now, there is nothing in the geometry itself that orders those three events along each worldline, but according to the observer’s own personal arrow of time, either all the worldlines will be coming from the singularity, or all of them will be heading into it. If he himself could see a mixture of cases, then nobody would ever have made the statement “Nothing can escape from a classical black hole”!
I guess I was thinking that the scenario I described could still be loosely consistent with the “nothing can escape” statement since it might still be that nothing that fell in could ever escape, the only particles that could escape would be ones spit out directly from the singularity (the time-reversal of trajectories falling in). But if it’s not possible, then I’m confused about what it is that prevents an observer outside a white hole from entering the horizon. You said earlier that a ship wouldn’t experience anti-gravity outside a white hole, that to maintain a constant distance from the horizon the ship would still have to thrust outward just like with a black hole…so if an observer was hovering above the horizon of a white hole and turned off the thrust, what would happen?
Jesse wrote:
I answered that in the last paragraph of #112: you’d always be moving towards the horizon, but never catching up with it (not in your proper time, or by anyone else’s coordinates). Trying to cross a horizon the “wrong way” is like trying to catch up with a pulse of light — and if you remember that the horizon itself consists of potential world lines of photons this becomes a bit less mysterious. Whereas crossing a horizon the “right way” is like allowing yourself to be overtaken by a pulse of light — all too easy. (But as you probably know, in SR if you have a head start and you accelerate constantly, you can avoid being overtaken even by light. Similarly, by constantly accelerating — applying thrust — you can avoid falling into a black hole.)
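A small numerical check of that last parenthetical claim, in units where c = 1, with an assumed head start x0 = 1 and proper acceleration a = 1 (so the rocket's worldline is x(t) = sqrt(1 + t^2) and the light pulse leaves x = 0 at t = 0):

```python
# Constant proper acceleration vs. a pursuing light pulse: the gap shrinks
# like 1/(2t) but never closes, so the pulse never overtakes the rocket.
import math

for t in (0, 1, 10, 100, 1000):
    rocket = math.sqrt(1 + t * t)  # uniformly accelerated worldline, c = a = x0 = 1
    light = t                      # light pulse emitted from x = 0 at t = 0
    print(f"t = {t:5d}: rocket at {rocket:10.4f}, light at {light:5d}, gap = {rocket - light:.6f}")

# Trying to cross a horizon "the wrong way" runs into the same obstruction:
# the horizon is generated by light rays you would have to overtake.
```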