Boltzmann’s Anthropic Brain

A recent post of Jen-Luc’s reminded me of Huw Price and his work on temporal asymmetry. The problem of the arrow of time — why is the past different from the future, or equivalently, why was the entropy in the early universe so much smaller than it could have been? — has attracted physicists’ attention (although not as much as it might have) ever since Boltzmann explained the statistical origin of entropy over a hundred years ago. It’s a deceptively easy problem to state, and correspondingly difficult to address, largely because the difference between the past and the future is so deeply ingrained in our understanding of the world that it’s too easy to beg the question by somehow assuming temporal asymmetry in one’s purported explanation thereof. Price, an Australian philosopher of science, has made a specialty of uncovering the hidden assumptions in the work of numerous cosmologists on the problem. Boltzmann himself managed to avoid such pitfalls, proposing an origin for the arrow of time that did not secretly assume any sort of temporal asymmetry. He did, however, invoke the anthropic principle — probably one of the earliest examples of the use of anthropic reasoning to help explain a purportedly-finely-tuned feature of our observable universe. But Boltzmann’s anthropic explanation for the arrow of time does not, as it turns out, actually work, and it provides an interesting cautionary tale for modern physicists who are tempted to travel down that same road.

The Second Law of Thermodynamics — the entropy of a closed system will not spontaneously decrease — was understood well before Boltzmann. But it was a phenomenological statement about the behavior of gases, lacking a deeper interpretation in terms of the microscopic behavior of matter. That’s what Boltzmann provided. Pre-Boltzmann, entropy was thought of as a measure of the uselessness of arrangements of energy. If all of the gas in a certain box happens to be located in one half of the box, we can extract useful work from it by letting it leak into the other half — that’s low entropy. If the gas is already spread uniformly throughout the box, anything we could do to it would cost us energy — that’s high entropy. The Second Law tells us that the universe is winding down to a state of maximum uselessness.

Boltzmann suggested that the entropy was really counting the number of ways we could arrange the components of a system (atoms or whatever) so that it really didn’t matter. That is, the number of different microscopic states that were macroscopically indistinguishable. (If you’re worried that “indistinguishable” is in the eye of the beholder, you have every right to be, but that’s a separate puzzle.) There are far fewer ways for the molecules of air in a box to arrange themselves exclusively on one side than there are for the molecules to spread out throughout the entire volume; the entropy is therefore much higher in the latter case than the former. With this understanding, Boltzmann was able to “derive” the Second Law in a statistical sense — roughly, there are simply far more ways to be high-entropy than to be low-entropy, so it’s no surprise that low-entropy states will spontaneously evolve into high-entropy ones, but not vice-versa. (Promoting this sensible statement into a rigorous result is a lot harder than it looks, and debates about Boltzmann’s H-theorem continue merrily to this day.)
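To make the counting concrete, here is a minimal sketch (my own illustration, not a calculation from the post): treat the macrostate as the number of molecules in the left half of the box, and compute S = k_B ln W, where W is the number of microstates compatible with that macrostate.

```python
# Illustrative sketch: Boltzmann counting for N molecules that can each sit
# in the left or right half of a box.  The macrostate is "k molecules on the
# left"; its entropy is S = k_B * ln W, with W = C(N, k) compatible microstates.
from math import lgamma

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def ln_multiplicity(n, k):
    """Natural log of the number of ways to put k of n labeled molecules on the left."""
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

N = 10**6  # a deliberately tiny "gas"; a real box holds more like 10^25 molecules

S_all_on_one_side = K_B * ln_multiplicity(N, N)       # a single arrangement: W = 1
S_spread_out      = K_B * ln_multiplicity(N, N // 2)  # evenly spread through the box

print(S_all_on_one_side)  # 0.0: minimum entropy
print(S_spread_out)       # roughly k_B * N * ln 2, enormously larger
```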

Boltzmann’s understanding led to both a deep puzzle and an unexpected consequence. The microscopic definition explained why entropy would tend to increase, but didn’t offer any insight into why it was so low in the first place. Suddenly, a thermodynamics problem became a puzzle for cosmology: why did the early universe have such a low entropy? Over and over, physicists have proposed one or another argument for why a low-entropy initial condition is somehow “natural” at early times. Of course, the definition of “early” is “low-entropy”! That is, given a change in entropy from one end of time to the other, we would always define the direction of lower entropy to be the past, and higher entropy to be the future. (Another fascinating but separate issue — the process of “remembering” involves establishing correlations that inevitably increase the entropy, so the direction of time that we remember [and therefore label “the past”] is always the lower-entropy direction.) The real puzzle is why there is such a change — why are conditions at one end of time so dramatically different from those at the other? If we do not assume temporal asymmetry a priori, it is impossible in principle to answer this question by suggesting why a certain initial condition is “natural” — without temporal asymmetry, the same condition would be equally natural at late times. Nevertheless, very smart people make this mistake over and over, leading Price to emphasize what he calls the Double Standard Principle: any purportedly natural initial condition for the universe would be equally natural as a final condition.

The unexpected consequence of Boltzmann’s microscopic definition of entropy is that the Second Law is not iron-clad — it only holds statistically. In a box filled with uniformly-distributed air molecules, random motions will occasionally (although very rarely) bring them all to one side of the box. It is a traditional undergraduate physics problem to calculate how often this is likely to happen in a typical classroom-sized box; reassuringly, the air is likely to be nice and uniform for a period much much much longer than the age of the observable universe.
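That estimate can be sketched in a few lines; the molecule count and the "reshuffling" time below are rough assumptions of my own, chosen only to convey the scale.

```python
# Back-of-the-envelope version of the undergraduate problem (illustrative
# numbers only).  The chance that all N molecules are in one half of the box
# at a given instant is (1/2)^N; if the gas reshuffles itself roughly once per
# collision time tau, the expected wait is about tau * 2^N.  Work in log10
# to avoid overflow.
import math

N = 1e25               # rough molecule count for a classroom-sized box (assumed)
tau = 1e-9             # assumed reshuffling time, ~1 nanosecond
age_universe_s = 4e17  # age of the observable universe in seconds, roughly

log10_prob = -N * math.log10(2)            # log10 of (1/2)^N
log10_wait = math.log10(tau) - log10_prob  # log10 of the expected wait, in seconds

print(f"log10 P(all on one side)  ~ {log10_prob:.3g}")
print(f"log10 expected wait [s]   ~ {log10_wait:.3g}")
print(f"log10 age of universe [s] ~ {math.log10(age_universe_s):.3g}")
# The expected wait dwarfs the age of the observable universe beyond comprehension.
```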

Faced with the deep puzzle of why the early universe had a low entropy, Boltzmann hit on the bright idea of taking advantage of the statistical nature of the Second Law. Instead of a box of gas, think of the whole universe. Imagine that it is in thermal equilibrium, the state in which the entropy is as large as possible. By construction the entropy can’t possibly increase, but it will tend to fluctuate, every so often diminishing just a bit and then returning to its maximum. We can even calculate how likely the fluctuations are; larger downward fluctuations of the entropy are much (exponentially) less likely than smaller ones. But eventually every kind of fluctuation will happen.

[Figure: Entropy Fluctuations — entropy fluctuating around its equilibrium maximum, with points A, B, and C marked along the curve]

You can see where this is going: maybe our universe is in the midst of a fluctuation away from its typical state of equilibrium. The low entropy of the early universe, in other words, might just be a statistical accident, the kind of thing that happens every now and then. On the diagram, we are imagining that we live either at point A or point B, in the midst of the entropy evolving between a small value and its maximum. It’s worth emphasizing that A and B are utterly indistinguishable. People living in A would call the direction to the left on the diagram “the past,” since that’s the region of lower entropy; people living at B, meanwhile, would call the direction to the right “the past.”

During the overwhelming majority of such a universe’s history, there is no entropy gradient at all — everything just sits there in a tranquil equilibrium. So why should we find ourselves living in those extremely rare bits where things are evolving through a fluctuation? The same reason why we find ourselves living in a relatively pleasant planetary atmosphere, rather than the forbiddingly dilute cold of intergalactic space, even though there’s much more of the latter than the former — because that’s where we can live. Here Boltzmann makes an unambiguously anthropic move. There exists, he posits, a much bigger universe than we can see; a multiverse, if you will, although it extends through time rather than in pockets scattered through space. Much of that universe is inhospitable to life, in a very basic way that doesn’t depend on the neutron-proton mass difference or other minutiae of particle physics. Nothing worthy of being called “life” can possibly exist in thermal equilibrium, where conditions are thoroughly static and boring. Life requires motion and evolution, riding the wave of increasing entropy. But, Boltzmann reasons, because of occasional fluctuations there will always be some points in time where the entropy is temporarily evolving (there is an entropy gradient), allowing for the existence of life — we can live there, and that’s what matters.

Here is where, like it or not, we have to think carefully about what anthropic reasoning can and cannot buy us. On the one hand, Boltzmann’s fluctuations of entropy around equilibrium allow for the existence of dynamical regions, where the entropy is (just by chance) in the midst of evolving to or from a low-entropy minimum. And we could certainly live in one of those regions — nothing problematic about that. The fact that we can’t directly see the far past (before the big bang) or the far future in such a scenario seems to me to be quite beside the point. There is almost certainly a lot of universe out there that we can’t see; light moves at a finite speed, and the surface of last scattering is opaque, so there is literally a screen around us past which we can’t see. Maybe all of the unobserved universe is just like the observed bit, but maybe not; it would seem the height of hubris to assume that everything we don’t see must be just like what we do. Boltzmann’s goal is perfectly reasonable: to describe a history of the universe on ultra-large scales that is on the one hand perfectly natural and not finely-tuned, and on the other features patches that look just like what we see.

But, having taken a bite of the apple, we have no choice but to swallow. If the only thing that one’s multiverse does is to allow for regions that resemble our observed universe, we haven’t accomplished anything; it would have been just as sensible to simply posit that our universe looks the way it does, and that’s the end of it. We haven’t truly explained any of the features we observed, simply provided a context in which they can exist; but it would have been just as acceptable to say “that’s the way it is” and stop there. If the anthropic move is to be meaningful, we have to go further, and explain why within this ensemble it makes sense to observe the conditions we do. In other words, we have to make some conditional predictions: given that our observable universe exhibits property X (like “substantial entropy gradient”), what other properties Y should we expect to measure, given the characteristics of the ensemble as a whole?

And this is where Boltzmann’s program crashes and burns. (In a way that is ominous for similar attempts to understand the cosmological constant, but that’s for another day.) Let’s posit that the universe is typically in thermal equilibrium, with occasional fluctuations down to low-entropy states, and that we live in the midst of one of those fluctuations because that’s the only place hospitable to life. What follows?

The most basic problem has been colorfully labeled “Boltzmann’s Brain” by Albrecht and Sorbo. Remember that the low-entropy fluctuations we are talking about are incredibly rare, and the lower the entropy goes, the rarer they are. If it almost never happens that the air molecules in a room all randomly zip to one half, it is just as unlikely (although still inevitable, given enough time) that, given that they did end up in half, they will continue on to collect in one quarter of the room. On the diagram above, points like C are overwhelmingly more common than points like A or B. So if we are explaining our low-entropy universe by appealing to the anthropic criterion that it must be possible for intelligent life to exist, quite a strong prediction follows: we should find ourselves in the minimum possible entropy fluctuation consistent with life’s existence.

And that minimum fluctuation would be a “Boltzmann Brain.” Out of the background thermal equilibrium, a fluctuation randomly appears that collects some degrees of freedom into the form of a conscious brain, with just enough sensory apparatus to look around and say “Hey! I exist!”, before dissolving back into the equilibrated ooze.

You might object that such a fluctuation is very rare, and indeed it is. But so would be a fluctuation into our whole universe — in fact, quite a bit more rare. The momentary decrease in entropy required to produce such a brain is fantastically less than that required to make our whole universe. Within the infinite ensemble envisioned by Boltzmann, the overwhelming majority of brains will find themselves disembodied and alone, not happily ensconced in a warm and welcoming universe filled with other souls. (You know, like ours.)
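To put rough numbers on that comparison, one can weigh the Boltzmann suppression factors exp(-ΔS) against each other directly in log space; the two entropy dips below are crude order-of-magnitude placeholders of my own, not quantities from the post.

```python
# Schematic comparison (illustrative numbers only): in Boltzmann's picture the
# probability of a downward entropy fluctuation of size Delta_S scales roughly
# like exp(-Delta_S), with entropy measured in units of k_B.  Compare a
# brain-sized dip with a whole-universe-sized dip.
import math

delta_S_brain    = 1e27  # placeholder: of order the particle count of a brain
delta_S_universe = 1e88  # placeholder: of order the photon entropy of the observable universe

# log10 of the ratio P(brain-sized fluctuation) / P(universe-sized fluctuation)
log10_ratio = (delta_S_universe - delta_S_brain) / math.log(10)

print(f"log10 [P(brain) / P(universe)] ~ {log10_ratio:.3g}")
# ~ 4e87: isolated brains are fantastically more probable than whole universes.
```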

This is the general thrust of the argument with which many anthropic claims run into trouble. Our observed universe has something like a hundred billion galaxies with something like a hundred billion stars each. That’s an extremely expansive and profligate universe, if its features are constrained solely by the demand that we exist. Very roughly speaking, anthropic arguments would be more persuasive if our universe were minimally constructed to allow for our existence; e.g. if the vacuum energy were small enough to allow for a single galaxy to arise out of a really rare density fluctuation. Instead we have a hundred billion such galaxies, not to mention all of those outside our Hubble radius — an embarrassment of riches, really.

But, returning to Boltzmann, it gets worse, in an interesting and profound way. Let’s put aside the Brain argument for a moment, and insist for some reason that our universe did fluctuate somehow into the kind of state in which we currently find ourselves. That is, here we are, with all of our knowledge of the past, and our observations indicating a certain history of the observable cosmos. But, to be fair, we don’t have detailed knowledge of the microstate corresponding to this universe — the position and momentum of each and every particle within our past light cone. Rather, we know some gross features of the macrostate, in which individual atoms can be safely re-arranged without our noticing anything.

Now we can ask: assuming that we got to this macrostate via some fluctuation out of thermal equilibrium, what kind of trajectory is likely to have gotten us here? Sure, we think that the universe was smaller and smoother in the past, galaxies evolved gradually from tiny density perturbations, etc. But what we actually have access to are the positions and momenta of the photons that are currently reaching our telescopes. And the fact is, given all of the possible past histories of the universe consistent with those photons reaching us, in the vast majority of them the impression that we are observing an even-lower-entropy past is an accident. If all pasts consistent with our current macrostate are equally likely, there are many more in which the past was a chaotic mess, in which a vast conspiracy gave rise to our false impression that the past was orderly. In other words, if we ask “What kind of early universe tends to naturally evolve into what we see?”, the answer is the ordinary smooth and low-entropy Big Bang. But here we are asking “What do most of the states that could possibly evolve into our current universe look like?”, and the answer there is a chaotic high-entropy mess.
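The contrast between those two questions can be seen in a toy model. The sketch below is my own illustration, not a calculation from the post: non-interacting particles bounce reversibly in a one-dimensional box, we draw a random microstate consistent with a low-entropy macrostate ("everything in the leftmost fifth of the box"), and the coarse-grained entropy turns out to rise toward the future and toward the past alike.

```python
# A reversible toy model: N non-interacting particles bounce elastically in a
# 1-D box.  Fix the coarse macrostate "all particles in the leftmost 20% of
# the box", draw a random microstate consistent with it (random positions
# there, random velocities), and follow the coarse-grained entropy both
# FORWARD and BACKWARD in time.  For a typical such microstate the entropy
# rises in both directions: the generic "past" of this macrostate is high-entropy.
import numpy as np

rng = np.random.default_rng(0)
N, n_cells = 100_000, 20

x0 = rng.uniform(0.0, 0.2, N)   # microstate consistent with the low-entropy macrostate
v  = rng.normal(0.0, 1.0, N)    # thermal-ish velocities, symmetric in +/-

def positions(t):
    """Free flight with reflecting walls at 0 and 1 (exact and reversible)."""
    y = (x0 + v * t) % 2.0
    return np.where(y > 1.0, 2.0 - y, y)

def coarse_entropy(x):
    """Shannon entropy of the coarse-grained occupation histogram."""
    counts, _ = np.histogram(x, bins=n_cells, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

for t in [-0.8, -0.4, -0.1, 0.0, 0.1, 0.4, 0.8]:
    print(f"t = {t:+.1f}   S = {coarse_entropy(positions(t)):.3f}")
# S is minimal at t = 0 and grows toward both t < 0 and t > 0.
```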

Of course, nobody in their right minds believes that we really did pop out of a chaotic mess into a finely-tuned state with false memories about the Big Bang (although young-Earth creationists do believe that things were arranged by God to trick us into thinking that the universe is much older than it really is, which seems about as plausible). We assume instead that our apparent memories are basically reliable, which is a necessary assumption to make sensible statements of any form. Boltzmann’s scenario just doesn’t quite fit together, unfortunately.

Price’s conclusion from all this (pdf) is that we should take seriously the Gold universe, in which there is a low-entropy future collapsing state that mirrors our low-entropy Big Bang in the past. It’s an uncomfortable answer, as nobody knows any reason why there should be low-entropy boundary conditions in both the past and the future, which would involve an absurd amount of fine-tuning of our particular microstate at every instant of time. (Not to mention that the universe shows no sign of wanting to recollapse.) The loophole that Price and many other people (quite understandably) overlook is that the Big Bang need not be the true beginning of the universe. If the Bang was a localized baby universe in a larger background spacetime, as Jennie Chen and I have suggested (paper here), we can comply with the Double Standard Principle by having high-entropy conditions in both the far past and the far future. That doesn’t mean that we have completely avoided the problem that doomed Boltzmann’s idea; it is still necessary to show that baby universes would most often look like what we see around us, rather than (for example) much smaller spaces with only one galaxy each. And this whole “baby universe” idea is, shall we say, a mite speculative. But explaining the difference in entropy between the past and future is at least as fundamental as explaining the horizon and flatness problems with which cosmologists are so enamored, if not more so. If we’re going to presume to talk sensibly and scientifically about the entire history of the universe, we have to take Boltzmann’s legacy seriously.

102 thoughts on “Boltzmann’s Anthropic Brain”

  1. The anthropic coincidences mostly occur over a fine “range” of potential, and nobody really understands what this range is for if the forces are “set” as illustrated in the physics lecture on the anthropic principle at this website:

    http://abyss.uoregon.edu/~js/images/instability.gif

    In other words, the coincidences should be set up exactly balanced, as idealistically depicted in the illustration, where “any” perturbation causes the runaway effect (like runaway expansion). This is not true, however, if something causes the pencil to lean back to the left after something causes it to lean to the right, where anthropic selection does indeed still occur at an exact balance point.

    As I understand this, the “range of potential” increases in one direction only as the universe “ages”, so anthropic selection remains fixed between whatever relevant extreme runaway tendencies are involved in the given coincidence, but that ideal location slides progressively forward with time in order to remain between a uni-directionally increasing range of potential.

  2. mqk writes:

    I am under the (perhaps mistaken) impression that given a sufficiently fine-grained view, entropy is actually a conserved quantity. Certainly this is true for collisionless systems, for which Liouville’s theorem ensures that the phase space volume a system occupies is conserved.

    We don’t need to assume the system is “collisionless” – Liouville’s theorem applies to any classical system described by a Hamiltonian on a phase space that’s a symplectic manifold.

    (If you don’t know what all that jargonesque fine print means, just pretend I said “any classical system”. The fine print is designed to rule out certain funky classical systems you might rather not know about.)

    Does a quantum description somehow save the day and allow entropy to increase?

    Nope: the “entropy conservation” theorem you mention has an equally general quantum version. Whenever the space of states is a Hilbert space and time evolution is given by a unitary operator, the entropy -tr(D ln D) of a density matrix D is conserved.

    So, as you hint, to get entropy to increase, most people switch to a coarse-grained definition of entropy, and make certain assumptions on the initial state of the system. This is what Ludwig Boltzmann did in his marvelous H-theorem!

    But, the H-theorem is not the last word on the subject. We need to see if we can justify the assumptions of this theorem – in particular, the Stosszahlansatz, or “assumption of molecular chaos”. And that’s where the real fun starts….

    So, the arrow of time is a subtle thing even before we take gravity into account. But in our universe, gravity makes it a lot more subtle.
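    (Added for concreteness: a small numerical check of the entropy-conservation statement above. The matrices are random placeholders, and the check is a sketch of my own, not part of the original comment.)

    ```python
    # Numerical check: the von Neumann entropy S = -tr(D ln D) of a density
    # matrix D is unchanged by unitary evolution D -> U D U^dagger.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 6

    def random_density_matrix(dim):
        A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
        D = A @ A.conj().T              # positive semidefinite
        return D / np.trace(D).real     # normalize to unit trace

    def von_neumann_entropy(D):
        evals = np.linalg.eigvalsh(D)   # D is Hermitian, so eigvalsh applies
        evals = evals[evals > 1e-12]
        return float(-np.sum(evals * np.log(evals)))

    D = random_density_matrix(n)
    U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))  # random unitary

    print(von_neumann_entropy(D))
    print(von_neumann_entropy(U @ D @ U.conj().T))  # same value, up to rounding error
    ```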

  3. Island, Dirac’s Large Number Hypothesis is fascinating. Eddington used a similar basis for his Fundamental Theory. A nearly infinite c can start a Big Bang without the usual need for negative pressure.

  4. Dirac’s large numbers hypothesis is also resolved when you increase both the matter density and negative pressure in disproportionally equal “see-saw” fashion, per the physics that I previously gave here:

    http://blogs.discovermagazine.com/cosmicvariance/2006/08/01/boltzmanns-anthropic-brain#comment-109844

    So the volume of the vacuum is currently about 120 orders of magnitude greater than one particle in every volume equal to the Compton wavelength of the particle cubed. As Dirac suspected, this means that the size of the universe is directly proportional to the number of particles in it, because in Einstein’s static model if you condense vacuum energy, then you necessarily increase negative energy and pressure, as well, by way of rarefaction, so the vacuum necessarily expands during pair production.

    The off-set increase in both mass-energy and negative pressure means that an expanding universe is not unstable, nor will it “run-away”, because Dr. Einstein’s equation…

    g=(4pi/3)G(rho(matter)-2rho(vacuum))R=0

    … works just fine with vacuum expansion, while at the same time repairing the gravitational flaw in Dirac’s Large Numbers Hypothesis when both particles in the pair leave real holes in the vacuum.

    The gravitational acceleration is zero if the density of the static vacuum is -0.5*rho(matter) because,

    rho+3P/c^2=0

    If you condense enough of this vacuum energy over a finite region of space to achieve positive matter-density, then the local increase in mass-energy is immediately offset by the increase in negative pressure that occurs via the rarefying effect that real particle creation has on the vacuum.

    That means that created particles have positive mass, regardless of sign, and this resolves a very important failure of particle theory, because it explains how and why there is no contradiction with the asymmetry that appears to exist between matter and antimatter. This is the reason that we don’t observe nearly as much antimatter as particle theory predicts exists, because the energy that comprises the observed antimatter particles normally exists in a more rarefied state than observed antiparticles do.

    Of course, this requires that we dump a lot of assumptions that are commonly taken for granted about a quantum gravity theory that doesn’t even exist, contrary to the opinion of your infinity worshiping buddy… 😉

    In QG we don’t like to think of the universe as something fixed and objective.

    Give me a freaking break!

  5. Correction:

    The off-set increase in both mass-energy and negative pressure

    Should have read:

    The off-set increase in both the matter density, (positive gravitational curvature), and negative pressure…

    Mass-energy remains constant, of course.

  6. Give me a freaking break!

    I am quite keen to hear your explanation of how one can have both (a) a theory that operates with a single objective external universe roughly independent of one’s observations and (b) a theory with well-defined quantum spacetime numbers whose corresponding observables cannot all be tied to that of a universal observer. Moreover, the theory must of course be capable of recovering both GR and a rigorous formulation of the standard model.

    It would indeed be foolish to deny the existence of the elephant, but that is not what I meant. You have your part of the elephant here and I have mine here and there is no way that either of these is the whole elephant.

  7. I’ll make a deal with you, Kea: since I personally cannot do the following, maybe you can, and that will settle this once and for all for me. YOU have everything to gain (in a really big way), and I have nothing to gain or lose, except this curse:

    In Einstein’s static model, G=0 when there is no matter.

    He initially added the cosmological constant to balance things out, because we do have matter, but you have to condense the matter density from the zero pressure metric in order to get rho>0 from Einstein’s matter-less spacetime structure, and in doing so the pressure of the vacuum necessarily becomes less than zero, P<0. Einstein’s static model has never been proven wrong. He simply didn’t know something that quite obviously does justify his argument that the universe is finite, even though it is expanding, and it will not run away, so there was no logical reason for him to abandon this model, given what is now obvious and factually verified information about the particle potential of the quantum vacuum.

    Dirac’s Hole Theory works (without need for a reinterpretation of the negative energy states) to hold this model stable and “flat” as the universe expands, because particle creation becomes the mechanism for expansion when the normal distribution of negative energy does not contribute to particle pair creation, which can only occur in this vacuum by way of the condensation of negative pressure energy into isolated departures from the normal background energy density.

    So it is my strong suspicion that the Dirac equation will work in this background to unify GR and QM in the exact same manner that he did SR and QM.

    Write down the basis of wave functions in this background, including an expansion of the field in corresponding creation and annihilation operators – compute the stress-energy tensor in that background – quantitatively describe the vacua – and then work out the matrix elements of the stress-energy tensor between the vacuum and the one-particle states and see what happens.

  8. Layman scratches head again and again…

    If the conditions are found to be inherent in high energy physics(QGP) then how would such a condition run counter-intuitively to what curvature had been implied?

    A “state of equilibrium” in a highly curved world?

    Georgi Dvali:

    The cosmic acceleration of the universe indicates that the laws of General Relativity get modified not only at very short but also at very large distances, Dvali says. It is this modification, and not dark energy, that is responsible for the accelerated expansion of the universe.

    A “determinism” at planck scale? 🙂

  9. Hi Island. I’m not really sure what the contention is here. You bring up an interesting point.

    In Einstein’s static model, G=0 when there is no matter.

    Rather, G=0 when there is no matter density, no mass generation, just as G->0 in the new class of massless spin foam QFTs. The use of a so-called (LQG) cosmological constant to ‘perturb’ about this point doesn’t mean taking Lambda literally in the classical theory. On the contrary, this sort of perturbation appears to destroy the validity of GR at large scales.

    In the Cornell thread you recently pointed out that

    In the static state, pressure is proportional to -rho …

    which, again, is a kind of topological condition in the spin foam constructions.

    Dirac’s Hole Theory works to hold this model stable and “flat” …

    It turns out that we need a better physical picture than this to get everything to work, so this is where we begin to disagree. ‘Flatness’ should be a direct result of a Machian principle, which of course was never implemented in GR. But any naive attempt to consider the mutual dependence of local acceleration and ‘distant’ stuff runs into a problem analogous to that of instantaneous action in Newtonian gravity, so something has to give.

  10. A determinism at planck scale?

    Determinism here means that the Planck scale itself goes to zero as hbar does. L^2 = G(hbar)/c^3 which means that L goes like c^(-2) in Louise’s picture. In Padmanabhan’s thermodynamic gravity the Euler equation sees horizon area entropy in that S = (A/4)L^2 = (A/4)c^(-4).

  11. I realise that Sean is travelling, but it would be really good if somebody would weed out all the obvious psychoceramics in this thread — I nearly missed John Baez’ comment because of it. Worse, it killed off the serious discussion that was going on here.

  12. Sean – thanks for the post. I could follow quite a bit of it! and the best bit for me was your explanation of why arguing from anthropic principles isn’t as simple as it at first seems when you hear of the idea (I mean, we’re here, aren’t we?)

    now, as for following the comments …

    cheers

  13. Wow – I hadn’t been paying attention to this thread. Since I’m not sure if Sean is able to look at it currently, I’m going to ask commenters myself to please stay on topic. I know some of you have your own ideas about a number of different areas of physics, but the best place for discussing those is on your own blogs, if you have them. If you don’t, then you could start one. Please don’t get into them here. Thanks.

  14. Maybe you have the Boltzmann brain the wrong way round. It’s not “Hey! I exist!”, but “Hey! How come I never existed?”, when the only real probability shows that I did.

  15. I think all the posts have been on topic. Unless we’re only allowed to talk about Boltzmann and Price, in which case I apologise.

  16. No need at all to apologize Kea. I’m referring to the repeated discussions of people’s own, personal “theories”. They come up on many of our posts and we usually try to stop them derailing the discussion. Cheers,

  17. John Baez:

    So, the arrow of time is a subtle thing even before we take gravity into account. But in our universe, gravity makes it a lot more subtle.

    Was the universe ever really flat? Sorry for layman generalizations, with increase in curvature, it being quantum dynamical views seems consistent with and up to a point?

    While these seem like “simple generalized deductions” I think one must want to have current “experimental development” carrying the thought processes along some road currently being explored. 🙂

    While one talks about “reductionistic processes” we are still referring to the universe as it was developing along the microseconds, “still” within the capability of the universe in expression.

    The Pierre Auger Observatory in Malargue, Argentina, is a multinational collaboration of physicists trying to detect powerful cosmic rays from outer space. The energy of the particles here is above 10^19 eV, or over a million times more powerful than the most energetic particles in any human-made accelerator. No-one knows where these rays come from.

    Not pet theories. Strominger’s theory(?) perhaps along with the basis of particle creation “pointing” the way towards entropic complexity and expansion, in the resulting particle showers? 🙂

    Does this all fit?

  18. okay,

    If you allow “monte carlo” methods, then I suggest the valuation of “Boltzmann’s brain” held a time of “illumination” and supersymmetrically explained the universe in expression?

    “Riemann hypothesis” has to have some (phenom)validation process? 🙂

    So ya, here is one way to occupy your mind while explaining supersymmetry? 🙂

  19. Pingback: Rapped on the Head by Creationists | Cosmic Variance

  20. The foregoing seems to assume that the laws of physics are time-reversible. It has always seemed to me that both quantum mechanics (at least in the Copenhagen interpretation) and general relativity are not. Wavefunction collapse can’t be time-reversible. And what about black holes? Matter can fall in but can’t escape? If GR were time-reversible then under suitable initial conditions matter could be ejected from a black hole (and I’m not talking about Hawking radiation).

    (my personal opinion is that wavefunction collapse doesn’t occur and black holes don’t exist, but I only studied physics up to 2nd year of university, so please correct any misunderstandings I’ve made)

  21. Hugh, general relativity is definitely time-reversible, although specific solutions might not be. The time-reversed version of a black hole is a white hole, which is a perfectly good solution to Einstein’s equation. We don’t see white holes in the real world, but that’s precisely because of entropy considerations.

    I think the same is true for quantum mechanics, but will readily admit that I don’t understand the details and might be wrong. Wavefunction collapse ala the Copenhagen interpretation is definitely not reversible, although evolution according to the Schrodinger equation definitely is. My suspicion is that a more complete understanding will be able to derive the apparent collapse of the wavefunction from ordinary Schrodinger evolution plus thermodynamic considerations, but I don’t think this is well understood right now.

  22. Time-reversed scenarios/equations must take into consideration some fundamental processes?

    What happens for a reversed black-hole process, a white hole? It could not occur without Dark Matter becoming visible; Dark Matter would actually have to be the radiative source of visible matter.

    Einstein field equations have an expression for Dark Matter to convert to light matter; entropy in a time-reversed universe would insist that particle collisions become more feeble and less energetic, producing less visible light from atomic interactions, so light would tend to be fading to grey!

    The further one travels back along a “time-reversed” arrow, the more one becomes entangled into a “ONE-PARTICLE” quark soup ?

    The initial state may be comparable to that of a Bose-Einstein-Condensate singularity ?..a “one-particle” fluctuation would really be an “all-particle” fluctuation!

  23. Pingback: Coast to Coast | Cosmic Variance
