Richard Feynman on Boltzmann Brains

The Boltzmann Brain paradox is an argument against the idea that the universe around us, with its incredibly low-entropy early conditions and consequential arrow of time, is simply a statistical fluctuation within some eternal system that spends most of its time in thermal equilibrium. You can get a universe like ours that way, but you’re overwhelmingly more likely to get just a single galaxy, or a single planet, or even just a single brain — so the statistical-fluctuation idea seems to be ruled out by experiment. (With potentially profound consequences.)
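
A back-of-the-envelope version of the argument, with the numbers left schematic: in equilibrium, the probability of fluctuating downward in entropy by ΔS scales as exp(-ΔS/k), so the relative likelihood of the two kinds of fluctuation is roughly

    P(whole low-entropy universe) / P(single brain) ≈ exp[-(ΔS_universe - ΔS_brain)/k],

and since the entropy deficit of an entire universe like ours exceeds that of a single brain by a stupendous number of Boltzmann constants, the single-brain fluctuations win overwhelmingly.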

The first invocation of an argument along these lines, as far as I know, came from Sir Arthur Eddington in 1931. But it’s a fairly straightforward argument, once you grant the assumptions (although there remain critics). So I’m sure that any number of people have thought along similar lines, without making a big deal about it.

One of those people, I just noticed, was Richard Feynman. At the end of his chapter on entropy in the Feynman Lectures on Physics, he ponders how to get an arrow of time in a universe governed by time-symmetric underlying laws.

So far as we know, all the fundamental laws of physics, such as Newton’s equations, are reversible. Then where does irreversibility come from? It comes from order going to disorder, but we do not understand this until we know the origin of the order. Why is it that the situations we find ourselves in every day are always out of equilibrium?

Feynman, following the same logic as Boltzmann, contemplates the possibility that we’re all just a statistical fluctuation.

One possible explanation is the following. Look again at our box of mixed white and black molecules. Now it is possible, if we wait long enough, by sheer, grossly improbable, but possible, accident, that the distribution of molecules gets to be mostly white on one side and mostly black on the other. After that, as time goes on and accidents continue, they get more mixed up again.

Thus one possible explanation of the high degree of order in the present-day world is that it is just a question of luck. Perhaps our universe happened to have had a fluctuation of some kind in the past, in which things got somewhat separated, and now they are running back together again. This kind of theory is not unsymmetrical, because we can ask what the separated gas looks like either a little in the future or a little in the past. In either case, we see a grey smear at the interface, because the molecules are mixing again. No matter which way we run time, the gas mixes. So this theory would say the irreversibility is just one of the accidents of life.
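
To get a feel for just how “grossly improbable” such an accident is, here is a minimal toy calculation (mine, not Feynman’s): scatter 2n molecules, n white and n black, at random with n on each side of the box, and ask how often the left half comes out all white.

    from math import comb

    # Toy version of the box of mixed molecules: 2n molecules, n white and n black,
    # with a uniformly random set of n of them ending up on the left side of the box.
    # Exactly one of the comb(2n, n) equally likely left-hand sets is "all white".
    for n in (10, 50, 100):
        p = 1 / comb(2 * n, n)
        print(f"n = {n:3d}   P(left half all white) = {p:.3e}")

For n = 100 this is already about 10^-59, and for a macroscopic box, with n of order 10^23, the exponent itself is of order 10^23. Possible, but “grossly improbable” indeed.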

But, of course, it doesn’t really suffice as an explanation for the real universe in which we live, for the same reasons that Eddington gave — the Boltzmann Brain argument.

We would like to argue that this is not the case. Suppose we do not look at the whole box at once, but only at a piece of the box. Then, at a certain moment, suppose we discover a certain amount of order. In this little piece, white and black are separate. What should we deduce about the condition in places where we have not yet looked? If we really believe that the order arose from complete disorder by a fluctuation, we must surely take the most likely fluctuation which could produce it, and the most likely condition is not that the rest of it has also become disentangled! Therefore, from the hypothesis that the world is a fluctuation, all of the predictions are that if we look at a part of the world we have never seen before, we will find it mixed up, and not like the piece we just looked at. If our order were due to a fluctuation, we would not expect order anywhere but where we have just noticed it.
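
Feynman’s logic here is easy to check in a toy model (again mine, not his): let every cell of the box be white or black independently with probability 1/2, which plays the role of the equilibrium ensemble; condition on a small observed corner being all white; and then look at the unobserved remainder.

    import random

    # Toy model: each cell of the box is white (True) or black (False) with probability 1/2.
    # Condition on a small observed corner being all white, then inspect the rest of the box.
    corner, rest, trials = 10, 500, 200_000
    hits = remainder_also_ordered = 0
    for _ in range(trials):
        if all(random.random() < 0.5 for _ in range(corner)):   # observed corner is all white
            hits += 1
            whites = sum(random.random() < 0.5 for _ in range(rest))
            if whites > 0.9 * rest:                              # remainder also mostly white?
                remainder_also_ordered += 1
    print(f"corner ordered in {hits} of {trials} trials; "
          f"remainder also ordered in {remainder_also_ordered} of those")

Conditioning on the piece already seen tells you nothing about the cells not yet looked at: they come out thoroughly mixed essentially every time. The most likely fluctuation that produces the observed order puts no order anywhere else, which is exactly Feynman’s point.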

After pointing out that we do, in fact, see order (low entropy) in new places all the time, he goes on to emphasize the cosmological origin of the Second Law and the arrow of time:

We therefore conclude that the universe is not a fluctuation, and that the order is a memory of conditions when things started. This is not to say that we understand the logic of it. For some reason, the universe at one time had a very low entropy for its energy content, and since then the entropy has increased. So that is the way toward the future. That is the origin of all irreversibility, that is what makes the processes of growth and decay, that makes us remember the past and not the future, remember the things which are closer to that moment in the history of the universe when the order was higher than now, and why we are not able to remember things where the disorder is higher than now, which we call the future.

And he closes by noting that our understanding of the early universe will have to improve before we can answer these questions.

This one-wayness is interrelated with the fact that the ratchet [a model irreversible system discussed earlier in the chapter] is part of the universe. It is part of the universe not only in the sense that it obeys the physical laws of the universe, but its one-way behavior is tied to the one-way behavior of the entire universe. It cannot be completely understood until the mystery of the beginnings of the history of the universe are reduced still further from speculation to scientific understanding.

We’re still working on that.

114 thoughts on “Richard Feynman on Boltzmann Brains”

  1. > Temperature (mean kinetic energy) may not be well defined for a system out of equilibrium, but entropy certainly is.
    > If entropy were only defined for a system in equilibrium, we would not need the 2nd law, because we would always have dS/dt = 0.

    That doesn’t follow at all. Entropy is just not a continuous function of time; it’s only defined for equilibrium states. This is the basis of the Lieb-Yngvason approach, for example.

  2. > it’s only defined for equilibrium states.

    So you are saying that e.g. entropy is not defined for a black hole, because a black hole (radiating into the universe) is not in equilibrium?

  3. Pingback: links for 2009-01-03 < kulturbrille:amanuensis

  4. Pingback: What is The Universe | DesiPundit

  5. Low Math, Meekly Interacting

    Greg, thank you very much for your clear reply to my question, it was very helpful. I hadn’t considered the possibility that one could use the paradox, perhaps by itself, to argue the universe must inevitably decay relatively soon to suppress BB’s. Wonder how one tests that empirically. I must confess I’m still very uneasy with the argument that the utility of statistical mechanics in our horizon provides adequate justification to extrapolate that approach to the megaverse. The whole thing makes me wonder yet again whether or not one should avoid the megaverse like the plague, even if it’s the right answer, because of all the necessary, but potentially untestable, assumptions.

  6. Pingback: tomate :: Gravitation and thermodynamics :: January :: 2008

  7. Pingback: tomate :: Gravitation and thermodynamics :: January :: 2009

  8. Pingback: what age did you stop beleaving in the religion you where raised with? - Page 2

  10. You can get a universe like ours that way, but you’re overwhelmingly more likely to get just a single galaxy, or a single planet, or even just a single brain — so the statistical-fluctuation idea seems to be ruled out by experiment. (With potentially profound consequences.)

    This argument seems incredibly weak to me. Think of it this way: I roll a 10-sided die a hundred times and write down the string of resulting digits. You examine the string of digits and exclaim: “Do you know how unlikely it is that you got that exact string? This can’t be a statistical fluctuation”. You can’t look at a situation after the fact and say it couldn’t have happened at random because it’s very unlikely.

    The other argument I’ve heard here – that statistical fluctuations would mean that you couldn’t trust memories of the past – seems more powerful, but still strangely unsatisfying. Apparently the reason that can’t be true is that it would be unpleasant to believe that the universe faked all the evidence of our past. Which is a sentiment that I agree with, but it doesn’t say much about whether the universe actually DID fake all the evidence of our past. I guess I’m willing to write that possibility off on practical grounds – there would be no point in doing science at all if it were true – but it’s still sort of disturbing.

  11. The low degree of entropy is not a consequence of the Boltzmann hypothesis (we are in a low entropy region), but a consequence of the nature of entropy itself. Entropy also generates order. Out of entropy new orders emerge. If I take a perfectly ordered and homogeneous quantity of milk, and mix it with a perfectly non-entropic coffee, both elements lose order, their molecules become entropic and chaotic, homogeneity is lost. But from there emerges a perfectly ordered Cappuccino. Things are not as simplistic as “everything flows from order to entropy”. Chaos creates new orders; what looks entropic at one scale may have a higher-level entity-like coherence. Leave a dead rat in the forest long enough, and the entropy will act on it, in such a way that its molecules will be reabsorbed by nature and reordered.
    I hereby decree that this theory be called “the Lospennato hypothesis”.
    Just Kidding.
    But not really.
    Regards to all!

  12. Fritz Lorenz Doerring

    Decay is obvious and visible, and experiential. Multiverse is not in our present sphere of experience. Might it ever become so? Wait: Be patient!
    It may be a long time. Fritz

  13. The Boltzmann brain paradox assumes a fixed (and rather naive) prior: that all regions of spacetime are independent. Bayesians would like us to consider the weighted probability over all priors. That’s the same as considering each observation to be the output of a Turing machine on a random input. From such a computational perspective, if the probability of one brain is B, the probability of two brains (BB) is also roughly B. Indeed, most of the time, Earth will sustain either many human brains or none. B can only obtain for ~100 years.

    The probability of many brains may still be lower than the probability of none, so a low-entropy initial state is still needed. The Big Bang seems to provide it in spades.

    -Carl

  14. There is a simple vacuum solution of Einstein’s field equations, obtained from the Minkowski universe by the replacement

    t -> sin t

    Geodesics in this universe (which I refer to as the “Periodic Minkowski universe”) are such that particles oscillate about a fixed position. This leads to recurrence via “Loschmidt’s velocity reversion”, though without the time reversal, i.e., the time parameter continues to increase monotonically. See my paper –

    http://arxiv.org/abs/0907.3165
    Title: Loschmidt’s paradox, entropy and the topology of spacetime

    If one does the mixing-of-two-gases experiment in the periodic Minkowski universe, one will see the gases initially mixing, and then deterministically separating. And this happens not as a probabilistic process, but due to the causal structure of the spacetime, as encoded in the line element of the periodic Minkowski universe. However, one can still consider probabilistic processes within this spacetime background – and vacuum fluctuations forming Boltzmann brains. The periodicity constraint would require that any Boltzmann brains created in such a universe would eventually be destroyed.
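
    Concretely, taking the replacement to act directly on the Minkowski line element gives

        ds^2 = -cos^2(t) dt^2 + dx^2 + dy^2 + dz^2,

    whose inertial worldlines, straight lines in the coordinates (sin t, x), are x(t) = x_0 + v sin t: positions oscillate about the fixed value x_0 while the coordinate t increases monotonically.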

Comments are closed.
