CV readers, ahead of the curve as usual, are well aware of the notion of Boltzmann’s Brains — see e.g. here, here, and even the original paper here. Now Dennis Overbye has brought the idea to the hoi polloi by way of the New York Times. It’s a good article, but I wanted to emphasize something Dennis says quite explicitly, which (from experience) I know people tend to jump right past in their enthusiasm:
Nobody in the field believes that this is the way things really work, however.
The point about Boltzmann’s Brains is not that they are a fascinating prediction of an exciting new picture of the multiverse. On the contrary, the point is that they constitute a reductio ad absurdum that is meant to show the silliness of a certain kind of cosmology — one in which the low-entropy universe we see is a statistical fluctuation around an equilibrium state of maximal entropy. According to this argument, in such a universe you would see every kind of statistical fluctuation, and small fluctuations in entropy would be enormously more frequent than large fluctuations. Our universe is a very large fluctuation (see previous post!) but a single brain would only require a relatively small fluctuation. In the set of all such fluctuations, some brains would be embedded in universes like ours, but an enormously larger number would be all by themselves. This theory, therefore, predicts that a typical conscious observer is overwhelmingly likely to be such a brain. But we (or at least I, not sure about you) are not individual Boltzmann brains. So the prediction has been falsified, and that kind of theory is not true. (For arguments along these lines, see papers by Dyson, Kleban, and Susskind, or Albrecht and Sorbo.)
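One way to put a number on “enormously more frequent” (a rough, standard estimate of my own, not something taken from the papers cited above): in equilibrium, the probability of finding the system fluctuated down into a macrostate of entropy S, relative to the maximum value S_max, scales as

$latex P(S)~\propto~e^{(S~-~S_{\rm max})/k_B}$

so the relative frequency of brain-sized versus universe-sized fluctuations is of order $latex e^{(S_{\rm brain}~-~S_{\rm universe})/k_B}$, a fantastically large number, because the entropy dip needed to assemble a lone brain is minuscule compared with the dip needed to assemble an entire low-entropy universe.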
I tend to find this kind of argument fairly persuasive. But the bit about “a typical observer” does raise red flags. In fact, folks like Hartle and Srednicki have explicitly argued that the assumption of our own “typicality” is completely unwarranted. Imagine, they say, two theories of life in the universe, which are basically indistinguishable, except that in one theory there is no life on Jupiter and in the other theory the Jovian atmosphere is inhabited by six trillion intelligent floating Saganite organisms.
In the second theory, a “typical” intelligent observer in the Solar System is a Jovian, not a human. But I’m a human. Have we therefore ruled out this theory? Pretty clearly not. Hartle and Srednicki conclude that it’s incorrect to imagine that we are necessarily typical; we are who we observe ourselves to be, and any theory of the universe that is compatible with observers like ourselves is just as good as any other such theory.
This is an interesting perspective, and the argument is ongoing. But it’s important to recognize that there is a much stronger argument against the idea that Boltzmann’s Brains were originally invented to counter — that our universe is just a statistical fluctuation around an equilibrium background. We might call this the “Boltzmann’s Universe” argument.
Here’s how it goes. Forget that we are “typical” or any such thing. Take for granted that we are exactly who we are — in other words, that the macrostate of the universe is exactly what it appears to be, with all the stars and galaxies etc. By the “macrostate of the universe,” we mean everything we can observe about it, but not the precise position and momentum of every atom and photon. Now, you might be tempted to think that you reliably know something about the past history of our local universe — your first kiss, the French Revolution, the formation of the cosmic microwave background, etc. But you don’t really know those things — you reconstruct them from your records and memories right here and now, using some basic rules of thumb and your belief in certain laws of physics.
The point is that, within this hypothetical thermal equilibrium universe from which we are purportedly a fluctuation, there are many fluctuations that reach exactly this macrostate — one with a hundred billion galaxies, a Solar System just like ours, and a person just like you with exactly the memories you have. And in the hugely overwhelming majority of them, all of your memories and reconstructions of the past are false. In almost every fluctuation that creates universes like the ones we see, both the past and the future have a higher entropy than the present — downward fluctuations in entropy are unlikely, and the larger the fluctuation the more unlikely it is, so the vast majority of fluctuations to any particular low-entropy configuration never go lower than that.
Therefore, this hypothesis — that our universe, complete with all of our records and memories, is a thermal fluctuation around a thermal equilibrium state — makes a very strong prediction: that our past is nothing like what we reconstruct it to be, but rather that all of our memories and records are simply statistical flukes created by an unlikely conspiracy of random motions. In this view, the photograph you see before you used to be yellow and wrinkled, and before that was just a dispersed collection of dust, before miraculously forming itself out of the chaos.
Note that this scenario makes no assumptions about our typicality — it assumes, to the contrary, that we are exactly who we (presently) perceive ourselves to be, no more and no less. But in this scenario, we have absolutely no right to trust any of our memories or reconstructions of the past; they are all just a mirage. And the assumptions that we make to derive that conclusion are exactly the assumptions we really do make to do conventional statistical mechanics! Boltzmann taught us long ago that it’s possible for heat to flow from cold objects to hot ones, or for cream to spontaneously segregate itself away from a surrounding cup of coffee — it’s just very unlikely. But when we say “unlikely” we have in mind some measure on the space of possibilities. And it’s exactly that assumed measure that would lead us to conclude, in this crazy fluctuation-world, that all of our notions of the past are chimeric.
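To get a feel for what “unlikely” means under that assumed measure, here is a minimal sketch (a toy calculation I am adding for illustration; the particle numbers are invented): if each of N molecules is equally likely to be found in either half of a box, the chance of a fluctuation putting all of them on one side is 2^(-N).

```python
from math import log10

# Probability that all N molecules fluctuate into one half of the box,
# assuming every left/right assignment (microstate) is equally likely.
for N in [10, 100, 10**4, 10**23]:
    log10_p = -N * log10(2)
    print(f"N = {N:>24}   P = 10^({log10_p:.4g})")
```

Nonzero, but for macroscopic N it is preposterously small; that counting is the only sense in which such events are “unlikely”, and it is the same counting that makes fake-past fluctuations dominate.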
Now, just like Boltzmann’s Brain, nobody believes this is true. In fact, you can’t believe it’s true, by any right. All of the logic you used to tell that story, and all of your ideas about the laws of physics, depend on your ability to reliably reconstruct the past. This scenario, in other words, is cognitively unstable; useful as a rebuke to the original hypothesis, but not something that can stand on its own.
So what are we to conclude? That our observed universe is not a statistical fluctuation around a thermal equilibrium state. That’s very important to know, but doesn’t pin down the truth. If the universe is eternal, and has a maximum value for its entropy, then it would (almost always) be in thermal equilibrium. Therefore, either it’s not eternal, or there is no state of maximum entropy. I personally believe the latter, but there’s plenty of work to be done before we have any of this pinned down.
Hal S.,
I think everyone agrees that in practice it will never happen (it would require a time period on the order of the Poincaré recurrence time, an ungodly large amount of time, much, much longer than the lifetime of the universe). However, such an assembly is not impossible. It doesn’t violate any of the laws of physics. So I don’t know why you are saying it will never happen.
Hal S,
Why do you think that Bose condensates are relevant here?
“No energy or matter gets in or out”
and you provided no source of heat
Of course you could argue that when some of the elements combine chemically they will produce heat; you then have a question of how much, and how big your room is.
You also have a problem in that you are in an inertial frame of reference. If the temperature does get high enough, all elements will gravitate into blobs.
This leaves out the question of electrical currents that will be caused if any of the atoms become ionized. How do you prevent a capacitance from forming?
There will be an evolution to this process, and entropy will increase, but again, as outlined in 48, you can never get rid of all the correlations, and there will be some finite record of this evolution.
Hal S,
I don’t believe that you are correct. The laws of physics don’t in any way say that there must remain a record of past evolution.
http://en.wikipedia.org/wiki/Diffusion
http://en.wikipedia.org/wiki/Bound_state
Here’s a more mathematically precise claim about what is possible from fluctuations.
Let |Psi> and |Phi> be two different states of a quantum system involving many, many particles. Then the prediction of quantum mechanics is that the probability that the system, when put in state |Psi> at time 0 will be found in state |Phi> at a later time t is given by
P = |A|^2
where A = <Phi| e^{-iHt} |Psi> is the transition amplitude and H is the Hamiltonian.
The question is: under what circumstances do we expect the transition amplitude A to be zero for all values of t? Well, certainly A will always be zero if Phi and Psi have different eigenvalues for conserved quantities. But if that’s not the case, i.e. if Phi and Psi have the same values for charge, total momentum, total energy, total angular momentum, etc., then I would think that A would be nonzero for most values of t.
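As a toy numerical check of that expectation (my own sketch, with a randomly generated matrix standing in for a realistic many-particle Hamiltonian), the amplitude comes out nonzero at essentially every t when no symmetry forbids the transition:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
dim = 8  # toy Hilbert-space dimension

# Random Hermitian matrix standing in for the Hamiltonian H,
# deliberately chosen with no special symmetry.
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (M + M.conj().T) / 2

def random_state(d):
    """A random normalized state vector."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

psi, phi = random_state(dim), random_state(dim)

# Transition probability P(t) = |<Phi| exp(-iHt) |Psi>|^2 at a few times.
for t in [0.1, 1.0, 10.0, 100.0]:
    A = phi.conj() @ expm(-1j * H * t) @ psi
    print(f"t = {t:6.1f}   P = |A|^2 = {abs(A)**2:.3e}")
```

Of course a random 8-by-8 matrix is nothing like a realistic many-body Hamiltonian; the point is only that a nonzero overlap is the generic situation, and exact zeros require a conserved quantity.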
Hal S,
Please don’t post any more urls without explaining why you think that any of them are relevant. Why are any of those web pages relevant?
This is actually rather neat.
We know that fundamental forces unify at higher energies, and we know the universe is expanding and energy density is decreasing. Thus we have a permanent record of the fact that the universe had a higher energy density early in its evolution.
Somehow, my definition of the transition amplitude A disappeared. But it’s just the matrix element: A(t) = <Phi| e^{-iHt} |Psi>.
All stable heavy elements must have been produced in large stars through nucleosynthesis; therefore we have a permanent record that some large star existed before our own Solar System.
Hal S,
Your conclusion does not follow from that. The average energy density for the entire universe may be decreasing with time, but that doesn’t imply that local densities might not increase.
The universe evolves through the process of diffusion…some bound states are only producible at energy scales larger than what we see today,
i.e. supermassive black holes and spiral galaxies.
Sorry, I don’t see how any of your comments are relevant. To say that some reaction is only possible at such and such an energy scale is a probabilistic statement. What it means is that without that energy, the transition becomes much less likely. But the probability never goes to zero unless the transition violates a conserved quantity.
I can live my life backwards through time, while everyone else lives forwards through time, and we can all see the exact same history of the world. I don’t think I am real enough and am probably dead already because my life is improbable.
What they don’t tell you at school is that it is possible to produce an imaginary black hole in your mind; information on its own is enough to do this. This is called frame setting, it makes your life real and observed up to that point. While you live on like being born again; in other words you exist in every single point in the universe as a real object. Then you simply are a universe at that point. Everything before it simply is your imagination, that has become real. You surround the universe like a Dyson sphere around a star, and then you take everything you need to carry on your life as yourself and nothing more. Doing this is the same as an animal that has no conscious ability to understand that it is giving birth.
If I got something wrong then it’s your fault, because you used me to do it! You can’t expect me to get “everything” right. Your frame is the one I set, and unless you are “Your” then you can dismiss this as nonsense.
It’s not wrong anyway; its entropy is just too high at the beginning to be right, it needs to be a lot lower. The universe is not real, it’s probably a simulation and that really does suck!
Look left, see blue. “Think way of the mushroom!”, “Eyes forward!”, “A cat sees with two eyes”, look right? “I can’t remember?”
Do it yourselves next time!
If you don’t understand this, then ignore it! And don’t comment!
To discuss quantum fluctuations I am going to texify some here. I hope I make no errors. Often when I do this I have to repair errors afterwards, but there is no preview. So here goes.
If you have a state psi(t) it will evolve into a state psi(t + δt) by the Schrödinger equation
$latex i\hbar\frac{\partial\psi(t)}{\partial t}~=~H\psi(t)$
where I will from now on set hbar to one. The Hamiltonian H defines a unitary time development operator U(t) with
$latex |\psi(t_0~+~t)\rangle~=~e^{-iHt}|\psi(t_0)\rangle$
where the unitary operator is the exponential term on the right. We then consider the overlap of a state with its time development a small increment of time later,
$latex \langle\psi(t)|\psi(t~+~\delta t)\rangle~=~\langle\psi(t)|e^{-iH\delta t}|\psi(t)\rangle$
which, by Taylor’s theorem, gives
$latex \langle\psi(t)|\psi(t~+~\delta t)\rangle~\simeq~\langle\psi(t)|\Big(1~-~iH\delta t~-~\frac{1}{2}H^2\delta t^2\Big)|\psi(t)\rangle$
This is the overlap between a state and its time development into the future. Now what we do is to take the modulus square of this to get
$latex |\langle\psi(t)|\psi(t~+~\delta t)\rangle|^2~\simeq~1~-~\Big(\langle\psi(t)|H^2|\psi(t)\rangle~-~\langle\psi(t)|H|\psi(t)\rangle^2\Big)\delta t^2$
The last two terms on the right are
$latex (\langle~H^2~\rangle~-~\langle~H~\rangle^2)\delta t^2,$
with some compression of notation. The term in parentheses is the square of
$latex \Delta H~=~\sqrt{\langle~H^2~\rangle~-~\langle~H~\rangle^2}$
This is the quantity that enters the Heisenberg uncertainty principle; the whole package in the modulus squared is
$latex \Delta H\,\delta t/\hbar~=~\sqrt{\langle~H^2~\rangle~-~\langle~H~\rangle^2}\;\delta t/\hbar.$
This says that if I sample the system in a short time I will get a range of possible values for the energy of the system.
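Here is a quick numerical sanity check of the short-time expansion above (a toy sketch of my own, with a randomly generated Hermitian matrix standing in for H and hbar set to one): the survival probability |⟨psi(t)|psi(t+δt)⟩|² should approach 1 − (ΔH)²δt² as δt gets small.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
dim = 6  # toy Hilbert-space dimension

# Random Hermitian matrix playing the role of H (hbar = 1 throughout).
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (M + M.conj().T) / 2

v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi = v / np.linalg.norm(v)  # normalized state |psi>

# (Delta H)^2 = <H^2> - <H>^2 in the state |psi>.
H_mean = (psi.conj() @ H @ psi).real
H2_mean = (psi.conj() @ H @ H @ psi).real
varH = H2_mean - H_mean**2

# Compare the exact survival probability with 1 - (Delta H)^2 dt^2.
for dt in [1e-1, 1e-2, 1e-3]:
    exact = abs(psi.conj() @ expm(-1j * H * dt) @ psi) ** 2
    approx = 1.0 - varH * dt**2
    print(f"dt = {dt:g}   exact = {exact:.10f}   approx = {approx:.10f}")
```

For small δt the two columns agree to the expected order in δt.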
As a digression this has some interesting properties, for this is the metric of the Fubini-Study space and the fibration over the projective Hilbert space.
The quantum fluctuation is physically the result of sampling or measuring the system. Quantum mechanics is, contrary to popular belief, a completely deterministic physics. The Schrödinger wave equation is completely deterministic. But since the wave is complex valued, and there is this fibration given by the unitary operator over the projective Hilbert space, what we measure in real variables has this range of possible outcomes.
In thermodynamics there are also fluctuations, and a formula for them based on the de Broglie wavelength. It is analogous to this, but there one has to look at Fokker-Planck equations or Langevin processes. These are similar to quantum fluctuations, but are really different physics.
Lawrence B. Crowell
Pingback: Starts With A Bang! » Brain-damaged arguments and Boltzmann Brains
Maybe there is a killer virus that lurks in the universe, one that makes the odds of a Boltzmann brain popping out of the ether at any point almost impossible. Viruses don’t have brains, but they do a great job of killing them and are a hell of a lot more likely to pop into existence than a brain. It’s not inconceivable that a type of virus removes all non-real Boltzmann brains, leaving only evolved life forms possible. The nature of the universe is extremely violent and unforgiving; the universe simply has no time for things that are not real! Of course there is a chance a Boltzmann brain can get lucky, but you simply have to look at conception in humans to realise that once in the egg, no other sperm can get in; then the odds of twins are statistically possible, but in universe terms very unlikely.
You could tell yourself this in the past, if you found a way; but you would end up leading yourselves into an un-real universe. Once you passed the point where you sent the information to yourselves in the past, you would not know what to do next. Unless you told yourselves what to do, and that would be like creating a big polo mint in space and driving through the hole! Where would that get you?
Pingback: Everyone's a Critic | Cosmic Variance
Pingback: The lure of science pornography » Undress Me Robot
Pingback: The Boltzmann Brain Controversy « In Other Words
Anybody read Huw Price’s “Time’s Arrow and Archimedes’ Point”?
To paraphrase Price, the mystery is not why entropy always increases but why entropy is always lower in the past. If we assume that physics is time symmetric, then not only can cream spontaneously jump out of a cup of coffee and pebbles leap out of ponds, but they do, regularly. To see this all one has to do is run the tape backwards.
Price’s argument against Boltzmann goes something like this:
1) We are not now in thermal equilibrium.
2) The statistical basis of the 2nd law dictates all matter moves toward thermal equilibrium.
3) So, we have to explain why we are not in equilibrium now, in the present.
4) Boltzmann’s solution was to allow random fluctuations from equilibrium, which is certainly possible, even inevitable, given enough time.
5) So, maybe we are in a non-equilibrium state because of such a random fluctuation in a universe that is otherwise in thermal equilibrium (most of the time).
6) The past, however, appears to us to have been even further from equilibrium; in fact, the further back in time one goes, the further from equilibrium the (observable) universe becomes. I.e. entropy paradoxically decreases going back in time. If this doesn’t seem like a problem, please understand that we want a physics that is time-symmetric, i.e. has rules that work going forward or backward in time. So it’s no good to just say the 2nd law only applies going forward in time.
7) If it is the case, however, that our current departure from thermal equilibrium is due to a random fluctuation, then it is quite likely that our past is “fake”, that is, it only appears to be lower in entropy (further from thermal equilibrium) than it is now. This is because the more a possible fluctuation moves away from equilibrium, the less likely it is to happen. Assuming our current state is due to such a fluctuation, it will always be a “cheaper” or a more likely solution, statistically, to say the past is fake than to say the past really was lower in entropy than now. If one admits that the past is fake, then the furthest departure from equilibrium one must account for is now. But the further you push the point of “reality” into the past, the further the explanatory fluctuation must have departed from equilibrium, and thus the less likely it is to have actually happened that way.
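(In the same rough Boltzmann-style estimate quoted earlier, this is the statement that $latex P({\rm real~past})/P({\rm fake~past})~\sim~e^{(S_{\rm past}~-~S_{\rm now})/k_B}~\ll~1$, since the real past has lower entropy than the present. That gloss is mine, not Price’s.)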
This really isn’t to say that I believe our brains spontaneously appeared out of the muck a nanosecond ago. It’s really a reductio ad absurdum argument. It just shows that Boltzmann’s proposed fluctuation is really not a satisfactory explanation of why the (observable) universe has a low entropy in the past.
And I think Carroll is wrong about this too:
“Note that this scenario makes no assumptions about our typicality — it assumes, to the contrary, that we are exactly who we (presently) perceive ourselves to be, no more and no less.”
If statistically the past is more likely to be “fake” than “real” (i.e. only appearing to be lower in entropy than now), then why not the present also? It’s more likely that the monkeys only typed out the first page of Hamlet rather than waiting around to finally produce the whole thing. If we just popped out of the uniform muck a nanosecond ago, it’s more likely that what popped out was just an empty, ephemeral stage-set, and less likely to be the “real thing”, no?