Lenny Susskind has a new book out: The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics. At first I was horrified by the title, but upon further reflection it’s grown on me quite a bit.
Some of you may know Susskind as a famous particle theorist, one of the early pioneers of string theory. Others may know his previous book: The Cosmic Landscape: String Theory and the Illusion of Intelligent Design. (Others may never have heard of him, although I’m sure Lenny doesn’t want to hear that.) I had mixed feelings about the first book; for one thing, I thought it was a mistake to put “Intelligent Design” there in the title, even if it were to be dubbed an “Illusion.” So when the Wall Street Journal asked me to review it, I was a little hesitant; I have enormous respect for Susskind as a physicist, but if I ended up not liking the book I would have to be honest about it. Still, I hadn’t ever written anything for the WSJ, and how often does one get the chance to stomp about in the corridors of capitalism like that?
The good news is that I liked the book a great deal, as the review shows. I won’t reprint the thing here, as you are all well-trained when it comes to clicking on links. But let me mention just a few words about information conservation and loss, which is the theme of the book. (See Backreaction for another account.)
It’s all really Isaac Newton’s fault, although people like Galileo and Laplace deserve some of the credit. The idea is straightforward: evolution through time, as described by the laws of physics, is simply a matter of re-arranging a fixed amount of information in different ways. The information itself is neither created nor destroyed. Put another way: to specify the state of the world requires a certain amount of data, for example the positions and velocities of each and every particle. According to classical mechanics, from that data (the “information”) and the laws of physics, we can reliably predict the precise state of the universe at every moment in the future — and retrodict the prior states of the universe at every moment in the past. Put yet another way, here is Thomasina Coverley in Tom Stoppard’s Arcadia:
If you could stop every atom in its position and direction, and if your mind could comprehend all the actions thus suspended, then if you were really, really good at algebra you could write the formula for all the future; and although nobody can be so clever as to do it, the formula must exist just as if one could.
This is the Clockwork Universe, and it is far from an obvious idea. Pre-Newton, in fact, it would have seemed crazy. In Aristotelian mechanics, if a moving object is not subject to a continuous impulse, it will eventually come to rest. So if we find an object at rest, we have no way of knowing whether until recently it was moving, or whether it’s been sitting there for a long time; that information is lost. Many different pasts could lead to precisely the same present; whereas, if information is conserved, each possible past leads to exactly one specific state of affairs at the present. The conservation of information — which also goes by the name of “determinism” — is a profound underpinning of the modern way we think about the universe.
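To make the Clockwork Universe concrete, here is a toy numerical sketch (mine, not anything from the book): integrate Newton's second law forward for a particle in an illustrative harmonic force, then run the same integrator with the time step reversed. The initial position and velocity (the "information") come back out, up to floating-point error.

```python
import numpy as np

def step(x, v, dt, a=lambda x: -x):
    # One leapfrog step of Newton's second law; the force a(x) = -x
    # (a harmonic oscillator) is just an illustrative choice.
    # Leapfrog is exactly time-reversible: step(dt) then step(-dt) is the identity.
    v_half = v + 0.5 * dt * a(x)
    x_new = x + dt * v_half
    v_new = v_half + 0.5 * dt * a(x_new)
    return x_new, v_new

x, v = 1.0, 0.0              # the "information": a position and a velocity
for _ in range(1000):        # run the clockwork forward...
    x, v = step(x, v, 0.01)
for _ in range(1000):        # ...then retrodict by running it backward
    x, v = step(x, v, -0.01)
print(x, v)                  # recovers (1.0, 0.0) to machine precision
```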
Determinism came under a bit of stress in the early 20th century when quantum mechanics burst upon the scene. In QM, sadly, we can’t predict the future with precision, even if we know the current state to arbitrary accuracy. The process of making a measurement seems to be irreducibly unpredictable; we can predict the probability of getting a particular answer, but there will always be uncertainty if we try to make certain measurements. Nevertheless, when we are not making a measurement, information is perfectly conserved in quantum mechanics: Schrödinger’s equation allows us to predict the future quantum state from the past with absolute fidelity. This makes many of us suspicious that this whole “collapse of the wave function” that leads to an apparent loss of determinism is really just an illusion, or an approximation to some more complete dynamics — that kind of thinking leads you directly to the Many Worlds Interpretation of quantum mechanics. (For more, tune into my Bloggingheads dialogue with David Albert this upcoming Saturday.)
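As a toy check of that claim (again my sketch, not the book's): exponentiate a random Hermitian matrix standing in for the Hamiltonian, and the resulting evolution operator is unitary, so the initial state can always be recovered from the final one.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2          # a random Hermitian "Hamiltonian"

U = expm(-1j * H)                 # Schrödinger evolution for unit time (hbar = 1)
print(np.allclose(U.conj().T @ U, np.eye(4)))  # True: U is unitary

psi0 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi0 /= np.linalg.norm(psi0)      # a normalized initial state
psi1 = U @ psi0                   # predict the future...
print(np.allclose(U.conj().T @ psi1, psi0))    # ...and retrodict the past: True
```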
In any event, aside from the measurement problem, quantum mechanics makes a firm prediction that information is conserved. Which is why it came as a shock when Stephen Hawking said that black holes could destroy information. Hawking, of course, had famously shown that black holes give off radiation, and if you wait long enough they will eventually evaporate away entirely. Few people (who are not trying to make money off of scaremongering about the LHC) doubt this story. But Hawking’s calculation, at first glance (and second), implies that the outgoing radiation into which the black hole evaporates is truly random, within the constraints of being a blackbody spectrum. Information is seemingly lost, in other words — there is no apparent way to determine what went into the black hole from what comes out.
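For scale, the standard textbook results behind this story (nothing here is controversial): a black hole of mass M radiates at the Hawking temperature and evaporates on a timescale that grows as the cube of its mass,

```latex
T_H = \frac{\hbar c^3}{8\pi G M k_B}, \qquad
\tau \sim \frac{5120\,\pi\, G^2 M^3}{\hbar c^4},
```

so a solar-mass black hole is absurdly cold (roughly 10⁻⁷ K) and takes of order 10⁶⁷ years to evaporate, which is why this war is fought with theoretical arguments rather than telescopes.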
This led to one of those intellectual scuffles between “the general relativists” (who tended to be sympathetic to the idea that information is indeed lost) and “the particle physicists” (who were reluctant to give up on the standard rules of quantum mechanics, and figured that Hawking’s calculation must somehow be incomplete). At the heart of the matter was locality — information can’t be in two places at once, and it has to travel from place to place no faster than the speed of light. A set of reasonable-looking arguments had established that, in order for information to escape in Hawking radiation, it would have to be encoded in the radiation while it was still inside the black hole, which seemed to be cheating. But if you press hard on this idea, you have to admit that the very idea of “locality” presumes that there is something called “location,” or more specifically that there is a classical spacetime on which fields are propagating. Which is a pretty good approximation, but deep down we’re eventually going to have to appeal to some sort of quantum gravity, and it’s likely that locality is just an approximation. The thing is, most everyone figured that this approximation would be extremely good when we were talking about huge astrophysical black holes, enormously larger than the Planck length where quantum gravity was supposed to kick in.
But apparently, no. Quantum gravity is more subtle than you might think, at least where black holes are concerned, and locality breaks down in tricky ways. Susskind himself played a central role in formulating two ideas that were crucial to the story — Black Hole Complementarity and the Holographic Principle. Which maybe I’ll write about some day, but at the moment it’s getting late. For a full account, buy the book.
Right now, the balance has tilted quite strongly in favor of the preservation of information; score one for the particle physicists. The best evidence on their side (keeping in mind that all of the “evidence” is in the form of theoretical arguments, not experimental data) comes from Maldacena’s discovery of duality between (certain kinds of) gravitational and non-gravitational theories, the AdS/CFT correspondence. According to Maldacena, we can have a perfect equivalence between two very different-looking theories, one with gravity and one without. In the theory without gravity, there is no question that information is conserved, and therefore (the argument goes) it must also be conserved when there is gravity. Just take whatever kind of system you care about, whether it’s an evaporating black hole or something else, translate it into the non-gravitational theory, find out what it evolves into, and then translate back, with no loss of information at any step. Long story short, we still don’t really know how the information gets out, but there is a good argument that it definitely does for certain kinds of black holes, so it seems a little perverse to doubt that we’ll eventually figure out how it works for all kinds of black holes. Not an airtight argument, but at least Hawking buys it; his concession speech was reported on an old blog of mine, lo these several years ago.
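For the schematically inclined (my gloss; the details are in Maldacena's paper and its follow-ups): the dictionary equates the generating functionals of the two theories, with the boundary value of each bulk field acting as a source for a corresponding operator in the CFT:

```latex
Z_{\rm gravity}\big[\phi \to \phi_0 \ \text{on the boundary}\big]
  \;=\; \Big\langle \exp\!\Big( \int_{\rm boundary} \phi_0 \, \mathcal{O} \Big) \Big\rangle_{\rm CFT}.
```

Unitary evolution on the right-hand side then implies unitary evolution for anything the left-hand side can describe, evaporating black holes included.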
“Not an airtight argument…..”
Oh come. Neglecting tiny details like the sign of the cosmological constant, the timelike character of the AdS conformal infinity, the non-compactness of AdS spatial sections, the existence of a timelike Killing vector, trivial stuff like that, our Universe is EXACTLY THE SAME as AdS. So of course unitarity is preserved in black hole evaporation!
Let me try again. Some black holes evaporate in a unitary way, therefore all black holes…..no wait, I will get this somehow……
Hi Sean,
Thanks for the link. I can’t help noticing you’ve made a leap there from determinism to unitarity. Evolution can be deterministic, but that doesn’t imply it’s also unitary. Best,
B.
Hi B– That’s true, I was being sloppy. But unitarity can be thought of as “determinism in both directions in time.” Unless you are thinking of something more subtle?
Absalom, you seem to be missing the point quite a bit. It was not “all black holes are just like ones in AdS_5 x S^5”. It’s that all of the objections to unitary evolution apply there as well as they do anywhere else, yet we know that they are somehow avoided. It’s perfectly reasonable to think that they can therefore be avoided elsewhere, although it’s clearly not an airtight argument.
“This is the Clockwork Universe, and it is far from an obvious idea. Pre-Newton, in fact, it would have seemed crazy. ”
Hmmm…several people actually built clockworks that simulated the universe several centuries before Newton was born (de Dondi, Wallingford, et al.). Surely a few people saw these great machines and thought, “…maybe not so crazy after all?”
Haha, so presumptuous… as if the “black hole war” is over. Normally experiment decides who is right, not famous people =). That said, it’s probably a fun read for a layman. I just hope they don’t come out of it thinking that in science what is right is determined by what Stephen Hawking thinks.
Something that has really bothered me is applying the unitarity argument to black body radiation modes. The example is this. If you have a photon gas in a black body cavity and you point a specific-frequency laser into the cavity (specifically the photon gas bulk), then if you pulse the laser to encode a message (i.e. Morse code), the photons of the laser pulse will eventually be included in the modes of the black body distribution. The difficulty of retrieving the message from the black body emission seems formidable. Laser photons will either interact with the walls of the cavity or be reflected multiple times before entering the black body distribution. Thus any pulse becomes indiscernible, and there would not necessarily be any temperature flux with a pattern (the message). Some laser photons would even be promoted to a cascade of different-frequency photons by interaction with the matter in the cavity walls, so the frequency may be changed. A solution to this would be the Holographic Principle, whereby the laser pulse upon entering the photon gas bulk would be a smeared state (entering a non-classical state) like the rest of the bulk (photons will not collide with each other). Any information in the bulk may write information on the boundary (cavity wall) and possibly enable the information to remain coherent. I understand that black body radiation is quantum unitary and that black holes are quantum unitary, but still, how is the information that is emitted encoded?
I could never figure out why Hawking’s idea that information was destroyed was ever taken seriously. The unwarranted assumptions in his various arguments were obvious to me when I was in high school! To wit, if you dump some information into a hot thermal system, it rapidly becomes intertwined with everything else going on in that system. You can’t get the information out again without measuring ALL the subsequent radiation from the system. The same should be true of the black hole; you won’t be able to recover the information unless you watch the black hole radiate until it evaporates. The early stages of radiation may be semiclassical, but the final evaporation is not. Hence there is no reason to believe that the information that you put in initially is not imprinted on the final burst of radiation from the evaporating black hole. (And, lo and behold, that was exactly what was found to happen in AdS/CFT! Or at least, that’s one interpretation of the result; some people disagree that that’s what’s happening.)
Brett, I don’t think that’s right. People certainly understood that you need to capture all of the radiation to restore the purity of the final state. But there is a simple question of numbers: There is a huge amount of radiation that comes out when the black hole is large and purportedly semi-classical, while only a very tiny amount that comes out when it is small and close to the Planck scale. When you check things quantitatively, there is no way for the late radiation to restore the purity of the entire state. Somehow, information has to be encoded in the early stages as well.
From Sean’s review in the Wall Street Journal:
“And what was the outcome of the black-hole war? A Susskind victory, it would appear. It seems that information is not lost, even when black holes evaporate. In 1997, a young theorist named Juan Maldacena showed how, in certain cases, questions in quantum gravity can be “translated” into equivalent questions in a different-looking theory, one that doesn’t involve gravity at all — a theory, moreover, in which it is perfectly clear that information is never lost. So we don’t know exactly how information escapes when a black hole evaporates. But we can start with a black hole, translate it into the new theory (where we do know how to keep track of information), let the black hole “evaporate” and translate it back. A bit indirect, but the logic seems solid.”
“a different-looking theory, one that doesn’t involve gravity at all”… Sound esoteric? Far from it, for in GR gravity, as “real” as it is, is a fictitious force, created by the way the momentum of the universe is observed from coordinates within a manifold.
Sean, that was a very thoughtful (and informative) review!
Very very tangentially related but … well, this is the blogosphere …
There’s one thing – well, one thing in particular, I guess – that bothers me about the ‘many worlds’ scenario:
Where does all the mass and energy come from?
It probably just goes to show how little I understand any of it, but if … the universe/reality is a four dimensional manifold with some ‘stuff’ in it, how can the ‘stuff’ continually multiply to keep all these ‘many worlds’ populated?
I’m sure I just have much too naïve a picture of the problem, but I’d really appreciate some pointers towards comprehension.
Sili– Different branches of the wave function don’t actually cost extra energy. Forget about many worlds, just think of ordinary quantum mechanics. If an electron is in a superposition of two different positions, the quantum state doesn’t have twice the mass it would have if the wave function were localized at one position. The mass is just that of one electron, but it’s in a superposition of two different possible positions. Likewise with the rest of the universe.
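In symbols (my illustration, not Sean’s): if |x1⟩ and |x2⟩ are orthonormal one-electron states, each an eigenstate of the mass operator with eigenvalue m, then the superposition satisfies

```latex
|\psi\rangle = \tfrac{1}{\sqrt{2}}\big(|x_1\rangle + |x_2\rangle\big)
\quad\Longrightarrow\quad
\langle \psi | \hat{M} | \psi \rangle = m \quad (\text{not } 2m),
```

because the cross terms vanish by orthogonality and the state is still normalized to one.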
Sili,
You’re not the only one that’s confused. Obviously quantum information isn’t like what goes through our minds, but presumably it’s what our brains are made of.
The energy is conserved, but I don’t see how every bit of information ever recorded by that energy is retained. Yes, we can assume one set of events leads up to this moment and one set of events will follow it, but an objective perspective is a contradiction. As Stephen Wolfram put it, ‘It would take a computer the size of the universe to compute the universe.’
Sean,
If the universe is one great superposition, then does the concept of “information,” i.e. distinction, even apply?
So … the MW sorta corresponds to the electron pre-observation? Would it be right to say that the MW ‘universe’ just is? Never actually observed in a manner that forces the energy into one single state?
errrr … that can’t be right … or perhaps it’s just my struggling brain. I think it’s high time I reread some QM.
Thank you.
That’s exactly right. The MWI simply says that the wavefunction is always there, evolving peacefully according to the Schrödinger equation, without any collapses.
Hi Sean:
But unitarity can be thought of as “determinism in both directions in time.” Unless you are thinking of something more subtle?
I don’t mean anything very subtle. Yes, I meant determinism ‘in both directions in time’. But what I was trying to say is that an evolution of a wave-function can be deterministic without being unitary. Unitarity doesn’t only mean there is an operator H that works forwards and backwards but that the evolution you get from it is, well, unitary. Just add a damping term to the exponent that spoils the preservation of the norm. It’s still a deterministic one-to-one map but not unitary. Best,
B.
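A two-level toy version of B.’s damping example (numbers mine, purely illustrative): evolve with exp(−iHt − γt). The norm decays, so the map is not unitary, but it is still invertible, hence deterministic in both directions.

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5], [0.5, -1.0]])          # a Hermitian Hamiltonian
gamma = 0.3                                       # illustrative damping rate
A = expm(-1j * H - gamma * np.eye(2))             # e^{-iHt - gamma*t} at t = 1

psi = np.array([1.0, 0.0], dtype=complex)
phi = A @ psi
print(np.linalg.norm(phi))                        # ~0.74 < 1: norm not preserved
print(np.allclose(np.linalg.inv(A) @ phi, psi))   # True: still a one-to-one map
```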
Sean said:
Unitarity ensures that probabilities add up to one. This definition itself implies determinism in “both directions of time”, right? But then one wonders whether it makes any sense to talk about causality in the “other direction of time”…
In any sensible theory of quantum dynamics, probabilities must add up to one. There is no theory of the world in which the probability of all mutually exclusive events adds up to anything other than one. There might be a mathematical formalism which leads you to such a conclusion — e.g. perturbation theory for massive vector bosons without a Higgs mechanism or other new physics — but that’s a sign that your theory is incomplete or ill-defined, not that probabilities really don’t add up to one.
In particular, multiplying the wave function by a number less than one is not a violation of unitarity; you would simply renormalize so that the norm was one. Quantum states are not really vectors in a Hilbert space, they are equivalence classes of such vectors up to scaling by non-zero complex numbers.
On the other hand, “unitarity” (i.e., evolution of the state vector is described by the action of a unitary operator) implies more than conservation of probability; it also implies reversibility (because unitary operators have inverses). If a well-defined theory is truly non-unitary, it’s not because probabilities don’t add up to one, it’s because the evolution isn’t invertible. For example, if many different states at time t1 evolved into some particular single state at time t2, that would be a genuinely non-unitary evolution (and is closely analogous to what happens when wave functions collapse).
Is a universe in which information is not conserved possible at all (in the sense that internal observers would notice this)? Sean gave the example of Aristotelian mechanics, but I don’t find that very convincing.
If you observe that some object has come to rest and conclude that “all information about the initial state has been lost”, the very fact that you know this means that not all the information has been lost. Some of the information about the previous state has been stored in your memory, otherwise you wouldn’t be aware of this fact.
If information does not get lost when you are observing, then it is hard to see how you would arrive at fundamental laws of physics in which information really does get lost.
It would be more natural to assume that the information that gets lost when you are not observing actually doesn’t get lost but ends up in some hard to detect degrees of freedom of the universe.
This is a bit similar to the MWI. It is then perhaps not surprising that an attempt by ‘t Hooft to find a local deterministic theory underlying quantum mechanics led him to models in which information is not conserved. 🙂
Sean’s post 20 is really interesting and is loaded with all kinds of subtle ramifications.
A unitary universe is also a static (deterministic) universe, but the very nature of the GR spherical geometry (with the irrational “pi”) implies a certain mathematical irrationality in cosmology which results in both the existence of time and an irreversible time process. A universe which is everywhere, all the time constrains a powerful determinism, but it also points (“all the time”) to a universe which is eternally existing.
I don’t believe it is at all conceptually trivial that we observe both the existence of time and an irreversible time process at our macroscopic coordinates. However, just as in engineering problems, the sum of the moments in the universal structure must total 0 to assure stability. What causes our existence is a foundational momentum, rigid structural constraints, an overall conservation of matter, energy- and a multi-faceted entropy. Perpetual order is foundational to existence, but so is change…ordered- if gradual- change.
Since the SR/GR/QM universe exists only as it is observed, it can be seen that any increase in overall thermal entropy can thus, over eternity, be traded for a slight decrease in informational entropy (increase in complexity), conserving, not only matter and energy, but the overall total entropy, thermal, informational, and by implication, observational in the system. The universe (contrary to our strong first impression) is gradually and irreversibly becoming more complex in its overall structure, yet it never exactly repeats itself when observed on 4D event horizon surfaces…in a very important sense, it is not perfectly unitary.
The whole is greater than the sum of its parts… the sum of the terms is, in an important way, greater than 1. The 33RPM record is a lower dimensional projection which includes periodicity, motion and change- of and within an ordered structure. In the same way, a human being is more, much, much more, than a certain total mass of oxygen, hydrogen, carbon, nitrogen..etc.
One final thought. The mathematical irrationality of the system applies within the manifold from which the universe is observed at sets of coordinates… pi is not necessarily an inherent quality of the Planck Realm. Hence, the non-unitary universe we feel we observe may in fact, ultimately and cosmologically, be truly unitary.
Sean, it is certainly not uncontroversial when exactly the information passes out of the black hole, but there is no quantitative reason why it can’t all come out at the end. The basic reason is that there are enough correlations between the final burst and the previous radiation to contain all the information. So the final burst on its own doesn’t contain the information, the entire radiation history does; but it can’t be decoded until the evaporation is observed.
No, that’s just not true; there are not enough correlations in the final radiation to restore the purity of the final state. (Unless you change the rules of Hawking radiation so that the final photons come out extremely slowly and with very low energies, so that you essentially have a stable remnant.) Think of it this way: if a small number of particles are correlated with a large number of particles, that larger number of particles must clearly have been correlated with each other. See this paper by John Preskill.
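The counting behind this, roughly (my paraphrase of the standard Page-style argument): the entropy a subsystem can carry is bounded by the log of its Hilbert-space dimension, while the entropy that needs purifying is of order the Bekenstein-Hawking entropy of the original hole,

```latex
S_A \le \ln \dim \mathcal{H}_A, \qquad
S_{\rm BH} = \frac{A}{4 G \hbar} \quad (k_B = c = 1).
```

A final Planckian burst of a handful of quanta has a tiny Hilbert space, so it simply cannot be entangled with, and thereby purify, the enormous amount of radiation already emitted; the information has to start coming out roughly halfway through the evaporation.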
Sean wrote (#20):
Interestingly, though, any linear map that preserves an inner product will be one-to-one, even if it doesn’t have an inverse and hence is non-unitary. (That’s easy to prove; just define a norm from the inner product, then that norm will be preserved. And since ||Tx-Ty||=||x-y||, if Tx=Ty then x=y.)
For example, the right-shift map on an infinite-dimensional Hilbert space of square-summable sequences, R((x1,x2,x3,…))=(0,x1,x2,x3,…), preserves the inner product. This isn’t unitary because the left shift L((x1,x2,x3,…))=(x2,x3,…) is only a “left inverse”: LR=I, but RL is not equal to I.
So there’d definitely be something weird (and non-time-reversible) about a physical system that evolved under R … but no two different initial states ever evolve to the same final state.
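A finite-dimensional cartoon of that shift map (mine; the truncation obviously spoils the infinite-dimensional structure, but the point survives):

```python
import numpy as np

def R(x):
    # Right shift: (x1, x2, ...) -> (0, x1, x2, ...)
    return np.concatenate(([0.0], x))

def L_shift(x):
    # Left shift: drop the first component
    return x[1:]

x = np.array([3.0, 1.0, 4.0])
print(np.isclose(np.dot(R(x), R(x)), np.dot(x, x)))  # True: inner product preserved
print(np.allclose(L_shift(R(x)), x))                 # True: LR = I
y = np.array([1.0, 2.0, 3.0])                        # y[0] != 0, so y is not in R's range
print(np.allclose(R(L_shift(y)), y))                 # False: RL != I, so R is not unitary
```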