A paper just appeared in Physical Review Letters with a provocative title: “A Quantum Solution to the Arrow-of-Time Dilemma,” by Lorenzo Maccone. Actually just “Quantum…”, not “A Quantum…”, because among the various idiosyncrasies of PRL is that paper titles do not begin with articles. Don’t ask me why.
But a solution to the arrow-of-time dilemma would certainly be nice, quantum or otherwise, so the paper has received a bit of attention (Focus, Ars Technica). Unfortunately, I don’t think this paper qualifies.
The arrow-of-time dilemma, you will recall, arises from the tension between the apparent reversibility of the fundamental laws of physics (putting aside collapse of the wave function for the moment) and the obvious irreversibility of the macroscopic world. The latter is manifested by the growth of entropy with time, as codified in the Second Law of Thermodynamics. So a solution to this dilemma would be an explanation of how reversible laws on small scales can give rise to irreversible behavior on large scales.
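To see the tension concretely, here's a minimal toy sketch in Python (my own illustration, nothing from Maccone's paper): two qubits evolving under a reversible unitary keep their total von Neumann entropy pinned at exactly zero, while the entropy of either qubit alone grows. The entangling Hamiltonian is just something I picked for the demonstration.

```python
# Toy illustration: reversible (unitary) dynamics preserves total
# entropy, yet a subsystem's entropy can still grow.
import numpy as np
from scipy.linalg import expm

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace_B(rho):
    """Trace out the second qubit of a 4x4 two-qubit density matrix."""
    rho4 = rho.reshape(2, 2, 2, 2)        # indices: a, b, a', b'
    return np.einsum('ajbj->ab', rho4)    # sum over b = b'

# Pure product state |0>|0>: total entropy is exactly zero.
psi0 = np.kron([1, 0], [1, 0]).astype(complex)
rho = np.outer(psi0, psi0.conj())

# An entangling Hamiltonian chosen for illustration: sigma_x (x) sigma_x.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.kron(sx, sx)

for t in np.linspace(0, 1.0, 5):
    U = expm(-1j * H * t)
    rho_t = U @ rho @ U.conj().T
    S_total = von_neumann_entropy(rho_t)                  # stays 0
    S_sub = von_neumann_entropy(partial_trace_B(rho_t))   # grows
    print(f"t={t:.2f}  S_total={S_total:.3f}  S_subsystem={S_sub:.3f}")
```

The global state stays pure (reversibility), but the reduced state of either qubit looks increasingly mixed — a cartoon of how irreversibility shows up when you only look at part of a reversible whole.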
The answer isn’t actually that mysterious; it’s just unsatisfying. Namely, the early universe was in a state of extremely low entropy. If you accept that, everything else follows from the nineteenth-century work of Boltzmann and others. The problem then is: why should the universe be like that? Why should the state of the universe be so different at one end of time than at the other? Why isn’t the universe just in a high-entropy state almost all the time, as we would expect if its state were chosen randomly? Some of us have ideas, but the problem is certainly unsolved.
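For reference, the key nineteenth-century ingredient is Boltzmann's formula relating the entropy of a macrostate to the number of microstates $W$ compatible with it:

$$ S = k_B \ln W. $$

Since there are enormously more high-entropy microstates than low-entropy ones, a randomly chosen state is high-entropy with overwhelming probability, which is exactly why a low-entropy beginning cries out for explanation.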
So you might like to do better, and that’s what Maccone tries to do in this paper. He forgets about cosmology, and tries to explain the arrow of time using nothing more than ordinary quantum mechanics, plus some ideas from information theory.
I don’t think that there’s anything wrong with the actual technical results in the paper — at a cursory glance, it looks fine to me. What I don’t agree with is the claim that it explains the arrow of time. Let’s just quote the abstract in full:
The arrow of time dilemma: the laws of physics are invariant for time inversion, whereas the familiar phenomena we see everyday are not (i.e. entropy increases). I show that, within a quantum mechanical framework, all phenomena which leave a trail of information behind (and hence can be studied by physics) are those where entropy necessarily increases or remains constant. All phenomena where the entropy decreases must not leave any information of their having happened. This situation is completely indistinguishable from their not having happened at all. In the light of this observation, the second law of thermodynamics is reduced to a mere tautology: physics cannot study those processes where entropy has decreased, even if they were commonplace.
So the claim is that entropy necessarily increases in “all phenomena which leave a trail of information behind” — i.e., any time something happens for which we can possibly have a memory of it happening. So if entropy decreases, we can have no recollection that it happened; therefore we always find that entropy seems to be increasing. Q.E.D.
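Here's a cartoon of that claim (my own toy, not the actual construction in Maccone's paper): let a "memory" qubit record one bit about a system via a CNOT gate. The recording raises the system's entropy; running the CNOT backwards lowers it again, but only at the price of wiping the memory, so no record of the decrease survives.

```python
# Toy version of "entropy decreases only where no record survives":
# a CNOT copies one bit of the system into a memory qubit; undoing the
# CNOT lowers the system's entropy back to zero, but erases the record.
import numpy as np

def subsystem_entropy(psi, keep):
    """Von Neumann entropy (bits) of one qubit of a two-qubit pure state."""
    rho4 = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho = (np.einsum('ajbj->ab', rho4) if keep == 0
           else np.einsum('jajb->ab', rho4))
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # system: (|0>+|1>)/sqrt(2)
zero = np.array([1, 0], dtype=complex)                # memory: |0>
psi = np.kron(plus, zero)

# CNOT: control = system (first qubit), target = memory (second qubit).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

psi_recorded = CNOT @ psi          # memory now holds a record of the system
psi_undone = CNOT @ psi_recorded   # reversal: entropy drops, record erased

for label, state in [("before", psi), ("recorded", psi_recorded),
                     ("undone", psi_undone)]:
    print(f"{label:9s} S(system)={subsystem_entropy(state, 0):.2f} bits, "
          f"S(memory)={subsystem_entropy(state, 1):.2f} bits")
```

The system's entropy goes 0 → 1 → 0 bits; at the end the memory is back in |0>, carrying no trace that anything ever happened. That's the flavor of the result, as far as I can tell.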
But that doesn’t really address the problem. The fact that we “remember” the direction of time in which entropy is lower, if any such direction exists, is pretty well-established among people who think about these things, going all the way back to Boltzmann. (Chapter Nine.) But in the real world, we don’t simply see entropy increasing; we see it increase by a lot. The early universe has an entropy of 10^88 or less; the current universe has an entropy of 10^101 or more, for an increase of more than a factor of 10^13 — a giant number. And it increases in a consistent way throughout our observable universe. It’s not just that we have an arrow of time — it’s that we have an arrow of time that stretches coherently over an enormous region of space and time.
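Just to spell out the arithmetic:

$$ \frac{S_{\text{today}}}{S_{\text{early}}} \gtrsim \frac{10^{101}}{10^{88}} = 10^{13}. $$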
This paper has nothing to say about that. If you don’t have some explanation for why the early universe had a low entropy, you would expect it to have a high entropy. Then you would expect to see small fluctuations around that high-entropy state. And, indeed, if any complex observers were to arise in the course of one of those fluctuations, they would “remember” the direction of time with lower entropy. The problem is that small fluctuations are much more likely than large ones, so you predict with overwhelming confidence that those observers should find themselves in the smallest fluctuations possible, freak observers surrounded by an otherwise high-entropy state. They would be, to coin a pithy phrase, Boltzmann brains. Back to square one.
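The standard statistical-mechanics estimate behind that "overwhelming confidence" is that the probability of a spontaneous downward fluctuation in entropy is exponentially suppressed in its size, roughly

$$ P(\Delta S) \sim e^{-\Delta S / k_B}, $$

where $\Delta S$ is how far the fluctuation dips below equilibrium. A fluctuation that assembles a single disembodied brain costs vastly less entropy than one that assembles an entire low-entropy early universe, so the brains win by an exponentially enormous margin.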
Again, everything about Maccone’s paper seems right to me, except for the grand claims about the arrow of time. It looks like a perfectly reasonable and interesting result in quantum information theory. But if you assume a low-entropy initial condition for the universe, you don’t really need any such fancy results — everything follows the path set out by Boltzmann years ago. And if you don’t assume that, you don’t really explain our universe. So the dilemma lives on.