A paper just appeared in Physical Review Letters with a provocative title: “A Quantum Solution to the Arrow-of-Time Dilemma,” by Lorenzo Maccone. Actually just “Quantum…”, not “A Quantum…”, because among the various idiosyncrasies of PRL is that paper titles do not begin with articles. Don’t ask me why.
But a solution to the arrow-of-time dilemma would certainly be nice, quantum or otherwise, so the paper has received a bit of attention (Focus, Ars Technica). Unfortunately, I don’t think this paper qualifies.
The arrow-of-time dilemma, you will recall, arises from the tension between the apparent reversibility of the fundamental laws of physics (putting aside collapse of the wave function for the moment) and the obvious irreversibility of the macroscopic world. The latter is manifested by the growth of entropy with time, as codified in the Second Law of Thermodynamics. So a solution to this dilemma would be an explanation of how reversible laws on small scales can give rise to irreversible behavior on large scales.
The answer isn’t actually that mysterious; it’s just unsatisfying. Namely, the early universe was in a state of extremely low entropy. If you accept that, everything else follows from the nineteenth-century work of Boltzmann and others. The problem then is, why should the universe be like that? Why should the state of the universe be so different at one end of time than at the other? Why isn’t the universe just in a high-entropy state almost all the time, as we would expect if its state were chosen randomly? Some of us have ideas, but the problem is certainly unsolved.
So you might like to do better, and that’s what Maccone tries to do in this paper. He forgets about cosmology, and tries to explain the arrow of time using nothing more than ordinary quantum mechanics, plus some ideas from information theory.
I don’t think that there’s anything wrong with the actual technical results in the paper — at a cursory glance, it looks fine to me. What I don’t agree with is the claim that it explains the arrow of time. Let’s just quote the abstract in full:
The arrow of time dilemma: the laws of physics are invariant for time inversion, whereas the familiar phenomena we see everyday are not (i.e. entropy increases). I show that, within a quantum mechanical framework, all phenomena which leave a trail of information behind (and hence can be studied by physics) are those where entropy necessarily increases or remains constant. All phenomena where the entropy decreases must not leave any information of their having happened. This situation is completely indistinguishable from their not having happened at all. In the light of this observation, the second law of thermodynamics is reduced to a mere tautology: physics cannot study those processes where entropy has decreased, even if they were commonplace.
So the claim is that entropy necessarily increases in “all phenomena which leave a trail of information behind” — i.e., any time something happens for which we can possibly have a memory of it happening. So if entropy decreases, we can have no recollection that it happened; therefore we always find that entropy seems to be increasing. Q.E.D.
But that doesn’t really address the problem. The fact that we “remember” the direction of time in which entropy is lower, if any such direction exists, is pretty well-established among people who think about these things, going all the way back to Boltzmann. (Chapter Nine.) But in the real world, we don’t simply see entropy increasing; we see it increase by a lot. The early universe has an entropy of 10^88 or less; the current universe has an entropy of 10^101 or more, for an increase of more than a factor of 10^13 — a giant number. And it increases in a consistent way throughout our observable universe. It’s not just that we have an arrow of time — it’s that we have an arrow of time that stretches coherently over an enormous region of space and time.
This paper has nothing to say about that. If you don’t have some explanation for why the early universe had a low entropy, you would expect it to have a high entropy. Then you would expect to see small fluctuations around that high-entropy state. And, indeed, if any complex observers were to arise in the course of one of those fluctuations, they would “remember” the direction of time with lower entropy. The problem is that small fluctuations are much more likely than large ones, so you predict with overwhelming confidence that those observers should find themselves in the smallest fluctuations possible, freak observers surrounded by an otherwise high-entropy state. They would be, to coin a pithy phrase, Boltzmann brains. Back to square one.
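(To put “much more likely” in quantitative terms, using nothing beyond a textbook statistical-mechanics estimate rather than anything specific to Maccone’s paper: the relative probability of a spontaneous fluctuation that lowers the entropy by ΔS is exponentially suppressed,

$$ P \sim e^{-\Delta S/k_B}, $$

so a fluctuation that assembles a single brain is favored over one that assembles an entire low-entropy universe by an enormous exponential factor.)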
Again, everything about Maccone’s paper seems right to me, except for the grand claims about the arrow of time. It looks like a perfectly reasonable and interesting result in quantum information theory. But if you assume a low-entropy initial condition for the universe, you don’t really need any such fancy results — everything follows the path set out by Boltzmann years ago. And if you don’t assume that, you don’t really explain our universe. So the dilemma lives on.
The low initial entropy state of the universe can be explained by everyone’s favorite anthropic principle – it takes a lot of different macroscopic states to have observers such as us evolve.
“I wasn’t thinking of the H theorem. You don’t need that theorem to argue that entropy increases, *if* you begin with the assumption of a low-entropy initial (not final) condition.”
Dr. Carroll,
Could you please elaborate more or give a reference that explains the above statement?
At some point one would need to connect the microphysics with entropy, and that is done via the Boltzmann H theorem, which tells us that the Boltzmann entropy has the right concavity for thermodynamics. I really can’t see how one could argue that S can only increase — i.e., derive the 2nd law of thermodynamics — with just a boundary condition. What I thought you were arguing before is that if one assumes both, then the problem is solved.
Best,
Leonardo
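For reference, the textbook statement behind the H theorem mentioned above: for a dilute gas with one-particle distribution f(x, v, t), Boltzmann defined

$$ H(t) = \int f(\mathbf{x},\mathbf{v},t)\,\ln f(\mathbf{x},\mathbf{v},t)\;d^3x\,d^3v, \qquad \frac{dH}{dt} \le 0, \qquad S = -k_B H \ (\text{up to an additive constant}), $$

where the inequality holds under the molecular-chaos (Stosszahlansatz) assumption. The time asymmetry enters through that assumption, which is one way of seeing why a boundary condition is still needed on top of the reversible microscopic dynamics.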
Just a few quick remarks to add to the mix.
Entropy was found to be a useful quantity in classical thermodynamics, and was later extended to classical general relativity, where in some sense black holes are the most entropic states.

To answer Lorenzo: unitarily evolving quantum mechanics is only valid in idealized conditions; we are no more entangled systems than we are Bose-Einstein condensates or coherent states. That is why the quantum computer is so difficult to make.

It might be the case that if everything were quantum mechanical you could take the von Neumann entropy to be zero (ignoring how to include the measurement process and quantum uncertainty), but then entropy wouldn’t be a useful concept.

When we have a quantum gravity theory it will be interesting to see how the entropy concept is modified, or what other quantity replaces it to explain the arrow of time.

Incidentally, when you first introduce the concept of entropy increase you say it corresponds to “loss of information”, i.e., you knew all the perfume was in the bottle, then it spreads all over the room. Hawking’s original calculation of evaporating black holes is consistent with this: entropy increases and information is lost. I think people who claim information is not lost in black hole evaporation are likewise emphasizing quantum aspects where entropy appears trivially constant – but nearly all of them still want to keep entropy increasing during the evaporation process, which seems inconsistent with the concepts we gleaned from classical systems.
If we consider cell division as generating binary trees in spacetime, it would be rather odd to bring in entropy and the special character of the initial gravitational state (Penrose). The ‘arrow’ is in the direction of more cells – if one is drawing a diagram – but we are not inclined to think that cells might go backwards in time and perhaps become anti-cells. The diagram should not suggest that time is a degree of freedom where we can play with the direction of the arrows. We do not see the film run backwards and claim it is just as valid as running forwards, as we would with planetary orbits.

The question is whether the Minkowski structure of spacetime says all that needs to be said about the phenomenon of time – can it be completely reduced to geometrical structure? As long as there is scattering we need an operator, and one operation follows another, so we connect the arrows to make a world line of an object. So there is something in addition to the geometry, something more complicated than space and time being on the same footing, and thus puzzling about an ‘arrow of time’. They are on the same footing as far as comoving frames are concerned, but the ordering of events along a world line is an ordering of operators acting on the object – possibilities turning into actual events – and we draw the arrows to indicate the sequence of operators. It does not seem to be necessary to invoke increasing entropy to impose an order that nature otherwise does not care about.
Just my two cents in this very very interesting thread.
I believe the disagreement between Sean and Lorenzo is not just about “subjective” versus “objective” understandings of entropy. It might be much more about what we call time.

Sean seems to view time as an objective geometrical dimension, and simply asks why we have entanglement/correlations in only one direction, call it past to future. Then, logically, he sees Lorenzo’s paper as solving nothing.

Lorenzo might have a more timeless approach, in which time is a subjective construction of observers related to memory increase. He then explains that any increase of memory is related to entanglement, and that disentanglement cannot be memorized, so it is never observed. Time as we live it is just a set of points/instants in a timeless set of states, ordered and made subjectively continuous by a uniform increase of entanglement/memory. In this view, Lorenzo’s explanation is a real advance in our understanding of the continuous ordering of the set of instants, via the identification he makes between entanglement and memory increase.
Much to think about anyway!
The arrow of time is still a puzzle because Thomas Kuhn was right. It takes major revolutions in science to change the minds of scientists. Many intelligent people (e.g., Karl Popper, Joe Rosen, etc.) understand time. They know that time is abstract. They know that it does not exist and that nothing can move or change in spacetime. They know that time travel is hogwash because time is not a variable, that time cannot change. They know that spacetime is a myth and that the arrow of time is an oxymoron.
This is the reason that Popper compared Einstein to good old Parmenides who, along with his famous pupil, Zeno of Elea, promoted the idea that change is impossible, contrary to their own observations. In Conjectures and Refutations, Popper called spacetime, “Einstein’s block universe in which nothing happens.” Nobody in the physics community ever dared to contradict Popper because they know he would tear them a new one. I am not making this up. Check it out for yourself.
Conjectures and Refutations:
http://www.stephenjaygould.org/ctrl/popper_falsification.html
Nasty Little Truth About Spacetime:
http://www.rebelscience.org/Crackpots/notorious.htm
I noticed this discussion a few days ago, after I had been asked by a popular science journal what I think of the paper. I like its attempt to use universal quantum concepts throughout, but I do not agree with most of its conclusions. The journalist had asked me a few specific questions, the first three of which were: 1. What’s new about the work? 2. Does it get us any closer to solving the arrow of time dilemma? 3. Do you think quantum mechanics can help us resolve the arrow of time dilemma? Since I don’t like to criticize somebody anonymously, I shall here repeat my comment as a whole (some of it is equivalent to what has already been said in this discussion). My formulation in the first sentence may sound a bit harsh (sorry, Lorenzo), but I intended to inform the journalist of my true opinion:
I have re-read the paper by Lorenzo Maccone, and this has confirmed my first reaction: “How could this paper ever be accepted by PRL?” Since asymmetric facts are perfectly compatible with symmetric laws, there is actually no real dilemma or paradox. On a closer look, the paper is a bit tricky, though, and this situation may have confused the referees. The main conclusion (as I understand it) is that entropy only SEEMS to increase, since we cannot remember entropy-lowering phenomena. However, this explanation of the arrow of time – if true – would already presume an arrow to apply to our “historical brain” (memory of the past only).
The essential “rigorous” thought experiment assumes that Alice’s lab is perfectly isolated from the arrow of the external world. This is unrealistic because of decoherence that must affect a macroscopic Alice, as the author admits at some point. A similar perfect isolation has been discussed for interference (two-slit) experiments with conscious objects: in order to show interference phenomena, these objects must forget their passage through the slits. So this part of the argument is unrealistic although consistent, even in the presence of external observers who register the experiment and who don’t have to forget anything that was measured. The second part of the claim, namely that phenomena with decreasing entropy cannot be remembered by external observers, is therefore unjustified.
The author says in the first paragraph that irreversibility has been claimed to arise from decoherence. This is wrong: decoherence is an irreversible process that REQUIRES an arrow of time. In fact, he remarks at the end of page 3 (published version?) that correlations “build up” continuously, thus leading to decoherence.
The entropy considerations in connection with Eq. (2) are not even specifically quantum. The sum of the classical entropies of all subsystems is in general higher than that of the total system if the latter were calculated from a statistical ensemble for states of the whole. For example, Boltzmann’s entropy is DEFINED precisely from the independent-particle distribution, that is, by neglecting all correlations, and its increase can be described deterministically as the transformation of information about the particles (negentropy) into information about correlations. Would you say that entropy appears to increase only because we cannot remember cases in which all particles hurry to concentrate in one corner of the vessel, or where heat flows from the cold to the warm? On the other hand, entropy fluctuations in small systems can be well observed and remembered. The formalism for the dynamics of statistical correlations is precisely the same as for quantum entanglement.
Well – this paper may be a bit mind-boggling, but I don’t think it is serious science. So my answer to your first two questions is negative. The answer to the third question is a partial “yes”: Quantum theory must be essential to correctly formulate the solution of the problem (probably as an appropriate cosmic initial condition for the universal wave function).
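The subadditivity point in the comment above (the sum of subsystem entropies exceeding the entropy of the total state) is easy to check numerically. Here is a minimal sketch, my own illustration rather than anything from the paper, using plain numpy: a two-qubit Bell state has zero total von Neumann entropy, while each qubit on its own carries ln 2.

    import numpy as np

    def von_neumann_entropy(rho):
        """S(rho) = -Tr(rho ln rho), in nats."""
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]        # discard numerical zeros
        return float(-np.sum(evals * np.log(evals)))

    # Bell state |phi+> = (|00> + |11>)/sqrt(2)
    psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
    rho = np.outer(psi, psi)

    # Reduced density matrix of qubit A: partial trace over qubit B
    rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

    print(von_neumann_entropy(rho))    # ~0.0   : the total state is pure
    print(von_neumann_entropy(rho_A))  # ~0.693 : ln 2, the subsystem is mixed

So the two subsystem entropies sum to 2 ln 2 while the total is zero, which is the quantum counterpart of the classical correlation bookkeeping Zeh describes.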
Any evolution involves a process of steps that must be followed precisely in order to reach the exact current state. This means that the entropy of the string of variables that describes the evolution of a random process is extraordinarily high, i.e., there is a very high information demand in order to describe the history of a highly evolved system.
To say that I have erased memory seems to be saying that a certain number of variables in a string of variables have been removed. This reduces the length of the variable string, and thus its history. This would imply that a shorter string of variables represents a shorter history.
In this construction, the arrow of time is essentially related to the length of a variable string. To compare two strings, make the number of symbols equal by using zeros to lengthen the shorter string:

ABCDEFGHIJKLM
000000000ABCD

The entropy of the first string is much higher than that of the second. The first string also has a longer history than the second, since we can effectively ignore the run of zeros in the second string. This suggests that any process that can truly erase history does reduce entropy.
However, suppose I write my variable chain like this:
000000000JKLM

In this case, I have changed the initial conditions of the shorter string. In this situation I would argue that this third string is actually equivalent to the first string, since I am starting with an initial condition that is in fact highly evolved, and thus encodes an implied history (i.e., the string of variables preceding J).
What this suggests is that there is an implied history in our choice of initial conditions, and we cannot pick our initial conditions arbitrarily, since some initial conditions are equivalent to systems with high entropy.
This means we can always time order our initial conditions.
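To make the string comparison concrete, here is a small sketch (my own illustration; it reads “entropy of a string” as the Shannon entropy of the string’s symbol frequencies, which is one plausible interpretation of the comment’s usage):

    from collections import Counter
    from math import log2

    def shannon_entropy(s):
        """Shannon entropy (bits per symbol) of the character distribution of s."""
        n = len(s)
        return -sum(c / n * log2(c / n) for c in Counter(s).values())

    print(shannon_entropy("ABCDEFGHIJKLM"))  # ~3.70 bits: 13 distinct symbols
    print(shannon_entropy("000000000ABCD"))  # ~1.51 bits: dominated by the zeros
    print(shannon_entropy("000000000JKLM"))  # ~1.51 bits: same symbol statistics

On this reading the second and third strings have identical symbol-level entropy, so the claimed equivalence of the third string to the first has to rest on the implied history, not on the symbol statistics alone.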
Dear Sean:
Ken, at Open Parachute, is a big fan of yours. He suggested I ask you the following question, and referenced this thread. Here’s the question I have asked at “Physics Forums” http://www.physicsforums.com/showthread.php?t=333160
How would particles behave if “unobserved” systems could go backwards in time as well as forward? I label this notion the “pendulum of time.” In essence, a system of entangled entities would undergo some sort of “random walk” in time and space, with no more preference for a path “forward” in time than for a preference to go “eastward” in space.
I imagine that such a system would “oscillate” in time and space until something caused it to decohere. After decoherence, the system would begin to oscillate again, moving forward in time and then back to the moment of decoherence, like a pendulum that swings back and forth and side to side, but always goes past the bottom of the swing, which, in this model, is the previous moment of decoherence. The “pendulum” could not revert to a point in time BEFORE the decoherence, but it could “explore” all physically possible paths AFTER it. (Sorry to use English to describe something that would be unambiguous as a mathematical formula, but that’s why I’m here asking for help.)
My core question is whether such a “pendulum of time” would be consistent with double-slit experiments. As I understand the findings of modern physics, a particle passing through two slits exhibits a wave-like interference pattern. I am guessing that an unobserved particle in a “pendulum of time” model MIGHT produce exactly the same interference pattern. An observed particle, by contrast, would decohere after a single “swing” and would not produce an interference pattern.
Scott– Sorry for being away from the thread for a while. I’m not sure how to answer your question, because I don’t really know what it would mean. In particular, I don’t know what it means to “go backwards in time as well as forward.” In conventional physics, objects exist exactly once at every moment of time (if they exist at all). The “direction in which they go” is set by convention, although it’s often convenient to use the direction of increasing entropy to determine that convention. You would need a radical departure from conventional physics, and one that would need to be spelled out in much greater detail, to make sense of the notion of “oscillating in time.”
Perhaps the secret to time is that particles are oscillators. Velocity is not the only way to meld space and time. Without particles, all we have is geometry – hence the question of why time isn’t measured with a ruler. If algebra defined both spacetime and oscillators we would be in business – we would not want to put in this sort of stuff by hand. Then we might sharpen exactly what we mean by an ‘arrow of time’.
Maybe a lesser mind and a keener eye are needed. Time reversal at the quantum scale may be an illusion. I am not familiar with the aforementioned experiment, but the quantum state is a small (pun intended) part of the real world that is not reversible. When two cars pass each other going in opposite directions, their arrows of time only appear to be reversed. Both are traveling into the future, even though they are traveling in different directions.
Sean, I’m a layman who can’t express myself mathematically, but let me try to clarify what I mean by “go backwards in time.” It doesn’t need a radical departure from conventional physics–just a willingness to get serious about treating time the same as space.
Start with an extremely simplified “unobserved system” that consists of a single particle moving at some initial velocity towards two slits, with some kind of screen on the other side that can detect the particle. Instead of treating time as the independent variable and three spatial dimensions as dependent variables, specify a new parametric variable “p.” The particle’s position and velocity are defined at p=0. Let the particle’s position and velocity at p=1 be randomly selected from all possible states within the limits of Heisenberg’s uncertainty principle, and then repeat in a “random walk” pattern without any preference for increasing t. Continue this until something causes the system to “decohere.”
If we were to plot the path of this parameterized particle in x, y, and t, we would see a “fractal” shape – I think it would look a bit like an old-fashioned shaving brush.
As far as I can tell, this method would generate the interference patterns one sees in a double-slit experiment. In a system with a detector on one of the slits, the system would decohere the first time the p parameter randomly got the particle out to the detector. The moment of decoherence would reset the system with the particle now located near one slit, and all possible paths of that particle would now wind up on the screen at the far end looking just like a single particle going through a single slit.
By contrast, a system without a detector at one of the slits would yield an infinite number of different paths from the original starting point to the screen at the far end – but those paths would show just the kind of interference patterns that make double-slit experiments so interesting.
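A toy numerical version of the proposal is easy to write down. The sketch below is entirely hypothetical: the step sizes sigma_x and sigma_t and the lack of any stopping rule are my own choices, not anything from the comment. It simply performs a random walk in position and coordinate time driven by the parameter p, with no built-in preference for increasing t:

    import numpy as np

    rng = np.random.default_rng(0)

    def pendulum_walk(steps=10000, sigma_x=0.1, sigma_t=0.1):
        """Toy 'pendulum of time': a random walk in position x and
        coordinate time t, both driven by the parameter p, so that t
        is as free to decrease as to increase."""
        x = np.cumsum(rng.normal(0.0, sigma_x, steps))
        t = np.cumsum(rng.normal(0.0, sigma_t, steps))
        return x, t

    x, t = pendulum_walk()
    print("net drift in t:", t[-1])     # hovers near zero: no arrow built in
    print("rms spread in t:", t.std())  # grows with p, just as it does for x

Whether such a walk actually reproduces two-slit interference is exactly the open question of the comment; the sketch only sets up the kinematics and carries no amplitude or phase structure, so by itself it cannot produce interference.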
The folks at Physics Forum have provided a link to this article on Time Symmetry which LOOKS like it’s working in the same general area. They don’t characterize the physics as a “pendulum of time,” but they’re asking the same questions and getting to some of the same answers I’ve been groping towards. The problem is, they’ve written 58 pages of math I can’t follow. Can you decipher this for us laymen, Sean? (Maybe a whole new post, hint, hint?)
http://arxiv.org/PS_cache/arxiv/pdf/0706/0706.1232v1.pdf
@Louis Savain: You might want to check your sources there. Granted, this is an “ad-hominem” attack, but from the rebelscience.org website, you might want to check out this link: http://www.rebelscience.org/agenda.htm
Also this one: http://www.rebelscience.org/Seraphim/Physics.htm
Who needs Einstein when you’ve got Isaiah?
Louis Savain?!?
Wow, that brings back spr flashbacks….
About the question of the arrow of time, I have always wondered why a beautiful result like that due to Elliott Lieb and Barry Simon has always been overlooked. This is a theorem implying that quantum mechanics manifests an instability as the number of particles goes to infinity, with the system losing quantum coherence. The situation devised by Lieb and Simon is similar to the one seen in thermodynamics, and so it is really effective when quantum fluctuations become smaller and smaller as the number of particles increases. This situation is not always realized; indeed, one does observe large-scale coherence.

Such an instability seems to support Lorenzo’s view, and what Zeh above claimed is essential to understand the arrow of time, that is, quantum mechanics. Indeed, studies of the Loschmidt echo can also give experimental support to the Lieb-Simon theorem, even if, in this particular case, there are other competing views worth pursuing.
Sean, thanks for your response, and I will check out that chapter when I can.
For what it’s worth, I think the connection between low entropy initial conditions for the universe, and the sense of being able to fix initial conditions for local experiments would make for an awesome blog post.
I’ve no problem with the mystery of the low entropy early universe—it’s a mystery and something maybe we can figure out. But why do local experiments have the same arrow of time? I think it’d be a fun blog post to explore that and explain where there are gaps.
It is interesting that nature provides unstable particles, and we want to create before we annihilate – that recognizes an arrow of time.
There’s a new paper on the subject:
http://arxiv.org/abs/0909.1726
Comment on ‘Quantum resolution to the arrow of time dilemma’

Recently, a substantial amount of debate has grown up around a proposed quantum resolution to the ‘arrow of time dilemma’ that is based on the role of classical memory records of entropy-decreasing events. In this note we show that the argument is incomplete and furthermore, by providing a counter-example, argue that it is incorrect. Instead of quantum mechanics providing a resolution in the manner suggested, it allows enhanced classical memory records of entropy-decreasing events.