A paper just appeared in Physical Review Letters with a provocative title: “A Quantum Solution to the Arrow-of-Time Dilemma,” by Lorenzo Maccone. Actually just “Quantum…”, not “A Quantum…”, because among the various idiosyncrasies of PRL is that paper titles do not begin with articles. Don’t ask me why.
But a solution to the arrow-of-time dilemma would certainly be nice, quantum or otherwise, so the paper has received a bit of attention (Focus, Ars Technica). Unfortunately, I don’t think this paper qualifies.
The arrow-of-time dilemma, you will recall, arises from the tension between the apparent reversibility of the fundamental laws of physics (putting aside collapse of the wave function for the moment) and the obvious irreversibility of the macroscopic world. The latter is manifested by the growth of entropy with time, as codified in the Second Law of Thermodynamics. So a solution to this dilemma would be an explanation of how reversible laws on small scales can give rise to irreversible behavior on large scales.
The answer isn’t actually that mysterious, it’s just unsatisfying. Namely, the early universe was in a state of extremely low entropy. If you accept that, everything else follows from the nineteenth-century work of Boltzmann and others. The problem then is, why should the universe be like that? Why should the state of the universe be so different at one end of time than at the other? Why isn’t the universe just in a high-entropy state almost all the time, as we would expect if its state were chosen randomly? Some of us have ideas, but the problem is certainly unsolved.
So you might like to do better, and that’s what Maccone tries to do in this paper. He forgets about cosmology, and tries to explain the arrow of time using nothing more than ordinary quantum mechanics, plus some ideas from information theory.
I don’t think that there’s anything wrong with the actual technical results in the paper — at a cursory glance, it looks fine to me. What I don’t agree with is the claim that it explains the arrow of time. Let’s just quote the abstract in full:
The arrow of time dilemma: the laws of physics are invariant for time inversion, whereas the familiar phenomena we see everyday are not (i.e. entropy increases). I show that, within a quantum mechanical framework, all phenomena which leave a trail of information behind (and hence can be studied by physics) are those where entropy necessarily increases or remains constant. All phenomena where the entropy decreases must not leave any information of their having happened. This situation is completely indistinguishable from their not having happened at all. In the light of this observation, the second law of thermodynamics is reduced to a mere tautology: physics cannot study those processes where entropy has decreased, even if they were commonplace.
So the claim is that entropy necessarily increases in “all phenomena which leave a trail of information behind” — i.e., any time something happens for which we can possibly have a memory of it happening. So if entropy decreases, we can have no recollection that it happened; therefore we always find that entropy seems to be increasing. Q.E.D.
But that doesn’t really address the problem. The fact that we “remember” the direction of time in which entropy is lower, if any such direction exists, is pretty well-established among people who think about these things, going all the way back to Boltzmann. (Chapter Nine.) But in the real world, we don’t simply see entropy increasing; we see it increase by a lot. The early universe has an entropy of 10^88 or less; the current universe has an entropy of 10^101 or more, for an increase of more than a factor of 10^13 — a giant number. And it increases in a consistent way throughout our observable universe. It’s not just that we have an arrow of time — it’s that we have an arrow of time that stretches coherently over an enormous region of space and time.
This paper has nothing to say about that. If you don’t have some explanation for why the early universe had a low entropy, you would expect it to have a high entropy. Then you would expect to see small fluctuations around that high-entropy state. And, indeed, if any complex observers were to arise in the course of one of those fluctuations, they would “remember” the direction of time with lower entropy. The problem is that small fluctuations are much more likely than large ones, so you predict with overwhelming confidence that those observers should find themselves in the smallest fluctuations possible, freak observers surrounded by an otherwise high-entropy state. They would be, to coin a pithy phrase, Boltzmann brains. Back to square one.
Again, everything about Maccone’s paper seems right to me, except for the grand claims about the arrow of time. It looks like a perfectly reasonable and interesting result in quantum information theory. But if you assume a low-entropy initial condition for the universe, you don’t really need any such fancy results — everything follows the path set out by Boltzmann years ago. And if you don’t assume that, you don’t really explain our universe. So the dilemma lives on.
Dear Sean,
I’ve always enjoyed your posts on the AoT, and this one has finally encouraged me to formulate a comment.
I am wondering if there are two distinct questions. First, why was the early universe in a relatively low entropy state? We don’t know, but we do know that it was, giving rise to what you called a phenomenological arrow of time.
Second question. Why do teacups smash when we drop them but never spontaneously reform? Is this also explained by the universe being in a state of low initial entropy?
I have always wondered if it is relevant that we have the sense of being able to set ‘initial’ conditions for an experiment, but not to fix final conditions, and whether that is where the asymmetry comes from. Is the paper by Maccone telling us that we can only ever remember setting initial conditions for an experiment?
@Aaron Sheldon [17] “At the risk of sounding trite, but the only thing that prevents wave function collapse is willful ignorance.”
Trite maybe not, but glib more likely. Although your intended meaning can be guessed, a little more detail would have been helpful. As far as I’m aware, this blog is aimed at interested (and reasonably well informed) amateurs and students, as well as professionals.
Lorenzo and Huw,
when you say
(|spin up>+|spin down>)/sqrt(2) |Alice>
evolves into (|spin up>|Alice sees up>+|spin down>|Alice sees down>)/sqrt(2)
do you not already make an assumption about the initial state of the world being quite special in this case? In other words, why is the wave function not ‘fully entangled’ to begin with?
I guess this is similar to Sean’s objection, but in more general terms: if you use the Everett interpretation (MWI), then one issue that needs to be explained is why you can distinguish between |Alice> and |Bob> etc., which is related to the question of the preferred basis. This may be resolved by decoherence, but then you already (implicitly) assume an arrow of time, imho.
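For concreteness, here is a rough numerical sketch of the kind of evolution Lorenzo describes above (my own toy example in numpy, not anything from the paper): a spin starts uncorrelated with Alice, a CNOT-like interaction copies the spin into her memory, and while the global state stays pure the whole time, Alice’s reduced state goes from zero entropy to one bit.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])   # |spin up>   / |Alice ready>
ket1 = np.array([0.0, 1.0])   # |spin down> / |Alice sees down>

# Uncorrelated initial state: (|spin up> + |spin down>)/sqrt(2) |Alice>
spin = (ket0 + ket1) / np.sqrt(2)
psi0 = np.kron(spin, ket0)

# Measurement-like interaction (a CNOT copying the spin into Alice's memory),
# giving (|spin up>|Alice sees up> + |spin down>|Alice sees down>)/sqrt(2)
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
psi1 = cnot @ psi0

def alice_entropy(psi):
    """Von Neumann entropy (in bits) of Alice's reduced density matrix."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho_alice = np.trace(rho, axis1=0, axis2=2)   # partial trace over the spin
    evals = np.linalg.eigvalsh(rho_alice)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

print(alice_entropy(psi0))   # 0.0 -- no correlations yet
print(alice_entropy(psi1))   # 1.0 -- Alice is now entangled with the spin
```

The global state is pure both before and after; only the subsystem entropy changes, which is exactly why the choice of an initially uncorrelated (low-entropy) product state matters.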
I remember someone distinguishing between the Measurement of time and the Phenomenon of time – the phenomenon does not seem to be vectorial but rather a sequence of transformations.
Forgive me for inserting what will probably sound like ignorant queries into a fascinating debate, but I have a two part question (and yes I’m new here);
1. Isn’t entropy necessarily subjective due to Relativity? If I am travelling at close to the speed of light relative to the rest of the universe, will the universe not appear to slow down, and thus cool down and have more entropy than if I were travelling at a slower speed? Wouldn’t I see the universe speed up and thus lose entropy as I slowed down?
2. I’ve recently read The Black Hole War by Leonard Susskind. In it he posits that we actually live on the 2-dimensional event horizon of the universe, and not in the 3D interior that we perceive. If this is true, isn’t it possible that we see increasing entropy because the universe is expanding into thermal equilibrium? In other words, as matter/antimatter pairs fluctuate into existence near the event horizon and the matter particle falls into the event horizon of the universe, couldn’t this explain why we experience a net increase in entropy?
I don’t see this “arrow of time dilemma” as a dilemma at all. I am having problems seeing how the direction of the arrow of time and the evaluation of entropy are not inherently intertwined. The initial unitary definitions of the system define the direction of the arrow of time, and if you change the direction of the passage of time without changing the initial definitions of the other units, then haven’t you missed the first major step of reconciling the two mathematical systems you are trying to equate? In my understanding, “macroscopic entropy will always increase” is synonymous with “absolute values will always be positive”… but possibly I’m missing a few dozen steps and possibly the underlying question. This seems like a thought experiment gone horribly wrong to me…
Quick fly-by comments:
* Leonardo, I wasn’t thinking of the H theorem. You don’t need that theorem to argue that entropy increases, *if* you begin with the assumption of a low-entropy initial (not final) condition.
* Lorenzo, it doesn’t matter whether the state of the early universe is pure or not. The problem is that it’s a very finely-tuned state, one that looks macroscopically like a very small number of other states. That’s also weichi’s point.
* I think Huw’s points are good, and that wolfgang hits the nail on the head: starting with an uncorrelated observer/system quantum state is implicitly a low-entropy boundary condition.
* boreds, I think that the combination of Boltzmann’s understanding of entropy plus a low-entropy early universe is sufficient to explain irreversible processes in our everyday lives, including breaking teacups. Takes some steps to get there, of course.
* Notatheist, these are big questions, but roughly (1) not really, and (2) the universe may be holographic, but that doesn’t change the basic puzzle that it started in an anomalously low-entropy state.
From Spiv: “Michael Buice: maybe I’m misinterpreting, you’re saying you can predict the future population statistics but not the past ones? Or vice versa?”
Spiv, by asymmetry of population statistics I am referring to the fact that given a known configuration at one time, the inference about future configurations and past configurations is not symmetric. It is the difference in the questions “where can we go from here?” and “how could we have gotten here?”. The population statistics of the first could be described by a Fokker-Planck equation (for some systems) whereas the second would be the adjoint of that equation.
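To make that concrete, here is the standard textbook pair for a one-dimensional diffusion with drift μ(x) and noise σ(x) (generic notation, not tied to any particular system discussed here): the forward (Fokker-Planck) equation propagates the probability density p(x,t) of future configurations, while its adjoint, the backward Kolmogorov equation, propagates a conditional expectation u(x,s) toward earlier times.

```latex
\frac{\partial p(x,t)}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[\mu(x)\,p(x,t)\bigr]
  + \tfrac{1}{2}\frac{\partial^{2}}{\partial x^{2}}\bigl[\sigma^{2}(x)\,p(x,t)\bigr]
\qquad\text{(forward)}

-\frac{\partial u(x,s)}{\partial s}
  = \mu(x)\,\frac{\partial u(x,s)}{\partial x}
  + \tfrac{1}{2}\sigma^{2}(x)\,\frac{\partial^{2} u(x,s)}{\partial x^{2}}
\qquad\text{(backward / adjoint)}
```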
As a quick example, this is important in pricing options. The Black-Scholes option pricing formula is a “backwards” equation which calculates the expected value of an option based on the future value of the underlying asset. You can’t price an option correctly by asking what the possible stock values will be given some current configuration. The conditional probabilities are asymmetric between the past and the future.
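As a toy illustration of that “backwards” calculation (the standard textbook Black-Scholes formula for a European call, with made-up parameter values, not anything specific to this discussion): the price is a discounted expectation of the payoff over the distribution of the future value of the underlying, carried back to today.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call: a discounted expectation over
    the risk-neutral distribution of the future asset price, computed backward
    from expiry to today."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Spot 100, strike 100, one year to expiry, 5% rate, 20% volatility
print(round(black_scholes_call(100, 100, 1.0, 0.05, 0.20), 2))   # ~10.45
```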
Sean, if I understand your points correctly, you aren’t really concerned with the “arrow of time”, per se, which explains how and why entropy increases and was understood classically and now, quantum mechanically. Your issue seems to be the question “why is (or was) the early universe so damn special?”. Aren’t you mislabeling your question?
I don’t think I’m mislabeling the question. I think part of the arrow of time is perfectly well understood: why, if we begin in a low-entropy state, entropy tends to increase. Another part is not understood: why we began in a low-entropy state. The interesting unsolved part of the arrow-of-time puzzle is why the early universe was special.
So you’re thinking of the “arrow of time” like induction. The induction step is solved, but we need to establish the basis step, i.e. explain the early universe.
I can appreciate this, but it seems to conflate two separate questions in my mind, the first being the relationship between information and physical dynamics (which clearly explains temporal asymmetry, i.e. it has a natural direction) and the second the question of our universe’s configuration.
The relationship between information and physical dynamics does not explain the actual temporal asymmetry we observe in our universe. To do that, we need a boundary condition. Which is not to say that studying the relationship between information and temporal evolution isn’t interesting or important; it’s just insufficient to explain the observed arrow of time, which was my point in the original post.
I don’t understand your comment about a boundary condition. As far as what we observe right now, our current configuration is sufficient. Implicitly, all of our observations are conditioned upon our current configuration and as far as the fundamental dynamics of the universe is concerned, this is enough.
Shouldn’t we be introducing adjectives at this point for our arrows? You are worried about the so-called “cosmological arrow-of-time”, which is distinct from the thermodynamic arrow, aren’t you? This still seems to me to be a question of the configuration of our particular space-time and not a question of why we experience a directionality to the flow of time.
I’m so glad that this is being discussed here – what luck that it’s on a blog that I frequent! – as I’ve been thinking about Maccone’s paper for a few days now and don’t have real, live people who know more than I do to discuss it with.
I have a question about whether we can explain why the universe began with such a unique, low-entropy, finely tuned state using what Maccone’s paper says. The claim that processes that leave records through entanglement are strictly entropy increasing doesn’t seem to have been refuted. I wonder, then, whether it must be true that there can be no record of anything having occurred with an entropy lower than zero, as in anything happening before what we’d call t=0 (when the direction of the past is defined as Maccone defines it – by our memory). What would be the confusion if this explanation is used for why the early universe is special? The early universe must have the lowest entropy, since anything that came before it must’ve had an even lower entropy. Is there no reason to conclude that a low-entropy beginning will be finely tuned?
Michael, our current condition is enough to predict the future, but not enough to reconstruct the past. I’m certainly interested in the thermodynamic arrow of time. My point is that we wouldn’t have such an arrow that was consistent throughout the observable universe (or even consistent between me and you) if it weren’t for an early boundary condition.
Hi Sean
Is the connection between a low-entropy early universe and the irreversibility of processes in our everyday lives something you explore in The Book? I would look forward to seeing that argument laid out somewhere.
I did my best to explain it, especially in Chapter Nine. Parts of the story are incomplete, which is why papers like Lorenzo’s are useful (even if I don’t think they constitute a complete answer).
The early boundary conditions are the problem? With ~85% of the total system admittedly not understood or not accounted for, is it really prudent to attempt to account for the entire system’s entropy?
I’m starting to understand your argument a little better, but I’m not there yet. Are you saying you believe that the arrow of time is explained locally but not globally? There could be a planet of Benjamin Buttons out there? I don’t see how this can be the case in a universe that is mostly flat.
Or are you saying that the early universe’s entropy is a necessary component in explaining why gas molecules do not suddenly congregate in the corner of a room? I hope not because our present knowledge of stat mech and thermodynamics answers this question nicely as explained by Boltzmann, Szilard, & Jaynes.
I don’t understand your statement about reconstructing the past. We can do inference in either direction and that inference is exactly consistent with thermodynamics. The early universe doesn’t affect this.
Is it reasonable to say that:
(a) metric effects influence the magnitude of the arrow of time but not its direction
(b) if we reverse the arrow of time all geodesics reach the hot dense boundary (at which point with our current understanding they become untraceable)
(c) subject to the relativistic speed limit, any observer in any frame will see directionality of the arrow arising from (b) no matter what slice of spacetime the observer is studying
(d) the observer in (c) will note metric distortions to the point of (asymptotically) stopped clocks but will not see clocks ticking in the wrong direction (“backwards” in our case)
consequently the hot dense boundary is a universal candidate for “time 0” whether we are counting down towards it or counting up from it, no matter what frequency standards we are using?
Universal candidates for anything are spooky.
P.S.: JR @ 38 – “[W]hy isn’t time measured with a ruler instead of a clock?” — using geometrized units (for example, G = c = 1) we have a ruler useful for all dimensions, with marks at one second intervals. Because physical processes which produce frequency (in Hz) drive clocks, and metric distortions cause frequencies of nonlocal clocks to appear to run fast or slow, nonlocal rulers marked in seconds can appear to be too short or too long. “Local” here means that the observer and clock fully agree on the frequencies of everything they both can see, or alternatively that the observer and ruler agree on the lengths and areas (and their reciprocals) of everything they both can measure; this is a question of inertia (because of G and c) rather than spatial or temporal proximity.
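(Spelling out the conversion behind that ruler, as a quick worked example: with c set to 1, a time of one second corresponds to the length light travels in one second,

```latex
1\ \mathrm{s} \;\leftrightarrow\; c \times 1\ \mathrm{s} \approx 2.998 \times 10^{8}\ \mathrm{m},
```

so the one-second marks on the geometrized ruler sit roughly 3 × 10^8 m apart.)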
Dear Sean, I think you lost me. I’m emphatically NOT just saying that the initial state of the universe is pure (which, as you say, wouldn’t be a satisfactory solution). I’m saying that the state of the universe is PURE AT ALL TIMES (at least from the point of view of the superobserver).
As you point out yourself in your blog entry, the problem with the time arrow is not so much that the initial state is a low entropy one, but rather that it is a much lower entropy state than today’s. I give a quantum mechanism that shows how the universe’s entropy is CONSTANT:
it was zero initially, and it is still zero now (at least from the point of view of the superobserver).
Why doesn’t it appear so to us? Because we are entangled subsystems of the universe, so correlations between us and our environment give us the SUBJECTIVE impression that the universe is in a high entropy state. In addition, I also gave an argument for why we feel that this entropy is also increasing.
I hope I can recapture your attention to end this very interesting debate. Thank you,
Lorenzo
ps I apologize to the other people commenting in this blog: I’m trying not to increase the entropy of the conversation too much by limiting my replies to Sean and Huw’s questions, since I was discussing privately before with them.
The tools we are using to address the problems here are human minds, which, as epiphenomena of some awesome chemistry, are governed by thermodynamics. Does this place inherent limits on our ability to discuss this issue?
Specifically, is dS/dt >= 0 telling us as much about the nature of time as it is about entropy? Does the *absolute* value of S as perceived by us tell us the absolute value of T?
So: Given that there are more high-entropy states than lower-entropy ones, is the flow of time the same thing as a statistical progression to higher-entropy states? And is the perception of this being a progression from low time to higher time purely because we have thermodynamic minds?
If this were the case, then we would expect to see that the past universe is in a lower entropy state, while this would not be true for a super-observer whose mind is not constructed of thermodynamic processes running within the universe? Such an observer would perceive the universe as a set of all the states that the universe has occupied, and would make no distinction between the ‘Absolute Time’ of a state and the ‘Absolute Entropy’ of a state?
Or is this all my utterly trivial misunderstanding?
I still think people get confused about some simple points.
Entropy is a measure of the amount of information needed to describe the microstate of a system (i.e., the greater the entropy, the more information is needed to describe the exact microstate of the system).
It is also a measure of uncertainty, in that if I have a variable X that can take some value, and there is a higher probability of picking one particular value over the others, then I have less uncertainty and therefore less entropy. If I am equally likely to pick any value, then uncertainty is very high, and in fact has been maximized, and equivalently, so has the entropy.
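A quick sketch of what I mean (Python, with arbitrary example distributions of my own choosing):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H = -sum_i p_i log2 p_i, in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]            # convention: 0 * log 0 = 0
    return float(-np.sum(p * np.log2(p)))

# Uniform over 8 values: maximal uncertainty, H = 3 bits
print(shannon_entropy([1/8] * 8))

# Heavily peaked on one value: far less uncertainty, H ~ 0.75 bits
print(shannon_entropy([0.9] + [0.1 / 7] * 7))
```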
Of note this has some interesting implications in terms of our use of numbers in the real world, where because we favor small numbers in almost all human endeavors, the typical entropy of any set of random variables is small.
In the physical world, what is particularly interesting is that if all particles can be described by the fundamental particles of the standard model (i.e., states of the standard model), and we place the additional constraint that the number of observable particles in the universe is always countable, then we can understand that the part of the universe we can observe at our energy scale is always in a state of low entropy, since any random observable particle is much more likely to be in its most stable state.
Does this mean that entropy of the universe is decreasing as the universe cools?
The answer is no because of the expansion of space and the intrinsic particle content of the vacuum itself, so although entropy of (relatively) strongly interacting observable particles might be decreasing, entropy of the relatively weakly interacting unobservable particles is steadily increasing.
Granted, this description ignores the classical notions of particle position and momentum; however, even in QFT, where the number and type of particles are more important than classical variables, we still see that observable particles must have a lower entropy than the virtual particles that occur in loop diagrams responsible for vacuum polarization, simply due to natural constraints on observable particle momentum and center-of-mass position.
Hopefully these comments will stir some thoughts.
Just Learning,
What are the specific “confusions” you are trying to address? I find your post interesting, but don’t see its relevance to the rest of the thread. What am I missing?
Are you perhaps arguing that in the absence of expansion of the universe, entropy would actually be decreasing?
“then we can understand that the part of the universe we can observe at our energy scale is always in a state of low entropy, since any random observable particle is much more likely to be in its most stable state.”
So are you saying that 1 m^3 of empty space has higher entropy than 1 m^3 of empty space that also contains a gas of protons? I would have thought that the space with the protons has higher entropy … just because you add some protons, you shouldn’t change the possible states for virtual particles in the vacuum, should you? Or am I being really dumb here? I only know a little baby QFT, so the latter is quite possible.
No, because there is no equivalency between 1 m^3 of empty space and 1 m^3 of empty space with a gas of protons in the way you stated the question. However, if you restate the question by saying that the two systems must have equivalent energy, then the space without protons would have higher entropy, since it isn’t constrained to define a certain number of degrees of freedom as protons.