Welcome to this week’s installment of the From Eternity to Here book club. Today we look at Chapter Two, “The Heavy Hand of Entropy.”
[By the way: are we going too slowly? If there is overwhelming sentiment to move to two chapters per week, that would be no problem. But if sentiment is non-overwhelming, we’ll stick to the original plan.]
Excerpt:
While it’s true that the presence of the Earth beneath our feet picks out an “arrow of space” by distinguishing up from down, it’s pretty clear that this is a local, parochial phenomenon, rather than a reflection of the underlying laws of nature. We can easily imagine ourselves out in space where there is no preferred direction. But the underlying laws of nature do not pick out a preferred direction of time, any more than they pick out a preferred direction in space. If we confine our attention to very simple systems with just a few moving parts, whose motion reflects the basic laws of physics rather than our messy local conditions, there is no arrow of time—we can’t tell when a movie is being run backward…
The arrow of time, therefore, is not a feature of the underlying laws of physics, at least as far as we know. Rather, like the up/down orientation in space picked out by the Earth, the preferred direction of time is also a consequence of features of our environment. In the case of time, it’s not that we live in the spatial vicinity of an influential object, it’s that we live in the temporal vicinity of an influential event: the birth of the universe. The beginning of our observable universe, the hot dense state known as the Big Bang, had a very low entropy. The influence of that event orients us in time, just as the presence of the Earth orients us in space.
This chapter serves an obvious purpose — it explains in basic terms the ideas of irreversibility, entropy, and the arrow of time. It’s a whirlwind overview of concepts that will be developed in greater detail in the rest of the book, especially in Part Three. As a consequence, there are a few statements that may seem like bald assertions that really deserve more careful justification — hopefully that justification will come later.
Here’s where I got to use those “incompatible arrows” stories I blogged about some time back (I, II, III, IV). The fact that the arrow of time is so strongly ingrained in the way we think about the world makes it an interesting target for fiction — what would happen if the arrow of time ran backwards? The straightforward answer, of course, is “absolutely nothing” — there is no prior notion of “backwards” or “forwards.” As long as there is an arrow of time that is consistent for everyone, things would appear normal to us; there is one direction of time we all remember, which we call “the past,” when the entropy was lower. It’s when different interacting subsystems of the universe have different arrows of time that things get interesting. So we look briefly at stories by Lewis Carroll, F. Scott Fitzgerald, and Martin Amis, all of which use that trick. (Does anyone know of a reversed-arrow story that predates Through the Looking Glass?) Of course these are all fantasies, because it can’t happen in the real world, but that’s part of the speculative fun.
Then we go into entropy and the Second Law, from Sadi Carnot and Rudolf Clausius to Ludwig Boltzmann, followed by some discussion of different manifestations of time’s arrow. All at lightning speed, I’m afraid — there’s a tremendous amount of fascinating history here that I don’t cover in anywhere near the detail it deserves. But the real point of the chapter isn’t to tell the historical stories, it’s to emphasize the ubiquity of the arrow of time. It’s not just about stirring eggs to make omelets — it has to do with metabolism and the structure of life, why we remember the past and not the future, and why we think we have free will. Man, someone should write a book about this stuff!
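For reference, the two definitions of entropy the chapter races past can be stated compactly (these are the standard textbook formulas, not the book’s own notation):

$$ \mathrm{d}S = \frac{\delta Q_{\mathrm{rev}}}{T} \;\;\text{(Clausius)}, \qquad S = k_B \ln W \;\;\text{(Boltzmann)}, $$

where $\delta Q_{\mathrm{rev}}$ is heat exchanged reversibly at temperature $T$, and $W$ counts the microstates compatible with a given macrostate. The Second Law is then the statement that $S$ never decreases in an isolated system.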
That would be true if gravity weren’t important, but it is. The smooth configuration of the early universe is very unnatural and low-entropy; a wildly inhomogeneous mess would be higher entropy.
Sean,
First my vote – one chapter/week.
Now a question. I’m trying to get my head around the idea of remembering the future and not the past in a decreasing-entropy universe.
Let’s say I’m measuring the interval between events by the β⁻ decay of some radioactive pile. Event A happens, 200 detector clicks happen, event B happens, in that order. Even if I lived in a universe where the pile was going to absorb electrons and anti-neutrinos (or would it be positrons and neutrinos? damn, it gets complicated), wouldn’t I still record the events as “A happened, 200 clicks, B happened”? That is to say, wouldn’t I still perceive B as occurring after A?
By the way, I finished the book; great read from beginning to end.
I vote for one chapter per week.
J– You can always measure the time between two events. But if there is no entropy gradient, and therefore no arrow of time, there would be no reason to claim that one event is “before” and the other “after” — all you would say is that they occurred a certain time apart.
“The smooth configuration of the early universe is very unnatural and low-entropy; a wildly inhomogeneous mess would be higher entropy.”
I thought it was the other way around. A homogeneous mixture is high entropy (gas particles evenly distributed around the room, the most statistically likely distribution). Inhomogeneous would be low entropy (all the gas particles in one corner).
What do you think of the idea that the universe began in a high entropy state, but as it expanded the ‘maximum allowed entropy’ increased, so that we get something like increased ‘room for entropy’. Then we allow the universe to increase in entropy over time without having to postulate it was in some unlikely low entropy state in the past. Picture it like gas particles spread evenly around the room, but we increase the size of the room faster than the gas can expand to keep up so we end up with all the gas in one corner. I don’t know if I’m wording that well but you’ve probably heard this idea before.
“all you would say is that they occurred a certain time apart”
Ok, thanks. I now get the part about the complete recording: all I can say is that 200 clicks occurred between event A and event B. But what if I sneak a peek at 150 clicks? What do I see then? And when I look again at 200 clicks, can I tell what the condition was at 100 and 150 clicks apart from what it is now?
“In the case of time, it’s not that we live in the spatial vicinity of an influential object, it’s that we live in the temporal vicinity of an influential event: the birth of the universe.”
Just incredible Sean! So important, so basic- and so profound.
I’m about 2/3 of the way through the book, reading it slowly- and enjoying it immensely. This is a very interesting conceptual work, tied to the history of science…very objective. You carefully note key loopholes and logical lapses.
Thanks for taking the time….
Jason– You’re neglecting gravity. When gravity matters, the high-entropy configurations of a dense fluid are very inhomogeneous, so the smooth early universe was very low entropy. There can’t be “increased room for entropy,” because the universe is an isolated system with a fixed set of states. (Or so we are assuming.) Detailed discussion in Chapter 13.
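For the no-gravity intuition in Jason’s comment, a quick counting sketch shows why the even split is overwhelmingly the most likely; this is a toy illustration with made-up numbers, not anything from the book:

```python
from math import comb, log

# Multiplicity of placing k of N gas particles in the left half of a box.
# Boltzmann's entropy is S = log W (in units where k_B = 1).
N = 100  # toy particle number; a real gas has more like 10**23

for k in (0, 25, 50):  # all-in-one-corner, uneven, and even splits
    W = comb(N, k)  # number of microstates with this split
    print(f"k = {k:3d}   W = {float(W):.3e}   S = log W = {log(W):.2f}")
```

Even at N = 100 the even split has about 10^29 times the multiplicity of the all-in-one-corner configuration; gravity is what spoils this counting for the early universe.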
J– Time still passes, even if there is no arrow. You can measure the time, no problem; you just can’t say that one direction is “before” and the other “after.” Some particular time coordinate is no better than a reversed one.
“Time still passes, even if there is no arrow”
Gaaa… I’m obviously not asking the question in the right way (if I’m so far off the mark that the space required to answer is a course and not an answer, I won’t be offended if you tell me)
But, damn it, if “time passes” it has to pass something, doesn’t it? I fully understand that I can flip a space-time diagram and come up with the same answer. That makes sense. What doesn’t make sense to me is that just because I can flip a diagram, that reflects what really happens. We all throw out irrational roots when they fail to make a prediction that works, and accept them when they make predictions that do work. Based on my interpretation, though, if entropy is decreasing, before should be after and after should be before, not that we should all agree that we just can’t tell (if that were the case I’d have no problem understanding).
Maybe it’s an “ideal observer” philosophical question. I really don’t care as long as the testable predictions work. Then again, based on your hypothesis, there wouldn’t be science in a decreasing-entropy universe. We would know the future. And maybe that’s just why I can’t get my head around it.
I probably shouldn’t have said “time passes” — “time still exists and can be measured” would have been better. Without the arrow defined by increasing entropy, time is like space — you can measure the distance between two points, but there’s no universal notion of one being “first” and the other being “last.”
That’s if there’s no arrow at all, corresponding to constant entropy. If entropy is decreasing everywhere in the same direction, we would always call the direction of increasing entropy “after.” Science would be perfectly okay, because before and after (and past and future) are defined by the direction of entropy increasing, not measured relative to it.
Is there an absolute measure of order? It seems to me that to measure entropy we have to always invent some arbitrary metric of orderliness which applies to the system at hand. Perhaps this is addressed in a later chapter?
With both Lost and Caprica starting up, I vote to stick to the original plan of 1 chapter per week.
Hi Sean (et al),
Three points here, mostly tangentially related. Is entropy a localised variable (i.e., a scalar field), or is it just a property of the system as a whole (such as “total charge,” or something like the ADM mass)? If we go to Special Relativity, how does entropy transform under the various types of Poincaré transformations available? Finally, you mention the entropy of gravity. I’ve come across this concept in a couple of conference talks, but from memory, both simply stated that “it’s hard to define.” Can you expand on this a bit?
Thanks =)
P.S. My vote’s for one chapter a week
P.P.S. Say hi to Eanna if you see him =)
Clifford– There is some arbitrariness, but it’s not completely arbitrary. Certain qualities are immediately macroscopically observable (temperature, pressure), while others are not. More in Chapter 8.
Jolyon– Entropy isn’t generally localized, although there are some special circumstances when you can define an “entropy density.” Because it’s not local, it doesn’t transform as a tensor field; in flat spacetime you can define a total entropy associated with a spacelike hypersurface, and that would transform as the time component of a four-vector. (Much like the momentum 4-vector.)
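Schematically (a sketch, valid only in regimes where a local entropy current $s^\mu$ can be defined, which is an assumption rather than a general fact):

$$ S[\Sigma] = \int_\Sigma s^\mu \, \mathrm{d}\Sigma_\mu, \qquad \text{compare} \qquad P^\mu = \int_\Sigma T^{\mu\nu} \, \mathrm{d}\Sigma_\nu, $$

so the total entropy on a slice $\Sigma$ plays the same role as $P^0$, the time component of the momentum four-vector.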
And yes, we don’t have a general understanding of the space of states when gravity is involved, so we don’t have a reliable formula for entropy. But there are special cases we do understand, like black holes and empty space. See Chapter 11.
Hi Sean,
I had some catching up to do as I just found your book over the weekend. I’m really enjoying it, thanks!
I found the idea that the early universe was extremely low in entropy especially interesting. Now, I was a humanities guy, so my knowledge of this stuff comes only from books on popular science. I’ve not seen other books that highlight low entropy as one of the early universe’s qualities. I’m not sure why; the idea seems on its face to be a deep and rich one!
Now I tend to associate low entropy with order and structure, and high entropy with disorder and randomness. So seeing that connection highlighted, the vision that comes to mind is of an early universe that displayed elaborate structure. Am I mistaken on that?
Best wishes!
Your examples are great, but I am doing my own to learn better. So I have a sun-heated rock in a bowl of cold water on my kitchen counter. If I leave it alone, later everything will be at more or less the same temperature. If I measure at intervals and do the math, the numbers go up, and can’t go down, and entropy goes up. So now I decide I don’t like how this is going at all. So I put the rock in my gas oven, microwave the bowl of water, turn up the room heat, get out the hair dryer, and start again. Still the entropy will go up overall. The energy from the burning gas, microwaves, and electricity all has to be considered. Plus I am burning a lot of calories waving the hair dryer and moving things around. (I wish!) Entropy is going up all the time, up until I throw the rock back out in the yard and let it affect the entropy out there.
Now, the temperature, measuring, and math are probably the Second Law. The system has had an object added to it and is definitely becoming more disorderly. Both ideas are increasing entropy, so it comes to the same thing, and I can move on to Chapter 3. (If this isn’t mostly right, I will be happy to read Chapter 2 over.) NOW it seems to me that the first rock-and-bowl group is the “closed system,” and the larger messy group should be the “open system,” but that doesn’t seem to be what the chapter says, as I am reading it. So for clarity, which system is open and which is closed?
Hi there,
just discovered this thread and haven’t caught up yet, so: sorry if I repeat something that has already been said.
Right now I’m reading the “spacetime” chapter (p. 74), but I’ve already found enough sentences that I would like to see in every textbook on GR or thermodynamics; maybe we could collect them on a web page dedicated to this purpose?
Example: p.50, “The correct deduction is not that general relativity predicts a singularity, but that general relativity predicts that the universe evolves into a configuration where general relativity itself breaks down”.
Of course we can accept that GR hints at something like black holes, and that there is convincing (if indirect) evidence for the existence of entities like black holes from cosmological observation, and that it is a valid topic to discuss; but that is often confused with the statement that GR “predicts” black holes.
Now to a question: Sean, you mention that the concept of free will is connected to the arrow of time. Right now I do not know how to make sense of that. Is it somehow connected to this line of thought: “I have free will; this is not contradictory to the existence of an omniscient entity. If you offer me different kinds of ice cream, I have the free will to choose chocolate or vanilla. After I choose, you know my choice, because now it is in your past. That is not a contradiction to my ability to choose freely. If a being exists for whom everything in my future lies in its past, it would already know all of my choices, despite the fact that I am, was, and will be free to choose.”
Is that somehow connected to what you have in mind? Is it explained in more detail in later chapters of your book?
I vote for 1 chapter a week.
I’m deeply confused about why a smooth distribution of particles in the early universe is low entropy but a smooth distribution in the late universe is high. It doesn’t seem right that the difference is the distance between the particles; that just boils down to how long it will take gravity or chance to affect the distribution, no? And if it is a matter of “how far” and “how long” doesn’t that assume some preferred scale?
As a poster said above, I won’t be offended to learn that I have missed the whole point. I’m once through the book (obviously too quickly) and look forward to reading it a second time in step with the blog.
Joe– It’s misleading to associate “low entropy” with “structure.” It’s true that we often associate low entropy with order and high entropy with disorder, but that’s a casual gloss that doesn’t stand up to closer scrutiny. Low entropy just means unlikely, even if the configuration is extremely simple — like all the air in a room squeezed into a single cubic centimeter. The early universe is in a very simple and structureless configuration, even if it is low entropy.
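To put a number on “unlikely” (a back-of-the-envelope estimate with hypothetical values, not figures from the book): for non-interacting molecules, the probability that all $N$ of them sit inside a sub-volume $V_1$ of a room of volume $V_2$, and the associated entropy gap, are

$$ P = \left(\frac{V_1}{V_2}\right)^{N}, \qquad \Delta S = N k_B \ln\frac{V_2}{V_1}. $$

Taking $V_2 \sim 3 \times 10^{7}\,\mathrm{cm}^3$ (a modest room), $V_1 = 1\,\mathrm{cm}^3$, and $N \sim 10^{27}$ gives $P \sim 10^{-7 \times 10^{27}}$: a configuration that is utterly simple, utterly structureless, and stupendously improbable.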
Susan– I think you’re on the right track. There is no universal choice of “open system” and “closed system”; a closed system is just one that is isolated from the outside world, so we can always turn a closed system into an open system by bringing it into contact with something else.
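To make the rock-and-bowl bookkeeping concrete (hypothetical numbers, purely for illustration): when a bit of heat $Q$ flows from the warm rock at temperature $T_h$ to the cooler water at $T_c$, the total entropy changes by

$$ \Delta S \approx Q\left(\frac{1}{T_c} - \frac{1}{T_h}\right) > 0 \quad \text{whenever } T_h > T_c. $$

For instance, $Q = 1000\,\mathrm{J}$, $T_h = 330\,\mathrm{K}$, and $T_c = 290\,\mathrm{K}$ give $\Delta S \approx +0.42\,\mathrm{J/K}$: the rock’s entropy goes down, the water’s goes up by more, and the total increases no matter how the kitchen is rigged.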
Tim– Let’s keep the Chapter 3 stuff for next week. And yes, the free will stuff is explained a bit more later, especially in Chapter 9. But the basic point is simple: according to the underlying laws of physics, the past and future are determined by the present. But we don’t know enough about the present to actually do a very precise prediction or retrodiction. For the past, however, we also have access to a low-entropy initial condition, which greatly restricts the space of possible things that could have happened. In the future there is no such boundary condition, so things are much more wide open; that’s what gives us the feeling that the past is settled while the future is still to be decided.
rww– It’s not that a smooth distribution is high-entropy in the late universe, it’s that a smooth distribution is high-entropy when gravity can be neglected. That’s certainly not the case in the early universe. Think of it this way: if a universe like ours were to contract rather than expand, we would not expect it to smooth out along the way. It would get lumpier as it contracted, entropy increasing all along the way. It’s only once we get to the very late universe, when everything has fallen into black holes which then begin to evaporate away, that the universe smooths out again.
I realized my question above was about Chapter One, oops. I did more investigating since I posted it, and found several places that said the universe is the only truly closed system. Maybe by the end of the book I will be thinking that this idea might not be such a sure statement; we shall see. Thanks.
“because before and after (and past and future) are defined by the direction of entropy increasing, not measured relative to it.”
Defined. Now it makes sense to me. Thanks! I really found that to be the toughest concept in the whole book.
re #44: Is it then the durability of the smoothness despite the influence of gravity that marks it as low entropy in the early universe?
rww– Not really; it’s the smoothness itself, not its durability. The Second Law says that entropy increases, but doesn’t tell us how fast; the rate of increase is a complicated thing that depends on circumstances. What marks the smooth early universe as low-entropy is its instability. A smooth configuration will become non-smooth under the influence of gravity, while a non-smooth configuration isn’t going to smooth itself out.
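The standard diagnostic for that instability (not spelled out in the chapter, so treat this as a supplementary sketch) is the Jeans analysis: in a self-gravitating fluid of density $\rho$ and sound speed $c_s$, density perturbations with wavelength

$$ \lambda > \lambda_J \simeq c_s \sqrt{\frac{\pi}{G\rho}} $$

grow rather than oscillate, so any sufficiently large smooth region is gravitationally unstable and only gets lumpier.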
Thanks Sean, that did it.
A chapter a week is fine with me, but if there are any shorter chapters coming up, or consecutive chapters that have a lot of dependency on each other, it might be beneficial to discuss both at once. Maybe you can look ahead and give us a few days’ notice if this occurs? Right now, I’m reading two books, and I am pacing myself with the book club because I want to see if I can adapt to it. But the book club’s Q&A is sort of beckoning to me, making me want to start plowing into the book some more, right now!
Entropy is sort of challenging to comprehend. I get those “ah-ha” moments and “wait, what?” moments on occasion. As we get further into the book, there may be times that you have to refer us back to the first few chapters or get all Entropy 101 up in here, in the club’s Q&A.