Welcome to this week’s installment of the From Eternity to Here book club. Now for something of a palate-cleanser, in the form of Chapter Nine, “Information and Life.”
Excerpt:
Schrödinger’s idea captures something important about what distinguishes life from non-life. In the back of his mind, he was certainly thinking of Clausius’s version of the Second Law: objects in thermal contact evolve toward a common temperature (thermal equilibrium). If we put an ice cube in a glass of warm water, the ice cube melts fairly quickly. Even if the two objects are made of very different substances—say, if we put a plastic “ice cube” in a glass of water—they will still come to the same temperature. More generally, nonliving physical objects tend to wind down and come to rest. A rock may roll down a hill during an avalanche, but before too long it will reach the bottom, dissipate energy through the creation of noise and heat, and come to a complete halt.
Schrödinger’s point is simply that, for living organisms, this process of coming to rest can take much longer, or even be put off indefinitely. Imagine that, instead of an ice cube, we put a goldfish into our glass of water. Unlike the ice cube (whether water or plastic), the goldfish will not simply equilibrate with the water—at least, not within a few minutes or even hours. It will stay alive, doing something, swimming, exchanging material with its environment. If it’s put into a lake or a fish tank where food is available, it will keep going for much longer.
This chapter starts with something very important: the relationship between entropy and memory. Namely, the reason why we can “remember” the past and not the future is that the past features a low-entropy boundary condition, while the future does not. I don’t go into great detail about this, and we certainly don’t talk very specifically about how real memories are formed in the brain, or even in a computer. But when we get to the next chapter, about recurrences and Boltzmann brains, it will be crucial to understand how the assumption of a low-entropy boundary condition enables us to reconstruct the past. It’s hard for people to wrap their brains around the fact that, without such an assumption, our “memories” or records of the past will generally be unreliable — knowledge of the current macrostate wouldn’t allow us to reconstruct the past any better than it allows us to predict the future. (Which is only logical, since it’s only this hypothesis that breaks time-reversal symmetry.)
The rest of the chapter, meanwhile, is more about having fun and mentioning some ideas that are not directly related to our story, but certainly play a part in understanding the arrow of time. Information theory, life, complexity. I’m not an expert in any of these fields, but it was a lot of fun reading about them to pick out some things that fit into the broader narrative. The Maxwell’s Demon story, in particular, is one that every physicist should know (up through its relatively modern resolution), but relatively few do. And I think Jason Torchinsky did a great job with the illustrations of the Demon.
A lot of big ideas here, of course, and much of this stuff is still very much in the working-out stage, not the settled-understanding stage. We’re still arguing about basic things like the definition of “complexity” and “life.” It’s relatively easy to state the Second Law and explain how the arrow of time is related to the growth of entropy, but there’s a tremendous amount of work still to be done before we completely understand the way in which the universe actually evolves from low entropy to high.
Hi Sean-
I have heard that some heat generated by computers, in theory, comes from the processing of information. For instance, an “AND” gate takes two bits of information (each either 1 or 0) and gives out one bit of information (either a 1 or 0); the process is inherently irreversible (0 implies three different past states), and so entropy is generated in the process in the form of heat.
Is this an accurate picture of what’s going on? And do you know if this is a macroscopic amount of heat? That is, is this really the heat that the fan on my computer is dispelling?
Thanks!
Escher,
We can calculate that heat. The basic idea is to use the statistical definition of entropy to find the entropy change in the computer’s hard drive. Then we use the thermodynamic definition of entropy to find the heat released.
The computer might process about a gigabyte of data. The entropy is
S = k ln(W)
and
k ≈ 1.4 × 10^-23 J/K.
The maximum entropy state is when the data is stored as half 1’s and half 0’s, and that entropy is roughly equal to the number of bits (10^10) times Boltzmann’s constant (strictly, times a further factor of ln 2 ≈ 0.7, which doesn’t matter at this precision). That gives a maximum entropy near
S = 10^-13 J/K
and because the computer operates at about 300K, the heat released (from dS = dQ/T) comes to just
Q = 10^-10 J.
That’s about the kinetic energy of an ant crawling across the keyboard. I guess most of the heat comes from resistance in the circuitry.
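If you want to check the arithmetic, here it is as a quick back-of-the-envelope script (same rough assumptions as above: a gigabyte of data and a 300 K operating temperature):

```python
import math

k_B = 1.38e-23   # Boltzmann's constant, J/K
T = 300.0        # rough operating temperature, K
bits = 8e9       # ~1 gigabyte of data, in bits (~10^10)

# Maximum information entropy: each bit contributes at most k_B * ln 2.
S_max = bits * k_B * math.log(2)   # of order 10^-13 J/K

# Heat released if that entropy leaves as heat at temperature T (dS = dQ/T).
Q = T * S_max                      # of order 10^-10 J

print(f"S_max ~ {S_max:.1e} J/K, Q ~ {Q:.1e} J")
```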
The calculated entropy change is very low because if you think about the computer’s hard drive as having microstates based on the data it’s storing, you’re missing a lot of microstates. The real hard drive works by flipping the direction of the magnetic field in a ferromagnetic material. On Wikipedia, I found that the layer of ferromagnetic material is about 10nm thick, or 100 atoms.
If a 7cm disk holds 100GB, the area per bit works out to be about 100nm on a side (which suggests the calculation works okay, since it is one order of magnitude greater than the thickness), so a typical bit on a computer hard disk actually involves about 10^8 atoms. Even if each atom were a 2-state system, we’re already glossing over a lot of entropy by counting only the bits of data the computer processes.
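A sketch of that bit-size arithmetic, using the same rough figures (I’m assuming the quoted 7cm is the platter diameter, and an atomic spacing of ~0.1nm, as implied by the “10nm = 100 atoms” figure above):

```python
import math

radius = 3.5e-2        # m; assuming the quoted 7cm is the platter diameter
capacity_bits = 8e11   # 100GB in bits

area_per_bit = math.pi * radius**2 / capacity_bits   # ~5e-15 m^2
side = math.sqrt(area_per_bit)                       # ~7e-8 m, i.e. ~100nm

thickness = 1e-8   # the 10nm magnetic layer quoted above, in m
atom = 1e-10       # ~0.1nm atomic spacing, implied by "10nm = 100 atoms"
atoms_per_bit = area_per_bit * thickness / atom**3   # ~10^7 to 10^8

print(f"~{side * 1e9:.0f}nm on a side, ~{atoms_per_bit:.0e} atoms per bit")
```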
A lot of the heat comes from resistance in the transistors and the like in the logic gates – remember that, in comparison to copper, silicon is pretty non-conductive, and the connections inside a processor have incredibly small cross-sections.
I am still a Skeptic, Dr. Carroll, but I cannot find a flaw in your arguments. Chapter 9 is truly fantastic though! My favorite yet.
But I wanted to throw my 2 cents in. I agree that information is physical; however, I think you should have touched on false statements (e.g., lies, false hypotheses, etc.) — where do they come from, and where do they reside? I would like to posit that all mathematical or logical statements “exist” independently of the physical, in a math space I suppose. While it certainly takes energy to consciously focus on a mathematical statement, we typically think that truth (relative to a given set of axioms) does not require this.
Moreover, on page 198 you claim to have turned Maxwell’s Demon into a protagonist, but it seems that you are showing your hand in stating this. I don’t think your text had previously referred to him as a villain and I don’t think Maxwell or philosophy ever thinks of the Demon as such. He is simply an entity performing a function. I was certain you understood this, but found it odd that, perhaps in the editing process, this slipped out.
Regarding Schrödinger’s attempted definition of life: can you think of a way in which it remains inadequate? Can someone think of a system which is apparently absorbing energy and following the longest path to equilibrium, but is not usually considered alive?
I’m afraid I don’t quite follow your argument on pp. 180-82 that the veracity of our remembered past depends on some version of the Past Hypothesis (PH). Is the version of the PH we invoke here something like “our past could not have represented a state of higher entropy”? Without a constraint like that, relying only on a principle of indifference over macrostates, we are going to have trouble inferring that the occurrence of the past as we remember it is the best explanation for our memories. If that’s the whole argument, I get it, but it also seems very unclear that the way we “reconstruct” our past actually depends upon an assumption like the PH. In your example, I am confident that the photos, the objects photographed, and my memories of them did not just pop into existence. That in fact is not the way macro-objects behave on this planet, and that is not an assumption but a very secure fact in my experience, and a fact which any satisfactory set of physical laws needs to explain (not challenge). Now I understand that a principle like the PH favors the stable prior and continued existence of macro-objects as a lower-entropy arrangement, but I don’t see that I need a PH here, and in terms of explanatory power the PH is surely a much more dubious hypothesis than the evident stability of macro-objects.
Sean, why didn’t you use David Albert’s argument about “records” that are caused by knowing something about the past together with something about the present, giving you information about something in between? I found it very convincing. Is it too technical, or do you believe there is something wrong with this argument?
A car, perhaps? And if you want, one with miles of open road, a full tank of gas, and a brick on the pedal.
Just about anything which carries its own fuel and is capable of using it. Even a cellphone. I’m not exactly sure what “equilibrium” would be for these objects.
In my opinion, what makes life quite a bit fancier than these “mundane” objects is that it is also programmed well enough to “hunt” for more fuel that is just lying around on Earth, waiting to be burned; and, as always, there is the key feature of life: it is capable of self-replication. The first property allows living objects to REALLY live past the point when you would expect them to reach equilibrium…
I was traveling all day and haven’t had a chance to respond until now.
On generating heat, meichenl’s response looks right to me. Just to be clear, however, Charles Bennett did manage to prove that you don’t need to generate heat to do a computation; it’s possible to make computation completely reversible. The cost you pay is that you need to keep track of extra bits. Ultimately you’ll have to erase those bits, which is an irreversible process, and ends up generating heat.
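To illustrate Bennett’s point with a toy example (a sketch of mine, not anything from the book): the Toffoli gate computes AND reversibly, at the cost of carrying the inputs along as extra bits.

```python
def toffoli(a, b, c):
    """Reversible CCNOT gate: flips c if and only if a and b are both 1.
    It is its own inverse, so applying it destroys no information."""
    return a, b, c ^ (a & b)

# Reversible AND: with the ancilla c = 0, the third output is a AND b,
# and the inputs survive in the output. Erasing those leftover bits later
# is the irreversible step that must generate heat (Landauer's principle).
for a in (0, 1):
    for b in (0, 1):
        a_out, b_out, and_ab = toffoli(a, b, 0)
        assert (a_out, b_out, and_ab) == (a, b, a & b)
        # Self-inverse check: applying the gate twice restores the inputs.
        assert toffoli(*toffoli(a, b, 0)) == (a, b, 0)
```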
Clifford– Of course there can be false memories and inaccurate reconstructions. That has more to do with the specifics of a memory apparatus or our incomplete knowledge, so I didn’t go into it here. And I have no way of quantifying how useful Schrödinger’s definition is; it just seemed interesting to me.
Philoponus– I’m not sure I understand what you’re saying. Given only the current macrostate of the universe, without any additional Past Hypothesis, our reconstruction of the past would look like the time-reversal of our reconstruction of the future; it wouldn’t look anything like what we really think the past was like. Memories and records would not quickly pop into existence, but they would gradually assemble out of a higher-entropy prior condition.
Oded– The argument I did give was basically a watered-down version of that. I didn’t want to go into too much detail, both to keep from wandering afield and to stick to things I was confident in explaining.
Living matter is not in thermodynamic equilibrium; it is a pumped open system, like a laser above threshold. Spontaneous symmetry breaking, with the emergence of macro-quantum coherence in the many-particle ground state, also seems to play an essential role.
“I have heard that some heat generated by computers, in theory, comes from the processing of information. For instance, an “AND” gate takes two bits of information (each either 1 or 0) and gives out one bit of information (either a 1 or 0); the process is inherently irreversible (0 implies three different past states), and so entropy is generated in the process in the form of heat.”
I don’t have any references handy, but there was a “correspondence” in the literature about this between Lawrence M. Krauss and Freeman Dyson. They were looking at life and computation in an infinitely expanding universe.
You can think forever if you think the same thoughts.
This was not explicitly stated, so I want to make sure that my assumption is correct – in the example of a two-chamber box containing a single particle, you explained how we can use information about the system to derive useful work from it – namely, by knowing which side to insert the piston into. Am I correct in understanding that the act of moving the piston eliminates our knowledge of which side the particle is on? (i.e. that we have traded our information for work and thus lost the information). Would it be further correct to say that we could find the particle again by putting in some amount of work, and then use the piston again?
Yes, moving the piston (or letting it be pushed out by the particle inside) erases our knowledge of where it is. There are various ways we could get that information back, but they all cost energy; e.g. we could just push the piston back in (from either side).
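To put a rough number on the trade (a back-of-the-envelope figure, not something from the book): the maximum work extractable from knowing that one bit, at room temperature, is

W = kT ln(2) ≈ (1.4 × 10^-23 J/K)(300 K)(0.69) ≈ 3 × 10^-21 J,

which is also the minimum energy it costs to re-acquire and eventually erase that bit, so the books balance and the Second Law is safe.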
Sean
First, congratulations for surviving Colbert – not easy, but well done.
Question: what do you make of Gubser’s comment in “The Little Book of String Theory” that “Time running at different rates at different places is gravity. In fact, that’s all that gravity is….Things fall from places where time runs faster to where time runs slower. That downward pull you feel, and which we call gravity, is just the different rate of time between high places and low places.”
I have read several articles (including the Scientific American article of 6 months or so ago) about the new so-called Hořava theory of quantum gravity, which employs concepts from condensed matter physics, and apparently splits time from space at the earliest era of the Big Bang and recombines them later as the primordial plasma cools down. Any thoughts about this in relation to your discussion of spacetime and time?
I also am having a difficult time integrating the concepts of the “grid” (a latter-day concept of spacetime as a quasi-aether) in Wilczek’s recent book, as well as some similar concepts in Seth Lloyd’s book on quantum information theory and quantum reality (“it from bit”), with the discussion in your book on the nature of the “fabric” of spacetime. I do not have either book with me at the moment, but I do not recall discussions in them regarding entropy as a central organizing concept.
Sean
Just a quick note regarding footnote #52 – shouldn’t the size of the observable universe read “100” not “10” billion light years?