There’s a major event in the life of every young book that marks its progression from mere draft on someone’s computer to a public figure in its own right. No, I’m not thinking about when the book gets published, or even when the final manuscript is sent to the publisher. I’m thinking of when a book gets its own page on amazon.com. (The right analogy is probably to “getting your driver’s license” or something along those lines. Feel free to concoct your own details.)
So it’s with a certain parental joy that I can announce From Eternity to Here now has its own amazon page. My baby is all grown up! And, as a gesture of independence, has already chosen a different subtitle: “The Quest for the Ultimate Theory of Time.” The previous version, “The Origin of the Universe and the Arrow of Time,” was judged a bit too dry, and was apparently making the marketing people at Dutton scrunch up their faces in disapproval. I am told that “quests” are very hot right now.
All of which means, of course: you can buy it! For quite a handsome discount, I may add.
It also means: I really should finish writing it. Pretty darn close; the last chapters are finished, and I’m just touching up a couple of the previous ones that were abandoned in my rush to tell the end of the story. The manuscript is coming in at noticeably more words than I had anticipated — I suspect the “320 pages” listed on amazon is an underestimate.
And, yes, there is another book with almost the same title and an eerily similar cover, which just appeared. But very different content inside! Frank Viola’s subtitle is “Rediscovering the Ageless Purpose of God,” which should be a clue to the sharp-eyed shopper that the two works are not the same.
Writing a book is a big undertaking, in case no one had noticed that before. I’m very grateful to my scientific collaborators for putting up with my extended disappearances along the way. It’s also very nerve-wracking to imagine sending it out there into the world all by itself. With blog posts there is immediate feedback in terms of comments and trackbacks; you can get a feel for what the reactions are, and revise and respond accordingly. But the book really has a life of its own. People will read and review it for goodness knows how long, and I won’t always be there to protect it.
Frankly, I’m not sure this “book” technology will ever catch on.
Congrats Sean,
Hopefully there will be one winging its way to Ireland for me soon!
Hi Sean,
have you thought of posting sample chapters of the book on the blog to get some feedback? Some authors like Paulo Coelho do that. Reading say 10 pages will give your (future) readers a preview of what’s to come 😀
I might post an excerpt or two, some time between now and when it appears. But not too much; once it actually appears we can discuss it in some detail.
Felicitaciones, Sean! I’m sure that’s a huge relief — I look forward to reading it when it’s all out… might have to read it in parallel with the God-book and compare…
Is there a movie on the way?
If not I’m looking to make a new physics-Doc this year 😉
> I might post an excerpt or two, some time between now and when it appears. But not too much; once it actually appears we can discuss it in some detail.
As I said, a dozen pages of some interesting/thought provoking chapter will do good publicity for the book.
Regarding my previous example with Paulo Coelho, I have to say that he is far ahead of his time… He gives for free pdfs of his books and, as he says in his blog, this has boosted sales 😀
I remember reading the same in Wil Wheaton’s blog.
Congrats!!! I may have missed it completely, you said you are still fine tuning it but is there an official/ball-park release date?
Congrats!
Can we get it on Kindle?
Should be out in October. And there should be a Kindle edition; other Dutton books have them.
Sean-
I’m having trouble with a particular point–the key argument, actually–in your paper with Jennifer Chen in 2004, “Spontaneous Inflation and the Origin of the Arrow of Time,” which, if I’m correct, will be the basis of the new book that you’re writing now. (I’m between papers at the moment, and the problem was on my mind.)
You correctly point out that a proto-inflationary patch of size L_I likely has entropy of order 10^12. You estimate this entropy by using the standard formula for the entropy of a quasi-de Sitter space.
But here’s my first point: One could also note that 10^12 is the maximum allowed entropy of a region of radius L_I, from the holographic bound A/4G for a region bounded by area A~L_I^2. (The holographic bound formula gets corrections for very tiny regions, as predicted for example by string theory, so perhaps there’s a loophole here and you can claim that the corrections destroy the bound for present purposes. I’ll pretend that this doesn’t happen.) So the patch that became our observable universe today, if it indeed was once only L_I in radius, could not, before inflation began, have had an entropy larger than 10^12. There was no choice!
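(For anyone wanting to see where the 10^12 comes from, here's a quick numerical sketch of the holographic bound, dropping O(1) factors as we all do in this thread. The inflationary length scale L_I ~ 10^6 Planck lengths is my own assumed input, corresponding to roughly GUT-scale inflation; the paper's actual parameters may differ.)

```python
import math

# Holographic bound on a proto-inflationary patch, in Planck units:
# S_max = A/4G ~ pi * (L_I / L_Planck)^2, up to O(1) factors.
L_PLANCK = 1.6e-35          # Planck length, meters
L_I = 1e6 * L_PLANCK        # ASSUMED patch radius: ~10^6 Planck lengths
                            # (roughly GUT-scale inflation)

S_max = math.pi * (L_I / L_PLANCK) ** 2   # entropy in units of k_B
print(f"S_max ~ 10^{math.log10(S_max):.0f}")
```

With that assumed L_I, the bound indeed comes out at the 10^12 quoted above.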
Now, I know that you then argue that something must be amiss, because our present observable universe’s entropy is 10^88 or 10^99 (the latter if we count black holes), and although naively this just confirms the 2nd law that entropy increases with time, it also means that it should have been far more likely to find our universe in an initial state resembling its present state than in a state with entropy 10^12 << 10^88.
Indeed, you write on p. 12:
"The point of the entropy argument is then very simple: the entropy of the patch that begins to inflate and expands into our observable universe is far less than our current entropy, or even than the entropy at earlier stages in our comoving volume, given approximately by the matter entropy. From the point of view of the Second Law, this makes sense; the entropy has been increasing since the onset of inflation. But from the point of view of a theory of initial conditions, it strongly undermines the idea that inflation can naturally arise from a random fluctuation. In conventional thermodynamics, random fluctuations can certainly occur, but they occur with exponentially smaller likelihood into configurations with lower entropy. Therefore, if we are imagining that the conditions in our universe are randomly chosen, **it is much easier to choose our current universe than to choose a small proto-inflationary patch. The low entropy of the proto-inflationary patch is not a sign of how natural a starting point it is, but of how extremely difficult it would be to simply find ourselves there from a generic state.** If inflation is to play a role in explaining the initial conditions of the universe, we need to understand how it arises from some specific condition, rather than simply appealing to randomness."
I have singled out the statement in asterisks because I think it is incorrect, or at least misleading. Yes, it is far more likely to find a universe **the size of our observable universe today** in a state of entropy 10^88 or 10^99 than in a state of entropy 10^12. But a patch of radius L_I could not have had entropy that large, lest it violate the holographic bound A/4G~10^12. If that patch were just a tiny part of a much vaster primordial bath of high entropy, then there would have been many such tiny patches, but each would have been individually bounded in its entropy by 10^12.
If our observable universe was once a patch of size L_I, and a patch of size L_I could not have had entropy greater than 10^12, then, ergo, our universe once had entropy no greater than 10^12. That's just the breaks.
Now, you point out that if the Hilbert space of our observable universe (assumed finite-dimensional, so that there is a maximal entropy) is not changing size with time–a condition of unitarity–then the Hilbert space must have been the same size for the tiny patch of radius L_I, and hence that patch must have had the same maximal entropy of our universe today, rather than a maximal entropy of 10^12 as predicted by the holographic bound. (This, in itself, is a puzzling result.)
So you immediately toss out inflation as a means of solving the entropy problem, also seemingly throwing away the holographic bound and insisting that the patch that gave rise to our observable universe must, despite its physical size, have been capable of having an entropy as large as our observable universe does today.
But there seem to be a lot of other ways around the problem, ways that are consistent with the holographic bound. This brings me to my second point.
First of all, our de Sitter horizon isn't as small as it was during the heyday of inflation—as the de Sitter horizon enlarges, the de Sitter entropy must go up. There's no magic to this—more space is visible as the horizon gets larger. This size increase is distinct from the exponential "stretching" of space when there's a constant de Sitter horizon—here we're talking about the horizon itself getting larger, and the de Sitter entropy goes up with it.
Indeed, it's quite natural to suppose that the early patch that gave rise to our universe was once de Sitter and therefore had a tiny entropy ~10^12, but that the de Sitter horizon itself—not just the space inside—rapidly expanded at the end of inflation, drastically enlarging our Hilbert space and thereby allowing the entropy to start growing to its modern—albeit still surprisingly small—value of 10^88 or 10^99.
But there's more, and this brings me to my third point. A de Sitter space is hot, with quasi-thermal radiation coming in from the cosmic horizon and seeding the complicated cosmic structure of our present-day universe. Once, in the past, our universe was a quasi-de Sitter space of tiny radius with de Sitter entropy 10^12, and today it has an entropy of order 10^88. Could not the Hawking radiation from the horizon have supplied the additional entropy needed to bring our universe up to 10^88?
Indeed, de Sitter space is, in some ways, like an inside-out black hole. (Indeed, when a baby de Sitter universe forms spontaneously, it does look just like a black hole from the outside, with a black-hole entropy seen from the outside equal to the de Sitter entropy seen from the inside, before the black hole evaporates and the baby universe "pinches off" from the surrounding space.)
The resolution to the black hole information paradox, in which the exterior space seems to have lost a huge amount of its entropy behind the black hole horizon when the black hole forms, is to look more carefully at what the Hawking radiation is doing; the Hawking radiation secretly contains correlations that carry the vast entropy of the evaporating black hole back into the exterior space, resupplying the exterior space with that missing entropy. If a typical black hole at the center of a galaxy has entropy 10^89, then Hawking radiation is capable of carting off 10^89 units of entropy back into the exterior space! So why can't the Hawking radiation from the de Sitter horizon of our growing universe carry a "mere" 10^88 units of entropy into our observable universe?
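(As a sanity check on that 10^89 figure, here's the standard Bekenstein-Hawking entropy for a galactic-center black hole. The million-solar-mass value is my illustrative assumption, roughly the scale of the Milky Way's central black hole; heavier examples would give 10^90 or more.)

```python
import math

# Bekenstein-Hawking entropy in units of k_B: S = 4*pi*G*M^2 / (hbar*c)
G     = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR  = 1.055e-34      # reduced Planck constant, J s
C     = 3.0e8          # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg

M = 1e6 * M_SUN        # ASSUMED mass: a million solar masses

S = 4 * math.pi * G * M**2 / (HBAR * C)
print(f"S ~ 10^{math.log10(S):.0f}")
```

A 10^6-solar-mass black hole lands right at the 10^89 quoted above.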
Now, you might ask where that entropy is coming from. For the case of a black hole, the entropy was naively contained inside the black hole, and is leaking out as Hawking radiation. The analogous claim for the de Sitter case would be that entropy is leaking in from the larger primordial bath outside our patch. Indeed, if that larger primordial bath is fairly large, its maximal entropy, as allowed by the holographic bound, could easily be as big as we need.
So I guess what I'm saying is that just as the space exterior to a black hole is not really closed, because the black hole is leaking its entropy via Hawking radiation to the exterior space (this is why there's no violation of unitarity–open systems don't conserve entropy), so our early quasi-de Sitter universe was not closed either, because Hawking radiation is leaking entropy in. Since it's open, the entropy of our universe can increase as needed, to go from 10^12 to 10^88, without any fundamental unitarity violation.
Indeed, by a simple change of coordinates, we can interpret the inflationary density perturbations as being thermal rather than quantum-mechanical in origin, coming from that quasi-thermal Hawking radiation. So the entropy needed to seed our universe with density perturbations really can be viewed as coming from the Hawking radiation streaming in from the horizon, seemingly consistent with what I've been saying.
Of course—and this is my fourth point—one popular line of thinking nowadays is that when an object falls toward a black hole, its information never really crosses the horizon at all, but just gets flattened pancake-style onto the horizon; the Hawking radiation then carries this "hovering" information back out again.
Indeed, when a black hole forms, the event horizon doesn't magically "appear" at finite size all of a sudden, but grows smoothly from zero size to the Schwarzschild radius as the star collapses, so really everything that was in the original star gets pancaked on top and never crosses the horizon, either. In this way of thinking, everything that should naively be "inside" the black hole is really living just outside the event horizon, and is carried back to the exterior space in the Hawking radiation.
But a similar story holds for a de Sitter universe, with everything inside-out. The de Sitter horizon doesn't magically appear at finite size all of a sudden. As a tiny patch in our roiling primordial bath-space transitions to a proto-inflationary state, a de Sitter horizon comes in from infinity down to a finite size. As it does this, all the junk in the larger primordial bath has to "cross" that shrinking de Sitter horizon; perhaps, like with the holographic interpretation of a black hole horizon, all that junk just gets pancaked to the inside of that shrinking de Sitter horizon, and is gradually carried in by the ingoing Hawking radiation.
In any case, Hawking radiation can carry in a lot of entropy, just as Hawking radiation carries out a lot of entropy in the case of a black hole. That's how you can start with a patch of tiny entropy whose entropy grows with time, consistent with the holographic bound and unitarity, in much the same way that the resolution to the black hole information paradox saves unitarity.
Now, you note that a patch of entropy 10^12 is far less likely than just finding our universe in an initial state of entropy 10^88. But this brings me to my fifth point, that the real question is to look at the larger primordial bath. If the bath is fairly big, then we can easily ensure that its maximal entropy (assuming, contrary to the central thesis of your paper, that there is such a maximal entropy), as allowed by the holographic bound, is far higher than 10^88, so that the whole universe is far more likely to begin in such a bath state than in its modern state.
If the bath's entropy is indeed maximal–so that the bath is in its likeliest possible state–and the bath is exp(10^12) times L_I in size, then there will be of order exp(10^12) different such patches of size L_I in the bath; then essentially every single possible small patch of size L_I will be present somewhere in this bath, and each such patch will have maximal entropy 10^12. Given so many patches, your argument about their relative unlikeliness breaks down: the sheer number of patches offsets their tiny entropy, and one of them will invariably have the correct initial conditions for proto-inflation, so we're set.
As for Boltzmann brains, my sixth point is that a typical Boltzmann brain would have radius L_B much larger than L_I, and would need an entropy S_B vastly higher than 10^12, necessitating a primordial bath of size of order exp(S_B) * L_B^3 in order to exhibit Boltzmann brains. So provided that the primordial bath is of size exp(10^12) * L_I^3, which is << exp(S_B) * L_B^3, Boltzmann brains should be unlikely to exist.
So, to conclude, I think there are some serious problems (my six points) with the central arguments of your paper, and there are some big loopholes that allow one to avoid your conclusions. Most likely I'm being an idiot, but, at the very least, once you have shown my foolishness, you'll have more ammo for future critics. And I'm sure your explanations will be very educational not just for me, but for everybody who reads this blog, too!
Thanks!
Oh, and congrats on the developments with the book! It must be very exciting. I can’t wait to read it!
Cool cover. How much of a say do you have in the cover, anyway?
Sean
I too don’t think books will ever catch on; the user interface is not intuitive. See this:
Medieval help desk for a book replacing scrolls of paper
http://www.youtube.com/watch?v=pQHX-SjgQvQ
Typically authors don’t get much say in book covers. My publisher was willing to let me have some say, but they came up with this one on their own, and I was quite happy with it.
NS, I think there is a mistake in your argument — which will become clear in Chapter 13 of the book! In particular, you discuss the entropy that is allowed in a small region of space according to the holographic principle. I completely agree with all that. However: who says the region has to be small? The size of a region isn’t a conserved quantity in a theory with gravity. That region grows into our entire observable universe — they are clearly the same physical system, not two separate systems. Therefore, among allowed configurations of that system, we have to include regions that are tens of billions of light-years across. The holographic bound only says how much entropy can fit into a region when it is that small, not how big the entropy could have been in principle.
I just preordered the book. Hurry up and finish already 🙂
I see what you’re saying. Our observable universe has some fixed Hilbert space, and among its allowed (mixed) states is one in which it’s a tiny patch of radius L_I with entropy 10^12, and another in which it’s in its present state, very large and with entropy of order 10^88 (or 10^99 if we include all the black holes, but I’ll stick with the former just for clarity). So if we were to choose a (mixed) state from the Hilbert space at random, we would be far more likely to choose the large, contemporary state than the small, primordial state.
Of course, our observable universe also has states of entropy of order 10^122, so it’s not clear why we find it in a state of 10^88 now. That seems to be the central paradox you’re trying to solve.
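(For concreteness, that 10^122 is just the de Sitter entropy of today's cosmological horizon, S ~ pi * (R_dS / L_Planck)^2. Here's a rough numerical sketch; the horizon radius ~1.6e26 m is my assumed round figure for the scale set by today's cosmological constant.)

```python
import math

# De Sitter entropy of the current horizon, in units of k_B,
# dropping O(1) factors beyond the leading pi.
L_PLANCK = 1.6e-35     # Planck length, meters
R_DS     = 1.6e26      # ASSUMED de Sitter horizon radius today, meters

S_dS = math.pi * (R_DS / L_PLANCK) ** 2
print(f"S_dS ~ 10^{math.log10(S_dS):.0f}")
```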
But we can’t just say that a deity picks a state randomly. We have to find some natural evolution in which to embed our universe.
The whole idea behind the claimed inflationary resolution to the aforementioned paradox is to find a natural way to end up with 10^88. My claim is that this is possible.
Suppose we start with a finite-sized initial maximally-entropic bath, where for the sake of argument we assume a maximal entropy is possible, contra your assertion. Our universe will end up being a subsystem of this bath, as I’ll now explain.
This bath has maximal entropy, so it’s in its most likely (mixed) state. Every patch or “cell” of size L_I has entropy bounded by 10^12.
So this is what we have to work with. This is a highly likely state of the full bath. (Which is a system that will eventually contain a smaller system, our own observable universe.)
The question is: What’s more likely in this situation? For one of these tiny L_I-sized patches to fluctuate into an inflationary state and grow up into our observable universe today, or for a patch the size of our current observable universe H_I inside the finite bath to fluctuate into our universe today?
If the bath is smaller than our modern observable universe, then the former is the only possibility. If the bath is bigger than our modern observable universe, then there will be a huge number of patches of size L_I, so it is extremely likely that one of them will find itself in the required proto-inflationary state, far more likely than finding a huge patch the size H_I of our observable universe that already looks like our observable universe; thus, again the former scenario is more likely and therefore more natural.
Let’s boil this down. The central problem we’re trying to avoid is being forced to make an **unlikely choice**—that is, pick an unnatural state—at some point. That is, we’re trying to avoid having to invoke a miraculous selection of a highly unlikely state at any point.
But in the story I’ve laid out, no such choice is made. We start with a maximally entropic bath of a larger system, each of whose “cells” of size L_I must have entropy limited to 10^12, and one of these patches is going to hit the correct inflationary patch we need. At no point have we invoked an unlikely state. And the entropy of the modern observable universe grows as I’ve explained. So what’s the problem?
Maybe I can say this another way.
If we had just a single patch of arbitrary size to work with, I’d agree that our present-day observable universe would be a more likely mixed state than the tiny patch of size L_I.
But if our observable universe is embedded in a larger bath, even if that larger bath has a fixed maximal entropy, then there are vastly more tiny patches than big patches (if the bath is small enough, there won’t even be any patches as large as our present observable universe at all), and my claim is that this makes starting from a tiny patch more likely than starting from a big one.
An analogy: It’s unlikely that a single eyeless organism will develop a pair of eyes spontaneously, or that a single one of its offspring will have eyes. But if it has zillions of offspring (and offspring of offspring), it suddenly becomes a natural possibility that one of them will have eyes.
Matt, your assumptions aren’t mutually consistent. If you start with a state you claim is “maximally entropic,” then by definition the entropy can’t grow. Then you say that one of the cells within that state goes from an entropy of 10^12 to an entropy of 10^88 (or much larger), while the rest of the cells remain as they were. So clearly you weren’t in a maximally-entropic state to begin with.
And I think the resolution is pretty clear: your initial state wasn’t one of maximum entropy, because there is no such thing, if you take inflation at face value. Then you’re most of the way to agreeing with my proposed scenario.
Nice!
Wait–no, I’m still not so sure. The patch grows, and as NS argues, entropy leaks in as Hawking radiation (or just from the growth in radius of the de Sitter horizon of our patch) from the exterior bath, whose own entropy is some maximal amount, say, 10^200. The entropy of the bath remains unchanged throughout.
So, Sean, is there still a problem with this proposal?
Matt, I think the problem is pretty basic. If you are in a state of truly maximum entropy, you tend to stay there. If you have a state that is unstable to evolving into something else, by definition it is not maximum entropy. The only thing a maximum-entropy state could do would be to (rarely) fluctuate into a state of lower entropy. That doesn’t seem to fit the picture of the universe in which we live.
The earliest state of our universe seems to have been characterized by extremely low “geometric” entropy, with all other forms of entropy being not so low. How does the Carroll-Chen idea explain this strange observation?
Will the book give a “thermodynamic” argument for why we remember the past and not the future? I remember you’ve tried to explain this to me before, but brevity (or my thick skull) prevented it from sinking in.
Sean–
I think you may have run into a paradox here.
What you’re basically saying is the following: Suppose we have a finite-sized bath at maximal entropy, which we can always mentally partition into cells of spatial size L_I whose own entropy is bounded by 10^12. Can we say that much? Please tell me if that’s a reasonable statement.
But if so, then, following your logic, because each cell has among its conceivable states the present-day observable universe, whose own entropy is 10^88, the cell’s entropy is not, in fact, bounded by 10^12 at all, and the cell is much more likely to be found as the present-day universe than as a cell of size L_I. But this is a contradiction, since by construction the cell had size L_I.
Let’s go through this again. By assumption, the finite-sized bath has maximum entropy (which is itself highly unlikely to fluctuate down in entropy to our own present-day universe). We can always mentally partition the bath into cells of spatial size L_I, each of which has its own entropy bounded above by 10^12. Hopefully we’re agreed on that.
You then say, correctly, that each cell has as one of its possible states our present-day observable universe, which has entropy 10^88. But the cell still has to get there through some kind of time evolution. And because, by construction, the cell has an initial size L_I—this spatial size constituting part of the very definition of its macrostate—it simply cannot be in the state of our present-day universe. Our present-day universe, while in principle one of the allowed states of the cell, is not consistent with the macrostate of the cell, which we know already to be of size L_I! A state is only accessible to a system if it’s consistent with the known macrostate.
Here’s an example to illustrate my point. Consider a pot of water sitting in a room at room temperature. (Don’t worry—this isn’t a boiling-water vs. eternal inflation analogy. It’s a different analogy!) Among the conceivably allowed states of the water, the water can, in principle, have any temperature. Now, higher temperatures correspond to higher entropy. What you’re basically saying is that we’re most likely to find the water at a high temperature, because that’s the highest-entropy configuration. But that state is not consistent with the macrostate of the water, namely, that the water is in equilibrium with the room at room temperature! And to get to a higher temperature, we have to evolve there in some smooth fashion, say, by turning on a heater.
We know that the cells of the bath in my cosmological scenario have size L_I by assumption. We can always partition the bath that way. And that’s part of the definition of each cell’s macrostate. So no states of size H_I (one modern Hubble radius) are allowed as accessible states yet, at least until the patch has grown sufficiently through inflation!
In addition to this question, there’s again my earlier question about where I’ve made any unnatural assumptions in my cosmological scenario. At what point did I pick some unnatural state anywhere? Every system starts out with the maximum entropy allowed by its macrostate, and evolves in a smooth manner. So what’s the problem?
Thanks again!