Time

AAAS 2010

The internets have spoken, and it’s a good thing I listened. A few months ago I had the idea to organize a session at the upcoming meeting of the American Association for the Advancement of Science, in San Diego next February. It’s a giant cross-disciplinary scientific meeting, offering a great chance for journalists and scientists in diverse fields to catch up on what’s happening in other areas.

But I couldn’t decide between two possible topics, both of which are close to my heart: “The Origin of the Universe” or “The Arrow of Time.” (My original book subtitle was “The Origin of the Universe and the Arrow of Time,” before that was squelched by the marketing department and replaced with “The Quest for the Ultimate Theory of Time.” Quests are big these days, apparently.) So I did the natural thing: I Tweeted the question. And the internet spoke with a fairly unambiguous voice: “Arrow of Time” sounded more interesting. So that’s what I proposed.

And now we’ve just been accepted, so it’s on for San Diego 2010. We have a fantastic line-up of speakers (and also me), spanning quite a range of topics:

That’s the fun part about this topic; it ranges naturally from the birth of the universe to the operation of your brain. Should be a good symposium.

Update: Unfortunately, Daniel Schacter won’t be able to make the symposium. Instead, we are very fortunate to have Kathleen McDermott of Washington University in St. Louis. Her research involves how we remember the past and forecast the future.

Timelessness

After the FQXi Essay Contest, I was asked to comment on some of the essays besides my own, but I never did. Mostly because I didn’t take the time to read them all (there were an awful lot), but also because I just don’t know what to say about many of them. In her essay (which I liked), Fotini Markopoulou divides the world in two:

There are two kinds of people in quantum gravity. Those who think that timelessness is the most beautiful and deepest insight in general relativity, if not modern science, and those who simply cannot comprehend what timelessness can mean and see evidence for time in everything in nature. What sets this split of opinions apart from any other disagreement in science is that almost no one ever changes their mind…

That’s just about right (although perhaps there are also other splits with the same quality). Julian Barbour, whose essay finished first in the judging, has famously championed the view that time does not exist, even writing quite a successful book about it. In a recent Bloggingheads discussion with Craig Callender, Barbour talks a bit more about his view.

To which all I can muster is: I don’t get it. There are a set of technical arguments, which for the most part I do get, that can be used to make it seem as if time does not exist. In ordinary classical mechanics, we can perform some formal tricks to remove the time variable from the conventional equations of physics. More dramatically, in general relativity or quantum gravity we can express Einstein’s equation (at least in certain circumstances) in a form where time does not appear. On the other hand, we can usually re-write any of these equations in a form where time does appear (at least, again, in certain circumstances).

But none of these technical arguments are really the point. What I don’t understand — and this is a sincere lack of understanding on my part, not an indirect claim that this perspective is wrong — is what’s supposed to be so great about timelessness. What are we supposed to gain from thinking in this way? What problems is it supposed to solve?

Put it this way: clearly time appears to exist, at first glance. Even the timelessness crowd somehow manages to submit their essay competition entries by the deadline, and finish their Bloggingheads dialogues within an hour. So the claim “time does not exist” certainly doesn’t mean the same kind of thing as “unicorns do not exist.” It must mean (I suppose) that, while we all find time very useful in our everyday lives, there is a deeper level of description in which time doesn’t appear at all; it only emerges in some sort of approximate description of reality. But that approximate description seems extremely valid and useful, including all of the phenomena in the observable universe. Surely it behooves us to take this purportedly-non-fundamental notion seriously, and attempt to understand some of its puzzling features? Moreover, even if “time” doesn’t turn out to be fundamental, why would that tempt you into saying that it doesn’t exist? Protons are made of quarks, but you don’t hear particle physicists going around claiming that protons don’t exist.

The problem is not that I disagree with the timelessness crowd, it’s that I don’t see the point. I am not motivated to make the effort to carefully read what they are writing, because I am very unclear about what is to be gained by doing so. If anyone could spell out straightforwardly what I might be able to understand by thinking of the world in the language of timelessness, I’d be very happy to re-orient my attitude and take these works seriously.

Rules for Time Travelers

With the new Star Trek out, it’s long past time (as it were) that we laid out the rules for would-be fictional time-travelers. (Spoiler: Spock travels to the past and gets a sex change and becomes Kirk’s grandfather lover.*) Not that we expect these rules to be obeyed; the dramatic demands of a work of fiction will always trump the desire to get things scientifically accurate, and Star Trek all by itself has foisted half a dozen mutually-inconsistent theories of time travel on us. But time travel isn’t magic; it may or may not be allowed by the laws of physics — we don’t know them well enough to be sure — but we do know enough to say that if time travel were possible, certain rules would have to be obeyed. And sometimes it’s more interesting to play by the rules. So if you wanted to create a fictional world involving travel through time, here are 10+1 rules by which you should try to play.

0. There are no paradoxes.

This is the overarching rule, to which all other rules are subservient. It’s not a statement about physics; it’s simply a statement about logic. In the actual world, true paradoxes — events requiring decidable propositions to be simultaneously true and false — do not occur. Anything that looks like it would be a paradox if it happened indicates either that it won’t happen, or that our understanding of the laws of nature is incomplete. Whatever laws of nature the builder of fictional worlds decides to abide by, they must not allow for true paradoxes.

1. Traveling into the future is easy.

We travel into the future all the time, at a fixed rate: one second per second. Stick around, you’ll be in the future soon enough. You can even get there faster than usual, by decreasing the amount of time you experience elapsing with respect to the rest of the world — either by low-tech ways like freezing yourself, or by taking advantage of the laws of special relativity and zipping around near the speed of light. (Remember we’re talking about what is possible according to the laws of physics here, not what is plausible or technologically feasible.) It’s coming back that’s hard.
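
Just to put a number on “faster than usual”: the bookkeeping is the time-dilation factor γ = 1/√(1 − v²/c²) from special relativity. Here is a minimal sketch — the speeds and the one-year shipboard trip are illustrative numbers of my choosing, not anything specific from the post:

```python
# Hedged sketch: how many Earth years pass while a fast-moving traveler
# experiences one year, using the special-relativistic gamma factor.
import math

def earth_years_elapsed(speed_fraction_of_c, traveler_years=1.0):
    """Coordinate (Earth) time corresponding to the traveler's proper time."""
    gamma = 1.0 / math.sqrt(1.0 - speed_fraction_of_c ** 2)
    return gamma * traveler_years

for v in (0.5, 0.9, 0.99, 0.9999):
    print(f"at {v}c, 1 year on board ≈ {earth_years_elapsed(v):.1f} years on Earth")
```

At 99.99 percent of the speed of light, one year of shipboard time buys you roughly seventy years of everyone else’s future — exactly the “getting there faster than usual” the rule describes.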

2. Traveling into the past is hard — but maybe not impossible.

If Isaac Newton’s absolute space and time had been the correct picture of nature, we could simply say that traveling backwards in time was impossible, and that would be the end of it. But in Einstein’s curved-spacetime universe, things are more flexible. From your own personal, subjective point of view, you always move forward in time — more technically, you move on a timelike curve through spacetime. But the large-scale curvature of spacetime caused by gravity could, conceivably, cause timelike curves to loop back on themselves — that is to say, become closed timelike curves — such that anyone traveling on such a path would meet themselves in the past. That’s what respectable, Einstein-approved time travel would really be like. Of course, there’s still the little difficulty of warping spacetime so severely that you actually create closed timelike curves; nobody knows a foolproof way of doing that, or even whether it’s possible, although ideas involving wormholes and cosmic strings and spinning universes have been bandied about.

3. Traveling through time is like traveling through space.

I’m only going to say this once: there would be no flashing lights. At least, there would only be flashing lights if you brought along some strobes, and decided to start them flashing as you traveled along your closed timelike curve. Likewise, there is no disappearance in a puff of smoke and re-appearing at some other time. Traveling through time is just like traveling through space: you move along a certain path, which (we are presuming) the universe has helpfully arranged so that your travels bring you to an earlier moment in time. But a time machine wouldn’t look like a booth with spinning wheels that dematerializes now and rematerializes some other time; it would look like a rocket ship. Or possibly a DeLorean, in the unlikely event that your closed timelike curve started right here on Earth and never left the road.

Think of it this way: imagine there were a race of super-intelligent trees, who could communicate with each other using abstract concepts but didn’t have the ability to walk. They might fantasize about moving through space, and in their fantasies “space travel” would resemble teleportation, with the adventurous tree disappearing in a puff of smoke and reappearing across the forest. But we know better; real travel from one point to another through space is a continuous process. Time travel would be like that.

4. Things that travel together, age together.

If you travel through time, and you bring along with you some clocks or other objects, all those things experience time in exactly the same way that you do. In particular, both you and the clocks march resolutely forward in time, from your own perspective. You don’t see clocks spinning wildly backwards, nor do you yourself “age” backwards, and you certainly don’t end up wearing the clothes you favored back in high school. Your personal experience of time is governed by clocks in your brain and body — the predictable beating of rhythmic pulses of chemical and biological processes. Whatever flow of time is being experienced by those processes — and thus by your conscious perception — is also being experienced by whatever accompanies you on your journey.

5. Black holes are not time machines.

Sadly, if you fell into a black hole, it would not spit you out at some other time. It wouldn’t spit you out at all — it would gobble you up and grow slightly more corpulent in the process. If the black hole were big enough, you might not even notice when you crossed the point of no return defined by the event horizon. But once you got close to the center of the hole, tidal forces would tug at you — gently at first, but eventually tearing you apart. The technical term is spaghettification. Not a recommended strategy for would-be time adventurers.
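
To see why the size of the hole matters, a rough Newtonian tidal estimate is enough. The sketch below uses assumed round numbers of my own (a two-meter astronaut, a ten-solar-mass hole versus a four-million-solar-mass one); it isn’t from the post, but it illustrates why a big enough black hole is gentle at the horizon:

```python
# Hedged estimate: head-to-toe difference in gravitational pull ("tidal stretch")
# at the event horizon, for a small black hole versus a giant one.
G = 6.67e-11      # gravitational constant, SI units
c = 3.0e8         # speed of light, m/s
M_SUN = 2.0e30    # kg

def tidal_accel_at_horizon(mass_kg, height_m=2.0):
    r_s = 2 * G * mass_kg / c ** 2                   # Schwarzschild radius
    return 2 * G * mass_kg * height_m / r_s ** 3     # Newtonian tidal acceleration

for label, mass in [("10 solar masses", 10 * M_SUN),
                    ("4 million solar masses", 4.0e6 * M_SUN)]:
    print(f"{label}: ~{tidal_accel_at_horizon(mass):.1e} m/s^2 across a 2 m body")
```

For the small hole that works out to tens of millions of g’s — spaghettification well before you reach the horizon — while for the supermassive hole it’s roughly a ten-thousandth of the gravity you feel standing on Earth, which is why you might not notice crossing the point of no return.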

Wormholes — tunnels through spacetime, which in principle can connect widely-separated events — are a more promising alternative. Wormholes are to black holes as elevators are to deep wells filled with snakes and poisoned spikes. The problem is, unlike black holes, we don’t know whether wormholes exist, or even whether they can exist, or how to make them, or how to preserve them once they are made. Wormholes want to collapse and disappear, and keeping them open requires a form of negative energy. Nobody knows how to make negative energy, although people occasionally slap the name “exotic matter” on the concept and pretend it might exist.

Evolution and the Second Law

Since no one is blogging around here, and I’m still working on my book, I will cheat and just post an excerpt from the manuscript. Not an especially original one, either; in this section I steal shamelessly from the nice paper that Ted Bunn wrote last year about evolution and entropy (inspired by a previous paper by Daniel Styer).

————————————

Without even addressing the question of how “life” should be defined, we can ask what sounds like a subsequent question: does life make thermodynamic sense? The answer, before you get too excited, is “yes.” But the opposite has been claimed – not by any respectable scientists, but by creationists looking to discredit Darwinian natural selection as the correct explanation for the evolution of life on Earth. One of their arguments relies on a misunderstanding of the Second Law, which they read as “entropy always increases,” and then interpret as a universal tendency toward decay and disorder in all natural processes. Whatever life is, it’s pretty clear that life is complicated and orderly – how, then, can it be reconciled with the natural tendency toward disorder?

There is, of course, no contradiction whatsoever. The creationist argument would equally well imply that refrigerators are impossible, so it’s clearly not correct. The Second Law doesn’t say that entropy always increases. It says that entropy always increases (or stays constant) in a closed system, one that doesn’t interact noticeably with the external world. But it’s pretty obvious that life is not like that; living organisms interact very strongly with the external world. They are the quintessential examples of open systems. And that is pretty much that; we can wash our hands of the issue and get on with our lives.

But there’s a more sophisticated version of the argument, which you could imagine being true – although it still isn’t – and it’s illuminating (and fun) to see exactly how it fails. The more sophisticated argument is quantitative: sure, living beings are open systems, so in principle they can decrease entropy somewhere as long as it increases somewhere else. How do you know that the increase in entropy in the outside world is really enough to account for the low entropy of living beings?

As we mentioned way back in Chapter Two, the Earth and its biosphere are systems that are very far away from thermal equilibrium. In equilibrium, the temperature is the same everywhere, whereas when we look up we see a very hot Sun in an otherwise very cold sky. There is plenty of room for entropy to increase, and that’s exactly what’s happening. But it’s instructive to run the numbers.

The energy budget of the Earth, considered as a single system, is pretty simple. We get energy from the Sun, via radiation; we lose the same amount of energy to empty space, also via radiation. (Not exactly the same; processes such as nuclear decays also heat up the Earth and leak energy into space, and the rate at which energy is radiated is not strictly constant. Still, it’s an excellent approximation.) But while the amount is the same, there is a big difference in the quality of the energy we get and the energy we give back. Remember back in the pre-Boltzmann days, entropy was understood as a measurement of the uselessness of a certain amount of energy; low-entropy forms of energy could be put to useful work, such as powering an engine or grinding flour, while high-entropy forms of energy just sat there.

[Figure: Sun-Earth entropy diagram]

The energy we get from the Sun is of a low-entropy, useful form, while the energy we radiate back out into space has a much higher entropy. The temperature of the Sun is about twenty times the average temperature of the Earth. The temperature of radiation is just the average energy of the photons of which it is made, so the Earth needs to radiate twenty low-energy (long-wavelength, infrared) photons for every one high-energy (short-wavelength, visible) photon it receives. It turns out, after a bit of math, that twenty times as many photons directly translates into twenty times the entropy. The Earth emits the same amount of energy as it receives, but with twenty times higher entropy.
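
As a sanity check on that factor of twenty, the old Clausius rule — entropy transferred is roughly energy divided by temperature — is all you need. In the sketch below the ~5800 K solar temperature is an assumed round number; the 255 K figure for the Earth is the one used in this excerpt:

```python
# Hedged sketch: entropy of the radiation the Earth emits versus the radiation
# it absorbs, for the same parcel of energy Q, using S ~ Q / T.
T_SUN = 5800.0     # K, effective temperature of sunlight (assumed round value)
T_EARTH = 255.0    # K, the Earth's effective radiating temperature

Q = 1.0            # any fixed amount of energy, absorbed and later re-radiated
S_in = Q / T_SUN       # entropy delivered by incoming sunlight
S_out = Q / T_EARTH    # entropy carried away by outgoing infrared

print(f"entropy out / entropy in ≈ {S_out / S_in:.0f}")  # prints ~23, i.e. the rough factor of 20 quoted above
```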

The hard part is figuring out just what we mean when we say that the life forms here on Earth are “low-entropy.” How exactly do we do the coarse-graining? It is possible to come up with reasonable answers to that question, but it’s complicated. Fortunately, there is a dramatic shortcut we can take. Consider the entire biomass of the Earth – all of the molecules that are found in living organisms of any type. We can easily calculate the maximum entropy that collection of molecules could have, if it were in thermal equilibrium; plugging in the numbers (the biomass is 10^15 kilograms, the temperature of the Earth is 255 Kelvin), we find that its maximum entropy is 10^44. And we can compare that to the absolute minimum entropy it could have – if it were in an exactly unique state, the entropy would be precisely zero.

So the largest conceivable change in entropy that would be required to take a completely disordered collection of molecules the size of our biomass and turn it into absolutely any configuration at all – including the actual ecosystem we currently have – is 10^44. If the evolution of life is consistent with the Second Law, it must be the case that the Earth has generated more entropy over the course of life’s evolution by converting high-energy photons into low-energy ones than it has decreased entropy by creating life. The number 10^44 is certainly an overly generous estimate – we don’t have to generate nearly that much entropy, but if we can generate that much, the Second Law is in good shape.

How long does it take to generate that much entropy by converting useful solar energy into useless radiated heat? The answer, once again plugging in the temperature of the Sun and so forth, is: about one year. Every year, if we were really efficient, we could take an undifferentiated mass as large as the entire biosphere and arrange it in a configuration with as small an entropy as we can imagine. In reality, life has evolved over billions of years, and the total entropy of the “Sun + Earth (including life) + escaping radiation” system has increased by quite a bit. So the Second Law is perfectly consistent with life as we know it; not that you were ever in doubt.
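
For the curious, here is one way to run those numbers end to end. The solar power intercepted by the Earth and the effective temperatures below are my own assumed round values; the 10^44 target is the figure quoted above. With these inputs the entropy budget comes out to roughly 10^45 per year in units of Boltzmann’s constant, so there is room to spare however you slice it:

```python
# Hedged back-of-the-envelope: entropy generated per year by absorbing sunlight
# and re-radiating the same energy at the Earth's (much lower) temperature.
K_B = 1.38e-23       # Boltzmann's constant, J/K
P = 1.7e17           # W of sunlight intercepted by the Earth (assumed round value)
T_SUN = 5800.0       # K (assumed round value)
T_EARTH = 255.0      # K, as quoted in the excerpt

# Clausius again: energy arrives at T_SUN and leaves at T_EARTH, so
# dS/dt = P * (1/T_EARTH - 1/T_SUN), measured in J/(K s).
entropy_rate = P * (1.0 / T_EARTH - 1.0 / T_SUN)

SECONDS_PER_YEAR = 3.15e7
per_year_in_kB = entropy_rate * SECONDS_PER_YEAR / K_B
print(f"entropy generated per year ≈ {per_year_in_kB:.0e} k_B")  # ~1e45, versus the ~1e44 needed
```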

Seems a Bit More Real Now

There’s a major event in the life of every young book that marks its progression from mere draft on someone’s computer to a public figure in its own right. No, I’m not thinking about when the book gets published, or even when the final manuscript is sent to the publisher. I’m thinking of when a book gets its own page on amazon.com. (The right analogy is probably to “getting your driver’s license” or something along those lines. Feel free to concoct your own details.)

[Image: From Eternity to Here cover]
So it’s with a certain parental joy that I can announce From Eternity to Here now has its own amazon page. My baby is all grown up! And, as a gesture of independence, has already chosen a different subtitle: “The Quest for the Ultimate Theory of Time.” The previous version, “The Origin of the Universe and the Arrow of Time,” was judged a bit too dry, and was apparently making the marketing people at Dutton scrunch up their faces in disapproval. I am told that “quests” are very hot right now.

All of which means, of course: you can buy it! For quite a handsome discount, I might add.

It also means: I really should finish writing it. Pretty darn close; the last chapters are finished, and I’m just touching up a couple of the previous ones that were abandoned in my rush to tell the end of the story. The manuscript is coming in at noticeably more words than I had anticipated — I suspect the “320 pages” listed on amazon is an underestimate.

And, yes, there is another book with almost the same title and an eerily similar cover, which just appeared. But very different content inside! Frank Viola’s subtitle is “Rediscovering the Ageless Purpose of God,” which should be a clue to the sharp-eyed shopper that the two works are not the same.

Writing a book is a big undertaking, in case no one had ever noticed that before. I’m very grateful to my scientific collaborators for putting up with my extended disappearances along the way. It’s also very nerve-wracking to imagine sending it out there into the world all by itself. With blog posts there is immediate feedback in terms of comments and trackbacks; you can get a feel for what the reactions are, and revise and respond accordingly. But the book really has a life of its own. People will read and review it for goodness knows how long, and I won’t always be there to protect it.

Frankly, I’m not sure this “book” technology will ever catch on.

Remembering the Past is Like Imagining the Future

Because of the growth of entropy, we have a very different epistemic access to the past than to the future. In retrodicting the past, we have recourse to “memories” and “records,” which we can take as mostly-reliable indicators of events that actually happened. But when it comes to the future, the best we can do is extrapolate, without nearly the reliability that we have in reconstructing the past.

However — the human brain, as most readers of this blog probably know, was not intelligently designed. It doesn’t have the high-level structure of a computer program, where all the processes are carefully planned to achieve some goal. (The lower-level structures share the mechanical features of any other physical system, but that’s of little help here.) Evolution nudges the genome in useful directions, but it can only work with the raw materials it’s given; it doesn’t have the luxury of starting from scratch. So over and over in biological organisms, we find features that were originally developed for one purpose being re-engineered for something else.

As it turns out, the way that the human brain goes about the task of “remembering the past” is actually very similar to how it goes about “imagining the future.” Deep down, these are activities with very different functions and outcomes — predicting the future is a lot less reliable, for one thing. But in both cases, the brain goes through more or less the same routine.

[Image: mri-schacter.jpg]

That’s what Daniel Schacter at Harvard and his friends have discovered, by doing functional MRI studies of brains subjected to different kinds of cues. (Science News report, Nature review article, Charlie Rose interview.) Subjects are inserted gently into the giant magnetic field, then asked to either conjure up a memory or imagine a future scenario about some particular cue-word. What you see is that the same sites in the brain light up in both cases. The brain on the left in this image is remembering the past — on the right, it’s concocting an imaginary scenario about the future.

[Image: doing_double_duty.jpg]

Further confirmation comes from studies of amnesiacs, who famously can’t remember the past. But if you ask the right questions, you find that they also have significant problems imagining their own future.

We tend to assume that the brain must be like a computer — when we want to access a memory, we simply pull up a “file” stored somewhere on the brain’s hard drive, and take a look at its contents. But that’s not it at all. Schacter believes that pieces of data relevant to any particular memory — times, images, sounds — are stored piecemeal in different parts of the brain. When we want to “remember” something, another part of the brain assembles these pieces into a (hopefully) coherent picture. It’s like running a new simulation every time you need a memory, and it’s the same thing we do when we try to imagine some event in the future.

Everyone has heard that memories can be unreliable, but many of us don’t appreciate the extent to which that is true. It’s not the case that “real” memories are stored once and for all deep in the darkest recesses of the brain, and it’s just a matter of digging them up. False memories — conjured from any number of sources, from gradual embellishment to direct suggestion by others — seem precisely as vivid and real to us as accurate memories do. For a good reason: the brain uses the same tools to construct the memory from the available raw materials. A novel and a history book look the same on the printed page.

Chrono-Synclastic Infundibulum

I’m happy to announce that the first review of From Eternity to Here has appeared, over at Michael Bérubé’s blog. It has also appeared at Crooked Timber, a phenomenon that can ultimately be traced to the holographic non-locality inherent in quantum descriptions of space as well as time.

Readers of underdeveloped imagination will wonder how a review could appear when the book has not yet been written. When one has mastered the mysteries of time, should anyone be surprised?

From Eternity to Here: The Quest for the Ultimate Theory of Time

You know what the world really needs? A good book about time. Google tells me there are only about one and a half million such books right now, but I think you’ll agree that one more really good one is called for.

So I’m writing one. From Eternity to Here: The Quest for the Ultimate Theory of Time is a popular-level book on time, entropy, and their connections to cosmology, to be published by Dutton. Hopefully before the end of this year! I’ve been plugging away at it, and have shifted almost into full-time book-writing mode now. (Note to collaborators: I promise not to abandon you entirely.)

I have my own idiosyncratic ideas about how to account for the arrow of time in cosmology, but those are going to be confined to passing mentions in the last chapter. Mostly I’ll be discussing basic ideas that most experts agree are true, or true ideas that everyone should agree on even if perhaps they don’t quite yet, or the implications of those ideas for knotty questions in cosmology. Hopefully we can at least shift the conventional wisdom a little bit.

Naturally there is a web page with some details. Here is the tentative table of contents, although I’ve been cutting and pasting pretty vigorously, so who knows how it will end up looking once all is said and done. One thing is for sure, some of these chapter titles need sprucing up.

  1. Prologue

Part One: Time, Experience, and the Universe

  1. The Heavy Hand of Entropy
  2. The Beginning and End of Time
  3. The Past is Present Memory

Part Two: Einstein’s Universe

  1. Time is Personal
  2. Time is Flexible
  3. Looping Through Time

Part Three: Distinguishing the Past from the Future

  1. Running Backwards
  2. Entropy and Disorder
  3. Information and Life
  4. Recurrent Nightmares
  5. Quantum Time

Part Four: Natural and Unnatural Spacetimes

  1. Black Holes
  2. The Life of the Universe
  3. The Past Through Tomorrow
  4. Epilogue: From the Universe to the Kitchen
  Appendix: Math

If anyone out there is friends with Oprah, maybe drop her a line suggesting that this would make a good book-club choice. I hear that’s helpful when it comes to sales.

Update: And now you can buy it.

Richard Feynman on Boltzmann Brains

The Boltzmann Brain paradox is an argument against the idea that the universe around us, with its incredibly low-entropy early conditions and consequent arrow of time, is simply a statistical fluctuation within some eternal system that spends most of its time in thermal equilibrium. You can get a universe like ours that way, but you’re overwhelmingly more likely to get just a single galaxy, or a single planet, or even just a single brain — so the statistical-fluctuation idea seems to be ruled out by experiment. (With potentially profound consequences.)
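
The quantitative heart of the argument is Boltzmann’s own relation between entropy and probability: schematically (sweeping real subtleties about measures in eternal spacetimes under the rug), the chance that a system in equilibrium fluctuates into a macrostate of entropy S goes like

$$ P(S) \;\propto\; e^{(S - S_{\rm max})/k_B}, $$

so the probabilistic cost of a fluctuation is exponential in the entropy it has to give up. A lone brain requires a vastly smaller entropy dip than an entire hot Big Bang, so it wins by a factor that is itself exponentially enormous.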

The first invocation of an argument along these lines, as far as I know, came from Sir Arthur Eddington in 1931. But it’s a fairly straightforward argument, once you grant the assumptions (although there remain critics). So I’m sure that any number of people have thought along similar lines, without making a big deal about it.

One of those people, I just noticed, was Richard Feynman. At the end of his chapter on entropy in the Feynman Lectures on Physics, he ponders how to get an arrow of time in a universe governed by time-symmetric underlying laws.

So far as we know, all the fundamental laws of physics, such as Newton’s equations, are reversible. Then where does irreversibility come from? It comes from order going to disorder, but we do not understand this until we know the origin of the order. Why is it that the situations we find ourselves in every day are always out of equilibrium?

Feynman, following the same logic as Boltzmann, contemplates the possibility that we’re all just a statistical fluctuation.

One possible explanation is the following. Look again at our box of mixed white and black molecules. Now it is possible, if we wait long enough, by sheer, grossly improbable, but possible, accident, that the distribution of molecules gets to be mostly white on one side and mostly black on the other. After that, as time goes on and accidents continue, they get more mixed up again.

Thus one possible explanation of the high degree of order in the present-day world is that it is just a question of luck. Perhaps our universe happened to have had a fluctuation of some kind in the past, in which things got somewhat separated, and now they are running back together again. This kind of theory is not unsymmetrical, because we can ask what the separated gas looks like either a little in the future or a little in the past. In either case, we see a grey smear at the interface, because the molecules are mixing again. No matter which way we run time, the gas mixes. So this theory would say the irreversibility is just one of the accidents of life.

But, of course, it doesn’t really suffice as an explanation for the real universe in which we live, for the same reasons that Eddington gave — the Boltzmann Brain argument.

We would like to argue that this is not the case. Suppose we do not look at the whole box at once, but only at a piece of the box. Then, at a certain moment, suppose we discover a certain amount of order. In this little piece, white and black are separate. What should we deduce about the condition in places where we have not yet looked? If we really believe that the order arose from complete disorder by a fluctuation, we must surely take the most likely fluctuation which could produce it, and the most likely condition is not that the rest of it has also become disentangled! Therefore, from the hypothesis that the world is a fluctuation, all of the predictions are that if we look at a part of the world we have never seen before, we will find it mixed up, and not like the piece we just looked at. If our order were due to a fluctuation, we would not expect order anywhere but where we have just noticed it.

After pointing out that we do, in fact, see order (low entropy) in new places all the time, he goes on to emphasize the cosmological origin of the Second Law and the arrow of time:

We therefore conclude that the universe is not a fluctuation, and that the order is a memory of conditions when things started. This is not to say that we understand the logic of it. For some reason, the universe at one time had a very low entropy for its energy content, and since then the entropy has increased. So that is the way toward the future. That is the origin of all irreversibility, that is what makes the processes of growth and decay, that makes us remember the past and not the future, remember the things which are closer to that moment in history of the universe when the order was higher than now, and why we are not able to remember things where the disorder is higher than now, which we call the future.

And he closes by noting that our understanding of the early universe will have to improve before we can answer these questions.

This one-wayness is interrelated with the fact that the ratchet [a model irreversible system discussed earlier in the chapter] is part of the universe. It is part of the universe not only in the sense that it obeys the physical laws of the universe, but its one-way behavior is tied to the one-way behavior of the entire universe. It cannot be completely understood until the mystery of the beginnings of the history of the universe are reduced still further from speculation to scientific understanding.

We’re still working on that.
