Science

Quantum Sleeping Beauty and the Multiverse

Hidden in my papers with Chip Sebens on Everettian quantum mechanics is a simple solution to a fun philosophical problem with potential implications for cosmology: the quantum version of the Sleeping Beauty Problem. It’s a classic example of self-locating uncertainty: knowing everything there is to know about the universe except where you are in it. (Skeptic’s Play beat me to the punch here, but here’s my own take.)

The setup for the traditional (non-quantum) problem is the following. Some experimental philosophers enlist the help of a subject, Sleeping Beauty. She will be put to sleep, and a coin is flipped. If it comes up heads, Beauty will be awoken on Monday and interviewed; then she will (voluntarily) have all her memories of being awakened wiped out, and be put to sleep again. Then she will be awakened again on Tuesday, and interviewed once again. If the coin came up tails, on the other hand, Beauty will only be awakened on Monday. Beauty herself is fully aware ahead of time of what the experimental protocol will be.

So in one possible world (heads) Beauty is awakened twice, in identical circumstances; in the other possible world (tails) she is only awakened once. Each time she is asked a question: “What is the probability you would assign that the coin came up tails?”

Modified from a figure by Stuart Armstrong.

(Some other discussions switch the roles of heads and tails from my example.)

The Sleeping Beauty puzzle is still quite controversial. There are two answers one could imagine reasonably defending.

  • “Halfer” — Before going to sleep, Beauty would have said that the probability of the coin coming up heads or tails would be one-half each. Beauty learns nothing upon waking up. She should assign a probability one-half to it having been tails.
  • “Thirder” — If Beauty were told upon waking that the coin had come up heads, she would assign equal credence to it being Monday or Tuesday. But if she were told it was Monday, she would assign equal credence to the coin being heads or tails. The only consistent apportionment of credences is to assign 1/3 to each possibility, treating each possible waking-up event on an equal footing.
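
For the numerically inclined, the thirder's counting can be checked with a quick Monte Carlo sketch. (This doesn't settle the philosophical dispute: it simply counts awakening events equally, which is exactly the step a halfer rejects.)

```python
import random

def simulate(trials=100_000, seed=0):
    """Over many runs of the experiment, what fraction of all
    awakening events occur in a tails-world?"""
    rng = random.Random(seed)
    tails_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        # Heads: awakened Monday and Tuesday; tails: Monday only.
        awakenings = 2 if heads else 1
        total_awakenings += awakenings
        if not heads:
            tails_awakenings += 1
    return tails_awakenings / total_awakenings

print(simulate())  # close to 1/3, the thirder answer
```

Weighting each *coin flip* equally instead of each *awakening* gives the halfer answer of 1/2; the whole puzzle is which weighting is the right one.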

The Sleeping Beauty puzzle has generated considerable interest. It’s exactly the kind of wacky thought experiment that philosophers just eat up. But it has also attracted attention from cosmologists of late, because of the measure problem in cosmology. In a multiverse, there are many classical spacetimes (analogous to the coin toss) and many observers in each spacetime (analogous to being awakened on multiple occasions). Really the SB puzzle is a test-bed for cases of “mixed” uncertainties from different sources.

Chip and I argue that if we adopt Everettian quantum mechanics (EQM) and our Epistemic Separability Principle (ESP), everything becomes crystal clear. A rare case where the quantum-mechanical version of a problem is actually easier than the classical version. …


Why Probability in Quantum Mechanics is Given by the Wave Function Squared

One of the most profound and mysterious principles in all of physics is the Born Rule, named after Max Born. In quantum mechanics, particles don’t have classical properties like “position” or “momentum”; rather, there is a wave function that assigns a (complex) number, called the “amplitude,” to each possible measurement outcome. The Born Rule is then very simple: it says that the probability of obtaining any possible measurement outcome is equal to the square of the corresponding amplitude. (The wave function is just the set of all the amplitudes.)

Born Rule:     Probability(x) = |amplitude(x)|².
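
As a small numerical illustration (the amplitudes below are chosen arbitrarily), here is the Born Rule in a few lines. Note that even negative or imaginary amplitudes yield sensible, non-negative probabilities that sum to one:

```python
import numpy as np

# A toy wave function: complex amplitudes for three possible
# measurement outcomes, normalized so the probabilities sum to one.
amplitudes = np.array([0.6, -0.48j, 0.64], dtype=complex)
amplitudes /= np.linalg.norm(amplitudes)

# Born Rule: probability = squared modulus of the amplitude.
probabilities = np.abs(amplitudes) ** 2

print(probabilities)        # all real and non-negative
print(probabilities.sum())  # sums to 1 (up to rounding)
```

Squaring the modulus is what rescues Born's footnote-corrected rule: raw amplitudes can be negative or imaginary, but their squared moduli always behave like probabilities.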

The Born Rule is certainly correct, as far as all of our experimental efforts have been able to discern. But why? Born himself kind of stumbled onto his Rule. Here is an excerpt from his 1926 paper:

[Image: excerpt from Born’s 1926 paper]

That’s right. Born’s paper was rejected at first, and when it was later accepted by another journal, he didn’t even get the Born Rule right. At first he said the probability was equal to the amplitude, and only in an added footnote did he correct it to being the amplitude squared. And a good thing, too, since amplitudes can be negative or even imaginary!

The status of the Born Rule depends greatly on one’s preferred formulation of quantum mechanics. When we teach quantum mechanics to undergraduate physics majors, we generally give them a list of postulates that goes something like this:

  1. Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space.
  2. Wave functions evolve in time according to the Schrödinger equation.
  3. The act of measuring a quantum system returns a number, known as the eigenvalue of the quantity being measured.
  4. The probability of getting any particular eigenvalue is equal to the square of the amplitude for that eigenvalue.
  5. After the measurement is performed, the wave function “collapses” to a new state in which the wave function is localized precisely on the observed eigenvalue (as opposed to being in a superposition of many different possibilities).

It’s an ungainly mess, we all agree. You see that the Born Rule is simply postulated right there, as #4. Perhaps we can do better.

Of course we can do better, since “textbook quantum mechanics” is an embarrassment. There are other formulations, and you know that my own favorite is Everettian (“Many-Worlds”) quantum mechanics. (I’m sorry I was too busy to contribute to the active comment thread on that post. On the other hand, a vanishingly small percentage of the 200+ comments actually addressed the point of the article, which was that the potential for many worlds is automatically there in the wave function no matter what formulation you favor. Everett simply takes them seriously, while alternatives need to go to extra efforts to erase them. As Ted Bunn argues, Everett is just “quantum mechanics,” while collapse formulations should be called “disappearing-worlds interpretations.”)

Like the textbook formulation, Everettian quantum mechanics also comes with a list of postulates. Here it is:

  1. Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space.
  2. Wave functions evolve in time according to the Schrödinger equation.

That’s it! Quite a bit simpler — and the two postulates are exactly the same as the first two of the textbook approach. Everett, in other words, is claiming that all the weird stuff about “measurement” and “wave function collapse” in the conventional way of thinking about quantum mechanics isn’t something we need to add on; it comes out automatically from the formalism.

The trickiest thing to extract from the formalism is the Born Rule. That’s what Charles (“Chip”) Sebens and I tackled in our recent paper: …


Galaxies That Are Too Big To Fail, But Fail Anyway

Dark matter exists, but there is still a lot we don’t know about it. Presumably it’s some kind of particle, but we don’t know how massive it is, what forces it interacts with, or how it was produced. On the other hand, there’s actually a lot we do know about the dark matter. We know how much of it there is; we know roughly where it is; we know that it’s “cold,” meaning that the average particle’s velocity is much less than the speed of light; and we know that dark matter particles don’t interact very strongly with each other. Which is quite a bit of knowledge, when you think about it.

Fortunately, astronomers are pushing forward to study how dark matter behaves as it’s scattered through the universe, and the results are interesting. We start with a very basic idea: that dark matter is cold and completely non-interacting, or at least has interactions (the strength with which dark matter particles scatter off of each other) that are too small to make any noticeable difference. This is a well-defined and predictive model: ΛCDM, which includes the cosmological constant (Λ) as well as the cold dark matter (CDM). We can compare astronomical observations to ΛCDM predictions to see if we’re on the right track.

At first blush, we are very much on the right track. Over and over again, new observations come in that match the predictions of ΛCDM. But there are still a few anomalies that bug us, especially on relatively small (galaxy-sized) scales.

One such anomaly is the “too big to fail” problem. The idea here is that we can use ΛCDM to make quantitative predictions concerning how many galaxies there should be with different masses. For example, the Milky Way is quite a big galaxy, and it has smaller satellites like the Magellanic Clouds. In ΛCDM we can predict how many such satellites there should be, and how massive they should be. For a long time we’ve known that the actual number of satellites we observe is quite a bit smaller than the number predicted — that’s the “missing satellites” problem. But this has a possible solution: we only observe satellite galaxies by seeing stars and gas in them, and maybe the halos of dark matter that would ordinarily support such galaxies get stripped of their stars and gas by interacting with the host galaxy. The too big to fail problem tries to sharpen the issue, by pointing out that some of the predicted galaxies are just so massive that there’s no way they could not have visible stars. Or, put another way: the Milky Way does have some satellites, as do other galaxies; but when we examine these smaller galaxies, they seem to have a lot less dark matter than the simulations would predict.

Still, any time you are concentrating on galaxies that are satellites of other galaxies, you rightly worry that complicated interactions between messy atoms and photons are getting in the way of the pristine elegance of the non-interacting dark matter. So we’d like to check that this purported problem exists even out “in the field,” with lonely galaxies far away from big monsters like the Milky Way.

A new paper claims that yes, there is a too-big-to-fail problem even for galaxies in the field.

Is there a “too big to fail” problem in the field?
Emmanouil Papastergis, Riccardo Giovanelli, Martha P. Haynes, Francesco Shankar

We use the Arecibo Legacy Fast ALFA (ALFALFA) 21cm survey to measure the number density of galaxies as a function of their rotational velocity, Vrot,HI (as inferred from the width of their 21cm emission line). Based on the measured velocity function we statistically connect galaxies with their host halos, via abundance matching. In a LCDM cosmology, low-velocity galaxies are expected to be hosted by halos that are significantly more massive than indicated by the measured galactic velocity; allowing lower mass halos to host ALFALFA galaxies would result in a vast overestimate of their number counts. We then seek observational verification of this predicted trend, by analyzing the kinematics of a literature sample of field dwarf galaxies. We find that galaxies with Vrot,HI<25 km/s are kinematically incompatible with their predicted LCDM host halos, in the sense that hosts are too massive to be accommodated within the measured galactic rotation curves. This issue is analogous to the "too big to fail" problem faced by the bright satellites of the Milky Way, but here it concerns extreme dwarf galaxies in the field. Consequently, solutions based on satellite-specific processes are not applicable in this context. Our result confirms the findings of previous studies based on optical survey data, and addresses a number of observational systematics present in these works. Furthermore, we point out the assumptions and uncertainties that could strongly affect our conclusions. We show that the two most important among them, namely baryonic effects on the abundances and rotation curves of halos, do not seem capable of resolving the reported discrepancy.

Here is the money plot from the paper:

[Figure: the money plot from Papastergis et al. — predicted halo velocity vs. observed rotation velocity]

The horizontal axis is the maximum circular velocity, basically telling us the mass of the halo; the vertical axis is the observed velocity of hydrogen in the galaxy. The blue line is the prediction from ΛCDM, while the dots are observed galaxies. Now, you might think that the blue line is just a very crappy fit to the data overall. But that’s okay; the points represent upper limits in the horizontal direction, so points that lie below/to the right of the curve are fine. It’s a statistical prediction: ΛCDM is predicting how many galaxies we have at each mass, even if we don’t think we can confidently measure the mass of each individual galaxy. What we see, however, is that there are a bunch of points in the bottom left corner that are above the line. ΛCDM predicts that even the smallest galaxies in this sample should still be relatively massive (have a lot of dark matter), but that’s not what we see.

If it holds up, this result is really intriguing. ΛCDM is a nice, simple starting point for a theory of dark matter, but it’s also kind of boring. From a physicist’s point of view, it would be much more fun if dark matter particles interacted noticeably with each other. We have plenty of ideas, including some of my favorites like dark photons and dark atoms. It is very tempting to think that observed deviations from the predictions of ΛCDM are due to some interesting new physics in the dark sector.

Which is why, of course, we should be especially skeptical. Always train your doubt most strongly on those ideas that you really want to be true. Fortunately there is plenty more to be done in terms of understanding the distribution of galaxies and dark matter, so this is a very solvable problem — and a great opportunity for learning something profound about most of the matter in the universe.


Why the Many-Worlds Formulation of Quantum Mechanics Is Probably Correct

I have often talked about the Many-Worlds or Everett approach to quantum mechanics — here’s an explanatory video, an excerpt from From Eternity to Here, and slides from a talk. But I don’t think I’ve ever explained as persuasively as possible why I think it’s the right approach. So that’s what I’m going to try to do here. Although to be honest right off the bat, I’m actually going to tackle a slightly easier problem: explaining why the many-worlds approach is not completely insane, and indeed quite natural. The harder part is explaining why it actually works, which I’ll get to in another post.

Any discussion of Everettian quantum mechanics (“EQM”) comes with the baggage of pre-conceived notions. People have heard of it before, and have instinctive reactions to it, in a way that they don’t to (for example) effective field theory. Hell, there is even an app, universe splitter, that lets you create new universes from your iPhone. (Seriously.) So we need to start by separating the silly objections to EQM from the serious worries.

The basic silly objection is that EQM postulates too many universes. In quantum mechanics, we can’t deterministically predict the outcomes of measurements. In EQM, that is dealt with by saying that every measurement outcome “happens,” but each in a different “universe” or “world.” Say we think of Schrödinger’s Cat: a sealed box inside of which we have a cat in a quantum superposition of “awake” and “asleep.” (No reason to kill the cat unnecessarily.) Textbook quantum mechanics says that opening the box and observing the cat “collapses the wave function” into one of two possible measurement outcomes, awake or asleep. Everett, by contrast, says that the universe splits in two: in one the cat is awake, and in the other the cat is asleep. Once split, the universes go their own ways, never to interact with each other again.

[Figure: branching wave function]

And to many people, that just seems like too much. Why, this objection goes, would you ever think of inventing a huge — perhaps infinite! — number of different universes, just to describe the simple act of quantum measurement? It might be puzzling, but it’s no reason to lose all anchor to reality.

To see why objections along these lines are wrong-headed, let’s first think about classical mechanics rather than quantum mechanics. And let’s start with one universe: some collection of particles and fields and what have you, in some particular arrangement in space. Classical mechanics describes such a universe as a point in phase space — the collection of all positions and velocities of each particle or field.

What if, for some perverse reason, we wanted to describe two copies of such a universe (perhaps with some tiny difference between them, like an awake cat rather than a sleeping one)? We would have to double the size of phase space — create a mathematical structure that is large enough to describe both universes at once. In classical mechanics, then, it’s quite a bit of work to accommodate extra universes, and you better have a good reason to justify putting in that work. (Inflationary cosmology seems to do it, by implicitly assuming that phase space is already infinitely big.)

That is not what happens in quantum mechanics. The capacity for describing multiple universes is automatically there. We don’t have to add anything.
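
Both halves of this contrast can be sketched in a few lines (the particle numbers and labels below are invented for illustration):

```python
import numpy as np

# Classical: one universe of N particles is a point in 6N-dimensional
# phase space (3 position + 3 momentum components per particle).
def phase_space_dim(n_particles, n_universes=1):
    # Describing extra classical universes means enlarging the arena:
    # a full copy of phase space for each one.
    return 6 * n_particles * n_universes

one = phase_space_dim(100)                  # 600 dimensions
two = phase_space_dim(100, n_universes=2)   # 1200: we had to double it
assert two == 2 * one

# Quantum: a two-state system (awake/asleep cat) lives in a
# 2-dimensional Hilbert space. A superposition of both possibilities
# is just another vector in the SAME space; no enlargement needed.
awake  = np.array([1.0, 0.0])
asleep = np.array([0.0, 1.0])
both = (awake + asleep) / np.sqrt(2)   # normalized superposition
assert both.shape == awake.shape       # same Hilbert space all along
```

The asserts make the point: the classical description had to grow to hold a second universe, while the quantum superposition fits in the space we already had.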

The reason we can state this with such confidence is the fundamental reality of quantum mechanics: the existence of superpositions of different possible measurement outcomes. In classical mechanics, we have certain definite possible states, all of which are directly observable. It will be important for what comes later that the system we consider is microscopic, so let’s consider a spinning particle that can have spin-up or spin-down. (It is directly analogous to Schrödinger’s cat: cat=particle, awake=spin-up, asleep=spin-down.) Classically, the possible states are

“spin is up”

or

“spin is down”.

Quantum mechanics says that the state of the particle can be a superposition of both possible measurement outcomes. It’s not that we don’t know whether the spin is up or down; it’s that it’s really in a superposition of both possibilities, at least until we observe it. We can denote such a state like this:

(“spin is up” + “spin is down”).

While classical states are points in phase space, quantum states are “wave functions” that live in something called Hilbert space. Hilbert space is very big — as we will see, it has room for lots of stuff.

To describe measurements, we need to add an observer. It doesn’t need to be a “conscious” observer or anything else that might get Deepak Chopra excited; we just mean a macroscopic measuring apparatus. It could be a living person, but it could just as well be a video camera or even the air in a room. To avoid confusion we’ll just call it the “apparatus.”

In any formulation of quantum mechanics, the apparatus starts in a “ready” state, which is a way of saying “it hasn’t yet looked at the thing it’s going to observe” (i.e., the particle). More specifically, the apparatus is not entangled with the particle; their two states are independent of each other. So the quantum state of the particle+apparatus system starts out like this: …
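
In linear-algebra terms, the setup just described, and the branching that follows measurement, can be sketched like this (the basis labels are invented for illustration, and the “measurement” line is a schematic stand-in for a real unitary interaction):

```python
import numpy as np

# Particle: equal superposition of spin-up and spin-down.
up   = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])
particle = (up + down) / np.sqrt(2)

# Apparatus: a "ready" state, plus states recording each outcome.
ready    = np.array([1.0, 0.0, 0.0])
saw_up   = np.array([0.0, 1.0, 0.0])
saw_down = np.array([0.0, 0.0, 1.0])

# Before measurement: a product (unentangled) state of the pair.
before = np.kron(particle, ready)

# After measurement: the apparatus is correlated with the particle,
# leaving a superposition of two branches with Born weight 1/2 each.
after = (np.kron(up, saw_up) + np.kron(down, saw_down)) / np.sqrt(2)

# A product state reshapes to a rank-1 matrix; entanglement shows up
# as rank > 1 (one row per branch of the wave function).
print(np.linalg.matrix_rank(before.reshape(2, 3)))  # 1: unentangled
print(np.linalg.matrix_rank(after.reshape(2, 3)))   # 2: two branches
```

Nothing here was added to Hilbert space by hand: the two branches of `after` live in exactly the same 6-dimensional space as `before`.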


Quantum Mechanics Open Course from MIT

Kids today don’t know how good they have it. Back when I was learning quantum mechanics, the process involved steps like “going to lectures.” Not only did that require physical movement from the comfort of one’s home to dilapidated lecture halls, but — get this — you actually had to be there at some pre-arranged time! Often early in the morning.

These days, all you have to do is fire up the YouTube and watch lectures on your own time. MIT has just released an entire undergraduate quantum course, lovingly titled “8.04” because that’s how MIT rolls. The prof is Allan Adams, who is generally a fantastic lecturer — so I’m suspecting these are really good even though I haven’t actually watched them all myself. Here’s the first lecture, “Introduction to Superposition.”

Lecture 1: Introduction to Superposition

Allan’s approach in this video is actually based on the first two chapters of Quantum Mechanics and Experience by philosopher David Albert. I’m sure this will be very disconcerting to the philosophy-skeptics haunting the comment section of the previous post.

This is just one of many great physics courses online; I’ve previously noted Lenny Susskind’s GR course. But, being largely beyond my course-taking days myself, I haven’t really kept track. Feel free to suggest your favorites in the comments.


Physicists Should Stop Saying Silly Things about Philosophy

The last few years have seen a number of prominent scientists step up to microphones and belittle the value of philosophy. Stephen Hawking, Lawrence Krauss, and Neil deGrasse Tyson are well-known examples. To redress the balance a bit, philosopher of physics Wayne Myrvold has asked some physicists to explain why talking to philosophers has actually been useful to them. I was one of the respondents, and you can read my entry at the Rotman Institute blog. I was going to cross-post my response here, but instead let me try to say the same thing in different words.

Roughly speaking, physicists tend to have three different kinds of lazy critiques of philosophy: one that is totally dopey, one that is frustratingly annoying, and one that is deeply depressing.

  • “Philosophy tries to understand the universe by pure thought, without collecting experimental data.”

This is the totally dopey criticism. Yes, most philosophers do not actually go out and collect data (although there are exceptions). But it makes no sense to jump right from there to the accusation that philosophy completely ignores the empirical information we have collected about the world. When science (or common-sense observation) reveals something interesting and important about the world, philosophers obviously take it into account. (Aside: of course there are bad philosophers, who do all sorts of stupid things, just as there are bad practitioners of every field. Let’s concentrate on the good ones, of whom there are plenty.)

Philosophers do, indeed, tend to think a lot. This is not a bad thing. All of scientific practice involves some degree of “pure thought.” Philosophers are, by their nature, more interested in foundational questions where the latest wrinkle in the data is of less importance than it would be to a model-building phenomenologist. But at its best, the practice of philosophy of physics is continuous with the practice of physics itself. Many of the best philosophers of physics were trained as physicists, and eventually realized that the problems they cared most about weren’t valued in physics departments, so they switched to philosophy. But those problems — the basic nature of the ultimate architecture of reality at its deepest levels — are just physics problems, really. And some amount of rigorous thought is necessary to make any progress on them. Shutting up and calculating isn’t good enough.

  • “Philosophy is completely useless to the everyday job of a working physicist.”

Now we have the frustratingly annoying critique. Because: duh. If your criterion for “being interesting or important” comes down to “is useful to me in my work,” you’re going to be leading a fairly intellectually impoverished existence. Nobody denies that the vast majority of physics gets by perfectly well without any input from philosophy at all. (“We need to calculate this loop integral! Quick, get me a philosopher!”) But it also gets by without input from biology, and history, and literature. Philosophy is interesting because of its intrinsic interest, not because it’s a handmaiden to physics. I think that philosophers themselves sometimes get too defensive about this, trying to come up with reasons why philosophy is useful to physics. Who cares?

Nevertheless, there are some physics questions where philosophical input actually is useful. Foundational questions, such as the quantum measurement problem, the arrow of time, the nature of probability, and so on. Again, a huge majority of working physicists don’t ever worry about these problems. But some of us do! And frankly, if more physicists who work in these areas would make the effort to talk to philosophers, they would save themselves from making a lot of simple mistakes.

  • “Philosophers care too much about deep-sounding meta-questions, instead of sticking to what can be observed and calculated.”

Finally, the deeply depressing critique. Here we see the unfortunate consequence of a lifetime spent in an academic/educational system that is focused on taking ambitious dreams and crushing them into easily-quantified units of productive work. The idea is apparently that developing a new technique for calculating a certain wave function is an honorable enterprise worthy of support, while trying to understand what wave functions actually are and how they capture reality is a boring waste of time. I suspect that a substantial majority of physicists who use quantum mechanics in their everyday work are uninterested in or downright hostile to attempts to understand the quantum measurement problem.

This makes me sad. I don’t know about all those other folks, but personally I did not fall in love with science as a kid because I was swept up in the romance of finding slightly more efficient calculational techniques. Don’t get me wrong — finding more efficient calculational techniques is crucially important, and I cheerfully do it myself when I think I might have something to contribute. But it’s not the point — it’s a step along the way to the point.

The point, I take it, is to understand how nature works. Part of that is knowing how to do calculations, but another part is asking deep questions about what it all means. That’s what got me interested in science, anyway. And part of that task is understanding the foundational aspects of our physical picture of the world, digging deeply into issues that go well beyond merely being able to calculate things. It’s a shame that so many physicists don’t see how good philosophy of science can contribute to this quest. The universe is much bigger than we are and stranger than we tend to imagine, and I for one welcome all the help we can get in trying to figure it out.


Quantum Mechanics In Your Face

(Title shamelessly stolen from Sidney Coleman.) I’m back after a bit of insane traveling, looking forward to resuming regular blogging next week. Someone has to weigh in about BICEP, right?

In the meantime, here’s a video to keep you occupied: a recording of the World Science Festival panel on quantum mechanics I had previously mentioned.

Measure for Measure: Quantum Physics and Reality

David Albert is defending dynamical collapse formulations, Sheldon Goldstein stands up for hidden variables, I am promoting the many-worlds formulation, and Rüdiger Schack is in favor of QBism, a psi-epistemic approach. Brian Greene is the moderator, and has brought along some fancy animations. It’s an hour and a half of quantal goodness, so settle in for quite a ride.

Just as the panel was happening, my first official forays into quantum foundations were appearing on the arxiv: a paper with Charles Sebens on deriving the Born Rule in Everettian quantum mechanics, as well as a shorter conference proceeding.

No time to delve into the details here, but I promise to do so soon!


Quantum Mechanics Smackdown

Greetings from the Big Apple, where the World Science Festival got off to a swinging start with the announcement of the Kavli Prize winners. The local favorite will of course be the Astrophysics prize, which was awarded to Alan Guth, Andrei Linde, and Alexei Starobinsky for pioneering the theory of cosmic inflation. But we should also congratulate Nanoscience winners Thomas Ebbesen, Stefan Hell, and Sir John B. Pendry, as well as Neuroscience winners Brenda Milner, John O’Keefe, and Marcus E. Raichle.

I’m participating in several WSF events, and one of them tonight will be live-streamed in this very internet. The title is Measure for Measure: Quantum Physics and Reality, and we kick off at 8pm Eastern, 5pm Pacific.

[Update: I had previously embedded the video here, but that seems to be broken. It’s still available on the WSF website.]

The other participants are David Albert, Sheldon Goldstein, and Rüdiger Schack, with the conversation moderated by Brian Greene. The group is not merely a randomly-selected collection of people who know and love quantum mechanics; each participant was carefully chosen to defend a certain favorite version of this most mysterious of physical theories.

  • David Albert will propound the idea of dynamical collapse theories, such as the Ghirardi-Rimini-Weber (GRW) model. They posit that QM is truly stochastic, with wave functions really “collapsing” at unpredictable times, with a tiny rate that is negligible for individual particles but becomes rapid for macroscopic objects.
  • Shelly Goldstein will support some version of hidden-variable theories such as Bohmian mechanics. It’s sometimes thought that hidden variables have been ruled out by experimental tests of Bell’s inequalities, but that’s not right; only local hidden variables have been excluded. Non-local hidden variables are still very viable!
  • Rüdiger Schack will be telling us about a relatively new approach called Quantum Bayesianism, or QBism for short. (I don’t love the approach, but the nickname is awesome.) The idea here is that QM is really a theory about our ignorance of the world, similar to what Tom Banks defended here way back when.
  • My job, of course, will be to defend the honor of the Everett (many-worlds) formulation. I’ve done a lot less serious research on this issue than the other folks, but I will make up for that disadvantage by supporting the theory that is actually true. And coincidentally, by the time we’ve started debating I should have my first official paper on the foundations of QM appear on the arxiv: new work on deriving the Born Rule in Everett with Chip Sebens.

(For what it’s worth, I cannot resist quoting David Wallace in this context: when faced with the measurement problem in quantum mechanics, philosophers are eager to change the physics, while physicists are sure it’s just a matter of better philosophy.)

(Note also that both Steven Weinberg and Gerard ’t Hooft have proposed new approaches to thinking about quantum mechanics. Neither of them was judged to be sufficiently distinguished to appear on our panel.)

It’s not accidental that I call these “formulations” rather than “interpretations” of quantum mechanics. I’d like to see people abandon the phrase “interpretation of quantum mechanics” entirely (though I often slip up and use it myself). The options listed above are not different interpretations of the same underlying structure — they are legitimately different physical theories, with potentially different experimental consequences (as our recent work on quantum fluctuations shows).

Relatedly, I discovered this morning that celebrated philosopher Hilary Putnam has joined the blogosphere, with the whimsically titled “Sardonic Comment.” His very first post shares an email conversation he had about the measurement problem in QM, including my co-panelists David and Shelly, and also Tim Maudlin and Roderich Tumulka (but not me). I therefore had the honor of leaving the very first comment on Hilary Putnam’s blog, encouraging him to bring more Everettians into the discussion!


Arrrgh Rumors

Today’s hot issue in my favorite corners of the internet (at least, besides “What’s up with Solange?”) is the possibility that the BICEP2 discovery of the signature of gravitational waves in the CMB might not be right after all. At least, that’s the rumor, spread in this case by Adam Falkowski at Résonaances. The claim is that one of the methods used by the BICEP2 team to estimate its foregrounds (polarization induced by the galaxy and other annoying astrophysical stuff, rather than imprinted on the microwave background from early times) relied on a pdf image of data from the Planck satellite, and that image was misinterpreted.

Is it true? I have no idea. It could be. Or it could be completely inconsequential. (For a very skeptical take, see Sesh Nadathur.) It seems that this was indeed one of the methods used by BICEP2 to estimate foregrounds, but it wasn’t the only one. A big challenge for the collaboration is that BICEP2 observes at only one microwave frequency, which makes it very hard to distinguish signals from foregrounds. (Often you can exploit the fact that the known frequency dependence of the CMB differs from that of the foregrounds, but not if you measure only one frequency.) As excited as we’ve all been about the discovery, it’s important to be cautious, especially when something dramatic has only been found by a single experiment. That’s why most of us have tried hard to include caveats like “if it holds up” every time we wax enthusiastic about what it all means.
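
To make the frequency-dependence point concrete, here is a toy numerical sketch. This is emphatically not the BICEP2 analysis; the dust spectral index, the normalizations, and the amplitudes below are invented purely for illustration. The point is just the linear algebra: with two frequencies and known spectral shapes, the mixing matrix is invertible and you can solve for the CMB and foreground amplitudes separately, while a single frequency leaves the decomposition underdetermined.

```python
import numpy as np

def cmb_antenna_factor(nu_ghz):
    """Conversion factor from thermodynamic CMB temperature to antenna
    temperature: f(x) = x^2 e^x / (e^x - 1)^2, with x = h*nu / (k*T_CMB).
    For T_CMB = 2.725 K, x is roughly nu[GHz] / 56.8."""
    x = nu_ghz / 56.8
    return x**2 * np.exp(x) / np.expm1(x)**2

def dust_factor(nu_ghz, beta=1.6):
    """Toy dust spectrum in antenna temperature: a power law nu^beta
    (a crude stand-in for a modified blackbody), normalized at 150 GHz."""
    return (nu_ghz / 150.0)**beta

# Two frequencies: the 2x2 mixing matrix is invertible, so the CMB and
# dust amplitudes can be recovered from the two observed signals.
freqs = [100.0, 150.0]
A = np.array([[cmb_antenna_factor(nu), dust_factor(nu)] for nu in freqs])
true_amps = np.array([1.0, 0.3])        # hypothetical CMB and dust amplitudes
observed = A @ true_amps
recovered = np.linalg.solve(A, observed)
assert np.allclose(recovered, true_amps)

# One frequency: a single equation in two unknowns, so infinitely many
# (cmb, dust) decompositions reproduce exactly the same measurement.
a = np.array([cmb_antenna_factor(150.0), dust_factor(150.0)])
alternative = true_amps + 0.5 * np.array([a[1], -a[0]])
assert np.isclose(a @ true_amps, a @ alternative)  # identical single-band signal
```

The degenerate direction in the last two lines is exactly why a second frequency band (like Planck’s) matters so much for confirming or refuting the signal.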

However. I have no problem with the blog rumors — it’s great that social media enable scientists to examine and challenge results out in the open, rather than relying on being part of some in-crowd. The problem is when this perfectly normal chit-chat gets elevated to some kind of big news story. To unfairly single someone out, here’s Science NOW, with a headline “Blockbuster Big Bang Result May Fizzle, Rumor Suggests.” The evidence put forward for that fizzling is nothing but the Résonaances blog post, which consists in turn of some anonymous whispers. (Including the idea that “the BICEP team has now admitted to the mistake,” which the team has subsequently strongly denied.)

I would claim that is some bad journalism right there. (Somewhat more nuanced stories appeared at New Scientist and National Geographic.) If a reporter could talk to an actual CMB scientist, who would offer an informed opinion on the record that BICEP2 had made a mistake, that would be well worth reporting (along with the appropriate responses from the BICEP2 team itself). But an unsourced rumor on a blog isn’t news (not even from this blog!). As Peter Coles says, “Rational scepticism is a very good thing. It’s one of the things that makes science what it is. But it all too easily turns into mudslinging.”

We’re having a workshop on the CMB and inflation here at Caltech this weekend, featuring talks from representatives of both BICEP2 and Planck. I was going to wait to talk about this until I actually had some idea of what was going on, which hopefully that workshop will provide. Right now I have no idea what the answer is — I suspect the BICEP2 result is fine, as they did things other than just look at that one pdf file, but I don’t pretend to be an expert, and I’ll quickly change my mind if that’s what the evidence indicates. But other non-experts rely on the media to distinguish between what’s true and what’s merely being gossiped about, and this is an example where they could do a better job.


Squelching Boltzmann Brains (And Maybe Eternal Inflation)

There’s no question that quantum fluctuations play a crucial role in modern cosmology, as the recent BICEP2 observations have reminded us. According to inflation, all of the structures we see in the universe, from galaxies up to superclusters and beyond, originated as tiny quantum fluctuations in the very early universe, as did the gravitational waves seen by BICEP2. But quantum fluctuations are a bit of a mixed blessing: in addition to providing an origin for density perturbations and gravitational waves (good!), they are also supposed to give rise to Boltzmann brains (bad) and eternal inflation (good or bad, depending on taste). Nobody would deny that it behooves cosmologists to understand quantum fluctuations as well as they can, especially since our theories involve mysterious aspects of physics operating at absurdly high energies.

Kim Boddy, Jason Pollack and I have been re-examining how quantum fluctuations work in cosmology, and in a new paper we’ve come to a surprising conclusion: cosmologists have been getting it wrong for decades now. In an expanding universe that has nothing in it but vacuum energy, there simply aren’t any quantum fluctuations at all. Our approach shows that the conventional understanding of inflationary perturbations gets the right answer, although the perturbations aren’t due to “fluctuations”; they’re due to an effective measurement of the quantum state of the inflaton field when the universe reheats at the end of inflation. In contrast, less empirically grounded ideas such as Boltzmann brains and eternal inflation both rely crucially on treating fluctuations as true dynamical events, occurring in real time — and we say that’s just wrong.

All very dramatically at odds with the conventional wisdom, if we’re right. Which means, of course, that there’s always a chance we’re wrong (although we don’t think it’s a big chance). This paper is pretty conceptual, which a skeptic might take as a euphemism for “hand-waving”; we’re planning on digging into some of the mathematical details in future work, but for the time being our paper should be mostly understandable to anyone who knows undergraduate quantum mechanics. Here’s the abstract:

De Sitter Space Without Quantum Fluctuations
Kimberly K. Boddy, Sean M. Carroll, and Jason Pollack

We argue that, under certain plausible assumptions, de Sitter space settles into a quiescent vacuum in which there are no quantum fluctuations. Quantum fluctuations require time-dependent histories of out-of-equilibrium recording devices, which are absent in stationary states. For a massive scalar field in a fixed de Sitter background, the cosmic no-hair theorem implies that the state of the patch approaches the vacuum, where there are no fluctuations. We argue that an analogous conclusion holds whenever a patch of de Sitter is embedded in a larger theory with an infinite-dimensional Hilbert space, including semiclassical quantum gravity with false vacua or complementarity in theories with at least one Minkowski vacuum. This reasoning provides an escape from the Boltzmann brain problem in such theories. It also implies that vacuum states do not uptunnel to higher-energy vacua and that perturbations do not decohere while slow-roll inflation occurs, suggesting that eternal inflation is much less common than often supposed. On the other hand, if a de Sitter patch is a closed system with a finite-dimensional Hilbert space, there will be Poincaré recurrences and Boltzmann fluctuations into lower-entropy states. Our analysis does not alter the conventional understanding of the origin of density fluctuations from primordial inflation, since reheating naturally generates a high-entropy environment and leads to decoherence.

The basic idea is simple: what we call “quantum fluctuations” aren’t true, dynamical events that occur in isolated quantum systems. Rather, they are a poetic way of describing the fact that when we observe such systems, the outcomes are randomly distributed rather than deterministically predictable. …
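
The distinction between having a nonzero quantum spread and actually doing something in time can be seen in a toy two-level system. This is an illustrative sketch of the general stationary-state point, not any calculation from the paper: an energy eigenstate has nonzero variance in an observable, yet every expectation value is strictly constant in time; only a superposition of energy eigenstates exhibits genuine dynamics.

```python
import numpy as np

# Two-level system with hbar = 1.
H = np.diag([0.0, 1.0])                   # Hamiltonian, energies 0 and 1
X = np.array([[0.0, 1.0], [1.0, 0.0]])    # Pauli-x, a "position-like" observable

def expect(psi, op):
    """Expectation value <psi| op |psi>."""
    return np.real(psi.conj() @ op @ psi)

def evolve(psi, t):
    """Time evolution U(t) = exp(-i H t); H is diagonal, so act elementwise."""
    return np.exp(-1j * np.diag(H) * t) * psi

ground = np.array([1.0, 0.0], dtype=complex)              # stationary state
superpos = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)

ts = np.linspace(0.0, 10.0, 101)
stationary = [expect(evolve(ground, t), X) for t in ts]
dynamical = [expect(evolve(superpos, t), X) for t in ts]

# The energy eigenstate has nonzero variance, <X^2> - <X>^2 = 1, but its
# expectation values never change: nothing "fluctuates" in time. The
# superposition's <X>(t) = cos(t) oscillates, i.e. it has real dynamics.
assert np.allclose(stationary, stationary[0])
assert not np.allclose(dynamical, dynamical[0])
```

The vacuum in de Sitter space plays the role of the energy eigenstate here: its wavefunction has spread, but in the absence of an out-of-equilibrium measuring device nothing ever happens.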
