Science

Why is the Universe So Damn Big?

I love reading io9, it’s such a fun mixture of science fiction, entertainment, and pure science. So I was happy to respond when their writer George Dvorsky emailed to ask an innocent-sounding question: “Why is the scale of the universe so freakishly large?”

You can find the fruits of George’s labors at this io9 post. But my own answer went on at sufficient length that I might as well put it up here as well. Of course, as with any “Why?” question, we need to keep in mind that the answer might simply be “Because that’s the way it is.”


Whenever we seem surprised or confused about some aspect of the universe, it’s because we have some pre-existing expectation for what it “should” be like, or what a “natural” universe might be. But the universe doesn’t have a purpose, and there’s nothing more natural than Nature itself — so what we’re really trying to do is figure out what our expectations should be.

The universe is big on human scales, but that doesn’t mean very much. It’s not surprising that humans are small compared to the universe, but big compared to atoms. That feature does have an obvious anthropic explanation — complex structures can only form on in-between scales, not at the very largest or very smallest sizes. Given that living organisms are going to be complex, it’s no surprise that we find ourselves at an in-between size compared to the universe and compared to elementary particles.

What is arguably more interesting is that the universe is so big compared to particle-physics scales. The Planck length, from quantum gravity, is 10^{-33} centimeters, and the size of an atom is roughly 10^{-8} centimeters. The difference between these two numbers is already puzzling — that’s related to the “hierarchy problem” of particle physics. (The size of atoms is fixed by the length scale set by electroweak interactions, while the Planck length is set by Newton’s constant; the two distances are extremely different, and we’re not sure why.) But the scale of the universe is roughly 10^29 centimeters across, which is enormous by any scale of microphysics. It’s perfectly reasonable to ask why.
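
Just to put that mismatch in numbers, dividing the figures quoted above:

\[
\frac{L_{\rm universe}}{\ell_{\rm Planck}} \sim \frac{10^{29}\ \mathrm{cm}}{10^{-33}\ \mathrm{cm}} = 10^{62},
\qquad
\frac{L_{\rm universe}}{L_{\rm atom}} \sim \frac{10^{29}\ \mathrm{cm}}{10^{-8}\ \mathrm{cm}} = 10^{37}.
\]

Sixty-some orders of magnitude separate the smallest length we can sensibly talk about from the size of the observable universe.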

Part of the answer is that “typical” configurations of stuff, given the laws of physics as we know them, tend to be very close to empty space. (“Typical” means “high entropy” in this context.) That’s a feature of general relativity, which says that space is dynamical, and can expand and contract. So you give me any particular configuration of matter in space, and I can find a lot more configurations where the same collection of matter is spread out over a much larger volume of space. So if we were to “pick a random collection of stuff” obeying the laws of physics, it would be mostly empty space. Which our universe is, kind of.

Two big problems with that. First, even empty space has a natural length scale, which is set by the cosmological constant (energy of the vacuum). In 1998 we discovered that the cosmological constant is not quite zero, although it’s very small. The length scale that it sets (roughly, the distance over which the curvature of space due to the cosmological constant becomes appreciable) is indeed the size of the universe today — about 10^28 centimeters. (Note that the cosmological constant itself is inversely proportional to the square of this length scale — so the question “Why is the cosmological-constant length scale so large?” is the same as “Why is the cosmological constant so small?”)
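
In formulas, using the conventional de Sitter relation between the two (the factor of 3 is just the standard convention):

\[
L_\Lambda = \sqrt{\frac{3}{\Lambda}} \approx 10^{28}\ \mathrm{cm}
\qquad\Longleftrightarrow\qquad
\Lambda = \frac{3}{L_\Lambda^{2}},
\]

so a huge length scale and a tiny cosmological constant are literally the same statement.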

This raises two big questions. The first is the “coincidence problem”: the universe is expanding, but the length scale associated with the cosmological constant is a constant, so why are they approximately equal today? The second is simply the “cosmological constant problem”: why is the length scale set by the cosmological constant so enormously larger than the Planck length, or even than the atomic scale? It’s safe to say that right now there are no widely accepted answers to either of these questions.

So roughly: the answer to “Why is the universe so big?” is “Because the cosmological constant is so small.” And the answer to “Why is the cosmological constant so small?” is “Nobody knows.”

But there’s yet another wrinkle (the second of the two problems mentioned above). Typical configurations of stuff tend to look like empty space. But our universe, while relatively empty, isn’t *that* empty. It has over a hundred billion galaxies, with a hundred billion stars each, and over 10^50 atoms per star. Worse, there are maybe 10^88 particles (mostly photons and neutrinos) within the observable universe. That’s a lot of particles! A much more natural state of the universe would be enormously emptier than that. Indeed, as space expands the density of particles dilutes away — we’re headed toward a much more natural state, which will be much emptier than the universe we see today.
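
Taking the figures above at face value (they are all loose, order-of-magnitude numbers), just the atoms locked up in stars multiply out to something enormous:

\[
10^{11}\ \mathrm{galaxies} \times 10^{11}\ \frac{\mathrm{stars}}{\mathrm{galaxy}} \times 10^{50}\ \frac{\mathrm{atoms}}{\mathrm{star}} \sim 10^{72}\ \mathrm{atoms},
\]

and the 10^{88} photons and neutrinos outnumber even that by many orders of magnitude.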

So, given what we know about physics, the real question is “Why are there so many particles in the observable universe?” That’s one angle on the question “Why is the entropy of the observable universe so small?” And of course the density of particles was much higher, and the entropy much lower, at early times. These questions are also ones to which we have no good answers at the moment.


Yoichiro Nambu

It was very sad to hear yesterday that Yoichiro Nambu has died. He was 94, so it came after a very long and full life.

Nambu was one of the greatest theoretical physicists of the 20th century, although not one with a high public profile. Among his contributions:

  • Being the first to really understand spontaneous symmetry breaking in quantum field theory, work for which he won a (very belated) Nobel Prize in 2008. We now understand the pion as a (pseudo-) “Nambu-Goldstone boson.”
  • Suggesting that quarks might come in three colors, and those colors might be charges for an SU(3) gauge symmetry, giving rise to force-carrying particles called gluons.
  • Proposing the first relativistic string theory, based on what is now called the Nambu-Goto action.

So — not too shabby.

But despite his outsized accomplishments, Nambu was quiet, proper, it’s even fair to say “shy.” He was one of those physicists who talked very little, and was often difficult to understand when he did talk, but if you put in the effort to follow him you would invariably be rewarded. One of his colleagues at the University of Chicago, Bruce Winstein, was charmed by the fact that Nambu was an experimentalist at heart; at home, apparently, he kept a little lab, where he would tinker with electronics to take a break from solving equations.

Any young person in science might want to read this profile of Nambu by his former student Madhusree Mukerjee. In it, Nambu tells of when he first came to the US from Japan, to be a postdoctoral researcher at the Institute for Advanced Study in Princeton. “Everyone seemed smarter than I,” Nambu recalls. “I could not accomplish what I wanted to and had a nervous breakdown.”

If Yoichiro Nambu can have a nervous breakdown because he didn’t feel smart enough, what hope is there for the rest of us?

Here are a few paragraphs I wrote about Nambu and spontaneous symmetry breaking in The Particle at the End of the Universe. …


Why Is There Dark Matter?

Years ago I read an article by Martin Rees, in which he surveyed the options for what the dark matter of the universe might be. I forget the exact wording, but near the end he said something like “There are so many candidates, it would be quite surprising to find ourselves living in a universe without dark matter.”

I was reminded of this when I saw a Quantum Diaries post by Alex Millar, entitled “Why Dark Matter Exists.” Why do we live in a universe with five times as much dark matter as ordinary matter, anyway? As it turns out, the post was more about explaining all of the wonderful evidence we have that there is so much dark matter. That’s a very respectable question, one that I’ve covered now and again. The less-respectable (but still interesting to me) question is, Why is the universe like that? Is the existence of dark matter indeed unsurprising, or is it an unusual feature that we should take as an important clue as to the nature of our world?


Generally, physicists love asking these kinds of questions (“why does the universe look this way, rather than that way?”), and yet are terribly sloppy at answering them. Questions about surprise and probability require a measure: a way of assigning, to each set of possibilities, some kind of probability number. Your answer wholly depends on how you assign that measure. If you have a coin, and your probability measure is “it will be heads half the time and tails half the time,” then getting twenty heads in a row is very surprising. If you have reason to think the coin is loaded, and your measure is “it comes up heads almost every time,” then twenty heads in a row isn’t surprising at all. Yet physicists love to bat around these questions in reference to the universe itself, without really bothering to justify one measure rather than another.
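
A two-line calculation makes the coin example concrete. (A minimal sketch; the 0.99 bias assigned to the loaded coin is just an illustrative choice.)

```python
# Probability of twenty heads in a row under two different measures.
p_fair = 0.5 ** 20     # fair-coin measure: roughly 1 in a million -- very surprising
p_loaded = 0.99 ** 20  # loaded-coin measure: about 0.82 -- not surprising at all
print(p_fair, p_loaded)
```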

With respect to dark matter, we’re contemplating a measure over all the various ways the universe could be, including both the laws of physics (which tell us what particles there can be) and the initial conditions (which set the stage for the later evolution). Clearly finding the “right” such measure is pretty much hopeless! But we can try to set up some reasonable considerations, and see where that leads us. …


Algebra of the Infrared

In my senior year of college, when I was beginning to think seriously about graduate school, a magical article appeared in the New York Times magazine. Called “A Theory of Everything,” by KC Cole, it conveyed the immense excitement that had built in the theoretical physics community behind an idea that had suddenly exploded in popularity after burbling beneath the surface for a number of years: a little thing called “superstring theory.” The human-interest hook for the story was simple — work on string theory was being led by a brilliant 36-year-old genius, a guy named Ed Witten. It was enough to cement Princeton as the place I most wanted to go to for graduate school. (In the end, they didn’t let me in.)

Nearly thirty years later, Witten is still going strong. As evidence, check out this paper that recently appeared on the arxiv, with co-authors Davide Gaiotto and Greg Moore:

Algebra of the Infrared: String Field Theoretic Structures in Massive N=(2,2) Field Theory In Two Dimensions
Davide Gaiotto, Gregory W. Moore, Edward Witten

We introduce a “web-based formalism” for describing the category of half-supersymmetric boundary conditions in 1+1 dimensional massive field theories with N=(2,2) supersymmetry and unbroken U(1)R symmetry. We show that the category can be completely constructed from data available in the far infrared, namely, the vacua, the central charges of soliton sectors, and the spaces of soliton states on ℝ, together with certain “interaction and boundary emission amplitudes”. These amplitudes are shown to satisfy a system of algebraic constraints related to the theory of A∞ and L∞ algebras. The web-based formalism also gives a method of finding the BPS states for the theory on a half-line and on an interval. We investigate half-supersymmetric interfaces between theories and show that they have, in a certain sense, an associative “operator product.” We derive a categorification of wall-crossing formulae. The example of Landau-Ginzburg theories is described in depth drawing on ideas from Morse theory, and its interpretation in terms of supersymmetric quantum mechanics. In this context we show that the web-based category is equivalent to a version of the Fukaya-Seidel A∞-category associated to a holomorphic Lefschetz fibration, and we describe unusual local operators that appear in massive Landau-Ginzburg theories. We indicate potential applications to the theory of surface defects in theories of class S and to the gauge-theoretic approach to knot homology.

I cannot, in good conscience, say that I understand very much about this new paper. It’s a kind of mathematical/formal field theory that is pretty far outside my bailiwick. (This is why scientists roll their eyes when a movie “physicist” is able to invent a unified field theory, build a time machine, and construct nanobots that can cure cancer. Specialization is real, folks!)

But there are two things about the paper that I nevertheless can’t help remarking on. One is that it’s 429 pages long. I mean, damn. That’s a book, not a paper. Scuttlebutt informs me that the authors had to negotiate specially with the arxiv administrators just to upload the beast. Most amusingly, they knew perfectly well that a 400+ page work might come across as a little intimidating, so they wrote a summary paper!

An Introduction To The Web-Based Formalism
Davide Gaiotto, Gregory W. Moore, Edward Witten

This paper summarizes our rather lengthy paper, “Algebra of the Infrared: String Field Theoretic Structures in Massive N=(2,2) Field Theory In Two Dimensions,” and is meant to be an informal, yet detailed, introduction and summary of that larger work.

This short, user-friendly introduction is a mere 45 pages — still longer than 95% of the papers in this field. After a one-paragraph introduction, the first words of the lighthearted summary paper are “Let X be a Kähler manifold, and W : X → C a holomorphic Morse function.” So maybe it’s not that informal.

The second remarkable thing is — hey look, there’s my name! Both of the papers cite one of my old works from when I was a grad student, with Simeon Hellerman and Mark Trodden. (A related paper was written near the same time by Gary Gibbons and Paul Townsend.)

Domain Wall Junctions are 1/4-BPS States
Sean M. Carroll, Simeon Hellerman, Mark Trodden

We study N=1 SUSY theories in four dimensions with multiple discrete vacua, which admit solitonic solutions describing segments of domain walls meeting at one-dimensional junctions. We show that there exist solutions preserving one quarter of the underlying supersymmetry — a single Hermitian supercharge. We derive a BPS bound for the masses of these solutions and construct a solution explicitly in a special case. The relevance to the confining phase of N=1 SUSY Yang-Mills and the M-theory/SYM relationship is discussed.

Simeon, who was a graduate student at UCSB at the time and is now faculty at the Kavli IPMU in Japan, was the driving force behind this paper. Mark and I had recently written a paper on different ways that topological defects could intersect and join together. Simeon, who is an expert in supersymmetry, noticed that there was a natural way to make something like that happen in supersymmetric theories: in particular, domain walls (sheets that stretch through space, separating different possible vacuum states) could intersect at “junctions.” Even better, domain-wall junction configurations would break some of the supersymmetry but not all of it. Setups like that are known as BPS states, and are highly valued and useful to supersymmetry aficionados. In general, solutions to quantum field theories are very difficult to find and characterize with any precision, but the BPS property lets you invoke some of the magic of supersymmetry to prove results that would otherwise be intractable.

Admittedly, the above paragraph is likely to be just as opaque to the person on the street as the Gaiotto/Moore/Witten paper is to me. The point is that we were able to study the behavior of domain walls and how they come together using some simple but elegant techniques in field theory. Think of drawing some configuration of walls as a network of lines in a plane. (All of the configurations we studied were invariant along some “vertical” direction in space, as well as static in time, so all the action happens in a two-dimensional plane.) Then we were able to investigate the set of all possible ways such walls could come together to form allowed solutions. Here’s an example, using walls that separate four different possible vacuum states:

[Figure: a network of domain walls separating four different vacuum states, meeting at junctions in the plane]

As far as I understand it (remember — not that far!), this is a very baby version of what Gaiotto, Moore, and Witten have done. Like us, they look at a large-distance limit, worrying about how defects come together rather than the detailed profiles of the individual configurations. That’s the “infrared” in their title. Unlike us, they go way farther, down a road known as “categorification” of the solutions. In particular, they use a famous property of BPS states: you can multiply them together to get other BPS states. That’s the “algebra” of their title. To mathematicians, algebras aren’t just ways of “solving for x” in equations that tortured you in high school; they are mathematical structures describing sets of vectors that can be multiplied by each other to produce other vectors. (Complex numbers are an algebra; so are ordinary three-dimensional vectors, using the cross product operation.)
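
Here is the cross-product example spelled out, just using numpy’s built-in cross product (nothing specific to the BPS algebra in the paper):

```python
import numpy as np

# Three-dimensional vectors with the cross product form an algebra:
# multiplying two vectors returns another vector in the same space.
e1, e2, e3 = np.eye(3)

print(np.cross(e1, e2))  # [0. 0. 1.] -> e3
print(np.cross(e2, e3))  # [1. 0. 0.] -> e1

# The product is anticommutative rather than commutative; an algebra only
# requires that the product of two vectors lands back in the vector space.
print(np.allclose(np.cross(e1, e2), -np.cross(e2, e1)))  # True
```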

At this point you’re allowed to ask: Why should I care? At least, why should I imagine putting in the work to read a 429-page opus about this stuff? For that matter, why did these smart guys put in the work to write such an opus?

It’s a reasonable question, but there’s also a reasonable answer. In theoretical physics there are a number of puzzles and unanswered questions that we are faced with, from “Why is the mass of the Higgs 125 GeV?” to “How does information escape from black holes?” Really these are all different sub-questions of the big one, “How does Nature work?” By construction, we don’t know the answer to these questions — if we did, we’d move on to other ones. But we don’t even know the right way to go about getting the answers. When Einstein started thinking about fitting gravity into the framework of special relativity, Riemannian geometry was absolutely the last thing on his mind. It’s hard to tell what paths you’ll have to walk down to get to the final answer.

So there are different techniques. Some people will try a direct approach: if you want to know how information comes out of a black hole, think as hard as you can about what happens when black holes radiate. If you want to know why the Higgs mass is what it is, think as hard as you can about the Higgs field and other possible fields we haven’t yet found.

But there’s also a more patient, foundational approach. Quantum field theory is hard; to be honest, we don’t understand it all that well. There’s little question that there’s a lot to be learned by studying the fundamental behavior of quantum field theories in highly idealized contexts, if only to better understand the space of things that can possibly happen with an eye to eventually applying them to the real world. That, I suspect, is the kind of motivation behind a massive undertaking like this. I don’t want to speak for the authors; maybe they just thought the math was cool and had fun learning about these highly unrealistic (but still extremely rich) toy models. But the ultimate goal is to learn some basic wisdom that we will someday put to use in answering that underlying question: How does Nature work?

As I said, it’s not really my bag. I don’t have nearly the patience or the mathematical aptitude required to make real progress in this kind of way. I’d rather try to work out on general principles what could have happened near the Big Bang, or how our classical world emerges out of the quantum wave function.

But, let a thousand flowers bloom! Gaiotto, Moore, and Witten certainly know what they’re doing, and hardly need to look for my approval. It’s one strategy among many, and as a community we’re smart enough to probe in a number of different directions. Hopefully this approach will revolutionize our understanding of quantum field theory — and at my retirement party everyone will be asking me why I didn’t stick to working on domain-wall junctions.


Warp Drives and Scientific Reasoning

A bit ago, the news streams were once again abuzz with claims that NASA was investigating amazing space drives that violate the laws of physics. And it’s true! If we grant that “NASA” includes “any person employed by NASA,” and “investigating” is defined as “wasting time and money thinking about.”

I say “again” because it was only a few years ago that news spread about a NASA effort aimed at a warp drive, a way to truly break the speed-of-light limit. Of course there are no realistic scenarios along those lines, so the investigators didn’t have any tangible results to present. Instead, they did the next best thing, releasing an artist’s conception of what a space ship powered by their (wholly imaginary) warp drive would look like. (What remains unclear is how the warpiness of the drive affected the design of their fantasy vessel.)

[Image: artist’s conception of a spaceship powered by the imaginary warp drive]

The more recent “news” is not actually about warp drive at all. It’s about propellantless space drives — which are, if anything, even less believable than the warp drives. (There is a whole zoo of nomenclature devoted to categorizing all of the non-existent technologies of this general ilk, which I won’t bother to keep straight.) Warp drives were at least inspired by some respectable science — Miguel Alcubierre’s energy-condition-violating spacetime. The “propellantless” stuff, on the other hand, just says “Laws of physics? Screw em.”

You may have heard of a little thing called Newton’s Third Law of Motion — for every action there is an equal and opposite reaction. If you want to go forward, you have to push on something or propel something backwards. The plucky NASA engineers in question aren’t hampered by such musty old ideas. As others have pointed out, what they’re proposing is very much like saying that you can sit in your car and start it moving by pushing on the steering wheel.
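
Stated as bookkeeping (this is just the standard momentum-conservation relation, nothing specific to these proposals): for a ship of mass M that expels a small mass m of propellant at speed v,

\[
p_{\rm ship} + p_{\rm propellant} = \mathrm{constant}
\qquad\Longrightarrow\qquad
M\,\Delta V \approx -\,m\,v.
\]

Set m = 0, expel nothing, and the ship’s momentum cannot change, no matter what you push on inside the ship.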

I’m not going to go through the various claims and attempt to sort out why they’re wrong. I’m not even an engineer! My point is a higher-level one: there is no reason whatsoever why these claims should be given the slightest bit of credence, even by complete non-experts. The fact that so many media outlets (with some happy exceptions) have credulously reported on it is extraordinarily depressing.

Now, this might sound like a shockingly anti-scientific attitude. After all, I certainly haven’t gone through the experimental results carefully. And it’s a bedrock principle of science that all of our theories are fundamentally up for grabs if we collect reliable evidence against them — even one so well-established as conservation of momentum. So isn’t the proper scientific attitude to take a careful look at the data, and wait until more conclusive experiments have been done before passing judgment? (And in the meantime make some artist’s impressions of what our eventual spaceships might look like?)

No. That is not the proper scientific attitude. For a very scientific reason: life is too short.

There is a more important lesson here than any fever dreams about warp drives: how we evaluate scientific claims, especially ones we encounter in the popular media. Not all claims are created equal. This is elementary Bayesian reasoning about beliefs. The probability you should ascribe to a claim is not determined only by the chance that certain evidence would be gathered if that claim were true; it depends also on your prior, the probability you would have attached to the claim before you got the evidence. (I don’t think I’ve ever written a specific explanation of Bayesian reasoning, but it’s being discussed quite a bit in the comments to Don Page’s guest post.)

Think of it this way. A friend says, “I saw a woman riding a bicycle earlier today.” No reason to disbelieve them — probably they did see that. Now imagine the same friend instead had said, “I saw a real live Tyrannosaurus Rex riding a bicycle today.” Are you equally likely to believe them? After all, the evidence you’ve been given in either case is pretty equivalent. But in reality, you’re much more skeptical in the second case, and for good reason — the prior probability you would attach to a T-Rex riding a bicycle in your town is much lower than that for an ordinary human woman riding a bicycle.
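
Here is that comparison as a toy Bayesian update. The likelihoods and priors below are invented purely for illustration; the point is only that identical evidence combined with wildly different priors gives wildly different posteriors.

```python
def posterior(prior, p_report_if_true, p_report_if_false):
    """P(claim is true | friend reports it), via Bayes' theorem."""
    evidence = p_report_if_true * prior + p_report_if_false * (1 - prior)
    return p_report_if_true * prior / evidence

# Same evidence in both cases: the friend would report the sighting with
# probability 0.9 if it happened, and would make such a report in error
# with probability 0.001 if it didn't.
woman_on_bike = posterior(prior=0.3, p_report_if_true=0.9, p_report_if_false=0.001)
trex_on_bike = posterior(prior=1e-12, p_report_if_true=0.9, p_report_if_false=0.001)

print(woman_on_bike)  # ~0.997 -- believe them
print(trex_on_bike)   # ~1e-9  -- still essentially impossible
```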

The same thing is true for claims about new technology. If someone says, “NASA scientists are planning on sending a mission to Jupiter’s moon Europa,” you would have no reason to disbelieve them — that’s just the kind of thing NASA does. If, on the other hand, someone says “NASA scientists are building a space drive that violates Newton’s laws of motion” — you should be rather more skeptical.

Which is not to say you should be absolutely skeptical. It’s worth spending five seconds asking about what kind of evidence for this outlandish claim we have actually been given. I could certainly imagine getting enough evidence to think that momentum wasn’t conserved after all. The kind of thing I would like to see is highly respected scientists, working under exquisitely controlled conditions, doing everything they can to be hard on their own work, subjecting their experiments to intensive peer review, published in refereed journals, and ideally replicated by competing groups that would love to prove them wrong. That’s the kind of thing we got, for example, when the Higgs boson was discovered.

And what do we have for our propellantless space drive? Hmm — not quite that. No refereed publications — indeed, no publications at all. What started the hoopla was an article on a web forum called NASAspaceflight.com. Which sounds kind of respectable, until you notice it isn’t affiliated with NASA in any way. And the evidence that the article points to is — wait for it — a comment on a post on a forum on that very same web site. Admittedly, the comment was written by someone who actually does work for NASA. But, not to put too fine a point on it, lots of people work for NASA. The folks in this particular “Eagleworks” group at Johnson Space Center are enthusiasts who feel that gumption and a bit of elbow grease might possibly enable them to build spaceships that do things beyond what the laws of physics might naively let you do.

And good for them! Enthusiasm is a virtue. Less virtuous is taking people’s enthusiasm at face value, rather than evaluating claims soberly. The Eagleworks group has succeeded in producing, essentially, nothing at all. Their primary mode of communication seems to be on Facebook. NASA officials, when asked by journalists for comment on the claims they leave on websites, remain silent — they don’t want to have anything to do with the whole mess.

So what we have is a situation where there’s a claim being made that is as extraordinary as it gets — conservation of momentum is being violated. And the evidence adduced for that claim is, how shall we put it, non-extraordinary. Utterly unconvincing. Not worth a minute’s thought. Let’s get on with our lives.


Does Spacetime Emerge From Quantum Information?

Quantizing gravity is an important goal of contemporary physics, but after decades of effort it’s proven to be an extremely tough nut to crack. So it’s worth considering a very slight shift of emphasis. What if the right strategy is not “finding the right theory of gravity and quantizing it,” but “finding a quantum theory out of which gravity emerges”?

That’s one way of thinking about a new and exciting approach to the problem known as “tensor networks” or the “AdS/MERA correspondence.” If you want to have the background and basic ideas presented in a digestible way, the talented Jennifer Ouellette has just published an article at Quanta that lays it all out. If you want to dive right into some of the nitty-gritty, my young and energetic collaborators and I have a new paper out:

Consistency Conditions for an AdS/MERA Correspondence
Ning Bao, ChunJun Cao, Sean M. Carroll, Aidan Chatwin-Davies, Nicholas Hunter-Jones, Jason Pollack, Grant N. Remmen

The Multi-scale Entanglement Renormalization Ansatz (MERA) is a tensor network that provides an efficient way of variationally estimating the ground state of a critical quantum system. The network geometry resembles a discretization of spatial slices of an AdS spacetime and “geodesics” in the MERA reproduce the Ryu-Takayanagi formula for the entanglement entropy of a boundary region in terms of bulk properties. It has therefore been suggested that there could be an AdS/MERA correspondence, relating states in the Hilbert space of the boundary quantum system to ones defined on the bulk lattice. Here we investigate this proposal and derive necessary conditions for it to apply, using geometric features and entropy inequalities that we expect to hold in the bulk. We show that, perhaps unsurprisingly, the MERA lattice can only describe physics on length scales larger than the AdS radius. Further, using the covariant entropy bound in the bulk, we show that there are no conventional MERA parameters that completely reproduce bulk physics even on super-AdS scales. We suggest modifications or generalizations of this kind of tensor network that may be able to provide a more robust correspondence.

(And we’re not the only Caltech-flavored group to be thinking about this stuff.)

Between the Quanta article and our paper you should basically be covered, but let me give the basic idea. It started when quantum-information theorists interested in condensed-matter physics, in particular Guifre Vidal and Glen Evenbly, were looking for ways to find the quantum ground state (the wave function with lowest possible energy) of toy-model systems of spins (qubits) arranged on a line. A simple problem to state, but one that is very hard to solve, even on a computer — Hilbert space is just too big to search through efficiently. So they turned to the idea of a “tensor network.”

A tensor network is a way of building up a complicated, highly-entangled state of many particles, by starting with a simple initial state. The particular kind of network that Vidal and Evenbly became interested in is called the MERA, for Multiscale Entanglement Renormalization Ansatz. Details can be found in the links above; what matters here is that the MERA takes the form of a lattice that looks a bit like this.

[Figure: a MERA tensor network, with the simple initial state at the center and the many-body state at the circular boundary]

Our initial simple starting point is actually at the center of this diagram. The various links represent tensors acting on that initial state to make something increasingly more complicated, culminating in the many-body state at the circular boundary of the picture.
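
To make “tensors acting on a simple initial state” a bit more concrete, here is a drastically simplified toy in numpy. A real MERA is built from local isometries and “disentangler” unitaries arranged in the lattice above; in this sketch a single random isometry per layer stands in for the whole layer, just to show a small state at the center growing into a normalized many-qubit state at the boundary.

```python
import numpy as np

def random_isometry(d_in, d_out, rng):
    """Random isometry V (d_out x d_in) satisfying V^dagger V = identity."""
    a = rng.standard_normal((d_out, d_in)) + 1j * rng.standard_normal((d_out, d_in))
    q, _ = np.linalg.qr(a)  # columns of q are orthonormal
    return q[:, :d_in]

rng = np.random.default_rng(0)

# The "simple initial state": a single qubit at the center of the network.
state = np.array([1.0, 0.0], dtype=complex)

# Each layer maps n qubits to 2n qubits, building up entanglement as we
# move outward toward the circular boundary of the diagram.
for layer in range(3):
    d_in = state.size
    V = random_isometry(d_in, d_in ** 2, rng)
    state = V @ state

print(state.size)                  # 256 = 2**8: an 8-qubit "boundary" state
print(np.vdot(state, state).real)  # isometries preserve the norm: ~1.0
```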

Here’s the thing: none of this had anything to do with gravity. It was just a cute calculational trick to find quantum states of interacting electron spins. But this kind of picture can’t help but remind certain theoretical physicists of a very famous kind of spacetime: Anti-de Sitter space (AdS), the maximally symmetric solution to Einstein’s equation in the presence of a negative cosmological constant. (Or at least the “spatial” part thereof, which is simply a hyperbolic plane.)

[Figure: anti-de Sitter space, with gravity in the bulk and a gauge theory on the boundary, as in the AdS/CFT correspondence]

Of course, someone has to be the first to actually do the noticing, and in this case it was a young physicist named Brian Swingle. Brian is a condensed-matter physicist himself, but he was intellectually curious enough to take courses on string theory as a grad student. There he learned that string theorists love AdS — it’s the natural home of Maldacena’s celebrated gauge/gravity duality, with a gauge theory living on the flat-space “boundary” and gravity lurking in the AdS “bulk.” Swingle wondered whether the superficial similarity between the MERA tensor network and AdS geometry wasn’t actually a sign of something deeper — an AdS/MERA correspondence?

And the answer is — maybe! Some of the features of AdS gravity are certainly captured by the MERA, so the whole thing kind of smells right. But, as we say in the paper above with the expansive list of authors, it doesn’t all just fall together right away. Some things you would like to be true in AdS don’t happen automatically in the MERA interpretation. Which isn’t a deal-killer — it’s just a sign that we have to, at the very least, work a bit harder. Perhaps there’s a generalization of the simple MERA that must be considered, or a slightly more subtle version of the purported correspondence.

The possibility is well worth pursuing. As amazing (and thoroughly checked) as the traditional AdS/CFT correspondence is, there are still questions about it that we haven’t satisfactorily answered. The tensor networks, on the other hand, are extremely concrete, well-defined objects, for which you should in principle be able to answer any question you might have. Perhaps more intriguingly, the idea of “string theory” never really enters the game. The “bulk” where gravity lives emerges directly from a set of interacting spins, in a context where the original investigators weren’t thinking about gravity at all. The starting point doesn’t even necessarily have anything to do with “spacetime,” and certainly not with the dynamics of spacetime geometry. So I certainly hope that people remain excited and keep thinking in this direction — it would be revolutionary if you could build a complete theory of quantum gravity directly from some interacting qubits.


Quantum Field Theory and the Limits of Knowledge

Last week I had the pleasure of giving a seminar to the philosophy department at the University of North Carolina. Ordinarily I would have talked about the only really philosophical work I’ve done recently (or arguably ever), deriving the Born Rule in the Everett approach to quantum mechanics. But in this case I had just talked about that stuff the day before, at a gathering of local philosophers of science.

So instead I decided to use the opportunity to get some feedback on another idea I had been thinking about — our old friend, the claim that The Laws of Physics Underlying Everyday Life Are Completely Understood (also here, here). In particular, given that I was looking for feedback from a group of people that had expertise in philosophical matters, I homed in on the idea that quantum field theory has a unique property among physical theories: any successful QFT tells us very specifically what its domain of applicability is, allowing us to distinguish the regime where it should be accurate from the regime where we can’t make predictions.

The talk wasn’t recorded, but here are the slides. I recycled a couple of ones from previous talks, but mostly these were constructed from scratch.

The punchline of the talk was summarized in this diagram, showing different regimes of phenomena and the arrows indicating what they depend on:

[Diagram: different regimes of phenomena, with arrows indicating what they depend on]

There are really two arguments going on here, indicated by the red arrows with crosses through them. These two arrows, I claim, don’t exist. The physics of everyday life is not affected by dark matter or any new particles or forces, and its only dependence on the deeper level of fundamental physics (whether it be string theory or whatever) is through the intermediary of what Frank Wilczek has dubbed “The Core Theory” — the Standard Model plus general relativity. The first argument (no new important particles or forces) relies on basic features of quantum field theory, like crossing symmetry and the small number of species that go into making up ordinary matter. The second argument is more subtle, relying on the idea of effective field theory.
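
The effective-field-theory argument can be summarized schematically (this is the generic form, not a formula from the talk): whatever new physics lives at some very high energy scale Λ, its effects at everyday energies E only show up through terms suppressed by powers of E/Λ,

\[
\mathcal{L}_{\rm eff} = \mathcal{L}_{\rm Core\ Theory} + \sum_i \frac{c_i}{\Lambda^{n_i}}\,\mathcal{O}_i,
\qquad
\mathrm{effects} \sim \left(\frac{E}{\Lambda}\right)^{n_i} \ll 1,
\]

which is why anything at the deeper level can only influence everyday life through the Core Theory itself.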

So how did it go over? I think people were properly skeptical and challenging, but for the most part they got the point, and thought it was interesting. (Anyone who was in the audience is welcome to chime in and correct me if that’s a misimpression.) Mostly, since this was a talk to philosophers rather than physicists, I spent my time doing a pedagogical introduction to quantum field theory, rather than diving directly into any contentious claims about it — and learning something new is always a good thing.


The Reality of Time

The idea that time isn’t “real” is an ancient one — if we’re allowed to refer to things as “ancient” under the supposition that time isn’t real. You will recall the humorous debate we had at our Setting Time Aright conference a few years ago, in which Julian Barbour (the world’s most famous living exponent of the view that time isn’t real) and Tim Maudlin (who believes strongly that time is real, and central) were game enough to argue each other’s position, rather than their own. Confusingly, they were both quite convincing.

The subject has come up once again with two new books by Lee Smolin: Time Reborn, all by himself, and The Singular Universe and the Reality of Time, with philosopher Roberto Mangabeira Unger. This new attention prompted me to write a short essay for Smithsonian magazine, laying out the different possibilities.

Personally I think that the whole issue is being framed in a slightly misleading way. (Indeed, this mistaken framing caused me to believe at first that Lee and I were in agreement, until his book actually came out.) The stance of Maudlin and Smolin and others isn’t merely that time is “real,” in the sense that it exists and plays a useful role in how we talk about the world. They want to say something more: that the passage of time is real. That is, that time is more than simply a label on different moments in the history of the universe, all of which are independently pretty much equal. They want to attribute “reality” to the idea of the universe coming into being, moment by moment.

[Figure: the metaphysical options for the nature of time, including the possibilism and eternalism views discussed below]

Such a picture — corresponding roughly to the “possibilism” option in the picture above, although I won’t vouch that any of these people would describe their own views that way — is to be contrasted with the “eternalist” picture of the universe that has been growing in popularity ever since Laplace introduced his Demon. This is the view, in the eyes of many, that is straightforwardly suggested by our best understanding of the laws of physics, which don’t seem to play favorites among different moments of time.

According to eternalism, the apparent “flow” of time from past to future is indeed an illusion, even if the time coordinate in our equations is perfectly real. There is an apparent asymmetry between the past and future (many such asymmetries, really), but that can be traced to the simple fact that the entropy of the universe was very low near the Big Bang — the Past Hypothesis. That’s an empirical feature of the configuration of stuff in the universe, not a defining property of the nature of time itself.

Personally, I find the eternalist block-universe view to be perfectly acceptable, so I think that these folks are working hard to tackle a problem that has already been solved. There are more than enough problems that haven’t been solved to occupy my life for the rest of its natural span of time (as it were), so I’m going to concentrate on those. But who knows? If someone could follow this trail and be led to a truly revolutionary and successful picture of how the universe works, that would be pretty awesome.


What Happens Inside the Quantum Wave Function?

Many things can “happen” inside a quantum wave function, of course, including everything that actually does happen — formation of galaxies, origin of life, Lady Gaga concerts, you name it. But given a certain quantum wave function, what actually is happening inside it?

[Video: talk at the Philosophy of Cosmology workshop in Tenerife]

A surprisingly hard problem! Basically because, unlike in classical mechanics, in quantum mechanics the wave function describes superpositions of different possible measurement outcomes. And you can easily cook up situations where a single wave function can be written in many different ways as superpositions of different things. Indeed, it’s inevitable; a humble quantum spin can be written as a superposition of “spinning clockwise” or “spinning counterclockwise” with respect to the z-axis, but it can equally well be written as a superposition of similar behavior with respect to the x-axis, or indeed any axis at all. Which one is “really happening”?
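
In the standard spin-1/2 notation (these are textbook relations, with “clockwise/counterclockwise about an axis” written as up/down along that axis), the very same state can be decomposed either way:

\[
|\!\uparrow_x\rangle = \frac{1}{\sqrt{2}}\bigl(|\!\uparrow_z\rangle + |\!\downarrow_z\rangle\bigr),
\qquad
|\!\uparrow_z\rangle = \frac{1}{\sqrt{2}}\bigl(|\!\uparrow_x\rangle + |\!\downarrow_x\rangle\bigr).
\]

A state with a definite spin direction along one axis is an equal superposition along another.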

Answer: none of them is “really happening” as opposed to any of the others. The possible measurement outcomes (in this case, spinning clockwise or counterclockwise with respect to some chosen axis) only become “real” when you actually measure the thing. Put more objectively: when the quantum system interacts with a large number of degrees of freedom, becomes entangled with them, and decoherence occurs. But the perfectly general and rigorous picture of all that process is still not completely developed.

So to get some intuition, let’s start with the simplest possible version of the problem: what happens inside a wave function (describing the “system” but also the “measurement device” and, really, the whole universe) that is completely stationary? I.e., what dynamical processes are occurring while the wave function isn’t changing at all?

Your first guess here — nothing at all “happens” inside a wave function that doesn’t evolve with time — is completely correct. That’s what I explain in the video above, of a talk I gave at the Philosophy of Cosmology workshop in Tenerife. The talk is based on my recent paper with Kim Boddy and Jason Pollack.

Surprisingly, this claim — “nothing is happening if the quantum state isn’t changing with time” — manages to be controversial! People have this idea that a time-independent quantum state has a rich inner life, with civilizations rising and falling within it, even though the state is literally exactly the same at every moment in time. I’m not precisely sure why. It would be more understandable if that belief got you something good, like an answer to some pressing cosmological problem. But it’s the opposite — believing that all sorts of things are happening inside a time-independent state creates cosmological problems, in particular the Boltzmann Brain problem, where conscious observers keep popping into existence in empty space. So we’re in the funny situation where believing the correct thing — that nothing is happening when the quantum state isn’t changing — solves a problem, and yet some people prefer to believe the incorrect thing, even though that creates problems for them.

Quantum mechanics is a funny thing.
