Science

Gifford Lectures on Natural Theology

In October I had the honor of visiting the University of Glasgow to give the Gifford Lectures on Natural Theology. These are a series of lectures that date back to 1888, and happen at different Scottish universities: Glasgow, Aberdeen, Edinburgh, and St. Andrews. “Natural theology” is traditionally the discipline that attempts to learn about the nature of God via our experience of the world (as opposed to via revelation or contemplation). The Gifford Lectures have always interpreted this remit rather broadly; many theologians have given the talks, but also people like Niels Bohr, Arthur Eddington, Hannah Arendt, Noam Chomsky, Carl Sagan, Richard Dawkins, and Steven Pinker.

Sometimes the speakers turn their lectures into short published books; in my case, I had just written a book that fit well into the topic, so I spoke about the ideas in The Big Picture. Unfortunately the first of the five lectures was not recorded, but the subsequent four were. Here are those recordings, along with a copy of my slides for the first talk. It’s not a huge loss, as many of the ideas in the first lecture can be found in previous talks I’ve given on the arrow of time; it’s about the evolution of our universe, how that leads to an arrow of time, and how that helps explain things like memory and cause/effect relations. The second lecture was on the Core Theory and why we think it will remain accurate in the face of new discoveries. The third lecture was on emergence and how different ways of talking about the world fit together, including discussions of effective field theory and why the universe itself exists. Lecture four dealt with the evolution of complexity, the origin of life, and the nature of consciousness. (I might have had to skip some details during that one.) And the final lecture was on what it all means, why we are here, and how to live in a universe that doesn’t come with any instructions. Enjoy!

(Looking at my YouTube channel makes me realize that I’ve been in a lot of videos.)

Lecture One: Cosmos, Time, Memory (slides only, no video)

Lecture Two: The Stuff of Which We Are Made

The Gifford Lectures in Natural Theology, 2016, lecture 2

Lecture Three: Layers of Reality

The Gifford Lectures in Natural Theology, 2016, lecture 3

Lecture Four: Simplicity, Complexity, Thought

The Gifford Lectures in Natural Theology, 2016, lecture 4

Lecture Five: Our Place in the Universe

The Gifford Lectures in Natural Theology, 2016, lecture 5


Talking About Dark Matter and Dark Energy

Trying to keep these occasional Facebook Live videos going. (I’ve looked briefly into other venues such as Periscope, but FB is really easy and anyone can view without logging in if they like.)

So here is one I did this morning, about why cosmologists think dark matter and dark energy are things that really exist. I talk in particular about a recent paper by Nielsen, Guffanti, and Sarkar that questioned the evidence that the expansion of the universe is accelerating (I think the evidence is still very good), and one by Erik Verlinde suggesting that emergent gravity can modify Einstein’s general relativity on large scales to explain away dark matter (I think it’s an intriguing idea, but am skeptical it can ever fit the data from the cosmic microwave background).

Feel free to propose topics for future conversations, or make suggestions about the format.


Entropy and Complexity, Cause and Effect, Life and Time

Finally back from Scotland, where I gave a series of five talks for the Gifford Lectures in Glasgow. The final four, at least, were recorded, and should go up on the web at some point, but I’m not sure when.

Meanwhile, I had a very fun collaboration with Henry Reich, the wizard behind the Minute Physics videos. Henry and I have known each other for a while, and I previously joined forces with him to talk about dark energy and the arrow of time.

This time, we made a series of five videos (sponsored by Google and Audible.com) based on sections of The Big Picture. In particular, we focused on the thread connecting the arrow of time and entropy to such everyday notions of cause and effect and the appearance of complex structures, ending with the origin of life and how low-entropy energy from the Sun powers the biosphere here on Earth. Henry and I wrote the scripts together, based on the book; I read the narration, and of course he did the art.

Enjoy!

  1. Why Doesn’t Time Flow Backwards? (Big Picture Ep. 1/5)
  2. Do Cause and Effect Really Exist? (Big Picture Ep. 2/5)
  3. Where Does Complexity Come From? (Big Picture Ep. 3/5)
  4. How Entropy Powers the Earth (Big Picture Ep. 4/5)
  5. What Is the Purpose of Life? (Big Picture Ep. 5/5)


Consciousness and Downward Causation

For many people, the phenomenon of consciousness is the best evidence we have that there must be something important missing in our basic physical description of the world. According to this worry, a bunch of atoms and particles, mindlessly obeying the laws of physics, can’t actually experience the way a conscious creature does. There’s no such thing as “what it is like to be” a collection of purely physical atoms; it would lack qualia, the irreducibly subjective components of our experience of the world. One argument for this conclusion is that we can conceive of collections of atoms that behave physically in exactly the same way as ordinary humans, but don’t have those inner experiences — philosophical zombies. (If you think about it carefully, I would claim, you would realize that zombies are harder to conceive of than you might originally have guessed — but that’s an argument for another time.)

The folks who find this line of reasoning compelling are not necessarily traditional Cartesian dualists who think that there is an immaterial soul distinct from the body. On the contrary, they often appreciate the arguments against “substance dualism,” and have a high degree of respect for the laws of physics (which don’t seem to need or provide evidence for any non-physical influences on our atoms). But still, they insist, there’s no way to just throw a bunch of mindless physical matter together and expect it to experience true consciousness.

People who want to dance this tricky two-step — respect for the laws of physics, but an insistence that consciousness can’t reduce to the physical — are forced to face up to a certain problem, which we might call the causal box argument. It goes like this. (Feel free to replace “physical particles” with “quantum fields” if you want to be fastidious.)

  1. Consciousness cannot be accounted for by physical particles obeying mindless equations.
  2. Human beings seem to be made up — even if not exclusively — of physical particles.
  3. To the best of our knowledge, those particles obey mindless equations, without exception.
  4. Therefore, consciousness does not exist.

Nobody actually believes this argument, let us hasten to add — they typically just deny one of the premises.

But there is a tiny sliver of wiggle room that might allow us to salvage something special about consciousness without giving up on the laws of physics — the concept of downward causation. Here we’re invoking the idea that there are different levels at which we can describe reality, as I discussed in The Big Picture at great length. We say that “higher” (more coarse-grained) levels are emergent, but that word means different things to different people. So-called “weak” emergence just says the obvious thing, that higher-level notions like the fluidity or solidity of a material substance emerge out of the properties of its microscopic constituents. In principle, if not in practice, the microscopic description is absolutely complete and comprehensive. A “strong” form of emergence would suggest that something truly new comes into being at the higher levels, something that just isn’t there in the microscopic description.

Downward causation is one manifestation of this strong-emergentist attitude. It’s the idea that what happens at lower levels can be directly influenced (causally acted upon) by what is happening at the higher levels. The idea, in other words, that you can’t really understand the microscopic behavior without knowing something about the macroscopic.

There is no reason to think that anything like downward causation really happens in the world, at least not down to the level of particles and forces. While I was writing The Big Picture, I grumbled on Twitter about how people kept talking about it but how I didn’t want to discuss it in the book; naturally, I was hectored into writing something about it.

But you can see why the concept of downward causation might be attractive to someone who doesn’t think that consciousness can be accounted for by the fields and equations of the Core Theory. Sure, the idea would be, maybe electrons and nuclei act according to the laws of physics, but those laws need to include feedback from higher levels onto that microscopic behavior — including whether or not those particles are part of a conscious creature. In that way, consciousness can play a decisive, causal role in the universe, without actually violating any physical laws.

One person who thinks that way is John Searle, the extremely distinguished philosopher from Berkeley (and originator of the Chinese Room argument). I recently received an email from Henrik Røed Sherling, who took a class with Searle and came across this very issue. He sent me this email, which he was kind enough to allow me to reproduce here:

Hi Professor Carroll,

I read your book and was at the same time awestruck and angered, because I thought your entire section on the mind was both well-written and awfully wrong — until I started thinking about it, that is. Now I genuinely don’t know what to think anymore, but I’m trying to work through it by writing a paper on the topic.

I took Philosophy of Mind with John Searle last semester at UC Berkeley. He convinced me of a lot of ideas of which your book has now disabused me. But despite your occasionally effective jabs at Searle, you never explicitly refute his own theory of the mind, Biological Naturalism. I want to do that, using an argument from your book, but I first need to make sure that I properly understand it.

Searle says this of consciousness: it is caused by neuronal processes and realized in neuronal systems, but is not ontologically reducible to these; consciousness is not just a word we have for something else that is more fundamental. He uses the following analogy to visualize his description: consciousness is to the mind like fluidity is to water. It’s a higher-level feature caused by lower-level features and realized in a system of said lower-level features. Of course, for his version of consciousness to escape the charge of epiphenomenalism, he needs the higher-level feature in this analogy to act causally on the lower-level features — he needs downward causation. In typical fashion he says that “no one in their right mind” can say that solidity does not act causally when a hammer strikes a nail, but it appears to me that this is what you are saying.

So to my questions. Is it right to say that your argument against the existence of downward causation boils down to the incompatible vocabularies of lower-level and higher-level theories? I.e. that there is no such thing as a gluon in Fluid Dynamics, nor anything such as a fluid in the Standard Model, so a cause in one theory cannot have an effect in the other simply because causes and effects are different things in the different theories; gluons don’t affect fluidity, temperatures and pressures do; fluids don’t affect gluons, quarks and fields do. If I have understood you right, then there couldn’t be any upward causation either. In which case Searle’s theory is not only epiphenomenal, it’s plain inaccurate from the get-go; he wants consciousness to both be a higher-level feature of neuronal processes and to be caused by them. Did I get this right?

Best regards,
Henrik Røed Sherling

Here was my reply:

Dear Henrik–

Thanks for writing. Genuinely not knowing what to think is always an acceptable stance!

I think your summary of my views is pretty accurate. As I say on p. 375, poetic naturalists tend not to be impressed by downward causation, but not by upward causation either! At least, not if your theory of each individual level is complete and consistent.

Part of the issue is, as often happens, an inconsistent use of a natural-language word, in this case “cause.” The kinds of dynamical, explain-this-occurrence causes that we’re talking about here are a different beast than inter-level implications (that one might be tempted to sloppily refer to as “causes”). Features of a lower level, like conservation of energy, can certainly imply or entail features of higher-level descriptions; and indeed the converse is also possible. But saying that such implications are “causes” is to mean something completely different than when we say “swinging my elbow caused the glass of wine to fall to the floor.”

So, I like to think I’m in my right mind, and I’m happy to admit that solidity acts causally when a hammer strikes a nail. But I don’t describe that nail as a collection of particles obeying the Core Theory *and* additionally as a solid object that a hammer can hit; we should use one language or the other. At the level of elementary particles, there’s no such concept as “solidity,” and it doesn’t act causally.

To be perfectly careful — all this is how we currently see things according to modern physics. An electron responds to the other fields precisely at its location, in quantitatively well-understood ways that make no reference to whether it’s in a nail, in a brain, or in interstellar space. We can of course imagine that this understanding is wrong, and that future investigations will reveal the electron really does care about those things. That would be the greatest discovery in physics since quantum mechanics itself, perhaps of all time; but I’m not holding my breath.

I really do think that enormous confusion is caused in many areas — not just consciousness, but free will and even more purely physical phenomena — by the simple mistake of starting sentences in one language or layer of description (“I thought about summoning up the will power to resist that extra slice of pizza…”) but then ending them in a completely different vocabulary (“… but my atoms obeyed the laws of the Standard Model, so what could I do?”). The dynamical rules of the Core Theory aren’t just vague suggestions; they are absolutely precise statements about how the quantum fields making up you and me behave under any circumstances (within the “everyday life” domain of validity). And those rules say that the behavior of, say, an electron is determined by the local values of other quantum fields at the position of the electron — and by nothing else. (That’s “locality” or “microcausality” in quantum field theory.) In particular, as long as the quantum fields at the precise position of the electron are the same, the larger context in which it is embedded is utterly irrelevant.
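In symbols, microcausality is the statement that any two local operators commute when they are evaluated at spacelike-separated points,

\left[\hat{\mathcal{O}}_1(x), \hat{\mathcal{O}}_2(y)\right] = 0 \quad \textrm{whenever } x-y \textrm{ is spacelike},

so only the fields in the electron’s immediate vicinity have any say in what it does next.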

It’s possible that the real world is different, and there is such inter-level feedback. That’s an experimentally testable question! As I mentioned to Henrik, it would be the greatest scientific discovery of our lifetimes. And there’s basically no evidence that it’s true. But it’s possible.

So I don’t think downward causation is of any help to attempts to free the phenomenon of consciousness from arising in a completely conventional way from the collective behavior of microscopic physical constituents of matter. We’re allowed to talk about consciousness as a real, causally efficacious phenomenon — as long as we stick to the appropriate human-scale level of description. But electrons get along just fine without it.


Maybe We Do Not Live in a Simulation: The Resolution Conundrum

Greetings from bucolic Banff, Canada, where we’re finishing up the biennial Foundational Questions Institute conference. To a large extent, this event fulfills the maxim that physicists like to fly to beautiful, exotic locations, and once there they sit in hotel rooms and talk to other physicists. We did manage to sneak out into nature a couple of times, but even there we were tasked with discussing profound questions about the nature of reality. Evidence: here is Steve Giddings, our discussion leader on a trip up the Banff Gondola, being protected from the rain as he courageously took notes on our debate over “What Is an Event?” (My answer: an outdated notion, a relic of our past classical ontologies.)


One fun part of the conference was a “Science Speed-Dating” event, where a few of the scientists and philosophers sat at tables to chat with interested folks who switched tables every twenty minutes. One of the participants was philosopher David Chalmers, who decided to talk about the question of whether we live in a computer simulation. You probably heard about this idea long ago, but public discussion of the possibility was recently re-ignited when Elon Musk came out as an advocate.

At David’s table, one of the younger audience members raised a good point: even simulated civilizations will have the ability to run simulations of their own. But a simulated civilization won’t have access to as much computing power as the one that is simulating it, so the lower-level sims will necessarily have lower resolution. No matter how powerful the top-level civilization might be, there will be a bottom level that doesn’t actually have the ability to run realistic civilizations at all.

This raises a conundrum, I suggest, for the standard simulation argument — i.e. not only the offhand suggestion “maybe we live in a simulation,” but the positive assertion that we probably do. Here is one version of that argument:

  1. We can easily imagine creating many simulated civilizations.
  2. Things that are that easy to imagine are likely to happen, at least somewhere in the universe.
  3. Therefore, there are probably many civilizations being simulated within the lifetime of our universe. Enough that there are many more simulated people than people like us.
  4. Likewise, it is easy to imagine that our universe is just one of a large number of universes being simulated by a higher civilization.
  5. Given a meta-universe with many observers (perhaps of some specified type), we should assume we are typical within the set of all such observers.
  6. A typical observer is likely to be in one of the simulations (at some level), rather than a member of the top-level civilization.
  7. Therefore, we probably live in a simulation.

Of course one is welcome to poke holes in any of the steps of this argument. But let’s for the moment imagine that we accept them. And let’s add the observation that the hierarchy of simulations eventually bottoms out, at a set of sims that don’t themselves have the ability to perform effective simulations. Given the above logic, including the idea that civilizations that have the ability to construct simulations usually construct many of them, we inevitably conclude:

  • We probably live in the lowest-level simulation, the one without an ability to perform effective simulations. That’s where the vast majority of observers are to be found.

Hopefully the conundrum is clear. The argument started with the premise that it wasn’t that hard to imagine simulating a civilization — but the conclusion is that we shouldn’t be able to do that at all. This is a contradiction, therefore one of the premises must be false.
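To see just how lopsided the distribution of observers becomes, here is a toy calculation (my own illustrative sketch — the function name and the choice of ten simulations per civilization are invented for the example, not taken from anyone’s argument):

    # Toy model of the simulation hierarchy: each simulation-capable
    # civilization runs n_sims simulations, and the nesting bottoms out
    # after `depth` levels because computing power shrinks at each step.
    def observers_per_level(n_sims=10, depth=5):
        """Relative number of civilizations at each level of nesting."""
        return [n_sims**level for level in range(depth + 1)]

    levels = observers_per_level()
    total = sum(levels)
    for lvl, n in enumerate(levels):
        print(f"level {lvl}: fraction of observers = {n / total:.3f}")
    # The bottom level -- the one that cannot run effective simulations
    # of its own -- ends up with roughly 90% of all observers.

With those numbers, about 90% of all civilizations sit at the very bottom of the hierarchy, and making n_sims bigger only concentrates the measure further.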

This isn’t such an unusual outcome in these quasi-anthropic “we are typical observers” kinds of arguments. The measure on all such observers often gets concentrated on some particular subset of the distribution, one that might not resemble us at all. In multiverse cosmology this shows up as the “youngness paradox.”

Personally I think that premise 1. (it’s easy to perform simulations) is a bit questionable, and premise 5. (we should assume we are typical observers) is more or less completely without justification. If we know that we are members of some very homogeneous ensemble, where every member is basically the same, then by all means typicality is a sensible assumption. But when ensembles are highly heterogeneous, and we actually know something about our specific situation, there’s no reason to assume we are typical. As James Hartle and Mark Srednicki have pointed out, that’s a fake kind of humility — by asserting that “we are typical” in the multiverse, we’re actually claiming that “typical observers are like us.” Who’s to say that is true?

I highly doubt this is an original argument, so probably simulation cognoscenti have debated it back and forth, and likely there are standard responses. But it illustrates the trickiness of reasoning about who we are in a very big cosmos.


You Should Love (or at least respect) the Schrödinger Equation

Over at the twitter dot com website, there has been a briefly-trending topic #fav7films, discussing your favorite seven films. Part of the purpose of being on twitter is to one-up the competition, so I instead listed my #fav7equations. Slightly cleaned up, the equations I chose as my seven favorites are:

  1. {\bf F} = m{\bf a}
  2. \partial L/\partial {\bf x} = \partial_t ({\partial L}/{\partial {\dot {\bf x}}})
  3. {\mathrm d}*F = J
  4. S = k \log W
  5. ds^2 = -{\mathrm d}t^2 + {\mathrm d}{\bf x}^2
  6. G_{ab} = 8\pi G T_{ab}
  7. \hat{H}|\psi\rangle = i\partial_t |\psi\rangle

In order: Newton’s Second Law of motion, the Euler-Lagrange equation, Maxwell’s equations in terms of differential forms, Boltzmann’s definition of entropy, the metric for Minkowski spacetime (special relativity), Einstein’s equation for spacetime curvature (general relativity), and the Schrödinger equation of quantum mechanics. Feel free to Google them for more info, even if equations aren’t your thing. They represent a series of extraordinary insights in the development of physics, from the 1600’s to the present day.

Of course people chimed in with their own favorites, which is all in the spirit of the thing. But one misconception came up that is probably worth correcting: people don’t appreciate how important and all-encompassing the Schrödinger equation is.

I blame society. Or, more accurately, I blame how we teach quantum mechanics. Not that the standard treatment of the Schrödinger equation is fundamentally wrong (as other aspects of how we teach QM are), but that it’s incomplete. And sometimes people get brief introductions to things like the Dirac equation or the Klein-Gordon equation, and come away with the impression that they are somehow relativistic replacements for the Schrödinger equation, which they certainly are not. Dirac et al. may have originally wondered whether they were, but these days we certainly know better.

As I remarked in my post about emergent space, we human beings tend to do quantum mechanics by starting with some classical model, and then “quantizing” it. Nature doesn’t work that way, but we’re not as smart as Nature is. By a “classical model” we mean something that obeys the basic Newtonian paradigm: there is some kind of generalized “position” variable, and also a corresponding “momentum” variable (how fast the position variable is changing), which together obey some deterministic equations of motion that can be solved once we are given initial data. Those equations can be derived from a function called the Hamiltonian, which is basically the energy of the system as a function of positions and momenta; the results are Hamilton’s equations, which are essentially a slick version of Newton’s original {\bf F} = m{\bf a}.
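Written out, for a single coordinate x with conjugate momentum p, Hamilton’s equations are

\dot{x} = \frac{\partial H}{\partial p}, \qquad \dot{p} = -\frac{\partial H}{\partial x},

and plugging in the familiar H = p^2/2m + V(x) gives \dot{x} = p/m and \dot{p} = -\partial V/\partial x — which is just {\bf F} = m{\bf a} in disguise.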

There are various ways of taking such a setup and “quantizing” it, but one way is to take the position variable and consider all possible (normalized, complex-valued) functions of that variable. So instead of, for example, a single position coordinate x and its momentum p, quantum mechanics deals with wave functions ψ(x). That’s the thing that you square to get the probability of observing the system to be at the position x. (We can also transform the wave function to “momentum space,” and calculate the probabilities of observing the system to be at momentum p.) Just as positions and momenta obey Hamilton’s equations, the wave function obeys the Schrödinger equation,

\hat{H}|\psi\rangle = i\partial_t |\psi\rangle.

Indeed, the \hat{H} that appears in the Schrödinger equation is just the quantum version of the Hamiltonian.

The problem is that, when we are first taught about the Schrödinger equation, it is usually in the context of a specific, very simple model: a single non-relativistic particle moving in a potential. In other words, we choose a particular kind of wave function, and a particular Hamiltonian. The corresponding version of the Schrödinger equation (with μ the particle’s mass, in units where ℏ = 1) is

\displaystyle{\left[-\frac{1}{2\mu}\frac{\partial^2}{\partial x^2} + V(x)\right]|\psi\rangle = i\partial_t |\psi\rangle}.

If you don’t dig much deeper into the essence of quantum mechanics, you could come away with the impression that this is “the” Schrödinger equation, rather than just “the non-relativistic Schrödinger equation for a single particle.” Which would be a shame.
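If you’d like to see the equation in action, here is a minimal numerical sketch (my own illustration; the grid size, harmonic potential, and time step are arbitrary choices, and we set ℏ and the mass to 1):

    # Evolve a wave packet under i d(psi)/dt = H psi, with
    # H = -(1/2) d^2/dx^2 + V(x) and hbar = mass = 1.
    import numpy as np
    from scipy.linalg import expm

    N, L = 400, 40.0
    x = np.linspace(-L / 2, L / 2, N)
    dx = x[1] - x[0]

    # Finite-difference Laplacian for the kinetic term.
    lap = (np.diag(np.ones(N - 1), -1) - 2 * np.eye(N)
           + np.diag(np.ones(N - 1), 1)) / dx**2
    V = 0.5 * x**2                                # harmonic potential
    H = -0.5 * lap + np.diag(V)

    psi = np.exp(-(x - 2.0)**2).astype(complex)   # displaced Gaussian
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

    U = expm(-1j * H * 0.05)                      # unitary step, dt = 0.05
    for _ in range(100):
        psi = U @ psi                             # Schrodinger evolution

    print("total probability:", np.sum(np.abs(psi)**2) * dx)  # stays ~1.0

Because H is Hermitian, the time-evolution operator U is unitary, so the total probability stays pinned at one — the numerical counterpart of the statement that Schrödinger evolution conserves information.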

What happens if we go beyond the world of non-relativistic quantum mechanics? Is the poor little Schrödinger equation still up to the task? Sure! All you need is the right set of wave functions and the right Hamiltonian. Every quantum system obeys a version of the Schrödinger equation; it’s completely general. In particular, there’s no problem talking about relativistic systems or field theories — just don’t use the non-relativistic version of the equation, obviously.

What about the Klein-Gordon and Dirac equations? These were, indeed, originally developed as “relativistic versions of the non-relativistic Schrödinger equation,” but that’s not what they ended up being useful for. (The story is told that Schrödinger himself invented the Klein-Gordon equation even before his non-relativistic version, but discarded it because it didn’t do the job for describing the hydrogen atom. As my old professor Sidney Coleman put it, “Schrödinger was no dummy. He knew about relativity.”)

The Klein-Gordon and Dirac equations are actually not quantum at all — they are classical field equations, just like Maxwell’s equations are for electromagnetism and Einstein’s equation is for the metric tensor of gravity. They aren’t usually taught that way, in part because (unlike E&M and gravity) there aren’t any macroscopic classical fields in Nature that obey those equations. The KG equation governs relativistic scalar fields like the Higgs boson, while the Dirac equation governs spinor fields (spin-1/2 fermions) like the electron and neutrinos and quarks. In Nature, spinor fields are a little subtle, because they are anticommuting Grassmann variables rather than ordinary functions. But make no mistake; the Dirac equation fits perfectly comfortably into the standard Newtonian physical paradigm.
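Written out explicitly (in units where ℏ = c = 1), the two equations are

\partial_t^2 \phi - \nabla^2 \phi + m^2 \phi = 0, \qquad \left(i\gamma^\mu \partial_\mu - m\right)\psi = 0,

the Klein-Gordon equation for a scalar field φ of mass m, and the Dirac equation for a spinor field ψ — deterministic classical field equations, with no probabilities or wave functions anywhere in sight.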

For fields like this, the role of “position” that for a single particle was played by the variable x is now played by an entire configuration of the field throughout space. For a scalar Klein-Gordon field, for example, that might be the values of the field φ(x) at every spatial location x. But otherwise the same story goes through as before. We construct a wave function by attaching a complex number to every possible value of the position variable; to emphasize that it’s a function of functions, we sometimes call it a “wave functional” and write it as a capital letter,

\Psi[\phi(x)].

The absolute-value-squared of this wave functional tells you the probability that you will observe the field to have the value φ(x) at each point x in space. The functional obeys — you guessed it — a version of the Schrödinger equation, with the Hamiltonian being that of a relativistic scalar field. There are likewise versions of the Schrödinger equation for the electromagnetic field, for Dirac fields, for the whole Core Theory, and what have you.
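For a free scalar field of mass m, to take the simplest example, the recipe looks like this (a minimal sketch, in units where ℏ = c = 1, with the field momentum represented as π(x) = −i δ/δφ(x)):

\hat{H} = \int {\mathrm d}^3x \left[-\frac{1}{2}\frac{\delta^2}{\delta\phi(x)^2} + \frac{1}{2}\left(\nabla\phi\right)^2 + \frac{1}{2}m^2\phi^2\right], \qquad \hat{H}\,\Psi[\phi] = i\partial_t \Psi[\phi].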

So the Schrödinger equation is not simply a relic of the early days of quantum mechanics, when we didn’t know how to deal with much more than non-relativistic particles orbiting atomic nuclei. It is the foundational equation of quantum dynamics, and applies to every quantum system there is. (There are other ways of formulating quantum mechanics, of course, like the Heisenberg picture and the path-integral approach, but they’re all basically equivalent to the Schrödinger picture.) You tell me what the quantum state of your system is, and what is its Hamiltonian, and I will plug into the Schrödinger equation to see how that state will evolve with time. And as far as we know, quantum mechanics is how the universe works. Which makes the Schrödinger equation arguably the most important equation in all of physics.

While we’re at it, people complained that the cosmological constant Λ didn’t appear in Einstein’s equation (6). Of course it does — it’s part of the energy-momentum tensor on the right-hand side. Again, Einstein didn’t necessarily think of it that way, but these days we know better. The whole thing that is great about physics is that we keep learning things; we don’t need to remain stuck with the first ideas that were handed down by the great minds of the past.


Space Emerging from Quantum Mechanics

The other day I was amused to find a quote from Einstein, in 1936, about how hard it would be to quantize gravity: “like an attempt to breathe in empty space.” Eight decades later, I think we can still agree that it’s hard.

So here is a possibility worth considering: rather than quantizing gravity, maybe we should try to gravitize quantum mechanics. Or, more accurately but less evocatively, “find gravity inside quantum mechanics.” Rather than starting with some essentially classical view of gravity and “quantizing” it, we might imagine starting with a quantum view of reality from the start, and find the ordinary three-dimensional space in which we live somehow emerging from quantum information. That’s the project that ChunJun (Charles) Cao, Spyridon (Spiros) Michalakis, and I take a few tentative steps toward in a new paper.

We human beings, even those who have been studying quantum mechanics for a long time, still think in terms of classical concepts. Positions, momenta, particles, fields, space itself. Quantum mechanics tells a different story. The quantum state of the universe is not a collection of things distributed through space, but something called a wave function. The wave function gives us a way of calculating the outcomes of measurements: whenever we measure an observable quantity like the position or momentum or spin of a particle, the wave function has a value for every possible outcome, and the probability of obtaining that outcome is given by the wave function squared. Indeed, that’s typically how we construct wave functions in practice. Start with some classical-sounding notion like “the position of a particle” or “the amplitude of a field,” and to each possible value we attach a complex number. That complex number, squared, gives us the probability of observing the system with that observed value.

Mathematically, wave functions are elements of a mathematical structure called Hilbert space. That means they are vectors — we can add quantum states together (the origin of superpositions in quantum mechanics) and calculate the angle (“dot product”) between them. (We’re skipping over some technicalities here, especially regarding complex numbers — see e.g. The Theoretical Minimum for more.) The word “space” in “Hilbert space” doesn’t mean the good old three-dimensional space we walk through every day, or even the four-dimensional spacetime of relativity. It’s just math-speak for “a collection of things,” in this case “possible quantum states of the universe.”
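If it helps to make “states are vectors” concrete, here is a tiny illustration (my own toy example, using a single spin with “up” and “down” as the two basis states):

    # Quantum states as vectors: a superposition is literally a sum,
    # and the Born rule squares the overlap ("dot product").
    import numpy as np

    up = np.array([1, 0], dtype=complex)
    down = np.array([0, 1], dtype=complex)

    psi = (up + down) / np.sqrt(2)   # equal superposition
    amplitude = np.vdot(up, psi)     # inner product <up|psi>
    print(abs(amplitude)**2)         # probability of "up": 0.5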

Hilbert space is quite an abstract thing, which can seem at times pretty removed from the tangible phenomena of our everyday lives. This leads some people to wonder whether we need to supplement ordinary quantum mechanics by additional new variables, or alternatively to imagine that wave functions reflect our knowledge of the world, rather than being representations of reality. For purposes of this post I’ll take the straightforward view that quantum mechanics says that the real world is best described by a wave function, an element of Hilbert space, evolving through time. (Of course time could be emergent too … something for another day.)

Here’s the thing: we can construct a Hilbert space by starting with a classical idea like “all possible positions of a particle” and attaching a complex number to each value, obtaining a wave function. All the conceivable wave functions of that form constitute the Hilbert space we’re interested in. But we don’t have to do it that way. As Einstein might have said, God doesn’t do it that way. Once we make wave functions by quantizing some classical system, we have states that live in Hilbert space. At this point it essentially doesn’t matter where we came from; now we’re in Hilbert space and we’ve left our classical starting point behind. Indeed, it’s well-known that very different classical theories lead to the same theory when we quantize them, and likewise some quantum theories don’t have classical predecessors at all.

The real world simply is quantum-mechanical from the start; it’s not a quantization of some classical system. The universe is described by an element of Hilbert space. All of our usual classical notions should be derived from that, not the other way around. Even space itself. We think of the space through which we move as one of the most basic and irreducible constituents of the real world, but it might be better thought of as an approximate notion that emerges at large distances and low energies. …


Father of the Big Bang

Georges Lemaître died fifty years ago today, on 20 June 1966. If anyone deserves the title “Father of the Big Bang,” it would be him. Both because he investigated and popularized the Big Bang model, and because he was an actual Father, in the sense of being a Roman Catholic priest. (Which presumably excludes him from being an actual small-f father, but okay.)

John Farrell, author of a biography of Lemaître, has put together a nice video commemoration: “The Greatest Scientist You’ve Never Heard Of.” I of course have heard of him, but I agree that Lemaître isn’t as famous as he deserves.



Did LIGO Detect Dark Matter?

It has often been said, including by me, that one of the most intriguing aspects of dark matter is that it provides us with the best current evidence for physics beyond the Core Theory (general relativity plus the Standard Model of particle physics). The basis of that claim is that we have good evidence from at least two fronts — Big Bang nucleosynthesis, and perturbations in the cosmic microwave background — that the total density of matter in the universe is much greater than the density of “ordinary” matter like we find in the Standard Model.

There is one important loophole to this idea. The Core Theory includes not only the Standard Model, but also gravity. Gravitons themselves can’t be the dark matter — they’re massless particles, moving at the speed of light, while we know from its effects on galaxies that dark matter is “cold” (moving slowly compared to light). But there are massive, slowly-moving objects that are made of “pure gravity,” namely black holes. Could black holes be the dark matter?

It depends. The constraints from nucleosynthesis, for example, imply that the dark matter was not made of ordinary particles by the time the universe was a minute old. So you can’t have a universe with just regular matter and then form black-hole-dark-matter in the conventional ways (like collapsing stars) at late times. What you can do is imagine that the black holes were there from almost the start — that they’re primordial. Having primordial black holes isn’t the most natural thing in the world, but there are ways to make it happen, such as having very strong density perturbations at relatively small length scales (as opposed to the very weak density perturbations we see at universe-sized scales).

Recently, of course, black holes were in the news, when LIGO detected gravitational waves from the inspiral of two black holes of approximately 30 solar masses each. This raises an interesting question, at least if you’re clever enough to put the pieces together: could the dark matter be made of primordial black holes of around 30 solar masses, and could two of them have come together to produce the LIGO signal? (So the question is not, “Are the black holes made of dark matter?”, it’s “Is the dark matter made of black holes?”)

LIGO black hole (artist's conception)

This idea has just been examined in a new paper by Bird et al.:

Did LIGO detect dark matter?

Simeon Bird, Ilias Cholis, Julian B. Muñoz, Yacine Ali-Haïmoud, Marc Kamionkowski, Ely D. Kovetz, Alvise Raccanelli, Adam G. Riess

We consider the possibility that the black-hole (BH) binary detected by LIGO may be a signature of dark matter. Interestingly enough, there remains a window for masses 10 M⊙ ≲ M_bh ≲ 100 M⊙ where primordial black holes (PBHs) may constitute the dark matter. If two BHs in a galactic halo pass sufficiently close, they can radiate enough energy in gravitational waves to become gravitationally bound. The bound BHs will then rapidly spiral inward due to emission of gravitational radiation and ultimately merge. Uncertainties in the rate for such events arise from our imprecise knowledge of the phase-space structure of galactic halos on the smallest scales. Still, reasonable estimates span a range that overlaps the 2–53 Gpc⁻³ yr⁻¹ rate estimated from GW150914, thus raising the possibility that LIGO has detected PBH dark matter. PBH mergers are likely to be distributed spatially more like dark matter than luminous matter and have no optical nor neutrino counterparts. They may be distinguished from mergers of BHs from more traditional astrophysical sources through the observed mass spectrum, their high ellipticities, or their stochastic gravitational wave background. Next generation experiments will be invaluable in performing these tests.

Given this intriguing idea, there are a couple of things you can do. First, of course, you’d like to check that it’s not ruled out by some other data. This turns out to be a very interesting question, as there are good limits on what masses are allowed for primordial-black-hole dark matter, from things like gravitational microlensing and the fact that sufficiently massive objects would disrupt the orbits of wide binary stars. The authors claim (and quote papers to the effect) that 30 solar masses fits snugly inside the range of values that are not ruled out by the data.

The other thing you’d like to do is figure out how many mergers like the one LIGO saw should be expected under such a scenario. Remember, LIGO seemed to get lucky by seeing such a big beautiful event right out of the gate — the thought was that most detectable signals would be from relatively puny neutron-star/neutron-star mergers, not ones from such gloriously massive black holes.

The expected rate of such mergers, under the assumption that the dark matter is made of such big black holes, isn’t easy to estimate, but the authors do their best and come up with a figure of about 5 mergers per cubic gigaparsec per year. You can then ask what the rate should be if LIGO didn’t actually get lucky, but simply observed something that is happening all the time; the answer, remarkably, is between about 2 and 50 per cubic gigaparsec per year. The numbers kind of make sense!

The scenario would be quite remarkable and significant, if it turns out to be right. Good news: we’ve found the dark matter! Bad news: hopes would dim considerably for finding new particles at energies accessible to particle accelerators. The Core Theory would turn out to be even more triumphant than we had believed.

Happily, there are ways to test the idea. If events like the ones LIGO saw came from dark-matter black holes, there would be no reason for them to be closely associated with stars. They would be distributed through space like dark matter is rather than like ordinary matter is, and we wouldn’t expect to see many visible electromagnetic counterpart events (as we might if the black holes were surrounded by gas and dust).

We shall see. It’s a popular truism, especially among gravitational-wave enthusiasts, that every time we look at the universe in a new kind of way we end up seeing something we hadn’t anticipated. If the LIGO black holes are the dark matter of the universe, that would be an understatement indeed.


Gravitational Waves at Last

Once upon a time, there lived a man who was fascinated by the phenomenon of gravity. In his mind he imagined experiments in rocket ships and elevators, eventually concluding that gravity isn’t a conventional “force” at all — it’s a manifestation of the curvature of spacetime. He threw himself into the study of differential geometry, the abstruse mathematics of arbitrarily curved manifolds. At the end of his investigations he had a new way of thinking about space and time, culminating in a marvelous equation that quantified how gravity responds to matter and energy in the universe.

Not being one to rest on his laurels, this man worked out a number of consequences of his new theory. One was that changes in gravity didn’t spread instantly throughout the universe; they traveled at the speed of light, in the form of gravitational waves. In later years he would change his mind about this prediction, only to later change it back. Eventually more and more scientists became convinced that this prediction was valid, and worth testing. They launched a spectacularly ambitious program to build a technological marvel of an observatory that would be sensitive to the faint traces left by a passing gravitational wave. Eventually, a century after the prediction was made — a press conference was called.

Chances are that everyone reading this blog post has heard that LIGO, the Laser Interferometer Gravitational-Wave Observatory, officially announced the first direct detection of gravitational waves. Two black holes, caught in a close orbit, gradually lost energy and spiraled toward each other as they emitted gravitational waves, which zipped through space at the speed of light before eventually being detected by our observatories here on Earth. Plenty of other places will give you details on this specific discovery, or tutorials on the nature of gravitational waves, including in user-friendly comic/video form.

Gravitational Waves Explained

What I want to do here is to make sure, in case there was any danger, that nobody loses sight of the extraordinary magnitude of what has been accomplished here. We’ve become a bit blasé about such things: physics makes a prediction, it comes true, yay. But we shouldn’t take it for granted; successes like this reveal something profound about the core nature of reality.

Some guy scribbles down some symbols in an esoteric mixture of Latin, Greek, and mathematical notation. Scribbles originating in his tiny, squishy human brain. (Here is what some of those scribbles look like, in my own incredibly sloppy handwriting.) Other people (notably Rainer Weiss, Ronald Drever, and Kip Thorne), on the basis of taking those scribbles extremely seriously, launch a plan to spend hundreds of millions of dollars over the course of decades. They concoct an audacious scheme to shoot laser beams at mirrors to look for modulated displacements of less than a millionth of a billionth of a centimeter — smaller than the diameter of an atomic nucleus. Meanwhile other people looked at the sky and tried to figure out what kind of signals they might be able to see, for example from the death spiral of black holes a billion light-years away. You know, black holes: universal regions of death where, again according to elaborate theoretical calculations, the curvature of spacetime has become so pronounced that anything entering can never possibly escape. And still other people built the lasers and the mirrors and the kilometers-long evacuated tubes and the interferometers and the electronics and the hydraulic actuators and so much more, all because they believed in those equations. And then they ran LIGO (and other related observatories) for several years, then took it apart and upgraded to Advanced LIGO, finally reaching a sensitivity where you would expect to see real gravitational waves if all that fancy theorizing was on the right track. …
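As a rough sanity check on that number (using round public figures — a peak strain of about h ∼ 10⁻²¹ and 4-kilometer arms — rather than anything quoted above):

\Delta L \sim h \times L \sim 10^{-21} \times 4\,\mathrm{km} \approx 4\times 10^{-16}\,\mathrm{cm},

hundreds of times smaller than the diameter of a proton, and indeed less than a millionth of a billionth of a centimeter.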
