Guest Post: Nathan Moynihan on Amplitudes for Astrophysicists

As someone who sits at Richard Feynman’s old desk, I take Feynman diagrams very seriously. They are a very convenient and powerful way of answering a certain kind of important physical question: given some set of particles coming together to interact, what is the probability that they will evolve into some specific other set of particles?

Unfortunately, actual calculations with Feynman diagrams can get unwieldy. The answers they provide are only approximate (though the approximations can be very good), and making the approximations just a little more accurate can be a tremendous amount of work. Enter the “amplitudes program,” a set of techniques for calculating these scattering probabilities more directly, without adding together a barrel full of Feynman diagrams. This isn’t my own area, but we’ve had guest posts from Lance Dixon and Jaroslav Trnka about this subject a while back.

But are these heady ideas just brain candy for quantum field theorists, or can they be applied more widely? A very interesting new paper just came out arguing that even astrophysicists — who usually deal with objects a lot bigger than a few colliding particles — can put amplitude technology to good use! And we’re very fortunate to have a guest post on the subject by one of the authors, Nathan Moynihan. Nathan is a grad student at the University of Cape Town, studying under Jeff Murugan and Amanda Weltman; he works on quantum field theory, gravity, and information. This is a great introduction to the application of cutting-edge mathematical physics to some (relatively) down-to-Earth phenomena.


In a recent paper, my collaborators and I (Daniel Burger, Raul Carballo-Rubio, Jeff Murugan and Amanda Weltman) make a case for applying modern methods in scattering amplitudes to astrophysical and cosmological scenarios. In this post, I would like to explain why I think this is interesting, and why you, if you’re an astrophysicist or cosmologist, might want to use the techniques we have outlined.

In a scattering experiment, objects of known momentum p are scattered from a localised target, with various possible outcomes predicted by some theory. In quantum mechanics, the probability of a particular outcome is given by the square of the scattering amplitude, a complex number that can be derived directly from the theory. Scattering amplitudes are the central quantities of interest in (perturbative) quantum field theory, and in the last 15 years or so, there has been something of a minor revolution surrounding the tools used to calculate these quantities (partially inspired by the introduction of twistors into string theory). In general, a particle theorist will make a perturbative expansion of the path integral of her favorite theory into Feynman diagrams, and mechanically use the Feynman rules of the theory to calculate each diagram’s contribution to the final amplitude. This approach works perfectly well, although the calculations are often tough, depending on the theory.

Astrophysicists, on the other hand, are often not much concerned with quantum field theories, preferring to work with the classical theory of general relativity. However, it turns out that you can do the same thing with general relativity: perturbatively write down the Feynman diagrams and calculate scattering amplitudes, at least to first or second order. One of the simplest scattering events you can imagine in pure gravity is that of two gravitons scattering off one another: you start with two gravitons, they interact, and you end up with two gravitons. You can indeed calculate this using Feynman diagrams, and the answer is strikingly simple, barely one line long.

The calculation, on the other hand, is utterly vicious. An unfortunate PhD student by the name of Walter G. Wesley was given this monstrous task in 1963, and he found that calculating the amplitude meant evaluating over 500 terms, which, as you can imagine, took the majority of his PhD to complete (no Mathematica!). The answer, in the end, is breathtakingly simple. In the centre of mass frame, the cross section for this process is:

\frac{d\sigma}{d\Omega} = 4G^2E^2 \frac{\cos^{12}\frac12\theta}{\sin^{4}\frac12\theta}

where G is Newton’s constant, E is the energy of the gravitons, and \theta is the scattering angle. The fact that such calculations are so cumbersome has meant that many in the astrophysics community tend to eschew them.
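If you want to plug in numbers yourself, here is a minimal Python sketch of the formula above (the function name and example values are just illustrative choices, not anything from our paper):

```python
import numpy as np

def graviton_cross_section(E, theta, G=1.0):
    """Differential cross section d(sigma)/d(Omega) for graviton-graviton
    scattering in the centre-of-mass frame. E is the graviton energy,
    theta the scattering angle, G whatever value of Newton's constant
    you choose to pass in."""
    half = theta / 2.0
    return 4.0 * G**2 * E**2 * np.cos(half)**12 / np.sin(half)**4

# The strong forward (small-angle) peaking is easy to see numerically:
for theta in [np.pi / 2, np.pi / 4, np.pi / 16]:
    print(theta, graviton_cross_section(E=1.0, theta=theta))
```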

However, the fact that the answer is so simple suggests that there may be an easier route to it than evaluating all 500 terms. Indeed, this is exactly the point we make in our paper: using the methods we have outlined, this calculation can be done in half a page and with little effort.

The technology that we introduce can be summed up as follows, and should be easily recognised as a set of common tools present in any physicist’s arsenal: a change of variables (for the more mathematically inclined reader, a change of representation), recursion relations, and complex analysis. Broadly speaking, the idea is to take the basic components of Feynman diagrams, namely momentum vectors and polarisation vectors (or tensors), and to represent them as spinors, objects that are inherently complex in nature (they are elements of a complex vector space). Once we do that, we can utilise the transformation rules of spinors to simplify calculations. This simplifies things a bit, but we can do better.

The next trick is to observe that Feynman diagrams all have one common feature: they contain singularities (places where the amplitude blows up) wherever an internal line can become physical, i.e. go on-shell. These singularities show up as poles, points around which the amplitude behaves like 1/z^n. Typically, an internal line contributes a factor like \frac{1}{p^2 + m^2}, which obviously blows up around p^2 = -m^2.

We normally make ourselves feel better by taking these internal lines to be virtual, meaning they don’t correspond to physical processes satisfying the energy-momentum condition p^2 = -m^2, and so the amplitude never actually blows up. In contrast to this, the modern formulation insists that internal lines do satisfy the condition, but that the pole sits at complex momentum. Now that we have complex poles, we can bring in the standard tools of complex analysis, which tell us that if you know the poles and residues, you know everything. For this to work, we are required to insist that at least some of the external momenta are complex, since the internal momenta depend on the external ones. Thankfully, we can do this in such a way that momentum conservation still holds and the squares of the momenta remain physical.
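As a toy illustration of the “poles and residues are everything” slogan (my own example, not anything specific to scattering amplitudes): a rational function that vanishes at infinity and has only simple poles is completely reconstructed from its pole locations and residues.

```python
import sympy as sp

z = sp.symbols('z')

# A toy "amplitude": a rational function with two simple poles
f = 1 / ((z - 1) * (z + 2))

poles = [1, -2]
residues = {p: sp.residue(f, z, p) for p in poles}

# Rebuild the function purely from its pole locations and residues
rebuilt = sum(residues[p] / (z - p) for p in poles)

print(residues)                  # {1: 1/3, -2: -1/3}
print(sp.simplify(rebuilt - f))  # 0 -- the poles and residues determine f
```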

The final ingredient we need is the set of BCFW recursion relations, an indispensable tool of the amplitudes community developed by Britto, Cachazo, Feng and Witten in 2005. Roughly speaking, these relations tell us that we can turn a complex, singular, on-shell amplitude involving any number of particles into a sum of products of 3-particle amplitudes glued together at the poles. Essentially, this means we can treat amplitudes like Lego bricks and stick them together in an intelligent way in order to construct a really-difficult-to-compute amplitude from some relatively simple ones.
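To give a concrete feel for the complex shift that underlies BCFW, here is a small numerical sketch (my own illustration, not code from our paper, and the specific momenta are arbitrary). It writes two massless momenta in terms of spinors, performs a BCFW-type shift of two external momenta by a complex parameter z, and checks that the shifted momenta stay on-shell while their sum is unchanged:

```python
import numpy as np

def to_matrix(p):
    """Map a 4-momentum (E, px, py, pz) to the 2x2 matrix p_mu sigma^mu;
    its determinant is E^2 - |p|^2, so massless momenta give rank-1 matrices."""
    E, px, py, pz = p
    return np.array([[E + pz, px - 1j * py],
                     [px + 1j * py, E - pz]])

def spinors(p):
    """Decompose a massless momentum matrix as an outer product lambda * lambda-tilde."""
    E, px, py, pz = p
    lam = np.array([np.sqrt(E + pz), (px + 1j * py) / np.sqrt(E + pz)])
    return lam, lam.conj()

# Two arbitrary massless momenta (E^2 = px^2 + py^2 + pz^2)
p1 = np.array([2.0, 1.0, 1.0, np.sqrt(2.0)])
p2 = np.array([3.0, -1.0, 2.0, 2.0])

l1, lt1 = spinors(p1)
l2, lt2 = spinors(p2)

z = 0.7 + 1.3j  # complex shift parameter

# Shift: lambda_1 -> lambda_1 + z*lambda_2, lambda-tilde_2 -> lambda-tilde_2 - z*lambda-tilde_1
p1_shift = np.outer(l1 + z * l2, lt1)
p2_shift = np.outer(l2, lt2 - z * lt1)

print(np.linalg.det(p1_shift))   # ~0: shifted momentum still on-shell
print(np.linalg.det(p2_shift))   # ~0: shifted momentum still on-shell
print(np.allclose(p1_shift + p2_shift, to_matrix(p1) + to_matrix(p2)))  # True: conservation holds
```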

In the paper, we show how this can be achieved using the example of scattering a graviton off a scalar. This interaction is interesting since it’s a representation of a gravitational wave being ‘bent’ by the gravitational field of a massive object like a star. We show, in a couple of pages, that the result calculated using these methods exactly corresponds with what you would calculate using general relativity.

If you’re still unconvinced by the utility of what I’ve outlined, then do look out for the second paper in the series, hopefully coming in the not too distant future. Whether you’re a convert or not, our hope is that these methods might be useful to the astrophysics and cosmology communities in the future, and I would welcome any comments from any members of those communities.


Is Inflationary Cosmology Science?

[tl;dr: Check out this article in Scientific American by Ijjas, Steinhardt, and Loeb suggesting that inflation isn’t science; this response by Guth, Kaiser, Linde, and Nomura that was co-signed by a bunch of people including me; and this counter-response by the original authors.]

Inflationary cosmology is the clever idea that the early universe underwent a brief period of accelerated expansion at an enormously high energy density, before that energy converted in a flash into ordinary hot matter and radiation. Inflation helps explain the observed large-scale smoothness of the universe, as well as the absence of unwanted relics such as magnetic monopoles. Most excitingly, quantum fluctuations during the inflationary period can be amplified to density perturbations that seed the formation of galaxies and large-scale structure in the universe.

That’s the good news. The bad news — or anyway, an additional piece of news, which you may choose to interpret as good or bad, depending on how you feel about these things — is that inflation doesn’t stop there. In a wide variety of models (not necessarily all), the inflationary energy converts into matter and radiation in some places, but in other places inflation just keeps going, and quantum fluctuations ensure that this process will keep happening forever — “eternal inflation.” (At some point I was a bit skeptical of the conventional story of eternal inflation via quantum fluctuations, but recently Kim Boddy and Jason Pollack and I verified to our satisfaction that you can do the decoherence calculations carefully and it all works out okay.) That’s the kind of thing, as we all know, that can lead to a multiverse.

Here’s where things become very tense and emotional. To some folks, the multiverse is great. It implies that there are very different physical conditions in different parts of the cosmos, which means that the anthropic principle kicks in, which might in turn imply a simple explanation for otherwise puzzling features of our observed universe, such as the value of the cosmological constant. To others, it’s a disaster. The existence of infinitely many regions of spacetime, each with potentially different local conditions, suggests that anything is possible, and therefore that inflation doesn’t make any predictions, and hence that it isn’t really science.

This latter perspective was defended in a recent article in Scientific American by three top-notch scientists, Anna Ijjas, Paul Steinhardt, and Avi Loeb. They argue that (1) the existence of a wide variety of individual inflationary models, and (2) the prediction of a multiverse in many of them, together imply that inflation “cannot be evaluated using the scientific method” and that its proponents are “promoting the idea of some kind of nonempirical science.”

Now, as early-universe cosmologists go, I am probably less inclined to think that inflation is part of the final answer than most are. Many of my colleagues are more or less convinced that it’s correct, and it’s just a matter of nailing down parameters. I am much more concerned about the fine-tuning difficulties that make inflation hard to get started in the first place — in particular, the hilariously low entropy that is required. Nevertheless, inflation has so many attractive features that I still give it a fairly high Bayesian credence for being correct, above 50% at least.

And inflation is indubitably science. It is investigated by scientists, used to make scientific predictions, and plays a potentially important explanatory role in our understanding of the early universe. The multiverse is potentially testable in its own right, but even if it weren’t that wouldn’t affect the status of inflation as a scientific theory. We judge theories by what predictions they make that we can test, not the ones they make that can’t be tested. It’s absolutely true that there are important unanswered questions facing the inflationary paradigm. But the right response in that situation is to either work on trying to answer them, or switch to working on something else (which is a perfectly respectable option). It’s not to claim that the questions are in principle unanswerable, and therefore the field has dropped out of the realm of science.

So I was willing to go along when Alan Guth asked if I would be a co-signer on this response letter to Scientific American. It was originally written by Guth, David Kaiser, Andrei Linde, and Yasunori Nomura, and was co-signed by an impressive group of physicists who are experts in the field. (A quick glance at the various titles will verify that I’m arguably the least distinguished member of the group, but I was happy to sneak in.) Ijjas, Steinhardt, and Loeb have also replied to the letter.

I won’t repeat here everything that’s in the letter; Alan and company have done a good job of reminding everyone just how scientific inflationary cosmology really is. Personally I don’t object to ISL writing their original article, even if I disagree with some of its substantive claims. Unlike some more delicate souls, I’m quite willing to see real scientific controversies play out in the public eye. (The public pays a goodly amount of the salaries and research budgets of the interested parties, after all.) When people say things you disagree with, the best response is to explain why you disagree. The multiverse is a tricky thing, but there’s no reason to expect that the usual course of scientific discussion and investigation won’t help us sort it all out before too long.


Marching for Science

The March for Science, happening tomorrow 22 April in Washington DC and in satellite events around the globe (including here in LA), is on the one hand an obviously good idea, and at the same time quite controversial. As in many controversies, both sides have their good points!

Marching for science is a good idea because 1) science is good, 2) science is in some ways threatened, and 3) marching to show support might in some way ameliorate that threat. Admittedly, as with all rallies of support, there is a heavily emotive factor at work — even if it had no effect whatsoever, many people are motivated to march in favor of ideas they believe in, just because it feels good to show support for things you feel strongly about. Nothing wrong with that at all.

But in a democracy, marching in favor of things is a little more meaningful than that. Even if it doesn’t directly cause politicians to change their minds (“Wait, people actually like science? I’ll have to revise my stance on a few key pieces of upcoming legislation…”), it gets ideas into the general conversation, which can lead to benefits down the road. Support for science is easy to take for granted — we live in a society where even the most anti-science forces try to portray their positions as being compatible with a scientific outlook of some sort, even if it takes doing a few evidentiary backflips to paper over the obvious inconsistencies. But just because the majority of people claim to be in favor of science, that doesn’t mean they will actually listen to what science has to say, much less vote to spend real money supporting it. Reminding them how much the general public is pro-science is an important task.

Charles Plateau, Reuters. Borrowed from The Atlantic.

Not everyone sees it that way. Scientists, bless their hearts, like to fret and argue about things, as I note in this short essay at The Atlantic. (That piece is basically what I’ll be saying when I give my talk tomorrow noonish at the LA march — so if you can’t make it, you can get the gist at the link. If you will be marching in LA — spoiler alert.) A favorite source of fretting and worrying is “getting science mixed up with politics.” We scientists, the idea goes, are seekers of eternal truths — or at least we should aim to be — and that lofty pursuit is incompatible with mucking around in tawdry political battles. Or more pragmatically, there is a worry that if science is seen to be overly political, then one political party will react by aligning itself explicitly against science, and that won’t be good for anyone. (Ironically, this latter argument is itself an attempt at being strategic and political, rather than a lofty pursuit of universal truths.)

I don’t agree, as should be clear. First, science is political, like it or not. That’s because science is done by human beings, and just about everything human beings do is political. Science isn’t partisan — it doesn’t function for the benefit of one party over the other. But if we look up “political” in the dictionary, we get something like “of or relating to the affairs of government,” or more broadly “related to decisions applying to all members of a group.” It’s hard to question that science is inextricably intertwined with this notion of politics. The output of science, which purports to be true knowledge of the world, is apolitical. But we obtain that output by actually doing science, which involves hard questions about what questions to ask, what research to fund, and what to do with the findings of that research. There is no way to pretend that politics has nothing to do with the actual practice of science. Great scientists, from Einstein on down, have historically been more than willing to become involved in political disputes when the stakes were sufficiently high.

It would certainly be bad if scientists tarnished their reputations as unbiased researchers by explicitly aligning “science” with any individual political party. And we can’t ignore the fact that various high-profile examples of denying scientific reality — Darwinian evolution comes to mind, or more recently the fact that human activity is dramatically affecting the Earth’s climate — are, in our current climate, largely associated with one political party more than the other one. But people of all political persuasions will occasionally find scientific truths to be a bit inconvenient. And more importantly, we can march in favor of science without having to point out that one party is working much harder than the other one to undermine it. That’s a separate kind of march.

It reminds me of this year’s Super Bowl ads. Though largely set in motion before the election ever occurred, several of the ads were labeled as “anti-Trump” after the fact. But they weren’t explicitly political; they were simply stating messages that would, in better days, have been considered anodyne and unobjectionable, like “people of all creeds and ethnicities should come together in harmony.” If you can’t help but perceive a message like that as a veiled attack on your political philosophy, maybe your political philosophy needs a bit of updating.

Likewise for science. This particular March was, without question, created in part because people were shocked into fear by the prospect of power being concentrated in the hands of a political party that seems to happily reject scientific findings that it deems inconvenient. But it grew into something bigger and better: a way to rally in support of science, full stop.

That’s something everyone should be able to get behind. It’s a mistake to think that the best way to support science is to stay out of politics. Politics is there, whether we like it or not. (And if we don’t like it, we should at least respect it — as unappetizing as the process of politics may be at times, it’s a necessary part of how we make decisions in a representative democracy, and should be honored as such.) The question isn’t “should scientists play with politics, or rise above it?” The question is “should we exert our political will in favor of science, or just let other people make the decisions and hope for the best?”

Democracy can be difficult, exhausting, and heartbreaking. It’s a messy, chaotic process, a far cry from the beautiful regularities of the natural world that science works to uncover. But participating in democracy as actively as we can is one of the most straightforward ways available to us to make the world a better place. And there aren’t many causes more worth rallying behind than that of science itself.

 

 


What Happened at the Big Bang?

I had the pleasure earlier this month of giving a plenary lecture at a meeting of the American Astronomical Society. Unfortunately, as far as I know they don’t record the lectures on video. So here, at least, are the slides I showed during my talk. I’ve been a little hesitant to put them up, since some subtleties are lost if you only have the slides and not the words that went with them, but perhaps it’s better than nothing.

My assigned topic was “What We Don’t Know About the Beginning of the Universe,” and I focused on the question of whether there could have been space and time even before the Big Bang. Short answer: sure there could have been, but we don’t actually know.

So what I did to fill my time was two things. First, I talked about different ways the universe could have existed before the Big Bang, classifying models into four possibilities (see Slide 7):

  1. Bouncing (the universe collapses to a Big Crunch, then re-expands with a Big Bang)
  2. Cyclic (a series of bounces and crunches, extending forever)
  3. Hibernating (a universe that sits quiescently for a long time, before the Bang begins)
  4. Reproducing (a background empty universe that spits off babies, each of which begins with a Bang)

I don’t claim this is a logically exhaustive set of possibilities, but most semi-popular models I know fit into one of the above categories. Given my own way of thinking about the problem, I emphasized that any decent cosmological model should try to explain why the early universe had a low entropy, and suggested that the Reproducing models did the best job.

My other goal was to talk about how thinking quantum-mechanically affects the problem. There are two questions to ask: is time emergent or fundamental, and is Hilbert space finite- or infinite-dimensional? If time is fundamental, the universe lasts forever; it doesn’t have a beginning. But if time is emergent, there may very well be a first moment. If Hilbert space is finite-dimensional, a first moment is necessary (there are only a finite number of moments of time that can possibly emerge), while if it’s infinite-dimensional the question remains open.

Despite all that we don’t know, I remain optimistic that we are actually making progress here. I’m pretty hopeful that within my lifetime we’ll have settled on a leading theory for what happened at the very beginning of the universe.


Memory-Driven Computing and The Machine

Back in November I received an unusual request: to take part in a conversation at the Discover expo in London, an event put on by Hewlett Packard Enterprise (HPE) to showcase their new technologies. The occasion was a project called simply The Machine — a step forward in what’s known as “memory-driven computing.” On the one hand, I am not in any sense an expert in high-performance computing technologies. On the other hand (full disclosure alert), they offered to pay me, which is always nice. What they were looking for was simply someone who could speak to the types of scientific research that would be aided by this kind of approach to large-scale computation. After looking into it, I thought that I could sensibly talk about some research projects that were relevant to the program, and the technology itself seemed very interesting, so I agreed to stop by London on the way from Los Angeles to a conference in Rome in honor of Georges Lemaître (who, coincidentally, was a pioneer in scientific computing).

Everyone knows about Moore’s Law: computer processing power doubles about every eighteen months. It’s that progress that has enabled the massive technological changes witnessed over the past few decades, from supercomputers to handheld devices. The problem is, exponential growth can’t go on forever, and indeed Moore’s Law seems to be ending. It’s a pretty fundamental problem — you can only make components so small, since atoms themselves have a fixed size. The best current technologies sport numbers like 30 atoms per gate and 6 atoms per insulator; we can’t squeeze things much smaller than that.

So how do we push computers to faster processing, in the face of such fundamental limits? HPE’s idea with The Machine (okay, the name could have been more descriptive) is memory-driven computing — change the focus from the processors themselves to the stored data they are manipulating. As I understand it (remember, not an expert), in practice this involves three aspects:

  1. Use “non-volatile” memory — a way to store data without actively using power.
  2. Wherever possible, use photonics rather than ordinary electronics. Photons move faster than electrons, and cost less energy to get moving.
  3. Switch the fundamental architecture, so that input/output and individual processors access the memory as directly as possible.

Here’s a promotional video, made by people who actually are experts.

The project is still in the development stage; you can’t buy The Machine at your local Best Buy. But the developers have imagined a number of ways that the memory-driven approach might change how we do large-scale computational tasks. Back in the early days of electronic computers, processing speed was so slow that it was simplest to store large tables of special functions — sines, cosines, logarithms, etc. — and just look them up as needed. With the huge capacities and swift access of memory-driven computing, that kind of “pre-computation” strategy becomes effective for a wide variety of complex problems, from facial recognition to planning airline routes.
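As a deliberately trivial illustration of the pre-computation idea (my own toy example, nothing to do with HPE’s actual software), you can trade abundant memory for compute by tabulating an expensive function once and answering later queries by lookup and interpolation:

```python
import numpy as np

# Stand-in for a function too expensive to evaluate on every query.
def expensive(x):
    return np.sin(x) * np.exp(-0.1 * x)

# Pre-compute once into a large table held in memory...
grid = np.linspace(0.0, 100.0, 1_000_000)
table = expensive(grid)

# ...then answer queries by interpolation rather than recomputation.
def lookup(x):
    return np.interp(x, grid, table)

print(expensive(42.123), lookup(42.123))  # nearly identical values
```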

It’s not hard to imagine how physicists would find this useful, so that’s what I briefly talked about in London. Two aspects in particular are pretty obvious. One is searching for anomalies in data, especially in real time. We’re in a data-intensive era in modern science, where very often we have so much data that we can only find signals we know how to look for. Memory-driven computing could offer the prospect of greatly enhanced searches for generic “anomalies” — patterns in the data that nobody had anticipated. You can imagine how that might be useful for something like LIGO’s search for gravitational waves, or the real-time sweeps of the night sky we anticipate from the Large Synoptic Survey Telescope.

The other obvious application, of course, is on the theory side, to large-scale simulations. In my own bailiwick of cosmology, we’re doing better and better at including realistic physics (star formation, supernovae) in simulations of galaxy and large-scale structure formation. But there’s a long way to go, and improved simulations are crucial if we want to understand the interplay of dark matter and ordinary baryonic physics in accounting for the dynamics of galaxies. So if a dramatic new technology comes along that allows us to manipulate and access huge amounts of data (e.g. the current state of a cosmological simulation) rapidly, that would be extremely useful.

Like I said, HPE compensated me for my involvement. But I wouldn’t have gone along if I didn’t think the technology was intriguing. We take improvements in our computers for granted; keeping up with expectations is going to require some clever thinking on the part of engineers and computer scientists.


Quantum Is Calling

Hollywood celebrities are, in many important ways, different from the rest of us. But we are united by one crucial similarity: we are all fascinated by quantum mechanics.

This was demonstrated to great effect last year, when Paul Rudd and some of his friends starred with Stephen Hawking in the video Anyone Can Quantum, a very funny vignette put together by Spiros Michalakis and others at Caltech’s Institute for Quantum Information and Matter (and directed by Alex Winter, who was Bill in Bill & Ted’s Excellent Adventure). You might remember Spiros from our adventures emerging space from quantum mechanics, but when he’s not working as a mathematical physicist he’s brought incredible energy to Caltech’s outreach programs.

Now the team is back again with a new video, this one titled Quantum is Calling. This one stars the amazing Zoe Saldana, with an appearance by John Cho and the voices of Simon Pegg and Keanu Reeves, and of course Stephen Hawking once again. (One thing about Caltech: we do not mess around with our celebrity cameos.)

Stephen Hawking + Zoe Saldana: Quantum is Calling ft. Keanu Reeves, Simon Pegg, John Cho, Paul Rudd

If you’re interested in the behind-the-scenes story, Zoe and Spiros and others give it to you here:

Behind the Scenes: Stephen Hawking + Zoe Saldana: Quantum is Calling

If on the other hand you want all the quantum-mechanical jokes explained, that’s where I come in:

The Science Behind Quantum Is Calling

Jokes should never be explained, of course. But quantum mechanics always should be, so this time we made an exception.


Thanksgiving

This year we give thanks for a feature of the physical world that many people grumble about rather than celebrating, but is undeniably central to how Nature works at a deep level: the speed of light. (We’ve previously given thanks for the Standard Model Lagrangian, Hubble’s Law, the Spin-Statistics Theorem, conservation of momentum, effective field theory, the error bar, gauge symmetry, Landauer’s Principle, the Fourier Transform and Riemannian Geometry.)

The speed of light in vacuum, traditionally denoted by c, is 299,792,458 meters per second. It’s exactly that, not just approximately; it turns out to be easier to measure intervals of time to very high precision than it is to measure distances in space, so we measure the length of a second experimentally, then define the meter to be “the distance that light travels in 1/299,792,458 of a second.” Personally I prefer to characterize c as “one light-year per year”; that’s equally exact, and it’s easier to remember all the significant figures that way.

There are a few great things about the speed of light. One is that it’s a fixed, universal constant, as measured by inertial (unaccelerating) observers, in vacuum (empty space). Of course light can slow down if it propagates through a medium, but that’s hardly surprising. The other great thing is that it’s an upper limit; physical particles, as far as we know in the real world, always move at speeds less than or equal to c.

That first fact, the universal constancy of c, is the startling feature that set Einstein on the road to figuring out relativity. It’s a crazy claim at first glance: if two people are moving relative to each other (maybe because one is in a moving car and one is standing on the sidewalk) and they measure the speed of a third object (like a plane passing overhead) relative to themselves, of course they will get different answers. But not with light. I can be zipping past you at 99% of c, directly at an oncoming light beam, and both you and I will measure it to be moving at the same speed. That’s only sensible if something is wonky about our conventional pre-relativity notions of space and time, which is what Einstein eventually figured out. It was his former teacher Minkowski who realized the real implication is that we should think of the world as a single four-dimensional spacetime; Einstein initially scoffed at the idea as typically useless mathematical puffery, but of course it turned out to be central in his eventual development of general relativity (which explains gravity by allowing spacetime to be curved).

Because the speed of light is universal, when we draw pictures of spacetime we can indicate the possible paths light can take through any point, in a way that will be agreed upon by all observers. Orienting time vertically and space horizontally, the result is the set of light cones — the pictorial way of indicating the universal speed-of-light limit on our motion through the universe. Moving slower than light means moving “upward through your light cones,” and that’s what all massive objects are constrained to do. (When you’ve really internalized the lessons of relativity, deep in your bones, you understand that spacetime diagrams should only indicate light cones, not subjective human constructs like “space” and “time.”)

Light Cones

The fact that the speed of light is such an insuperable barrier to the speed of travel is something that really bugs people. On everyday-life scales, c is incredibly fast; but once we start contemplating astrophysical distances, suddenly it seems maddeningly slow. It takes just over a second for light to travel from the Earth to the Moon; eight minutes to get to the Sun; over five hours to get to Pluto; four years to get to the nearest star; twenty-six thousand years to get to the galactic center; and two and a half million years to get to the Andromeda galaxy. That’s why almost all good space-opera science fiction takes the easy way out and imagines faster-than-light travel. (In the real world, we won’t ever travel faster than light, but that won’t stop us from reaching the stars; it’s much more feasible to imagine extending human lifespans by many orders of magnitude, or making cryogenic storage feasible. Not easy — but not against the laws of physics, either.)
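If you want to check those travel times yourself, the arithmetic is just distance divided by c; the distances below are rough round numbers I’ve plugged in for illustration:

```python
c = 299_792_458.0            # metres per second
year = 365.25 * 24 * 3600    # seconds in a year
ly = c * year                # one light-year in metres

distances_m = {
    "Moon":             3.8e8,
    "Sun":              1.5e11,
    "Pluto (average)":  5.9e12,
    "Proxima Centauri": 4.2 * ly,
    "Galactic centre":  2.6e4 * ly,
    "Andromeda":        2.5e6 * ly,
}

for name, d in distances_m.items():
    t = d / c
    print(f"{name:18s} {t:12.3g} s = {t / year:10.3g} yr")
```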

It’s understandable, therefore, that we sometimes get excited by breathless news reports about faster-than-light signals, though they always eventually disappear. But I think we should do better than just be grumpy about the finite speed of light. Like it or not, it’s an absolutely crucial part of the nature of reality. It didn’t have to be, in the sense of all possible worlds; the Newtonian universe is a relatively sensible set of laws of physics, in which there is no speed-of-light barrier at all.

That would be a very different world indeed. …


Gifford Lectures on Natural Theology

In October I had the honor of visiting the University of Glasgow to give the Gifford Lectures on Natural Theology. These are a series of lectures that date back to 1888, and happen at different Scottish universities: Glasgow, Aberdeen, Edinburgh, and St. Andrews. “Natural theology” is traditionally the discipline that attempts to learn about the nature of God via our experience of the world (in contrast to by revelation or contemplation). The Gifford Lectures have always interpreted this remit rather broadly; many theologians have given the talks, but also people like Niels Bohr, Arthur Eddington, Hannah Arendt, Noam Chomsky, Carl Sagan, Richard Dawkins, and Steven Pinker.

Sometimes the speakers turn their lectures into short published books; in my case, I had just written a book that fit well into the topic, so I spoke about the ideas in The Big Picture. Unfortunately the first of the five lectures was not recorded, but the subsequent four were. Here are those recordings, along with a copy of my slides for the first talk. It’s not a huge loss, as many of the ideas in the first lecture can be found in previous talks I’ve given on the arrow of time; it’s about the evolution of our universe, how that leads to an arrow of time, and how that helps explain things like memory and cause/effect relations. The second lecture was on the Core Theory and why we think it will remain accurate in the face of new discoveries. The third lecture was on emergence and how different ways of talking about the world fit together, including discussions of effective field theory and why the universe itself exists. Lecture four dealt with the evolution of complexity, the origin of life, and the nature of consciousness. (I might have had to skip some details during that one.) And the final lecture was on what it all means, why we are here, and how to live in a universe that doesn’t come with any instructions. Enjoy!

(Looking at my YouTube channel makes me realize that I’ve been in a lot of videos.)

Lecture One: Cosmos, Time, Memory (slides only, no video)
Slideshare

Lecture Two: The Stuff of Which We Are Made

The Gifford Lectures in Natural Theology, 2016, lecture 2

Lecture Three: Layers of Reality

The Gifford Lectures in Natural Theology, 2016, lecture 3

Lecture Four: Simplicity, Complexity, Thought

The Gifford Lectures in Natural Theology, 2016, lecture 4

Lecture Five: Our Place in the Universe

The Gifford Lectures in Natural Theology, 2016, lecture 5


Talking About Dark Matter and Dark Energy

Trying to keep these occasional Facebook Live videos going. (I’ve looked briefly into other venues such as Periscope, but FB is really easy and anyone can view without logging in if they like.)

So here is one I did this morning, about why cosmologists think dark matter and dark energy are things that really exist. I talk in particular about a recent paper by Nielsen, Guffanti, and Sarkar that questioned the evidence for universal acceleration (I think the evidence is still very good), and one by Erik Verlinde suggesting that emergent gravity can modify Einstein’s general relativity on large scales to explain away dark matter (I think it’s an intriguing idea, but am skeptical it can ever fit the data from the cosmic microwave background).

Feel free to propose topics for future conversations, or make suggestions about the format.


Leonard Cohen

What a goddamn week. Leonard Cohen, one of the greatest singer-songwriters in living memory, has died at age 82. His music meant a lot to me personally, as it did to countless others. Usually sad, sometimes melodramatic, always thoughtful and poetic and provocative. I never met him in person (though I did go to a couple of concerts), but he lived not too far away from me in LA, and somehow felt as if I knew him. We’ll miss you, Leonard.

Let’s hope he was right about this democracy thing.

Leonard Cohen - Democracy
