Science

The Trouble With Physics

I was asked to review Lee Smolin’s The Trouble With Physics by New Scientist. The review has now appeared, although with a couple of drawbacks. Most obviously, only subscribers can read it. But more importantly, they have some antiquated print-journal notion of a “word limit,” which in my case was about 1000 words. When I started writing the review, I kind of went over the limit. By a factor of about three. This is why the Intelligent Designer invented blogs; here’s the review I would have written, if the Man hadn’t tried to stifle my creativity. (Other reviews at Backreaction and Not Even Wrong; see also Bee’s interview with Lee, or his appearance with Brian Greene on Science Friday.)

——————————————————————

It was only after re-reading and considerable head-scratching that I figured out why Lee Smolin’s The Trouble With Physics is such a frustrating book: it’s really two books, with intertwined but ultimately independent arguments. One argument is big and abstract and likely to be ignored by most of the book’s audience; the other is narrow and specific and part of a wide-ranging and heated discussion carried out between scientists, in the popular press, and on the internet. The abstract argument — about academic culture and the need to nurture speculative ideas — is, in my opinion, important and largely correct, while the specific one — about the best way to set about quantizing gravity — is overstated and undersupported. It’s too bad that vociferous debate over the latter seems likely to suck all the oxygen away from the former.

Fundamental physics (for want of a better term) is concerned with the ultimate microscopic laws of nature. In our current understanding, these laws describe gravity according to Einstein’s general theory of relativity, and everything else according to the Standard Model of particle physics. The good news is that, with just a few exceptions (dark matter and dark energy, neutrino masses), these two theories are consistent with all the experimental data we have. The bad news is that they are mutually inconsistent. The Standard Model is a quantum field theory, a direct outgrowth of the quantum-mechanical revolution of the 1920’s. General relativity (GR), meanwhile, remains a classical theory, very much in the tradition of Newtonian mechanics. The program of “quantum gravity” is to invent a quantum-mechanical theory that reduces to GR in the classical limit.

This is obviously a crucially important problem, but one that has traditionally been a sidelight in the world of theoretical physics. For one thing, coming up with good models of quantum gravity has turned out to be extremely difficult; for another, the weakness of gravity implies that quantum effects don’t become important in any realistic experiment. There is a severe conceptual divide between GR and the Standard Model, but as a practical matter there is no pressing empirical question that one or the other of them cannot answer.

Quantum gravity moved to the forefront of research in the 1980’s, for two very different reasons. One was the success of the Standard Model itself; its triumph was so complete that there weren’t any nagging experimental puzzles left to resolve (a frustrating situation that persisted for twenty years). The other was the appearance of a promising new approach: string theory, the simple idea of replacing elementary point particles by one-dimensional loops and segments of “string.” (You’re not supposed to ask what the strings are made of; they’re made of string stuff, and there are no deeper layers.) In fact the theory had been around since the late 1960’s, originally investigated as an approach to the strong interactions. But problems arose, including the unavoidable appearance of string states that had all the characteristics one would expect of gravitons, particles of gravity. Whereas most attempts to quantize gravity ran quickly aground, here was a theory that insisted on the existence of gravity even when we didn’t ask for it! In 1984, Michael Green and John Schwarz demonstrated that certain potentially worrisome anomalies in the theory could be successfully canceled, and string mania swept the particle-theory community.

In the heady days of the “first superstring revolution,” triumphalism was everywhere. String theory wasn’t just a way to quantize gravity, it was a Theory of Everything, from which we could potentially derive all of particle physics. Sadly, that hasn’t worked out, or at least not yet. (String theorists remain quite confident that the theory is compatible with everything we know about particle physics, but optimism that it will uniquely predict the low-energy world is at a low ebb.) But on the theoretical front, there have been impressive advances, including a “second revolution” in the mid-nineties. Among the most astonishing results was the discovery by Juan Maldacena of gauge/gravity duality, according to which quantum gravity in a particular background is precisely equivalent to a completely distinct field theory, without gravity, in a different number of dimensions! String theory and quantum field theory, it turns out, aren’t really separate disciplines; there is a web of dualities that reveal various different-looking string theories as simply different manifestations of the same underlying theory, and some of those manifestations are ordinary field theories. Results such as this convince string theorists that they are on the right track, even in the absence of experimental tests. (Although all but the most fervent will readily agree that experimental tests are always the ultimate arbiter.)

But it’s been a long time since the last revolution, and contact with data seems no closer. Indeed, the hope that string theory would uniquely predict a model of particle physics appears increasingly utopian; these days, it seems more likely that there is a huge number (10^500 or more) of phases in which string theory can find itself, each featuring different particles and forces. This embarrassment of riches has opened a possible explanation for apparent fine-tunings in nature — perhaps every phase of string theory exists somewhere, and we only find ourselves in those that are hospitable to life. But this particular prediction is not experimentally testable; if there is to be contact with data, it seems that it won’t be through predicting the details of particle physics.

It is perhaps not surprising that there has been a backlash against string theory. Lee Smolin’s The Trouble With Physics is a paradigmatic example, along with Peter Woit’s new book Not Even Wrong. Both books were foreshadowed by Roger Penrose’s massive work, The Road to Reality. But string theorists have not been silent; several years ago, Brian Greene’s The Elegant Universe was a surprise bestseller, and more recently Leonard Susskind’s The Cosmic Landscape has focused on the opportunities presented by a theory with 10^500 different phases. Alex Vilenkin’s Many Worlds in One also discusses the multiverse, and Lisa Randall’s Warped Passages enthuses over the possibility of extra dimensions of spacetime — while Lawrence Krauss’s Hiding in the Mirror strikes a skeptical note. Perhaps surprisingly, these books have not been published by vanity presses — there is apparently a huge market for popular discussions of the problems and prospects of string theory and related subjects.

The Cell is Like Tron!

At least that’s the impression I got from this quite spectacular animation. (Via Shtetl-Optimized.)

The Inner Life of the Cell Animation

Admittedly, real biological molecules would be quite a bit more densely packed, and there would be a lot less smooth motion and a lot more jaggedy fluctuation. Either that, or there’s an awful lot of spooky-action-at-a-distance going on inside cells.
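
For the quantitatively inclined: the “jaggedy fluctuation” is just diffusion. Here’s a minimal sketch (mine, with made-up step counts, nothing to do with the animation itself) of why thermal motion gets nowhere fast: a random walker covers an enormous path length for very little net displacement.

```python
import random

# Toy model: a molecule in a cell doesn't glide smoothly to its target; it
# takes a huge number of randomized steps. Track a 1D random walk and
# compare net displacement to total path length. (Illustrative numbers only.)

random.seed(1)
steps = 100_000
position = 0
for _ in range(steps):
    position += random.choice((-1, 1))

print(f"total path length: {steps} steps")
print(f"net displacement:  {abs(position)} steps")
# The typical net displacement scales as sqrt(steps), about 316 here, so
# nearly all of the motion is jitter rather than progress: the opposite of
# the smooth, purposeful trajectories in the animation.
```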

After you’re done being amazed, you could do worse than check out the lecture notes for Scott Aaronson’s class, Quantum Computing Since Democritus. And when you’re done with those, here are some videos of Feynman lecturing on QED in New Zealand. (Thanks to Ed Copeland for reminding me of these.)

Quantum Mechanics Made Easy?

I was recently asked to recommend a good popular-level book on quantum mechanics. I don’t think I know of any, at least not first hand. We had a whole thread on the Greatest Popular Science Book, filled with good suggestions, but none specifically about quantum mechanics. A quick glance through amazon.com reveals plenty of books on particle physics, or even specific notions like quantum computing, but not one book that I could recommend in good conscience to someone who just wants to know what quantum mechanics is all about. It is the greatest intellectual achievement of the twentieth century, after all.

There are some books that come close, and others that might very well be perfect, but I’m not familiar with them. In the latter category we have The Quantum World by Ken Ford, and David Lindley’s Where Does the Weirdness Go? These might be great; I just haven’t read them. I’m sure that the Mr. Tompkins books by George Gamow are good, since I love One, Two, Three… Infinity (and Gamow was a genius), but I haven’t actually read them. Feynman’s QED is another classic, but focuses more on quantum electrodynamics (duh) than on QM more generally. David Deutsch’s The Fabric of Reality is a fantastic book, especially if you are curious about the Many-Worlds Interpretation of quantum mechanics; but I’m not sure if it’s the best first introduction (I haven’t looked at it closely in years). And David Albert’s Quantum Mechanics and Experience is great for a careful philosophical account of what QM is all about, but again maybe not the best first exposure.

Any suggestions? Not for a good book that is related to quantum mechanics or perhaps mentions it in a chapter or two, but for something whose major goal is to provide a clear account of QM. Surely there is something?

Dark Matter Exists

The great accomplishment of late-twentieth-century cosmology was putting together a complete inventory of the universe. We can tell a story that fits all the known data, in which ordinary matter (every particle ever detected in any experiment) constitutes only about 5% of the energy of the universe, with 25% being dark matter and 70% being dark energy. The challenge for early-twenty-first-century cosmology will actually be to understand the nature of these mysterious dark components. A beautiful new result illuminating (if you will) the dark matter in galaxy cluster 1E 0657-56 is an important step in this direction. (Here’s the press release, and an article in the Chandra Chronicles.)

A prerequisite to understanding the dark sector is to make sure we are on the right track. Can we be sure that we haven’t been fooled into believing in dark matter and dark energy? After all, we only infer their existence from detecting their gravitational fields; stronger-than-expected gravity in galaxies and clusters leads us to posit dark matter, while the acceleration of the universe (and the overall geometry of space) leads us to posit dark energy. Could it perhaps be that gravity is modified on the enormous distance scales characteristic of these phenomena? Einstein’s general theory of relativity does a great job of accounting for the behavior of gravity in the Solar System and astrophysical systems like the binary pulsar, but might it be breaking down over larger distances?

A departure from general relativity on very large scales isn’t what one would expect on general principles. In most physical theories that we know and love, modifications are expected to arise on small scales (higher energies), while larger scales should behave themselves. But we have to keep an open mind — in principle, it’s absolutely possible that gravity could be modified, and it’s worth taking seriously.

Furthermore, it would be really cool. Personally, I would prefer to explain cosmological dynamics using modified gravity instead of dark matter and dark energy, just because it would tell us something qualitatively different about how physics works. (And Vera Rubin agrees.) We would all love to out-Einstein Einstein by coming up with a better theory of gravity. But our job isn’t to express preferences, it’s to suggest hypotheses and then go out and test them.

The problem is, how do you test an idea as vague as “modifying general relativity”? You can imagine testing specific proposals for how gravity should be modified, like Milgrom’s MOND, but in more general terms we might worry that any observations could be explained by some modification of gravity.
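
For concreteness, here’s a sketch (my own illustration, not anything from the papers discussed here; the galaxy mass is invented, and the “simple” interpolating function is just one common choice) of the kind of prediction a specific proposal makes. MOND boosts the effective acceleration once the Newtonian value drops below a critical scale a0, which flattens rotation curves without any dark matter:

```python
import math

G  = 6.674e-11   # m^3 kg^-1 s^-2
A0 = 1.2e-10     # Milgrom's acceleration scale in m/s^2 (commonly quoted fit)
M  = 1e41        # kg, roughly 5e10 solar masses of visible matter (invented)

def newtonian_accel(r):
    return G * M / r**2

def mond_accel(r):
    # "Simple" interpolating function mu(x) = x/(1+x), so that
    # mu(a/A0) * a = a_N  implies  a^2 - a_N*a - a_N*A0 = 0.
    a_n = newtonian_accel(r)
    return 0.5 * (a_n + math.sqrt(a_n**2 + 4.0 * a_n * A0))

for r_kpc in (5, 10, 20, 40, 80):
    r = r_kpc * 3.086e19                                  # kpc to meters
    v_newton = math.sqrt(newtonian_accel(r) * r) / 1e3    # circular speed, km/s
    v_mond   = math.sqrt(mond_accel(r) * r) / 1e3
    print(f"r = {r_kpc:2d} kpc: Newtonian {v_newton:5.1f} km/s, MOND {v_mond:5.1f} km/s")
```

The Newtonian curve falls off as 1/sqrt(r), while the MOND curve settles toward the flat value (G*M*A0)**0.25, about 170 km/s here. That is exactly the sort of specific, falsifiable statement that makes MOND testable, unlike “gravity is modified” in the abstract.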

But it’s not quite so bad — there are reasonable features that any respectable modification of general relativity ought to have. Specifically, we expect that the gravitational force should point in the direction of its source, not off at some bizarrely skewed angle. So if we imagine doing away with dark matter, we can safely predict that gravity will always point in the direction of the ordinary matter. That’s interesting but not immediately helpful, since it’s natural to expect that the ordinary matter and dark matter cluster in the same locations; even if there is dark matter, it’s no surprise to find the gravitational field pointing toward the visible matter as well.

What we really want is to take a big cluster of galaxies and simply sweep away all of the ordinary matter. Dark matter, by hypothesis, doesn’t interact directly with ordinary matter, so we can imagine moving the ordinary stuff while leaving the dark stuff behind. If we then check back and determine where the gravity is, it should be pointing either at the left-behind dark matter (if there is such a thing) or still at the ordinary matter (if not).

Happily, the universe has done exactly this for us. In the Bullet Cluster, more formally known as 1E 0657-56, we actually find two clusters of galaxies that have (relatively) recently passed right through each other. It turns out that the large majority (about 90%) of ordinary matter in a cluster is not in the galaxies themselves, but in hot X-ray emitting intergalactic gas. As the two clusters passed through each other, the hot gas in each smacked into the gas in the other, while the individual galaxies and the dark matter (presumed to be collisionless) passed right through. Here’s an mpeg animation of what we think happened. As hinted at in last week’s NASA media advisory, astrophysicists led by Doug Clowe (Arizona) and Maxim Markevitch (CfA) have now compared images of the gas obtained by the Chandra X-ray telescope to “maps” of the gravitational field deduced from weak lensing observations. Their short paper is astro-ph/0608407, and a longer one on lensing is astro-ph/0608408. And the answer is: there’s definitely dark matter there!

Despite the super-secret embargoed nature of this result, enough hints were given in the media advisory and elsewhere on the web that certain scientific sleuths were basically able to figure out what was going on. But they didn’t have access to the best part: pictures!

Here is 1E 0657-56 in all its glory, or at least some of its glory — this is the optical image, in which you can see the actual galaxies.

1e0657 optical

With some imagination it shouldn’t be too hard to make out the two separate concentrations of galaxies, a larger one on the left and a smaller one on the right. These are pretty clearly clusters, but you can take redshifts to verify that they’re all really at the same location in the universe, not just a random superposition of galaxies at very different distances. Even better, you can map out the gravitational fields of the clusters, using weak gravitational lensing. That is, you take very precise pictures of galaxies that are in the background of these clusters. The images of the background galaxies are gently distorted by the gravitational field of the clusters. The distortion is so gentle that you could never tell it was there if you only looked at one galaxy; but with more than a hundred galaxies, you begin to notice that the images are systematically aligned, characteristic of passing through a coherent gravitational lens. From these distortions it’s possible to work backwards and ask “what kind of mass concentration could have created such a gravitational lens?” Here’s the answer, superimposed on the optical image.

1e0657 optical and dark matter

It’s about what you would expect: the dark matter is concentrated in the same regions as the galaxies themselves. But we can separately make X-ray observations to map out the hot gas, which constitutes most of the ordinary (baryonic) matter in the cluster. Here’s what we see.

1e0657 optical and x-ray

This is why it’s the “Bullet” cluster — the bullet-shaped region on the right is a shock front. These two clusters have passed right through each other, creating an incredibly energetic collision between the gas in each of them. The fact that the “bullet” is so sharply defined indicates that the clusters are moving essentially perpendicular to our line of sight.

This collision has done exactly what we want — it’s swept out the ordinary matter from the clusters, displacing it with respect to the dark matter (and the galaxies, which act as collisionless particles for these purposes). You can see it directly by superimposing the weak-lensing map and the Chandra X-ray image.

1e0657 optical, dark matter, and x-ray

Clicking on each of these images leads to a higher-resolution version. If you have a tabbed browser, the real fun is opening each of the images in a separate tab and clicking back and forth. The gravitational field, as reconstructed from lensing observations, is not pointing toward the ordinary matter. That’s exactly what you’d expect if you believed in dark matter, but makes no sense from the perspective of modified gravity. If these pictures don’t convince you that dark matter exists, I don’t know what will.
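
As an aside for anyone curious about the statistics behind those lensing maps, here’s a toy version (entirely mine, with invented numbers; the real analysis in the Clowe et al. papers is far more sophisticated) of why a single background galaxy tells you nothing while a hundred begin to reveal the shear:

```python
import random
import statistics

# Each background galaxy has a random intrinsic ellipticity; the lens adds
# a small coherent shear on top. Averaging over many galaxies beats down
# the random part and leaves the coherent signal.

random.seed(42)
true_shear = 0.02          # coherent distortion from the lens (invented)
intrinsic_scatter = 0.3    # typical random galaxy ellipticity (invented)

def observed_ellipticity():
    return random.gauss(0.0, intrinsic_scatter) + true_shear

for n_galaxies in (1, 10, 100, 10_000):
    sample = [observed_ellipticity() for _ in range(n_galaxies)]
    print(f"{n_galaxies:6d} galaxies -> estimated shear {statistics.mean(sample):+.4f}")
```

The estimate converges to the true value like 0.3/sqrt(N); with about a hundred galaxies the noise becomes comparable to the signal, matching the “more than a hundred galaxies” rule of thumb above.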

So is this the long-anticipated (in certain circles) end of MOND? What need do we have for modified gravity if there clearly is dark matter? Truth is, it was already very difficult to explain the dynamics of clusters (as opposed to individual galaxies) in terms of MOND and ordinary matter alone. Even MOND partisans generally agree that some form of dark matter is necessary to account for cluster dynamics and cosmology. It’s certainly conceivable that we are faced with both modified gravity and dark matter. If the dark matter is sufficiently “warm,” it might fail to accumulate in galaxies, but still be important for clusters. Needless to say, the picture begins to become somewhat baroque and unattractive. But the point is not whether or not MOND remains interesting; after all, someone else might come up with a different theory of modified gravity tomorrow that can fit both galaxies and clusters. The point is that, independently of any specific model of modified gravity, we now know that there definitely is dark matter out there. It will always be possible that some sort of modification of gravity lurks just below our threshold of detection; but now we have established beyond reasonable doubt that we need a substantial amount of dark matter to explain cosmological dynamics.

That’s huge news for physicists. Theorists now know what to think about (particle-physics models of dark matter) and experimentalists know what to look for (direct and indirect detection of dark matter particles, production of dark matter candidates at accelerators). The dark matter isn’t just ordinary matter that’s not shining; limits from primordial nucleosynthesis and the cosmic microwave background imply a strict upper bound on the amount of ordinary matter, and it’s not nearly enough to account for all the matter we need. This new result doesn’t tell us which particle the new dark matter is, but it confirms that there is such a particle. We’re definitely making progress on the crucial project of understanding the inventory of the universe.

What about dark energy? The characteristic features of dark energy are that it is smooth (spread evenly throughout space) and persistent (evolving slowly, if at all, with time). In particular, dark energy doesn’t accumulate in dense regions such as galaxies or clusters — it’s the same everywhere. So these observations don’t tell us anything directly about the nature of the 70% of the universe that is purportedly in this ultra-exotic component. In fact we know rather less about dark energy than we do about dark matter, so we have more freedom to speculate. It’s still quite possible that the acceleration of the universe can be explained by modifying gravity rather than invoking a mysterious new dark component. One of our next tasks, then, is obviously to come up with experiments that might distinguish between dark energy and modified gravity — and some of us are doing our best. Stay tuned, as darkness gradually encroaches upon our universe, and Einstein continues to have the last laugh.

The Cash Value of Astronomical Ideas

Can’t … stop … blogging … must … resist …

So you may have heard that Pluto is still a planet, and indeed we have a few new ones as well! Phil Plait, Rob Knop, Clifford, and Steinn have all weighed in. Hey, it’s on the front page of the New York Times, above the fold!

The problem is that Pluto is kind of small, and far away. Those aren’t problems by themselves, but there are lots of similar-sized objects that are also out beyond Neptune, in the Kuiper Belt. As we discover more and more, should they all count as planets? And if not, shouldn’t Pluto be demoted? Nobody wants to lose Pluto among the family of planets — rumors to that effect were previously enough to inspire classrooms around the globe to write pleading letters to the astronomical powers that be, begging them not to discard the plucky ninth planet. But it’s really hard to come up with some objective criteria of planet-ness that would include the canonical nine but not open the doors to all sorts of unwanted interlopers. Now the Planet Definition Committee of the International Astronomical Union has proposed a new definition:

1) A planet is a celestial body that (a) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape, and (b) is in orbit around a star, and is neither a star nor a satellite of a planet.

It turns out that, by this proposed definition, there are twelve planets — not just the usual nine, but also Ceres (the largest asteroid, between Mars and Jupiter), and also Charon (Pluto’s moon, but far enough away that apparently it doesn’t count as a “satellite,” but as a double-planet), and 2003 UB313, a faraway rock that is even bigger than Pluto. I’m not sure why anyone thinks this is an improvement.
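
Just to underline how much this is a definition rather than a discovery, the committee’s proposal practically begs to be written as a little program. Here’s a tongue-in-cheek sketch (my own paraphrase; the mass threshold standing in for “rounded by self-gravity” is completely invented):

```python
# Hypothetical mass above which self-gravity wins and the body is round.
ROUNDING_MASS_KG = 5e20

def is_planet(mass_kg, orbits_a_star, is_a_star, is_a_satellite):
    round_enough = mass_kg >= ROUNDING_MASS_KG      # clause (a), crudely
    return (round_enough and orbits_a_star          # clause (b)
            and not is_a_star and not is_a_satellite)

print(is_planet(9.4e20, True, False, False))   # Ceres: True
print(is_planet(1.3e22, True, False, False))   # Pluto: True
print(is_planet(1.6e21, True, False, False))   # Charon, once declared "not a satellite": True
```

Change the threshold, or the reading of “satellite,” and the planet count changes; nature doesn’t care either way.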

The thing is, it doesn’t matter. Most everyone who writes about it admits that it doesn’t matter, before launching into a passionate defense of what they think the real definition should be. But, seriously: it really doesn’t matter. We are not doing science, or learning anything about the universe here. We’re just making up a definition, and we’re doing so solely for our own convenience. There is no pre-existing Platonic nature of “planet-ness” located out there in the world, which we are trying to discover so that we may bring our nomenclature in line with it. We are not discovering anything new about nature, nor even bringing any reality into existence by our choices.

The Pragmatists figured this out long ago: we get to choose the definition to be whatever we want, and the best criterion by which to make that choice is whatever is most useful and convenient for our purposes. But people have some deep-seated desire to believe that our words should be brought in line with objective criteria, even if it’s dramatically inconvenient. (These are the same people, presumably, who think that spelling reform would be really cool.) But as Rob says, there is no physically reasonable definition that would let us stick with nine planets. That’s okay! We have every right to define “planet” to mean “Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto, plus whatever other large rocky bodies we find orbiting other stars.” Or whatever else we want. It’s completely up to us.

So we really shouldn’t have to tear up a century’s worth of textbooks and illustrations, and start trying to figure out when the shape of some particular body is governed by hydrostatic equilibrium, just to pat ourselves on the back for obeying “physically reasonable” definitions. But it looks like that’s what the IAU Planet Definition Committee wants us to do. Of course that’s what you’d expect a Planet Definition Committee to suggest; otherwise why would we need a Planet Definition Committee?

Now if you’ll excuse me, I have change-of-address forms to fill out.

[And don’t even contemplate accusing me of hypocrisy for dragging myself away from a much-deserved blog-vacation to carry on about something that I claim doesn’t matter. The definition of “planet” doesn’t matter; but appreciating that the choice of definition is a matter of our own convenience, not a matter of necessarily conforming to some objective criteria about the physical world, matters a lot.]

Update: Chris Clarke for the opposition.

Rapped on the Head by Creationists

I think this is a new category for my CV — “articles subjected to close reading by creationists.” (That, and pioneering the concept of the least bloggable unit.) Here is the first entry: my humble little essay for Nature entitled “Is Our Universe Natural?” has been lovingly dissected at “Creation-Evolution Headlines.” In which they claim that my paper “arms the intelligent design movement in the current fight over the definition of science.” Okay, now those are fighting words.

The page is part of a larger site called Creation Safaris. I would tell you more about the site if only their web pages weren’t so confusing that I can’t follow what’s going on. It seems to be one of those places that takes you on a rafting trip to better enjoy God’s creation; blurbs for the trips include stuff like this:

ABOUT YOUR GUIDE: Tom Vail is a veteran rafting guide with 24 years experience. In recent years he has led the big trips for ICR and Answers in Genesis. Formerly an evolutionist, he used to tell his rafting parties the usual millions-of-years stories about the canyon, but when he became a Christian, he began to look at the world differently: this led to the publication last year of his book Grand Canyon: A Different View that caused a firestorm among evolutionists when the National Park Service began selling it in its bookstores; fortunately, visitors to the park are voting for it with their dollars!

Hey look, they’re the ones saying that becoming a Christian persuaded poor Tom to give up on rational scientific thought, not me. I’m not sure what belief system is responsible for the run-on sentences.

The most impressive thing about the site is that they have the massive cojones necessary to favorably invoke Carl Sagan, of all people. In particular, Sagan’s notion of a baloney detector, which apparently is just a “good grasp of logical reasoning and investigative procedure.” Which they use, ahem, to counter the illogical rhetorical sneakiness of the pro-evolution crowd. Jiminy crickets.

Anyway. Somehow they found my Nature article, which was about how physicists are taking advantage of seemingly-unnatural features of our universe in their efforts to develop a deeper understanding of how nature works. The title, “Is Our Universe Natural?”, is of course a joke, which folks of a certain cast of mind apparently don’t get. Of course our universe is natural, more or less by definition. The point is that it doesn’t always look natural from the perspective of our current state of understanding. That’s no surprise, because our current understanding is necessarily incomplete. In fact, it’s good news for scientists when they can point to something that doesn’t seem “natural” about the universe; although it’s not as useful as a direct experimental result that can’t be explained by current theories, it can still provide some useful guidance while we develop better theories. Trying to understand the rarity of certain particle-physics decays inspired people to invent the concept of “strangeness,” and ultimately the Eight-Fold Way and the quark model. Trying to understand the flatness and smoothness of our universe on large scales inspired Alan Guth to invent inflation, which provided a dynamical mechanism to generate density perturbations purely as a bonus.

Right now, trying to understand hierarchies in particle physics and the arrow of time has led people to seriously contemplate a vast multiverse beyond what we can see, perhaps populated by regions occupying different phases in the string theory landscape. Wildly speculative, of course, but that’s to be expected of, you know, speculations. Ideas are always speculative when they are new and untested; either they will ultimately be tested one way or another, or they’ll fade into obscurity, as I made perfectly clear.

The ultimate goal is undoubtedly ambitious: to construct a theory that has definite consequences for the structure of the multiverse, such that this structure provides an explanation for how the observed features of our local domain can arise naturally, and that the same theory makes predictions that can be directly tested through laboratory experiments and astrophysical observations. To claim success in this programme, we will need to extend our theoretical understanding of cosmology and quantum gravity considerably, both to make testable predictions and to verify that some sort of multiverse picture really is a necessary consequence of these ideas. Only further investigation will allow us to tell whether such a programme represents laudable aspiration or misguided hubris.

(Did you know that Nature has an editorial policy forbidding the use of the words “scenario” and “paradigm”? Neither did I, but it’s true. “Paradigm” I can see, but banning “scenario” seems unnecessarily stuffy to me.) (Also, it’s a British publication, thus the spelling of “programme.” There is no “me” in “program”!)

It’s not hard to guess what a creationist would make of this: scientists are stuck, don’t understand what’s going on, grasping at straws, refusing to admit that God did it, blah blah blah. And that’s more or less what we get:

For the most part, Carroll wrote thoughtfully and perceptively, except for one thing: he totally ignored theism as an option. He is like Robert Jastrow’s mountain climber, scrambling over the last highest peak, only to find a band of theologians who have been sitting there for centuries. Yet he doesn’t even bother to say Howdy. Instead, he walks over to them and tries to describe them with equations, and puzzles about how they emerged by a natural process. As he does this, one of the theologians taps on his head and says, “Hello? Anybody home?” yet Carroll continues, now trying to naturalize the pain he feels in his skull.

Gee, I wonder why anyone would waste their time trying to explain the universe in natural terms? Maybe because it’s been a fantastically successful strategy for the last five hundred years? Somewhat more successful, one might suggest, than anything “creation science” has managed to come up with.

Sorry, got a little sarcastic there. Don’t mean to offend anyone, even while they are tapping on my empty skull. What we have here is a textbook case of the God of the gaps argument, notwithstanding the thorough squelching that David Hume gave the idea many years ago. It’s really kind of sad. All they can do is point to something that scientists don’t yet understand and say “Aha! You’ll never understand that! Only God will provide the answer!” And when the scientists finally do understand it and move on to some other puzzle, they’ll say “Okay, this one you’ll really never understand! You need God, admit it!”

Think about it for a second — a century ago concepts like “the state of the universe one second after the Big Bang” or “the ratio of the vacuum energy to the Planck scale” hadn’t even been invented yet. Today, not only have they been invented, but they’ve been measured, and we’ve moved on to trying to understand them in terms of deeper principles. I’d say it’s a bit too early to declare defeat in our attempts to fit these ideas into a naturalistic framework.

Boltzmann’s Anthropic Brain

A recent post of Jen-Luc’s reminded me of Huw Price and his work on temporal asymmetry. The problem of the arrow of time — why is the past different from the future, or equivalently, why was the entropy in the early universe so much smaller than it could have been? — has attracted physicists’ attention (although not as much as it might have) ever since Boltzmann explained the statistical origin of entropy over a hundred years ago. It’s a deceptively easy problem to state, and correspondingly difficult to address, largely because the difference between the past and the future is so deeply ingrained in our understanding of the world that it’s too easy to beg the question by somehow assuming temporal asymmetry in one’s purported explanation thereof. Price, an Australian philosopher of science, has made a specialty of uncovering the hidden assumptions in the work of numerous cosmologists on the problem. Boltzmann himself managed to avoid such pitfalls, proposing an origin for the arrow of time that did not secretly assume any sort of temporal asymmetry. He did, however, invoke the anthropic principle — probably one of the earliest examples of the use of anthropic reasoning to help explain a purportedly-finely-tuned feature of our observable universe. But Boltzmann’s anthropic explanation for the arrow of time does not, as it turns out, actually work, and it provides an interesting cautionary tale for modern physicists who are tempted to travel down that same road.

The Second Law of Thermodynamics — the entropy of a closed system will not spontaneously decrease — was understood well before Boltzmann. But it was a phenomenological statement about the behavior of gasses, lacking a deeper interpretation in terms of the microscopic behavior of matter. That’s what Boltzmann provided. Pre-Boltzmann, entropy was thought of as a measure of the uselessness of arrangements of energy. If all of the gas in a certain box happens to be located in one half of the box, we can extract useful work from it by letting it leak into the other half — that’s low entropy. If the gas is already spread uniformly throughout the box, anything we could do to it would cost us energy — that’s high entropy. The Second Law tells us that the universe is winding down to a state of maximum uselessness.
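
That “usefulness” can be made quantitative. A back-of-the-envelope sketch (my own illustrative numbers) of the work you could extract from the low-entropy state by letting the gas expand isothermally from half the box into all of it:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
N = 2.5e22         # roughly the molecules in a liter of air (illustrative)
T = 300.0          # room temperature, K

# Reversible isothermal expansion: W = N k T ln(V_final / V_initial).
work = N * k * T * math.log(2)
print(f"extractable work ~ {work:.0f} J")   # about 72 J
```

Once the gas fills the box uniformly, those 72 joules are gone for good; that is the Second Law in its pre-Boltzmann, engineering-minded form.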

Ludwig Boltzmann

Boltzmann suggested that the entropy was really counting the number of ways we could arrange the components of a system (atoms or whatever) so that it really didn’t matter. That is, the number of different microscopic states that were macroscopically indistinguishable. (If you’re worried that “indistinguishable” is in the eye of the beholder, you have every right to be, but that’s a separate puzzle.) There are far fewer ways for the molecules of air in a box to arrange themselves exclusively on one side than there are for the molecules to spread out throughout the entire volume; the entropy is therefore much higher in the latter case than the former. With this understanding, Boltzmann was able to “derive” the Second Law in a statistical sense — roughly, there are simply far more ways to be high-entropy than to be low-entropy, so it’s no surprise that low-entropy states will spontaneously evolve into high-entropy ones, but not vice-versa. (Promoting this sensible statement into a rigorous result is a lot harder than it looks, and debates about Boltzmann’s H-theorem continue merrily to this day.)
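
Boltzmann’s counting is easy to play with directly. A minimal sketch (in units where Boltzmann’s constant is 1, with a toy gas of 100 molecules):

```python
import math

N = 100  # toy gas: which half of the box is each molecule in?

def entropy(n_left):
    # S = log W, where W counts the microstates with n_left molecules
    # in the left half of the box.
    return math.log(math.comb(N, n_left))

print(f"all on one side: S = {entropy(0):.1f}")    # exactly one way, so S = 0
print(f"60/40 split:     S = {entropy(60):.1f}")
print(f"50/50 split:     S = {entropy(50):.1f}")   # the maximum
```

Vastly more microstates look “spread out” than “all on one side,” which is the entire content of the statistical Second Law.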

Boltzmann’s understanding led to both a deep puzzle and an unexpected consequence. The microscopic definition explained why entropy would tend to increase, but didn’t offer any insight into why it was so low in the first place. Suddenly, a thermodynamics problem became a puzzle for cosmology: why did the early universe have such a low entropy? Over and over, physicists have proposed one or another argument for why a low-entropy initial condition is somehow “natural” at early times. Of course, the definition of “early” is “low-entropy”! That is, given a change in entropy from one end of time to the other, we would always define the direction of lower entropy to be the past, and higher entropy to be the future. (Another fascinating but separate issue — the process of “remembering” involves establishing correlations that inevitably increase the entropy, so the direction of time that we remember [and therefore label “the past”] is always the lower-entropy direction.) The real puzzle is why there is such a change — why are conditions at one end of time so dramatically different from those at the other? If we do not assume temporal asymmetry a priori, it is impossible in principle to answer this question by suggesting why a certain initial condition is “natural” — without temporal asymmetry, the same condition would be equally natural at late times. Nevertheless, very smart people make this mistake over and over, leading Price to emphasize what he calls the Double Standard Principle: any purportedly natural initial condition for the universe would be equally natural as a final condition.

The unexpected consequence of Boltzmann’s microscopic definition of entropy is that the Second Law is not iron-clad — it only holds statistically. In a box filled with uniformly-distributed air molecules, random motions will occasionally (although very rarely) bring them all to one side of the box. It is a traditional undergraduate physics problem to calculate how often this is likely to happen in a typical classroom-sized box; reassuringly, the air is likely to be nice and uniform for a period much much much longer than the age of the observable universe.
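
Here is that traditional estimate, in rough numbers of my own choosing:

```python
import math

N = 1e27   # very roughly the number of air molecules in a classroom

# Each molecule is independently in the left half with probability 1/2,
# so all of them at once has probability (1/2)^N.
log10_prob = N * math.log10(0.5)
print(f"log10(probability) ~ {log10_prob:.2g}")   # about -3e26
```

Even checking the room every picosecond for the age of the universe gives only about 4e29 chances, hopelessly short of the 10^(3e26) or so tries you would need. The air stays put.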

Faced with the deep puzzle of why the early universe had a low entropy, Boltzmann hit on the bright idea of taking advantage of the statistical nature of the Second Law. Instead of a box of gas, think of the whole universe. Imagine that it is in thermal equilibrium, the state in which the entropy is as large as possible. By construction the entropy can’t possibly increase, but it will tend to fluctuate, every so often diminishing just a bit and then returning to its maximum. We can even calculate how likely the fluctuations are; larger downward fluctuations of the entropy are much (exponentially) less likely than smaller ones. But eventually every kind of fluctuation will happen.

Entropy Fluctuations

You can see where this is going: maybe our universe is in the midst of a fluctuation away from its typical state of equilibrium. The low entropy of the early universe, in other words, might just be a statistical accident, the kind of thing that happens every now and then. On the diagram, we are imagining that we live either at point A or point B, in the midst of the entropy evolving between a small value and its maximum. It’s worth emphasizing that A and B are utterly indistinguishable. People living in A would call the direction to the left on the diagram “the past,” since that’s the region of lower entropy; people living at B, meanwhile, would call the direction to the right “the past.”
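
The exponential part of that statement is worth seeing in numbers. A one-line sketch (again with Boltzmann’s constant set to 1, my own framing):

```python
import math

# In equilibrium, the fraction of time spent at entropy S_max - dS
# scales like exp(-dS): every extra unit of dip costs a factor of e.
for dS in (1, 10, 100):
    print(f"entropy dip of {dS:3d}: relative probability ~ {math.exp(-dS):.3g}")
```

Larger dips really are fantastically rarer, just as claimed above, and that is exactly the kind of consideration the rest of the argument turns on.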

Foundational Questioners Announced

Back in March we had a guest post by Anthony Aguirre about the Foundational Questions Institute, a new effort to support “research at the foundations of physics and cosmology, particularly new frontiers and innovative ideas integral to a deep understanding of reality, but unlikely to be supported by conventional funding sources.” Today the FQXi (that’s the official acronym, sorry) announced their first round of grant awardees.

It’s a very good list, and Anthony and Max Tegmark are to be congratulated for funding some very interesting science. If anything, I could see almost all of these proposals receiving money from the NSF or DOE or NASA, although perhaps it might have been more difficult. We see well-known string theorists (for example Steve Giddings, Brian Greene, Eva Silverstein), early-universe cosmologists (Richard Easther, Alex Vilenkin), late-universe astrophysicists (Fred Adams, Avi Loeb), general relativists (Justin Khoury, Ken Olum), loop-quantizers (Olaf Dreyer, Fotini Markopoulou), respectable physicists taking the opportunity to be a little more speculative than usual (Louis Crane, Janna Levin), and even some experimentalists working on the foundations of quantum mechanics (Markus Aspelmeyer, former guest-poster Paul Kwiat), as well as a bunch of others.

Nothing in there about finding God by doing theoretical physics. Which might have been a non-trivial worry, since currently the sole source of funding for FQXi is the John Templeton Foundation. The Templeton Foundation was set up “to encourage a fresh appreciation of the critical importance — for all peoples and cultures — of the moral and spiritual dimensions of life,” and in particular has worked to promote a reconciliation between science and religion. I am not a big fan of such reconciliation, in the sense that I think it is completely and woefully misguided. This has led me in the past to decline to participate in Templeton-sponsored activities, and the close connection between Templeton and FQXi was enough to dissuade me from applying for money from them myself.

Gareth Cook has written a nice article in the Boston Globe about FQXi and the grant program, in which I am quoted as saying that bringing science and religion together is a bad thing. Absolutely accurate, but the space constraints of a newspaper article make it hard to convey much subtlety. The FQXi folks have stated definitively that their own mission is certainly not to reconcile science and religion; in case of doubt, they’ve put it succinctly in their FAQ:

I’ve read that a goal of JTF [John Templeton Foundation] is to “reconcile science and religion.” Is this part of the FQXi mission?

No.

Indeed, they’ve been quite clear that the Templeton Foundation has just given them a pot of money and been otherwise hands-off, which is good news. And that they would like to get additional sources of funding. My own current worry — which is extremely mild, to be clear — is that the publicity generated by FQXi’s activities will be good for Templeton’s larger purpose, to which I am opposed.

But at the moment the focus should be on recognizing Max and Anthony and their friends for steering a substantial amount of money to some very interesting research. If they succeed at getting additional sources of funding, I may even apply myself one day!

Update: More quotes in this piece from Inside Higher Ed.

N Bodies

This will be familiar to anyone who reads John Baez’s This Week’s Finds in Mathematical Physics, but I can’t help but show these lovely exact solutions to the gravitational N-body problem. This one is beautiful in its simplicity: twenty-one point masses moving around in a figure-8.

Figure-8 Orbit

The N-body problem is one of the most famous, and easily stated, problems in mathematical physics: find exact solutions for point masses moving under their mutual Newtonian gravitational forces (i.e. the inverse-square law). For N=2 the complete set of solutions is straightforward and has been known for a long time — each body moves in a conic section (circle, ellipse, parabola or hyperbola) around the center of mass. In fact, Kepler found the solution even before Newton came up with the problem!

But let N=3 and chaos breaks loose, quite literally. For a long time people recognized that the motion of three gravitating bodies would be a difficult problem, but there were hopes to at least characterize the kinds of solutions that might exist (even if we couldn’t write down the solutions explicitly). It became a celebrated goal for mathematical physicists, and the very amusing story behind how it was resolved is related in Peter Galison’s book Einstein’s Clocks and Poincare’s Maps. In 1885, a mathematical competition was announced in honor of the 60th birthday of King Oscar II of Sweden, and the three-body problem was one of the questions. (Feel free to muse about the likelihood of the birthday of any contemporary world leader being celebrated by mathematical competitions.) Henri Poincare was a favorite to win the prize, and he submitted an essay that demonstrated the stability of planetary motions in the three-body problem (actually the “restricted” problem, in which one test body moves in the gravitational field generated by two others). In other words, without knowing the exact solutions, we could at least be confident that the orbits wouldn’t go crazy; more technically, solutions starting with very similar initial conditions would give very similar orbits. Poincare’s work was hailed as brilliant, and he was awarded the prize.

But as his essay was being prepared for publication in Acta Mathematica, a couple of tiny problems were pointed out by Edvard Phragmen, a Swedish mathematician who was an assistant editor at the journal. …
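
(While we wait for the rest of the story: even without closed-form solutions, choreographies like the figure-8 above are easy to verify numerically. Here is a sketch of the simplest, three-body version, using the widely quoted Chenciner-Montgomery initial conditions; the digits are copied from the literature, so treat them as approximate.)

```python
import math

# Three unit masses, G = 1, on the figure-eight orbit. A leapfrog
# (kick-drift-kick) integrator tracks them for one published period.

pos = [[0.97000436, -0.24308753], [-0.97000436, 0.24308753], [0.0, 0.0]]
vel = [[0.46620368, 0.43236573], [0.46620368, 0.43236573],
       [-0.93240737, -0.86473146]]

def accelerations(pos):
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(3):
        for j in range(3):
            if i != j:
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                r3 = (dx * dx + dy * dy) ** 1.5
                acc[i][0] += dx / r3   # inverse-square attraction
                acc[i][1] += dy / r3
    return acc

dt, period = 1e-4, 6.32591398   # quoted period of the orbit
acc = accelerations(pos)
for _ in range(int(period / dt)):
    for i in range(3):
        vel[i][0] += 0.5 * dt * acc[i][0]   # half kick
        vel[i][1] += 0.5 * dt * acc[i][1]
        pos[i][0] += dt * vel[i][0]         # drift
        pos[i][1] += dt * vel[i][1]
    acc = accelerations(pos)
    for i in range(3):
        vel[i][0] += 0.5 * dt * acc[i][0]   # half kick
        vel[i][1] += 0.5 * dt * acc[i][1]

print(pos[0])   # after one period, each body should be back near its start
```

If the initial conditions are right, the first body comes back to roughly (0.97, -0.24); perturb them slightly and watch the chaos that gave Poincare so much trouble.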
