Science

Nobel Day

Today was the Nobel Prize ceremony, including of course the Physics Prize to François Englert and Peter Higgs. Congratulations once again to them!

Englert and Higgs

(Parenthetically, it’s sad that the Nobel is used to puff up national pride. In Belgium, Englert gets into the headline but not Higgs; in the UK, it’s the other way around.)

I of course had nothing to do with the physics behind this year’s Nobel, but I did write a book about it, so I’ve had a chance to do a little commentating here and there. I wrote a short piece for The Independent that tries to place the contribution in historical context. I’ve had a bit of practice by now in talking about this topic to general audiences, so consider this the distillation of the best I can do! (It’s a UK newspaper, so naturally only Higgs is mentioned in the headline.) I love how, at the bottom of the story, you can register your level of agreement, from “strongly agree” to “strongly disagree.” And if you prefer your words spoken aloud, here I am on the BBC talking about the book.

Meanwhile here at Caltech, we welcomed back favorite son Murray Gell-Mann (who now spends his days at the Santa Fe Institute) for the 50th anniversary of quarks. One of the speakers, Geoffrey West, pointed out that no Nobel was awarded for the idea of quarks. Gell-Mann did of course win the Nobel in 1969, but that was “for his contributions and discoveries concerning the classification of elementary particles and their interactions”. In other words, strangeness, SU(3) flavor symmetry, the Eightfold Way, and the prediction of the Omega-minus particle. (Other things Gell-Mann helped invent: kaon mixing, the renormalization group, the sigma model for pions, color and quantum chromodynamics, the seesaw mechanism for neutrino masses, and the decoherent histories approach to quantum mechanics. He is kind of a big deal.)

But, while we now understand SU(3) flavor symmetry in terms of the quark model (the up/down/strange quarks are all light compared to the QCD scale, giving rise to an approximate symmetry), the idea of quarks itself wasn’t honored by the 1969 prize. If it had been, the prize certainly would have been shared by George Zweig, who proposed the idea independently. So there’s still time to give out the Nobel for the quark model! Perhaps Gell-Mann and Zweig could share it with Harald Fritzsch, who collaborated with Gell-Mann on the invention of color and QCD. (The fact that QCD is asymptotically free won a prize for Gross, Politzer and Wilczek in 2004, but there hasn’t been a prize for the invention of the theory itself.) Modern particle physics has such a rich and fascinating history that we should honor it as accurately as possible.

Nobel Day Read More »

19 Comments

Thanksgiving

This year we give thanks for an idea that establishes a direct connection between the concepts of “energy” and “information”: Landauer’s Principle. (We’ve previously given thanks for the Standard Model Lagrangian, Hubble’s Law, the Spin-Statistics Theorem, conservation of momentum, effective field theory, the error bar, and gauge symmetry.)

Landauer’s Principle states that irreversible loss of information — whether it’s erasing a notebook or wiping a computer disk — is necessarily accompanied by an increase in entropy. Charles Bennett puts it in relatively precise terms:

Any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase in non-information bearing degrees of freedom of the information processing apparatus or its environment.

The principle captures the broad idea that “information is physical.” More specifically, it establishes a relationship between logically irreversible processes and the generation of heat. If you want to erase a single bit of information in a system at temperature T, says Landauer, you will generate an amount of heat equal to at least

(\ln 2)k T,

where k is Boltzmann’s constant.
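To get a sense of scale, here is a minimal sketch (in Python; the room temperature is my own illustrative choice, not something from the argument itself) of what the bound works out to for erasing a single bit:

import math

k_B = 1.380649e-23   # Boltzmann's constant, joules per kelvin
T = 300.0            # illustrative room temperature, kelvin

# Landauer bound: minimum heat generated by erasing one bit
q_min = math.log(2) * k_B * T
print(f"{q_min:.3e} joules per bit")   # about 2.9e-21 J, roughly 0.018 eV

Tiny by everyday standards, but it is a hard lower limit, independent of how cleverly the erasure is engineered.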

This all might come across as a blur of buzzwords, so take a moment to appreciate what is going on. “Information” seems like a fairly abstract concept, even in a field like physics where you can’t swing a cat without hitting an abstract concept or two. We record data, take pictures, write things down, all the time — and we forget, or erase, or lose our notebooks all the time, too. Landauer’s Principle says there is a direct connection between these processes and the thermodynamic arrow of time, the increase in entropy throughout the universe. The information we possess is a precious, physical thing, and we are gradually losing it to the heat death of the cosmos under the irresistible pull of the Second Law.

The principle originated in attempts to understand Maxwell’s Demon. You’ll remember the plucky sprite who decreases the entropy of gas in a box by letting all the high-velocity molecules accumulate on one side and all the low-velocity ones on the other. Since Maxwell proposed the Demon, all right-thinking folks agreed that the entropy of the whole universe must somehow be increasing along the way, but it turned out to be really hard to pinpoint just where it was happening.

[Image: Maxwell’s Demon]

The answer is not, as many people supposed, in the act of the Demon observing the motion of the molecules; it’s possible to make such observations in a perfectly reversible (entropy-neutral) fashion. But the Demon has to somehow keep track of what its measurements have revealed. And unless it has an infinitely big notebook, it’s going to eventually have to erase some of its records about the outcomes of those measurements — and that’s the truly irreversible process. This was the insight of Rolf Landauer in the 1960’s, which led to his principle.

A 1982 paper by Bennett provides a nice illustration of the principle in action, based on Szilard’s Engine. Short version of the argument: imagine you have a cylinder with a single molecule in it, rattling back and forth. If you don’t know where the molecule is, you can’t extract any energy from it. But if you measure its position, you can quickly stick in a piston on the side where the molecule is not, then let the molecule bump into the piston and extract energy. The amount you get out is (ln 2)kT. You have “extracted work” from a system that was supposed to be at maximum entropy, in apparent violation of the Second Law. But it was important that you started in a “ready state,” not knowing where the molecule was — in a world governed by reversible laws, that’s a crucial step if you want your measurement to correspond reliably to the correct result. So to do this kind of thing repeatedly, you will have to return to that ready state — which means erasing information. That decreases your phase space, and therefore increases entropy, and generates heat. At the end of the day, the information erasure generates at least as much entropy as you removed when you extracted the work; the Second Law is perfectly safe.
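(For the curious, here is a sketch of the standard back-of-the-envelope calculation behind that (ln 2)kT. Let the one-molecule “gas” push the piston isothermally from half the volume of the container to the full volume; the ideal-gas law for a single molecule gives p = kT/V, so the extracted work is

W = \int_{V/2}^{V} \frac{kT}{V'}\, dV' = kT \ln 2,

exactly the Landauer cost of erasing the one bit you gained by measuring which side the molecule was on.)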

The status of Landauer’s Principle is still a bit controversial in some circles — here’s a paper by John Norton setting out the skeptical case. But modern computers are running up against the physical limits on irreversible computation established by the Principle, and experiments seem to be verifying it. Even something as abstract as “information” is ultimately part of the world of physics.

Thanksgiving Read More »

36 Comments

Scientists Confirm Existence of Moon

Bit of old news here — well, the existence of the Moon is extremely old news, but even this new result is slightly non-new. But it was new to me.

IceCube is a wondrously inventive way of looking at the universe. Sitting at the South Pole, the facility itself consists of strings of basketball-sized detectors reaching over two kilometers deep into the Antarctic ice. Its purpose is to detect neutrinos, which it does when a neutrino interacts with the ice to create a charged lepton (electron, muon, or tau), which in turn splashes Cherenkov radiation into the detectors. The eventual hope is to pinpoint very high-energy neutrinos coming from specific astrophysical sources.

For this purpose, it’s the muon-creating neutrinos that are your best bet; electrons scatter multiple times in the ice, and taus decay too quickly, while muons give you a nice straight line. Sadly there is a heavy background of muons that have nothing to do with neutrinos, just from cosmic rays hitting the atmosphere. Happily most of these can be dealt with by using the Earth as a shield — the best candidate neutrino events are those that hit IceCube by coming up through the Earth, not down from the sky.

It’s important in this game to make sure your detector is really “pointing” where you think it is. (IceCube doesn’t move, of course; the detectors find tracks in the ice, from which a direction is reconstructed.) So it would be nice to have a source of muons to check against. Sadly, there is no such source in the sky. Happily, there is an anti-source — the shadow of the Moon.

Cosmic rays rain down on the Earth, creating muons as they hit the atmosphere, but we expect a deficit of cosmic rays in the direction of the Moon, which gets in the way. And indeed, here is the map constructed by IceCube of the muon flux in the vicinity of the Moon’s position in the sky.

[Figure: IceCube muon-flux map showing the Moon’s shadow]

There it is! I can definitely make out the Moon.

Really this is a cosmic-ray eclipse, I suppose. We can also detect the Moon in gamma rays, and the Sun in neutrinos. It’s exciting to be living at a time when technological progress is helping us overcome the relative poverty of our biological senses.

Scientists Confirm Existence of Moon Read More »

23 Comments

Why Does Dark Energy Make the Universe Accelerate?

Peter Coles has issued a challenge: explain why dark energy makes the universe accelerate in terms that are understandable to non-scientists. This is a pet peeve of mine — any number of fellow cosmologists will recall me haranguing them about it over coffee at conferences — but I’m not sure I’ve ever blogged about it directly, so here goes. In three parts: the wrong way, the right way, and the math.

The Wrong Way

Ordinary matter acts to slow down the expansion of the universe. That makes intuitive sense, because the matter is exerting a gravitational force, acting to pull things together. So why does dark energy seem to push things apart?

The usual (wrong) way to explain this is to point out that dark energy has “negative pressure.” The kind of pressure we are most familiar with, in a balloon or an inflated tire, pushes out on the membrane enclosing it. But negative pressure — tension — is more like a stretched string or rubber band, pulling in rather than pushing out. And dark energy has negative pressure, so that makes the universe accelerate.

If the kindly cosmologist is both lazy and fortunate, that little bit of word salad will suffice. But it makes no sense at all, as Peter points out. Why do we go through all the conceptual effort of explaining that negative pressure corresponds to a pull, and then quickly mumble that this accounts for why galaxies are pushed apart?

So the slightly more careful cosmologist has to explain that the direct action of this negative pressure is completely impotent, because it’s equal in all directions and cancels out. (That’s a bit of a lie as well, of course; it’s really because you don’t interact directly with the dark energy, so you don’t feel pressure of any sort, but admitting that runs the risk of making it all seem even more confusing.) What matters, according to this line of fast talk, is the gravitational effect of the negative pressure. And in Einstein’s general relativity, unlike Newtonian gravity, both the pressure and the energy contribute to the force of gravity. The negative pressure associated with dark energy is so large that it overcomes the positive (attractive) impulse of the energy itself, so the net effect is a push rather than a pull.

This explanation isn’t wrong; it does track the actual equations. But it’s not the slightest bit of help in bringing people to any real understanding. It simply replaces one question (why does dark energy cause acceleration?) with two facts that need to be taken on faith (dark energy has negative pressure, and gravity is sourced by a sum of energy and pressure). The listener goes away with, at best, the impression that something profound has just happened rather than any actual understanding.

The Right Way

The right way is to not mention pressure at all, positive or negative. For cosmological dynamics, the relevant fact about dark energy isn’t its pressure, it’s that it’s persistent. It doesn’t dilute away as the universe expands. And this is even a fact that can be explained, by saying that dark energy isn’t a collection of particles growing less dense as space expands, but instead is (according to our simplest and best models) a feature of space itself. The amount of dark energy is constant throughout both space and time: about one hundred-millionth of an erg per cubic centimeter. It doesn’t dilute away, even as space expands.

Given that, all you need to accept is that Einstein’s formulation of gravity says “the curvature of spacetime is proportional to the amount of stuff within it.” (The technical version of “curvature of spacetime” is the Einstein tensor, and the technical version of “stuff” is the energy-momentum tensor.) In the case of an expanding universe, the manifestation of spacetime curvature is simply the fact that space is expanding. (There can also be spatial curvature, but that seems negligible in the real world, so why complicate things.)

So: the density of dark energy is constant, which means the curvature of spacetime is constant, which means that the universe expands at a fixed rate.

The tricky part is explaining why “expanding at a fixed rate” means “accelerating.” But this is a subtlety worth clarifying, as it helps distinguish between the expansion of the universe and the speed of a physical object like a moving car, and perhaps will help someone down the road not get confused about the universe “expanding faster than light.” (A confusion which many trained cosmologists who really should know better continue to fall into.)

The point is that the expansion rate of the universe is not a speed. It’s a timescale — the time it takes the universe to double in size (or expand by one percent, or whatever, depending on your conventions). It couldn’t possibly be a speed, because the apparent velocity of distant galaxies is not a constant number, it’s proportional to their distance. When we say “the expansion rate of the universe is a constant,” we mean it takes a fixed amount of time for the universe to double in size. So if we look at any one particular galaxy, in roughly ten billion years it will be twice as far away; in twenty billion years (twice that time) it will be four times as far away; in thirty billion years it will be eight times as far away, and so on. It’s accelerating away from us, exponentially. “Constant expansion rate” implies “accelerated motion away from us” for individual objects.

There’s absolutely no reason why a non-scientist shouldn’t be able to follow why dark energy makes the universe accelerate, given just a bit of willingness to think about it. Dark energy is persistent, which imparts a constant impulse to the expansion of the universe, which makes galaxies accelerate away. No negative pressures, no double-talk.

The Math

So why are people tempted to talk about negative pressure? As Peter says, there is an equation for the second derivative (roughly, the acceleration) of the universe, which looks like this:

\frac{\ddot a}{a} = -\frac{4\pi G}{3}(\rho + 3p) .

(I use a for the scale factor rather than R, and sensibly set c=1.) Here, ρ is the energy density and p is the pressure. To get acceleration, you want the second derivative to be positive, and there’s a minus sign outside the right-hand side, so we want (ρ + 3p) to be negative. The data say the dark energy density is positive, so a negative pressure is just the trick.
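To make that concrete: for the simplest (cosmological-constant) form of dark energy the pressure is p = -\rho, so

\rho + 3p = \rho - 3\rho = -2\rho < 0,

and with a positive energy density the right-hand side of the acceleration equation is positive: acceleration.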

But, while that’s a perfectly good equation — the “second Friedmann equation” — it’s not the one anyone actually uses to solve for the evolution of the universe. It’s much nicer to use the first Friedmann equation, which involves the first derivative of the scale factor rather than its second derivative (spatial curvature set to zero for convenience):

H^2 \equiv \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3} \rho.

Here H is the Hubble parameter, which is what we mean when we say “the expansion rate.” You notice a couple of nice things about this equation. First, the pressure doesn’t appear. The expansion rate is simply driven by the energy density ρ. It’s completely consistent with the acceleration equation above, since the two are related by an equation that encodes energy-momentum conservation, and the pressure does make an appearance there. Second, a constant energy density straightforwardly implies a constant expansion rate H. So no problem at all: a persistent source of energy causes the universe to accelerate.
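(For the curious, that conservation equation is

\dot\rho + 3H(\rho + p) = 0,

so for dark energy with p = -\rho the density really does stay constant. And a constant H is just the statement that \dot{a}/a is constant, whose solution is exponential growth, a(t) \propto e^{Ht}: the accelerating expansion described above.)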

Banning “negative pressure” from popular expositions of cosmology would be a great step forward. It’s a legitimate scientific concept, but is more often employed to give the illusion of understanding rather than any actual insight.

Why Does Dark Energy Make the Universe Accelerate? Read More »

120 Comments

Handing the Universe Over to Europe

Back in the day (ten years ago), I served on a NASA panel charged with developing a long-term roadmap for NASA’s astrophysics missions. At the time there were complaints from Congress and the Office of Management and Budget that NASA was asking for lots of things, but without any overarching strategy. Whether that was true or not, we recognized the need to make hard choices and put forward a coherent plan. The result was the Beyond Einstein roadmap. We were ambitious, but reasonable, we thought, and the feedback we received from Congress and elsewhere was generally quite positive.

Hahahahaha. In the end, almost nothing that we proposed is actually being carried out. Our roadmap had different ingredients (to mix a metaphor): two large “facility-class” missions comparable to NASA’s Great Observatories, three more moderate “Einstein Probes” to study dark energy, inflation, and black holes, and more speculative “Vision missions” for further down the road. The Einstein Probes have long since fallen by the wayside, although the dark-energy mission might find life via one of the telescopes donated to NASA by the National Reconnaissance Office. If we don’t have the willpower/resources to do the moderate-sized missions, you might suspect that the facility-class missions are even more hopeless, and you’d be right.

But never fear! Word out of Europe (although still not official, apparently) is that the ESA has prioritized missions to study the “hot and energetic universe” and the “gravitational universe.” These map pretty well onto Constellation-X and LISA, the two facility-class missions we recommended pursuing in Beyond Einstein. The former would have been an X-ray telescope, while the latter would be a gravitational-wave observatory. Unfortunately the likely launch date for an ESA gravitational-wave mission isn’t until 2034, which is like forever. Fortunately, China has expressed interest in such a project, which might move things along.

For anyone following the news of last year’s Higgs discovery, it’s a familiar story. Here in the US we had a big particle accelerator planned, the SSC, which was canceled in 1993. That allowed CERN time and money to build the LHC, which eventually found the Higgs (and who knows what else it will find in the future). The US makes big plans, loses nerve, and Europe (or someone else) picks up the pieces.

Personally, I could not possibly care less which country gets the credit for scientific discoveries. If we someday map out the spacetime geometry around a black hole using data from a gravitational-wave observatory, whether it was launched by Europe or the US or China or India or Dubai matters to me not one whit. But I do want to see it launched by somebody. And the health of global science is certainly better off when the US is an active and energetic participant — the more resources and more competition we see in the field, the more benefits for everybody. Let’s hope we find a way for US science to shift back into high gear, so that we are players rather than merely spectators in this amazing game.

Handing the Universe Over to Europe Read More »

15 Comments

Billions of Worlds

I’m old enough to remember when we had nine planets in the Solar System, and zero outside. The news since then has been mixed. Here in our neighborhood we’re down to only eight planets; but in the wider galaxy, we’ve obtained direct evidence for about a thousand, with another several thousand candidates. [Thanks to Peter Edmonds for a correction there.] Now that we have real data, what used to be guesswork gives way to best-fit statistical inference. How many potentially habitable planets are there in the Milky Way, given some supposition about what counts as “habitable”? Well, there are about 200 billion stars in the galaxy. And about one in five is roughly Sun-like. And now our best estimate is that about one in five of them has a somewhat Earth-like planet. So you do the math: about eight billion Earth-like planets. (Here’s the PNAS paper, by Petigura, Howard, and Marcy.)
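(Spelled out, “do the math” is just a product of rough round numbers, each of them uncertain:

2\times 10^{11} \ \mathrm{stars} \times \frac{1}{5} \times \frac{1}{5} \approx 8\times 10^{9},

which is where the eight billion comes from.)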

[Image: Kepler]

“Earth-like” doesn’t mean “littered with human-esque living organisms,” of course. The number of potentially habitable planets is a big number, but to get the number of intelligent civilizations we need to multiply by the fraction of such planets that are home to such civilizations. And we don’t know that.

It’s surprising how many people resist this conclusion. To drive it home, consider a very simplified model of the Drake equation.

x = a \cdot b.

x equals a times b. Now I give you a, and ask you to estimate x. Well, you can’t. You don’t know b. In the abstract this seems obvious, but there’s a temptation to think that if a (the number of Earth-like planets) is really big, then x (the number of intelligent civilizations) must be pretty big too. As if it’s just not possible that b (the fraction of Earth-like planets with intelligent life) could be that small. But it could be! It could be 10^{-100}, in which case there could be billions of Earth-like planets for every particle in the observable universe and still it would be unlikely that any of the others contained intelligent life. Our knowledge of how easy it is for life to start, and what happens once it does, is pretty pitifully bad right now.
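A two-line sketch (in Python, with completely made-up illustrative values of b) makes the point:

a = 8e9                            # Earth-like planets in the galaxy, from the estimate above
for b in (1e-1, 1e-10, 1e-100):    # fraction with intelligent life: totally unknown
    print(b, "->", a * b)          # 8e8 civilizations, or 0.8, or effectively zero

The answer swings from “the galaxy is crowded” to “we are almost certainly alone,” and nothing about a being large tells you which.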

On the other hand — maybe b isn’t that small, and there really are (or perhaps “have been”) many other intelligent civilizations in the Milky Way. No matter what UFO enthusiasts might think, we haven’t actually found any yet. The galaxy is big, but its spatial extent (about a hundred thousand light-years) is not all that forbidding when you compare to its age (billions of years). It wouldn’t have been that hard for a plucky civilization from way back when to colonize the galaxy, whether in person or using self-replicating robots. It’s not the slightest bit surprising (to me) that we haven’t heard anything by pointing radio telescopes at the sky — beaming out electromagnetic radiation in all directions seems like an extraordinarily wasteful way to go about communicating. Much better to send spacecraft to lurk around likely star systems, à la the monolith from 2001. But we haven’t found any such thing, and 2001 was over a decade ago. That’s the Fermi paradox — where is everyone?

It isn’t hard to come up with solutions to the Fermi paradox. Maybe life is just rare, or maybe intelligence generally leads to self-destruction. I don’t have strong feelings one way or another, but I suspect that more credence should be given to a somewhat disturbing possibility: the Enlightenment/Boredom Hypothesis (EBH).

The EBH is basically the idea that life is kind of like tic-tac-toe. It’s fun for a while, but eventually you figure it out, and after that it gets kind of boring. Or, in slightly more exalted terms, intelligent beings learn to overcome the petty drives of the material world, and come to an understanding that all that strife and striving was to no particular purpose. We are imbued by evolution with a desire to survive and continue the species, but perhaps a sufficiently advanced civilization overcomes all that. Maybe they perfect life, figure out everything worth figuring out, and simply stop.

I’m not saying the EBH is likely, but I think it’s on the table as a respectable possibility. The Solar System is over four billion years old, but humans reached behavioral modernity only a few tens of thousands of years ago, and figured out how to do science only a few hundred years ago. Realistically, there’s no way we can possibly predict what humanity will evolve into over the next few hundreds of thousands or millions of years. Maybe the swashbuckling, galaxy-conquering impulse is something that intelligent species rapidly outgrow or grow tired of. It’s an empirical question — we should keep looking, not be discouraged by speculative musings for which there’s little evidence. While we’re still in swashbuckling mode, there’s no reason we shouldn’t enjoy it a little.

Billions of Worlds Read More »

60 Comments

Back In the Saddle

So apparently I just took an unscheduled blogging hiatus over the past couple of weeks. Sorry about that — it wasn’t at all intentional, real life just got in the way. It was a fun kind of real life — trips to Atlanta, NYC, and Century City, all of which I hope to chat about soon enough.

Anything happen while I was gone? Oh yeah, dark matter was not discovered. More specifically, the LUX experiment released new limits, which at face value rule out some of those intriguing hints that might have been pointing toward lighter-than-expected dark matter particles. (Not everyone thinks things should be taken at face value, but we’ll see.) I didn’t get a chance to comment at the time, but Jester and Matt Strassler have you covered.

[Figure: LUX exclusion limits]

Let me just emphasize: there’s still plenty of room for dark matter in general, and WIMPs (weakly interacting massive particles, the particular kind of dark matter experiments like this are looking for) in particular. The parameter space is shaved off a bit, but it’s far from exhausted. Not finding a signal in a certain region of parameter space certainly decreases the Bayesian probability that a model is true, but in this case there’s still plenty of room.
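As a cartoon of that Bayesian statement (in Python, with entirely invented numbers; the real likelihoods are far more involved), suppose some fraction of the WIMP parameter space would have produced a clear signal, and none was seen:

prior = 0.5          # invented prior credence that the WIMP model is right
f_probed = 0.3       # invented fraction of its parameter space the search could have seen

# P(no signal | WIMP) = 1 - f_probed, P(no signal | no WIMP) = 1
posterior = prior * (1 - f_probed) / (prior * (1 - f_probed) + (1 - prior))
print(posterior)     # about 0.41: lower than the prior, but nowhere near zero

A null result nudges the probability down; it doesn’t kill the idea until the whole parameter space has been probed.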

Not that there will be forever. If dark matter is a WIMP, it should be detectable, as long as we build sensitive enough experiments. Of course there are plenty of non-WIMP models out there, well worth exploring. But for the moment Nature is just asking that we be a little more patient.

Back In the Saddle Read More »

10 Comments

Is Time Real?

I mentioned some time back the Closer to Truth series, in which Robert Lawrence Kuhn chats with scientists, philosophers, and theologians about the Big Questions. Apparently some excerpts are now appearing on YouTube — here I am talking about whether time is real.

[Video: Sean Carroll - Is Time Real?]

In one sense, it’s a silly question. The “reality” of something is only an interesting issue if it’s a well-defined concept whose actual existence is in question, like Bigfoot or supersymmetry. For concepts like “time,” which are unambiguously part of a useful vocabulary we have for describing the world, talking about “reality” is just a bit of harmless gassing. They may be emergent or fundamental, but they’re definitely there. (Feel free to substitute “free will” for “time” if you like.) Temperature and pressure didn’t stop being real once we understood them as emergent properties of an underlying atomic description.

The question of whether time is fundamental or emergent is, on the other hand, crucially important. I have no idea what the answer is (and neither does anybody else). Modern theories of fundamental physics and cosmology include both possibilities among the respectable proposals.

Note that I haven’t actually watched the above video, and it’s been more than three years since the interview. Let me know if I said anything egregiously wrong. (I’m sure you will.)

Is Time Real? Read More »

63 Comments

Englert and Higgs

Congratulations to Francois Englert and Peter Higgs for winning this year’s Nobel Prize in Physics. However annoying the self-imposed rules are that prevent the prize from more accurately reflecting the actual contributions, there’s no question that the work being honored this time around is truly worthy.

To me, the proposal of the Higgs mechanism is one of the absolutely most impressive examples we have of the precision and restrictiveness of Nature’s workings at a deep level — something that sometimes gets lost in the hand-waving analogies we are necessarily reduced to when trying to explain hard ideas to a wide audience. There they were, back in 1964 — Englert and Higgs, as well as Anderson, Brout, Guralnik, Hagen, and Kibble — confronted with a relatively abstract-sounding problem: how can you make a model for the nuclear forces that is based on local symmetry, like electromagnetism and gravity, but nevertheless only stretches over short ranges, like we actually observe? (None of these folks were thinking about “giving particles mass”; that only came in 1967, with Weinberg and Salam.)

It sounds like a pretty esoteric, open-ended question. And they just sat down and thought about it, with only very crude guidance from actual data. And they went out on a limb (one that had been constructed by other physicists, like Yoichiro Nambu and Jeffrey Goldstone) and put forward a dramatic idea: empty space is filled with an invisible field that acts like fog, attenuating the lines of force and keeping the interaction short-range. How would you ever know that such an idea were true? Only because you could imagine poking that field a bit, to set it vibrating, and observe the vibrations as a new kind of particle.

And forty-eight years, billions of dollars, and thousands of dedicated people later, that particle finally showed up, as a little bump amidst trillions of collision events. Amazing.

[Figures: CMS and ATLAS 2012 Higgs-search plots]

Here are my Top Ten Higgs Boson Facts. And here I am yakking about it on Sixty Symbols:

[Video: Talking about the Higgs Boson - Sixty Symbols]

Professors Englert and Higgs have every reason to be very proud, but this prize is really a testament to human intellectual curiosity and perseverance. And well deserved, at that.

Englert and Higgs Read More »

17 Comments

The Nobel Prize Is Really Annoying

One of the chapters in Surely You’re Joking, Mr. Feynman is titled “Alfred Nobel’s Other Mistake.” The first being dynamite, of course, and the second being the Nobel Prize. When I first read it I was a little exasperated by Feynman’s kvetchy tone — sure, there must be a lot of nonsense associated with being named a Nobel Laureate, but it’s nevertheless a great honor, and more importantly the Prizes do a great service for science by highlighting truly good work.

These days, as I grow in wisdom and kvetchiness myself, I’m coming around to Feynman’s point of view. I still believe that on balance the Prizes are a very good thing, and generally they honor some of the very best work in physics. (Some of my best friends are winners!) But having written a book about the Higgs boson discovery, which is on everybody’s lips as a natural candidate (though not the only one!), all of the most annoying aspects of the process are immediately apparent.

The most annoying of all the annoying aspects is, of course, the rule in physics (and the other non-peace prizes, I think) that the prize can go to at most three people. This is utterly artificial, and completely at odds with the way science is actually done these days. In my book I spread credit for the Higgs mechanism among no fewer than seven people: Philip Anderson, Francois Englert, Robert Brout (who is now deceased), Peter Higgs, Gerald Guralnik, Carl Hagen, and Tom Kibble. In a sensible world they would share the credit, but in our world we have endless pointless debates (the betting money right now seems to be pointing toward Englert and Higgs, but who knows). As far as I can tell, the “no more than three winners” rule isn’t actually written down in Nobel’s will, it’s more of a tradition that has grown up over the years. It’s kind of like the government shutdown: we made up some rules, and are now suffering because of them.

The folks who should really be annoyed are, of course, the experimentalists. There’s a real chance that no Nobel will ever be given out for the Higgs discovery, since it was carried out by very large collaborations. If that turns out to be the case, I think it will be the best possible evidence that the system is broken. I definitely appreciate that you don’t want to water down the honor associated with the prizes by handing them out to too many people (the ranks of “Nobel Laureates” would in some sense swell by the thousands if the prize were given to the ATLAS and CMS collaborations, as they should be), but it’s more important to get things right than to stick to some bureaucratic rule.

The worst thing about the prizes is that people become obsessed with them — both the scientists who want to win, and the media who write about the winners. What really matters, or should matter, is finding something new and fundamental about how nature works, either through a theoretical idea or an experimental discovery. Prizes are just the recognition thereof, not the actual point of the exercise.

Of course, neither the theorists who proposed the Higgs mechanism nor the experimentalists who found the boson actually had “win the Nobel Prize” as a primary motivation. They wanted to do good science. But once the good science is done, it’s nice to be recognized for it. And if any subset of the above-mentioned folks are awarded the prize this year or next, it will be absolutely well-deserved — it’s epochal, history-making stuff we’re talking about here. The griping from the non-winners will be immediate and perfectly understandable, but we should endeavor to honor what was actually accomplished, not just who gets the gold medals.

The Nobel Prize Is Really Annoying Read More »

44 Comments