Not Being Announced Tomorrow: Discovery of the Higgs Boson

Tomorrow, Tuesday 13 December, there will be a couple of seminars at CERN presented by Fabiola Gianotti and Guido Tonelli, speaking respectively for the ATLAS and CMS collaborations at the LHC. They will be updating us on the current status of the search for the Higgs boson. The seminars will be webcast from CERN, and there should be a liveblog on Twitter that you can follow by searching for the #higgsliveblog hashtag (no Twitter account required). The seminars start at 14:00 Geneva time, so that’s 5:00 a.m. Pacific time if I do my calculations correctly. Of course there will be plenty of news coverage immediately thereafter, so don’t feel too bad if you sleep through it. Many places with LHC physicists (including Caltech) are also having their own local seminars. Should be exciting!

If you want to know why it’s exciting, after you’ve read John’s description of life in the trenches and Matt Strassler’s post about the multiple stages of hunting the Higgs and mine about why we need something like it, see even more recent posts by Matt, Jester, and Pauline Gagnon. Reader’s Digest version: not only are we being updated on the status of the search, there are believable rumors that the searches are actually seeing something — hints of a Higgs near 125 GeV, with better than 3-sigma significance from ATLAS and better than 2-sigma significance from CMS. But obviously rumors are no match for what actually happens.

All I’m here to tell you is: you should not expect to hear anyone announcing that we have discovered the Higgs boson. This will, at best, be a hint — “evidence for” something, not “discovery of” that thing. The collaborations realistically can’t claim to have actually discovered the Higgs, even if it’s there — they don’t have enough data. (CERN even issued a press release to drive home the point.) And in the real world, hints are sometimes misleading. That is: the experimenters will give us their absolute best judgment about what they are seeing, but at this stage of the game that judgment is necessarily extremely preliminary. If they say “we have 3.5-sigma evidence, which is quite suggestive,” do not think that they are just being coy and what they really mean is “oh, we know it’s there, we just have to follow the protocols.” The protocols are there for a reason! Mostly, the reason is that many 3-sigma findings eventually go away. This is one step on a journey, not the culmination of anything. (For Americans out there: it’s like a bill has been passed by the House, but not yet passed by the Senate, and certainly not signed by the President. Much can go wrong along the way.)

The journey of a thousand miles begins with a single step. It’s possible that tomorrow’s announcement means that we’re nearing the end of the journey, say at the mile-990 marker. But we can’t be sure, and there are no royal roads to particle physics. Patience! The excitement of not knowing for sure is what makes science one of the most compelling human stories.

Guest Post: Matt Strassler on Hunting for the Higgs

Perhaps you’ve heard of the Higgs boson. Perhaps you’ve heard the phrase “desperately seeking” in this context. We need it, but so far we can’t find it. This all might change soon — there are seminars scheduled at CERN by both of the big LHC collaborations, to update us on their progress in looking for the Higgs, and there are rumors they might even bring us good news. You know what they say about rumors: sometimes they’re true, and sometimes they’re false.

So we’re very happy to welcome a guest post by Matt Strassler, who is an expert particle theorist, to help explain what’s at stake and where the search for the Higgs might lead. Matt has made numerous important contributions, from phenomenology to string theory, and has recently launched the website Of Particular Significance, aimed at making modern particle physics accessible to a wide audience. Go there for a treasure trove of explanatory articles, growing at an impressive pace.

———————–

After this year’s very successful run of the Large Hadron Collider (LHC), the world’s most powerful particle accelerator, a sense of great excitement is beginning to pervade the high-energy particle physics community. The search for the Higgs particle… or particles… or whatever appears in its place… has entered a crucial stage.

We’re now deep into Phase 1 of this search, in which the LHC experiments ATLAS and CMS are looking for the simplest possible Higgs particle. This unadorned version of the Higgs particle is usually called the Standard Model Higgs, or “SM Higgs” for short. The end of Phase 1 looks to be at most a year away, and possibly much sooner. Within that time, either the SM Higgs will show up, or it will be ruled out once and for all, forcing an experimental search for more exotic types of Higgs particles. Either way, it’s a turning point in the history of our efforts to understand nature’s elementary laws.

This moment has been a long time coming. I’ve been working as a scientist for over twenty years, and for a third decade before that I was reading layperson’s articles about particle physics, and attending public lectures by my predecessors. Even then, the Higgs particle was a profound mystery. Within the Standard Model (the equations used at the LHC to describe all the particles and forces of nature we know about so far, along with the SM Higgs field and particle) it stood out as a bit different, a bit ad hoc, something not quite like the others. It has always been widely suspected that the full story might be more complicated. Already in the 1970s and 1980s there were speculative variants of the Standard Model’s equations containing several types of Higgs particles, and other versions with a more complicated Higgs field and no Higgs particle — with the key role of the Higgs particle played by other new particles and forces.

But everyone also knew this: you could not simply take the equations of the Standard Model, strip the Higgs particle out, and put nothing back in its place. The resulting equations would not form a complete theory; they would be self-inconsistent. …

On Determinism

Back in 1814, Pierre-Simon Laplace was mulling over the implications of Newtonian mechanics, and realized something profound. If there were a vast intelligence — since dubbed Laplace’s Demon — that knew the exact state of the universe at any one moment, and knew all the laws of physics, and had arbitrarily large computational capacity, it could both predict the future and reconstruct the past with perfect accuracy. While this is a straightforward consequence of Newton’s theory, it seems to conflict with our intuitive notion of free will. Even if there is no such demon, the universe presumably is in some particular state at each moment, which implies that the future is fixed by the present. What room, then, for free choice? What’s surprising is that we still don’t have a consensus answer to this question. Subsequent developments, most relevantly the probabilistic nature of predictions in quantum mechanics, have muddied the waters more than they have clarified them.

Massimo Pigliucci has written a primer for skeptics of determinism, in part spurred by reading (and taking issue with) Alex Rosenberg’s new book The Atheist’s Guide to Reality, which I mentioned here. And Jerry Coyne responds, mostly to say that none of this amounts to “free will” over and above the laws of physics. (Which is true, even if, as I’ll mention below, quantum indeterminacy can propagate upward to classical behavior.) I wanted to give my own two cents, partly as a physicist and partly as a guy who just can’t resist giving his two cents.

Echoing Massimo’s structure, here are some talking points:

* There are probably many notions of what determinism means, but let’s distinguish two. The crucial thing is that the universe can be divided up into different moments of time. (The division will generally be highly non-unique, but that’s okay.) Then we can call “global determinism” the claim that, if we know the exact state of the whole universe at one time, the future and past are completely determined. But we can also define “local determinism” to be the claim that, if we know the exact state of some part of the universe at one time, the future and past of a certain region of the universe (the “domain of dependence”) are completely determined. Both are reasonable and relevant.

* It makes sense to be interested, as Massimo seems to be, in whether or not the one true correct ultimate set of laws of physics are deterministic or not. …

Thanksgiving

This year we give thanks for a concept that has been particularly useful in recent times: the error bar. (We’ve previously given thanks for the Standard Model Lagrangian, Hubble’s Law, the Spin-Statistics Theorem, conservation of momentum, and effective field theory.)

Error bars are a simple and convenient way to characterize the expected uncertainty in a measurement, or for that matter the expected accuracy of a prediction. In a wide variety of circumstances (though certainly not always), we can characterize uncertainties by a normal distribution — the bell curve made famous by Gauss. Sometimes the measurements are a little bigger than the true value, sometimes they’re a little smaller. The nice thing about a normal distribution is that it is fully specified by just two numbers — the central value, which tells you where it peaks, and the standard deviation, which tells you how wide it is. The simplest way of thinking about an error bar is as our best guess at the standard deviation of the distribution our measurements would follow if everything were going right. Things might go wrong, of course, and your neutrinos might arrive early; but that’s not the error bar’s fault.

Now, there’s much more going on beneath the hood, as any scientist (or statistician!) worth their salt would be happy to explain. Sometimes the underlying distribution is not expected to be normal. Sometimes there are systematic errors. Are you sure you want the standard deviation, or perhaps the standard error? What are the error bars on your error bars?

While these are important issues, we’re in a holiday mood and aren’t trying to be so picky. What we’re celebrating is not the concept of statistical uncertainty, but the elegant shortcut provided by the concept of the error bar. Sure, many things can be going on, and ultimately we want to be more careful; nevertheless, there’s no question that the ability to sum up our rough degree of precision in a single number is enormously useful. That’s the genius of the error bar: it lets you decide at a glance whether a result is possibly worth believing or not. The power spectrum of the cosmic microwave background is a pretty plot, but it only becomes convincing when we see the error bars. Then you have a right to go, “Aha, I see three peaks there!”

And the error bar isn’t just pretty, it provides some quantitative oomph. An error bar is basically the standard deviation — “sigma,” as the scientists like to call it. So if your distribution really is normal you know that an individual measurement should be within one sigma of the expected value about 68% of the time; within two sigma 95% of the time, and within three sigma 99.7% of the time. So if you’re not within three sigma, you begin to think your expectation was wrong — something fishy is going on. (Like maybe a Nobel-prize-worthy discovery?) Once you’re out at five sigma, you’re outside the 99.9999% range — in normal human experience, that’s pretty unlikely.
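If you want to check those coverage numbers yourself, they follow directly from the Gaussian integral. Here’s a minimal Python sketch (standard library only; the function name is my own) that computes the probability of landing within n sigma of the central value:

```python
from math import erf, sqrt

def within_n_sigma(n):
    """Probability that a normally distributed measurement lands
    within n standard deviations of the central value."""
    return erf(n / sqrt(2.0))

for n in (1, 2, 3, 5):
    p = within_n_sigma(n)
    print(f"within {n} sigma: {100 * p:.5f}%  (outside: about 1 in {1 / (1 - p):,.0f})")
```

This prints roughly 68.3%, 95.4%, 99.7%, and 99.99994%. Note these are two-sided probabilities; particle physicists often quote one-sided tail areas, which are half as big.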

Error bars aren’t the last word on statistical significance, they’re the first word. But we can all be thankful that so much meaning can be compressed into one little quantity.

Guest Post: David Wallace on the Physicality of the Quantum State

The question of the day seems to be, “Is the wave function real/physical, or is it merely a way to calculate probabilities?” This issue plays a big role in Tom Banks’s guest post (he’s on the “useful but not real” side), and there is an interesting new paper by Pusey, Barrett, and Rudolph that claims to demonstrate that you can’t simply treat the quantum state as a probability calculator. I haven’t gone through the paper yet, but it’s getting positive reviews. I’m a “realist” myself, as I think the best definition of “real” is “plays a crucial role in a successful model of reality,” and the quantum wave function certainly qualifies.

To help understand the lay of the land, we’re very happy to host this guest post by David Wallace, a philosopher of science at Oxford. David has been one of the leaders in trying to make sense of the many-worlds interpretation of quantum mechanics, in particular the knotty problem of how to get the Born rule (“the wave function squared is the probability”) out of this formalism. He was also a participant at our recent time conference, and the co-star of one of the videos I posted. He’s a very clear writer, and I think interested parties will get a lot out of reading this.

———————————-

Why the quantum state isn’t (straightforwardly) probabilistic

In quantum mechanics, we routinely talk about so-called “superposition states” – both at the microscopic level (“the state of the electron is a superposition of spin-up and spin-down”) and, at least in foundations of physics, at the macroscopic level (“the state of Schrodinger’s cat is a superposition of alive and dead”). Rather a large fraction of the “problem of measurement” is the problem of making sense of these superposition states, and there are basically two views. On the first (“state as physical”), the state of a physical system tells us what that system is actually, physically, like, and from that point of view, Schrodinger’s cat is seriously weird. What does it even mean to say that the cat is both alive and dead? And, if cats can be alive and dead at the same time, how come when we look at them we only see definitely-alive cats or definitely-dead cats? We can try to answer the second question by invoking some mysterious new dynamical process – a “collapse of the wave function” whereby the act of looking at half-alive, half-dead cats magically causes them to jump into alive-cat or dead-cat states – but a physical process which depends for its action on “observations”, “measurements”, even “consciousness”, doesn’t seem scientifically reputable. So people who accept the “state-as-physical” view are generally led either to try to make sense of quantum theory without collapses (that leads you to something like Everett’s many-worlds theory), or to modify or augment quantum theory so as to replace it with something scientifically less problematic.

On the second view, (“state as probability”), Schrodinger’s cat is totally unmysterious. When we say “the state of the cat is half alive, half dead”, on this view we just mean “it has a 50% probability of being alive and a 50% probability of being dead”. And the so-called collapse of the wavefunction just corresponds to us looking and finding out which it is. From this point of view, to say that the cat is in a superposition of alive and dead is no more mysterious than to say that Sean is 50% likely to be in his office and 50% likely to be at a conference.
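To make this concrete, here is an illustrative NumPy sketch (a toy example with a basis labeling invented for this post, not anything from the formal literature) showing that two different superpositions can assign identical alive/dead probabilities under the Born rule:

```python
import numpy as np

# Basis convention (chosen for illustration): index 0 = "alive", 1 = "dead".
alive_plus_dead  = np.array([1.0,  1.0], dtype=complex) / np.sqrt(2.0)
alive_minus_dead = np.array([1.0, -1.0], dtype=complex) / np.sqrt(2.0)

def born_probabilities(psi):
    """Born rule: the probability of each outcome is |amplitude|^2."""
    return np.abs(psi) ** 2

# Both superpositions give 50/50 in the alive/dead basis...
print(born_probabilities(alive_plus_dead))   # [0.5 0.5]
print(born_probabilities(alive_minus_dead))  # [0.5 0.5]

# ...yet they are different physical states: the relative phase shows up
# in other measurement bases, which a bare probability assignment
# ("50% alive, 50% dead") has no way to encode.
print(np.allclose(alive_plus_dead, alive_minus_dead))  # False
```

That leftover phase information is a first hint of why the quantum state isn’t straightforwardly probabilistic, which is where the argument goes next.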

Now, to be sure, probability is a bit philosophically mysterious. …

Guest Post: Tom Banks on Probability and Quantum Mechanics

The lure of blogging is strong. Having guest-posted about problems with eternal inflation, Tom Banks couldn’t resist coming back for more punishment. Here he tackles a venerable problem: the interpretation of quantum mechanics. Tom argues that the measurement problem in QM becomes a lot easier to understand once we appreciate that even classical mechanics allows for non-commuting observables. In that sense, quantum mechanics is “inevitable”; it’s actually classical physics that is somewhat unusual. If we just take QM seriously as a theory that predicts the probability of different measurement outcomes, all is well.

Tom’s last post was “technical” in the sense that it dug deeply into speculative ideas at the cutting edge of research. This one is technical in a different sense: the concepts are presented at a level that second-year undergraduate physics majors should have no trouble following, but there are explicit equations that might make it rough going for anyone without at least that much background. The translation from LaTeX to WordPress is a bit kludgy; here is a more elegant-looking pdf version if you’d prefer to read that.

—————————————-

Rabbi Eliezer ben Yaakov of Nahariya said in the 6th century, “He who has not said three things to his students, has not conveyed the true essence of quantum mechanics. And these are Probability, Intrinsic Probability, and Peculiar Probability”.

Probability first entered the teachings of men through the work of that dissolute gambler Pascal, who was willing to make a bet on his salvation. It was a way of quantifying our risk of uncertainty. Implicit in Pascal’s thinking, and in that of all who came after him, was the idea that there was a certainty, even a predictability, but that we fallible humans may not always have enough data to make the correct predictions. This implicit assumption is completely unnecessary, and the mathematical theory of probability makes use of it only through one crucial assumption, which turns out to be wrong in principle but right in practice for many actual events in the real world.

For simplicity, assume that there are only a finite number of things that one can measure, in order to avoid too much math. List the possible measurements as a sequence

A = (a_1, \ldots, a_N).

The a_n are the quantities being measured and each could have a finite number of values. Then a probability distribution assigns a number P(A) between zero and one to each possible outcome. The sum of the numbers has to add up to one. The so called frequentist interpretation of these numbers is that if we did the same measurement a large number of times, then the fraction of times or frequency with which we’d find a particular result would approach the probability of that result in the limit of an infinite number of trials. It is mathematically rigorous, but only a fantasy in the real world, where we have no idea whether we have an infinite amount of time to do the experiments. The other interpretation, often called Bayesian, is that probability gives a best guess at what the answer will be in any given trial. It tells you how to bet. This is how the concept is used by most working scientists. You do a few experiments and see how the finite distribution of results compares to the probabilities, and then assign a confidence level to the conclusion that a particular theory of the data is correct. Even in flipping a completely fair coin, it’s possible to get a million heads in a row. If that happens, you’re pretty sure the coin is weighted but you can’t know for sure.
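As a concrete illustration of the frequentist picture (a toy simulation of my own, not part of the original argument), here is what happens when you flip a fair coin many times and track the observed frequency of heads:

```python
import random

random.seed(1)   # make the run reproducible
p_heads = 0.5    # a completely fair coin

heads, flips = 0, 0
for target in (10, 100, 1_000, 10_000, 100_000):
    while flips < target:
        heads += random.random() < p_heads  # True counts as 1
        flips += 1
    freq = heads / flips
    print(f"{flips:>7d} flips: frequency = {freq:.4f}  (off by {abs(freq - p_heads):.4f})")
```

The deviation from 0.5 shrinks roughly like 1/sqrt(N) — but, as noted above, a million heads in a row is never strictly impossible, just absurdly improbable.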

Physical theories are often couched in the form of equations for the time evolution of the probability distribution, even in classical physics. One introduces “random forces” into Newton’s equations to “approximate the effect of the deterministic motion of parts of the system we don’t observe”. The classic example is the Brownian motion of particles we see under the microscope, where we think of the random forces in the equations as coming from collisions with the atoms in the fluid in which the particles are suspended. However, there’s no a priori reason why these equations couldn’t be the fundamental laws of nature. Determinism is a philosophical stance, an hypothesis about the way the world works, which has to be subjected to experiment just like anything else. Anyone who’s listened to a Geiger counter will recognize that the microscopic process of decay of radioactive nuclei doesn’t seem very deterministic. …
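To make “random forces in Newton’s equations” concrete, here is a hedged sketch (a toy discretization with all physical constants set to one, not anything from the post itself) of overdamped Brownian motion:

```python
import math
import random

def brownian_path(n_steps, dt=1e-3, sqrt_2d=1.0):
    """Overdamped Langevin dynamics with no external force: each step
    adds a Gaussian kick standing in for the unobserved collisions with
    molecules of the surrounding fluid (Euler-Maruyama discretization
    of dx = sqrt(2D) dW)."""
    x = 0.0
    for _ in range(n_steps):
        x += sqrt_2d * math.sqrt(dt) * random.gauss(0.0, 1.0)
    return x

# Diffusive check: the mean squared displacement grows linearly in time,
# <x^2> = (sqrt_2d)^2 * t, even though each individual path is random.
random.seed(0)
samples = [brownian_path(1000) for _ in range(2000)]  # 1000 steps of dt=1e-3 -> t = 1
msd = sum(x * x for x in samples) / len(samples)
print(f"mean squared displacement at t=1: {msd:.3f} (expect ~1.0)")
```

The statistical law (linear growth of the mean squared displacement) is perfectly definite even though no individual trajectory is predictable — which is exactly the sense in which probabilistic equations could serve as fundamental laws.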

New Physics at LHC? An Anomaly in CP Violation

Here in the Era of 3-Sigma Results, we tend to get excited about hints of new physics that eventually end up going away. That’s okay — excitement is cheap, and eventually one of these results is going to stick and end up changing physics in a dramatic way. Remember that “3 sigma” is the minimum standard required for physicists to take a new result at all seriously; if you want to get really excited, you should wait for 5 sigma significance. What we have here is a 3.5 sigma result, indicating CP violation in the decay of D mesons. Not quite as exciting as superluminal neutrinos, but if it holds up it’s big stuff. You can read about it at Résonaances or Quantum Diaries, or look at the talk recently given at the Hadronic Collider Physics Symposium 2011 in Paris. Here’s my attempt at an explanation.

The latest hint of a new result comes from the Large Hadron Collider, in particular the LHCb experiment. Unlike the general-purpose CMS and ATLAS experiments, LHCb is specialized: it looks at the decays of heavy mesons (particles consisting of one quark and one antiquark) to search for CP violation. “C” is for “charge” and “P” is for “parity”; so “CP violation” means you measure something happening with some particles, and then you measure the analogous thing happening when you switch particles with antiparticles and take the mirror image. (Parity reverses directions in space.) We know that CP is a pretty good symmetry in nature, but not a perfect one — Cronin and Fitch won the Nobel Prize in 1980 for discovering CP violation experimentally.

While the existence of CP violation is long established, it remains a target of experimental particle physicists because it’s a great window onto new physics. What we’re generally looking for in these big accelerators are new particles that are just too heavy and short-lived to be easily noticed in our everyday low-energy world. One way to do that is to just make the new particles directly and see them decaying into something. But another way is more indirect — measure the tiny effect of heavy virtual particles on the interactions of known particles. That’s what’s going on here.

More specifically, we’re looking at the decay of D mesons in two different ways, into kaons and pions. If you like thinking in terms of quarks, here are the dramatis personae:

  • D0 meson: charm quark + anti-up quark
  • anti-D0: anti-charm quark + up quark
  • K-: strange quark + anti-up quark
  • K+: anti-strange quark + up quark
  • π-: down quark + anti-up quark
  • π+: anti-down quark + up quark

Let’s look at the D0 meson. What happens is the charm quark (much heavier than the anti-up) decays into three lighter quarks: either up + strange + anti-strange, or up + down + anti-down. …
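Operationally, “measuring CP violation” here comes down to counting decays and comparing particle with antiparticle. Here’s a toy sketch with entirely made-up event counts (the real LHCb analysis also has to subtract production and detection asymmetries, which this ignores):

```python
from math import sqrt

def cp_asymmetry(n, nbar):
    """Raw asymmetry A = (N - Nbar) / (N + Nbar) between a decay and its
    CP conjugate, with the naive Poisson (sigma_N = sqrt(N)) error bar."""
    n, nbar = float(n), float(nbar)
    a = (n - nbar) / (n + nbar)
    sigma = 2.0 * sqrt(n * nbar * (n + nbar)) / (n + nbar) ** 2
    return a, sigma

# Hypothetical counts, purely to show the arithmetic:
a, sigma = cp_asymmetry(1_002_000, 998_000)
print(f"A = {a:.4f} +/- {sigma:.4f}  ({abs(a) / sigma:.1f} sigma from zero)")
```

With these invented numbers the asymmetry comes out to 0.0020 ± 0.0007, about 2.8 sigma from zero — which is exactly why, per the discussion above, nobody should declare victory before 5 sigma.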

A Minute of Time

For you arrow-of-time freaks who have been looking for a quick and engaging intro to the issues (maybe to show your friends to get them to appreciate your obsession), here’s a guest spot I did for the terrific Minute Physics series illustrated by Henry Reich. If you’re not already familiar with them, check out the entire series.

The Arrow of Time feat. Sean Carroll

Previously I did one on dark energy. It came out right after the Nobel Prize announcement, but don’t let that trick you into thinking I won the Prize myself. (Some people were tricked.)

2011 Nobel Prize: Dark Energy feat. Sean Carroll

Meanwhile, in a parallel universe, instead of writing Spacetime and Geometry, I wrote a massive tome on Cosmology. This parallel universe was featured on this week’s episode of Fringe. Here’s Walter Bishop retrieving his copy from Peter.

I helped with some of the equations on the episode. Thanks to Glen Whitman and Rob Chiappetta for the shout-out.

A Cornucopia of Time Talks

I don’t suppose “cornucopia” is the right collective noun, but what does one call a collection of talks centered on the subject of time? I previously linked to these talks from our time conference, but it’s clear from the viewing numbers that not nearly enough of you have taken advantage of them. There’s a lot of great stuff here! So let me pick out some of my very favorites, although I promise they are all good.

Here’s neuroscientist David Eagleman, talking about how we perceive time.

David Eagleman on CHOICE

Here’s physicist-turned-complexity-theorist Raissa D’Souza, talking about complexity.

Raissa D'Souza on COMPLEXITY

Here’s another physicist-turned-complexity-theorist, Geoffrey West, taking the complexity story even further.

Geoffrey West on COMPLEXITY

Here’s former guest-blogger, now Discover blogger, and engineer/roboticist/neuroscientist/philosopher Malcolm MacIver, talking about making choices and the evolution of consciousness.

Malcolm MacIver on CHOICE

And to top things off, here’s one of those mock debates (where participants attempt to defend the side they don’t believe in). This time it’s David Albert vs. David Wallace, on the many-worlds interpretation of quantum mechanics.

A Mock Debate on Quantum Mechanics with DAVID ALBERT and DAVID WALLACE

Seriously good stuff. There are still more talks not yet up; I’ll let you know.

Update: I didn’t realize my own talk was up. Here it is.

Sean Carroll, OPENING PANEL at FQXi Conference on Time

Column: Looking for New Forces

While my first column for Discover was on the multiverse, the second one is more down to Earth (as these things go): searching for new forces. Of course we are searching for new short-range forces at the Large Hadron Collider and in other particle-physics experiments, but here I’m talking about long-range “fifth forces.” While there are plausible motivations for searching for such forces, and the experimentalists have done an heroic job in constraining them, I argue that the most impressive thing is how we can say what forces are not out there — in particular, anything that would have any important effect on everyday life. There probably are more forces than we know about, but they’re only going to be of direct interest to physicists, I’m afraid. No tractor beams.
