Here in the Era of 3-Sigma Results, we tend to get excited about hints of new physics that eventually end up going away. That’s okay — excitement is cheap, and eventually one of these results is going to stick and end up changing physics in a dramatic way. Remember that “3 sigma” is the minimum standard required for physicists to take a new result at all seriously; if you want to get really excited, you should wait for 5 sigma significance. What we have here is a 3.5 sigma result, indicating CP violation in the decay of D mesons. Not quite as exciting as superluminal neutrinos, but if it holds up it’s big stuff. You can read about it at Résonaances or Quantum Diaries, or look at the talk recently given at the Hadronic Collider Physics Symposium 2011 in Paris. Here’s my attempt at an explanation.
The latest hint of a new result comes from the Large Hadron Collider, in particular the LHCb experiment. Unlike the general-purpose CMS and ATLAS experiments, LHCb is specialized: it looks at the decays of heavy mesons (particles consisting of one quark and one antiquark) to search for CP violation. “C” is for “charge” and “P” is for “parity”; so “CP violation” means you measure something happening with some particles, and then you measure the analogous thing happening when you switch particles with antiparticles and take the mirror image. (Parity reverses directions in space.) We know that CP is a pretty good symmetry in nature, but not a perfect one — Cronin and Fitch won the Nobel Prize in 1980 for discovering CP violation experimentally.
While the existence of CP violation is long established, it remains a target of experimental particle physicists because it’s a great window onto new physics. What we’re generally looking for in these big accelerators are new particles that are just too heavy and short-lived to be easily noticed in our everyday low-energy world. One way to do that is to just make the new particles directly and see them decaying into something. But another way is more indirect — measure the tiny effect of heavy virtual particles on the interactions of known particles. That’s what’s going on here.
More specifically, we’re looking at the decay of D mesons in two different ways, into kaons and pions. If you like thinking in terms of quarks, here are the dramatis personae:
- D0 meson: charm quark + anti-up quark
- anti-D0: anti-charm quark + up quark
- K-: strange quark + anti-up quark
- K+: anti-strange quark + up quark
- π-: down quark + anti-up quark
- π+: anti-down quark + up quark
Let’s look at the D0 meson. What happens is the charm quark (much heavier than the anti-up) decays into three lighter quarks: either up + strange + anti-strange, or up + down + anti-down. If it’s the former, we get a K- and a K+; if it’s the latter, we get a π- and a π+. Here’s one example, where D0 goes to K- and K+.
Of course the anti-D0 can also decay, and the anti-charm will go to either anti-up plus strange plus anti-strange, or anti-up plus down plus anti-down (just the antiparticles of what the D0 could go to). But if you match up the quarks, you see that the decay products are exactly the same as they were in the case of the original D0: either a K- and a K+, or a π- and a π+.
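The quark matching in the last two paragraphs can be checked mechanically. Here is a toy sketch (the names and the dictionary representation are mine, not any real analysis code) that CP-conjugates every quark in each final state and confirms the same pair of mesons comes back:

```python
# Toy bookkeeping for the quark content listed above.
QUARKS = {
    "K-":  ("s", "ubar"),
    "K+":  ("sbar", "u"),
    "pi-": ("d", "ubar"),
    "pi+": ("dbar", "u"),
}
MESONS = {content: name for name, content in QUARKS.items()}

def conjugate(q):
    """Swap a quark label with its antiquark label."""
    return q[:-3] if q.endswith("bar") else q + "bar"

def cp_conjugate(state):
    """Replace every quark in every meson by its antiquark."""
    return {MESONS[(conjugate(q1), conjugate(q2))]
            for q1, q2 in (QUARKS[m] for m in state)}

# Both D0 final states map onto themselves under CP:
for final_state in ({"K-", "K+"}, {"pi-", "pi+"}):
    assert cp_conjugate(final_state) == final_state
print("both final states are CP-symmetric sets")
```

That’s the whole point: the *final states* are identical for D0 and anti-D0, so any difference in the *rates* is a genuine CP asymmetry.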
Here’s where the search for CP violation comes in. If you take a D0 meson and “do a CP transformation to it,” you get an anti-D0, and vice-versa. So we can test for CP violation by comparing the rate at which D0’s decay to the rate of anti-D0’s. That’s basically the way Cronin and Fitch discovered CP violation, except that they started with neutral kaons and anti-kaons and watched them decay.
One problem is that the LHC itself doesn’t treat particles and anti-particles equally. It collides protons with protons, not protons with anti-protons. (It’s easier to make protons, so you get a higher luminosity [more events] if you stick with just protons.) So you end up making a lot more D0’s than anti-D0’s. In principle you can correct for that if you understand everything there is to understand about particle physics and your detector, but in practice we don’t. So the LHCb experimentalists did a clever thing: rather than just measuring the decay of D0’s and anti-D0’s into either kaons or pions, they measured them both, and then took the difference. This procedure is meant to cancel out all of the annoying experimental features, leaving only the pristine physics underneath. (If there is a nonzero difference in the CP violation rates between decays into kaons and decays into pions, at least one of those decays must itself violate CP.)
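As a sketch of why taking the difference works, here is a toy calculation with invented numbers, assuming (to first order) that the raw asymmetry measured in each channel is the genuine CP asymmetry plus a common production/detection bias:

```python
# Invented-for-illustration numbers; only the cancellation is the point.
a_production = 0.010     # hypothetical bias: 1% more D0's than anti-D0's
a_cp_kk = -0.004         # hypothetical genuine CP asymmetry in K- K+
a_cp_pipi = 0.004        # hypothetical genuine CP asymmetry in pi- pi+

# What each channel actually measures (physics + nuisance):
a_raw_kk = a_cp_kk + a_production
a_raw_pipi = a_cp_pipi + a_production

# The shared nuisance cancels in the difference, leaving pure physics:
delta_a_cp = a_raw_kk - a_raw_pipi
assert abs(delta_a_cp - (a_cp_kk - a_cp_pipi)) < 1e-12
print(f"Delta A_CP = {delta_a_cp:+.3%}")
```

Any effect that hits the two channels equally, production asymmetry or detector quirks, drops out of delta_a_cp by construction.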
And the answer is: there is a noticeable difference! It’s -0.82%, plus or minus 0.24%, for a total of 3.5 sigma. (82 divided by 24 is about 3.5.) And the prediction from the Standard Model is that we should get almost zero for this quantity — maybe 0.01% or thereabouts.
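The arithmetic can be spelled out. The stat/syst breakdown below is my recollection of the LHCb numbers, not something stated above, so treat it as an assumption:

```python
import math

# Back-of-the-envelope significance: central value over uncertainty,
# using the rounded numbers quoted in the text.
rough = abs(-0.82) / 0.24
print(f"{rough:.2f} sigma")    # about 3.4

# Assumed breakdown: +/-0.21% statistical and +/-0.11% systematic.
# Added in quadrature, these reproduce the quoted 3.5 sigma:
combined = math.hypot(0.21, 0.11)
precise = abs(-0.82) / combined
print(f"{precise:.2f} sigma")  # about 3.5
```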
So what could be going on? As Jester says, this is a surprising result — there aren’t a lot of models on the market that predict this level of CP violation in D0 decays but not in any of the other experiments we’ve already done. But the general idea, if you wanted to come up with such a model, would be to add new heavy particles that gently interfere with the process by which the charm quark in the above diagram decays into lighter quarks.
If I were to guess, I’d put my money on this result going away. But it stands a fighting chance! If it does hold up, to be honest it would be a bit frustrating — we would know that something new was going on, but not have too much of an idea what exactly it would be. But at least we’d know something about where to look, which is a huge advantage.
Truth in advertising notice: folks who write articles or press releases about CP violation are contractually obligated to say that this will help explain the matter-antimatter asymmetry in the universe. That might be true, or … it might not. My strong feeling is that we should be excited by discovering new particles of nature, and not rely on the crutch of relating everything to cosmology.
This comment isn’t about the possible new physics, it’s about the extremely annoying repetition of the “5 sigma to really believe it” myth. Follow that, and very little of astronomy over the past 3 or 4 decades survives! (Case in point: there may be NO (or very, very few) extrasolar planet detections that can pass this test.) The discovery of the accelerating expansion of the universe wouldn’t have passed this test when it was first announced (to great acclaim and damn few doubters) either, as another obvious example. Yes, you particle physicists (and related theorists) may actually need this rule of thumb, but many other branches of physics just laugh at it (atmospheric physics comes to mind also). And you have several real live working astronomers who are members of this blog also…
@27 This may be true, but the 5 sigma rule is an “industry standard” in particle physics and will always be a requirement for a discovery. The main reason is that preliminary results often yield 3 or 4 sigma measurements that wash out to 1 or 2 sigma after further study on larger datasets. Hopefully that won’t be the case here 😉 but the 5 sigma limit is a failsafe. With regard to applying this in atmospheric physics and astronomy: the datasets used in particle physics are VAST, and this is frequently not the case in the other areas you mention. Hence, in the high-statistics world of particle physics, a small effect can be “blown up” artificially by systematic errors and the artificial training of analyses. Requiring 5 sigma for a discovery ensures that, for the most part, the result is due to a physics effect rather than anything else.
Particle physicists search for deviations from theory predictions in many channels. Just from the number of measurements, you expect some to get a statistical deviation of the order of 3 sigma, without any new physics (or errors in the theory predictions). 5 sigma, however, is safe. That is new physics or an error somewhere, but not a statistical fluctuation.
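A quick calculation makes this concrete (the number of measurements is invented for illustration, but the Gaussian tail probabilities are standard):

```python
import math

def p_beyond(n_sigma):
    """Two-sided Gaussian tail probability of fluctuating past n sigma."""
    return math.erfc(n_sigma / math.sqrt(2))

# Suppose an experiment makes 1000 independent measurements (an invented
# but not crazy count of channels and bins). Expected chance "signals":
expected_3sigma = 1000 * p_beyond(3)   # roughly 2.7 fake 3-sigma hints
expected_5sigma = 1000 * p_beyond(5)   # well under 0.001
print(f"3 sigma: ~{expected_3sigma:.2f} expected fluctuations")
print(f"5 sigma: ~{expected_5sigma:.6f} expected fluctuations")
```

So a few 3 sigma "discoveries" per thousand measurements come free from statistics alone, while a 5 sigma fluctuation essentially never does.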