A recent essay in the New York Times by Dennis Overbye has managed to attract quite a bit of attention around the internets — most of it not very positive. It concerns a recent paper by Holger Nielsen and Masao Ninomiya (and some earlier work) discussing a seemingly crazy-sounding proposal — that we should randomly choose a card from a million-card deck and, on the basis of which card we get, decide whether to go forward with the Large Hadron Collider. Responses have ranged from eye-rolling and heavy sighs to cries of outrage, clutching at pearls, and grim warnings that the postmodernists have finally infiltrated the scientific/journalistic establishment, this could be the straw that breaks the back of the Enlightenment camel, and worse.
Since I am quoted (in a rather non-committal way) in the essay, it’s my responsibility to dig into the papers and report back. And my message is: relax! Western civilization will survive. The theory is undeniably crazy — but not crackpot, which is a distinction worth drawing. And an occasional fun essay about speculative science in the Times is not going to send us back to the Dark Ages, or even rank among the top ten thousand dangers along those lines.
The standard Newtonian way of thinking about the laws of physics is in terms of an initial-value problem. You specify the state of the system (positions and velocities) at one moment, then the laws of physics tell you how it will evolve into the future. But there is a completely equivalent alternative, which casts the laws of physics in terms of an action principle. In this formulation, we assign a number — the action — to every possible history of the system throughout time. (The choice of what action to assign is simply the choice of what laws of physics are operative.) Then the allowed histories, the ones that “obey the laws of physics,” are those for which the action is the smallest. That’s the “principle of least action,” and it’s a standard undergraduate exercise to show that it’s utterly equivalent to the initial-value formulation of dynamics.
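If you like equations, here is the standard textbook version for a single particle in one dimension (nothing specific to the papers under discussion, just the usual undergraduate setup):

```latex
% The action assigns a single number to each candidate history x(t):
S[x] = \int_{t_1}^{t_2} L\big(x(t), \dot{x}(t)\big)\, dt,
\qquad L = \tfrac{1}{2} m \dot{x}^2 - V(x).

% Demanding that S be stationary under small variations,
% \delta S = 0, yields the Euler-Lagrange equation:
\frac{d}{dt} \frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0
\quad \Longrightarrow \quad
m \ddot{x} = - \frac{dV}{dx},

% which is just Newton's second law, recovering the
% initial-value formulation.
```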
In quantum mechanics, as you may have heard, things change a tiny bit. Instead of only allowing histories that minimize the action, quantum mechanics (as reformulated by Feynman) tells us to add up the contributions from every possible history, but give larger weight to those with smaller actions. In effect, we blur out the allowed trajectories around the one with absolutely smallest action.
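In symbols, with $latex \hbar$ Planck's constant (again, completely standard material):

```latex
% Feynman's sum over histories: the amplitude to go from one
% configuration to another gets a contribution from every path,
% each weighted by a pure phase built from its action:
\mathcal{A}(x_1 \to x_2) = \int \mathcal{D}x(t)\; e^{\, i S[x]/\hbar}.

% Paths far from the stationary action have rapidly varying phases
% and mostly cancel; paths near it add coherently. That coherent
% neighborhood is the "blurring" described above.
```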
Nielsen and Ninomiya (NN) pull an absolutely speculative idea out of their hats: they ask us to consider what would happen if the action were a complex number, rather than just a real number. Then there would be an imaginary part of the action, in addition to the real part. (This is the square-root-of-minus-one sense of “imaginary,” not the LSD-hallucination sense of “imaginary.”) No real justification — or if there is, it’s sufficiently lost in the mists that I can’t discern it from the recent papers. That’s okay; it’s just the traditional hypothesis-testing that has served science well for a few centuries now. Propose an idea, see where it leads, toss it out if it conflicts with the data, build on it if it seems promising. We don’t know all the laws of physics, so there’s no reason to stand pat.
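Schematically (my paraphrase of the setup, not their detailed construction): write the action as $latex S = S_R + i S_I$, and the path-integral weight splits into a phase times a real damping factor.

```latex
% With a complex action S = S_R + i S_I, each history's weight
% factorizes into a phase times a real damping factor:
e^{\, i S/\hbar} = e^{\, i S_R/\hbar}\, e^{- S_I/\hbar}.

% A large positive imaginary part S_I exponentially suppresses a
% history, no matter how happily that history extremizes the
% ordinary real action S_R.
```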
NN argue that the effect of the imaginary action is to highly suppress the probabilities associated with certain trajectories, even if those trajectories minimize the real action. But it does so in a way that appears nonlocal in spacetime — it’s really the entire trajectory through time that seems to matter, not just what is happening in our local neighborhood. That’s a crucial difference between their version of quantum mechanics and the conventional formulation. But it’s not completely bizarre or unprecedented. Plenty of hints we have about quantum gravity indicate that it really is nonlocal. More prosaically, in everyday statistical mechanics we don’t assign equal weight to every possible trajectory consistent with our current knowledge of the universe; by hypothesis, we only allow those trajectories that have a low entropy in the past. (As readers of this blog should well know by now; and if you don’t, I have a book you should definitely read.)
To make progress with this idea, you have to make a choice for what the imaginary part of the action is supposed to be. Here, in the eyes of this not-quite-expert, NN seem to cheat a little bit. They basically want the imaginary action to look very similar to the real action, but it turns out that this choice is naively ruled out. So they jump through some hoops until they get a more palatable choice of model, with the property that it is basically impotent except where the Higgs boson is concerned. (The Higgs, as a fundamental scalar, interacts differently than other particles, so this isn’t completely ad hoc — just a little bit.) Because they are not actually crackpots, they even admit what they’re doing — in their own words, “Our model with an imaginary part of the action begins with a series of not completely convincing, but still suggestive, assumptions.”
Having invoked the tooth fairy twice — contemplating an imaginary part of the action, then choosing its form so as to only be relevant where the Higgs is concerned — they consider consequences. Remember that the effect of the imaginary action is non-local in time — it depends on what happens throughout the history of the universe, not just here and now. In particular, given their assumptions, it provides a large suppression to any history in which large numbers of Higgs bosons are produced, even if they won’t be produced until some time in the future.
So this model makes a strong prediction: we’re not going to be producing any Higgs bosons. Not because the ordinary dynamical equations of physics prevent it (e.g., because the Higgs is just too massive), but because the specific trajectory on which the universe finds itself is one in which no Higgses are made.
That, of course, runs into the problem that we have every intention of making Higgs bosons, for example at the LHC. Aha, say NN, but notice that we haven’t yet! The Superconducting Supercollider, which could have found the Higgs long ago, was canceled by Congress. And in their December 2007 paper — before the LHC tried to turn on — they very explicitly say that a “natural” accident will come along and break the LHC if we try to turn it on. Well, we know how that turned out.
But NN have an ingenious suggestion for saving us from future accidents at the LHC — which, as they warn, could endanger lives. They propose a card game with more than a million cards, almost all of which say “go ahead, no problem.” But one card says “don’t turn on the LHC!” In their model, the nonlocal effect of the imaginary part of the action is to ensure that the realized history of the universe is one in which the LHC never turns on; but it doesn’t matter why it doesn’t turn on. If we randomly pick one out of a million cards, and honestly promise to follow through on the instructions on the card we pick, and we happen to pick the card that says not to turn it on, and we therefore don’t — that’s a history of the universe that is completely unsuppressed by their mechanism. And if we choose a card that says “go ahead,” well then their theory is falsified. (Unless we try to go ahead and are continually foiled by a series of unfortunate accidents.) Best of all, playing the card game costs almost nothing. But for it to work, we have to be very sincere that we won’t turn on the LHC if that’s what the card says. It’s only a million-to-one chance, after all.
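To see why the long odds don't matter, here is a toy calculation. The numbers are made up: the deck size is theirs, but the suppression factor is an arbitrary stand-in for whatever weight the imaginary action assigns to "LHC turns on" histories.

```python
# Toy NN card game: multiply each branch's ordinary probability by
# the (hypothetical) imaginary-action weight of the history it leads
# to, then renormalize.

p_stop = 1 / 1_000_000   # one "don't turn on the LHC!" card in the deck
suppression = 1e-12      # made-up NN weight for any history where the LHC runs

# Drawing the stop card -> the LHC never turns on -> unsuppressed history.
w_stop = p_stop * 1.0
# Drawing any "go ahead" card -> the LHC runs -> heavily damped history.
w_go = (1 - p_stop) * suppression

print("P(stop card | NN mechanism) =", w_stop / (w_stop + w_go))
# -> about 0.999999: the million-to-one card becomes a near-certainty.
```

The point is that the game is informative either way: if the mechanism is real and exponentially strong, the "don't" card is what we should expect to draw; if we draw "go ahead" and the machine subsequently runs fine, the theory is falsified.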
Note that all of this “nonlocal in time,” “receiving signals sent from the future” stuff is a bit of a red herring, at least at the classical level. We often think that the past is set in stone, while the future is still to be determined. But that’s not how the laws of physics operate. If we knew the precise state of the universe, and the exact laws of physics, the future would be as utterly determined as the present (Laplace’s Demon). We only think otherwise because our knowledge of the present state is highly imperfect, consisting as it does of a few pieces of information about the coarse-grained state. (We don’t know the position and velocity of every particle in the universe, or for that matter in any macroscopic object.) So there’s no need to think of NN’s imaginary action as making reference to what happens in the future — all the necessary data are in the present state. What seems weird to us is that the NN mechanism makes crucial use of detailed, non-macroscopic information about the present state; information to which we don’t have access. (Such as, “does this subset of the universe evolve into the Large Hadron Collider?”) That’s not how the physics we know and love actually works, but the setup doesn’t actually rely on propagation of signals backwards in time.
At the end of the day: this theory is crazy. There’s no real reason to believe in an imaginary component to the action with dramatic apparently-nonlocal effects, and even if there were, the specific choice of action contemplated by NN seems rather contrived. But I’m happy to argue that it’s the good kind of crazy. The authors start with a speculative but well-defined idea, and carry it through to its logical conclusions. That’s what scientists are supposed to do. I think that the Bayesian prior probability on their model being right is less than one in a million, so I’m not going to take its predictions very seriously. But the process by which they work those predictions out has been perfectly scientific.
There is another reasonable question, which is whether an essay (not a news story, note) like this in a major media outlet contributes to the erosion of trust in scientists on the part of the general public. I would love to see actual data one way or the other, which went beyond “remarkably, the view of the common man aligns precisely with the view I myself hold.” My own anecdotal observations are pretty unambiguous — the public loves far-out speculations like this, and happily eats them up. (See previous mocking quote, now applied to myself.) It’s always important to distinguish as clearly as possible between what is crazy-sounding but well-established as true — quantum mechanics, relativity, natural selection — and what is crazy-sounding and speculative, even if it’s respectable speculation — inflation, string theory, exobiology. But if that distinction is made, I’ve always found it pretty paternalistic and condescending to claim that we should shield the public from speculative science until it’s been established one way or the other. The public are grown-ups, and we should assume the best of them rather than the worst. There’s nothing wrong with letting them in on the debates about crazy-sounding ideas that we professional scientists enjoy as our stock in trade.
The disappointing thing about the responses to the article is how non-intellectual they have been. I haven’t heard “the NN argument against contributions to the imaginary action that are homogeneous in field types is specious,” or even “I see no reason whatsoever to contemplate imaginary actions, so I’m going to ignore this” (which would be a perfectly defensible stance). It’s been more like “this is completely counter to my everyday experience, therefore it must be crackpot!” That’s not a very sciencey attitude. It certainly would have been incompatible with all sorts of important breakthroughs in physics through the years. The Nielsen/Ninomiya scenario isn’t going to be one of those breakthroughs, I feel pretty sure. But it’s sensible enough that it merits disagreement on the basis of rational arguments, not just rolling of eyes.
I find the discussion of this hoax/quackery much better at Peter Woit’s blog “Not Even Wrong”.
There you will get a more balanced and objective mix of opinions about whether we are witnessing an elaborate hoax or theoretical physics “gone wild”.
Either way it is a sad commentary on what passes for theoretical physics these days when the jokes cannot be distinguished from the “serious” theories.
Backward causation? “Miraculosity” as a variable! Drawing cards to decide whether or when to start the LHC.
Good grief Toto, we gotta find our way back to reality!
Yours in science,
RLO
http://www.amherst.edu/~rloldershaw
Pingback: Embarrassing Crackpottery « Not Even Wrong
By “counter to my everyday experience”, I assume you meant “incompatible with common-sense metaphysical assumptions” rather than “incompatible with observations I make every day”. The latter is a good reason to reject a theory; the former is not.
Well it seems really quite simple. I can boil down the above 20 or so paragraphs into one sentence. Only the universes where the hadron collider did not work are having this discussion.
Anyhow, since we’re stuck in the non-functional collider universe, perhaps an imaginary run of the Hadron Collider would be best at yielding a “real” hadron.
Some expect the results from the LHC to explain the Fermi paradox.
Wonderfully explained – thank you! I saw the NYT essay and wasn’t quite sure what to make of it. Your post clears that right up. This is why I love physics.
My own anecdotal observations are pretty unambiguous — the public loves far-out speculations like this, and happily eats them up.
You say that like it’s a good thing. I’m not denying that this is true of much of the public. The question is whether appealing to it is anything more than pandering.
PS: Oh, and is Laplacian determinism still relevant in a quantum mechanical universe? Laplace wasn’t talking about deterministic evolution of wave functions (as problematic as that notion might be in itself), he was talking about particle trajectories, governed by Newtonian mechanics—a distinction that probably shouldn’t be glossed over. 🙂
The bigger story on the “Higgs” over the past couple weeks is that the American Physical Society (APS) awarded the 2010 J. J. Sakurai Prize for Theoretical Particle Physics for this work to the following:
Drs. C. Richard Hagen, University of Rochester; G.S. Guralnik, Brown University; T.W.B. Kibble, Imperial College London; R. Brout, Université Libre de Bruxelles; F. Englert, Université Libre de Bruxelles; and P.W. Higgs, University of Edinburgh, Emeritus.
http://www.aps.org/units/dpf/awards/sakurai.cfm
The J. J. Sakurai Prize for Theoretical Particle Physics was established to recognize and encourage outstanding achievement in particle theory. The 2010 prize was awarded “For elucidation of the properties of spontaneous symmetry breaking in four-dimensional relativistic gauge theory and of the mechanism for the consistent generation of vector boson masses.”
The mechanism is the key element of the electroweak theory that forms part of the standard model of particle physics, and of many models, such as the Grand Unified Theory, that go beyond it. The papers that introduce this mechanism were published in Physical Review Letters in 1964 and were each recognized as milestone papers by PRL’s 50th anniversary celebration.
Presently, Fermilab’s Tevatron and the Large Hadron Collider at CERN are searching for a particle that will constitute evidence for this significant discovery. This particle is often referred to as the “God Particle”.
But, Sean, cosmic ray collisions have produced zillions of times more Higgs bosons during our past light cone than the LHC ever will (where “zillion” is a very large number). Shouldn’t some mechanism stop this from happening? Even a small decrease in the cosmic ray flux would cause many fewer Higgs creation events. Ah… maybe that explains the knee in the cosmic ray flux!!
Marc, how do you know they’ve been produced? It’s only very likely, given the conventional rules of quantum mechanics — which these aren’t.
Actually, that would seemingly be a much easier way out than preventing the LHC from being built. It is built, and the Higgs has an easily discoverable mass, but we just get “unlucky” in every event, and end up in the branch of the wave function where no Higgs was produced. So we never see it.
I’m very willing to believe that if someone took this theory seriously, they could probably come up with some already-existing piece of information that rules it out. But I’m not 100% sure.
Of course, by that argument, the Higgs mass could easily be only 10 GeV, but we were just very “unlucky” in every event at LEP. This would make it impossible to rule out anything experimentally—in fact, squarks and sleptons could be very light as well (they are scalars, after all) and we’ve just been unlucky. A lot of old papers have just been resuscitated!! Is this science?
Pingback: Nielsen-Ninomiya and the arXiv « Not Even Wrong
Have you ever considered the possibility that the “Higgs particle” has no basis in reality?
Is it perhaps like “magnetic monopole particles”, where we spend many millions of dollars chasing a theoretical mirage?
The “God Particle”? More likely the “Unicorn Particle”.
Perhaps nature does not “abhor the Higgs”; it’s just that nature hasn’t the slightest idea what you are talking about.
RLO
http://www.amherst.edu/~rloldershaw
Side comment on the path integral formulation of quantum mechanics. You explain that we “add up the contributions from every possible history, but give larger weight to those with smaller actions,” but that summary misses out on what I see as the real elegance of the idea. The wondrous thing to me is that we give perfectly equal weight to every possible history: the weighting factor just uses the action as a complex phase, so histories whose actions differ tend to cancel out. Nature’s apparent preference for histories of least action is thus no more than the familiar fact that functions change slowest near a minimum.
As for the NN work, my concern from the start has been less about their physics and more about their extrapolation to practical conclusions in the real world. Even if they’re right that the LHC will fail to produce the Higgs for this reason, I don’t for one minute believe that their theory can actually predict that a one in a million card draw would be the universe’s preferred way to make that happen (as opposed to “bad luck” preventing each potential production event, or another magnet failure, or for that matter global war or economic collapse). The fact that they were willing to make such claims in their paper didn’t leave me with a lot of confidence in their results.
It’s been more like “this is completely counter to my everyday experience, therefore it must be crackpot!” That’s not a very sciencey attitude. It certainly would have been incompatible with all sorts of important breakthroughs in physics through the years. The Nielsen/Ninomiya scenario isn’t going to be one of those breakthroughs, I feel pretty sure. But it’s sensible enough that it merits disagreement on the basis of rational arguments, not just rolling of eyes.
Nonsense. Crazy claims require strong evidence. Musing about what it means to have imaginary terms in the action is a fine thing to do (cf. Witten’s new work on analytic continuations of Chern-Simons theory). But suggesting these terms exist in the real world is just nutty — it obviously violates unitarity, and if you propose that unitarity is violated you need to do some very hard work of showing that you can violate it in a way that doesn’t have dramatic consequences. Saying the terms only exist in the Higgs sector is not convincing; the Higgs couples to other stuff, and you need a strong argument that the modification is only important for physical, on-shell Higgses, and doesn’t affect the well-measured W and Z properties (or macroscopic unitarity). Other attempts to modify quantum mechanics have famously had dramatic macroscopic problems if they violate unitarity. Why should this one be any different? The burden of proof is on Nielsen and Ninomiya to show they have a consistent framework, and it’s clear from a glance that their papers are far too sketchy to do that. This is orders of magnitude less serious than Hawking’s old attempts to modify QM. Eye-rolling is a perfectly appropriate response.
(For another example, notice that Lee-Wick theories have been taken somewhat more seriously, as I’m sure you know. These also involve radical violations of cherished principles, but they’ve been studied more cautiously, and mostly as interesting curiosities, by sane people. They’re still a little crazy, and I suspect they also have macroscopic problems that have gone unidentified, but I can’t pinpoint the problems and I don’t think anyone is crazy to work on these theories. Sidney Coleman wrote about them and didn’t find any obvious macroscopic problem, so they clearly hold up to some scrutiny. I doubt the same would be true of Nielsen and Ninomiya’s idea, if anyone with even a fraction of Coleman’s insight were to bother to spend the time thinking about them.)
Of course we shouldn’t shield speculative science from the general public and we should actively encourage them to engage in the discourse! Perhaps we could get some sort of grandfatherly figure (let’s call this person the Great Communicator) to propose a wildly speculative idea about lasers in space. We could even give the idea a catchy name like Star Wars. The public would love that! They’d probably spend hundreds of billions of dollars on the idea and only be mildly disappointed when it didn’t work. With something as simple as Newtonian gravity I expect the best from people. They know without a doubt you can distinguish decoys from warheads and shoot them down with 100% accuracy. It’s terribly unfortunate, however, that they don’t know how to minimize an action.
This reminded me of the quantum resolution of Greg Benford’s sci-fi novel Timescape (http://en.wikipedia.org/wiki/Timescape), where tachyons are dispatched to warn the past of JFK’s assassination. Message received, history splits following a many-worlds argument in the novel, into a pleasant non-assassination future and a dour one resembling our own reality.
So maybe if the LHC works, all that means is we’re just on the wrong world .
I get the feeling that this is going to devolve into one of those “physicists are just making stuff up nowadays” arguments (“The arXiv has low standards, so even though the people who take two minutes to think about it know that this idea is crazy, ZOMG FIZIKS IS DOOOOMMMMED!”). As sometimes happens, we seem to be arguing not about what is true, but rather what is interesting. Sean Carroll says “this theory is crazy”, but that it’s got enough meat on it to be worth chewing on until we can say exactly why; maybe it’d make a good homework problem in a field-theory textbook. onymous, in comment #16, says that the craziness is obvious enough at first glance that “eye-rolling is a perfectly appropriate response”, so that it wouldn’t even merit inclusion in a problem set for second-year grad students.
That’s the Nielsen-Ninomiya-Sturgeon law: $latex 0.9 \pm 0.001i$ of all blog posts are crud.
My own suspicion (and it is little more than that, a dark worry in the insomniac hours of the night) is that we’ve done such a poor job of educating people on the fundamentals that this is basically impossible. Without a certain level of background knowledge, which we should and probably could impart, the audience can’t set what they read in the right context. What good is a blog post like this one to people who don’t know what an “imaginary number” is? How can somebody grok special relativity when they don’t even leave high school knowing the difference between velocity and acceleration? Etc.
Blake Stacey wrote:
You’re putting words in my mouth — but probably I was unclear. I actually do think it would be nice if someone would take the time to look at the papers, think about how we can definitively know they must be wrong, and explain it clearly. But this is still compatible with thinking that eye-rolling is an appropriate response, and that for most of us, taking the time to do more is a bad idea. It’s a tragedy of the commons — err, a farce of the commons? We’d all be better off if someone would take the time to carefully debunk it, but no one person is better off taking that time when they could be working on something publishable or reading something that’s more likely to be correct. (This is where the attentive reader will ask whether it’s worthwhile to read blogs and comment on them. But hey, everyone has their hobbies.)
Sorry — I should have been more explicit about what I interpreted you as saying versus what I figured that implied. My error.
Good point. I think this happens fairly often in these kinds of circumstances.
I’m actually working on a research project which might be publishable in the not-so-distant future, and which started because somebody on a blog comment thread pointed me to an interesting paper. . . so it can’t all be wasted time, right?
It’s also clear from their paper that “remarkable good luck” has inverse dimensions of mass, thus making the theory non-renormalizable. I’m sure it works perfectly fine as an effective field theory.
” If we knew the precise state of the universe, and the exact laws of physics, the future would be as utterly determined as the present”
How can we ever know? Given that quantum mechanics is well-established, the precise state of the universe is inaccessible.
Love the controversy this has stirred up! Sean is right about one thing: Nielsen is no crackpot, and is a serious high-energy theorist who does not put forth this idea lightly. Hawking & Hertog recently broke ground on a new paradigm in cosmology, in which unknowable initial conditions in the early universe negate attempts at any canonical, causal prediction of the current state of the universe; instead, in a `Top-Down’ scenario, path integrals formulated in the current epoch are propagated backward to obtain the initial conditions.
Steuard, so very clear as usual!
SciFi has so often, over the last century, been the harbinger of what would become the scientific realities of tomorrow. Kudos to Dan for drawing attention to UCI astrophysicist Benford’s mind-blowing novel `Timescape’ about advanced phenomena warning us about environmental disasters awaiting. U. Washington theorist John Cramer (T-I of QM) is perhaps more prescient than N&N, with his novel `Einstein’s Bridge’ in which a `LHC’ inadvertently announces earth’s presence via HEP expts to some nasty, god-like denizens of an adjoining universe.
The LHC will certainly not destroy the earth, but are we so arrogant as to desperately cling to causality here? Should not everything be on the table, considering the profundity of what will begin this Nov.? If Higgsy is not found, U-know-what will hit the proverbial fan in HEP.
Doesn’t this violate the cluster decomposition principle?