Spooky Signals from the Future Telling Us to Cancel the LHC!

A recent essay in the New York Times by Dennis Overbye has managed to attract quite a bit of attention around the internets — most of it not very positive. It concerns a recent paper by Holger Nielsen and Masao Ninomiya (and some earlier work) discussing a seemingly crazy-sounding proposal — that we should randomly choose a card from a million-card deck and, on the basis of which card we get, decide whether to go forward with the Large Hadron Collider. Responses have ranged from eye-rolling and heavy sighs to cries of outrage, clutching at pearls, and grim warnings that the postmodernists have finally infiltrated the scientific/journalistic establishment, this could be the straw that breaks the back of the Enlightenment camel, and worse.

Since I am quoted (in a rather non-committal way) in the essay, it’s my responsibility to dig into the papers and report back. And my message is: relax! Western civilization will survive. The theory is undeniably crazy — but not crackpot, which is a distinction worth drawing. And an occasional fun essay about speculative science in the Times is not going to send us back to the Dark Ages, or even rank among the top ten thousand dangers along those lines.

The standard Newtonian way of thinking about the laws of physics is in terms of an initial-value problem. You specify the state of the system (positions and velocities) at one moment, then the laws of physics tell you how it will evolve into the future. But there is a completely equivalent alternative, which casts the laws of physics in terms of an action principle. In this formulation, we assign a number — the action — to every possible history of the system throughout time. (The choice of what action to assign is simply the choice of what laws of physics are operative.) Then the allowed histories, the ones that “obey the laws of physics,” are those for which the action is the smallest. That’s the “principle of least action,” and it’s a standard undergraduate exercise to show that it’s utterly equivalent to the initial-value formulation of dynamics.
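
For concreteness, here is the textbook single-particle version of that statement; nothing in it is specific to the NN papers, it is just the standard classical-mechanics setup.

```latex
S[x] \;=\; \int_{t_1}^{t_2} \Big[ \tfrac{1}{2}\, m \dot{x}^2 - V(x) \Big] \, dt ,
\qquad
\delta S = 0 \;\Longrightarrow\; m\ddot{x} = -\frac{dV}{dx} .
```

Demanding that the action be stationary under small wiggles of the trajectory reproduces Newton’s second law for a particle moving in the potential V, which is the sense in which the two formulations are equivalent.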

In quantum mechanics, as you may have heard, things change a tiny bit. Instead of only allowing histories that minimize the action, quantum mechanics (as reformulated by Feynman) tells us to add up the contributions from every possible history, but give larger weight to those with smaller actions. In effect, we blur out the allowed trajectories around the one with absolutely smallest action.
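
In symbols (again, just the standard Feynman construction, nothing special to this paper), the amplitude to go from an initial configuration to a final one is a sum over every history connecting them, each weighted by a phase built from its action:

```latex
\langle x_f, t_f \,|\, x_i, t_i \rangle \;=\; \int \mathcal{D}x(t)\; e^{\, i S[x]/\hbar} .
```

Histories near the stationary-action one add up coherently, while histories far from it largely cancel against their neighbors, which is how the classical trajectory comes to dominate for macroscopic systems.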

Nielsen and Ninomiya (NN) pull an absolutely speculative idea out of their hats: they ask us to consider what would happen if the action were a complex number, rather than just a real number. Then there would be an imaginary part of the action, in addition to the real part. (This is the square-root-of-minus-one sense of “imaginary,” not the LSD-hallucination sense of “imaginary.”) No real justification — or if there is, it’s sufficiently lost in the mists that I can’t discern it from the recent papers. That’s okay; it’s just the traditional hypothesis-testing that has served science well for a few centuries now. Propose an idea, see where it leads, toss it out if it conflicts with the data, build on it if it seems promising. We don’t know all the laws of physics, so there’s no reason to stand pat.

NN argue that the effect of the imaginary action is to highly suppress the probabilities associated with certain trajectories, even if those trajectories minimize the real action. But it does so in a way that appears nonlocal in spacetime — it’s really the entire trajectory through time that seems to matter, not just what is happening in our local neighborhood. That’s a crucial difference between their version of quantum mechanics and the conventional formulation. But it’s not completely bizarre or unprecedented. Plenty of hints we have about quantum gravity indicate that it really is nonlocal. More prosaically, in everyday statistical mechanics we don’t assign equal weight to every possible trajectory consistent with our current knowledge of the universe; by hypothesis, we only allow those trajectories that have a low entropy in the past. (As readers of this blog should well know by now; and if you don’t, I have a book you should definitely read.)
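
To see where the suppression comes from, split the action into real and imaginary parts, S = S_R + i S_I. This step is generic to any complex action, not the particular NN choice; the weight of each history in the Feynman sum then picks up a real, exponentially damping factor:

```latex
e^{\, i S/\hbar} \;=\; e^{\, i (S_R + i S_I)/\hbar} \;=\; e^{\, i S_R/\hbar} \, e^{- S_I/\hbar} ,
```

so a history with a large positive imaginary action carries an exponentially tiny weight, no matter how happily it satisfies the ordinary (real-action) equations of motion.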

To make progress with this idea, you have to make a choice for what the imaginary part of the action is supposed to be. Here, in the eyes of this not-quite-expert, NN seem to cheat a little bit. They basically want the imaginary action to look very similar to the real action, but it turns out that this choice is naively ruled out. So they jump through some hoops until they get a more palatable choice of model, with the property that it is basically impotent except where the Higgs boson is concerned. (The Higgs, as a fundamental scalar, interacts differently than other particles, so this isn’t completely ad hoc — just a little bit.) Because they are not actually crackpots, they even admit what they’re doing — in their own words, “Our model with an imaginary part of the action begins with a series of not completely convincing, but still suggestive, assumptions.”

Having invoked the tooth fairy twice — contemplating an imaginary part of the action, then choosing its form so as to only be relevant where the Higgs is concerned — they consider consequences. Remember that the effect of the imaginary action is non-local in time — it depends on what happens throughout the history of the universe, not just here and now. In particular, given their assumptions, it provides a large suppression to any history in which large numbers of Higgs bosons are produced, even if they won’t be produced until some time in the future.

So this model makes a strong prediction: we’re not going to be producing any Higgs bosons. Not because the ordinary dynamical equations of physics prevent it (e.g., because the Higgs is just too massive), but because the specific trajectory on which the universe finds itself is one in which no Higgses are made.

That, of course, runs into the problem that we have every intention of making Higgs bosons, for example at the LHC. Aha, say NN, but notice that we haven’t yet! The Superconducting Supercollider, which could have found the Higgs long ago, was canceled by Congress. And in their December 2007 paper — before the LHC tried to turn on — they very explicitly say that a “natural” accident will come along and break the LHC if we try to turn it on. Well, we know how that turned out.

But NN have an ingenious suggestion for saving us from future accidents at the LHC — which, as they warn, could endanger lives. They propose a card game with more than a million cards, almost all of which say “go ahead, no problem.” But one card says “don’t turn on the LHC!” In their model, the nonlocal effect of the imaginary part of the action is to ensure that the realized history of the universe is one in which the LHC never turns on; but it doesn’t matter why it doesn’t turn on. If we randomly pick one out of a million cards, and honestly promise to follow through on the instructions on the card we pick, and we happen to pick the card that says not to turn it on, and we therefore don’t — that’s a history of the universe that is completely unsuppressed by their mechanism. And if we choose a card that says “go ahead,” well then their theory is falsified. (Unless we try to go ahead and are continually foiled by a series of unfortunate accidents.) Best of all, playing the card game costs almost nothing. But for it to work, we have to be very sincere that we won’t turn on the LHC if that’s what the card says. It’s only a million-to-one chance, after all.
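
As a toy numerical sketch of that logic (with made-up numbers, nothing taken from the NN papers), one can assign prior probabilities to the coarse-grained futures, apply a hypothetical suppression factor to any future in which Higgs bosons get produced, and see what dominates after renormalizing:

```python
# Toy sketch of the NN card-game logic. Every number here is invented for illustration.

priors = {
    "stop card drawn, LHC never switched on":      1e-6,               # one card in a million
    "go card drawn, freak accident stops the LHC": 1e-3 * (1 - 1e-6),  # guessed accident rate
    "go card drawn, LHC runs and makes Higgses":   (1 - 1e-3) * (1 - 1e-6),
}

# Hypothetical NN-style penalty applied to any history containing Higgs production.
SUPPRESSION = 1e-12

weights = {
    outcome: p * (SUPPRESSION if "makes Higgses" in outcome else 1.0)
    for outcome, p in priors.items()
}

total = sum(weights.values())
for outcome, w in weights.items():
    print(f"{outcome:45s} -> posterior ~ {w / total:.4%}")
```

With these particular guesses the freak-accident future still swamps the stop-card future, which is essentially the point the first commenter makes below: the card draw only carries real weight if the chance of picking the stop card is at least comparable to the chance of the LHC being stopped for mundane reasons.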

Note that all of this “nonlocal in time,” “receiving signals sent from the future” stuff is a bit of a red herring, at least at the classical level. We often think that the past is set in stone, while the future is still to be determined. But that’s not how the laws of physics operate. If we knew the precise state of the universe, and the exact laws of physics, the future would be as utterly determined as the present (Laplace’s Demon). We only think otherwise because our knowledge of the present state is highly imperfect, consisting as it does of a few pieces of information about the coarse-grained state. (We don’t know the position and velocity of every particle in the universe, or for that matter in any macroscopic object.) So there’s no need to think of NN’s imaginary action as making reference to what happens in the future — all the necessary data are in the present state. What seems weird to us is that the NN mechanism makes crucial use of detailed, non-macroscopic information about the present state; information to which we don’t have access. (Such as, “does this subset of the universe evolve into the Large Hadron Collider?”) That’s not how the physics we know and love actually works, but the setup doesn’t actually rely on propagation of signals backwards in time.

At the end of the day: this theory is crazy. There’s no real reason to believe in an imaginary component to the action with dramatic apparently-nonlocal effects, and even if there were, the specific choice of action contemplated by NN seems rather contrived. But I’m happy to argue that it’s the good kind of crazy. The authors start with a speculative but well-defined idea, and carry it through to its logical conclusions. That’s what scientists are supposed to do. I think that the Bayesian prior probability on their model being right is less than one in a million, so I’m not going to take its predictions very seriously. But the process by which they work those predictions out has been perfectly scientific.

There is another reasonable question, which is whether an essay (not a news story, note) like this in a major media outlet contributes to the erosion of trust in scientists on the part of the general public. I would love to see actual data one way or the other, which went beyond “remarkably, the view of the common man aligns precisely with the view I myself hold.” My own anecdotal observations are pretty unambiguous — the public loves far-out speculations like this, and happily eats them up. (See previous mocking quote, now applied to myself.) It’s always important to distinguish as clearly as possible between what is crazy-sounding but well-established as true — quantum mechanics, relativity, natural selection — and what is crazy-sounding and speculative, even if it’s respectable speculation — inflation, string theory, exobiology. But if that distinction is made, I’ve always found it pretty paternalistic and condescending to claim that we should shield the public from speculative science until it’s been established one way or the other. The public are grown-ups, and we should assume the best of them rather than the worst. There’s nothing wrong with letting them in on the debates about crazy-sounding ideas that we professional scientists enjoy as our stock in trade.

The disappointing thing about the responses to the article is how non-intellectual they have been. I haven’t heard “the NN argument against contributions to the imaginary action that are homogeneous in field types is specious,” or even “I see no reason whatsoever to contemplate imaginary actions, so I’m going to ignore this” (which would be a perfectly defensible stance). It’s been more like “this is completely counter to my everyday experience, therefore it must be crackpot!” That’s not a very sciencey attitude. It certainly would have been incompatible with all sorts of important breakthroughs in physics through the years. The Nielsen/Ninomiya scenario isn’t going to be one of those breakthroughs, I feel pretty sure. But it’s sensible enough that it merits disagreement on the basis of rational arguments, not just rolling of eyes.


119 thoughts on “Spooky Signals from the Future Telling Us to Cancel the LHC!”

  1. I think they would need to have far fewer than a million cards. The chance of the LHC never being built for some other reason is surely much greater than 1 in a million, so the card effect will never dominate unless the chance of picking the no-LHC card is much higher.

    In fact, for the card thing to have any significance, the chance of picking the card has to be at least the same order of magnitude as the risk of the LHC not being built for other reasons. And since that risk is presumably not negligible, neither could be the risk of the LHC not being built because of the card experiment. So I doubt anyone involved with the LHC would ever agree to it. 🙂

  2. Greetings wes parsons,

    You wrote: ” this should be easy to understand if you are remotely educated.
    the real question here is, “should we gamble?” ”

    1. What if one was locally educated?

    2. Is not the real question: whether or not the NN papers, and all too much else in what currently passes for theoretical physics, bear any useful connection to the real world of nature?

    Feel free to gamble away to your heart’s content, but bear in mind that it is a strictly causal affair and that randomness is an illusion due to the limitations of our observational powers.

    Yours in the new paradigm,
    RLO
    http://www.amherst.edu/~rloldershaw

  3. Dhananjay Vaidya

    I am in agreement with the blogger on this statement (my later criticism should not be construed as a disagreement with this):
    > I’ve always found it pretty paternalistic and condescending to
    > claim that we should shield the public from speculative science
    > until it’s been established one way or the other. The public are
    > grown-ups, and we should assume the best of them rather than the worst.

    Now for my disagreement.

    What is the difference between crazy and crackpot?

    I could easily think up two tooth fairies and, beyond that, do fully defensible math to come up with some conclusions.

    You also state that the Bayesian prior for the N-N theory being correct is less than one in a million. You say:
    > I think that the Bayesian prior probability on their model
    > being right is less than one in a million, so I’m not going to
    > take its predictions very seriously.
    Where did you come up with this number? I hope you did not come up with it based on your “everyday experience”. That would not be very “sciencey”, would it?

    “A theory so unlikely of being right, that I am not going to take its predictions seriously” sounds like the definition of “crackpot”.

    You claim that many interesting ideas began as such.

    Let me now suggest two tooth fairies:
    Tooth fairy A: The values of the so-called “quantum mechanical constants” vary with time since the big bang. So I add a function of time to all of the quantum mechanical equations: say {h+f(t)} everywhere.
    I take it that this is a well-defined speculation?
    Tooth fairy B: I claim that f(t) is highly nonlinear, holding a constant value (zero, without any loss of generality) until it starts changing ABOUT NOW.
    I take it this is a fairly well-defined speculation?

    Clearly, as f(t)->0, all quantum mechanical calculations tend to those that are currently taught to students. So my theory does not contradict any EXPERIMENT that confirms the accuracy of quantum mechanics so far. Though it does contradict many philosophical constructs, no matter. After all, the blogger suggests this is what science is all about.

    Let us give me the benefit of the doubt of being able to do the mathematical operations and calculations correctly; of course I will show that something different will be seen when f(t) is not nearly equal to zero.

    Is my theory above a crazy theory or a crackpot theory?

    Give me some time, and I can come up with a more elaborate example with a parallel statement to N-N’s “Oh look, we haven’t been able to turn on the LHC yet – that provides the preliminary motivation for our theory”. Perhaps I can make my f(t) into f(t+latitude) instead and then say that the failure of the LHC to turn on is preliminary motivation for my theory! Just gimme time…

    This creation of a “well-defined” theory with magic nonlinearities that turn up exactly at the Higgs boson energy levels, and supposedly have something to do with Congressional failure to fund a project in the US – this is what science does?

    When a speculative string theorist says “string theory explains some macroscopic society-level phenomenon, just let me introduce a tooth fairy into my otherwise correct calculations” I would hope that “sciencey” scientists would not say this is the same sort of crazy as the speculative string theory itself. They would say it is the same sort of crazy as the N-N theory.

    The distinction between string-theory-crazy and N-N-theory crazy is at least as important to science as this supposed difference between crazy and crackpot.

  4. Pingback: Speculative Science and Speculative Philosophy « Hyper tiling

  5. What about imaginary, imaginary numbers? So complex numbers that are not just real and imaginary, but are real with imaginary real components, and imaginary, imaginary components. The latter part here would represent the number that would be assigned to the probabilities for what is predicted as new, or a parameter to include events for new prior probabilities and not just what is already used. One example of this is, just after the part where the cards are dealt and one chosen – (the bit where commenters on the net and scientists start using current theory) to the time at which the decision is made to turn the LHC on, on the basis of the best ‘path’ forward after betting that the LHC will predict that the Higgs will in itself be discovered compared to the basis that it won’t.

    Summing – methods used for current experiment and calculations are an actual number itself; ones that can be predicted (before LHC is on) are not a number yet (that could be a sub-part of the complex number).

    Claire

  6. It’s telling that anti-string-theory sentiment has spread so widely that people can’t help but blame the Nielsen and Ninomiya paper on “string theory” or “quantum gravity,” even though neither the papers nor the authors have any connection to those subjects.

  7. I think that many people are shitting on what they don’t understand.

    Let’s look at it from this perspective:

    they’re saying “based on theories we’d like to test as well as observations, we think the idea that (the universe abhors the creation of new higgs-bosons) might be comparable to (nature abhors a vacuum)… and a novel way of that manifesting is that the instant a future potential opened up for the creation of a new higgs-boson, all of the future potentials would sabotage that occurrence unless prevented much in the same way we use containers to create vacuum chambers” and that “based on the reversibility of everything in quantum physics, it’s possible for causal effects to take place in the future without necessarily transmitting any quantum information”

    at least that’s how I understand it.

  8. Pingback: Sunday Science Sermon : Rocket Party

  9. Cerenkov radiation is a result? Cosmic ray spallation is already understood, by understanding “the location of the interaction.”

  10. In the vanishingly unlikely event this “Higgs aversion” effect is real, it must be quite weak and localized or else the LHC would never have been so nearly completed.

    Also, if physics and technology continue to advance at anything like their current rate in the future, machines like the LHC, and far more powerful, will become two a penny, so the effect would need to become more obvious and pronounced. Or would it then manifest itself by a collapse of civilization, to ensure none of these machines was ever used?!

  11. Mark,

    I hate to break it to you, but nothing in the real world is “reversible”.

    The only things that are “reversible” are abstract and oversimplified Platonic idealizations that masquerade as acceptable theoretical physics today [but have a very limited shelf life from here on out].

    Welcome to the real world of nonlinear dynamical systems, i.e., nature.

    The existing crop of theoretical physicists will likely be the last to realize/admit their many errors. The future rests with young educated people who have no vested interest in the bankruptcy of string theory, anthropic reasoning, the veritable zoo of imaginary “particles”, random “multiverses”, thoroughly untestable Ptolemaic models, the bogus conventional Planck scale [anyone with two eyes can see that G scales in a discrete self-similar manner], etc.

    Time to get back to real testable science, and forge a new understanding of nature.

    Yours in the new paradigm
    RLO
    http://www.amherst.edu/~rloldershaw

  12. Determining the fate of a multibillion dollar physics experiment by drawing from a deck of cards?

    Wow, that sounds so “sciencey”.

  13. I was interested in this comment: “Note that all of this “nonlocal in time,” “receiving signals sent from the future” stuff is a bit of a red herring, at least at the classical level. We often think that the past is set in stone, while the future is still to be determined. But that’s not how the laws of physics operate. If we knew the precise state of the universe, and the exact laws of physics, the future would be as utterly determined as the present (Laplace’s Demon).”

    I thought this type of determinism was ruled out by the Uncertainty Principle. Or does the phrase “If we knew… the exact laws of physics” refer to a different set of laws than quantum mechanics? I guess that would leave quantum mechanics (or some future adjustment thereof) as the most exact laws of physics we are (in principle) capable of knowing, but not the actual exact laws of physics according to which the actual world works, leaving the actual world still fully determined. Can someone help me out here?

  14. “I’m a Platonist — a follower of Plato — who believes that one didn’t invent these sorts of things, that one discovers them. In a sense, all these mathematical facts are right there waiting to be discovered.” – Harold Scott Macdonald (H. S. M.) Coxeter

    It’s not easy to think like a Platonist, yet many do when they defer to shadows cast holographically on the wall. But mostly, it’s a Greek way of facing the past, with their back to the future. 🙂

  15. Sean said:
    “It’s telling that anti-string-theory sentiment has spread so widely that people can’t help but blame the Nielsen and Ninomiya paper on “string theory” or “quantum gravity,” even though neither the papers nor the authors have any connection to those subjects.”

    I don’t think that “anti-string-theory” sentiment is really that widespread; it’s confined mostly to cranks with “new paradigms” and people with axes to grind and/or books to sell. anti-string-theor*ist* sentiment is another matter altogether, of course….

  16. Hi Sean,

    I was wondering: doesn’t this backward-causation theory run afoul of the prohibition on faster-than-light signaling?

    I.e., I can play a game where I say, “Unless Alice on the moon sends me the bit string 0110 right now, I’ll spend $30 billion on trying to find the Higgs boson,” and Alice sends at that moment the result of a measurement on a 4-qubit system (or even just 4 random bits). Then I wait for Alice’s signal to get to me, and it should be 0110. So in effect, I’ve signalled 0110 faster than light to Alice.

    Best, Amir

  17. 1. Dark Matter is the dominant form of matter in the Universe.

    2. Any cosmological paradigm that claims to have anything useful to say about the Universe must definitively predict the nature of the Dark Matter.

    3. A definitive prediction is: (a) unique to the paradigm/model being tested, (b) at least partly quantitative, (c) NON-ADJUSTABLE [i.e., falsifiable], and (d) capable of being tested in the foreseeable future.

    Can the “Standard Models” of either high energy physics or cosmology definitively predict the nature of the Dark Matter? Not even close! There is just a lot of arm-waving about “WIMPS” with an ‘anything-you-want’ spectrum of properties, or shadow matter, or extra dimensions, or discarded copies of the ApJ.

    Would you like to see a paradigm that CAN predict exactly what the Dark Matter is [in a 1987 ApJ paper], specify exactly its quantitative mass spectrum, and offer significant evidence that supports the prediction?

    Go to http://www.amherst.edu/~rloldershaw and click on “Selected Papers”, then click on #5, “Mass Estimates For Galactic Dark Matter Objects…” [pub. in Fractals, 2002].

    Those who claim to have an adequate understanding of nature should show some integrity and make specific testable predictions about the Dark Matter, or admit their much-hyped hermeneutics are not of much use when it comes to making definitive predictions about the real world, i.e., nature. To put it bluntly: Put up or shut up.

    Or would definitive predictions jeopardize your precious funding?

    Adios Junk-Bond Days,
    RLO
    http://www.amherst.edu/~rloldershaw

  18. Sean,

    I’ve been wondering about this initial-value idea in our non-Newtonian universe.
    Since much in our universe appears to be stochastic in nature, does it really hold true?
    Does knowing one “state” of the universe mean the laws of physics can predict all future states? Or are there many possible future states for a given state?

    Or is this stochastic nature simply an illusion, because we don’t know the exact state and/or the complete laws of physics?

    Thanks

    Jay

  19. Pingback: LHC, ¿TERMINATOR ataca? « GRAZNIDOS Weblog

  20. Michael E. Stora

    Just because it is crackpottery does not mean it is not useful crackpottery. Just like quantum suicide and half-living cats, it makes us think and we learn from the attempts to prove or disprove.

    Sure the press does not get it, but why are scientists being so critical (and I don’t mean critical in the usual good way)?

  21. Pingback: That Never Happened « The Amateur’s Guide To Life


