String Theory is Losing the Public Debate

I have a long-percolating post that I hope to finish soon (when everything else is finished!) on “Why String Theory Must Be Right.” Not because it actually must be right, of course; it’s a hypothesis that will ultimately have to be tested against data. But there are very good reasons to think that something like string theory is going to be part of the ultimate understanding of quantum gravity, and it would be nice if more people knew what those reasons were.

Of course, it would be even nicer if those reasons were explained (to interested non-physicists as well as other physicists who are not specialists) by string theorists themselves. Unfortunately, they’re not. Most string theorists (not all, obviously; there are laudable exceptions) don’t seem to think it worth their time to explain why this theory, with no empirical support whatsoever, is nevertheless so promising. (Which it is.) Meanwhile, people who think that string theory has hit a dead end and should admit defeat (a tiny minority of those who are well-informed about the subject) are getting their message out with devastating effectiveness.

The latest manifestation of this trend is this video dialogue on Bloggingheads.tv, featuring science writers John Horgan and George Johnson. (Via Not Even Wrong.) Horgan is explicitly anti-string theory, while Johnson is more willing to grant that it might be worthwhile, though he admits that he’s not really qualified to pass judgment. But you’ll hear things like “string theory is just not a serious enterprise,” and see it compared to pseudoscience, postmodernism, and theology. (Pick the boogeyman of your choice!)

One of their pieces of evidence for the decline of string theory is a recent public debate between Brian Greene and Lawrence Krauss about the status of string theory. They seemed to take the very existence of such a debate as evidence that string theory isn’t really science any more — as if serious scientific subjects were never to be debated in public. Peter Woit agrees that “things are not looking good for a physical theory when there start being public debates on the subject”; indeed, I’m just about ready to give up on evolution for just that reason.

In their rush to find evidence for the conclusion they want to reach, everyone seems to be ignoring the fact that having public debates is actually a good thing, whatever the state of health of a particular field might be. The existence of a public debate isn’t evidence that a field is in trouble; it’s evidence that there is an unresolved scientific question in which many people are interested, which is wonderful. Science writers, of all people, should understand this. It’s not our job as researchers to hide away from the rest of the world until we’re absolutely sure that we’ve figured it all out, and only then share what we’ve learned; science is a process, and it needn’t be an especially esoteric one. There’s nothing illegitimate or unsavory about allowing the hoi polloi the occasional glimpse at how the sausage is made.

What is illegitimate is when the view thereby provided is highly distorted. I’ve long supported the rights of stringy skeptics to get their arguments out to a wide audience, even if I don’t agree with them myself. The correct response on the part of those of us who appreciate the promise of string theory is to come back with our (vastly superior, of course) counter-arguments. The free market of ideas, I’m sure you’ve heard it all before.

Come on, string theorists! Make some effort to explain to everyone why this set of lofty speculations is as promising as you know it to be. It won’t hurt too much, really.

Update: Just to clarify the background of the above-mentioned debate. The original idea did not come from Brian or Lawrence; it was organized (they’ve told me) by the Smithsonian to generate interest and excitement for the adventure of particle physics, especially in the DC area, and they agreed to participate to help achieve this laudable purpose. The fact, as mentioned on Bloggingheads, that the participants were joking and enjoying themselves is evidence that they are friends who respect each other and understand that they are ultimately on the same side; not evidence that string theory itself is a joke.

It would be a shame if leading scientists were discouraged from participating in such events out of fear that discussing controversies in public gave people the wrong impression about the health of their field.

531 thoughts on “String Theory is Losing the Public Debate”

  1. Chris, that’s more or less the opposite of the truth. In classical mechanics or electrodynamics we have precisely that: theories that parameterize our ignorance. They work in the classical domain, and stop working when quantum mechanics becomes important. And it doesn’t really matter what happens when quantum mechanics does become important; classical theories will still work.

    It’s exactly the same with QFT. Things happen at all scales; we don’t know what happens at the highest energy scales; but we don’t need to, since there exists a distinct set of useful rules that operates at low scales. That’s what renormalization is all about. And it works spectacularly well, as has been demonstrated repeatedly by comparing with actual data.

  2. Here is my capsule take on renormalization:

    In a harmonic oscillator, the energy difference between the ground state and the first excited state is hbar w, where w is the classical frequency of the oscillator, which appears in the potential energy as V(x) = m w^2 x^2/2.

    If we add extra terms to the potential energy (such as g x^4), then the relationship between the energy gap and the classical frequency is altered; the corrections can be expressed as a power series in g.
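
    (For example, to first order in g the gap becomes hbar w + 3 g hbar^2/(m^2 w^2), since <1|x^4|1> - <0|x^4|0> = 12 (hbar/(2 m w))^2.)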

    If we have many coupled oscillators, all contribute to these corrections; the more oscillators there are, the bigger the corrections.

    Now consider quantum field theory. Without infrared and ultraviolet cutoffs, a quantum field theory consists of an infinite number of oscillators, and the corresponding corrections are therefore infinite.

    In field theory, the first excited state consists of a single particle at rest, and so the energy gap (between the first excited state and the ground state) is mc^2, the rest energy of the particle. The parameter that plays the role of the classical frequency is the so-called “bare” mass m_0. Since it is m (and not m_0) that is physically observable, and since m is observed to be finite, and since the corrections that give m in terms of m_0 become infinite in the limit that the cutoffs are removed, we conclude that m_0 must also become infinite in this limit.

    To quote one textbook, “It may be disturbing to have a parameter in the lagrangian that is formally infinite. However, such parameters are not directly measurable, and so need not obey our preconceptions about their magnitudes.”
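
    For the skeptical, the oscillator statement above is easy to check numerically. Here is a minimal sketch (Python with numpy; units hbar = m = w = 1, and the basis size N is an arbitrary choice for illustration) that diagonalizes the anharmonic oscillator in a truncated number basis and compares the gap with the first-order formula:

    ```python
    import numpy as np

    # Sketch: H = p^2/2 + x^2/2 + g*x^4 in a truncated oscillator basis
    # (units hbar = m = w = 1). First-order perturbation theory predicts
    # gap = 1 + 3g, since <1|x^4|1> - <0|x^4|0> = 12*(1/2)^2 = 3.
    N = 200                            # basis size; check convergence in N
    n = np.arange(N)
    a = np.diag(np.sqrt(n[1:]), k=1)   # annihilation operator, number basis
    x = (a + a.T) / np.sqrt(2.0)       # position operator x = (a + a^dag)/sqrt(2)

    g = 0.01
    H = np.diag(n + 0.5) + g * np.linalg.matrix_power(x, 4)

    E = np.linalg.eigvalsh(H)
    print(f"gap = {E[1] - E[0]:.6f}, first order: {1 + 3 * g:.6f}")
    ```

    With g = 0.01 the numerical gap and the first-order estimate should agree up to a small O(g^2) difference; as g grows, the higher corrections in the series matter more and more.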

  3. Thanks for various replies and sub-replies. So, simply speaking, “virtual particles” do the renormalization? I know, I have heard that before, but some people consider VPs a bit hokey (see the Wikipedia articles…). In any case, that still doesn’t work all the way down to “a point”, am I right? In which case you still need “strings” to keep things from getting absurd in the case of a “literal point” – ?

  4. Sean,

    Yes – I suppose that any physical theory is parameterizing ignorance of what happens at the scales where the theory breaks down (unless of course it really is the ultimate theory).

    But I do not find this notion helpful.

    The reasoning that leads to phrases like “parameterizing ignorance” is that perturbation theory involves divergent integrals on account of the integration being over infinite ranges of momentum. Since an infinite result is considered undesirable, a cutoff is introduced. The cutoff then becomes a parameter that represents our lack of knowledge about what happens at high momenta.
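
    (A standard example: the Euclidean one-loop integral of d^4k/(2 pi)^4 times 1/(k^2 + m^2)^2 grows like (1/(16 pi^2)) ln(Lambda^2/m^2) once it is cut off at momentum Lambda, so the “answer” keeps growing as the cutoff is pushed up.)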

    This theory, which includes the cutoff, is then a possible result of subtracting infinity from infinity. A possible result, but not a unique one. The mathematical reality is that by subtracting infinity from infinity we can get any functional form we want, including theories that cannot yield the required experimental numbers.

    Since none of this jiggery-pokery is needed for classical mechanics or electrodynamics, I do not see how you can say that such theories are no different.

  5. Neil, yes, simply speaking, virtual particles do the renormalization.

    And it’s the Wikipedia article on “virtual particles” that’s a bit hokey, not the virtual particles themselves! But their link to Gordy Kane’s answer to the question “Are virtual particles really constantly popping in and out of existence?” is worthwhile.

    And for most field theories, it indeed does not work all the way down to a point, in which case you do indeed need strings (or something else; strings are really the only concrete proposal so far) to keep things from getting absurd in the pointlike limit. I said most, because the so-called “asymptotically free” theories have interactions that disappear in the pointlike limit, and so in that case the limit is OK. (David Gross, David Politzer, and Frank Wilczek got the Nobel Prize in 2004 for showing that nonabelian gauge theories can be asymptotically free.)

  6. Chris, the whole point of renormalization is that the result is essentially unique, or at least depends on a small number of parameters (the “constants of nature”), not on an arbitrary function. We don’t introduce new parameters every time we test QED or the Standard Model. Theories that are “renormalizable” are precisely those for which only a small number of parameters are needed to make a wide variety of predictions. There is a cutoff, as Joe mentioned, that might very well represent new high-scale physics; but the nice thing is that such physics is completely irrelevant to our low-energy predictions.

    People need to get over this discomfort with “subtracting infinity from infinity.” It’s called “taking a limit,” and is a perfectly respectable procedure.
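
    To make the “taking a limit” point concrete, here is a toy sketch in Python (the integrand and the two “masses” are invented purely for illustration): each cutoff integral below diverges logarithmically as the cutoff is removed, but their difference settles down to a perfectly finite number.

    ```python
    import numpy as np

    # Toy "subtracting infinity from infinity": I(L, m) is the integral of
    # k/(k^2 + m^2) from 0 to a cutoff L, i.e. 0.5*log(1 + L^2/m^2).
    # Each I(L, m) diverges as L -> infinity, but the difference
    # I(L, m1) - I(L, m2) approaches the finite limit log(m2/m1).
    def I(L, m):
        return 0.5 * np.log(1.0 + (L / m) ** 2)

    m1, m2 = 1.0, 2.0   # illustrative "masses"
    for L in [1e1, 1e2, 1e4, 1e8]:
        print(f"L = {L:8.0e}   difference = {I(L, m1) - I(L, m2):.10f}")
    print(f"limit = log(m2/m1) = {np.log(m2 / m1):.10f}")
    ```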

  7. Neil,

    I am not sure what you mean by “doing” renormalization. Virtual particles form loops in Feynman graphs, and these loops most often correspond to divergent integrals. So virtual particles lead to the need to renormalize … they don’t actually “do” it, though; the renormalization is carried out by tree graphs with counterterms attached.

    Read up on this (& not just on Wikipedia). Try, for example, Mark’s textbook … you might find it enlightening.

  8. Chris,

    You wrote, “The mathematical reality is that by subtracting infinity from infinity we can get any functional form we want, including theories that cannot yield the required experimental numbers.”

    This is just wrong. In, say, QED, you can get any value of the electron charge that you want, or any value of the electron mass that you want, but that’s it. There’s no more freedom to adjust the predictions (up to corrections suppressed by powers of E/Lambda, where E is the experimental energy and Lambda is the ultraviolet cutoff).

  9. Sean,

    I am not arguing about the accuracy of renormalized perturbation theory for QED when done “correctly”, i.e. according to the recipe.

    I am merely saying that if, perversely, I choose not to follow the recipe, then I will get a different answer, despite starting with the same assumptions.

    Subtracting infinity from infinity is exactly what it is. It is not a limiting process – the limit, in this case, does not exist.

  10. Chris, the recipe isn’t magic. Subtracting infinity from infinity is not what’s done; it’s a shorthand for a limiting process that we understand (or, at least, we understand how to deal with what we don’t understand). Ugly schemes like MSbar and other ad hoc regularizations are just tricks that make it easier to get the answer that arises from the limiting procedure.

  11. Mark,

    There is no theoretical requirement that Λ should be a constant. It could depend on anything: momenta, energies, positions, variables that never entered the equations in the first place: anything. You choose it to be a constant, but its purpose is just to parameterize a divergent integral. I can use this freedom to get all sorts of wrong theories.

  12. Chris,

    An example where you use the freedom to choose Lambda to get a wrong answer would be interesting. I suppose you could also make electric charge, particle masses, and Planck’s constant functions of momenta, energies, political preference and hair color, also generating wrong answers. What fun! However, getting wrong answers isn’t really the goal. Constant Lambda gives right answers, so that’s what we do (likewise for charge, etc.).

  13. Chris,

    You can let Lambda depend on anything you like, but all effects of the cutoff (no matter how it varies) will be suppressed from the standard textbook calculations by powers of E/Lambda_min, where E is the energy scale at which the experiment is carried out (equal to the electron mass for “low energy” experiments like the anomalous magnetic moment), and Lambda_min is the minimum value of the ultraviolet cutoff Lambda.
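
    (To put in numbers: for the anomalous magnetic moment E is about 0.5 MeV, so even for a cutoff as low as Lambda = 1 TeV the suppression factor E/Lambda is roughly 5 x 10^-7, and (E/Lambda)^2 is roughly 2.6 x 10^-13.)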

  14. Chris, what is missing in your discussion, I think, is the notion of universality: in renormalizable field theories there are certain quantities that, when expressed in terms of a small number of physically measured parameters, become independent of the details of the cutoff procedure. Those are precisely the quantities that are compared to experiment with such spectacular success: the universal ones. Incidentally, this is not specific to perturbation theory; lattice calculations do precisely that as well.

    I think the viewpoint of “subtracting infinities” is not a useful one. A more useful view is that renormalization is a process of systematically eliminating the dependence on unknown short-distance physics.

  15. Also, renormalization is so efficient at eliminating the dependence on short-distance physics that it is usually useful even in cases when that physics is known…

  16. Hmmm… BTW, the need to renormalize would come first anyway from integrating the field energy below the classical electron radius, so it is not just a problem that arises from quantum mechanics. But what about the virtual electrons and positrons themselves being either real points, or strings: does that make a difference? And the virtual particles should have their own infinity issues, and virtual-upon-virtual particles, etc. It looks like a mess, and I don’t know much about it. If string theory can deal with that, then we would have a reason for it to be true?

  17. This reminds me of a question that I had today that I probably ought to know the answer to. Does anyone know the bounds on the coefficients of various dimension 5 operators in the Standard Model?

  18. Aaron, that’s a pretty broad question. There are fairly strict bounds on just about any dimension 5 or 6 operator you could write down. (Things involving the top quark are the least constrained, as a rule.) What specific operators do you have in mind? I don’t know of anywhere that you can find all the bounds collected, though the PDG is probably a good place to start looking. Otherwise, key words to look for would be “electroweak precision”, or “compositeness bounds” for four-fermion operators. Not to mention all the flavor physics results….

  19. I was mostly curious about the scale up to which we can rule out new physics that interacts with the Standard Model (beyond the electroweak symmetry breaking sector, of course). But I suppose I should have realized the PDG would be a good place to look.

  20. This really depends on how it interacts with the Standard Model. In general, the bounds are really uncomfortably strong; I’m sure you’ve heard of the little hierarchy problem, or the problem with FCNCs in SUSY, etc. This is why model-building is nontrivial.

    I think a lot of the strongest bounds are for four-fermion operators. Certainly these are among the easiest to understand the experimental results for: LEP measured e+e- going to various things rather precisely, and some of these operators must be suppressed by a scale of 10 or 20 TeV. (Of course, the actual scale depends on how large the couplings of the new physics to the SM are; e.g. a light Z’ is possible, if it couples very weakly.)
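
    (A rough way to see the numbers: the leading interference effect of such an operator scales like s/Lambda^2, so at sqrt(s) = 200 GeV a scale Lambda = 10 TeV gives corrections of order (0.2/10)^2 = 4 x 10^-4, within reach of precision cross-section measurements.)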

  21. #461: “There is no theoretical requirement that Λ should be a constant. It could depend on anything: momenta, energies, positions, variables that never entered the equations in the first place: anything. You choose it to be a constant, but its purpose is just to parameterize a divergent integral. I can use this freedom to get all sorts of wrong theories.”

    A good challenge, to which there is a good answer (basically already given in 464).
    Highly off-shell particles don’t propagate very far (the uncertainty principle). Thus their effects are essentially a delta function in spacetime (up to systematic corrections), which is a constant in energy-momentum. More formally, if you differentiate a graph enough times with respect to the external momenta, it becomes convergent (the BPHZ formalism). Dependence on position is forbidden by translation invariance.
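
    (For example, the integral of d^4k over 1/[(k^2 + m^2)((k + p)^2 + m^2)] diverges logarithmically, but a single derivative with respect to the external momentum p adds a power of k to the denominator and renders the integral convergent; the divergence is therefore a p-independent constant, exactly what a local counterterm can absorb.)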

  22. Gavin & Joe,

    You are only reinforcing my point. The principles of the subject (i.e. quantization of classical electrodynamics plus the Dirac field, interaction picture, time-ordered products, Wick-ordered products, Feynman graphs, renormalization) do not lead uniquely to the answer you want. They do not lead uniquely to any answer. As Joe has pointed out, you could even – using the freedom inherent in subtracting infinity from infinity – construct theories that violate translation invariance. So why bother with the “first principles” part at all? Why not just say that effective field theory, post-renormalization, as presented in the textbooks, is the theory, and not pretend that it follows from anything deeper?

  23. Moshe,

    Same point. The clay from which you are shaping your required theory is infinitely malleable. A posteriori choices of “reasonable” requirements for the theory (such as independence of Λ, after having decided that it is a constant) are exactly that: they are choices you make because the original theory was not sufficiently rigid.
