I continue to believe that “quantum field theory” is a concept that we physicists don’t do nearly enough to explain to a wider audience. And I’m not going to do it here! But I will link to other people thinking about how to think about quantum field theory.
Over on the Google+, I linked to an informal essay by John Norton, in which he recounts the activities of a workshop on QFT at the Center for the Philosophy of Science at the University of Pittsburgh last October. In Norton’s telling, the important conceptual divide was between those who want to study “axiomatic” QFT on the one hand, and those who want to study “heuristic” QFT on the other. Axiomatic QFT is an attempt to make everything absolutely perfectly mathematically rigorous. It is severely handicapped by the fact that it is nearly impossible to get results in QFT that are both interesting and rigorous. Heuristic QFT, on the other hand, is what the vast majority of working field theorists actually do — putting aside delicate questions of whether series converge and integrals are well defined, and instead leaping forward and attempting to match predictions to the data. Philosophers like things to be well-defined, so it’s not surprising that many of them are sympathetic to the axiomatic QFT program, tangible results be damned.
The question of whether or not the interesting parts of QFT can be made rigorous is a good one, but not one that keeps many physicists awake at night. All of the difficulty in making QFT rigorous can be traced to what happens at very short distances and very high energies. And that’s certainly important to understand. But the great insight of Ken Wilson and the effective field theory approach is that, as far as particle physics is concerned, it just doesn’t matter. Many different things can happen at high energies, and we can still get the same low-energy physics at the end of the day. So putting great intellectual effort into “doing things right” at high energies might be misplaced, at least until we actually have some data about what is going on there.
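To put the Wilsonian point in one equation (a standard schematic form, nothing more): physics below some cutoff scale \(\Lambda\) is described by an effective Lagrangian

\[
\mathcal{L}_{\rm eff} \;=\; \mathcal{L}_{d \le 4} \;+\; \sum_i \frac{c_i}{\Lambda^{\,d_i - 4}}\,\mathcal{O}_i^{(d_i)}\,,
\]

where the \(\mathcal{O}_i\) are operators of mass dimension \(d_i > 4\). Whatever happens at high energies shows up at low energies only through the values of the coefficients \(c_i\), and those operators are suppressed by powers of \(E/\Lambda\), which is why wildly different short-distance physics can yield the same long-distance predictions.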
Something like that attitude is defended here by our former guest blogger David Wallace. (Hat tip to Cliff Harvey on G+.) Not the best video quality, but here is David trying to convince his philosophy colleagues to concentrate on “Lagrangian QFT,” which is essentially what Norton called “heuristic QFT,” rather than axiomatic QFT. His reasoning very much follows the Wilsonian effective field theory approach.
The concluding quote says it all:
LQFT is the most successful, precise scientific theory in human history. Insofar as philosophy of physics is about drawing conclusions about the world from our best physical theories, LQFT is the place to look.
Sean: “Many different things can happen at high energies, and we can still get the same low-energy physics at the end of the day. So putting great intellectual effort into ‘doing things right’ at high energies might be misplaced.”
That seems a bit hypocritical coming from particle physicists; after all, the same argument can be made against most of modern particle physics: “Many different things can happen at high energies, but at the end of the day almost all practical applications deal with electrons and nucleons, so the great intellectual and financial resources invested in studying higher-energy physics (LHC) might be misplaced.”
How would Wallace feel about formulating Wilson’s ideas as theorems and proving them? Well, theorems cannot be proven without axioms. So, in short, there’s no escape: axiomatic QFT (in some form) is here to stay, whether popular with “the masses” or not.
Not having watched the video yet, I cannot be sure what Wallace’s aims are. But if he intends to argue that mathematical proof is not necessary in particle physics, the main question to answer is why particle physics is so special, given that mathematical proof has demonstrated its value in many other areas of knowledge. Moreover, using some heuristic ideas (Wilsonian view) to argue that other heuristic ideas (perturbative QFT) are intellectually satisfactory may not be very effective in front of an audience that is well aware of the independent value of mathematical proof.
To be honest, I do not see a reason to make exclusionary arguments either way. Physics/math/philosophy is a big tent, big enough for both heuristic reasoning and rigorous reasoning. Let’s just not substitute one for the other (in either direction).
I think “axiomatic” versus “heuristic” might be what someone once called a false dichotomy. It is true that some (mostly older) mathematically rigorous approaches to the subject fail to resemble anything a working physicist might recognize. But that might be related to the fact that they do not incorporate basic ideas of the subject such as universality and the renormalization group. That has nothing to do with the general idea of mathematical rigour; there is nothing fundamentally heuristic or imprecise about those ideas. Many newer mathematical approaches do incorporate the modern understanding of QFT when trying to put the subject on firm footing, and as a result they might have a better chance of success. Here, for example, is a really nice survey talk by Mike Douglas (PDF):
http://www.math.upenn.edu/StringMath2011/notes/Douglas_StringMath2011_talk.pdf
AI:
I think the point isn’t that ‘we don’t want to know what happens at higher energies’; I’m pretty sure that everyone in high-energy physics would like to know whether new phenomena emerge at, say, the Planck scale.
As I understand it, the logic is something like this: one would hope that the final theory of physics could be formulated rigorously. But QFT isn’t necessarily the last word on physics, so maybe it can’t be formulated rigorously, in the sense that maybe certain integrals really do diverge when you insist the theory is valid at super high energies. Despite what you might first think, that’s actually not bad, because Wilson’s approach tells us that there can be lots of things going on at higher energies that will ‘effectively look like’ QFT at low energies. If that’s the case, spending a lot of mental effort trying to formulate QFT rigorously may be completely futile, if the true theory is not a QFT.
The vacuum energy density predicted by QFT is 10^70 to 10^120 times larger than the observed macroscopic VED measured astrophysically.
Obviously, something is seriously wrong with one or more basic assumptions of QFT.
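For reference, the standard back-of-the-envelope estimate behind those numbers (a rough sketch, not a precise calculation): summing the zero-point energies of field modes up to a cutoff \(\Lambda\) gives

\[
\rho_{\rm vac} \;\sim\; \int^{\Lambda} \frac{d^3k}{(2\pi)^3}\,\frac{1}{2}\,k \;\sim\; \frac{\Lambda^4}{16\pi^2}\,,
\]

which for a Planck-scale cutoff is of order \(10^{74}\,\mathrm{GeV}^4\), versus an observed value of order \(10^{-47}\,\mathrm{GeV}^4\): the famous mismatch of roughly \(10^{120}\). Lower choices of cutoff give the smaller exponents in the quoted range.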
A simple and natural resolution of the VED disparity was offered:
http://arxiv.org/abs/0901.3381 ,
but it was proposed by a nobody, so the physics community has no interest in it.
Albert Z
QF… er, I mean quoted for truth (let’s keep the acronym QFT for quantum field theory). It’s kinda like worrying that old quantum theory isn’t rigorous. The theory was an approximation and a stepping stone to something better, quantum mechanics. It played that role well enough, and it didn’t have to be axiomatic to do it.
Similarly, QFT is a low energy effective field theory, but the more fundamental theory isn’t exactly known at this time. If we find the fundamental theory and that can’t be made rigorous, then it’s time to worry!
“LQFT is the most successful, precise scientific theory in human history. ”
This claim is often made, but I’d like to see some numbers, please. I think that classical General Relativity has been confirmed to a very large number of decimal places [think binary pulsar] and I wouldn’t be surprised if it does as well or better.
Andrew#4:
Newtonian mechanics is definitely known not to be the last word on physics, yet it has still been formulated rigorously, with few ill effects to the physicists and mathematicians who sat down and did it. Physical correctness is clearly not a requirement for a precise mathematical formulation of a theory. Otherwise, we would be in MUCH bigger trouble than we actually are.
“[…] Wilson’s approach tells us that there can be lots of things going on at higher energies that will ‘effectively look like’ QFT at low energies. If that’s the case, spending a lot of mental effort trying to formulate QFT rigorously may be completely futile, if the true theory is not a QFT.”
You’re absolutely right that we do not know what the fundamental physics at high energies is. But, boy, wouldn’t the above argument have egg on its face if that physics does turn out to be described by a QFT? If a rigorous formulation of QFT is still not available at that point, it would look as if we’d spent all this time just twiddling our thumbs and waiting, instead of attacking this important mathematical problem. Luckily, we are not just twiddling our thumbs, and there is a substantial community of mathematical physicists working toward a mathematically rigorous way of formulating QFT, incorporating as many physical insights as needed. Labeling this work as unimportant, or subtracting from the resources available to them, is just not helpful.
@7 Albert Einstein:
I would suggest this
http://en.wikipedia.org/wiki/Tests_of_general_relativity#Strong_field_tests
and
http://en.wikipedia.org/wiki/Precision_tests_of_QED
It would appear that GR doesn’t even come close.
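To put rough numbers on it (approximate figures, circa 2012, from the standard reviews): for QED, the electron’s anomalous magnetic moment agrees between theory and experiment at about the part-per-billion level,

\[
\frac{|a_e^{\rm exp} - a_e^{\rm th}|}{a_e} \;\lesssim\; 10^{-9}\,,
\]

while the binary pulsar’s orbital decay matches GR at roughly the \(0.2\%\) level, and solar-system tests such as the Cassini time delay constrain deviations at the \(10^{-5}\) level.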
With respect to #8:
First of all, Newtonian and classical physics did break down theoretically, heralding the onset of quantum mechanics (the UV catastrophe). I don’t know about you, but I like it when my theories tell me that they no longer work.
Second of all, the great life lesson for all physicists is to know when a problem is solvable, and when you just don’t know enough and need to walk away.
Einstein spent 30 years on an intellectual pursuit in which he had no chance of succeeding. That is obvious in hindsight, of course, but even then, I think people were aware that the problems were not tractable given the methods and techniques of the day.
Well, it seems to me that’s the case with solving QFT. It might simply be that the reason we can’t formulate these theories precisely is that we lack some physical principle, or perhaps it is simply the wrong question to ask. (QED as a fundamental high-energy theory, for instance, is by now merely of academic interest.) Given how difficult and tortured the process of making these things rigorous is, I like to think that we are likely in one of those scenarios.
It strikes me that this divide has long been present in physics. The axiomatic side had its tragic champion in Albert Einstein, whose hunch that at bottom everything makes sense was so strong that he rejected the very theory he helped found. The heuristic side has long been encapsulated in the apocryphal rebuke to students who want to know why QFT is so bizarre: “Shut up and calculate.”
Sean may be pragmatically right to say “putting great intellectual effort into ‘doing things right’ at high energies might be misplaced, at least until we actually have some data about what is going on there.” Still, the effort to unite physics in a coherent, evidence-based whole has been underway for several lifetimes. If eventually physics cannot produce the data or explain it once it is there, it will have failed. That might keep a few people up at night.
Haelfix#8:
I hope you will agree to make the distinction between problems of physics and problems of mathematics. When you have data that is not fit by any known theory, you have a problem of physics. When you have an interesting, mathematically precise question whose answer is simply unknown, you have a problem of mathematics. The first two examples you gave, the UV catastrophe and Einstein’s quests for a unified field theory or alternatives to quantum mechanics, were decidedly problems of physics. For instance, the statistical mechanics of the classical electromagnetic field is a perfectly fine mathematical theory, which is used to this day. It fails, however, to exhibit finite-energy equilibrium states that would correspond to the internal states of the black bodies we find all around us, and hence is, strictly speaking, physically wrong.
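(To spell out the black-body example: equipartition assigns an average energy \(k_B T\) to each mode of the classical field, which gives the Rayleigh–Jeans spectral energy density

\[
u(\nu)\,d\nu \;=\; \frac{8\pi \nu^2}{c^3}\,k_B T\,d\nu\,,
\]

and the integral over all frequencies diverges: there is no finite-energy equilibrium state. That divergence, the UV catastrophe, is a failure of the physics, not of the mathematics.)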
Your third example, and the subject of this discussion, the rigorous formulation of QFT, is, by contrast, a problem of mathematics. It is treated as such by the practitioners of that field (even if not perceived so by outsiders), which is plainly illustrated by the fact that an instance of this problem has been valued at $1M by the Clay Mathematics Institute. The motivations for working on problems of physics and problems of mathematics are different, and they need not be pursued by the same people. So, whatever the reasons for not working on the mathematical problem of cleaning up the foundations of QFT, I submit that they are not of the kind you’ve put forward.
As a side remark, I’m often under the impression that the motivations and goals of people working on the mathematical foundations of QFT are misunderstood or misperceived. I hope that bit of clarification helps.
Is there a good (semi-)popular treatment of QFT, say at the level of Ted Harrison’s Cosmology textbook? (BTW, Harrison’s book is what a textbook should be: nothing important left out, lots of background information (otherwise one can just read the key papers in a field), captures the big picture and the important details.)
A simple and natural resolution of the VED disparity was offered:
http://arxiv.org/abs/0901.3381 ,
but it was proposed by a nobody, so the physics community has no interest in it.
Quoting from the abstract: “a discrete self-similar cosmological paradigm based on the fundamental principle of discrete scale invariance”. A web search for this quickly leads to
http://adsabs.harvard.edu/abs/1987ApJ...322...34O (in a well-respected refereed journal, no less)
Quoting from this abstract: “Two definitive predictions are also pointed out: (1) the model predicts that the electron will be found to have structure with radius of about 4 x 10 to the -17th cm, at just below the current [1987] empirical resolution capability”. Since current (2012) upper limits on the substructure of the electron are a few orders of magnitude lower than this, and there is no circumstantial evidence for substructure, it appears that the discrete self-similar paradigm has been ruled out, since a definitive prediction was falsified. Maybe that’s why no one takes it seriously.
A great advantage of a mathematically rigorous formulation of a theory is that one explicitly states, and everyone can see, exactly what are the assumptions that go into the theory. Then, if the theory is inconsistent with observations, the error must lie with one or more axioms — everyone knows where to look.
In a heuristic approach it may not be entirely clear what, exactly, all of its assumptions are. Some of the assumptions may be implicit or “folklore-ish,” and that leads to potential ambiguity. The result may be that different people maintain different implicit assumptions about a heuristic theory even if they agree on its outcome, leading to endless arguments about the interpretation or relevance of a prediction. The potential also exists that careful examination of all the assumptions (explicit plus implicit) would reveal an inconsistency in the theory, even if the predictions of the theory were correct in the regime where it has been tested thus far. Inconsistent theories are not what good physicists strive for…
In struggling to come up with a set of consistent axioms, one is forced to consider foundational issues that are too easily brushed aside in more ad hoc approaches to theory construction. Consideration of those foundational issues may lead to important or even necessary insights. The concern seems especially relevant when trying to develop a fundamental theory, where very limited experimental guidance may be available to help discover theoretical missteps.
One should not be too cavalier about dismissing the importance of rigorous mathematical formulations.
Mr. Helbig says:
“Quoting from this abstract: “Two definitive predictions are also pointed out: (1) the model predicts that the electron will be found to have structure with radius of about 4 x 10 to the -17th cm, at just below the current [1987] empirical resolution capability”. Since current (2012) upper limits on the substructure of the electron are a few orders of magnitude lower than this, and there is no circumstantial evidence for substructure, it appears that the discrete self-similar paradigm has been ruled out since a definitive prediction was falsified. Maybe that’s why no-one takes it seriously.”
Albert Z says:
(1) No one has direct and/or reliable evidence for or against structure, especially regarding low-density envelopes or clouds of virtual particles, below a resolution of 10^-16 cm.
(2) You conveniently and unscientifically ignore the fact that Oldershaw mentioned in the same paragraph of the 1987 paper that it was also possible that the electron was a naked singularity.
(3) Since 1987, which was quite a while ago, Oldershaw has settled on an electron model wherein the electron is a virtually naked singularity that has a low-density envelope of charged subquantum particles, with an envelope radius of 4 x 10^-17 cm.
Less emotion, more accurate information, please.
Best,
Albert Z
@12.
I think my point is that this actually may *be* a physics problem, and not strictly speaking a mathematics issue. I say that for a number of reasons!
First, there is the observation that, strictly speaking, we must take infinite-volume limits when defining these theories. Many of the big problems occur there. Well, that may or may not be physically justified in principle, if you believe in holographic complementarity.
Two, take QED as an example. I could very easily say that we need 136 terms in the asymptotic expansion, throw away the rest, and simply define the theory that way. It’s almost well defined (modulo some subtleties) and we are done! The mathematicians will cry foul, b/c in some sense what they want is a systematic method that spits out fundamental theories with well-defined objects that work to arbitrarily high energies. They really believe that there is a nonperturbative answer that has ontological value (naively, it seems that most exact answers to the path integral are simply infinity). Except that we now know that such an object is meaningless physically: it would need to be completed into the electroweak theory!
Three, it may simply be the case that physical QFTs don’t exist as well-defined mathematical objects, b/c for instance the world is really made out of strings (or some other fundamental theory) and they only spit out QFTs as low-energy approximations. So perhaps the mathematics question should be about something else!
I don’t know! It just seems to me that the old attempts at defining QFTs were so unlovely, contrived, and ugly (mathematically) that it was likely a case of humans putting structure onto nature that she didn’t want. At least that was my read when I studied them.
I don’t understand the viewpoint that it is impossible to make “heuristic QFT” rigorous. Series doesn’t converge? Stop using “Sum” and define a new symbol (or just work with the terms you need). Integrals don’t exist? Define a new symbol. In the end, just produce a set of rules for the manipulation of mathematical quantities, and explain the relationship between those quantities and experiments. If somebody took the time to do this, it would be *way* easier for outsiders to understand QFT. As it is, one has to learn by hanging around the community long enough to get a “feel” for what’s allowed and what isn’t.
As for some simple underlying mathematical structure… now that sounds hard. But just producing a mathematically well-defined set of rules for predictions should be simple. And that makes QFT “rigorous”, if not beautiful.
My own work is in quantum field theory in curved space, where the “axiomatic” perspective, generally found under the headings of local quantum field theory or algebraic quantum field theory, is quite essential for constructing anything that makes sense. No natural vacuum exists in a spacetime without a timelike Killing vector. What used to be a mathematical nicety is essential in the curved-space setting.
I know Sean is no stranger to these results and is probably restricting the discussion to the role of QFT in particle physics.
Is it wrong that I want to shrug my shoulders at this explanation? I guess I’ll just do it when I’m off the internet…
I totally agree with Sam. At the end of the day, physics is all about formulating a set of rules that enables you to predict the results of experiments. They don’t have to follow from a mathematically well-defined set of axioms and theorems. They just have to work and be able to reduce to well-established rules under more particular situations. These rules are about the relationships between different measurable quantities that we have invented because they conform to what we observe.
@18 & 21:
Please reread comment 17. One certainly *can* formulate a set of working rules and have a well-defined theory, as long as one of the rules says “don’t apply these rules beyond a certain point”. In the QED example, the “certain point” would be the 136th order of perturbation, or so. This extra rule limits the applicability of the theory in a very ad hoc way, without properly understanding why.
If you want to have a theory which doesn’t have such axioms (that limit the applicability of other axioms), then in QFT you may easily run into a problem with the theory being (a) self-contradictory, and/or (b) non-predictive, and/or (c) experimentally incorrect.
If you are a mathematician, you might not care about (c), but you indeed must care about (a) and (b), which is the very problem of axiomatic QFT. The solution to this problem might or might not exist.
If you are a physicist, you must care about (c) first, and about (a) and (b) only if (c) is true. In the case of QFT, (c) is known to have failed, i.e., QFT does have predictions that contradict observations (just calculate anything in QED beyond the 137th order of perturbation).
Therefore, given that (c) fails, physicists consider QFT an “effective theory”, i.e., an approximation which is valid only up to a certain point. In that setting, caring about an axiomatic formulation of QFT is just an academic exercise, and an irrelevant question.
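As a toy illustration of that “valid only up to a certain point” structure (my own sketch; Euler’s classic divergent series stands in for the QED expansion, with x playing the role of the coupling α ~ 1/137):

```python
# Toy model of an asymptotic series, standing in for the QED expansion
# discussed above. Euler's example: the finite integral
#     f(x) = \int_0^\infty e^{-t} / (1 + x t) dt
# has the divergent asymptotic expansion  sum_n (-1)^n n! x^n.
# Here x = 0.1 plays the role of the coupling, so the optimal
# truncation order is ~ 1/x = 10 (instead of ~137 as in QED).
import math
from scipy.integrate import quad

x = 0.1  # toy "coupling constant"

# The exact, nonperturbative answer: a perfectly finite number.
exact, _ = quad(lambda t: math.exp(-t) / (1.0 + x * t), 0.0, math.inf)

# Partial sums of the asymptotic series: the error shrinks until
# n ~ 1/x, then grows without bound.
partial = 0.0
for n in range(31):
    partial += (-1) ** n * math.factorial(n) * x ** n
    print(f"order {n:2d}: partial sum = {partial:+.6f}, "
          f"error = {abs(partial - exact):.2e}")
```

Truncating at the smallest term, which is exactly the “don’t apply these rules beyond a certain point” prescription above, is the best the series alone can do; the finite integral plays the role of the unknown, more fundamental answer.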
So as a bottom line, the relevance of an axiomatic formulation of QFT depends on your standpoint and perspective (and taste).
HTH 🙂
(1) No one has direct and/or reliable evidence for or against structure, especially regarding low-density envelopes or clouds of virtual particles, below a resolution of 10^-16 cm.
A quick web search reveals otherwise. By chance, just today I was reading an article on the substructure of the proton as measured at HERA (an electron-proton accelerator in Hamburg) and the comparison with QCD predictions. The agreement was great, and certainly wouldn’t be possible if the electron had a structure a couple of orders of magnitude bigger than that probed in the proton.
(2) You conveniently and unscientifically ignore the fact that Oldershaw mentioned in the same paragraph of the 1987 paper that it was also possible that the electron was a naked singularity.
Either it is a naked singularity, or it has a substructure. A ‘prediction’ which predicts both predicts neither.
(3) Since 1987, which was quite a while ago, Oldershaw has settled on an electron model wherein the electron is a virtually naked singularity that has a low-density envelope of charged subquatum particles, with an envelope radius of 4 x 10^-17 cm.
There have been a few blog posts on the Super Bowl recently. I think this is termed ‘moving the goalposts’ in football jargon.
Dear Mr. Helbig
Sometimes you argue as if Oldershaw were predicting a “solid-body” electron with a radius of 4 x 10^-17 cm. Of course, we know he is doing no such thing.
You say: “Either it is a naked singularity, or it has a substructure.”
But that is a “dumbbell argument” with all the weight on two opposite extremes, i.e., it is an excluded-middle argument. Imagine, if you can, a singular electron that is shrouded in a low-density envelope of charged and relatively infinitesimal particles, with the envelope having a radius of 4 x 10^-17 cm.
So there you have a viable model for a virtually singular electron that does have substructure, which would be very hard to detect but not impossible.
If you have a vendetta against Oldershaw, fine. But I think it is a mistake to summarily rule out his discrete fractal paradigm without a more open-minded and balanced evaluation. It has many positive features, along with a small number of unresolved issues [and what theory does not?].
Why focus on a couple of moles and ignore the global sweep, potential for unification and elegance of the paradigm?
Albert Z