So What Have You Been Maximizing Lately?

A while back, Brad DeLong referred to Ezra Klein’s review of Tyler Cowen’s book Discover Your Inner Economist. (Which I own but haven’t yet read; if it’s as interesting as the blog, I’m sure it will be great.) The question involves rational action in the face of substantial mark-ups on the price of wine in nice restaurants:

I did once try to convince Bob Hall at a restaurant in Palo Alto not to order wine: the fact that the wine would cost four times retail would, I said, depress me and lower my utility. Even though I wasn’t paying for it, I would still feel as though I was being cheated, and as I drank the wine that would depress me more than the wine would please me.

He had two responses: (i) “You really are crazy.” (ii) “Think, instead, that it’s coming straight out of the Hoover Institution endowment, and order two bottles.”

He is crazy, of course — crazy like an economist. I left a searingly brilliant riposte in the comment section of the post, which mysteriously never appeared. He will probably claim it was a software glitch or that I hit “Preview” instead of hitting “Post,” but I know better. What are you afraid of, Brad DeLong!?

Economists have a certain way of looking at the world, in which (to simplify quite a bit) people act rationally to maximize their utility. That sort of talk pushes physicists’ buttons, because maximizing functions is something we do all the time. I’m not deeply familiar with economics in any sense; everything I know about the subject comes from reading blogs. Any social science is much harder than physics, in the sense that constructing quantitative models that usefully describe the behavior of realistic systems is made enormously difficult by the inherent nonlinearities of human interactions. (“Ignoring friction” is the basis of 98% of physics, but nearly impossible in social sciences.) But I can’t help speculating, in a completely uninformed way, how economists could improve their modeling of human behavior. Anyone who actually knows something about economics is welcome to chime in to explain why all this is crazy (very possible), or perfectly well-known to all working economists (more likely), or good stuff that they will steal for their next paper (least likely). The freedom to speculate is what blogs are all about.

Utility is a map from the space of goods (or some space of outcomes) to the real numbers:

U: {goods} -> R

The utility function encapsulates preferences by measuring how happy I would be if I had those goods. If a set of goods A brings me greater utility than a set B, and I have to choose between them, it would be rational for me to choose A. Seems reasonable. But a number of issues arise when we put this kind of philosophy into practice. So here are those that occur to me, over the course of one plane ride across a couple of time zones.

  • Utility is non-linear.

This one is so perfectly obvious that I’m sure everyone knows it; nevertheless, it’s what immediately popped into mind upon reading the wine story. We need to distinguish between two different senses of linear. One is that increasing the amount of goods leads to a proportional increase in utility: U(ax) = aU(x), where x is some collection of goods and a is a real number. Everyone really does know better than that; the notion of marginal utility captures the fact that eating five deep-fried sliders does not bring you five times the happiness that eating just one would bring you. (Likely it brings you less.)
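
As a toy illustration, here is the failure of that first sense of linearity in Python. The logarithmic form of the utility function is my own assumption, chosen only because it is concave; nothing in the argument depends on the specific shape:

```python
import math

def utility(sliders):
    """Toy concave utility with diminishing marginal returns.
    The log form is an assumption for illustration only."""
    return math.log(1 + sliders)

# Five deep-fried sliders do not bring five times the happiness of one:
assert utility(5) < 5 * utility(1)

# Each additional slider is worth less than the one before it:
assert utility(2) - utility(1) < utility(1) - utility(0)
```

Any concave function behaves this way; that is the whole content of "diminishing marginal utility."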

But the other, closely related, sense of linearity is the ability to simply add together the utility associated with different kinds of goods: U(x+y) = U(x) + U(y), where x and y are different goods. In the real world, utility isn’t anything like that. It’s highly nonlinear; the presence of one good can dramatically affect the value placed on another. I’m also pretty sure that absolutely every economist in the world must know this, and surely they use interesting non-linear utility functions when they write their microeconomics papers. But the temptation to approximate things as linear can lead, I suspect, to the kind of faulty reasoning that dissuades you from ordering wine in nice restaurants. Of course, you could have water with your meal, and then go home and have a glass of wine you bought yourself, thereby saving some money and presumably increasing your net utility. But having wine with dinner is simply a different experience than having the wine later, after you’ve returned home. There is, a physicist would say, strong coupling between the food, the wine, the atmosphere, and other aspects of the dining experience. And paying for that coupling might very well be worth it.
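
A minimal sketch of such a coupling in Python; the goods, the weights, and the coupling strength are all invented for illustration:

```python
def dining_utility(wine, dinner, coupling=1.5):
    """Utility with an interaction term. The separable weights (1.0, 2.0)
    and the coupling strength are invented numbers, not real estimates."""
    return 1.0 * wine + 2.0 * dinner + coupling * wine * dinner

together = dining_utility(1, 1)                            # wine with the meal
separately = dining_utility(1, 0) + dining_utility(0, 1)   # wine later, at home
assert together > separately  # strong coupling: the combination is worth extra
```

On numbers like these, paying a restaurant markup for the wine can be perfectly rational whenever the value of the coupling term exceeds the markup.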

Physicists deal with this by working hard at isolating the correct set of variables which are (relatively) weakly-coupled, and dealing with the dynamics of those variables. It would be silly, for example, to worry about protons and neutrons if you were trying to understand chemistry — atoms and electrons are all you need. So the question is, is there an economic equivalent to the idea of an effective field theory?

  • Utility is not a function of goods.

Another in the category of “surely all the economists in the world know this, but they don’t always act that way.” A classic (if tongue-in-cheek) example is provided by this proposal to cure the economic inefficiency of Halloween by giving out money instead of candy. After all, chances are small that the candy you collect will align perfectly with the candy you would most like to have. The logical conclusion of such reasoning is that nobody should ever buy a gift for anyone else; the recipient, knowing their own preferences, could always purchase equal or greater utility if they were just given the money directly.

But there is an intrinsic utility in gift-giving; we value a certain object for having received it on a special occasion from a loved one (or from a stranger while trick-or-treating), in addition to its inherent value. Now, one can try to account for this effect by introducing “having been given as a gift” as a kind of good in its own right, but that’s clearly a stopgap. Instead, it makes sense to expand the domain set on which the utility function is defined. For example, in addition to a set of goods, we include information about the path by which those goods came to us. Path-dependent utility could easily account for the difference between being given a meaningful gift and being handed the money to buy the same item ourselves. Best of all, there are clearly a number of fascinating technical problems to be solved concerning strategies for maximizing path-dependent utility. (Could we, for example, usefully approximate the space of paths by restricting attention to the tangent bundle of the space of goods?) Full employment for mathematical economists! Other interesting variables that could be added to the domain set on which utility is defined are left as exercises for the reader.
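
A minimal sketch of what a path-dependent utility might look like in Python; the path labels and bonus values are entirely hypothetical:

```python
# Hypothetical path-dependent utility: the same object is valued differently
# depending on how it came to us. Paths and bonus values are invented.
PATH_BONUS = {
    "bought_myself": 0.0,
    "gift_from_loved_one": 2.0,
    "trick_or_treating": 0.5,
}

def utility(item_value, path):
    """U is defined on (goods, path) pairs, not on goods alone."""
    return item_value + PATH_BONUS[path]

# The same item carries more utility as a gift than as a purchase:
assert utility(10, "gift_from_loved_one") > utility(10, "bought_myself")
```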

  • People do not behave rationally.

This is the first objection everyone thinks of when they hear about rational-choice theory — rational behavior is a rare, precious subset of all human activity, not the norm that we should simply expect. And again, economists are perfectly aware of this, and incorporating “irrationality” into their models seems to be a growth business.

But I’d like to argue something a bit different — not simply that people don’t behave rationally, but that “rational” and “irrational” aren’t necessarily useful terms in which to think about behavior. After all, any kind of deterministic behavior — faced with equivalent circumstances, a certain person will always act the same way — can be modeled as the maximization of some function. But it might not be helpful to think of that function as utility, or of the act of maximizing it as a manifestation of rationality. If the job of science is to describe what happens in the world, then there is an empirical question about what function people go around maximizing, and figuring out that function is the beginning and end of our job. Slipping words like “rational” in there creates an impression, intentional or not, that maximizing utility is what we should be doing — a prescriptive claim rather than a descriptive one. It may, as a conceptually distinct issue, be a good thing to act in this particular way; but that’s a question of moral philosophy, not of economics.
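
The claim that any deterministic rule can be modeled as maximization is easy to make concrete. Here is a sketch in Python, with a deliberately silly choice rule of my own invention:

```python
def rationalize(choice_rule):
    """Given any deterministic choice rule, construct a function it
    'maximizes': 1 on the chosen option, 0 everywhere else."""
    def induced_utility(option, options):
        return 1.0 if option == choice_rule(options) else 0.0
    return induced_utility

# Hypothetical rule: always pick the option with the shortest name.
rule = lambda options: min(options, key=len)
u = rationalize(rule)

options = ["wine", "water", "lemonade"]
chosen = rule(options)
# The rule's choice maximizes the induced function, by construction:
assert all(u(chosen, options) >= u(o, options) for o in options)
```

The construction is trivial, which is exactly the point: calling the induced function "utility" and the rule "rational" adds nothing descriptive.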

  • People don’t even behave deterministically.

If, given a set of goods (or circumstances more generally), a certain person will always act in a certain way, we can always describe such behavior as maximizing a function. But real people don’t act that way. At least, I know I don’t — when faced with a tough choice, I might go a certain way, but I can’t guarantee that I would always do the same thing if I were faced with the identical choice another hundred times. It may be that I would be a lot more deterministic if I knew everything about my microstate — the exact configuration of every neuron and chemical transmitter in my brain, if not every atom and photon — but I certainly don’t. There is an inherent randomness in decision-making, which we can choose to ascribe to the coarse-grained description that we necessarily use in talking about realistic situations, but it is there one way or the other.

The upshot is that a full description of behavior needs to be cast not simply in terms of the single function-maximizing choice, but in terms of a probability distribution over different choices. The evolution of such a distribution would be essentially governed by the same function (utility or whatever) that purportedly governs deterministic behavior, in the same way that the dynamics in Boltzmann’s equation are ultimately governed by Newton’s laws. The fun part is, you’d be making better use of the whole utility function, not just those special points at which it is maximized — just as the Feynman path integral established a way to make use of the entire classical action, not just its extremal points. I have no idea whether thinking in this way would be useful for addressing any real-world problems, but at the very least it should provide full employment for mathematical economists.
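
One standard way to write down such a distribution is a Boltzmann-style (softmax) weighting of the utility function. This sketch is my own illustration, with arbitrary utilities and an arbitrary temperature parameter:

```python
import math

def choice_probabilities(utilities, temperature=1.0):
    """Boltzmann / softmax weights: each option gets probability
    exp(U/T) / Z, so the entire utility function shapes behavior,
    not just its maximum."""
    weights = [math.exp(u / temperature) for u in utilities]
    z = sum(weights)
    return [w / z for w in weights]

probs = choice_probabilities([1.0, 2.0, 3.0])
assert max(probs) == probs[2] and probs[2] < 1.0  # likeliest, not certain

# As temperature -> 0, the distribution collapses onto the maximum,
# recovering the deterministic maximizer as a limiting case.
cold = choice_probabilities([1.0, 2.0, 3.0], temperature=0.05)
assert cold[2] > 0.999
```

The high-temperature limit is complete randomness, the low-temperature limit is strict maximization, and realistic behavior presumably sits somewhere in between.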

Okay, I bet that’s at least three or four Sveriges Riksbank Prizes in Economic Sciences in Memory of Alfred Nobel lurking in there somewhere. Get working, people!

Facing the Future

We now have a Facebook group for Cosmic Variance! But let me work up to it.

I had heard about Facebook many times, but had effortlessly resisted the temptation to learn anything about it or get involved in any way. It’s a social-networking site, allowing people to keep each other up to date with stuff they are doing. A pastime in which I pretty much have no interest, despite what one might gather from the fact that I have a blog and all that. While I’ll tell stories about travel or amusing anecdotes for purposes of local color, and mention the occasional big event, for the most part I prefer to use the blog to talk about ideas and keep the fascinating details of my everyday life a tightly-shrouded mystery.

But at some point, the “everyone is doing it, how hard can it be, and maybe it could even be fun” argument kicks in, and in a moment of weakness you sign up. I blame Carl Zimmer, who just joined himself, with the usual disclaimers. It’s free, and easy as pie — you sign up, post a photo if you like, and that’s it.

The basic point of Facebook, according to my limited understanding, is to have “Friends.” That is, a set of other Facebookers with whom you have (mutually) agreed to allow access to your profile and information. There is a quite brilliant application via which, if you choose to allow it, Facebook can zip through a conventional email program (Gmail, Apple Mail, etc.) looking for email addresses of other people with Facebook accounts, and let you ask them to be friends. And then there are networks of common interest and all that stuff. The obvious use is that you can simply tell Facebook when you’ve decided to quit your job and hike across the Andes, rather than emailing all of your friends individually.

But there is a deep problem of postmodern community ethics here — who is a “Friend,” in the official Facebook sense? One group would be, you know, your actual friends. Another would be people with whom you have some less tangible, but nevertheless pretty mutual and well-defined, relationship — maybe you’ve exchanged emails, or comments on each other’s blogs. It’s all up to you where to draw the line.

But personally, I wouldn’t count someone as a “Friend” if I had simply read their book, or visited their blog, or listened to their radio show, without them knowing me at all. And vice-versa. I mean, I think — to be honest, I’m new at this, and have no idea what the standards are. It might be very natural, for example, for regular CV readers to want to be my friend, but I’m not really sure it fits my notion of what friendship is really all about.

Then I noticed that Crooked Timber has its own Facebook group. Which seemed, at first, like the dumbest thing in the world — why do you need some proprietary social network when you already have the damn blog?

Upon digging deeper, however, I realized it was actually the smartest thing in the world. (A very fine line.) With the Facebook group, people can come together and share pictures, or relevant stories or rants, without being “friends” and dealing with constant updates about what they all had for dinner last night. (Although advancing to friendship — or more! — is always possible.) And in fact there are lots of blogs that have their own Facebook group.

So, now, so do we. Go ahead and join up. Upload your photo (or not). Share videos and pictures from the regular “Fans of CV” get-togethers which I’m sure happen all the time. The Pharyngula group has over a hundred members — you don’t want to be shown up by a bunch of godless cephalophiles, do you?

But there’s no way I’m ever having a MySpace page.

Update: Seems to be working! Over a hundred members, and the irrepressible Mark Jackson has even started a conversation about physics-related movie titles.

The Meaning of “Life”

John Wilkins at Evolving Thoughts has a great post about the development of the modern definition of “Life” (which, one strongly suspects, is by no means fully developed). Once we break free of the most parochial definitions involving carbon-based chemistry, we’re left with the general ideas that life is something complex, something that processes information, something that can evolve, something that takes advantage of local entropy gradients to make records and build structures. (Probably quantum computation does not play a crucial role, but who knows?) One of the first people to think in these physical terms was none other than Erwin Schrödinger, who was mostly famous for other things, but did write an influential little book called What Is Life? that explored the connections between life and thermodynamics.

Searching for a definition of “Life” is a great reminder of the crucial lesson that we do not find definitions lying out there in the world; we find stuff out there in the world, and it’s our job to choose definitions that help us make sense of it, carving up the world into useful categories. When it comes to life, it’s not so easy to find a definition that includes everything that we would like to think of as living, but excludes the things we don’t.

For example: is the Milky Way galaxy alive? Probably not, so find a good definition that unambiguously excludes it. Keep in mind that the Milky Way, like any good galaxy, metabolizes raw materials (turning hydrogen and helium into heavier elements) and creates complexity out of simplicity, and does so by taking advantage of a dramatic departure from thermal equilibrium (of which CV readers are well aware) to build organization via an entropy gradient.

Update: Unbeknownst to me, Carl Zimmer had just written about this exact topic in Seed. Hat tip to 3QD.

Mistakes

We outsource to Clifford the task of advertising the Categorically Not! events that KC Cole organizes at the Santa Monica Art Studios. Except for this Sunday, since I’m going to be one of the presenters, and I never shy from doing my own PR. The event (see blurb below) will begin at 6:30; everyone is welcome.

The topic is Mistakes! I think we’re all familiar with them. As the scientist, I suppose it’s my job to talk about mistakes made by scientists, and I’m not too proud to stoop to using Einstein as my example. He made some whoppers, and that’s not even including his personal life.

Any fun examples of scientific mistakes? Best would be those that teach some cute lesson about how true progress is impossible if you’re too timid to make mistakes, etc etc. Ideas are welcome.

Here is the blurb:

Blunders, boo boos, bloopers, errors, slip-ups, goofs, misinterpretations and misunderstandings. Everyone makes mistakes. In science, the notion of “mistake” is often itself misunderstood. Frequently, a “mistake” turns out to be nothing more than a limited or skewed perspective. Or as Einstein put it, discovering a new theory is not so much like tearing down a house to build a new one as climbing a mountain from which one can see farther; the old “house” is still there, but is seen in a vastly different context. Mistakes in personal life and matters of policy can ruin lives; but “mistakes” in a humorous context can also make us laugh.

For our September 9th Categorically Not!, Caltech theoretical physicist Sean Carroll will talk about how mistakes are an inevitable part of scientific inquiry. From Aristotle through Kepler to Einstein, leaps in understanding have often been the offspring of wrong ideas, or right ideas that were suggested for the wrong reasons. (And what about Einstein’s so-called “biggest blunder”?) Sean is the author of a textbook on general relativity, lecturer in a course on cosmology offered by the Teaching Company, and a blogger at Cosmic Variance.

For a psychological perspective, social psychologist Carol Tavris will talk about her new book: Mistakes Were Made (But Not by Me): Why we justify foolish beliefs, bad decisions, and hurtful acts. She’ll describe the biases that blind us to our mistakes, make us unwilling to change unsupported beliefs, and allow us to think ourselves above conflicts of interest. She’ll also explain how the need to justify mistakes prevents us from realizing we might be wrong, ensuring we make the same mistakes again. The antidotes are the scientific method, and a sense of humor.

And as for sense of humor, the endlessly talented Orson Bean will talk about how mistakes are the basis of comedy. Orson won a Tony nomination for his role in Subways Are For Sleeping, appeared regularly on the Tonight Show with Jack Paar (and later Johnny Carson), and hosted numerous game shows (he is the last surviving panelist from To Tell the Truth). More recently, he played Dr. Lester in Being John Malkovich as well as numerous other film and TV roles. He is also the author of the book Me and the Orgone: One Guy’s Search for the Meaning of it All.

National Academy: Dark Energy First, Maybe LISA Second

The National Academy of Sciences panel charged with evaluating the Beyond Einstein program has come out with its recommendations. Briefly: the first priority should be the Joint Dark Energy Mission (where “joint” means “with the Department of Energy”), but we should keep up some amount of work on LISA, the Laser Interferometer Space Antenna. Steinn has the lowdown, so you should go there for details.

I am happy to know that JDEM will go forward (if NASA listens to the panel, about which I’m less sure than Steinn seems to be); very happy that LISA gets at least some support, although if I were the European Space Agency I’d certainly be shopping around for more reliable partners; slightly bemused that little effort seemed to go into pushing a CMB probe; and very sad to see X-ray astronomy get the shaft, as Constellation-X and EXIST seem right out of the picture. We can only hope for happier times ahead.

Prof in a Box

Thomas Benton, writing in the Chronicle of Higher Education, describes the process by which the Teaching Company produces its recorded college-level courses for popular consumption:

[L]ecturers are chosen on the basis of “teaching awards, published evaluations of professors, newspaper write-ups of the best teachers on campus, and other sources.” Selected professors are invited to give a sample lecture, which is then reviewed by the company’s regular customers. The most favored professors are brought to a special studio near Washington, where their lecture series is recorded and filmed.

It all sounds rather exciting, like the academic equivalent of being discovered in a coffee shop by a Hollywood casting director.

Yes indeed! And there I was, last April, toiling away at Teaching Company World Headquarters in Chantilly, Virginia, to produce a set of lectures on cosmology and particle physics. These are now available as Dark Matter, Dark Energy: The Dark Side of the Universe, a series of 24 half-hour lectures aimed at anyone with a DVD player and a smidgen of curiosity about the natural world. In plenty of time for Christmas, I may add.

Even though the lectures are nominally about dark matter and dark energy, I used them as an excuse to cover lots of fun stuff about general relativity, particle physics within and beyond the Standard Model, and the early universe. Here is the lecture outline:

  1. Fundamental Building Blocks
  2. The Smooth, Expanding Universe
  3. Space, Time, and Gravity
  4. Cosmology in Einstein’s Universe
  5. Galaxies and Clusters
  6. Gravitational Lensing
  7. Atoms and Particles
  8. The Standard Model of Particle Physics
  9. Relic Particles from the Big Bang
  10. Primordial Nucleosynthesis
  11. The Cosmic Microwave Background
  12. Dark Stars and Black Holes
  13. WIMPs and Supersymmetry
  14. The Accelerating Universe
  15. The Geometry of Space
  16. Smooth Tension and Acceleration
  17. Vacuum Energy
  18. Quintessence
  19. Was Einstein Right?
  20. Inflation
  21. Strings and Extra Dimensions
  22. Beyond the Observable Universe
  23. Future Experiments
  24. The Past and Future of the Dark Side

The Teaching Company does a great job with production, so there are plenty of riveting graphics along the way. The actual lectures are given in a tiny studio in front of just a couple of people, which is not my preferred mode of speaking; I much prefer to have a real audience that will laugh and furrow their brows in puzzlement, as appropriate. So I don’t think my delivery was as sprightly as it could have been, especially in the first couple of lectures when I was getting used to the process. But there’s always the content, I suppose. And I wear a variety of fetching jackets and ties throughout the lectures, so in addition to deep insights about the workings of the universe, you also get a fashion show.

If cosmology isn’t your thing, the Teaching Company has an impressive array of courses on all sorts of stuff, from ancient history to modern jazz. It’s been getting good reviews, such as a recent Wall Street Journal article that refers to us lecturers as “reputable and often quite talented,” which I think is good. As Benton goes on to say:

Even as more and more people find higher education financially out of reach, or impractical to continue beyond early adulthood, recorded lectures — combined with the increasing availability of online lecture content and Web resources like the Wikipedia and countless blogs — are bringing on the Golden Age of the autodidact. I can’t help thinking that Diderot would approve, and I wish academe would do more to encourage such activities.

I’m sure Diderot would indeed approve, if he could just figure out how to work the remote on the DVD player.

Why Is There Something Rather Than Nothing?

The best talk I heard at the International Congress of Logic Methodology and Philosophy of Science in Beijing was, somewhat to my surprise, the Presidential Address by Adolf Grünbaum. I wasn’t expecting much, as the genre of Presidential Addresses by Octogenarian Philosophers is not one noted for its moments of soaring rhetoric. I recognized Grünbaum’s name as a philosopher of science, but didn’t really know anything about his work. Had I known that he has recently been specializing in critiques of theism from a scientific viewpoint (with titles like “The Poverty of Theistic Cosmology“), I might have been more optimistic.

Grünbaum addressed a famous and simple question: “Why is there something rather than nothing?” He called it the Primordial Existential Question, or PEQ for short. (Philosophers are up there with NASA officials when it comes to a weakness for acronyms.) Stated in that form, the question can be traced at least back to Leibniz in his 1697 essay “On the Ultimate Origin of Things,” although it’s been recently championed by Oxford philosopher Richard Swinburne.

The correct answer to this question is stated right off the bat in the Stanford Encyclopedia of Philosophy: “Well, why not?” But we have to dress it up to make it a bit more philosophical. First, we would only even consider this an interesting question if there were some reasonable argument in favor of nothingness over existence. As Grünbaum traces it out, Leibniz’s original claim was that nothingness was “spontaneous,” whereas an existing universe required a bit of work to achieve. Swinburne has sharpened this a bit, claiming that nothingness is uniquely “natural,” because it is necessarily simpler than any particular universe. Both of them use this sort of logic to undergird an argument for the existence of God: if nothingness is somehow more natural or likely than existence, and yet here we are, it must be because God willed it to be so.

I can’t do justice to Grünbaum’s takedown of this position, which was quite careful and well-informed. But the basic idea is straightforward enough. When we talk about things being “natural” or “spontaneous,” we do so on the basis of our experience in this world. This experience equips us with a certain notion of naturalness — theories are natural if they are simple and not finely-tuned, configurations are natural if they aren’t inexplicably low-entropy.

But our experience with the world in which we actually live tells us nothing whatsoever about whether certain possible universes are “natural” or not. In particular, nothing in science, logic, or philosophy provides any evidence for the claim that simple universes are “preferred” (whatever that could possibly mean). We only have experience with one universe; there is no ensemble from which it is chosen, on which we could define a measure to quantify degrees of probability. Who is to say whether a universe described by the non-perturbative completion of superstring theory is likelier or less likely than, for example, a universe described by a Rule 110 cellular automaton?

It’s easy to get tricked into thinking that simplicity is somehow preferable. After all, Occam’s Razor exhorts us to stick to simple explanations. But that’s a way to compare different explanations that equivalently account for the same sets of facts; comparing different sets of possible underlying rules for the universe is a different kettle of fish entirely. And, to be honest, it’s true that most working physicists have a hope (or a prejudice) that the principles underlying our universe are in fact pretty simple. But that’s simply an expression of our selfish desire, not a philosophical precondition on the space of possible universes. When it comes to the actual universe, ultimately we’ll just have to take what we get.

Finally, we physicists sometimes muddy the waters by talking about “multiple universes” or “the multiverse.” These days, the vast majority of such mentions refer not to actual other universes, but to different parts of our universe, causally inaccessible from ours and perhaps governed by different low-energy laws of physics (but the same deep-down ones). In that case there may actually be an ensemble of local regions, and perhaps even some sensibly-defined measure on them. But they’re all part of one big happy universe. Comparing the single multiverse in which we live to a universe with completely different deep-down laws of physics, or with different values for such basic attributes as “existence,” is something on which string theory and cosmology are utterly silent.

Ultimately, the problem is that the question — “Why is there something rather than nothing?” — doesn’t make any sense. What kind of answer could possibly count as satisfying? What could a claim like “The most natural universe is one that doesn’t exist” possibly mean? As often happens, we are led astray by imagining that we can apply the kinds of language we use in talking about contingent pieces of the world around us to the universe as a whole. It makes sense to ask why this blog exists, rather than some other blog; but there is no external vantage point from which we can compare the relative likelihood of different modes of existence for the universe.

So the universe exists, and we know of no good reason to be surprised by that fact. I will hereby admit that, when I was a kid (maybe about ten or twelve years old? don’t remember precisely) I actually used to worry about the Primordial Existential Question. That was when I had first started reading about physics and cosmology, and knew enough about the Big Bang to contemplate how amazing it was that we knew anything about the early universe. But then I would eventually hit upon the question of “What if the universe didn’t exist at all?”, and I would get legitimately frightened. (Some kids are scared by clowns, some by existential questions.) So in one sense, my entire career as a physical cosmologist has just been one giant defense mechanism.

Arguments For Things I Don’t Believe, 1: Research on String Theory is Largely a Waste of Time

First in a prospective series of my own versions of the best arguments for conclusions I don’t personally share. I’m supposed to stick to statements that I believe are true, even if I don’t think they warrant the conclusion. The idea is to probe presuppositions, put our ideas to the test, and of course to implicitly diss the less-good arguments for things we don’t believe. And who knows, maybe we’ll come up with arguments that are so great we’ll change our minds! (By slipping into the royal “we” I’m encouraging others to play along.) So here we go: the best argument I can think of for why research on string theory is a waste of time.

Traditionally, the greatest progress in physics has come through an intense interaction between theory and experiment. We have learned new things when experiments were good enough to bring us data that didn’t fit into the models of the time, but our theoretical understanding was also sufficiently developed that we had the tools to formulate useful hypotheses. While we know that classical general relativity and quantum mechanics are fundamentally incompatible and must someday be reconciled, straightforward dimensional analysis suggests that detailed experimental information about the workings of such a reconciliation (as opposed to true-but-vague statements like “gravity exists” or “spacetime is four-dimensional on large scales”) won’t be available at energies below the Planck scale, which is hopelessly out of reach at the current time.

A defensible response to this lack of detailed experimental input would be to place the problem of quantizing gravity on the back burner while we think about other things. And this was indeed the strategy pursued by the overwhelming majority of theoretical physicists, up until the 80’s. Two things caused a change: the drying-up of the river of experimental surprises that had previously kept particle theory vibrant and unpredictable, and the appearance of string theory as a miraculously promising theory of quantum gravity. Even though the Planck scale was still just as inaccessible, string theory was so good that it became reasonable to hope that we could figure it all out just by using brainpower, even without Planckian accelerators.

But it hasn’t worked out that way. Gadflies point to the landscape of low-energy manifestations of string theory as the nail in the coffin for any hopes to uniquely predict new particle physics from string theory. But that is only a subset of the more significant challenge, and understanding particle physics beyond the Standard Model was never the primary motivation of most string theorists anyway — it was quantizing gravity.

The real problem is that string theory isn’t a theory. It’s just part of a theory, and we don’t know what that theory is, although sometimes we call it M-theory. As Aaron explains in a very nice post, the thing we understand is “perturbative” string theory, which is a fancy way of saying “the part of M-theory where small perturbations around empty space act like weakly-interacting strings.” We’ve known all along that colorful stories about loops of string propagating through spacetime only captured part of the story, but we’re beginning to catch on to how difficult it will be to capture the whole thing. The Second String Revolution in the 90’s taught us a great deal about M-theory, but it’s hard to know whether we should be more impressed with what we’ve been able to learn even without experimental input, or more daunted by the task of finishing the job.

Within our current understanding of string theory, there is not a single experiment we can even imagine doing (much less realistically hope to do) that would falsify string theory. We can’t make a single unambiguous prediction, even in principle. I used to think that string theory predicted certain “stringy” behavior of scattering cross-sections at energies near the Planck scale; but that’s not right, only perturbative string theory predicts such a thing. “String theory” is part of a larger structure that we don’t understand nearly well enough to make contact with the real world as yet, and it’s completely possible that another century or two of hard thinking won’t get us to that goal. It made sense to be optimistic in the 80’s that there was enough rigidity and uniqueness in the theory that we would be led more or less directly to contact with observation; but that’s not what has happened.

The best reason to think that research on string theory is largely a waste of time is that it’s just too hard.

Pretty convincing, eh? But I don’t buy it, even though I think I’ve adhered to my self-imposed rule that I believe every individual sentence above. It might turn out to be the case that another century or two of hard thinking won’t get us any closer to connecting string theory with the real world, but I don’t see any reason to be that pessimistic. The thing that’s really hard to get across at a popular level is that the theory really is rigid and unique, deep down; it’s the connections between “deep down” and the world around us that are the hard part. Count me as one of those who is more impressed with what we have learned than daunted by what we haven’t; if I were to bet, I would say that more thinking will continue to lead to more breakthroughs, and ultimately a version of M-theory that can rightly be called “realistic.”

In the meantime, the advent of sexy new data from the LHC and elsewhere will draw a certain fraction of brainpower away from string theory and into phenomenology, but there will be plenty left over. The field as a whole will fitfully establish a portfolio of different approaches, as it usually does. And there will undoubtedly be surprises around the corner.
