Science

The universe is structured like a language

A little while ago I went to see Zizek!, a new documentary about charismatic and controversial Slovenian philosopher and cultural critic Slavoj Zizek. Part of the Zizekian controversy can be traced straightforwardly to his celebrity — not hard to get fellow academics ornery when you’re greeted by admiring throngs at each of your talks (let me tell you) — but there is also his propensity for acting in ways that are judged to be somewhat frivolous: frequent references to pop culture, an unrestrained delight in telling jokes. I was fortunate enough to see Zizek in person, as part of a panel discussion following the film. He is a compelling figure, effortlessly outshining the two standard-issue academics flanking him on the panel. He adamantly insisted that he had no control over the documentary of which he was the subject, indeed that he hadn’t even seen it, but then revealed that a number of important scenes were admittedly his idea. In one example, the camera lingers on a striking portrait of Stalin in his apartment, which the cinematic Zizek explains as a litmus test, a way of interrogating the bourgeois sensibilities of his visitors. The flesh-and-blood Zizek, meanwhile, pointed out that it was just a joke, and that he would never have something so horrible as a portrait of Stalin on his wall. It ties into his notion that a film will never reveal the true person behind the scholar or public figure, nor should it; the ideas will stand or fall by themselves, separate from their personification in an actual human. I have no educated opinion about his standing as a thinker; see John Holbo, Adam Kotsko (and here), or Kieran Healy for some opinions, or read this interview in The Believer and judge for yourself.

The movie opens with a Zizek monologue on the origin of the universe and the meaning of life. We can talk all we like, he says, about love and meaning and so on, but that’s not what is real. The universe is “monstrous” (one of his favorite words), a mere accident. “It means something went terribly wrong,” as you can hear him say through a distinctive lisp in this clip from the movie. He even invokes quantum fluctuations, proclaiming that the universe arose as a “cosmic catastrophe” out of nothing.

I naturally cringed a little at the mention of quantum mechanics, but his description ultimately got it right. Our universe probably did originate as a quantum fluctuation, either “from nothing” or within a pre-existing background spacetime. Mostly, to be honest, I was just jealous. As a philosopher and cultural critic, Zizek gets not only to bandy about bits of quantum cosmology, but is permitted (even encouraged) to connect them to questions of love and meaning and so on. As professional physicists, we’re not allowed to talk about those questions — referees at the Physical Review would not approve. But it’s worth interrogating this intellectual leap, from the accidental birth of the universe to the richness of meaning we see around us. How did we get here from there, and why?

It’s the possibility of addressing this question that I take to be the most significant aspect of the “computational quantum universe” idea advocated by Seth Lloyd in his new book Programming the Universe. Lloyd is a somewhat controversial figure in his own right, but undoubtedly an influential physicist; he was the first to propose a plausible design for a quantum computer: a computer that takes advantage of the full quantum-mechanical wavefunction of its elements, rather than being content with the ordinary classical states.

To Lloyd, quantum computation is a hammer, and it’s tempting to see everything interesting as a nail — from black holes to quantum gravity to the whole universe. The frustrating aspect of his book is the frequency with which he insists that “the universe is a quantum computer,” without always making it clear just what that means or why we should care. What is the universe supposed to be computing, anyway? Its own evolution, apparently. And what good is that, exactly? It’s hard to tell at first whether the entire idea is merely a particular language in which we are free to talk about good old-fashioned physics and cosmology, or whether it’s a profound change of perspective that can be put to good use. What physicists would really like to know is, does thinking of the universe as a quantum computer actually help us solve any problems?

Well, maybe. My own personal reconstruction of the problem that Lloyd is suggesting we might be able to solve by thinking of the universe as a quantum computer, although in slightly different words, is precisely that raised by Zizek’s monologue: Why, in the course of evolving from the early universe to the end of time, do we pass through a phase featuring the fascinating and delightful complexity we see all around us?

Let’s be more specific about what that means. The early universe — at least, the hot Big Bang with which our observable universe began — is a very low-entropy state. That is, it’s a very unlikely configuration in the space of all the ways one could arrange the universe — much like having all of the air molecules accidentally be located in one half of a room (although much worse). But entropy is increasing as the universe evolves, just like the Second Law of Thermodynamics says it should. The late universe will be very high entropy. In particular, if the universe continues to expand forever (which seems likely, although one never knows), we are evolving toward heat death, in which matter cools down and is dispersed thinly over space after black holes form and evaporate. This is a “natural” state for the universe, one which will essentially stay that way in perpetuity.
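
Just to put a number on “much worse,” here is a minimal back-of-the-envelope sketch in Python; the 2.7×10²² is roughly the number of molecules in a liter of air, an assumption made purely for illustration:

    import math

    N = 2.7e22                       # roughly the molecules in a liter of air (illustrative assumption)
    log10_prob = -N * math.log10(2)  # the chance of all of them sitting in one half is (1/2)^N
    print(f"P(all in one half) ~ 10^({log10_prob:.2e})")
    # prints something like 10^(-8.13e+21); the early universe is far more improbable still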

However. While the early universe is low-entropy and the late universe is high-entropy, both phases are simple. That is, their macrostates can be described in very few words (they have low Kolmogorov complexity): the early state was hot and dense and smoothly-distributed, while the final state will be cold and dilute and smoothly-distributed. But our current universe, replete as it is with galaxies and planets and blogospheres, isn’t at all simple, it’s remarkably complex. There are individual subsystems (like you and me) that would require quite a lengthy description to fully specify.
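
Kolmogorov complexity is uncomputable in general, but compressed length gives a crude feel for the “few words” versus “lengthy description” distinction. A toy sketch (using zlib purely as a stand-in; compressibility is not a real measure of the interesting kind of complexity):

    import random
    import zlib

    n = 100_000
    smooth = b"0" * n                                        # "hot, dense, smooth": a short description suffices
    random.seed(0)
    messy = bytes(random.getrandbits(8) for _ in range(n))   # lots of independent details to specify

    print(len(zlib.compress(smooth)))   # tiny; a short description suffices
    print(len(zlib.compress(messy)))    # about n bytes; no short description exists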

So: Why is it like that? Why, in the evolution from a simple low-entropy universe to a simple high-entropy universe, do we pass through a complex medium-entropy state?

Lloyd’s suggested answer, to the extent that I understand it, arises from the classic thought experiment of the randomly typing monkeys. A collection of monkeys, randomly pecking at keyboards, will eventually write the entire text of Hamlet — but it will take an extremely long time, much much longer than the age of the observable universe. For that matter, it will take a very long time to get any “interesting” string of characters. Lloyd argues that the situation is quite different if we allow the monkeys to randomly construct algorithms rather than mere strings of text; in particular, the likelihood that such an algorithm will produce interesting (complex) output is much greater than the chance of randomly generating an interesting string. This phenomenon is easily demonstrated in the context of cellular automata: it’s remarkably easy to find very simple rules for automata that generate extremely complex output from simple starting configurations.
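
If you want to see that for yourself, here is a minimal sketch of Wolfram’s Rule 30, a one-dimensional automaton whose entire rule fits in a single byte, started from a single live cell:

    RULE = 30
    WIDTH, STEPS = 79, 40

    cells = [0] * WIDTH
    cells[WIDTH // 2] = 1   # simple starting configuration: one live cell in the middle

    for _ in range(STEPS):
        print("".join("#" if c else " " for c in cells))
        # each new cell is the RULE bit indexed by its (left, center, right) neighborhood
        cells = [(RULE >> ((cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % WIDTH])) & 1
                 for i in range(WIDTH)]

The rule and the initial state could hardly be simpler, yet the output is irregular enough that the center column of Rule 30 has been used as a random number generator.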

So the force of the idea that “the universe is a quantum computer” lies in an understanding of the origin of complexity. Think of the different subsystems of the universe, existing in slightly different arrangements, running different quantum algorithms. It is much easier for such subsystems to generate complex output computationally than one might guess from an estimate of the likelihood of hitting upon complexity by randomly choosing configurations directly. There is an obvious connection to genetics and evolution; DNA sequences can be thought of as lines of computer code, and mutations and genetic drift allow organisms to sample different algorithms. It’s much easier for natural selection to hit upon interesting possibilities by acting on the underlying instruction set, rather than by acting on the (much larger) space of possible configurations of the pieces of an organism.
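
A toy version of this is Dawkins’ old “weasel” demonstration, sketched below: cumulative selection acting on a mutating string of instructions finds a target astronomically faster than random typing ever would. (It is a cartoon of selection, not a model of real biology, and the fixed target string is just a stand-in for “interesting output.”)

    import random
    random.seed(1)

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def fitness(s):
        return sum(a == b for a, b in zip(s, TARGET))

    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while fitness(parent) < len(TARGET):
        children = ["".join(c if random.random() > 0.05 else random.choice(ALPHABET) for c in parent)
                    for _ in range(100)]
        parent = max(children, key=fitness)   # selection acts on small changes to the underlying "code"
        generations += 1

    print(generations)   # of order a hundred generations, versus ~27**28 (about 10**40) random strings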

Of course I don’t really know if any of this is true or interesting. In particular, the role of the “quantum” nature of the computation seems rather unclear; at a glance, it would seem that much of the universe’s manifest complexity lies squarely in the classical regime. But big ideas are fun, and concepts like entropy and complexity are far from completely understood, so perhaps it’s permissible to let our imaginations run a little freely here.

The reason why this discussion of quantum computation and the complexity of the universe fits comfortably with the story of Zizek is that he should understand this (if he doesn’t already). Zizek is a Lacanian, a disciple of famous French psychoanalyst Jacques Lacan. Lacan was a similarly controversial figure, although his charisma manifested itself as taciturn impenetrability rather than voluble popular appeal. One of Lacan’s catchphrases was “the unconscious is structured like a language.” Which I take (not having any idea what I am talking about) as a claim that the unconscious is not simply a formless chaos of mysterious impulses; rather, it has an architecture, a grammar, rules of operation much like those of our higher-level consciousness.

One way of summarizing Lloyd’s explanation of the origin of complexity might be: the universe is structured like a language. It is not just a random configuration of particles typed out by tireless monkeys; it is a quantum computer, following the rules of its algorithms. And by following these rules the universe manages to generate configurations of enormous complexity. Examples of which include science, poetry, love, meaning, and all of those aspects of human life that lend it more interest than we attach to other chemical reactions.

Of course, it’s only a temporary condition. From featureless simplicity we came, and to featureless simplicity we will return. Like a skier riding the moguls, eventually we’ll reach the bottom of the hill, and dissolve into thermal equilibrium. It’s up to us to enjoy the ride.


The Future of Theoretical Cosmology

I’m back from an extraordinarily hectic yet unusually rewarding April Meeting of the American Physical Society in Dallas. The APS has two big meetings each year, the April meetings for very large- and small-scale types (particle physics, nuclear physics, gravitation, astrophysics), and the March meeting for medium-scale types (condensed matter, atomic physics, biophysics). The March meeting is a crucially important event for its constituency, while the April meeting suffers from too much competition and far less customer loyalty, and is correspondingly a much smaller conference (perhaps 1,000-1,500 attendees, as opposed to 6,000 at a typical March meeting). That’s a subject for another post, for those of you out there with an unhealthy interest in APS politics.

(For other reports from the meeting, see Jennifer Ouellette’s Cocktail Party Physics or the mysterious and anonymous Charm &c. Common refrain: “It’s 2006! Why isn’t there decent wireless in this hotel??!!”)

There’s a rule to the effect that any person can give no more than one invited talk at an APS meeting, but such rules are made to be broken and I sneaked in there with two talks. One was a general overview of the accelerating universe and its associated problems, at a special session on Research Talks Aimed at Undergraduates. Having a session devoted to undergrads was a splendid idea, although I suspect that the median age of attendees at my talk was something like 45. That’s because, when asked to pitch a talk to an audience of level of expertise x, most physicists will end up pitching it at a level of expertise x+3. So various people with Ph.D.’s concluded that their best chance of understanding a talk outside their specialty was to attend a session for undergraduates. Perhaps they were right. Before my talk they got to hear nice presentations by Florencia Canelli on particle physics and the top quark, and Paul Chaikin on packing ellipsoids. (Okay, “packing ellipsoids” doesn’t sound like the sexiest topic, but it was filled with fascinating tidbits of information. Did you know that both prolate and oblate ellipsoids pack more efficiently than spheres? That ordered crystalline packings are generally found to be more efficient than random packings, but nobody can prove it in general? That M&M’s are extremely reliable ellipsoids, to better than 0.1%? That the method by which the Mars Corporation makes their M&M’s so regular is a closely guarded secret?)

My other talk was at a joint double session on the past, present, and future of cosmology, co-sponsored by the Division of Astrophysics and the Forum for the History of Physics. Six talks naturally needed to be given: one each on the past/present/future of observational/theoretical cosmology, and organizer Virginia Trimble invited me to speak on The Future of Theoretical Cosmology. The observational session conflicted with my talk to the “undergrads,” but I got to hear the talks on the past and present of theory by Helge Kragh and David Spergel, respectively.

Of course nobody has any idea what the future of theoretical cosmology will be like, given that we know neither what future experiments will tell us, nor what ideas future theorists will come up with. So I defined “the future” to be “100 years from now,” by which time I figured (1) I won’t be around, or (2) if I am around it will be because we will all be living in pods and communicating via the Matrix, and nobody will be all that interested in what I said about the future of cosmology a century earlier.


With those caveats in mind, I did try to make some prognostications about how we will be thinking about three kinds of cosmological issues: composition questions, origins questions, and evolution questions. You can peek at my slides in html or pdf, although I confess that many were cannibalized from other talks. The abbreviated version:

  • Composition Questions. We have an inventory of the universe consisting of approximately 4% ordinary matter, 22% dark matter, and 74% dark energy. But each of these components is mysterious: we don’t know what the dark matter or dark energy really are, nor why there is more matter than antimatter. My claim was that we will have completely understood these questions in 100 years. In each case, there is an active experimental program aimed at providing us with clues, so I’m optimistic that the matter will be closed long before then.
  • Origins Questions. Where did the universe come from, and why do we find it in this particular configuration? Inflation, which received an important boost from the recent WMAP results, is a crucial ingredient in our current picture, but I stressed that there is a lot that we don’t yet understand. In particular, we need to understand the pre-inflationary universe to know whether inflation really provides a robust theory of initial conditions. Thinking about inflation naturally leads us to the multiverse, and I argued that untestable predictions of a theory are perfectly legitimate science, so long as the theory makes other testable predictions. We don’t yet have a theory of quantum gravity that does that, and I prevaricated about whether one hundred years would be sufficient time to establish one. (Naive extrapolation predicts that we won’t be doing Planck-scale experiments until two hundred years from now.)
  • Evolution Questions. Given the initial conditions, we already understand the evolution of small perturbations up to the point where they become large (“nonlinear”). That’s when numerical simulations become crucial, and here I was a little more bold. The very idea of a computer simulation is only about 50 years old, so there’s every reason to expect that the way in which computers are used will look completely different 100 years from now. Quantum computers will be commonplace, and enable parallel processing of enormous power. More interestingly, the types of computation that we’ll be doing will be dramatically different; I suggested that the computers will not only be running simulations to test theories against observations, but will be coming up with theories themselves. Such a prospect is a natural outgrowth of the idea of genetic algorithms, so I don’t think it’s as crazy as it sounds.

The next day I managed to catch no fewer than three sessions filled with provocative talks — one on ultra-high-energy cosmic rays, one on cosmology and gravitational physics, and one on precision cosmology. And I would tell you all about them if I hadn’t lost the keys to my special time-stretching machine that allows me to put aside my day job for arbitrarily long periods so that I can blog at leisure. Probably the most intriguing suggestions were those by Shamit Kachru from SLAC, who argued that considerations from string theory (and in particular the constraint that scalar fields cannot evolve by amounts greater than the Planck scale) imply that gravitational waves produced by inflation will never be strong enough to be observable in the CMB, and those by David Saltzberg from UCLA, who listed an amazing variety of upcoming experiments to detect high-energy astrophysical neutrinos, including listening for sound waves (!) produced when a neutrino interacts with ocean water off the Bahamas. If I decide to become an experimentalist, that’s the one I’m joining.


String Theory, With a View Towards Reality

The Arthur H. Compton Lectures are a great tradition at the Enrico Fermi Institute here at the University of Chicago. Twice each academic year, a postdoc (!) from the EFI gives a series of 8-10 lectures on Saturday mornings, aimed at the general public, on a topic of current scientific interest. The EFI focuses on research in particle physics, astrophysics, and gravitation, so that’s what the lectures tend to cover. They are a great resource, and it’s amazing to see over a hundred people from the community trudge to a lecture hall every Saturday morning to hear about modern physics.

This Spring’s lectures are being given by Nick Halmagyi, a string theorist whose office is right across from mine. The title is String Theory: With a View Towards Reality, and Nick is gradually putting notes and slides online. With two lectures gone by, reality itself has been the focus thus far, as Nick sets up the current state of particle physics. String theory will undoubtedly follow, and when the moment comes to draw the connection between the two, time will probably have run out.

Previous Compton lecturers are a distinguished lot, including our very own Risa. The EFI does a terrible job at keeping them online, but I was able to dig up slides from a few recent lecture series.

If I missed any recent ones with online slides, let me know. And if you’re in the neighborhood, anyone is welcome to come to Nick’s lectures, which are at 11 a.m. most Saturdays. He speaks with a distinct Australian accent, but it’s usually possible to understand him.


Experimental sociology

A little late, but I didn’t want to let this interesting discussion from Tommaso Dorigo and Gordon Watts slip by, about the agonizing process of making experimental particle physics results ready for public consumption. You’ll recall that we mentioned a couple of weeks ago the new results from Fermilab’s Tevatron on B-mixing, a measurement that puts interesting new constraints on the possibilities for physics beyond the Standard Model. The first announcement was from the D0 (“D-Zero”) experiment; as Collin pointed out in the comments, the CDF experiment followed with their own results soon thereafter.

From the CDF point of view, this is not how things are supposed to be; CDF is supposed to get there first, and D0 is supposed to confirm their results. Speaking from the CDF side, Tommaso talks about the process:

The publication process of CDF data analyses is baroque, bordering the grotesque. Once a group finalizes their result and presents it at internal meetings, the result has to be blessed. This involves three rounds of scrutiny, the full documentation of the analysis in internal notes, and often the fight with skeptics who like to sit at meetings and play “shoot the sitting duck” with the unfortunate colleague presenting the result. Usually, when an important result is on, the physicists who produced it are asked to perform additional checks of various kinds, and defend it with internal referees. When all of that is through, and not a day earlier, the result can be shown at Physics conferences.

After that happens, one would like to get the result on a Physics journal as soon as possible – to be cited!!! But just then, another much longer nightmare starts, when a process called “godparenting” begins and three knowledgeable colleagues (the godparents) are designated to scrutinize every detail of the work. Then a draft paper is produced, and in the following two weeks all the collaborators can play “shoot the duck” in written form, by sending criticism and demanding yet more checks. Then a second draft follows, and the process repeats…. In the end, usually six months pass between the blessing of a result at the physics meeting and the forwarding of a paper to a journal.

Gordon, from the D0 side, agrees with the general outline.

I don’t think it is that much different than what CDF has to go through — perhaps a bit more streamlined. We are all afraid that something wrong will make it out; hence all the layers of cross checking that go on. All of the collaboration is on the author list; this is the way the collaboration makes sure that the results that get out are correct. It can be a pain!

Read the whole thing.

Of course there’s a lot more to the sociology of particle physics experiments than deciding when to release results. Interestingly, there are a lot of great books that take high-energy experiments and experimenters as their source material. Even novels — I recently read A Hole in Texas by Herman Wouk (best known for The Caine Mutiny and The Winds of War). It’s a short book set in the aftermath of the cancellation of the Superconducting Super Collider, imagining the hysteria if China managed to beat us to the Higgs boson. As a novel, I’ve read better; the romantic and political plots are somewhat perfunctory and not very believable. (And obviously written by a man; where else can you find no fewer than three attractive and accomplished women throwing themselves at a somewhat over-the-hill and not especially charming male physicist?) But the physics is surprisingly good; Wouk really put some effort into getting it right, including field trips to Fermilab and the SSC site.

And then you have your honest social-science explorations of the anthropology of the tribe of particle physicists. Beamtimes and Lifetimes, by anthropologist Sharon Traweek, treats HEP experimenters the same way we would treat an isolated tribe in the Amazon jungle, trying to figure out what makes them tick. (I’m still not sure.) But for my money, far and away the most insightful book is Nobel Dreams by Gary Taubes, the story of how Carlo Rubbia smashed the competition, not always using the most fair-minded tactics, to discover the W boson and win the Nobel Prize. Oh yes, and how he then failed to win another Nobel for discovering supersymmetry, despite repeatedly suggesting that his UA1 experiment had found evidence for it. A fascinating read, one that makes you tremble at the ambition of Rubbia and his lieutenants, admire the superhuman dedication of the many physicists on the project, and thank your lucky stars that your own working hours are a bit more sensible.

Update: Tommaso and Gordon explain more about the physics of the result.


Particle physics marches on

Physicists (like us) are, with good reason, eagerly anticipating results from the Large Hadron Collider at CERN, scheduled to turn on next year. The LHC will collide protons at much higher energies than ever before, giving us direct access to a regime that has been hidden from us up to now. But until then, a whole host of smaller experiments are interrogating particle physics from a variety of different angles, using clever techniques to get indirect information about new physics. Just a quick rundown of some recent results:

  1. Yesterday the MINOS experiment at Fermilab (Main Injector Neutrino Oscillation Search) released their first results. (More from Andrew Jaffe.) This is one of those fun experiments that shoots neutrinos from a particle accelerator onto an underground journey, to be detected in a facility hundreds of miles away — in this case, the Soudan mine in Minnesota. They confirm the existence of neutrino oscillations, with a squared-mass difference between the two neutrino states of Δm² = 0.0031 eV² (the two-flavor oscillation formula behind numbers like this is spelled out just after this list). The neutrinos left Fermilab as muon neutrinos, and oscillated into either electron or tau neutrinos, or something more exotic. MINOS can be thought of as a follow-up to the similar K2K experiment in Japan, with a longer baseline and more neutrinos.
  2. The previous week, the D0 experiment at Fermilab’s Tevatron (the main proton-antiproton collider) released new results on the oscillations of a different kind of particle, the Bs meson (a composite of a strange quark and a bottom antiquark), as reported in this paper. For better or for worse, the results are splendidly consistent with the predictions of the minimal Standard Model. These B-mixing experiments are very sensitive to higher-order contributions from new physics at high energies, such as supersymmetry. D0 is telling us something we have heard elsewhere: that susy could already have easily been detected if it is there at the electroweak scale, but it hasn’t been seen yet. Either it’s cleverly hiding, or there is no susy at the weak scale — which would come as a surprise (a disappointing one) to many people.
  3. Finally, a little-noticed experiment in Italy has been looking for axion-like particles — and claims to have seen evidence for them! (See also Doug Natelson and Chad Orzel.) The usual (although still hypothetical) axion is a light spin-0 particle that helps explain why CP violation is not observed in the strong interactions. (There is a free parameter governing strong CP violation that should naively be of order unity, but is experimentally constrained to be less than 10⁻¹⁰.) The axion is a “pseudoscalar” (changes sign under parity), and couples to electromagnetism in a particular way, so that photons can convert into axions in a strong magnetic field. (Another mixing experiment!) The axion relevant to the strong CP problem has certain definite properties, but other similar spin-0 particles may exist that couple to photons in similar ways, and these are generically referred to as axion-like. Zavattini et al. have fired a laser through a magnetic field and noticed that the polarization has rotated, which can be explained by an axion-like particle with a mass around 10⁻³ eV, and a coupling of around (4×10⁵ eV)⁻¹. My expert friends tell me that the experimentalists are very good, and the result deserves to be taken seriously. Trouble is, the particle you need to invoke is in strong conflict with bounds from astrophysics — these particles can be produced in stars, leading to various sorts of unusual behavior that aren’t observed. Now maybe the astrophysical bound can somehow be avoided; in fact, I’m sure some clever theorists are working on it already. But it would also be nice to get independent confirmation of the experimental effect.
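
For reference, the two-flavor formula behind the MINOS number in item 1 is the standard textbook one, with θ the mixing angle, L the baseline, and E the neutrino energy:

    P(νμ → νμ) = 1 − sin²(2θ) · sin²(1.27 Δm² [eV²] · L [km] / E [GeV])

With L ≈ 735 km from Fermilab to the Soudan mine, a Δm² of about 0.003 eV² puts the first oscillation dip at neutrino energies of a couple of GeV, comfortably within the beam’s energy range.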


The Foundational Questions Institute (Anthony Aguirre)

The Foundational Questions Institute (FQXi) was mentioned in the comments of Mark’s post about John Barrow’s Templeton Prize. This is a new organization that is devoted to supporting innovative ideas at the frontiers of physics and cosmology. It is led by Max Tegmark of MIT and Anthony Aguirre of UCSC, two leading young cosmologists, backed up by an extremely prestigious Scientific Advisory Panel.

Sounds like a great idea, but some of us have questions, primarily concerning the source of funding for FQXi — currently the John Templeton Foundation. The Templeton Foundation is devoted to bringing together science and religion, which may or may not be your cup of tea. I’m already on the record as turning down money from them (see also this Business Week article) — and believe me, turning down money is not part of my usual repertoire. But Max and Anthony and the rest are good scientists, so we here at Cosmic Variance thought it would be good to hear the story behind FQXi in their own words. We invited Anthony to contribute a guest post about the goals and procedures of the new institute, and he was kind enough to agree. Feel free to ask questions and be politely skeptical (or for that matter enthusiastically supportive), and we can all learn more about what’s going on.

———-

I (Anthony Aguirre) have been invited by Sean to write a guest blog entry discussing an exciting new project that Max Tegmark and I have been leading: Foundational Questions in Physics and Cosmology (“FQX”). This program was publicly announced in October, and the Foundational Questions Institute (FQXi) was formally launched as a legal entity in February, as was its first call for proposals. There is a plethora of information on FQXi at www.fqxi.org, but the kind invitation by Cosmic Variance provides a good opportunity to outline informally what FQXi is and why we think it is important, to address some reservations voiced in this forum, and to generate some discussion in the physics and cosmology community.

What is FQXi all about? Its stated mission is “To catalyze, support, and disseminate research on questions at the foundations of physics and cosmology, particularly new frontiers and innovative ideas integral to a deep understanding of reality, but unlikely to be supported by conventional funding sources.” Less formally, the aim of FQXi is to allow researchers in physics, cosmology, and related fields who like to think about and do real research about really big, deep, foundational or even “ultimate” questions, to actually do so — when otherwise they could not. We boiled this type of research down into two defining terms: the research should be foundational (with potentially significant and broad implications for our understanding of the deep or “ultimate” nature of reality) and it should be unconventional (consisting of rigorous research which, because of its speculative, non-mainstream, or high-risk nature, would otherwise go unperformed due to lack of funding.) The particular types of research FQXi will support are detailed in the FQXi Charter and in the first call for proposals, which also features a handy (but by no means whatsoever comprehensive) list of example projects, and their likelihood of being suitable for FQXi funding. In addition to straightforward grants, FQXi will run various other programs — “mini”-grants, conferences, essay contests, a web forum, etc. — focused on the same sort of science.

Why is FQXi important? There are a number of foundational questions that are of deep interest to humanity at large — and are the (often hidden) passion of and inspiration for researchers — but which various financial and “social” pressures make it very difficult for researchers to actually pursue. National funding sources, for example, tend to shy away from research that is high-risk/high-reward, or speculative, or very fundamental, or unconventional, or “too philosophical”, and instead support research using fairly proven methods with a high probability of advancing science along known routes. There is nothing wrong with this, and it creates a large amount of excellent science. But it leaves some really interesting questions on the sidelines. We go on at length about this in the FQXi Charter — but the researchers FQXi aims to support will know all too well what the problems are. Our goal is to fund the research into foundational questions in physics and cosmology that would otherwise go unfunded.

More money to support really exciting, interesting, and, yes, fun research seems like an unreservedly good thing. Nonetheless, a couple of significant reservations have been voiced to us, both by writers on this blog and others. These are:

1) Some feel research that is very speculative or “borderline philosophical” is just a waste of time and resources — if the research was worth doing, conventional agencies would fund it. We won’t accept this criticism from anyone who has worked on either time machines or the arrow of time (so Sean is out) :), but from others we acknowledge that they feel this way, we respectfully disagree, and we think that many of the giants of 20th century physics (Einstein, Bohr, Schroedinger, Pauli, etc.) would also disagree. Ultimately, those who feel this way are free not to participate in FQXi. We also note that we think it would be great if some private donors were also to support more conventional research in a way that complemented or supplemented federal funding (as they do in, e.g., the Sloan and Packard fellowships); that, however, is not the case here: the donation supporting FQXi is expressly for the purpose of supporting foundational research. Which brings us to…

2) The second major reservation concerns FQXi’s current sole source of funding: the John Templeton Foundation (JTF), an organization that espouses and supports the “constructive dialogue between science and religion.” It is understandable that some people may be suspicious of JTF’s involvement with FQXi, and in today’s political climate in which Intelligent Design and other movements seek to undermine science in order to promote a religious and political agenda, such suspicion is especially understandable. But it is as important as ever to also be open-minded and objective. The salient points, we think, regarding JTF and FQXi are:

  • FQXi is a non-profit scientific grant-awarding organization fully independent from its donors (we are actively seeking other donors beyond JTF, see below) and operated in accordance with its Charter. Proposal funding is determined via a standard and rigorous peer-review process, and an expert panel appointed by FQXi. The structure of FQXi is such that donors — including JTF — have no control or influence over individual proposal selection or renewal. Specifically, scientific decisions are made (as enshrined in the FQXi corporate Bylaws) by the Scientific Directorate (Max & I), on the basis of advice from review panels and the Scientific Advisory Panel. The only condition of the JTF grant to FQXi is that FQXi’s grantmaking be consistent with the FQXi Charter, which, as stated previously, can be viewed in its entirety at fqxi.org.
  • JTF’s stated interest in FQXi is captured in the FQXi Charter: the questions being tackled by researchers funded by FQXi intimately connect with and inform not just scientific fields, but also philosophy, theology and religious belief systems. Answers to these questions will have profound intellectual, practical, and spiritual implications for anyone with deep curiosity about the world’s true nature.
  • While FQXi’s funding is currently all from JTF, we have been strongly encouraged by JTF to seek (and are actively working on finding) additional donors; furthermore, there is no guarantee of JTF funding beyond the first four years — though we certainly hope FQXi will go on long past the initial four-year phase.
  • As for JTF benefiting “by association” with FQXi and the great research we hope that it will support, well, we feel that JTF has been extremely generous, not just in giving a large sum of money to science without strings attached, but also in providing a great deal of support through the complex process of setting up FQXi as an independent institute of just the sort that Max & I wanted. If all this reflects well on JTF, I would submit that they deserve it.

We’ve tried hard to make FQXi’s operation and goals as transparent as possible, so those in the community can make informed decisions on whether they would like to participate in what we are hoping to do. We are very excited by the proposals that are coming in so far, and invite interested scientists to take a look at the call for proposals before it is too late (April 2). For those who are not actively researching foundational questions, we hope to have a very active public discussion and outreach program for both scientists and the interested public; we invite you to periodically check the FQXi website.

Thank you for this opportunity to discuss FQXi at Cosmic Variance.


Lunar laser ranging

Greetings from Toronto, where I’m visiting UofT to talk about dark energy, the arrow of time, and other obsessions of mine. Which has prevented me from as yet writing the long-awaited second installment of “Unsolicited Advice,” the one that will tell you how to choose a graduate school. It is that time of year, after all.

In the meantime, check out this nice post at Anthonares on Lunar laser ranging. The Apollo astronauts, during missions 11, 14, and 15, were sufficiently foresighted to bring along reflecting corner mirrors and leave them behind on the Moon’s surface. Why would they do that? So that, from down here on Earth, we can shoot lasers at the lunar surface and time how long it takes for them to come back. Using this data we can map the Moon’s orbit to ridiculous precision; right now we know where the Moon is to better than a centimeter. This experiment, called Lunar laser ranging, teaches us a lot about the Moon, but it also teaches us about gravity. The fact that we can pinpoint the location of the Earth’s biggest satellite and keep track of it over the course of years provides us with a uniquely precise test of Einstein’s general relativity.
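
The basic numbers are easy to check; here is a minimal sketch (385,000 km is just the rough mean Earth-Moon distance, and real analyses model the orbit, tides, relativistic corrections, and much else):

    c = 299_792_458.0   # speed of light, m/s
    d = 3.85e8          # rough mean Earth-Moon distance, m

    print(2 * d / c)      # round-trip light travel time: about 2.6 seconds
    print(2 * 0.01 / c)   # timing precision for 1 cm in distance: about 7e-11 s, i.e. tens of picoseconds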

You might think that general relativity is already pretty well tested, and it is, but clever folks are constantly inventing alternatives that haven’t yet been ruled out. One example is DGP gravity, invented by Gia Dvali, Gregory Gabadadze, and Massimo Porrati. This is a model in which the observable particles of the Standard Model are confined to a brane embedded in an infinitely large extra dimension of space. Unlike usual models with compact extra dimensions, the extra dimension of the DGP model is hidden because gravity is much stronger in the bulk; hence, the gravitational lines of force from an object on the brane like to stay on the brane for a while before eventually leaking out into the bulk.
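
The standard way to quantify “for a while” is the DGP crossover radius, set (up to order-one factors) by the four- and five-dimensional Planck masses:

    r_c ~ M_Pl² / M_5³ ,   with V(r) ∝ 1/r for r ≪ r_c and V(r) ∝ 1/r² for r ≫ r_c .

Getting the cosmological behavior mentioned below requires r_c to be comparable to the present Hubble radius.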

The good news about the DGP model is that it makes the universe accelerate, even without dark energy! This is one of the things that I talked about at my colloquium yesterday, and I hope to post about in more detail some day. The better news is that it is potentially testable using Lunar laser ranging! The claim is that the DGP model predicts a tiny perturbation of the Moon’s orbit, too small to have yet been detected, but large enough to be within our reach if we improve the precision of existing laser ranging experiments. People are hot on the trail of doing just that, so we may hear results before too long.

Not to get too giddy, the bad news about DGP is that it may be a non-starter on purely theoretical grounds. There are claims that the model has ghosts (negative-energy particles), and also that it can’t be derived from any sensible high-energy theory (see Jacques’s post). I haven’t examined either of these issues very closely, although I hope to dig into them soon. Maybe if I could quit traveling and sit down and read some papers.


WMAP results — cosmology makes sense!

I’ll follow Mark’s suggestion and fill in a bit about the new WMAP results. The WMAP satellite has been measuring temperature anisotropies and polarization signals from the cosmic microwave background, and has finally finished analyzing the data collected in their second and third years of running. (For a brief explanation of what the microwave background is, see the cosmology primer.) I just got back from a nice discussion led by Hiranya Peiris, who is a member of the WMAP team, and I can quickly summarize the major points as I see them.

(Figure: the WMAP temperature power spectrum.)

  • Here is the power spectrum: amount of anisotropy as a function of angular scale (really multipole moment l), with large scales on the left and smaller scales on the right. The major difference between this and the first-year release is that several points that used to not really fit the theoretical curve are now, with more data and better analysis, in excellent agreement with the predictions of the conventional LambdaCDM model. That’s a universe that is spatially flat and made of baryons, cold dark matter, and dark energy.
  • In particular, the octupole moment (l=3) is now in much better agreement than it used to be. The quadrupole moment (l=2), which is the largest scale on which you can make an observation (since a dipole anisotropy is inextricably mixed up with the Doppler effect from our motion through space), is still anomalously low.
  • The best-fit universe has approximately 4% baryons, 22% dark matter, and 74% dark energy, once you combine WMAP with data from other sources. The matter density is a tiny bit low, although including other data from weak lensing surveys brings it up closer to 30% total. All in all, nice consistency with what we already thought.
  • Perhaps the most intriguing result is that the scalar spectral index n is 0.95 ± 0.02. This tells you how the amplitude of fluctuations depends on scale (the power-law form is written out just after this list); if n=1, the amplitude is the same on all scales. Slightly less than one means that there is slightly less power on smaller scales. The reason why this is intriguing is that, according to inflation, it’s quite likely that n is not exactly 1. Although we don’t have any strong competitors to inflation as a theory of initial conditions, the successful predictions of inflation have to date been somewhat “vanilla” — a flat universe, a flat perturbation spectrum. This deviation from perfect scale-free behavior is exactly what you would expect if inflation were true. The statistical significance isn’t what it could be quite yet, but it’s an encouraging sign.
  • A bonus, as explained to me by Risa: lower power on small scales (as implied by n<1) helps explain some of the problems with galaxies on small scales. If the primordial power is less, you expect fewer satellites and lower concentrations, which is what we actually observe.
  • You need some dark energy to fit the data, unless you think that the Hubble constant is 30 km/sec/Mpc (it’s really 72 ± 4) and the matter density parameter is 1.3 (it’s really 0.3). Yet more proof that dark energy is really there.
  • The dark energy equation-of-state parameter w is a tiny bit greater than -1 with WMAP alone, but almost exactly -1 when other data are included. Still, the error bars are something like 0.1 at one sigma, so there is room for improvement there.
  • One interesting result from the 1st-year data is that reionization — in which hydrogen becomes ionized when the first stars in the universe light up — was early, and the corresponding optical depth was large. It looks like this effect has lessened in the new data, but I’m not really an expert.
  • A lot of work went into understanding the polarization signals, which are dominated by stuff in our galaxy. WMAP detects polarization from the CMB itself, but so far it’s the kind you would expect to see being induced by the perturbations in density. There is another kind of polarization (“B-mode” rather than “E-mode”) which would be induced by gravitational waves produced by inflation. This signal is not yet seen, but it’s not really a surprise; the B-mode polarization is expected to be very small, and a lot of effort is going into designing clever new experiments that may someday detect it. In the meantime, WMAP puts some limits on how big the B-modes can possibly be, which do provide some constraints on inflationary models.
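
For the record, the power-law form that n parametrizes (standard convention; k is the wavenumber and k₀ an arbitrary “pivot” scale) is that the variance of primordial curvature fluctuations per logarithmic interval in k scales as

    Δ²(k) ∝ (k / k₀)^(n − 1) ,

so n = 1 is the scale-invariant case, and n slightly below 1 means slightly less power at large k, i.e. on smaller scales.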

Overall — our picture of the universe is hanging together. In 1998, when supernova studies first found evidence for the dark energy and the LambdaCDM model became the concordance cosmology, Science magazine declared it the “Breakthrough of the Year.” In 2003, when the first-year WMAP results verified that this model was on the right track, it was declared the breakthrough of the year again! Just because we hadn’t made a mistake the first time. I doubt that the third-year results will get this honor yet another time. But it’s nice to know that the overall paradigm is a comfortable fit to the universe we observe.

The reason why verifying a successful model is such a big deal is that the model itself — LambdaCDM with inflationary perturbations — is such an incredible extrapolation from everyday experience into the far reaches of space and time. When we’re talking about inflation, we’re dealing with the first 10⁻³⁵ seconds in the history of the universe. When we speak about dark matter and dark energy, we’re dealing with substances that are completely outside the very successful Standard Model of particle physics. These are dramatic ideas that need to be tested over and over again, and we’re going to keep looking for chinks in their armor until we’re satisfied beyond any reasonable doubt that we’re on the right track.

The next steps will involve both observations and better theories. Is n really less than 1? Is there any variation of n as a function of scale? Are there non-Gaussian features in the CMB? Is the dark energy varying? Are there tensor perturbations from gravitational waves produced during inflation? What caused inflation, and what are the dark matter and dark energy?

Stay tuned!

More discussion by Steinn Sigurðsson (and here), Phil Plait, Jacques Distler, CosmoCoffee. In the New York Times, Dennis Overbye invokes the name of my previous blog. More pithy quotes at Nature online and Sky & Telescope.


Crackpots, contrarians, and the free market of ideas

Some time back we learned that arxiv.org, the physics e-print server that has largely superseded the role of traditional print journals, had taken a major step towards integration with the blogosphere, by introducing trackbacks. This mechanism allows blogs to leave a little link associated with the abstract of a paper on arxiv to which the blog post is referring; you can check out recent trackbacks here. It’s a great idea, although not without some potential for abuse.

Now Peter Woit reports that he has been told that arxiv will not accept trackbacks from his blog. Peter, of course, is most well-known for being a critic of string theory. In this he is not alone; the set of “critics of string theory” includes, in their various ways, people like Roger Penrose, Richard Feynman, Daniel Friedan, Lee Smolin, Gerard ‘t Hooft, Robert Laughlin, Howard Georgi, and Sheldon Glashow. The difference is that these people were all famous for something else before they became critics of string theory; in substance, however, I’m not sure that their critiques are all that different.

Unfortunately, Peter has not been given an explicit reason why trackbacks from his blog have been banned, although his interactions with the arxiv have a long history. It’s not hard to guess, of course; the moderators presumably feel that his criticisms have no merit and shouldn’t be associated with individual paper abstracts.

I think it’s a serious mistake, for many reasons. On the one hand, I certainly don’t think that scientists have any obligation to treat the opinions of complete crackpots with the same respect that they treat those of their colleagues; on a blog, for example, I see nothing wrong with banning comments from people who have nothing but noise to contribute yet feel compelled to keep contributing it. But trackbacks are just about the least intrusive form of communication on the internet, and the most easily ignored; I have never contemplated preventing trackbacks from anyone, and it would be hard for anyone to rise to the level of obnoxiousness necessary for me to do so.

On the other hand, I don’t think there is any sense in which Peter is a crackpot, even if I completely disagree with his ideas about string theory. He is a contrarian, to be sure, not falling in line with the majority view, but that’s hardly the same thing. Admittedly, it can be difficult to articulate the difference between principled disagreement and complete nuttiness (the crackpot index is, despite being both funny and telling, not actually a very good guide), but we usually know it when we see it.

Since I’m not a card-carrying string theorist, I can draw analogies with skeptics in my own field of cosmology. I’ve certainly been hard on folks who push alternative cosmologies (see here and here, for example). But there is definitely a spectrum. Perfectly respectable scientists from Roger Penrose to yours truly have suggested alternatives to cherished ideas like inflation, dark matter, and dark energy; nobody would argue that such ideas are cranky in any sense. Respectable scientists have even questioned whether the universe is accelerating, which is harder to believe but still worth taking seriously. Further down the skepticism scale, we run into folks that disbelieve in the Big Bang model itself. From my own reading of the evidence, there is absolutely no reason to take these people seriously; however, some of them have good track records as scientists, and it doesn’t do much harm to let them state their opinions. In fact, you can sharpen your own understanding by demonstrating precisely why they are wrong, as Ned Wright has shown. Only at the very bottom of the scale do we find the true crackpots, who have come up with a model of the structure of spacetime that purportedly replaces relativity and looks suspiciously like it was put together with pipe cleaners and pieces of string. There is no reason to listen to them at all.

On such a scale, I would put string skepticism of the sort Peter practices somewhere around skepticism about the acceleration of the universe. Maybe not what I believe, but a legitimate opinion to hold. And the standard for actually preventing someone from joining part of the scientific discourse, for example by leaving trackbacks at arxiv, should typically err on the side of inclusiveness; better to have too many voices in there than to exclude someone without good reason. So I think it’s very unfortunate that trackbacks from Not Even Wrong have been excluded, and I hope the folks at arxiv will reconsider their decision.

Of course, there is a huge difference between string theory and the standard cosmological model; the latter has been tested against data in numerous ways. String theory, as rich and compelling as it may be, is still a speculative idea at this point; it might very well be wrong. Losing sight of that possibility doesn’t do us any good as scientists.

Update: Jacques Distler provides some insight into the thinking of the arxiv advisory board.


Paul Kwiat on quantum computation

The quantum puppies post below was written in response to some excitement generated by recent work from Paul Kwiat’s group at UIUC; specifically, this paper in Nature (which is sadly only available to subscribers). Paul was nice enough to write a little clarification about what they actually did, which we’re reproducing here as a guest post.

Hi,

I’m not normally a Blogger (I also don’t have a cell phone, if you can believe that).
However, given the plethora of commentary on our article (and on articles *about* our article), I thought a few words might be useful. I’ll try to keep it short [and fail 🙁 ]

  1. There has been quite a bit of confusion over what we’ve actually done (due in large part to reporters that won’t let us read their copy before it goes to print), not to mention *how* we did it. For the record—
    1. we experimentally implemented a proposal made several years ago, showing how one could sometimes get information about the answer to a quantum computation without the computer running. Specifically, we set up an optical implementation of Grover’s search algorithm, and showed how, ~25% of the time, we could *exclude* one of the answers.

      Some further remarks:

      – our implementation of the algorithm is not “scalable”, which means that although we could pretty easily search a database with 5 or 6 elements, one with 100 elements would be unmanageable.

      – however, the techniques we discuss could equally well be applied to approaches that *are* scalable.

    2. By using the Q. Zeno effect, you can push the success probability to 100%, i.e., you can exclude one of the elements as the answer. However, if the element you are trying to exclude actually *is* the answer, then the computer *does* run.

      -The Q. Zeno effect itself is quite interesting. If you want to know more about it, you can check out this tutorial.

    3. Unless you get really lucky, and the actual answer is the last one (i.e., you’ve excluded all the others without the computer running, and so you know the right answer is the last element, without the computer running), the technique in 2. doesn’t really help too much, since if you happen to check if the answer wasn’t a particular database element, and it really was, then the computer does run.
    4. By putting the Zeno effect inside of another Zeno effect, you can work it so that even if you are looking to exclude a particular database element, and the answer *is* that element, then the computer doesn’t run (but you still get the answer). Therefore, you can now check each of the elements one by one, to find the answer without the computer running. This was the first main theoretical point of the paper. Contrary to some popular press descriptions, we did not implement this experimentally (nor do we intend to, as it’s likely to be inconveniently difficult).
    5. If you had to use the method of 4. to check each database element one-by-one, then you’d lose the entire speedup advantage of a quantum computer. Therefore, we also proposed a scheme whereby the right answer can be determined “bit-by-bit” (i.e., what’s the value of the first bit, what’s the value of the second bit, etc.). This is obviously much faster, and recovers the quantum speedup advantage.
    6. Finally, we proposed a slightly modified scheme that also seems to have some resilience to errors.

      Taken in its present form, the methods are too cumbersome to be much good for quantum error correction. However, it is our hope this article will stimulate some very bright theorists to see if some of the underlying concepts can be used to improve current ideas about quantum error correction.

  2. There have been a number of questions and criticisms, either about the article or about the articles about the article. Here are my thoughts on that:
    1. I guess I should disagree that our article is poorly written (no big surprise there 😉 ), though I agree 10000% that it is not at all easy to read. There are (at least) two reasons for this:

      – there is a tight length constraint for Nature, and so many more detailed explanations had to be severely shortened, put in the supplementary information, or left out entirely. Even so, the paper took over a year just to write, so that at least it was accurate, and contained all the essential information. For example, we were not allowed to include any kind of detailed explanation of how Grover’s algorithm works. [If you want more info on this, feel free to check out: P. G. Kwiat, J. R. Mitchell, P. D. D. Schwindt, and A. G. White, “Grover’s search algorithm: An optical approach”, J. Mod. Opt. 47, 257 (2000), which is available here.]

      – the concepts themselves are, in my opinion, not easy to explain. The basic scheme that we experimentally implemented is easy enough. And even the Zeno effect is not so bad (see that above tutorial). But once it becomes “chained”, the description just gets hard. (I am pointing this out, because I would reserve the criticism “poorly written” for something which *could* be easily [and correctly!] explained, but wasn’t.)

    2. I agree that some of the popular press descriptions left something to be desired, and often used very misleading wording (e.g., quantum computer answers question before it’s asked – nonsense!). Having said this, I do have rather great empathy for the writers – the phenomenon is not trivial for people in the field to understand; how should the writers (who *aren’t* in the field) explain it to readers who also aren’t in the field. Overall, the coverage was not too far off the mark.

      -To my mind, the most accurate description was in an article in Friday’s Chicago Tribune (the author kindly let us review his text for accuracy before going to print).

Okay, thanks for your attention if you made it this far.

I hope that these words (and the above web link) will clarify some of the issues in the paper.

Best wishes,
Paul Kwiat

PS Please feel free to post this response (in its entirety, though) on any other relevant Blogs. Thanks.
