Scott Aaronson has thrown down a gauntlet by claiming that theoretical computer science, “by any objective standard, has contributed at least as much over the last 30 years as (say) particle physics or cosmology to humankind’s basic picture of the universe.” Obviously the truth-value of such a statement will depend on what counts as our “basic picture of the universe,” but Scott was good enough to provide an explanation of the most important things that TCS has taught us, which is quite fascinating. (More here.) Apparently, if super-intelligent aliens landed and were able to pack boxes in our car trunks very efficiently, they could also prove the Riemann hypothesis. Although the car-packing might be more useful.
There are important issues of empiricism vs. idealism here. The kinds of questions addressed by “theoretical computer science” are in fact logical questions, addressable on the basis of pure mathematics. They are true of any conceivable world, not just the actual world in which we happen to live. What physics teaches us about, on the other hand, are empirical features of the contingent world in which we find ourselves — features that didn’t have to be true a priori. Spacetime didn’t have to be curved, after all; for that matter, the Earth didn’t have to go around the Sun (to the extent that it does). Those are just things that appear to be true of our universe, at least locally.
But let’s grant the hypothesis that our “picture of the universe” consists both of logical truths and empirical ones. Can we defend the honor of particle physics and cosmology here? What have we really contributed over the last 30 years to our basic picture of the universe? It’s not fair to include great insights that are part of some specific theory, but not yet established as true things about reality — so I wouldn’t include, for example, anomalies canceling in string theory, or the Strominger-Vafa explanation for microstates in black holes, or inflationary cosmology. And I wouldn’t include experimental findings that are important but not quite foundation-shaking — so neutrino masses don’t qualify.
With these very tough standards, I think there are two achievements that I would put up against anything in terms of contributions to our basic picture of the universe:
- An inventory of what the universe is made of. That’s pretty important, no? In units of energy density, it’s about 5% ordinary matter, 25% dark matter, 70% dark energy. We didn’t know that 30 years ago, and now we do. We can’t claim to fully understand it, but the evidence in favor of the basic picture is extremely strong. I’m including within this item things like “it’s been 14 billion years since the Big Bang,” which is pretty important in its own right. I thought of a separate item referring to the need for primordial scale-free perturbations and the growth of structure via gravitational instability — I think that one is arguably at the proper level of importance, but it’s a close call. (A back-of-the-envelope version of this inventory, in absolute units, appears just after this list.)
- The holographic principle. I’m using this as a catch-all for a number of insights, some of which are in the context of string theory, but they are robust enough to be pretty much guaranteed to be part of the final picture whether it involves string theory or not. The germ of the holographic principle is the idea that the number of degrees of freedom inside some region is not proportional to the volume of the region, but rather to the area of its boundary — an insight originally suggested by the behavior of Hawking radiation from black holes. But it goes way beyond that; for example, there can be dualities that establish the equivalence of two different theories defined in different numbers of dimensions (à la AdS/CFT). This establishes once and for all that spacetime is emergent — the underlying notion of a spacetime manifold is not a fundamental feature of reality, but just a good approximation in a certain part of parameter space. People have speculated about this for years, but now it’s actually been established in certain well-defined circumstances. (A toy calculation of this horizon-area entropy also follows below.)
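As promised above, here is a back-of-the-envelope version of the inventory. The density fractions are the ones quoted in the list; the Hubble constant value (H0 = 70 km/s/Mpc) is an assumed input, not something stated in the post.

```python
# A rough sketch of the cosmic energy budget in absolute units.
# Assumption (not from the post): H0 = 70 km/s/Mpc; the critical
# density follows from the standard formula rho_c = 3 H0^2 / (8 pi G).
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.086e22           # meters per megaparsec
H0 = 70e3 / Mpc          # Hubble constant in s^-1 (assumed value)

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density, kg/m^3

# Fractions quoted in the post (in units of the critical density)
budget = {"ordinary matter": 0.05, "dark matter": 0.25, "dark energy": 0.70}

print(f"critical density ~ {rho_crit:.2e} kg/m^3")
for name, omega in budget.items():
    print(f"{name:15s}: {omega * rho_crit:.2e} kg/m^3")
```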
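And here is the toy horizon-entropy calculation referenced in the second item: the Bekenstein-Hawking entropy of a Schwarzschild black hole, which grows with the horizon area (so with mass squared), not with the enclosed volume. This is an illustrative sketch using standard SI constants; the choice of solar masses is just for concreteness.

```python
# Bekenstein-Hawking entropy S = k c^3 A / (4 G hbar) of a
# Schwarzschild black hole, reported in units of Boltzmann's k.
import math

G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34   # SI values
M_sun = 1.989e30                              # solar mass, kg

def bh_entropy(mass):
    r_s = 2 * G * mass / c**2            # Schwarzschild radius, m
    area = 4 * math.pi * r_s**2          # horizon area, m^2
    return c**3 * area / (4 * G * hbar)  # entropy in units of k

# Doubling the mass quadruples the area, and hence the entropy:
for n in (1, 2, 4):
    print(f"{n} M_sun: S/k ~ {bh_entropy(n * M_sun):.2e}")
```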
A short list, but we have every reason to be proud of it. These are insights, I would wager, that will still be part of our basic picture of reality two hundred years from now. Any other suggestions?
Oops, missed Scott’s post.
The “extreme Max Tegmark” viewpoint suggests that any mathematical structure that could represent something like a “universe”, does.
If there is a formalism to describe a Universe that has things in it that do something that could be called thought, then, effectively, those things exist just as assuredly as I do.
All of that is way too philosophical for me — but that’s, in a sense, the idea that other fantasy creatures may “exist”.
I’m not sure much of that really has a lot to do with Physics as I understand it, though. I’ll stick with run-of-the-mill distant-Hubble-volume-in-our-own-Universe style “parallel Universes,” which is already pretty freaky to the common brain. (Never mind other nucleated Universes in the inflating bulk of Eternal Inflation or Landscape or whatever you want to call it.) I’m not a convinced subscriber to the “extreme Max Tegmark” viewpoint.
All of which has drifted way off topic. Perhaps. Unless you’re suggesting that computer science is making things similar to what was in James P. Hogan’s Entoverse (a cool SF concept whose execution I found disappointing).
-Rob
The idea that all ‘possible worlds’ actually exist predates Tegmark and, I think, is due to a particular philosopher who’s somewhat infamous for it, although I’m forgetting his name.
I don’t want to seem too rude, but most of Scott’s statements 1-7 above seem silly to me. Entanglement just isn’t the be-all, end-all of quantum mechanics, and two-dimensional vector spaces aren’t either (ok — that one is a cheap shot). Quantum information theory is cute and all that, but it doesn’t solve any of the conceptual foundational problems of quantum mechanics. It also doesn’t particularly help you compute the energy states of, say, helium, much less something like benzene.
It’s also not true that the path integral is captured by the Wiener measure if you ever want to do quantum field theory.
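For readers keeping score, the standard statement at issue, glossed here editorially rather than by the commenter, is that Wick rotation turns the oscillatory quantum-mechanical path integral into a genuine Wiener-measure (Feynman-Kac) integral:

```latex
% Wick rotation t -> -i*tau maps the real-time path integral (left)
% to a Euclidean one (right) that is rigorously defined by the
% Wiener measure for ordinary quantum mechanics.
\[
\int \mathcal{D}x\,
\exp\!\Big(\tfrac{i}{\hbar}\!\int\! dt\,\big(\tfrac{m}{2}\dot{x}^2 - V(x)\big)\Big)
\;\xrightarrow{\;t\,\to\,-i\tau\;}\;
\int \mathcal{D}x\,
\exp\!\Big(-\tfrac{1}{\hbar}\!\int\! d\tau\,\big(\tfrac{m}{2}\dot{x}^2 + V(x)\big)\Big)
\]
```

The right-hand side is rigorous for quantum mechanics; the commenter’s point is that interacting quantum field theories in four dimensions have no comparable measure-theoretic construction.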
I’ve deleted some of the less sensible off-topic posts, and responses thereto, so as to preserve some chance of having a useful conversation. And because I am a Communist, afraid of new ideas, &c.
Cynthia, yes, that may be the case. You can actually also regard the creatures as universes in their own right. What I mean is that you can consider the algorithm that a brain runs on its neural network as describing a virtual world. In that virtual world, things like pain, taste, etc. objectively exist.
Then the question is why we find ourselves embedded in this particular universe. I think that in the future scientists will ponder such questions, and then physics and computer science will have become the same thing.
A conference about exactly this topic today and tomorrow 🙂
As soon as you realise that any carrier of information must be a physical system (and therefore obeys the laws of physics), you can ask how information processing differs in different physical theories. This, to some extent, is what theoretical computer scientists do, never mind that the toy models may be unphysical. Theoretical physicists study plenty of unphysical models in order to understand more about the world (1+1 dimensions, anyone?). So TCS is an honourable branch of theoretical physics. 🙂
These questions must be forwarded to Slartibartfast.
No “new” ideas here.
In support of computerization, what does the math look like??
Sean said way back,
“What physics teaches us about, on the other hand, are empirical features of the contingent world in which we find ourselves — features that didn’t have to be true a priori”.
So how on earth, rather I mean how in the universe, does the holographic principle – the second supposed major achievement of physics – relate to this criterion? What is the *empirical data* supporting this proposal?
PK, if I dare to understand you correctly, the following is my paraphrase of your comment #33. Physics – being the true study of Nature – can extend beyond a 1+1 dimensional Universe. By contrast, computer science – being only a holographic study of Nature – is confined to a 1+1 dimensional universe. Hence, theoretical computing is merely a subheading under the greater theoretical physics.
Cynthia, I think the 1+1 dimensions PK mentioned was a reference to a quantum gravity model, not to computer science.
The main point, as far as I can tell, is that there is a mapping between physical laws and information theoretic laws, and so information theory could reasonably be considered physics.
Classical mechanics is non-computable; see here.
John Baez said:
…in 1986, right around when fundamental theoretical physics stalled out. Since then most of the progress in fundamental physics has come from observations in astronomy
Can I quote you as having said in the past that GR with a cosmological constant is still the most conservative mainstream approach to explaining our expanding universe? And are the assumptions being taken for granted here (about the nature of observed dark matter and dark energy) what you were referring to in an earlier statement of concern about this very issue?
Fundamental theoretical physics may have stalled out 30 years ago, but the assumptions that are being taken for granted seem to have accelerated exponentially like a runaway universe… 😉
Gerard ’t Hooft:
Examples for consideration are “Beyond Einstein” (LIGO) and SETI?
Thanks, Aaron! I’m glad to have someone vehemently disagree with my statements — what I was worried about is that they were too obvious.
Entanglement just isn’t the be-all, end-all of quantum mechanics, and two-dimensional vector spaces aren’t either (ok — that one is a cheap shot).
Indeed; that’s why we study 2^n-dimensional vector spaces for large values of n.
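To make “large values of n” concrete, here is a throwaway illustration (an editorial addition, not from the thread) of why dense classical simulation chokes: an n-qubit state vector has 2^n complex amplitudes.

```python
# Memory needed to store a dense n-qubit state vector of 2**n
# complex amplitudes at 16 bytes each (complex128).
for n in (10, 30, 50):
    amplitudes = 2**n
    gib = 16 * amplitudes / 2**30
    print(f"n={n:2d}: 2^n = {amplitudes:.3e} amplitudes ~ {gib:.3e} GiB")
```

Fifty qubits already demand roughly sixteen petabytes, which is the point of the quip.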
Quantum information theory is cute and all that, but it doesn’t solve any of the conceptual foundational problems of quantum mechanics. It also doesn’t particularly help you compute the energy states of, say, helium, much less something like benzene.
Computing the energy states of helium is cute and all that, but how does it help us find an efficient quantum algorithm for graph isomorphism? 🙂
Seriously, look at Guifre Vidal’s papers — he’s already used quantum information techniques to get several-order-of-magnitude efficiency improvements in ground state computations for 1D spin chains. Of course, if we could build a quantum computer, then we’d get enormous improvements for such problems.
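For the curious, the core move in these methods is Schmidt (singular value) truncation across a one-dimensional cut. What follows is a toy numpy sketch, an editorial illustration rather than anything from Vidal’s papers; note that the random state used here has a nearly flat Schmidt spectrum and compresses badly, whereas ground states of gapped 1D chains have rapidly decaying spectra and compress extremely well.

```python
# Schmidt/SVD truncation: split a pure state across a 1D cut and
# keep only the chi largest singular values (the "bond dimension").
import numpy as np

rng = np.random.default_rng(0)

# A random 10-qubit state, reshaped as a matrix across a 5|5 cut.
n_left, n_right = 2**5, 2**5
psi = rng.normal(size=(n_left, n_right)) + 1j * rng.normal(size=(n_left, n_right))
psi /= np.linalg.norm(psi)

U, s, Vh = np.linalg.svd(psi, full_matrices=False)

chi = 8                                            # Schmidt values kept
approx = U[:, :chi] @ np.diag(s[:chi]) @ Vh[:chi, :]

# Discarded weight = sum of the dropped squared Schmidt coefficients.
print("discarded weight:", np.sum(s[chi:] ** 2))
print("norm^2 of truncated state:", np.linalg.norm(approx) ** 2)
```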
Two points:
1) Quantum computers can simulate quantum systems efficiently.
2) The Gottesman-Knill theorem is just one example of a quantum information result which directly affects classical simulation. (A minimal sketch of the idea appears below.)
QIP has had a major effect on how we look at simulating quantum systems. So the examples you gave are actually very poor if you intend to list things QIP hasn’t helped us with.
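Since the Gottesman-Knill theorem came up, here is the promised minimal sketch (an editorial illustration, not the commenter’s code): track the n stabilizer generators as binary (x|z) rows plus sign bits, so Clifford circuits cost polynomially many bit operations instead of 2^n amplitudes. The update rules follow the Aaronson-Gottesman tableau conventions; measurement is omitted to keep the sketch short.

```python
# Stabilizer tableau for Clifford circuits (no measurement).
import numpy as np

class Stabilizer:
    def __init__(self, n):
        self.x = np.zeros((n, n), dtype=np.uint8)  # X part of each generator
        self.z = np.eye(n, dtype=np.uint8)         # start in |0...0>: Z_i
        self.r = np.zeros(n, dtype=np.uint8)       # sign bits

    def h(self, q):                                # Hadamard: swap X and Z
        self.r ^= self.x[:, q] & self.z[:, q]
        self.x[:, q], self.z[:, q] = self.z[:, q].copy(), self.x[:, q].copy()

    def s(self, q):                                # phase gate
        self.r ^= self.x[:, q] & self.z[:, q]
        self.z[:, q] ^= self.x[:, q]

    def cnot(self, a, b):                          # control a, target b
        self.r ^= self.x[:, a] & self.z[:, b] & (self.x[:, b] ^ self.z[:, a] ^ 1)
        self.x[:, b] ^= self.x[:, a]
        self.z[:, a] ^= self.z[:, b]

# Prepare a Bell pair: the generators become XX and ZZ.
st = Stabilizer(2)
st.h(0)
st.cnot(0, 1)
print(st.x, st.z, st.r, sep="\n")
```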
Joe Fitzsimons, thanks for kindly reminding me of the huge difference between classical computing and quantum computing!
Guifre Vidal’s work is cute and interesting, *but* it applies mostly to 1D spin chains (and other 1D systems). This is a pretty serious limitation, given that physics is about more than just 1D systems.
Furthermore, the extension of DMRG-based techniques to higher dimensions is non-trivial, even from a computational viewpoint. In this sense, his work (and the work of S. R. White, which his methods are based on) does not make significant progress in answering problems of the sort Aaron Bergmann has raised. The state of the art on these questions remains older, less fashionable methods like density functional theory, which are not a consequence of quantum information theory.
Fundamental theoretical physics may have stalled out 30 years ago, but the assumptions that are being taken for granted seem to have accelerated exponentially like a runaway universe…
Both of these statements are unsupportable.
Yeah, it’s true, the Standard Model of Particle Physics has been in place and working for 30 years, which makes it seem as if no big new discoveries are being made.
However, there is a wide swath of space between “stalled out” and “making fundamental paradigm-changing discoveries.” Inflation, for instance, is something that’s come along in the last 30 years. So is the holographic principle that Sean mentions above. There is ongoing work on better understanding the more complicated behavior of Standard Model particles (e.g. when things get statistical and you can do RHIC-type physics). Lots of people spend lots of time thinking about how neutrinos work. There’s plenty going on in fundamental theoretical physics; it’s hardly stalled. (And, heck, let’s not forget that the concrete idea of Grand Unification comes out of the 1970s.)
And as for the exponential increase in the assumptions being taken for granted: I fail to see that at all. Sure, we’ve got postulates and axioms and so forth, but given that the whole thing is working, that’s hardly unreasonable. And there’s not a huge increase at all. There aren’t “epicycles” constantly being added or any such. Yeah, people take the cosmological constant seriously in a way that they didn’t 10 years ago … but that is because of data, not because of assumptions. The CC was already long there. The data fit it well. To throw it out and assume something else would be the increase of assumption, not believing the CC.
I don’t appreciate your leaving Rob’s unsupported shot at me on the board, while removing my factual statement back to him.
I’m not sure what you’re talking about; the “good luck Steinn” post sure looks deleted to me.
-Rob
George (#35) — the empirical data are the existence of gauge theories and gravity. We now know (from theoretical work, admittedly) that these two things are not entirely separate — a theory can look like gauge fields in flat spacetime in one limit, and like gravity in another. The laws of black hole mechanics speak strongly to the idea that something like this is happening in our real world. I don’t know exactly how it will play out, and am happy to admit that it’s far from proven, but I think it’s a major achievement of our last 30 years to discover that spacetime can be emergent in this way.
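For concreteness, the canonical example behind “gauge fields in one limit, gravity in another” is AdS/CFT. The standard dictionary (added here for reference; Sean’s comment does not spell it out) relates the couplings of N=4 SU(N) gauge theory to string theory on AdS_5 x S^5:

```latex
% Weak 't Hooft coupling (small lambda) looks like gauge fields in
% flat spacetime; strong coupling (large lambda, hence AdS radius L
% much larger than the string length l_s) looks like weakly curved
% gravity.
\[
g_s \sim g_{\mathrm{YM}}^2, \qquad
\left(\frac{L}{\ell_s}\right)^{\!4} = g_{\mathrm{YM}}^2 N \equiv \lambda
\]
```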
And will our more sensible commenters please remember that replying to crackpots is as crucial a part of the Crackpot Dynamic as the original crackpottery itself? Resist the temptation!
We now know (from theoretical work, admittedly) that these two things are not entirely separate — a theory can look like gauge fields in flat spacetime in one limit, and like gravity in another.
That theoretical work is pretty solid, though, isn’t it?
I have to admit that it’s all beyond my understanding… but then, I don’t even really fully understand the Higgs Mechanism, I’m embarrassed to admit. (I’m an observer; I know how to take data and make pretty pictures.) One of these days I’ll have to sit down and seriously think about it.
-Rob
George wrote:
“So how on earth, rather I mean how in the universe, does the holographic principle – the second supposed major achievement of physics – relate to this criterion? What is the *empirical data* supporting this proposal?”
Sean has given the zero-order answer: we know there is gravity, and we have good reasons to suspect that any gravitational theory is holographic. Unfortunately, like most things in quantum gravity, it’s hard to find experiments that can test it.
I like to think there’s one other bit of very suggestive empirical data, which I hope will eventually point us in the direction of a real theory of cosmology. Namely, we know that there was an inflationary epoch in the past: spacetime then was, approximately, de Sitter space with a large cosmological constant. We know that there is dark energy now: spacetime in the future will be, approximately, de Sitter space with a small cosmological constant. The decrease of the c.c. between these two epochs is consistent with the idea of dS/CFT (originally due to Strominger in hep-th/0106113 and hep-th/0110087), which is that time evolution in our universe corresponds to running up the RG flow of some boundary theory.
Unfortunately the technical details are difficult — there are good reasons to think any theory of quantum gravity on de Sitter space is drastically different from usual theories of quantum gravity, and perhaps that any de Sitter space is unstable — but I think it’s still a compelling picture. The empirical data here are limited, essentially just two cosmological constants (past and future), but suggestive. Maybe someday this will be understood as the first hint of the real dynamics driving cosmology.
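One way to quantify that suggestiveness (an editorial gloss, not the commenter’s): the horizon entropy of de Sitter space scales inversely with the cosmological constant,

```latex
% de Sitter horizon radius l = sqrt(3/Lambda), area A = 12*pi/Lambda.
\[
S_{\mathrm{dS}} \;=\; \frac{A}{4G} \;=\; \frac{3\pi}{G\,\Lambda}
\qquad (\hbar = c = k_B = 1)
\]
```

so the large inflationary cosmological constant corresponds to a small horizon entropy, and today’s tiny one to an enormous entropy of order 10^122, which at least rhymes with the RG-flow reading of cosmic time evolution described above.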
Sean said, “I don’t know exactly how it will play out, and am happy to admit that its far from proven, but I think it’s a major achievement of our last 30 years to discover that spacetime can be emergent in this way.”
I think it’s misleading to say that this is an example of “emergent spacetime”. You could equally well argue that what has been shown is that spacetime is fundamental and that gauge theory is “emergent” from it. I’m not saying that this is sensible, just that it is *as* sensible! What we are seeing here, I’m afraid, is that old physics-sociology issue: people trained in particle theory who are uncomfortable with GR and want to be able to dismiss it as “emergent”, which, to their minds, is equivalent to “not really important”. But yes, of course, something really important has been discovered here. But it doesn’t have anything to do with “emergence”.