Scott Aaronson has thrown down a gauntlet by claiming that theoretical computer science, “by any objective standard, has contributed at least as much over the last 30 years as (say) particle physics or cosmology to humankind’s basic picture of the universe.” Obviously the truth-value of such a statement will depend on what counts as our “basic picture of the universe,” but Scott was good enough to provide an explanation of the most important things that TCS has taught us, which is quite fascinating. (More here.) Apparently, if super-intelligent aliens landed and were able to pack boxes in our car trunks very efficiently, they could also prove the Riemann hypothesis. Although the car-packing might be more useful.
There are important issues of empiricism vs. idealism here. The kinds of questions addressed by “theoretical computer science” are in fact logical questions, addressable on the basis of pure mathematics. They are true of any conceivable world, not just the actual world in which we happen to live. What physics teaches us about, on the other hand, are empirical features of the contingent world in which we find ourselves — features that didn’t have to be true a priori. Spacetime didn’t have to be curved, after all; for that matter, the Earth didn’t have to go around the Sun (to the extent that it does). Those are just things that appear to be true of our universe, at least locally.
But let’s grant the hypothesis that our “picture of the universe” consists both of logical truths and empirical ones. Can we defend the honor of particle physics and cosmology here? What have we really contributed over the last 30 years to our basic picture of the universe? It’s not fair to include great insights that are part of some specific theory, but not yet established as true things about reality — so I wouldn’t include, for example, anomalies canceling in string theory, or the Strominger-Vafa explanation for microstates in black holes, or inflationary cosmology. And I wouldn’t include experimental findings that are important but not quite foundation-shaking — so neutrino masses don’t qualify.
With these very tough standards, I think there are two achievements that I would put up against anything in terms of contributions to our basic picture of the universe:
- An inventory of what the universe is made of. That’s pretty important, no? In units of energy density, it’s about 5% ordinary matter, 25% dark matter, 70% dark energy. We didn’t know that 30 years ago, and now we do. We can’t claim to fully understand it, but the evidence in favor of the basic picture is extremely strong. (A back-of-the-envelope conversion of those percentages into physical densities is sketched just after this list.) I’m including within this item things like “it’s been 14 billion years since the Big Bang,” which is pretty important in its own right. I thought of a separate item referring to the need for primordial scale-free perturbations and the growth of structure via gravitational instability — I think that one is arguably at the proper level of importance, but it’s a close call.
- The holographic principle. I’m using this as a catch-all for a number of insights, some of which are in the context of string theory, but they are robust enough to be pretty much guaranteed to be part of the final picture whether it involves string theory or not. The germ of the holographic principle is the idea that the number of degrees of freedom inside some region is not proportional to the volume of the region, but rather to the area of its boundary — an insight originally suggested by the behavior of Hawking radiation from black holes. But it goes way beyond that; for example, there can be dualities that establish the equivalence of two different theories defined in different numbers of dimensions (à la AdS/CFT). This establishes once and for all that spacetime is emergent — the underlying notion of a spacetime manifold is not a fundamental feature of reality, but just a good approximation in a certain part of parameter space. People have speculated about this for years, but now it’s actually been established in certain well-defined circumstances.
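The quantitative seed of that idea is the Bekenstein-Hawking entropy, which ties the maximum entropy of a horizon-bounded region to the horizon’s area rather than its volume:

$$ S_{\rm BH} \;=\; \frac{k_B c^3 A}{4 G \hbar} \;=\; k_B\,\frac{A}{4\,\ell_P^2}, \qquad \ell_P^2 \equiv \frac{G\hbar}{c^3}. $$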
A short list, but we have every reason to be proud of it. These are insights, I would wager, that will still be part of our basic picture of reality two hundred years from now. Any other suggestions?
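As a footnote to the inventory item above, here is a rough conversion of those percentages into physical densities. It is only a back-of-the-envelope sketch, assuming a Hubble constant of about 70 km/s/Mpc and the approximate 5/25/70 split quoted above; the constants and variable names are just for illustration.

```python
import math

# Rough conversion of the cosmic energy budget into physical densities.
# Assumes H0 ~ 70 km/s/Mpc and the approximate 5/25/70 split quoted above.
G = 6.674e-11          # Newton's constant, m^3 kg^-1 s^-2
MPC = 3.086e22         # one megaparsec, in meters
M_PROTON = 1.673e-27   # proton mass, kg

H0 = 70e3 / MPC                            # Hubble constant, s^-1
rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density, kg/m^3 (~9e-27)

budget = {"ordinary matter": 0.05, "dark matter": 0.25, "dark energy": 0.70}
for name, omega in budget.items():
    print(f"{name:15s}: {omega * rho_crit:.1e} kg/m^3")

# The baryonic 5% works out to only about a quarter of a proton per cubic meter:
print(f"baryons per cubic meter: {0.05 * rho_crit / M_PROTON:.2f}")
```

The punch line: the entire budget adds up to the equivalent of only five or six protons’ worth of mass-energy per cubic meter, averaged over the universe.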
“by any objective standard, has contributed at least as much over the last 30 years as (say) particle physics or cosmology to humankind’s basic picture of the universe.”
Go Scott! He’s onto it.
Sorry Sean, but you’re really losing this battle. The Holographic principle comes from monadic dualities in higher topos theory, which is all about quantum logic. And as for your breakup of matter components – it’s just plain wrong. The dark matter is black holes, which is also understood in terms of quantum computation.
Wow. I mean, the statement just strikes me as odd. Computer science (at least the parts that aren’t mislabeled engineering) is rightfully a subfield of Math… just as astronomy, nowadays, is rightfully a subfield of (mostly) Physics.
Now, yes, math is the substrate upon which Physics is etched… but Physics is more about our Universe than Math is. If you think about things the way Max Tegmark does, Math is about every conceivable Universe. Me, our universe is so damn cool by itself that I’m happy just trying to understand it.
Now, I know Aaronson wants us to confine ourselves to theoretical physics and cosmology — but I maintain that that is impossible, and also unreasonable. Theoretical cosmology has long been informed by experiment. Hell, ask Clifford — even String Theory is informed by experiment. But in astronomy in particular, the observers and theorists seem to talk to each other a lot, more so than in many disciplines. You see theorists on observing proposals, for example, and sometimes they’re leading them.
So I’ll violate the terms of the challenge and plunge merrily ahead (and while I’m at it, I’ll sometimes go over 30 years):
* the scale of our Universe. 400 years ago, we knew about the Solar System, and the stars were a mysterious unaddressable “firmament.” 100 years ago we had the Great Debate, shortly after which we had shown for sure that there are many, many galaxies just like our own, and that our Universe is, not to put too fine a point on it, butt-huge. You can sit around all day doing thought experiments about packing boxes into the boot of a car (unless you’re on this side of the pond, in which case you pack boxes into the boot of an elephant or something), but all of that is just brain-play compared to actually seeing and knowing just how huge and wild our Universe is.
* the resolution of Olbers’ paradox. The Universe has a finite age. And it’s expanding, so redshift provides another handy helper to the resolution of the paradox. (Either could do it alone, of course, but we know both are there, so, hey, fly with it.)
* By the way, did I mention that the Universe is expanding? Who’d’ve thunk it? Not Einstein. Pretty cool, huh? Now, Einstein did figure it out just sitting in his room with a pencil, but then tuned in a precarious, unstable balance to make it not happen, because expansion seemed absurd to him… but it’s probably fair to say that the expansion of the Universe is a triumph of cosmology, and maybe even of theoretical cosmology.
* Where the chemical elements came from. The finite-aged Universe just gave us Hydrogen and Helium, and a little trace deuterium, lithium, and beryllium. (Not enough of the latter three to warrant capitalizing their names.) Meanwhile, theoretical nuclear astrophysicists worked out where all of the rest came from… that is, stars, living and dying. Damn cool, if you ask me. Anybody who isn’t horribly anemic can be 100% sure that some of the atoms in their body have been through a supernova. Maybe that isn’t as esoteric as Dumblefinger’s 18th Dimensional Hypothesis About Nosepicking, or something else I really don’t understand, but damn it’s cool, and it’s demonstrably about our real, physical world. (If you prick us, do we not bleed?)
* Neutrinos have mass!!!! We figured this out and proved it by looking at the Sun!
* The inventory thing Sean talked about above.
* We know the age of the Universe back to some “we don’t know how to think before here” point to within 5%.
* We can explain Galileo’s apocryphal experiment through Einstein’s relativity… and Einstein’s relativity was for a long time “proven” only via astronomical observations (Mercury, lensing, etc.). Of course, nowadays, GR is the stuff of engineering (GPS, and the next version of ntp I intend to write); there’s a back-of-the-envelope GPS sketch after this list.
* On the most fundamental level, reality is stochastic, in complete contrast to the mental models of the world we’ve evolved in our brains. How bizarre is that? Did anybody expect that? What’s more, if it weren’t for all that quantum stuff and transistors followed by integrated circuits, there wouldn’t even be a field called “computer science” (although math well predates all of that, of course). I guess cosmology can’t lay any claim to this one, but hey, it’s Physics.
* There is a supermassive black hole at the center of any respectable galaxy. 10 billion years ago, all of those black holes spent something like 10-million-year periods shining as quasars (or wimpier cousins) as they were being fed. What’s more, the processes associated with all of this limited the buildup of stars in the galaxy. And supermassive black holes are way cooler than anything I can code up in Perl or Lisp or something like that. Hu-ah!
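Since I mentioned GPS above: here is the back-of-the-envelope version of why GR is engineering there. It’s only a sketch, assuming a circular orbit and round numbers and ignoring Earth’s rotation, but it reproduces the famous ~38 microseconds per day by which GPS clocks would drift if relativity were ignored.

```python
import math

# Rough sketch of the relativistic clock corrections built into GPS.
# Circular orbit, round numbers, Earth's rotation ignored.
GM_EARTH = 3.986004e14   # gravitational parameter of Earth, m^3/s^2
R_EARTH = 6.371e6        # mean Earth radius, m
R_ORBIT = 2.656e7        # GPS orbital radius (~20,200 km altitude), m
C = 2.998e8              # speed of light, m/s
SECONDS_PER_DAY = 86400.0

# Gravitational blueshift: the satellite clock sits higher in Earth's potential.
grav = GM_EARTH * (1.0 / R_EARTH - 1.0 / R_ORBIT) / C**2

# Special-relativistic time dilation from the orbital speed (clock runs slow).
v_orbit = math.sqrt(GM_EARTH / R_ORBIT)
kinematic = -v_orbit**2 / (2.0 * C**2)

print(f"gravitational : {grav * SECONDS_PER_DAY * 1e6:+.1f} microseconds/day")
print(f"kinematic     : {kinematic * SECONDS_PER_DAY * 1e6:+.1f} microseconds/day")
print(f"net drift     : {(grav + kinematic) * SECONDS_PER_DAY * 1e6:+.1f} microseconds/day")
```

The two effects come out to roughly +46 and -7 microseconds per day, for a net drift of about +38; uncorrected, that would translate into position errors growing by kilometers per day.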
Computer Science is to Physics as Madden 2007 is to Superbowl XLII. Except that I’m way better at Madden (well, 2005) than I would be playing real football. But otherwise. You know. Analogy.
Or something.
The dark matter is all black holes?
Did I miss a memo?
Pingback: Galactic Interactions » Blog Archive » My science is more fundamental than yours!
Did I miss a memo?
Don’t worry. You’re not the only one.
Rob Knop wrote:
These were understood more than 30 years ago. Scott cleverly stacked the deck by starting his clock in 1986, right around when fundamental theoretical physics stalled out. Since then most of the progress in fundamental physics has come from observations in astronomy – but these items you mention above come from an earlier era.
So, Scott wins this game. If he’d pitted theoretical computer science against math or biology, he would have had a much tougher time.
The main thing that we didn’t know 30 years ago is that the standard model is so good. Pretty much all exotic extensions that people have thought of – GUTs, technicolor, susy, extra-dimensions, … – have been ruled out or are at least looking increasingly unnatural. That is an important discovery, even if it is a negative one.
Whoops – I can’t subtract. Scott actually started his clock ticking in 1976, not 1986. This makes his job harder: the Standard Model was busily being confirmed then. If he’d gone back to 1970, when the Standard Model was being formulated, he would have been in serious trouble.
But the Standard Model cannot be properly understood without operads and other categorical beasts. Scott definitely wins.
Thanks, Sean! John Baez gets it exactly right — I did pick my timeframe very carefully. Of course, even when we restrict to the last 30 years, Lambda>0 and the holographic principle make for some serious competition. But I stand by my subjective and ultimately meaningless claim!
Pingback: danvk.org » Four things
Since 1976, we’ve got
* three families of neutrinos, confirmed
* neutrino oscillations -> neutrino mass
* gamma ray bursters at cosmological distances
* age of the universe known to 5%
* Sean Carroll gets PhD
* inflation
* cmb isotropic, fluctuations right for structure growth
* cmb dipole, we know which way we’re going
* dark matter = cold, dark matter = non-baryonic, dark matter = real
* spatial curvature flat
* universe accelerating, thus dark energy
* smbh at core of every big galaxy ; bh/bulge relationship
* elliptical galaxies = result of galaxy mergers
* planetary systems ubiquitous, including ones not like ours
* gravitational waves seen in the orbital decay of the binary pulsar
* ed witten being smarter than you (independent of the definition of “you”)
One other thing, Sean. You write:
The kinds of questions addressed by “theoretical computer science” are in fact logical questions, addressable on the basis of pure mathematics. They are true of any conceivable world, not just the actual world in which we happen to live.
That might be less true than you think. Polynomial-time Turing machines are indeed a mathematical construct, but the reason people studied them in the first place is that they believed they accurately modelled what can efficiently be computed in the physical world. If large-scale quantum computers are built (and assuming that factoring is hard for classical computers), that belief will have been experimentally falsified.
Of course, a large part of physics also consists of assuming a model and then working out its mathematical consequences. But I agree with you that, on the whole, theoretical computer scientists place much less emphasis on the model’s being “physical,” and much more emphasis on mathematical minutiae like the probabilities summing to unity. 🙂
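To make the factoring connection concrete: the quantum part of Shor’s algorithm finds the multiplicative order of a random base modulo N, and a purely classical step then turns that order into a factor. Here is a minimal sketch of that classical reduction, with brute-force order-finding standing in for the quantum subroutine (fine for tiny numbers, hopeless for big ones, which is the whole point); the function names are just illustrative.

```python
import math
import random

def order_brute_force(a, n):
    """Smallest r > 0 with a^r = 1 (mod n) -- the quantum computer's actual job."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order_finding(n, attempts=20):
    """Classical reduction from factoring n to finding multiplicative orders."""
    for _ in range(attempts):
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                      # lucky guess: a already shares a factor with n
        r = order_brute_force(a, n)       # Shor replaces this with quantum period-finding
        if r % 2 == 0:
            y = pow(a, r // 2, n)
            if y != n - 1:                # need a^(r/2) != -1 (mod n)
                f = math.gcd(y - 1, n)
                if 1 < f < n:
                    return f
    return None

print(factor_via_order_finding(15))   # 3 or 5
print(factor_via_order_finding(21))   # 3 or 7
```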
Well, let’s keep our eggs in the right baskets. Proofs about what polynomial-time Turing machines can do will be applicable to any polynomial-time Turing machines, regardless of what the laws of physics happen to be. The connection between such hypothetical constructs and the real physical world is a completely separate question (regardless of what the motivations may have been), and that may indeed depend on the laws of physics.
Is there a concrete example of a profound insight about the physical world — one that wouldn’t have been relevant had the laws of physics been different — from theoretical computer science over the last 30 years? E.g. from quantum information theory? Would anyone like to defend a claim that our ability to factor large numbers using a quantum computer contributes as much to our basic picture of the universe as dark energy, or emergent spacetime? (Just among us friends.)
Would anyone like to defend a claim that our ability to factor large numbers using a quantum computer contributes as much to our basic picture of the universe as dark energy, or emergent spacetime? (Just among us friends.)
Can I be your friend? 🙂 If so, I’ll be happy to defend that exact claim. Shor’s algorithm didn’t change quantum mechanics, but along with a few other results from the mid-nineties, it did represent a completely new way of thinking about quantum mechanics. Here are some examples of statements that are “obvious” from a post-Shor perspective:
(1) To really understand QM, you need to consider entangled states of hundreds or thousands of particles, not just two or three.
(2) On the other hand, the basic conceptual features of QM can be not only understood, but studied in great detail, without ever encountering such concepts as boson, fermion, energy, commutator, or wave-particle duality.
(3) Entanglement, far from being “spooky”, is a quantifiable resource like energy or time. (A tiny worked example appears at the end of this comment.)
(4) Feynman’s path-integral formalism basically boils down to the statement that BQP (the class of problems solvable efficiently by a quantum computer) is contained in PP (Probabilistic Polynomial-Time).
(5) Schrodinger’s cat is not a very interesting example of a large entangled state. A much more interesting example is the “cluster state”: basically, a 2D lattice of spins subject to pairwise nearest-neighbor Hamiltonians.
(6) Despite the fact that amplitudes vary continuously, quantum information can in principle be protected against noise for an arbitrarily long time — and for that reason, should be thought of as digital rather than analog.
(7) When studying spectral gaps in condensed-matter systems, often the key question to ask is whether they decrease polynomially or exponentially as the number of particles goes to infinity.
Let me make a falsifiable prediction: that over the next few decades, statements like the above will come to seem as obvious to “mainstream” physicists as they now seem to Shorians — and that this will affect, not only how they talk about foundational issues, but also what experimental goals they consider worth reaching and how they teach QM to undergrads.
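To put a number on point (3): for a pure state of two qubits, the entropy of entanglement is just the von Neumann entropy of either reduced state, computable from the Schmidt coefficients. A tiny sketch, with illustrative function names rather than any particular library’s API:

```python
import numpy as np

def entanglement_entropy(state, dims=(2, 2)):
    """Von Neumann entropy (in bits) of the reduced state of the first subsystem."""
    psi = np.asarray(state, dtype=complex).reshape(dims)
    # Schmidt coefficients are the singular values of the reshaped state vector.
    schmidt = np.linalg.svd(psi, compute_uv=False)
    probs = schmidt**2
    probs = probs[probs > 1e-12]
    return float(-np.sum(probs * np.log2(probs)))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)      # (|00> + |11>)/sqrt(2)
product = np.array([1, 1, 1, 1]) / 2.0          # |+>|+>, unentangled

print(entanglement_entropy(bell))      # 1.0 ebit
print(entanglement_entropy(product))   # 0.0
```

The Bell state carries exactly one ebit and the product state carries none; operationally, that one ebit is exactly the resource consumed in teleporting a single qubit.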
“the Standard Model cannot be properly understood without operads and other categorical beasts”
I must have missed another one of those memos. “To: S.Weinberg, S.Glashow, etc., etc. You don’t properly understand the Standard Model. Go away and learn your operads. Sincerely, Category theorists.”
Aaronson compensated for the 30-year limit by letting in ‘the Universe’ – just the place where cosmology and astrophysics have got us so much further. It seems to me that all his examples are not particularly about this ol’ Universe, but rather about our picture of the methods by which reasoning and logic can operate.
As he said:
“discoveries about the capacities of finite beings like ourselves to learn mathematical truths.”
Then where is ‘the Universe’? Maybe he means the universe as seen by mathematicians, in which the important constituents are proofs, algorithms and computations.
Is the most important thing about the cosmological constant the fact that (if it really is constant) it disallows computations involving more than 10^122 bits?
It seems to me that all his examples are not particularly about this ol’ Universe, but rather about our picture of the methods by which reasoning and logic can operate.
Quantum computing has taught us, if it wasn’t clear already, that “the methods by which reasoning and logic can operate” (or at least feasibly operate) depend on the laws of the universe. The limits of feasible computation are not knowable a priori; they can only be discovered empirically. I realize that much of what I’m saying is controversial, but I hope everyone can agree at least on that.
The way I usually think is in terms of equivalence classes on universes, where two universes are to be identified if they support the same sorts of computation and communication. I know that’s not the only way to think about physics, but it’s a way that’s already been very fruitful for quantum information, and I expect that it will become more prevalent in the future. You should try it sometime!
Maybe he means the universe as seen by mathematicians, in which the important constituents are proofs, algorithms and computations.
Yes.
Is the most important thing about the cosmological constant the fact that (if it really is constant) it disallows computations involving more than 10^122 bits?
Yes.
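For anyone wondering where the 10^122 comes from: it is, roughly, the Bekenstein-Hawking entropy of the de Sitter horizon set by the observed cosmological constant. A back-of-the-envelope version, in units with $\hbar = c = k_B = 1$:

$$ r_\Lambda = \sqrt{\frac{3}{\Lambda}}, \qquad S_{\rm dS} = \frac{A}{4\,\ell_P^2} = \frac{\pi r_\Lambda^2}{\ell_P^2} = \frac{3\pi}{\Lambda\,\ell_P^2} \sim 10^{122}, $$

using the observed value $\Lambda\,\ell_P^2 \sim 10^{-122}$. Reading that entropy as a bound on the number of bits any observer inside the horizon can ever manipulate is, of course, the more speculative step.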
Well, let’s keep our eggs in the right baskets.
Let me defend a little bit the bringing in of experiment. The point is that Computer Science doesn’t have an experimental branch. The closest thing to it is computer engineering.
With physics or cosmology — the whole field doesn’t exist without experiment, even if you only want to talk about theoretical discoveries.
But as for profound insights into the nature of our Universe – I like your trump cards, and agree with you completely on those. Part of my reason for hammering away is that the progress of science only rarely includes deep, profound, paradigm-shifting things, but all the time involves regular beating away at the problems and pulling away bits of the veil to see hints of what’s beyond. And it’s all real, about our physical universe. But, yeah, that’s me whining that I don’t think the terms of the challenge are right… although I might get into a long semantic argument about “basic picture of the Universe,” but I’ve already done enough of that 🙂
(Oh, and by the way, I’m fully a clown — fringe science! Not just island, but Kea too.)
-Rob
Computer science will make important contributions to physics in the future. Questions like why we find ourselves in “this world” and not in any other possible world can only be answered by considering the ensemble of all possible worlds.
Enough here: dark matter is almost certainly not black holes – the bounds on the contribution of quiescent black holes to the cosmological density are very strict, and far below the observed cosmological density for pretty much any plausible black hole mass.
Bernie Carr reviews this issue every few years and plops down the new constraints, which get tighter each time.
Here is a recent discussion on primordial black holes, which includes some useful limits on their density: http://arxiv.org/pdf/astro-ph/0511743
The only plausible mass range is ~10^16 kg – which is sub-lunar – and it requires some amazing fine-tuning to produce black holes in such numbers. I don’t know of any plausible physics that would make a cosmological density of black holes at that mass.
A more comprehensive review is found in Carr’s 1994 ARAA paper.
http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1994ARA%26A..32..531C&data_type=PDF_HIGH&type=PRINTER&filetype=.pdf
Sean, like so many things, I’m so with you on this one!
Computer science – unlike physics – can freely cross over into the realm of fantasy and still remain intact. Stated differently, physics – in comparison to computer science – must always stay hinged upon Reality; otherwise, physics falls apart. In other words, when physics – as opposed to computers – wanders outside the bounds of Nature, physics morphs into nonsense. Simply put, this is what truly distinguishes the (unadulterated) natural science of physics from the (adulterated) artificial science of computers.
Computer science – unlike physics – can freely cross over into the realm of fantasy and still remain intact.
Have you been playing too much World of Warcraft? 🙂
-Rob
Cynthia, what you call fantasy may be reality for creatures living in that “fantasy world”. They may regard our universe as a “fantasy world”.
Rob, thanks, I couldn’t have said it any better. ;)
Count Iblis, if you think that creatures elsewhere regard our universe as mere fantasy it’s probably because these “elsewhere creatures” exist within another pocket universe outside our particular Hubble Bubble of the Landscape.
Hi Sean,
You asked if anyone would defend the claim that “our ability to factor large numbers using a quantum computer contributes as much to our basic picture of the universe as dark energy, or emergent spacetime?”
Let me start out by saying that I do not agree with Scott Aaronson’s claim.
I can’t defend that claim, impressive and all as Shor’s algorithm is; however, I think it would be misguided to write off QIP as a mechanism for understanding the universe. Certainly Bell’s inequality is a massive achievement, and while QIP did not really exist at the time, the result can certainly be considered a quantum information result.
Many results of quantum information theory are simply different ways of looking at physical laws. The no-signalling condition, for example, is a somewhat more general rule than special relativity on its own implies.
To me, at least, it seems that information theory can be viewed as another formulation of physics. There has certainly been a huge amount of progress in this area over the last 30 years, starting with the Holevo bound.
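For reference, the Holevo bound mentioned here limits the classical information extractable from an ensemble $\{p_i, \rho_i\}$ of quantum states:

$$ I(X\!:\!Y) \;\le\; \chi \;=\; S\Big(\sum_i p_i\,\rho_i\Big) - \sum_i p_i\,S(\rho_i), $$

where $S$ is the von Neumann entropy; for states of $n$ qubits this caps the accessible information at $n$ classical bits, no matter how clever the measurement.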
That said, I think Scott was specifically referring to computation and not information theory, but I may be wrong.