A little while ago I went to see Zizek!, a new documentary about the charismatic and controversial Slovenian philosopher and cultural critic Slavoj Zizek. Part of the Zizekian controversy can be traced straightforwardly to his celebrity — not hard to get fellow academics ornery when you’re greeted by admiring throngs at each of your talks (let me tell you) — but there is also his propensity for acting in ways that are judged to be somewhat frivolous: frequent references to pop culture, an unrestrained delight in telling jokes. I was fortunate enough to see Zizek in person, as part of a panel discussion following the film. He is a compelling figure, effortlessly outshining the two standard-issue academics flanking him on the panel. He adamantly insisted that he had no control over the documentary of which he was the subject, indeed that he hadn’t even seen it, but then revealed that a number of important scenes were admittedly his idea. In one example, the camera lingers on a striking portrait of Stalin in his apartment, which the cinematic Zizek explains as a litmus test, a way of interrogating the bourgeois sensibilities of his visitors. The flesh-and-blood Zizek, meanwhile, pointed out that it was just a joke, and that he would never have something so horrible as a portrait of Stalin on his wall. This ties into his notion that a film will never reveal the true person behind the scholar or public figure, nor should it; the ideas will stand or fall by themselves, separate from their personification in an actual human. I have no educated opinion about his standing as a thinker; see John Holbo, Adam Kotsko (and here), or Kieran Healy for some opinions, or read this interview in The Believer and judge for yourself.
The movie opens with a Zizek monologue on the origin of the universe and the meaning of life. We can talk all we like, he says, about love and meaning and so on, but that’s not what is real. The universe is “monstrous” (one of his favorite words), a mere accident. “It means something went terribly wrong,” as you can hear him say through a distinctive lisp in this clip from the movie. He even invokes quantum fluctuations, proclaiming that the universe arose as a “cosmic catastrophe” out of nothing.
I naturally cringed a little at the mention of quantum mechanics, but his description ultimately got it right. Our universe probably did originate as a quantum fluctuation, either “from nothing” or within a pre-existing background spacetime. Mostly, to be honest, I was just jealous. As a philosopher and cultural critic, Zizek not only gets to bandy about bits of quantum cosmology, but is permitted (even encouraged) to connect them to questions of love and meaning and so on. As professional physicists, we’re not allowed to talk about those questions — referees at the Physical Review would not approve. But it’s worth interrogating this intellectual leap, from the accidental birth of the universe to the richness of meaning we see around us. How did we get here from there, and why?
It’s the possibility of addressing this question that I take to be the most significant aspect of the “computational quantum universe” idea advocated by Seth Lloyd in his new book Programming the Universe. Lloyd is a somewhat controversial figure in his own right, but undoubtedly an influential physicist; he was the first to propose a plausible design for a quantum computer: a computer that takes advantage of the full quantum-mechanical wavefunction of its elements, rather than being content with the ordinary classical states.
To Lloyd, quantum computation is a hammer, and it’s tempting to see everything interesting as a nail — from black holes to quantum gravity to the whole universe. The frustrating aspect of his book is the frequency with which he insists that “the universe is a quantum computer,” without always making it clear just what that means or why we should care. What is the universe supposed to be computing, anyway? Its own evolution, apparently. And what good is that, exactly? It’s hard to tell at first whether the entire idea is merely a particular language in which we are free to talk about good old-fashioned physics and cosmology, or whether it’s a profound change of perspective that can be put to good use. What physicists would really like to know is, does thinking of the universe as a quantum computer actually help us solve any problems?
Well, maybe. My own personal reconstruction of the problem that Lloyd is suggesting we might be able to solve by thinking of the universe as a quantum computer, although in slightly different words, is precisely that raised by Zizek’s monologue: Why, in the course of evolving from the early universe to the end of time, do we pass through a phase featuring the fascinating and delightful complexity we see all around us?
Let’s be more specific about what that means. The early universe — at least, the hot Big Bang with which our observable universe began — is a very low-entropy state. That is, it’s a very unlikely configuration in the space of all the ways one could arrange the universe — much like having all of the air molecules accidentally be located in one half of a room (although much worse). But entropy is increasing as the universe evolves, just like the Second Law of Thermodynamics says it should. The late universe will be very high entropy. In particular, if the universe continues to expand forever (which seems likely, although one never knows), we are evolving toward heat death, in which matter cools down and is dispersed thinly over space after black holes form and evaporate. This is a “natural” state for the universe, one which will essentially stay that way in perpetuity.
However. While the early universe is low-entropy and the late universe is high-entropy, both phases are simple. That is, their macrostates can be described in very few words (they have low Kolmogorov complexity): the early state was hot and dense and smoothly-distributed, while the final state will be cold and dilute and smoothly-distributed. But our current universe, replete as it is with galaxies and planets and blogospheres, isn’t at all simple, it’s remarkably complex. There are individual subsystems (like you and me) that would require quite a lengthy description to fully specify.
So: Why is it like that? Why, in the evolution from a simple low-entropy universe to a simple high-entropy universe, do we pass through a complex medium-entropy state?
Lloyd’s suggested answer, to the extent that I understand it, arises from the classic thought experiment of the randomly typing monkeys. A collection of monkeys, randomly pecking at keyboards, will eventually write the entire text of Hamlet — but it will take an extremely long time, much, much longer than the age of the observable universe. For that matter, it will take a very long time to get any “interesting” string of characters. Lloyd argues that the situation is quite different if we allow the monkeys to randomly construct algorithms rather than mere strings of text; in particular, the likelihood that such an algorithm will produce interesting (complex) output is much greater than the chance of randomly generating an interesting string. This phenomenon is easily demonstrated in the context of cellular automata: it’s remarkably easy to find very simple rules for automata that generate extremely complex output from simple starting configurations, as in the sketch below.
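This is easy to play with yourself. Here is a minimal Python sketch (my own illustration, not anything from Lloyd’s book) of elementary cellular automaton Rule 110, a famously simple rule that produces intricate, even computationally universal, behavior from a single live cell:

```python
# Elementary cellular automaton Rule 110: each new cell is a lookup of its
# 3-bit neighborhood (left, center, right) in the binary expansion of 110.
WIDTH, STEPS, RULE = 80, 40, 110

def step(cells):
    n = len(cells)
    return [(RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * (WIDTH - 1) + [1]   # simple initial configuration: one live cell
for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Run it and a complex tapestry of interacting triangles unfolds from one “#”, despite the rule itself fitting in a single byte.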
So the force of the idea that “the universe is a quantum computer” lies in an understanding of the origin of complexity. Think of the different subsystems of the universe, existing in slightly different arrangements, running different quantum algorithms. It is much easier for such subsystems to generate complex output computationally than one might guess from an estimate of the likelihood of hitting upon complexity by randomly choosing configurations directly. There is an obvious connection to genetics and evolution; DNA sequences can be thought of as lines of computer code, and mutations and genetic drift allow organisms to sample different algorithms. It’s much easier for natural selection to hit upon interesting possibilities by acting on the underlying instruction set, rather than by acting on the (much larger) space of possible configurations of the pieces of an organism.
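To make the selection point concrete, here is a toy Python sketch in the spirit of Dawkins’ well-known “weasel” program (again my illustration, not Lloyd’s): mutation and selection acting on an underlying sequence reach a 28-character target in tens of generations, while the monkeys would need on the order of 27^28 random attempts to type it directly.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s, rate=0.05):
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    # Keep the fittest of the parent and 100 mutated offspring.
    candidates = [parent] + [mutate(parent) for _ in range(100)]
    parent = max(candidates, key=fitness)
    generation += 1
print(generation, parent)   # typically converges in well under 100 generations
```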
Of course I don’t really know if any of this is true or interesting. In particular, the role of the “quantum” nature of the computation seems rather unclear; at a glance, it would seem that much of the universe’s manifest complexity lies squarely in the classical regime. But big ideas are fun, and concepts like entropy and complexity are far from completely understood, so perhaps it’s permissible to let our imaginations run a little freely here.
The reason why this discussion of quantum computation and the complexity of the universe fits comfortably with the story of Zizek is that he should understand this (if he doesn’t already). Zizek is a Lacanian, a disciple of famous French psychoanalyst Jacques Lacan. Lacan was a similarly controversial figure, although his charisma manifested itself as taciturn impenetrability rather than voluble popular appeal. One of Lacan’s catchphrases was “the unconscious is structured like a language.” Which I take (not having any idea what I am talking about) as a claim that the unconscious is not simply a formless chaos of mysterious impulses; rather, it has an architecture, a grammar, rules of operation much like those of our higher-level consciousness.
One way of summarizing Lloyd’s explanation of the origin of complexity might be: the universe is structured like a language. It is not just a random configuration of particles typed out by tireless monkeys; it is a quantum computer, following the rules of its algorithms. And by following these rules the universe manages to generate configurations of enormous complexity. Examples of which include science, poetry, love, meaning, and all of those aspects of human life that lend it more interest than we attach to other chemical reactions.
Of course, it’s only a temporary condition. From featureless simplicity we came, and to featureless simplicity we will return. Like a skier riding the moguls, eventually we’ll reach the bottom of the hill, and dissolve into thermal equilibrium. It’s up to us to enjoy the ride.
I find these ideas very interesting. You could also say that the universe is a (quantum) algorithm instead of a computer. This is close to Tegmark’s idea that all possible mathematical laws define their own universes. You don’t really postulate an infinite number of physical worlds. The idea is rather that physical existence is “nothing more” than mathematical existence.
Such ideas are interesting because you then don’t need to bother about questions like where the universe came from. The universe should be identified with the mathematical model that describes it, and that model is in a certain sense timeless. Time is a concept that can be formulated relative to the model and can be finite, even though the universe never popped into existence from nothing.
You can also approach this from another angle. Ultimately, we humans are also algorithms. It is reasonable to believe that if you simulated the brain of a person accurately enough, you would recreate that person’s consciousness. The algorithm that describes our consciousness is extremely complicated.
In principle you can think of observers as universes in their own right (both are algorithms). However, we find ourselves embedded in a universe described by simple laws. This is consistent with the idea that universes with low Kolmogorov complexity are favored.
“if the universe continues to expand forever (which seems likely, although one never knows)”
Is that a prediction about the future of epistemology, or a comment on the state of astro-ph? 🙂
Interesting work on the universe as a quantum computer has been done by Paula Zizzi.
See, for example:
http://xxx.lanl.gov/abs/gr-qc/0304032
http://xxx.lanl.gov/abs/gr-qc/0204007
http://xxx.lanl.gov/abs/gr-qc/0110122
http://xxx.lanl.gov/abs/gr-qc/0103002
http://xxx.lanl.gov/abs/gr-qc/0007006
Tony Smith
http://www.valdostamuseum.org/hamsmith/
The universe is (The universe is (The universe is recursive)).
Seth Lloyd’s main current attempt at answering the question “does thinking of the universe as a quantum computer actually help us solve any problems?” is:
**********************
A theory of quantum gravity based on quantum computation (quant-ph/0501135)
ABSTRACT: This paper proposes a method of unifying quantum mechanics and gravity based on quantum computation. In this theory, fundamental processes are described in terms of pairwise interactions between quantum degrees of freedom. The geometry of space-time is a construct, derived from the underlying quantum information processing. The computation gives rise to a superposition of four-dimensional spacetimes, each of which obeys the Einstein-Regge equations. The theory makes explicit predictions for the back-reaction of the metric to computational `matter,’ black-hole evaporation, holography, and quantum cosmology.
**********************
[Full Disclosure: I am a PhD student of Seth’s, though on much less speculative matters… oh wait, lots of people apply “speculative” to research on practical quantum computer architectures and their ability to solve NP-complete problems in the typical case… but I digress. I’d be very interested, indeed perhaps unduly interested, in your comments, Prof. Carroll, on this approach to quantum gravity and cosmology.]
Thanks, Sean, for alluding to Seth Lloyd’s recent work. He does a nice job of differentiating between his “many histories” approach and Deutsch’s “many worlds” approach to quantum mechanics. He does an especially superb job of tying information theory into the intricacies of the second law. Overall, I find his section on quantum demonology and quantum exorcism quite entertaining. He argues that while Maxwell’s demon is in conflict with classical mechanics and thermodynamics, Loschmidt’s and Laplace’s demons do not violate the second law, via quantum-information dynamics. Despite appearing rather fantastic on the surface, harnessing dynamic demons for the purpose of quantum computing might, after all, be in line with reality. In a nineteenth-century preamble to twenty-first-century physics, Boltzmann challenges Loschmidt: “Go ahead, reverse them.”
William, thanks for commenting. I don’t know enough about the quantum-computation approach to quantum gravity to say anything especially deep about it. In general I’m all in favor of dramatically new approaches to trying to quantize gravity, which is a hard problem. However, my impression is that the approach starts with a discretization of spacetime, and that approach faces some fundamental problems right from the start; see this post by Jacques Distler for some idea why:
http://golem.ph.utexas.edu/~distler/blog/archives/000639.html
Pingback: chapati mystery :: Žižek Mechanics
I read the paper by Lloyd that William mentioned above some time ago (version 1, not version 8 🙂 ). Somewhere at the end of the paper, Lloyd suggests that we should consider the superposition of all possible algorithms, with each algorithm weighted by 2^(−ℓ/2), where ℓ is the program length.
So he ends up considering the set of all possible formally describable theories. It is not clear to me that you then really need quantum algorithms. Why not postulate an ensemble of classical algorithms?
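As a toy rendering of that weighting (one reading of the proposal, not code from the paper): take a prefix-free set of bit-string “programs”, so that the Kraft sum of 2^(−ℓ) is at most one and the 2^(−ℓ/2) amplitudes can be normalized. Shorter programs then dominate the superposition.

```python
# Hypothetical prefix-free programs (no program is a prefix of another),
# so sum(2^-len) <= 1 by the Kraft inequality and normalization works.
programs = ["0", "10", "110", "111"]

amplitudes = {p: 2.0 ** (-len(p) / 2) for p in programs}   # amplitude ~ 2^(-l/2)
norm = sum(a * a for a in amplitudes.values()) ** 0.5
probabilities = {p: (amplitudes[p] / norm) ** 2 for p in programs}

for p, prob in sorted(probabilities.items(), key=lambda kv: -kv[1]):
    print(p, round(prob, 3))   # probability ~ 2^(-length): short programs dominate
```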
Pingback: Lubos Motl's Reference Frame
I think Seth Lloyd should be a politician, or else he should write astrological horoscopes. What he says is too vague to be called Science. Because of this vagueness, everyone *thinks* they see a reflection of their own ideas in what he says. If by some chance 1/100th of what he says turns out to be true, he will claim to be the Father of a whole school of thought. It’s brilliant, but it is not Science.
I will point out a potential loophole in Lloyd’s model based on “programming the universe”: his model is based on the assumption that all operations within the universe can be completely described by computational algorithms. I will argue the following: until physicists can completely account for the enormous deficits in the universe’s entire energy budget, it seems Lloyd might be responding rather prematurely on the assumption that the universe is exclusively based on computational algorithms. Then again, perhaps I am being even more premature to suggest that there might be a link between dark matter/energy and a non-recursive component of the universe. Nevertheless, I can only hope that the quantum computer does not succumb to the ultimate graveyard of Gödel’s Theorem.
I have a question about the distinction you make between the current state of the universe (medium-entropy, high complexity) and the end state of the universe (high-entropy, low complexity). I believe there are different senses of complexity being employed, though I may be very misled. Isn’t it the case that, while the end state of the universe may be descriptively very simple (cold, dilute, and smoothly-distributed), its actual algorithmic complexity is higher than that of the present state of the universe?
What I mean is that in terms of descriptive complexity the initial and final universes are equivalent, and the current state is much more complex (it takes a lot to describe our current state). However, the algorithmic complexity should increase monotonically with the entropy of the system. So, while we currently have regularities in, say, molecular structures (DNA, whatever), the end universe will be completely random and so have very high algorithmic complexity. Again, I may be misled, but I think that when I learned about algorithmic complexity it was in the context of “compressibility” of the data to a minimal description, and the end universe, or heat death, should be perfectly random and by nature incompressible. I would love to hear that I’m incorrect, and how!
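A quick way to see the microstate-level point is to use off-the-shelf compression as a crude stand-in for algorithmic complexity (a standard trick, nothing specific to the post): uniform data compresses almost completely, regular data compresses well, and random data barely compresses at all.

```python
import random
import zlib

uniform = b"\x00" * 10_000                                   # featureless
structured = (b"GATTACA" * 100 + b"METHINKS") * 10           # regular but nontrivial
noise = bytes(random.getrandbits(8) for _ in range(10_000))  # "heat death" analogue

for name, data in [("uniform", uniform), ("structured", structured), ("random", noise)]:
    # Compressed size approximates the length of a minimal description.
    print(f"{name:>10}: {len(data)} -> {len(zlib.compress(data, 9))} bytes")
```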
Cynthia, Gödel’s theorem doesn’t have to be a problem. The halting problem (whether or not a computer running a program will ever stop) is unsolvable in the sense that no algorithm can decide it in the general case. This means that algorithms can do undecidable things.
Also the fact that in physics we use the uncountable reals doesn’t imply that the universe is uncountable, see here.
With the closing comments in mind, it seems as if we’re interested in the complexity of the set of generating algorithms rather than the complexity of the universe: with the complexity of the grammar, not the final phrase. So it’s not that the initial and final universe have the same algorithmic complexity, but that their algorithms have the same algorithmic complexity: describing how to create complete uniform singularity or complete randomness are equally complex. However, having to describe the actual state of the final universe has much higher algorithmic complexity: you actually have to write the whole thing out. Essentially, the grammar of the universe will develop as the universe does, and its peak will be in some middle place between the birth of the universe and its death. As entropy increases, the opportunity for new interactions to occur also increases, until some critical point when we start experiencing a decline in available interactions… perhaps?
The Mathematical Basis for Deterministic Quantum Mechanics
I am of course speaking about ’t Hooft’s latest paper, and the title of same.
Looking at simple entropic realization and thermodynamics, how could one not be enamoured with how we would interpret information loss, and the resulting universe in question, as a complexity, entropically?
Of course, we need a way in which to do this, and to do it measurably: taking reductionism down to quantum perceptions.
From this perspective, Smolin’s endeavor to describe that “information loss” is built on a solid foundation, but knowing full well the timeline of the string analogies, will it be capable of looking at gluonic-plasma interpretations? GLAST determinations seem relevant here.
So ’t Hooft adds “subsystems” as two harmonic oscillators. Another option?
A layman wondering.
More on name.
Of course, the “basis of thought” had to be considered in the context of other information of ’t Hooft’s.
Why would you hide it, Lubos? 🙂
cloois, these notions are somewhat vague, both because I am not an expert and because I think they are just vague at this point. But there is one frequently-confusing issue that perhaps I can clear up. By “simplicity” I was referring to algorithmic complexity of the macrostate to which the universe belongs — the description in which we consider only macroscopic variables, treating individual microscopic configurations as equivalent if they correspond to the same macroscopic description.
Think of it this way: how much information does it take to describe the air spread evenly throughout the room? Well, if you insist that we specify the exact microstate, the position and momentum of every single molecule, it would require a huge amount of information. From that point of view, the state is zero entropy and very complex. But if we coarse-grain our description, paying attention only to the macroscopic variables, it’s a very high-entropy state, and also very simple.
Indeed, high-entropy states are always “simple” in that sense. But low-entropy states may or may not be. Having all the air molecules located in one corner of the room is low-entropy (there aren’t many microstates corresponding to such a macrostate, compared to the total number available), but it does have a simple description. If the molecules were organized into some complex structure, the state would be both low-entropy and complex.
Experts are welcome to chime in. The notion of simplicity I’m using here doesn’t correspond to any technical concept of which I’m aware.
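As a toy version of the counting behind that intuition, assume (purely for illustration) 100 distinguishable molecules and record only which half of the room each one occupies:

```python
from math import comb, log

N = 100                       # toy number of air molecules
spread_out = comb(N, N // 2)  # microstates with half the molecules on each side
one_corner = comb(N, 0)       # microstates with every molecule on one side: just 1

# Boltzmann entropy S = log W (in units of k_B): large for the generic
# macrostate, zero for all-in-one-corner, yet both are simple to describe.
print(log(spread_out))   # ~ 66.8
print(log(one_corner))   # 0.0
```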
First of all, according to our current understanding a quantum computer cannot solve NP-complete problems. In fact, there are only a handful of problems for which a quantum computer provides an exponential speedup. In addition, there are a few important problems that gain a polynomial speedup. The fact that factoring is one of the former, and that it is the basis of all modern cryptography has caught the attention of government agencies that carry large suitcases full of money.
Every grant proposal now tries to make a connection to quantum computing, because that is where the money is. Some people are very cynical about this, but it has ensured a healthy level of funding for all branches of physics that concentrate on manipulating quantum systems. A lot of interesting results have come out of this (some having nothing to do with quantum computing), and today we can do things that were wild dreams five years ago. I am convinced (as is David Deutsch) that we will have a working prototype of a quantum computer within a decade. What we’ll do with it once the codes are cracked is a different matter. Don’t forget that transistors were invented to make hearing aids.
Finally, we are slowly gaining a better understanding of what is quantum about quantum information theory. We have a flurry of beautiful results, but right now I would say that we’re at the point electromagnetism was in a decade before Maxwell. Quantum information needs its own James Clerk Maxwell to extract the essence of it all and condense it into a coherent theory. Personally, I believe that this theory will have something to say about quantum gravity, possibly via the holographic principle.
We will see…
PK, by ‘cannot solve’, do you mean ‘cannot solve in polynomial time’? You can solve NP-complete problems by search, which is classically O(N) but quantumly O(N^(1/2)). Still a ‘hard’ problem, though.
I agree with you about the serendipitous nature of one of the few exponential speedups belonging to an algorithm that can attack public-key encryption techniques. I would also say that quantum communications are more useful than quantum computers in the short term.
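For concreteness, here is a small classical simulation of Grover’s search (a sketch of the standard algorithm, simulated with a plain state vector) showing that O(N^(1/2)) behavior: after roughly (pi/4)*sqrt(N) iterations the marked item is found with probability near one.

```python
import numpy as np

def grover_search(n_qubits, marked):
    N = 2 ** n_qubits
    state = np.full(N, 1.0 / np.sqrt(N))          # uniform superposition over N items
    for _ in range(int(np.pi / 4 * np.sqrt(N))):  # ~ (pi/4) sqrt(N) iterations
        state[marked] *= -1.0                     # oracle: flip the marked amplitude
        state = 2.0 * state.mean() - state        # diffusion: inversion about the mean
    return state

state = grover_search(10, marked=123)             # N = 1024, so ~25 iterations
print(abs(state[123]) ** 2)                       # ~ 0.999: found in O(sqrt(N)) queries
```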
The entropy of N particles depends on your choice of fine-graining. The minimum size of a cell in phase space is typically chosen as ℏ^(3N), but this is ultimately an arbitrary choice. Who knows what goes on at extremely small scales. For all we know, there is an enormous amount of complexity that is averaged out. Think of the aliens at the end of Men in Black. The question is thus whether we really have an increase in complexity followed by a decrease.
adam, that is indeed what I meant. Thanks.
From the post’s description of complexity situated between two simple states, I’m reminded of an analogous problem observed in consciousness research. Brain connections can’t be completely random or completely regular. Kolmogorov complexity fails to describe that too.
Some have tried to define appropriate complexity measures. One such is found in http://www.striz.org/docs/tononi-complexity.pdf with the situation vs. Kolmogorov described in the figure in Box 2, and the definition in Box 1. (Note: The paper is from “Trends in Cognitive Sciences”. I haven’t found the original.)
Oops. The definition is in Fig 2, not Box 1.
PK: I am pleased to hear that you are an active recipient of “large suitcases full of money.” Sounds like quantum computing is a best-kept secret among the few of you. Way to go!