Science

arxiv Find: What is the Entropy of the Universe?

And the answer is: about 10^102, mostly in the form of supermassive black holes. That’s the entropy of the observable part of the universe, at any rate. Or so you will read in this paper by Paul Frampton, Stephen D.H. Hsu, Thomas W. Kephart, and David Reeb, arxiv:0801.1847:

Standard calculations suggest that the entropy of the universe is dominated by black holes, although they comprise only a tiny fraction of its total energy. We give a physical interpretation of this result. Statistical entropy is the logarithm of the number of microstates consistent with the observed macroscopic properties of a system, hence a measure of uncertainty about its precise state. The largest uncertainty in the present and future state of the universe is due to the (unknown) internal microstates of its black holes. We also discuss the qualitative gap between the entropies of black holes and ordinary matter.

It’s easy enough to plug in the Hawking formula for black-hole entropy and add up all the black holes; but there are interesting questions concerning the connection between the entropy of matter configurations and black-hole configurations. They are explored in an earlier paper by Hsu and Reeb, “Black hole entropy, curved space and monsters,” which Steve blogged about here.

Is the Universe a Computer?

Via the Zeitgeister, a fun panel discussion at the Perimeter Institute between Seth Lloyd, Leonard Susskind, Christopher Fuchs and Sir Tony Leggett, moderated by Bob McDonald of CBC Radio’s Quirks & Quarks program. The topic is “The Physics of Information,” and as anyone familiar with the participants might guess, it’s a lively and provocative discussion.

A few of the panel members tried to pin down Seth Lloyd on one of his favorite catchphrases, “The universe is a computer.” I tackled this one myself at one point, at least half-seriously. If the universe is a computer, what is it computing? Its own evolution, apparently, according to the laws of physics. Tony Leggett got right to the heart of the matter, however, by asking “What kind of process would not count as a computer?” To which Lloyd merely answered, “Yeah, good question.” (But he did have a good line — “If the universe is a computer, why isn’t it running Windows?” Insert your own “blue screen of death” joke here.)

So I tried to look up the definition of a “computer.” You can open a standard text on quantum computation, but “computer” doesn’t appear in the index. The dictionary is either unhelpful — “a device that computes” — or too specific — “an electronic device designed to accept data, perform prescribed mathematical and logical operations at high speed, and display the results of these operations.” Wikipedia tells me that a computer is a machine that manipulates data according to a list of instructions. Again, too specific to include this universe, unless you interpret “machine” to mean “object.”

I think the most general definition of “computer” that would be useful is “a system that takes a set of inputs and deterministically produces a set of outputs.” The big assumption is that the same input always produces the same output, but I don’t think that’s overly restrictive for our present purposes. In that sense, the laws of physics act as a computer: given some data in the form of an initial configuration, the laws of physics will evolve the configuration into some output in the form of a final configuration. Setting aside the tricky business of wavefunction collapse, you have something like a computer. I suppose you could argue about whether the laws of physics are “the software” or the computer itself, but I think you are revealing the limitations of the metaphor rather than learning something interesting.
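As a toy illustration of that definition (entirely my own example, not anything from the panel), here is a minimal sketch of a deterministic "universe": a one-dimensional cellular automaton whose only physics is a fixed rule mapping one configuration to the next, so the same input configuration always produces the same output configuration.

```python
def step(state, rule=110):
    """One deterministic update of a 1-D cellular automaton with periodic
    boundaries: the same input configuration always yields the same output."""
    n = len(state)
    return tuple(
        (rule >> (4 * state[(i - 1) % n] + 2 * state[i] + state[(i + 1) % n])) & 1
        for i in range(n)
    )

# "Initial conditions" for the toy universe: a single live cell.
config = tuple(1 if i == 20 else 0 for i in range(41))
for _ in range(5):
    print("".join(".#"[c] for c in config))
    config = step(config)
```

In the metaphor of the post, the update rule plays the part of the laws of physics, the starting row is the initial data, and the run itself is the calculation.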

But if we take the metaphor at face value, it makes more sense to me to think of the universe as a calculation rather than as a computer. We have input data in the form of the conditions at early times, and the universe has calculated our current state. It could have been very different, with different input data.

And what precise good does it do to think in this way? Yeah, good question. (Which is not to imply that there isn’t an answer.)

A Dark, Misleading Force

Certain subsectors of the scientifically-oriented blogosphere are abuzz — abuzz, I say! — about this new presentation on Dark Energy at the Hubblesite. It’s slickly done, and worth checking out, although be warned that a deep voice redolent with mystery will commence speaking as soon as you open the page.

But Ryan Michney at Topography of Ignorance puts his finger on the important thing here, the opening teaser text:

Scientists have found an unexplained force that is changing our universe,
forcing galaxies farther and farther apart,
stretching the very fabric of space faster and faster.
If unchecked, this mystery force could be the death of the universe,
tearing even its atoms apart.

We call this force dark energy.

Scary! Also, wrong. Not the part about “tearing even its atoms apart,” an allusion to the Big Rip. That’s annoying, because a Big Rip is an extremely unlikely future for a universe even if it is dominated by dark energy, yet people can’t stop putting the idea front and center because it’s provocative. Annoying, but not wrong.

The wrong part is referring to dark energy as a “force,” which it’s not. At least since Isaac Newton, we’ve had a pretty clear idea about the distinction between “stuff” and the forces that act on that stuff. The usual story in physics is that our ideas become increasingly general and sophisticated, and distinctions that were once clear-cut might end up being altered or completely irrelevant. However, the stuff/force distinction has continued to be useful, even as relativity has broadened our definition of “stuff” to include all forms of matter and energy. Indeed, quantum field theory implies that the ingredients of a four-dimensional universe are divided neatly into two types: fermions, which cannot pile on top of each other due to the exclusion principle, and bosons, which can. That’s extremely close to the stuff/force distinction, and indeed we tend to associate the known bosonic fields — gravity, electromagnetism, gluons, and weak vector bosons — with the “forces of nature.” Personally I like to count the Higgs boson as a fifth force rather than a new matter particle, but that’s just because I’m especially fastidious. The well-defined fermion/boson distinction is not precisely equivalent to the more casual stuff/force distinction, because relativity teaches us that the bosonic “force fields” are also sources for the forces themselves. But we think we know the difference between a force and the stuff that is acting as its source.

Anyway, that last paragraph got a bit out of control, but the point remains: you have stuff, and you have forces. And dark energy is definitely “stuff.” It’s not a new force. (There might be a force associated with it, if the dark energy is a light scalar field, but that force is so weak that it’s not been detected, and certainly isn’t responsible for the acceleration of the universe.) In fact, the relevant force is a pretty old one — gravity! Cosmologists consider all kinds of crazy ideas in their efforts to account for dark energy, but in all the sensible theories I’ve heard of, it’s gravity that is the operative force. The dark energy is causing a gravitational field, and an interesting kind of field that causes distant objects to appear to accelerate away from us rather than toward us, but it’s definitely gravity that is doing the forcing here.

Is this a distinction worth making, or just something to kvetch about while we pat ourselves on the back for being smart scientists, misunderstood once again by those hacks in the PR department? I think it is worth making. One of the big obstacles to successfully explaining modern physics to a broad audience is that the English language wasn’t made with physics in mind. How could it have been, when many of the physical concepts weren’t yet invented? Sometimes we invent brand new words to describe new ideas in science, but often we re-purpose existing words to describe concepts for which they originally weren’t intended. It’s understandably confusing, and it’s the least we can do to be careful about how we use the words. One person says “there are four forces of nature…” and another says “we’ve discovered a new force, dark energy…”, and you could hardly blame someone who is paying attention for turning around and asking “Does that mean we have five forces now?” And you’d have to explain “No, we didn’t mean that…” Why not just get it right the first time?

Sometimes the re-purposed meanings are so deeply embedded that we forget they could mean anything different. Anyone who has spoken about “energy” or “dimensions” to a non-specialist audience has come across this language barrier. Just recently it was finally beaten into me how bad “dark” is for describing “dark matter” and “dark energy.” What we mean by “dark” in these cases is “completely transparent to light.” To your average non-physicist, it turns out, “dark” might mean “completely absorbs light.” Which is the opposite! Who knew? That’s why I prefer calling it “smooth tension,” which sounds more Barry White than Public Enemy.

What I would really like to get rid of is any discussion of “negative pressure.” The important thing about dark energy is that it’s persistent — the density (energy per cubic centimeter) remains roughly constant, even as the universe expands. Therefore, according to general relativity, it imparts a perpetual impulse to the expansion of the universe, not one that gradually dilutes away. A constant density leads to a constant expansion rate, which means that the time it takes the universe to double in size is a constant. But if the universe doubles in size every ten billion years or so, what we see is distant galaxies accelerating away — first they are X parsecs away, then they are 2X parsecs away, then 4X parsecs away, then 8X, etc. The distance grows faster and faster, which we observe as acceleration.
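Here is a minimal numerical sketch of that doubling (my own toy numbers: a ten-billion-year doubling time as in the text, and an arbitrary starting distance X):

```python
doubling_time_gyr = 10.0   # the universe doubles in size every ~10 billion years (from the text)
X = 1.0                    # initial distance to some faraway galaxy, in arbitrary units

for n in range(5):
    t = n * doubling_time_gyr
    distance = X * 2 ** n   # X, 2X, 4X, 8X, ...: the distance gained per step keeps growing
    print(f"t = {t:4.0f} Gyr   distance = {distance:4.0f} X")
```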

That all makes a sort of sense, and never once did we mention “negative pressure.” But it’s nevertheless true that, in general relativity, there is a relationship between the pressure of a substance and the rate at which its density dilutes away as the universe expands: the more (positive) pressure, the faster it dilutes away. To indulge in a bit of equationry, imagine that the energy density dilutes away as a function of the scale factor as R^{-n}. So for matter, whose density just goes down as the volume goes up, n=3. For a cosmological constant, which doesn’t dilute away at all, n=0. Now let’s call the ratio of the pressure to the density w, so that matter (which has no pressure) has w=0 and the cosmological constant (with pressure equal and opposite to its density) has w=-1. In fact, there is a perfectly lockstep relation between the two quantities:

n = 3(w + 1).

Measuring, or putting limits on, one quantity is precisely equivalent to doing the same for the other; it’s just a matter of your own preferences how you want to cast your results.
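A minimal sketch of that dictionary in code (the matter and cosmological-constant rows come from the text; the radiation row, with w = 1/3, is an extra example I have added):

```python
def dilution_exponent(w):
    """Convert the equation-of-state parameter w = pressure/density into the
    exponent n in 'energy density dilutes as R^{-n}', via n = 3(w + 1)."""
    return 3.0 * (w + 1.0)

for name, w in [("matter", 0.0),
                ("radiation", 1.0 / 3.0),          # not discussed in the post
                ("cosmological constant", -1.0)]:
    print(f"{name:22s} w = {w:+.2f}   n = {dilution_exponent(w):.1f}")
```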

To me, the parameter n describing how the density evolves is easy to understand and has a straightforward relationship to how the universe expands, which is what we are actually measuring. The parameter w describing the relationship of pressure to energy density is a bit abstract. Certainly, if you haven’t studied general relativity, it’s not at all clear why the pressure should have anything to do with how the universe expands. (Although it does, of course; we’re not debating right and wrong, just how to most clearly translate the physics into English.) But talking about negative pressure is a quick and dirty way to convey the illusion of understanding. The usual legerdemain goes like this: “Gravity feels both energy density and pressure. So negative pressure is kind of like anti-gravity, pushing things apart rather than pulling them together.” Which is completely true, as far as it goes. But if you think about it just a little bit, you start asking what the effect of a “negative pressure” should really be. Doesn’t ordinary positive pressure, after all, tend to push things apart? So shouldn’t negative pressure pull them together? Then you have to apologize and explain that the actual force of this negative pressure can’t be felt at all, since it’s equal in magnitude in every direction, and it’s only the indirect gravitational effect of the negative pressure that is being measured. All true, but not nearly as enlightening as leaving the concept behind altogether.

But I fear we are stuck with it. Cosmologists talk about negative pressure and w all the time, even though it’s confusing and ultimately not what we are measuring anyway. Once I put into motion my nefarious scheme to overthrow the scientific establishment and have myself crowned Emperor of Cosmology, rest assured that instituting a sensible system of nomenclature will be one of my very first acts as sovereign.

Arrow of Time FAQ

The arrow of time is hot, baby. I talk about it incessantly, of course, but the buzz is growing. There was a conference in New York, and subtle pulses are chasing around the lower levels of the science-media establishment, preparatory to a full-blown explosion into popular consciousness. I’ve been ahead of my time, as usual.

So, notwithstanding the fact that I’ve disquisitioned about this at great length and with considerable frequency, I thought it would be useful to collect the salient points into a single FAQ. My interest is less in pushing my own favorite answers to these questions than in setting out the problem that physicists and cosmologists are going to have to somehow address if they want to say they understand how the universe works. (I will stick to more or less conventional physics throughout, even if not everything I say is accepted by everyone. That’s just because they haven’t thought things through.)

Without further ado:

What is the arrow of time?

The past is different from the future. One of the most obvious features of the macroscopic world is irreversibility: heat doesn’t flow spontaneously from cold objects to hot ones, we can turn eggs into omelets but not omelets into eggs, ice cubes melt in warm water but glasses of water don’t spontaneously give rise to ice cubes. These irreversibilities are summarized by the Second Law of Thermodynamics: the entropy of a closed system will (practically) never decrease into the future.

But entropy decreases all the time; we can freeze water to make ice cubes, after all.

Not all systems are closed. The Second Law doesn’t forbid decreases in entropy in open systems, nor is it in any way incompatible with evolution or complexity or any such thing.

So what’s the big deal?

In contrast to the macroscopic universe, the microscopic laws of physics that purportedly underlie its behavior are perfectly reversible. (More rigorously, for every allowed process there exists a time-reversed process that is also allowed, obtained by switching parity and exchanging particles for antiparticles — the CPT Theorem.) The puzzle is to reconcile microscopic reversibility with macroscopic irreversibility.

And how do we reconcile them?

The observed macroscopic irreversibility is not a consequence of the fundamental laws of physics, it’s a consequence of the particular configuration in which the universe finds itself. In particular, the unusual low-entropy conditions in the very early universe, near the Big Bang. Understanding the arrow of time is a matter of understanding the origin of the universe.

Wasn’t this all figured out over a century ago?

Not exactly. In the late 19th century, Boltzmann and Gibbs figured out what entropy really is: it’s a measure of the number of individual microscopic states that are macroscopically indistinguishable. An omelet is higher entropy than an egg because there are more ways to re-arrange its atoms while keeping it indisputably an omelet, than there are for the egg. That provides half of the explanation for the Second Law: entropy tends to increase because there are more ways to be high entropy than low entropy. The other half of the question still remains: why was the entropy ever low in the first place?
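As a toy illustration of the Boltzmann-style counting (my own example, nothing to do with actual eggs): take N coins, let the macrostate be how many are heads, and define the entropy as the log of the number of microscopic arrangements compatible with it.

```python
from math import comb, log

def entropy(N, heads):
    """Log of the number of microstates (which particular coins are heads)
    consistent with the macrostate 'heads out of N coins'."""
    return log(comb(N, heads))

N = 100
for heads in (0, 5, 25, 50):
    print(f"{heads:3d} heads out of {N}: S = {entropy(N, heads):6.2f}")
```

There are enormously more ways to be half heads than all tails, which is the sense in which there are more ways to be high entropy than low entropy; the remaining puzzle is why the universe began in the analogue of the all-tails state.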

Is the origin of the Second Law really cosmological? We never talked about the early universe back when I took thermodynamics.

Trust me, it is. Of course you don’t need to appeal to cosmology to use the Second Law, or even to “derive” it under some reasonable-sounding assumptions. However, those reasonable-sounding assumptions are typically not true of the real world. Using only time-symmetric laws of physics, you can’t derive time-asymmetric macroscopic behavior (as pointed out in the “reversibility objections” of Loschmidt and Zermelo back in the time of Boltzmann and Gibbs); every trajectory is precisely as likely as its time-reverse, so there can’t be any overall preference for one direction of time over the other. The usual “derivations” of the Second Law, if taken at face value, could equally well be used to predict that the entropy must be higher in the past — an inevitable answer, if one has recourse only to reversible dynamics. But the entropy was lower in the past, and to understand that empirical feature of the universe we have to think about cosmology.

Does inflation explain the low entropy of the early universe?

Not by itself, no. To get inflation to start requires even lower-entropy initial conditions than those implied by the conventional Big Bang model. Inflation just makes the problem harder.

Does that mean that inflation is wrong?

Not necessarily. Inflation is an attractive mechanism for generating primordial cosmological perturbations, and provides a way to dynamically create a huge number of particles from a small region of space. The question is simply, why did inflation ever start? Rather than removing the need for a sensible theory of initial conditions, inflation makes the need even more urgent.

Turtles Much of the Way Down

Paul Davies has published an Op-Ed in the New York Times, about science and faith. Edge has put together a set of responses — by Jerry Coyne, Nathan Myhrvold, Lawrence Krauss, Scott Atran, Jeremy Bernstein, and me, so that’s some pretty lofty company I’m hob-nobbing with. Astonishingly, bloggers have also weighed in: among my regular reads, we find responses from Dr. Free-Ride, PZ, and The Quantum Pontiff. (Bloggers have much more colorful monikers than respectable folk.) Peter Woit blames string theory.

I post about this only with some reluctance, as I fear the resulting conversation is very likely to lower the average wisdom of the human race. Davies manages to hit a number of hot buttons right up front — claiming that both science and religion rely on faith (I don’t think there is any useful definition of the word “faith” in which that is true), and mentioning in passing something vague about the multiverse. All of which obscures what I think is his real point, which only pokes through clearly at the end — a claim to the effect that the laws of nature themselves require an explanation, and that explanation can’t come from the outside.

Personally I find this claim either vacuous or incorrect. Does it mean that the laws of physics are somehow inevitable? I don’t think that they are, and if they were I don’t think it would count as much of an “explanation,” but your mileage may vary. More importantly, we just don’t have the right to make deep proclamations about the laws of nature ahead of time — it’s our job to figure out what they are, and then deal with it. Maybe they come along with some self-justifying “explanation,” maybe they don’t. Maybe they’re totally random. We will hopefully discover the answer by doing science, but we won’t make progress by setting down demands ahead of time.

So I don’t know what it could possibly mean, and that’s what I argued in my response. Paul very kindly emailed me after reading my piece, and — not to be too ungenerous about it, I hope — suggested that I would have to read his book.

My piece is below the fold. The Edge discussion is interesting, too. But if you feel your IQ being lowered by long paragraphs on the nature of “faith” that don’t ever quite bother to give precise definitions and stick to them, don’t blame me.

Update: Lemaitre vs. Hubble

We’ve previously celebrated Father Georges-Henri Lemaitre on this very blog, for taking seriously the idea of the Big Bang. His name has come up again in the post expressing thanks for Hubble’s Law — several commenters, including John Farrell, who wrote the book and should know — mentioned that it was actually Lemaitre, not Hubble, who first derived the law. That offered me a chance to haughtily dismiss these folks as being unable to distinguish between a theoretical prediction (Lemaitre was one of the first to understand the equations governing relativistic cosmology) and an observational discovery. But it turns out that Lemaitre did actually look at the data! Shows you how much you should listen to me.

I received an email from Albert Bosma that cleared up the issue a bit. Indeed, it was not just a theoretical prediction (as I had wrongly presumed) — Lemaitre definitely used data to estimate Hubble’s constant in a 1927 paper. He obtained a value of about 625 km/sec/Mpc, not too different from Hubble’s ultimate value. Of course, Lemaitre’s paper was in French, so it might as well be in Martian. Arthur Eddington translated the paper for the Monthly Notices of the Royal Astronomical Society, in 1931, but left out the (one-sentence) discussion of the data!

Here is a slide from Albert containing the original paragraphs. Click for a larger version, and you can compare the French to the English. Thanks to Albert for sending it along.

[Image: lemaitre-hubble.jpg, a slide from Albert Bosma showing Lemaitre's original 1927 paragraphs alongside the English translation]

However — Lemaitre didn’t have very good data (and what he did have was partly from Hubble, I gather). And for whatever reason, he did not plot velocity vs. distance. Instead, he seems to have taken the average velocity (which was known since the work of Vesto Slipher to be nonzero) and divided by some estimated average distance! If Hubble’s Law — the linear relation between velocity and distance — is true, that will correctly get you Hubble’s constant, but it’s definitely not enough to establish Hubble’s Law. If you have derived the law theoretically from the principles of general relativity applied to an expanding universe, and are convinced you are correct, maybe all you care about is fixing the value of the one free parameter in your model. But I think it’s still correct to say that credit for Hubble’s Law goes to Hubble — although it’s equally correct to remind people of the crucial role that Lemaitre played in the development of modern cosmology.
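A minimal sketch of the point in code (made-up toy numbers, not Lemaitre's data): if velocity really is proportional to distance, the ratio of the average velocity to the average distance returns the slope, but you would get some finite ratio out of almost any velocity-distance relation, so the ratio alone cannot establish linearity.

```python
from statistics import mean

# Toy numbers, not Lemaitre's data: five galaxies obeying an exactly linear law.
H0 = 600.0                               # km/s/Mpc, roughly the order Lemaitre obtained
distances = [0.2, 0.5, 1.0, 1.5, 2.0]    # Mpc
velocities = [H0 * d for d in distances]

print(mean(velocities) / mean(distances))   # 600.0: recovers H0 *if* the law is linear

# A quadratic relation gives a perfectly finite "Hubble constant" too,
# which is why average velocity over average distance can't establish the linear law.
velocities_quad = [300.0 * d ** 2 for d in distances]
print(mean(velocities_quad) / mean(distances))
```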

Further update: Albert has now sent me more snippets from Lemaitre’s original paper (one, two, three), and the English translation. (All jpg images.)

Thanksgiving

Last year we gave thanks for the Lagrangian of the Standard Model of Particle Physics. This year, we give thanks for Hubble’s Law, the linear relationship between velocity and distance of faraway galaxies:

v = H0 d.

(We could be sticklers and call it the “effective velocity as inferred from the cosmological redshift,” but it’s a holiday and we’re in an expansive mood.) Here is the original plot, from Hubble 1929:

[Figure: Hubble's Law, the original velocity-distance plot from Hubble 1929]

And here is a modern version, from Riess, Press and Kirshner 1996 (figure from Ned Wright’s cosmology tutorial):

[Figure: Hubble's Law from recent supernovae, from Riess, Press and Kirshner 1996]

Note that Hubble’s distance scale goes out to about two million parsecs, whereas the modern one goes out to 500 million parsecs. Note also that Hubble mis-labeled the vertical axis, expressing velocity in units of kilometers, but he discovered the expansion of the universe so we can forgive him. And yes, the link above is to Hubble’s original paper in the Proceedings of the National Academy of Sciences. Only 146 citations! He’d never get tenure these days. (Over 1000 citations for Freedman et al., the final paper from the Hubble Key Project to measure H0.)
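For a feel of the numbers at those two scales, here is a minimal sketch of Hubble's Law in code (the value H0 = 70 km/s/Mpc is a round modern number I have assumed, not one quoted in the post):

```python
H0 = 70.0    # km/s/Mpc, an assumed round modern value

def recession_velocity(distance_mpc, hubble=H0):
    """Hubble's Law: v = H0 * d, with d in megaparsecs and v in km/s."""
    return hubble * distance_mpc

# The edge of Hubble's 1929 plot versus the edge of the modern supernova plot.
for d in (2.0, 500.0):
    print(f"d = {d:6.1f} Mpc  ->  v = {recession_velocity(d):8.0f} km/s")
```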

Hubble was helped along in his investigations by Milton Humason; together they wrote a longer follow-up paper. (Some habits don’t change.) Here is a sobering sentence from an article about Humason: “During the period from 1930 until his retirement in 1957, he measured the velocities of 620 galaxies.” These days projects measure millions of velocities. So let’s give thanks for better telescopes, CCD cameras, and software, while we’re at it.

Hubble’s Law is an empirical fact about photons we receive in our telescopes, but its implications are profound: the universe is expanding. This discovery marks a seismic shift in how we think about the cosmos, as profound as the Copernican displacement away from the center. It was so important that Einstein felt the need to visit Hubble on Mt. Wilson and check that he wasn’t making any mistakes.

[Photo: Einstein and Hubble]

Garrett Lisi’s Theory of Everything!

Garrett Lisi has a new paper, “An Exceptionally Simple Theory of Everything.” Many people seem to think that I should have an opinion about it, but I don’t. It’s received a good deal of publicity, in part because of Lisi’s personal story — if you can write a story with lines like “A. Garrett Lisi, a physicist who divides his time between surfing in Maui and teaching snowboarding in Lake Tahoe, has come up with what may be the Grand Unified Theory,” you do it.

The paper seems to involve a novel mix-up between internal symmetries and spacetime symmetries, including adding particles of different spin. This runs against the spirit, if not precisely the letter, of the Coleman-Mandula theorem. Okay, maybe there is a miraculous new way of using loopholes in that theorem to do fun things. But I would be much more likely to invest time trying to understand a paper that was devoted to how we can use such loopholes to mix up bosons and fermions in an unexpected way, and explained clearly why this was possible even though you might initially be skeptical, than in a paper that purports to be a theory of everything and mixes up bosons and fermions so casually.

So I’m sufficiently pessimistic about the prospects for this idea that I’m going to spend my time reading other papers. I could certainly be guessing wrong. But you can’t read every paper, and my own judgment is all I have to go on. Someone who understands this stuff much better than I do will dig into it and report back, and it will all shake out in the end. Science! It works, bitches.

For a discussion that manages to include some physics content, see Bee’s post and the comments at Backreaction.

High-Energy Spam Filter

Monica Dunford, at the US/LHC Blog, has a great metaphor: thinking of the “trigger” in a particle detector as a spam filter. The trigger, you will remember, is the combination of hardware and software that works to separate potentially interesting events from boring old background. If my rough numbers are anywhere near right (experts should chime in if not), the LHC will create about a billion collisions per second, and only about 100 of them will actually get stored on hard disk. Doesn’t sound like much, but we’re talking about a megabyte of data per event, so you’re writing a gigabyte to disk every ten seconds. Just not practical to keep every piece of data, so the trigger makes some snap judgments about what events are fun (like the simulated supersymmetric ATLAS event below) and which are just the usual workings of the Standard Model.

[Image: atlantis_susy.jpg, a simulated supersymmetric event in the ATLAS detector]
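To make the rough numbers above concrete, here is a back-of-the-envelope sketch using the post's figures (roughly a billion collisions per second, about 100 events per second kept, about a megabyte per stored event):

```python
collision_rate = 1e9     # collisions per second at the LHC (rough figure from the post)
kept_rate = 100          # events per second that survive the trigger
event_size_mb = 1.0      # megabytes written per stored event

print(f"fraction of events kept: {kept_rate / collision_rate:.0e}")   # ~1e-07
rate_mb_per_s = kept_rate * event_size_mb
print(f"written to disk: {rate_mb_per_s:.0f} MB/s, "
      f"i.e. about 1 GB every {1000 / rate_mb_per_s:.0f} seconds")
```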

The spam-filter analogy is pretty apt: you’re being deluged with data, and most of it is irrelevant, and you can’t afford to look at all of it individually. So you have to come up with some automated system that decides what to keep and what to toss out. And of course you have exactly the same concern that you would have with any spam filter: the worry that you’re tossing out interesting stuff! You don’t want any job offers to get lost amidst the ads for C!al1$.

A great deal of work, therefore, goes into deciding what the trigger should keep and what it should toss out. Perhaps that helps explain the graph in a previous post of Monica’s:

[Image: atlas_events.jpg, ATLAS meetings as a function of time]

That would be “meetings as a function of time,” in case it wasn’t obvious. Over 4500 scheduled meetings in 2007, and that’s just for the ATLAS collaboration. The other general-purpose LHC experiment, CMS, has a similar graph, but they only had about 1000 meetings. Whether it is a tribute to their greater efficiency or a mismatch in accounting procedures remains an open question.

arxiv Find: Universal Quantum Mechanics

A new paper by Steve Giddings, “Universal Quantum Mechanics,” arxiv:0711.0757. Here’s the abstract:

If gravity respects quantum mechanics, it is important to identify the essential postulates of a quantum framework capable of incorporating gravitational phenomena. Such a construct likely requires elimination or modification of some of the “standard” postulates of quantum mechanics, in particular those involving time and measurement. This paper proposes a framework that appears sufficiently general to incorporate some expected features of quantum gravity. These include the statement that space and time may only emerge approximately and relationally. One perspective on such a framework is as a sort of generalization of the S-matrix approach to dynamics. Within this framework, more dynamical structure is required to fully specify a theory; this structure is expected to lack some of the elements of local quantum field theory. Some aspects of this structure are discussed, both in the context of scattering of perturbations about a flat background, and in the context of cosmology.

Part of the problem in reconciling gravity with quantum mechanics is “technical” — GR is not renormalizable, by the lights of ordinary quantum field theory. But part is “conceptual” — ordinary QM takes a spacetime background as given, not as part of the wavefunction. The role of time, in particular, is a bit hazy, especially because the Wheeler-DeWitt equation (the quantum-gravity version of the Schrödinger equation) doesn’t contain any explicit time parameter. Most likely, our notion of “time” makes sense only in a semi-classical context, not as part of the fundamental dynamics. Similarly, our notions of “locality” are going to have to be broadened if spacetime itself is part of the quantum picture. But the truth is that we don’t really know for sure. So it’s worth digging into the underlying principles of quantum mechanics to understand which of them rely crucially on our standard understanding of spacetime, and which are likely to survive in any sensible theory of quantum gravity. Giddings’s paper follows in the footsteps of previous work such as Jim Hartle’s Spacetime Quantum Mechanics and the Quantum Mechanics of Spacetime. (The Santa Barbara air must be conducive to thinking such deep thoughts.)
