Science

Dark Energy Has Long Been Dark-Energy-Like

Thursday (“today,” for most of you) at 1:00 p.m. Eastern, there will be a NASA Media Teleconference to discuss some new observations relevant to the behavior of dark energy at high redshifts (z > 1). Participants will be actual astronomers Adam Riess and Lou Strolger, as well as theorist poseurs Mario Livio and myself. If the press release is to be believed, the whole thing will be available in live audio stream, and some pictures and descriptions will be made public once the telecon starts.

I’m not supposed to give away what’s going on, and might not have a chance to do an immediate post, but at some point I’ll update this post to explain it. If you read the press release, it says the point is “to announce the discovery that dark energy has been an ever-present constituent of space for most of the universe’s history.” Which means that the dark energy was acting dark-energy-like (a negative equation of state, or very slow evolution of the energy density) even back when the universe was matter-dominated.

Update: The short version is that Adam Riess and collaborators have used Hubble Space Telescope observations to discover 21 new supernovae, 13 of which are spectroscopically confirmed as Type Ia (the standardizable-candle kind) with redshifts z > 1. Using these, they place new constraints on the evolution of the dark energy density, in particular on the behavior of dark energy during the epoch when the universe was matter-dominated. The result is that the dark energy component seems to have been negative-pressure even back then; more specifically, w(z > 1) = -0.8 (+0.6, -1.0), and w(z > 1) < 0 at 98% confidence.

[Image: supernovae]

Longer version: Dark energy, which is apparently about 70% of the energy of the universe (with about 25% dark matter and 5% ordinary matter), is characterized by two features — it’s distributed smoothly throughout space, and maintains nearly-constant density as the universe expands. This latter quality, persistence of the energy density, is sometimes translated as “negative pressure,” since the law of energy conservation relates the rate of change of the energy density to (ρ + p), where ρ is the energy density and p is the pressure. Thus, if p = -ρ, the density is strictly constant; that’s vacuum energy, or the cosmological constant. But it could evolve just a little bit, and we wouldn’t have noticed yet. So we invent an “equation-of-state parameter” w = p/ρ. Then w = -1 implies that the dark energy density is constant; w > -1 implies that the density is decreasing, while w < -1 means that it’s increasing.
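
For concreteness, here is a minimal Python sketch (my addition, not from the original post) of the standard scaling that follows from that conservation law: a component with constant w has density proportional to a^-3(1+w), where a is the scale factor.

```python
# Minimal sketch (my addition, not from the post): how an energy density with
# constant equation-of-state parameter w scales with the cosmic scale factor a.
# Energy conservation gives d(rho)/dt = -3 H (rho + p); with p = w * rho this
# integrates to rho(a) = rho_0 * a**(-3 * (1 + w)).

def density(a, w, rho0=1.0):
    """Energy density at scale factor a (a = 1 today) for constant w."""
    return rho0 * a ** (-3.0 * (1.0 + w))

for w in (-1.0, -0.8, 0.0):   # cosmological constant, dark-energy-like, matter
    print(f"w = {w:+.1f}:  rho(a=0.5) / rho(today) = {density(0.5, w):.2f}")
```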

In the recent universe, supernova observations convince us that w = -1 ± 0.1; so the density is close to constant. But there are puzzles in the dark-energy game: why is the vacuum energy so small, and why are the densities of matter and dark energy comparable, even though matter evolves noticeably while dark energy is close to constant? So it’s certainly conceivable that the behavior of the dark energy was different in the past — in particular, that the density of what we now know as dark energy used to behave similarly to that of matter, fading away as the universe expanded, and only recently switched over to an appreciably negative value of w.

These new observations speak against that possibility. They include measurements of supernovae at high redshifts, back when the density of matter was higher than that of dark energy. They then constrain the value of w as it was back then, at redshifts greater than one (when the universe was less than half its current size). And the answer is … the dark energy was still dark-energy-like! That is, it had a negative pressure, and its energy density wasn’t evolving very much. It was in the process of catching up to the matter density, not “tracking” it in some sneaky way.

Of course, to get such a result requires some assumptions. Riess et al. consider three different “priors” — assumed behaviors for the dark energy. The “weak” prior makes no assumptions at all about what the dark energy was doing at redshifts greater than 1.8, and draws correspondingly weak conclusions. The “strong” prior uses data from the microwave background, along with the assumption (which is really not that strong) that the dark energy wasn’t actually dominating at those very high redshifts. That’s the prior under which the above results were obtained. The “strongest” prior imagines that we can extrapolate the behavior of the equation-of-state parameter linearly back in time — that’s a very strong prior indeed, and probably not realistic.

So everything is consistent with a perfectly constant vacuum energy. No big surprise, right? But everything about dark energy is a surprise, and we need to constantly be questioning all of our assumptions. The coincidence scandal is a real puzzle, and the idea that dark energy used to behave differently and has changed its nature recently is a perfectly reasonable one. We don’t yet know what the dark energy is or why it has the density it does, but every new piece of information nudges us a bit further down the road to really understanding it.

Update: The Riess et al. paper is now available as astro-ph/0611572. The link to the data is broken, but I think it means to go here.


Toward a Unified Epistemology of the Natural Sciences

[Image: Donald Rumsfeld]

Dr. Free-Ride reminds us of the celebrated free-verse philosophizing of Donald Rumsfeld, from a 2002 Department of Defense news briefing.

As we know,
There are known knowns.
There are things we know we know.

We also know
There are known unknowns.
That is to say
We know there are some things
We do not know.

But there are also unknown unknowns,
The ones we don’t know
We don’t know.

We tease our erstwhile Defense Secretary, but beneath the whimsical parallelisms, the quote actually makes perfect sense. In fact, I’ll be using it in my talk later today on the nature of science. One of the distinguishing features of science, I will argue, is that we pretty much know which knowns are known. That is to say, it’s obviously true that there are plenty of questions to which science does not know the answer, as well as some to which it does. But the nice thing is that we have a pretty good idea of where the boundary is. Where people often go wrong — and I’ll use examples of astrology, Intelligent Design, perpetual-motion machines, and What the Bleep Do We Know? — is in attempting to squeeze remarkable and wholly implausible wonders into the tightly-delimited regimes where science doesn’t yet have it all figured out, or hasn’t done some explicit experiment. (For example, it may be true that we haven’t taken apart and understood your specific perpetual-motion device, but it pretty obviously violates not only conservation of energy, but also Maxwell’s equations and Newton’s laws of motion. We don’t need to spend time worrying about your particular gizmo; we already know it can’t work.)

Rumsfeld’s comprehensive classification system did, admittedly, leave out the crucial category of unknown knowns — the things you think you know, that aren’t true. Those had something to do with his ultimate downfall.


Out-Einsteining Einstein

Among my recent peregrinations was a jaunt up to Santa Barbara, where I gave two talks in a row (although in different buildings, and to somewhat different audiences). Both were about attempts to weasel out of the need for dark stuff in the universe by trying to modify gravity.

The first talk, a high-energy theory seminar, was on trying to do away with dark energy by modifying gravity. I used an antiquated technology called “overhead transparencies” to give the talk itself, so there is no electronic record. If I get a chance sometime soon, I’ll post a summary of the different models I talked about.

The subsequent talk was over at the Kavli Institute for Theoretical Physics. There was a program on gravitational lensing going on, and they had asked Jim Hartle to give an overview of attempts to replace dark matter with modified gravity. Jim decided that he would be happier if I gave the talk, so it was all arranged to happen on a day I’d be visiting SB anyway. (Don’t feel bad for me; it was fun to give the talks, and they took me to a nice dinner afterwards.) I’m not really an expert on theories of gravity that do away with dark matter, but I’ve dabbled here and there, so I was able to put together a respectable colloquium-level talk.

[Image: MOND slide]

And here it is. You can see the slides from the talk, as well as hear what I’m saying. I started somewhat lethargically, as it’s hard to switch gears quickly from one talk to another, but we built up some momentum by the end. I began quite broadly with the idea of different “gravitational degrees of freedom,” and worked my way up to Bekenstein’s TeVeS model (a relativistic version of Milgrom’s MOND), explaining the empirical difficulties with clusters of galaxies, the cosmic microwave background, and most recently the Bullet Cluster. We can’t say that the idea is ruled out, but the evidence that dark matter of some sort exists is overwhelming, which removes much of the motivation for modifying gravity.

The KITP is firmly in the vanguard of putting talks online, both audio/video and reproductions of the slides. By now they have quite the extensive collection of past talks, from technical seminars to informal discussions to public lectures, including several recent categories of interest.

On Friday I’ll be at Villanova, my alma mater, giving a general talk to undergraduates on what science is all about. I’m not sure if it will be recorded, but if the yet-to-be-written slides turn out okay, I’ll put them online.


Humankind’s Basic Picture of the Universe

Scott Aaronson has thrown down a gauntlet by claiming that theoretical computer science, “by any objective standard, has contributed at least as much over the last 30 years as (say) particle physics or cosmology to humankind’s basic picture of the universe.” Obviously the truth-value of such a statement will depend on what counts as our “basic picture of the universe,” but Scott was good enough to provide an explanation of the most important things that TCS has taught us, which is quite fascinating. (More here.) Apparently, if super-intelligent aliens landed and were able to pack boxes in our car trunks very efficiently, they could also prove the Riemann hypothesis. Although the car-packing might be more useful.

There are important issues of empiricism vs. idealism here. The kinds of questions addressed by “theoretical computer science” are in fact logical questions, addressable on the basis of pure mathematics. They are true of any conceivable world, not just the actual world in which we happen to live. What physics teaches us about, on the other hand, are empirical features of the contingent world in which we find ourselves — features that didn’t have to be true a priori. Spacetime didn’t have to be curved, after all; for that matter, the Earth didn’t have to go around the Sun (to the extent that it does). Those are just things that appear to be true of our universe, at least locally.

But let’s grant the hypothesis that our “picture of the universe” consists both of logical truths and empirical ones. Can we defend the honor of particle physics and cosmology here? What have we really contributed over the last 30 years to our basic picture of the universe? It’s not fair to include great insights that are part of some specific theory, but not yet established as true things about reality — so I wouldn’t include, for example, anomalies canceling in string theory, or the Strominger-Vafa explanation for microstates in black holes, or inflationary cosmology. And I wouldn’t include experimental findings that are important but not quite foundation-shaking — so neutrino masses don’t qualify.

With these very tough standards, I think there are two achievements that I would put up against anything in terms of contributions to our basic picture of the universe:

  1. An inventory of what the universe is made of. That’s pretty important, no? In units of energy density, it’s about 5% ordinary matter, 25% dark matter, 70% dark energy. We didn’t know that 30 years ago, and now we do. We can’t claim to fully understand it, but the evidence in favor of the basic picture is extremely strong. I’m including within this item things like “it’s been 14 billion years since the Big Bang,” which is pretty important in its own right. I thought of a separate item referring to the need for primordial scale-free perturbations and the growth of structure via gravitational instability — I think that one is arguably at the proper level of importance, but it’s a close call.
  2. The holographic principle. I’m using this as a catch-all for a number of insights, some of which are in the context of string theory, but they are robust enough to be pretty much guaranteed to be part of the final picture whether it involves string theory or not. The germ of the holographic principle is the idea that the number of degrees of freedom inside some region is not proportional to the volume of the region, but rather to the area of its boundary — an insight originally suggested by the behavior of Hawking radiation from black holes (see the rough numerical sketch after this list). But it goes way beyond that; for example, there can be dualities that establish the equivalence of two different theories defined in different numbers of dimensions (à la AdS/CFT). This establishes once and for all that spacetime is emergent — the underlying notion of a spacetime manifold is not a fundamental feature of reality, but just a good approximation in a certain part of parameter space. People have speculated about this for years, but now it’s actually been established in certain well-defined circumstances.
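
As a rough numerical illustration of “degrees of freedom proportional to area” (a sketch I am adding here, not part of the original argument), the Bekenstein-Hawking entropy of a black hole is one quarter of its horizon area measured in Planck units:

```python
# Rough illustration (my addition): the Bekenstein-Hawking entropy of a black
# hole is proportional to horizon AREA, S = k_B * A * c**3 / (4 * G * hbar),
# i.e. one quarter of the area in units of the Planck length squared.
import math

G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34   # SI units
M_SUN = 1.989e30                             # kg

def entropy_in_kB(mass_kg):
    r_s = 2 * G * mass_kg / c**2             # Schwarzschild radius
    area = 4 * math.pi * r_s**2              # horizon area
    planck_area = G * hbar / c**3            # (Planck length)**2
    return area / (4 * planck_area)

print(f"S(one solar mass) ~ {entropy_in_kB(M_SUN):.1e} k_B")   # ~1e77
```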

A short list, but we have every reason to be proud of it. These are insights, I would wager, that will still be part of our basic picture of reality two hundred years from now. Any other suggestions?


After Reading a Child’s Guide to Modern Physics

Abbas at 3 Quarks reminds us that next year is W.H. Auden’s centenary (and that Britain is curiously unenthusiastic about celebrating the event). The BBC allows you to hear Auden read this poem at a 1965 festival; his father was a physician.

If all a top physicist knows
About the Truth be true,
Then, for all the so-and-so’s,
Futility and grime,
Our common world contains,
We have a better time
Than the Greater Nebulae do,
Or the atoms in our brains.

Marriage is rarely bliss
But, surely it would be worse
As particles to pelt
At thousands of miles per sec
About a universe
Wherein a lover’s kiss
Would either not be felt
Or break the loved one’s neck.

Though the face at which I stare
While shaving it be cruel
For, year after year, it repels
An ageing suitor, it has,
Thank God, sufficient mass
To be altogether there,
Not an indeterminate gruel
Which is partly somewhere else.

Our eyes prefer to suppose
That a habitable place
Has a geocentric view,
That architects enclose
A quiet Euclidian space:
Exploded myths – but who
Could feel at home astraddle
An ever expanding saddle?

This passion of our kind
For the process of finding out
Is a fact one can hardly doubt,
But I would rejoice in it more
If I knew more clearly what
We wanted the knowledge for,
Felt certain still that the mind
Is free to know or not.

It has chosen once, it seems,
And whether our concern
For magnitude’s extremes
Really become a creature
Who comes in a median size,
Or politicizing Nature
Be altogether wise,
Is something we shall learn.

Ol’ Wystan is right; we do have a better time than most of the universe. It would be no fun to constantly worry that “a lover’s kiss / Would either not be felt / Or break the loved one’s neck.” And in a sense, it’s surprising (one might almost say unnatural) that our local conditions allow for the build-up of the delicate complexity necessary to nurture passion and poetry among we creatures of median size.

In most physical systems, we can get a pretty good idea of the relevant scales of length and time just by using dimensional analysis. If you have some fundamental timescale governing the behavior of a system, you naturally expect most processes characteristic of that system to happen on approximately that timescale, give or take an order of magnitude here or there. But our universe doesn’t work that way at all — there are dramatic balancing acts that stretch the relevant timescales far past their natural values. In the absence of any fine-tunings, the relevant timescale for the universe would be the Planck time, 10^-44 seconds, whereas the actual age of the universe is more like 10^18 seconds. This is actually two problems in one: why doesn’t the vacuum energy rapidly dominate over the energy density in matter and radiation — the cosmological constant problem — and, imagining that we’ve solved that one, why doesn’t spatial curvature dominate over all the energy density — the flatness problem. It would be much more “natural,” in other words, to live in either a cold and empty universe, or one that recollapsed in a jiffy.

But given that the universe does linger around, it’s still a surprise that the matter within it exhibits interesting dynamics on timescales much longer than the Planck time. A human lifespan, for example, is about 10^9 seconds. The human/Planck hierarchy actually owes its existence to a multi-layered series of hierarchies. First, the characteristic energy scale of particle physics is set by electroweak symmetry breaking to be about 10^11 electron volts, far below the Planck energy at 10^27 electron volts. (That’s known to particle physicists as “the” hierarchy problem.) And then the mass of the electron (m_e ~ 5 × 10^5 electron volts) is smaller than it really should be, as it is suppressed with respect to the electroweak scale by a Yukawa coupling of about 10^-6. But then the weakness of the electromagnetic interaction, as manifested in the small value of the fine-structure constant α = 1/137, implies that the Rydberg (which sets the scales for atomic physics) is even lower than that:

Ry ~ α^2 m_e ~ 10 electron volts.

This energy corresponds to timescales (by inserting appropriate factors of Planck’s constant and the speed of light) of roughly 10^-17 seconds; much longer than the Planck time, but still much shorter than a human lifetime. The cascade of hierarchies continues; molecular binding energies are typically much smaller than a Rydberg, the timescales characteristic of mesoscopic collections of slowly-moving molecules are correspondingly longer still, etc.
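
A quick back-of-the-envelope check of those last two steps (my numbers, using standard constants rather than anything from the original post):

```python
# Back-of-the-envelope check (my addition): the Rydberg scale and its timescale.
alpha = 1.0 / 137.0        # fine-structure constant
m_e_eV = 5.11e5            # electron rest energy in eV
hbar_eV_s = 6.58e-16       # hbar in eV * seconds

rydberg_eV = 0.5 * alpha**2 * m_e_eV     # the 1/2 recovers the familiar 13.6 eV
t_atomic = hbar_eV_s / rydberg_eV        # characteristic atomic timescale

print(f"Rydberg   ~ {rydberg_eV:.1f} eV")     # ~13.6 eV, i.e. "about 10 eV"
print(f"timescale ~ {t_atomic:.1e} s")        # ~5e-17 s, vastly longer than 1e-44 s
```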

Because we don’t yet fully understand the origin of these fantastic hierarchies, we can conclude that God exists. Okay, no we can’t. Really we can conclude that we live in a multiverse in which all of the constants of nature take on different values in different places. Okay, we can’t actually conclude that either. What we can do is keep thinking about it, not jumping to too many conclusions while we try to fill one of those pesky “gaps” in our understanding that people like to insist must be evidence for their personal favorite story of reality.

But “politicizing Nature,” now that’s just bad. Not altogether wise at all.


Reconstructing Inflation

All sorts of responsibilities have been sadly neglected, as I’ve been zooming around the continent — stops in Illinois, Arizona, New York, Ontario, New York again, and next Tennessee, all within a matter of two weeks. How is one to blog under such trying conditions? (Airplanes and laptops are involved, if you must know.)

But the good news is that I’ve been listening to some very interesting physics talks, the kind that actually put ideas into your head and set off long and convoluted bouts of thinking. Possibly conducive to blogging, but only if one pauses for a moment to stop thinking and actually write something. Which is probably a good idea in its own right.

One of the talks was a tag-team performance by Dick Bond and Lev Kofman, both cosmologists at the Canadian Institute for Theoretical Astrophysics at the University of Toronto. The talk was part of a brief workshop at the Perimeter Institute on “Strings, Inflation, and Cosmology.” It was just the right kind of meeting, with only about twenty people, fairly narrowly focused on an area of common interest (although the talks themselves spanned quite a range, from a typically imaginative proposal by Gia Dvali about quantum hair on black holes to a detailed discussion of density fluctuations in inflation by Alan Guth).

Dick and Lev were interested in what we should expect inflationary models to predict, and what data might ultimately teach us about the inflationary era. The primary observables connected with inflation are primordial perturbations — the tiny deviations from a perfectly smooth universe that were imprinted at early times. These deviations come in two forms: “scalar” perturbations, which are fluctuations in the energy density from place to place, and which eventually grow via gravitational instability into galaxies and clusters; and the “tensor” perturbations in the curvature of spacetime itself, which are just long-wavelength gravitational waves. Both arise from the zero-point vacuum fluctuations of quantum fields in the very early universe — for scalar fluctuations, the relevant field is the “inflaton” φ that actually drives inflation, while for tensor fluctuations it’s the spacetime metric itself.

The same basic mechanism works in both cases — quantum fluctuations (due ultimately to Heisenberg’s uncertainty principle) at very small wavelengths are amplified by the process of inflation to macroscopic scales, where they are temporarily frozen-in until the expansion of the universe relaxes sufficiently to allow them to dynamically evolve. But there is a crucial distinction when it comes to the amount of such fluctuations that we would ultimately see. In the case of gravity waves, the field we hope to observe is precisely the one that was doing the fluctuating early on; the amplitude of such fluctuations is related directly to the rate of inflation when they were created, which is in turn related to the energy density, which is given simply by the potential energy V(φ) of the scalar field. But scalar perturbations arise from quantum fluctuations in φ, and we aren’t going to be observing φ directly; instead, we observe perturbations in the energy density ρ. A fluctuation in φ leads to a different value of the potential V(φ), and consequently the energy density; the perturbation in ρ therefore depends on the slope of the potential, V’ = dV/dφ, as well as the potential itself. Once one cranks through the calculation, we find (somewhat counterintuitively) that a smaller slope yields a larger density perturbation. Long story short, the amplitude of tensor perturbations looks like

T^2 ~ V,

while that of the scalar perturbations looks like

S^2 ~ V^3 / (V’)^2.

Of course, such fluctuations are generated at every scale; for any particular wavelength, you are supposed to evaluate these quantities at the moment when the mode is stretched to be larger than the Hubble radius during inflation.
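
To make the dependence on V and V’ explicit, here is a schematic sketch (my construction, not from the talk); in slow-roll inflation the standard results have S^2 proportional to V^3/(V’)^2 and T^2 proportional to V, up to numerical factors and powers of the Planck mass, so a flatter potential boosts the scalars while the tensors only track the overall energy scale.

```python
# Schematic sketch (my addition, not from the talk): scalar and tensor amplitudes
# from the inflaton potential, with the reduced Planck mass set to 1 and all
# order-unity factors dropped:
#   S^2 ~ V**3 / V'**2   (density perturbations)
#   T^2 ~ V              (gravitational-wave perturbations)

def amplitudes(V, Vprime):
    return V**3 / Vprime**2, V   # (S^2, T^2)

# Same potential height, steeper vs. shallower slope: the shallower slope gives
# a LARGER scalar amplitude, while the tensors only care about V itself.
for Vp in (1e-7, 1e-8):
    S2, T2 = amplitudes(V=1e-10, Vprime=Vp)
    print(f"V' = {Vp:.0e}:  S^2 ~ {S2:.1e},  T^2 ~ {T2:.1e}")
```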

To date, we are quite sure that we have detected the influence of scalar perturbations; they are responsible for most, if not all, of the temperature fluctuations we observe in the Cosmic Microwave Background. We’re still looking for the gravity-wave/tensor perturbations. It may someday be possible to detect them directly as gravitational waves, with an ultra-sensitive dedicated satellite; at the moment, though, that’s still pie-in-the-sky (as it were). More optimistically, the stretching caused by the gravity waves can leave a distinctive imprint on the polarization of the CMB — in particular, in the type of polarization known as the B-modes. These haven’t been detected yet, but we’re trying.

Problem is, even if the tensor modes are there, they are probably quite tiny. Whether or not they are substantial enough to produce observable B-mode polarization in the CMB is a huge question, and one that theorists are presently unable to answer with any confidence. (See papers by Lyth and Knox and Song on some of the difficulties.) It’s important to get our theoretical expectations straight, if we’re going to encourage observers to spend millions of dollars and years of their time building satellites to go look for the tensor modes. (Which we are.)

So Dick and Lev have been trying to figure out what we should expect in a fairly model-independent way, given our meager knowledge of what was going on during inflation. They’ve come up with a class of models and possible behaviors for the scalar and tensor modes as a function of wavelength, and asked which of them could fit the data as we presently understand it, and then what they would predict for future experiments. And they hit upon something interesting. There is a well-known puzzle in the anisotropies of the CMB: on very large angular scales (small l, in the graph below), the observed anisotropy is much smaller than we expect. The red line is the prediction of the standard cosmology, and the data come from the WMAP satellite. (The gray error bars arise from the fact that there are only a finite number of observations of each mode at large scales, while the predictions are purely statistical — a phenomenon known as “cosmic variance.”)

[Image: WMAP CMB power spectrum]
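
For the curious, the size of that irreducible scatter is easy to estimate (my aside, not part of the original post): each multipole l offers only 2l+1 independent modes on a single sky, so the fractional cosmic variance on C_l is roughly sqrt(2/(2l+1)), which is largest precisely at the low-l end of the plot.

```python
# Quick estimate (my addition): cosmic variance in the CMB power spectrum.
# Each multipole l has only (2l + 1) independent modes on a single sky, so the
# fractional uncertainty on C_l is about sqrt(2 / (2l + 1)).
import math

def cosmic_variance(l):
    return math.sqrt(2.0 / (2 * l + 1))

for l in (2, 10, 100, 1000):
    print(f"l = {l:4d}:  Delta C_l / C_l ~ {cosmic_variance(l):.3f}")
# l = 2 gives ~0.63; l = 1000 gives ~0.03 -- hence the big gray bars at low l.
```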

It’s hard to tell how seriously we should take that little glitch, especially since it is at one end of what we can observe. But the computers don’t care, so when Dick and Lev fit models to the data, the models like to do their best to fit that point. If you have a perfectly flat primordial spectrum, or even one that is tilted but still a straight line, there’s not much you can do to fit it. But if you allow some more interesting behavior for the inflaton field, you have a chance.

Let’s ask ourselves, what would it take for the inflaton to be generating smaller perturbations at earlier times? (Larger wavelengths are produced earlier, as they are the first to get stretched outside the Hubble radius during inflation.) We expect the value of the inflaton potential V to monotonically decrease during inflation, as the scalar field rolls down. So, from the second equation above, the only way to get a smaller scalar amplitude S at early times is to have a substantially larger value of the slope V’. So the inflaton potential might look something like this.

[Image: inflaton potential]

Maybe it’s a little contrived, but it seems to fit the data, and that’s always nice. And the good news is that a large slope at early times implies that the actual value of the potential V was also large at early times (because the field was higher up a steep slope). Which means, from the equation for T above, that we expect (relatively) large tensor modes at large scales! Which in turn is exactly where we have some hope to look for them.

This is all a hand-waving reconstruction of the talk that Dick and Lev gave, which involved a lot more equations and Monte Carlo simulations. The real lesson, to me, is that we are still a long way from having a solid handle on what to expect in terms of the inflationary perturbations, and shouldn’t fool ourselves into thinking that our momentary theoretical prejudices are accurate reflections of the true space of possibilities. If it’s true that we have a decent shot at detecting the tensor modes at large scales, it would represent an incredible triumph of our ability to extend our thinking about the universe back to its earliest moments.


Physics Antiques Roadshow

Liveblogging here from the Fall Meeting of the Illinois and Iowa Sections of the American Association of Physics Teachers. The attendees are mostly high-school physics teachers, some from local colleges. Later tonight I’ll be giving a talk, but I can’t resist telling you about the delightful session we just had — WITHIT, or “What in the Heck is This?”

What is this? High-school science teachers live in a very different world than professional researchers. Typically a “department” is only one person, and when it comes to resources one has to be a little creative. So it’s quite common (I’ve just learned), when one first is hired, for the new teacher to be presented with a storeroom full of stuff that their predecessors had acquired one way or another. And this stuff doesn’t always come nicely packaged with detailed instructions and lesson plans.

Sometimes, indeed, it’s hard to figure out what the stuff is! So here at the FM of the IIS of the AAPT, people have been bringing in pieces of apparatus that have been lying around for decades and have become unmoored from their original purposes. They then show the wayward equipment to their assembled colleagues, and ask for help figuring out what the heck this thing is supposed to be. So far we’ve had experiments to measure kinetic energy, X-ray tubes, and an inverse-square-law apparatus.

I see great TV-show possibilities here. (After only one month of living in LA!) Could you imagine the tension as a bedraggled but hopeful physics teacher is told that their gizmo is an original Leonardo?


Is That a Particle Accelerator in Your Pocket, Or Are You Just Happy to See Me?

The Large Hadron Collider accelerates protons to an energy of 7000 GeV, which is pretty impressive. (A GeV is a billion electron volts; the energy in a single proton at rest, using E = mc^2, is about 1 GeV.) But it requires a 27-kilometer ring, and the cost is measured in billions of dollars. The next planned accelerator is the International Linear Collider (ILC), which will be similarly grand in size and cost. People have worried, not without reason, that the end is in sight for experimental particle physics at the energy frontier, as it becomes prohibitively expensive to build new machines.

That’s why it’s great news that scientists from Lawrence Berkeley Labs and Oxford have managed to accelerate electrons to 1 GeV (via Entropy Bound). What’s that you say? 1 GeV seems tiny compared to 7000 GeV? Yes, but these electrons were accelerated over a distance of just 3.3 centimeters, using laser wakefield technology. You can do the math: if you could simply scale things up (in reality it’s not so easy, of course), you could reach 10,000 GeV in a distance of a few hundred meters.
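
If you want the arithmetic spelled out, here is the naive linear extrapolation (my addition, just redoing the math the post invites):

```python
# Naive linear extrapolation (my addition): 1 GeV gained over 3.3 cm, assuming
# the same accelerating gradient could simply be sustained over a longer path.
gradient_GeV_per_m = 1.0 / 0.033            # roughly 30 GeV per meter
print(f"length for 10,000 GeV ~ {10000.0 / gradient_GeV_per_m:.0f} m")   # ~330 m
```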

The LHC and the ILC won’t be the end of particle physics. Even the Planck scale, 10^18 GeV, isn’t all that big. In terms of mass-energy, it’s only one millionth of a gram. The kinetic energy of a fast car is of order 10^16 GeV, close to the traditional grand-unification scale. (Why? Kinetic energy is mv^2/2, but let’s ignore factors of order unity. The speed of light is c = 200,000 miles/sec = 7 × 10^8 miles/hour. So a car going 70 miles/hour is moving at 10^-7 the speed of light. The mass of a car is about one metric ton, which is 1000 kg, which is 10^6 grams, and one gram is 10^24 GeV. So a car is 10^30 GeV. [Or you could just happen to know how many nucleons/car.] So the kinetic energy is that mass times the velocity squared, which is 10^30 × (10^-7)^2 GeV = 10^16 GeV.)
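
And here is the same estimate done numerically (my sketch; it simply redoes the arithmetic in the parenthetical above):

```python
# Redo the back-of-the-envelope estimate above numerically (my addition).
GeV_per_gram = 1e24      # rest energy of one gram, roughly
mass_grams = 1e6         # a one-metric-ton car
v_over_c = 1e-7          # ~70 miles per hour as a fraction of the speed of light

kinetic_GeV = 0.5 * (mass_grams * GeV_per_gram) * v_over_c**2
print(f"car kinetic energy ~ {kinetic_GeV:.0e} GeV")   # ~5e+15, i.e. of order 1e16
```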

The trick, of course, is getting all this energy into a single particle, but that’s a technology problem. We’ll get there.


Nobel Prize to Mather and Smoot for CMB Anisotropies

[Image: COBE]

The Nobel Prize in Physics has been awarded to John Mather and George Smoot, for their discovery using the COBE satellite of temperature anisotropies in the cosmic microwave background. (Update: To be more accurate, Mather won the prize for measuring the blackbody spectrum of photons, announced in 1990; Smoot won it for measuring the anisotropies, announced in 1992. Thanks to Ned Wright for pointing out my sloppiness.) These tiny fluctuations in temperature provide a high-precision snapshot of what the universe was like 380,000 years after the Big Bang. They originate in density fluctuations that grow into large-scale structure today, and subsequent careful examination of their properties has revealed a tremendous amount about our universe. It’s a very well-deserved Nobel, which was top on my list of potential cosmology prizes back in May:

The 1992 observation of CMB anisotropies by NASA’s COBE satellite was the first step in a revolution in how cosmology is done, one that has come to dominate a lot of current research. Subsequent measurements by other experiments have obviously led to great improvements in precision, and most importantly extended our understanding of the anisotropies to smaller length scales, but I think the initial finding deserves the Nobel. So to whom should the prize be awarded? On purely scientific grounds, it seems to me that there was an obvious three-way prize that should have been given a while ago, to David Wilkinson, John Mather, and George Smoot. Wilkinson was the grandfather of the project, and was the leading CMB experimentalist for decades. Mather was the Project Manager for the satellite itself (as well as the Principal Investigator for the FIRAS instrument that measured the blackbody spectrum), while Smoot was the PI for the DMR instrument that actually measured the anisotropies. Unfortunately, Wilkinson passed away in 2002. Another complicating factor is that there were various intra-collaboration squabbles, leading to books by both Smoot and Mather that weren’t always completely complimentary toward each other. Still, background noise like that shouldn’t get in the way of great science, and these guys definitely deserve the Nobel.

When the results were announced in 1992, I was a fourth-year graduate student at the Center for Astrophysics at Harvard. Somehow, despite great attempts at secrecy, Bill Press had received a leak about the upcoming announcement, and had told some of us at CfA. The next day I went to the Physics colloquium and was the first to spread the news to some of the famous physicists chatting in the tea room, like Sidney Coleman and Roman Jackiw. My first feeling of being a cosmology insider.

It was a funny discovery, in the sense that most everyone expected that it would come (COBE was designed to do exactly this), and nevertheless it ended up revolutionizing the field. The simplest measure of this was the arrival of an entire generation of smart young theoretical cosmologists who got their Ph.D.’s in the 1990s working on the implications of the CMB anisotropies. Whenever we learn a lot about the universe, of course, we also start ruling out interesting ideas; these days, nobody proposing a new cosmological scenario will be taken seriously unless their model is compatible with the microwave background.

Congratulations to John and George for ushering in the Golden Age of Cosmology!
