Love and Blogging

Peter Pan should have been a cosmologist. I mean, if you want to stay forever young, nothing puts things in perspective like contemplating our place in a fourteen-billion-year-old universe. You tend to take a long view of things.

But eventually, one must grow up and start acting like an adult. Did you realize, for example, that many grownups participate in an institution known as “marriage,” which apparently involves tying your entire future history (and let’s be clear about this — I fully expect to be immortal) to that of another person? Someone, obviously, who you better like an awful lot. And who better be able to put up with you. Trust me, you really don’t want to interact with me before I’ve had my coffee in the morning.

How in the world is one expected to find such a person, in a world full of interesting but flawed characters? Well, there’s always the blogosphere. Two kindred spirits, tapping away at their matching MacBook Pros, could find each other across thousands of miles in a way that was heretofore impossible.

All of which, in a fumbling and hopefully-charming way, is to say that it’s happened. I’ve fallen hopelessly for the beautiful and talented Jennifer Ouellette, science writer extraordinaire and proprietress of Cocktail Party Physics. I first plugged her blog (completely innocently! honestly!) back in March, and we met in person at an APS meeting, of all places. Best conference ever.

And, various cross-country jaunts and countless emails later, we’re engaged to be married. If it’s clear that you’ve found the perfect person with whom you want nothing more than to spend the rest of your life, you might as well get the presents, right?

Expressions of astonishment that I could have done so well by myself, and wonderings aloud concerning what in the world Jennifer must be thinking, may be left in the comment section. You needn’t tell me how fortunate I am — I know.

Dark Energy Has Long Been Dark-Energy-Like

Thursday (“today,” for most of you) at 1:00 p.m. Eastern, there will be a NASA Media Teleconference to discuss some new observations relevant to the behavior of dark energy at high redshifts (z > 1). Participants will be actual astronomers Adam Riess and Lou Strolger, as well as theorist poseurs Mario Livio and myself. If the press release is to be believed, the whole thing will be available as a live audio stream, and some pictures and descriptions will be made public once the telecon starts.

I’m not supposed to give away what’s going on, and might not have a chance to do an immediate post, but at some point I’ll update this post to explain it. If you read the press release, it says the point is “to announce the discovery that dark energy has been an ever-present constituent of space for most of the universe’s history.” Which means that the dark energy was acting dark-energy-like (a negative equation of state, or very slow evolution of the energy density) even back when the universe was matter-dominated.

Update: The short version is that Adam Riess and collaborators have used Hubble Space Telescope observations to discover 21 new supernovae, 13 of which are spectroscopically confirmed as Type Ia (the standardizable-candle kind) with redshifts z > 1. Using these, they place new constraints on the evolution of the dark energy density, in particular on the behavior of dark energy during the epoch when the universe was matter-dominated. The result is that the dark energy component seems to have been negative-pressure even back then; more specifically, w(z > 1) = -0.8^{+0.6}_{-1.0}, and w(z > 1) < 0 at 98% confidence.

Longer version: Dark energy, which is apparently about 70% of the energy of the universe (with about 25% dark matter and 5% ordinary matter), is characterized by two features — it’s distributed smoothly throughout space, and maintains nearly-constant density as the universe expands. This latter quality, persistence of the energy density, is sometimes translated as “negative pressure,” since the law of energy conservation relates the rate of change of the energy density to (ρ + p): specifically, dρ/dt = -3H(ρ + p), where ρ is the energy density, p is the pressure, and H is the Hubble expansion rate. Thus, if p = -ρ, the density is strictly constant; that’s vacuum energy, or the cosmological constant. But it could evolve just a little bit, and we wouldn’t have noticed yet. So we invent an “equation-of-state parameter” w = p/ρ. Then w = -1 implies that the dark energy density is constant; w > -1 implies that the density is decreasing, while w < -1 means that it’s increasing.
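
To make the scaling concrete, here’s a minimal sketch (mine, not anything from the paper) of how a component with constant w dilutes as the universe expands, using ρ ∝ a^-3(1+w), which follows directly from the conservation law above:

```python
# How a component's energy density scales with the cosmic scale factor a
# for a constant equation-of-state parameter w: rho ~ a**(-3*(1+w)).
# This follows from d(rho)/dt = -3H(rho + p) with p = w*rho.

def density(a, w, rho0=1.0):
    """Energy density at scale factor a, normalized so rho = rho0 at a = 1."""
    return rho0 * a ** (-3.0 * (1.0 + w))

for w in (-1.0, -0.8, 0.0):
    # a = 0.5 corresponds to redshift z = 1, when the universe
    # was half its current size
    print(f"w = {w:+.1f}: rho(z=1)/rho(today) = {density(0.5, w):.2f}")

# w = -1.0 (vacuum energy): density exactly constant (ratio 1.00)
# w = -0.8 (dark-energy-like): density modestly higher in the past (1.52)
# w =  0.0 (matter-like): density eight times higher at z = 1 (8.00)
```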

In the recent universe, supernova observations convince us that w = -1 ± 0.1; so the density is close to constant. But there are puzzles in the dark-energy game; why is the vacuum energy so small, and why are the densities of matter and dark energy comparable, even though matter evolves noticeably while dark energy is close to constant? So it’s certainly conceivable that the behavior of the dark energy was different in the past — in particular, that the density of what we now know as dark energy used to behave similarly to that of matter, fading away as the universe expanded, and only recently switched over to an appreciably negative value of w.

These new observations speak against that possibility. They include measurements of supernovae at high redshifts, back when the density of matter was higher than that of dark energy. They then constrain the value of w as it was back then, at redshifts greater than one (when the universe was less than half its current size). And the answer is … the dark energy was still dark-energy-like! That is, it had a negative pressure, and its energy density wasn’t evolving very much. It was in the process of catching up to the matter density, not “tracking” it in some sneaky way.

Of course, to get such a result requires some assumptions. Riess et al. consider three different “priors” — assumed behaviors for the dark energy. The “weak” prior makes no assumptions at all about what the dark energy was doing at redshifts greater than 1.8, and draws correspondingly weak conclusions. The “strong” prior uses data from the microwave background, along with the assumption (which is really not that strong) that the dark energy wasn’t actually dominating at those very high redshifts. That’s the prior under which the above results were obtained. The “strongest” prior imagines that we can extrapolate the behavior of the equation-of-state parameter linearly back in time — that’s a very strong prior indeed, and probably not realistic.

So everything is consistent with a perfectly constant vacuum energy. No big surprise, right? But everything about dark energy is a surprise, and we need to constantly be questioning all of our assumptions. The coincidence scandal is a real puzzle, and the idea that dark energy used to behave differently and has changed its nature recently is a perfectly reasonable one. We don’t yet know what the dark energy is or why it has the density it does, but every new piece of information nudges us a bit further down the road to really understanding it.

Update: The Riess et al. paper is now available as astro-ph/0611572. The link to the data is broken, but I think it means to go here.

Defense Wins Games

Matthew Yglesias and Tyler Cowen both consider the eternal question of whether defense or offense is more important, especially in the context of new NBA rules that allow for more scoring. Apparently some folks are arguing that, since it’s now easier to score, a team’s priority should be to bring in offensive-minded players, rather than concentrating on defense. (I’ll leave it as an exercise to the reader to identify the logical flaw there.) Yglesias argues that offense and defense must both be important, since the goal is to end the game with more points than the other team:

I concede that the new rules have made it harder to play defense. I fail to see, though, how that makes defense less important. Two factors determine who wins a basketball game: how many points your team scores and how many points the other team scores. Since you have the ball roughly half the time and the other team has the ball roughly half the time, it stands to reason that offense and defense should have exactly the same importance.

Unfortunately, that last bit is just as logically flawed as the previous argument. The truth is that defense is (still) significantly more important than offense in winning games.

How can that be, if teams (basically) spend the same amount of time, or number of possessions, on offense and defense? To decide which skill is more important, we have to consider the variation in results obtained by being good at one vs. being good at the other. In other words, which has a bigger effect on wins: being one of the best offensive teams, or being one of the best defensive teams?

Yglesias looks at some individual playoff results, which are somewhat inconclusive. But we can just look at the season stats and compare the results of being good at offense vs. being good at defense. Of course, we’re faced with deciding how to measure those skills. Points scored is actually not a good measure, since that is affected more by the pace of the game than by true offensive or defensive prowess. Points per possession would be perfect, but I don’t know where to find that stat. So instead let’s just look at Team Offensive/Defensive Field Goal Percentage (FG%), which is a pretty good proxy for offensive/defensive aptitude.

What you should really do is type in all the data and correlate with wins, but that sounds like work. Instead, let’s just define a “good offensive/defensive team” as one in the top 10 of the 30 teams in the NBA in offensive or defensive FG%, respectively, and “bad” as being in the bottom 10. We immediately see that there is a greater range in defensive aptitude than in offensive aptitude. The median good offensive team shoots at a .474 clip, whereas the median bad offensive team shoots .439, for a difference of .035. But while the median good defensive team holds their opponents to .439, the median bad defensive team only holds their opponents to .478, for a difference of .039. In other words, there is a slightly bigger difference between good and bad defensive teams than between good and bad offensive teams. Concentrating on defense, it should follow, would potentially have a bigger effect in the win/loss column.

And it does. The winning percentage of the good offensive teams is .580, while that of the bad offensive teams is .413, for a difference of .167. But the winning percentage of the good defensive teams is .615, while that of the bad defensive teams is .358, for a difference of .257. That’s a substantial difference. A good defensive team is much more likely to be a winner than a good offensive team.
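
As a sanity check, here’s a minimal sketch of that comparison in Python, using just the median figures quoted above (a stand-in for typing in the full thirty-team table):

```python
# Comparing the offense and defense "spreads" described above, using the
# median FG% and winning-percentage figures quoted in the text (these are
# the numbers from the post, not a full league dataset).

good_off_fg, bad_off_fg = 0.474, 0.439   # median offensive FG%, good vs. bad
good_def_fg, bad_def_fg = 0.439, 0.478   # median opponent FG%, good vs. bad

good_off_win, bad_off_win = 0.580, 0.413  # winning percentages
good_def_win, bad_def_win = 0.615, 0.358

print(f"FG% spread:  offense {good_off_fg - bad_off_fg:.3f}, "
      f"defense {bad_def_fg - good_def_fg:.3f}")
print(f"Win% spread: offense {good_off_win - bad_off_win:.3f}, "
      f"defense {good_def_win - bad_def_win:.3f}")
# Output: FG% spreads of .035 vs. .039, but win% spreads of .167 vs. .257:
# the defensive edge translates into wins much more strongly.
```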

The simple ex post facto explanation is just that all NBA players are pretty good scorers, or at least that the players who do the bulk of the scoring are all pretty good. There’s not too much of a difference in overall efficiency between the very-good and the truly excellent. But defensive abilities are much more variable, and perhaps also more dependent on coaching and team dynamics. Putting your effort into defense has a larger marginal payoff than putting it into offense. Which most coaches would agree with. People these days like to blame Pat Riley for that, but I think Bill Russell figured it out long ago.

Navel-Gazing Links

Following in JoAnne’s footsteps, I’ve been in the midst of arguably my most exhausting bout of insane traveling ever — nine different stops over the course of less than a month, two of which involved two talks in one day! And one of which was at my old Philadelphia stomping grounds, where the people of the great-but-occasionally-confused state of Pennsylvania had recently rid themselves of the creepy menace that is Rick Santorum. (Actually, Michael, it’s a great-but-occasionally-confused commonwealth, but you’re a Francophone carpetbagger so that’s okay.) And where I was greeted, upon driving out of the airport, by a lovely billboard proclaiming Santorum is Good for Senate (sic). With snappy slogans like that one, perched precariously at the grammatical cutting edge, I can’t imagine how he lost.

I’m looking forward to settling back down to bucolic LA and churning out the high-quality blogging that CV readers expect. In the meantime, a couple of links distinguished by the fact that they link to us!

We should just start a separate blog, and have every post on each consist of links to the other site. Just for balance, one link that doesn’t link to us:

  • A great explanation of the Beyond Einstein program by Charles Daney at Science and Reason (via Steinn). We complain all the time about government agencies cutting funding for basic science, but here we’re really seeing a wholesale gutting of NASA astrophysics in action.

Toward a Unified Epistemology of the Natural Sciences

Dr. Free-Ride reminds us of the celebrated free-verse philosophizing of Donald Rumsfeld, from a 2002 Department of Defense news briefing.

As we know,
There are known knowns.
There are things we know we know.

We also know
There are known unknowns.
That is to say
We know there are some things
We do not know.

But there are also unknown unknowns,
The ones we don’t know
We don’t know.

We tease our erstwhile Defense Secretary, but beneath the whimsical parallelisms, the quote actually makes perfect sense. In fact, I’ll be using it in my talk later today on the nature of science. One of the distinguishing features of science, I will argue, is that we pretty much know which knowns are known. That is to say, it’s obviously true that there are plenty of questions to which science does not know the answer, as well as some to which it does. But the nice thing is that we have a pretty good idea of where the boundary is. Where people often go wrong — and I’ll use examples of astrology, Intelligent Design, perpetual-motion machines, and What the Bleep Do We Know? — is in attempting to squeeze remarkable and wholly implausible wonders into the tightly-delimited regimes where science doesn’t yet have it all figured out, or hasn’t done some explicit experiment. (For example, it may be true that we haven’t taken apart and understood your specific perpetual-motion device, but it pretty obviously violates not only conservation of energy, but also Maxwell’s equations and Newton’s laws of motion. We don’t need to spend time worrying about your particular gizmo; we already know it can’t work.)

Rumsfeld’s comprehensive classification system did, admittedly, leave out the crucial category of unknown knowns — the things you think you know, that aren’t true. Those had something to do with his ultimate downfall.

Out-Einsteining Einstein

Among my recent peregrinations was a jaunt up to Santa Barbara, where I gave two talks in a row (although in different buildings, and to somewhat different audiences). Both were about attempts to weasel out of the need for dark stuff in the universe by trying to modify gravity.

The first talk, a high-energy theory seminar, was on trying to do away with dark energy by modifying gravity. I used an antiquated technology called “overhead transparencies” to give the talk itself, so there is no electronic record. If I get a chance sometime soon, I’ll post a summary of the different models I talked about.

The subsequent talk was over at the Kavli Institute for Theoretical Physics. There was a program on gravitational lensing going on, and they had asked Jim Hartle to give an overview of attempts to replace dark matter with modified gravity. Jim decided that he would be happier if I gave the talk, so it was all arranged to happen on a day I’d be visiting SB anyway. (Don’t feel bad for me; it was fun to give the talks, and they took me to a nice dinner afterwards.) I’m not really an expert on theories of gravity that do away with dark matter, but I’ve dabbled here and there, so I was able to put together a respectable colloquium-level talk.

And here it is. You can see the slides from the talk, as well as hear what I’m saying. I started somewhat lethargically, as it’s hard to switch gears quickly from one talk to another, but we built up some momentum by the end. I began quite broadly with the idea of different “gravitational degrees of freedom,” and worked my way up to Bekenstein’s TeVeS model (a relativistic version of Milgrom’s MOND), explaining the empirical difficulties with clusters of galaxies, the cosmic microwave background, and most recently the Bullet Cluster. We can’t say that the idea is ruled out, but the evidence that dark matter of some sort exists is overwhelming, which removes much of the motivation for modifying gravity.

The KITP is firmly in the vanguard of putting talks online, both audio/video and reproductions of the slides. By now they have quite the extensive collection of past talks, from technical seminars to informal discussions to public lectures, with several recent categories of particular interest.

On Friday I’ll be at Villanova, my alma mater, giving a general talk to undergraduates on what science is all about. I’m not sure if it will be recorded, but if the yet-to-be-written slides turn out okay, I’ll put them online.

Humankind’s Basic Picture of the Universe

Scott Aaronson has thrown down a gauntlet by claiming that theoretical computer science, “by any objective standard, has contributed at least as much over the last 30 years as (say) particle physics or cosmology to humankind’s basic picture of the universe.” Obviously the truth-value of such a statement will depend on what counts as our “basic picture of the universe,” but Scott was good enough to provide an explanation of the most important things that TCS has taught us, which is quite fascinating. (More here.) Apparently, if super-intelligent aliens landed and were able to pack boxes in our car trunks very efficiently, they could also prove the Riemann hypothesis. Although the car-packing might be more useful.

There are important issues of empiricism vs. idealism here. The kinds of questions addressed by “theoretical computer science” are in fact logical questions, addressable on the basis of pure mathematics. They are true of any conceivable world, not just the actual world in which we happen to live. What physics teaches us about, on the other hand, are empirical features of the contingent world in which we find ourselves — features that didn’t have to be true a priori. Spacetime didn’t have to be curved, after all; for that matter, the Earth didn’t have to go around the Sun (to the extent that it does). Those are just things that appear to be true of our universe, at least locally.

But let’s grant the hypothesis that our “picture of the universe” consists both of logical truths and empirical ones. Can we defend the honor of particle physics and cosmology here? What have we really contributed over the last 30 years to our basic picture of the universe? It’s not fair to include great insights that are part of some specific theory, but not yet established as true things about reality — so I wouldn’t include, for example, anomalies canceling in string theory, or the Strominger-Vafa explanation for microstates in black holes, or inflationary cosmology. And I wouldn’t include experimental findings that are important but not quite foundation-shaking — so neutrino masses don’t qualify.

With these very tough standards, I think there are two achievements that I would put up against anything in terms of contributions to our basic picture of the universe:

  1. An inventory of what the universe is made of. That’s pretty important, no? In units of energy density, it’s about 5% ordinary matter, 25% dark matter, 70% dark energy. We didn’t know that 30 years ago, and now we do. We can’t claim to fully understand it, but the evidence in favor of the basic picture is extremely strong. I’m including within this item things like “it’s been 14 billion years since the Big Bang,” which is pretty important in its own right. I thought of a separate item referring to the need for primordial scale-free perturbations and the growth of structure via gravitational instability — I think that one is arguably at the proper level of importance, but it’s a close call.
  2. The holographic principle. I’m using this as a catch-all for a number of insights, some of which are in the context of string theory, but they are robust enough to be pretty much guaranteed to be part of the final picture whether it involves string theory or not. The germ of the holographic principle is the idea that the number of degrees of freedom inside some region is not proportional to the volume of the region, but rather to the area of its boundary — an insight originally suggested by the behavior of Hawking radiation from black holes. But it goes way beyond that; for example, there can be dualities that establish the equivalence of two different theories defined in different numbers of dimensions (à la AdS/CFT). This establishes once and for all that spacetime is emergent — the underlying notion of a spacetime manifold is not a fundamental feature of reality, but just a good approximation in a certain part of parameter space. People have speculated about this for years, but now it’s actually been established in certain well-defined circumstances.

A short list, but we have every reason to be proud of it. These are insights, I would wager, that will still be part of our basic picture of reality two hundred years from now. Any other suggestions?

Things You Should Read On the Internet

Collected links, moldering in my bookmarks:

  • Eszter Hargittai writes about a new book by her father, István Hargittai, called The Martians of Science. It’s a heartwarming tale of five Jewish-Hungarian kids who studied physics and changed the world: Theodore von Karman, Leo Szilard, Eugene Wigner, John von Neumann, and Edward Teller. (Okay, so I’m guessing that the Teller story isn’t completely “heartwarming.”)
  • Coturnix announces a Science Blogging conference to be held in North Carolina on January 20, 2007.
  • Rob Knop gives an example of egregious scientific male misbehavior, in case anyone was skeptical that any such examples existed. The truth is, the number of senior male physicists who regularly hit on attractive younger women physicists is … well, it’s a very long list. And that’s only one kind of misbehavior. I once had a professor who wondered out loud (to a group of male students in his class) why the female students were doing better than they were on the problem sets. The possibility that the female students in that particular sample were just smarter, and that this was not really cause for a news bulletin, had apparently never occurred to him.
  • An archive of the Top 100 Images from the Hubble Space Telescope. This one (V838 Monocerotis) is my favorite, but this one and this one ain’t too shabby. The big news this week was that there will be a servicing mission to HST, which should keep it alive for several more years. I have slightly mixed feelings about this. HST has been an amazing instrument, and I was pushing to save it from my earliest blogging days. It does cost money, though, and NASA is in the midst of a budget crisis that is leading it to dismantle much of its astrophysics program. I was part of the committee that set up the original Beyond Einstein program, which proposed five near-term and mid-term missions: Constellation-X (an X-ray satellite), LISA (gravitational waves), Dark Energy Explorer (using either supernovae or weak lensing), Inflation Probe (looking for tensor modes in the CMB), and Black Hole Finder. Now we have a National Academies panel that will be looking over all of these to decide which one of them to actually go forward with. Still, the money spent on science is not a zero-sum game, so I’m happy to see Hubble saved for a while.
  • Best Google search to ever lead someone to Cosmic Variance (and there have been some doozies, let me tell you): sex. Apparently we are about the 320th best place to look on the web for information about sex. Whereas, for information about “physics,” we’re not in the top 500 or so (I stopped looking). A lot of you suspected this, but now Google has provided incontrovertible proof.

After Reading a Child’s Guide to Modern Physics

Abbas at 3 Quarks reminds us that next year is W.H. Auden’s centenary (and that Britain is curiously unenthusiastic about celebrating the event). The BBC allows you to hear Auden read this poem at a 1965 festival; his father was a physician.

If all a top physicist knows
About the Truth be true,
Then, for all the so-and-so’s,
Futility and grime,
Our common world contains,
We have a better time
Than the Greater Nebulae do,
Or the atoms in our brains.

Marriage is rarely bliss
But, surely it would be worse
As particles to pelt
At thousands of miles per sec
About a universe
Wherein a lover’s kiss
Would either not be felt
Or break the loved one’s neck.

Though the face at which I stare
While shaving it be cruel
For, year after year, it repels
An ageing suitor, it has,
Thank God, sufficient mass
To be altogether there,
Not an indeterminate gruel
Which is partly somewhere else.

Our eyes prefer to suppose
That a habitable place
Has a geocentric view,
That architects enclose
A quiet Euclidian space:
Exploded myths – but who
Could feel at home astraddle
An ever expanding saddle?

This passion of our kind
For the process of finding out
Is a fact one can hardly doubt,
But I would rejoice in it more
If I knew more clearly what
We wanted the knowledge for,
Felt certain still that the mind
Is free to know or not.

It has chosen once, it seems,
And whether our concern
For magnitude’s extremes
Really become a creature
Who comes in a median size,
Or politicizing Nature
Be altogether wise,
Is something we shall learn.

Ol’ Wystan is right; we do have a better time than most of the universe. It would be no fun to constantly worry that “a lover’s kiss / Would either not be felt / Or break the loved one’s neck.” And in a sense, it’s surprising (one might almost say unnatural) that our local conditions allow for the build-up of the delicate complexity necessary to nurture passion and poetry among us creatures of median size.

In most physical systems, we can get a pretty good idea of the relevant scales of length and time just by using dimensional analysis. If you have some fundamental timescale governing the behavior of a system, you naturally expect most processes characteristic of that system to happen on approximately that timescale, give or take an order of magnitude here or there. But our universe doesn’t work that way at all — there are dramatic balancing acts that stretch the relevant timescales far past their natural values. In the absence of any fine-tunings, the relevant timescale for the universe would be the Planck time, 10^-44 seconds, whereas the actual age of the universe is more like 10^18 seconds. This is actually two problems in one: why doesn’t the vacuum energy rapidly dominate over the energy density in matter and radiation — the cosmological constant problem — and, imagining that we’ve solved that one, why doesn’t spatial curvature dominate over all the energy density — the flatness problem. It would be much more “natural,” in other words, to live in either a cold and empty universe, or one that recollapsed in a jiffy.
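
For the curious, here’s a quick back-of-the-envelope check of those two numbers; nothing here is beyond standard constants and dimensional analysis:

```python
import math

# Planck time from dimensional analysis, t_P = sqrt(hbar * G / c^5),
# compared to the age of the universe. Standard SI values; the ~10^61
# ratio is the size of the balancing act described above.

hbar = 1.055e-34   # reduced Planck constant, J*s
G = 6.674e-11      # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

t_planck = math.sqrt(hbar * G / c**5)    # ~5.4e-44 s
t_universe = 13.7e9 * 3.156e7            # ~4.3e17 s (13.7 Gyr in seconds)

print(f"Planck time:      {t_planck:.1e} s")
print(f"Age of universe:  {t_universe:.1e} s")
print(f"Ratio:            {t_universe / t_planck:.1e}")   # ~10^61
```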

But given that the universe does linger around, it’s still a surprise that the matter within it exhibits interesting dynamics on timescales much longer than the Planck time. A human lifespan, for example, is about 10^9 seconds. The human/Planck hierarchy actually owes its existence to a multi-layered series of hierarchies. First, the characteristic energy scale of particle physics is set by electroweak symmetry breaking to be about 10^11 electron volts, far below the Planck energy at 10^27 electron volts. (That’s known to particle physicists as “the” hierarchy problem.) And then the mass of the electron (m_e ~ 5 x 10^5 electron volts) is smaller than it really should be, as it is suppressed with respect to the electroweak scale by a Yukawa coupling of about 10^-6. But then the weakness of the electromagnetic interaction, as manifested in the small value of the fine-structure constant α = 1/137, implies that the Rydberg (which sets the scale for atomic physics) is even lower than that:

Ry ~ α^2 m_e ~ 10 electron volts.

This energy corresponds to timescales (by inserting appropriate factors of Planck’s constant and the speed of light) of about 10^-17 seconds; much longer than the Planck time, but still much shorter than a human lifetime. The cascade of hierarchies continues; molecular binding energies are typically much smaller than a Rydberg, the timescales characteristic of mesoscopic collections of slowly-moving molecules are correspondingly longer still, etc.
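
Here is a minimal sketch of that cascade, using standard textbook values (the factor of 1/2 in the Rydberg formula is included so it lands on the familiar 13.6 eV):

```python
# The cascade of hierarchies described above, in electron volts.
# Standard textbook values; nothing here is specific to this post.

alpha = 1.0 / 137.0   # fine-structure constant
m_e = 5.11e5          # electron mass (eV)
E_ew = 1e11           # electroweak scale (eV), roughly 100 GeV
E_planck = 1e27       # (reduced) Planck scale (eV)
hbar = 6.58e-16       # reduced Planck constant (eV*s)

rydberg = 0.5 * alpha**2 * m_e   # ~13.6 eV

print(f"Electroweak / Planck:      {E_ew / E_planck:.0e}")   # ~1e-16
print(f"Electron Yukawa, m_e/E_ew: {m_e / E_ew:.0e}")        # ~5e-6, i.e. about 10^-6
print(f"Rydberg:                   {rydberg:.1f} eV")
print(f"Atomic timescale hbar/Ry:  {hbar / rydberg:.1e} s")  # ~5e-17 s
```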

Because we don’t yet fully understand the origin of these fantastic hierarchies, we can conclude that God exists. Okay, no we can’t. Really we can conclude that we live in a multiverse in which all of the constants of nature take on different values in different places. Okay, we can’t actually conclude that either. What we can do is keep thinking about it, not jumping to too many conclusions while we try to fill one of those pesky “gaps” in our understanding that people like to insist must be evidence for their personal favorite story of reality.

But “politicizing Nature,” now that’s just bad. Not altogether wise at all.
