Quantum interrogation

Quantum mechanics, as we all know, is weird. It’s weird enough in its own right, but when some determined experimenters do tricks that really bring out the weirdness in all its glory, and the results are conveyed to us by well-intentioned but occasionally murky vulgarizations in the popular press, it can seem even weirder than usual.

Last week was a classic example: the computer that could figure out the answer without actually doing a calculation! (See Uncertain Principles, Crooked Timber, 3 Quarks Daily.) The articles refer to an experiment performed by Onur Hosten and collaborators in Paul Kwiat’s group at Urbana-Champaign, involving an ingenious series of quantum-mechanical miracles. On the surface, these results seem nearly impossible to make sense of. (Indeed, Brad DeLong has nearly given up hope.) How can you get an answer without doing a calculation? Half of the problem is that imprecise language makes the experiment seem even more fantastical than it really is — the other half is that it really is quite astonishing.

Let me make a stab at explaining, perhaps not the entire exercise in quantum computation, but at least the most surprising part of the whole story — how you can detect something without actually looking at it. The substance of everything that I will say is simply a translation of the nice explanation of quantum interrogation at Kwiat’s page, with the exception that I will forgo the typically violent metaphors of blowing up bombs and killing cats in favor of a discussion of cute little puppies.

So here is our problem: a large box lies before us, and we would like to know whether there is a sleeping puppy inside. Except that, sensitive souls that we are, it’s really important that we don’t wake up the puppy. Furthermore, due to circumstances too complicated to get into right now, we only have one technique at our disposal: the ability to pass an item of food into a small flap in the box. If the food is something uninteresting to puppies, like a salad, we will get no reaction — the puppy will just keep slumbering peacefully, oblivious to the food. But if the food is something delicious (from the canine point of view), like a nice juicy steak, the aromas will awaken the puppy, which will begin to bark like mad.

It would seem that we are stuck. If we stick a salad into the box, we don’t learn anything, as from the outside we can’t tell the difference between a sleeping puppy and no puppy at all. If we stick a steak into the box, we will definitely learn whether there is a puppy in there, but only because it will wake up and start barking if it’s there, and that would break our over-sensitive hearts. Puppies need their sleep, after all.

Fortunately, we are not only very considerate, we are also excellent experimental physicists with a keen grasp of quantum mechanics. Quantum mechanics, according to the conventional interpretations that are good enough for our purposes here, says three crucial and amazing things.

  • First, objects can exist in “superpositions” of the characteristics we can measure about them. For example, if we have an item of food, according to old-fashioned classical mechanics it could perhaps be “salad” or “steak.” But according to quantum mechanics, the true state of the food could be a combination, known as a wavefunction, which takes the form (food) = a(salad) + b(steak), where a and b are some numerical coefficients. That is not to say (as you might get the impression) that we are not sure whether the food is salad or steak; rather, it really is a simultaneous superposition of both possibilities.
  • The second amazing thing is that we can never observe the food to be in such a superposition; whenever we (or sleeping puppies) observe the food, we always find that it appears to be either salad or steak. (Eigenstates of the food operator, for you experts.) The numerical coefficients a and b tell us the probability of measuring either alternative; the chance we will observe salad is a², while the chance we will observe steak is b². (Obviously, then, we must have a² + b² = 1, since the total probability must add up to one [at least, in a world in which the only kinds of food are salad and steak, which we are assuming for simplicity].)
  • Third and finally, the act of observing the food changes its state once and for all, to be purely whatever we have observed it to be. If we look and it’s salad, the state of the food item is henceforth (food) = (salad), while if we saw that it was steak we would have (food) = (steak). That’s the “collapse of the wavefunction.”

You can read all that again, it’s okay. It contains everything important you need to know about quantum mechanics; the rest is just some equations to make it look like science.
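If you like, that measurement rule is easy to simulate. Here is a toy sketch (mine, not part of the original discussion) that repeatedly “observes” the state a(salad) + b(steak) and checks that salad turns up a fraction a² of the time, assuming real coefficients:

```python
import random

def measure(a, b, trials=100_000):
    """Simulate many observations of the state a(salad) + b(steak).

    Each observation collapses to 'salad' with probability a**2 and to
    'steak' with probability b**2; real coefficients are assumed, so
    a**2 + b**2 must equal 1.  Returns the observed salad fraction.
    """
    assert abs(a**2 + b**2 - 1) < 1e-9, "coefficients must be normalized"
    salads = sum(random.random() < a**2 for _ in range(trials))
    return salads / trials

# An equal superposition, a = b = 1/sqrt(2): each outcome has probability 1/2.
print(measure(2**-0.5, 2**-0.5))  # hovers near 0.5
```

Note that no single observation ever returns “a superposition”; each one comes out definitely salad or definitely steak, exactly as in the second point above.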

Now let’s put it to work to find some puppies without waking them up. …


Elsewhere

Chad Orzel’s competition to choose the Greatest Physics Experiment — via the magic of the ballot box — is coming to a close, so go vote soon. Nominees include Galileo, Roemer, Newton, Cavendish, Faraday, Michelson and Morley, Hertz, Rutherford, Hubble, Mössbauer, and Aspect. I totally think Galileo should win, for discovering the moons of Jupiter — it’s not every day you simultaneously demonstrate the value of perhaps the single most useful instrument in the physical sciences (the telescope) and show that the Earth is not the center of the universe.

Meanwhile, coturnix of Science and Politics, guest-blogging at Majikthise, points to his growing list of science blogs. Who knew there were so many? Remember, if you have a physics- or astronomy-oriented blog that is not manifestly crazy, we’re happy to put it on the “Physics and Astronomy Blogs” list here at CV.


Why 10 or 11?

Why does string theory require 10 or 11 spacetime dimensions? The answer at a technical level is well-known, but it’s hard to bring it down to earth. By reading economics blogs by people who check out political theory blogs, I stumbled across an attempt at making it clear — by frequent CV commenter Moshe Rozali, writing in Scientific American. After explaining a bit about supersymmetry, Moshe concludes:

A guide in this pursuit is a theorem put forth by physicists Steven Weinberg and Edward Witten, which proves that theories containing particles with spin higher than 2 are trivial. Remember each supersymmetry changes the spin by one half. If we want the spin to be between -2 and 2, we cannot have more than eight supersymmetries. The resulting theory contains a spin-2 boson, which is just what is needed to convey the force of gravitation and thereby unite all physical interactions in a single theory. This theory, called N=8 supergravity, is the maximally symmetric theory possible in four dimensions and it has been a subject of intense research since the 1980s.

Another type of symmetry occurs when an object remains the same despite being rotated in space. Because there is no preferred direction in empty space, rotations in three dimensions are symmetric. Suppose the universe had a few extra dimensions. That would lead to extra symmetries because there would be more ways to rotate an object in this extended space than in our three-dimensional space. Two objects that look different from our vantage point in the three visible dimensions might actually be the same object, rotated to different degrees in the higher-dimensional space. Therefore all properties of these seemingly different objects will be related to each other; once again, simplicity would underlie the complexity of our world.

These two types of symmetry look very different but modern theories treat them as two sides of the same coin. Rotations in a higher-dimensional space can turn one supersymmetry into another. So the limit on the number of supersymmetries puts a limit on the number of extra dimensions. The limit turns out to be 6 or 7 dimensions in addition to the four dimensions of length, width, height and time, both possibilities giving rise to exactly eight supersymmetries (M-theory is a proposal to further unify both cases). Any more dimensions would result in too much supersymmetry and a theoretical structure too simple to explain the complexity of the natural world.
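The counting Moshe invokes can be written in one line. Each of the N supersymmetries shifts the helicity of a state by 1/2, so a multiplet starting at helicity −2 tops out at −2 + N/2; demanding that nothing exceed spin 2 gives the limit:

```latex
\lambda_{\mathrm{max}} = -2 + \frac{N}{2} \leq 2
\quad\Longrightarrow\quad
N \leq 8 .
```

That is the N = 8 of maximal supergravity; the 6 or 7 extra dimensions then follow from demanding that higher-dimensional rotations not generate more than eight supersymmetries.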

This is reminiscent of Joe Polchinski’s argument (somewhat tongue-in-cheek, somewhat serious) that all attempts to quantize gravity should eventually lead to string theory. According to Joe, whenever you sit around trying to quantize gravity, you will eventually realize that your task is made easier by supersymmetry, which helps cancel divergences. Once you add supersymmetry to your theory, you’ll try to add as much as possible, which leads you to N=8 in four dimensions. Then you’ll figure out that this theory has a natural interpretation as a compactification of maximal supersymmetry in eleven dimensions. Gradually it will dawn on you that 11-dimensional supergravity contains not only fields, but two-dimensional membranes. And then you will ask what happens if you compactify one of those dimensions on a circle, and you’ll see that the membranes become superstrings. Voila!


Mitochondrial Eve and you

We’re all brothers and sisters under the skin — or at least distant cousins. According to the popular Single-Origin Hypothesis (the “Out of Africa” theory), the human race originated in eastern Africa something like 100,000-200,000 years ago. The alternative Multiregional Hypothesis (which seems less likely to me, but what do I know) says that Homo sapiens evolved independently in several places, but even then there were Homo erectus ancestors that evolved in Africa — the question is really which populations should and should not count as Homo sapiens.

That means that we share common ancestors. And no, you needn’t be one of the three million Irish descended from Niall of the Nine Hostages, or the sixteen million people worldwide descended from Genghis Khan. If you go back far enough, you’ll eventually hit the human race’s most recent common ancestor, some lucky breeder with billions of living descendants — possibly as late as the first or second millennium BCE. We can also imagine tracing back to Mitochondrial Eve and Y-chromosomal Adam — our most recent common ancestors through purely matrilineal and patrilineal lines, respectively. (Adam and Eve didn’t know each other; he lived 60,000-90,000 years ago, while she lived perhaps 150,000 years ago.) The point is, if you follow your family tree backwards, it keeps branching into more and more ancestors, and eventually all of our individual trees get mixed together. So, for example, Bill O’Reilly and Michael Moore are distantly related, although the family reunions are likely a bit awkward.
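The claim that everyone’s family trees get mixed together is really just arithmetic: the number of ancestor slots doubles every generation and quickly exceeds any plausible historical population, so the same people must fill many slots. A back-of-the-envelope sketch (mine, with a rough 25-years-per-generation assumption):

```python
def ancestor_slots(generations):
    """Naive count of slots in your family tree n generations back:
    2 parents, 4 grandparents, ..., 2**n great-...-grandparents."""
    return 2 ** generations

# Roughly 25 years per generation puts 1000 years at about 40 generations back.
print(f"{ancestor_slots(40):,}")  # over a trillion slots
```

Over a trillion slots, versus a world population well under a billion a thousand years ago: the same real individuals necessarily appear over and over, in everybody’s tree at once.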

An obvious question is: how did we get from there to here? How did human DNA mix and match, spread out through various locales and ethnicities, and focus together to create that pinnacle of biological achievement: you? Well, a new project from National Geographic, IBM, Spencer Wells, and the Waitt Foundation aims to find out: the Genographic Project (hat tip to Maria). They are collecting DNA samples from all over the world, and using genetic markers characteristic of certain populations to infer how humans migrated across the globe, cheerfully (or not so cheerfully, often enough) reproducing along the way.

Best of all: you can participate! Sadly, you have to pay ($100) to join the project, rather than receiving recompense for your services; but it’s pretty cool. You get a kit that allows you to take a sample of your own DNA and send it in for analysis. The results won’t tell you about your immediate family, but they’ll reveal the geographical origins of your deeper ancestry. C’mon, you want to know where your haplogroup originated, don’t you? And we need to hurry, before the intimate (as it were) interconnectivity of the global village scrambles our genetic markers once and for all.


The future of the universe

This month’s provocative results on the acceleration of the universe raise an interesting issue: what can we say about our universe’s ultimate fate? In the old days (like, when I was in grad school) we were told a story that was simple, compelling, and wrong. It went like this: matter acts to slow down the expansion of the universe, and also to give it spatial curvature. If there is enough matter, space will be positively curved (like a sphere) and will eventually collapse into a Big Crunch. If there is little matter, space will be negatively curved (like a saddle) and expand forever. And if the matter content is just right, space will be flat and will just barely expand forever, slowing down all the while.
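The old story is encoded in the Friedmann equation (standard textbook material, not spelled out in the post), which ties the expansion rate H to the energy density ρ and the spatial curvature κ:

```latex
H^2 = \frac{8\pi G}{3}\,\rho - \frac{\kappa}{a^2},
\qquad
\rho_{\mathrm{crit}} \equiv \frac{3H^2}{8\pi G}.
```

For matter alone, ρ > ρ_crit forces κ > 0 and eventual recollapse; ρ < ρ_crit gives κ < 0 and eternal expansion; ρ = ρ_crit is the flat, just-barely-expanding case.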

Fate of the universe This story is wrong in a couple of important ways. First and foremost, the assumption that the only important component of the universe is “matter” (or radiation, for that matter) is unduly restrictive. Now that we think that there is dark energy, the simple relation between spatial curvature and the ultimate fate of the universe is completely out the window. We can have positively curved universes that expand forever, negatively curved ones that recollapse, or what have you. (See my little article on the cosmological constant.) To determine the ultimate fate of the universe, you need to know both how much dark energy there is, and how it changes with time. (Mark has also written about this with Dragan Huterer and Glenn Starkman.)

If we take current observations at face value, and make the economical assumption that the dark energy is strictly constant in density, all indications are that the universe is going to expand forever, never to recollapse. If any of your friends go on a trip that extends beyond the Hubble radius (about ten billion light-years), kiss them goodbye, because they won’t ever be able to return — the space in between you and them will expand so quickly that they couldn’t get back to you, even if they were moving at the speed of light. Meanwhile, stars will die out and eventually collapse to black holes. The black holes will ultimately evaporate, leaving nothing in the universe but an increasingly dilute and cold gas of particles. A desolate, quiet, and lonely universe.
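For the record, the Hubble radius quoted above is just c/H0. A quick estimate (my numbers: H0 of about 70 km/s/Mpc, assumed rather than taken from the post) lands in the ballpark of ten to fifteen billion light-years:

```python
def hubble_radius_gly(H0=70.0):
    """Hubble radius c/H0 in billions of light-years (Gly),
    for a Hubble constant H0 given in km/s/Mpc."""
    c_km_s = 2.998e5        # speed of light in km/s
    mpc_to_gly = 3.262e-3   # 1 Mpc = 3.262 million ly = 3.262e-3 Gly
    return (c_km_s / H0) * mpc_to_gly

print(f"{hubble_radius_gly():.1f} billion light-years")
```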

However, if the dark energy density actually increases with time, as it does with phantom energy, a completely new possibility presents itself: not a Big Crunch, but a Big Rip. Explored by McInnes and by Robert Caldwell, Marc Kamionkowski, and Nevin Weinberg, the Big Rip happens when the universe isn’t just accelerating, but super-accelerating — i.e., the rate of acceleration is perpetually increasing. If that happens, all hell breaks loose. The super-accelerated expansion of spacetime exerts a stretching force on all the galaxies, stars, and atoms in the universe. As it increases in strength, every bound structure in the universe is ultimately ripped apart. Eventually we hit a singularity, but a very different one than in the Big Crunch picture: rather than being squashed together, matter is torn to bits and scattered to infinity in a finite amount of time. Some observations, including the new gamma-ray-burst results, show a tiny preference for an increasing dark energy density; but given the implications of such a result, they are far from meeting the standard for convincing anyone that we’ve confidently measured any evolution of the dark energy at all.
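The “finite amount of time” can be read off from the scale factor for a constant equation-of-state parameter w < −1 (a standard result, not derived in the post):

```latex
a(t) \propto \left(t_{\mathrm{rip}} - t\right)^{\frac{2}{3(1+w)}},
\qquad w < -1 .
```

Since 1 + w < 0, the exponent is negative, so a(t) diverges as t approaches the finite time t_rip: every bound structure gets torn apart before the singularity arrives.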

So, it sounds like we’d like to know whether this Big Rip thing is going to happen, right? Yes, but there’s bad news: we don’t know if we’re headed for a Big Rip, and no set of cosmological observations will ever tell us. The point is, observations of the past and present are never by themselves sufficient to predict the future. That can only be done within the framework of a theory in which we have confidence. We can say that the universe will hit a Big Rip in so-and-so many years if the dark energy is increasing in density at a certain rate and we are sure that it will continue to increase at that rate. But how can we ever be sure of what the dark energy will do twenty trillion years from now? Only by actually understanding the nature of the dark energy can we extrapolate from present behavior to the distant future. In fact, it’s perfectly straightforward (and arguably more natural) for a phase of super-accelerated expansion to last for a while, before settling down to a more gently accelerated phase, avoiding the Big Rip entirely. Truth is, we just don’t know. This is one of those problems that ineluctably depends on progress in both observation and theory.


Extremophilia

The Astronomy Picture of the Day from Sunday was a cool one — a nutrient agar plate of Deinococcus radiodurans, a/k/a “Conan the Bacterium.” (Photo: M. Daly, Uniformed Services University of the Health Sciences.) D. rad is quite the remarkable little microbe — it’s an extremophile, an organism that thrives in conditions that you and I would deem overly harsh. (And no, not the internet.) It even has a listing in the Guinness Book of World Records under “World’s Toughest Bacterium.”

D. rad is able to survive in vacuum and through extremes of temperature as well as dehydration, but its special ability is to shrug off large amounts of radiation: a dosage 3,000 times what would kill a strapping young human. Now, you may perhaps wonder why the Intelligent Designer would bother to equip a certain unicellular organism with such an impressive, but not manifestly adaptive, kind of superpower. It could be that radiation tolerance was quite useful in the environment of the very young Earth, but biologists also suspect that the radiation resistance comes along with resistance to dehydration (which is obviously useful) — radiation and dehydration seem to cause similar types of DNA damage, and D. rad has a remarkable ability to keep its DNA in good working order. It carries along several copies of its genome, stacked on top of each other, ready to step in at the first sign of damage. It’s like towing an entire repair shop behind your car at all times.
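The repair-shop trick is, in computing terms, plain redundancy-based error correction. A cartoon version (purely illustrative; it has nothing to do with the actual biochemistry of D. rad’s repair machinery) recovers a sequence by majority vote across damaged copies:

```python
from collections import Counter

def repair(copies):
    """Reconstruct a sequence from several damaged copies by taking the
    most common base at each position ('?' marks a damaged base)."""
    repaired = []
    for bases in zip(*copies):
        counts = Counter(b for b in bases if b != "?")
        repaired.append(counts.most_common(1)[0][0])
    return "".join(repaired)

copies = ["AC?TG", "A?GTG", "ACGT?"]  # three copies, each with damage
print(repair(copies))  # ACGTG
```

As long as damage doesn’t hit the same position in most copies at once, the original sequence survives — which is exactly why carrying several stacked genome copies is a good strategy.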

Which means, of course, that we meddling humans want to put it to work. D. rad has already been genetically engineered to clean up toxic mercury, a common contaminant of highly radioactive waste sites. And now, NASA is exploring the possibility of recruiting the plucky bacterium into the astronaut corps. They are imagining adapting D. rad to help with a variety of tasks that humans might face on a trip to Mars — synthesizing drugs, recycling wastes, producing food, all the way up to terraforming the planet. If I were in charge of this project, I would tread pretty lightly here. These are some tough bacteria — I wouldn’t be surprised if they’re just biding their time until we can fly them to Mars, at which point they’ll rise up and take over both planets.


Get out the vote!

Sir Isaac Newton may have written the greatest physics work of all time, but he shouldn’t rest easy — he has heavy competition for being the greatest experimenter. Chad Orzel at Uncertain Principles aims to find out. He’s assembled an impressive list of nominees for the greatest physics experiment ever (and is drumming up interest in the greatest in other fields). Contenders include such household names as Galileo, Roemer, Faraday, Cavendish, Michelson and Morley, Hertz, Rutherford, Hubble, Mössbauer, and Aspect, not to mention Newton himself. Greedy bastard. Be sure to go vote.

On the opposite side of the practicality/speculation scale, Christine Dantas has the somewhat more modest goal of finding the Best Quantum Gravity Paper of 2005. Help out, she needs both nominees and votes. Of course what we think is the best paper now might not be what we remember a hundred years later.

A final way to have your own bit of vox populi be heard is to visit Wampum and vote for the Koufax awards (previously mentioned here). You’ll have to keep checking in, as posts where you can actually vote are gradually being assembled; so far we’ve seen

If I’m good I’ll keep a list here. We’ve been nominated in a few categories, including Best New Blog; I have high hopes for a respectable third-place showing in the Best Expert Blog category behind Pharyngula and Informed Comment.


Could have predicted this

Man, I go away for a couple of days and all my co-bloggers choose to take a siesta. I’m going to have to give them a good talking-to, I tell you.

Now I’m stuck in Philadelphia for one night more than planned, due to an unforeseen outbreak of weather back in Chicago. The City of Brotherly Love has greatly come up in the world since I grew up in the suburbs a couple of decades ago — the Rittenhouse Square neighborhood, where I’m staying, is a really lively and engaging downtown environment.

Nothing of substance to report, so I’ll point you to this takedown of astrology by Phil Plait of Bad Astronomy fame, which is worthy of some contemplation. We all know that astrology is nonsense, but it’s worth the exercise to try to explain to people who aren’t well-versed in science why we know that astrology can’t work even without doing elaborate double-blind tests. Phil’s argument is the same one that I’ve given before: we really do know something about the forces of nature, and there is absolutely no room to fit paranormal phenomena into what we know. There’s much we don’t know, and much we do; sometimes we even have a pretty good idea of where the boundary is.


Evolving dark energy?

Don’t be surprised if you keep reading astronomy stories in the news this week — the annual meeting of the American Astronomical Society is underway in Washington DC, and it’s common for groups to announce exciting results at this meeting. Today there was a provocative new claim from Bradley Schaefer at Louisiana State University — the dark energy is evolving in time! (Read about it also from Phil Plait and George Musser.)

Short version of my own take: interesting, but too preliminary to get really excited. Schaefer has used gamma-ray bursts (GRB’s) as standard candles to measure the distance vs. redshift relation deep into the universe’s history — up to redshifts greater than 6, as opposed to ordinary supernova studies, which are lucky to get much past redshift 1. To pull this off, you want “standard candles” — objects that are really bright (so you can see them far away), and have a known intrinsic luminosity (so you can infer their distance from how bright they appear). True standard candles are hard to find, so we settle for “standardizable” candles — objects that might vary in brightness, but in a way that can be correlated with some other observable property, and therefore accounted for. The classic example is Cepheid variables, which have a relationship between their oscillation period and their intrinsic brightness.
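The “infer their distance from how bright they appear” step is just the inverse-square law, F = L/4πd². A minimal sketch (mine, with made-up numbers):

```python
import math

def luminosity_distance(L, F):
    """Distance to a candle of intrinsic luminosity L (watts), given the
    observed flux F (watts per square meter), from F = L / (4*pi*d**2)."""
    return math.sqrt(L / (4 * math.pi * F))

# A candle of known luminosity observed at some hypothetical flux:
L = 3.8e26   # watts, roughly one solar luminosity, just for scale
F = 1.0e-12  # watts per square meter, made up for illustration
print(f"{luminosity_distance(L, F):.2e} meters")
```

Real cosmological analyses use the luminosity distance as a function of redshift, which folds in the expansion history; that dependence is exactly what lets standard candles probe dark energy.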

Certain supernovae, known as Type Ia’s, have quite a nice correlation between their peak brightness and the time it takes for them to diminish in brightness. That makes them great standardizable candles, since they’re also really bright. GRB’s are much brighter, but aren’t nearly so easy to standardize — Schaefer used a model in which five different properties were correlated with peak brightness (details). The result? The best fit is a model in which the dark energy density (energy per cubic centimeter) is gradually growing with time, rather than being strictly constant.

GRB Hubble Diagram
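Combining several luminosity indicators per burst amounts to averaging several distance estimates, each with its own error bar. The standard statistical recipe is inverse-variance weighting; the sketch below (a generic recipe with hypothetical numbers, not Schaefer’s actual procedure) shows the idea:

```python
def combine_estimates(values, sigmas):
    """Inverse-variance weighted mean of independent estimates, plus the
    combined uncertainty.  Tighter (small-sigma) indicators count for more."""
    weights = [1.0 / s**2 for s in sigmas]
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    return mean, total ** -0.5

# Hypothetical distance moduli from three indicators for one burst:
mu, err = combine_estimates([44.1, 44.6, 44.3], [0.5, 0.4, 0.6])
print(f"mu = {mu:.2f} +/- {err:.2f}")
```

With several indicators per burst, the combined error bar shrinks, which is part of why using 3-4 indicators for each of 52 GRBs is a real improvement over one indicator for 17.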

If it’s true, this is an amazingly important result. There are four possibilities for why the universe is accelerating: a true cosmological constant (vacuum energy), dynamical (time-dependent) dark energy, a modification of gravity, or something fundamental being missed by all us cosmologists. The first possibility is the most straightforward and most popular. If it’s not right, the set of theoretical ideas that physicists pursue to help explain the acceleration of the universe will be completely different than if it is right. So we need to know the answer!

What’s more, the best-fit behavior for the dark energy density seems to have it increasing with time, as in phantom energy. In terms of the equation-of-state parameter w, it is less than -1 (or close to -1, but with a positive derivative w’). That’s quite bizarre and unexpected.

GRB w plot
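The tie between w and the evolution of the density follows from energy conservation: for a constant equation of state p = wρ, the dark energy density scales as a power of the scale factor, and the sign of the exponent flips at w = −1:

```latex
\rho \propto a^{-3(1+w)}
\quad\Longrightarrow\quad
\begin{cases}
w > -1: & \text{density dilutes as the universe expands,}\\
w = -1: & \text{density stays constant (cosmological constant),}\\
w < -1: & \text{density grows with time (phantom energy).}
\end{cases}
```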

As I said, at this point I’m a bit skeptical, but willing to wait and see. Most importantly, the statistical significance of the finding is only 2.5σ (97% confidence), whereas the informal standard in much of physics for discovering something is 3σ (99% confidence). As a side worry, at these very high redshifts the effect of gravitational lensing becomes crucial. If the light from a GRB passes nearby a mass concentration like a galaxy or cluster, it can easily be amplified in brightness. I am not really an expert on how important this effect is, nor do I know whether it’s been taken into account, but it’s good to keep in mind how little we know about GRB’s and the universe at high redshift more generally.

So my betting money stays on the cosmological constant. But the odds have shifted, just a touch.

Update: Bradley Schaefer, author of the study, was nice enough to leave a detailed comment about what he had actually done and what the implications are. I’m reproducing it here for the benefit of people who don’t necessarily dip into the comments:

Sean has pointed me to this blog and requested me to send along any comments that I might have. His summary at the top is reasonable.

I’d break my results into two parts. The first part is that I’m putting forward a demonstration of a new method to measure Dark Energy by means of using GRBs as standard candles out to high red shift. My work is all rather standard with most everything I’ve done just following what has been in the literature.

The GRB Hubble Diagram has been in print since 2003, with myself and Josh Bloom independently presenting early versions in public talks as far back as 2001. Over the past year, several groups have used the GRB Hubble Diagram to start putting constraints on cosmology. This prior work has always used only one GRB luminosity indicator (various different indicators for the various papers) and for no more than 17 GRBs (neglecting GRBs with only limits).

What I am doing new is I am using much more data and I’m directly addressing the question of the change of the Dark Energy. In all, I am using 52 GRBs and each GRB has 3-4 luminosity indicators on average. So I’ve got a lot more data. And this allows for a demonstration of the GRB Hubble Diagram as a new method.

The advantages of this new method are that it goes to high redshift, looking at the expansion history of the Universe from redshift 1.7 to 6.3, and that it is impervious to extinction. Also, I argue that there should be no evolution effects as the GRB luminosity indicators are based on energetics and light travel time (which should not evolve). Another advantage is that we have the data now, with the size of the database to be doubled within two years by HETE and Swift.

One disadvantage of the GRB Hubble Diagram is that the GRBs are lower in quality than supernovae. Currently my median one sigma error bar is 2.6-times worse in comparing a single GRB and a single supernova. But just as with supernovae, I expect that the accuracy of GRB luminosities can be rapidly improved. [After all, in 1996, I was organizing debates between the graduate students as to whether Type Ia SNe were standard candles or not.] Another substantial problem that is hard to quantify is that our knowledge of the physical processes in GRBs is not perfect (and certainly much worse than what we know for SNe). It is rational and prudent for everyone to worry that there are hidden problems (although I now know of none). A simple historical example is how Cepheids were found to have two types with different calibrations.

So the first part of my talk was simply presenting a new method for getting the expansion history of the Universe from redshifts up to 6.3. For this, I am pretty confident that the method will work. Inevitably there will be improvements, new data, corrections, and all the usual changes (just as for the supernovae).

The second part of my talk was to point out the first results, which I could not avoid giving. It so happens that the first results point against the Cosmological Constant. I agree with Sean that this second part should not be pushed, for various reasons. Foremost is that the result is only 2.5-sigma.

Both parts of my results are being cast onto a background where various large groups are now competing for a new dedicated satellite.


Quick hits

Two quick things noticed on Cynical-C.

First, very much up the old Cosmic Variance alley, a list of the Ten Most Beautiful Physics Experiments ever. I have a funny feeling we’ve linked to it before, but it’s worth a visit. I would have voted for Archimedes taking a bath over Galileo dropping balls from the Leaning Tower of Pisa, which after all probably never happened. (He did, however, amuse himself during sermons at the next-door cathedral by using his pulse to time the chandelier swinging overhead, thereby discovering that the period of a pendulum is independent of its amplitude.)

Our fan base will verify that we here at CV are utterly beholden to the dictates of political correctness, so this other link is somewhat outside our normal fare: but it is perhaps the best blonde joke ever.
