Science

arxiv Find: Breakdown of Classical Gravity?

The single most interesting feature of attempts to replace dark matter with a modification of gravity is Milgrom's discovery that in a wide variety of galaxies, there's a unique place where ordinary gravity plus ordinary matter stops working: when the acceleration due to gravity (as Newton would have calculated it) drops below a fixed value a0 ≈ 10⁻¹⁰ m/s². This is the basis of MOND, but the pattern itself is arguably more interesting than any current attempt to account for it. Very possibly it can be explained by the complicated dynamics of baryons and dark matter in galaxies — but in any event it should be explained somehow.

The existence of this feature gives a strong motivation for testing gravity in the regime of very tiny accelerations. Note that this isn’t even a statement that makes sense in general relativity; particles move on geodesics, and the “acceleration due to gravity” is always exactly zero. So implicitly we’re imagining some global inertial frame with respect to which such acceleration can be measured. That’s a job for a future theory to make sense of; for the moment we’re forgetting that we know GR and thinking like Newton would have.

So now Hernandez, Jimenez, and Allen have tried to test gravity in this weak-acceleration regime — and they claim it fails!

The Breakdown of Classical Gravity?
X. Hernandez, M. A. Jimenez, C. Allen

Assuming Newton's gravity and GR to be valid at all scales, leads to the dark matter hypothesis as a forced requirement demanded by the observed dynamics and measured baryonic content at galactic and extra galactic scales. Alternatively, one can propose a contrasting scenario where gravity exhibits a change of regime at acceleration scales less than $a_{0}$, and obtain just as good a fit to observations across astrophysical scales. A critical experiment in this debate is offered by wide orbit binary stars. Since for $1\,M_{\odot}$ systems the acceleration drops below $a_{0}$ at scales of around 7000 AU, a statistical survey of relative velocities and binary separations reaching beyond $10^{4}$ AU should yield a conclusive answer to the above debate. By performing such a study we show Kepler's third law to fail precisely beyond $a \approx a_{0}$ scales, precisely as predicted by modified gravity theories designed not to require any dark matter at galactic scales and beyond.
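A quick back-of-the-envelope check (mine, not the paper's) of that 7000 AU figure: the radius at which the Newtonian acceleration GM/r² falls to a0 for a solar-mass system.

```python
# Back-of-the-envelope check (not from the paper): radius at which the
# Newtonian acceleration GM/r^2 drops to the MOND scale a0 for 1 M_sun.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
a0 = 1.2e-10       # MOND acceleration scale, m/s^2
AU = 1.496e11      # astronomical unit, m

r = math.sqrt(G * M_sun / a0)     # solve G*M/r^2 = a0 for r
print(f"r = {r / AU:.0f} AU")     # roughly 7000 AU, as the abstract says
```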

Color me dubious, but interested in seeing further studies. It’s very hard to collect this kind of data, and note that it’s just a statistical survey of velocities, not a precise measurement of individual systems. In principle a statistical survey is fine; in practice, it opens up the possibility of hidden subtle systematic effects.

Still, intriguing and worth checking out. Any time you have the chance to overthrow Sir Isaac Newton, you go for it.


Dark Matter is Just Messing With Us Now

The state of play in dark matter searches just refuses to settle down. Just a few weeks ago, the XENON100 experiment released the best-yet limits on WIMP dark matter (a two-dimensional parameter space, “mass of the dark matter particle” and “cross section with ordinary matter”). These limits seemed to firmly exclude the hints of a signal that had been trickling in from other experiments. But… the story isn’t over yet.

Remember that XENON, like CDMS and other experiments, tries to find dark matter by making a very quiet experiment and picking out individual events where a dark matter particle bumps into a nucleus inside the detector. There is a complementary strategy, looking for annual modulations in the dark matter signal: rather than being very picky about what event is and is not a DM interaction, just take lots of events and look for tiny changes in the rate as the Earth moves around the Sun. Dark matter is like an atmosphere through which we are moving; when we’re moving into a headwind, the rate of interactions should be slightly higher than when our relative speed through the ambient dark matter is smaller. The DAMA experiment was designed to look for such a modulation, and it certainly sees one. The problem is that lots of things modulate on a one-year timescale; as Juan Collar explained in a guest post here, there were many questions about whether what DAMA is detecting is really dark matter.
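To make the expected signal concrete, here's a minimal sketch (an illustrative toy model with made-up numbers, not the actual DAMA or CoGeNT analysis) of an annually modulated rate: a constant plus a small cosine with a one-year period, peaking near the beginning of June, when the Earth's orbital velocity adds most directly to the Sun's motion through the galactic halo.

```python
# Toy model of an annual-modulation signal (illustrative only).
import numpy as np

R0 = 3.0     # average rate in events/day (order of CoGeNT's quoted rate)
Sm = 0.2     # modulation amplitude in events/day (made-up value)
t0 = 152.0   # day of year of the expected peak (~June 2)
T = 365.25   # modulation period in days

def rate(t_days):
    """Expected event rate as a function of time in days."""
    return R0 + Sm * np.cos(2 * np.pi * (t_days - t0) / T)

t = np.arange(0, 442)        # a 442-day run, like CoGeNT's
expected = rate(t)           # compare binned data to this to fit Sm and t0
```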

Now one of Juan's own experiments, CoGeNT, has seen (very tentative) hints of an annual modulation itself! CoGeNT had already teased us with a hint of a dark matter signal, which (like DAMA) seemed to imply lower masses (about 10 GeV, where 1 GeV is the mass of a proton) rather than the usual masses for weakly-interacting dark matter favored by theorists (hundreds of GeV). But the competitor experiment CDMS, and later of course XENON, seemed to put the kibosh on those claims. The CDMS result was especially hurtful to CoGeNT's claims, as both experiments use germanium as their detector material. Theorists are very clever at inventing models in which dark matter interacts with one substance but not some other substance (see e.g.), but it's harder to invent models where dark matter interacts with one substance in one experiment but not the same substance in some other experiment.
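The standard trick in that class of models (my gloss, not anything specific to the links above): for spin-independent scattering, the coherent cross section on a nucleus with $Z$ protons and $A-Z$ neutrons scales as

$$ \sigma_A \propto \big[ f_p Z + f_n (A - Z) \big]^2 , $$

so tuning the neutron-to-proton coupling ratio to $f_n/f_p \approx -Z/(A-Z) \approx -0.7$ for xenon nearly cancels the xenon amplitude while leaving a suppressed-but-nonzero germanium rate — changing the relative sensitivity of the two targets, but still not helping if two experiments use the same substance.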

Yesterday Juan Collar gave a talk at the April Meeting of the APS, where he revealed something about CoGeNT’s latest findings. (I don’t think there’s a paper yet, but it’s supposed to come very soon, and they are promising to share their data with anyone who asks.) Now, unlike for their earlier results, they are explicitly looking for annual modulation. And … they see it. Maybe. Well, not really enough to take it seriously, but enough to be intrigued. Or, in science-speak: it’s a 2.8 sigma result. It doesn’t seem to have hit the news very hard, but there are writeups by Valerie Jamieson and David Harris. The CoGeNT folks have 442 days of data, with a rate of about three events per day.

Ordinarily, a tasteful physicist would claim that a 2.8 sigma result doesn’t even rise to the level of “intriguing”; you need three sigma to count as “evidence,” and five sigma for “discovery,” by the accepted standards of the field. The reason this is even blogworthy (a low bar indeed) is that it’s the first attempt to check DAMA by looking for an annual modulation signal, and the result matches the phase of DAMA’s oscillation, and is claimed to be consistent with its amplitude (the experiments use different materials, so it’s hard to do a direct comparison). Also, of course, because the team was looking to bury DAMA, not to praise it: “We tried like everyone else to shut down DAMA, but what happened was slightly different.” On the other hand, what you would need to explain this purported signal is at first glance still very much incompatible with XENON’s limits.

In the end: probably still nothing to get too excited about. But at least it will keep the pot boiling a while longer. Don’t fear; the experiments are getting better and better, and temporary confusions eventually evaporate. Or are swept away by the dark matter wind.


Dark Matters

Jorge Cham, creator of the celebrated PhD Comics, sits down to talk with Daniel Whiteson and Jonathan Feng about dark matter (and visible matter!). But rather than releasing a dry and boring video of the encounter, he cleverly illustrates the whole conversation.

Dark Matters from PHD Comics on Vimeo.

I think it’s an exaggeration to say we have “no idea” about dark energy — physicists like to say this to impress upon people how weird DE is, but it gives the wrong impression because we actually do know something about it. But not much!


Does Time Run Faster When You’re Terrified?

Neuroscientists have all the fun. When we physicists think about the fundamental nature of time, it largely involves standing hopefully in front of a blackboard and writing the occasional equation, or at best sending clocks on strange journeys. All in the service of very good ideas, of course. But when I give talks about these wonderful ideas, I learn that what people care more about are down-to-earth questions about aging and memory. So not only do neuroscientists get to tackle those questions directly, but they do so by dropping people from tall buildings. How cool is that?

Dr. David Eagleman on the Discovery Channel

David Eagleman is an interesting guy, as a recent New Yorker profile reveals. Mild-mannered neuroscientist by day, in his spare time he manages to write fiction as well as iPad-based superbooks. But his research focuses on how the mind works, in particular how we perceive time.

I’ve written previously about how, as far as the brain is concerned, remembering the past is like imagining the future. Eagleman studies a different neurological feature of time: how we perceive it passing under a variety of different conditions. You might be familiar with the feeling that “time slows down” when you are frightened or in some extreme environment. The problem is, how to test this hypothesis? It’s hard to come up with experimental protocols that frighten the crap out of human subjects while remaining consistent with all sorts of bothersome regulations.

So Eagleman and collaborators did the obvious thing: they tied subjects very carefully into harnesses, and threw them from a very tall platform. The non-obvious thing is that they invented a gizmo that flashed numbers as they fell, so that they could determine whether the brain really did speed up (perceiving a larger number of subjective moments per objective second) during this period of fear.

Answer: no, not really. There is a perceptual effect that kicks in after the event, giving the subject the impression that time moved more slowly; but in fact they didn’t perceive any more moments than a non-terrified person would have. Still, incredibly interesting results; for example, when you’re afraid, the brain lays down memories differently than when you’re in a normal state.

Obviously, of course, these findings need to be replicated. If you’ll excuse me, I’m off to find some grad students and a tall building.


Avignon Day 4: Dark Matter

Yesterday’s talks were devoted to the idea of dark matter, which as you know is the hottest topic in cosmology these days, both theoretically and experimentally.

Eric Armengaud and Lars Bergstrom gave updates on the state of direct searches and indirect searches for dark matter, respectively. John March-Russell gave a theory talk about possible connections between dark matter and the baryon asymmetry. The density of dark matter and ordinary matter in the universe is the same, to within an order of magnitude, even though we usually think of them as arising from completely different mechanisms. That’s a coincidence that bugs some people, and the last couple of years have seen a boomlet of papers proposing models in which the two phenomena are actually connected. Tracy Slatyer gave an update on proposals for a new dark force coupled to dark matter, which could give rise to interesting signatures in both direct and indirect detection experiments.
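Roughly speaking (my gloss on the general idea, not a summary of any particular talk): if the dark matter carries a particle-antiparticle asymmetry comparable to the baryon asymmetry, its number density today is comparable to the baryon number density, and the observed ratio of energy densities then fixes its mass,

$$ \frac{\Omega_{\rm DM}}{\Omega_b} \simeq \frac{n_{\rm DM}\, m_{\rm DM}}{n_b\, m_p} \approx 5 \quad\Longrightarrow\quad m_{\rm DM} \approx 5\, m_p \approx 5\ \mathrm{GeV} \quad \text{if } n_{\rm DM} \approx n_b , $$

which is why these "asymmetric dark matter" models tend to favor few-GeV particles.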

This is science at its most intense. A big, looming mystery, a bounty of clever theoretical ideas, not nearly enough data to pinpoint the correct answer, but more than enough data to exclude or tightly constrain most of the ideas you might have. It wouldn’t be at all surprising if we finally discover the dark matter in the next few years; unfortunately, it wouldn’t really be surprising if it eluded detection for a very long time. If we knew the answers ahead of time, it wouldn’t be science (or nearly as much fun).

Today is our last day in Avignon, devoted to cosmic acceleration. My own talk later today is on “White and Dark Smokes in Cosmology.” (The title wasn’t my idea, but I couldn’t have done better, given the context.) It’s the last talk of the conference, so I’ll try to take a big-picture perspective and not sweat the technical details, but (following tradition) I will admit that it’s an excuse to talk about my own recent papers and ideas I think are interesting but haven’t written papers about. At least it should be short, which I understand is the primary criterion for a successful talk of this type.

Also, few people have strong feelings about non-gaussianities or neutrinos, but many people have strong feelings about reductionism. Quelle surprise!


Avignon Day 3: Reductionism

Every academic who attends conferences knows that the best parts are not the formal presentations, but the informal interactions in between. Roughly speaking, the perfect conference would consist of about 10% talks and 90% coffee breaks; an explanation for why the ratio is reversed for almost every real conference is left as an exercise for the reader.

Yesterday’s talks here in Avignon constituted a great overview of issues in cosmological structure formation. But my favorite part was the conversation at our table at the conference banquet, fueled by a pretty darn good Côtes du Rhône. After a long day of hardcore data-driven science, our attention wandered to deep issues about fundamental physics: is the entire history of the universe determined by the exact physical state at any one moment in time?

The answer, by the way, is "yes." At least I think so. This certainly would be the case in classical Newtonian physics, and it's also the case in the many-worlds interpretation of quantum mechanics, which is how we got onto the topic. In MWI, the entirety of dynamics is encapsulated in the Schrodinger equation, a first-order differential equation that uniquely determines the quantum state in the past and future from the state at the present time. If you believe that wave functions really collapse, determinism is obviously lost; prediction is necessarily probabilistic, and retrodiction is effectively impossible.
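To spell out why (a standard point, not anything new): the Schrodinger equation

$$ i\hbar\, \frac{\partial}{\partial t}\, |\psi(t)\rangle = \hat{H}\, |\psi(t)\rangle $$

is first order in time, so once the state $|\psi(t_0)\rangle$ is specified at any single moment, the Hamiltonian determines it uniquely at all earlier and later times.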

But there was a contingent of physicists at our table who were willing to believe in MWI, but nevertheless didn’t believe that the laws of microscopic quantum mechanics were sufficient to describe the evolution of the universe. They were taking an anti-reductionist line: complex systems like people and proteins and planets couldn’t be described simply by the Standard Model of particle physics applied to a large number of particles, but instead called for some sort of autonomous description appropriate at macroscopic scales.

No one denies that in practice we can never describe human beings as collections of electrons, protons, and neutrons obeying the Schrodinger equation. But many of us think that this is clearly an issue of practice vs. principle; the ability of our finite minds to collect the relevant data and solve the relevant equations shouldn’t be taken as evidence that the universe isn’t fully capable of doing so.

Yet, that is what they were arguing — that there was no useful sense in which something as complicated as a person could, even in principle, be described as a collection of elementary particles obeying the laws of microscopic physics. This is an extremely dramatic ontological claim, and I have almost no doubt whatsoever that it’s incorrect — but I have to admit that I can’t put my objections into a compact and persuasive form. I’m trying to rise above responding with a blank stare and “you can’t be serious.”

So, that’s a shortcoming on my part, and I need to clean up my act. Why shouldn’t we expect truly new laws of behavior at different scales? (Note: not just that we can’t derive the higher-level laws from the lower-level ones, but that the higher-level laws aren’t even necessarily consistent with the lower-level ones.) My best argument is simply that: (1) that’s an incredibly complicated and inelegant way to run a universe, and (2) there’s absolutely no evidence for it. (Either argument separately wouldn’t be that persuasive, but together they carry some weight.) Of course it’s difficult to describe people using Schrodinger’s equation, but that’s not evidence that our behavior is actually incompatible with a reductionist description. To believe otherwise you have to believe that somewhere along the progression from particles to atoms to molecules to proteins to cells to organisms, physical systems begin to violate the microscopic laws of physics. At what point is that supposed to happen? And what evidence is there supposed to be?

But I don’t think my incredulity will suffice to sway the opinion of anyone who is otherwise inclined, so I have to polish up the justification for my side of the argument. My banquet table was full of particle physicists and cosmologists — pretty much the most sympathetic audience for reductionism one can possibly imagine. If I can’t convince them, there’s not much hope for the rest of the world.


Avignon Day 2: Cosmological Neutrinos

By this point in my life, when I attend a large-ish conference like this one the chances are good that I’m older than the average participant. Certainly true here. It’s a great chance to hear energetic young people tackling the hard problems, and I certainly have the feeling that the field is in very good hands. It’s also a good reminder that we old people need to resist the temptation to fall into a rut, churning out tiny variations on the research we’ve been doing for years now. It’s easy to get left behind!

Still, it's also nice to hear a talk on a perennial topic, especially when you hear something you didn't know. Yvonne Wong gave a very nice talk on "hot relics" — particles that were moving close to the speed of light in the early universe. (They may have slowed down by now, or maybe not.) Neutrinos, of course, are the classic example here; they are known to exist, and were certainly relativistic at early times. If the neutrinos have masses of order 10 electron volts, they would contribute enough density to be the dark matter. But that doesn't quite work in the real world; "hot dark matter" tends to wipe out structure on small scales, in a way that is dramatically incompatible with the world we actually observe. Also, ground-based measurements point to neutrino masses less than 0.1 electron volt — not for sure, since what we directly measure are the differences in the squares of the masses of the different kinds of neutrinos, rather than the masses themselves, but that seems to be the most comfortable possibility.
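For reference, the standard relation between neutrino masses and their contribution to the cosmological density (a textbook formula, not something derived in the talk) is

$$ \Omega_\nu h^2 \simeq \frac{\sum m_\nu}{93\ \mathrm{eV}} , $$

so a total mass of order 10 eV gives $\Omega_\nu h^2 \sim 0.1$ — roughly the measured dark matter density — while sub-eV masses contribute only a small fraction of it.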

Of course, we know about three kinds of neutrinos (associated with electrons, muons, and taus), but there could be more. So it’s fun to use cosmology to see if we can constrain that possibility. An extra neutrino species, even if it were very light, would slightly affect the expansion rate of the early universe, which works to damp structure on small scales. This is something you can look for in the cosmic microwave background, and the WMAP team has diligently been doing so. Interestingly — the best fit is for four neutrinos, not for three! Here’s a plot from Komatsu et al.’s analysis of the WMAP seven-year data, showing the likelihood as a function of the effective number of neutrino species. (“Effective” because a massive neutrino counts a little less than a massless one.)
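Here "effective number" refers to the standard parametrization of the total radiation density in terms of the photon density (again a textbook definition, not specific to WMAP),

$$ \rho_{\rm rad} = \rho_\gamma \left[ 1 + \frac{7}{8} \left( \frac{4}{11} \right)^{4/3} N_{\rm eff} \right] , $$

so anything contributing extra relativistic energy density shows up as $N_{\rm eff}$ larger than the Standard Model value of about 3.05, whether or not it is literally a neutrino.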

Now, maybe this isn’t worth getting too excited about. There’s a nice discussion of this possibility in a recent paper by Zhen Hou, Ryan Keisler, Lloyd Knox, Marius Millea, and Christian Reichardt. I’m not sure how a new neutrino could affect the CMB in this way without being ruled out by primordial nucleosynthesis, but I haven’t looked at it carefully. Regardless, it’s best not to just trust any one measurement, but do every measurement we can think of and make sure they are consistent. Certainly something worth keeping an eye on as CMB measurements improve.


Avignon Day 1: Calculating Non-Gaussianities

Greetings from Avignon, where I’m attending a conference on “Progress on Old and New Themes” in cosmology. (Name chosen to create a clever acronym.) We’re gathering every day at the Popes’ Palace, or at least what was the Pope’s palace back in the days of the Babylonian Captivity.

This is one of those dawn-to-dusk conferences with no time off, so there won’t be much blogging. But if possible I’ll write in to report briefly on just one interesting idea that was discussed each day.

On the first day (yesterday, by now), my favorite talk was by Leonardo Senatore on the effective field theory of inflation. This idea goes back a couple of years to a paper by Clifford Cheung, Paolo Creminelli, Liam Fitzpatrick, Jared Kaplan, and Senatore; there’s a nice technical-level post by Jacques Distler that explains some of the basic ideas. An effective field theory is a way of using symmetries to sum up the effects of many unknown high-energy effects in a relatively simple low-energy description. The classic example is chiral perturbation theory, which replaces the quarks and gluons of quantum chromodynamics with the pions and nucleons of the low-energy world.

In the effective field theory of inflation, you try to characterize the behavior of inflationary perturbations in as general a way as possible. It’s tricky, because you are in a time-dependent background with a preferred (non-Lorentz-invariant) frame provided by the expanding universe. But it can be done, and Leonardo did a great job of explaining the virtues of the approach. In particular, it provides a very nice way of calculating non-gaussianities. …
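Schematically (as I recall the construction from the Cheung et al. paper — check there for the precise form), the unitary-gauge action is an expansion in perturbations of $g^{00}$ around the inflating background:

$$ S = \int d^4x \sqrt{-g}\, \left[ \tfrac{1}{2} M_{\rm Pl}^2 R + M_{\rm Pl}^2 \dot{H}\, g^{00} - M_{\rm Pl}^2 \left( 3H^2 + \dot{H} \right) + \tfrac{1}{2!} M_2^4 \left( \delta g^{00} \right)^2 + \tfrac{1}{3!} M_3^4 \left( \delta g^{00} \right)^3 + \cdots \right] , $$

where the first three terms are fixed by the background expansion history, and the coefficients of the higher-order terms parametrize the model dependence — including the size and shape of the non-gaussianities.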


No Dark Matter Seen by XENON

Here in the Era of (Attempted) Dark Matter Detection, new results just keep coming in. Some are tantalizing, some simply deflating. Count this one in the latter camp.

The XENON100 experiment is a detector underneath the Gran Sasso mountain in Italy (NYT article). It's a very promising experiment, and they've just released results from their most recent run. Unlike some other recent announcements, this one is pretty straightforward: they don't see anything.

Here we see the usual 2-dimensional dark matter parameter space: mass of the particle is along the horizontal axis, while its cross-section with ordinary matter is along the vertical axis. Anything above the blue lines is now excluded. This improves upon previous experimental limits, and calls into question the possible claimed detections from DAMA and CoGeNT. (You can try to invent models that fit these experiments while not giving any signal at XENON, but only at the cost of invoking theoretical imagination.) See Résonaances or Tommaso Dorigo for more details.

No need to hit the panic button yet — there’s plenty of parameter space yet to be explored. That grey blob in the bottom right is a set of predictions from a restricted class of supersymmetric models (taking into account recent LHC limits). So it’s not like we’re finished yet. But it is too bad. This run of XENON had a realistic shot of actually finding the dark matter. It could be harder to detect than we had hoped, or it could very well be something with an extremely small cross-section, like an axion. The universe decides what’s out there, we just have to dig in and look for it.
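As a concrete illustration of how one reads such a plot (with made-up numbers standing in for the published curve), you can interpolate the limit in log-log space and ask whether a given candidate point in the (mass, cross-section) plane lies above it:

```python
# Illustrative sketch of reading a WIMP exclusion plot. The limit curve
# below is invented for demonstration; use the experiment's published
# values for any real comparison.
import numpy as np

limit_mass = np.array([10, 50, 100, 500, 1000])             # GeV
limit_xsec = np.array([1e-43, 7e-45, 1e-44, 4e-44, 8e-44])  # cm^2

def is_excluded(mass_gev, xsec_cm2):
    """True if the point lies above the interpolated limit curve."""
    log_limit = np.interp(np.log10(mass_gev),
                          np.log10(limit_mass),
                          np.log10(limit_xsec))
    return np.log10(xsec_cm2) > log_limit

print(is_excluded(50, 1e-44))   # True: above the curve, excluded
print(is_excluded(50, 1e-45))   # False: below the curve, still allowed
```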


Guest Post: Jim Kakalios on the Quantum Mechanics of Source Code

Jim Kakalios of the University of Minnesota has achieved internet demi-fame — he has a YouTube video with over a million and a half views. It's on the science of Watchmen, the movie based on Alan Moore's graphic novel. Jim got that sweet gig because he wrote a great book called The Physics of Superheroes — what better credentials could you ask for?

More recently Jim has written another book, The Amazing Story of Quantum Mechanics. But even without superheroes in the title, everything Jim thinks about ends up being relevant to movies before too long. The new movie Source Code features a twist at the end that involves — you guessed it — quantum mechanics. Jim has applied his physicist super-powers to unraveling what it all means, and was kind enough to share his thoughts with us in this guest post.

——————————————————————-

There is an interesting discussion taking place on the internets concerning the ending of the newly released film SOURCE CODE, which suggests that the film concludes with a paradox. I believe that any such paradox can be resolved – with Physics!

This entire post is one big honkin’ SPOILER, so if you want to read about the final twist ending of a film without having seen said film – by all means, read on, MacDuff!

In SOURCE CODE, Jake Gyllenhaal plays US helicopter pilot Colter Stevens, whose consciousness is inserted into another man's body (Sean Fentress, a school teacher in Chicago) through a procedure that requires a miracle exception from the laws of nature (involving quantum mechanics and "parabolic calculus" – by the way, there is no such thing as parabolic calculus). Thanks to some technobabble (or as Cubert on Futurama would describe it – weapons-grade bolognium) Colter's mind can only enter Sean's body in the last eight minutes of Sean's life. As Sean is sitting on a city-bound Chicago commuter train, on which a bomb will explode at 7:58 AM, killing everyone aboard, the goal is for Colter to ascertain who planted the bomb. He cannot stop it from exploding, he is told, because that has already happened. He cannot affect the past, but he can bring information obtained in the past back to his present time. Learning the identity of the bomber would enable the authorities to prevent the detonation of a threatened second "dirty atomic" bomb in downtown Chicago.

While the above can be discerned from the movie trailer, what I am going to discuss next involves the actual ending of the film, and if you do not want this ending spoiled, you should stop reading now. …
