General relativity, Einstein’s theory of gravity and spacetime, has been pretty successful over the years. It’s passed numerous tests in the Solar System, scored a Nobel-worthy victory with the binary pulsar, and gets the right answer even when extrapolated all the way back to the first second after the Big Bang. But no scientific theory is sacred. Even though GR is both aesthetically compelling and an unquestioned empirical success, it’s our job as scientists to keep probing it in different ways. Especially when it comes to astrophysics, where we need dark matter and dark energy to explain what we see, it makes sense to put Einstein to the most stringent tests we can devise.
So here is a new such test, courtesy of Rachel Bean of Cornell. She combines a suite of cosmological data, especially measurements of weak gravitational lensing from the Hubble Space Telescope, to see whether GR correctly describes the behavior of large-scale structure in the universe. And the surprising thing is — it doesn’t. At the 98% confidence level, Rachel finds that general relativity is inconsistent with the data. I’m not sure why we haven’t been reading about this in the science media or even on other blogs — it’s certainly a newsworthy result. Admittedly, the smart money still says that there is some tricky thing that hasn’t yet been noticed, and that Einstein will eventually emerge the victor; but this is serious work by a respected cosmologist. Either the result is wrong, and we should be working hard to find out why, or it’s right, and we’re on the cusp of a revolution.
Here is the abstract:
A weak lensing detection of a deviation from General Relativity on cosmic scales
Authors: Rachel Bean
Abstract: We consider evidence for deviations from General Relativity (GR) in the growth of large scale structure, using two parameters, γ and η, to quantify the modification. We consider the Integrated Sachs-Wolfe effect (ISW) in the WMAP Cosmic Microwave Background data, the cross-correlation between the ISW and galaxy distributions from 2MASS and SDSS surveys, and the weak lensing shear field from the Hubble Space Telescope’s COSMOS survey along with measurements of the cosmic expansion history. We find current data, driven by the COSMOS weak lensing measurements, disfavors GR on cosmic scales, preferring η < 1 at 1 < z < 2 at the 98% significance level.
Let’s see if we can’t unpack the basic idea. The real problem in testing GR in cosmology is that any particular kind of spacetime curvature can be a solution to Einstein’s theory — all you need are the right sources of matter and energy. So in order to do a real test, you need to have some confidence that you understand what is creating the gravitational field — in the Solar System it’s the Sun and planets, in the binary pulsar it’s two neutron stars, and in the early universe it’s radiation. For large-scale structure things are a bit less clear — there’s ordinary matter, and dark matter, and of course dark energy.
Nevertheless, even though there are some things we don’t know about dark matter and dark energy, there are some things we think we do know. One of those things is that they don’t create any “anisotropic stress” — basically, a force that pulls different sides of things in different directions. Given that extremely reasonable assumption, GR makes a powerful prediction: there is a certain amount of curvature associated with space, and a certain amount of curvature associated with time, and those two things should be equal. (The space-space and time-time potentials φ and ψ of Newtonian gauge, for you experts.) The curvature of space tells you how meter sticks are distorted relative to each other as they move from place to place, while the curvature of time tells you how clocks at different locations seem to run at different rates. The prediction that they are equal is testable: you can try to measure both forms of curvature and divide one by the other. The parameter η in the abstract is the ratio of the space curvature to the time curvature; if GR is right, the answer should be one.
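For readers who want the notation spelled out, here is a minimal sketch of the standard convention (my gloss on the Newtonian-gauge potentials mentioned above, not an excerpt from the paper):

```latex
% Perturbed FRW metric in Newtonian (longitudinal) gauge, with c = 1:
\[
  ds^2 = -(1 + 2\psi)\,dt^2 + a^2(t)\,(1 - 2\phi)\,\delta_{ij}\,dx^i\,dx^j
\]
% \psi : time-time potential (the "curvature of time")
% \phi : space-space potential (the "curvature of space")
% GR with no anisotropic stress predicts \phi = \psi, i.e.
\[
  \eta \equiv \frac{\phi}{\psi} = 1 .
\]
```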
There is a straightforward way, in principle, to measure these two types of curvature. A slowly-moving object (like a planet moving around the Sun) is influenced by the curvature of time, but not by the curvature of space. (That sounds backwards, but keep in mind that “slowly-moving” is equivalent to “moves more through time than through space,” so the curvature of time is more important.) But light, which moves as fast as anything can, is pushed around equally by the two types of curvature. So all you have to do is, for example, compare the gravitational field felt by slowly-moving objects to that felt by a passing light ray. GR predicts that they should, in a well-defined sense, be the same.
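Schematically, in the same notation (a sketch of the standard weak-field results, not the paper’s exact equations):

```latex
% A slowly-moving body obeys the Newtonian limit of the geodesic equation,
% which involves only the time-time potential:
\[
  \ddot{\vec{x}} \simeq -\vec{\nabla}\psi
\]
% A light ray is deflected by the sum of the two potentials:
\[
  \ddot{\vec{x}}_{\perp} \propto -\vec{\nabla}_{\perp}\,(\phi + \psi)
\]
% Comparing dynamical masses (sourced by \psi) with lensing masses
% (sourced by \phi + \psi) therefore tests whether \phi = \psi.
```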
We’ve done this in the Solar System, of course, and everything is fine. But it’s always possible that some deviation from Einstein shows up at much larger distance and weaker gravitational fields than we have access to in our local neighborhood. That’s basically what Rachel’s paper does, considering different measures of the statistical properties of large-scale structure and comparing them to the predictions of a phenomenological model of the gravitational field. A crucial role is played by gravitational lensing, since that’s where the deflection of light comes in.
And here is the answer: the likelihood, given the data, for different values of 1/η, the ratio of the time curvature to the space curvature. The GR prediction is at 1, but the data show a pronounced peak between 3 and 4, and strongly disfavor the GR prediction. If both the data and the analysis are okay, and GR is in fact correct, there would be less than a 2% chance of obtaining this result. Not as good as 0.01%, but still pretty good.
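To put the quoted percentages on a common scale, here is a minimal sketch (my illustration, not part of the paper’s analysis) of converting a two-sided confidence level into an equivalent Gaussian significance:

```python
# A minimal sketch: convert a two-sided confidence level into the
# equivalent number of Gaussian standard deviations.
from scipy.stats import norm

def sigma_equivalent(confidence):
    # The probability outside +/- z sigma is (1 - confidence),
    # split evenly between the two tails.
    return norm.ppf(1 - (1 - confidence) / 2)

print(f"{sigma_equivalent(0.98):.2f} sigma")    # 98% -> about 2.3 sigma
print(f"{sigma_equivalent(0.9999):.2f} sigma")  # 99.99% -> about 3.9 sigma
```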
So what are we supposed to make of this? Don’t get me wrong: I’m not ready to bet against Einstein, at least not yet. Mostly my pro-Einstein prejudice comes from long experience trying to come up with alternative theories of gravity that are simultaneously logically sensible and observationally consistent; it’s just very hard to do. But more generally, good scientists naturally have a strong suspicion of any claimed observational result that purports to overthrow an extremely well-established theory. That’s just common sense, not hidebound establishmentarianism; most such anomalies eventually go away.
But that doesn’t mean that you ignore anomalies; you just treat them with caution. In this case, there could be an unrecognized systematic error in the data set, or a subtle error in the analysis. Given 1:1 odds, that’s certainly where the smart money would bet right now. It’s also possible that the fault lies with dark matter or dark energy, not with gravity — but it’s hard to see how that could work, to be honest. Happily, it’s an empirical question — more data and more analysis will either reinforce the result, or make it go away. After all, some anomalies turn out to be frighteningly real. This one is worth taking seriously, to say the least.
To Peter Fred:
I share the opinion of several of my colleagues that dark energy and dark matter are the aethers of the 21st century. We need a new theory eliminating both from physics.
We already have a theory that explains the “rotation curves” with an accuracy not matched by dark matter theories:
http://www.astro.umd.edu/~ssm/mond/fit_compare.html
Moreover, the theory is predictive, and all of its predictions have been confirmed:
http://www.astro.umd.edu/~ssm/mond/mondvsDM.html
http://www.astro.umd.edu/~ssm/mond/mondpred.html
http://www.astro.umd.edu/~ssm/mond/CMB1.html
It seems that the theory continues to provide satisfactory predictions:
http://arxiv.org/abs/0909.5184
The goal here is to add relativistic corrections to this theory. One popular approach is reviewed here:
http://arxiv.org/abs/0901.1524
I advance an alternative model to TeVeS in the above-cited report (CSR:2009), with the advantage that it can also compute quantities that cannot be obtained from any other available model or theory: we can obtain a_0 and its relation to the cosmological a_H, the correct order of magnitude of the cosmological constant, the cluster mass limit…
Great, wrote up a nice fat comment then stupidly clicked a link to read something so I have to write it aaaaaaaaalllllll over again. Double the snark now!
Article comments:
The paper starts off by imposing a Newtonian perturbation on top of the usual FRW ansatz. I find the choice of eta to be, overall, quite interesting. The requirement that eta equal one falls out of the boundary conditions that define the weak-field limit, or in this case, perturbation theory on top of FRW.
The quantity being considered is not the “curvature” of space-time (calling it that is horrifically misleading), but rather the ratio of two potentials used to define perturbation theory on top of the FRW manifold typically used to model the large-scale universe.
I think the best way of understanding the quantity being considered is as a measure of how well the boundary conditions we choose to apply to perturbation theory match observation, which is a whole lot LESS sexy than ‘a new challenge to Einstein’. {Sorry Sean :D}
What I’m having difficulty with is getting an exact handle on what is being _measured_. We have the ISW effect, which is sensitive to the time components of the metric à la Shapiro delay, as it quantifies the little bit of gravitational redshift CMB photons pick up as they traverse anisotropies on their way to Earth. But the ISW effect, to my knowledge, isn’t measured on a per-galaxy basis. And the two individual surveys were simply, as far as I know, mass surveys of the sky in certain regions, looking for redshifts of objects.
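(For reference, the standard ISW expression in the φ, ψ notation above; a sketch assuming c = 1, with the integral running along the photon’s path:)

```latex
% Integrated Sachs-Wolfe temperature shift along the line of sight:
\[
  \left. \frac{\Delta T}{T} \right|_{\rm ISW}
    = \int \left( \dot{\phi} + \dot{\psi} \right) dt
\]
% Nonzero only while the potentials evolve, e.g. once dark energy
% (or modified growth) makes \phi and \psi decay.
```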
How the hell this translates to a serious test of GR as claimed is something I’ve been scratching my head over for a while now. I wonder how circular the data sets are, with the essential cosmic parameters derived from WMAP/2dF being fed back into a test of WMAP/2dF.
I also wonder how much of a coincidence it is that the low-z objects end up favoring GR more than the higher-z objects. I further wonder whether any effort was made to distinguish between freely traveling objects and gravitationally bound objects. Any test of the expansion theory is going to go to shit if you consider objects in the Local Group, which is a {loosely} bound system.
Basically this doesn’t tell us jack beyond ‘WE REQUIRE MORE VESPENE GAS’, I mean , “DATA”.
Peter Fred: “What am I missing? Why am I alone in considering the flat rotation curves of galaxies as representing a serious anomaly? If dark matter is detected by some means other than gravitationally, then the anomaly of flat rotation curves should rightfully ‘go away’.”
What you are missing are gravitational {macro,micro} lensing observations that directly couple to the {mass,energy} density present in a given volume of space. People who argue that dark matter doesn’t exist have to find creative excuses for why dark matter behaves as predicted in that respect.
Juan: Thanks for being a cosmic jackass by importing an argument you lost onto a medium that has people who are capable of reading for comprehension. Readers interested in the state of the art on the galactic center should read through the overall thread. As I was reduced to explaining things using small words and pantomime to a child with a learning disability, the overall argument should be simple to follow for people who can read for comprehension.
As for this stupid goddamn argument, this nonsense was dealt with 6 months ago.
http://groups.google.com/group/sci.physics.research/msg/2c6d68195d99b0c8?dmode=source
Taganov’s argument is, quite frankly, as full of shit as you are. Why is he, in 2009, citing literature from 1991 when there has been a factor of 3 reduction in error bars since then?
Could it be because his argument falls to pieces if one reads, say, arXiv:astro-ph/0407149v1? I think so: it becomes remarkably hard to keep claiming that the 0.6% figure is a real excess rather than an _ERROR BAR_ (an exquisite failure in comprehension of basic error analysis, in which you are complicit by repeating the claim) when an article published 13 years later shows that the error bars are reduced to 0.13% +/- 0.21%.
Class, since statistics seems to be under discussion to some degree, how many standard deviations away is a factor of 3 difference in a measurement when one realizes that an error bar represents 1 standard deviation centered upon the measurement? I’ll leave it as an exercise.
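(A sketch of the arithmetic being asked for, assuming Gaussian errors and the figures quoted above; one possible reading, not a definitive answer to the exercise:)

```latex
% Tension between a claimed value x_1 and a measurement x_2 with a
% 1-sigma error bar \sigma:
\[
  z = \frac{\lvert x_1 - x_2 \rvert}{\sigma}
\]
% For example, a claimed 0.6% excess against 0.13% +/- 0.21% gives
% z = |0.60 - 0.13| / 0.21, roughly 2.2 standard deviations.
```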
And Juan, an extra special thank you for once again citing your many-year unpublished draft in an argument, something you have explicitly denied doing. Hurry the fuck up and publish it, as Sean Carroll will probably be highly interested in your opinions of his work, in addition to your complete butchering of the subject. If for no other reason than because you invoke his name all the time.
Hey Sean, since I know you’ll see this, I have a question. Have you heard about Juan’s rather interesting usage of your online lecture notes? It’s fuuuun to read.
To Eric Gisse,
You are repeating the same unfair accusations against Taganov, the same incorrect factors and misguided error analysis, etc. that were already answered in the sci.physics.research links in my message above (before you were finally blocked by the moderators of that thread, who rejected your further posts).
The same moderators recently approved a post about the report CSR:20092
http://groups.google.com/group/sci.physics.research/browse_thread/thread/e27983b2fae018ed#
ignoring the ad hominems and vitriolic ‘evaluations’ of the report that you have been posting in recent days in several places, now including this blog.
I will only add that the analysis you misattribute to Taganov used Weisberg & Taylor (2002), not “literature from 1991” as you say.
Also, your arXiv:astro-ph/0407149v1 is the reference (Relativistic Binary Pulsar B1913+16: Thirty Years of Observations and Analysis, by Weisberg and Taylor) given in
http://www.canonicalscience.org/en/publicationzone/canonicalsciencereports/20092.html
as can be easily checked at the bottom of that page.
A discussion of binary pulsars is given on page 13 of the report, which also includes quotations from Weisberg and Taylor on the issue.
Taylor (2002) completely disagrees with Taganov’s claims, which might have something to do with why his works on the subject remain unpublished. Taylor (2002) and Taylor (2004) put the overall uncertainty in the change in period at around 0.2%. The number 0.2% is, according to my calculations, a lot *smaller* than 0.7%.
Now let us see if this test of reading for comprehension can be passed.
As for your report, NOBODY CAN READ IT. It is password protected. And expecting people to pay you money to substantiate your arguments is high order stupidity.
To Eric Gisse,
Since the unfair accusations against Taganov are the same, and since you remain confused about the ‘calculations’ in the same way, the corrections are evidently the same as those already given to you in the following spr links:
http://groups.google.com/group/sci.physics.research/msg/0bca9684bc5e0a3e
http://groups.google.com/group/sci.physics.research/msg/bee924193a0a5b68
http://groups.google.com/group/sci.physics.research/msg/b6c4269a42ddea97
http://groups.google.com/group/sci.physics.research/msg/b2384376b4c5b02c
(…)
People can read the entire thread and see that you also made unfair accusations against top journals and other people, including an expert in black holes whom you accused of ignoring the observations of recent years in order to promote obscure agendas. Nasty enough; it is good that the moderators blocked you!
I will not reply to you further on this issue.
My apologies to Sean Carroll and the rest of the readers for this episode with Eric!
Reading for comprehension is an obscure agenda?
I am just an interested layman, reading these posts to try better to understand present-day science. What I want to know is – who the hell is Eric Gisse, and what is his problem, exactly? Was he dropped on his head when a baby?
Crazy, but I have a theory that explains everything. To start, and to make things simple, I say Newton got it in reverse. Gravity is not a pulling action; it is a pushing action. Dark matter pushes everything together like a glue. Think of it like our atmosphere.
If you start extending this theory, it can bring you to the Big Bang and explain the common denominator of electricity, gravity, light, fire, heat, and cold. Furthermore, it can show that time is not what we think it is, but rather a state of matter, constantly changing. Yes, one could go back in time or forward, but it is really impossible, because time is always the “present” for everyone.
I can explain further.
The result is interesting, and it may be pertinent to mention that the value eta = 1/3 comes naturally from the 2002 version of Self Creation Cosmology,
i.e. when the Robertson parameters alpha = 1 and gamma = 1/3.
Though this would seem not to be the case for z < 1
http://arxiv.org/abs/gr-qc/0302088 (eq. 60).
Also 'A New Self Creation Cosmology', Astrophysics and Space Science 282, 4, pp 683-731.
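(For context: in the standard weak-field PPN metric the space and time potentials are related by the Robertson parameter γ, so η reduces to γ; a sketch assuming the usual normalization:)

```latex
% Weak-field PPN metric with Robertson parameter \gamma, c = 1:
\[
  ds^2 = -(1 + 2\psi)\,dt^2 + (1 - 2\gamma\psi)\,\delta_{ij}\,dx^i\,dx^j
  \qquad\Longrightarrow\qquad
  \eta \equiv \frac{\phi}{\psi} = \gamma
\]
% With \gamma = 1/3 this gives \eta = 1/3, i.e. 1/\eta = 3,
% consistent with the peak between 3 and 4 quoted in the post.
```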
Cosmic Energy-Mass Evolution In A Simple Understandable Format
Deciphering Life’s Regulatory Code
To : Robert P. Zinzen, EMBL Heidelberg
Re : “Deciphering the regulatory code”
A. From “EMBL scientists take new approach to predict gene expression”
http://www.embl.de/aboutus/communication_outreach/media_relations/2009/091104_Heidelberg/index.html
“What’s exciting for me is that this study shows that it is possible to predict when and where genes are expressed, which is a crucial first step towards understanding how regulatory networks drive development”
B. Are an organism’s behaviour, its reactions to its environments, “regulatory networks”?
The above statement by Furlong, translated to 22nd century comprehension, amounts to:
What’s exciting is that this study shows that it is possible to predict when and where organisms react to their environments, which is a crucial first step towards understanding how evolution proceeds.
C. Please consider the following suggestions on the origin and nature of life and organisms, and on the origin and nature of cosmic and life evolution:
– Genes, Earth’s primal organisms, and all their take-off organisms – Life in general – are but one of the cosmic forms of mass, of constrained energy formats.
– The on-going cosmic mass-to-energy reversion since the Big-Bang inflation is resisted by mass, this resistance being the archetype of selection for survival by all forms of mass, including life.
– The mode of response of genes, Earth’s primal organisms, to the cultural feedback signals reaching them from their upper-stratum take-off organisms is “replicate without change” or “replicate with change”. “Replicate with change” is selected in the case of proven augmented energy constrainment by the new generation, this being “better survival”. This mode of Life’s normal evolution is the mode of energy-mass evolution universally.
Suggesting for your consideration,
Dov Henis
(Comments From The 22nd Century)
Updated Life’s Manifest May 2009
http://www.the-scientist.com/community/posts/list/140/122.page#2321
Implications Of E=Total[m(1 + D)]
http://www.the-scientist.com/community/posts/list/180/122.page#3108