Science

The Arrow of Time in Scientific American

Greetings from Paris! Just checking in to do a bit of self-promotion, from which no blog-vacation could possibly keep me. I’ve written an article in this month’s Scientific American about the arrow of time and cosmology. It’s available for free online; the given title is “Does Time Run Backward in Other Universes?”, which wasn’t my choice, but these happenings are team events.

As a teaser, here is a timeline of the history of the universe according to the standard cosmology:

  • Space is empty, featuring nothing but a tiny amount of vacuum energy and an occasional long-wavelength particle formed via fluctuations of the quantum fields that suffuse space.
  • High-intensity radiation suddenly sweeps in from across the universe, in a spherical pattern focused on a point in space. When the radiation collects at that point, a “white hole” is formed.
  • The white hole gradually grows to billions of times the mass of the sun, through accretion of additional radiation of ever decreasing temperature.
  • Other white holes begin to approach from billions of light-years away. They form a homogeneous distribution, all slowly moving toward one another.
  • The white holes begin to lose mass by ejecting gas, dust and radiation into the surrounding environment.
  • The gas and dust occasionally implode to form stars, which spread themselves into galaxies surrounding the white holes.
  • Like the white holes before them, these stars receive inwardly directed radiation. They use the energy from this radiation to convert heavy elements into lighter ones.
  • Stars disperse into gas, which gradually smooths itself out through space; matter as a whole continues to move together and grow more dense.
  • The universe becomes ever hotter and denser, eventually contracting all the way to a big crunch.

Despite appearances, this really is just the standard cosmology, not some fairy tale. I just chose to tell it from the point of view of a time coordinate that is oriented in the opposite direction from the one we usually use. Given that the laws of physics are reversible, this choice is just as legitimate as the usual one; nevertheless, one must admit that the story told this way seems rather unlikely. So why does the universe evolve this way? That’s the big mystery, of course.


Weighty Spin

Remember E = mc²? It’s the one equation that you are allowed to include in your popular-physics book (unless you’re George Gamow, who couldn’t be stopped). Mark gave a nice explanation of why it is true some time back, and I babbled about it some time before that. For a famous equation, it tends to be a bit misunderstood. A profitable way to think about it is to divide both sides by the speed of light squared, giving us m = E/c², and take this as the definition of what we mean by mass. The mass of some object is just the energy it has in its rest frame — according to special relativity, the energy (not the mass!) will be larger if the object is moving with respect to us, so the mass of an object is essentially the energy intrinsic to its state, rather than that imparted by its motion. Energy is the primary concept, and mass is derived from it. Interestingly, the dark energy that makes up 70% of the energy of the universe doesn’t really have “mass” at all, since it’s not made up of objects (such as particles) that can have a rest frame — it’s a smooth field filling space.

All of which is to say that the mainstream media have let us down again. C. Claiborne Ray, writing in the New York Times, attempts to explain whether a spinning gyroscope weighs more than a stationary one, and answers “The weight stays the same; there is no known physical reason for any change.” Actually, there is! The spinning gyroscope has more energy than the non-spinning one. As a test, we can imagine extracting work from the spinning gyroscope — for example, by hooking it up to a generator — in ways that we couldn’t extract work from the stationary gyroscope. And since it has more energy, it has more mass. And the weight is just the acceleration due to gravity times the mass — so, as long as we weigh our spinning and non-spinning gyroscopes in the same gravitational field, the spinning one will indeed weigh more.

Admittedly, it’s a very tiny difference — the fractional increase in energy is of order (v/c)², where v is the speed of the spinning material and c is the speed of light, which is really tiny. Nothing you’re going to measure at home. (I’m guessing it’s never even been measured in any laboratory, but I don’t know for sure.) And the article is correct to emphasize that there is no difference in mass that depends on the direction of spin of the gyroscope — that would violate Lorentz invariance, which is something worth looking for in its own right, and would be a Nobel-worthy discovery for anyone who found it.
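To get a feeling for just how tiny, here is a back-of-the-envelope estimate in Python; the rotor mass, size, and spin rate are made-up numbers for a desktop gyroscope, not anything from the article:

```python
# Back-of-the-envelope: how much heavier is a spinning gyroscope?
# All numbers below are made-up values for a desktop gyroscope.

import math

c = 2.998e8                  # speed of light, m/s

mass = 0.1                   # rotor mass, kg (assumed)
radius = 0.02                # rotor radius, m (assumed)
omega = 2 * math.pi * 100    # spin rate: 100 revolutions/second, in rad/s

# Rotational kinetic energy of a uniform disk: E = (1/2) I omega^2,
# with moment of inertia I = (1/2) m r^2.
I = 0.5 * mass * radius**2
E_rot = 0.5 * I * omega**2

# By m = E/c^2, the extra mass (and hence weight) is E_rot / c^2.
delta_m = E_rot / c**2

print(f"rotational energy:  {E_rot:.2f} J")
print(f"extra mass:         {delta_m:.2e} kg")
print(f"fractional change:  {delta_m / mass:.2e}")
```

That comes out to a few parts in 10¹⁶ — consistent with my guess that nobody has bothered to measure it.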


Guest Post: Juan Collar on Dark Matter Detection

You may have heard some of the buzz about a new result concerning the direct detection of dark matter particles in an underground laboratory. The buzz originates from a new paper by the DAMA/LIBRA collaboration; David Harris links to powerpoint slides from Rita Bernabei, leader of the experiment, from her talk at a meeting in Venice.

The new experiment is an upgrade of a previous version of DAMA, which had already been on record as having recorded a statistically significant signal of the form you would expect from the collision of weakly interacting massive particles (WIMPs) with the detector. The experiment uses a challenging technique, in which the focus is not on eliminating all possible backgrounds so as to isolate the dark-matter signal, but on looking for the annual modulation in that signal that would presumably be caused by the Earth’s orbital motion through the cloud of dark matter in the Solar System: you expect more events when we are moving with a high velocity into the dark-matter wind. Other workers in the field have not been shy about expressing skepticism, but the DAMA team has stood their ground; as Jennifer notes in her report from the recent APS Meeting, the DAMA collaboration home page currently features a quote from Kipling: “If you can bear to hear the truth you’ve spoken/ twisted by knaves to make a trap for fools,/ … you’ll be a Man my son!”
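For concreteness, here is a minimal sketch of the kind of signal being discussed — an event rate with a small yearly wiggle peaking in early June, when the Earth’s orbital velocity adds to the Sun’s motion through the halo. The rate and amplitude below are illustrative placeholders, not DAMA’s numbers:

```python
import math

# Toy model of an annually modulated WIMP event rate:
#   R(t) = R0 + Rm * cos(2*pi*(t - t0) / T)
# R0 and Rm below are illustrative placeholders, not DAMA's values.

R0 = 1.0        # unmodulated rate, events/kg/day (assumed)
Rm = 0.02       # modulation amplitude, a few percent of R0 (assumed)
T = 365.25      # period: one year, in days
t0 = 152.5      # phase: ~June 2, when Earth's velocity adds to the Sun's

def rate(t_days):
    """Expected event rate on a given day of the year."""
    return R0 + Rm * math.cos(2 * math.pi * (t_days - t0) / T)

print(f"early June (max):     {rate(152.5):.3f} events/kg/day")
print(f"early December (min): {rate(335.0):.3f} events/kg/day")
```

The experimental question is whether a few-percent wiggle of exactly this period and phase can be cleanly separated from backgrounds that also vary with the seasons.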

To help provide some insight and context, we’ve solicited the help of a true expert in the field — Juan Collar of the University of Chicago. I got to know Juan back in my days as a Midwesterner, and a trip to his bustling underground experimental empire was always a highlight of anyone’s visit to the UofC physics department. You can hear him talk about his own work in this colloquium at Fermilab; he’s agreed to post for us about his views on the new DAMA result, and more general thoughts on what it takes to search for 25% of the universe. I promise you won’t be bored.

——————————————————————-

My dear friend Sean has me blogging: hey, I’ll try anything once. On the subject of the recent DAMA results no less, as per his request. I am normally a bit of a curmudgeon but… Sean, you really want the worst of me out there permanently on the internets, don’t you?

I’ll try to keep this to the point. A bard I am not, nor does the subject invite any poetry. I have therefore chosen brief eruptions of flatulence as the metric and style for this piece. The result of indigestion, you see. I’ll start with the most negative, so as to end up on a brighter note:

  • The modulation is undeniable by now. I don’t know of any colleagues who doubted these data were blatantly modulated already back in 2003, when “the lady” (DAMA) decided to keep mum for a while. However, to conclude from something this mundane that the experiment “confirms evidence of Dark Matter particles in the galactic halo with high confidence level” or that there is “an evidence for the presence of dark matter particles in the galactic halo at 8.2 sigma confidence level” is simply delusional. There is evidence for a modulation in the data at 8.2 sigma, stop. Compatible with what would be expected from some dark matter particles in some galactic halo models, full stop. Anything beyond this is wanting to believe, and it smears on the rest of us in the field. Of course, of course… there is no other observed process in nature that peaks in the summer and goes through a low in winter, so this must be dark matter, right? (Occam is turning in his grave, rusty razor still in hand. He is thinking of a remake of that opening scene in “Un chien andalou”, with help from this little lady. I am channeling him loud and clear).
  • Someone should take the DAMA folks aside for a beer, make them see the following. If one day soon we are all convinced that this effect was DM-induced (see below for what that will really take), they will be recognized for one of the greatest discoveries in the history of science, without them having to look desperate or foolish today. Or making the rest of us in the field look so, by association: thanks DAMA, for cheapening the level of our discourse to truly imbecilic levels. (Sean, if you edit this I will scratch the paint off your car. I may not write blogs, but I do read them: I know how to hurt you).
  • Deep breath. Having cleared the air some (or just made it toxic, whatever), it is not DAMA’s fault that there is a penury of signatures in this field of ours, laboratory searches for particle dark matter. The one possible exception to this is a detector with good recoil directionality and sufficient target mass to be truly competitive, but we don’t know of a good enough way to do this as of today (“good enough” folds in the price tag). People are still trying. The diurnal modulation in the DM signal that would be sensed by such a device is wickedly rich in features, extremely hard for nature to imitate with anything else. The annual modulation resides on the other side of this spectrum of complexity. It is the poor man’s smoking-gun to DM “evidence”. Inspected carefully, it is disappointingly feeble: different models of the halo can shift the phase of this modulation completely, turning expected maxima into minima and vice-versa, changing the expected amplitude as well. Add to this the fact that essentially every possible systematic effect able to pass for a “signal” can be yearly-modulated, for one reason or another. And that is just the ones we can presently think of, never mind the ones yet to be proposed. To grow convinced that we have observed dark matter in the lab we’ll require a number of entirely different techniques, using a variety of targets, all pointing at the same WIMP (mass, cross sections), with additional back-up information from accelerator experiments and from gamma-ray satellite observations (so-called indirect searches). All of those lines crossing at one point, so to speak. This I (for one) will call “evidence”. I know of no single existing or planned DM experiment, including those I participate in, that would be able to make anything close to a bulletproof claim on its own. My advice to any overambitious individuals looking for a quick kill is to look elsewhere in physics. WIMP hunting is not it, no matter how important the discovery of these particles might be.


Science and Unobservable Things

Today’s Bloggingheads dialogue features me and writer John Horgan — I will spare you a screen capture of our faces, but here is a good old-fashioned link.

John is the author of The End of Science, in which he argues that much of modern physics has entered an era of “ironic science,” where speculation about unobservable things (inflation, other universes, extra dimensions) has replaced the hard-nosed empiricism of an earlier era. Most of our discussion went over that same territory, focusing primarily on inflation but touching on other examples as well.

You can judge for yourself whether I was persuasive or not, but the case I tried to make was that attitudes along the lines of “that stuff you’re talking about can never be observed, so you’re not doing science, it’s just theology” are woefully simplistic, and simply don’t reflect the way that science works in the real world. Other branches of the wavefunction, or the state of the universe before the Big Bang, may by themselves be unobservable, but they are part of a larger picture that remains tied to what we see around us. (Inflation is a particularly inappropriate example to pick on; while it has by no means been established, and it is undeniably difficult to distinguish definitively between models, it keeps making predictions that are tested and come out correct — spatial flatness of the universe, density fluctuations larger than the Hubble radius, correlations between perturbations in matter and radiation, fluctuation amplitudes on different scales that are almost equal but not quite…)

If you are firmly convinced that talking about the multiverse and other unobservable things is deeply unscientific and a leading indicator of the Decline of the West, nothing I say will change your mind. In particular, you may judge that the question which inflation tries to answer — “Why was the early universe like that?” — is a priori unscientific, and we should just accept the universe as it is. That’s an intellectually consistent position that you are welcome to take. The good news is that the overwhelming majority of interesting science being done today remains closely connected to tangible phenomena just as it (usually!) has been through the history of modern science. But if you instead ask in good faith why sensible people would be led to hypothesize all of this unobservable superstructure, there are perfectly good answers to be had.

The most important point is that the underlying goal of science is not simply making predictions — it’s developing an understanding of the mechanisms underlying the operation of the natural world. This point is made very eloquently by David Deutsch in his book The Fabric of Reality. As I mention in the dialogue, Deutsch chooses this quote by Steven Weinberg as an exemplar of hard-boiled instrumentalism:

The important thing is to be able to make predictions about images on the astronomers’ photographic plates, frequencies of spectral lines, and so on, and it simply doesn’t matter whether we ascribe these predictions to the physical effects of gravitational fields on the motion of planets and photons or to a curvature of space and time.

That’s crazy, of course — the dynamics through which we derive those predictions matters enormously. (I suspect that Weinberg was trying to emphasize that there may be formulations of the same underlying theory that look different but are actually equivalent; then the distinction truly wouldn’t matter, but saying “the important thing is to make predictions” is going a bit too far.) Deutsch asks us to imagine an “oracle,” a black box which will correctly answer any well-posed empirical question we ask of it. So in principle the oracle can help us make any prediction we like — would that count as the ultimate end-all scientific theory? Of course not, as it would provide no understanding whatsoever. As Deutsch notes, it would be able to predict that a certain rocket-ship design would blow up on take-off, but offer no clue as to how we could fix it. The oracle would serve as a replacement for experiments, but not for theories. No scientist, armed with an infinite array of answers to specific questions but zero understanding of how they were obtained, would declare their work completed.

If making predictions were all that mattered, we would have stopped doing particle physics some time around the early 1980s. The problem with the Standard Model of particle physics, remember, is that (until we learned more about neutrino physics and dark matter) it kept making predictions that fit all of our experiments! We’ve been working very hard, and spending a lot of money, just to do experiments for which the Standard Model would be unable to make an accurate prediction. And we do so because we’re not satisfied with predicting the outcome of experiments; we want to understand the underlying mechanism, and the Standard Model (especially the breaking of electroweak symmetry) falls short on that score.

The next thing to understand is that all of these crazy speculations about multiverses and extra dimensions originate in the attempt to understand phenomena that we observe right here in the nearby world. Gravity and quantum mechanics both exist — very few people doubt that. And therefore, we want a theory that can encompass both of them. By a very explicit chain of reasoning — trying to understand perturbation theory, getting anomalies to cancel, etc. — we are led to superstrings in ten dimensions. And then we try to bring that theory back into contact with the observed world around us, compactifying those extra dimensions and trying to match onto particle physics and cosmology. The program may or may not work — it’s certainly hard, and we may ultimately decide that it’s just too hard, or find an idea that works just as well without all the extra-dimensional superstructure. Theories of what happened before the Big Bang are the same way; we’re not tossing out scenarios because we think it’s amusing, but because we are trying to understand features of the world we actually do observe, and that attempt drives us to these hypotheses.

Ultimately, of course, we do need to make contact with observation and experiment. But the final point to emphasize is that not every prediction of every theory needs to be testable; what needs to be testable is the framework as a whole. If we do manage to construct a theory that makes a set of specific and unambiguous testable predictions, and those predictions are tested and the theory comes through with flying colors, and that theory also predicts unambiguously that inflation happened or there are multiple universes or extra dimensions, I will be very happy to believe in the reality of those ideas. That happy situation does not seem to be around the corner — right now the data are offering us a few clues, on the basis of which we invent new hypotheses, and we have a long way to go before some of those hypotheses grow into frameworks which can be tested against data. If anyone is skeptical that this is likely to happen, that is certainly their prerogative, and they should feel fortunate that the overwhelming majority of contemporary science is not forced to work that way. Others, meanwhile, will remain interested in questions that do seem to call for this kind of bold speculation, and are willing to push the program forward for a while to see what happens. Keeping in mind, of course, that when Boltzmann was grounding the laws of thermodynamics using kinetic theory, most physicists scoffed at the notion of these “atoms” and rolled their eyes at the invocation of unobservable entities to explain everyday phenomena.

There is also a less rosy possibility, which may very well come to pass: that we develop more than one theory that fits all of the experimental data we know how to collect, such that they differ in specific predictions that are beyond our technological reach. That would, indeed, be too bad. But at the moment, we seem to be in little danger of this embarrassment of theoretical riches. We don’t even have one theory that reconciles gravity and quantum mechanics while matching cleanly onto our low-energy world, or a comprehensive model of the early universe that explains our initial conditions. If we actually do develop more than one, science will be faced with an interesting kind of existential dilemma that doesn’t have a lot of precedent in history. (Can anyone think of an example?) But I’m not losing sleep over this possibility; and in the meantime, I’ll keep trying to develop at least one such idea.


WMAP 5-Year Results Released

It doesn’t seem like all that long ago that we were enthusing about the results from the first three years of data from the Wilkinson Microwave Anisotropy Probe satellite. Now the team has put out an impressive series of papers discussing the results of the first five years of data. Here is what the CMB looks like, with the galaxy, foregrounds, monopole, and dipole subtracted, from Ned Wright’s Cosmology Tutorial:


WMAP 5-year sky

And here is one version of the angular power spectrum, taken from the Dunkley et al. paper. I like this one because it shows the individual points that get binned to create the spectrum you usually see.

WMAP 5-year power spectrum

The headline two years ago was “Cosmology Makes Sense.” (That was my headline, anyway — others were not quite as accurate.) This continues to be true — the biggest piece of news isn’t that the results have overturned any foundations, but that the concordance model with dark matter, dark energy, and ordinary matter continues to work. The WMAP folks have produced an elaborate cosmological parameters table that runs the numbers for different sets of assumptions (with and without spatial curvature, running spectral index, etc.), and for different sets of data (not just WMAP but also supernovae, lensing, etc.). Everything is basically consistent with a flat universe composed of 72% vacuum energy, 23% dark matter, and 5% ordinary matter. The perturbations are close to scale-free, but still seem to be a little larger on long wavelengths than shorter ones (0.014 < 1 − nₛ < 0.067 at 95% confidence). Probably the most fun result is that there is, for the first time, evidence from the CMB that neutrinos exist! Good to know.
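To unpack that last constraint: “close to scale-free” refers to a primordial power-law spectrum, and 1 − nₛ measures the tilt away from exact scale invariance. Here is a quick sketch of what the quoted bounds mean in practice; the pivot scale is the conventional WMAP choice, and the rest is just arithmetic:

```python
# "Close to scale-free" means a primordial spectrum of the power-law form
#   Delta^2(k) = A_s * (k / k0)^(n_s - 1),
# where n_s = 1 would be exactly scale-free. WMAP quotes a bound on 1 - n_s.

A_s = 1.0      # overall amplitude; arbitrary units for this illustration
k0 = 0.002     # pivot scale in 1/Mpc (the conventional WMAP choice)

def power(k, n_s):
    return A_s * (k / k0) ** (n_s - 1)

# How much more power do long wavelengths get, across two decades in scale?
for one_minus_ns in (0.014, 0.067):
    n_s = 1.0 - one_minus_ns
    ratio = power(0.0002, n_s) / power(0.02, n_s)  # large scale vs. small
    print(f"1 - n_s = {one_minus_ns}: {100 * (ratio - 1):.0f}% more power on large scales")
```

So the quoted bounds correspond to somewhere between roughly 7% and 36% extra power over two decades of wavelength — a small but definitely nonzero tilt.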

My personal favorite was the constraint in the Komatsu et al. paper on parity-violating birefringence that would rotate CMB polarization. I was in on the ground floor where birefringence is concerned, so I’m sentimentally attached to it. But it’s also a signature of some very natural quintessence models, so this helps constrain the physics of dark energy as well.

Congratulations to the WMAP team, who have done a great job in establishing some of the pillars of contemporary cosmology — it’s historic stuff.


Guest Post: Michelangelo D’Agostino on Particle Physics Fieldwork in Antarctica

Michelangelo is a grad student at Berkeley who had the fun opportunity to write a diary for the Economist, continuing through this week, about his adventures doing particle physics in Antarctica. I would say more, but he does a pretty good job himself!

———————————————————————–

First off, I’d like to thank Sean for giving me the chance to write this guest post. It’s not every day (in fact, this would be the first time) that I get to write something for a blog that I both read and enormously respect. This is an especially great opportunity since the scions of CV are graciously allowing me to do a bit of shameless self-promotion for a five-part journal, being published this week, that I got to write for the website of the Economist magazine.

Maybe I should back up and introduce myself. I’m currently a fifth year physics PhD student at UC Berkeley. My research is on the IceCube project, a neutrino physics experiment located at the South Pole. Basically, we’re building the world’s largest particle detector out of the polar icecap itself. Using hot water, we melt holes 2,500 m down into the ice and install very sensitive light detectors. This allows us to study the particle debris that results from collisions of high-energy neutrinos in the ice. Ultimately, we’re hoping to learn about the basic physics of neutrinos as well about the properties of some of the violent astrophysical objects that might produce them and send them hurtling through the universe towards our detector.

This means that I do what many physicists do. I sit in front of a computer, writing code and analyzing data. I do calculations and simulations. I drink coffee and talk and argue with colleagues. But it also means that I get to do something only a small subset of astrophysicists and physicists get to do. I get to travel to a really exotic location to do fieldwork.

I think this is an aspect of being a physicist that sometimes gets overlooked. It’s true that astronomers have always gone to mountaintops to build the best telescopes, and particle physicists have always traveled to underground accelerator facilities. However, fanning out to other locations to take advantage of particular natural features is something that has become increasingly important as we build bigger, deeper detectors to try to understand weak signals and/or rare and exotic phenomena. In recent years, physicists have been traveling to the vast Argentinean plains to understand the origins of the highest energy cosmic rays, particles that are constantly bombarding Earth. Folks who study the CMB and other long-wavelength radiation have been heading up to the high-altitude Atacama Desert (here, here, and here) and to the South Pole to take advantage of their thin, dry atmospheres. Selection and planning have been moving forward for a deep underground facility for doing basic neutrino and dark matter physics.

All this means that graduate students for years to come will have the exciting opportunity to go out into the field to do their work. While the research itself is exciting, traveling to these exotic locations brings us in contact with scientists from other fields doing all sorts of other great science. Those of us who get to go to Antarctica meet people on the cutting edge of climate and atmospheric research. Those working underground may encounter earth scientists or researchers studying life in extreme environments. All of which makes for rich and stimulating conversations and experiences.

This brings me back to the shameless self-promotion. When the Economist opportunity came up to share some of my experiences traveling, living, and doing research at the South Pole, I jumped at it. I’ve tried to squeeze in as much basic climate science and physics as I could, so if you’re interested, check it out here.


Telekinesis and Quantum Field Theory

In the aftermath of the dispiriting comments following last week’s post on the Parapsychological Association, it seems worth spelling out in detail the claim that parapsychological phenomena are inconsistent with the known laws of physics. The main point here is that, while there are certainly many things that modern science does not understand, there are also many things that it does understand, and those things simply do not allow for telekinesis, telepathy, etc. Which is not to say that we can prove those things aren’t real. We can’t, but that is a completely worthless statement, as science never proves anything; that’s simply not how science works. Rather, it accumulates empirical evidence for or against various hypotheses. If we can show that psychic phenomena are incompatible with the laws of physics we currently understand, then our task is to balance the relative plausibility of “some folks have fallen prey to sloppy research, unreliable testimony, confirmation bias, and wishful thinking” against “the laws of physics that have been tested by an enormous number of rigorous and high-precision experiments over the course of many years are plain wrong in some tangible macroscopic way, and nobody ever noticed.”

The crucial concept here is that, in the modern framework of fundamental physics, not only do we know certain things, but we have a very precise understanding of the limits of our reliable knowledge. We understand, in other words, that while surprises will undoubtedly arise (as scientists, that’s what we all hope for), there are certain classes of experiments that are guaranteed not to give exciting results — essentially because the same or equivalent experiments have already been performed.

A simple example is provided by Newton’s law of gravity, the famous inverse-square law. It’s a pretty successful law of physics, good enough to get astronauts to the Moon and back. But it’s certainly not absolutely true; in fact, we already know that it breaks down, due to corrections from general relativity. Nevertheless, there is a regime in which Newtonian gravity is an effective approximation, good at least to a well-defined accuracy. We can say with confidence that if you are interested in the force due to gravity between two objects separated by a certain distance, with certain masses, Newton’s theory gives the right answer to a certain precision. At large distances and high precisions, the domain of validity is formalized by the Parameterized Post-Newtonian formalism. There is a denumerable set of ways in which the motion of test particles can deviate from Newtonian gravity (as well as from general relativity), and we can tell you what the limits are on each of them. At small distances, the inverse-square behavior of the gravitational force law can certainly break down; but we can tell you exactly the scale above which it will not break down (about a tenth of a millimeter). We can also quantify how well this knowledge extends to different kinds of materials; we know very well that Newton’s law works for ordinary matter, but the precision for dark matter is understandably not nearly as good.
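To put a number on “a well-defined accuracy”: the leading corrections from general relativity are of order the dimensionless potential GM/(rc²). Here is a quick look at its size in a few familiar situations, using standard values of the constants:

```python
# The leading general-relativistic corrections to Newtonian gravity are
# of order the dimensionless potential GM/(r c^2). Its size in a few
# familiar situations, using standard values of the constants:

G = 6.674e-11    # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s

cases = {
    "Earth, at its surface":  (5.97e24, 6.37e6),
    "Sun, at Earth's orbit":  (1.99e30, 1.50e11),
    "Sun, at its surface":    (1.99e30, 6.96e8),
}

for name, (M, r) in cases.items():
    print(f"{name:>22}: GM/(r c^2) = {G * M / (r * c**2):.1e}")
```

Everywhere in the solar system the corrections are parts in a hundred million or smaller, which is why Newtonian gravity is so trustworthy in that regime.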

This knowledge has consequences. If we discover a new asteroid headed toward Earth, we can reliably use Newtonian gravity to predict its future orbit. From a rigorous point of view, someone could say “But how do you know that Newtonian gravity works in this particular case? It hasn’t been tested for that specific asteroid!” And that is true, because science never proves anything. But it’s not worth worrying about, and anyone making that suggestion would not be taken seriously.

As with asteroids, so with human beings. We are creatures of the universe, subject to the same laws of physics as everything else. As everyone knows, there are many things we don’t understand about biology and neuroscience, not to mention the ultimate laws of physics. But there are many things that we do understand, and only the most basic features of quantum field theory suffice to definitively rule out the idea that we can influence objects from a distance through the workings of pure thought.

The simplest example is telekinesis, the ability to remotely move an object using only psychic powers. For definiteness, let’s consider the power of spoon-bending, claimed not only by Uri Geller but by author and climate skeptic Michael Crichton.

What do the laws of physics have to say about spoon-bending? Below the fold, we go through the logic.


Aether Compactification

Even in an election year, physics marches on. Physics is forever.

In this case it’s a fun little paper by Heywood Tam (a grad student here at Caltech) and me, arXiv:0802.0521:

We propose a new way to hide large extra dimensions without invoking branes, based on Lorentz-violating tensor fields with expectation values along the extra directions. We investigate the case of a single vector “aether” field on a compact circle. In such a background, interactions of other fields with the aether can lead to modified dispersion relations, increasing the mass of the Kaluza-Klein excitations. The mass scale characterizing each Kaluza-Klein tower can be chosen independently for each species of scalar, fermion, or gauge boson. No small-scale deviations from the inverse square law for gravity are predicted, although light graviton modes may exist.

This harkens back to the idea of a vector field that violates Lorentz invariance (which Ted Jacobson and friends have dubbed “aether,” appropriately enough), and in particular a vector that picks out a preferred direction in space. I explored this possibility last year in a paper with Lotty Ackerman and Mark Wise, and Mark recently wrote a followup with Tim Dulaney and Moira Gresham. (Our paper was detailed in the “anatomy of a paper” series, 1 2 3.)

There is an obvious problem with the notion of a vector field that violates rotational invariance by picking out a preferred direction through all of space — we don’t see any evidence for it! Physics as we have thus far experienced it seems pretty darn rotationally invariant. In my paper with Lotty and Mark, we sidestepped this issue by imagining that the vector was important in the early universe, and subsequently decayed away.

But there’s another way to sidestep the issue, pretty obvious in retrospect: have the vector point in a direction we don’t see! Extra dimensions are of course a popular theoretical construct, and once you make that leap you can ask what would happen if an unseen extra dimension contained a constant vector field. That would leave good old four-dimensional Lorentz invariance completely unbothered, so it’s not immediately constrained by any well-known experimental bounds.

So, beyond being fun and not ruled out, is it good for anything? The answer is: quite possibly. Heywood and I calculated what the influence of such a vector would be on other fields that propagated in a single extra dimension. In good old-fashioned Kaluza-Klein theory, momentum in the extra dimension can only take on discrete values (it’s quantized, in other words), and each kind of field breaks into an infinite “tower” of particles of different masses. The separation between different mass levels is just the inverse of the size of the extra dimension in natural units. What’s that? You insist upon seeing the equation? Okay, if the original mass of the field is m and the size of the extra dimension is R, we have a series of masses indexed by n:

$$m_n^2 = m^2 + \left(\frac{\hbar n}{cR}\right)^2\,.$$

Here, ħ is Planck’s constant, c is the speed of light, and n is just a whole number that can be anything from 0 to infinity. So the effect of the compact fifth dimension is to give us an infinite set of four-dimensional particles, indexed by n, each with a different mass. Not a very big mass, unless the extra dimension is pretty small; separating the levels by about 1 electron volt requires a dimension that is about 1 micrometer across. We would certainly have noticed all those new particles unless the extra dimensions were considerably smaller than a Fermi (10⁻¹⁵ meters).

The interesting thing that Heywood and I discovered is that the effect of an aether field pointing in the extra dimension is to boost all of the mass levels in the Kaluza-Klein tower. There is a new set of coupling constants, αᵢ for every kind of particle i, that tells us how strongly that particle interacts with the aether. The mass formula is modified to read

$$m_n^2 = m^2 + \left(\alpha_i\,\frac{\hbar n}{cR}\right)^2\,.$$

So if αᵢ is huge, you could have a huge mass splitting even with an extra dimension that was pretty large. This gives a new way to hide extra dimensions — not just make them invisibly small (the old-school Kaluza-Klein method) or confine us to some thin brane (the new-school ’90s style), but to boost the effective masses associated with momentum in the new direction. And there is an obvious experimental test, if you were to find all of these new particles: unlike plain vanilla compactification, where the towers associated with each kind of field have the same mass splittings, here the splittings could be completely different for every kind of particle, just by choosing different αᵢ’s.

To be fair, this idea does not by itself suggest any reason why the extra dimensions should be large. To allow for a millimeter-sized dimension, the coupling αᵢ has to be at least 10¹⁵, which any particle physicist will tell you is an unnaturally big number. But the aether at least allows for the possibility, which I think is worth exploring. Who knows, some clever young graduate student out there might figure out how to use this idea to solve the hierarchy problem and the cosmological constant problem, then we would discover aetherized extra dimensions at the LHC, and everyone would become famous.
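As a sanity check on those scales, here is a small numerical sketch; the sizes and couplings plugged in are just the illustrative values from the discussion above:

```python
# Kaluza-Klein level spacing for a light field on a circle of size R is
# roughly alpha * hbar * c / R (alpha = 1 for vanilla compactification).

hbar_c = 1.9732e-7   # hbar * c in eV * meters

def kk_splitting_eV(R_meters, alpha=1.0):
    """Approximate splitting between adjacent KK levels, for m ~ 0."""
    return alpha * hbar_c / R_meters

# Vanilla case: an eV-scale splitting needs a (sub-)micrometer dimension.
print(f"{kk_splitting_eV(2e-7):.2f} eV for R = 0.2 micrometers")

# Aether-boosted case: a millimeter-sized dimension with alpha ~ 1e15
# pushes the splitting up to roughly the TeV scale.
print(f"{kk_splitting_eV(1e-3, alpha=1e15):.2e} eV for R = 1 mm, alpha = 1e15")
```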


Dunbar’s Number

I never knew this. (Via xkcd.) Wikipedia defines Dunbar’s number:

Dunbar’s number, which is very approximately 150, represents a theorized cognitive limit to the number of individuals with whom any one person can maintain stable social relationships, the kind of relationships that go with knowing who each person is and how each person relates socially to every other person. Group sizes larger than this generally require more restricted rules, laws, and enforced policies and regulations to maintain a stable cohesion. Dunbar’s number is a significant value in sociology and anthropology. It was proposed by British anthropologist Robin Dunbar, who theorized that “this limit is a direct function of relative neocortex size, and that this in turn limits group size … the limit imposed by neocortical processing capacity is simply on the number of individuals with whom a stable inter-personal relationship can be maintained.”

In the context of the impending Super Duper Tuesday showdown, I can’t help but think of this in terms of politicians. Various famous political figures are occasionally described as having uncanny abilities to connect quickly with a wide variety of people, remember faces, and convince casual acquaintances that they are your best friend. Perhaps their neocortices have the unusual ability to maintain relationships (or at least appear to) with far more than the conventional 150?


Boltzmann’s Universe

CV readers, ahead of the curve as usual, are well aware of the notion of Boltzmann’s Brains — see e.g. here, here, and even the original paper here. Now Dennis Overbye has brought the idea to the hoi polloi by way of the New York Times. It’s a good article, but I wanted to emphasize something Dennis says quite explicitly, but (from experience) I know that people tend to jump right past in their enthusiasm:

Nobody in the field believes that this is the way things really work, however.

The point about Boltzmann’s Brains is not that they are a fascinating prediction of an exciting new picture of the multiverse. On the contrary, the point is that they constitute a reductio ad absurdum that is meant to show the silliness of a certain kind of cosmology — one in which the low-entropy universe we see is a statistical fluctuation around an equilibrium state of maximal entropy. According to this argument, in such a universe you would see every kind of statistical fluctuation, and small fluctuations in entropy would be enormously more frequent than large fluctuations. Our universe is a very large fluctuation (see previous post!) but a single brain would only require a relatively small fluctuation. In the set of all such fluctuations, some brains would be embedded in universes like ours, but an enormously larger number would be all by themselves. This theory, therefore, predicts that a typical conscious observer is overwhelmingly likely to be such a brain. But we (or at least I, not sure about you) are not individual Boltzmann brains. So the prediction has been falsified, and that kind of theory is not true. (For arguments along these lines, see papers by Dyson, Kleban, and Susskind, or Albrecht and Sorbo.)
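The “enormously more frequent” step is doing all the work in that argument, so it is worth seeing how lopsided the counting is. In equilibrium, a fluctuation that decreases the entropy by ΔS occurs with relative frequency of order e^(−ΔS). The numbers below are illustrative stand-ins, not real estimates (any realistic ones are vastly more extreme); the point is only why we compare logarithms rather than the probabilities themselves:

```python
import math

# Toy version of the counting behind the Boltzmann-brain argument.
# In equilibrium, a fluctuation that lowers the entropy by dS happens
# with relative frequency ~ exp(-dS). The dS values below are made-up
# stand-ins, not real estimates -- the point is only the comparison.

dS_brain = 1e50       # entropy dip for one stray brain (illustrative)
dS_universe = 1e100   # entropy dip for a universe like ours (illustrative)

# exp(-dS) underflows any float, so compare the exponents directly:
# (lone-brain frequency) / (universe frequency) = exp(dS_universe - dS_brain)
log10_ratio = (dS_universe - dS_brain) / math.log(10)

print(f"lone brains outnumber universe-sized fluctuations by ~10^{log10_ratio:.2e}")
```

With any remotely sensible inputs, isolated brains win by an exponent that is itself astronomically large — hence the reductio.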

I tend to find this kind of argument fairly persuasive. But the bit about “a typical observer” does raise red flags. In fact, folks like Hartle and Srednicki have explicitly argued that the assumption of our own “typicality” is completely unwarranted. Imagine, they say, two theories of life in the universe, which are basically indistinguishable, except that in one theory there is no life on Jupiter and in the other theory the Jovian atmosphere is inhabited by six trillion intelligent floating Saganite organisms.
