Martian Colors

I’m back from the Beyond Belief II conference at the Salk Institute in La Jolla, which packed an extraordinary amount of intellectual stimulation into a few short days. Any conference where you wander into the opening reception, get drawn into a conversation about reductionism and meaning with Stuart Kauffman, Rebecca Goldstein, and Sir Harold Kroto, and end up closing down the bar, is bound to be a good one, and this did not disappoint. (The title notwithstanding, much of the conference had little to do with atheism or religion — the subtitle “Enlightenment 2.0” gave a better flavor.) The talks provided fodder for at least ten to twenty blog posts, of which I’ll probably get around to writing one or two.

One of the talks was by local neuroscientist V.S. Ramachandran, or “Rama” to his friends. (Like any good neuro person, his web page includes a fun collection of optical illusions.) He talked about his experiments with synesthesia, the phenomenon in which people see graphemes (e.g. numbers or letters) as associated with colors. I do that a little bit — five is certainly yellow, seven is red, and eight is blue — but it’s closer to a vague association than a vivid experience. Some people report very strong synesthetic reactions, and for a long time researchers have wondered whether the experience was mostly metaphorical or something stronger.

To test synesthesia, Rama and collaborators designed an experiment where they could measure the vividness of the colors associated with the numbers 2 and 5. They chose those because you can make them look almost identical, although reversed, by choosing a boxy font. Then they made up a picture (on the left) of mostly fives, with a few twos scattered among them, and asked people to pick out the twos. Most ordinary folks could do it in about twenty seconds.

But true synesthetes could do it immediately. That’s because to them, the twos popped out as a brightly colored triangle (on the right). This established beyond much doubt that synesthesia was “real,” and more particularly that it was a measurable phenomenon with real consequences.

This, in turn, strengthened the hypothesis that the origin of synesthesia was to be found in the structure of the brain. Indeed, it turns out that the region of the brain responsible for processing graphemes lies adjacent to the region responsible for processing colors.

Dark Matter: Still Existing

I love telling the stories of Neptune and Vulcan. Not the Roman gods, but the planets that were originally hypothesized to explain the mysterious motions of other planets. Neptune was proposed by Urbain Le Verrier in order to account for deviations from the predicted orbit of Uranus. After it was discovered, he tried to repeat the trick, suggesting a new inner planet, Vulcan, to account for the deviations of the orbit of Mercury. It didn’t work the second time; Einstein’s general relativity, not a new celestial body, was the ultimate explanation.

In other words, Neptune was dark matter, and it was eventually discovered. But for Mercury, the correct explanation was modified gravity.

We’re faced with the same choices today, with galaxies and clusters playing the role of the Solar System. Except that the question has basically been answered, by observations such as the Bullet Cluster. If you modify gravity, it’s fairly straightforward (although harder than you might guess, if you’re careful about it) to change the strength of gravity as a function of distance. So you can mock up “dark matter” by imagining that gravity at very large distances is just a bit stronger than Newton (or Einstein) would have predicted — as long as the hypothetical dark matter is in the same place as the ordinary matter is.

But it’s enormously more difficult to invent a theory of modified gravity in which the direction of the gravitational force points toward some place other than where the ordinary matter is. So the way to rule out the modified-gravity hypothesis is to find a system in which the dark matter and ordinary matter are located in separate places. If you see a gravitational force pointing at something other than the ordinary matter, dark matter remains the only reasonable explanation.

And that’s precisely what the Bullet Cluster gives you. The dark matter has been dynamically separated from the ordinary matter, and indeed you measure the gravitational force (using weak lensing) and find that it points toward the dark matter, not toward the ordinary matter. So, we had an interesting question — dark matter or modified gravity? — and now we know the answer: dark matter. You might also have modified gravity, but one’s interest begins to wane, and we move on to trying to figure out what the dark matter actually is.

Dark Matter Motivational Poster

But some people don’t want to give up. A recent paper by Brownstein and Moffat claims to fit the Bullet Cluster using modified gravity rather than dark matter. If that were right, and the theory were in some sense reasonable, it would be an interesting and newsworthy result. So, you might think, the job of any self-respecting cosmologist should be to work carefully through this paper (it’s full of equations) and figure out what’s going on. Right?

I’m not going to bother. The dark matter hypothesis provides a simple and elegant fit to the Bullet Cluster, and for that matter fits a huge variety of other data. That doesn’t mean that it’s been proven with metaphysical certainty; but it does mean that there is a tremendous presumption that it is on the right track. The Bullet Cluster (and for that matter the microwave background) behave just as they should if there is dark matter, and not at all as you would expect if gravity were modified. Any theory of modified gravity must have the feature that essentially all of its predictions are exactly what dark matter would predict. So if you want to convince anyone to read your long and complicated paper arguing in favor of modified gravity, you have a barrier to overcome. These folks aren’t crackpots, but they still face the challenge laid out in the alternative science respectability checklist: “Understand, and make a good-faith effort to confront, the fundamental objections to your claims within established science.” Tell me right up front exactly how your theory explains how a force can point somewhere other than in the direction of its source, and why your theory miraculously reproduces all of the predictions of the dark matter idea (which is, at heart, extraordinarily simple: there is some collisionless non-relativistic particle with a certain density).

And people just don’t do that. They want to believe in modified gravity, and are willing to jump through all sorts of hoops and bend into uncomfortable contortions to make it work. You might say that more mainstream people want to believe in dark matter, and are therefore just as prejudiced. But you’d be laboring under the handicap of being incorrect. Any of us would love to discover a modification of Einstein’s equations, and we talk about it all the time. As a personal preference, I think it would be immeasurably more interesting if cosmological dynamics could be explained by modifying gravity rather than inventing some dumb old particle.

But the data say otherwise. So most of us suck it up and get on with our lives. Don’t get me wrong: I’m happy that some people are continuing to work on a long-shot possibility such as replacing dark matter with modified gravity. But it’s really a long shot at this point. There is a tremendous presumption against it, and you would have to have a correspondingly tremendous theory to get people interested in the possibility. I don’t think it’s worth writing news stories about, in particular: it gives people who don’t have the background to know any better the idea that more or less everything is still up for grabs. But we do learn things and make progress, and at this point it’s completely respectable to say that we’ve learned that dark matter exists. Not what all of us were rooting for, but the universe is notoriously uninterested in adapting its behavior to conform to our wishes.

Rationality Revisited

On the subject of how someone with a physics background might approach economics, you might prefer intimidatingly-informed commentary to my unfettered-by-knowledge noodling. In that case, you should zip over to Cosma Shalizi’s blog, where he offers a thoroughly-hyperlinked meditation on the state of econophysics. Full of good stuff along the lines of:

So then: why oh why don’t we have better econophysics?

The first reason which occurs to me, now that I’m a dues-paying, card-carrying statistician, is that almost all econophysicists are theoretical physicists, and moreover statistical physicists. (I’m one myself, or at least was through my Ph.D.) Modern physics began, in the 17th century, by fusing mathematical theorizing and artisanal craft, but one result of our progress has been to impose a specialized division of labor, sharply separating theory and experiment; Fermi was probably the last physicist to be both a great theorist and a great experimenter. (Perhaps this is connected to his invention of Monte Carlo?) This means that it is very rare for a theoretical physicist to analyze actual empirical data (say, measurements of magnetic susceptibility), which is what the experimentalists do. Theorists instead deal with experimental results (say, that the susceptibility depends on temperature in such-and-such a way). In high energy physics, theorists are actually so remote from contact with experimentalists that a separate guild of interface specialists (“phenomenologists”) has arisen to mediate between them. As a natural consequence of this division of labor, theorists receive no instruction at all in data analysis, let alone statistical inference.

There is much more. Jim Cronin once loaned me some videotapes of old news shows from the 1940’s that featured interviews with Enrico Fermi. He was an amazing guy, the kind who would kill time on a free afternoon by coming up with an explanation for the origin of cosmic rays. It’s too bad that, in the popular or semi-popular imagination, his name doesn’t immediately pop up on the list of the demigods of 20th-century physics. You could make a solid case that he should be number two after Einstein.

Crooked Timber links to Cosma’s post, and also features a post by John Quiggin that follows up on mine. He notes that most of my suggestions are well-incorporated into economics, which is no surprise. The part that is judged interesting is the idea that social scientists would be well-served to distinguish between descriptive notions of what people do and prescriptive notions of what is considered “rational.” A blog at The Economist (the magazine they like to call a newspaper) makes a similar point, so maybe there is something there. Indeed, we are informed that this kind of reasoning keeps popping up despite the fact that Joseph Butler demolished it 300 years ago, so there must be something attractive about it. (Hey, David Hume demolished the argument from design before William Paley even popularized it, but you don’t see it fading away, do you?)

So What Have You Been Maximizing Lately?

A while back, Brad DeLong referred to Ezra Klein’s review of Tyler Cowen’s book Discover Your Inner Economist. (Which I own but haven’t yet read; if it’s as interesting as the blog, I’m sure it will be great.) The question involves rational action in the face of substantial mark-ups on the price of wine in nice restaurants:

I did once try to convince Bob Hall at a restaurant in Palo Alto not to order wine: the fact that the wine would cost four times retail would, I said, depress me and lower my utility. Even though I wasn’t paying for it, I would still feel as though I was being cheated, and as I drank the wine that would depress me more than the wine would please me.

He had two responses: (i) “You really are crazy.” (ii) “Think, instead, that it’s coming straight out of the Hoover Institution endowment, and order two bottles.”

He is crazy, of course — crazy like an economist. I left a searingly brilliant riposte in the comment section of the post, which mysteriously never appeared. He will probably claim it was a software glitch or that I hit “Preview” instead of hitting “Post,” but I know better. What are you afraid of, Brad DeLong!?

Economists have a certain way of looking at the world, in which (to simplify quite a bit) people act rationally to maximize their utility. That sort of talk pushes physicists’ buttons, because maximizing functions is something we do all the time. I’m not deeply familiar with economics in any sense; everything I know about the subject comes from reading blogs. Any social science is much harder than physics, in the sense that constructing quantitative models that usefully describe the behavior of realistic systems is made enormously difficult by the inherent nonlinearities of human interactions. (“Ignoring friction” is the basis of 98% of physics, but nearly impossible in social sciences.) But I can’t help speculating, in a completely uninformed way, how economists could improve their modeling of human behavior. Anyone who actually knows something about economics is welcome to chime in to explain why all this is crazy (very possible), or perfectly well-known to all working economists (more likely), or good stuff that they will steal for their next paper (least likely). The freedom to speculate is what blogs are all about.

Utility is a map from the space of goods (or some space of outcomes) to the real numbers:

U: {goods} -> R
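To make that concrete, here is a toy sketch (the bundles and the numbers attached to them are invented purely for illustration, not anything from the post): the utility function is literally just a lookup from bundles of goods to real numbers, and the “rational” agent picks whichever available bundle gets the biggest number.

```python
# Toy utility: a map from bundles of goods to real numbers (all values invented).
utility = {
    ("burger", "water"): 4.0,
    ("burger", "wine"): 6.5,
    ("salad", "wine"): 5.0,
}

def rational_choice(options):
    # "Rational" choice in this cartoon: pick the bundle with the highest utility.
    return max(options, key=lambda bundle: utility[bundle])

print(rational_choice([("burger", "water"), ("burger", "wine")]))  # ('burger', 'wine')
```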

The utility function encapsulates preferences by measuring how happy I would be if I had those goods. If a set of goods A brings me greater utility than a set B, and I have to choose between them, it would be rational for me to choose A. Seems reasonable. But a number of issues arise when we put this kind of philosophy into practice. So here are those that occur to me, over the course of one plane ride across a couple of time zones.

  • Utility is non-linear.

This one is so perfectly obvious that I’m sure everyone knows it; nevertheless, it’s what immediately popped into mind upon reading the wine story. We need to distinguish between two different senses of linear. One is that increasing the amount of goods leads to a proportional increase in utility: U(ax) = aU(x), where x is some collection of goods and a is a real number. Everyone really does know better than that; the notion of marginal utility captures the fact that eating five deep-fried sliders does not bring you five times the happiness that eating just one would bring you. (Likely it brings you less.)

But the other, closely related, sense of linearity is the ability to simply add together the utility associated with different kinds of goods: U(x+y) = U(x) + U(y), where x and y are different goods. In the real world, utility isn’t anything like that. It’s highly nonlinear; the presence of one good can dramatically affect the value placed on another one. I’m also pretty sure that absolutely every economist in the world must know this, and surely they use interesting non-linear utility functions when they write their microeconomics papers. But the temptation to approximate things as linear can lead, I suspect, to the kind of faulty reasoning that dissuades you from ordering wine in nice restaurants. Of course, you could have water with your meal, and then go home and have a glass of wine you bought yourself, thereby saving some money and presumably increasing your net utility. But having wine with dinner is simply a different experience than having the wine later, after you’ve returned home. There is, a physicist would say, strong coupling between the food, the wine, the atmosphere, and other aspects of the dining experience. And paying for that coupling might very well be worth it.
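As a purely illustrative sketch (the functional form and all the numbers are my own inventions, not anything implied by the wine story), here is a utility function with diminishing returns in each good plus a coupling term that rewards drinking the wine with the meal; it violates both senses of linearity above.

```python
import math

def utility(dinner_quality, wine_glasses, wine_with_dinner=True):
    # Diminishing returns in each good separately, so U(a*x) != a*U(x).
    base = math.log(1 + dinner_quality) + math.log(1 + wine_glasses)
    # Strong coupling: wine enjoyed *with* the meal adds extra utility,
    # so U(dinner + wine) != U(dinner) + U(wine). Coefficient is made up.
    coupling = 0.5 * math.sqrt(dinner_quality * wine_glasses) if wine_with_dinner else 0.0
    return base + coupling

# Wine at the restaurant, coupled to the meal...
print(utility(dinner_quality=8, wine_glasses=2, wine_with_dinner=True))
# ...versus water with dinner plus the same wine later at home.
print(utility(dinner_quality=8, wine_glasses=0)
      + utility(dinner_quality=0, wine_glasses=2, wine_with_dinner=False))
```

In this toy model, wine with dinner beats water-with-dinner-plus-wine-at-home entirely because of the coupling term; whether that is worth a four-times markup then becomes a quantitative question rather than an obvious mistake.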

Physicists deal with this by working hard at isolating the correct set of variables which are (relatively) weakly-coupled, and dealing with the dynamics of those variables. It would be silly, for example, to worry about protons and neutrons if you were trying to understand chemistry — atoms and electrons are all you need. So the question is, is there an economic equivalent to the idea of an effective field theory?

  • Utility is not a function of goods.

Another in the category of “surely all the economists in the world know this, but they don’t always act that way.” A classic (if tongue-in-cheek) example is provided by this proposal to cure the economic inefficiency of Halloween by giving out money instead of candy. After all, chances are small that the candy you collect will align perfectly with the candy you would most like to have. The logical conclusion of such reasoning is that nobody should ever buy a gift for anyone else; the recipient, knowing their own preferences, could always purchase equal or greater utility if they were just given the money directly.

But there is an intrinsic utility in gift-giving; we value a certain object for having received it on a special occasion from a loved one (or from a stranger while trick-or-treating), in addition to its inherent value. Now, one can try to account for this effect by introducing “having been given as a gift” as a kind of good in its own right, but that’s clearly a stopgap. Instead, it makes sense to expand the domain set on which the utility function is defined. For example, in addition to a set of goods, we include information about the path by which those goods came to us. Path-dependent utility could easily account for the difference between being given a meaningful gift and being handed the money to buy the same item ourselves. Best of all, there are clearly a number of fascinating technical problems to be solved concerning strategies for maximizing path-dependent utility. (Could we, for example, usefully approximate the space of paths by restricting attention to the tangent bundle of the space of goods?) Full employment for mathematical economists! Other interesting variables that could be added to the domain set on which utility is defined are left as exercises for the reader.
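If one wanted to play with the idea, a cartoon version might look like the following (the path labels and bonus numbers are made up for illustration): the utility function takes not just the goods, but a label for how they arrived.

```python
def path_dependent_utility(goods_value, path):
    # The same goods can carry different utility depending on their history.
    bonus = {
        "bought_it_myself": 0.0,
        "gift_from_loved_one": 5.0,   # valued more for how it came to us
        "trick_or_treating": 2.0,
    }
    return goods_value + bonus.get(path, 0.0)

# Identical candy bar, different histories, different utility.
print(path_dependent_utility(3.0, "bought_it_myself"))     # 3.0
print(path_dependent_utility(3.0, "gift_from_loved_one"))  # 8.0
```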

  • People do not behave rationally.

This is the first objection everyone thinks of when they hear about rational-choice theory — rational behavior is a rare, precious subset of all human activity, not the norm that we should simply expect. And again, economists are perfectly aware of this, and incorporating “irrationality” into their models seems to be a growth business.

But I’d like to argue something a bit different — not simply that people don’t behave rationally, but that “rational” and “irrational” aren’t necessarily useful terms in which to think about behavior. After all, any kind of deterministic behavior — faced with equivalent circumstances, a certain person will always act the same way — can be modeled as the maximization of some function. But it might not be helpful to think of that function as utility, or of the act of maximizing it as the manifestation of rationality. If the job of science is to describe what happens in the world, then there is an empirical question about what function people go around maximizing, and figuring out that function is the beginning and end of our job. Slipping words like “rational” in there creates an impression, intentional or not, that maximizing utility is what we should be doing — a prescriptive claim rather than a descriptive one. It may, as a conceptually distinct issue, be a good thing to act in this particular way; but that’s a question of moral philosophy, not of economics.

  • People don’t even behave deterministically.

If, given a set of goods (or circumstances more generally), a certain person will always act in a certain way, we can always describe such behavior as maximizing a function. But real people don’t act that way. At least, I know I don’t — when faced with a tough choice, I might go a certain way, but I can’t guarantee that I would always do the same thing if I were faced with the identical choice another hundred times. It may be that I would be a lot more deterministic if I knew everything about my microstate — the exact configuration of every neuron and chemical transmitter in my brain, if not every atom and photon — but I certainly don’t. There is an inherent randomness in decision-making, which we can choose to ascribe to the coarse-grained description that we necessarily use in talking about realistic situations, but it is there one way or the other.

The upshot of which is, a full description of behavior needs to be cast not simply in terms of the single function-maximizing choice, but in terms of a probability distribution over different choices. The evolution of such a distribution would be essentially governed by the same function (utility or whatever) that purportedly governs deterministic behavior, in the same way that the dynamics in Boltzmann’s equation is ultimately governed by Newton’s laws. The fun part is, you’d be making better use of the whole utility function, not just those special points at which it is maximized — just like the Feynman path integral established a way to make use of the entire classical action, not just those extremal points. I have no idea whether thinking in this way would be useful for addressing any real-world problems, but at the very least it should provide full employment for mathematical economists.
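One conventional way to turn a single function into a probability distribution over choices is the Boltzmann-weight (or “logit”) prescription, p(choice) proportional to exp(U(choice)/T); this particular recipe is my own illustrative example rather than anything specified above, but it shows how the whole function, not just its maximum, ends up mattering.

```python
import math

def choice_probabilities(utilities, temperature=1.0):
    # Boltzmann/logit weights: exp(U/T), normalized to a probability distribution.
    weights = [math.exp(u / temperature) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

utilities = [2.0, 1.5, 0.3]  # hypothetical utilities of three options
print(choice_probabilities(utilities, temperature=1.0))  # soft preference for option 1
print(choice_probabilities(utilities, temperature=0.1))  # nearly deterministic maximizer
```

As the temperature goes to zero this reduces to the usual deterministic maximizer; at any finite temperature the sub-optimal choices get exercised too, with frequencies set by the full shape of the utility function.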

Okay, I bet that’s at least three or four Sveriges Riksbank Prizes in Economic Sciences in Memory of Alfred Nobel lurking in there somewhere. Get working, people!

The Meaning of “Life”

John Wilkins at Evolving Thoughts has a great post about the development of the modern definition of “Life” (which, one strongly suspects, is by no means fully developed). Once we break free of the most parochial definitions involving carbon-based chemistry, we’re left with the general ideas that life is something complex, something that processes information, something that can evolve, something that takes advantage of local entropy gradients to make records and build structures. (Probably quantum computation does not play a crucial role, but who knows?) One of the first people to think in these physical terms was none other than Erwin Schrödinger, who was mostly famous for other things, but did write an influential little book called What Is Life? that explored the connections between life and thermodynamics.

Searching for a definition of “Life” is a great reminder of the crucial lesson that we do not find definitions lying out there in the world; we find stuff out there in the world, and it’s our job to choose definitions that help us make sense of it, carving up the world into useful categories. When it comes to life, it’s not so easy to find a definition that includes everything that we would like to think of as living, but excludes the things we don’t.

Milky Way

For example: is the Milky Way galaxy alive? Probably not, so find a good definition that unambiguously excludes it. Keep in mind that the Milky Way, like any good galaxy, metabolizes raw materials (turning hydrogen and helium into heavier elements) and creates complexity out of simplicity, and does so by taking advantage of a dramatic departure from thermal equilibrium (of which CV readers are well aware) to build organization via an entropy gradient.

Update: Unbeknownst to me, Carl Zimmer had just written about this exact topic in Seed. Hat tip to 3QD.

National Academy: Dark Energy First, Maybe LISA Second

The National Academy of Sciences panel charged with evaluating the Beyond Einstein program has come out with its recommendations. Briefly: the first priority should be the Joint Dark Energy Mission (where “joint” means “with the Department of Energy”), but we should keep up some amount of work on LISA, the Laser Interferometer Space Antenna. Steinn has the lowdown, so you should go there for details.

I am happy to know that JDEM will go forward (if NASA listens to the panel, about which I’m less sure than Steinn seems to be); very happy that LISA gets at least some support, although if I were the European Space Agency I’d certainly be shopping around for more reliable partners; slightly bemused that little effort seemed to go into pushing a CMB probe; and very sad to see X-ray astronomy get the shaft, as Constellation-X and EXIST seem right out of the picture. We can only hope for happier times ahead.

Arguments For Things I Don’t Believe, 1: Research on String Theory is Largely a Waste of Time

First in a prospective series of my own versions of the best arguments for conclusions I don’t personally share. I’m supposed to stick to statements that I believe are true, even if I don’t think they warrant the conclusion. The idea is to probe presuppositions, put our ideas to the test, and of course to implicitly diss the less-good arguments for things we don’t believe. And who knows, maybe we’ll come up with arguments that are so great we’ll change our minds! (By slipping into the royal “we” I’m encouraging others to play along.) So here we go: the best argument I can think of for why research on string theory is a waste of time.

Traditionally, the greatest progress in physics has come through an intense interaction between theory and experiment. We have learned new things when experiments were good enough to bring us data that didn’t fit into the models of the time, but our theoretical understanding was also sufficiently developed that we had the tools to formulate useful hypotheses. While we know that classical general relativity and quantum mechanics are fundamentally incompatible and must someday be reconciled, straightforward dimensional analysis suggests that detailed experimental information about the workings of such a reconciliation (as opposed to true-but-vague statements like “gravity exists” or “spacetime is four-dimensional on large scales”) won’t be available at energies below the Planck scale, which is hopelessly out of reach at the current time.

A defensible response to this lack of detailed experimental input would be to place the problem of quantizing gravity on the back burner while we think about other things. And this was indeed the strategy pursued by the overwhelming majority of theoretical physicists, up until the 80’s. Two things caused a change: the drying-up of the river of experimental surprises that had previously kept particle theory vibrant and unpredictable, and the appearance of string theory as a miraculously promising theory of quantum gravity. Even though the Planck scale was still just as inaccessible, string theory was so good that it became reasonable to hope that we could figure it all out just by using brainpower, even without Planckian accelerators.

But it hasn’t worked out that way. Gadflies point to the landscape of low-energy manifestations of string theory as the nail in the coffin for any hopes to uniquely predict new particle physics from string theory. But that is only a subset of the more significant challenge, and understanding particle physics beyond the Standard Model was never the primary motivation of most string theorists anyway — it was quantizing gravity.

The real problem is that string theory isn’t a theory. It’s just part of a theory, and we don’t know what that theory is, although sometimes we call it M-theory. As Aaron explains in a very nice post, the thing we understand is “perturbative” string theory, which is a fancy way of saying “the part of M-theory where small perturbations around empty space act like weakly-interacting strings.” We’ve known all along that colorful stories about loops of string propagating through spacetime only captured part of the story, but we’re beginning to catch on to how difficult it will be to capture the whole thing. The Second String Revolution in the 90’s taught us a great deal about M-theory, but it’s hard to know whether we should be more impressed with what we’ve been able to learn even without experimental input, or more daunted by the task of finishing the job.

Within our current understanding of string theory, there is not a single experiment we can even imagine doing (much less actually, realistically hope to do) that would falsify string theory. We can’t make a single unambiguous prediction, even in principle. I used to think that string theory predicted certain “stringy” behavior of scattering cross-sections at energies near the Planck scale; but that’s not right: only perturbative string theory predicts such a thing. “String theory” is part of a larger structure that we don’t understand nearly well enough to make contact with the real world as yet, and it’s completely possible that another century or two of hard thinking won’t get us to that goal. It made sense to be optimistic in the 80’s that there was enough rigidity and uniqueness in the theory that we would be led more or less directly to contact with observation; but that’s not what has happened.

The best reason to think that research on string theory is largely a waste of time is because it’s just too hard.

Pretty convincing, eh? But I don’t buy it, even though I think I’ve adhered to my self-imposed rule that I believe every individual sentence above. It might turn out to be the case that another century or two of hard thinking won’t get us any closer to connecting string theory with the real world, but I don’t see any reason to be that pessimistic. The thing that’s really hard to get across at a popular level is that the theory really is rigid and unique, deep down; it’s the connections between “deep down” and the world around us that are the hard part. Count me as one of those who is more impressed with what we have learned than daunted by what we haven’t; if I were to bet, I would say that more thinking will continue to lead to more breakthroughs, and ultimately a version of M-theory that can rightly be called “realistic.”

In the meantime, the advent of sexy new data from the LHC and elsewhere will draw a certain fraction of brainpower away from string theory and into phenomenology, but there will be plenty left over. The field as a whole will fitfully establish a portfolio of different approaches, as it usually does. And there will undoubtedly be surprises around the corner.

Ask a String Theorist! Or an Atomic Physicist.

Over at Uncertain Principles, Chad Orzel is on vacation and has handed the keys to the blog over to Aaron Bergman and Nathan (last name mysterious), specialists in string theory and atomic physics, respectively. Good luck to them as they experience what the blogosphere is like from the other side.

Aaron has begun to talk a little about the multiverse — here, here. He has thereby earned grumpy mutterings, rolled eyes, and “help” from some sensible physicists, some crackpots, some curmudgeons, his guest co-blogger, and even himself. I don’t quite understand what all the angst is about. (Actually I do understand, of course; this is one of those times when you adopt a rhetorical stance of pretending not to understand some alternative position in order to emphasize how unimpeachably correct your own position is.)

People are very welcome to disagree with the presuppositions or conclusions of anthropic reasoning; that’s just how science goes, and is perfectly healthy. But in addition to the substantive disagreements, there’s a widespread urge to express dismay that it’s even being talked about all the time. Now, that urge can’t be sensibly directed toward the actual research being done, because on that score multiverse-type stuff is a tiny percentage of all the work that goes on. Peek at any day’s worth of abstracts on hep-th, hep-ph, gr-qc, or astro-ph; you might find something anthropic here or there if you’re lucky, but it’s a tiny minority. This stuff is not dominating science, or physics, or theoretical physics, or high-energy theory, or even string theory.

No, the complaint is that considerations of parts of the universe that we can’t possibly see tend to receive an inordinate amount of attention in public discussions — on blogs, in books, in magazines and newspapers. Which is completely true, as a factual statement. At the risk of revealing a trade secret: the public discussion of different avenues of scientific research does not faithfully reflect the amount of research effort being put into those questions. Eek! I’d be willing to bet that it has always been like that. And yet, science marches on.

You may ask why something like the multiverse exerts such an outsized pull on the public imagination. So let me break it down for you here: it’s fun. People like talking about other universes, and whether we could be living in a simulation, and what happened before the Big Bang. For one thing, anyone can dive in; you don’t need to be an expert on twistor space, or two-loop counterterms, or BRST invariance in order to pontificate about the conditions under which life could exist if the laws of physics were very different. (Comments from people who are more informed and thoughtful about the subject will generally be more useful, but anyone can say something.) For another, it’s just cool to contemplate these way-out possibilities. The lure of crazy ideas is what draws a lot of people to science in the first place.

And that’s … okay, as Stuart Smalley would remind us. It would be very bad indeed if unmoored philosophizing about other universes became the dominant paradigm in science, or any subset thereof, but there’s zero danger of that. Really. But there’s no reason why people can’t have fun contemplating some of the more provocative and accessible ideas out there. On this very blog we will occasionally write lengthy discourses on some piece of technical work related to observations — and not get anywhere near the number of comments that a two-minute toss-off about the anthropic principle gets. And yet, science has not ground to a halt. I think the enterprise is sufficiently healthy to survive a few more posts about the multiverse.

Hey, I Uploaded a Video

Just got back from a great trip to Beijing, very enjoyable if a bit tiring, where much musing was done on the Primordial Existential Question, about which more anon. But I also mused a bit about what this blog needs, and I came to the conclusion that must have been obvious to everyone else long ago: more videos of me.

So, here you are. Thanks to some heroic efforts on the part of folks who would just as soon lurk behind the scenes, we now have video captured from the C-SPAN broadcast of our science panel at YearlyKos. Here is my talk, conveniently divided into two pieces to appease the YouTube gods. They are a little fuzzy, but you get the idea. I used the mysterious beauty of dark matter and dark energy as an excuse to make some didactic points about science and rationality and politics. (If I weren’t an atheist, I would have made a good preacher.) You can also find videos of Chris’s talk and Ed’s talk at their respective sites; Tara, who felt sorry for me for being given the impossible task of making the universe sound interesting, has the Q&A up as well.

But! Behind the fold, the true payoff!

Unusual Features of Our Place In the Universe That Have Obvious Anthropic Explanations

The “sensible anthropic principle” says that certain apparently unusual features of our environment might be explained by selection effects governing the viability of life within a plethora of diverse possibilities, rather than being derived uniquely from simple dynamical principles. Here are some examples of that principle at work.

  • Most of the planetary mass in the Solar System is in the form of gas giants. And yet, we live on a rocky planet.
  • Most of the total mass in the Solar System is in the Sun. And yet, we live on a planet.
  • Most of the volume in the Solar System is in interplanetary space. And yet, we live in an atmosphere.
  • Most of the volume in the universe is in intergalactic space. And yet, we live in a galaxy.
  • Most of the ordinary matter in the universe (by mass) consists of hydrogen and helium. And yet, we are made mostly of heavier elements.
  • Most of the particles of ordinary matter in the universe are photons. And yet, we are made of baryons and electrons.
  • Most of the matter in the universe (by mass) is dark matter. And yet, we are made of ordinary matter.
  • Most of the energy in the universe is dark energy. And yet, we are made of matter.
  • The post-Big-Bang lifespan of the universe is very plausibly infinite. And yet, we find ourselves living within the first few tens of billions of years (a finite interval) after the Bang.

That last one deserves more attention, I think.
