Science

Origins of Biological Complexity

Carl Zimmer has a great article in Quanta about the origins of biological complexity. Quanta, in case you’re wondering, is the new name for the Simons Foundation’s online science magazine, which is certainly going to be a go-to resource for reliable science stories much of the media would consider to be too subtle or insufficiently newsworthy.

Defining “complexity” is a notoriously tricky business, although we tend to think we know it when we see it. The quest for the One True Definition is a red herring, as what’s interesting is to see what patterns and laws we can associate with different kinds of complexity. In the biological realm, it seems natural to give at least some credit for the development of complexity to the pressures of natural selection. Having more highly-developed sensory apparatus or higher intelligence naturally goes along with greater complexity (or so we tend to think), and there can be obvious evolutionary advantages to these traits.

Carl looks at a new paper by Leonore Fleming and Daniel McShea that examines the number of different part types, shapes, and colors in everyone’s favorite biological test subject, the Drosophila fruit fly. They argue that complexity increases even in the absence of any evolutionary pressure at all. That’s consistent with a proposal called the “Zero-Force Evolutionary Law,” which says that complexity and diversity simply tend to increase naturally, apart from any nudges evolution might provide. Fleming and McShea looked at the evolution of Drosophila raised in comfortable laboratory environments, where they were provided with unlimited food and perfectly livable conditions, and compared them to wild fruit flies. They conclude that, indeed, the absence of pressures led to increased measures of complexity in the population. Roughly speaking, there was less reason for crazy mutations to die off, so the genome could go galloping freely through the fitness landscape.

Other biologists are skeptical of this way of looking at things. I think the basic point is that it’s easy to see how diversity will increase in the absence of evolutionary pressures, but much harder to see how useful complexity (like eyes and brains) would develop. To which I imagine the appropriate response is “it depends on the conditions.” If we imagine that offspring survive and reproduce equally well no matter what kinds of mutations they undergo, and there are truly unlimited resources, then I would predict that the descendants would include just as many usefully complex outcomes as they would in the presence of selection pressures. But that’s only because there would be a ginormous number of descendants, and most of them would be utterly unviable in the real world. The fraction of descendants with useful complex features in the selection-free world would doubtless be much lower than in a world with natural selection.

Evolution is able to make some wonderful things, but it really is a blind watchmaker. Mutations and sexual shuffling of genes happen, and then natural selection culls away the less successful outcomes. It doesn’t actually accelerate the production of useful outcomes. So complexity happens naturally (at least, starting from simple states in open systems very far from equilibrium), but evolution brings it into focus.


Cosmology and the Past Hypothesis

Greetings from sunny Santa Cruz, where we’re in week three of the Summer School on Philosophy of Cosmology. I gave two lectures yesterday afternoon, and in a technological miracle they’ve already appeared on YouTube. The audio and video aren’t perfect quality, but hopefully viewers can hear everything clearly.

These are closer to discussions than lectures, as I was intentionally pretty informal about the whole thing. Rather than trying to push any one specific model or idea, I gave an overview of what I take to be the relevant issues confronting someone who wants to build a cosmological model that naturally explains why the early universe had a low entropy. They are a little bit technical, as the intended audience is grad students in physics and philosophy who have already sat through two weeks of lecturing.

If there is one central idea, it’s the concept of a “cosmological realization measure” for statistical mechanics. Ordinarily, when we have some statistical system, we know some macroscopic facts about it but only have a probability distribution over the microscopic details. If our goal is to predict the future, it suffices to choose a distribution that is uniform in the Liouville measure given to us by classical mechanics (or its quantum analogue). If we want to reconstruct the past, in contrast, we need to conditionalize over trajectories that also started in a low-entropy past state — that’s the “Past Hypothesis” that is required to get stat mech off the ground in a world governed by time-symmetric fundamental laws.
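Schematically (and in notation I’m inventing here just for convenience, so don’t take it as standard), the contrast between predicting and retrodicting looks like this:

\[
\rho_{\rm pred}(x) \;\propto\; \rho_{\rm Liouville}(x)\,\mathbf{1}_{M}(x),
\qquad
\rho_{\rm retro}(x) \;\propto\; \rho_{\rm Liouville}(x)\,\mathbf{1}_{M}(x)\,\mathbf{1}_{\rm PH}(x),
\]

where \(\mathbf{1}_{M}\) restricts to microstates compatible with the macrostate \(M\) we currently observe, and \(\mathbf{1}_{\rm PH}\) restricts to microstates whose trajectories, run backwards in time, pass through the low-entropy state demanded by the Past Hypothesis. Predicting the future only needs the first distribution; reconstructing the past needs the second.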

The goal I am pursuing is to find cosmological scenarios in which the Past Hypothesis is predicted by the dynamics, not merely assumed. We imagine a “large universe,” one in which local macroscopic situations (like a box of gas or a lecture hall full of students) occur many times. Then we can define a measure over the microconditions corresponding to such situations by looking at the ways in which those situations actually appear in the cosmic history. The hope — still just a hope, really — is that familiar situations like observers or lecture halls or apple pies appear predominantly in the aftermath of low-entropy Big-Bang-like states. That would stand in marked contrast to the straightforward Boltzmannian expectation that any particular low-entropy state is both preceded by and followed by higher-entropy configurations. I don’t think any particular model completely succeeds in this ambition, but I’m optimistic that we can build theories of this type. We shall see.
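In the same home-made notation, the cosmological realization measure would replace the a priori Liouville weighting with the frequency at which microconditions are actually realized over the history of the large universe:

\[
\mu_{\rm cosmo}(x \,|\, M) \;\propto\; N_{M}(x),
\]

where \(N_{M}(x)\) counts the occurrences, throughout cosmic history, of the macroscopic situation \(M\) being realized by microcondition \(x\). If lecture halls and apple pies overwhelmingly show up in the aftermath of Big-Bang-like states, conditionalizing with respect to this measure would deliver the Past Hypothesis as an output rather than an input. (Again, this is just a schematic way of phrasing the idea, not a precise definition.)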

Initial state/origin of the universe (Part 1) by Sean Carroll

Initial state/origin of the universe (Part 2) by Sean Carroll


What Is Science?

There is an old parable — not sure if it comes from someone famous I should be citing, or whether I read it in some obscure book years ago — about a lexicographer who was tasked with defining the word “taxi.” Thing is, she lived and worked in a country where every single taxi was yellow, and every single non-taxi car was blue. Makes for an extremely simple definition, she concluded: “Taxis are yellow cars.”

Hopefully the problem is obvious. While that definition suffices to demarcate the differences between taxis and non-taxis in that particular country, it doesn’t actually capture the essence of what makes something a taxi at all. The situation was exacerbated when loyal readers of her dictionary visited another country, in which taxis were green. “Outrageous,” they said. “Everyone knows taxis aren’t green. You people are completely wrong.”

The taxis represent Science.

(It’s usually wise not to explain your parables too explicitly; it cuts down on the possibilities of interpretation, which limits the size of your following. Jesus knew better. But as Bob Dylan said in a related context, “You’re not Him.”)

Defining the concept of “science” is a notoriously tricky business. In particular, there is long-running debate over the demarcation problem, which asks where we should draw the line between science and non-science. I won’t be providing any final answers to this question here. But I do believe that we can parcel out the difficulties into certain distinct classes, based on a simple scheme for describing how science works. Essentially, science consists of the following three-part process:

  1. Think of every possible way the world could be. Label each way an “hypothesis.”
  2. Look at how the world actually is. Call what you see “data” (or “evidence”).
  3. Where possible, choose the hypothesis that provides the best fit to the data.

The steps are not necessarily in chronological order; sometimes the data come first, sometimes it’s the hypotheses. This is basically what’s known as the hypothetico-deductive method, although I’m intentionally being more vague because I certainly don’t think this provides a final-answer definition of “science.”

The reason why it’s hard to provide a cut-and-dried definition of “science” is that every one of these three steps is highly problematic in its own way. Number 3 is probably the trickiest; any finite amount of data will generally underdetermine a choice of hypothesis, and we need to rely on imprecise criteria for deciding between theories. (Thomas Kuhn suggested five values that are invoked in making such choices: accuracy, simplicity, consistency, scope, and fruitfulness. A good list, but far short of an objective algorithm.) But even numbers 1 and 2 would require a great deal more thought before they rose to the level of perfect clarity. It’s not easy to describe how we actually formulate hypotheses, nor how we decide which data to collect. (Problems that are vividly narrated in Zen and the Art of Motorcycle Maintenance, among other places.)
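To see how step 3 can be made quantitative without thereby becoming objective, here is a toy sketch (in Python, with completely made-up data and hypotheses) of Bayesian hypothesis comparison. Note that the verdict depends on the prior credences and the error model you feed in, which is exactly where Kuhn-style judgment sneaks back in.

```python
import numpy as np

# Made-up data: five measurements of some quantity.
data = np.array([1.2, 0.9, 1.1, 1.3, 1.0])

# Two made-up hypotheses about the true value of that quantity.
hypotheses = {"H1: true value is 1.0": 1.0, "H2: true value is 1.5": 1.5}

def log_likelihood(mu, data, sigma=0.2):
    """Log-probability of the data given the hypothesis 'true value = mu',
    assuming Gaussian measurement errors of width sigma."""
    return np.sum(-0.5 * ((data - mu) / sigma) ** 2
                  - np.log(sigma * np.sqrt(2 * np.pi)))

# Prior credences in each hypothesis; this is where judgment enters.
priors = {"H1: true value is 1.0": 0.5, "H2: true value is 1.5": 0.5}

# Bayes' theorem: posterior is proportional to prior times likelihood.
unnormalized = {name: priors[name] * np.exp(log_likelihood(mu, data))
                for name, mu in hypotheses.items()}
total = sum(unnormalized.values())
for name, weight in unnormalized.items():
    print(f"{name}: posterior probability {weight / total:.3f}")
```

Change the priors or the assumed error width and the numbers change with them, which is the formal reflection of the fact that the data alone don’t hand us the criteria for choosing.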

But I think it’s a good basic outline. What you very often find, however, are folks who try to be a bit more specific and programmatic in their definition of science, and end up falling into the trap of our poor lexicographic enthusiasts: they mistake the definition for the thing being defined.

Along these lines, you will sometimes hear claims such as these:

  • “Science assumes naturalism, and therefore cannot speak about the supernatural.”
  • “Scientific theories must make realistically falsifiable predictions.”
  • “Science must be based on experiments that are reproducible.”

In each case, you can kind of see why one might like such a claim to be true — they would make our lives simpler in various ways. But each one of these is straightforwardly false. …


Why Does the World Exist?

In Jim Holt’s enjoyable book, Why Does the World Exist? An Existential Detective Story, he recounts conversations with a wide variety of thinkers, from physicists and biologists to writers and philosophers, who have struggled with the Primordial Existential Question. You probably know my take on the issue, but Jim and I sat down at the LA Library a few weeks ago to chat about this and related issues. I think it’s safe to say we at least had a few laughs. Here’s the complete video; audio is also available as a podcast.

Jim Holt and Sean Carroll from ALOUDla on Vimeo.


Philosophy of Cosmology Summer School

Going on right now, up at UC Santa Cruz — I guess the official name is the UCSC Institute for the Philosophy of Cosmology. It’s a three-week event, with talks by some top-notch people: David Albert, David Wallace, Tim Maudlin, Joel Primack, Anthony Aguirre, Matt Johnson, Leonard Susskind, and a bunch more. To my great regret I can’t be there for the whole thing, but I will be popping in during the last week to say some things about cosmology and the arrow of time. (Is it possible that not everything worth saying will have already been said?)

If you’re not actually there, they seem to be doing a great job of putting lectures on YouTube almost as soon as they appear. Almost like being there, except that you won’t get to walk outside into the redwood forest.


How Quantum Field Theory Becomes “Effective”

Ken Wilson, Nobel Laureate and deep thinker about quantum field theory, died last week. He was a true giant of theoretical physics, although not someone with a lot of public name recognition. John Preskill wrote a great post about Wilson’s achievements, to which there’s not much I can add. But it might be fun to just do a general discussion of the idea of “effective field theory,” which is crucial to modern physics and owes a lot of its present form to Wilson’s work. (If you want something more technical, you could do worse than Joe Polchinski’s lectures.)

So: quantum field theory comes from starting with a theory of fields, and applying the rules of quantum mechanics. A field is simply a mathematical object that is defined by its value at every point in space and time. (As opposed to a particle, which has one position and no reality anywhere else.) For simplicity let’s think about a “scalar” field, which is one that simply has a value, rather than also having a direction (like the electric field) or any other structure. The Higgs boson is a particle associated with a scalar field. Following the example of every quantum field theory textbook ever written, let’s denote our scalar field φ(x, t).

What happens when you do quantum mechanics to such a field? Remarkably, it turns into a collection of particles. That is, we can express the quantum state of the field as a superposition of different possibilities: no particles, one particle (with certain momentum), two particles, etc. (The collection of all these possibilities is known as “Fock space.”) It’s much like an electron orbiting an atomic nucleus, which classically could be anywhere, but in quantum mechanics takes on certain discrete energy levels. Classically the field has a value everywhere, but quantum-mechanically the field can be thought of as a way of keeping track of an arbitrary collection of particles, including their appearance and disappearance and interaction.
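Written out schematically, a generic state in Fock space is a superposition over different numbers of particles:

\[
|\Psi\rangle \;=\; c_{0}\,|0\rangle \;+\; \int\! d^3k\; c_{1}(k)\,|k\rangle \;+\; \int\! d^3k_{1}\, d^3k_{2}\; c_{2}(k_{1},k_{2})\,|k_{1},k_{2}\rangle \;+\; \cdots,
\]

where \(|0\rangle\) is the no-particle (vacuum) state, \(|k\rangle\) is a one-particle state with momentum \(k\), and so on, with the coefficients telling us the amplitude for each possibility. (I’m suppressing normalization factors and other details; this is just to give the flavor.)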

So one way of describing what the field does is to talk about these particle interactions. That’s where Feynman diagrams come in. The quantum field describes the amplitude (which we would square to get the probability) that there is one particle, two particles, whatever. And one such state can evolve into another state; e.g., a particle can decay, as when a neutron decays to a proton, electron, and an anti-neutrino. The particles associated with our scalar field φ will be spinless bosons, like the Higgs. So we might be interested, for example, in a process by which one boson decays into two bosons. That’s represented by this Feynman diagram:

[Feynman diagram: a three-point vertex, with one particle line coming in from the left and two lines going out to the right]

Think of the picture, with time running left to right, as representing one particle converting into two. Crucially, it’s not simply a reminder that this process can happen; the rules of quantum field theory give explicit instructions for associating every such diagram with a number, which we can use to calculate the probability that this process actually occurs. (Admittedly, it will never happen that one boson decays into two bosons of exactly the same type; that would violate energy conservation. But one heavy particle can decay into different, lighter particles. We are just keeping things simple by only working with one kind of particle in our examples.) Note also that we can rotate the legs of the diagram in different ways to get other allowed processes, like two particles combining into one.
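For our single scalar field, a vertex like this would come from a term in the Lagrangian (the master formula that defines the theory) that is cubic in the field. A toy version, schematic rather than drawn from any particular real-world theory, would be

\[
\mathcal{L} \;=\; \tfrac{1}{2}\,\partial_{\mu}\varphi\,\partial^{\mu}\varphi \;-\; \tfrac{1}{2}\,m^{2}\varphi^{2} \;-\; \frac{g}{3!}\,\varphi^{3},
\]

where \(m\) is the mass of the particle and \(g\) is a coupling constant. The cubic term is what attaches three lines to a single point, and the Feynman rules assign a factor proportional to \(g\) to every such vertex, so diagrams with more vertices contribute less when \(g\) is small.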

This diagram, sadly, doesn’t give us the complete answer to our question of how often one particle converts into two; it can be thought of as the first (and hopefully largest) term in an infinite series expansion. …


There Is No Classical World

Caltech’s Institute for Quantum Information and Matter is a fun place. It’s led by people like John Preskill, Jeff Kimble, and Alexei Kitaev — some of the world’s great scientists — so you know the physics is going to be top-notch. But it’s the youngsters, such as postdoc Spiros Michalakis, who are bringing the fun. Stuff like the IQIM blog (where you should read John’s recent post on the Maldacena/Susskind wormhole proposal) and a successful Kickstarter campaign for science-inspired fashion.

The fun is now being ratcheted up even higher, as IQIM is teaming with Jorge Cham of PhD Comics fame to make a series of animated web videos about quantum mechanics. I ask you, who doesn’t love some good videos about quantum mechanics??

Sensibly, they’ve kicked off by spotlighting an interesting experimental result, rather than diving right into the realms of esoteric theoretical speculation. Of course, this is quantum mechanics we’re talking about, so even the experiments get pretty wild in their implications. The work is by Amir Safavi-Naeini and Oskar Painter, who take a small mirror and put it into a quantum state where its center of mass is as cold as it is possible to be. Classically, of course, the mirror can be perfectly still; quantum-mechanically, there is a ground state wave function that still shows “fluctuations” (i.e. the fact that observations won’t always show zero motion).
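For the center-of-mass motion, which behaves like a harmonic oscillator with some effective mass \(m\) and frequency \(\omega\), the size of those ground-state fluctuations is the textbook zero-point spread,

\[
\Delta x \;=\; \sqrt{\frac{\hbar}{2 m \omega}},
\]

which is fantastically small for a macroscopic chunk of material, but not zero; that residual jiggle is what the experiment is sensitive to. (I’m quoting the formula just to give a sense of scale; the actual experimental analysis is more involved.)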

Doing The Impossible

Now, the mirror is tiny — microscopic, it’s fair to say — but it’s not that tiny. It’s a piece of metal, not just an atom or two. (I didn’t catch what the actual size was.) So the implication here is that things don’t miraculously “become classical” when they are made of many atoms rather than just a few. We don’t notice the quantum-ness of the universe in our everyday lives, but that’s because the systems we encounter are noisy and constantly jostled by their environments, leading to rapid decoherence; not because there is a magical transition to classicalness once you get above a certain number of atoms, or a truly distinct “classical realm.”

Of course, no right-minded person really believes that there is a hard and fast transition to a classical realm once objects get big; rather, there is a sense in which the classical approximation becomes more and more accurate, but it’s always just an approximation. The experimental results here are simply affirming the truth of quantum mechanics. Nevertheless, you can still meet people (the wrong-minded ones) who are willing to believe that electrons and photons are governed by quantum mechanics, but not that macroscopic objects like mirrors (or people) are. Have them watch this video, and hope that the implications sink in.


Firewalls, Burning Brightly

The firewall puzzle is the claim that, if information is ultimately conserved as black holes evaporate via Hawking radiation, then an infalling observer sees a ferocious wall of high-energy radiation as they fall through the event horizon. This is contrary to everything we’ve ever believed about black holes based on classical and semi-classical reasoning, so if it’s true it’s kind of a big deal.

The argument in favor of firewalls is based on everyone’s favorite spooky physical phenomenon, quantum entanglement. Think of a Hawking photon near the event horizon of a very old (mostly-evaporated) black hole, about to sneak out to the outside world. If there is no firewall, the quantum state near the horizon is (pretty close to) the vacuum, which is unique. Therefore, the outgoing photon will be completely entangled with a partner ingoing photon — the negative-energy guy who is ultimately responsible for the black hole losing mass. However, if information is conserved, that outgoing photon must also be entangled with the radiation that left the hole much earlier. This is a problem because quantum entanglement is “monogamous” — one photon can’t be maximally entangled with two other photons at the same time. (Awww.) The simplest way out, so the story goes, is to break the entanglement between the ingoing and outgoing photons, which means the state is not close to the vacuum. Poof: firewall.
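One way to state the monogamy requirement precisely (a standard fact of quantum information theory, not anything special to black holes): if the joint state of two systems A and B is pure, which is what being maximally entangled with each other and nothing else amounts to, then the pair cannot be correlated with any third system C. In equations,

\[
\rho_{ABC} \;=\; \rho_{AB} \otimes \rho_{C} \qquad \text{whenever } \rho_{AB} \text{ is pure},
\]

so the mutual information between the outgoing photon and the early radiation would have to vanish. Insisting that they be entangled after all pushes the near-horizon state away from the vacuum, and that deviation is the firewall.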

You folks read about this some time ago in a guest post by Joe Polchinski, one of the authors (with Ahmed Almheiri, Don Marolf, and James Sully, thus “AMPS”) of the original paper. I’m just updating now to let you know: almost a year later, the controversy has not gone away.

You can read about some of the current state of play in An Apologia for Firewalls, by the above authors plus Douglas Stanford. (Those of us with good Catholic educations understand that “apologia” means “defense,” not “apology.”) We also had a physics colloquium by Joe at Caltech last week, where he masterfully explained the basics of the black hole information paradox as well as the recent firewall brouhaha. Caltech is not very good at technology (don’t let the name fool you), so we don’t record our talks, but Joe did agree to put his slides up on the web, which you can now all enjoy. Aimed at physics students, so there might be an equation or two in there.

Just to point out a couple of intriguing ideas that have come along in response to the AMPS proposal, one paper that has deservedly received a lot of attention is An Infalling Observer in AdS/CFT by Kyriakos Papadodimas and Suvrat Raju. They consider the AdS/CFT correspondence, which relates a theory of gravity in anti-de Sitter space to a non-gravitational field theory on its boundary. One can model black holes in such a theory, and see what the boundary field theory has to say about them. Papadodimas and Raju argue that they don’t see any evidence of firewalls. It’s suggestive, but like many AdS/CFT constructions, comes across as a bit of a black box; even if there aren’t any firewalls, it’s hard to pinpoint exactly what part of the original AMPS argument is at fault.

More radically, there was just a new paper by Juan Maldacena and Lenny Susskind, Cool Horizons for Entangled Black Holes. These guys have tenure, so they aren’t afraid of putting forward some crazy-sounding ideas, which is what they’ve done here. (Note the enormous difference between “crazy-sounding” and “actually crazy.”) They are proposing that, when two particles are entangled, there is actually a tiny wormhole connecting them through spacetime. This seems bizarre from a classical general-relativity standpoint, since such wormholes would instantly collapse upon themselves; but they point out that their wormholes are “highly quantum objects.” They claim there is evidence that such a conjecture makes sense, although they can’t confidently argue that it gets rid of the firewalls.

I suspect further work is required. Good times.


Visualizing Entanglement In Real Time

Entanglement is one of the “spookier” aspects of quantum mechanics. In classical physics, the states of two distinct objects (their positions, velocities, spins, etc.) are specified completely separately from each other. Knowing what this tomato is doing gives you no information, in principle, about what that carrot is doing. But quantum mechanics says there is only one “state of the whole world,” which refers to absolutely everything in it. And, of course, the quantum state is specified as a superposition of possible measurement outcomes, rather than one definite possibility. So the quantum state of two vegetables might take the form “the tomato is in the refrigerator and the carrot is on the kitchen counter, or the carrot is in the refrigerator and the tomato is on the counter.” Although usually we talk about spins and polarizations of particles rather than locations of foodstuffs.
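If you’d like to see that in symbols, the vegetable state described above would be, schematically,

\[
|\Psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\Big( |\text{fridge}\rangle_{\rm tomato}\,|\text{counter}\rangle_{\rm carrot} \;+\; |\text{counter}\rangle_{\rm tomato}\,|\text{fridge}\rangle_{\rm carrot} \Big),
\]

a single state of the two-vegetable system that cannot be factored into “a state of the tomato” times “a state of the carrot.” That impossibility of factoring is exactly what we mean by entanglement.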

Entanglement is by no means a mystery, in the same way that the measurement problem is a mystery. It’s just a straightforward prediction of quantum mechanics, repeatedly verified by experiments. But it bugs us, because it seems “nonlocal.” In the state described above, I can look at the tomato and instantly infer what the carrot is doing, without ever looking at it. This bothers people, although it doesn’t lead to anything dangerous or immoral, like communication faster than light. That’s because physical information still travels slower than light. Someone wondering about the carrot doesn’t gain any information just because you measured the location of the tomato; you still have to tell them what answer you got.

Still, entanglement is pretty cool. And now Anton Zeilinger’s group in Vienna, one of the leading labs working on quantum experiments, has queried the Zeitgeist and responded in a way appropriate to our internet age: they made a YouTube video. (Since they are also old-fashioned scientists, they also wrote a paper.)

Real-Time Imaging of Quantum Entanglement

Let me try to explain this as I understand it, but I’ll confess up front this is a bit outside my comfort zone so real experts should chime in. Here we have two entangled photons, which can be polarized either H (horizontal) or V (vertical). The quantum state is of the form HV + VH, which means that we don’t know what either polarization is, but we know that the two polarizations must be opposite of each other. (If the state had been HH + VV, we still wouldn’t know either one, but we would know they were the same.) We send each photon through a merry path, observe one of them (that’s on the left), and see what happens to the other one (on the right). We’re looking at an image of where the individual photons land on a screen. You can see how the state of photon #2 is affected by what’s happening to photon #1.
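In symbols, and glossing over normalization and phase conventions, the state being produced is

\[
|\psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\Big( |H\rangle_{1}\,|V\rangle_{2} \;+\; |V\rangle_{1}\,|H\rangle_{2} \Big),
\]

so finding photon #1 to be horizontal leaves photon #2 vertical, and vice versa; the alternative HH + VV state would instead guarantee matching outcomes.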

Doesn’t it seem like you could use this to send information faster than light, if photon #2 is instantly affected by what we do to photon #1? I believe the trick here is that we’re not taking an image of all of the #2 photons. We’re only taking images of #2 if photon #1 was registered in a certain state. That is, we send photon #1 through a filter that only lets horizontal polarizations through. If photon #1 gets through, we turn on the camera and image photon #2. If it doesn’t, the camera never triggers, and photon #2 hits the screen harmlessly. So no superluminal chitchat, you science-fiction fans out there.
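There is also a cleaner way to see why no message gets through: taken on its own, photon #2 is described by its reduced density matrix, obtained by tracing over photon #1, and for the state above that is simply

\[
\rho_{2} \;=\; {\rm Tr}_{1}\,|\psi\rangle\langle\psi| \;=\; \tfrac{1}{2}\Big( |H\rangle\langle H| \;+\; |V\rangle\langle V| \Big),
\]

completely independent of what measurement (if any) is performed on photon #1. The striking correlations only become visible once the two sets of records are brought together and compared, and that comparison travels no faster than light.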

Nevertheless, pretty awesome. Quantum mechanics is sufficiently non-intuitive that we would only ever come up with it by having it forced on us by data. Even though experiments like this are completely explained by quantum mechanics as we currently know it, every little demonstration helps us appreciate it a bit more viscerally. As we strive toward a deeper understanding, that’s a crucially important part of the process.


Sixty Symbols: The Arrow of Time

Completing an action-packed trilogy that began with quantum mechanics and picked up speed with the Higgs boson, here I am talking with Brady Haran of Sixty Symbols about the arrow of time. If you’d like something more in-depth, I can recommend a good book.

Arrow of Time - Sixty Symbols

Will there be more? You never know! The Hitchhiker’s Guide to the Galaxy started out as a trilogy, and look what happened to that. (But I promise no prequels.)
