Sean Carroll

George B. Field, 1929-2024

George Field, brilliant theoretical astrophysicist and truly great human being, passed away on the morning of July 31. He was my Ph.D. thesis advisor and one of my favorite people in the world. I often tell my own students that the two most important people in your life who you will (consensually) choose are your spouse and your Ph.D. advisor. With George, I got incredibly lucky.

I am not the person to recount George’s many accomplishments as a scientist and a scientific citizen. He was a much more mainstream astrophysicist than I ever was, doing foundational work on the physics of the interstellar and intergalactic medium, astrophysical magnetic fields, star formation, thermal instability, accretion disks, and more. One of my favorite pieces of work he did was establishing that you could use spectral lines of hydrogen to determine the temperature of an ambient cosmic radiation field. This was before the discovery of the Cosmic Microwave Background, although George’s method became a popular way of measuring the CMB temperature once it was discovered. (George once told me that he had practically proven that there must be an anisotropic microwave radiation component in the universe, using this kind of reasoning — but his thesis advisor told him it was too speculative, so he never published it.)

At the height of his scientific career, as a professor at Berkeley, along came a unique opportunity: the Harvard College Observatory and the Smithsonian Astrophysical Observatory were considering merging into a single unit, and they needed a visionary leader to be the first director. After some negotiations, George became the founding director of the Harvard-Smithsonian Center for Astrophysics in 1973. He guided it to great success before stepping down a decade later. During those years he focused more on developing CfA and being a leader in astronomy than on doing research, including leading an influential Decadal Survey in Astronomy for the National Academy of Sciences (the “Field Report”). He never stopped advocating for good science, including in 2016 helping to draft an open letter in support of climate research.

I remember in 1989, when I was still a beginning grad student, hearing that George had just been elected to the National Academy of Sciences. I congratulated him, and he smiled and graciously thanked me. When I mentioned it to one of the other local scientists, they expressed surprise that he hadn’t been elected long before, which did indeed seem strange to me. Eventually I learned that he had been elected long before — but turned it down. That is extremely rare, and I wondered why. George explained that it had been a combination of him thinking the Academy hadn’t taken a strong enough stance against the Vietnam War, and that they wouldn’t let in a friend of his for personality reasons rather than scientific ones. By 1989 those reasons had become moot, so he was happy to accept.

It was complete luck that I ended up with George as my advisor. I was interested in particle physics and gravity, which is really physics more than astronomy, but the Harvard physics department didn’t accept me, while the astronomy department did. Sadly Harvard didn’t have any professors working on those topics, but I was randomly assigned to George as one of the few members of the theory group. Particle physics was not his expertise, but he had noticed that it was becoming important to cosmology, so he thought it would be good to learn about it a bit. In typical fashion, he attended a summer school in particle physics as a student — not something most famous senior scientists tend to do. At the school he heard lectures by MIT theorist Roman Jackiw, who at the time was thinking about gravity and electromagnetism in 2+1 spacetime dimensions. This is noticeably different from the 3+1 dimensions in which we actually live — a tiny detail that modern particle theorists have learned to look past, but one that rubbed George’s astrophysicist heart the wrong way. So George wondered whether you could do something similar to Roman’s theory, but in the real world. Roman said no, because that would violate Lorentz invariance — there would be a preferred frame of reference. Between the two of them they eventually thought to ask, so what if that were actually true? That’s where I arrived on the scene, with very little knowledge but a good amount of enthusiasm and willingness to learn. Eventually we wrote “Limits on a Lorentz- and Parity-Violating Modification of Electrodynamics,” which spelled out the theoretical basis of the idea and also suggested experimental tests, most notably the prediction of cosmic birefringence (a rotation of the plane of polarization of photons traveling through the universe).

Both George and I were a little dubious that violating Lorentz invariance was the way to make a serious contribution to particle physics. To our surprise, the paper turned out to be quite influential. In retrospect, we had shown how to do something interesting: violate Lorentz invariance by coupling to a field with a Lorentz-violating expectation value in a gauge-invariant way. There turn out to be many other ways to do that, and correspondingly many experimental tests to be investigated. And later I realized that a time-evolving dark energy field could do the same thing — and now there is an ongoing program to search for such an effect. There’s a lesson there: wild ideas are well worth investigating if they can be directly tied to experimental constraints.

Despite being assigned to each other somewhat arbitrarily, George and I hit it off right away (or at least once I stopped being intimidated). He was unmatched in both his pure delight at learning new things about the universe, and his absolute integrity in doing science the right way. Although he was not an expert in quantum field theory or general relativity, he wanted to know more about them, and we learned together. But I learned far more from him simply through his example of what a scientist should be. (He once co-taught a cosmology course with Terry Walker, and one day came to class more bedraggled than usual. Terry later explained to us that George had been looking into how to derive the spectrum of the cosmic microwave background, was unsatisfied with the usual treatment, and stayed up all night re-doing it himself.)

I was also blessed to become George’s personal friend, as well as getting to know his wonderful wife Susan. I would visit them while they were vacationing, and George would have been perfectly happy to talk about science the entire time, but Susan kept us all more grounded. He also had hidden talents. I remember once taking a small rowboat into a lake, but it was extremely windy. Being the younger person (George must have been in his 70s at the time), I gallantly volunteered to do the rowing. But the wind was more persistent than I was, and after a few minutes I began to despair of making much headway. George gently suggested that he give it a try, and bip-bip-bip just like that we were in the middle of the lake. Turns out he had rowed for a crew team as an undergraduate at MIT, and never lost his skills.

George remained passionate about science to the very end, even as his health began to noticeably fail. For the last couple of years we worked hard to finish a paper on axions and cosmic magnetic fields. (The current version is a bit muddled; I need to get our updated version onto the arXiv.) It breaks my heart that we won’t be able to write any more papers together. A tremendous loss.

New Course: The Many Hidden Worlds of Quantum Mechanics

In past years I’ve done several courses for The Great Courses/Wondrium (formerly The Teaching Company): Dark Matter and Dark Energy, Mysteries of Modern Physics: Time, and The Higgs Boson and Beyond. Now I’m happy to announce a new one, The Many Hidden Worlds of Quantum Mechanics.

This is a series of 24 half-hour lectures, given by me with impressive video effects from the Wondrium folks.

The content will be somewhat familiar if you’ve read my book Something Deeply Hidden — the course follows a similar outline, with a few new additions and elaborations along the way. So it’s both a general introduction to quantum mechanics, and also an in-depth exploration of the Many Worlds approach in particular. It’s meant for absolutely everybody — essentially no equations this time! — but 24 lectures is plenty of time to go into depth.

Check out this trailer:

As I type this on Monday 27 November, I believe there is some kind of sale going on! So move quickly to get your quantum mechanics at unbelievably affordable prices.

Thanksgiving

This year we give thanks for a feature of nature that is frequently misunderstood: quanta. (We’ve previously given thanks for the Standard Model Lagrangian, Hubble’s Law, the Spin-Statistics Theorem, conservation of momentum, effective field theory, the error bar, gauge symmetry, Landauer’s Principle, the Fourier Transform, Riemannian Geometry, the speed of light, the Jarzynski equality, the moons of Jupiter, space, black hole entropy, electromagnetism, and Arrow’s Impossibility Theorem.)

Of course quantum mechanics is very important and somewhat misunderstood in its own right; I can recommend a good book if you’d like to learn more. But we’re not getting into the measurement problem or the reality problem just now. I want to highlight one particular feature of quantum mechanics that is sometimes misinterpreted: the fact that some things, like individual excitations of quantized fields (“particles”) or the energy levels of atoms, come in sets of discrete numbers, rather than taking values on a smooth continuum. These discrete chunks of something-or-other are the “quanta” being referred to in the title of a different book, scheduled to come out next spring.

The basic issue is that people hear the phrase “quantum mechanics,” or even take a course in it, and come away with the impression that reality is somehow pixelized — made up of smallest possible units — rather than being ultimately smooth and continuous. That’s not right! Quantum theory, as far as it is currently understood, is all about smoothness. The lumpiness of “quanta” is just apparent, although it’s a very important appearance.

What’s actually happening is a combination of (1) fundamentally smooth functions, (2) differential equations, (3) boundary conditions, and (4) what we care about.

This might sound confusing, so let’s fix ideas by looking at a ubiquitous example: the simple harmonic oscillator. That can be thought of as a particle moving in one dimension, x, with a potential energy that looks like a parabola: V(x) = \frac{1}{2}\omega^2x^2. In classical mechanics, there is a lowest-energy state where the particle just sits at the bottom of its potential, unmoving, so both its kinetic and potential energies are zero. We can give it any positive amount of energy we like, either by kicking it to impart motion or just picking it up and dropping it in the potential at some point other than the origin.

Quantum mechanically, that’s not quite true (although it’s truer than you might think). Now we have a set of discrete energy levels, starting from the ground state and going upward in equal increments. Quanta!

But we didn’t put the quanta in. They come out of the above four ingredients. First, the particle is described not by its position and momentum, but by its wave function, \psi(x,t). Nothing discrete about that; it’s a fundamentally smooth function. But second, that function isn’t arbitrary; it’s going to obey the Schrödinger equation, which is a special differential equation. The Schrödinger equation tells us how the wave function evolves with time, and we can solve it starting with any initial wave function \psi(x, 0) we like. Still nothing discrete there. But there is one requirement, coming from the idea of boundary conditions: if the wave function grows (or even just remains constant) as x\rightarrow \pm \infty, the potential energy of the state diverges, since V(x) grows without bound out there. (It actually has to diminish at infinity just to be a wave function at all, but for the moment let’s think about the energy.) When we bring in the fourth ingredient, “what we care about,” the answer is that we care about low-energy states of the oscillator. That’s because in real-world situations, there is dissipation. Whatever physical system is being modeled by the harmonic oscillator, in reality it will most likely have friction or be able to give off photons or something like that. So no matter where we start, left to its own devices the oscillator will lose energy, which is why we generally care about states with relatively low energy.

Since this is quantum mechanics after all, most states of the wave function won’t have a definite energy, in much the same way they will not have a definite position or momentum. (They have “an energy” — the expectation value of the Hamiltonian — but not a “definite” one, since you won’t necessarily observe that value.) But there are some special states, the energy eigenstates, associated with a specific, measurable amount of energy. It is those states that are discrete: they come in a set made of a lowest-energy “ground” state, plus a ladder of evenly-spaced states of ever-higher energy.

We can even see why that’s true, and why the states look the way they do, just by thinking about boundary conditions. Since each state has finite energy, the wave function has to be zero at the far left and also at the far right. The energy in the state comes from two sources: the potential, and the “gradient” energy from the wiggles in the wave function. The lowest-energy state will be a compromise between “staying as close to x=0 as possible” and “not changing too rapidly at any point.” That compromise looks like the bottom (red) curve in the figure: starts at zero on the left, gradually increases and then decreases as it continues on to the right. It is a feature of eigenstates that they are all “orthogonal” to each other — there is zero net overlap between them. (Technically, if you multiply them together and integrate over x, the answer is zero.) So the next eigenstate will first oscillate down, then up, then back to zero. Subsequent energy eigenstates will each oscillate just a bit more, so they contain the least possible energy while being orthogonal to all the lower-lying states. Those requirements mean that they will each pass through zero exactly one more time than the state just below them.
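None of this needs to be taken on faith; the whole story above can be checked numerically. Here is a minimal finite-difference sketch (my own illustration, not from the post), in units where \hbar = m = 1 so that V(x) = \frac{1}{2}\omega^2x^2; the grid size and the value of omega are arbitrary choices. A perfectly smooth potential and a smooth differential equation nonetheless yield discrete, evenly spaced energy eigenvalues, mutually orthogonal eigenstates, and one extra node per level.

```python
import numpy as np

# Sketch only: discretize the 1-D harmonic oscillator (hbar = m = 1).
# Grid parameters and omega are illustrative choices.
omega = 1.0
N, L = 1000, 10.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

# Hamiltonian: kinetic term -(1/2) d^2/dx^2 via the three-point stencil,
# plus the perfectly smooth parabolic potential V(x) = (1/2) omega^2 x^2.
H = (np.diag(np.full(N, 1.0 / dx**2) + 0.5 * omega**2 * x**2)
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))

E, psi = np.linalg.eigh(H)

# Discrete, evenly spaced levels emerge from a smooth equation:
print(E[:4])  # approximately [0.5, 1.5, 2.5, 3.5]

# Eigenstates are orthogonal, and the n-th one crosses zero exactly n times.
assert abs(psi[:, 0] @ psi[:, 1]) < 1e-8
for n in range(4):
    body = psi[np.abs(psi[:, n]) > 1e-3 * np.abs(psi[:, n]).max(), n]
    assert np.sum(np.diff(np.sign(body)) != 0) == n
```

The discreteness shows up only in the eigenvalue list; the wave functions themselves, and the equation they solve, are smooth throughout.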

And that is where the “quantum” nature of quantum mechanics comes from. Not from fundamental discreteness or anything like that; just from the properties of the set of solutions to a perfectly smooth differential equation. It’s precisely the same as why you get a fundamental note from a violin string tied at both ends, as well as a series of discrete harmonics, even though the string itself is perfectly smooth.

One cool aspect of this is that it also explains why quantum fields look like particles. A field is essentially the opposite of a particle: the latter has a specific location, while the former is spread all throughout space. But quantum fields solve equations with boundary conditions, and we care about the solutions. It turns out (see above-advertised book for details!) that if you look carefully at just a single “mode” of a field — a plane-wave vibration with specified wavelength — its wave function behaves much like that of a simple harmonic oscillator. That is, there is a ground state, a first excited state, a second excited state, and so on. Through a bit of investigation, we can verify that these states look and act like a state with zero particles, one particle, two particles, and so on. That’s where particles come from.

We see particles in the world, not because it is fundamentally lumpy, but because it is fundamentally smooth, while obeying equations with certain boundary conditions. It’s always tempting to take what we see to be the underlying truth of nature, but quantum mechanics warns us not to give in.

Is reality fundamentally discrete? Nobody knows. Quantum mechanics is certainly not, even if you have quantum gravity. Nothing we know about gravity implies that “spacetime is discrete at the Planck scale.” (That may be true, but it is not implied by anything we currently know; indeed, it is counter-indicated by things like the holographic principle.) You can think of the Planck length as the scale at which the classical approximation to spacetime is likely to break down, but that’s a statement about our approximation schemes, not the fundamental nature of reality.

States in quantum theory are described by rays in Hilbert space, which is a vector space, and vector spaces are completely smooth. You can construct a candidate vector space by starting with some discrete things like bits, then considering linear combinations, as happens in quantum computing (qubits) or various discretized models of spacetime. The resulting Hilbert space is finite-dimensional, but is still itself very much smooth, not discrete. (Rough guide: “quantizing” a discrete system gets you a finite-dimensional Hilbert space, quantizing a smooth system gets you an infinite-dimensional Hilbert space.) True discreteness requires throwing out ordinary quantum mechanics and replacing it with something fundamentally discrete, hoping that conventional QM emerges in some limit. That’s the approach followed, for example, in models like the Wolfram Physics Project. I recently wrote a paper proposing a judicious compromise, where standard QM is modified in the mildest possible way, replacing evolution in a smooth Hilbert space with evolution on a discrete lattice defined on a torus. It raises some cosmological worries, but might otherwise be phenomenologically acceptable. I don’t yet know if it has any specific experimental consequences, but we’re thinking about that.

Proposed Closure of the Dianoia Institute at Australian Catholic University

Just a few years ago, Australian Catholic University (ACU) established a new Dianoia Institute of Philosophy. They recruited a number of researchers and made something of a splash, leading to a noticeable leap in ACU’s rankings in philosophy — all the way to second among Catholic universities in the English-speaking world, behind only Notre Dame.

Now, without warning, ACU has announced plans to completely disestablish the institute, along with eliminating 35 other academic positions in other fields. This leaves the faculty, some of whom left permanent jobs elsewhere to join the new institute, completely stranded.

I sent the letter below to the Vice-Chancellor of ACU and other interested parties. I hope the ongoing international outcry leads the administration to change its mind.

Thanksgiving

This year we give thanks for Arrow’s Impossibility Theorem. (We’ve previously given thanks for the Standard Model Lagrangian, Hubble’s Law, the Spin-Statistics Theorem, conservation of momentum, effective field theory, the error bar, gauge symmetry, Landauer’s Principle, the Fourier Transform, Riemannian Geometry, the speed of light, the Jarzynski equality, the moons of Jupiter, space, black hole entropy, and electromagnetism.)

Arrow’s Theorem is not a result in physics or mathematics, or even in physical science, but rather in social choice theory. To fans of social-choice theory and voting models, it is as central as conservation of momentum is to classical physics; if you’re not such a fan, you may never have even heard of it. But as you will see, there is something physics-y about it. Connections to my interests in the physics of democracy are left as an exercise for the reader.

Here is the setup. You have a set of voters {1, 2, 3, …} and a set of choices {A, B, C, …}. The choices may be candidates for office, but they may equally well be where a group of friends is going to meet for dinner; it doesn’t matter. Each voter has a ranking of the choices, from most favorite to least, so that for example voter 1 might rank D first, A second, C third, and so on. We will ignore the possibility of ties or indifference concerning certain choices, but they’re not hard to include. What we don’t include is any measure of intensity of feeling: we know that a certain voter prefers A to B and B to C, but we don’t know whether (for example) they could live with B but hate C with a burning passion. As Kenneth Arrow observed in his original 1950 paper, it’s hard to objectively compare intensity of feeling between different people.

The question is: how best to aggregate these individual preferences into a single group preference? Maybe there is one bully who just always gets their way. But alternatively, we could try to be democratic about it and have a vote. When there is more than one choice, however, voting becomes tricky.

This has been appreciated for a long time, for example in the Condorcet Paradox (1785). Consider three voters and three choices, coming out as in this table.

Voter 1   Voter 2   Voter 3
A         B         C
B         C         A
C         A         B

Then simply posit that one choice is preferred to another if a majority of voters prefer it. The problem is immediate: more voters prefer A over B, and more voters prefer B over C, but more voters also prefer C over A. This violates the transitivity of preferences, which is a fundamental postulate of rational choice theory. Maybe we have to be more clever.
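The cycle is easy to verify in a few lines; here is a sketch, where the three ballots are just the columns of the table above, most-preferred choice first.

```python
# The three ballots from the Condorcet table, most-preferred first.
ballots = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]

def majority_prefers(x, y):
    # True if a strict majority of voters rank x above y.
    return sum(b.index(x) < b.index(y) for b in ballots) > len(ballots) / 2

print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True: a cycle, so transitivity fails
```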

So, much like Euclid did a while back for geometry, Arrow set out to state some simple postulates we can all agree a good voting system should have, then figure out what kind of voting system would obey them. The postulates he settled on (as amended by later work) are:

  • Nobody is a dictator. The system is not just “do what Voter 1 wants.”
  • Independence of irrelevant alternatives. If the method says that A is preferred to B, adding in a new alternative C will not change the relative ranking between A and B.
  • Pareto efficiency. If every voter prefers A over B, the group prefers A over B.
  • Unrestricted domain. The method provides group preferences for any possible set of individual preferences.

These seem like pretty reasonable criteria! And the answer is: you can’t do it. Arrow’s Theorem proves that there is no ranked-choice voting method that satisfies all of these criteria. I’m not going to prove the theorem here, but the basic strategy is to find a subset of the voting population whose preferences are always satisfied, and then find a similar subset of that population, and keep going until you find a dictator.

It’s fun to go through different proposed voting systems and see how they fall short of Arrow’s conditions. Consider for example the Borda Count: give 1 point to a choice for every voter ranking it first, 2 points for second, and so on, finally crowning the choice with the least points as the winner. (Such a system is used in some political contexts, and frequently in handing out awards like the Heisman Trophy in college football.) Seems superficially reasonable, but this method violates the independence of irrelevant alternatives. Adding in a new option C that many voters put between A and B will increase the distance in points between A and B, possibly altering the outcome.
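That failure can be made concrete with a small sketch; the ballots below are hypothetical, chosen so that inserting the extra option flips the result.

```python
def borda_winner(ballots):
    # Borda count as described above: 1 point for a first-place ranking,
    # 2 for second, and so on; the choice with the fewest points wins.
    scores = {}
    for ballot in ballots:
        for rank, choice in enumerate(ballot, start=1):
            scores[choice] = scores.get(choice, 0) + rank
    return min(scores, key=scores.get)

# Five hypothetical voters: three rank A over B, two rank B over A.
print(borda_winner([["A", "B"]] * 3 + [["B", "A"]] * 2))  # A wins, 7 to 8

# Insert option C between B and A on the two dissenting ballots only.
# No one's A-versus-B preference changed, yet the winner flips to B.
print(borda_winner([["A", "B", "C"]] * 3 + [["B", "C", "A"]] * 2))
```

In the second election A collects 9 points to B’s 8, so the “irrelevant” alternative C has changed the A-versus-B outcome.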

Arrow’s Theorem reflects a fundamental feature of democratic decision-making: the idea of aggregating individual preferences into a group preference is not at all straightforward. Consider the following set of preferences:

Voter 1   Voter 2   Voter 3   Voter 4   Voter 5
A         A         A         D         D
B         B         B         B         B
C         D         C         C         C
D         C         D         A         A

Here a simple majority of voters have A as their first choice, and many common systems will spit out A as the winner. But note that the dissenters seem to really be against A, putting it dead last. And their favorite, D, is not that popular among A’s supporters. But B is ranked second by everyone. So perhaps one could make an argument that B should actually be the winner, as a consensus not-so-bad choice?

Perhaps! Methods like the Borda Count are intended to allow for just such a possibility. But it has its problems, as we’ve seen. Arrow’s Theorem assures us that all ranked-voting systems are going to have some kind of problems.

By far the most common voting system in the English-speaking world is plurality voting, or “first past the post.” There, only the first-place preferences count (you only get to vote for one choice), and whoever gets the largest number of votes wins. It is universally derided by experts as a terrible system! A small improvement is instant-runoff voting, sometimes just called “ranked choice,” although the latter designation implies something broader. There, we gather complete rankings, count up all the top choices, and declare a winner if someone has a majority. If not, we eliminate whoever got the fewest first-place votes, and run the procedure again. This is … slightly better, as it allows for people to vote their conscience a bit more easily. (You can vote for your beloved third-party candidate, knowing that your vote will be transferred to your second-favorite if they don’t do well.) But it’s still rife with problems.
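The elimination-and-transfer procedure can be sketched in a few lines; the nine ballots here are hypothetical, set up so that a transfer decides the outcome.

```python
from collections import Counter

def instant_runoff(ballots):
    # Repeat: if some choice has a strict majority of first-place votes,
    # it wins; otherwise eliminate the choice with the fewest and re-run.
    ballots = [list(b) for b in ballots]
    while True:
        tally = Counter(b[0] for b in ballots)
        top, votes = tally.most_common(1)[0]
        if votes > len(ballots) / 2:
            return top
        loser = min(tally, key=tally.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

# Hypothetical ballots: A leads 4-3-2 but lacks a majority. C is
# eliminated, its two votes transfer to B, and B wins 5-4.
ballots = ([["A", "B", "C"]] * 4 + [["B", "A", "C"]] * 3
           + [["C", "B", "A"]] * 2)
print(instant_runoff(ballots))  # B
```

Note that the two C voters could safely rank their favorite first, knowing their support would transfer to B; under plurality voting those same ballots would have handed the election to A.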

One way to avoid Arrow’s result is to allow for people to express the intensity of their preferences after all, in what is called cardinal voting (or range voting, or score voting). This allows the voters to indicate that they love A, would grudgingly accept B, but would hate to see C. This slips outside Arrow’s assumptions, and allows us to construct a system that satisfies all of his criteria.
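Here is a minimal sketch of cardinal voting; the 0-10 scale and the ballots are hypothetical, not any particular real-world system.

```python
# Cardinal (score/range) voting sketch: each voter scores every choice,
# and the highest total wins. Scale and ballots are hypothetical.
def score_winner(score_ballots):
    totals = {}
    for ballot in score_ballots:
        for choice, score in ballot.items():
            totals[choice] = totals.get(choice, 0) + score
    return max(totals, key=totals.get)

# Two voters love A and can live with B; one voter hates A. The
# consensus choice B wins (21 points, versus 20 for A and 11 for C),
# an outcome no purely ranked ballot could express.
ballots = [
    {"A": 10, "B": 6, "C": 0},
    {"A": 10, "B": 7, "C": 1},
    {"A": 0,  "B": 8, "C": 10},
]
print(score_winner(ballots))  # B
```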

There is some evidence that cardinal voting leads to less “regret” among voters than other systems, for example as indicated in this numerical result from Warren Smith, where it is labeled “range voting” and left-to-right indicates best-to-worst among voting systems.

On the other hand — is it practical? Can you imagine elections with 100 candidates, and asking voters to give each of them a score from 0 to 100?

I honestly don’t know. Here in the US our voting procedures are already laughably primitive, in part because that primitivity serves the purposes of certain groups. I’m not that optimistic that we will reform the system to obtain a notably better result, but it’s still interesting to imagine how well we might potentially do.

The Biggest Ideas in the Universe: Space, Time, and Motion

Just in case there are any blog readers out there who haven’t heard from other channels: I have a new book out! The Biggest Ideas in the Universe: Space, Time, and Motion is Volume One of a planned three-volume series. It grew out of the videos that I did in 2020, trying to offer short and informal introductions to big ideas in physics. Predictably, they grew into long and detailed videos. But they never lost their informal charm, especially since I didn’t do that much in the way of research or preparation.

For the book, by contrast, I actually did research and preparation! So the topics are arranged a bit more logically, the presentation is a bit more thorough and coherent, and the narrative is sprinkled with fun anecdotes about the philosophy and history behind the development of these ideas. In this volume, “these ideas” cover classical physics, from Aristotle through Newton up through Einstein.

The gimmick, of course, is that we don’t shy away from using equations. The goal of this book is to fill the gap between what you generally get as a professional physics student, whom the teacher can rely on to spend years of study and hours of doing homework problems, and what you get as an interested amateur, where it is assumed that you are afraid of equations or can’t handle them. I think equations are not so scary, and that essentially everyone can handle them, if they are explained fully along the way. So there are no prerequisites, but we will teach you about calculus and vectors and all that stuff as we go. Not enough to actually solve the equations and become a pro, but enough to truly understand what the equations are saying. If it all works, this will open up a new way of looking at the universe for people who have been denied it for a long time.

The payoff at the end of the book is Einstein’s theory of general relativity and its prediction of black holes. You will understand what Einstein’s equation really says, and why black holes are an inevitable outcome of that equation. That is something most people with an undergraduate degree in physics never get to.

Table of contents:

  • Introduction
  • 1. Conservation
  • 2. Change
  • 3. Dynamics
  • 4. Space
  • 5. Time
  • 6. Spacetime
  • 7. Geometry
  • 8. Gravity
  • 9. Black Holes
  • Appendices

Available wherever books are available: Amazon * Barnes and Noble * BAM * IndieBound * Bookshop.org * Apple Books.

Johns Hopkins

As far as I remember, the first time I stepped onto a university campus was in junior high school, when I visited Johns Hopkins for an awards ceremony for the Study of Mathematically Precocious Youth. (I grew up in an environment that didn’t involve spending a lot of time on college campuses, generally speaking.) The SMPY is a longitudinal study that looks for kids who do well on standardized math tests, encourages them to take the SATs at a very young age, and follows the progress of those who do really well. I scored as “pretty precocious” but “not precocious enough to be worth following up.” Can’t really argue. My award was a slim volume on analytic geometry, which — well, the thought was nice.

But the campus made an impression. It was elegant and evocative in a way that was new to me and thoroughly compelling. Grand architecture, buildings stuffed with books and laboratories, broad green commons criss-crossed by students and professors talking about ideas. (I presumed that was what they were talking about). Magical. I was already committed to the aspiration that I would go to university, get a Ph.D., and become a theoretical physicist, although I had very little specific concept of what that entailed. Soaking in the campus atmosphere redoubled my conviction that this was the right path for me.

So it is pretty special to me to announce that I am going to become a professor at Hopkins. This summer Jennifer and I will move from Los Angeles to Baltimore, and I will take up a position as Homewood Professor of Natural Philosophy. (She will continue writing about science and culture at Ars Technica, which she can do from any geographic location.)

The title requires some explanation. Homewood Professors are a special category at Hopkins. There aren’t many of them. Some are traditional academics like famous cosmologist Joseph Silk; others are not traditional academics, like former Senator Barbara Mikulski, musician Thomas Dolby, or former UK Poet Laureate Andrew Motion. The official documentation states that a Homewood Professor should be “a person of high scholarly, professional, or artistic distinction whose appointment brings luster to the University.” (You see why I waited to announce until my appointment was completely official, so nobody could write in objecting that I don’t qualify. Too late!)

It’s a real, permanent faculty job — teaching, students, grant proposals, the whole nine yards. Homewood Professors are not tenured, but in some sense it’s better — the position floats freely above any specific department lines, so administrative/committee obligations are minimized. (They told me they could think about a tenure process if I insisted. Part of me wanted to, for purely symbolic reasons. But once all the ins and outs were explained, I decided not to bother.)

In practice, my time will be split between the Department of Physics and Astronomy and the Department of Philosophy. I will have offices in both places, and teach roughly one course/year in each department. The current plan is for me to teach two classes this fall: a first-year seminar on the Physics of Democracy, and an upper-level seminar on Topics in the Philosophy of Physics. (The latter will probably touch on the arrow of time, philosophy of cosmology, and the foundations of quantum mechanics, but all is subject to change.) And of course I’ll be supervising grad students and eventually hiring postdocs in both departments — let me know if you’re interested in applying!

You’ll note that both departments have recently been named after William Miller. That’s because Bill Miller, who was a graduate student in philosophy at Hopkins and went on to become a spectacularly successful investor, has made generous donations both to philosophy and to physics. (He’s also donated to, and served as board chair for, the Santa Fe Institute, where I will continue to be Fractal Faculty — our interests have considerable overlap!) Both departments are already very high-quality; physics and astronomy includes friends and colleagues like Adam Riess, Marc Kamionkowski, and David Kaplan, not to mention benefitting from association with the Space Telescope Science Institute. But these gifts will allow us to grow in substantial ways, which makes for a very exciting time.

One benefit of being a Homewood Professor is that you get to choose what you will be designated a professor “of.” I asked that it be Natural Philosophy, harkening back to the days before science and philosophy split into distinct disciplines. (Resisted the temptation to go with a Latin version.) This is what makes this opportunity so special. I’ve always been interdisciplinary, between physics and philosophy and other things, and also always had an interest in reaching out to wider audiences. But there was inevitably tension with what I was supposed to be doing as a theoretical physicist and cosmologist. My predilections don’t fit comfortably with the academic insistence on putting everyone into a silo and encouraging them to stay there.

Now, for the first time in my life, all that stuff I want to do will be my job, rather than merely tolerated. (Or not tolerated, as the case may be.) The folks at JHU want me to build connections between different departments, and they very much want me to both keep up with the academic work, and with the podcasts and books and all that. Since that’s exactly what I want to do myself, it’s a uniquely good fit.

I’ve had a great time at Caltech, and have nothing bad to say about it. I have enormous fondness for my colleagues and especially for the many brilliant students and postdocs who I’ve been privileged to interact with along the way. But a new adventure awaits, and I can’t wait to dive in. I have a long list of ideas I want to pursue in cosmology, quantum mechanics, complexity, statistical mechanics, emergence, information, democracy, origin of life, and elsewhere. Maybe we’ll start up a seminar series in Complexity and Emergence that brings different people together. Maybe it will grow into a Center of some kind. Maybe I’ll write academic papers on moral philosophy! Who knows? It’s all allowed. Can’t ask for more than that.


Thanksgiving

This year we give thanks for something we’ve all heard of, but maybe don’t appreciate as much as we should: electromagnetism. (We’ve previously given thanks for the Standard Model Lagrangian, Hubble’s Law, the Spin-Statistics Theorem, conservation of momentum, effective field theory, the error bar, gauge symmetry, Landauer’s Principle, the Fourier Transform, Riemannian Geometry, the speed of light, the Jarzynski equality, the moons of Jupiter, space, and black hole entropy.)

Physicists like to say there are four forces of nature: gravitation, electromagnetism, the strong nuclear force, and the weak nuclear force. That’s a somewhat sloppy and old-fashioned way of talking. In the old days it made sense to distinguish between “matter,” in the form of particles or fluids or something like that, and “forces,” which pushed around the matter. These days we know it’s all just quantum fields, and both matter and forces arise from the behavior of quantum fields interacting with each other. There is an important distinction between fermions and bosons, which almost maps onto the old-fashioned matter/force distinction, but not quite. If it did, we’d have to include the Higgs force among the fundamental forces, but nobody is really inclined to do that.

The real reason we stick with the traditional four forces is that (unlike the Higgs) they are all mediated by a particular kind of bosonic quantum field, called gauge fields. There’s a lot of technical stuff that goes into explaining what that means, but the basic idea is that the gauge fields help us compare other fields at different points in space, when those fields are invariant under a certain kind of symmetry. For more details, check out this video from the Biggest Ideas in the Universe series (but you might need to go back to pick up some of the prerequisites).

The Biggest Ideas in the Universe | 15. Gauge Theory

All of which is just throat-clearing to say: there are four forces, but they’re all different in important ways, and electromagnetism is special. All the forces play some kind of role in accounting for the world around us, but electromagnetism is responsible for almost all of the “interestingness” of the world of our experience. Let’s see why.

When you have a force carried by a gauge field, one of the first questions to ask is what phase the field is in (in whatever physical situation you care about). This is “phase” in the same sense as “phase of matter,” e.g. solid, liquid, gas, etc. In the case of gauge theories, we can think about the different phases in terms of what happens to lines of force — the imaginary paths through space that we would draw to be parallel to the direction of the force exerted at each point.

The simplest thing that lines of force can do is just to extend away from a source, traveling forever through space until they hit some other source. (For electromagnetism, a “source” is just a charged particle.) That corresponds to the field being in the Coulomb phase. Infinitely-stretching lines of force dilute in density as the area through which they are passing increases. In three dimensions of space, that corresponds to spheres we draw around the source, whose area goes up as the distance squared. The magnitude of the force therefore goes as the inverse of the square — the famous inverse square law. In the real world, both gravity and electromagnetism are in the Coulomb phase, and exhibit inverse-square laws.
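The geometric argument can be checked numerically. Here is a minimal Python sketch (with a hypothetical coupling constant k and source charge q in arbitrary units, not anything from the post) confirming that an inverse-square field has the same total flux through every sphere drawn around the source:

```python
import math

# In the Coulomb phase, the field of a point source falls off as 1/r^2,
# precisely so that the total "flux" of lines of force through any sphere
# drawn around the source is the same at every radius.
# k and q are hypothetical constants in arbitrary units.
k, q = 1.0, 1.0

def field_strength(r):
    """Magnitude of an inverse-square field at distance r from the source."""
    return k * q / r**2

def flux(r):
    """Field strength times the area 4*pi*r^2 of a surrounding sphere."""
    return field_strength(r) * 4 * math.pi * r**2

for r in [1.0, 2.0, 10.0]:
    print(r, field_strength(r), flux(r))  # flux comes out the same at every r
```

Running the loop shows the field strength dropping by a factor of four each time the distance doubles, while the flux stays fixed at 4πkq.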

But there are other phases. There is the confined phase, where lines of force get all tangled up with each other. There is also the Higgs phase, where the lines of force are gradually absorbed into some surrounding field (the Higgs field!). In the real world, the strong nuclear force is in the confined phase, and the weak nuclear force is in the Higgs phase. As a result, neither force extends farther than subatomic distances.

Phases of gauge fields.

So there are four gauge forces that push around particles, but only two of them are “long-range” forces in the Coulomb phase. The short-range strong and weak forces are important for explaining the structure of protons and neutrons and nuclei, but once you understand what stable nuclei there are, their work is essentially done, as far as accounting for the everyday world is concerned. (You still need them to explain fusion inside stars, but here we’re just thinking of life on Earth.) The way that those nuclei come together with electrons to make atoms and molecules and larger structures is all explained by the long-range forces, electromagnetism and gravity.

But electromagnetism and gravity aren’t quite equal here. Gravity is important, obviously, but it’s also pretty simple: everything attracts everything else. (We’re ignoring cosmology and the like, focusing on life here on Earth.) That’s nice — it’s good that we stay attached to the ground, rather than floating away — but it’s not a recipe for intricate complexity.

To get complexity, you need to be able to manipulate matter in delicate ways with your force. Gravity isn’t up to the task — it just attracts. Electromagnetism, on the other hand, is exactly what the doctor ordered. Unlike gravity, where the “charge” is just mass and all masses are positive, electromagnetism has both positive and negative charges. Like charges repel, and opposite charges attract. So by deftly arranging collections of positively and negatively charged particles, you can manipulate matter in whatever way you like.
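As a toy illustration of that sign structure (illustrative numbers only, with the Coulomb constant set to 1 in arbitrary units), a few lines of Python make the point that the product of charges decides push versus pull:

```python
# Toy version of Coulomb's law, F = k*q1*q2 / r^2, with k = 1 in arbitrary
# units. The sign of the product of charges is what matters: positive means
# the charges repel, negative means they attract. Gravity has no analogue of
# this, since mass (its "charge") is always positive.

def coulomb_force(q1, q2, r, k=1.0):
    """Signed radial force between two charges: > 0 repels, < 0 attracts."""
    return k * q1 * q2 / r**2

print(coulomb_force(+1.0, +1.0, 2.0))  # like charges: positive, repulsive
print(coulomb_force(+1.0, -1.0, 2.0))  # opposite charges: negative, attractive
```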

That pinpoint control over pushing and pulling is crucial for the existence of complex structures in the universe, including you and me. Nuclei join with electrons to make atoms because of electromagnetism. Atoms come together to make molecules because of electromagnetism. Molecules interact with each other in different ways because of electromagnetism. All of the chemical processes in your body, not to mention in the world immediately around you, can ultimately be traced to electromagnetism at work.

Electromagnetism doesn’t get all the credit for the structure of matter. A crucial role is played by the Pauli exclusion principle, which prohibits two electrons from inhabiting exactly the same state. That’s ultimately what gives matter its size — why objects are solid, etc. But without the electromagnetic interplay between atoms of different sizes and numbers of electrons, matter would be solid but inert, just sitting still without doing anything interesting. It’s electromagnetism that allows energy to move from place to place between atoms, both via electricity (electrons in motion, pushed by electromagnetic fields) and radiation (vibrations in the electromagnetic fields themselves).

So we should count ourselves lucky that we live in a world where at least one fundamental force is both in the Coulomb phase and has opposite charges, and give appropriate thanks. It’s what makes the world interesting.


The Zombie Argument for Physicalism (Contra Panpsychism)

The nature of consciousness remains a contentious subject out there. I’m a physicalist myself — as I explain in The Big Picture and elsewhere, I think consciousness is best understood as weakly-emergent from the ordinary physical behavior of matter, without requiring any special ontological status at a fundamental level. In poetic-naturalist terms, consciousness is part of a successful way of talking about what happens at the level of humans and other organisms. “Being conscious” and “having conscious experiences” are categories that help us understand how human beings live and behave, while corresponding to goings-on at more fundamental levels in which the notion of consciousness plays no role at all. Nothing very remarkable about that — the same could be said for the categories of “being alive” or “being a table.” There is a great deal of work yet to be done to understand how consciousness actually works and relates to what happens inside the brain, but it’s the same kind of work that is required in other questions at the science/philosophy boundary, without any great metaphysical leaps required.

Not everyone agrees! I recently went on a podcast hosted by philosophers Philip Goff (former Mindscape guest) and Keith Frankish to hash it out. Philip is a panpsychist, who believes that consciousness is everywhere, underlying everything we see around us. Keith is much closer to me, but prefers to describe himself as an illusionist about consciousness.

S02E01 Sean Carroll: Is Consciousness Emergent?

Obviously we had a lot to disagree about, but it was a fun and productive conversation. (I’m nobody’s panpsychist, but I’m extremely impressed by Philip’s willingness and eagerness to engage with people with whom he seriously disagrees.) It’s a long video; the consciousness stuff starts around 17:30, and goes to about 2:04:20.

But despite the length, there was a point that Philip raised that I don’t think was directly addressed, at least not carefully. And it goes back to something I’m quite fond of: the Zombie Argument for Physicalism. Indeed, this was the original title of a paper that I wrote for a symposium responding to Philip’s book Galileo’s Error. But in the editing process I realized that the argument wasn’t original to me; it had appeared, in somewhat different forms, in a few previous papers:

  • Balog, K. (1999). “Conceivability, Possibility, and the Mind-Body Problem,” The Philosophical Review, 108: 497-528.
  • Frankish, K. (2007). “The Anti-Zombie Argument,” The Philosophical Quarterly, 57: 650-666.
  • Brown, R. (2010). “Deprioritizing the A Priori Arguments against Physicalism,” Journal of Consciousness Studies, 17 (3-4): 47-69.
  • Balog, K. (2012). “In Defense of the Phenomenal Concept Strategy,” Philosophy and Phenomenological Research, 84: 1-23.
  • Campbell, D., Copeland, J., and Deng, Z-R. (2017). “The Inconceivable Popularity of Conceivability Arguments,” The Philosophical Quarterly, 67: 223-240.

So the published version of my paper shifted the focus from zombies to the laws of physics.

The idea was not to explain how consciousness actually works — I don’t really have any good ideas about that. It was to emphasize a dilemma that faces anyone who is not a physicalist, someone who doesn’t accept the view of consciousness as a weakly-emergent way of talking about higher-level phenomena.

The dilemma flows from the following fact: the laws of physics underlying everyday life are completely known. They even have a name, the “Core Theory.” We don’t have a theory of everything, but what we do have is a theory that works really well in a certain restricted domain, and that domain is large enough to include everything that happens in our everyday lives, including inside ourselves. I won’t rehearse all the reasons we have for believing this is probably true, but they’re in The Big Picture, and I recently wrote a more technical paper that goes into some of the details:

Given that success, the dilemma facing the non-physicalist about consciousness is the following: either your theory of consciousness keeps the dynamics of the Core Theory intact within its domain of applicability, or it doesn’t. There aren’t any other options! I emphasize this because many non-physicalists are weirdly cagey about whether they’re going to violate the Core Theory. In our discussion, Philip suggested that one could rely on “strong emergence” to create new kinds of behavior without really violating the CT. You can’t. The fact that the CT is a local effective field theory completely rules out the possibility, for reasons I talk about in the above two papers.

That’s not to say we are certain the Core Theory is correct, even in its supposed domain of applicability. As good scientists, we should always be open to the possibility that our best current theories will be proven inadequate by future developments. It’s absolutely fine to base your theory of consciousness on the idea that the CT will be violated by consciousness itself — that’s one horn of the above dilemma. The point of “Consciousness and the Laws of Physics” was simply to emphasize the extremely high standard to which any purported modification should be held. The Core Theory is extraordinarily successful, and to violate it within its domain of applicability means not only that we are tweaking a successful model, but that we are somehow contradicting some extremely foundational principles of effective field theory. And maybe consciousness does that, but I want to know precisely how. Show me the equations, explain what happens to energy conservation and gauge invariance, etc.

Increasingly, theorists of consciousness appreciate this fact. They therefore choose the other horn of the dilemma: leave the Core Theory intact as a theory of the dynamics of what happens in the world, but propose that a straightforward physicalist understanding fails to account for the fundamental nature of the world. The equations might be right, in other words, but to account for consciousness we should posit that Mind (or something along those lines) underlies all of the stuff obeying those equations. It’s not hard to see how this strategy might lead one to a form of panpsychism.

That’s fine! You are welcome to contemplate that. But then we physicalists are welcome to tell you why it doesn’t work. That’s precisely what the Zombie Argument for Physicalism does. It’s not precisely an argument for physicalism tout court, but for the superiority of physicalism over a non-physicalist view that purports to explain consciousness while leaving the behavior of matter unaltered.

Usually, of course, the zombie argument is deployed against physicalism, not for it. I know that. We find ourselves in the presence of irony.


Energy Conservation and Non-Conservation in Quantum Mechanics

Conservation of energy is a somewhat sacred principle in physics, though it can be tricky in certain circumstances, such as an expanding universe. Quantum mechanics is another context in which energy conservation is a subtle thing — so much so that it’s still worth writing papers about, which Jackie Lodman and I recently did. In this blog post I’d like to explain two things:

  • In the Many-Worlds formulation of quantum mechanics, the energy of the wave function of the universe is perfectly conserved. It doesn’t “require energy to make new universes,” so that is not a respectable objection to Many-Worlds.
  • In any formulation of quantum mechanics, energy doesn’t appear to be conserved as seen by actual observers performing quantum measurements. This is a not-very-hard-to-see aspect of quantum mechanics, which nevertheless hasn’t received a great deal of attention in the literature. It is a phenomenon that should be experimentally observable, although as far as I know it hasn’t yet been; we propose a simple experiment to do so.

The first point here is well-accepted and completely obvious to anyone who understands Many-Worlds. The second is much less well-known, and it’s what Jackie and I wrote about. I’m going to try to make this post accessible to folks who don’t know QM, but sometimes it’s hard to make sense without letting the math be the math.

First let’s think about energy in classical mechanics. You have a system characterized by some quantities like position, momentum, angular momentum, and so on, for each moving part within the system. Given some facts of the external environment (like the presence of gravitational or electric fields), the energy is simply a function of these quantities. You have for example kinetic energy, which depends on the momentum (or equivalently on the velocity), potential energy, which depends on the location of the object, and so on. The total energy is just the sum of all these contributions. If we don’t explicitly put any energy into the system or take any out, the energy should be conserved — i.e. the total energy remains constant over time.
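As a quick numerical sketch of that classical bookkeeping (a hypothetical mass-on-a-spring example, not anything from the paper), the following Python snippet integrates a harmonic oscillator with the leapfrog method and checks that kinetic plus potential energy stays essentially constant:

```python
# Classical energy conservation for a mass on a spring, in arbitrary units.
# The total energy E = p^2/2m + k x^2/2 is a function of the state (x, p),
# and a good integrator (leapfrog, unlike naive Euler) keeps it constant.
m, k = 1.0, 1.0          # mass and spring constant
x, p = 1.0, 0.0          # initial position and momentum
dt = 0.01                # time step

def energy(x, p):
    """Kinetic plus potential energy of the oscillator."""
    return p**2 / (2 * m) + k * x**2 / 2

E0 = energy(x, p)        # initial total energy
for _ in range(10_000):  # integrate for many oscillation periods
    p -= 0.5 * dt * k * x   # half kick (force = -k x)
    x += dt * p / m         # drift
    p -= 0.5 * dt * k * x   # half kick
print(abs(energy(x, p) - E0))  # stays tiny: energy is conserved
```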

There are two main things you need to know about quantum mechanics. First, the state of a quantum system is no longer specified by things like “position” or “momentum” or “spin.” Those classical notions are now thought of as possible measurement outcomes, not well-defined characteristics of the system. The quantum state — or wave function — is a superposition of various possible measurement outcomes, where “superposition” is a fancy term for “linear combination.”

Consider a spinning particle. By doing experiments to measure its spin along a certain axis, we discover that we only ever get two possible outcomes, which we might call “spin-up” (\(\uparrow\)) and “spin-down” (\(\downarrow\)). But before we’ve made the measurement, the system can be in some superposition of both possibilities. We would write \(\Psi\), the wave function of the spin, as

    \[ \Psi = a \uparrow + \, b \downarrow, \]

where a and b are numerical coefficients, the “amplitudes” corresponding to spin-up and spin-down, respectively. (They will generally be complex numbers, but we don’t have to worry about that.)

The second thing you have to know about quantum mechanics is that measuring the system changes its wave function. When we have a spin in a superposition of this type, we can’t predict with certainty what outcome we will see. All we can predict is the probability, which is given by the absolute value of the amplitude squared. And once that measurement is made, the wave function “collapses” into a state that is purely what is observed. So we have

    \[ \Psi_\textrm{post-measurement} = \begin{cases} \uparrow, & \mbox{with probability } |a|^2,\\ \downarrow, & \mbox{with probability } |b|^2. \end{cases}\]

At least, that’s what we teach our students — Many-Worlds has a slightly more careful story to tell, as we’ll see.
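The textbook recipe is easy to simulate. This Python sketch (with illustrative real amplitudes a = 0.6 and b = 0.8; in general they would be complex) samples measurement outcomes according to the Born rule and recovers the probabilities |a|² and |b|²:

```python
import random
from collections import Counter

random.seed(0)  # fixed seed, for reproducibility of this illustration

# Illustrative real amplitudes, normalized so |a|^2 + |b|^2 = 0.36 + 0.64 = 1.
a, b = 0.6, 0.8

def measure_spin():
    """Sample one measurement outcome via the Born rule: the probability of
    each outcome is the absolute value of its amplitude, squared."""
    return "up" if random.random() < abs(a) ** 2 else "down"

counts = Counter(measure_spin() for _ in range(100_000))
print(counts["up"] / 100_000)    # close to |a|^2 = 0.36
print(counts["down"] / 100_000)  # close to |b|^2 = 0.64
```

After each simulated measurement the state is purely “up” or purely “down”; repeating the experiment on freshly prepared systems is what reveals the underlying probabilities.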

We can now ask about energy, but the concept of energy in quantum mechanics is a bit different from what we are used to in classical mechanics. …
