Quantum Foundations of a Classical Universe

Greetings from sunny (for the moment) Yorktown Heights, NY, home of IBM’s Watson Research Center. I’m behind on respectable blogging (although it’s been nice to see some substantive conversation on the last couple of comment threads), and I’m at a conference all week here, so that situation is unlikely to change dramatically in the next few days.

But the conference should be great — a small workshop, Quantum Foundations of a Classical Universe. We’re going to be arguing about how we’re supposed to connect wave functions and quantum observables to the everyday world of space and stuff. I will mostly be paying attention to the proceedings, but I might occasionally interject a tweet if something interesting/amusing happens. I’m told that some sort of proceedings will eventually be put online.

Update: Trying something new here. I’ve been tweeting about the workshop under the hashtag #quantumfoundations. So here I am using Storify to collect those tweets, making a quasi-live-blog on the cheap. Let’s see if it works. …

Quantum Sleeping Beauty and the Multiverse

Hidden in my papers with Chip Sebens on Everettian quantum mechanics is a simple solution to a fun philosophical problem with potential implications for cosmology: the quantum version of the Sleeping Beauty Problem. It’s a classic example of self-locating uncertainty: knowing everything there is to know about the universe except where you are in it. (Skeptic’s Play beat me to the punch here, but here’s my own take.)

The setup for the traditional (non-quantum) problem is the following. Some experimental philosophers enlist the help of a subject, Sleeping Beauty. She will be put to sleep, and a coin will be flipped. If it comes up heads, Beauty will be awakened on Monday and interviewed; then she will (voluntarily) have all her memories of being awakened wiped out, and be put to sleep again. She will then be awakened again on Tuesday, and interviewed once again. If the coin comes up tails, on the other hand, Beauty will only be awakened on Monday. Beauty herself is fully aware ahead of time of what the experimental protocol will be.

So in one possible world (heads) Beauty is awakened twice, in identical circumstances; in the other possible world (tails) she is only awakened once. Each time she is asked a question: “What is the probability you would assign that the coin came up tails?”

Modified from a figure by Stuart Armstrong.

(Some other discussions switch the roles of heads and tails from my example.)

The Sleeping Beauty puzzle is still quite controversial. There are two answers one could imagine reasonably defending.

  • “Halfer” — Before going to sleep, Beauty would have said that the probability of the coin coming up heads or tails was one-half each. Beauty learns nothing new upon waking up, so she should still assign a probability of one-half to the coin having come up tails.
  • “Thirder” — If Beauty were told upon waking that the coin had come up heads, she would assign equal credence to it being Monday or Tuesday. But if she were told it was Monday, she would assign equal credence to the coin being heads or tails. The only consistent apportionment of credences is to assign 1/3 to each possibility, treating each possible waking-up event on an equal footing. (The quick simulation below makes this counting explicit.)
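
To make the thirder bookkeeping concrete, here is a rough Monte Carlo sketch of the protocol described above. It is purely my own illustration (nothing like it appears in the papers with Sebens); it simply counts what fraction of awakening events occur in trials where the coin came up tails.

```python
import random

def sleeping_beauty(n_trials=100_000):
    # Heads: Beauty is awakened twice (Monday and Tuesday).
    # Tails: she is awakened only once (Monday).
    tails_awakenings = 0
    total_awakenings = 0
    for _ in range(n_trials):
        coin = random.choice(["heads", "tails"])
        awakenings = 2 if coin == "heads" else 1
        total_awakenings += awakenings
        if coin == "tails":
            tails_awakenings += awakenings
    # Credence evaluated per awakening -- the thirder's accounting:
    return tails_awakenings / total_awakenings

print(sleeping_beauty())  # ~0.33, even though tails occurs in ~0.5 of the coin flips
```

Whether the per-awakening frequency or the per-coin-flip frequency is the right quantity to identify with Beauty's credence is, of course, exactly what halfers and thirders disagree about.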

The Sleeping Beauty puzzle has generated considerable interest. It’s exactly the kind of wacky thought experiment that philosophers just eat up. But it has also attracted attention from cosmologists of late, because of the measure problem in cosmology. In a multiverse, there are many classical spacetimes (analogous to the coin toss) and many observers in each spacetime (analogous to being awakened on multiple occasions). Really the SB puzzle is a test-bed for cases of “mixed” uncertainties from different sources.

Chip and I argue that if we adopt Everettian quantum mechanics (EQM) and our Epistemic Separability Principle (ESP), everything becomes crystal clear. A rare case where the quantum-mechanical version of a problem is actually easier than the classical version. …

Why Probability in Quantum Mechanics is Given by the Wave Function Squared

One of the most profound and mysterious principles in all of physics is the Born Rule, named after Max Born. In quantum mechanics, particles don’t have classical properties like “position” or “momentum”; rather, there is a wave function that assigns a (complex) number, called the “amplitude,” to each possible measurement outcome. The Born Rule is then very simple: it says that the probability of obtaining any possible measurement outcome is equal to the square of the absolute value of the corresponding amplitude. (The wave function is just the set of all the amplitudes.)

Born Rule:     \mathrm{Probability}(x) = |\mathrm{amplitude}(x)|^2.
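
As a concrete illustration, here is a minimal sketch of the rule for a toy system with three possible outcomes. The particular amplitudes are made up for the example; the only substantive step is taking the absolute value squared and checking that the results sum to one.

```python
import numpy as np

# Arbitrary complex amplitudes for three hypothetical outcomes:
amplitudes = np.array([1 + 1j, -2.0, 0.5j])
amplitudes = amplitudes / np.linalg.norm(amplitudes)  # normalize the state

# Born Rule: Probability(x) = |amplitude(x)|^2
probabilities = np.abs(amplitudes) ** 2
print(probabilities, probabilities.sum())  # the probabilities add up to 1
```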

The Born Rule is certainly correct, as far as all of our experimental efforts have been able to discern. But why? Born himself kind of stumbled onto his Rule. Here is an excerpt from his 1926 paper:

[Image: excerpt from Born's 1926 paper]

That’s right. Born’s paper was rejected at first, and when it was later accepted by another journal, he didn’t even get the Born Rule right. At first he said the probability was equal to the amplitude, and only in an added footnote did he correct it to being the amplitude squared. And a good thing, too, since amplitudes can be negative or even imaginary!

The status of the Born Rule depends greatly on one’s preferred formulation of quantum mechanics. When we teach quantum mechanics to undergraduate physics majors, we generally give them a list of postulates that goes something like this:

  1. Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space.
  2. Wave functions evolve in time according to the Schrödinger equation.
  3. The act of measuring a quantum system returns a number, known as the eigenvalue of the quantity being measured.
  4. The probability of getting any particular eigenvalue is equal to the square of the amplitude for that eigenvalue.
  5. After the measurement is performed, the wave function “collapses” to a new state in which the wave function is localized precisely on the observed eigenvalue (as opposed to being in a superposition of many different possibilities).

It’s an ungainly mess, we all agree. You see that the Born Rule is simply postulated right there, as #4. Perhaps we can do better.
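
To see how mechanically the recipe operates, here is a minimal sketch of postulates 3-5 applied to a single spin. It assumes, purely for illustration, that the observable being measured is diagonal in the basis the state is written in; nothing about the sketch is meant to be a serious formulation.

```python
import numpy as np

rng = np.random.default_rng()

def textbook_measurement(state, eigenvalues):
    """Toy version of postulates 3-5 for an observable that is diagonal
    in the basis `state` is written in (an assumption of this sketch)."""
    probs = np.abs(state) ** 2                 # postulate 4: the Born Rule
    outcome = rng.choice(len(eigenvalues), p=probs)
    collapsed = np.zeros_like(state)
    collapsed[outcome] = 1.0                   # postulate 5: collapse onto the observed outcome
    return eigenvalues[outcome], collapsed     # postulate 3: the result is an eigenvalue

# Equal superposition of spin-up (+1/2) and spin-down (-1/2):
state = np.array([1.0, 1.0]) / np.sqrt(2)
print(textbook_measurement(state, eigenvalues=np.array([+0.5, -0.5])))
```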

Of course we can do better, since “textbook quantum mechanics” is an embarrassment. There are other formulations, and you know that my own favorite is Everettian (“Many-Worlds”) quantum mechanics. (I’m sorry I was too busy to contribute to the active comment thread on that post. On the other hand, a vanishingly small percentage of the 200+ comments actually addressed the point of the article, which was that the potential for many worlds is automatically there in the wave function no matter what formulation you favor. Everett simply takes them seriously, while alternatives need to go to extra efforts to erase them. As Ted Bunn argues, Everett is just “quantum mechanics,” while collapse formulations should be called “disappearing-worlds interpretations.”)

Like the textbook formulation, Everettian quantum mechanics also comes with a list of postulates. Here it is:

  1. Quantum states are represented by wave functions, which are vectors in a mathematical space called Hilbert space.
  2. Wave functions evolve in time according to the Schrödinger equation.

That’s it! Quite a bit simpler — and the two postulates are exactly the same as the first two of the textbook approach. Everett, in other words, is claiming that all the weird stuff about “measurement” and “wave function collapse” in the conventional way of thinking about quantum mechanics isn’t something we need to add on; it comes out automatically from the formalism.

The trickiest thing to extract from the formalism is the Born Rule. That’s what Charles (“Chip”) Sebens and I tackled in our recent paper: …

Galaxies That Are Too Big To Fail, But Fail Anyway

Dark matter exists, but there is still a lot we don’t know about it. Presumably it’s some kind of particle, but we don’t know how massive it is, what forces it interacts with, or how it was produced. On the other hand, there’s actually a lot we do know about the dark matter. We know how much of it there is; we know roughly where it is; we know that it’s “cold,” meaning that the average particle’s velocity is much less than the speed of light; and we know that dark matter particles don’t interact very strongly with each other. Which is quite a bit of knowledge, when you think about it.

Fortunately, astronomers are pushing forward to study how dark matter behaves as it is distributed through the universe, and the results are interesting. We start with a very basic idea: that dark matter is cold and completely non-interacting, or at least that its self-interactions (the rate at which dark matter particles scatter off of each other) are too weak to make any noticeable difference. This is a well-defined and predictive model: ΛCDM, which includes the cosmological constant (Λ) as well as the cold dark matter (CDM). We can compare astronomical observations to ΛCDM predictions to see if we’re on the right track.

At first blush, we are very much on the right track. Over and over again, new observations come in that match the predictions of ΛCDM. But there are still a few anomalies that bug us, especially on relatively small (galaxy-sized) scales.

One such anomaly is the “too big to fail” problem. The idea here is that we can use ΛCDM to make quantitative predictions concerning how many galaxies there should be with different masses. For example, the Milky Way is quite a big galaxy, and it has smaller satellites like the Magellanic Clouds. In ΛCDM we can predict how many such satellites there should be, and how massive they should be. For a long time we’ve known that the actual number of satellites we observe is quite a bit smaller than the number predicted — that’s the “missing satellites” problem. But this has a possible solution: we only observe satellite galaxies by seeing stars and gas in them, and maybe the halos of dark matter that would ordinarily support such galaxies get stripped of their stars and gas by interacting with the host galaxy. The too big to fail problem tries to sharpen the issue, by pointing out that some of the predicted galaxies are just so massive that there’s no way they could not have visible stars. Or, put another way: the Milky Way does have some satellites, as do other galaxies; but when we examine these smaller galaxies, they seem to have a lot less dark matter than the simulations would predict.

Still, any time you are concentrating on galaxies that are satellites of other galaxies, you rightly worry that complicated interactions between messy atoms and photons are getting in the way of the pristine elegance of the non-interacting dark matter. So we’d like to check that this purported problem exists even out “in the field,” with lonely galaxies far away from big monsters like the Milky Way.

A new paper claims that yes, there is a too-big-to-fail problem even for galaxies in the field.

Is there a “too big to fail” problem in the field?
Emmanouil Papastergis, Riccardo Giovanelli, Martha P. Haynes, Francesco Shankar

We use the Arecibo Legacy Fast ALFA (ALFALFA) 21cm survey to measure the number density of galaxies as a function of their rotational velocity, Vrot,HI (as inferred from the width of their 21cm emission line). Based on the measured velocity function we statistically connect galaxies with their host halos, via abundance matching. In a LCDM cosmology, low-velocity galaxies are expected to be hosted by halos that are significantly more massive than indicated by the measured galactic velocity; allowing lower mass halos to host ALFALFA galaxies would result in a vast overestimate of their number counts. We then seek observational verification of this predicted trend, by analyzing the kinematics of a literature sample of field dwarf galaxies. We find that galaxies with Vrot,HI<25 km/s are kinematically incompatible with their predicted LCDM host halos, in the sense that hosts are too massive to be accommodated within the measured galactic rotation curves. This issue is analogous to the "too big to fail" problem faced by the bright satellites of the Milky Way, but here it concerns extreme dwarf galaxies in the field. Consequently, solutions based on satellite-specific processes are not applicable in this context. Our result confirms the findings of previous studies based on optical survey data, and addresses a number of observational systematics present in these works. Furthermore, we point out the assumptions and uncertainties that could strongly affect our conclusions. We show that the two most important among them, namely baryonic effects on the abundances and rotation curves of halos, do not seem capable of resolving the reported discrepancy.

Here is the money plot from the paper:

[Plot from the paper: observed galaxy rotation velocities vs. ΛCDM-predicted halo velocities]

The horizontal axis is the maximum circular velocity, basically telling us the mass of the halo; the vertical axis is the observed velocity of hydrogen in the galaxy. The blue line is the prediction from ΛCDM, while the dots are observed galaxies. Now, you might think that the blue line is just a very crappy fit to the data overall. But that’s okay; the points represent upper limits in the horizontal direction, so points that lie below/to the right of the curve are fine. It’s a statistical prediction: ΛCDM is predicting how many galaxies we have at each mass, even if we don’t think we can confidently measure the mass of each individual galaxy. What we see, however, is that there are a bunch of points in the bottom left corner that are above the line. ΛCDM predicts that even the smallest galaxies in this sample should still be relatively massive (have a lot of dark matter), but that’s not what we see.

If it holds up, this result is really intriguing. ΛCDM is a nice, simple starting point for a theory of dark matter, but it’s also kind of boring. From a physicist’s point of view, it would be much more fun if dark matter particles interacted noticeably with each other. We have plenty of ideas, including some of my favorites like dark photons and dark atoms. It is very tempting to think that observed deviations from the predictions of ΛCDM are due to some interesting new physics in the dark sector.

Which is why, of course, we should be especially skeptical. Always train your doubt most strongly on those ideas that you really want to be true. Fortunately there is plenty more to be done in terms of understanding the distribution of galaxies and dark matter, so this is a very solvable problem — and a great opportunity for learning something profound about most of the matter in the universe.

Particle Fever on iTunes

The documentary film Particle Fever, directed by Mark Levinson and produced by physicist David Kaplan, opened a while back and has been playing on and off in various big cities. But it’s still been out of reach for many people who don’t happen to be lucky enough to live near a theater enlightened enough to play it. No more!

The movie has just been released on iTunes, so now almost everyone can watch it from the comfort of their own computer. And watch it you should — it’s a fascinating and enlightening glimpse into the world of modern particle physics, focusing on the Large Hadron Collider and the discovery of the Higgs boson. That’s not just my bias talking, either — the film is rated 95% “fresh” on RottenTomatoes.com, which represents an amazingly strong critical consensus. (Full disclosure: I’m not in it, and I had nothing to do with making it.)

Huge kudos to Mark (who went to grad school in physics before becoming a filmmaker) and David (who did a brief stint as an undergraduate film major before switching to physics) for pulling this off. It’s great for the public appreciation of science, but it’s also just an extremely enjoyable movie, no matter what your background is. Watch it with a friend!

Why the Many-Worlds Formulation of Quantum Mechanics Is Probably Correct

I have often talked about the Many-Worlds or Everett approach to quantum mechanics — here’s an explanatory video, an excerpt from From Eternity to Here, and slides from a talk. But I don’t think I’ve ever explained as persuasively as possible why I think it’s the right approach. So that’s what I’m going to try to do here. Although to be honest right off the bat, I’m actually going to tackle a slightly easier problem: explaining why the many-worlds approach is not completely insane, and indeed quite natural. The harder part is explaining why it actually works, which I’ll get to in another post.

Any discussion of Everettian quantum mechanics (“EQM”) comes with the baggage of pre-conceived notions. People have heard of it before, and have instinctive reactions to it, in a way that they don’t have toward (for example) effective field theory. Hell, there is even an app, universe splitter, that lets you create new universes from your iPhone. (Seriously.) So we need to start by separating the silly objections to EQM from the serious worries.

The basic silly objection is that EQM postulates too many universes. In quantum mechanics, we can’t deterministically predict the outcomes of measurements. In EQM, that is dealt with by saying that every measurement outcome “happens,” but each in a different “universe” or “world.” Say we think of Schrödinger’s Cat: a sealed box inside of which we have a cat in a quantum superposition of “awake” and “asleep.” (No reason to kill the cat unnecessarily.) Textbook quantum mechanics says that opening the box and observing the cat “collapses the wave function” into one of two possible measurement outcomes, awake or asleep. Everett, by contrast, says that the universe splits in two: in one the cat is awake, and in the other the cat is asleep. Once split, the universes go their own ways, never to interact with each other again.

[Figure: branching wave function]

And to many people, that just seems like too much. Why, this objection goes, would you ever think of inventing a huge — perhaps infinite! — number of different universes, just to describe the simple act of quantum measurement? It might be puzzling, but it’s no reason to lose all anchor to reality.

To see why objections along these lines are wrong-headed, let’s first think about classical mechanics rather than quantum mechanics. And let’s start with one universe: some collection of particles and fields and what have you, in some particular arrangement in space. Classical mechanics describes such a universe as a point in phase space — the collection of all positions and velocities of each particle or field.

What if, for some perverse reason, we wanted to describe two copies of such a universe (perhaps with some tiny difference between them, like an awake cat rather than a sleeping one)? We would have to double the size of phase space — create a mathematical structure that is large enough to describe both universes at once. In classical mechanics, then, it’s quite a bit of work to accommodate extra universes, and you better have a good reason to justify putting in that work. (Inflationary cosmology seems to do it, by implicitly assuming that phase space is already infinitely big.)

That is not what happens in quantum mechanics. The capacity for describing multiple universes is automatically there. We don’t have to add anything.

We can state this with such confidence because of the fundamental reality of quantum mechanics: the existence of superpositions of different possible measurement outcomes. In classical mechanics, we have certain definite possible states, all of which are directly observable. It will be important for what comes later that the system we consider is microscopic, so let’s consider a spinning particle that can have spin-up or spin-down. (It is directly analogous to Schrödinger’s cat: cat=particle, awake=spin-up, asleep=spin-down.) Classically, the possible states are

“spin is up”

or

“spin is down”.

Quantum mechanics says that the state of the particle can be a superposition of both possible measurement outcomes. It’s not that we don’t know whether the spin is up or down; it’s that it’s really in a superposition of both possibilities, at least until we observe it. We can denote such a state like this:

(“spin is up” + “spin is down”).

While classical states are points in phase space, quantum states are “wave functions” that live in something called Hilbert space. Hilbert space is very big — as we will see, it has room for lots of stuff.
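
For concreteness, here is a tiny sketch of the spin states above written out as vectors. The particular basis labels are my own bookkeeping choice, and the 1/sqrt(2) factor just keeps the state normalized so that the Born Rule probabilities sum to one.

```python
import numpy as np

up   = np.array([1.0, 0.0])   # "spin is up"
down = np.array([0.0, 1.0])   # "spin is down"

# The superposition ("spin is up" + "spin is down"), normalized:
superposition = (up + down) / np.sqrt(2)
print(np.abs(superposition) ** 2)   # equal weight on both outcomes: [0.5 0.5]
```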

To describe measurements, we need to add an observer. It doesn’t need to be a “conscious” observer or anything else that might get Deepak Chopra excited; we just mean a macroscopic measuring apparatus. It could be a living person, but it could just as well be a video camera or even the air in a room. To avoid confusion we’ll just call it the “apparatus.”

In any formulation of quantum mechanics, the apparatus starts in a “ready” state, which is a way of saying “it hasn’t yet looked at the thing it’s going to observe” (i.e., the particle). More specifically, the apparatus is not entangled with the particle; their two states are independent of each other. So the quantum state of the particle+apparatus system starts out like this: …

Quantum Mechanics Open Course from MIT

Kids today don’t know how good they have it. Back when I was learning quantum mechanics, the process involved steps like “going to lectures.” Not only did that require physical movement from the comfort of one’s home to dilapidated lecture halls, but — get this — you actually had to be there at some pre-arranged time! Often early in the morning.

These days, all you have to do is fire up the YouTube and watch lectures on your own time. MIT has just released an entire undergraduate quantum course, lovingly titled “8.04” because that’s how MIT rolls. The prof is Allan Adams, who is generally a fantastic lecturer — so I’m suspecting these are really good even though I haven’t actually watched them all myself. Here’s the first lecture, “Introduction to Superposition.”

[Video: Lecture 1, “Introduction to Superposition”]

Allan’s approach in this video is actually based on the first two chapters of Quantum Mechanics and Experience by philosopher David Albert. I’m sure this will be very disconcerting to the philosophy-skeptics haunting the comment section of the previous post.

This is just one of many great physics courses online; I’ve previously noted Lenny Susskind’s GR course. But, being largely beyond my course-taking days myself, I haven’t really kept track. Feel free to suggest your favorites in the comments.

Physicists Should Stop Saying Silly Things about Philosophy

The last few years have seen a number of prominent scientists step up to microphones and belittle the value of philosophy. Stephen Hawking, Lawrence Krauss, and Neil deGrasse Tyson are well-known examples. To redress the balance a bit, philosopher of physics Wayne Myrvold has asked some physicists to explain why talking to philosophers has actually been useful to them. I was one of the respondents, and you can read my entry at the Rotman Institute blog. I was going to cross-post my response here, but instead let me try to say the same thing in different words.

Roughly speaking, physicists tend to have three different kinds of lazy critiques of philosophy: one that is totally dopey, one that is frustratingly annoying, and one that is deeply depressing.

  • “Philosophy tries to understand the universe by pure thought, without collecting experimental data.”

This is the totally dopey criticism. Yes, most philosophers do not actually go out and collect data (although there are exceptions). But it makes no sense to jump right from there to the accusation that philosophy completely ignores the empirical information we have collected about the world. When science (or common-sense observation) reveals something interesting and important about the world, philosophers obviously take it into account. (Aside: of course there are bad philosophers, who do all sorts of stupid things, just as there are bad practitioners of every field. Let’s concentrate on the good ones, of whom there are plenty.)

Philosophers do, indeed, tend to think a lot. This is not a bad thing. All of scientific practice involves some degree of “pure thought.” Philosophers are, by their nature, more interested in foundational questions where the latest wrinkle in the data is of less importance than it would be to a model-building phenomenologist. But at its best, the practice of philosophy of physics is continuous with the practice of physics itself. Many of the best philosophers of physics were trained as physicists, and eventually realized that the problems they cared most about weren’t valued in physics departments, so they switched to philosophy. But those problems — the basic nature of the ultimate architecture of reality at its deepest levels — are just physics problems, really. And some amount of rigorous thought is necessary to make any progress on them. Shutting up and calculating isn’t good enough.

  • “Philosophy is completely useless to the everyday job of a working physicist.”

Now we have the frustratingly annoying critique. Because: duh. If your criterion for “being interesting or important” comes down to “is useful to me in my work,” you’re going to be leading a fairly intellectually impoverished existence. Nobody denies that the vast majority of physics gets by perfectly well without any input from philosophy at all. (“We need to calculate this loop integral! Quick, get me a philosopher!”) But it also gets by without input from biology, and history, and literature. Philosophy is interesting because of its intrinsic interest, not because it’s a handmaiden to physics. I think that philosophers themselves sometimes get too defensive about this, trying to come up with reasons why philosophy is useful to physics. Who cares?

Nevertheless, there are some physics questions where philosophical input actually is useful. Foundational questions, such as the quantum measurement problem, the arrow of time, the nature of probability, and so on. Again, a huge majority of working physicists don’t ever worry about these problems. But some of us do! And frankly, if more physicists who wrote in these areas would make the effort to talk to philosophers, they would save themselves from making a lot of simple mistakes.

  • “Philosophers care too much about deep-sounding meta-questions, instead of sticking to what can be observed and calculated.”

Finally, the deeply depressing critique. Here we see the unfortunate consequence of a lifetime spent in an academic/educational system that is focused on taking ambitious dreams and crushing them into easily-quantified units of productive work. The idea is apparently that developing a new technique for calculating a certain wave function is an honorable enterprise worthy of support, while trying to understand what wave functions actually are and how they capture reality is a boring waste of time. I suspect that a substantial majority of physicists who use quantum mechanics in their everyday work are uninterested in or downright hostile to attempts to understand the quantum measurement problem.

This makes me sad. I don’t know about all those other folks, but personally I did not fall in love with science as a kid because I was swept up in the romance of finding slightly more efficient calculational techniques. Don’t get me wrong — finding more efficient calculational techniques is crucially important, and I cheerfully do it myself when I think I might have something to contribute. But it’s not the point — it’s a step along the way to the point.

The point, I take it, is to understand how nature works. Part of that is knowing how to do calculations, but another part is asking deep questions about what it all means. That’s what got me interested in science, anyway. And part of that task is understanding the foundational aspects of our physical picture of the world, digging deeply into issues that go well beyond merely being able to calculate things. It’s a shame that so many physicists don’t see how good philosophy of science can contribute to this quest. The universe is much bigger than we are and stranger than we tend to imagine, and I for one welcome all the help we can get in trying to figure it out.

Quantum Mechanics In Your Face

(Title shamelessly stolen from Sidney Coleman.) I’m back after a bit of insane traveling, looking forward to resuming regular blogging next week. Someone has to weigh in about BICEP, right?

In the meantime, here’s a video to keep you occupied: a recording of the World Science Festival panel on quantum mechanics I had previously mentioned.

[Video: Measure for Measure: Quantum Physics and Reality]

David Albert is defending dynamical collapse formulations, Sheldon Goldstein stands up for hidden variables, I am promoting the many-worlds formulation, and Rüdiger Schack is in favor of QBism, a psi-epistemic approach. Brian Greene is the moderator, and has brought along some fancy animations. It’s an hour and a half of quantal goodness, so settle in for quite a ride.

Just as the panel was happening, my first official forays into quantum foundations were appearing on the arxiv: a paper with Charles Sebens on deriving the Born Rule in Everettian quantum mechanics, as well as a shorter conference proceeding.

No time to delve into the details here, but I promise to do so soon!

The Common Core: How Bill Gates Changed America

James Joyner points us to a Washington Post article on how Bill Gates somewhat single-handedly pulled off a dramatic restructuring of American public education, via promoting the Common Core standards. There is much that is fascinating here, including the fact that a billionaire with a plan can get things done in our fractured Republic a lot more easily than our actual governments (plural because education is still largely a local matter) ever could. Apparently, Gates got a pitch in 2008 from a pair of education reformers who wanted to see uniform standards for US schools. Gates thought about it, then jumped in with two feet (and a vast philanthropic and lobbying apparatus). Within two years, 45 states and the District of Columbia had fully adopted the Common Core Standards. The idea enjoyed bipartisan support; only quite recently, when members of the Tea Party realized that all this happened under Obama’s watch, have Republicans taken up the fight against it.

Personally, I’m completely in favor of national curricula and standards. Indeed, I’d like to go much further, and nationalize the schools, so that public spending on students in rural Louisiana is just as high as that in wealthy suburbs in the Northeast. I’m also not dead set against swift action by small groups of people who are willing to get things done, rather than sit around for decades trading white papers and town hall meetings. (I even helped a bit with such non-democratic action myself, and suffered the attendant abuse with stoic calm.)

What I don’t know, since I simply am completely unfamiliar with the details, is whether the actual Common Core initiative (as opposed to the general idea of a common curriculum) is a good idea. I know that some people are very much against it — so much so that it’s difficult to find actual information about it, since emotions run very high, and you are more likely to find either rampant boosterism or strident criticism. Of course you can look up what the standards are, both in English Language Arts and in Mathematics (there don’t seem to be standards for science, history, or social studies). But what you read is so vague as to be pretty useless. For example, the winningly-named “CCSS.ELA-LITERACY.CCRA.W.1” standard reads

Write arguments to support claims in an analysis of substantive topics or texts using valid reasoning and relevant and sufficient evidence.

That sounds like a good idea! But it doesn’t translate unambiguously into something teachable. The devil is in the implementation.

So — anyone have any informed ideas about how it works in practice, and whether it’s helpful and realistic? (Early results seem to be mildly promising.) I worry from skimming some of the information that there seems to be an enormous emphasis on “assessment,” which presumably translates into standardized testing. I recognize the value of such testing in the right context, but also have the feeling that it’s already way overdone (in part because of No Child Left Behind), and the Common Core just adds another layer of requirements. I’d rather have students and schools spend more time on teaching and less time on testing, all else being equal.
