Nobody comes to these parts (at least, they shouldn’t) looking for insight into atomic physics, quantum optics, and related fields, but hearty congratulations to Serge Haroche and David Wineland for sharing this year’s Nobel Prize in Physics. Here are helpful stories by Alex Witze and Dennis Overbye.
One way of thinking about their accomplishments is to say that they’ve managed to manipulate particles one at a time: Haroche with individual photons, and Wineland with trapped ions. But what’s really exciting is that they are able to study intrinsically quantum-mechanical properties of the particles. For a long time, quantum mechanics could be treated as a black box. You had an atomic nucleus sitting there quietly, not really deviating from your classical intuition, and then some quantum magic would occur, and now you have several decay products flying away. The remoteness of the quantum effects themselves is what enabled physicists to get away with using quantum mechanics for so long without really understanding it. (Thereby enabling such monstrosities as the “Copenhagen interpretation” of quantum mechanics, and its unholy offspring, “shut up and calculate.”)
These days, in contrast, we can no longer refuse to take quantum mechanics seriously. The experimentalists have brought it up close and personal, in your face. We’re using it to build things in ways we wouldn’t have imagined in the bad old days. This prize is a great tribute to physicists who are dragging us, kicking and screaming, into a quantum-mechanical reality.
Rick@12
I don’t really understand what is “unphysical” about the collapse of a wavefunction. Rezso@19 more or less summarizes my prejudice in that regard. I can understand the simplicity of a decision tree that is discrete for a simple 50/50 spinor set, for example, but it is hard for me to understand the simplicity of a decision tree where an uncountable number of bifurcations happen at the moment of collapse, the limit of which is some irrational probability calculated for each eigenstate. Perhaps I’m relying too much on my own conceptual bias and not enough on the MWI realization that David@17 and Lee@20 seem to prefer.
Kostas #25: The experiments of Haroche and Wineland, phenomenal as they are, have zero implications one way or the other for the MWI/Copenhagen debate (nor, for that matter, for third-party candidates like Bohm 🙂 ). In other words, while doing these experiments is a tremendous challenge requiring lots of new ideas, no sane proponent of any interpretation would have made predictions for their outcomes other than the ones that were observed. To do an experiment about which the proponents of different interpretations might conceivably diverge, it would be necessary to try to demonstrate quantum interference in a much, much larger system — for example, a brain or an artificially-intelligent quantum computer. And even then, the different interpretations arguably don’t make differing predictions about what the published results of such an experiment would be. If they differ at all, it’s in what they claim, or refuse to claim, about the experiences of the subject of the experiment, while the experiment is underway. But if quantum mechanics is right, then the subject would necessarily have forgotten those experiences by the end of the experiment — since otherwise, no interference could be observed!
So, yeah, barring any change to the framework of quantum mechanics itself, it seems likely that people will be arguing about its interpretation forever. Sorry about that. 🙂
“To understand it better, as my knowledge is very basic: do the results by Haroche and Wineland mean that the “Copenhagen interpretation” is now severely weakened in favor of MWI theory?”
NO, their work is based on decoherence theory. Decoherence is a physical phenomenon; you can measure it in the lab, and it solves the “preferred basis problem” of the Copenhagen interpretation. So the Copenhagen interpretation actually became stronger.
“Decoherence is a physical phenomenon; you can measure it in the lab, and it solves the “preferred basis problem” of the Copenhagen interpretation. So the Copenhagen interpretation actually became stronger.”
What Scott said. QM became stronger — as if it needed any strengthening.
“Bohr did not need to explain how the wave function collapsed because it was never, to him, a physical wave to begin with.
Yeah, it’s just about our knowledge of the world, not the world itself…?
Decoherence explains how a superposed state “splits” into two classical ones, but I don’t see it as endorsing CI. In fact, decoherence does not predict a collapse but a splitting, which is much more in line with MWI! For the collapse you still need the measurement postulate.
Here’s the picture: there is only one wave function carrying all the information about the experimental device and the experimenter. When the measurement is performed, two “regions” of the function decohere, meaning they become unable to interact with each other, therefore seeming like two effective separate wave-functions. These we call “parallel universes”, which is probably a misnomer since we’re only talking about different parts of one wave-function. That is, decoherence taken seriously is MWI.
If you want to explain the classical outcome without resorting to MWI, you have to postulate the collapse of the original wave-function, that is, the fact that after decoherence the wave-function “chooses” one of its parts and discards the other. That seems to me completely ad hoc and unjustified, since the previous interpretation already gave the results we see!
29. Rick
“QM became stronger — as if it needed any strengthening.”
The Copenhagen interpretation was not able to derive the preferred basis; it was chosen by an ad hoc rule. This was an important problem for the interpretation.
In decoherence theory, the preferred basis of the system+measuring device Hilbert-space is generated by the unitary dynamics of the system+measuring device+environment.
So QM needed strengthening and it is stronger now.
32. David
Sorry, but I completely disagree with you.
“Decoherence explains how a superposed state “splits” into two classical ones, but I don’t see it as endorsing CI. In fact, decoherence does not predict a collapse but a splitting, which is much more in line with MWI! For the collapse you still need the measurement postulate.”
No, decoherence theory gives you a density operator, and the eigenvalues of this operator are classical probabilities.
And classical probabilities collapse by definition when you learn the outcome of a measurement. This fact has nothing to do with quantum mechanics. If I throw a classical die, then the probability distribution is (1/6, 1/6, 1/6, 1/6, 1/6, 1/6). But when I learn that the result is 3, then the probability distribution collapses to (0, 0, 1, 0, 0, 0). In my opinion, there is nothing more to explain here.
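As a minimal sketch of that point (plain Python, nothing quantum-mechanical involved), this kind of “collapse” is just conditioning a classical distribution on new information:

```python
def condition(dist, event):
    """Condition a discrete distribution on an event (a set of outcome indices):
    zero out the incompatible outcomes, then renormalize the remaining mass."""
    total = sum(p for i, p in enumerate(dist) if i in event)
    return [p / total if i in event else 0.0 for i, p in enumerate(dist)]

prior = [1 / 6] * 6                 # fair die
posterior = condition(prior, {2})   # we learn the result is 3 (index 2)
print(posterior)                    # [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
```

Nothing physical happens to the die when the distribution updates; only our information changes.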
“These we call “parallel universes”, which is probably a misnomer since we’re only talking about different parts of one wave-function. That is, decoherence taken seriosuly is MWI.”
A superposition has nothing to do with parallel universes. I can represent the motion of a classical guitar string with a Fourier series, but this doesn’t mean that there are infinitely many guitar strings in parallel universes; there is only one.
33. Rezso, decoherence explains how the system-experimenter state is split into two orthonormal states, both of which keep existing as a superposition. Nowhere in the equations are you allowed to interpret their squared amplitudes as classical probabilities, unless you explicitly invoke the Born rule. Decoherence does not solve the measurement problem. You need an interpretation on top (CI or MWI) to do that.
I think our main source of disagreement is that you assume that once you see a measurement outcome, it is obvious the probability distribution has collapsed. To me, all that’s obvious is that you’re entangled with that particular outcome and have no way of accessing the other, which nonetheless keeps existing as an inaccessible part of the system’s state, thus effectively splitting into two separate realities.
“This doesn’t mean there are infinitely many guitar strings in parallel universes”: precisely. That’s why “parallel universes” is a misnomer. There is only one wave function, which can split into many components that cannot interact.
“Rezso, decoherence explains how the system-experimenter state is split into two orthonormal states, both of which keep existing as a superposition. Nowhere in the equations are you allowed to interpret their squared amplitudes as classical probabilities, unless you explicitly invoke the Born rule.”
I’m not interpreting the squared amplitudes of a superposition as probabilities!
My whole point is that in decoherence theory, there are no probabilities at the level of the system+measuring device+environment wavefunction.
Classical probabilities only emerge after we trace over the environment. After that, we obtain a density operator which describes the system+measuring device.
The classical probabilities are the eigenvalues of the density operator.
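As a minimal sketch of that procedure (the two-qubit toy model here is an illustrative assumption, not the actual experiments): take a system qubit entangled with a one-qubit “environment”, trace out the environment, and the resulting density operator is diagonal, with eigenvalues that read as classical probabilities.

```python
import numpy as np

# Toy model: system qubit entangled with a one-qubit "environment",
# |psi> = a|0>_S |0>_E + b|1>_S |1>_E  (a perfectly decohered branch structure).
a, b = np.sqrt(0.3), np.sqrt(0.7)
psi = np.zeros(4)
psi[0], psi[3] = a, b                  # amplitudes on |00> and |11>

rho = np.outer(psi, psi.conj())        # 4x4 pure-state density matrix
rho4 = rho.reshape(2, 2, 2, 2)         # indices: (sys, env, sys', env')

# Partial trace over the environment: rho_sys[i, j] = sum_k rho[i, k, j, k]
rho_sys = np.einsum('ikjk->ij', rho4)

print(np.round(rho_sys, 3))            # diagonal: the off-diagonal terms are gone
print(np.linalg.eigvalsh(rho_sys))     # eigenvalues ~ [0.3, 0.7], classical probabilities
```

The off-diagonal (interference) terms of the reduced density operator vanish for this fully entangled state, which is the sense in which the probabilities that remain are classical.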
Sean Carroll,
would you say the Copenhagen interpretation is still the orthodox interpretation of QM?