
Philosophy and Cosmology: Slow Live-Blogging

Greetings from Oxford, a charming little town across the Atlantic with its very own university. It’s in the United Kingdom, a small island nation recognized for its steak and kidney pie and other contributions to world cuisine. What you may not know is that the UK has also produced quite a few influential philosophers and cosmologists, making it an ideal venue for a small conference that aims to bring these two groups together.

The proximate reason for this particular conference is George Ellis’s 70th birthday party. Ellis is of course a well-known general relativist, cosmologist, and author. Although the birthday conference for a respected scientist is a well-established tradition, Ellis wanted a focused, interdisciplinary meeting that might actually be useful, rather than just a big party for all of his friends and collaborators. It’s to his credit that the organizers invited as many multiverse-boosters as multiverse-skeptics. (I would go for the party, myself.)

George is currently very interested in, and concerned by, the popularity of the multiverse idea in modern cosmology. He’s worried, as many others are (not me, especially), that the idea of a multiverse is intrinsically untestable, and represents a break with the standard idea of what constitutes “science.” So he and the organizing committee have asked a collection of scientists and philosophers with very different perspectives on the idea to come together and hash things out.

It appears as if there is working wireless here in the conference room, so I’ll make some attempt to blog very briefly about what the different speakers are saying. If all goes well, I’ll be updating this post over the next three days. I won’t always agree with everyone, of course, but I’ll try to fairly represent what they are saying.

Saturday night:

Like any good British undertaking, we begin in the pub. I introduce some of the philosophers to Andrei Linde, who entertains us by giving an argument for solipsism based on the Wheeler-DeWitt equation. The man can command a room, that’s all I’m saying.

(If you must know the argument: the ordinary Schrödinger equation tells us that the rate of change of the wave function is given by the energy. But for a closed universe in general relativity, the energy is exactly zero — so there is no time evolution; nothing happens. But you can divide the universe into “you” and “the rest.” Your own energy is not zero, so the energy of the rest of the universe is not zero, and therefore it obeys the standard Schrödinger equation with ordinary time evolution. So the only way to make the universe real is to consider yourself separate from it.)
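
For the mathematically inclined, here is the skeleton of the argument in symbols (my reconstruction of the standard version, not a transcript of Andrei):

```latex
% Ordinary quantum mechanics: the energy drives time evolution.
i\hbar\,\frac{\partial\Psi}{\partial t} = \hat{H}\,\Psi

% Closed universe in general relativity: the total energy vanishes,
% so the wave function of the universe doesn't evolve at all
% (the Wheeler-DeWitt equation).
\hat{H}_{\rm total}\,\Psi_{\rm universe} = 0

% Split the universe into ``you'' plus ``the rest'':
\hat{H}_{\rm total} = \hat{H}_{\rm you} + \hat{H}_{\rm rest}

% Your energy is nonzero, so the energy of the rest is minus yours,
% and ``the rest'' obeys an ordinary Schrodinger equation with
% respect to time as read off your own clock.
\hat{H}_{\rm rest}\,\psi_{\rm rest} = -\hat{H}_{\rm you}\,\psi_{\rm rest} \neq 0
```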

Sunday morning: Cosmology

9:00: Ellis gives the opening remarks. Cosmology is in a fantastic data-rich era, but it is also coming up against the limits of measurement. In the quest for ever deeper explanation, increasingly speculative proposals are being made, which are sometimes untestable even in principle. The multiverse is the most obvious example.

Question: are these proposals science? Or do they attempt to change the definition of what “science” is? Does the search for explanatory power trump testability?

The questions aren’t only relevant to the multiverse. We need to understand the dividing line between science and non-science to properly classify standard cosmology, inflation, natural selection, Intelligent Design, astrology, parapsychology. Which are science?

9:30: Joe Silk gives an introduction to the state of cosmology today. Just to remind us of where we really are, he concentrates on the data-driven parts of the field: dark matter, primordial nucleosynthesis, background radiation, large-scale structure, dark energy, etc.

Silk’s expertise is in galaxy formation, so he naturally spends a good amount of time on that. Theory and numerical simulations are gradually making progress on this tough problem. One outstanding puzzle: why are spiral galaxies so thin? Probably improved simulations will crack this before too long.

10:30: Andrei Linde talks about inflation and the multiverse. The story is laden with irony: inflation was invented to help explain why the universe looks uniform, but taking it seriously leads you to eternal inflation, in which space on extremely large (unobservable) scales is highly non-uniform — the multiverse. The mechanism underlying eternal inflation is just the same quantum fluctuations that give rise to the density fluctuations observed in large-scale structure and the microwave background. The fluctuations we see are small, but at earlier times (and therefore on larger scales) they could easily have been very large — large enough to give rise to different “pocket universes” with different local laws of physics.
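
The rough criterion for when inflation becomes eternal (my gloss on the standard lore, not Linde’s slides): during each Hubble time the field rolls classically downhill a bit, and also takes a quantum jump of random sign; inflation reproduces itself when the jumps win.

```latex
% Classical roll of the inflaton field over one Hubble time 1/H:
\Delta\phi_{\rm classical} \simeq \frac{\dot{\phi}}{H}

% Typical quantum fluctuation accumulated over the same interval:
\delta\phi_{\rm quantum} \simeq \frac{H}{2\pi}

% Eternal inflation: fluctuations dominate the classical roll,
\frac{\delta\phi_{\rm quantum}}{\Delta\phi_{\rm classical}}
  \simeq \frac{H^2}{2\pi\,\dot{\phi}} \gtrsim 1 ,
% so somewhere there is always a region still inflating.
```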

Linde represents the strong pro-multiverse view: “An enormously large number of possible types of compactification which exist e.g. in the theory of superstrings should be considered a virtue.” He said that in 1986, and continues to believe it. String theorists were only forced to take all these compactifications seriously by the intervention of a surprising experimental result: the acceleration of the universe, which implied that there was no magic formula that set the vacuum energy exactly to zero. Combining the string theory landscape with eternal inflation gives life to the multiverse, which among other things offers an anthropic solution to the cosmological constant problem.

Still, there are issues, especially the measure problem: how do you compare different quantities when they’re all infinitely big? (E.g., the number of different kinds of observers in the multiverse.) Linde doesn’t think any of the currently proposed measures are completely satisfactory, including the ones he’s invented. Boltzmann brains, in particular, are a big problem.
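
A toy version of why the measure matters (my illustration, purely for concreteness): imagine observers of types A and B are both created at exponentially growing rates in the multiverse, and we regulate the infinities with a time cutoff.

```latex
% Observers of each type born before a cutoff time t_c:
N_A(t_c) \propto e^{\gamma_A t_c}, \qquad N_B(t_c) \propto e^{\gamma_B t_c}

% Their relative abundance at the cutoff:
\frac{N_A(t_c)}{N_B(t_c)} \propto e^{(\gamma_A - \gamma_B)\,t_c}
  \;\to\; 0 \ \text{or}\ \infty \quad \text{as } t_c \to \infty

% Both totals are infinite; the ``probability of being type A''
% depends entirely on the cutoff and on how the time coordinate
% is defined across the multiverse. That's the measure problem.
```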

Another problem is what we mean by “us,” when we’re trying to predict “what observers like us are likely to see.” Are we talking about carbon-based life, or information-processing computers? Help, philosophers!

Linde thinks that the multiverse shows tendencies, although not cut-and-dried predictions. It prefers a cosmological constant to quintessence, and increases the probability that axions rather than WIMPs are the dark matter. Findings to the contrary would be blows to the multiverse idea. Most strongly, without extreme fine-tuning, the multiverse would not be able to simultaneously explain large tensor modes in the CMB and low-energy supersymmetry.


Planck First Light

If you haven’t heard that Planck has seen first light, you haven’t been reading the right cosmology blogs: see Andrew Jaffe, Peter Coles, and Planck’s own Twitter feed. Planck is of course the European Space Agency’s microwave background satellite experiment, which was launched back in May. Since then it’s been spinning in space about once every minute, doing a leisurely scan of the sky. The survey is not nearly completed, but all systems seem to be running smoothly.

Here’s the region it’s looked at so far, superimposed over a visible-light map of the Milky Way:

[Image: Planck’s first-light survey strip superimposed on a visible-light map of the Milky Way]

And here’s a zoom-in on one region, as seen in two different wavelengths:

[Image: close-up of one region of the survey, as seen at two different wavelengths]

So far the scientists are playing with the data to learn about the instrument, not so much about the microwave background. Andrew predicts a big splash of papers from Planck in August 2012. We’ll be looking for a bunch of things: Are the overall features of the CMB consistent with predictions from inflation? Are there “non-Gaussian” features indicating extra power in some regions? Is the strength of the perturbations equal on all scales, or does it gradually diminish at smaller distances? Did we learn anything surprising from the polarization, such as tensor modes that could come from inflation or an overall rotation that could come from quintessence? Does the universe have a preferred direction?
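
For the scale-dependence question, the standard parametrization is the primordial power spectrum and its spectral index (textbook definitions, nothing Planck-specific):

```latex
% Power in primordial density perturbations as a function of
% wavenumber k (smaller distances = larger k):
P_s(k) \propto k^{\,n_s - 1}

% n_s = 1: equal strength on all scales (exactly scale-invariant);
% n_s slightly less than 1: power gradually diminishing at smaller
% distances, as simple inflationary models predict.

% Tensor modes are quoted as a ratio to the scalar power:
r \equiv \frac{P_t(k)}{P_s(k)}
```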

I’m sure it will be front-page news, whatever that news turns out to be. Stay tuned.


Dark Atoms

Almost a year ago we talked about dark photons — the idea that there was a new force, almost exactly like ordinary electromagnetism, except that it coupled only to dark matter and not to ordinary matter. It turns out to be surprisingly hard to rule such a proposal out on the basis of known astrophysical data, although I suspect that it could be tightly constrained if people did high-precision simulations of the evolution of structure in such a model.

In fact our original idea wasn’t merely dark photons; it was dark atoms — having dark matter bear a close family resemblance to ordinary matter, all the way to having most of its mass be in the form of composite objects consisting of one positively-charged dark particle (a “dark proton”) and one negatively-charged dark particle (a “dark electron”). We thought about it a very tiny bit, but didn’t pursue the idea and only mentioned it in passing at the very end of our paper. There is an informal rule in theoretical physics that you should only invoke the tooth fairy (propose an extremely speculative idea or hope for some possible but unprovable result) once per paper, so we stuck with only a single kind of charged dark particle.

But once someone invokes the tooth fairy in their paper, anyone who writes another paper gets to invoke the tooth fairy for themselves. (That’s just how the rule works.) And the good news is that it’s now been done:

Atomic Dark Matter
Authors: David E. Kaplan, Gordan Z. Krnjaic, Keith R. Rehermann, Christopher M. Wells

Abstract: We propose that dark matter is dominantly comprised of atomic bound states. We build a simple model and map the parameter space that results in the early universe formation of hydrogen-like dark atoms. We find that atomic dark matter has interesting implications for cosmology as well as direct detection: Protohalo formation can be suppressed below $M_{\rm proto} \sim 10^3\text{--}10^6\, M_\odot$ for weak scale dark matter due to Ion-Radiation interactions in the dark sector. Moreover, weak-scale dark atoms can accommodate hyperfine splittings of order $100~\mathrm{keV}$, consistent with the inelastic dark matter interpretation of the DAMA data while naturally evading direct detection bounds.

(Note that one of the authors has been a guest-blogger here at CV.) It looks like a great paper, and they seem to have done a careful job of chasing down some of the interesting implications of dark atoms. In fact the idea might be more robust than the one in our paper; the fact that dark atoms are neutral lets you slip loose of some of the more inconvenient observational bounds. And the last sentence of the abstract points to an intriguing consequence: by giving the dark matter particles some structure, you might be able to explain the tantalizing DAMA results while remaining consistent with other (thus far negative) direct searches for dark matter. Stay tuned; that dark sector may turn out to be a pretty exciting place after all.
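
To get a rough feel for the knobs in such a model, one can transcribe the usual hydrogen formulas with a dark fine-structure constant and dark masses (a back-of-the-envelope sketch on my part, not the paper’s detailed analysis):

```latex
% Binding energy of a hydrogen-like dark atom, with dark coupling
% \alpha_D and reduced mass \mu = m_e' m_p'/(m_e' + m_p'):
E_{\rm bind} = \tfrac{1}{2}\,\alpha_D^2\,\mu c^2
% (ordinary hydrogen: \alpha \approx 1/137 and \mu \approx m_e
% give the familiar 13.6 eV)

% Hyperfine splitting, further suppressed relative to the binding
% energy (up to order-one numerical factors):
\Delta E_{\rm hf} \sim \alpha_D^2\,\frac{m_e'}{m_p'}\,E_{\rm bind}
% Heavier, more strongly coupled dark constituents can push this
% up toward the ~100 keV scale the abstract invokes for DAMA.
```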


Dark Energy: Still a Puzzle

The arrow of time wasn’t the only big science problem garnering media attention last week: there was also a claim that dark energy doesn’t exist. See Space.com (really just a press release), USA Today, and a bizarre op-ed in the Telegraph saying that maybe this means global warming isn’t real either, so there.

The reports are referring to a paper by mathematicians Blake Temple and Joel Smoller, which is behind a paywall at PNAS but publicly available on the arXiv. (And folks wonder why journals are dying.) Now, some of my best friends are mathematicians, and in this paper they do the kind of thing that mathematicians are trained to do: they solve some equations. In particular, they solve Einstein’s equation of general relativity, for the particular case of a giant spherical “wave” in the universe. So instead of a universe that looks basically the same (on large scales) throughout space, they consider a universe with a special point, so that the density changes as you move away from that point.

Then — here’s the important part — they put the Earth right at that point, or close enough. And then they say, “Hey! In a universe like that, if we look at how fast distant galaxies and supernovae are receding from us, we can fit the data without any dark energy!” That is, they can cook up a result for distance vs. redshift in this model that looks like it would in a smooth model with dark energy, even though there’s nothing but ordinary (and dark) matter in their cosmology.

There are three things to note about this result. First, it’s already known; see e.g. Kolb, Marra, and Matarrese, or Clifton, Ferreira, and Land. In fact, I would argue that it’s kind of obvious. When we observe distant galaxies, we don’t see the full three dimensions of space at every moment in time; we can only look back along our own light cone. If the universe isn’t homogeneous, but is only spherically symmetric around our location, I can arrange the velocities of galaxies along that past light cone to do whatever I want. We could have them spell out “Cosmic Variance” in Morse code if we so desired. So it’s not very surprising we could reconstruct the observed distance vs. redshift curve of an accelerating universe; you don’t have to solve Einstein’s equation to do that.
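
For reference, the curve any such model has to mimic is the distance vs. redshift relation of a smooth universe with dark energy (standard FRW formulas for the spatially flat case):

```latex
% Hubble rate in a flat universe with matter and cosmological constant:
H(z) = H_0\sqrt{\Omega_m (1+z)^3 + \Omega_\Lambda}

% Luminosity distance along our past light cone:
d_L(z) = (1+z)\, c \int_0^z \frac{dz'}{H(z')}

% Supernova data are well fit by \Omega_m \approx 0.3,
% \Omega_\Lambda \approx 0.7; the spherical-wave models instead tune
% a radial density profile to trace the same d_L(z) with
% \Omega_\Lambda = 0.
```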

Second, do you really want to put us right at the center of the universe? That’s hard to rule out on the basis of data — although people are working on it. So it’s definitely a possibility to keep in mind. But it seems a bit of a backwards step from Copernicus and all that. Most of us would like to save this as a move of last resort, at least while there are alternatives available.

Third, there are perfectly decent alternatives available! Namely, dark energy, and in particular the cosmological constant. This idea not only fits the data from supernovae concerning the distance vs. redshift relation, but a bunch of other data as well (cosmic microwave background, cluster abundances, baryon acoustic oscillations, etc.), which this new paper doesn’t bother with. People should not be afraid of dark energy. Remember that the problem with the cosmological constant isn’t that it’s mysterious and ill-motivated — it’s that it’s too small! The naive theoretical prediction is larger than what’s required by observation by a factor of 10^120. That’s a puzzle, no doubt, but setting it equal to zero doesn’t make the puzzle go away — then it’s smaller than the theoretical prediction by a factor of infinity.
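
For the curious, here is where the infamous 10^120 comes from (the standard rough estimate):

```latex
% Observed vacuum (dark energy) density, in natural units:
\rho_{\rm obs} \sim \left(10^{-3}\ {\rm eV}\right)^4

% Naive quantum-field-theory estimate: vacuum energy set by the
% Planck scale, E_{\rm Pl} \sim 10^{18}\ {\rm GeV}
%            = 10^{30} \times 10^{-3}\ {\rm eV}:
\rho_{\rm theory} \sim E_{\rm Pl}^4

% Mismatch:
\frac{\rho_{\rm theory}}{\rho_{\rm obs}} \sim \left(10^{30}\right)^4 = 10^{120}
```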

The cosmological constant should exist, and it fits the data. It might not be the right answer, and we should certainly keep looking for alternatives. But my money is on Λ.


Why Don’t We Know When the LHC Will Restart?

We’re all waiting for the LHC to restart. Current plans call for collisions later this year, but at lower energies than originally hoped.

Why is it so hard to say for sure? Here’s a nice article in the CERN Bulletin that lays out some of the difficulties.

Due to the huge amount of inter-dependency between different areas of work in the LHC, even a small change can necessitate a complete overhaul of the schedule. For example, something as simple as cleaning a water cooling tower – required regularly by Swiss law to prevent Legionella – has a huge impact on the planning: “When you clean the water tanks it means we don’t have water-cooling for the compressors, that means we can’t run the cryogenics, so the temperature starts to go up,” explains Myers. “If a sector gets above 100 K, then the expansion effects of heating can cause problems, and we could have to replace parts.”

That may be cold comfort (get it? cold comfort!), but it’s the real world. I have no strong opinions about the job CERN is doing, except to recognize that this is the most complicated machine ever built, so patience is probably called for. The particles and interactions are going to be the same next year as they were last year. (Or if they’re not, that would be even more interesting.)


The Arrow of Time: Still a Puzzle

A paper just appeared in Physical Review Letters with a provocative title: “A Quantum Solution to the Arrow-of-Time Dilemma,” by Lorenzo Maccone. Actually just “Quantum…”, not “A Quantum…”, because among the various idiosyncrasies of PRL is that paper titles do not begin with articles. Don’t ask me why.

But a solution to the arrow-of-time dilemma would certainly be nice, quantum or otherwise, so the paper has received a bit of attention (Focus, Ars Technica). Unfortunately, I don’t think this paper qualifies.

The arrow-of-time dilemma, you will recall, arises from the tension between the apparent reversibility of the fundamental laws of physics (putting aside collapse of the wave function for the moment) and the obvious irreversibility of the macroscopic world. The latter is manifested by the growth of entropy with time, as codified in the Second Law of Thermodynamics. So a solution to this dilemma would be an explanation of how reversible laws on small scales can give rise to irreversible behavior on large scales.

The answer isn’t actually that mysterious, it’s just unsatisfying. Namely, the early universe was in a state of extremely low entropy. If you accept that, everything else follows from the nineteenth-century work of Boltzmann and others. The problem then is, why should the universe be like that? Why should the state of the universe be so different at one end of time than at the other? Why isn’t the universe just in a high-entropy state almost all the time, as we would expect if its state were chosen randomly? Some of us have ideas, but the problem is certainly unsolved.

So you might like to do better, and that’s what Maccone tries to do in this paper. He forgets about cosmology, and tries to explain the arrow of time using nothing more than ordinary quantum mechanics, plus some ideas from information theory.

I don’t think that there’s anything wrong with the actual technical results in the paper — at a cursory glance, it looks fine to me. What I don’t agree with is the claim that it explains the arrow of time. Let’s just quote the abstract in full:

The arrow of time dilemma: the laws of physics are invariant for time inversion, whereas the familiar phenomena we see everyday are not (i.e. entropy increases). I show that, within a quantum mechanical framework, all phenomena which leave a trail of information behind (and hence can be studied by physics) are those where entropy necessarily increases or remains constant. All phenomena where the entropy decreases must not leave any information of their having happened. This situation is completely indistinguishable from their not having happened at all. In the light of this observation, the second law of thermodynamics is reduced to a mere tautology: physics cannot study those processes where entropy has decreased, even if they were commonplace.

So the claim is that entropy necessarily increases in “all phenomena which leave a trail of information behind” — i.e., any time something happens for which we can possibly have a memory of it happening. So if entropy decreases, we can have no recollection that it happened; therefore we always find that entropy seems to be increasing. Q.E.D.

But that doesn’t really address the problem. The fact that we “remember” the direction of time in which entropy is lower, if any such direction exists, is pretty well-established among people who think about these things, going all the way back to Boltzmann. (Chapter Nine.) But in the real world, we don’t simply see entropy increasing; we see it increase by a lot. The early universe has an entropy of 10^88 or less; the current universe has an entropy of 10^101 or more, for an increase of more than a factor of 10^13 — a giant number. And it increases in a consistent way throughout our observable universe. It’s not just that we have an arrow of time — it’s that we have an arrow of time that stretches coherently over an enormous region of space and time.

This paper has nothing to say about that. If you don’t have some explanation for why the early universe had a low entropy, you would expect it to have a high entropy. Then you would expect to see small fluctuations around that high-entropy state. And, indeed, if any complex observers were to arise in the course of one of those fluctuations, they would “remember” the direction of time with lower entropy. The problem is that small fluctuations are much more likely than large ones, so you predict with overwhelming confidence that those observers should find themselves in the smallest fluctuations possible, freak observers surrounded by an otherwise high-entropy state. They would be, to coin a pithy phrase, Boltzmann brains. Back to square one.
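
The statistical heart of that argument, in one formula (Boltzmann’s, in modern dress):

```latex
% Probability of a closed system fluctuating from its equilibrium
% (maximum) entropy S_max down to a macrostate of entropy S:
P(S) \sim e^{(S - S_{\rm max})/k_B}

% ``Equilibrium everywhere except one brain'' has vastly higher
% entropy than ``an entire low-entropy early universe'':
S_{\rm brain\ fluctuation} \gg S_{\rm early\ universe}
% so the lone-brain fluctuation is favored by an exponentially
% enormous factor. That's the Boltzmann-brain problem.
```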

Again, everything about Maccone’s paper seems right to me, except for the grand claims about the arrow of time. It looks like a perfectly reasonable and interesting result in quantum information theory. But if you assume a low-entropy initial condition for the universe, you don’t really need any such fancy results — everything follows the path set out by Boltzmann years ago. And if you don’t assume that, you don’t really explain our universe. So the dilemma lives on.


Barely Excited

The purpose of the LIGO experiment is to search for gravitational waves in the universe. They haven’t found any yet, but no good big-science experiment would be complete without a few cool spinoffs. The LIGO folks have an especially cool one: they’ve taken a kilogram-scale pendulum and “cooled” it so effectively that it’s almost in its quantum-mechanical ground state. To be honest, I’m not exactly sure what this is good for, but it’s really cool. Ha ha, little physics humor there, get it? “Cool.”

LIGO works by bouncing lasers down a pair of evacuated tubes four kilometers in length. The laser beams bounce off a mirror suspended from a pendulum, and then recombine back at the source, where you look for tiny changes in the phase of the light wave. If a gravitational wave passes by, it will gently disturb the pendulums, and the length the laser has to travel down one or the other tube will be slightly changed, leading to a detectable shift in the phase. But obviously they’re looking for an extremely tiny shift, so it’s important that those mirrors not be jiggling around just due to random noise. Thus, they need to be kept cool; a warm mirror will be jiggling just from its thermal motion, even before we start worrying about noisy trucks passing by the observatory.
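
To get a sense of scale (round numbers from memory; treat them as illustrative): a gravitational wave of strain h changes the length of an arm by an amount proportional to h.

```latex
% Arm-length change induced by a wave of strain h:
\Delta L \sim \tfrac{1}{2}\, h L

% For LIGO's L = 4 km arms and a target strain h \sim 10^{-21}:
\Delta L \sim \tfrac{1}{2} \times 10^{-21} \times 4\times 10^{3}\ {\rm m}
  \sim 2\times 10^{-18}\ {\rm m}
% roughly a thousandth of a proton diameter -- hence the obsession
% with keeping the mirrors quiet.
```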

Physicists are pretty good at getting things to be cold; they can cool down collections of atoms to under a billionth of a Kelvin (room temperature is about 300 Kelvin). But there we’re talking about relatively small collections of atoms, maybe a million at a time. Here we’re talking about a kilogram, which is a honking big number of atoms, something like 10^25. And the LIGO folks have cooled the oscillator down to about a millionth of a Kelvin, which is pretty cold.

The secret is that they don’t cool the entire mirror down to that low temperature. That would mean taking all of those 1025 atoms and putting them close to their quantum-mechanical ground state. But instead of thinking of the mirror as a collection of individual atoms, you can think of it as a single “center of mass,” plus a bunch of individual displacements from that center for each of the atoms. Then forget about the individual atoms, and just worry about that center of mass. That’s what we do all the time in the real world; when you tell someone where you are, you give them a single position — you don’t individually specify the location of every atom in your body.

We can think of the center of mass as an isolated “degree of freedom,” and talk about its quantum state apart from that of all the other atoms. Ordinarily, if a big collection of atoms is in thermal equilibrium, each of its degrees of freedom is “excited” above its ground state by a similar amount. Every physicist learns about the simple harmonic oscillator, which is one of the most basic physical systems we can study — it’s just a pendulum. In quantum mechanics, the nice thing about such an oscillator is that it has discrete energy levels, equally spaced, that depend only on the frequency of the pendulum. There is a ground state with just a tiny bit of energy (the “zero-point energy”), then a bunch of higher energy levels, from the first excited state all the way up to infinity. The energy of the Nth excited state is just (N+1/2) times Planck’s constant, times the frequency of the oscillator.
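
In symbols (textbook quantum mechanics, nothing LIGO-specific):

```latex
% Energy levels of a quantum harmonic oscillator of angular
% frequency \omega:
E_N = \left(N + \tfrac{1}{2}\right)\hbar\omega, \qquad N = 0, 1, 2, \ldots

% In thermal equilibrium at temperature T, with k_B T \gg \hbar\omega,
% the mean excitation level is approximately
\bar{N} \approx \frac{k_B T}{\hbar\omega}
```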

What the LIGO folks have done is to isolate that single degree of freedom, the center of mass of the oscillator, and gently coax it into a very low quantum state: N is about 200, whereas at room temperature N would be about 40 billion. An amazing feat, for a collection of that many atoms.
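
A quick sanity check on those numbers (the ~140 Hz mode frequency is my assumption, chosen because it reproduces the figures quoted above):

```python
import math

hbar = 1.0545718e-34   # reduced Planck constant, J*s
k_B = 1.380649e-23     # Boltzmann constant, J/K

f = 140.0              # oscillator mode frequency in Hz (assumed value)
omega = 2 * math.pi * f

# Mean excitation level N ~ k_B T / (hbar * omega) at room temperature:
N_room = k_B * 300.0 / (hbar * omega)
print(f"N at 300 K: {N_room:.2e}")     # ~4e10, i.e. tens of billions

# Effective temperature corresponding to N ~ 200:
T_cold = 200 * hbar * omega / k_B
print(f"T for N=200: {T_cold:.2e} K")  # ~1e-6 K, a millionth of a Kelvin
```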

So what can you do with it? Don’t ask me. But the LIGO scientists know they have something interesting on their hands, and are thinking of ways they can take advantage of this approach to the quantum realm. It’s different from, but complementary to, the strategy of putting entire macroscopic objects in a coherent quantum state. (Notice that the linked article is still talking about 10^10 atoms, not 10^25 atoms.) The LIGO mirror as a whole is still resolutely classical, even if the center-of-mass degree of freedom is near its quantum ground state. But taking big things and pushing them toward the quantum realm is a growth industry these days, and I’m sure we’ll be hearing more about clever applications of the process.


Quote of the Day

Children of light and children of darkness is the vision of physics that emerges from this chapter, as from other branches of physics. The children of light are the differential equations that predict the future from the present. The children of darkness are the factors that fix these initial conditions.

— Misner, Thorne, and Wheeler, Gravitation (1973), p. 555.


Feynman’s Character of Physical Law Lectures

Everyone and their niece is emailing me that I should post these. (And Aatish in comments.) And a good thing, too, because it usually takes at least half a dozen emails before I will do anything at all.

In 1964, Richard Feynman gave the Messenger Lectures at Cornell, aimed at a general audience. They were later collected into The Character of Physical Law, a great little book with a depressingly boring cover. Feynman-worship is often overdone, but man, the guy could lecture. And he knew a lot about physics!

The good news is that Bill Gates has now put the full video of the lectures online, as part of Project Tuva. I had to update some software to view them on my Mac, but it seems to be working now.

[Image: Feynman lecturing]

Lecture Five is about the arrow of time. If you skip ahead to the 18th minute or so, you’ll hear Feynman explain the Boltzmann Brain argument.


What Questions Can Science Answer?

One frustrating aspect of our discussion about the compatibility of science and religion was the amount of effort expended arguing about definitions, rather than substance. When I use words like “God” or “religion,” I try to use them in senses that are consistent with how they have been understood (at least in the Western world) through history, by the large majority of contemporary believers, and according to definitions as you would encounter them in a dictionary. It seems clear to me that, by those standards, religious belief typically involves various claims about things that happen in the world — for example, the virgin birth or ultimate resurrection of Jesus. Those claims can be judged by science, and are found wanting.

Some people would prefer to define “religion” so that religious beliefs entail nothing whatsoever about what happens in the world. And that’s fine; definitions are not correct or incorrect, they are simply useful or useless, where usefulness is judged by the clarity of one’s attempts at communication. Personally, I think using “religion” in that way is not very clear. Most Christians would disagree with the claim that Jesus came about because Joseph and Mary had sex and his sperm fertilized her ovum and things proceeded conventionally from there, or that Jesus didn’t really rise from the dead, or that God did not create the universe. The Congregation for the Causes of Saints, whose job it is to judge whether a candidate for canonization has really performed the required number of miracles and so forth, would probably not agree that miracles don’t occur. Francis Collins, recently nominated to direct the NIH, argues that some sort of God hypothesis helps explain the values of the fundamental constants of nature, just like a good Grand Unified Theory would. These views are by no means outliers, even without delving into the more extreme varieties of Biblical literalism.

Furthermore, if a religious person really did believe that nothing ever happened in the world that couldn’t be perfectly well explained by ordinary non-religious means, I would think they would expend their argument-energy engaging with the many millions of people who believe that the virgin birth and the resurrection and the promise of an eternal afterlife and the efficacy of intercessory prayer are all actually literally true, rather than with a handful of atheist bloggers with whom they agree about everything that happens in the world. But it’s a free country, and people are welcome to define words as they like, and argue with whom they wish.

But there was also a more interesting and substantive issue lurking below the surface. I focused in that post on the meaning of “religion,” but did allude to the fact that defenders of Non-Overlapping Magisteria often misrepresent “science” as well. And this, I think, is not just a matter of definitions: we can more or less agree on what “science” means, and still disagree on what questions it has the power to answer. So that’s an issue worth examining more carefully: what does science actually have the power to do?

I can think of one popular but very bad strategy for answering this question: first, attempt to distill the essence of “science” down to some punchy motto, and then ask what questions fall under the purview of that motto. At various points throughout history, popular mottos of choice might have been “the Baconian scientific method” or “logical positivism” or “Popperian falsificationism” or “methodological naturalism.” But this tactic always leads to trouble. Science is a messy human endeavor, notoriously hard to boil down to cut-and-dried procedures. A much better strategy, I think, is to consider specific examples, figure out what kinds of questions science can reasonably address, and compare those to the questions in which we’re interested.

Here is my favorite example question. Alpha Centauri A is a G-type star a little over four light years away. Now pick some very particular moment one billion years ago, and zoom in to the precise center of the star. Protons and electrons are colliding with each other all the time. Consider the collision of two electrons nearest to that exact time and that precise point in space. Now let’s ask: was momentum conserved in that collision? Or, to make it slightly more empirical, was the magnitude of the total momentum after the collision within one percent of the magnitude of the total momentum before the collision?

This isn’t supposed to be a trick question; I don’t have any special knowledge or theories about the interior of Alpha Centauri that you don’t have. The scientific answer to this question is: of course, the momentum was conserved. Conservation of momentum is a principle of science that has been tested to very high accuracy by all sorts of experiments, we have every reason to believe it held true in that particular collision, and absolutely no reason to doubt it; therefore, it’s perfectly reasonable to say that momentum was conserved.

A stickler might argue, well, you shouldn’t be so sure. You didn’t observe that particular event, after all, and more importantly there’s no conceivable way that you could collect data at the present time that would answer the question one way or the other. Science is an empirical endeavor, and should remain silent about things for which no empirical adjudication is possible.
