Philosophy

Beyond the Room

I’m sure Ruben Bolling is making fun of people I disagree with, and not of me.

The underlying point is a good one, though, and one that is surprisingly hard for people thinking about cosmology to take to heart: no amount of a priori reasoning can give us reliable knowledge about parts of the universe we haven’t actually observed. Einstein and Wheeler believed that the universe was closed and would someday recollapse, because a universe that was finite in time felt right to them. The universe doesn’t care what feels right, or what “we just can’t imagine,” so all possibilities should remain on the table.

On the other hand, that doesn’t mean we can’t draw reasonable a posteriori conclusions about the unobservable universe, if the stars align just right. That is, if we had a comprehensive theory of physics and cosmology that successfully passed a barrage of empirical tests here in the universe we do observe, and made unambiguous predictions for the universe that we don’t, it would not be crazy to take those predictions seriously.

We don’t have that theory yet, but we’re working on it. (Where “we” means an extremely tiny fraction of working scientists, who receive an extremely disproportionate amount of attention.)

If It’s Not Disturbing, You’re Not Doing It Right

Science, that is. No, this is not what I have in mind. Rather, this provocative statement — the discoveries of science should be disturbing, they shouldn’t simply provide gentle reassurance about our place in the universe — is the conclusion reached in my latest Bloggingheads dialogue with David Albert.

David is a philosopher of science at Columbia, author of Time and Chance as well as Quantum Mechanics and Experience. We talked about what philosophers of science do, the awful What the Bleep Do We Know? movie, string theory and falsifiability, and touched on time before running out thereof. Future episodes are clearly called for.

What Is Interesting?

Lurking behind the debate over the high energy physics budget is a meta question that rarely gets addressed head-on: in a world with many things that we would like to do, but limited resources to do them, how do we decide what questions are interesting enough to warrant our attention? This question arises at every level. If we have a certain number of dollars to spend on particle physics, how much should go to the high-energy frontier and how much to smaller-scale experiments? Within fundamental science, how much should go to physics and how much to biology or astronomy or whatever? And it’s not just money: within a university, how many faculty positions should go to historians, and how many to archaeologists? Within philosophy, how many logicians do we need, and how many ethicists? It’s not even an especially academic question: which book am I going to bring with me to read on the plane?

There are a number of issues that get tied up in such considerations. One is that certain activities simply require certain resources, so if we judge them sufficiently interesting to be pursued then we need to be prepared to devote the appropriate resources their way. A colleague of mine in condensed-matter physics was fond of complaining about all the great small-scale physics that his community could do if they only had half of Fermilab’s budget. Which is undoubtedly true, but with half of Fermilab’s budget you wouldn’t get half the science out of Fermilab — you wouldn’t get anything at all. If that kind of particle physics is worth doing at all (which is a completely fair question), there is an entry fee you can’t avoid paying.

But more deeply, the problem is that there is no intrinsic property of “interestingness” that we can compare across different academic questions. Questions are not interesting in and of themselves; they are interesting to somebody. If I happen not to be interested in the American Civil War, and a friend of mine thinks it’s fascinating, that doesn’t mean that one of us is “right” and the other “wrong”; it just means that we have different opinions about the interestingness of that particular subject. It’s precisely the same kind of personal decision that goes into preferences for different kinds of music or cuisine. The difference is that, unlike CDs or appetizers, we don’t consume these goods individually; we need to make some collective decision about how to allocate our intellectual resources.

People pretend that there are objective criteria, of course. The standard battle lines within physics are drawn between research that is “fundamental” and research that is “useful.” I was once in the audience for a colloquium by Steven Weinberg, back in the days when we were still planning on the Superconducting Supercollider, and he was talking about why particle physics was worthy of substantial investment: “People sometimes object to the way we speak about particle physics, objecting that we give the impression that it’s more ‘fundamental’ than other fields. But I think it’s okay, because … well, it is more fundamental.” Contrariwise, I’ve heard condensed-matter physicists wonder with a straight face why anyone in the general public would be interested in books on string theory and cosmology. After all, those subjects have no impact at all on their everyday lives, so what is the possible interest?

In reality, there is no objective metaphysical standard to separate the interesting from the uninteresting. There are a bunch of human beings with different interests, and we have the social task of balancing them. A complication arises in the context of academia, where we don’t weigh everyone’s interests equally — there are experts whose opinions count for more than those of people on the street. And that makes sense; even if I have no idea which directions in contemporary chemistry or French literature are interesting, I am more than willing to leave such questions in the hands of people who care deeply about those fields and have contributed to them.

The real problem, of course, is that sometimes we have to compare between fields, so that decisions have to be made by people who are almost certainly not experts in all of the competing interests. We have, for example, the danger of self-perpetuation, where a small cadre of experts in an esoteric area continue to insist on the importance of their work. That’s where it becomes crucial to be able to explain to outsiders why certain questions truly are interesting, even if the outsiders can’t appreciate all the details. In fundamental physics, we actually have a relatively easy time of it, our fondness for kvetching notwithstanding; it’s not too hard to appreciate the importance of concepts like “the laws of nature” and “the beginning of the universe,” even to people who don’t follow the math. Making a convincing request for a billion dollars is, of course, a different story.

Sadly, none of these high-minded considerations are really at work in the current budget debacle. High-energy physics seems to be caught in a pissing match between the political parties, each of which wants to paint the other as irresponsible.

The White House and congressional leaders exchanged barbs Tuesday over who was to blame for the Fermilab impasse. Lawmakers said the Bush administration’s tight overall budget targets tied their hands, while a spokesman for Bush’s Office of Management and Budget said the Democratic leaders could have met the targets by cutting back on other discretionary elements of the budget.

Durbin said the $196 billion required for the wars in Iraq and Afghanistan left little room for budget maneuvering.

“We were left with stark choices: reduce funding for high-end physics or cut money for veterans; reduce spending at Fermilab or eliminate funding for rural hospitals,” Durbin said in a statement Tuesday.

Sean Kevelighan, a spokesman for the administration’s Office of Management and Budget, said Congress could have chosen instead to take more money from the $9.7 billion worth of earmarks designated for lawmakers’ projects.

“The choices were up to the Congress,” Kevelighan said.

As annoying as academia can be, politics is infinitely worse.

Turtles Much of the Way Down

Paul Davies has published an Op-Ed in the New York Times, about science and faith. Edge has put together a set of responses — by Jerry Coyne, Nathan Myhrvold, Lawrence Krauss, Scott Atran, Jeremy Bernstein, and me, so that’s some pretty lofty company I’m hob-nobbing with. Astonishingly, bloggers have also weighed in: among my regular reads, we find responses from Dr. Free-Ride, PZ, and The Quantum Pontiff. (Bloggers have much more colorful monikers than respectable folk.) Peter Woit blames string theory.

I post about this only with some reluctance, as I fear the resulting conversation is very likely to lower the average wisdom of the human race. Davies manages to hit a number of hot buttons right up front — claiming that both science and religion rely on faith (I don’t think there is any useful definition of the word “faith” in which that is true), and mentioning in passing something vague about the multiverse. All of which obscures what I think is his real point, which only pokes through clearly at the end — a claim to the effect that the laws of nature themselves require an explanation, and that explanation can’t come from the outside.

Personally I find this claim either vacuous or incorrect. Does it mean that the laws of physics are somehow inevitable? I don’t think that they are, and if they were I don’t think it would count as much of an “explanation,” but your mileage may vary. More importantly, we just don’t have the right to make deep proclamations about the laws of nature ahead of time — it’s our job to figure out what they are, and then deal with it. Maybe they come along with some self-justifying “explanation,” maybe they don’t. Maybe they’re totally random. We will hopefully discover the answer by doing science, but we won’t make progress by setting down demands ahead of time.

So I don’t know what it could possibly mean, and that’s what I argued in my response. Paul very kindly emailed me after reading my piece, and — not to be too ungenerous about it, I hope — suggested that I would have to read his book.

My piece is below the fold. The Edge discussion is interesting, too. But if you feel your IQ being lowered by long paragraphs on the nature of “faith” that don’t ever quite bother to give precise definitions and stick to them, don’t blame me.

Why Is There Something Rather Than Nothing?

The best talk I heard at the International Congress of Logic, Methodology and Philosophy of Science in Beijing was, somewhat to my surprise, the Presidential Address by Adolf Grünbaum. I wasn’t expecting much, as the genre of Presidential Addresses by Octogenarian Philosophers is not one noted for its moments of soaring rhetoric. I recognized Grünbaum’s name as a philosopher of science, but didn’t really know anything about his work. Had I known that he had recently been specializing in critiques of theism from a scientific viewpoint (with titles like “The Poverty of Theistic Cosmology”), I might have been more optimistic.

Grünbaum addressed a famous and simple question: “Why is there something rather than nothing?” He called it the Primordial Existential Question, or PEQ for short. (Philosophers are up there with NASA officials when it comes to a weakness for acronyms.) Stated in that form, the question can be traced at least back to Leibniz in his 1697 essay “On the Ultimate Origin of Things,” although it’s been recently championed by Oxford philosopher Richard Swinburne.

The correct answer to this question is stated right off the bat in the Stanford Encyclopedia of Philosophy: “Well, why not?” But we have to dress it up to make it a bit more philosophical. First, we would only even consider this an interesting question if there were some reasonable argument in favor of nothingness over existence. As Grünbaum traces it out, Leibniz’s original claim was that nothingness was “spontaneous,” whereas an existing universe required a bit of work to achieve. Swinburne has sharpened this a bit, claiming that nothingness is uniquely “natural,” because it is necessarily simpler than any particular universe. Both of them use this sort of logic to undergird an argument for the existence of God: if nothingness is somehow more natural or likely than existence, and yet here we are, it must be because God willed it to be so.

I can’t do justice to Grünbaum’s takedown of this position, which was quite careful and well-informed. But the basic idea is straightforward enough. When we talk about things being “natural” or “spontaneous,” we do so on the basis of our experience in this world. This experience equips us with a certain notion of naturalness — theories are natural if they are simple and not finely-tuned, configurations are natural if they aren’t inexplicably low-entropy.

But our experience with the world in which we actually live tells us nothing whatsoever about whether certain possible universes are “natural” or not. In particular, nothing in science, logic, or philosophy provides any evidence for the claim that simple universes are “preferred” (whatever that could possibly mean). We only have experience with one universe; there is no ensemble from which it is chosen, on which we could define a measure to quantify degrees of probability. Who is to say whether a universe described by the non-perturbative completion of superstring theory is likelier or less likely than, for example, a universe described by a Rule 110 cellular automaton?
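
(For readers who don’t recognize the reference: Rule 110 is a one-dimensional cellular automaton in which each cell is 0 or 1 and its next value depends only on itself and its two neighbors through a fixed eight-entry lookup table; despite that austerity it is capable of universal computation. Below is a minimal Python sketch of such a “universe,” included purely to show how little machinery the rule requires; the grid size and starting state are arbitrary choices of mine, not anything from the original discussion.)

```python
# Minimal Rule 110 cellular automaton: the entire "physics" is an 8-entry
# lookup table mapping each (left, center, right) neighborhood to the next
# value of the center cell. The rule number 110 is 01101110 in binary.
RULE = 110
TABLE = {(a, b, c): (RULE >> (a * 4 + b * 2 + c)) & 1
         for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def step(cells):
    """Advance one row of 0/1 cells by a single time step (periodic boundary)."""
    n = len(cells)
    return [TABLE[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

row = [0] * 31 + [1]            # start with a single live cell
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```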

It’s easy to get tricked into thinking that simplicity is somehow preferable. After all, Occam’s Razor exhorts us to stick to simple explanations. But that’s a way to compare different explanations that equivalently account for the same sets of facts; comparing different sets of possible underlying rules for the universe is a different kettle of fish entirely. And, to be honest, it’s true that most working physicists have a hope (or a prejudice) that the principles underlying our universe are in fact pretty simple. But that’s simply an expression of our selfish desire, not a philosophical precondition on the space of possible universes. When it comes to the actual universe, ultimately we’ll just have to take what we get.

Finally, we physicists sometimes muddy the waters by talking about “multiple universes” or “the multiverse.” These days, the vast majority of such mentions refer not to actual other universes, but to different parts of our universe, causally inaccessible from ours and perhaps governed by different low-energy laws of physics (but the same deep-down ones). In that case there may actually be an ensemble of local regions, and perhaps even some sensibly-defined measure on them. But they’re all part of one big happy universe. Comparing the single multiverse in which we live to a universe with completely different deep-down laws of physics, or with different values for such basic attributes as “existence,” is something on which string theory and cosmology are utterly silent.

Ultimately, the problem is that the question — “Why is there something rather than nothing?” — doesn’t make any sense. What kind of answer could possibly count as satisfying? What could a claim like “The most natural universe is one that doesn’t exist” possibly mean? As often happens, we are led astray by imagining that we can apply the kinds of language we use in talking about contingent pieces of the world around us to the universe as a whole. It makes sense to ask why this blog exists, rather than some other blog; but there is no external vantage point from which we can compare the relative likelihood of different modes of existence for the universe.

So the universe exists, and we know of no good reason to be surprised by that fact. I will hereby admit that, when I was a kid (maybe about ten or twelve years old? I don’t remember precisely) I actually used to worry about the Primordial Existential Question. That was when I had first started reading about physics and cosmology, and knew enough about the Big Bang to contemplate how amazing it was that we knew anything about the early universe. But then I would eventually hit upon the question of “What if the universe didn’t exist at all?”, and I would get legitimately frightened. (Some kids are scared by clowns, some by existential questions.) So in one sense, my entire career as a physical cosmologist has just been one giant defense mechanism.

Bérubé on Rorty

Via Mixing Memory, Slate has a collection of short reminiscences about Richard Rorty by everyone from Brian Eno to Jurgen Habermas. (Although, admittedly, I sometimes have trouble telling the two apart.) In one contribution, semi-retired blogger of leisure Michael Bérubé says just what I was saying, except from a better-informed and more eloquent perspective.

In the spring of 1985, when I was a graduate student at the University of Virginia, Richard Rorty’s seminar on Martin Heidegger changed my life. Not because he converted me to Heidegger; he was not much of a Heidegger fan himself. But his seminar introduced me to anti-foundationalist pragmatism — to the idea that our beliefs, our vocabularies, and our ways of life are contingent. “Um, contingent on what?” I asked. “Not contingent on anything,” Rorty replied, “just — contingent.”

Although I was never quite convinced by Rorty’s claims that the languages of the physical sciences were as contingent as any other form of language, I was thoroughly convinced, by the end of the term, that it was a bad idea to think of philosophy as a kind of epistemological physics, in which moral truths are waiting somewhere out there to be discovered, like planets or particles. One of the reasons Rorty’s view of the world seemed so attractive was that it offered us humans a useful way to think about why it is that we disagree with each other about what those moral truths actually are: If you think you are acting in accordance with the eternal moral truths of the universe, after all, it is likely that you will think of people who think and act differently as being defective, deluded, or downright dangerous. On the other hand, if you think that morality is a matter of contingent vocabularies, you don’t have to become a shallow relativist — you can go right on believing what you believe, except that you have to give up the conviction that there’s no plausible way another rational person could think differently.

Richard Rorty

Richard Rorty has passed away. He was arguably the most well-known living American philosopher, not least because he was a wonderful communicator; see Jacob Levy’s appreciation of his rhetorical skills.

Intellectually, Rorty was hard to pin down; while he was most closely identified with the American pragmatist tradition of Dewey and Peirce, he was trained as a hard-core analytic philosopher, and later became heavily influenced both by Wittgenstein and by continental/”postmodern” philosophy. So he managed to annoy everybody, basically. But his real project was to take seriously radical critiques of meaning and truth while simultaneously offering a positive prospect for morality and human living. Which is a good project to have, I think.

Wikipedia has a representative quote from Contingency, Irony, and Solidarity, in which Rorty spells out his view of a good “ironist”:

(1) She has radical and continuing doubts about the final vocabulary she currently uses, because she has been impressed by other vocabularies, vocabularies taken as final by people or books she has encountered; (2) she realizes that argument phrased in her present vocabulary can neither underwrite nor dissolve these doubts; (3) insofar as she philosophizes about her situation, she does not think that her vocabulary is closer to reality than others, that it is in touch with a power not herself.

As physicists go, I’m more sympathetic to postmodernism than most. (Who are, you know, not very sympathetic.) What I really think is that people who think carefully about science and people who think carefully about the social construction of truth would have a lot to learn from each other, if they would approach each other’s concerns and insights in good faith, which is hard to do.

When Rorty talks about “final vocabularies” in the quote above, he’s not really thinking of “quantum field theory” or “general relativity” or even “the scientific method,” although they would arguably be legitimate examples. He’s thinking of doctrines of religion or morality or politics or ethics or aesthetics that we use to judge good and bad and right and wrong in our lives. These are areas in which such vocabularies truly are contingent, and unpacking our presuppositions about their finality is a useful practice.

Science is different. To do science, we presume the existence of a “real world” that is “out there” and follows a set of rules and patterns that are completely independent of whatever actions we humans may be taking, including our actions of conceptualizing that real world. Questions of good and bad and right and wrong are not like that; their subject matter is our judgments themselves, which are subject to interrogation and ultimately to alteration. Right and wrong are not out there in the world to be probed and described; we create them through various human mechanisms. A scientist cannot consistently hold radical doubts about the nature of the real world.

On the other hand — and this is the part that, I think, scientists consistently miss — we certainly can hold radical doubts about the vocabulary with which we as scientists describe that real world. In fact, when pressed in other contexts, we are the first to insist that scientific theories are always useful but limited approximations, capturing some part of reality but certainly not the whole. Furthermore, even experimental data do not provide any unmediated glimpse of reality; not only are there error bars, but there are also the irreducible theory-laden choices about which data to collect, and how to fit them into our frameworks. These are commonplace scientific truisms, but they are also deep postmodern insights.

In my personal intellectual utopia, postmodernists would appreciate how science differs from morality and ethics and aesthetics by the ontological independence of its subject matter, while scientists would appreciate how there is a lot we have yet to quite understand about how we use language and evidence in an ultimately contingent way. Just as Rorty wanted to make ironic skepticism compatible with human solidarity, I’d like to see suspicion toward final vocabularies made compatible with the undeniable truth of scientific progress.

Or am I just being ironic?

More: Mixing Memory has a list of other blog posts on Rorty; Continental Philosophy has a collection of links and a recent video.

We Know the Answer!

Chad Orzel is wondering about the origin of some irritating habits in science writing. His first point puts his finger right on the issue:

Myth 1: First-person pronouns are forbidden in scientific writing. I have no idea where students get the idea that all scientific writing needs to be in the passive voice, but probably three quarters of the papers I get contain sentences in which the syntax has been horribly mangled in order to avoid writing in the first person.

It’s not exactly right to call this a “myth”; as Andre from Biocurious points out in comments, the injunction to use the passive voice is often stated quite explicitly. There’s even an endlessly amusing step-by-step instruction guide for converting your text from active to passive voice. What would Strunk and White say?

The same goes for using “we” rather than “I,” even if you’re the only person writing. There are also guides that make this rule perfectly explicit. The refrain in this one is:

Write in the third person (“The aquifer covers 1000 square kilometers”) or the first person plural (“We see from this equation that acceleration is proportional to force”). Avoid using “I” statements.

Interestingly, these habits did not just emerge organically as scientific communication evolved — they were, if you like, designed. I learned this from a talk given by Evelyn Fox Keller some years ago, which unfortunately I’ve never been able to find in print. It goes back to the earliest days of the scientific revolution, when Francis Bacon and others were musing on how this new kind of approach to learning about the world should be carried out. Bacon decided that it was crucially important to emphasize the objectivity of the scientific process; as much as possible, the individual idiosyncratic humanity of the scientists was to be purged from scientific discourse, making the results seem as inevitable as possible.

To this end, Bacon was quite programmatic, suggesting a list of ways to remove the taint of individuality from the scientific literature. Passive voice was encouraged, and it was (apparently, if Keller was right and I’m remembering correctly) Bacon who first insisted that we write “we will show” in the abstracts of our single-author papers.

It always seemed a little unnatural to me, and when it came time to write a single-author paper (which I tend not to do, since collaborating is much more fun) I went with the first-person singular. I decided that if it was good enough for Sidney Coleman, it should be good enough for me.

Keller has a better-known discussion of the rhetoric of Francis Bacon, reprinted in Reflections on Gender and Science. Here she examines Bacon’s personification of the figure of Nature, specifically with regard to gender roles. Bacon was one of the first to introduce the metaphor of Nature as a woman to be seduced and conquered. Sometimes the imagery is gentle, sometimes less so; here are some representative quotes from Bacon to give the gist.

“Let us establish a chaste and lawful marriage between Mind and Nature.”

“My dear, dear boy, what I plan for you is to unite you with things themselves in a chaste, holy, and legal wedlock. And from this association you will insure an increase beyond all the hopes and prayers of ordinary marriages, to wit, a blessed race of Heroes and Supermen.”

“I am come in very truth leading you to Nature with all her children to bind her to your service and make her your slave.”

“I invite all such to join themselves, as true sons of knowledge, with me, that passing by the outer courts of nature, which numbers have trodden, we may find a way at length into her inner chambers.”

“For you have but to follow and as it were hound nature in her wanderings, and you will be able, when you like, to lead and drive her afterwards to the same place again.”

[Science and technology do not] “merely exert a gentle guidance over nature’s course; they have the power to conquer and subdue her, to shake her to her foundations.”

But, while Nature is a shy female waiting to be seduced and conquered, we also recognize that Nature is a powerful, almost God-like force. Tellingly, when Bacon talks about this aspect, the metaphorical gender switches, and now Nature is all too male:

“as if the divine nature enjoyed the kindly innocence in such hide-and-seek, hiding only in order to be found, and with characteristic indulgence desired the human mind to join Him in this sport.”

So much meaning lurking in a few innocent pronouns! We like to pretend that the way we do science, and the way we conceptualize our activity, is more or less inevitable; but there are a lot of explicit choices along the way.

What I Believe But Cannot Prove

Each year, John Brockman’s Edge asks a collection of deep thinkers a profound question, and gives them a couple of hundred words to answer: The World Question Center. The question for 2005 was What Do You Believe Is True Even Though You Cannot Prove It? Plenty of entertaining answers, offered by people like Bruce Sterling, Ray Kurzweil, Lenny Susskind, Philip Anderson, Alison Gopnik, Paul Steinhardt, Maria Spiropulu, Simon Baron-Cohen, Alex Vilenkin, Martin Rees, Esther Dyson, Margaret Wertheim, Daniel Dennett, and a bunch more. They’ve even been collected into a book for your convenient perusal. Happily, these questions are more or less timeless, so nobody should be upset that I’m a couple of years late in offering my wisdom on this pressing issue.

Most of the participants were polite enough to play along and answer the question in the spirit in which it was asked, although their answers often came down to “I believe the thing I’m working on right now will turn out to be correct and interesting.” But to me, there was a perfectly obvious response that almost nobody gave, although Janna Levin and Seth Lloyd came pretty close. Namely: there isn’t anything that I believe that I can prove, aside from a limited set of ultimately sterile logical tautologies. Not that there’s anything wrong with tautologies; they include, for example, all of mathematics. But they describe necessary truths; given the axioms, the conclusions follow, and we can’t imagine it being any other way. The more interesting truths, it seems to me, are the contingent ones, the features of our world that didn’t have to be that way. And I can’t prove any of them.

The very phrasing of the question, and the way most of the participants answered it, irks me a bit, as it seems to buy into a very wrong way of thinking about science and understanding: the idea that true and reliable knowledge derives from rigorous proof, and anything less than that is dangerously uncertain. But the reality couldn’t be more different. I can’t prove that the Sun will rise tomorrow, that radioactive decays obey an exponential probability law, or that the Earth is more than 6,000 years old. But I’m as sure as I am about any empirical statement that these are true. And, most importantly, there’s nothing incomplete or unsatisfying about that. It’s the basic way in which we understand the world.

Here is a mathematical theorem: There is no largest prime number. And here is a proof:

Consider the list of all primes, p_i, starting with p_1 = 2. Suppose that there is a largest prime, p*. Then there are only a finite number of primes. Now consider the number X that we obtain by multiplying together all of the primes p_i (exactly once each) from 2 to p* and adding 1 to the result. Then X is clearly larger than any of the primes p_i, but it is not divisible by any of them, since dividing by any of them leaves a remainder of 1. But every integer greater than 1 has at least one prime factor, so X must have a prime factor larger than p* (possibly X itself). We have thus shown that there is a prime larger than p*, which is a contradiction. Therefore there is no largest prime.
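
(A quick numerical illustration, offered as a minimal Python sketch rather than as part of the original argument; the helper function and the particular primes used are just for demonstration. Note that Euclid’s number X need not itself be prime: 2·3·5·7·11·13 + 1 = 30031 = 59 × 509. But it always has a prime factor missing from the original list, which is all the proof requires.)

```python
from math import prod

def smallest_prime_factor(n):
    """Return the smallest prime factor of n (for n > 1) by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

primes = [2, 3, 5, 7, 11, 13]
for k in range(1, len(primes) + 1):
    used = primes[:k]
    x = prod(used) + 1              # Euclid's construction: product plus one
    p = smallest_prime_factor(x)    # this prime is never one of `used`
    print(f"{used} -> X = {x}, smallest prime factor = {p}")
```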

Here is a scientific belief: General relativity accurately describes gravity within the solar system. And here is the argument for it:

GR incorporates both the relativity of locally inertial frames and the principle of equivalence, both of which have been tested to many decimal places. Einstein’s equation is the simplest possible non-trivial dynamical equation for the curvature of spacetime. GR explained a pre-existing anomaly — the precession of Mercury — and made several new predictions, from the deflection of light to gravitational redshift and time delay, which have successfully been measured. Higher-precision tests from satellites continue to constrain any possible deviations from GR. Without taking GR effects into account, the Global Positioning System would rapidly go out of whack, and by including GR it works like a charm. All of the known alternatives are more complicated than GR, or introduce new free parameters that must be finely-tuned to agree with experiment. Furthermore, we can start from the idea of massless spin-two gravitons coupled to energy and momentum, and show that the nonlinear completion of such a theory leads to Einstein’s equation. Although the theory is not successfully incorporated into a quantum-mechanical framework, quantum effects are expected to be unobservably small in present-day experiments. In particular, higher-order corrections to Einstein’s equation should naturally be suppressed by powers of the Planck scale.
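
(A small quantitative aside on the GPS point, as a back-of-the-envelope Python sketch; the constants are rough textbook values and the calculation ignores orbital eccentricity and the Earth’s rotation, so treat it as an order-of-magnitude estimate rather than the actual GPS correction model. The weak-field gravitational blueshift goes as GM(1/R_earth - 1/r_orbit)/c^2 and the velocity time dilation as -v^2/2c^2; together they come to a few tens of microseconds per day, which would accumulate into kilometers of ranging error if left uncorrected.)

```python
# Rough estimate of relativistic clock corrections for a GPS satellite.
# All constants are approximate; this is an order-of-magnitude sketch only.
G_M_EARTH = 3.986e14              # Earth's gravitational parameter GM, m^3/s^2
R_EARTH = 6.371e6                 # Earth radius, m
R_ORBIT = 2.66e7                  # GPS orbital radius (~20,200 km altitude), m
C = 2.998e8                       # speed of light, m/s
V_ORBIT = (G_M_EARTH / R_ORBIT) ** 0.5   # circular orbital speed, m/s
SECONDS_PER_DAY = 86400

# Gravitational (GR) blueshift: the satellite clock runs fast relative to the ground.
gr_shift = G_M_EARTH * (1 / R_EARTH - 1 / R_ORBIT) / C**2

# Velocity (special-relativistic) time dilation: the satellite clock runs slow.
sr_shift = -V_ORBIT**2 / (2 * C**2)

net = gr_shift + sr_shift
print(f"GR term:  {gr_shift * SECONDS_PER_DAY * 1e6:+.1f} microseconds/day")
print(f"SR term:  {sr_shift * SECONDS_PER_DAY * 1e6:+.1f} microseconds/day")
print(f"Net:      {net * SECONDS_PER_DAY * 1e6:+.1f} microseconds/day")
print(f"Ranging error if uncorrected: ~{net * SECONDS_PER_DAY * C / 1000:.0f} km per day")
```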

You see the difference, I hope. The mathematical proof is airtight; it’s just a matter of following the rules of logic. It is impossible for us to conceive of a world in which we grant the underlying assumptions, and yet the conclusion doesn’t hold.

The argument in favor of believing general relativity — a scientific one, not a mathematical one — is of an utterly different character. It’s all about hypothesis testing, and accumulating better and better pieces of evidence. We throw an hypothesis out there — gravity is the curvature of spacetime, governed by Einstein’s equation — and then we try to test it or shoot it down, while simultaneously searching for alternative hypotheses. If the tests get better and better, and the search for alternatives doesn’t turn up any reasonable competitors, we gradually come to the conclusion that the hypothesis is “right.” There is no sharp bright line that we cross, at which the idea goes from being “just a theory” to being “proven correct.” Rather, maintaining skepticism about the theory goes from being “prudent caution” to being “crackpottery.”

It is an intrinsic part of this process that the conclusion didn’t have to turn out that way, in any a priori sense. I could certainly imagine a world in which some more complicated theory like Brans-Dicke was the empirically correct theory of gravity, or perhaps even one in which Newtonian gravity was correct. Deciding between the alternatives is not a matter of proving or disproving; it’s a matter of accumulating evidence past the point where doubt is reasonable.

Furthermore, even when we do believe the conclusion beyond any reasonable doubt, we still understand that it’s an approximation, likely (or certain) to break down somewhere. There could very well be some very weakly-coupled field that we haven’t yet detected, that acts to slightly alter the true behavior of gravity from what Einstein predicted. And there is certainly something going on when we get down to quantum scales; nobody believes that GR is really the final word on gravity. But none of that changes the essential truth that GR is “right” in a certain well-defined regime. When we do hit upon an even better understanding, the current one will be understood as a limiting case of the more comprehensive picture.

“Proof” has an interesting and useful meaning, in the context of logical demonstration. But it only gives us access to an infinitesimal fraction of the things we can reasonably believe. Philosophers have gone over this ground pretty thoroughly, and arrived at a sensible solution. The young Wittgenstein would not admit to Bertrand Russell that there was not a rhinoceros in the room, because he couldn’t be absolutely sure (in the sense of logical proof) that his senses weren’t tricking him. But the later Wittgenstein understood that taking such a purist stance renders the notion of “to know” (or “to believe”) completely useless. If logical proof were required, we would only believe logical truths — and even then the proofs might contain errors. But in the real world it makes perfect sense to believe much more than that. So we take “I believe x” to mean, not “I can prove x is the case,” but “it would be unreasonable to doubt x.”

The search for certainty in empirical knowledge is a chimera. I could always be a brain in a vat, or teased by an evil demon, or simply an AI program running on somebody else’s computer — fed consistently misleading “sense data” that led me to incorrect conclusions about the true nature of reality. Or, to put a more modern spin on things, I could be a Boltzmann Brain — a thermal fluctuation, born spontaneously out of a thermal bath with convincing (but thoroughly incorrect) memories of the past. But — here is the punchline — it makes no sense to act as if any of those is the case. By “makes no sense” we don’t mean “can’t possibly be true,” because any one of those certainly could be true. Instead, we mean that it’s a cognitive dead end. Maybe you are a brain in a vat. What are you going to do about it? You could try to live your life in a state of rigorous epistemological skepticism, but I guarantee that you will fail. You have to believe something, and you have to act in some way, even if your belief is that we have no reliable empirical knowledge about the world and your action is to never climb out of bed. On the other hand, putting aside the various solipsistic scenarios and deciding to take the evidence of our senses (more or less) at face value does lead somewhere; we can make sense of the world, act within it and see it respond in accordance with our understanding. That’s both the best we can hope for, and what the world does as a matter of fact grant us; that’s why science works!

It can sound a little fuzzy, with this notion of “reasonable” having sneaked into our definition of belief, where we might prefer to stand on some rock-solid metaphysical foundations. But the world is a fuzzy place. Although I cannot prove that I am not a brain in a vat, it is unreasonable for me to take the possibility seriously — I don’t gain anything by it, and it doesn’t help me make sense of the world. Similarly, I can’t prove that the early universe was in a hot, dense state billions of years ago, nor that human beings evolved from precursor species under the pressures of natural selection. But it would be unreasonable for me to doubt it; those beliefs add significantly to my understanding of the universe, accord with massive piles of evidence, and contribute substantially to the coherence of my overall worldview.

At least, that’s what I believe, although I can’t prove it.

Making Demands of the Foundation of All Being

Quote of the Day: David Albert, philosopher of science at Columbia. He was interviewed for, and appeared in, What the Bleep Do We Know?, the movie that tried to convince people that quantum mechanics teaches us that we can change physical reality just by adjusting our mental state. After seeing the travesty that was the actual movie, he complained loudly and in public that his views had been grossly distorted; this quote is from one such interview.

It seems to me that what’s at issue (at the end of the day) between serious investigators of the foundations of quantum mechanics and the producers of the “what the bleep” movies is very much of a piece with what was at issue between Galileo and the Vatican, and very much of a piece with what was at issue between Darwin and the Victorians. There is a deep and perennial and profoundly human impulse to approach the world with a DEMAND, to approach the world with a PRECONDITION, that what has got to turn out to lie at THE CENTER OF THE UNIVERSE, that what has got to turn out to lie at THE FOUNDATION OF ALL BEING, is some powerful and reassuring and accessible image of OURSELVES. That’s the impulse that the What the Bleep films seem to me to flatter and to endorse and (finally) to exploit – and that, more than any of their particular factual inaccuracies – is what bothers me about them. It is precisely the business of resisting that demand, it is precisely the business of approaching the world with open and authentic wonder, and with a sharp, cold eye, and singularly intent upon the truth, that’s called science.

Read the whole thing. The use of emphases is characteristic of David’s writing style, which is also on display in his fantastic books on quantum mechanics and the arrow of time.

The only really misleading part of the above quote is choosing “the Victorians” as Darwin’s foil; things haven’t changed all that much, sadly.
