Last year we talked a bit about Sam Harris’s attempts to ground morality in science:
See especially the third one there, where I try to be relatively careful about what I am saying. (Wouldn’t impress a philosopher by a long shot, but by scientist/blogger standards I was careful.) Upshot: concepts relevant to morality aren’t empirical ones, and can’t be tested by doing experiments. Morality depends on science (you can make moral mistakes if you don’t understand the real world), but it isn’t a subset of it. Science describes what happens, while morality passes judgments on what should and should not happen, which is simply different.
By now Harris’s book The Moral Landscape has appeared, so you can read for yourself his explanations in full. In a different world — one where I had access to a dozen or so clones of myself with fully updated mental states, willing to tackle all the projects my birth-body didn’t have time to fit in — I would read the book carefully and report back. This is not that world.
Happily, Russell Blackford has written a longish and very good review, in the Journal of Evolution and Technology. He also blogged about it, and Jerry Coyne blogged about Russell’s review. As far as I can tell, Russell and I basically agree on all the substantive points, and he’s more trained in philosophy than I am, so you’re actually getting something better than what one of my clones would have been able to provide. It’s an extremely generous review, always saying “I liked the book but…” where I would have said “Despite the flaws, there are some good aspects…” So you’ll find in the review plenty of lines like “Unfortunately, Harris sees it as necessary to defend a naïve metaethical position…”
Any lingering urge I may have had to jump into the debate again in a substantive way has been dissipated by Harris’s response to Blackford’s review, which appears in the form of a letter to Jerry Coyne reprinted on his blog. It seems that very little communication is taking place at this point. Coyne paraphrases Blackford as asking “How do we actually measure well-being? For that is what we must do to make moral judgments.” Seems reasonable enough to me, and echoes very closely my first point here. Harris’s response is:
This is simply not a problem for my thesis (recall my “answers in practice vs. answers in principle” argument). There is a difference between how we verify the truth of a proposition and what makes a proposition true. How many breaths did I take last Tuesday? I don’t know, and there is no way to find out. But there is a correct, numerical answer to this question (and you can bet the farm that it falls between 5 and 5 million).
This misses the point, to say the least. The problem of measuring well-being is not simply one of practice, it’s very much one of principle. I know what a breath is; I don’t know what a “unit of well-being” is. The point of these critiques is that there is no such thing as a unit of well-being that we can look inside the brain and measure. I’m pretty sure that’s a problem of principle. Of course, Russell and Jerry and I (and David Hume, and a large number of professional moral philosophers) may be wrong about this. The way to provide a counter-argument would be to say “Here is a precise and unambiguous definition of how to measure well-being, at least in principle.” That doesn’t seem to be forthcoming.
Later, Harris says this:
The case I make in the book is that morality entirely depends on the existence of conscious minds; minds are natural phenomena; and, therefore, moral truths exist (and can be determined by science in principle, if not always in practice).
Taken at face value, this implies that truths about the best TV shows or most delicious flavors of ice cream also exist. My opinion that The Wire is the best TV show of all time is a natural phenomenon — it reflects the state of certain neurons in my brain. That doesn’t imply, in any meaningful sense, that the state of my brain provides evidence that The Wire “really is” the best TV show of all time. Nor, more programmatically and importantly, does it provide unambiguous guidance concerning which new programs should be green-lit by studio executives. The real problem — how do you balance the interests of different people against each other? — is completely ignored.
At heart I think the problem is that Sam and some other atheists are really concerned about the idea that, without objective moral truths based on science, the field of morality becomes either the exclusive domain of religion, or simply collapses into nihilism. Happily for reality, that’s an extremely false dichotomy. Morality isn’t out there to be measured like some empirical property of the physical world, but that doesn’t mean it’s impossible to be moral or to speak about morality in a rational, thoughtful way. Pretending that morality is a subset of science is, in its own way, just as much an example of wishful thinking as pretending that morality is handed down by God. We have to face up to that temptation and accept the world as it is.
“Taken at face value, this implies that truths about the best TV shows or most delicious flavors of ice cream also exist. The real problem — how do you balance the interests of different people against each other? — is completely ignored.”
My solution for balancing interests has always been to say that “the best TV show (or whatever) is the one that gives its enthusiasts the most pleasure”. So, you look at the people who like various TV shows, and try to figure out which people like their favorite TV show the most. I think this comes closest to what is actually meant by the word “best” as commonly used. For example, in this comparison Alinea would come out better than McDonald’s (despite many more people liking McDonald’s), but Alinea versus Frontera Grill would be much more interesting. This standard of “best” is pretty objective, since in principle an alien race that didn’t have a taste for any of our TV shows or restaurants could simply survey the relevant enthusiasts and provide a ranking.
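The survey-and-rank procedure I have in mind can be sketched in a few lines of code. This is only a toy illustration; the show names and satisfaction scores are invented:

```python
# Hypothetical survey data: for each show, satisfaction scores (1-10)
# reported only by that show's own enthusiasts.
ratings = {
    "Show A": [9, 10, 9, 8],
    "Show B": [7, 8, 6, 7, 7, 8],  # more fans, but less enthusiastic ones
    "Show C": [10, 9, 10],
}

def rank_by_enthusiast_pleasure(ratings):
    """Rank items by the mean satisfaction among their own fans,
    deliberately ignoring how many fans each item has."""
    means = {item: sum(scores) / len(scores) for item, scores in ratings.items()}
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)

for item, mean in rank_by_enthusiast_pleasure(ratings):
    print(f"{item}: {mean:.2f}")
```

Note that the popular-but-lukewarm option loses to a niche option whose fans adore it, which is exactly the Alinea-versus-McDonald’s intuition above.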
“I think the problem is that Sam and some other atheists are really concerned…”
I wholeheartedly agree that this must be the case. Why else would you work so hard to build up some complex thesis that obviously has so many flaws? (I mean, if I can see how morality isn’t a strict subset of science, it must not be that hard to see.)
“The real problem — how do you balance the interests of different people against each other? — is completely ignored.”
I haven’t read Harris’s book so I’ll take your word for it that this is the case. And if you’re looking for something a little deeper on the subject of ethics and morality which completely ignores religion, I recommend Amartya Sen’s recent book “The Idea of Justice”. It’s philosophy, sure, and sometimes he goes off into that space where only philosophers dare enter, but usually he comes back out to address that real problem.
Of course it’s not just any old problem, it’s one of those “wicked” problems (http://en.wikipedia.org/wiki/Wicked_problem) so you can’t expect an easy solution.
I’m not sure I agree with the way Harris argues, but I have a lot of sympathy with him on the following point: None of the problems with deciding the best movie or the most moral action are specific to value judgments. Maybe we can’t all agree what exactly well-being is, or that the most moral action is always what maximizes it. But then, neither can we always agree whether warmth refers to heat index, temperature, rate of heat transfer, or a specific emotional response; nor can we agree exactly where to draw the border between brown and orange, or whether stop signs are the same shade of red on a sunny versus a cloudy day. Natural language is full of open-ended and poorly defined terminology, whether value-laden or not.
If “you can’t get an ought from an is” is supposed to mean that you can teach a sociopath physics but you can’t make him ACT morally, then I don’t think that’s controversial.
If, however, “you can’t get an ought from an is” is supposed to be a constraint on rational discourse, then it seems to me that if I can’t make a conclusion about what ought to be the case from what is the case, then I also can’t make a conclusion about whether my dinner rolls are warm or even what color the sky is. This is where I’m with Harris.
Anyway, singling out moral discourse from the rest of natural language as potentially problematic seems wrong to me for exactly the same reason that those stickers on biology books singling out evolution do.
Harris came to Harvard a few months ago to lecture on his moral-landscape thesis, and drew an audience of two hundred or more. Coincidentally, he held the event in Memorial Church, precisely where Mitchell Heisman killed himself just weeks prior. (http://www.thecrimson.com/article/2010/9/22/heisman-harvard-mother-death/)
Heisman was a sensitive, intelligent young man whose life entered a downward spiral at the age of twelve, after the untimely death of his father. Heisman went on to write a cogent, 2000-page suicide note (http://www.suicidenote.info/) on his “experiment in nihilism.” His philosophy is probably most accurately summarized in his final section, “Happiness, Wonder, Laughter, Love” (pp. 1874-1879), wherein he dwells on the impossibility of logically proving that life is superior to death, or that existence is superior to nonexistence, or that human emotions and preferences have any purchase on what is fundamentally good and true; he acknowledges that Darwinian processes have caused human beings to desire life, but he can’t find any ultimate reason why that should provide a logical justification for life.
He refers to people like Harris as quasi-religious “secularists” who instead of believing in God “believe that meaning is to be found in the material, biochemical processes that humans experience as emotions. They generally believe that it actually means something when these old biological mechanisms produce the familiar emotional routines.”
Say what you will about Heisman’s dissertation; it’s long, often rambling, but there’s nothing insane or irrational about it. And, above all, it plays no games and is brutally honest, in a manner that Sam Harris is not and probably never will be. Heisman’s life and death, and his sprawling thesis, stand as a stark refutation of the simplistic and fundamentally simple-minded story Harris is selling.
Harris’s lecture at Harvard was really a piece of work. He showed up in the standard all-black uniform that’s become de rigueur for the self-deifying TED set. He began with an undefended blanket rejection of Hume’s is-ought problem, and a snide, insulting comment about those who worry about such trifling issues. He also explicitly dismissed out of hand anyone who questioned his central dogma that morality is not mysterious. Then he proceeded to make a series of crude emotional appeals to the audience, showing photographs of dying children and the like. At one point he even made a populist appeal against the elitist pointy-headed intellectuals who wrestle with trying to understand what Harris’s notion of “the well-being of humanity” actually means.
It was one of the most dishonest presentations I have ever seen; if he’d presented it in a philosophy class, he’d have received a failing grade. At the end, he tried to play a literal straw-man game, allowing audience members to form a small line and have a few seconds each to try to “refute” his thesis; it should have been easy for him, but, unfortunately for him, a freshman philosophy student was in the line.
The student asked Harris one of the most elementary questions in moral philosophy: If human suffering is so obviously evil and human happiness so obviously good, then why not just plug everybody into a virtual-reality happiness machine? To this, unsurprisingly, Harris had no answer, and was left rambling that it was a “mystery.”
The fact is that to start making logical arguments, you need to have some unprovable axioms, axioms that not everyone may agree on. Trying to derive morality ab initio without any axioms is like trying to derive arithmetic without the axioms of set theory. That’s not to say you need to take the Christian Bible as your set of axioms; the Bible is so vast and self-contradictory that you can use it to prove just about anything, as so many religious people do.
The goal, then, is to find a small set of moral axioms that are simple, as free of self-contradiction as possible, and agreeable to most people; and, above all, to be honest about what you’re doing.
I sometimes wonder why so many right-wing groups desire a person like Sarah Palin on their side, representing their case. I wonder the same thing about “our side” and Sam Harris.
Sean, you say “you can make moral mistakes if you don’t understand the real world”. But isn’t that precisely what Harris is arguing?
Most apologists for the points of view expressed or linked herein need to go back and reread their Wittgenstein. Words are seductive, and neologisms even more so (to the extent that they appear to efficiently convey the coiner’s original meaning), but most of the arguments I’ve encountered about the relationships between science and morality seem specious to me… they are too much about the dance of words and syntax and not enough about first principles.
In my opinion, attempting to demonstrate the “emergence” of a morality from a physicalist interpretation of conscious experience is a form of wishful thinking very similar to the one that engenders the proselytizing behaviors common to religionists.
I’m curious as to how Sam has built such a respected reputation in the scientific community with such a fundamentally flawed thesis. He’s a smart guy. You’d think eventually he’d realize his gaping errors and shut up about the whole thing.
Mike, thanks for telling us what an arrogant jerk Harris is, but it’s not terribly relevant. And that was a nice piece about Heisman, but how many of us demand a “logical justification for life” to go on living? Also, I wonder what you might say about the need for a set of moral axioms if you were, say, the victim of torture.
Sean says “there is no such thing as a unit of well-being that we can look inside the brain and measure.” Right. But it’s much easier to define what well-being isn’t, e.g., torture, or a flavor of ice cream that will make you sick (or kill you). I think what worries Harris more than allowing religion to define morality is that if you can’t make moral choices based on reason (or science), then you really can’t do it at all. It’s all up for grabs (moral axioms? I don’t think so.)
If we define morality as the answer to the question of how to best behave within the kind of social setting in which we humans evolved, in order to better our chances of survival, then I suppose it becomes less vague. Because, then, we do not need to worry about such things as whether measuring well-being is possible in principle. And we do not need to worry about whether it should be taken as an axiom that mental states are equally important regardless of whose they are. Rather, these things will arise, at least as approximations, from the conditions under which we evolved, perhaps most importantly the significance of cooperation for our survival.
Of course, it is possible that evolution would solve this problem in several different ways which would coexist in an equilibrium. From this point of view, it is rather remarkable that on the whole, and across cultures, people tend to agree on the most basic moral questions. This probably shows more about our joint evolutionary origins than about the universality of our morals, however. Indeed, it is hard to imagine that most of the things that form the topics of the moral debates of today would be of interest to an intelligent alien race with a completely different social structure. Perhaps, some universal concerns would, though.
I think that the fundamental problem is not Hume’s “you can’t derive ought from is” but rather “you can’t derive much of anything without axioms.” If we all took it as axiomatic that moral behavior is whatever makes Sean Carroll happy, then you could easily determine morality empirically just by observing Sean’s reactions. With morality, though, it’s very hard to get people to agree on the axioms.
I see Mike above had a similar comment regarding axioms. I’m not sure I follow sjn’s response. What does being the victim of torture have to do with anything? Certainly most people (sadly not all) agree torture is wrong. It would be nice if we had an agreed upon set of moral axioms from which we could derive right and wrong, but in the absence of such, we still shouldn’t go around doing things we already believe are immoral.
Measuring well-being is actually fairly easy: ask people how they are doing. Ask them to rate their happiness on a scale of 1 to 10, and you have units of well-being.
It isn’t obvious that this will work. People might lie or be mistaken, but research shows that asking people actually works pretty well. See, for example, the first chapter of “Happiness, Lessons from a New Science” by Richard Layard.
Given that it is pretty easy to distinguish well-being from misery in most situations, gaining some measurement of well-being is obviously possible in principle. If we study this we will get better at it, as some social scientists are doing.
We don’t need to all drop what we are doing and take up this cause. However, it is not productive for people who aren’t experts to be declaring the whole project impossible in principle when the experts are actually making reasonable progress.
Gavin, I think that’s a fair point about measuring well-being. But it strikes me that the bigger problem is not “How do we measure well-being?” but “How can we deduce morality from well-being?” If you were to murder me and transplant my organs against my will into people who need them, you might have caused a net increase in global well-being (as measured by self-assessment of happiness). My family would be sad, and I’d be dead, but the recipients and their families would be happy (especially if you concealed the shady origins of their new organs). But I doubt many of us would call your actions moral.
I realize this is a basic Philosophy 101 kind of argument against utilitarianism (which tells you about how much I know about philosophy), but if there’s some universally accepted answer to it, I don’t know it. The question of how you go from your empirical “well-being” function to actual rules of morality doesn’t seem to be something we can answer through empiricism alone.
I hate torture. If I were attempting (perhaps in vain) to draw up my minimalist set of basic moral axioms, I would surely include that deliberately inflicting physical or mental torture on an unwilling captive is a non-negotiable evil. (Though getting Dick Cheney to agree with me would be no easy task.)
But I’d be dishonest if I didn’t acknowledge that it was still an unprovable axiom. (How would you, practically speaking, convince Cheney, after all?) Sure, you might say that it follows from regarding all needless pain and suffering as evil, but what constitutes “needless,” and how do you prove that’s evil, either?
At some point you just have to admit that there’s a familiar example of infinite regress here, and to stop it you’re going to need to take some basic principles as your starting points; all logical arguments must begin with some nontrivial propositions, only then proceeding with sound reasoning to a set of conclusions.
When I say that Harris is dishonest, I mean that he secretly does this, too, even though he denies it. He says outright that he “defines” as good that which reduces the pain and suffering of conscious beings and improves their physical health. Well, that’s nice, and I agree with the sentiment, but it’s still an unprovable axiom! It is not a tautology. It follows from nothing else, not to mention that it depends on some other rather tricky notions like what a “conscious being” is, and what “suffering” is.
The same would go for “defining” units of well-being according to self-reporting. Putting aside the fact that different people are going to derive their sense of personal well-being from different sorts of environments (what makes a miser happy? a masochist? a transcendental monk? a gun nut? a religious fanatic?) and assuming that the majority of people have overlapping requirements, you still have to say you’re going to define “the good” to be that self-reported scale of well-being. But why? The honest answer is just to admit that it’s your starting axiom, and there’s nothing wrong with admitting that. It’s certainly a simpler and less self-contradictory axiom than most religious holy books.
When I compared Sam Harris’s representation of “our side” to Sarah Palin’s representation of the American right-wing, what I meant was that I couldn’t understand why any side would want a dishonest emissary who would be easy picking for critics. Harris is easy picking, and what’s more, he’s too full of himself and impressed by his own “genius” to understand just how big a fool he’s making of himself and of his cause.
As for me, I don’t require perfect rationality or a logically complete answer to everything I do; that’s the sort of thinking that drove so many of the great logicians to madness. Maybe that’s what drove Heisman to suicide. (And how can any of us prove they were wrong?) But at least I’m trying to be honest and humble about my lack of logical rigor.
Gastronomic realism:
http://www.uvm.edu/~dloeb/GR.pdf
“there is no such thing as a unit of well-being that we can look inside the brain and measure.”
Well, I would suggest that the latest cognitive research does look inside the brain and measure well-being, pleasure, spirituality, and a host of other previously unmeasured bits and pieces. The work on PTSD, while struggling to make headway against the thousands of new cases each year, is producing very interesting results along the lines of well-being, happiness, and pleasure. When more mapping is completed in the next two years, we may very well have units of measurable well-being articulated in the brain with electrodes and chemicals. What this means for the moral-landscape debate, I don’t know.
Missing from this discussion is the concept of a moral imperative.
Jerry Coyne says this:
“According to Blackford, Harris fails to give a convincing reason why people should be moral.”
Like it or not, society has evolved a moral consciousness and precepts that, to the ordinary person, are surprisingly clear and well understood. It has also arrived at a consensus (outside the ranks of indulgent atheists) that there is an imperative to behave in a moral way. It is also apparent that there is a war between our immediate needs, as interpreted by our primitive brain, and our higher needs, based (some think) on empathy and love, which we call morality, and that this war explains our frequent lapses from morality.
It seems, to me at least, that the really interesting debate is about the nature, origins and force of the moral imperative and the way society is constructing mechanisms to reinforce the moral imperative. To simply ascribe it to ‘well-being’ ignores the force of the moral imperative.
To try to derive morals from axioms or ‘is’, when they seemingly evolved with our species, is just as pointless as trying to derive the evolution of dinosaurs from axioms.
I know that this is a descriptive view of morality as opposed to the absolutist views of both Harris and religion in general. But this is not moral relativism either but rather a belief in the value of the process by which society is evolving moral precepts and mechanisms, even if the process proceeds slowly and in fits and starts.
No, no, you are absolutely correct that “The Wire” is the best TV show ever.
psmith misses my point. I’m not saying that the appropriate approach is to postulate totally arbitrary moral axioms purely out of thin air and then try to derive an ethical framework from them. Surely nature can inspire us in choosing our axioms, just as it does for mathematics or physics, even if those axioms are unprovable.
But, at the end of the day, before you can start deriving an ethical framework, you must state which axioms you’re choosing, wherever you get your inspiration from. Maybe you look for inspiration from evolutionary biology and psychology. But at some point you must declare “x is good, y is bad,” and that’s where you’re taking on axioms.
As for the dinosaurs, you certainly need axioms there, too. You must take on basic epistemological axioms about how you evaluate the validity of data and observations, and logical axioms about how you connect everything together into a coherent story. You also need the data, of course, which is what paleontologists are for. It’s much the same as for ethics: You need some moral axioms, but you also need ethicists, historians, and so forth to gather data to extend those axioms to reach conclusions.
I agree there are moral questions for which it seems science cannot help, as there is no objectively right or wrong answer. However, we do sometimes need to make decisions on such questions, and I’m confused as to how we can do this. It’s clear we’re doing some kind of reasoning, but it seems to be based on fuzzy concepts we don’t try too hard to make precise. This is unsettling to me, and apparently to Sam Harris as well, and I admire him trying to address the problem, although I’m not sure how much meaningful progress he’s made.
TimG (#13), my statement about torture and moral axioms was an attempt at irony, e.g., as the Taliban is slowly extracting your fingernails with pliers, you’re thinking “damn, where’s that moral axiom when you need it.”
Associating morality with well-being is a liberal and relatively modern approach. Liberals tend to think of morality in terms of minimizing harm and being fair to each party, but this is an outgrowth of the Enlightenment, a philosophical and political movement developed in 18th-century Western Europe. Sorry, you’ll be hard pressed to find this type of morality applied in practice anywhere else, ever. For most of mankind’s history, morality has been associated with respect for authority, purity, sanctity, and loyalty to one’s group. In fact, these are the primary moral axes in the world today, which is worth keeping in mind for those who find it hard to understand the likes of Dick Cheney.
For many, it is a surprisingly alien world view. The moral code of science fiction’s Klingons is based on courage and loyalty, but they care little about well-being, particularly that of others. They make a good foil for the liberal values espoused by the white hats of Star Trek, but the contrast also dramatizes our real-world clash of moral outlooks. Naturally, on a science blog, most of the readers and commenters are likely to have liberal Enlightenment values. After all, the Scientific Revolution, also a unique European event, grew from the same Protestant philosophical base as the Enlightenment.
In the sense that liberal morality and scientific reasoning draw from organized thinking about the world, empirical analysis, theory testing and theory tossing, science does inform morality, but not everyone’s.
—–
Science had a good article on this a while back: The New Synthesis in Moral Psychology
Jonathan Haidt, et al. Science 316, 998 (2007); DOI: 10.1126/science.1137651
“there is no such thing as a unit of well-being that we can look inside the brain and measure”
I think this is asking a little bit too much from Sam. There is also no unit of mental health, but that doesn’t stop us from recognizing and treating depression or other mental problems.