Update and reboot: Sam Harris has responded to my blog post reacting to his TED talk. In the initial version of this response-to-the-response-to-the-response-to-the-talk, I let myself get carried away with irritation at this tweet, and thereby contributed to the distraction from substantive conversation. Bad blogger.
In any event, Sam elaborates his position in some detail, so I encourage you to have a look if you are interested, although it didn’t change my mind on any issue of consequence. There are a number of posts out there by people who know what they are talking about and surely articulate it better than I do, including Russell Blackford and Julian Sanchez (who, one must admit, has a flair for titles), and I should add Chris Schoen.
But I wanted to try to clarify my own view on two particular points, so I put them below the fold. I went on longer than I intended to (funny how that happens). The whole thing was written in a matter of minutes — have to get back to real work — so grains of salt are prescribed.
First, the role of consensus. In formal reasoning, we all recognize the difference between axioms and deductions. We start by assuming some axioms, and the laws of logic allow us to draw certain conclusions from them. It’s not helpful to argue that the axioms are “wrong” — all we are saying is that if these assumptions hold, then we can safely draw certain conclusions.
A similar (although not precisely analogous) situation holds in other areas of human reason, including both science and morality. Within a certain community of like-minded reasoners, a set of assumptions is taken for granted, from which we can draw conclusions. When we do natural science, we assume that our sense data is more or less reliable, that we are not being misled by an evil demon, that simpler theories are preferable to complicated theories when all else is equal, and so forth. Given those assumptions, we can go ahead and do science, and when we disagree — which scientists certainly do — we can usually assume that the disagreements will ultimately be overcome by appeal to phenomena in the natural world, since as like-minded reasoners we share common criteria for adjudicating disputes. Of course there might be some people who refuse to accept those assumptions, and become believers in astrology or creationism or radical epistemological skepticism or what have you. We can’t persuade those people that they’re wrong by using the standards of conventional science, because they don’t accept those standards (even when they say they do). Nevertheless, we science-lovers can get on with our lives, pleased that we have a system that works by our lights, and in particular one that is pragmatically successful at helping us deal with the world we live in.
When it comes to morality, we indeed have a very similar situation. If we all agree on a set of starting moral assumptions, then we constitute a functioning community that can set about figuring out how to pass moral judgments. And, as I emphasized in the original post, the methods and results of science can be extremely helpful in that project, which is the important and interesting thing that we all agree on, which is why it’s a shame to muddy the waters by denying the fact/value distinction or stooping to insults. But I digress.
The problem, obviously, is that we don’t all agree on the assumptions, as far as morality is concerned. Saying that everyone, or at least all right-thinking people, really want to increase human well-being seems pretty reasonable, but when you take the real world seriously it falls to pieces. And to see that, we don’t have to contrast the values of fine upstanding bourgeois Americans with those of Hitler or Jeffrey Dahmer. There are plenty of fine upstanding people — you can easily find them on the internet! — who think that human well-being is maximized by an absolute respect for individual autonomy, where people have equal access to primary goods but are given the chance to succeed or fail in life on their own. Other people think that a more collective approach is called for, and it is appropriate for some people to cede part of their personal autonomy — for example, by paying higher taxes — in the name of the greater good.
Now, we might choose to marshal arguments in favor of one or another of these viewpoints. But those arguments would not reduce to simple facts about the world that we could in principle point to; they would be appeals to the underlying moral sentiments of the individuals, which may very well end up being radically incompatible. Let’s say that killing a seventy-year-old person (against their will) and transplanting their heart into the body of a twenty-year-old patient might add more years to the young person’s life than the older person might be expected to have left. Despite the fact that a naive utility-counting would argue in favor of the operation, most people (not all) would judge that not to be moral. But what if a deadly virus threatened to wipe out all of humanity, and (somehow) the cure required killing an unwilling victim? Most people (not all) would argue that we should reluctantly take that step. (Think of how many people are in favor of involuntary conscription.) Does anyone think that empirical research, in neuroscience or anywhere else, is going to produce a quantitative answer to the question of exactly how much harm would need to be averted to justify sacrificing someone’s life? “I have scientifically proven that if we can save the lives of 1,634 people, it’s morally right to sacrifice this one victim; but if it’s only 1,633, we shouldn’t do it.”
At bottom, the issue is this: there exist real moral questions that no amount of empirical research alone will help us solve. If you think that it’s immoral to eat meat, and I think it’s perfectly okay, neither one of us is making a mistake, in the sense that Fred Hoyle was making a mistake when he believed that conditions in the universe have been essentially unchanging over time. We’re just starting from different premises.
The crucial point is that the difference between sets of incompatible moral assumptions is not analogous to the difference between believing in the Big Bang vs. believing in the Steady State model; but it is analogous to believing in science vs. being a radical epistemological skeptic who claims not to trust their sense data. In the cosmological-models case, we trust that we agree on the underlying norms of science and together we form a functioning community; in the epistemological case, we don’t agree on the underlying assumptions, and we have to hope to agree to disagree and work out social structures that let us live together in peace. None of which means that those of us who do share common moral assumptions shouldn’t set about the hard work of articulating those assumptions and figuring out how to maximize their realization, a project of which science is undoubtedly going to be an important part. Which is what we should have been talking about all along.
The second point I wanted to mention was the justification we might have for passing moral judgments over others. Not to be uncharitable, but it seems that the biggest motivation most people have for insisting that morals can be grounded in facts is that they want it to be true — because if it’s not true, how can we say the Taliban are bad people?
That’s easy: the same way I can say radical epistemological skepticism is wrong. Even if there is no metaphysically certain grounding from which I can rationally argue with a hard-core skeptic or a Taliban supporter, nothing stops me from using the fundamental assumptions that I do accept, and acting accordingly. There is a weird sort of backwards-logic that gets deployed at this juncture: “if you don’t believe that morals are objectively true, you can’t condemn the morality of the Taliban.” Why not? Watch me: “the morality of the Taliban is loathsome and should be resisted.” See? I did it!
The only difference is that I can only present logical reasons to support that conclusion to other members of my morality community who proceed from similar assumptions. For people who don’t, I can’t prove that the Taliban is immoral. But so what? What exactly is the advantage of being in possession of a rigorous empirical argument that the Taliban is immoral? Does anyone think they will be persuaded? How we actually act in the world in the face of things we perceive to be immoral seems to depend in absolutely no way on whether I pretend that morality is grounded in facts about Nature. (Of course there exist people who will argue that the Taliban should be left alone because we shouldn’t pass our parochial Western judgment on their way of life — and I disagree with those people, because we clearly do not share underlying moral assumptions.)
Needless to say, it doesn’t matter what the advantage of a hypothetical objective morality would be — even if the world would be a better place if morals were objective, that doesn’t make it true. That’s the most disappointing part of the whole discussion, to see people purportedly devoted to reason try to concoct arguments in favor of a state of affairs because they want it to be true, rather than because it is.
I think the only thing that constitutes bad blogging is ranting monologue; strong remarks about an individual’s intelligence do not.
What I find interesting is that most of these discussions, on both sites, are dealing with fine details about what each author said and are missing the whole point. If you step back and see the forest instead of focusing on the trees, you see that Sam is saying that there are right and wrong answers to the most important moral questions, and that we as human beings can and should have open discussions about them. Further, people should not be able to hide behind religion when it comes to discussions about these moral questions. We should be able to speak up when we see obvious human suffering occurring in the name of, or condoned by, dogma, whether religious or otherwise, and we should strive to move towards a global society where this form of discourse is the rule and not the exception.
Timo Sinnemäki wrote “Robert, you’re being either extremely lazy or dishonest. I will quote this one more time:
‘Even if there is no metaphysically certain grounding from which I can rationally argue with a hard-core skeptic or a Taliban supporter, nothing stops me from using the fundamental assumptions that I do accept, and acting accordingly. There is a weird sort of backwards-logic that gets deployed at this juncture: ‘if you don’t believe that morals are objectively true, you can’t condemn the morality of the Taliban.’ Why not? Watch me: ‘the morality of the Taliban is loathsome and should be resisted.’ See? I did it!'”
Nice try, but no sale.
First, you’re attempting to negate the quotes *I* provided with other quotes. But they don’t negate them, they just *contradict* them.
Second, claiming that one can “argue with a hard-core skeptic or a Taliban supporter” or even “condemn the morality of the Taliban” is not the issue at hand. That just maintains the status quo (see above), since Westerners have been doing those very things (arguing and condemning) for years. And where has it gotten us? Exactly nowhere.
What Harris is advocating is using scientific inquiry to produce a new set of tools for establishing standards for behavior. Without those metrics, we’re left with the status quo of subjectivity. “You’re wrong.” “No, YOU’RE wrong!”
And as for “Robert, you’re being either extremely lazy or dishonest” — leave out the ad hominem schoolyard bullshit, pal; you’re not impressing anyone with it.
Moral relativism is the only rational perspective.
There is certainly no universal morality; history clearly shows that most human males will keep raping and murdering as long as there are no negative consequences for them. People will always be willing to increase their well-being at the cost of others.
There is nothing inherently wrong in murder or rape; they are both part of human nature and can form the basis of a successful reproductive strategy. Biology purposefully puts us on a collision course; competition is beneficial for the species, even if it leads to untold suffering of individuals.
There is certainly no universal morality. That doesn’t, however, mean we cannot or should not set up our own rules for our own benefit; we can and we should, but they cannot be grounded in some nonsensical notion of universal morality; they have to be grounded in pragmatism. Every society should agree on a set of such rules, and most do: they are called laws.
There is no contradiction.
You should also elaborate on the status quo issue because I don’t understand what you’re saying.
Timo Sinnemäki Says:
“Your horror towards those quotes is unfounded and you should think their words through before you start to pass around insulting non-sequiturs about ivory towers and Gödel and such.”
I guess my point about Gödelian vertigo was a bit obscure. As Sam notes, the heart of the matter is: “‘Who decides what is a successful life?’ Well, who decides what is coherent argument? Who decides what constitutes empirical evidence? Who decides when our memories can be trusted?”
When justifying a position on values one eventually slams into a cognitive wall as impenetrable as Gödel’s incompleteness theorem. There is no way to step outside the system and see absolute truths. All logical systems require some axioms that cannot be justified within the system. But that’s OK. We don’t need absolute truths. We need explanations that work well enough to improve our collective lot in life. Stuffing women into cloth bags to prevent their bodies from driving us poor men into sinful rages of lust is obviously not one of them.
Science is the art of not fooling yourself, to paraphrase Richard Feynman. It’s an enterprise devoted to separating explanations that work from those that don’t.
There are no reason-free-zones and this is especially true of moral reasoning.
Tim (66), your sarcasm is inversely proportional to your substance.
Mike (90) , your article is nice evidence to show that Sam Harris, or anyone who argues that morality is anything but a human driven abstraction, is flat out wrong. It is interesting to see that feelings of morality do appear to be involuntary, and this gets back to arguments that morality may be an evolutionary artifact. One thing that is interesting is that it is known in mathematical psychology circles that humans are a species that do not always follow optimal solutions, even when less intelligent animals like rats do. I suspect that morality is linked to human capacity to compute non-optimal solutions. This would make evolutionary sense, because it would give them an advantage against species that would always take optimal paths.
I further suspect that the large set of contradictory moral values that people tend to have are a hallmark of non-optimization.
Tim (66), you’re still not funny.
Chesire cat (107), in reference to Mike’s (90) article, apparently it plays a key part in Harris’ thesis.
See Richard Dawkins’ comment on his website in regards to Harris’ article.
Jon (109),
Here is Dawkins’ quote regarding the issue:
“I was one of those who had unthinkingly bought into the hectoring myth that science can say nothing about morals. To my surprise, The Moral Landscape [Sam’s upcoming book] has changed all that for me. It should change it for philosophers too. Philosophers of mind have already discovered that they can’t duck the study of neuroscience, and the best of them have raised their game as a result. Sam Harris shows that the same should be true of moral philosophers, and it will turn their world exhilaratingly upside down.”
He doesn’t go into further detail on his blog, but my initial reaction when I saw the article reporting the study was that it is only a matter of time until science shines the light of reason on moral questions as it has with respect to other areas where humans have historically been ignorant. Part of that answer will come from neuroscience, but just as physics reductionism isn’t the whole answer with respect to a “TOE,” there will be a role to play for, among others, moral philosophers. This area, like other aspects of physical reality (including what it means to be human), will give up its secrets one at a time. We are fortunate indeed that our brains have the ability to model the world that we live in and can interpret that world in verifiable ways.
Sorry, I misspoke: Dawkins didn’t say it played a key part in Harris’ thesis, but said, “Relevant to Sam’s thesis is this research, showing that moral judgments can be experimentally altered by magnetic pulses administered to particular parts of the brain.” It can be found in the comment section of the article on his site.
@Matt T – Sam’s editorial on justifying torture makes me want to throw up, and reminds me that Sean should be commended for being so polite in his responses to Sam.
Chanda (113),
I respect the depth of your feelings in this regard, but they are unrelated to the validity, one way or the other, of the issue being discussed, and I fail to see how they relate to the comment from MattT. Sorry.
re: point 1.
“At bottom, the issue is this: there exist real moral questions that no amount of empirical research alone will help us solve.” – Sean
Sam’s point is that the same can be said of all scientific questions (see Hume’s problem of induction). In a sense, you and Harris are in agreement. If someone made a factual proposition that could not in principle (could not possibly) be tested, you would reject it. Sam is saying, likewise, that any moral proposition that cannot in principle be tested should be rejected. As with science, underlying assumptions are going to be required to get moral science rolling, but the need for – or existence of – those assumptions is not an argument against the prospect of an ethical science.
“The crucial point is that the difference between sets of incompatible moral assumptions is not analogous to the difference between [competing theories drawn from common assumptions]”. – sean
This is, of course, logically true – to compare propositions, we need common presuppositions. Harris’ claim is that there is an existing global consensus on the root (or radical if you like) presupposition of morality. He calls it “wellbeing” (broadly speaking, the absence of suffering – I think this is a fair translation). He also claims that suffering can be measured, and so this is a basis for scientific inquiry into ethical propositions. If others want to propose other presuppositions, that’s fine, but isn’t it necessary to inquire into the truth of moral propositions whatever their underlying presuppositions may be? And doesn’t it make sense to reject any moral proposition whose underlying presupposition is by design unverifiable?
I think there are a few objections one might make.
1. There is no consensus on wellbeing. Harris’ claim, in this regard, is errant.
2. Wellbeing cannot be measured.
3. There are presuppositions that are better than wellbeing.
4. Science is bogus to begin with, so why muddy morality with it?
5. Untestability is not an adequate ground for rejecting an ethical proposition.
You appear to be attempting an argument for the last point. I hope this helps a bit.
re: in conclusion…
“That’s the most disappointing part of the whole discussion, to see people purportedly devoted to reason try to concoct arguments in favor of a state of affairs because they want it to be true, rather than because it is.” – sean
This isn’t valid given your previous arguments. If truth, any truth, is ultimately dependent on a chosen presupposition, then every truth is ultimately only true because the believer wants it to be. According to your own reasoning, the truths we “want” to be true, are not simply the only truths we have, but the only truths possible. It doesn’t seem reasonable to criticize others for expressing mere preferences when you’re doing the same thing yourself.
@JamesAllen
Now we are getting somewhere. Construction of alternatives.
Quote:
I think there are a few objections one might make.
1. There is no consensus on wellbeing. Harris’ claim, in this regard, is errant.
2. Wellbeing cannot be measured.
3. There are presuppositions that are better than wellbeing.
4. Science is bogus to begin with, so why muddy morality with it?
5. Untestability is not an adequate ground for rejecting an ethical proposition.
End Quote.
1. Does not everything start from a point where definitions are worked out? I believe in his talk he said that if we can agree that there is some measurable thing called wellbeing, then science would be useful. He showed a continuum and gave analogies to physical wellbeing. He also said that in the future we might find that technology lets us measure it.
2. Yet. And, in my profession there are many things I cannot measure, yet that does not mean that a weight does not exist. For example, how do you measure the strength of a teacher? Is it how well pupils do on a standardized test? How happy the pupils are? How much they contribute to the educational community? Quantification fails continuously, and yet surely we can agree that there are good and bad teachers.
3. Now we are getting somewhere! What are your suggestions? The Buddha seemed to boil it down to these two (suffering and cessation of suffering) and after intense meditation many Buddhists reach the same conclusion. They seem to have beat us to the punch because they are willing to forgo the above scientific debate and use another method for arriving at truths.
4. Maybe you are right, but it works so well for so many things as does mathematics.
5. Scientifically, of course it is. Which is why Harris is trying to pigeonhole it by defining axioms.
RMB, thanks for catching that. I actually should have directed that at @spyder who posted the link (just a few comments above Matt’s comment).
As for the relevance — I think that we can talk philosophy or we can talk about reality or we can talk about both. If you want to keep it just at philosophy that’s fine, but I don’t think it’s irrelevant to look at the real world implications of the ideas under discussion. From that point of view, I think Harris’s perspective and application of his so-called reasoned-to-moralism is extremely relevant.
So, a hearty thanks to Spyder for pointing out the link and reminding me that Sean is to be commended for being so polite.
@ religionistheissue
I didn’t intend to argue those points, but to point out that they are possible lines of contention. I should probably clarify the last one – I think you’re misunderstanding what I meant to say.
“5. Untestability is not an adequate ground for rejecting an ethical proposition.” – me
Part of Harris’ argument is that untestable ethical propositions *should* be rejected. Why else would he say that testing ethical propositions is necessary? If someone wanted to object to this claim (i.e., Sean), they could argue that there are ethical propositions that cannot be tested under any presupposition, but are nonetheless necessary – or something to that effect.
Sean is close to arguing this, but not exactly. He hasn’t yet claimed that there are necessary ethical propositions that cannot be tested. He has merely argued that wellbeing is not currently a presupposition of science, and that he does not believe that such a presupposition can be successfully integrated into science. If he’s right, of course, science will not be able to answer ethical questions. Sean has provided some arguments for WHY an ethical presupposition can’t be integrated into science, but Harris has used the same points to say exactly the opposite. In the end, Sean’s arguments do no more to note the impossibility of ethical science than they do to note the impossibility of any kind of science.
It’s not that he’s “wrong”, but that he hasn’t said anything beyond his own preferences.
Chanda (117),
In light of your original comment taken as a whole, I assume when you say that Harris’ perspective and the application of a “reasoned-to-moralism” approach is extremely relevant, you mean that his particular way of seeking moral answers through the scientific process would result in torture.
I think he would (with certain caveats) argue the opposite: human history is most rife with widespread torture and barbarism in exactly those places and times where moral decisions are informed by religion, superstition, blind nationalism, racism, and other outmoded ways of analyzing the world. As the world has become more oriented to a scientific way of thinking, torture and barbarism have generally declined in scope and severity. I think of that as progress.
I don’t think he’d argue that a moral system more closely based on science (i.e., ought from is) would eliminate torture altogether, and he acknowledges that such an approach could well support otherwise objectionable methods of, for example, obtaining information in extreme situations — this issue might be an absolute for you, but insert whatever extreme hypothetical situation you wish that might compel otherwise objectionable behavior.
From my perspective, I would greatly prefer a world where morals are increasingly informed by science, however incomplete and messy that may be at any given time.
I think my objection, RMB, is that in a scenario where you become the victim of “incomplete messiness,” I’m sure you would hope that people would stop to think about whether what they intend to do to you resonates with their inner moral voice, and not just whether some kind of logic tells them that it is ok. And it’s not clear to me that this particular quality (“appealing to inherent humane sensibilities” if you will) is completely tangible or deducible.
I guess in the end some people might say that it is illogical for me to be absolutely against “objectionable behavior” on par with torture. But I’m okay with it because I know in my gut that the world I want to live in is one where this is not how we acquire information, no matter how badly we want it. And I hate the idea of a moral system that doesn’t leave room for those kinds of gut reactions, even when they aren’t based on immediately obvious evidence.
@Chanda
“And I hate the idea of a moral system that doesn’t leave room for those kinds of gut reactions, even when they aren’t based on immediately obvious evidence.” – Chanda
Moral systems by definition distinguish between gut feelings and moral truth, evoking the latter in order to suppress the former. These are the only kind of moral systems we’ve ever known. There has never been, for example, a legal code that allows for a “gut feeling” defense. But I get your point. I don’t think Harris is suggesting that people shouldn’t have gut feelings (could we even stop ourselves if we wanted to?), but that when we endeavor to suggest codes of conduct to our fellows (codes intended to override our gut feelings), we should form those moral codes using the best means available. Besides, there’s nothing to say that “gut feelings” won’t be a factor in the development of wellbeing – or whatever else is considered important and measurable. Harris might focus on cognitive research as a basis, but others focus on experimental philosophy, and there might be other, better ways still. If there is an over-arching point though, it’s that we should start taking right and wrong seriously rather than leaving it to cultural accident.
# A little mini-play on the subject #
Judge: James Allen, you stand here accused of first degree murder. How do you plead?
James Allen: Sure, I did it, but my gut told me it was right.
Judge: Oh! Why didn’t you say so? [the judge slams down the gavel] Case dismissed!
[the court cheers and spontaneously bursts into a musical number]
… not really how things work.
I would highly recommend that all the would-be Sam Harris-inspired torturers watch the movie Brazil.
http://en.wikipedia.org/wiki/Brazil_(1985_film)
As far as the questions Sam is bringing up go, there is no new space to explore here. We have visited the realm of the abusiveness of science-based morality several times. One of the most recent episodes involved eugenics.
http://en.wikipedia.org/wiki/Eugenics
It is important to understand that eugenics is essentially the well-established science of selective breeding applied to humans.
http://en.wikipedia.org/wiki/Selective_breeding
I almost forgot this one
http://en.wikipedia.org/wiki/Social_Darwinism
I really like this article for highlighting the dark side of the “moral values” that will be determined by our would-be future “moral” overlords.
http://www.nytimes.com/2006/11/21/science/21belief.html?_r=3&n=Top/Reference/Times%20Topics/People/T/Tyson,%20Neil%20DeGrasse&pagewanted=all
I think this quote nicely sums up the morality of science:
By the third day, the arguments had become so heated that Dr. Konner was reminded of “a den of vipers.”
“With a few notable exceptions,” he said, “the viewpoints have run the gamut from A to B. Should we bash religion with a crowbar or only with a baseball bat?”
Sam Harris did not offer an adequate response to Carroll. His argument rests entirely on the idea that “morality is a matter of well-being” is as fundamental a truth as, say, “1+1=2,” or something of that nature. It’s not. Take Harris’ world-view, delete moral realism, and you still have a perfectly consistent and informed world-view.
Moral realism isn’t necessary to have a fully enlightened, scientifically viable view of the world. And since it isn’t supported by deductive or inductive reasoning, it’s irrational to believe in the existence of moral truth. Hume got it right; Carroll probably quoted him not because he was appealing to authority, but because his statement makes sense.