Update and reboot: Sam Harris has responded to my blog post reacting to his TED talk. In the initial version of this response-to-the-response-to-the-response-to-the-talk, I let myself get carried away with irritation at this tweet, and thereby contributed to the distraction from substantive conversation. Bad blogger.
In any event, Sam elaborates his position in some detail, so I encourage you to have a look if you are interested, although it didn’t change my mind on any issue of consequence. There are a number of posts out there by people who know what they are talking about and surely articulate it better than I do, including Russell Blackford and Julian Sanchez (who, one must admit, has a flair for titles), and I should add Chris Schoen.
But I wanted to try to clarify my own view on two particular points, so I put them below the fold. I went on longer than I intended to (funny how that happens). The whole thing was written in a matter of minutes — have to get back to real work — so grains of salt are prescribed.
First, the role of consensus. In formal reasoning, we all recognize the difference between axioms and deductions. We start by assuming some axioms, and the laws of logic allow us to draw certain conclusions from them. It’s not helpful to argue that the axioms are “wrong” — all we are saying is that if these assumptions hold, then we can safely draw certain conclusions.
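(A toy illustration of my own, not anything from Sam’s post or mine: in a proof assistant such as Lean, hypotheses play exactly this role. The system never asks whether P and P → Q are “true” of the world; it only certifies that, granting them, Q follows.)

```lean
-- Minimal sketch: the hypotheses hP and hPQ play the role of axioms.
-- Lean does not judge whether they hold; it only checks that the
-- conclusion Q follows from them by the rules of logic.
example (P Q : Prop) (hP : P) (hPQ : P → Q) : Q := hPQ hP
```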
A similar (although not precisely analogous) situation holds in other areas of human reason, including both science and morality. Within a certain community of like-minded reasoners, a set of assumptions is taken for granted, from which we can draw conclusions. When we do natural science, we assume that our sense data is more or less reliable, that we are not being misled by an evil demon, that simpler theories are preferable to complicated theories when all else is equal, and so forth. Given those assumptions, we can go ahead and do science, and when we disagree — which scientists certainly do — we can usually assume that the disagreements will ultimately be overcome by appeal to phenomena in the natural world, since as like-minded reasoners we share common criteria for adjudicating disputes. Of course there might be some people who refuse to accept those assumptions, and become believers in astrology or creationism or radical epistemological skepticism or what have you. We can’t persuade those people that they’re wrong by using the standards of conventional science, because they don’t accept those standards (even when they say they do). Nevertheless, we science-lovers can get on with our lives, pleased that we have a system that works by our lights, and in particular one that is pragmatically successful at helping us deal with the world we live in.
When it comes to morality, we indeed have a very similar situation. If we all agree on a set of starting moral assumptions, then we constitute a functioning community that can set about figuring out how to pass moral judgments. And, as I emphasized in the original post, the methods and results of science can be extremely helpful in that project. That is the important and interesting point on which we all agree, which is why it’s a shame to muddy the waters by denying the fact/value distinction or stooping to insults. But I digress.
The problem, obviously, is that we don’t all agree on the assumptions, as far as morality is concerned. Saying that everyone, or at least all right-thinking people, really want to increase human well-being seems pretty reasonable, but when you take the real world seriously it falls to pieces. And to see that, we don’t have to contrast the values of fine upstanding bourgeois Americans with those of Hitler or Jeffrey Dahmer. There are plenty of fine upstanding people — you can easily find them on the internet! — who think that human well-being is maximized by an absolute respect for individual autonomy, where people have equal access to primary goods but are given the chance to succeed or fail in life on their own. Other people think that a more collective approach is called for, and it is appropriate for some people to cede part of their personal autonomy — for example, by paying higher taxes — in the name of the greater good.
Now, we might choose to marshal arguments in favor of one or another of these viewpoints. But those arguments would not reduce to simple facts about the world that we could in principle point to; they would be appeals to the underlying moral sentiments of the individuals, which may very well end up being radically incompatible. Let’s say that killing a seventy-year-old person (against their will) and transplanting their heart into the body of a twenty-year-old patient would add more years to the young person’s life than the older person could be expected to have left. Despite the fact that naive utility-counting would argue in favor of the operation, most people (not all) would judge it not to be moral. But what if a deadly virus threatened to wipe out all of humanity, and (somehow) the cure required killing an unwilling victim? Most people (not all) would argue that we should reluctantly take that step. (Think of how many people are in favor of involuntary conscription.) Does anyone think that empirical research, in neuroscience or anywhere else, is going to produce a quantitative answer to the question of exactly how much harm would need to be averted to justify sacrificing someone’s life? “I have scientifically proven that if we can save the lives of 1,634 people, it’s morally right to sacrifice this one victim; but if it’s only 1,633, we shouldn’t do it.”
At bottom, the issue is this: there exist real moral questions that no amount of empirical research alone will help us solve. If you think that it’s immoral to eat meat, and I think it’s perfectly okay, neither one of us is making a mistake, in the sense that Fred Hoyle was making a mistake when he believed that conditions in the universe have been essentially unchanging over time. We’re just starting from different premises.
The crucial point is that the difference between sets of incompatible moral assumptions is not analogous to the difference between believing in the Big Bang vs. believing in the Steady State model; it is analogous to the difference between believing in science and being a radical epistemological skeptic who claims not to trust their sense data. In the cosmological-models case, we trust that we agree on the underlying norms of science and together we form a functioning community; in the epistemological case, we don’t agree on the underlying assumptions, and we can only hope to agree to disagree and work out social structures that let us live together in peace. None of which means that those of us who do share common moral assumptions shouldn’t set about the hard work of articulating those assumptions and figuring out how to maximize their realization, a project of which science is undoubtedly going to be an important part. Which is what we should have been talking about all along.
The second point I wanted to mention was the justification we might have for passing moral judgments over others. Not to be uncharitable, but it seems that the biggest motivation most people have for insisting that morals can be grounded in facts is that they want it to be true — because if it’s not true, how can we say the Taliban are bad people?
That’s easy: the same way I can say radical epistemological skepticism is wrong. Even if there is no metaphysically certain grounding from which I can rationally argue with a hard-core skeptic or a Taliban supporter, nothing stops me from using the fundamental assumptions that I do accept, and acting accordingly. There is a weird sort of backwards-logic that gets deployed at this juncture: “if you don’t believe that morals are objectively true, you can’t condemn the morality of the Taliban.” Why not? Watch me: “the morality of the Taliban is loathsome and should be resisted.” See? I did it!
The only difference is that I can only present logical reasons to support that conclusion to other members of my morality community who proceed from similar assumptions. For people who don’t, I can’t prove that the Taliban is immoral. But so what? What exactly is the advantage of being in possession of a rigorous empirical argument that the Taliban is immoral? Does anyone think they will be persuaded? How we actually act in the world in the face of things we perceive to be immoral seems to depend in absolutely no way on whether I pretend that morality is grounded in facts about Nature. (Of course there exist people who will argue that the Taliban should be left alone because we shouldn’t pass our parochial Western judgment on their way of life — and I disagree with those people, because we clearly do not share underlying moral assumptions.)
Needless to say, it doesn’t matter what the advantage of a hypothetical objective morality would be — even if the world would be a better place if morals were objective, that doesn’t make it true. That’s the most disappointing part of the whole discussion, to see people purportedly devoted to reason try to concoct arguments in favor of a state of affairs because they want it to be true, rather than because it is.
Sean,
> At bottom, the issue is this: there exist real moral questions that no amount of empirical research alone will help us solve
Even if this is true, it does not mean that science cannot be used to answer other moral questions.
It is the same in physics. Suppose there are two ways something could have happened, but either way the observable outcome is the same. By the very definition of such a situation, empirical research cannot tell us which way things happened. But that does not mean empirical research cannot answer other questions about the world.
“But what if I believe that the highest moral good is to be found in the autonomy of the individual, while you believe that the highest good is to maximize the utility of some societal group?”
I really just don’t know what to say about this kind of formulation. Let me put it to you as an analogy:
I believe chocolate is the tastiest of all ice creams. You believe it’s vanilla.
Is all food science now out the window?
The only assumptions that we need to begin with, to start talking about the possibility of a science of morality, are: that conscious states are concrete facts about the Universe; and that some of those conscious states are better to be in than others.
The word ‘better’ seems to be problematic here, but there is little doubt that some states of consciousness are more desirable to find oneself in than others. Well-being versus suffering – surely the acknowledgment that such a spectrum of phenomenal states exists is not too much of a consensus to expect.
Timo Sinnemäki wrote “There you have it: ‘We do.’; he contradicts himself and concedes the entire argument to Carroll, but keeps on talking down to him like nothing had happened.”
You might want to leave that celebratory champagne in the fridge, Timo; Harris neither contradicted himself nor conceded anything.
When Harris wrote “We do”, he was saying that “we…decide what is a successful life…what is coherent argument…what constitutes empirical evidence…when our memories can be trusted” — _we_ do all of these things, and we do them as best we can with the tools we have.
We don’t decide what constitutes empirical evidence on a whim. We don’t decide what a coherent argument is because a burning bush tells us what criteria to use. We don’t decide what memories can be trusted on command of the State. *We* (tool-using H. sapiens) decide.
What Sean and others here seem to be suggesting is that, since there are no clear-cut instructions for how to do this, and since there are cultural disagreements about the matter, the _only_ remaining course of action is inaction. We can’t construct a foolproof system, so the status quo must remain intact. It’s a good thing that Darwin didn’t subscribe to that notion!
Sean’s original post said “It’s ultimately a personal choice, not an objective truth to be found simply by looking closely at the world. How are we to balance individual rights against the collective good? You can do all the experiments you like and never find an answer to that question.”
Well, I’m sorry, but _rubbish_. I’m deeply suspicious of any scientist who claims “we can never find an answer to that question”. How often has that been proved true? Has it *ever* been proved true? We’re tool-makers by nature. To claim that the necessary tools to perform any task cannot be created, simply because we don’t yet know what they are or how to use them, is to dismiss nearly the whole of human achievement. Scientists ought to be very circumspect when it comes to even harboring such thoughts, much less making declarations to that effect.
DamnYankees (I’m sorry to see you’ve heavily edited your post 27, making mine unintelligible, but I’ll leave it here anyway):
1) I think there’s a danger of misattribution here. If a group of people want to make me act in some way against my own desires, for the sake of the common good, I might justify (to myself) my refusal solely on the basis of the principle of individual autonomy, while still arguing to the others based on common utility, merely as an effort to persuade them to leave me alone.
2) I think Sean has made it quite clear that he doesn’t argue against the usefulness of science in resolving moral questions, in cases where the *premises are agreed upon*.
3) Yes, I would smile, nod, and move on. Luckily there are larger groups of other people whose goals, including reasons for supporting astronomy, I find more sensible. The reasonable uniformity (based on biology and culture) of such moral communities permits us to declare that morality is non-arbitrary, while not giving an inch to claims of ‘universality’.
To respond point by point:
1) I’m not sure what this means. You mean people will lie to get their own way? Well…duh. Not sure what you are trying to say.
2) And Harris has provided a premise that enough human beings agree upon that we can usefully study this as a science. I still haven’t seen anyone rebut his contention that moral actions are almost always geared toward creating a state of pleasant consciousness free from suffering. Now, you might say that premise is too vague, but it is a premise on which we basically all agree.
3) I’m not sure why a reasonable uniformity of moral opinion is somehow not enough to build a scientific foundation, while a reasonable uniformity of opinion about the usefulness of empiricism is good enough to build one. Harris makes this point in the linked article, and I haven’t seen anyone address it.
Robert H.: “What Sean and others here seem to be suggesting is that, since there are no clear-cut instructions for how to do this, and since there are cultural disagreements about the matter, the _only_ remaining course of action is inaction. We can’t construct a foolproof system, so the status quo must remain intact. It’s a good thing that Darwin didn’t subscribe to that notion!”
I’m glad to say that you’ve completely made that up. Go through both posts: there’s not a shred of evidence for what you said.
Sorry, I’m an idiot. He kept making comparisons to physics, but I couldn’t figure out what experiments we could perform that would resolve value differences.
Could someone explain how he does this?
It seems that both Sean and Harris ultimately get to a point where communication stops. In Sam’s case, it was when he talked to the academic who defended Burka-wearing. He described how his final argument was to drop his jaw, turn and walk away. So much for discourse, let alone scientific discourse. Sean, for his part, describes how, for some, “We can’t persuade those people that they’re wrong by using the standards of conventional science, because they don’t accept those standards (even when they say they do). Nevertheless, we science-lovers can get on with our lives, pleased that we have a system that works by our lights…” So after all this talk, we’re back to science and religion ultimately shrugging their shoulders and walking away from each other, as they’ve done for thousands of years, each one shaking his or her head and muttering “Can you believe that guy!?”
DamnYankees:
1) It doesn’t have to have anything to do with lying. My common-good argument might make sense to both parties. My point was that just because I present a common-good argument to the other party, my own decision to refuse might not rely on that common-good argument but on the principle of individual autonomy.
2) He’s not merely saying that we *can* use maximum well-being as a premise in scientific inquiry, he’s *equating* morality with maximum well-being. As to whether nearly 100% really do take it as the ultimate moral axiom, well, the burden of proof is on him, and whatever the result, I fall outside that group. (This applies to number 3 as well.)
1) That’s true, but I’m not really sure what your point is. No one said morality was simple. There are always competing norms within all of us.
2) Actually, he has zero burden of proof, because he’s not really trying to prove anything. Rather, he is *defining* morality that way because it is a highly useful definition. We went over this in the last thread, so pardon me if I repeat myself. Harris is putting forward a definition of “morality” which is open to testing, open to science, and can (in theory) produce predictable results. Anyone can propose any definition they want for any word, but the test of a good definition is how useful it is. The definition Harris puts forward is a useful one, to me. Many other definitions of “morality” seem to be intentionally useless (e.g., “what God wants”), a talisman used to force an avoidance of actually discussing the issue.
“I merely assumed what I set out to prove.” – Sam Harris
I’d like to point out that this is not science; it is a fanciful thought exercise with no real-world benefits. Grade 10 students are taught this. In science you should assume the opposite of what you’re trying to prove; if you fail to prove it is wrong, it must be right, at least until something changes, at which point you go through it all again to see if the same conclusions hold.
Sean, if someone said that they would prefer to raise their children in a cave, malnourished and exposed to disease, presumably you would argue that it’s their perfect right to do so since they may not share the same “underlying assumptions” as 99.9% of the rest of the world about what constitutes physical health. A point that Sam Harris makes is that objective measures of mental or emotional wellbeing (e.g., freedom from abuse) may not be fundamentally different from objective measures of physical wellbeing (e.g., freedom from disease). If so, then science can show us how to proceed. Granted, what constitutes mental health is much fuzzier than physical health, but advances in neurophysiology may one day change that. I think Harris is saying that certain assumptions about biological wellbeing — and by extension certain shared assumptions about an environment in which people flourish — transcend culture, religion, etc. Apparently Hume would argue that we can’t prove that sexually molesting children or torturing small animals for fun is “objectively” immoral. But who cares.
I also agree with the point made by DamnYankees, who asks an important question: how do rational beings make moral decisions? This is not just a subject for philosophers, but the question itself can be scientifically addressed (by social scientists, psychologists, neuroscientists, etc.). Obviously there are huge gray areas (e.g., burkas vs. bikinis), but there are also areas that probably 99.999% of the human race would agree on, e.g., reducing the physical suffering of innocent people. These could be part of the “shared moral assumptions” that Sean mentions, but shared among (almost) all people. The fact that 0.001% of the human population might disagree may prove Sean’s (and Hume’s) point, but in the real world their point is purely academic anyway.
Ahmed: you claim that nothing has happened since Hume. Wrong. Many moral philosophers would agree that a significant step forward was made by the 20th century philosopher, John Rawls (see post #120 in the earlier thread on Harris). A good recent book that discusses Rawls’ contribution is “Justice” by Michael Sandel, based on a course he teaches at Harvard on moral philosophy.
SteveN, did you not read this?:
“Even if there is no metaphysically certain grounding from which I can rationally argue with a hard-core skeptic or a Taliban supporter, nothing stops me from using the fundamental assumptions that I do accept, and acting accordingly. There is a weird sort of backwards-logic that gets deployed at this juncture: “if you don’t believe that morals are objectively true, you can’t condemn the morality of the Taliban.” Why not? Watch me: “the morality of the Taliban is loathsome and should be resisted.” See? I did it!”
SteveN, I believe that you didn’t read Sean’s second point.
“How do rational beings make moral decisions?”
By developing a sense of morals, which has nothing to do with rationality.
DamnYankees,
Timo considered your second point, that Harris was simply defining morality as such. He disagrees:
“He’s not merely saying that we *can* use maximum well-being as a premise in scientific inquiry, he’s *equating* morality with maximum well-being.”
“Timo considered your second point, that Harris was simply defining morality as such. He disagrees:”
I am aware. I can read.
DamnYankees,
Then I am thoroughly perplexed as to why you simply reiterated the claim.
I think Sam and Sean are both right. Let me elaborate.
It’s all about axioms. Sam believes there’s an axiomatic framework on which to ground questions of morality; Sean thinks there isn’t.
To an extent, I think Sam is right. Let me offer a candidate axiom.
Axiom M: No human being should be killed or made to suffer just for the heck of it.
I wager it will be very difficult to find any sane person who disagrees with this.
In fact, many people would have no problem extending it to other living creatures as well.
Now Sam offers an example of a society that blinds every third child (I’ll call them the Blinders), and is appalled when a fellow academic sees nothing wrong with it.
But if we ask ourselves what exactly bothers us about this society, the answer is, “A blatant violation of Axiom M”. Children are being made to suffer horrendously for no reason.
But say, it turns out the Blinders do have a reason for this practice.
They claim that the blinded children lead a blessed life.
They also claim that this practice protects them from the wrath of God.
Of course, we rationalists know that this is mere superstition – Axiom M is still violated.
But what if it turns out that the Blinders are right?
Suppose, after extensive study, you find that the blinded children do indeed go on to lead very successful lives in all other respects and become great scholars and leaders.
Suppose, additionally, it is proved beyond doubt that the society suffers huge calamities whenever blinding is stopped, and prospers otherwise.
Now we come to Sean’s side of the table.
This is the question of individual choice versus social wellbeing, and Axiom M can’t help us here.
In fact, it seems extremely difficult to find any axiomatic framework which helps settle this issue (at least, I couldn’t think of any).
So, to summarize, I agree with Sam that morality isn’t “completely relative”, that there are statements like Axiom M which could be used to argue against extreme practices.
On the other hand, I agree with Sean that such axioms may not be broad enough to address more generic ‘individuality versus social good’ issues – especially since individuals may have very different ideas on what is good for society.
@ Xeridanus
““I merely assumed what I set out to prove.” – Sam Harris”
No. He observed that in the vast majority of cultures (if not all; I think he said all), moral concerns are concerns about changes in consciousness.
@josefjohann
I think you’ll find those were copied and pasted from his post, which is why I placed those words in quotation marks and attributed them to Sam Harris.
@DamnYankees
You have taken statements about moral axioms, which necessarily imply a choice (is this action good or bad?), then modified those statements and turned them into predilection choices about nutritionally irrelevant flavors – which are not moral choices by definition – and then you make the leap that these ice cream choices relate to nutritional values in exactly the same way that moral choices relate to moral values. You fail to see the missing piece in there. Are you saying the very nutritional emptiness of chocolate and vanilla equates to the moral irrelevance of comparing greed vs. altruism? I hope not, because that really is a non sequitur.
Perhaps in your analogy you actually meant that the flavor choice IS meaningful, that one has more nutritional value than the other. So let’s say, for argument’s sake, that this vanilla we’re talking about is rich in protein, complex carbs, and some nice polyunsaturated fats. Omega-6, why not. Plus all the micro-nutrients you need. The chocolate is nothing more than sweet, delicious chocolate. Now that flavor choice, “vanilla vs. chocolate”, can unequivocally be shown to prolong life in one organism over the control organism. Now we are talking about whether it’s actually “good” or “bad” to choose one over the other, with our axiomatic end goal being “live long enough and reproduce” vs. “tragic death from massive chocolate overdose on one’s 6th birthday.”
Back to the quote:
“But what if I believe that the highest moral good is to be found in the autonomy of the individual, while you believe that the highest good is to maximize the utility of some societal group?”
Here, using the now-modified analogy, how can we measure the ‘nutritional value’ of greed vs. altruism? Which one is ‘good for ya’, and which is ‘good for the whole clan’? Is classical neo-Darwinian fitness to be our measuring stick? Then the strategy that makes the most babies wins! Altruism it is. But wait – no, Harris says the point is that ‘most of the individuals’ be at peace or happy most of the time. OK, fine, we should be able to bash out a balance there, make it all work out all groovy, man.
However! Someone else might say that the point is to create a society where the opportunity of maybe someday becoming REALLY rich and REALLY happy, if only for a small minority of the group, is the most ‘moral’ of all solutions. Or better yet, that all statements of good or bad come only from the man with the VERY LARGE HAT. Or that there are no morals, really – only highly fluid, situational, conditional group-behavior rules developed unconsciously by hyper-intelligent apes trying to simultaneously satisfy a plethora of mostly contradictory impulses, both within themselves and among the other 7 billion apes crowded around them.
I’ve heard all of those, very elegantly argued, plus many more, and I am sure you have as well. The point is that they all share the characteristic of being wholly arbitrary. Kind of like “I personally like when people are like this-and-that to me, so ALL people should behave like this-and-that to me.” A nice enough thought, but a bit of a problem when you consider the vast spectrum of actual human behavioral variation.
I think Sam and Sean are being about equally stupid. Fundamental moral values aren’t like evidence-based beliefs about reality, as Sam says, nor are they like assumptions about reality, as Sean says. They’re like mathematical axioms. But then, Sean doesn’t seem to acknowledge the difference between assumptions and axioms.
Beliefs and assumptions can both turn out to be false, but axioms can’t: They’re the things _by which_ we judge whether a statement/theorem is true or false. If you’re wondering which axioms or fundamental moral values are ‘true’, you’re harboring some serious misconceptions.
Timo Sinnemäki wrote “I’m glad to say that you’ve completely made that up. Go through both posts: there’s not a shred of evidence for what you said [that ‘Sean and others here seem to be suggesting is that, since there are no clear-cut instructions for how to do this, and since there are cultural disagreements about the matter, the _only_ remaining course of action is inaction. We can’t construct a foolproof system, so the status quo must remain intact.’]”
Let’s see:
“There are not objective moral truths”
“When people share values, facts can be very helpful to them in advancing their goals. But when they don’t share values, there’s no way to show that one of the parties is ‘objectively wrong.’ And when you start thinking that there is, a whole set of dangerous mistakes begins to threaten.”
“The problem, obviously, is that we don’t all agree on the assumptions, as far as morality is concerned. Saying that everyone, or at least all right-thinking people, really want to increase human well-being seems pretty reasonable, but when you take the real world seriously it falls to pieces.”
“there exist real moral questions that no amount of empirical research alone will help us solve”
“I can only present logical reasons to support that conclusion to other members of my morality community who proceed from similar assumptions. For people who don’t, I can’t prove that the Taliban is immoral.”
…and so on. Sounds very status quo-y to me — that since there aren’t clear-cut instructions, and there are cultural disagreements, we can’t construct a foolproof system, ergo, your proposal fails.