Update and reboot: Sam Harris has responded to my blog post reacting to his TED talk. In the initial version of this response-to-the-response-to-the-response-to-the-talk, I let myself get carried away with irritation at this tweet, and thereby contributed to the distraction from substantive conversation. Bad blogger.
In any event, Sam elaborates his position in some detail, so I encourage you to have a look if you are interested, although it didn’t change my mind on any issue of consequence. There are a number of posts out there by people who know what they are talking about and surely articulate it better than I do, including Russell Blackford and Julian Sanchez (who, one must admit, has a flair for titles), and I should add Chris Schoen.
But I wanted to try to clarify my own view on two particular points, so I put them below the fold. I went on longer than I intended to (funny how that happens). The whole thing was written in a matter of minutes — have to get back to real work — so grains of salt are prescribed.
First, the role of consensus. In formal reasoning, we all recognize the difference between axioms and deductions. We start by assuming some axioms, and the laws of logic allow us to draw certain conclusions from them. It’s not helpful to argue that the axioms are “wrong” — all we are saying is that if these assumptions hold, then we can safely draw certain conclusions.
A similar (although not precisely analogous) situation holds in other areas of human reason, including both science and morality. Within a certain community of like-minded reasoners, a set of assumptions is taken for granted, from which we can draw conclusions. When we do natural science, we assume that our sense data is more or less reliable, that we are not being misled by an evil demon, that simpler theories are preferable to complicated theories when all else is equal, and so forth. Given those assumptions, we can go ahead and do science, and when we disagree — which scientists certainly do — we can usually assume that the disagreements will ultimately be overcome by appeal to phenomena in the natural world, since as like-minded reasoners we share common criteria for adjudicating disputes. Of course there might be some people who refuse to accept those assumptions, and become believers in astrology or creationism or radical epistemological skepticism or what have you. We can’t persuade those people that they’re wrong by using the standards of conventional science, because they don’t accept those standards (even when they say they do). Nevertheless, we science-lovers can get on with our lives, pleased that we have a system that works by our lights, and in particular one that is pragmatically successful at helping us deal with the world we live in.
When it comes to morality, we indeed have a very similar situation. If we all agree on a set of starting moral assumptions, then we constitute a functioning community that can set about figuring out how to pass moral judgments. And, as I emphasized in the original post, the methods and results of science can be extremely helpful in that project; that is the important and interesting point we all agree on, which is why it’s a shame to muddy the waters by denying the fact/value distinction or stooping to insults. But I digress.
The problem, obviously, is that we don’t all agree on the assumptions, as far as morality is concerned. Saying that everyone, or at least all right-thinking people, really want to increase human well-being seems pretty reasonable, but when you take the real world seriously it falls to pieces. And to see that, we don’t have to contrast the values of fine upstanding bourgeois Americans with those of Hitler or Jeffrey Dahmer. There are plenty of fine upstanding people — you can easily find them on the internet! — who think that human well-being is maximized by an absolute respect for individual autonomy, where people have equal access to primary goods but are given the chance to succeed or fail in life on their own. Other people think that a more collective approach is called for, and it is appropriate for some people to cede part of their personal autonomy — for example, by paying higher taxes — in the name of the greater good.
Now, we might choose to marshal arguments in favor of one or another of these viewpoints. But those arguments would not reduce to simple facts about the world that we could in principle point to; they would be appeals to the underlying moral sentiments of the individuals, which may very well end up being radically incompatible. Let’s say that killing a seventy-year-old person (against their will) and transplanting their heart into the body of a twenty-year-old patient might add more years to the young person’s life than the older person could be expected to have left. Despite the fact that naive utility-counting would argue in favor of the operation, most people (not all) would judge it not to be moral. But what if a deadly virus threatened to wipe out all of humanity, and (somehow) the cure required killing an unwilling victim? Most people (not all) would argue that we should reluctantly take that step. (Think of how many people are in favor of involuntary conscription.) Does anyone think that empirical research, in neuroscience or anywhere else, is going to produce a quantitative answer to the question of exactly how much harm would need to be averted to justify sacrificing someone’s life? “I have scientifically proven that if we can save the lives of 1,634 people, it’s morally right to sacrifice this one victim; but if it’s only 1,633, we shouldn’t do it.”
At bottom, the issue is this: there exist real moral questions that no amount of empirical research alone will help us solve. If you think that it’s immoral to eat meat, and I think it’s perfectly okay, neither one of us is making a mistake, in the sense that Fred Hoyle was making a mistake when he believed that conditions in the universe have been essentially unchanging over time. We’re just starting from different premises.
The crucial point is that the difference between sets of incompatible moral assumptions is not analogous to the difference between believing in the Big Bang vs. believing in the Steady State model; but it is analogous to believing in science vs. being a radical epistemological skeptic who claims not to trust their sense data. In the cosmological-models case, we trust that we agree on the underlying norms of science and together we form a functioning community; in the epistemological case, we don’t agree on the underlying assumptions, and we have to hope to agree to disagree and work out social structures that let us live together in peace. None of which means that those of us who do share common moral assumptions shouldn’t set about the hard work of articulating those assumptions and figuring out how to maximize their realization, a project of which science is undoubtedly going to be an important part. Which is what we should be talking about all along.
The second point I wanted to mention was the justification we might have for passing moral judgments over others. Not to be uncharitable, but it seems that the biggest motivation most people have for insisting that morals can be grounded in facts is that they want it to be true — because if it’s not true, how can we say the Taliban are bad people?
That’s easy: the same way I can say radical epistemological skepticism is wrong. Even if there is no metaphysically certain grounding from which I can rationally argue with a hard-core skeptic or a Taliban supporter, nothing stops me from using the fundamental assumptions that I do accept, and acting accordingly. There is a weird sort of backwards-logic that gets deployed at this juncture: “if you don’t believe that morals are objectively true, you can’t condemn the morality of the Taliban.” Why not? Watch me: “the morality of the Taliban is loathsome and should be resisted.” See? I did it!
The only difference is that I can only present logical reasons to support that conclusion to other members of my morality community who proceed from similar assumptions. For people who don’t, I can’t prove that the Taliban is immoral. But so what? What exactly is the advantage of being in possession of a rigorous empirical argument that the Taliban is immoral? Does anyone think they will be persuaded? How we actually act in the world in the face of things we perceive to be immoral seems to depend in absolutely no way on whether I pretend that morality is grounded in facts about Nature. (Of course there exist people who will argue that the Taliban should be left alone because we shouldn’t pass our parochial Western judgment on their way of life — and I disagree with those people, because we clearly do not share underlying moral assumptions.)
Needless to say, it doesn’t matter what the advantage of a hypothetical objective morality would be — even if the world would be a better place if morals were objective, that doesn’t make it true. That’s the most disappointing part of the whole discussion, to see people purportedly devoted to reason try to concoct arguments in favor of a state of affairs because they want it to be true, rather than because it is.
“Science can not help us decide which of these objectives is correct.”
What does “correct” mean in this sentence?
Exactly.
“Exactly.”
So you used a word that you don’t even know how to define, and you see this as an actual benefit of your argument?
You, as well as many others, are simply using vague and undefined words like “ought” and “better” and “correct”, and not even supplying definitions. This lets you simply avoid the issue.
If I walk up to you and say that 2+1=4 is a “more correct” equation than 2+1=3, you would rightly think I’m an idiot. But if I do the same with a moral claim, you don’t. I would like to see someone explain why.
No one can define it, that’s the point. Any definition of what “correct” means in this context is subjective. In The Big Lebowski’s parlance, it would be “just, like, your opinion, man”.
“No one can define it, that’s the point.”
Lots of people can define it. Anyone can define anything. Harris defines it in his talk.
You do realize there’s a difference between “can’t define” and “I don’t like how you define it”. You are doing the intellectual equivalent of putting your fingers in your ears and saying “LALALALALALALA”.
Sean himself has made this point many times in various posts when he tries to define “science” in a useful way. And he has said many times that definitions are neither right nor wrong; they are merely useful or useless. Harris is trying to define our morality in a useful way, such that it is open to testing and empirical evidence based on scientific principles.
You are taking the route of a moral Luddite, quite frankly, not only unwilling to join the progress, but actively trying to deny the existence of the argument.
slw: I don’t have a definition for “winning at life”, but I would like to understand more about how to optimize life. I do have beliefs, such as that using physical violence on my child is not a good way to raise my child, which are backed by research. Could this and other research into the best methods for raising a child, or for optimizing “well being”, be of value? I believe the answer is yes.
There are many values that are shared among people, to one degree or another, across the moral landscape. Also, it was considered acceptable not that long ago to punish children physically, and it still is acceptable in a large portion of the world. What if the answer to this can be found in neuroscience through this research? The danger you speak of, as I see it, is that these optimal ways to promote ‘well being’ might be institutionalized in a manner that has nothing to do with the study of those optimal ways, beyond the possibility that once the evidence is published it can be used to institutionalize the findings. I personally have less fear of that happening than I do of earlier belief systems that claim a “one size fits all” way of life becoming institutionalized, since I am sure the people who don’t fit the norm will be studied as much as, if not more than, those who are considered the norm.
There have also been regimes that did what you are worried about without any research into how to optimize “well being”, and I highly doubt that the research in and of itself would be the cause of such a regime. Maybe it will be a genie that can’t be put back into the bottle, but it is still the regimes with the desire to institutionalize ways to live that are the real problem, not the knowledge that is attained.
DamnYankees, how old are you? Like 15? Did you just discover logical positivism? Don’t worry, with hard work you’ll get over it. Keep up the good work!
Moral Luddite? Where do you come up with this stuff?
By “can’t” define, I obviously meant “there are many ways to” define, where science does not provide suitable criteria for evaluating the “usefulness” or “value” of the definitions.
“By “can’t” define, I obviously meant “there are many ways to” define, where science does not provide suitable criteria for evaluating the “usefulness” or “value” of the definitions.”
There are many ways to define everything. There are multiple ways to define logic. There are multiple ways to define knowledge. There are multiple ways to define reality. We’ve had entire nations define logic in weird ways, which led to ridiculous results, like Lysenkoism, but this huge disagreement on basic forms of math and logic didn’t seem to trouble us in the West. Do these disagreements render all logic subjective and thus not open to science? Presumably not, but I have no idea on what basis you would say so. Is it simply that a larger majority of people agree on what “compelling logic” is than on what “compelling morality” is? If it’s just a numbers game, I’m afraid your bright-line categories fly away.
You continue to commit the fallacy that if people disagree on something, it must therefore not be amenable to scientific inquiry. I have no real response when you merely re-assert the same argument over and over. The answer has been provided to you many times in this thread and all over the interwebs. This is the time when I smile, shake your hand, and move on, I guess.
Oh, and as for moral Luddite, I made that up. Not sure it really works as a metaphor, but I went with my gut.
I’m not saying there are _no_ questions for which scientific inquiry can provide answers; I’m just saying it can’t answer this specific one, namely “what does the word ‘correct’ mean in my original post?”
I mean if you really want to answer this question, provide me a way that you would scientifically investigate whether it was better to maximize (using some future neuroscientifically derived happiness metric):
a) the overall sum of human happiness (say arithmetic mean, to normalize for pop. size)
b) the minimum happiness of all humans
(This is just an example. There are obviously many other possible objectives that meet Harris’s “wellbeing” criteria. A toy sketch of how two such objectives can disagree follows below.)
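To make the choice concrete, here is a minimal sketch in Python, with entirely made-up numbers; the “happiness scores” and the two toy societies are hypothetical stand-ins for whatever future metric is being imagined. It only illustrates that objectives (a) and (b) can rank the same pair of outcomes in opposite orders, so measuring the scores does not by itself tell you which objective to optimize.

```python
# Toy illustration only: the scores are invented, not any real metric.
society_a = [9, 9, 9, 1]   # high average happiness, but one person is very badly off
society_b = [6, 6, 6, 6]   # lower average happiness, but nobody is badly off

def mean_happiness(scores):
    """Objective (a): overall (average) happiness, normalized for population size."""
    return sum(scores) / len(scores)

def min_happiness(scores):
    """Objective (b): happiness of the worst-off person."""
    return min(scores)

print(mean_happiness(society_a), mean_happiness(society_b))  # 7.0 vs 6.0 -> objective (a) prefers A
print(min_happiness(society_a), min_happiness(society_b))    # 1 vs 6     -> objective (b) prefers B
```

Which society counts as “better” then turns entirely on which aggregation rule was chosen before looking at the data.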
I think what Sam is after is a logical solution to the need for a morality other than religion. So, to that end, he is proposing the building blocks for a logical alternative to the purely faith-based morality that currently dominates human decisions. His examples of killing a daughter who was raped and throwing acid on a girl for learning to read are (obviously) pointed at (dangerous) morality founded in religion.
To deconstruct every word of Sam’s logic seems contrary to the point. If we deconstructed the logic in any religious text, we’d never get anywhere. Sam is not perfect. Show me someone who is. God? Unfortunately, He has “written” so many contradictory books that I am not buying it.
Is there anyone posting here who would argue for morality purely based on religious texts? If not, shouldn’t we be constructivists rather than deconstructionists? Shouldn’t we take Sam’s suggestions, correct mistakes, make adjustments, and retest their merits?
Morality is messy to be sure. To expect pure logic to conclusively design a set of standards is ambitious. He postulates that with advancing technology, we’ll get closer (better). We need to.
Have you seen the news lately?
@Dave
“This still is assuming that the overall sum of happiness is the objective. There are other reasonable objectives, like maximizing minimum happiness, or maximizing median happiness, etc. Science can not help us decide which of these objectives is correct.”
Yes, exactly. Which is why I make the distinction between my points (1) and (2). Harris seems to want to conflate them. I don’t think conflating them is necessary in order to think that (2) is a good way to build a moral system (where “good” can be subjective or objective, depending on your own personal take on morality). Conflating them seems to mainly lead to people writing a lot of blog posts and calling others’ arguments stupid.
Hot off the press in Science Daily:
http://www.sciencedaily.com/releases/2010/03/100329152516.htm
“MIT neuroscientists have shown they can influence people’s moral judgments by disrupting a specific brain region — a finding that helps reveal how the brain constructs morality.”
Which way does this cut?
Wow, quite the responses here!
All I can say is, of course the damn universe is absurd! Get over it.
We must plod along like Albert Camus suggests. At every moment we must create and re-create (in our mind’s eye) that which does not exist, and fight on.
We can give it a try. But nothing is absolute!
Accept the dark abyss! Accept the Matrix!
Accept one’s own insignificance and contradiction! (and fight it with all your might, for all it’s worth!)
And you see, in order for there to be “room for moral judgment” it is necessary that no absolute definition or standard exist. To learn, there must be freedom to adapt. And blurriness.
Besides, we are all just one drug/neurological disease or impairment away from “feeling” very differently about everything.
“Life Goes on within You and without You”
You didn’t exist before. And soon you will cease to.
Life goes onward.
Your very thought this moment didn’t exist a second ago. And it too will cease a moment later.
But your heart keeps beating… until it doesn’t.
Get over it.
Anyway, I think this is why Aristotle emphasizes “Politics” as so important.
Cheers all!
@DamnYankee
I still haven’t seen anyone rebut his contention that moral actions are almost always geared toward creating a state of pleasant consciousness free from suffering. Now, you might say that premise is too vague, but it is a premise on which we basically all agree.
Far too vague: you can’t sensibly measure it or test for it except, perhaps, as an average for society.
What on earth does “pleasant consciousness” actually mean? It sounds like new-age woo-speak.
Also, suffering isn’t always bad in the long run. No pain no gain.
Now if you claimed that it was personal maximisation of progeny, that could be argued for.
Personal power and wealth could also be argued for.
For either of these to come about someone else may have to do worse.
Who wins and who chooses who loses?
Mr Harris needs to state a measurable thing, explain it clearly (no weasel words or waffle), and give a reasonable argument for it. Then his “morals” might be considered.
“Pleasant consciousness, free from suffering”?! Obviously written by someone who has never had children. Having a child is a journey FILLED with suffering, both physical, in the case of the mother, and emotional, in everyone’s case. What rational reason is there for having children? You don’t really need them to take care of you when you get older any more, and most rational, scientific minds would argue that the world is overpopulated anyway. They break your heart, they exhaust you physically, emotionally, and financially. They suck your resources, they put themselves and you in danger… I would argue that having children is utterly irrational, and violates most every rational definition of moral utility. And yet most of us not only devote our LIVES to our children, but usually find religion in the process. You think there are no atheists in foxholes? There are no atheists with ten-year-old kids. How to explain that?
I posted this on Sam’s site:
I find the positions of Sean Carroll and Russell Blackford, both of whom I admire greatly, to be suffering from a kind of Gödelian vertigo when we come to this is/ought nonsense.
Gödel showed that even in the pristine world of mathematics, non-trivial systems can never be both complete and logically consistent. This did not render the enterprise of mathematics pointless. Enormous amounts of mathematical knowledge pour daily from those industrious minds.
As Sam has repeatedly pointed out, splitting philosophical hairs may be an entertaining occupation in the Ivory Towers but if we are to survive contact with reality, humanity must converge on rules for living together that attempt to maximize useful metrics for the health of societies. This must become a high priority SCIENTIFIC enterprise if any headway is to be made.
When I read quotes like “I’ve never yet seen an argument that shows that psychopaths are necessarily mistaken about some fact about the world” (Blackford) or “Why should we value human well-being?” (Carroll) from very smart and admirable minds I begin to despair for the world.
Kudos to Sam for shining some light on these matters.
Your horror towards those quotes is unfounded and you should think their words through before you start to pass around insulting non-sequiturs about ivory towers and Gödel and such.
From another thread in another forum: “No matter, never mind.” –Gary Snyder, Turtle Island
Go outside, sit under a tree, and ask yourself if either of the two arguments offered above are relevant.
And by the way: http://www.huffingtonpost.com/sam-harris/in-defense-of-torture_b_8993.html
shock of egos…
the need to be proven right… humm…
I wonder if and how this will factor into the discussion:
http://blogs.discovermagazine.com/80beats/2010/03/30/magnetic-zaps-to-the-brain-can-alter-peoples-moral-judgments/
Imagine for a moment that Harris is right: that we live in a world with some real, knowable moral answers, but they’re obscured by a position usually condescendingly called moral relativism (don’t worry about why this is).
Now imagine we’re in Carroll’s world, where moral certainty is unobtainable. Forget the chains of reasoning and ask: would there be any observations we could make of actual human behavior that would suggest that one or the other view is correct?