You Can’t Derive Ought from Is

(Cross-posted at NPR’s 13.7: Cosmos and Culture.)

Remember when, inspired by Sam Harris’s TED talk, we debated whether you could derive “ought” (morality) from “is” (science)? That was fun. But both my original post and the followup were more or less dashed off, and I never did give a careful explanation of why I didn’t think it was possible. So once more into the breach, what do you say? (See also Harris’s response, and his FAQ. On the other side, see Fionn’s comment at Project Reason, Jim at Apple Eaters, and Joshua Rosenau.)

I’m going to give the basic argument first, then litter the bottom of the post with various disclaimers and elaborations. And I want to start with a hopefully non-controversial statement about what science is. Namely: science deals with empirical reality — with what happens in the world. (I.e. what “is.”) Two scientific theories may disagree in some way — “the observable universe began in a hot, dense state about 14 billion years ago” vs. “the universe has always existed at more or less the present temperature and density.” Whenever that happens, we can always imagine some sort of experiment or observation that would let us decide which one is right. The observation might be difficult or even impossible to carry out, but we can always imagine what it would entail. (Statements about the contents of the Great Library of Alexandria are perfectly empirical, even if we can’t actually go back in time to look at them.) If you have a dispute that cannot in principle be decided by recourse to observable facts about the world, your dispute is not one of science.

With that in mind, let’s think about morality. What would it mean to have a science of morality? I think it would have to look something like this:

Human beings seek to maximize something we choose to call “well-being” (although it might be called “utility” or “happiness” or “flourishing” or something else). The amount of well-being in a single person is a function of what is happening in that person’s brain, or at least in their body as a whole. That function can in principle be empirically measured. The total amount of well-being is a function of what happens in all of the human brains in the world, which again can in principle be measured. The job of morality is to specify what that function is, measure it, and derive conditions in the world under which it is maximized.

All this talk of maximizing functions isn’t meant to lampoon the project of grounding morality on science; it’s simply taking it seriously. Casting morality as a maximization problem might seem overly restrictive at first glance, but the procedure can potentially account for a wide variety of approaches. A libertarian might want to maximize a feeling of personal freedom, while a traditional utilitarian might want to maximize some version of happiness. The point is simply that the goal of morality should be to create certain conditions that are, in principle, directly measurable by empirical means. (If that’s not the point, it’s not science.)
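To see the structure of this hypothetical program laid out explicitly, here is a minimal sketch in Python. Everything in it is made up: the toy “well-being” function stands in for whatever empirical measurement of brain or bodily states the program assumes exists, and the plain sum stands in for whatever aggregation rule it settles on. The point is only the shape of the procedure: measure each person, aggregate, and search for the achievable configuration that maximizes the total.

```python
from itertools import product

def well_being(person_state: dict) -> float:
    # Toy stand-in: in the imagined science of morality, this would be an
    # empirically measured function of a person's brain/body state.
    return person_state["health"] + person_state["happiness"]

def total_well_being(world: list) -> float:
    # One possible aggregation rule (a plain sum); choosing the rule is
    # itself part of the problem, as discussed below.
    return sum(well_being(person) for person in world)

# "Morality as maximization": search a (tiny, artificial) space of
# achievable worlds for the configuration with the highest total.
candidate_worlds = [
    [{"health": h, "happiness": m} for _ in range(3)]
    for h, m in product(range(3), range(3))
]
best_world = max(candidate_worlds, key=total_well_being)
print(best_world, total_well_being(best_world))
```

Each step of that sketch is exactly where the trouble lies: what goes into the well-being function, whether maximizing it is the right goal at all, and how to aggregate it across people.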

Nevertheless, I want to argue that this program is simply not possible. I’m not saying it would be difficult — I’m saying it’s impossible in principle. Morality is not part of science, however much we would like it to be. There are a large number of arguments one could advance in support of this claim, but I’ll stick to three.

1. There’s no single definition of well-being.

People disagree about what really constitutes “well-being” (or whatever it is you think they should be maximizing). This is so perfectly obvious that it’s hard to know how to argue for it. Anyone who wants to argue that we can ground morality on a scientific basis has to jump through some hoops.

First, there are people who aren’t that interested in universal well-being at all. There are serial killers, and sociopaths, and racial supremacists. We don’t need to go to extremes, but the extremes certainly exist. The natural response is to simply separate out such people; “we need not worry about them,” in Harris’s formulation. Surely all right-thinking people agree on the primacy of well-being. But how do we draw the line between right-thinkers and the rest? Where precisely do we draw the line, in terms of measurable quantities? And why there? On which side of the line do we place people who believe that it’s right to torture prisoners for the greater good, or who cherish the rituals of fraternity hazing? Most particularly, what experiment can we imagine doing that tells us where to draw the line?

More importantly, it’s equally obvious that even right-thinking people don’t really agree about well-being, or how to maximize it. Here, the response is apparently that most people are simply confused (which is on the face of it perfectly plausible). Deep down they all want the same thing, but they misunderstand how to get there; hippies who believe in giving peace a chance and stern parents who believe in corporal punishment for their kids all want to maximize human flourishing, they simply haven’t been given the proper scientific resources for attaining that goal.

While I’m happy to admit that people are morally confused, I see no evidence whatsoever that they all ultimately want the same thing. The position doesn’t even seem coherent. Is it a priori necessary that people ultimately have the same idea about human well-being, or is it a contingent truth about actual human beings? Can we not even imagine people with fundamentally incompatible views of the good? (I think I can.) And if we can, what is the reason for the cosmic accident that we all happen to agree? And if that happy cosmic accident exists, it’s still merely an empirical fact; by itself, the existence of universal agreement on what is good doesn’t necessarily imply that it is good. We could all be mistaken, after all.

In the real world, right-thinking people have a lot of overlap in how they think of well-being. But the overlap isn’t exact, nor is the lack of agreement wholly a matter of misunderstanding. When two people have different views about what constitutes real well-being, there is no experiment we can imagine doing that would prove one of them to be wrong. It doesn’t mean that moral conversation is impossible, just that it’s not science.

2. It’s not self-evident that maximizing well-being, however defined, is the proper goal of morality.

Maximizing a hypothetical well-being function is an effective way of thinking about many possible approaches to morality. But not every possible approach. In particular, it’s a manifestly consequentialist idea — what matters is the outcome, in terms of particular mental states of conscious beings. There are certainly non-consequentialist ways of approaching morality; in deontological theories, the moral good inheres in actions themselves, not in their ultimate consequences. Now, you may think that you have good arguments in favor of consequentialism. But are those truly empirical arguments? You’re going to get bored of me asking this, but: what is the experiment I could do that would distinguish which was true, consequentialism or deontological ethics?

The emphasis on the mental states of conscious beings, while seemingly natural, opens up many cans of worms that moral philosophers have tussled with for centuries. Imagine that we are able to quantify precisely some particular mental state that corresponds to a high level of well-being; the exact configuration of neuronal activity in which someone is healthy, in love, and enjoying a hot-fudge sundae. Clearly achieving such a state is a moral good. Now imagine that we achieve it by drugging a person so that they are unconscious, and then manipulating their central nervous system at a neuron-by-neuron level, until they share exactly the mental state of the conscious person in those conditions. Is that an equal moral good to the conditions in which they actually are healthy and in love etc.? If we make everyone happy by means of drugs or hypnosis or direct electronic stimulation of their pleasure centers, have we achieved moral perfection? If not, then clearly our definition of “well-being” is not simply a function of conscious mental states. And if not, what is it?

3. There’s no simple way to aggregate well-being over different individuals.

The big problems of morality, to state the obvious, come about because the interests of different individuals come into conflict. Even if we somehow agreed perfectly on what constituted the well-being of a single individual — or, more properly, even if we somehow “objectively measured” well-being, whatever that is supposed to mean — it would generically be the case that no achievable configuration of the world provided perfect happiness for everyone. People will typically have to sacrifice for the good of others; by paying taxes, if nothing else.

So how are we to decide how to balance one person’s well-being against another’s? To do this scientifically, we need to be able to make sense of statements like “this person’s well-being is precisely 0.762 times the well-being of that person.” What is that supposed to mean? Do we measure well-being on a linear scale, or is it logarithmic? Do we simply add up the well-beings of every individual person, or do we take the average? And would that be the arithmetic mean, or the geometric mean? Does a larger number of individuals, each with the same well-being, add up to greater well-being overall? Who counts as an individual? Do embryos? What about dolphins? Artificially intelligent robots?

These may sound like silly questions, but they’re necessary ones if we’re supposed to take morality-as-science seriously. The easy questions of morality are easy, at least among groups of people who start from similar moral grounds; but it’s the hard ones that matter. This isn’t a matter of principle vs. practice; these questions don’t have single correct answers, even in principle. If there is no way in principle to calculate precisely how much well-being one person should be expected to sacrifice for the greater well-being of the community, then what you’re doing isn’t science. And if you do come up with an algorithm, and I come up with a slightly different one, what’s the experiment we’re going to do to decide which of our aggregate well-being functions correctly describes the world? That’s the real question for attempts to found morality on science, but it’s an utterly rhetorical one; there are no such experiments.
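To make the aggregation worry concrete, here is a toy calculation in Python. The numbers are completely arbitrary stand-ins for “objectively measured” individual well-being; the only point is that the same hypothetical populations come out ranked differently depending on whether we sum, average, or take a geometric mean.

```python
from math import prod

# Hypothetical distributions of individual well-being (arbitrary units).
world_a = [5.0, 5.0, 5.0, 5.0]             # four people, modest and equal
world_b = [9.0, 9.0, 9.0, 0.5]             # three flourishing, one badly off
world_c = [4.0, 4.0, 4.0, 4.0, 4.0, 4.0]   # six people, each slightly worse off

def total(w):     return sum(w)                   # add up everyone's well-being
def average(w):   return sum(w) / len(w)          # take the mean instead
def geometric(w): return prod(w) ** (1 / len(w))  # heavily penalizes inequality

worlds = {"A": world_a, "B": world_b, "C": world_c}
for name, rule in [("sum", total), ("average", average), ("geometric mean", geometric)]:
    best = max(worlds, key=lambda k: rule(worlds[k]))
    scores = ", ".join(f"{k}={rule(w):.2f}" for k, w in worlds.items())
    print(f"{name}: {scores} -> ranks world {best} first")
# The sum prefers B (and ranks C above A); the average prefers B but ranks A
# above C; the geometric mean prefers A.
```

Different aggregation rules, different verdicts; and the choice among them is precisely the kind of question no experiment settles.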

Those are my personal reasons for thinking that you can’t derive ought from is. The perceptive reader will notice that it’s really just one reason over and over again — there is no way to answer moral questions by doing experiments, even in principle.

Now to the disclaimers. They’re especially necessary because I suspect there’s no practical difference between the way that people on either side of this debate actually think about morality. The disagreement is all about deep philosophical foundations. Indeed, as I said in my first post, the whole debate is somewhat distressing, as we could be engaged in an interesting and fruitful discussion about how scientific methods could help us with our moral judgments, if we hadn’t been distracted by the misguided attempt to found moral judgments on science. It’s a subtle distinction, but this is a subtle game.

First: it would be wonderful if it were true. I’m not opposed to founding morality on science as a matter of personal preference; I mean, how awesome would that be? Opening up an entirely new area of scientific endeavor in the cause of making the world a better place. I’d be all for that. Of course, that’s one reason to be especially skeptical of the idea; we should always subject those claims that we want to be true to the highest standards of scrutiny. In this case, I think it falls far short.

Second: science will play a crucial role in understanding morality. The reality is that many of us do share some broad-brush ideas about what constitutes the good, and how to go about achieving it. The idea that we need to think hard about what that means, and in particular how it relates to the extraordinarily promising field of neuroscience, is absolutely correct. But it’s a role, not a foundation. Those of us who deny that you can derive “ought” from “is” aren’t anti-science; we just want to take science seriously, and not bend its definition beyond all recognition.

Third: morality is still possible. Some of the motivation for trying to ground morality on science seems to be the old canard about moral relativism: “If moral judgments aren’t objective, you can’t condemn Hitler or the Taliban!” Ironically, this is something of a holdover from a pre-scientific worldview, when religion was typically used as a basis for morality. The idea is that a moral judgment simply doesn’t exist unless it’s somehow grounded in something out there, either in the natural world or a supernatural world. But that’s simply not right. In the real world, we have moral feelings, and we try to make sense of them. They might not be “true” or “false” in the sense that scientific theories are true or false, but we have them. If there’s someone who doesn’t share them (and there is!), we can’t convince them that they are wrong by doing an experiment. But we can talk to them and try to find points of agreement and consensus, and act accordingly. Moral relativism doesn’t imply moral quietism. And even if it did (it doesn’t), that wouldn’t affect whether or not it was true.

And finally: pointing out that people disagree about morality is not analogous to the fact that some people are radical epistemic skeptics who don’t agree with ordinary science. That’s mixing levels of description. It is true that the tools of science cannot be used to change the mind of a committed solipsist who believes they are a brain in a vat, manipulated by an evil demon; yet, those of us who accept the presuppositions of empirical science are able to make progress. But here we are concerned only with people who have agreed to buy into all the epistemic assumptions of reality-based science — they still disagree about morality. That’s the problem. If the project of deriving ought from is were realistic, disagreements about morality would be precisely analogous to disagreements about the state of the universe fourteen billion years ago. There would be things we could imagine observing about the universe that would enable us to decide which position was right. But as far as morality is concerned, there aren’t.

All this debate is going to seem enormously boring to many people, especially as the ultimate pragmatic difference seems to be found entirely in people’s internal justifications for the moral stances they end up defending, rather than what those stances actually are. Hopefully those people haven’t read nearly this far. To the rest of us, it’s a crucially important issue; justifications matter! But at least we can agree that the discussion is well worth having. And it’s sure to continue.

83 thoughts on “You Can’t Derive Ought from Is”

  1. Well, we have to agree on our assumptions if we want to prove *any* truth, whether it’s a truth about morality or Euclidean geometry or whatever. And science too is based on assumptions, albeit fairly minimal ones (in the sense that almost everyone believes them) like “inductive reasoning is valid”. It seems to me that with morality the problem is just that there’s a lot of disagreement about what the *right* assumptions are… maybe that’s your point.

  2. You can actually go a little further with this claim:
    “3. There’s no simple way to aggregate well-being over different individuals.”
    and say that there is no way, simple or not, to aggregate well-being. Arrow’s Impossibility Theorem proves that, aside from certain trivial situations, there is no way to assemble multiple people’s (presumably coherent) moral preferences into a coherent societal moral preference.
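    A concrete warm-up to that kind of result, with made-up rankings: pairwise majority voting over three perfectly coherent individual orderings already produces a cyclic “societal preference” (the classic Condorcet cycle that Arrow-style theorems generalize). A quick Python sketch:

    ```python
    from itertools import combinations

    # Three voters, each with a perfectly transitive ranking of options A, B, C.
    rankings = {
        "voter1": ["A", "B", "C"],
        "voter2": ["B", "C", "A"],
        "voter3": ["C", "A", "B"],
    }

    def prefers(ranking, x, y):
        return ranking.index(x) < ranking.index(y)

    # Pairwise majority vote between every pair of options.
    for x, y in combinations("ABC", 2):
        votes_for_x = sum(prefers(r, x, y) for r in rankings.values())
        winner, loser = (x, y) if votes_for_x >= 2 else (y, x)
        print(f"majority prefers {winner} over {loser}")
    # Prints: A over B, C over A, B over C -- the group's "preference" is a
    # cycle, even though every individual ranking is perfectly coherent.
    ```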

  3. I’m not convinced that the parallel arguments about epistemic norms are at a different level of description–your argument shows that they are distinct, but not that they aren’t analogous. Epistemic norms, like morality, are associated with sciences that are far, far less mature than the physical sciences–the social and cognitive sciences–and the project of naturalizing epistemology has backed away from Quine’s proposal of using science as a replacement for the normative philosophical component, as opposed to the still-live and quite fruitful proposal of using it as a supplement. That strikes me as quite analogous to the situation in morality. There are also still relativists in both domains, and those who argue for moral realism and moral progress as well as those who argue against scientific realism and scientific progress. Epistemic norms for science, like moral norms, do not have universally agreed-upon goals–there’s maximization of truth (itself a concept with multiple competing accounts), unification of mathematically elegant theories, finding instrumentally successful (predictive) theories, and gaining evidentially probable explanations, for example.

  4. @Colin: You’ve misread Arrow’s Impossibility Theorem. It states that no system can aggregate individual preference orderings into a social preference ordering that meets certain reasonable criteria. It does not rule out interpersonal comparisons of utility; it just means that we could not do so through a democratic system in which people give rank-ordered preferences. If, hypothetically, a central planner could have access to each individual’s utility functions and weight them against each other, the social utility function would simply be the sum of weighted individual utilities. Such a social utility function would meet ALL of Arrow’s criteria as long as each individual’s utility is given some positive weight.

    @Sean: You’ve failed to demonstrate that ought can be derived from is. Nowhere in your article do you actually derive OUGHT. The fact that individuals attempt to maximize their own utility (whether true or false) has no normative implications. It is a statement of what is. So to derive anything normative from that is begging the question.

  5. I think very few people would recognize the above definition of morality. I don’t know about deriving “ought” from “is”; I find the notion itself incongruous, but to even consider the question you have to understand the “is” of “ought,” i.e., answer the question: what is morality? All indications are there is a distinct cognitive process for making moral judgments, as one would expect for a social animal that had to get along in groups long before formal systems of ethics came about. Pinker wrote a nice NYT Mag article on the subject a while back (link). I tend to call the emotive/instinctual responses of a person to questions of conduct “morality,” reserving “ethics” for the theoretical systems constructed to explain the former. Since one’s emotional responses don’t follow logical rules, much ink has been wasted trying to make a system that matches them. What you state above is basically that an ethics built on a vague notion of maximizing utility will be an inconsistent mess, which is no surprise.

  6. I think you’re awfully close to jumping the shark. “These may sound like silly questions, but they’re necessary ones if we’re supposed to take morality-as-science seriously.” I think they are far from necessary. The reduction of the meaning of human existence to a single variable, well-being, is remarkable. It’s one way to go, just as the reduction of the meaning of human existence to how much money you have in the bank is a way to go. You don’t mention whether the dynamics of well-being are linear or nonlinear, but surely it’s “necessary” to discuss the dynamics?
    Surely, please, we can retain a slightly higher-dimensional discussion? Health and happiness, perhaps? Do we really have to decide what single-variable function of health and of happiness we want to optimize before we start? What of the complexities of measurements that are acceptable to behaviorists, instead of single, almost self-assessments of internal state? Is it “necessary” that we not consider those? It shows that I’m no psychologist, and the mathematician has taken over in my simplistic identification of the number of degrees of freedom in your model as “the” problem, but pl-ease.

  7. amdahl: Not clear to me why you thought Sean was trying to derive ought from is, since his main point in the post is that it’s not possible.

  8. I agree that a science of morality is possible, through neuroscience, psychology, and evo-devo biology, but philosophy too has a place in this basic template for a 21st-century science. Well-being is a great metaphor for the possible emergence of a Being of human qualities.

  9. You know, I think you’ve actually won me over with this. Your first point is the most compelling one, from my pov. If I honestly evaluate my life, I think the most instructive and long-term beneficial events have been the most horrible and painful ones, from an “absolute” sense. The stuff that, if I were a social moralist, I would by definition be attempting to minimize in the population at large. I don’t know, now I’m worrying I’m opening a whole other can of worms here, but something seemed to click for a minute anyway. FWIW.

  10. The problem with Dr. Carroll’s argument is that it does not distinguish “what brings well-being” from “what people think brings well-being”. These are not necessarily identical. The fact that people do not agree on the definition of well-being, or on whether well-being is a worthy goal, does not show that no such definition exists or that well-being is not a worthy goal. People disagree about evolution; does this mean that evolution is somehow invalid?

    As Carrier and Boyd have demonstrated, psychopaths are largely irrelevant to moral normativity. If a horse is one day born with two heads due to some mutations, does this mean that it is wrong to say that horses in general have one head? Morality is more like biology and less like mathematics. There are certain normative procedures for growing corn, but given some bizarre weather or climate conditions, these might not apply; this does not mean that they suddenly become useless or invalid.

    Lastly, epistemic justifications are in some sense identical to moral justifications. Imagine the following conversation between a biologist (B) and a creationist (C).

    C: Is there any reason why I ought to accept evolution?
    B: Yes, there are tons and tons of scientific evidence!
    C: So you are saying that I ought to accept evolution (“ought”) because of the evidence (“is”)?
    B: Yes.
    C: But now you are trying to derive an ought from an is! You cannot do that, says Hume.
    B: Err…
    C: So, indeed, there is no reason why I ought to accept evolution!

    *the creationist smugly walks away*

    An “epistemic ought” has the same functional outcome as a “moral ought”. Epistemic oughts only make sense if we agree to a number of presuppositions, such as “truth is better than falsehood”, “reason is more virtuous than faith”, “an outside world exists” etc. Without these, science shatters.

  11. This is far outside my normal realm of thought, so forgive what may be a naive question. But is it really not possible to aggregate the “well-being” — assuming we can define it, which I am unsure of — over individuals? There are many emergent properties in the real world that pop out of aggregated (and sometimes random) individual behavior: sand flowing down a dune, for example, or radioactive decay of a large sample over a long period of time.

    Sean, the next time we meet, I suspect we’ll have an interesting discussion.

  12. Not directly related, but there have been some interesting studies in which cognitive scientists have posed various applied morality questions (do you save 10 people or one? what if the one is a baby? do you still sacrifice the one if you have to physically push him into the path of the train to stop it from hitting the 10? etc) to people while taking images of their brain activity. I don’t recall the specific results, so I’ll have to see if I can dig up the papers or some of the coverage about them.

    I don’t know if any of the studies checked variations between different cultures or not, but I think testing like that might help scientifically evaluate moral issues (especially regarding cultural variances vs inherent morality questions.)

  13. There is an added complexity to all this too — determining what’s best for our collective “well-being” today versus what’s best for our collective well-being in the future (and not just in the near future, but a future only the descendants of our grandchildren will see).

    Given that we value the well-being of our living descendants highly, much of that well-being is inextricably tied in with our own well-being anyway, but the further into the future we go, the more tenuous that link becomes. And thus what may seem good and moral today (in terms of well-being) may not be considered so when looked back on from 100 years into the future.

    For example, if the worst predictions of global warming turn out to be true then maximizing our collective well-being for the present generation could be in direct and devastating conflict with the well-being of the third or fourth generation down the road. Now, if we only need to make small sacrifices today to avert the worst effects of global warming in the future, then the knowledge that we have done something to help the families of our great-grandchildren is probably enough to offset any collective loss of well-being felt from those sacrifices. But what if it was determined that the only way to stop a global catastrophe affecting billions of lives 100 years from now was to radically change our way of life today? (e.g. banning gas-driven cars, increasing the tax on fossil fuels many times over, restricting air travel, etc.).

    I doubt anywhere near enough people would be able to factor in the well-being of future generations in that case (until the crisis is almost upon us), and yet that future generation, say, 100 years from now—the one afflicted by the disastrous changes in Earth’s climate—would almost certainly look back on our generation as one that was deeply selfish and immoral.

    It is true that human beings can and do make great sacrifices (even unto death) when they believe they are doing it for the greater good, but that is almost always when the threat to ourselves and our loved ones is imminent. Such considerations of morality/well-being tend to break down when that’s not the case.

    (Note: Please don’t make my example an excuse for debating GW on this thread — it is just an illustration, nothing more.)

  14. Once you have put any degree of thought into this issue, it’s easy to see why so many people like the idea of a supreme being dictating a series of moral absolutes for everyone to follow. It actually doesn’t make the reality of moral choices and their impact any less messy in the end, but it sure does take a lot of the worry and complexity out of it.

  15. But is it really not possible to aggregate the “well-being” — assuming we can define it, which I am unsure of — over individuals?

    I would say that as the number of people aggregated increases, the overall ‘value’ of each individual’s well-being would necessarily decline, unless it is a purely neuro-chemical or otherwise physical datum (such as could be manipulated via plugging everyone into a Matrix-esque system which maximizes the physical chemical processes). No two people have identical opinions, so there is always some degree to which they would conflict, even if only minimally, such as two otherwise identical people except one prefers A-Rod to Jeter. So instead of 100% well-being, they are at 99.999999999%. Add in a third otherwise identical person who favors some other Yankee and then the well-being maximum has been reduced to 99.999999998%.

  16. It’s a subtle distinction, but this is a subtle game.

    Exactly.

    My approach is far less subtle. First ‘science project’ I would suggest is to determine whether the human race is the greatest cause of suffering.

    ‘To be or not to be’ may or may not be an immature question, but “Should WE be?” is the mature collective question that the human race is almost certainly not ready to face.

  17. I agree with several other commenters here that the “is/ought” divide is at least partially synthetic — at the very least because as Emil Karlsson points out, deciding whether or not to believe a particular proposition is precisely deriving an ought (“Ought I believe?”) from an is (whatever the reasons given).

    That’s not to say that I think we can scientifically derive some optimal moral code, or even an optimal system of government. But I don’t think this is really because of some a priori philosophical distinction between facts and values so much as the fact that any moral code or system of government exists to discourage people from pursuing their own best interests, or what they perceive to be so at the time.

    What I’m saying is that the exact same problem noted by Carroll and others about finding an optimal set of values extends to science as well: one has to presuppose an ontology amenable to the explanations engaged in by scientists. Whether we’re dealing with epistemology or ethics, is or ought, the truth of any proposition is dependent upon exactly those presuppositions which cannot be proven. So the necessity of presuppositions is not what causes the is/ought divide — it applies equally in both cases.

    The only reason epistemological propositions seem to be on surer footing than ethical propositions is that within science, everyone has agreed on the same ontology. In discussing ethics, there is no comparable near-universal set of presuppositions. That is, the distinction is not an a priori philosophical one, but a practical, sociological one. If everyone could agree on definitions and measurements for human well-being the same way they can about, say, an electron, Harris would be absolutely right that we could derive an “ought” from an “is.” (Conversely, if there was a sizeable school of physicists who disbelieved in the notion of matter-waves, what “is” an electron wouldn’t be so clear as it is for us.)

    Whether the outcome of such a program would actually be better than what we have is another question entirely.

  18. Matt Tarditti

    As is often the case, I think this one comes down to semantics. If you define science as something based in the empirical (and ONLY the empirical) arena then you are of course going to have a hard time arguing that there is something empirically verifiable about morality.

    But I’m not convinced that science, the study of “is” as Sean has defined it, MUST involve empirical data. Does psychology deal with measured, independently verifiable data sets to advance its theories? Is string theory ever going to be verified through a measurable quantity? Neither of these even has an imagined quantity through which to make measurements. Nevertheless, we apparently have no problem grouping both under the larger umbrella of “science”.

    If empirical requirements are lifted from the study of morality, then I believe that we will end up looking at, primarily, the biology behind happiness as tempered by the requirements of the categorical imperative (the Golden Rule).

  19. Does psychology deal with measured, independently verifiable data sets to advance its theories?

    Yes. Perhaps you’re thinking of psychotherapy.

    Your example of string theory is a practical, not an ‘in principle’, problem. Besides which, the answer may well be yes to that too.

    @ Dan L, 18 – On your supposed equivalence of presuppositions: Yes, science proceeds on the extrapolation of the principle that knowledge of a state of affairs facilitates a response to that state of affairs. You are free to try not to gather knowledge of states of affairs, or to attempt truly ignorant/random action, at any time you like. There’s no ought to it. There is a moral argument that you should believe facts that are true, but that moral argument is not represented by EK’s trite dialogue; it is not that you should believe true things simply because they are true.

    I don’t think it is possible to get very far at all with the argument that presuppositions of science are equivalent to ethical presuppositions, unless you subscribe to the radical view that there is no ‘is’. This is a million miles away from the position of Sam Harris, of course.

  20. You say and repeat that you’re presenting arguments for why it cannot be done in principle. What I read in the meat of the paragraphs, though, are lists of questions and challenges which summarize how difficult it is for you to imagine the project being successful.

  21. Matt Tarditti

    Thank you for the response, DaveH. But to avoid a future repeat of my apparent mistakes, what is the “in principle” empirical data for psychology? I’m not trying to be facetious, but I am honestly at a loss to describe how psychology is a science, even though I believe it is a science.

  22. Dan L. – Deciding whether or not to believe a proposition is NOT deriving an ought from an is. Not in the moral sense of “ought”, which is how it’s used here. Believing is not something we do because we think we ought to morally. We say we believe things when we are firmly convinced that they are true. That has nothing to do with morality.

    It’s true we often say “I ought to believe X” – what people usually mean by this is not that we feel a moral obligation to believe X, but that we think the evidence supports X despite our inclination to disbelieve it. Generally speaking, people think it’s in their ultimate interests to believe things that are true, and generally speaking, people think evidence is ultimately more reliable than their intuitions. Hence this is a self-interested “ought”, not a moral “ought”.

    You can certainly derive self-interested “ought” from “is” if you know your preferences well, but that has nothing to do with morality and is absolutely not “ought” in the sense Sean refers to.

  23. @DaveH, 20:

    If there is a division between “is” and “ought,” it isn’t established just by baldly asserting that it is there. “Ought” is neither precisely defined nor used in some esoteric context where its meaning is clear.

    When a proposition is asserted, we need to figure out whether to believe it or not. We might rely on the say-so of friends and loved ones, the perceived ignorance or expertise of the source, etc. There is no one optimal formula; agreeing with the expert against the charlatan always brings with it the risk that the expert is wrong and the charlatan happens to be right. Often, believing a proposition despite the say-so of experts will be remembered as courageous if that proposition turns out to be true after all. Since there is no one right way of figuring out whether to believe a particular proposition or not, we need some set of criteria or heuristics.

    These will not resemble scientific laws. They are values. Not necessarily moral values (I actually think they are, but I’m not going to try to make that case). However, since they are nonetheless values, I think that “ought” applies.

    I don’t think it is possible to get very far at all with the argument that presuppositions of science are equivalent to ethical presuppositions, unless you subscribe to the radical view that there is no ‘is’. This is a million miles away from the position of Sam Harris, of course.

    Yeah, of course — I think you may have missed my point. Which was: “is” questions are just as subject to presuppositions as “ought” questions; the difference lies in the fact that the presuppositions needed for the “is” questions readily yield falsifiable propositions, which makes it easy to sort out “good” and “bad” sets of scientific presuppositions (theories, vaguely). If ethical presuppositions could yield propositions that are falsifiable in the same way as scientific presuppositions, the “is”/“ought” divide would disappear, because we would all start tossing out ethical presuppositions that were obviously invalid, and the average set of ethical presuppositions would start to show a lot less variation. Which is kind of how science works: you start with a set of theories and throw them out as their predictions are falsified.

    So I agree with Harris that the “is”/”ought” divide is synthetic, but I disagree that we can derive an optimum moral code using the methodology of the sciences. I agree with Carroll that trying to use science to derive an ethical system is a fool’s errand, but I disagree that “is” and “ought” only apply in fundamentally different contexts.

    Why? Because “is” and “ought” are linguistic conventions and have no normative power over the universe — the same problem that comes up during almost any argument with a philosopher. You can talk about necessary conditions until you’re blue in the face, but if I find a real-world counterexample (perhaps a situation in which deciding what to believe is itself an expression of moral values?) it’s the philosophical theory and not the universe itself that is in need of revision.

  24. what is the “in principle” empirical data for psychology?

    Matt, I don’t understand what you mean by that question.

    You’re not completely at a loss to describe how Psychology is (or can be) a science, because you yourself mentioned the use of measured, independently verifiable, data sets to advance theories. That’s not a complete definition, but it’s a start.
