How human beings behave is, for fairly evident reasons, a topic of intense interest to human beings. And yet, not only is there much we don’t understand about human behavior, different academic disciplines seem to have developed completely incompatible models to try to explain it. And as today’s guest Herb Gintis complains, they don’t put nearly enough effort into talking to each other to try to reconcile their views. So that what he’s here to do. Using game theory and a model of rational behavior — with an expanded notion of “rationality” that includes social as well as personally selfish interests — he thinks that we can come to an understanding that includes ideas from biology, economics, psychology, and sociology, to more accurately account for how people actually behave.
Support Mindscape on Patreon.
Herbert Gintis received his PhD in economics from Harvard University. After a long career as professor of economics at the University of Massachusetts, he is currently a professor at Central European University and an External Professor at the Santa Fe Institute. His book Schooling in Capitalist America, written with frequent collaborator Samuel Bowles, is considered a classic in educational reform. He has published books and papers on economics, game theory, sociology, evolution, and numerous other topics.
- Web site
- Santa Fe Institute page
- Google Scholar publications
- Books (Princeton University Press)
- Wikipedia
[accordion clicktoclose=”true”][accordion-item tag=”p” state=closed title=”Click to Show Episode Transcript”]Click above to close.
0:00:01.3 Sean Carroll: Hello everyone, and welcome to The Mindscape Podcast. I’m your host, Sean Carroll. And today’s episode is going to be one of the more ambitious mind-bending episodes that we get here on Mindscape. But not because we’re doing some esoteric physics or mathematics subject, we’re thinking about human beings. Today’s guest is Herbert Gintis, who originally became well known as an economist, but these days, probably better to classify him as a behavioral scientist. Because, really, Herb’s whole thing, the thing he really wants to get across is there’s something called “How human beings behave,” and we should study and develop theoretical models for that behaviour in a rigorous, quantitative, empirically based way. And then whatever we learn about how human beings behave should inform economics, but also psychology, sociology, anthropology, and so forth. These different disciplines might care about different aspects of human behaviour, but they should ultimately tell compatible stories about human behaviour, right? To me, this is just pushing all my buttons because this is a very poetic naturalist way of looking at human beings.
0:01:11.4 SC: There are different vocabularies for describing them, but they better be, at the end of the day, consistent with each other in some deep sense. So, how do you do this? Well, Herb has some ideas about how to do this. Roughly speaking, based on the idea that we need to understand the sense in which human beings are rational, there’s this whole story about rational choice theory which gets a bad name because we think we know what the word “rational” means, and it doesn’t mean that in the sense of rational choice theory. He suggests the name “beliefs and preferences” as a replacement for rational choice theory, but we’re stuck with the name. But the idea is that people have beliefs and preferences. Faced with different situations, they will act in certain ways.
0:01:54.0 SC: We can study those ways using the tools of game theory to understand why they would think that they have incentive for behaving one way rather than another. And then you can use ideas from actual, empirical, psychological studies, as well as, biology, and evolutionary psychology, to think about why people don’t maximize their one-shot return in a game they’re going to play. Rationality is not completely individualistic in these situations, it’s social rationality. So Herb wants to unify all of the human sciences. Like I said, very, very ambitious point of view. I’m not enough of an expert to judge whether he’s right, but much of what he says is completely compelling to me. And he’s also extremely entertaining and provocative while saying it. So I think this is going to be a popular episode. Buckle up, hold tight. Let’s go.
[music]
0:03:04.5 SC: Herb Gintis, welcome to the Mindscape Podcast.
0:03:06.1 Herbert Gintis: Oh, it’s great to be here.
0:03:07.9 SC: So I get the impression, and maybe this is unfair, correct me if I’m wrong, from reading your stuff and listening to you talk, that you have a certain sense of frustration about the fact that psychology and economics and sociology and anthropology are all separate disciplines rather than sort of one big discipline of human behaviour. Is that an exaggeration?
0:03:28.8 HG: I wouldn’t say exactly that, what I would say is, like in the physical sciences or the natural sciences, there are overlaps in chemistry and physics, or biology and chemistry, etcetera. But when they overlap, they agree. And if they don’t agree, then they fight about it and figure out what the truth is. But in the social sciences, they overlap, but they say totally different things. So for instance, economists, classically, would say, “People are motivated by self-interest.” Now you have to give them the proper incentives, and there’s no notion of morality. But in Sociology, they don’t even talk about incentives, they always talk about norms and violation of norms and proper behaviour and improper behaviour, etcetera, etcetera. Well, they can’t both be right. And in fact, they’re both wrong.
[chuckle]
0:04:23.0 HG: Now if… There’s some other fields, like for instance, economics and biology both use what’s called the rational actor model. Biology, you maximize fitness. In economics, you maximize something called utility. But they’re both rational actor models. Other fields, like in psychology, you can read 10 textbooks and they’ll all say, “Rational actor model’s stupid, it’s not correct. We don’t use it at all.” So, what’s going on here? It’s not science. And I attribute it to the fact that you have these developed… In the early 20th century, these feudal fiefdoms where if you’re an economist, you publish in certain journals and you talk to other economists and you don’t care what sociologists say. Nobody cares, ’cause they don’t hire you, they don’t fire you, they don’t publish you.
0:05:17.4 SC: Yeah.
0:05:18.1 HG: They don’t go to conferences with you. And so they develop in strange directions. But if you care about the truth, it’s a scandal, it’s terrible. It can’t be right to just forget about these conflicts between the way they model human behaviour in one field or another. And also in biology, there’s a tendency towards saying that everything is inclusive fitness. That is that what you do is… What an individual does is he helps himself and his nearby relatives. That’s not true for humans. It’s not true for many social species or we have very complex relationships that go way beyond kinship. So, anyway, that’s what the issue is.
0:06:06.7 SC: And I’m very much on your side here, I tend to agree. But so to play the devil’s advocate, is it conceivable that for the purposes of economic behaviour, people act one way, and for purposes of social behaviour, they act another way, and actually these are compatible secretly?
0:06:27.1 HG: No, I mean of course that’s possible but it isn’t true, people do not behave in the economy as though they are self-interested. Now when you have very complete markets you get almost self-interested behaviour but when you get interactions among people, face-to-face interactions or other social interactions even in the economy people do not behave self-interestedly, people in a firm that work together have a culture and it could be a very positive culture that promotes cooperation and openness or a very back-biting culture which inhibits cooperation. And these are economic behaviours, very important ones having to do with the organization of the labour, and similarly consumers care a lot more than just about the physical attributes or what goes into their consumption, they care about values, they care about who… For instance if you buy Nike shoes you wanna know that your Nike shoes weren’t made by slave labour, well that’s not self-interested etcetera. So and similarly sociologists just forget about the incentive side of behaviours and they partially characterise some things but not completely, so my argument is when you put them together you get much more explainable behaviour.
0:08:05.9 SC: It seems so obviously true what you’re saying, at least the part that they should sit down and hash things out. It’s a little bit shocking that they don’t or that they haven’t and I know the silo-ing of academia etcetera, I feel it myself but still one wants to say, is it really that bad? But maybe it just is.
0:08:26.4 HG: Oh yeah, absolutely it’s totally that bad, look at sociology and anthropology, what do they do? They both study human organization, they have totally different reading lists they don’t even overlap, Margaret Mead is not studied by sociologists and Talcott Parsons is not studied by anthropologists. How is it possible you’re going to have a theory, or a model of human social organization if you divide the world up into two places which have totally different theories.
0:09:02.3 SC: Yeah, I can’t argue with you there.
0:09:04.8 HG: What can I say, that’s the way it is, now I’ll tell you this, when I started out working on these issues economists would say, “Well we have to assume that people are self-interested because otherwise you could just put anything you want into their utility function and it doesn’t really explain anything.” And the answer to that… And that’s a very good objection if you all you’re doing, you say, “Okay, this guy, when he goes to the dump he always travels around by his brother’s house, it’s way out of the way but he likes to go by his brother’s house.” So I put that into his utility function, well that’s a big help I mean all you’re doing is epicycles on behaviour, the problem there… The answer to that is we now have a whole discipline called Behavioural game theory in which we take subjects in the laboratory and see how in fact they behave and infer what they value from their behaviour and that is very advanced now. There have been Nobel prizes for it and Danny Kahneman, Daniel Kahneman the psychologist, Richard Thaler, so we go into this, to the lab and then we can figure out what people value and the objection to the standard theory goes away.
0:10:29.0 SC: Well, yeah and I wanna get to the specifics of how we do that but everything you’re saying pokes at another thing that I’ve heard, a criticism of economics which is that it’s not nearly as empirical as it should be, we have a theory we like the theory and rather than testing the theory against data or experiments we just elaborate the theory more and more, that’s a cartoon but is there some validity to that kind of criticism?
0:10:55.4 HG: Well, let me divide it into two parts. First of all when you learn economic theory there are no facts, I had an amusing thing happen to me many years ago about… But when I started working on this one summer I was studying this big thick thousand-page book for introductory graduate economic students on economics, so everybody reads it around the world, and I was also reading a book on quantum mechanics, and in the quantum mechanics book I learned Black-body radiation problem, Compton scattering, Lamb effect all sorts of empirical effects that gave rise to the standard model. In the economics book there wasn’t a single empirical fact, not one. 1000 pages no facts and the fact is what the economists are trying to do is derive human behaviour from the concept of rationality, what isn’t rational to do? And you can’t do that because there’s huge numbers of variables involved and rationality is important but it doesn’t determine which one you choose. So I would say that’s right and theorists really do not like experimentalists in economics. It’s quite different from physics where there’s a deep respect for the incredible versatility and depth of knowledge of the experimenters, but not in economics.
0:12:34.1 SC: We pretend to have a rivalry between theory and experiment but in fact we both know that we need each other desperately and hope that each other are doing good things but okay.
0:12:42.5 HG: Oh my gosh yes.
0:12:43.9 SC: Yeah, exactly…
0:12:45.4 HG: It’s not like that in economics at all but now of course there is a side of… There is a… There is a branch of economics, applied economics, like applied macroeconomics, and this and that, and they use data all the time, all the time, but they usually don’t use much of theory, they put it in the computer and let the regression coefficients tell you what’s gonna happen.
0:13:11.3 SC: I would argue that’s a different mistake, but yes, that’s right so. But let’s get back to this issue of game theory and rational behaviour, I mean, in some sense, I do get the impression that you do… I mean you take the rational actor model seriously, even if you don’t think it’s the complete story, so what is the sense in which we can think about people in some sense as acting rational?
0:13:33.6 HG: In what sense can you.
0:13:34.9 SC: Yeah.
0:13:36.3 HG: Yeah, okay, the overall… My overall statement about this is very simple, there are two of the behavioural sciences that have core theories, meaning all over the world, people will learn the same thing, and that’s economics and biology, and these are the two disciplines that use the rational actor model. Psychology is all over the place, there’s always these new theories and people hate the old theories and there’s no cumulative development of theory. And similarly in anthropology, there are waves of popularity of different views, but there’s no core theory, so I argue that the rational actor model is exactly what you need in order to have a core theory of animal or human behaviour, and the disciplines that ignore that just don’t get very far, they end up being voguish and scattered all over the place. And the reason this happened was historical, namely that the rational actor model to use you have to know some math.
[chuckle]
0:14:49.1 HG: You can’t just do statistics, you really have to be able to model mathematically the way you do in physics, or chemistry or biology. So they try to avoid that and the people in it all agree that we don’t wanna use it, but the real reason I think is it’s just hard to do, it’s really hard to use the rational actor model in biology and economics, but it’s a very important part of the discipline.
0:15:16.5 SC: I was once…
0:15:17.0 HG: By the way, there’s a wonderful statement I once heard, of a psychologist said, he was giving a talk and he said, people aren’t rational… Or, I’m sorry, people aren’t logical, they’re psychological.
[chuckle]
0:15:27.4 HG: And I was like that’s, it really told the story.
0:15:31.9 SC: That sums it up, well yeah. The joke I heard was that if you don’t like math, you go into sociology and if you don’t like emotions, you go into economics. So…
0:15:39.7 HG: Exactly, exactly.
0:15:40.7 SC: There could be something there too.
0:15:43.4 HG: And you learn as a graduate student to make fun of the other discipline, you really learn that, you learn oh… And if it would be like if you were doing natural science, and someone said, “Oh, in chemistry we don’t believe in this… We don’t believe in this relativity theory, we have crystemanthism, It’s a real nice little alternative, and physicists are all crazy.” It’s something like that, but it goes on all the time in this behavioural sciences.
0:16:13.6 SC: Okay, but the rational actor model then you’re suggesting is a route to a core theory in economics and biology, and should it be even broader than that, is it the good starting point for these other ways of thinking about human beings?
0:16:32.5 HG: Yes, I mean they’re… If you look at our literature on the rational actor model, there have been dozens of criticisms of it, outside of economics and biology, and they’re all wrong, it’s hard to believe, very intelligent people say really silly things about it, mostly because they don’t understand it. But the rational actor model does have one, I think serious defect, okay? And here it is, the rational actor model is ’cause individual minds have individual, what they call subjective priors that is individual ideas about how the world works and the probabilities you attach to different behaviours. But I think in the real world, people have what we call entangled minds, that is what you believe and what you think it’s a function not of you, yourself, but of your network of entangled brains around you, the people you agree with, the people in your family, the people in your social group and this and that. So people become very, very adamant about what’s true and what’s false, and they believe it only because everybody else they know believes it. If you go to a different place, they believe completely different things, and this is an important part of rational behaviour that people form beliefs, which they validate, not against empirical evidence all the time, but simply they validate it against the beliefs of everybody else. And this is a weakness of the theory, I think for at least for humans, I don’t think it really is that important for any other species.
0:18:13.6 SC: But how do we… I just wanna hear in your words how we dismiss the most naive objections to the rational actor model, namely that people do stupid irrational things all the time.
0:18:27.0 HG: Okay, first of all, this is a very important point, it’s a good point bringing up. There’s a lot of… The word rational means all sorts of different things, but here’s what… There’s two meanings that you can give it in economics or in biology, or where the rational [0:18:45.2] ____. One is called instrumental rationality, that means you have a goal, and being rational means you’re choosing the means to get to that goal most efficiently, the fastest, the cheapest, etcetera. That’s called instrumental rationality so… But there’s another called formal rationality, an individual is formally rational, if his preferences are transitive, what that means is if you prefer A to B and B to C, you’d prefer A to C, and that’s for… That’s all there is to it. There is a little bit more to get Bayesianism into it, but very little more. It’s basically, if you prefer A to B, and prefer B to C, then you prefer A to C, and let me give you an example.
0:19:35.4 HG: You see a guy fall down in the subway, he fainted. What do you do? Well, instrumental rationality, help the guy get up, but that’s not what happens. What happens is you say, “Oh Jesus, I’m late for work, let somebody else help him up, I don’t wanna hurt my shoes. I wanna help him up, but I wanna get as far away as possible as soon as the guy gets up.” In other words, you have multiple goals, you don’t have single goals and instrumental rationality is irrelevant then. What you need is a formal rationality, and that’s what I think we use when we say this is how people behave. We don’t say they’re trying to achieve some goal, we say that they have preferences which are transitive so that we can model them using a mathematical decision theory.
0:20:27.7 SC: Right, and so it can seem to us from the outside that they are in fact being dumb or being irrational, but at the moment they are just doing what their preferences say they should do, and that’s all we need.
0:20:37.4 HG: That’s right. So for instance, if you smoke cigarettes, that’s a pretty stupid thing to do these days, but it’s not irrational. It’s… People do things all the time to hurt themselves, we know that, I do. I don’t like it, I wish I didn’t. But we all do that, and we also make what’s called performance errors, that as you try to do X, but you don’t do it right. Your logic is a little bit, it’s a complicated thing to do. So there are errors all over the place. When someone behaves crazy, you’re not sure why. Is it because they really have some underlying rational transitive behavior, or are they just momentarily out of their minds, who knows? The point is that no one’s ever developed anything alternative to the rational actor model that you can use systematically to explain behavior.
0:21:32.6 SC: But that word rational is a little bit overloaded and that gets us into trouble.
0:21:36.4 HG: I know, I tried to change it. I started saying, I called it… What did I call it? The beliefs and preferences model. [chuckle]
0:21:44.9 SC: I like that. It’s not gonna catch on, but I get it.
0:21:46.8 HG: I found an acronym for it. But people… It didn’t hold. So I say rational, and say what I mean by rational… Formal rationality, and by the way, it doesn’t mean you’re self-interested either, that’s the other thing people think if you… Rational means you only care about yourself. Well, that’s of course ridiculous if… The only people who only care about themselves are sociopaths. Chuckle] Seriously, we all care about a wide variety of things besides ourselves, except for 10% of the population that are real sociopaths.
0:22:21.4 SC: And as a minor technicality here, my impression is that if you have a consistent set of preferences, that can always be modeled as you’re trying to maximize some utility function, even if that’s not actually what you’re doing, it’s just sort of a formal equivalence, right?
0:22:36.7 HG: Right. That’s fundamental theorem of rational actor model. They talk about utility and maximizing something, but it’s all really… That’s not what’s really going on. What’s really going on is for mathematical purposes, it’s really nice to have a function you can maximize. You’re bringing the Calculus and differential equations and all that stuff.
0:23:00.9 SC: Very happy.
0:23:01.5 HG: But what people are really doing, is just having preferences once they’re transitive, and then you can build the utility function out of that.
0:23:11.3 SC: Yeah, and once you do that, that’s when game theory becomes a useful tool, maybe you should talk a little bit about game theory. Probably people know what the words mean, but does game theory really give us a useful model for how people behave? Maybe you’re saying that it does in the sense of this beliefs and preferences idea.
0:23:31.1 HG: Well, I wrote four books on games theory, so.
0:23:33.5 SC: It’s a leading question. Yeah. [chuckle]
0:23:35.9 HG: No, I think it’s the… Along with the rational actor model, it’s one of the central tools for behavioral science. Games used to be… Well, it started out with Von Neumann and Morgenstern at Princeton trying to model war games.
0:23:57.4 SC: And poker.
0:23:58.1 HG: But it’s… The major development that happened in game theory, really in biology, when Maynard Smith, John Maynard Smith and Price, developed a game theoretic model to explain fights among butterflies for mating space. And it developed into a whole theory, it’s called evolutionary game theory, which has, I think the basis for understanding the dynamics of behavior of animals and humans. So it’s used all the time in economics and biology, it’s used all the time actually in political science, although there… I hope we get time, I’ll tell you there’s some really funny things going on, applying it to things like voting behavior. But yeah, game theory is very simple. It’s decision theory with more than one decision maker in a situation where there are only certain things, the various decision makers know about what the others are doing. It’s like when you play bridge, I have my hand and you have your hand and I can signal a bit what mine is, and you can signal a bit what yours is. But basically, we don’t know what the other one’s doing. All we know is, we’re all trying to win the game.
0:25:26.3 HG: So in the game theory, you have players, it could be two, three, four, five or 10,000, and each player has a strategy set to choose from, and when each player chooses a particular strategy, there’s a particular payoff to the whole… To each person in the group, and that’s game theory. And I’ve argued and a lot of people have argued that it’s really the basic language that can be used across all behavioural disciplines, whether you’re talking about dung beetles on cow patties or you’re talking about humans voting in elections, etcetera, etcetera. It’s the same thing, it’s a game, it’s got players, got strategies, players make choices and the payoffs depend on the various choices the players have made, yeah. And that’s game theory.
0:26:19.0 SC: And I think that the one concept within game theory that will be helpful for the rest of the conversation is the equilibrium idea. Famously, there’s the Nash equilibrium, so I’ll let you say what that is, ’cause I’m sure you understand it better than I do.
0:26:31.6 HG: Okay. John Nash was this incredible, incredibly bright mathematician at Princeton. He had mental problems which really destroyed him for most of his life, but he invented, as a young, very young man, the concept of Nash equilibrium, which is the equilibrium of a game occurs when no player has an incentive to change his behaviour. So everybody then plays it, but they’re doing the best they can. Everybody’s doing best they can do, and that’s called an equilibrium because nobody has an incentive to change. Now, when you learn game theory, you learn… They tell you something like this almost always ’cause I’ve taught graduate students and I know what they have learned. They say, “Rational players choose Nash equilibria.” Well, that’s just false.
[chuckle]
0:27:33.0 HG: I’m not gonna go through it in detail, but it is just… That’s absurd. Now, there are articles in the literature that show you clearly the prerequisites for saying that players choose a Nash equilibrium and they’re extremely implausible. So then you say, “Well, why do you say it as game theory is so interesting?” Oh by the way, if you want me to give you an example, I will. Have you ever heard of flipping coins?
0:28:06.3 SC: I have. Yes.
0:28:06.8 HG: I flip a coin and you flip a coin, we have… I’m sorry, not flipping coins, flipping fingers. You put out one or two fingers, I put out one or two fingers.
0:28:16.0 SC: Oh, yeah, sure. Good.
0:28:18.0 HG: Right, if we both put out the same number of fingers, you win. If we put out different numbers of fingers, I win. Well, how do you play the game? Well, the Nash equilibrium is you play 50-50. Half the time you put out one, half the time you put out two. That’s Nash equilibrium. Problem, if you are playing 50-50, it doesn’t matter what I do. I can play all heads or all tails because it doesn’t matter what I do. Moreover, you know that it doesn’t matter what I do, so there’s no reason for you to do 50-50 either. You get it?
[chuckle]
0:28:57.3 SC: Yes.
0:28:57.8 HG: When I say all this to my students, they complain to the Dean. [laughter] No, I’m kidding. They don’t complain to the Dean. But here’s the point. Now, here is a very important point, I wrote a whole book on this called Game Theory Evolving. In an evolutionary game, where people play the game over and over and the people who do well get to reproduce more like in the Darwinian evolutionary sense, then the only equilibria of that dynamical system is a Nash equilibrium of the underlying game. And that justifies using game theory in situations where you’ve had social evolution, because social evolution will favour people who choose strategies which in the long run lead to a Nash equilibrium in the system. Okay, so for instance, if you have a whole bunch of people playing one finger, two finger, throwing fingers, they will evolve towards 50-50.
0:30:09.8 SC: Yeah.
0:30:10.0 HG: In any one instance, it doesn’t matter what they do, but the people who play 50-50 in the long run do better. So they evolve.
0:30:19.9 SC: And some of the subtleties here are because in various games, various different kinds of payoffs you could get for making choices, there’s more than one Nash equilibrium and you could get stuck because the definition of Nash equilibrium is that no one can unilaterally improve their life by changing, but if you cooperate, if you’re not competing, then in principle, everybody could get a higher payoff, yeah?
0:30:44.1 HG: Well, no where you said… ‘Cause it’s two different things. First of all, there can be multiple equilibria. Now, some Nash equilibria will never be attained in an evolutionary sense. They’re just evolutionarily irrelevant. Other times, there’s multiple equilibria, and they have basins of attraction, and if you get into one basin of attraction, you go to one equilibrium, you get into a different one, you go to another one. So it’s a basic dynamical systems theory how that might work out. The second thing is, I don’t know, there’s not many situations where universal cooperation is a Nash equilibrium unless there are no errors. If people make no errors, then you can sometimes support… You can support a complete cooperation equilibrium in the game just by making a new game and saying, “Look, we’ll all cooperate. If it ever one person does not cooperate, then we never cooperate again forever.” That’s called a trigger strategy. And that’s fine until someone gets sick and they don’t come to work one day and you say, “Okay, we can’t cooperate anymore. He didn’t contribute.” So if there are errors, then you have big complications and the models become much more sophisticated and cooperation is never universal.
0:32:17.5 SC: Okay, fair enough so… But then there is this new idea that I found in your book, in one of your books, which is that of a Kantian equilibrium after Immanuel Kant in the categorical imperative the…
0:32:29.8 HG: Oh, yeah. Oh yeah.
0:32:31.3 SC: So explain that to us, I don’t think it’s original with you, but it certainly changed by speaking about this…
0:32:36.6 HG: No, in fact, the idea of a Kantian equilibrium is as old as Kant. I mean the categorical imperative really is a game theoretic concept which says, I will choose a strategy, which if everybody chose we would all do really well or we would do best, and if everybody else chooses that strategy, then we have a Kantian equilibrium. Now, the problem with that is that may be true, but even if… It still could be that I do better if I violate the cooperation. So for instance, I go to sleep rather than go hunting with the other guys, so a Kantian equilibrium is a moral equilibrium usually. It says, to behave morally, I should do X, Y, or Z. And I was always very suspicious of that idea, but I decided a few years ago, that Kant was, he’s right on target in understanding human morality.
0:33:42.7 HG: For instance, let me give you an example, this is something which is, it’s so pervasive that it’s stunningly mind-blowing. If you’re rational, according to the rational actor model you don’t vote, you’ll never vote. Why? Well, it does… You’d say, “Well, so if you care about social issues, you’ll vote,” another reason you might not vote is ’cause you’re self-interested, no. It doesn’t matter what you care about, no vote in a large election an election more than 40,000 voters, no vote has ever been determined by one person. Okay? So no individual has ever determined the outcome of one of these elections, so when you go to vote, you’re not… The fact is you’re not gonna change what happens no matter what you do, so why vote? Moreover, why even read the newspaper about politics, it’s a waste of time. You can’t change the outcome of an election, but people vote and they care about the election, and in fact, I’ve done this, I’ve done this, stand in line while you vote, this is before the pandemic…
0:34:57.3 SC: Oh yeah.
0:34:57.7 HG: Stand in line and say to the other person, “Why are you here?” They say, “What do you mean why I’m here ’cause I’m here, ’cause I want John Smith to win.” And I say, “Well, yeah, but do you think your vote’s gonna change the outcome?” “Oh no of course not.” “Well then, why vote?” And they say, “There’s no good answer.” If everybody thought that way, we can’t have a democracy.
0:35:24.0 SC: It’s the Kantian move.
0:35:25.8 HG: The Kantian move exactly, it’s not just some highfalutin philosopher, it’s deep in our souls that this notion that we should behave in ways which if everybody behave, we’d all be better off. And this is the deepest moral principle of human life, I believe. And people obey… They obey it so much that they don’t even know they’re obeying it. If you ask someone, why are you standing in line? They make it sound like they’re gonna change the election.
0:35:57.3 SC: Right.
0:35:58.7 HG: Right? So all I’m saying is that we have these deep… People by moral choices, which go way beyond the rational actor model towards what I call a social rationality, which no one’s really explored that much except the part that I’ve been talking about voting in elections, but the whole idea that people are self-interested if they vote in an election is just inconsistent. You may vote for your social group, if you’re a union member, you may say, “Okay, I vote the union,” that doesn’t make me selfish, if I were selfish, I wouldn’t vote at all. Anyway, you get the idea.
0:36:42.7 SC: I do, and you mentioned something very provocative along the way there, which is that people do it, and you just gave a sort of justification for why they do it, but the people themselves aren’t necessarily good at giving that justification I mean I’ve heard…
0:36:57.5 HG: Are not? Excuse me.
0:37:00.3 SC: Are not necessarily good at giving the correct reason why they are doing this thing, that they’re doing. [chuckle]
0:37:04.7 HG: Oh no. Of course not. Of course not. No.
0:37:07.8 SC: Or even analysts online or in newspapers when they say, “Go out and vote,” they don’t really give you the right reason to do it.
0:37:16.3 HG: Well, they can… It’s there’s no right reason. There’s an unarticulated notion of social rationality, but it’s so sophisticated that you can’t give it as a reason… [chuckle] We’re talking about it now, but I swear to God, it took me five years before I understood it, it’s I understood the idea that you… People have selfish preferences and altruistic preferences, but it didn’t occur to me for years, that people can have altruistic preferences, and still if they’re rational, they won’t vote. Okay, it’s very difficult. And by the way, some audiences never… I talked about this a lot, and some audiences, they simply do not understand what you mean when you say you can’t affect the outcome of election, they immediately go to say, “Well, if everybody thought that, [chuckle] you know, we couldn’t run a democracy,” which is true.
0:38:17.1 SC: Yeah which is true but…
0:38:18.3 HG: So anyway, these are the deep issues that come out when you try to deal with the nature of human morality, it’s deeply intertwined with notional rationality and choice.
0:38:29.2 SC: Is it somehow… Is it too cheap to attribute this to the fact that we talk and reason as consequentialists, but we act more deontologically in some sense, more rules-based…
0:38:44.1 HG: Well, I don’t… Some people, I’m sure that’s true of. A lot of people don’t talk consequentialist at all, about politics? No way. The biggest thing about politics, most people are not consequentialist, they’d think they wanna just… They wanna vote for what’s right, or they want to happen what’s right, and… That… So that could have any kind of consequences. I think people are fundamentally in many ways, not consequentialist. They don’t think in terms of what the consequences are. They think in terms of what’s right and what’s wrong.
0:39:25.1 SC: Well yeah. My own thinking about morality is undergoing a very gradual shift. Way back in the days when I was a kid, I was absolutely a utilitarian, a consequentialist. And just the more I think about it, the more, it just doesn’t work for a whole bunch of reasons, but I’m not exactly sure what I am. As a Meta-ethicist, I’m a moral constructivist. I think that there’s no objective morality out there, that we make it up, but I’m not sure what the best thing to make up is. And my understanding of what you’re talking about is not even necessarily a way of saying here’s what the best morality is, but just saying here’s how people actually behave. You’re trying to be more descriptive than prescriptive here.
0:40:05.5 HG: Oh, yeah. Well, that’s the difference between, a philosophical approach to morality in which people are searching for the, what is truly moral and what is not, and a behavioral approach, which is just to say, how do people behave? [chuckle] And it’s really interesting that people actually take morality seriously. It’s not at all what the philosophers expect, by the way, this is an interesting thing. Now, of course, the philosophers are doing more behavioral game theory and they understand what’s wrong. Perhaps I should give you an example of a behavioral game. Would you like that?
0:40:46.4 SC: Yes, please.
0:40:48.3 HG: Okay. Maybe you’ll do more than one, [chuckle] but the simplest one is called the dictator game.
0:40:53.7 SC: Okay.
0:40:53.8 HG: You can play it in the laboratory. Although I can, if I have time, I could tell you how it’s played out in the field, as well as the laboratory. The dictator game, there are two players, A and B, and they never see each other. They’re in different rooms. They never get together. And the experimenter comes in and tells both of them, “I’m giving B $10. B can offer anything he wants to you from one to 10. If you accept the offer, we split the money the way he wanted. And if you reject the offer, you both get nothing.”
0:41:32.9 SC: Right.
0:41:33.9 HG: “I take the 10 dollars back.”
0:41:35.0 SC: I remember this one, yeah.
0:41:37.2 HG: Okay. So what does the… And one of them’s called the proposer? He’s the one who’s gonna offer the other one, that’s A. And B is the responder. He’s gonna say yes or no. Now, if both players are self-interested, player A will offer player B a dollar and player B will accept it. Why? Because otherwise he loses a dollar.
[chuckle]
0:42:01.7 HG: Why do that? And player A says, “Well player B’s self-interested. So if I offer him a dollar, he’ll take it and I get nine.” When you play this game that has never happened; [chuckle] and we’ve played it in societies, not only in Boston, Massachusetts, and in Palo Alto, California. We’ve played it in the Peruvian jungles, in the Mongolian Highlands, in African jungles, etcetera, etcetera. Nobody ever plays it that way. The most common offer is half. I’ll give you $5.
0:42:44.2 SC: Fair.
0:42:44.4 HG: And if you offer $3 or less in most societies, people will reject it. Not a 100 percent, but enough so you shouldn’t do it. You should offer five. Now, if you ask a philosopher, “Well, what’s going on? Okay, what’s really going on here? Why would someone reject $3?” And the answer is ’cause he’s pissed off at the unfairness of the proposer, [chuckle] and he gets more pleasure or satisfaction out of depriving the proposer of $7 than he does by gaining $3. And the less the proposer offers the wider the gap is. If he offered one, well, I lose one, but he loses nine.
0:43:28.6 SC: Yeah.
0:43:29.8 HG: So it’s retaliation. It’s revenge. It’s a… Yes. That’s what it is. There’s another word for it.
0:43:38.1 SC: Retribution?
0:43:39.4 HG: Okay. What’s the difference? It’s revenge and retaliation. If you asked… I’ve asked philosophers, like the conferences. They don’t know what’s going on there. They say they don’t know what’s gonna happen. They go all over the place on it.
0:43:55.0 SC: What about economists?
0:43:55.9 HG: Okay, that’s an example of how humans are not self-interested. In particular they like to hurt people who hurt them.
0:44:03.5 SC: What do economists say about this game? What is their prediction?
0:44:09.1 HG: By the way, the economists would say $1, at least the traditional ones; but everybody knows now that, and this has been done hundreds of times. [chuckle] So everybody knows that there’s this concept of fairness. And that’s, what’s going on. There’s another game. I’ll do this one. It’s called the… It’s an honesty game. There are two players, again, A and B, and they can’t see each other. They can only communicate as follows: A is given two piles, two boxes with money in it. And A can look in the boxes and see how much money is in each box. And then A can say to B, “Either, B, please choose… If you want the most money choose box one, or if you want the most money choose box two.” And then B can either choose box one or box two. Now, if people are…
0:45:20.1 HG: If people are honest, then player A will say, “Choose the box that has the most money in it.” And B will say, “Well, we’re all on it, so I’ll choose it.” But why should A be honest? [chuckle] It’s costless for him to say, “Choose box B… ” No, “Choose box two.” And then player two, player B rather should say, “Well, why should I think he’s truthful? He’s gonna lie, but I don’t know he’s gonna lie ’cause he knows I might think he might gonna lie.” Like Professor Moriarty and Sherlock Holmes. [chuckle] You can’t tell what’s the right statement. So it’s a completely indeterminate game, but when you actually play this game, almost always the player A tells the truth. As long as the splitting of the money is not too uneven, player A will tell the truth, and player B will assume the player A is telling the truth.
0:46:21.8 HG: And so player A, for telling the truth, loses money and player, and know he’s gonna… Expects to lose money. There’s complications on it, but as you see, people don’t like to be dishonest. Now, I should say when you play this game, if the stakes are not… It’s not the high the stakes, it’s not how much the stakes are, it’s how uneven it is. If it’s very uneven, and then player one gets pissed off and says, [chuckle] “Well, I’m gonna lie.” And you get the cooperation falling apart. So we play these kind of games all the time, and they show the extent to which people are self-interested versus altruistic, or to the extent of in which they believe honesty is important, etcetera.
0:47:08.7 SC: Well, and surely, a lot of this is because human beings in the real world interact many, many, many times with many people and develop senses of fairness and expectations, and even when you put them in these isolated psychological tests, they’re still going to bring those external ways of thinking to bear.
0:47:30.2 HG: That’s exactly right. I had a large project, it was funded by the MacArthur Foundation some years ago, and we have 16 anthropologists go to their countries that they work in around the world and play things like the dictator game, the ultimatum game, and the honesty game, etcetera. And when the results came back and I was putting them together, the result was that people take the… They take their values from their society, whatever they are, and they take them into the laboratory, even in conditions of complete anonymity, no one will ever find out and they behave in the same ways as they do in their societies. Very egalitarian societies, people play the ultimatum game, which I haven’t really described… Yes, I have, that’s the ultimatum was what I described, not the dictator game. Excuse me, I described the ultimatum game, not the dictator game from the very beginning, and it was a shocker to me.
0:48:46.1 HG: It reminded me of the old sociologist, a brilliant sociologist, Talcott Parsons said did values really matter. Values do really matter. [chuckle] Really, there are societies where, for instance, people do cooperative hunting, like for instance in Lamalera in the Pacific who hunt whales. They hunt together in a big boat, and they split it up in a very egalitarian way. When they play the game, they do what’s called “Hyper-fair offers.” Player A when he gets $10, he offers the other guy $8. [chuckle] And if he offers him too much, the other guy rejects it.
[laughter]
0:49:28.3 HG: Can you believe that? Why?
0:49:29.7 SC: No American would ever do that.
0:49:30.8 HG: Okay, here’s why. You ask him, “Why did you reject $7 in the ultimatum game?” “Well, he thinks he’s a big shot. He’s gonna give me money now he’s come in to some money. [chuckle] He’s gonna make me feel so bad and humiliated, screw him.” So you get all sorts of behaviors that express the variety of human morality.
0:49:51.8 SC: And so again, it’s a completely convincing case, I think that human being… Human behavior can be thought of as rational in the sense of beliefs and preferences, but the rationality is not purely self-interested. There is this social rationality aspect. So let’s then ask, “Why is that the case?” Let’s get into the issue of how we evolved these particular sets of behaviors.
0:50:20.8 HG: Yeah. Boy, you ask hard questions, don’t you there?
[chuckle]
0:50:25.1 SC: You wrote the book, I’m just… [chuckle]
[chuckle]
0:50:27.8 HG: Well, one thing that I should say from the beginning is, I not only believe you have to integrate the behavioral sciences better. That is so where they overlap, they agree. People should know more of the other sciences. So, for instance, over the years I’ve taken upon myself to learn all of these disciplines. I’m an old man now, [chuckle] I’ve learned a lot of disciplines including anthropology. So for instance, I have a long article in the evolution of human socio-political systems. It was a lead article in Current Anthropology a couple of years ago, and so I went through exactly this question. Now, it’s a very difficult question but the point is that… It’s a long story, I think, but I think I worked some of it out. The first thing you have to understand, I think, is this. The common ancestor of all the primate species was almost certainly a multi-male, multi-female group in which you had promiscuous relations, no pawn bearing. All females were accessible to all females or at least random parts thereof and male hierarchy that it’s run by an alpha male, just like chimpanzees are today. But when humans came along, the split between hominids and the other primates.
0:52:10.4 HG: The humans had to cooperate a lot more because they were doing cooperative hunting and they developed tools. Now, what tools? The throwing tools, they had bats and balls, not arrows, that came 20,000 years later or more, or much more. And when they became very good at using these tools, you couldn’t support a hierarchy anymore. You couldn’t have an alpha male take over because he’s the strongest, ’cause when he goes to sleep, you can kill him. [chuckle] I know it sounds silly, but think about it. Chimpanzees can’t throw.
0:52:55.2 SC: Really?
0:52:55.4 HG: Or they can, but they can’t hit anything when they throw, and when they fight, they fight with their hands, they don’t fight with any tools of any kind or weapons.
0:53:03.0 SC: Okay.
0:53:04.9 HG: And it takes a very long time for even three chimpanzees to kill a fourth one, but once you get an accurate weapon, you can kill a guy in his sleep. So you can’t maintain a hierarchy based on power. And human societies move towards cooperative leadership. It’s called reverse hierarchy by Chris Boehm, who’s worked on this anthropologically, that is, people choose their leaders, and they choose their leaders according to their ability to promote the values and the fitness of members of the group. But once you do that, then you move towards having a language, because people have to make promises to each other, and you have the whole development of the vocal system of humans, which by the way is not just cognitive, it’s not just you got a big brain, it’s you have incredible musculature, you have a larynx which allows you to speak, etcetera.
0:54:13.4 HG: So the upshot of this is, there’s a lot of cooperation in small-scale hunter-gatherer societies. They’re not run hierarchically, they’re run democratically. And in that situation, there’s a huge benefit to cooperation, which pays off for the individual, because they’re rewarded by other members of the group when they behave in a cooperative way, and they’re ostracized when they behave in a non-cooperative way. If you wanna read about this, go to my website look at some of the references. But I think that’s really it… Human society is very, very, very singular compared to I think, other social species in that regard.
0:55:02.9 SC: So to dramatically over-simplify, you’re saying that the invention of weapons led to the invention of language. [chuckle]
0:55:09.0 HG: Yeah, oh yeah, absolutely.
0:55:12.7 SC: Is that…
0:55:15.6 HG: Great. Because the way you defeat a person who has a good mind and can speak it, is to have a good mind and speak it.
0:55:26.0 SC: Yeah, yeah. [chuckle] Interesting.
0:55:28.3 HG: You can’t just bash him over the head, because that doesn’t give you… The group won’t elect you their leader because you bashed the leader over the head, only if you’re having better ideas.
0:55:42.3 SC: And this is…
0:55:44.2 HG: And by the way, this goes on right up to almost the present, the foot soldier, people talk about who wins wars? Well, the answer is, foot soldiers win wars. It is such a shame the United States lost in Vietnam, it lost in Iraq. Right today, as we’re speaking it lost in Afghanistan. In all cases, it’s not the atom bombs and the big bombers, and the tanks, it’s the foot soldier, okay. So these weapons, the democratic weapons have been a very important source of basically democratic success, that is. In the Second World War, the cavalries lost, and they lost out to foot soldiers with the small caliber weapons, and that gave rise of the strong push for democracy after the First World War. Anyway, yeah, I think weapons are very important.
0:56:53.0 SC: Well, and this is an example, especially because as you mentioned, we had to literally change the biology of our larynx and our vocal cords and so forth, so this is an example of gene culture co-evolution, right? One of the emphasis that you get from your book is, some kind of evolutionary psychology plays an important role here, right?
0:57:16.1 HG: Right. No, it’s very… When you really think about it, it’s very dramatic, the idea that… Notice first, the reason that chimpanzees can’t speak is not because they’re stupid, they can’t produce the sounds. Chimpanzees can go hehee, huhuu, they can make about six or seven sounds, but they don’t have the muscles in the tongue and in the cheeks, and they don’t have the larynx low in the throat that allows them to articulate the way that humans can. And that could only have developed because people who could communicate that way were valued and were given more opportunities to have offspring who had also those characteristics. So when people say language is because people have big great brains, that’s just not right.
0:58:11.1 SC: Yeah.
0:58:12.2 HG: People have language because of gene culture evolution. Here’s how it goes. You have a little bit of communication and people care a lot about it because they need to communicate to figure out where to go to find the next profitable location for hunting and gathering. And so then we were people who have a little bit of ability to communicate, that gives rise to genetic changes that make people more capable of communicating verbally, and that leads to more cultural dependency because people use communication more in their deliberations, and so you have a circle of genes affect culture and then the culture promotes a more genetic behavior. And this is only true in humans really, because humans only… Only humans really have cumulative culture that is where from one generation to the next, you maintain a body of knowledge and pass it on, and animals, animals have culture, but they don’t have very much cumulative culture, if birds learn how to open milk bottles, one generation learns how to do it, after a while, they forget, it goes away. It’s not cumulative.
0:59:44.9 SC: Yeah, from a physics jargon perspective, this invention of cumulative culture was absolutely a phase transition in how not only how we behave but how we evolve for the reasons you just mentioned, I mean it’s very tangible differences in how our genomes evolve over time.
1:00:01.5 HG: That’s right, exactly.
1:00:04.2 SC: But it does also get us into yet another hot button issue, which you have to talk about, which is the group selection controversy, as you already mentioned a while ago, there’s this sort of standard belief in certain corners of evolutionary biology of inclusive fitness, I would sacrifice my life for two siblings or four cousins or whatever, but the idea that there are groups that are attached to each other in evolutionary ways without necessarily being kin is a controversial one.
1:00:40.3 HG: Right. Well, I’ve written a lot about this, and I have a lot of supporters in the biology community, including EO Wilson and Martin Nowak and other people, and a lot of… There’s also, it’s a very ideological dispute, and a lot of population biologists are pure inclusive fitness supporters, and they’re wrong, I mean they just… It’s just wrong, but I’ll tell you what’s really going on. What’s really going on is there’s this guy, William Hamilton, who is a brilliant biologist, and I loved his work and I love his work, who developed the concept of inclusive fitness, which says we shouldn’t just be self-interested we should try to promote our genes wherever they are in other people, so we should help… We should help relatives. And that’s what you said, Sean, right. The problem is this… The model that he built to show this is a single-locus model, okay. That is, if you have a genome, your genome has 23 chromosomes and there are all sorts of genes all over it, his theory is what happens exactly at one locus, at one chromosome and one gene at that locus. It’s not about all of the loci, it’s about the single ones.
1:02:14.2 HG: Now, the problem is this, what they say in the literature is, “Oh, we’ll assume that it’s additive across genes.” That is, genes don’t affect each other, but they each follow the same inclusive fitness rule that is act so as to maximize your relatives at that locus. But the problem is the different loci have different interests, so I’m a benefit by helping you at a different locus, or I’m a benefit by suppressing you. Suppose at locus A, you’re producing a poison that hurts… It helps you as your gene, but it hurts the rest of the genome, well, the rest of the genome is gonna develop mutations to suppress that gene. Conclusion, that’s why we have what are called mesomorphs, that is multi-genetic organisms because the genes affect each other. They’re not… It’s there side by side. They’re affecting each other.
1:03:21.3 HG: And when they affect each other, inclusive fitness, no longer works. Now, I can say this, you get a… What I can say is, you get a complex evolutionary dynamic in the genome that’s well developed in the literature, you can read all my… If people are interested, go to my website and look up sociobiology and look at the various entries on inclusive fitness. So there is this debate, and it’s actually… It’s very interesting, it’s amusing. Martin Nowak and EO Wilson and our co-author Corina, published an article in Nature, I think or Science… I forgot… Well, Nature, I think. I’m not sure. Showing that inclusive fitness doesn’t work. And the response was, 127 biologists sent a letter to Nature saying, “This is wrong.”
1:04:24.5 SC: Yeah, I remember.
1:04:25.6 HG: Remember that?
1:04:27.5 SC: Mm-hmm.
1:04:28.7 HG: I was appalled. What do you… And I said… And I’m… The first paragraph, first paragraph of their letter is just wrong. It’s just wrong. So when I see biologists in conferences, I say, “You signed this? Why did you signed this? This is just wrong. You don’t believe this do you?” They said, “Well, we all signed it.” And It’s crazy, by the way. This reminded me of, you may have seen… You may have seen this, after Einstein developed his special theory, there was a book written in German called a hundred authors against Einstein and they asked Einstein, “Sir, what do you think of this book?” A hundred…
[foreign language]
1:05:18.9 HG: And he said, Well, if I’m wrong, one would be enough. You ever heard that?
1:05:25.1 SC: I have, I have, but just to be fair to the 100 authors here, what was the argument against the Wilson and Nowak and Tarnita, I think paper?
1:05:35.8 HG: Oh. I think that you could… Some of the arguments were, I think, correct. And I don’t think their article was the last word on it, I think my article is the last word on here. I actually, believe it or not, I presented this article, I just laid out kind of argument for you, I presented it at Oxford in the biology department, which is the hotbed of inclusive fitness theory. And people were very nice. So it’s okay. The whole idea is this, you see, if inclusive fitness were true, it would be wonderful because there’s no complexity. There’s no evolutionary dynamic needed, it’s just one little equation…
1:06:23.1 SC: It’s linear.
1:06:23.5 HG: Which says something like, B is greater than RC or BR is greater than C or… But it’s just not that way. You have social species. They’re incredibly complex and diverse, and they can’t explain them in terms of inclusive fitness theory. Now, let me say one final thing on this. Group selection, the notion of group selection, when people talk about it, it’s… They’ve reduced it to groups competing with each other, group competition, even John Maynard Smith did this, but that’s not right. Group select… When you say that a certain thing is selected, it means that it’s… It doesn’t mean it’s selected through competition with others. For instance, if I can escape a fox, that is a fitness enhancement for me, I don’t have to fight with another one of my species. I don’t have to say, “Well, I have to fight with you. You just can’t run as fast as I can, so I beat you.” Similarly, group selection in general, means that the evolutionary dynamic is favourable to the evolution of groups with certain social interactions, that’s all.
1:07:48.5 HG: How it works inside is extremely complex and we don’t understand. We don’t really understand social species. We know a little bit about them, but there’s a lot we don’t know, all we can say is they evolved probably because there was some benefit to this particular social organization of that species. So for instance, bees are incredibly social, they’re not highly related by the way, this is a myth, one queen, and they’re all the offspring, no way. First of all queens die all the time. Some species of… Social species of bees have six or seven queens and they can have many males, so the genetic… This is tactical literature, but if you measure the relatedness of workers in some species of bees and wasps, they’re actually quite low. They’re almost to the level of being no relation at all. And they still have incredible levels of cooperation, so I’m not saying inclusive fitness theory is silly, it’s very, very important, and it explains most of what happens, but it doesn’t explain social species, it explains what happens in basically, non-social species.
1:09:11.8 SC: Right, so we have this development of sociality, once we had weapons and then language, and that sort of has influenced how we play our games, how we think rationally, become social rationality and what… There’s obviously a million places to go here from here, but let me helm in on one thing that you say which is very provocative, the idea that we evolve both a private and a public persona, and presumably this is unique to social species. My cat doesn’t have a private and public persona [chuckle] it’s more or less the same cat no matter what is going on. So could you say a little bit more about what these personae are and the roles they play in our game making life?
1:09:57.8 HG: Yeah, by a private persona, I mean in daily life we go around doing our business without great concerns for how the whole society works, what the rules of the game are, how I relate to the larger public, etcetera. What we’re doing now is my private persona, I’m just talking to you about stuff and you’re gonna, hopefully, put it on air, and this and that.
1:10:24.2 SC: It’s gonna go public, I hope you know. [chuckle]
1:10:25.7 HG: Excuse me?
1:10:27.4 SC: It’s gonna go out into the public, I hope you know.
1:10:30.4 HG: Yeah, that’s why I’m talking to you. Okay, so a public persona, and in your private persona, you’re not asked to evaluate the impact of everything you do on everything in the world around you.
1:10:47.9 SC: Right.
1:10:48.8 HG: You buy, you go to the supermarket, you buy eggs, and they’re your eggs. You’re not concerned about the egg industry or the farmers profit or the chickens usually. Now that’s changed. Now, we only buy free range chickens and this and that. But the public persona, you automatically enter a different frame in which you’re thinking as a Kantian kind of categorical person and you’re talking about what’s right and what’s wrong, who do I support, and who do I not support? What values do I accept in the large and which do I not? And you get a very different dynamic, for instance, one that supports what I called social rationality as opposed to individual rationality, and people do both all the time.
1:11:37.8 SC: Yeah.
1:11:38.4 HG: So you move from one to the other. Now, a lot of people never enter the social realm.
[laughter]
1:11:45.0 HG: I mean I know people, they just never think about anything except, what’s happening today and what we’re gonna have for dinner. And there are a few people in the other, they’re insufferable.
[chuckle]
1:11:57.2 HG: All they think about is… Everything they do has this deep social meaning, so… But most people are in the middle, and I have a typology on that which I’ve shown you, there’s Homo Socialis, Homo Universalis. Homo Universalis is the public persona, which is Kantian that is I do what I think is best for the world as a whole. Homo… I forget the names I used even, Parochialis, yes the parochial is, I do best for my group. I vote for my group, I support my group, whatever that group is, and that is not a selfish behavior that by the way, it is a support for a particular social group.
1:12:47.0 SC: So the public persona isn’t just what we do that is visible to the public. But what we do, that is taking the public into consideration in some sense.
1:12:56.0 HG: Right, from that perspective, the humans have that they can adjudicate and act on the rules of the game, not just play within the rules of the game, but they can act on the rules of the game. And they’re very different, Hegel had, I really got that from Hegel, who talks about exactly that. Now I should say one more thing that I think is really interesting. It’s the term that I coined about the year 2000 called strong reciprocity, what is strong reciprocity? Well, we have to start out… This is about humans. We have to start out with what is called reciprocal altruism by the great biologist Robert Trivers, which is in some animals and humans, I scratch your back, you scratch my back. So we’re a… It’s a mutualism, but what we found out for humans, is there’s another thing where people spontaneously help others without expecting anything in return, and they spontaneously hurt others without expecting anything positive in return.
1:14:13.0 HG: They simply do it because they feel like doing it. Let me give you an example, a few examples, this by the way is really big. I’m at the airport and I wanna get to a certain aisle or a certain exit, and I stop someone in the airport and I say, how do I get there? And they stop and they tell me, well, why did they do that? They don’t know me, they’re never gonna see me again. Why would ever someone open the door for me, if I have a package when I’m going into a building, that’s what I call the strong reciprocity cooperative, that is, people act like they would like to be treated themselves. And by the way, this is a big deal, and they don’t even care about you, sometimes. When I was a kid, I used to drive a truck, delivered furniture in Philadelphia, this is way before there were GPS systems, so if I had to find a certain street, what can I do, stop and look on the map for 20 minutes? No, I stop and I ask someone how do I get to Jula… How do I get to this street. Half of the time, they always answer and half of the time it’s completely wrong.
[laughter]
1:15:26.3 HG: So I was only 17, but I developed a… I said, okay, I’m going to invent a street, Julapi street, there is no Julapi street in Philadelphia. And I went out and within my truck with my furniture in it, and I stopped some guy and say, “Hey, where is Julapi Street?” Well, you go about three blocks off into the church, turn right and blah blah. Now, that didn’t help me at all, but it made him feel real good, he’s helping out there. So that’s the one side, the other side is even more interesting, which is the negative side, people love to hurt people who hurt them. Go to your sociology book and see if you can find the word retaliation or yeah that. Or vengeance in the index of your sociology book, it’s not there, it’s treated as a… What do you call it? It’s treated as an abnormal behaviour to want vengeance and retaliation, but it’s one of the basic human behaviours, we do it all the time, people love to hurt people who hurt them. I remember, well, of course, we do a lot of experiments to show this, and this is really what’s going on in the ultimatum game, where a guy rejects a positive offer.
1:16:54.2 HG: He gets pleasure out of hurting the other guy who he thought wasn’t dealing with him fairly, and this… So I used to tell my students, look, there are two kinds of movies, there are love movies, and there are revenge movies. The love movies, we all know love is human and we love the reach… The revenge movies, the guy like Arnold Schwarzenegger gets… His family gets hurt in the beginning, and the rest of the movie he goes around killing everybody in spite, and you come out of the movie, and you say, “Oh that was really interesting, I feel really good about that movie.” People love to reciprocate evil with evil with no gain in mind.
1:17:39.7 HG: And the thing that’s important about that is that this is a major reason you have social stability, it’s not governments, no. We didn’t have… Humans didn’t have governments until a few thousand years ago, there were no governments, there were no jails, hunter-gatherer societies, they could… All they could do is punish you or ostracize you, and there were no judges, there were no policemen, so strong reciprocity is what kept things going, people help those that they thought were being nice and hurt those that are not being nice. Again, this is human behaviour that we discover in the laboratory that really nobody ever talked about. I got that notion because of an experimentalist. He’s a wonderful experimentalist, Ernst Fehr in Zurich but… So this is what we do, we play games in the laboratory or in the field.
1:18:42.8 HG: So for instance, the ultimatum game, people said about the ultimatum game, “Okay, who cares about $10?” So people… It’s a little bit of money. Okay, let’s go to a society, a poor society, get farmers who make about $300 a month and play the ultimatum game for $900 or a $1000. There, you’ll see whether they accept it. You know what happens? Nothing is different. They still reject three… They’ll reject a month’s wages.
1:19:19.2 SC: Wow. Well, this is clear that you’ve already made this connection, let’s just draw it out because it’s so important because what you’re getting at is the idea that these kinds of socially motivated behaviours, even taken and abstracted outside of an explicitly social context can be derived or thought of starting from these principles of some kind of rationality played out by a game theory. I think that probably a lot of people have this idea that once you start talking about rational actors and game theory, you’re gonna end up with selfishness and individuality, but you’re getting from there to social behaviour in a very connected and tangled kind of way.
1:20:01.5 HG: Oh, absolutely. That’s… The beginning, when Darwin did his stuff, people said, “Oh, it’s nature, red in tooth and claw,” it’s all about competition, but for the past 100 years almost, it’s been about cooperation, that’s what really works well. And Sam Bowles and I, one of my co-authors, we wrote a book called A Cooperative Species, A Cooperative Species, explaining how human cooperation evolved. Now, part of that is cooperation evolves because hunter-gatherer groups make war against each other, and to make war, you have to cooperate. Non-social species do not make war, only social species can make war, like ants or humans. So yeah, we’re all about how evolution develops, not only conflict and a struggle, it also explains cooperation.
1:21:11.0 SC: Well, maybe this is a good place to give you the final question, the final issue anyway, to talk about, ’cause I always say it’s the final question, but then follow-up sometimes happen, so the final thing to talk about… I’ll give you a softball. I’ll give you an easy question. You’ve mentioned how these different disciplinary approaches end up talking about human beings in very different ways, and they should talk to each other more, etcetera, so how do we fix that? How do we fix academia and our intellectual lives so that these ideas are not siloed so strongly that we don’t even talk to each other anymore?
1:21:42.9 HG: Well, the first thing to notice, there are two really important developments to notice. One is, if you look at scientific results and the behavioural sciences, they have been interdisciplinary. As people who have approached things, for instance, in epidemiology, it’s not just microbiology, it’s also a social interaction. So the whole theory of viruses and their spread, the epidemics, etcetera, involves both sociology and microbiology. So the gain from doing these things in a more transdisciplinary way is very high. Now, for me, the second thing has been the internet. When I was younger, I still thought the same way, but I’d have to go to the library at Harvard, Widener Library, and I was very strong, and I’d take out journals and bring them home, piles of journals, but it still took weeks.
1:22:57.7 SC: Sure.
1:22:58.1 HG: To do anything. Now with the internet, I can learn a new subject in a year without any problem. Now, I must say I’m now working on physics and physics is harder than some of the other subjects. It’s taking me five years so far, but I think the internet has made all the difference in the world in the ability to gather information from all over, and to talk to experts in fields that are not your own in places far away from you. So I think it’s happening and there are examples of it. For instance, the University of Arizona has organized in an interdisciplinary way, and I think it’s been very successful. Now, what else can I say about that? I think that’s about it. I will say that for young people, when you start out, you have to go into a particular discipline and you have to really learn them very well before you can branch out, but I had the idea a long time ago of setting up a school where the first year for behavioural students, people who are studying life, behaviour of life forms. Now the first year, you all learn the same thing, you learn statistics, you learn mathematical model building, you learn the scientific method, and then you learn the basic core of every one in the fields, psychology, sociology, political theory, economics, anthropology. And then in your third year, you start specializing on what you wanna do, and I think that’s not a bad model.
1:24:51.3 HG: So, that’s all I can say. Now, I should say this, there’s also a political problem. For instance, I have found very high levels of political ideology tainting people’s judgment in almost all of this field. For instance, sociology is said to be extremely liberal, as far as I know. I joined the ASA for a few years, and I felt like I was back in SDS…
[laughter]
1:25:22.8 HG: In 1960, 1968 or something like that. Support the workers at the school, such and such, and this, and that. And anthropologists have gone into what’s called post-modernism, which is a denial of the importance of science, you probably had experience with that. And I’ve done a lot of work in anthropology with my co-authors, it’s very hard to get anthropologists to think scientifically, and it was very hard for us to get funded by the NSF because they run their proposals through a council of anthropologists.
1:26:15.0 SC: Sure.
1:26:15.4 HG: And anthropologists don’t like to go in there and experiment on the simple society people. When we try to… I’ve published a lot with Princeton University Press, I think it’s a very good press. But when we submitted our anthropology book to them, which is called, Fifteen Small-Scale Societies, the anthropologists wouldn’t take it. So we got published by Oxford. So there’s political stuff all over the place and it will take a long time. But I think it’s very interesting that scientists talk about being interested in getting the truth, but when they disagree with each other, if they’re not in the same field, who cares? Forget about it. I think that’s a very strange attitude.
1:27:12.9 SC: Well, and your mention of the funding situation is also important. I think a lot of people give lip service to being interdisciplinary, but they only have finite resources and when they partial them out, that they’re gonna partial them out to the people who do things that they feel comfortable with.
1:27:27.4 HG: Right. No, I’m sure, but they… Why don’t they feel comfortable with our doing game theory and Fifteen Small-Scale Societies? Well, one person did. A senior NSF guy, and he pushed through the funding that we needed, to actually carry out these experiments. It was not a lot, it was a few million dollars. But it’s still very difficult. Interdisciplinary is very difficult, and people think inter-disciplinary means, well, you just combine the wisdom of the different disciplines, but that’s false because the wisdoms don’t agree with each other.
1:28:08.5 SC: Right.
1:28:09.2 HG: They don’t add up. They are contradictory. So you have to… And I’ve done that in two books, one book, Game Theory Evolving, I said, “What does economics have to change in order to be compatible with sociology and political theory, and psychology?” In my last book, which is called Individuality and Entanglement, I ask mostly the other question, that is, “What do the other disciplines have to do to be compatible with economics?” but I think there’s a lot more work to be done in these areas.
1:28:42.4 SC: Well, that’s always a good place to end. There’s a lot more work to be done. I’m hoping that a lot of young people are listening here and being inspired. And you’ve given good advice that it’s good to learn a discipline and master it before moving on to learn many more, but…
1:28:54.1 HG: Oh, yeah.
1:28:54.9 SC: You’ll never burn out.
1:28:56.3 HG: Not only because you need to get a job, but also you have to get really deep into a subject…
1:29:02.9 SC: Right.
1:29:03.9 HG: To know what’s going on. If you’ve never done that, if you’re taking up the philosopher’s road, you’re not gonna be able to deal with the intricacies of particular disciplines. You need the zits flights of working out [chuckle] one particular one.
1:29:21.5 SC: I tend to defend the philosophers here because they get into the nitty gritty of their own individual issues more than any other field that I know about, especially. [chuckle]
1:29:28.6 HG: Oh, I know. I read a lot of philosophy of physics, and I think a lot of people are very, very good at it, but I must say I’d rather spend my time learning a standard model.
1:29:39.5 SC: Fair enough. I cannot…
1:29:41.0 HG: Than reading more philosophy.
1:29:43.0 SC: I cannot argue with that. Okay, so Herb Gintis, thanks so much for being on the Mindscape Podcast. This is a very eye-opening, thought-provoking conversation.
1:29:50.3 HG: Okay, it has been fun. Thanks, Sean.
[music][/accordion-item][/accordion]
The presumption of the premise: that all behavioral sciences out to be inter-integrated and discoverable, an admirable goal, if unattainable. Here, at least in the podcast, Gintis cherry picks from 5 or 10 disciplines, to come up with a thrust towards a TOE, theory of everything. For me, the world is messy.
Archimedes claimed that if he found a fixed point, he could move the world. The desire for an all-inclusive theory that takes in all human affairs at every level is at least a 7,000 year old endeavor. These slippery sciences, psychology, economics, sociology, etc. are alive and changing in interrelation to the culture they are elucidated in. They are messy for good reason. They have inherent ambiguities the way cosmology has upper and lower bounds of knowing. compelling story tellers often impose a priori reasoning while claiming empirical data.
OK. For me, then, I did not come across the fundamental axioms and concrete and clearly falsifiable established dictums in his own presentation that he faults all the other disciplines for not having.
Perhaps there is more, that I didn’t see. Please direct me to something useful/fundamental. Too broad a topic for the podcast?
A few anecdotal game theory ideas, one offs in anthropology show a rash reductionism not palatable to science.
The real geniuses in his broad field are highly remunerated, and working the monopolized arena of the internet.
I must be missing the bulk of it.
The key to any successful model of a system ( in physics, chemistry, biology, anthropology, whatever) is knowledge of that system. There are two main sources of knowledge, one that relies mainly on intuition and reasoning, usually referred to as rationalism, and one that believes experience is the only true source of knowledge, usually referred to as empiricism. The ideal model of a system would seem to be a properly balanced combination of these two different ways of looking at reality. But in order to achieve that goal there must be cooperation between contributing members of that discipline, where each member most likely, whether they know it or not, is probably either a rationalist or empiricist by nature. So achieving that ideal model in all likelihood will never happen, but of course it won’t keep people from trying.
Just for fun I took one of those online test, where by answering a bunch of multiple choice questions it supposedly could tell by your answers the probability that you were an empiricist, a rationalist, neither or both. My results were:
Empiricist: 81%
Rationalist: 41%
Neither or Both: 14%
BTW that adds up to 136% so I’m not quite sure how reliable the test was.
Pingback: Sean Carroll's Mindscape Podcast: Herbert Gintis on Game Theory, Evolution, and Social Rationality | 3 Quarks Daily
I found Gintis brought up multiple interesting sociological (or anthropological since the distinction was specifically pointed out) phenomena, but I was somewhat saddened to not find more interesting discussion going down different rabbit holes. Unfortunately, to me it felt somewhat rather like a lecture than a conversations at times (not to ascribe blame to any party, sometimes that’s just how life is). Still glad to have listened and broadened my own thoughts on the matter, and keep up the good work.
Gintis is helpful in broadening the understanding of traditional notions of self-interest and cooperation. But he is still stuck in a vocabulary that doesn’t adequately describe human behavior. People do not act rationally. They do act based on their subjective personal values which incorporate their needs and desires to cooperate with others in their social group. Self-interest is a subjective concept. Rationality is an objective concept and objective concepts cannot capture the meaning of the subjective drivers of human behavior which are personal not rational. What Gintz misses is that people act self-interestedly but those interests include their values and thus their needs to cooperate. This isn’t altruism at all.
Cooperation can benefit the whole group in ways that the individual may think outweighs the individual’s personal interests, say in stealing the group’s money since the group’s pro social acts may benefit the individual more than he would be benefited by absconding with the cash and risking the enmity of the group. Such judgments get incorporated into each individual’s decision making process as values. But decisions made based on personal values are just as self-interested as any other decision. So decisions to eat an ice cream, help an old lady across the street, donate to charity or rob a bank can all be self-interested depending on the subjective values of the actor. Humans do what they like based on what they value. And their decisions are largely emotional. That may involve cooperative behavior or extremely selfish behavior. But it’s all self-interested behavior as humans have no other way to behave. Rationality and altruism have absolutely nothing to do with it. We aren’t rational animals or altruistic animals, we are self-interested animals with interests that can often incorporate pro social values.
TNF