285 | Nate Silver on Prediction, Risk, and Rationality

Being rational necessarily involves engagement with probability. Given two possible courses of action, it can be rational to prefer the one that could possibly result in a worse outcome, if there's also a substantial probability of an even better outcome. But one's attitude toward risk -- averse, tolerant, or even seeking -- also matters. Do we work to avoid the worst possible outcome, even if there is potential for enormous reward? Nate Silver has long thought about probability and prediction, from sports to politics to professional poker. In his new book On the Edge: The Art of Risking Everything, Silver examines a set of traits characterizing people who welcome risks.


Support Mindscape on Patreon.

Nate Silver received a B.A. in economics from the University of Chicago. He worked as a baseball analyst, developing the PECOTA statistical system (Player Empirical Comparison and Optimization Test Algorithm). He later founded the FiveThirtyEight political polling analysis site. His first book, The Signal and the Noise, was awarded the Phi Beta Kappa Society Book Award in Science. He is the co-host (with Maria Konnikova) of the Risky Business podcast.

0:00:00.5 Sean Carroll: Hello everyone and welcome to the Mindscape Podcast. I'm your host, Sean Carroll. Human beings have a tough time being rational sometimes. We all know that, right? There are various sorts of cognitive biases, from wishful thinking to confirmation bias; there's a whole long list, you can look it up on the internet. But we human beings evolved over biological time to survive under certain conditions. Sometimes sitting down and thinking things through perfectly rationally is not the best survival strategy. You have to have some heuristics; you have to see something and react to it very very quickly. But I think most of us would agree that, all else being equal, it is better to be rational than to be irrational, and therefore it's good to try to learn ways to overcome those cognitive biases to be a more rational human being. That, however, raises the question: what does it mean to be rational? Who says whether an action is rational or not? People have thought about this: decision theorists, philosophers, economists, psychologists all care about this a lot. Roughly speaking, if there are two outcomes that could come to pass because of an action that you take, and you want one outcome more than the other one, it is rational to do the thing that brings about the outcome that you want.

0:01:15.9 SC: Okay. That's the easy part. And then they make it more complicated: what if there are three different outcomes and you have different preferences? So there is a theory of what it means to be rational. The theory becomes complicated when an action that you take does not directly lead to a consequence, but maybe only has a probability of having different consequences. Now, there's a simple thing you can do, which is just to say, okay, attach a number, the utility, to every possible outcome, then multiply the utility by the probability that that outcome will actually come to pass, add those up, and that's the effect of your action. This is called the expected value of your action. Which is a weird, [chuckle] which is a bad name, because the expected value is not the value you expect. The expected value is the value you expect on average. And in a very very clear context like playing poker, for example, where you know exactly what the value is of different outcomes, you can very easily calculate, or at least you can attempt to calculate, expected values, and you can talk about certain moves, certain plays being +EV, positive expected value. But in addition to that expected value, which is just sort of the average, there's the issue of risk.
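Sean's recipe, multiply each outcome's utility by its probability and add them up, can be sketched in a few lines of Python. The outcomes, utilities, and probabilities below are invented purely for illustration:

```python
# Expected value: sum over outcomes of (utility) x (probability).
# These numbers are made up for illustration, not taken from the episode.
outcomes = [
    ("best case", 100, 0.30),   # (label, utility, probability)
    ("worst case", -40, 0.70),
]

ev = sum(utility * prob for _, utility, prob in outcomes)
# ev comes out to about 2: positive on average ("+EV"),
# even though the single most likely outcome is a loss
```

Note that the probabilities must sum to 1, and that a positive expected value says nothing about the spread of outcomes, which is exactly where the discussion of risk picks up.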

0:02:32.1 SC: We talked about this before with philosopher Lara Buchak, the idea that, okay, it's not just enough to say that on average the outcome is going to be good. Because if the worst possible outcome from this action is really super bad, then maybe I don't wanna risk that, even if the best possible outcome is super good. Other people will go the other way around. They like the idea of risk, they would very happily take a risk for the promise of a really good best outcome. So today's guest is Nate Silver, someone who is well known to many listeners, I'm sure, he became quite prominent in the realm of political forecasting, presidential elections and so forth. Was very successful with that, founded the FiveThirtyEight website, has since left there, but he doesn't want to primarily be known as the political forecasting guy. It's not the politics that interests him, it's the forecasting and the idea of probabilities and living in a world of uncertainty where we have to make decisions based on different possible outcomes coming true. So his new book is called On the Edge: The Art of Risking Everything. And it's about the difference in personality types between people who like taking those risks and the pluses and minuses of that, and people who don't like taking those risks.

0:03:51.3 SC: Now, the sort of psychology here, grouping people together based not only on their risk-taking but also on their analytical prowess, their conventionality, their openness to new experience and things like that, is something I am not in a position to really judge, but it's absolutely worth thinking about. I do think that on the list of things that we as a society don't think enough about, uncertainty and risk and probability and how to deal with them are absolutely high up there. What do you do if there's some action you can take that has a very very tiny chance of being bad, but bad means the extinction of all life on earth? This is a very relevant question right now, so we should be thinking about it. And I think this kind of conversation helps push us to think about these things more carefully. So let's go.

[music]

0:04:58.9 SC: Nate Silver, welcome to the Mindscape Podcast.

0:05:01.1 Nate Silver: Thank you so much, Sean.

0:05:03.6 SC: I guess there's a... Let's start with the biggest view here. Why is it so hard for human beings to think probabilistically about the world? It seems like it would be a natural skill.

0:05:14.8 NS: I think in some contexts it is natural. I think if you see storm clouds on the horizon then you might bring an umbrella in case it rains, or if you're driving home from work you might have some sense of, there's a drawbridge on a certain route, so you wanna gamble on that not happening, or whatever else. Yet I think people are not... Look, we have millions of years of evolution where we're not living in an environment of abundance, right? We're just barely trying to eke out a living; if you get some type of disease or infection it might kill you. A predator might come and raid your household or your tent or whatever. So we're not used to having the amount of choices that we have and the amount of complexity in the world that we have, and we tend, I think, naturally to be risk averse, fight or flight when we face stress. So, yeah, partly it's coping with the complex modern world where things are uncertain and maybe not getting enough practice. I mean, one good thing about poker, which of course I love and write about in the book, is that you do get to play out the long run. If you play thousands of poker hands, then the third of the time your opponent makes her flush, you actually experience that, and so it becomes more visceral, I suppose.

0:06:29.6 SC: Right. And of course in the real world we very often, as you say in the book, have to think like this, but in cases where we don't get the long run, like presidential election forecasting, you don't really get to do it a number of times.

0:06:43.6 NS: Yeah. Which is why it's ironic. This is a thing that happens every four years, so in my adult lifetime, whatever, maybe you have 15 or 16 of these things. And, yeah, you'd need hundreds to know. So you just have to be zen about the fact that I became really well known for this thing where you'll never actually know if I'm any good at it or not.

[chuckle]

0:07:01.2 SC: Well, I do wanna talk about the famous/infamous example of the 2016 election when Donald Trump won over Hillary Clinton. The standard prediction day before the election was that Hillary would probably win. You said Hillary would probably win, but you were less confident than many people. And then when she didn't win, people gave you a hard time and that was legitimately unfair. [chuckle]

0:07:28.4 NS: I think so, in part because of the way... Look, before I ever forecasted an election or really got into politics at all, I come from a background involving poker. And so I dealt in a world where you're dealing with a lot of gambler types of people, the environment I call in the book The River. We can talk about that in a minute.

0:07:46.6 SC: We will. Yeah.

0:07:47.4 NS: But to a poker player, the conventional wisdom, the betting markets were that Trump had a 15% chance to win, and our model had him at 30%, or 29% to be more precise. So a poker player looks at that, or a sports bettor, and says, "I'm gonna go bet on Trump. I'm getting six-to-one odds and I'm gonna win a third of the time or something." So what we call the expected value of the bet is positive, meaning if you were to play out things repeatedly, that bet on Trump would make money. So poker buddies of mine were like, "Oh, great. I made money betting on Trump." And everyone else in my life was like, "You're an idiot, Nate." But, yeah, and look, elections are different, in part because of the high stakes, in part because campaigns always say, "Oh, this is the most important election of the rest of your life, or the most important election of all time." And there are different types of uncertainty too. It's not like the election is literally determined by the roll of the dice, like a game of craps would be. But it's worth considering some of the crazy contingencies that we've had in recent political events.
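As a sketch of the arithmetic behind Silver's point (the one-dollar stake is arbitrary; 29% is the model probability he cites): at six-to-one odds you break even at about a 14% win probability, so a 29% shot is a profitable bet on average.

```python
stake = 1.0   # arbitrary bet size
odds = 6.0    # six-to-one: a winning $1 bet returns $6 in profit
p_win = 0.29  # the model's probability of a Trump win

# The market's implied break-even probability at these odds: 1 / (1 + 6), about 14.3%
breakeven_p = 1.0 / (1.0 + odds)

# Expected profit per dollar staked, if you could make this bet repeatedly
ev = p_win * (odds * stake) - (1.0 - p_win) * stake
# ev is about 1.03: roughly a dollar of expected profit per dollar staked
```

The bet is +EV precisely because the model's probability (29%) exceeds the break-even probability implied by the market's odds (about 14%).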

0:08:49.4 NS: I mean, the one that comes to mind just recently is that, because Donald Trump turned to look at a data visualization of immigration statistics, he turned his head just enough that a bullet nicked his ear instead of doing something far more serious. Or in 2000, when Al Gore lost Florida by 537 votes in a state of 10-plus million people, based in part on a flawed ballot design in one county in South Florida. Sometimes the decisions of history can actually be random and contingent.

0:09:20.0 SC: Well, I guess that's what I mean when I say that people are not very good at thinking probabilistically. The biggest thing I have in mind is this idea that mentally we tend to move all probabilities toward either zero or one. And when you say there's a 29% chance that Donald Trump will win, people read that as he's not going to win.

0:09:42.7 NS: Yeah. People move to 0/1 or 50/50.

0:09:44.3 SC: 50/50, yeah, they like that.

0:09:45.8 NS: You can allow them 50/50. Anything in the 75/25 or 70/30 zone is kind of a thankless place to be. You get yelled at if you're wrong, but you'll still be wrong a lot 'cause 30% chances happen all the time, of course.

0:09:58.0 SC: Well, and poker players know that 1% chances happen all the time, because there really are many, many events. And so does the poker playing experience sort of inure you to that a little bit?

0:10:10.6 NS: I think so. Or risk taking experience in general, where there are times... In poker, there's no risk-free way to play poker. If you can get the money in as a 60/40 favorite you're thrilled. The best you can usually do is 80/20 or so. I talk in the book to a guy named Victor Vescovo who's literally an explorer. He's climbed mountains and things like that. And he climbed... At one point attempted to climb Aconcagua, which is the tallest mountain in South America, in Argentina and kind of slipped, lost his footing, triggered a mini little avalanche, was knocked unconscious, nearly died. Fortunately, some French climbers saw him from uphill and rescued him and took him back to base camp. When I talked to him, he was like, "Yeah, you know there's always a 2% chance something like that happens. You're not gonna totally eliminate risk climbing Aconcagua." So that mentality is really ingrained in people who face virtual danger in the form of financial risks or real actual physical danger.

0:11:11.0 SC: I do have one unfair question. I'll try to be fair for the rest of the podcast. But I'm on your side for the 2016 election. Are there any examples that come to mind where you blew it? Where in retrospect you should have had a much better prediction than you did?

0:11:27.7 NS: Sure. I think you're always looking at your mental process and your mathematical process. I think in the 2016 primary we didn't have a model, but I was very confident that Trump wouldn't be nominated. And I thought, "Oh, this is just a sugar high in the polls 'cause of name recognition and media coverage." I thought the Republican Party at the end of the day was the conservative party that nominated Mitt Romney types and people like that. So I was very wrong about that, and that was a result of not being very rigorous. So, yeah, sometimes you are. It's not always an excuse. Sometimes you are just plain wrong.

0:12:05.7 SC: I have exactly the same choice for my worst example, my worst prediction. I had a really good track record, without anything like the models that you build; I was very good at predicting political outcomes until 2016, and since then I've sworn off 'cause it's too hard. But...

0:12:21.2 NS: Yeah. And look, people can overly pattern match. In 2012, this is getting a little into the weeds in terms of Republican primary history, but you had Newt Gingrich and Rick Santorum and Herman Cain, all these different flavors of the day that peaked and then fell. But politics changes and the electorate changes, and so that's why it's not easy.

0:12:43.1 SC: Right. And you already used the word, but let's dig into it. Because your new book is mostly not about thinking probabilistically so much as risk aversion versus risk tolerance, how comfortable people are with risk. So how is that different than simply being good at judging the probabilities?

0:13:02.8 NS: In some sense. Judging the probabilities is a good first step, although there are also people in the book who are bad judges of probabilities but are still very risk taking. But it's really about people who have two skills. I mean, the first is the probabilistic mathematical thinking, but the second, whether it's a skill or maybe more of a personality type, is that they're very competitive. They want to compete against themselves or they want to compete against other people. As a result, they take chances to get ahead, and many of them fail. I mean, there's a little bit of bias in the book in the sense that you hear from the survivors, and the survivors write history. Although there are some counterexamples in the book of people who tried a startup and had a big investment and failed and were never heard from again. But, yeah, it's unusual to have those skills together, 'cause when you think about people who are good at calculation, you might think, oh, it's like an actuary or an accountant, someone who's kind of risk averse, maybe on the verge of being neurotic. But this book is about the rare type of people who have the overlap of the highly competitive, risk-taking personality type with the analytical skills.

0:14:07.5 SC: Okay. Good. So let's figure out... You mentioned, is it a personality type or is it something about how we do rationality? I guess, let me ask it as a question. Do you think of high risk tolerance as more of a personality issue, or as a different way of being rational?

0:14:27.7 NS: I think it's correlated with other things and some definitions of rationality. But, no, I think there's something intrinsic that is formed early in our life course, is my impression. If you talk to Victor Vescovo, who I mentioned before, the mountain climber, he of course is in the small fraternity of other mountain climbers, and he's like, "Yeah, we all have something innate or genetic. We just like to live on the edge." Or if you look at people in the financial sector, in venture capital, or founders: Elon Musk had a difficult childhood, Jeff Bezos was adopted. It's people who have some degree of this weird middle ground where they have enough trauma to really have a chip on their shoulder, but enough, I don't know what you wanna call it, maybe privilege, to be given multiple opportunities to succeed. And somehow the way that's calibrated just leads to really extreme outcomes.

0:15:21.9 SC: Well, I know that you do quote Lara Buchak in the book; she's a philosopher we had on the podcast. And she did actually claim that we should change our definition of rationality to allow for different tolerances of risk. I mean, the most straightforward definition of rationality is: assign a utility to every possible outcome, calculate the expected utility of each thing you might do, and maximize your utility. But that makes no statement about risk or variance or anything like that.

0:15:56.9 NS: That's if you look in terms of expected values. So if I get to play out a poker hand 100 times or 1000 times, or if I get to invest in 500 companies over the course of my career as a VC, then I think you can critique people who are not thinking probabilistically. But for one-off decisions? Should I get divorced? Should I radically abandon my career path? Should I have some medical procedure that can't be undone? For those types of things, I think we should be much more tolerant of people's varying types of rationality, for sure. As I think Lara would agree.

0:16:39.6 SC: Yeah. Okay. Very good. And I do wanna get into poker, 'cause I also love poker; I've had too many poker players on the podcast compared to the audience. But first, just to tickle the philosophers out there, do you care about questions of what probability is, Bayesian versus frequentist, epistemic versus something else? I mean, does that matter to your life at all?

0:17:06.4 NS: I joke in the book that it's kind of above my pay grade. I'm interested in the philosophy of it, the philosophy of, is the universe truly random on some irreducible level? I think some interpretations of quantum mechanics would say that it probably is intrinsically random to some degree. But, no, look, what I'm concerned with in practice is estimation and decision-making under practical uncertainty. It may be that if you had perfect information, it wasn't actually uncertain. Even in physical poker, my opponent has two cards that have been dealt to her already; those cards are not gonna magically change, and the order of the deck is not gonna magically change. So in some sense, even poker, once the cards are shuffled, is deterministic, but you don't have the information unless you have X-ray vision to see her cards. And so it's dealing with incomplete information and uncertainty.

0:17:58.7 SC: Right. Okay. Good. So it's very compatible with the sort of Bayesian epistemic view of what we mean by chances and probabilities.

0:18:06.0 NS: For sure. And people who have that more humble kind of Bayesian approach, I think, tend to be better risk takers in some pursuits. Sometimes you actually want people who are maybe on the verge of being irrational. If you're a venture capitalist, you want a founder who says, "Well, I have this crazy idea and there's a 95% chance it won't work, but if it does work, you'll make a thousand times your money back." That's a good positive expected value bet for the VC. It might not be good for the founder, in the sense that 95% of the time they've wasted 10 years on something. And so you want people sometimes who are a little bit crazy if you're in Silicon Valley.

0:18:50.0 SC: Yeah. So let's talk about poker, because it's a great laboratory. I mean, it removes all of the extra weird things. When you talk to a founder who gives you those odds, where do those odds come from? [chuckle] They made them up, so I can be skeptical. But at the poker table, unless someone is cheating, I know that every unknown card has a 1/52 chance of being a certain card. So is that why you like it?

0:19:18.2 NS: I like it because, yeah, maybe it avoids some of the messy real-world complications that you get. Although not entirely; I mean, a lot of what you're trying to do in poker is read people and read their mindset. There was a hand in a pretty high stakes game the other day where I tried to bluff an opponent, a friend of mine actually, because he had been having a rough night poker-wise and calling a lot and losing a lot. And I'm like, "Ah, he's not gonna call me again. [0:19:46.8] ____.

0:19:47.5 SC: You were taking advantage of the weakness of your friend. Yeah.

0:19:51.0 NS: Well, but he figured it out. He was thinking one step ahead and wound up calling instead. And so I cost myself a big pot. But, yeah, look, I mean, it's calibrating your internal probability meter, where poker players can be uncanny about, "Oh, I need a 30% chance to win and I have 25% equity, so I fold." They can really get to within a couple of percentage points, just through having so much practice that it becomes this kind of sixth sense eventually.
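The "need 30%, have 25%, so I fold" logic is the standard pot-odds calculation. Here is a minimal sketch; the pot and bet sizes are invented so the numbers match Silver's example:

```python
pot = 140.0   # chips already in the pot, including the opponent's bet
call = 60.0   # amount you must put in to call

# Calling risks `call` to win `pot`; the call breaks even only when your
# chance of winning exceeds call / (pot + call).
required_equity = call / (pot + call)   # 60 / 200 = 0.30
my_equity = 0.25                        # your estimated chance of winning

action = "call" if my_equity >= required_equity else "fold"
# action == "fold": 25% equity doesn't meet the 30% price
```

The hard part in practice is not this arithmetic but estimating `my_equity` accurately, which is the calibration skill Silver is describing.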

0:20:18.8 SC: Does it happen at the... I play poker, but at that sort of hack level. At the very elevated levels, when you're playing against great people, does it become less important to read them because they're all too good at faking it?

0:20:37.2 NS: There's some truth in that. I think when all the super pros are playing against one another, what some of them literally do is randomize their actions. They won't actually bring a die, but they can look at the clock, and if the last digit of the clock is a seven or a nine, they might bluff, or something like that. So, yeah, in some ways poker has gotten closer to being a solved game. It's a complicated definition. But, yeah, in some ways it's taken a little bit of the variety out of the sport, or the game, in terms of different styles of play. But it's very rare to have two super world-class players playing each other. And then even when you have world-class players, you introduce an extra element of stress. On day one of the main event of the World Series of Poker, everyone is on their best behavior, everyone is fresh, bright-eyed and bushy-tailed. By day six or seven, even for a high stakes pro, the stakes are just objectively really high. And we talk about this in the book: physically, your body starts to behave differently when you're in conditions of extreme stress. And so they may not be as rational as they might like to think.
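The clock trick is a way of implementing a mixed strategy by hand; in software you would just draw a random number. A minimal sketch, with a made-up 30% bluffing frequency (real frequencies would come from solver output):

```python
import random

def mixed_action(bluff_freq: float = 0.30) -> str:
    """Bluff a fixed fraction of the time, chosen at random, so opponents
    can't find an exploitable pattern. The 30% frequency is illustrative."""
    return "bluff" if random.random() < bluff_freq else "check"
```

Hand by hand the decision is unpredictable, but over many hands the frequency converges to the chosen target, which is the whole point of randomizing.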

0:21:46.2 SC: [chuckle] Well, tell us about that. I mean, that's fascinating. How much do we know about what the physiological changes are actually like when you're trying to make these high stake moves?

0:21:56.7 NS: So I talked to a couple of people about this in the book. One of them is a guy named John Coates, who was an economist who became a derivatives trader, I think for Deutsche Bank or Goldman Sachs or something like that. And having an academic background, he's like, "Boy, these fucking traders are really strange. The way they process stress is really strange." And he decided to change careers and devote his life to doing neurological studies of Wall Street traders, basically. He found out a lot of things. One thing he found is that the traders who were more successful actually had more of a physical stress response.

0:22:35.1 SC: Wow.

0:22:35.7 NS: It may not have been on a conscious level; they were kind of outwardly calm, but their bodies recognized that sometimes we face much more important circumstances than others, and that actually can give us information. I mean, if you've ever been in a situation... I haven't told this story before. I was in Los Angeles in January; we were in LA to see some friends, my partner and I. It might have been the day after New Year's. And I go to get coffee at the coffee bean and roastery or whatever it was. And my back is to the window, and this woman in the other register line starts yelling at people, looking like she'd seen a ghost, and saying, "Get down, get down." And it turns out that outside there was an armed robbery.

0:23:25.0 SC: Oh, wow.

0:23:25.5 NS: A guy was getting carjacked. And the windows were open, so the gun was kind of pointed right at the store, basically. And eventually they took his watch and his wallet and went away. This was right on Pico Boulevard, in the middle of Beverly Hills. Kind of a crazy experience. But what you experience there is that time slows down a little bit. And you actually have this moment where you think with a lot of clarity. And then later I was getting tacos or something, and I texted my partner, "Ha ha, almost got shot." And then an hour later I'm like, "What the fuck?" And then it kind of hit me, the stress that I had put out of my body. And you experience that in poker too. If you're playing a really high stakes game... It doesn't happen every day, obviously, but a few times a year you run deep in a tournament or play for higher stakes than you maybe should be playing. And you think clearly in the moment, and then you get home and, boy, man, it hits you like a ton of bricks, just the stress of it. And you process it in weird ways, and you might have dreams about it, positive or negative, days later. So we have a different operating system, to be slightly non-technical about it, when we are under conditions of extreme risk taking.

0:24:35.6 SC: I think, for myself, what I keep finding myself doing, catching myself doing is holding my breath, but I'm holding my breath whether I'm in a good position or not, as long as it's a big pot. So I don't think I'm giving much away, [chuckle] just that there's a lot of money in the pot.

0:24:49.8 NS: Yeah. Look, there are a couple of things. I have to remind myself a lot to slow down. I tend to make decisions pretty quickly, even relative to other poker players. So slowing down and breathing and taking the situation in, I think, is pretty important. I mean, as you mentioned, it's hard to hide a stress response in poker, and sometimes you have to live with that. Although there are things you can do; you can cover up your neck, where we tend to show stress a lot, for instance. However, some people actually feel more stress when they have a good hand than when they're bluffing. They get excited, like, "I'm gonna either win a little bit of money or a lot of money. This is really fun." When they're bluffing, they're like, "I have to get this bluff through to survive in the tournament," and they actually can be more focused. I'm probably more like that, but that's the reverse of how other players play. And so correlating people's stress response with their behavior is tricky. Look, sometimes a player just got a text saying their plans got wrecked that night, and therefore they're in a bad mood for reasons that have nothing to do with the poker hand. So it's not quite as easy as you'd think to read people, but, for sure.

0:25:56.0 SC: Are there... I think the answer is yes, but are there common personality traits among very successful poker players?

0:26:04.4 NS: For sure. They tend to be lone wolves, people with kind of an anti-authority streak. Because if you think about it, the combination of mathematical skills and people-reading skills that you have in poker could translate well to a lot of things. Some hedge funds hire ex-poker players. And believe me, you can make a lot of money doing something like that, or going into investment banking or going into tech or opening a business. These skills, I think, are somewhat transferable. But the poker players, they don't want a boss, they don't wanna have to put up with other people's rules. It's kind of a weird hippie culture almost, in a way. You get up when you want, and you do what you feel like. And so that's why it attracts people who are probably actually underachieving their financial net worth potential, but find the lifestyle fulfilling and unique and enjoyable.

0:27:00.3 SC: But maybe something more like a libertarian kind of hippie than a peace and love kind of hippie.

0:27:02.9 NS: Oh, for sure. The political leanings of poker players are an interesting cross-section of kind of smart, mostly male public opinion. And that can tend toward being a little bit more libertarian.

0:27:16.2 SC: Right. And so are poker players more or less... Do they come close to being optimal in terms of maximizing their expected value? Their EV. How good are they at that?

0:27:29.0 NS: Pretty damn good. I mean, they're getting to within a couple of percentage points most of the time. You can now study against computer simulators. And the computer will say, "Buzz, you made a huge error," and then you look at the fine print and it's like you were off by 0.01% or something. I mean, this is new, though. If you go back and watch poker games from the '80s or the '90s, the play was much less sophisticated. I don't know if it's a good thing or a bad thing; it's just that everything in life is becoming more ruthlessly efficient, poker included, with the computer tools that people have and the financial incentives. And the game has remained really popular. Live tournament poker is more popular than ever; we had another record-breaking World Series of Poker this year. So, yeah, it's careened toward maximal efficiency.

0:28:19.0 SC: And I presume that the answer is yes here, but is there a correlation between being comfortable taking risks and being a good poker player?

0:28:29.1 NS: Yeah. No, you can't avoid having to take some risks when you play poker. And in fact... There are players who will make it to the money, meaning you make it through the first 85% of the field, and then get very shy about taking any risks, and then finally they go all in. And of course, their opponents know they have Aces or Kings most of the time. You have to... This is getting more technical about tournament poker than you probably care about.

0:28:53.5 SC: Please, no, go for it. Yeah.

0:28:55.3 NS: But, yeah, in tournament poker there is some reason not to take risks. There's some risk aversion that's embedded in the structure of how the payouts occur. That's a little technical. But even then, you're not gonna get deep in the money of a tournament without taking and winning a few of what we call coin flips, a few 50/50 spots.

0:29:16.9 SC: Well, I think it's simple enough to understand, the way that I understand it anyway, which is that in a cash game where you're just playing for the pot in front of you or whatever, maximizing your expected value gives you one answer. But in a tournament the payoffs are different. You can win a big pot that doesn't win you the tournament. So it makes perfect sense that the strategy might actually be different even if ultimately the goal is just to maximize your expected value.
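One standard way to formalize the cash-versus-tournament difference Sean describes is the Independent Chip Model (ICM), which neither speaker names here but which converts chip stacks into shares of the prize pool. A minimal sketch of the common Malmuth-Harville formulation, with invented stacks and payouts:

```python
from itertools import permutations

def icm_equity(stacks, payouts):
    """Independent Chip Model (Malmuth-Harville): assume the chance a player
    finishes next-highest among those remaining equals their share of the
    remaining chips. Returns each player's prize-money equity."""
    equity = [0.0] * len(stacks)
    for order in permutations(range(len(stacks))):  # a full finishing order
        prob, chips_left = 1.0, float(sum(stacks))
        for player in order:
            prob *= stacks[player] / chips_left
            chips_left -= stacks[player]
        for place, player in enumerate(order):
            equity[player] += prob * payouts[place]
    return equity

# Illustrative: three players left, a $100 prize pool paid 50/30/20.
# The chip leader holds 50% of the chips but their equity is well under $50,
# which is the structural risk aversion tournaments create.
eq = icm_equity([5000, 3000, 2000], [50, 30, 20])
```

In a cash game chips convert linearly to money, so chip EV and money EV coincide; under ICM they diverge, which is why correct tournament strategy is more risk-averse than cash-game strategy.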

0:29:40.2 NS: Yeah. And then you also have to think about what you're doing with your life value. If it's really valuable to you to win a tournament, or get on TV and make a final table, or have something to brag about, then maybe the life EV is worth more than the tournament EV. Or maybe you have a flight to catch. I've been in situations where I had a flight home from Vegas that night and it would cost a $300 change fee, and that can actually... I think it's probably bad to think about those things. I've stopped booking a return flight when I go to a poker tournament for that reason. But, yeah, there are life decisions to think about too.

0:30:17.4 SC: But what struck me, and I'm still not sure that I've wrapped my head around this, is that a lot of super good poker players are degenerate gamblers in cases where they know they will probably lose. They win a lot of money at poker, and then they go play Baccarat or whatever, where unless they cheat they can't win.

0:30:38.4 NS: It's kind of 50/50 between the poker players who gamble it up in the pits and the ones who don't. I would think that, in general, the better players are the ones who don't. But there are some real degenerate gamblers on the poker-playing scene too. Personally, I have lots of risk-taking tendencies, but since I can play poker and in principle make positive expected value wagers, or maybe fool myself into being good enough at sports betting that I'm at least somewhere near break even and probably have fun doing it, I don't see any reason to voluntarily give the MGM corporation or the Caesars corporation my money. Although I do like the reward points and stuff.

0:31:19.4 SC: Well, good. So I'm glad we don't need to get into the technical details of the rules of Texas Hold'em or anything, but you do invoke a vocabulary word frequently in the book, which is The River. And you're using The River, the last card, the last part of the poker hand, as a commonality of a certain personality type. So why don't you help us understand that?

0:31:41.6 NS: Yeah. So The River is the last card that's dealt in poker. It's apparently called The River because poker had its origins in the Mississippi riverboats of, I guess, the mid-to-late 19th century. If a dealer dealt an unexpected card and was thought to be crooked, he'd be thrown in the river. That's the etymology of the term. But, yeah, I think of The River as this frontier where people who have this risk-taking gene crossed with analytical capabilities tend to exist and coexist. When I started the book, I thought of this as a metaphor. I mean, literally, the first day I was fully vaccinated or whatever, I got on a flight to Fort Lauderdale and went to a giant casino in the middle of Florida during what was still technically a pandemic. And it was every bit the shit show you'd expect a giant casino in Florida to be. I was there for a poker tournament. I'm like, "Oh, these are my people." These are the people who [chuckle] decided this was a good idea in the middle of the pandemic. And I explored the world out from there.

0:32:44.1 NS: And there are lots of people who cross over different domains. I mean, poker players who go and work for hedge funds, or I played in some of the venture capital poker games, like the All-In Podcast game. Even people in the effective altruism world; that's a different part of The River, where people are trying to solve difficult problems using expected value and probability theory. And they have the same shared nerdy vocabulary. So I think there is something to it. And they know one another. It's a small community, too. It surprised me. I'm not best friends with people in Silicon Valley; I live in New York, so New York is more my speed. But I was surprised that once you talk to one or two of the top VCs or founders, they're happy to vouch for you to talk to other ones. These are small communities. It's a community of elites, I think you could say. But I think they're starting to develop some group identity as well. You see The River being a more active political force in some ways. I mean, Sam Bankman-Fried, who we haven't talked about yet, was a major donor in both Democratic and Republican politics, actually surreptitiously in the latter case.

0:34:01.6 NS: Obviously, people like Elon Musk and Peter Thiel have gotten very involved in political ventures of various kinds, or Marc Andreessen, people like that. So, yeah, you have this self-interested group of elites. By the way, I don't mean to say that Silicon Valley people are the only people in The River. Again, it starts out in poker and moves up from there to different parts. Wall Street is there somewhere, the rationalists are there somewhere. But there's more overlap than I expected: literally the same personality types you see over and over again, a certain type of analytical nerd.

0:34:37.5 SC: [chuckle] Well, so this is interesting. And here I'm not completely sure that I'm convinced yet, but maybe you can convince me. You argue that there is a certain recognizable personality type, and being analytical is part of it, being risk tolerant is part of it, but there's also this aspect of decoupling, and now I think it gets serious. So explain to us what decoupling is.

0:35:01.0 NS: So decoupling means to separate a problem out into component parts, or to decontextualize something. So the example I give in the book is, let's say that you are in the market for a chicken sandwich and you pass by a Chick-fil-A, and Chick-fil-A's founder, and I think they've mellowed on this somewhat since, was a notorious opponent of gay marriage. And decoupling is the ability to say, "I really dislike the Chick-fil-A founder's politics, but I really like their chicken sandwiches." Those are separate attributes. Or to say, "I like this Woody Allen film, even though he's extremely problematic in a lot of ways. I think it's good art." Though with art maybe I'm getting a little over my head; the art and the artist are hard to separate. Or a more relevant context from my life: I'm kind of a centrist, slightly libertarian myself, befitting my Riverian roots, I guess, but I voted for Joe Biden in 2020. There were a fair number of things I thought he did a good job of in his first two or three years in office. I also thought that he was most likely gonna lose to Trump, because of his age and other factors; he had fallen quite far behind in the polls.

0:36:21.3 NS: And since that's my beat, I talked a lot about that in my newsletter and in media appearances, and people got very mad at me. They were like, "Why do you hate Joe Biden so much?" I'm like, "I like Joe Biden. I voted for him. I might vote for him again. I wouldn't vote for Trump." But my job is to decouple and to make a forecast that's disconnected from the outcome I'd like to see occur. I'm trying to say what will happen, not what I wanna see happen. And that skill is hard, especially for people who are, I guess you'd say, politically minded.

0:36:48.8 SC: And I totally agree that the Chick-fil-A founder's ethos is terrible and the sandwiches are very good. So I guess [chuckle] I have evidence for decoupling there. But I guess, let me ask, is there... And just to... I should clarify first, you're not saying that this person who makes these claims would then conclude therefore I should buy the sandwich 'cause it's good. Right?

0:37:12.6 NS: No, that's right. Yeah. You could say, "I don't wanna support this person's business, so even though it's a tasty sandwich, then I'm not gonna eat there." That's totally fine. That's a moral judgment, but we're trying to perceive objective reality and then we can make decisions based on the accuracy of our estimates.

0:37:30.3 SC: And I guess what I don't understand is, this description of decoupling, on the one hand, leads to a more nuanced view of the world. Because I can simultaneously think that I don't wanna buy the sandwich, but the sandwich tastes good, rather than just totalizing everything. But on the other hand, there's an element of what in my area is called engineer's disease or physicist's disease, where you model the world in overly simplistic ways because you can, and therefore you sort of are ignoring nuances and subtleties.

0:38:03.3 NS: Absolutely. It's very easy to build a bad, oversimplified model of the world, or to build a model that describes the past but isn't robust for the future. There are a lot of bad models out there, and there are examples in the book. Sam Bankman-Fried is the most prominent one: he would quantify everything, he was a utilitarian, but he quantified things in crazy, very imprecise ways, where he way underestimated the risk of another Bitcoin crash, for example. He was also very risk loving; he was willing to gamble at a high risk of ruin, quite self-admittedly. But he was a pretty bad estimator, actually. He never really played poker, he told me. Maybe he should have; that might have trained some better risk-taking skills, I think, potentially.

0:38:52.9 SC: Actually, is there any evidence for that? I mean, I get the argument that poker is so clean and unforgiving in some way that maybe it should train you. Do we know? Has anyone studied that?

0:39:06.8 NS: That's interesting. I mean, when I've talked to my buddies in finance who are poker adjacent, they'll say that, yeah, the analytical skills are there, but as we were talking about before, the people skills may or may not be. Poker players are selecting into the one field where you can kind of get by without them. I mean, some of my best friends are poker players.

0:39:30.1 SC: Sure.

0:39:30.5 NS: I don't mean that in a sarcastic way. I mean, half of my friends at this point are poker adjacent. But you definitely get some difficult types who would have trouble thriving in an office setting, I would say.

0:39:41.6 SC: Okay. Fair enough. Speaking of which... Oh, that was a terrible, terrible segue, but: effective altruism. [chuckle] We don't need to have spoilers here. You start the book with poker players and introduce the idea of the Riverians, but then you go on to talk about effective altruism, rationalism, and Silicon Valley entrepreneurs as these exemplars. Which is interesting, 'cause they all sound a little bit different, but you can see the relationship. So talk to us about effective altruists first.

0:40:17.0 NS: So effective altruism is what I call a brand name. It was created by two Oxford philosophers, Will MacAskill and Toby Ord. And it's a pretty good brand name. Altruism obviously means being unselfish, particularly in the context of philanthropy; people give their money away to the American Red Cross or something. And the effective part is measuring how much bang you get for your buck: how much good do you do based on the amount that you spend? So one early intervention: they found that malaria is still a very deadly disease in the tropics, and putting up bed nets, at a cost of a couple of dollars each, might save lots and lots of lives. You can save a life for 5,000 bucks if you target anti-malaria mosquito nets, basically.

0:41:07.2 SC: Right.

0:41:09.9 NS: What happened, though, and this gets at the question you were getting at a moment ago, Sean, about bad models, [chuckle] is that a lot of it is more back-of-the-envelope estimation than people might want to admit. And then they get into harder problems. How effective a charity is, is a reasonably tractable problem. Not as simple as poker, although poker itself is quite complex. But I felt like if you spent a year and had a team of smart people doing that, you'd get somewhere and have tangible answers. These existential problems are a lot harder to wrap your head around. I mean, the big area of concern is: is there existential risk of things going very wrong from artificial intelligence? A lot of EAs are convinced that there could be. But it's not settled science. You can find some EAs who think this is ridiculous, and some who literally think there's a 99% chance that we're all gonna die within the next 40 years because of runaway AI. The consensus is probably a 5% or 10% chance of catastrophic or worse existential risk. And I think it's helpful most of the time to be willing to put numbers on things. I can say, "Oh, there's a chance of this or that," but those are weasel words that avoid accountability a lot of the time.

0:42:35.3 NS: I can say, "Oh, Kamala Harris has a chance." Well, does she have a 10% chance or a 50% chance? 'Cause those are very different things. But, yeah, look, Sam Bankman-Fried, who identified as an effective altruist, is an example of someone who made a lot of bad calculations. And effective altruists made a lot of bad calculations about Sam. For so much of their revenue and reputation to be tied up in this one founder, who had his hands in all types of good and bad and crazy and not-so-crazy things, was a risk that they did not foresee. And to the point before about whether The River is an actual place: the reason I call it a river, by the way, is 'cause it's not one discrete town. It's a region where you go back and forth between different communities. The EA community is a pretty small community. One thing you find is that once the word was out that I was writing a book about risk, the EAs noticed and invited me to their events and kind of evangelized a little bit, which is helpful. Obviously as an author, you never mind sources who are interesting people and excited to talk to you; that's always helpful.

0:43:46.4 NS: But they're trying to spread the gospel, and I use that term intentionally. I think there are some parallels to religion, which I don't necessarily mean as a put-down. They're asking big philosophical questions; these are not solved problems. And I think that's okay, but there is a proselytizing aspect to it.

0:44:08.6 SC: And I guess maybe I'm better at decoupling than I suspected, but I can see the good and bad here. I mean, if I'm going to be altruistic, I would like it to be effective. [chuckle] I would like to learn where my money has the biggest impact. But on the other hand, I have this feeling that I also like maybe donating to my alma mater or the local cat shelter. These are clearly not the highest impact for my dollar. So I wanna hear the argument, but I'm not entirely persuaded by it.

0:44:42.5 NS: Yeah. Look, I would not identify as an effective altruist myself; I might be in some part of some broader rationalist community. I think what they found is that the differences are pretty large, particularly for things like university endowments, where you're giving to very rich institutions that are running hedge funds on top of their universities, and you're giving that hedge fund more money. Well, why not just donate money to Goldman Sachs or something? I think the more ambiguous cases are things like giving to the local symphony orchestra. Does the symphony save lives? I don't know. I don't know the amount of utility that having beautiful music in the world, or challenging but well-articulated music, creates, and how it inspires people. There might be a lot of good there. People have to distinguish the things you can quantify that are low impact from the things where the impact is hard to quantify. And I think some EAs, and more broadly people in The River, tend to neglect that.

0:45:47.6 SC: Well, you used the word utility and we've used it before in the conversation, and maybe we should fess up to the fact that the EA philosophy goes very much hand in hand with being a utilitarian about morals and ethics. And not everyone is a utilitarian. So that's another way to express some skepticism, I suppose.

0:46:08.8 NS: Yeah. In the middle of this book about gambling, there is a 3,000-word philosophical discussion where I talk to all these Princeton and Stanford philosophers and people like that. So that's the kind of book it is, if that's the kind of book that you like. And I have both practical and philosophical concerns with utilitarianism. For one thing, I don't necessarily buy that we should be totally selfless and not have some favoritism toward the people we love. I'm not sure that's a bad thing, ethically or otherwise. But also, yeah, you do get people who are overconfident in their ability to calculate these different equities. One line I use in the book, or maybe I'm stealing it from Eliezer Yudkowsky, who's a rationalist, someone who's concerned about AI risk, but a smart guy who's always fun to talk to: he's like, "Yeah, we need a world where everyone has a 150 IQ." And unfortunately we're in this world of 100 IQs, and so we're probably gonna destroy ourselves, because we can create technology, like nuclear weapons or large language models, but we aren't smart enough to know how to control it. So I think there is a lot to be said for having a practical sense, and winding up halfway in between "common sense" and some EA-adjacent form of utilitarianism. Halfway in between is probably a lot better than either extreme.

0:47:32.4 SC: Well, I do wonder who is it that is making these terrible tools that are gonna destroy the world? Is it the 100 IQ people or the 150 IQ people?

0:47:41.9 NS: Subjectively, I think that the AI people are very smart. If you're this type of Riverian nerd, then AI is the most exciting thing happening right now. And there are different projections, and I'm actually a little ambivalent myself about whether I'm an optimist or a pessimist or a doomer or what. But ChatGPT and other large language models were this amazing, almost miraculous breakthrough: let's keep feeding lots and lots of text to a computer with this relatively simple transformer model, just leave it on for a really long time, and see what happens. And now it's like a magic box that can talk to you. It's kind of crazy. And I'm using non-technical terms 'cause those are the terms you hear when you talk to the engineers at OpenAI; they call it a big bag of numbers. They'll tell you that we kind of have a sense for how it works, but we're not really sure why it's doing what it's doing. And that is inherently a little bit frightening sometimes. But they are smart people. I mean, it's attracting the best and brightest, between being on the frontier of this next revolution, if it occurs, and again, I do not take for granted that it will. But if you think there's a 30% chance of a new industrial revolution... And in the meantime, Anthropic and OpenAI and Google, I'm sure, are paying people very well too. So it's interesting, high-impact work, and it's attracting bright people for sure.

0:49:11.2 SC: And you also mentioned the rationalists a few times. Philosophers are very annoyed that this term has caught on, because they already have a meaning for the word rationalist, and it doesn't mean that here. Tell us what you mean by it.

0:49:22.5 NS: So rationalism is a sort of EA-adjacent movement that's more informal. People like Eliezer Yudkowsky use the term rationalism, but it's more of a cluster of attributes: people who are using expected value to think through problems more clearly and to have, in their words, less biased answers. But not necessarily for the greater good. It might just be to learn how to make a better trade, or to solve some interesting problem on a prediction market. Prediction markets are a big... I should say I consult for a prediction market company called Polymarket. Prediction markets are a good rationalist tool 'cause they force you to put money behind your answer; they force you to quantify things. So, yeah, rationalism and The River have a fair amount of overlap, especially the lowercase-r rationalism used more broadly. Whereas effective altruism is a more specific term, and also different. EA is more prim and proper; all the EAs are at Oxford and Stanford and Princeton and Harvard and Cambridge and all these places. Whereas the person I mentioned, Eliezer Yudkowsky, maybe the most influential rationalist, never finished high school. Very much self-trained, self-taught. So rationalism is scruffier, more politically incorrect, to use a slightly dated term. And I'm a little more sympathetic to it, I think, in some ways.

0:50:51.6 SC: I can tell. Yeah. Which is fine, of course. But when I try to be a good Bayesian, the following thing occurs to me: the sales pitch for effective altruism or for rationalism is very compelling. Yes, altruism should be effective. Yes, I should be rational rather than irrational. And then there's a bunch of objections, or at least people who object, who sound very unconvincing; they have vibes-based objections: it's kind of icky, it's kind of mechanistic, or whatever. But then if I look at the history, those people are right some non-trivial fraction of the time. Oh, there is corruption; there are people baiting and switching, saying, "We're gonna send mosquito nets," when really the money goes to their own salaries, or whatever. So I've got to update and worry that I need a more sophisticated model of this community.

0:51:46.6 NS: Yeah. And the SBF, Sam Bankman-Fried stuff. I mean, look, if the movement had a hundred years of history and dozens of chapters around the world, then you could say, "Okay, sometimes weird things happen." But you don't have the fraud of the century happen on your watch every day, and the fact that he was maybe the single most important person fundraising-wise for EA, you have to update on that, I think, quite a bit. And, yeah, some of the critiques are in bad faith. By the way, whether they call themselves EAs or not, you have had people like Bill Gates, Elon Musk, maybe less so now, but in the past, Warren Buffett, Mark Zuckerberg... Musk is something of an exception, but the other three give quite a lot to charity and have demonstrated at least some interest in effective charity.

0:52:40.7 SC: Yes.

0:52:42.2 NS: I mean, Bill Gates in particular. So that reflects the EA influence even if it's not the EA brand name. And I know the people I mentioned are some of the most controversial figures in the world, but I think their charitable, philanthropic work has been quite impactful and quite positive for the world. So you kind of have to weigh things in different buckets. The SBF thing weighs heavily in one bucket, but motivating billionaires to be smarter about their philanthropy counts too. Plus the broader EA/rationalist community was ahead of the curve in terms of understanding the impact of AI. Eliezer Yudkowsky has been talking about this stuff for literally a quarter century, and it was not the consensus view. I mean, the consensus view until transformer models was, "Yeah, we're kind of in an AI winter or something." So, yeah, it is a long book. You won't read it in a single sitting, I don't think, but there's time for nuance and subtlety and reporting, talking to a lot of people and giving them time, in their own words, to make their case.

0:53:49.7 SC: SBF famously said that every book that you write is a waste, is a mistake. [chuckle]

0:53:54.2 NS: I disagree. In part because there's...

0:53:56.0 SC: So do I.

0:54:00.0 NS: Yeah, in part because taking the time to write a book, and this sounds a little through the looking glass, is a little like a large language model: when it has time to bounce a bunch of parameters off one another, sometimes you have these magical reactions that catalyze and you discover new things. First of all, just having an excuse, if nothing else, to spend three years talking to 200 really amazing experts and practitioners, having candid conversations with them, mostly on the record, sometimes off the record. That alone is incredibly valuable. That's like getting a PhD, the amount of instruction you get. So thank you to all those people for their time. But, yeah, there are a lot of people reflected in the book, and I had a lot of good material to work with.

0:54:46.1 SC: Well, and maybe it's part of a prophylactic against this engineer's disease, where you come up with a model that you like because it's very quantitative, but that in fact doesn't capture some important aspects of the world. Just empirically bumping into different people with different attitudes, and trying to think it through and put it in your own words, can force you to think a little more carefully.

0:55:10.3 NS: Yeah. Look, there's this contradiction where, on the one hand, people in The River tend to be very contrarian, very individualistic. Definitely very individualistic; they take pride in that. On the other hand, they believe in things like prediction markets, and have an appreciation for markets in general. I mean, even the best sports bettors in the world will tell you the betting lines are really good; you have to be really good to beat the consensus Las Vegas over/under lines or whatever. So we have this duality where we appreciate consensus on the one hand, but we wanna be contrarians on the other. And that's why this is difficult. That's why the subtitle of the book is The Art of Risking Everything. There is art involved in this, and there are, at the core of it, a lot of complicated and ultimately quite flawed human beings.

0:55:58.0 SC: Well, okay. And you've mentioned Sam Bankman-Fried quite a lot. And I'm sure that most of the listeners know the whole story. We don't need to go through the whole story. But I love your use of that example to illustrate the Kelly Criterion and talk about that in the book. So tell us what the Kelly Criterion is and how maybe it was a little perverted by SBF.

0:56:20.6 NS: Yeah. So what the Kelly Criterion tells you is how much of your bankroll, and how you define bankroll is a little fuzzy, but how much of the money you're willing to gamble with, you should bet on a particular game. So let's say there's a college football game and you think you're gonna win a point spread bet on the Michigan Wolverines 60% of the time. The Kelly Criterion tells you how much you should wager to maximize your expected return in the long run while minimizing your risk of ruin. What Sam said is that the Kelly Criterion is too conservative, even though most gamblers will tell you the opposite: that actually it has you betting way too much, that you're gonna lose and go on tilt too often. He pointed out, correctly actually, that the Kelly Criterion is telling you how to maximize your expected return while minimizing your risk of ruin, so that you won't go totally bankrupt, basically. However, if you don't care about going totally bankrupt, then you should basically just go all in over and over and over again. So he pointed that out, but the practical implication is a scenario, it's called the St. Petersburg Paradox, where let's say you can press a button for double or nothing on your money at a very small advantage, like 51%: you win a coin flip 51% of the time to double your money, or you go broke.
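For a bet paying b-to-1 net odds won with probability p, the standard Kelly fraction is f* = (bp - q)/b with q = 1 - p. A minimal sketch of Silver's Michigan example, treating the point-spread bet as even money for simplicity (real point spreads usually pay about 10-to-11, which would shrink the fraction):

```python
def kelly_fraction(p, b=1.0):
    """Kelly-optimal fraction of bankroll for a bet won with probability p,
    paying b-to-1 net odds (b=1.0 means an even-money bet)."""
    q = 1.0 - p
    return max((b * p - q) / b, 0.0)  # never bet a negative-edge game

# The Michigan example: an even-money bet you expect to win 60% of the time.
f = kelly_fraction(0.60)  # ~0.2: wager about 20% of your bankroll
```

Note that the fraction scales with edge: at 51% it is only 2% of bankroll, which is why full-Kelly already feels aggressive to most gamblers.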

0:57:39.3 NS: The paradox is that if you keep pushing that button, the expected value of the bet is infinite: you're making a positive EV bet and then doubling it an infinite number of times. However, your probability of surviving all those coin flips and button pushes is one over infinity, so zero. And he was quite literal about this. He told it to Caroline Ellison, in testimony revealed in court. He told it to Tyler Cowen, the economist, in an interview: if you could double the amount of good in the world plus 1%, so the world is 2.01x better, has 2.01x more utility, but there's a 50% chance of the world being destroyed in some paperclip apocalypse, because the computers have the wrong objectives and are misaligned, he would, I think quite literally, take that bet. Sam was almost proud at various points, and I talked to him before, during, and after the bankruptcy, almost proud of how willing he was to accept this very high risk of ruin for the chance. He thought he could become the world's first trillionaire. He literally thought he could become president at one point; he gave himself like a 5% or 10% chance, I forget exactly what. And that's kind of insane. That's somebody where, whether it's chemical reasons, and I don't mean to imply that I... There are stories about Adderall use.
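The arithmetic behind the button scenario: each press multiplies the expected bankroll by 2 × 0.51 = 1.02, so EV grows without bound, while the probability of surviving n presses is 0.51^n, which collapses toward zero. A quick illustration:

```python
p = 0.51  # chance each press doubles your money; otherwise you go broke

for n in (10, 50, 100):
    ev = (2 * p) ** n  # expected bankroll multiple after n presses: 1.02**n
    survive = p ** n   # probability you haven't gone broke yet: 0.51**n
    print(f"n={n:3d}  EV={ev:10.3f}x  P(still solvent)={survive:.2e}")
```

By n = 100 the expected multiple exceeds 7x, but the chance of still being solvent is on the order of 10^-30: EV maximization says keep pressing, while almost surely you are broke.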

0:59:03.3 NS: From my reporting, I don't think that was that big a part of the story. But something about the way his brain was wired meant that common sense didn't really enter into the equation. Even a low dose of common sense can go a long way, I think, toward at least checking the most potentially self-destructive tendencies.

0:59:26.0 SC: Well, this goes back very, very nicely to where we started. You can see where his argument comes from: if all you wanted to do was maximize your expected value, then yeah, destroy the earth with 50 minus epsilon percent probability, double the utility with 50 plus epsilon percent. But like you implied, almost any person would say, "That's actually not rational," because there are things in life other than maximizing your expected value. It is okay to try to lower your risk of complete annihilation.
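Carroll's objection can be put in numbers. On SBF's flip (50% the world is destroyed, 50% it becomes 2.01x better), a pure expected-value maximizer computes 0.5 × 0 + 0.5 × 2.01 = 1.005 > 1 and takes the bet; an agent maximizing expected log value, the same risk-averse criterion that underlies Kelly betting, refuses any bet with ruin as a possible outcome, since log(0) is minus infinity. A sketch:

```python
import math

p_win, up, down = 0.5, 2.01, 0.0  # SBF's hypothetical flip; world now worth 1

ev = p_win * up + (1 - p_win) * down  # ~1.005: pure EV says take the bet

def expected_log(p, a, b):
    """Expected log value, treating log(0) as -infinity (total ruin)."""
    la = math.log(a) if a > 0 else float("-inf")
    lb = math.log(b) if b > 0 else float("-inf")
    return p * la + (1 - p) * lb

elog = expected_log(p_win, up, down)  # -inf: a log-utility agent refuses
```

The same bet with the downside merely halving the world (0.5 instead of 0) has positive expected log value, which is exactly the distinction between acceptable risk and risk of ruin.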

1:00:04.5 NS: I think one thing with Sam, and it gets a little dark, but there's reporting on this in the book, including on-the-record reporting from his ex-Alameda co-founder Tara Mac Aulay. She said that Sam's baseline was... He had, and I'm not sure I'm saying this right, anhedonia, which is...

1:00:23.1 SC: Oh, yeah.

1:00:23.2 NS: An inability to feel pleasure or happiness. So to him, and again, this is getting dark, but to him being in a prison jumpsuit, and eating terrible prison food maybe didn't have the same impact as it would for other people.

1:00:42.1 SC: Yeah.

1:00:45.0 NS: And maybe he had, on some level, some type of self-destructive... Death wish is a strong term, but I'm not sure it's wrong, if he literally says you should be willing to risk ruining your life. And he did ruin his life. And by the way, there are some tragic human elements to this, too. One critique I have of EA is that it tends to recruit true believers, whereas people elsewhere in The River are by nature very skeptical. Most savvy gamblers know that if you're being offered a bet that seems too good to be true, then it usually is too good to be true: why is the counterparty offering you that bet? I think Sam recruited true believers who didn't push back on him very much. He was in The Bahamas, and the part of the island he was on is quite far removed from the rest of civilization; he was in kind of an isolated compound. His girlfriend worked for his hedge fund, his parents worked for the company. So he was not getting very much good advice.

1:01:45.0 SC: Yeah. Okay. Putting him aside, there are definitely aspects, it's a fascinating psychological story, but all of us have this issue of weighing very unlikely things that have enormous consequences with very likely things that have smaller consequences. I mean, that's the issue with AI risk and other existential risks. I mean, having gone through the book, having spent the three years, do you have any wisdom? 'Cause I struggle with this. I have feelings, but I don't have any rigorous theory about what to do about those possibilities.

1:02:22.0 NS: Look, Silicon Valley in some ways comes out looking better than other parts of the book, even though these people are sometimes vainglorious assholes. You're gonna get, I think, detailed and nuanced portraits of Peter Thiel and Elon Musk and Marc Andreessen. I talked to most of these people, but not all of them; I didn't talk to Elon. But in Silicon Valley there's a mantra: every VC you talk to will repeat the same two things. Number one, having a long time horizon is good. You're making bets that may take 10 years, 15 years, 30 years to pay off, and we live in a country of get rich quick, not get rich slowly. So almost any time you're willing to take a longer time horizon, that's just an advantage; people discount the future, even their own narrow selfish future, too much, almost universally. And number two, understanding expected value: understanding that a small chance of a very big payoff, if you can make that bet repeatedly and fold different investments into a fund or a portfolio, is a very lucrative profession. Once you get the wheels turning, it becomes like a flywheel and you say, "Okay, I am gonna invest in 15 companies in this fund, and some of them will flounder and some will pay off 1x, but we'll get one that pays off 50x, and in the next fund we'll get one that pays off 1000x."

1:03:50.7 NS: And those two things, understanding expected value, particularly of low-likelihood but high-impact events, and understanding time horizons and being willing to be patient. Those two things can make up for a lot in the very flawed human beings who are making these decisions.
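The portfolio logic Silver sketches here, where a rare large multiple dominates the fund's expected value even though most individual bets lose, can be made concrete with a small simulation. The probabilities below are illustrative assumptions, not figures from the book; only the "15 companies, most flounder, some pay 1x, one pays 50x" shape comes from the conversation.

```python
import random

def fund_multiple(n_companies=15, rng=None):
    """Simulate one fund: equal checks into n_companies startups,
    where most return ~0x, some return their capital (1x), and
    occasionally one returns 50x. Returns the fund's overall
    multiple on invested capital."""
    rng = rng or random.Random()
    total = 0.0
    for _ in range(n_companies):
        r = rng.random()
        if r < 0.70:        # assumed: 70% of startups return ~0x
            total += 0.0
        elif r < 0.95:      # assumed: 25% merely return the capital (1x)
            total += 1.0
        else:               # assumed: 5% chance of a 50x outlier
            total += 50.0
    return total / n_companies

# The expected multiple per company is dominated by the rare big win:
# 0.70 * 0 + 0.25 * 1 + 0.05 * 50 = 2.75x, even though 95% of
# individual bets pay 1x or less.
expected = 0.70 * 0 + 0.25 * 1 + 0.05 * 50
print(expected)  # 2.75
```

The point of the sketch is that the fund-level expected value (2.75x here) is entirely carried by the outlier term, which is why VCs can tolerate a high failure rate on individual bets.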

1:04:09.5 SC: Yeah. No, and I think those aspects, again, I'm... The sales pitch is very compelling to me. I get it. And if it's my job to sort of raise a question, the question would be, that makes them very good at making their lives good, is it necessarily making the world a better place? Especially when, because we all are human beings, as you know, some of these folks get to thinking that only they have the answers, and therefore they should be given all the power.

1:04:42.3 NS: Yeah. So there are a couple of things here. If you look at the most successful, top-decile VC firms, there actually is evidence that I think is pretty good. The returns are mostly private, but there are enough pieces of evidence that they really might be making 20% annualized returns. If you compound 20% over several decades, it just begins to eat everything else. And you see now how much power... I mean, even look at the 10 richest people in the world, they are now twice as rich as they were 10 years ago. That's what you get if you have that compounding annual increase in wealth. So you now have... These people are more powerful than many countries in the world. So that's one thing, the compounding nature of it, where it gets bigger and bigger every year and eats the rest of the economy. And the fact that... Look, I am a kind of neoliberal capitalist. I think technology has been, not just mostly good, but overwhelmingly more good than bad for the world. And you look at growth in human lifespans and the reduction of poverty in India and China and places like that, and even the amount of individual rights in the world. It's easier to be a gay person now, for example, than it was for almost all of human history. However... I'm losing my train of thought a little bit here. What were we talking about?
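The compounding claim Silver makes here is easy to check with a one-line calculation; the 20% rate comes from the conversation, and the 30-year horizon is an illustrative choice of "several decades."

```python
def compound(rate, years, principal=1.0):
    """Value of `principal` after `years` of annual growth at `rate`."""
    return principal * (1.0 + rate) ** years

# 20% per year sustained for 30 years multiplies capital roughly 237x,
# versus roughly 17x for a 10% annual return over the same span.
print(round(compound(0.20, 30)))  # ~237
print(round(compound(0.10, 30)))  # ~17
```

Doubling the annual return over three decades yields not twice but roughly fourteen times the terminal wealth, which is the "eats everything else" effect.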

[chuckle]

1:06:11.9 SC: The power going through their head. Yeah.

1:06:12.8 NS: Okay. But the latest round of technologies... Does social media create net social utility? I think a lot of people would say no. What about cell phones? I mean, Jonathan Haidt, my fellow Penguin Press author, just wrote a book about the impact of phones on adolescents and things like that. With AI, Sam Altman will tell you it might destroy the world. And he's proceeding anyway 'cause he thinks the good outweighs the bad.

1:06:40.9 SC: And he's rushing to build it. Yeah.

1:06:43.2 NS: Yeah. I mean, like that...

1:06:44.1 SC: Sorry, sorry. It's not just because he thinks the good outweighs the bad. He thinks that he can control it better than someone else can. Therefore he wants to be there first, right?

1:06:54.4 NS: Which can be a self-serving rationale. Yeah, it's kind of a form of the prisoner's dilemma: they think China is gonna build it, or Google is gonna build it if OpenAI doesn't build it. That, "Hey, we're the smart people, we can do it the right way." I mean, I'm not so sure I would trust that. And it's a time... I'm someone who, again, has a little bit of that free market ethos and is a little bit skeptical of government regulation. But in the Manhattan Project, it was government controlling this technology that could potentially destroy the universe. I'm not sure I would want Silicon Valley to develop nuclear weapons. And I also am not sure, kind of like, what right do you have? So during the Trinity test, which was the Manhattan Project's test of the first atomic bomb in 1945, Enrico Fermi and other physicists were slightly worried that there was maybe a 1 in 1,000 chance that the bomb might set off a chain reaction in the atmosphere and destroy all life on the earth, and probably neighboring planets as well. So what right did they have to take that 1 in 1,000 chance of destroying all of civilization?

1:08:03.8 NS: Well, and I guess you could say, "Well, if we don't do it, then Germany or Japan will develop the bomb first, and there's a holocaust going on." I mean, these are very heavy things; in that circumstance, I think it was a rationally justifiable risk. But that was done by government, so at least in some sense you have some consent of the people, whereas, what gives Sam Bankman-Fried the right to press this button for 50/50 odds, or even for 90/10 odds or something like that? And so if you're not someone in The River, if you're someone who thinks Elon Musk is evil, I'd say at least read the book so you understand the mentality behind these people.

1:08:44.4 SC: Yeah. I think... And I will second that recommendation. The book is very worth reading for exactly those reasons. And for the last question then, in these communities we're talking about, one of the quantities they like to estimate is P-doom, the probability that we are gonna do something that is gonna destroy the world entirely. And of course, people qualify what they mean by doom differently. All right. So what is your estimate? What numbers do you put on P-doom yourself?

1:09:12.9 NS: So, in the... I'll give the answer I gave in the book, which is kind of a punt. Which is, I say 2% to 20%, because the definitions are so different. Some people say literally all life has to be destroyed. Some people say, "Oh, if we just lose agency, and computers and AIs control our life for practical purposes... " I think that latter definition is dangerously plausible. I find the former pretty unlikely, but the consensus in the community is like 5% to 10%. And I expand that outward because, I spent three years talking to the smartest experts in the world about this. They can't agree. So I'm not gonna try to superimpose my view on top of that.

1:09:53.7 SC: I mean, those are still huge numbers. If I wanna end on an optimistic note for my listeners out there, for me it's more like 10 to the minus three or less. I think that these are crazy big numbers that people are getting by extrapolating from very tiny bits of data. [chuckle]

1:10:09.8 NS: Look, I think since I was working on the book, there's been more of a pause in terms of large language models in particular. So maybe I would shade down a little bit if I were re-estimating today. But there are a lot of very serious people who have thought more about it than I have, and as a Bayesian, I have to defer to them somewhat, I suppose, I'd say.

1:10:31.0 SC: That is a perfectly valid way of thinking about it.

1:10:34.6 NS: My gut is on the more skeptical side, but I don't... You learn not to trust your gut sometimes, or at least not to give full weight to it.

1:10:41.4 SC: [chuckle] Not full weight. That's a very good way of going through this stuff. So Nate Silver, thanks very much for being on the Mindscape Podcast.

1:10:48.9 NS: Cool. Thank you so much, Sean.

[music]

6 thoughts on “285 | Nate Silver on Prediction, Risk, and Rationality”


  2. EA utilitarianism is a quasi-religious cult. It’s based on faith that the EA community knows what’s best for humanity. It is replete with personal subjective value judgments which characterize certain things as having “utility” and others as having less utility or value. There is no objective utility in the universe. Utility is a subjective concept. For a serial killer, his victims have utility only because the serial killer enjoys killing them. There is no meaningful definition of utility, as utility is a concept that relates to a goal, and humans don’t have any universally shared goals. Nate Silver has drunk some of the EA Kool-Aid. Although he hasn’t swallowed all of it, he has absorbed enough that he believes the mythical ideas of AI doom, and seems to believe in the dawn of AGI superintelligence, an undefinable godlike omnipotent threat to the human race. AGI doesn’t exist, and no one has even been able to define it in a meaningful way. As a result, no one is working on it (even though they think they are) because they don’t know what it is or how or where to start. They believe in computational functionalism and that AGI will just magically “emerge” as AI computational powers are increased. There is not an iota of evidence for this faith.

  3. I am 100% sure that Elon Musk and Sam Bankman-Fried have had nothing but their own self-interests in mind in any causes they have supported, businesses they have started, or projects they have been involved in.

  4. It’s virtually impossible to have a meaningful discussion about ‘Prediction, Risk, and Rationality’ without invoking Bayes’ Theorem, named after the 18th-century British mathematician Thomas Bayes. Bayes’ Theorem is a mathematical formula for determining conditional probability: the likelihood of an outcome occurring based on a previous outcome in similar circumstances. It provides a way to revise existing predictions or theories (update probabilities) given new evidence.
    The video posted below, ‘The Bayesian Trap’, is a good introduction to the topic, and cautions that we must remain open-minded and willing to adjust our way of thinking if we are not satisfied with the results of our actions.

    https://www.youtube.com/watch?v=R13BD8qKeTg

  5. I was catching up on some episodes and greatly enjoyed Quiggin and Acemoglu, but then had the misfortune to blunder straight into this wafer-thin garbage. Sad.
    I hope that one day you will speak to the humane and excellent Ha-Joon Chang, a man who appears to exist only to dispel bullshit economic theology, unlike this self-aggrandising crank.

  6. I didn’t catch all of this, but I wonder if utility is logarithmic in shape. I would not want to risk 100 utility for either 0 or 200+e. It suggests to me that utility does not scale linearly.
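The intuition in this last comment, that logarithmic utility makes "risk 100 for either 0 or 200" unattractive, checks out numerically. A minimal sketch, with illustrative payoffs (a near-zero floor of 1 stands in for the ruin outcome, since log(0) is undefined):

```python
import math

def log_utility(wealth):
    """Logarithmic utility of wealth; requires wealth > 0."""
    return math.log(wealth)

# A certain 100 versus a 50/50 gamble between near-ruin (1) and 199.
# Both options have roughly the same expected wealth (~100), but very
# different expected utility under a logarithmic utility function.
certain = log_utility(100)                               # ~4.605
gamble = 0.5 * log_utility(1) + 0.5 * log_utility(199)   # ~2.647

print(round(certain, 3), round(gamble, 3))
```

Because the log function is concave, the downside outcome is penalized far more than the upside is rewarded, so the certain 100 dominates, which is exactly the risk-averse behavior the commenter describes.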

