It's always a little humbling to think about what effects your words and actions might have on other people, not only right now but potentially well into the future. Now take that humble feeling and promote it to all of humanity, arbitrarily far in time. How do our actions as a society affect all the potential generations to come? William MacAskill is best known as a founder of the Effective Altruism movement, and is now the author of What We Owe the Future. In this new book he makes the case for longtermism: the idea that we should put substantial effort into positively influencing the long-term future. We talk about the pros and cons of that view, including the underlying philosophical presuppositions.
Mindscape listeners can get 50% off What We Owe the Future, thanks to a partnership between the Forethought Foundation and Bookshop.org. Just click here and use code MINDSCAPE50 at checkout.
Support Mindscape on Patreon.
William (Will) MacAskill received his D.Phil. in philosophy from the University of Oxford. He is currently an associate professor of philosophy at Oxford, as well as a research fellow at the Global Priorities Institute, director of the Forethought Foundation for Global Priorities Research, President of the Centre for Effective Altruism, and co-founder of 80,000 Hours and Giving What We Can.
0:00:00.0 Sean Carroll: Hello everyone. Welcome to the Mindscape podcast. I'm your host, Sean Carroll. I don't know about you, but sometimes it's enough to just get through the day, right? [chuckle] There's a lot of things going on in our individual lives, in the wider world. Just staying afloat is a bit of an effort. But we would like to do more than that. We would like to take a step back and really think about how we should behave as good, moral, rational, nice people here in the world. Of course, if you've been listening to the podcast at all, you know that even asking this question is not enough, because the possible answers about how we should behave, what it means to be a good person, to be moral, to be ethical, are not clear. We have not only disagreements about meta-ethics, about how we should justify our ethical beliefs, but about what the actual ethical beliefs are. So today's guest, William MacAskill, who goes by Will, is someone who is really leading the push for a very specific way of thinking about how we should act, how we should be a good person. It's within the tradition of utilitarianism and consequentialism.
0:01:04.0 SC: Okay? The idea that whatever we do, we should judge its moral worth by what consequences it has, what effects it has on the world, and that's not obviously true. It's plausibly true. It's a very reasonable thing to do, but it's not the only way to go. There are alternatives: deontological views, where it's a matter of following the right rules, whatever the consequences might be, and virtue ethics positions, which say that it's about cultivating your virtues, not precisely about how you act and everything. But certainly utilitarianism is a big player in this game. And Will has been one of the most influential in really turning utilitarianism into a philosophy of action and thinking about how it should apply to real world problems, especially sort of big picture problems.
0:01:53.6 SC: He's a leader of the Effective Altruism movement. We've had other interviews with people like Joshua Greene, talking about the best ways we can be charitable, but we've also had supporters of the Mindscape podcast, GiveWell and the 80,000 Hours podcast, advertise here on Mindscape about ways in which we could be better altruists. Okay? When we do want to give some money away to make the world a better place, what is the best possible way to do that? In his new book, 'What We Owe the Future', Will is turning his attention to a specific question within how to be a good person, which is how much should we weigh the lives and statuses and situations of future people in our calculations? So there's a standard trick in economic theory, where you try to decide what good to do, which is discounting the future. Which is to say that, roughly speaking, we don't count future situations as much as present situations.
0:02:55.8 SC: And part of that is just, we don't know what the future situations are going to be. Now, the counter-argument to that is, "Okay, sure, we don't know, therefore, there's some uncertainties, and we should take those uncertainties into consideration, but there's an awful lot of people who could exist in the future." [chuckle] So this is intellectually fascinating to me because it's a classic example of taking a big number and multiplying it by a small fraction. The big number is the possible number of people who might live in the future, and the small fraction is the probability that we're correctly getting right what our current actions are going to imply for their conditions. Sometimes we know, sometimes it's pretty obvious, like that destroying the climate and the atmosphere and the environment of the Earth is probably worse for the future than keeping it in good working order. Other times it's less clear. Is superintelligent artificial intelligence a big worry? What are the risks from pandemics? They're certainly there, but how big are they? All these questions. So Will has thought very carefully about all these questions. In fact, he did allude in this podcast to something I wish we had more time to go into, which is his PhD thesis, from not that long ago (he's a very young guy), which was basically this...
0:04:10.5 SC: If you don't know the right moral theory, if you're not sure whether it's utilitarianism or virtue ethics or deontology, you can kind of split the difference. He developed ways to sort of take the average of the recommended actions given by all these different points of view. And so it's a crucially important question because it sounds all abstract and there's math involved and things like that, but it has potentially enormous consequences for what we do here on Earth right now. Where we spend our money, our attention, our concern. And honestly, if you listen to the podcast, you know that I am not convinced by utilitarianism. I think that there are issues there that kind of get swept under the rug a little bit by its advocates. And so we talk about that with Will, and I'm not anti-utilitarianism either. I'm open-minded about these things. And so this was a conversation that definitely gave me lots of food for thought in thinking about these questions, which I repeat, are super duper important questions that we should all be thinking about. Whatever your views on long-termism itself are, it's an important question, one that matters to us going forward in the world. That's what we do here at Mindscape. So let's go.
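To make that "split the difference" idea concrete, here is a minimal sketch of one way to act under moral uncertainty: weight each theory's verdict on an action by your credence in that theory and pick the action with the highest expected score. The theory names, credences, scores, and the common scale they share are all illustrative assumptions, not figures or methods taken from MacAskill's thesis.

```python
# A minimal sketch of choosing an action under moral uncertainty:
# weight each moral theory's score for an action by your credence
# in that theory, then sum. All numbers are made-up placeholders.

credences = {"utilitarianism": 0.5, "deontology": 0.3, "virtue_ethics": 0.2}

# How choiceworthy each theory judges each candidate action to be,
# on some shared scale (putting rival theories on one scale is itself
# a hard problem that this toy example simply assumes away).
scores = {
    "donate_to_pandemic_prevention": {"utilitarianism": 10, "deontology": 5, "virtue_ethics": 6},
    "break_promise_for_greater_good": {"utilitarianism": 8, "deontology": -10, "virtue_ethics": -2},
}

def expected_choiceworthiness(action):
    """Credence-weighted average of the theories' scores for an action."""
    return sum(credences[t] * scores[action][t] for t in credences)

for action in scores:
    print(action, round(expected_choiceworthiness(action), 2))
print("best:", max(scores, key=expected_choiceworthiness))
```

Whether the scores of different theories can be compared on one scale at all is itself contested, which is part of what makes the underlying philosophical work hard.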
[music]
0:05:35.9 SC: Will MacAskill, welcome to the Mindscape podcast.
0:05:38.5 William MacAskill: Thanks so much for having me on. I'm a big fan.
0:05:40.5 SC: Oh, thanks. And I'm a big fan too, I'm a big fan of... Actually, I'm gonna disagree with some of your conclusions, but in a very soft way, like this is stuff I don't have fixed opinions about. And so I really am interested in learning more and hearing how you think about it, but I'm certainly a big fan of the idea of thinking deeply about morality and how we should think about it, because I suspect that in this world that we have of technology and dramatic change, our inborn moral intuitions aren't always up to the task of guiding us as a civilization.
0:06:15.3 WM: I completely agree with that.
0:06:17.5 SC: And so your idea about how we should be thinking is this thing called long-termism, so let's just start with the basic thing. So what is long-termism? And take as much time as you want, tell us why you think that this is the way that we should be thinking about how to guide ourselves from moment to moment as we do live here in the world.
0:06:38.2 WM: Thank you. So long-termism is a view I defend in my upcoming book, 'What We Owe the Future'. And it's about taking seriously the sheer scale of the future and how high the stakes are in potentially shaping it. Then thinking what are the events that could happen in our lifetime that really could be pivotal for the course of humanity's future trajectory. And then ensuring that we navigate them well, those events, so that we can help bring about a just and prosperous and long-lasting future. The sort of thing that our grandchildren's grandchildren will thank us for. So that's the view...
0:07:14.8 SC: I mean our grandchildren's grandchildren, but also, there's talk in the book about more cosmic perspectives, right? Millions, billions of years.
0:07:23.3 WM: Yeah, so it's taking... One thing that's distinctive about this view is that it's taking the whole course of the future seriously. So long-termism really means long-term. Not just thinking, "Oh yeah, years, decades into the future," but really for how long we might live, which I think is thousands, millions or even billions of years, and so... Yeah.
0:07:44.1 SC: I was just gonna say, in some sense, sorry, there's a delay between us, so I'm gonna keep interrupting you. I don't mean to do so. But in some sense, sure, of course, we should care about the long-term future, but I think that you're trying to make a slightly stronger claim than just the thing that everyone would agree that every... The future does matter in some sense. You wanna say that it matters a lot. [chuckle]
0:08:04.1 WM: It matters a lot. And so, why might we have that view? Well, a common view in moral philosophy is that acting or thinking morally is about taking a certain sort of stance, a certain perspective on the world. What the philosopher Henry Sidgwick called looking at the world from the point of view of the universe. So no longer just from your own particular point of view, but from this objective impartial standpoint. And so when you take this point of view of the universe, what do you see? Well, the universe formed 13.8 billion years ago, Earth forms 4.5 billion years ago. First replicators 3.8 billion, eukaryotes 2.7 billion, first animals about 800 million years ago. Listeners of your podcast I'm sure would be familiar, [chuckle] human beings 300,000 years ago, agricultural revolution 12,000 years ago, and scientific revolution really just a few hundred years ago. And there's a number of striking things here. One is just how early we are in the universe. A second is how rare we seem to be. A third is how fast things are changing now compared to the billions of years that have come before this. And the final is that maybe, if anything, things are kind of speeding up in terms of these great transitions.
0:09:24.2 WM: But we can also ask not just what's so distinctive about the present, but what does the future of the universe look like? Well, here are kind of two ways in which things could go, how the story of the universe might unfold. On one, humanity continues to advance technologically and morally, and some day we take to the stars. And at this point the story of the universe is no longer just governed by the really fundamental physical laws about how planets orbit stars and stars form, but you would start to see human agency telling a significant part of the story of what happens. You have these interstellar settlements, perhaps you have galactic mega-structures, humans intervening to keep the sun smaller so that it can burn longer, perhaps even for trillions of years. Perhaps lowering stars into black holes as power sources. Perhaps converting solar systems into giant computers. That's one way the future could go, and that would last for a very long time indeed. The last stars would burn out after hundreds of trillions of years. Actually a small but steady stream of collisions of brown dwarfs would make that last even longer, I think a million trillion years.
0:10:41.2 WM: But there's another possible future, which is that humanity does not get to that point of developing technologically, instead we cause our own extinction. And then the story of the universe would be much as the story of the past. It would be stars forming and stars dying. It would be galaxies colliding over time, but the universe would be empty, it would be lifeless, it would be dead. And so one of the ways into thinking about long-termism is taking that perspective of humanity as a whole, appreciating that, wow, actually what we call history or even long history, 2000 years [chuckle] in the common era, is just this tiny sliver of time. And there really are some things that we could do now that could have this huge impact on the course of humanity's future. That's of enormous importance just 'cause of the sheer time scales involved, 'cause humanity's lifespan might be truly vast. And one of the ways in which we can impact that very long-term future is by ensuring we have a future at all, by preventing human extinction. There are other ways as well. I think as well as ensuring that we have this kind of long future, which on Earth would be hundreds of millions of years, and if we take to the stars would be billions more, we should also be ensuring that it's good, ensuring that society is actually one of flourishing, rather than some authoritarian, perpetual nightmare.
0:12:16.8 WM: And so that's the kind of insight and background into this idea of long-termism: taking that perspective seriously, thinking, "Wow, these are just huge stakes in this tiny sliver of time that we have." The future is gonna be wild, and we should really at least be morally thinking what are the things we can do that might be impacting the long term, and how can we make them go better?
0:12:39.5 SC: I think that most of our conversation will not be very science fictiony, but let's lean in a little bit to the science fictiony aspects here. I mean, in that explanation you talked about the future of humanity, but it might not be humanity 10 million years from now, right? We'll be a very different species, not even taking into account the idea of technology and our blending with that. So is it really something else, like consciousness or awareness, that you're trying to help out?
0:13:10.4 WM: Yeah. I think there's nothing distinctively special about Homo sapiens. I think that all animals with consciousness have moral worth, and so I also am a big proponent of animal welfare and reducing factory farming. And yeah, far in the future, there could be very different beings. Perhaps they're artificial intelligences, perhaps the path of human evolution has just continued. I would call them humanity, at least in the sense that they're representing values and pursuing goals that we think of as being good ones. If there is a future society that includes the pursuit of knowledge and love and friendship and happiness and scientific accomplishment, even if that's being done by beings that are somewhat different from us, I would still think that's morally worthwhile. Something morally we should care about.
0:14:09.8 SC: And you gave us two options: One of which was basically going extinct and the other was spreading to the stars. Is there any third option where we more or less decide we're happy here on Earth and have some flourishing existence for the next couple of billion years?
0:14:22.3 WM: Absolutely. I think I sketched this potential future of spreading to the stars, but if we played our cards right, the future would be one that we choose for ourselves, and on the basis of what we think is morally good and morally right. And perhaps in the future we would decide that actually, the universe is a big place, but we wanna leave it pristine. That's a good future we could have. And even there, even just staying on Earth, we have hundreds of millions of years before the Earth is no longer habitable and billions of years before our sun dies. So even on that much more modest scale, we have this enormous future ahead of us.
0:15:16.5 SC: Just to get as science fictiony as we're going to get.
0:15:19.2 WM: Please. Yeah. Yeah.
0:15:20.5 SC: You talk about love and the search for knowledge and things like that, and I agree, these are important things, but once we start extrapolating a billion years into the future, how plausible is it that those are not going to be the important things anymore? I can at least vaguely imagine in my mind that if we did upload ourselves into the matrix, all of those values that we have, all of the goals and desires that we have as organic beings turn out to be intimately connected to our embodiment in a biology that wants to eat and reproduce and so forth. And we lose interest in those things that were so important to us when we were just biological organisms. This is one plausible, at least conceivable resolution to the Fermi paradox, right? Every civilization becomes so technologically advanced that they fulfill all their needs and become less ambitious. Do you give any credence to that possibility?
0:16:26.9 WM: So I think as a solution to the Fermi paradox, I don't give it much credence. For there to be some filter on the evolution of life from something like an Earth to a very technologically advanced species spread out across the galaxy, because there are just so many planets and so many solar systems, you need this really hard filter. So it would have to be that about 99.99999% of civilizations decide to be unambitious, and it seems hard for me to believe that it could be that high.
0:17:00.1 WM: But you do raise a very important point, which is that the values that guide the future might be, in fact, probably will be quite different from the values that guide society now. And that could be very good or it could be very bad. So it could be very good if we've made moral progress. From the perspective of the Roman Empire, the values that guide society today are very different: we don't own slaves, where has our honor gone, we're wimpy, women hold political office. But this is all a very good thing, 'cause it seems like this is moral progress. And that could continue into the future, such that our distant descendants have made further moral progress, and even if they value different things and pursue different goals, if we could have a conversation with them, with sufficient time they would convince us that those goals were actually admirable. In the way that, with sufficient time, perhaps I couldn't convince Nero or Caligula, [laughter] but I could convince Aristotle, I think, that owning slaves is morally wrong.
0:18:09.3 SC: Okay, I think...
0:18:09.9 WM: So that's one way it could go. But then the second is that it could be just dystopian.
0:18:16.1 SC: Sure.
0:18:16.2 WM: Perhaps moral progress ended too soon, perhaps some Fascist ideology just took over and entrenched its values indefinitely, or perhaps it's just some very alien set of goals, 'cause we made some mistake perhaps in the development of AI, and the AI systems themselves took over and pursued some just fundamentally valueless goal. And that's a set of worries that I'm very concerned about. I think it's neglected. And I call it value lock-in, this idea that certain bad values could take control and then just entrench themselves, keep themselves there indefinitely.
0:18:51.8 SC: Okay, thank you for indulging my science fictiony questions, because there's a lot of pretty down-to-earth moral and ethical and political questions involved with the philosophy of long-termism. So am I right to contrast long-termism with a more conventional economic strategy of temporal discounting? In economics, we know there will be people in the future, and we do count their value a little bit, but we count it less as a very explicit move. And it seems that long-termism is saying that's a mistake.
0:19:23.5 WM: Broadly, yes. So there are some good reasons for discounting the future. One is if you can invest money that you have and get a return on it, then money earlier is worth more 'cause you get further years of investment returns. Or if you think that you as an individual or society are going to be richer in the future, then you should discount just 'cause money has diminishing returns.
0:19:50.2 WM: So people are poorer today than they will be in 25 years, you might think, in the same way that we today are richer than we were 50 years ago. But then there's this final reason that economists discount the future, which they call the rate of pure time preference. Where you just say a given benefit or harm, and not just talking about money, but talking about anything, is intrinsically worth less year-on-year. And a typical value they might give is like 2%. So a happy day today is worth 2% more than a happy day in a year's time. But this gets really absurd once you start considering... I think it's absurd at any point in time, but I think it gets really absurd once you consider long time scales. So suppose I give you a moral choice: You can prevent a single death in 10,000 years or you can prevent the genocide of a million people in 11,000 years. And suppose there are no other knock-on effects of these two things, which do you think you ought to do?
0:20:52.0 SC: [chuckle] I'll just give the naïve answer here and say I would prevent the larger number of deaths.
0:20:58.0 WM: Well, I think you've got the correct answer there. [chuckle] Saving a million lives is more important than saving one life, even if the million lives are in 11,000 years and the one life in 10,000. But if you have this rate of pure time preference that economists sometimes use of 2% a year, then you think it's more important to save the one life in 10,000 years than the million lives in 11,000 years. And that is absurd. I think everyone would agree, all the economists as well, who are normally discounting over much smaller time frames, a few years or a few decades, would agree that's absurd as well. And in the very long term, we should not discount the future in that way.
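As a worked version of that arithmetic, assuming the 2% rate quoted above: a pure time preference rate of rho = 0.02 weights a benefit arriving t years from now by (1 + rho) to the power of minus t, so

```latex
\frac{\text{value of 1 death prevented at } t = 10{,}000}
     {\text{value of } 10^{6} \text{ deaths prevented at } t = 11{,}000}
  = \frac{1 \cdot 1.02^{-10{,}000}}{10^{6} \cdot 1.02^{-11{,}000}}
  = \frac{1.02^{1{,}000}}{10^{6}}
  \approx \frac{4 \times 10^{8}}{10^{6}}
  \approx 400.
```

Under that discount rate the single earlier death counts roughly four hundred times as much as the million later deaths, which is the absurdity being pointed at.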
0:21:35.1 SC: There's at least an analogy to sort of spatial discounting, and this has been one of the emphases Peter Singer and other people have put on how we should think about morality: that we tend to value ourselves and our friends and people near to us more than people far away. And maybe we shouldn't. It's sort of spatial discounting. And is that a good analogy, or is it basically perfect, or are they very different considerations?
0:22:00.4 WM: I mean, I think it's a great analogy, in particular because the distinction between space and time is not that clean-cut. [chuckle] It gets a little technical, but if you want to discount across time and not across space, then you get into all sorts of paradoxes because of general relativity. But on a fundamental ethical level, look, it just seems obvious that the well-being of someone doesn't diminish if they move farther away from you in space. Someone on the other side of the world matters morally too. They have a life too. They have interests, hopes, joys, fears, they suffer. And if you can prevent that suffering or avoid harming them, you kind of morally ought to do so. And similarly, just the fact that someone lives in a century's time or a thousand years' time or even a million years' time should not matter when we're considering how important their interests are, or whether it's wrong to harm them.
0:23:04.4 SC: But, so this is where my skepticism begins to arise. I keep calling it skepticism, but that's too strong; my lack of clarity begins to arise. I get the kind of philosophical appeal of the idea that lives matter equally no matter where they are, no matter when they are, but as a practical matter... I mean, I think in Singer's original examples of saving someone who's drowning in a pond in front of you versus saving someone who is on a different continent, it seems to be perfectly okay to save the person that you're looking at [chuckle] and worry about that, care about that, more than someone who's far away. Just because I know terrible things are happening far away, and I'm not in favor of it. I would like them to not happen, but the actions I'm going to take in response to them are just not going to be of the same order as the actions I take about what I see happening right in front of me, and maybe that's okay. Which side do you come down on there?
0:24:02.8 WM: Yeah, I want to say two things. So, one is that I do think we have some extra reasons to care about people in the present generation compared to people in future generations, in the same way that I have special reasons to care about people near and dear to me compared to distant strangers. So, a couple of reasons. One is just special relationships. If you're my mom, that gives me an additional reason [laughter] to care about you. Similarly with friendships. Possibly with compatriots, that's a little bit less clear. But I think everyone has a very intuitive view that if you can save one person from dying or two people from dying, and the one person is your spouse or a family member, then yeah, it's fine to save the one. A second reason is reciprocity as well. By being embedded in society, people have given me enormous gifts: the education I've received, which again compounds the special obligations I have to my parents and also to the teachers that have helped me, and also just the benefits I've received from the state. All these sorts of benefits, I have some kind of obligation to repay them. But people in distant nations, they still matter a lot.
0:25:20.0 WM: I still am not justified in harming them in order to... Even to benefit my family. And in so far as I have disposable income, money I would otherwise spend on luxuries like nicer clothes or a nicer house or things I don't really need, then that's not conflicting with these obligations of reciprocity or partiality, and I can help them. And I think the same is true when we look at future people. I don't have special relationships with them in particular. Perhaps if I have children, I'll have great-great-grandkids, and perhaps that gives me some extra reason, but that's not true for most future people.
0:26:00.0 WM: And I certainly don't have a relationship of reciprocity with them. Perhaps I have some sort of passing-the-baton obligation, where past generations have benefited me enormously through innovation, through moral change that people fought for, through laws or political systems that have been developed. Perhaps that gives me a reason to pay it forward. But the fundamental thing is that we owe people in the future at least what we owe strangers in the present. And that might not be as much as we owe our near and dear, but it's still an awful lot, and it still really makes a big difference for how we act morally, I think.
0:26:37.5 SC: Well, there is at least one disanalogy in that the people who are far away exist. [laughter] And the people who are in the future may or may not exist, right? I mean, there's some sort of uncertainties involved in thinking about caring for future generations that are less of a worry when we're just caring about people who exist right now, but are far away from us.
0:27:01.2 WM: Sure. So, yeah, uncertainty is huge, of course. And in 'What We Owe the Future', essentially the whole book is just addressing the issue of uncertainty. I think a very reasonable starting point of skepticism is, well, can we really affect any of this in a predictable way? And I spend five chapters arguing that yes, we can. But even then it's still fraught with uncertainty. But that I think doesn't undermine the fundamental moral issues. So, Singer has this thought experiment: you can walk into a shallow pond and save a child's life and ruin your suit. Now suppose you're not certain it's a child. You think it's probably a child, but it could just be a buoy or something, a buoy as you Americans would say.
0:27:44.4 SC: Yeah.
[laughter]
0:27:48.0 WM: I think you should still go in and save the kid.
0:27:50.2 SC: Yeah.
0:27:50.4 WM: Uncertainty is often used as a justification for inaction, but I think it's a very bad justification. So in climate science, or the discussion around climate change, people say, "Oh well, we don't really know what the impacts of climate change will be exactly." And that's true to an extent, but it doesn't justify inaction. I think if anything, it justifies further action than we might otherwise take, because things could be much worse than we think. And similarly when thinking about the future, it's like, "Okay, yeah, we don't know that people in the future are gonna exist." That gives you somewhat less of a reason, but also, things could be very different from how they are at present. Things could be much, much better if we play our cards right, or much, much worse in fact, if things go in a dystopian direction. And so, if anything, uncertainty I think gives us more reason to act, more reason to take a cautious, responsible approach to the future rather than just winging it.
0:28:47.1 SC: Yeah. No, I completely 100% agree that uncertainty by itself is not an excuse for inaction. I mean, we have to do our best, right? To weigh the uncertainties and balance them against rewards, etcetera. But that it is hard to do is definitely a lesson that we should keep in mind. But okay, let's just be as dramatic as we can, you know? You've been very active in thinking about the possibility of existential risk, so you started by...
0:29:15.5 WM: Yup.
0:29:16.2 SC: Contrasting a glorious future of life in the stars and exploration and love and discovery versus extinction. [chuckle] So, how big of a worry is extinction? Let's grant that it would be bad, but lots of things would be bad, you know? I could quantum tunnel through the floor of the building I'm on right now, it's just so unlikely that I don't worry about it. How likely or unlikely are these existential risks? Are they things that should affect our day-to-day planning or at least our social planning?
0:29:46.5 WM: I think they should certainly affect our social planning. So, I'll focus just on pandemics as one source of existential risk. We as a community have been really worried about pandemics for many years, making donations in the area, encouraging people to go into careers in the area from 2014 onwards. And people didn't pay all that much attention, [laughter] unfortunately. But then we got the COVID-19 pandemic, which showed us that we are not immune in the modern world to a pandemic. In many ways, COVID-19 was much less bad than it could have been. It's an enormous colossal tragedy, but the virus could have been more deadly, it could have been more infectious. And when we look to developments over the coming decades, I think we have reason to worry, to expect things to get potentially worse, because we don't just have to wrestle with natural pandemics, but we'll increasingly be able to create engineered pandemics, namely by creating new viruses, new pathogens or upgrading existing ones, and that could really wreak destruction on an enormous scale. I think it's unlikely, I think it's 1% or less, that within the coming century it could reach the scale of killing everybody.
0:31:16.5 WM: But 1%, that's... You're more likely to die of that than in a fire or from drowning. If you got on a plane and it's like, oh, there's only a one in a thousand chance of dying on this plane, I don't think you'd [chuckle] get on it. So it's a real, non-negligible risk, and the stakes would be extraordinarily great. And the work that one does to prevent pandemics, which I'm happy to talk more about, has enormous other benefits in the near term too, such that often this work is justified without even having to think about the long-term future. And it has particular policy prescriptions. It means we should perhaps be more cautious about certain areas of biotechnological advancement. It means we should be even more concerned about the potential outbreak of a third World War or some other war between great powers, where the risk of using bio-weapons and engineered pathogens goes up a lot in war time, when people start doing very dumb things.
0:32:28.2 SC: Is there something about our current moment, historically, a moment broadly construed, plus or minus a few hundred years, that is especially perilous? You know, we've come across new technologies in the last hundred years that we didn't have before and our threat to ourselves is greater than ever before. Is that just the permanent future condition and we've had a phase transition or is it a temporary case where we're gonna become safe again down the road?
0:32:56.7 WM: I think there's a good argument for thinking it's temporary, or at least that the rate of technological change we're living through can't persist indefinitely. We've only had a really rapid rate of tech progress for a few hundred years, but again, even on Earth, the human race could survive for hundreds of millions of years. Well, if we have this continued rate of technological advancement, even for thousands of years, what happens? Well, how you measure tech advancement is hard, but let's just use the kind of economic measure, like contribution to economic growth of, again, about 2% per year globally. Well, we have an economy of a certain size now. Continuation of 2% per year growth for 10,000 years means that the economy would be, I think, 10 to the power 89 times as big as it is today.
[chuckle]
0:33:58.0 SC: That would be awesome.
0:34:00.9 WM: But there are only about 10 to the power 67 atoms within 10,000 light years. So it would mean that we would have this economy of a trillion civilizations' worth for every atom we could access, and it's just not plausible that we're gonna have that much technological advancement. So things have to slow. And so what this means is that we are living through a period of unusually fast technological progress. Unusually fast compared to history or compared to the future. And every technology is like you're reaching into an urn and pulling out a ball. And most of the time, it's just like, it's great. Technology in general has just been enormous, really important, in terms of improving our lives, but you get dual-use technologies. So nuclear fission: well, it could have, and could still, give us abundant clean energy, but it also gives us the power to develop nuclear weapons. Similarly, advances in biotechnology I think could do a huge amount toward eradicating disease, but could also help us create new pathogens. I think similarly with artificial intelligence. It could bring about a new era of prosperity, could also enable dictatorial lock-in or result in a future where artificial systems themselves can develop things and human values are kind of out of the loop.
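As a rough check of that compounding argument (a back-of-the-envelope calculation, not a figure quoted in the conversation): growth at rate g sustained for t years multiplies the economy by (1 + g) to the power t, and with g = 0.02 and t = 10,000,

```latex
(1.02)^{10{,}000} \;=\; 10^{\,10{,}000 \cdot \log_{10}(1.02)} \;\approx\; 10^{86}.
```

The exact exponent depends on the growth rate assumed, but any figure in that range dwarfs the roughly 10 to the power 67 atoms within 10,000 light years, so output per reachable atom would have to exceed today's entire world economy many times over, which is the implausibility being pointed at.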
0:35:20.4 WM: So that's why I think things are particularly important now. And then, will we ever get to a state where we can just reduce existential risk to zero? Basically, I don't know. If we can't, then we've got no future, we're kind of doomed anyway, but maybe we can. Maybe we can get to a position of what my colleague, Toby Ord, calls existential security, where we've actually just matured as a species, we've figured stuff out, we've managed to get these risks, all the relevant risks, down to zero, and then we can continue with civilization's trajectory. And if you think that's 50-50, well, I think those are odds worth betting on at least.
0:36:01.8 SC: Well, okay, good. So, now we're getting into slightly more technical questions here, because a lot of this seems to... And maybe it doesn't, maybe I have a misimpression, but it seems to depend on these probabilities. I mean, you said, okay, 1% chance of a pandemic that would wipe out all of humanity. And I completely agree that 1% for that particular kind of number is enormously big and we should worry about it a lot. But if it's a 10 to the minus 10 chance or a 10 to the minus 100 chance, then maybe things are different, I don't know. So like, how do we know what these increasingly small probabilities are? I guess there's a bigger question here that I always worry about. Even though I'm a very good Bayesian, I think that's how you should think about things, multiplying really small probabilities times really big effects is just always really dangerous to me. I mean, have you... Have we learned about a better way of doing that, or do we just do our best and cross our fingers?
0:37:01.7 WM: So on the issue of tiny probabilities of very large amounts of value, I just completely agree. If the arguments I was presenting were about one in a trillion, trillion, trillion, trillion chance of such enormous stakes.
0:37:15.5 SC: Right.
0:37:15.9 WM: Then I'm like, I would not have written this work, I'd be doing something else. And actually, many of my colleagues at this institute I helped to set up, the Global Priorities Institute, have worked on this very topic of how do you handle tiny probabilities of enormous amounts of value. And I'm sorry to say that the big results have been impossibility results, [chuckle] so just showing that no plausible view... In fact, proving that no view you can have on this topic has implications that are just generally plausible. You've got some very unintuitive implications for all such views. However, thankfully, we're not in that world, so it's kind of philosophically interesting. But the probabilities of the risks we're facing are really pretty sizeable, of just the same sort of magnitude as the risk of dying in a car crash, higher than the risk of dying in a plane crash. Exactly the sort of threats that we typically worry about.
0:38:23.1 WM: So my colleague, again, Toby Ord, puts the overall chance of existential risk in our lifetime at about one in six. And I don't significantly disagree with him on that. I have some differences on how we might lose most future value, but not enormous differences on the fact that, yeah, the size of these risks really is very great. And then consider the fact that most of the work you're doing to reduce these extinction risks is also helping to reduce the risk of catastrophe more broadly. So, what's the chance that we get a pandemic far worse than COVID-19 in our lifetimes? Or considerably worse than COVID-19? I'd say one in three, perhaps. What's the chance of like a World War III? You know, war between great powers in our lifetime? Again, I'd say at least 25%. You could even argue it should go higher.
0:39:22.8 WM: You asked the question of how do we even assess these probabilities? And it's super hard, and I think there's just an enormous amount of work we can do to try and get these more and more precise. But one thing we can do to start making estimates a little more reliable is what's called the art and science of forecasting, where you get lots of individuals who each come up with their own best-guess judgment on the probability of something, or some sort of forecast. And then you use certain algorithms to aggregate those forecasts into kind of an all-things-considered prediction. And so there's a community prediction platform called Metaculus that does this, aggregating predictions among many people. Back in 2015, I think it made a prediction: what's the chance of a pandemic that kills at least 10 million people occurring between 2016 and 2026? And the answer it gave was one in three. And if we had thought it was one in three and taken a lot of precautions, that would have done pretty well.
0:40:33.8 SC: Yeah.
0:40:34.0 WM: And so when I said the risk of a biological catastrophe was something like 1%, maybe a bit lower, that was, again, appealing to Metaculus, where its estimate of the chance of an engineered pathogen that kills at least 95% of the world's population is about 0.9%. And then, what fraction of that goes to extinction? It's hard to say, maybe it's 0.5% in total risk of extinction. So, I'm trying to use the aggregate of many, many people, each having their own takes.
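To illustrate what aggregating individual forecasts can look like in practice, here is a minimal sketch. The platform's actual algorithm isn't specified in the conversation, so this uses two generic pooling rules, the median and the geometric mean of odds, and entirely made-up forecaster numbers, purely as an example of the mechanics.

```python
import numpy as np

def aggregate_forecasts(probs, method="geo_mean_odds"):
    """Pool individual probability forecasts into one estimate.

    probs: probabilities in (0, 1), one per forecaster.
    A generic illustration, not the algorithm any particular
    platform actually uses.
    """
    p = np.clip(np.asarray(probs, dtype=float), 1e-6, 1 - 1e-6)
    if method == "median":
        return float(np.median(p))
    # Geometric mean of odds: average the forecasts in log-odds space,
    # then convert back to a probability.
    log_odds = np.log(p / (1 - p))
    pooled_odds = np.exp(log_odds.mean())
    return float(pooled_odds / (1 + pooled_odds))

# Hypothetical forecasts for "a pandemic kills 10M+ people by 2026"
forecasts = [0.20, 0.35, 0.30, 0.40, 0.25]
print(aggregate_forecasts(forecasts, "median"))         # 0.30
print(aggregate_forecasts(forecasts, "geo_mean_odds"))  # ~0.29
```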
0:41:08.6 SC: But I think this seems a little bit different from pure long-termism. I mean, sure, I do want to avoid the extinction of the human race. And it is absolutely true as you say that many of the things we can do to avoid the extinction of the human race are good for other reasons, like you also wanna avoid the extinction of half the human race. [chuckle] But how do we connect that... Or let me put it this way, why do we need to care about a billion years in the future to talk like that? I mean, I don't wanna... Especially if the worst risks are in the next thousand years, isn't this compatible with a much more near-termism? Like don't do things that could wipe us out in the next couple of centuries?
0:41:51.2 WM: I completely agree, basically. And yeah, the kind of intro I was giving was me experimenting with different framings. [chuckle] Often I just wanna say, "Look, at least consider centuries or thousands of years," 'cause I don't think it makes an enormous difference whether we're on a time scale of thinking thousands of years into the future or billions of years into the future.
0:42:16.2 WM: But having said that, I'm a moral philosopher, and I really want people to just have the underlying moral views and the big picture morality correct. Look, there are these enormously pressing reasons in the near term for reducing the risk of catastrophe, for stopping the human race from ending, but I'm hoping that the ideas we promote, in the way that ideas in the past like liberalism or the scientific method have, continue to have relevance and impact over decades or maybe centuries, maybe even longer. And for that, I just really want people not just to be acting on the right conclusions, but also to have the right underlying world view. And even if that's weird and has this initial, "Oh wow, this sounds like sci-fi," I'm like, "Okay, yeah, but it really isn't. And if you think it is, explain to me or tell me why." [chuckle] I'm really not someone who came in from a kind of sci-fi background. I don't have much interest in that.
0:43:16.1 WM: I just care about people, and I care about them on the other side of the world, and I care about them in the future too. And I think it's really just important that people in general have the right moral views, such that in, I don't know, 20 years' time, let's say we have got extinction risk down to close to zero because of pioneering work by people who were inspired by some of these ideas, then I want us to keep acting to try and benefit the long-term future, even if that might start looking rather different.
0:43:48.1 SC: I guess this does resonate with something I worry about a lot, which is that the human brain or human values, anyway, don't seem very good at dealing with risks that are less than 50% chance over a century, right? We really just don't care about that, whether it's solar flares or pandemics and so forth. So, I don't wanna put words in your mouth, but maybe... Like forgetting about millions or billions of years in the future, maybe one kind of obvious moral upgrade that we could do is just to pay more attention to things that are unlikely but disastrous over a century time scale.
0:44:29.9 WM: Absolutely. So the way people learn and society learns tends to be a kind of trial and error process, learning from experience. And so look at planes. Planes are very, very safe. One of the safest modes of transport per mile. And why are they so safe? Because there have been a lot of plane crashes in the past. And so over time, we've been able to build better and safer planes to reduce the risk of accidents. Pandemics kill many, many more people per year on average than plane crashes. I think we could just take the last... You know, I doubt 10 million people have died in plane crashes over the last 100 years.
0:45:12.4 WM: But are we prepared for them? No. And it's because, in terms of natural pandemics, something like COVID-19 or the Spanish flu is like a once-in-a-century event; we don't get to learn from feedback. And the same is true with these somewhat lower probability but high consequence events. If something happens just every few decades, then we don't learn from experience. And that means what we need are people who are, firstly, just clear-thinking and numerate, so they really understand the risks that we face going forward. And secondly, kind of morally motivated, concerned to overcome the political hurdles, such that we take action on them now. Because the situation we face is actually even harder than for natural pandemics, 'cause at least there we have a history of pandemics to use as evidence.
0:46:06.4 WM: But when we're talking about truly new technology, you don't have that history. So think about the situation just before the development of the atomic bomb. Those who were agitating for US political leadership to take such a possibility seriously were saying: you should take very seriously this technology, this event, that we have never seen before. And that's pretty tough. And we face, I think, a similar situation where we're talking about the ability to engineer new pathogens. We've never seen that before, but we should be worried, we should be thinking this through. Or the development of artificially intelligent systems that are at human level or greater than human level. We should be taking that really seriously, and we should be taking it seriously before it arises, because we don't get nearly as much feedback and ability to learn from trial and error as we do from these everyday sources of tragedy, like car crashes, plane crashes and so on.
0:47:07.7 SC: And it reminds me of an issue that seems to come up in democratic governance. I've talked a lot on the podcast about democracy and the threats to it and so forth. And recently I had a good question in the Ask Me Anything episode where they said, okay, so what is the downside of democracy, what is the argument against it? Just so we could sort of steelman that. This is probably one, right? I mean, democracies don't seem to be super good at long-term planning, because the people you vote for very naturally worry about the next 10 years at most. Is that... Am I too cynical about that? And is there something we can do about that?
0:47:45.9 WM: Yeah, I want to say two things on democracy. One is the question of what exactly you're democratizing. With nuclear weapons, they're terrifying, but one way in which we got lucky is that fissile material is easy to control. So it's not the case that anyone can manufacture a nuclear weapon in their garage. With future dangerous technologies, the ability to engineer new pathogens, depending on the regulatory environment, could go a different way.
0:48:15.7 SC: Right.
0:48:16.5 WM: And so there are some sorts of power we don't want to democratize, precisely because they're very dangerous. So that's one thing. And then you're right that once you take seriously the impact on future generations, it seems like democracy has this major flaw, which concerns the people that will be impacted by the policies we choose today. Let's conservatively say that future generations outnumber us a thousand to one; that's just if we live as long as a typical mammal species, that's how many people there will be. Well, then only about 0.1% of those who are gonna be impacted are represented. So what can we do about it though? That's a tougher question.
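Spelling out that fraction (simple arithmetic on the thousand-to-one figure just given): if future people outnumber the present generation a thousand to one, the share of affected people who get any say today is

```latex
\frac{1}{1 + 1{,}000} \approx 0.1\%,
```

yet that 0.1% holds all of the votes.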
0:49:03.7 WM: And again, in 'What We Owe The Future', I was going to have a whole chapter just on institutions dedicated to taking the long term into consideration and how we can augment democracy to have a more long-term view. And I ended up not having that chapter, just 'cause I think the situation is a little bit dismal. There are some things we can do. So you can have an ombudsperson for future generations. Wales has a Future Generations Commissioner, whose job in government is just to go around advocating for future generations' issues.
0:49:37.6 SC: Wow.
0:49:39.7 WM: I really like citizens' assemblies. This is where you take a randomly selected sample of the population, let's say 1,000 people, that statistically represents the United States electorate. And they would get together, you would present them with various different issues that might impact future generations. You tell them to take on the role of their great-great-grandkids, imagining, "Okay, what should we be doing in order to make sure that their lives, our descendants' lives, go well?" And you get them, at least in an advisory capacity, to say, "Okay, this is what we think we should do." And these are things that I think are good.
0:50:23.8 WM: I think they would make the world better. They would help inform government. I just think they're a far cry from seriously taking the long-term impacts of our actions and of democratic decisions into account in practice. So we can make these incremental changes, but I think anything that tries to represent future generations in a serious way is just... My guess is that it's impossible, because future generations aren't here yet, they can't represent themselves, and so any system you set up to try and represent them will face the force of special interests aiming to corrupt it. And that's just a tough situation to be in.
0:51:07.3 SC: Okay, all right, that was about as cynical as I am also. So I can't really argue with you. It is a flaw. We should think about...
0:51:13.9 WM: My apologies.
0:51:15.9 SC: Trying to do better. Yeah, you know, reality never promised us a rose garden. Let's back up. You're a moral philosopher, and we sketched out some of the ways in which we do long-termist thinking. But now I wanna really dig into why we should be thinking this way in the first place. And I take it that your starting point anyway is some kind of consequentialist utilitarian point of view. So tell us what that is and then maybe a bit about the meta-ethics of why you think that's the right perspective to take.
0:51:46.3 WM: Terrific. I'll say that the things I argue for are utilitarian-flavored or consequentialist-flavored insofar as they are very focused on what good outcomes we can promote. But I'm pretty careful in public to only defend things that I think are justified on a wide variety of moral views. And my second book, an academic book, and my PhD topic were precisely on this issue, where I think you should have some degree of belief in a variety of different moral views, consequentialism and non-consequentialism. And when you're taking action you should try and act in a way that's the best synthesis or compromise between these different moral views.
0:52:32.3 WM: And I think the concern for the long term is something that would pop out of that. But briefly, what is consequentialism? Consequentialism says you should do what's best, where what's best is what brings about the best outcomes. And it says that that's all that matters, and that can be unintuitive in one of two ways. One is it means that it's never permissible for me to just spend some resources on myself or do my own thing if I could do something else that would bring about the best outcome. So I could spend that money on a nicer house or a nicer car, but that money could be used to improve the lives very significantly of people in very poor countries or could be used to help steer the future onto a more positive trajectory.
0:53:25.5 WM: And let's say, which I think is very likely to be the case, that that will do more good overall from this kind of impartial point of view. Then on consequentialism you ought to do it; you shouldn't give yourself any greater weight than anyone else. And that I think is very practically relevant, because we in Western countries just do have a lot of resources that could be used to do a lot of good. There's a second aspect which can be unintuitive, which is that promoting good outcomes is always the right thing to do, even if that involves a kind of means that we have a common-sense objection to.
0:54:08.5 WM: If you can kill one person to save a 100 other people, then on consequentialism, you ought to do that. Now, I think that's much less practically relevant for a couple of reasons. One is that when you do this kind of compromise between different moral perspectives, on at least some plausible moral views people have rights that protect them against being used as a means, even for the greater good. And that's really serious on those moral views. And so the best compromise, I think, means we do respect people's rights while trying to promote the good in cases where you're not violating anyone's rights or doing harm. And then secondly, it's just like, how often is the best thing to do to kill one person to save a 100?
0:54:51.6 WM: It's just like, it hasn't come up in my life. It hasn't come up in the lives of anyone I know. It would come up in war time, but it's the sort of thing that comes up more in the philosophy seminar room than in practice. Where in practice, I think if you're a good consequentialist, you actually wanna be even more scrupulous. You wanna go around being very, very honest, being very respectful to other people, being very cooperative. And the Effective Altruism community in general has really tried to promote those virtues. But yeah, the key thing that makes the things I promote, like Effective Altruism and long-termism, kinda sound consequentialist is this rigorous focus on bringing about good consequences, making the world better. But that's something that other moral viewpoints really care about too. Non-consequentialism, virtue ethics, they also think that doing good things matters. It's just that they think it's not the only thing that matters, and I would agree.
0:55:57.7 SC: Can I ask? I'm actually very intrigued, and I'm kinda sad we didn't do a whole episode on this probability distribution over moral systems and using that as a guide to action. I wanna ask one very simple question. Would that strategy also be applicable to religious views? Should we take the expectation value of what we think God wants us to do, even if we're not sure God exists?
0:56:21.8 WM: I think you should give it some credence. Am I certain, 100% certain, that I'm not gonna wake up after I'm dead and God will tell me that I got it all wrong? No, I'm not 100% certain. And then there's this very thorny argument going all the way back to Pascal, saying, "Well, if you've got some credence in God and God can produce infinite amounts of value, then you should really be wagering on God." This, again, I'm having to be a bit of an apologist for philosophers, is something where we just don't have a very good answer. Because, as we mentioned before, there's the problem of tiny probabilities of enormous amounts of value. Well, this is that issue writ large.
0:57:10.2 SC: Right.
0:57:11.4 WM: Where it's not tiny probabilities of enormous amounts of value, it's tiny probabilities of infinite amounts of value. And my practical attitude is: we haven't figured this stuff out. If I try to take action on it, I feel it's about equally likely that I'll do harm as that I'll do good. Maybe it's slightly more likely that I will produce infinite positive value than infinite negative value. But I can't think of a much better strategy than trying to ensure that future generations exist and can figure this stuff out and have more of a clue than I do. But I think if someone was a really hard-nosed, bullet-biting decision theorist, I don't think I have a good philosophically justified response to them, to be honest.
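For the structure of the problem being gestured at (a standard textbook rendering of the wager, not the speakers' own formalism): once any option carries nonzero credence p of infinite value, expected value calculations stop discriminating, since

```latex
\mathbb{E}[\text{wager}] \;=\; p \cdot \infty \;+\; (1 - p)\, c \;=\; \infty
\qquad \text{for any } p > 0 \text{ and finite } c,
```

so the size of p no longer matters, which is why the "tiny probabilities of infinite value" case is treated here as unresolved.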
0:57:57.7 SC: Well, I think... Okay. You are careful in the discussion of consequentialism, you didn't emphasize the utilitarian version of consequentialism very much. So maybe explain what makes utilitarianism special within consequentialism. And then are you a utilitarian, and should we all be?
0:58:17.1 WM: Sure. So, utilitarianism is a form of consequentialism. Consequentialism says just do what produces the best consequences. Utilitarianism has one particular understanding of what those consequences are, namely the sum total of people's well-being. So you can take your action, look at everyone who's impacted, add up all the benefits, and subtract away all the harms. That's the amount of good you've done. And that can be contrasted with views on which things other than well-being matter: preservation of the natural environment, knowledge for its own sake perhaps, art for its own sake, just complexity perhaps, life itself.
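Written as a formula (a standard textbook statement of the view just described, where the symbol below denotes the change in person i's well-being brought about by action a): utilitarianism ranks actions by

```latex
U(a) \;=\; \sum_{i} \Delta w_i(a),
```

the sum of benefits minus harms across everyone affected, and tells you to choose the action with the largest U.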
0:59:06.6 WM: And again, I think you should have a view where you take lots of different moral perspectives into account and act on a compromise, and that does mean that in practice I would give some weight to those other things. Having said that, at least in the seminar room, I find the arguments in favor of utilitarianism, as compared to other sorts of consequentialist views, really quite compelling. I find it much more plausible that there could be moral properties, properties of being good and bad, that pertain to conscious experiences in particular, than that there's this thing, good or bad, that is a property of a natural landscape or of a piece of human-created art. It seems to me I have direct awareness of goodness when I have a good experience, and direct awareness of badness when I suffer. I don't have direct awareness of the goodness or badness of a painting; that feels like just some judgment I have. And that's the crucial distinction, it seems to me, between wellbeing, in particular as it consists in conscious experiences, and other things that are purportedly good or bad.
1:00:23.3 SC: And that does raise the issue of the meta-ethical justification for all of this. I mean why should we be consequentialist? Are you someone who thinks that, well, we have moral intuitions and we're trying to systematize and formalize them, or is there some kind of objective transcendent argument that this is simply the right thing?
1:00:41.6 WM: Great question. So, within the Effective Altruism community and among people who work on this, there's a big diversity of views. Many people have a view called 'sophisticated subjectivism', where what you're doing morally is just trying to act on the preferences that an idealized version of yourself would want yourself to have. So if you could just reflect forever, this is ultimately what you would want to do, but in the end it's just about your preferences. I personally don't find that view that plausible. I think it's an awkward halfway house between two other views. One view is just nihilism, or what moral philosophers call error theory, on which every time I make some moral claim, like "murder is wrong" or "giving to charity is good", it's false. It's like I'm talking about witches or phlogiston; it's just a false view of the world. And honestly, I find that view quite compelling.
1:01:45.2 WM: I'm not sure exactly what probability I give it. Depending on my mood, somewhere between 10 and 90%, I think. But then there's this other view, which is that, no, there actually are, as part of the fabric of the universe, moral truths, or at least evaluative truths.
1:02:04.3 SC: Yeah.
1:02:06.3 WM: Where, yeah, on my kinda preferred view, those truths are grounded in conscious experience. There are these famous arguments for error theory. So J.L. Mackie, writing in, I think, the '70s, calls them the 'arguments from queerness': that moral properties are queer. And it's funny how the term has evolved, 'cause it sounds like, oh, these properties are too gay to exist. [chuckle]
1:02:32.4 WM: But he's just saying, well, if there are moral properties, things that, just in virtue of understanding them, you think you should promote, that would make them radically unlike other sorts of properties, like natural properties in the world. And I'm like, yeah, they are kind of different, but consciousness is already just radically different from other sorts of things that we find in nature. So at least I've got a bit of a partners-in-guilt argument there. And then he has this second argument from queerness, which is: how would we know? If there's this moral property, something we ought to do, what's the way in which we get at this truth? And again, my answer is via direct awareness. How do I know truths about consciousness? It's because I directly perceive them, not because of anything I know in the scientific realm. And so that, I think, gives us at least some grounding (there's plenty more to be said) for thinking that, oh no, it really could be the case that there are these fundamental facts about what is good and bad, and what we ought and ought not to do. And then if you have got some degree of belief in both nihilism and the moral realist view...
1:03:46.2 SC: Yeah.
1:03:46.3 WM: Well, there's no chance of making a mistake by doing the moral realist thing, 'cause if nihilism is true, nothing matters. If I've spent all my time doing these good things, well, by the nihilist's lights that's as good as anything else. 'Cause crucially, nihilism applies to the reasons I have to benefit myself as well. It's not saying there's no morality so I should just benefit myself; it's saying I've got no reason to do anything...
1:04:09.3 SC: Yeah.
1:04:10.1 WM: Including helping myself. So in that situation, you just may as well do the moral realist thing. You may as well act as if there's a meaning and purpose to the world, because it works better.
1:04:26.5 SC: So even if Pascal's wager doesn't work for God, maybe it works for moral realism. You might as well act that way.
1:04:31.1 WM: Well, you might as well act that way. To be clear, if I had a 0.00001% degree of belief in moral realism, I'd be a bit sketched out by the argument, but really, I'm kind of more like 50/50 or something.
1:04:44.9 SC: Okay.
1:04:48.4 WM: It's... Or at least, you know, I think it's really pretty plausible that this could be true. And so that makes me think this isn't this kind of crazy, Pascalian low probability argument.
1:05:00.7 SC: Okay. I mean, again, we could talk about this for a long time, but I really wanna start applying it to long-termism. And in the book you have a wonderful chapter about population ethics, where you sort of give the apologia for being a utilitarian about the wellbeing of all the future generations and so forth. And correct me if I'm wrong, but you seem to be careful in that chapter to use utility, the total wellbeing of all the people in the world, in a kind of ranking sense, without actually attaching real numbers to it. The skepticism I've had about utilitarianism has always been: what number are we attaching to the wellbeing of a person, and how in the world do you think you can just add them up for all these people? And maybe there's a weaker version where you don't really need it to be an actual number, as long as you can reach conclusions just from saying that this is better than that.
1:05:57.9 WM: Terrific. So maybe we'll get onto population ethics next, because this first question, of how you're even measuring wellbeing...
1:06:07.0 SC: Yeah.
1:06:11.3 WM: Is kind of more fundamental, really. So the way that I would measure wellbeing is by appealing to people's carefully considered preferences and the tradeoffs that they would make. My particular favorite method is time tradeoffs, though you can do either time tradeoffs or probability tradeoffs. So suppose I'll definitely die tomorrow, and I can have two days skiing or one day at the beach, and I'm indifferent between those two things: one day at the beach and then I die, or two days skiing and then I die. Then, as long as my preferences fulfill certain other axioms, we can create what's called a ratio scale that represents my preferences, where the fundamental thing is just preference for X over Y. It's not like there are numbers buried...
1:07:17.6 SC: Right.
1:07:23.3 WM: Buried deep inside me. I just have these preferences: indifference between two days skiing and one day at the beach; perhaps I prefer four days in Paris to either of those things. I just have preferences over all of these things, and if the preferences satisfy certain conditions, then we can use numbers to represent them, and that's all that's going on. The numbers are just being used to represent those preferences. Then there's a second question: okay, that's my own preferences, but how do we compare my preferences with yours, and so on? Or at least my wellbeing with yours; the preferences are just a way in to what my wellbeing is.
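As a toy illustration of that point (my own sketch, not Will's formalism): the utility numbers do no work beyond encoding the ranking and the indifferences, and any positive rescaling represents exactly the same preferences.

```python
# Toy example: numbers merely represent a preference ordering plus indifferences.
# Suppose I'm indifferent between "2 days skiing" and "1 day at the beach",
# and I prefer "4 days in Paris" to both. Fix the beach day at 1.0 as an
# arbitrary unit; the particular numbers below are otherwise unconstrained.
utility = {
    "1 day at the beach": 1.0,
    "2 days skiing": 1.0,    # indifference => same number
    "4 days in Paris": 2.5,  # preferred => any larger number would also do
}

def prefers(a, b):
    """Read the preference relation back off the numerical representation."""
    return utility[a] > utility[b]

print(prefers("4 days in Paris", "2 days skiing"))                # True
print(utility["2 days skiing"] == utility["1 day at the beach"])  # True

# Rescaling by any positive constant encodes the same preferences, which is
# the sense in which it's the scale, not the numbers, that carries meaning.
rescaled = {option: 3 * u for option, u in utility.items()}
print(prefers("4 days in Paris", "1 day at the beach"), rescaled["2 days skiing"])
```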
1:08:08.3 WM: And that's really hard. There are various philosophical accounts of it. But the thing I wanna say is, look, if you're saying that there's no comparison between me pinching you in the shoulder and someone else getting horrifically beaten up, that we can't say for whom it's worse, I'm just like, come on. [laughter] We obviously know. So perhaps there's debate about what the correct theoretical account is of how we make comparisons of wellbeing differences across different people, but clearly we are making them. And in general, you can just look at various sorts of experiences: headaches are probably in a similar range for most people, and likewise for other things. You can at least roughly start to make these comparisons, and once you've got that, then at least approximately you're able to engage in utilitarian reasoning, where you can meaningfully talk about the sum total of people's wellbeing.
1:09:15.0 SC: Well, maybe this is fair, but I'm not completely convinced you're not cheating a little bit here. I mean, I see the move that you're making, which I'm sympathetic to. Which is, look, if you spend all of your time worrying about the weird edge cases, you're gonna miss some basic truths in the simple cases that actually arise. And okay, good.
1:09:32.7 WM: Yeah.
1:09:33.3 SC: I'm on board with that. I get a very similar dialogue when I discuss the many-worlds interpretation of quantum mechanics. [laughter] But we are gonna be extrapolating in some sense, right? I mean, this is what long-termism is all about. Couldn't a skeptic say, "No, no, no, until you have a rigorous theory that works even in the edge cases, I'm not gonna believe your extrapolation a million years into the future"?
1:10:04.8 WM: I'm curious what you see as the edge cases that we're worrying about when it comes to measuring wellbeing...
1:10:11.9 SC: So I guess the fundamental... All objections to utilitarianism, I think, boil down to this: do you really believe that if I have a person with a certain amount of wellbeing and another person with much, much more wellbeing, the difference between their wellbeing can be so great that I am indifferent between the first person definitely existing with a little bit of wellbeing, and a 50% chance that the person with a lot of wellbeing exists versus a 50% chance that nobody exists? [laughter]
1:10:44.3 WM: Yeah.
1:10:48.3 SC: Could I take that expectation value? Those seem a little bit incommensurable in some sense to me, but it kind of lies at the heart of what utilitarianism asks me to do.
1:10:56.9 WM: So I think there are two things to say. One is that utilitarianism doesn't yet make any claims about population ethics; you have to have a view of population ethics that you attach to it. So in particular, total utilitarianism says maximize total wellbeing: if you can add someone with 1000 wellbeing to the population, that's making the world better by plus 1000 wellbeing. But there are other forms of utilitarianism. Average utilitarianism says maximize average wellbeing. That's the same if you've got a fixed population of people, but it differs once you're adding additional people. Now, let's just assume total utilitarianism for now, and we've got a choice between adding one person for sure or a 50/50 chance of adding another person with a better life. Let's say person one has a life that is positive but just barely worth living. It's really drab; there's a lot of suffering in it, and the happiness just barely outweighs it. Person two has the most wonderful life that's ever been lived, lives like 10 times as long, and has peaks of giant creative insight. Just imagine your best day, but 10 times better, every single day.
1:12:28.4 WM: Then I think it's perfectly obvious that you should take the 50% chance of the one person with a very happy life. And then, okay, make the really happy person's life less happy, and the life that's barely worth living a bit better, and there will come some range where we just don't know, because these comparisons are really hard. And perhaps there's even some range of incommensurability: not just that we don't know, but actually there are fundamental differences. Perhaps one person's life has great accomplishments and involves the pursuit of knowledge; the other person is just this kind of beach bum who surfs all day and has a great time. Both of those lives are very good, but perhaps it's just very hard to weigh those two sorts of lives against each other. The first view wouldn't deviate from total utilitarianism at all; it's just limits to our understanding of what we can know about the world. The second would move away from total utilitarianism as it's standardly defined, but not in a really fundamental way. Instead it would just be: sometimes you can say that one outcome is better than another, sometimes you can say they're equally good, but sometimes you can't.
1:13:44.0 WM: Sometimes one outcome is neither better, nor worse, nor equally as good as a second outcome; they are incommensurable. And as long as at least sometimes one outcome really is better than another, then utilitarianism is still action-guiding. It's still saying that, at least in some circumstances, it's clear what you want to do, even though in other circumstances it just says, "There's no fact of the matter," because these are both good lives, but they're good in different ways.
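For concreteness, here is a rough sketch of the gamble Will just described, on a straightforwardly total-utilitarian reading; the wellbeing numbers are invented purely for illustration.

```python
# Rough sketch: expected total wellbeing added by each option (illustrative numbers).

def expected_total_wellbeing(outcomes):
    """outcomes: list of (probability, wellbeing added) pairs."""
    return sum(p * w for p, w in outcomes)

barely_worth_living = 1       # the drab life, just above the zero level
wonderful_life = 10_000       # ten times as long, peaks of creative insight

# Option 1: add the drab life for certain.
option_1 = [(1.0, barely_worth_living)]

# Option 2: 50% chance of the wonderful life, 50% chance of adding no one.
option_2 = [(0.5, wonderful_life), (0.5, 0)]

print(expected_total_wellbeing(option_1))  # 1.0
print(expected_total_wellbeing(option_2))  # 5000.0 -> the gamble wins here

# As the two lives get closer in value, or become hard to compare at all
# (the scholar vs. the beach bum), the comparison can go silent rather than flip.
```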
1:14:12.7 SC: But the conclusion you said was perfectly obvious, I worry about when we try to extend it from one extra life to the entirety of the human race. Like if we say there are gonna be a trillion people that exist in the future, and either they will be, yeah, meh [laughter], they'll have moderately happy lives, or there's a 50/50 chance they'll have amazing lives and a 50/50 chance there'll just be extinction and no one will exist. I think that there it's not so obvious, and maybe most people would vote for the 100% guaranteed existence, even though I just scaled up the numbers. Right?
1:14:49.4 WM: It could be. And this actually... I love how you're pressing me.
[laughter]
1:14:56.4 WM: And how deep we're getting into these issues. And it actually relates to those issues from before about tiny probabilities of large amounts of value; you start to need the same kind of formal tools. So the question is: is value unbounded? If I have two really good, happy lives, is that twice as good? Or, once you've already got one, is a second one still really good, still making the world better in virtue of there being another happy person, but just a little bit less good than the first one was? Are there diminishing returns to value itself? And the intuitive case for this is very strong, I think. Imagine a future where you have populated the solar system, or even the galaxy, or just Earth, and you've created this Utopia. There's a large population, 100 trillion people, all living the best possible lives. And then I say, "You can have a 50/50 chance of that going to zero or of having three such Utopias." [chuckle] And people are like, "Umm, [laughter] I don't know. The gamble doesn't seem worth it." I have those intuitions very strongly, and that suggests that value is bounded. So just as in an individual life it's plausible that you can get happier and happier and happier, but there's some kind of upper limit, at least on a day-to-day basis.
1:16:32.6 WM: You couldn't have a day that's a trillion times as good as your best days. [chuckle] The same might be true for value as a whole. And that would mean that if you take this flourishing Utopian civilization and have it three times over, maybe that's 10% better or something, 'cause you're very close to this upper bound of value. However, there are really bad problems for that view too. Because if you think there's an upper bound to value, then when you make decisions now, it really matters how many people have lived in the past, how many people are alive in different places in the universe, and how good or bad those lives are. 'Cause you need to know: am I very far away from the upper bound? If I'm far away from the bound, then two happy lives are approximately twice as good as one happy life. But if I'm close to the bound, then it really makes a big difference, and that just seems absurd to me. The idea that we ought morally to be figuring out what the lives of the ancient Egyptians were like, and whether they were good or bad...
1:17:49.3 WM: How many animals were alive in the past, and were their lives good or bad? Are there other alien species, and are their lives good or bad? That just seems absurd to me too. And so again, we have a paradox, I think. We've got this issue where any view we hold faces devastating objections. And honestly, one of the things I think moral philosophy has contributed most over the last five years, especially as we've started to use some more formal tools, is demonstrating a whole bunch of these areas where you can prove... I call it a paradox, but only in the informal use of the term; the technical term is that you prove impossibility results. You have four conditions, four principles, and each one is just... yeah, absolutely... [chuckle]
1:18:39.9 WM: You've just got to accept this principle; it couldn't not be true. And then you prove that they are inconsistent with each other. And this is one of those cases. But the thing I should say is that this is compatible with utilitarianism. The view on which we have this bound and additional happy lives have less and less additional value, that's utilitarianism, but with a different view of population ethics kind of slapped on. And that category of views is one of the categories I didn't have time to get into in 'What We Owe the Future', which is a shame, 'cause they're an interesting and important category of views you might hold.
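To see how a bound changes the verdict on the utopia gamble, here is a small illustration of my own (not a model from the book), using an assumed value function that saturates toward an upper limit and comparing it with the unbounded total.

```python
# Illustrative bounded value function: value approaches V_MAX as wellbeing grows.
import math

V_MAX = 1.0   # hypothetical upper bound on total value
SCALE = 1.0   # sets how quickly we approach the bound

def bounded_value(total_wellbeing):
    """Diminishing returns: V_MAX * (1 - exp(-w / SCALE))."""
    return V_MAX * (1 - math.exp(-total_wellbeing / SCALE))

one_utopia = 1.0  # total wellbeing of the flourishing civilization, in units
                  # where one utopia = 1

# Unbounded (total utilitarian) verdict: the 50/50 gamble on three utopias wins.
sure_thing = one_utopia                      # 1.0
gamble = 0.5 * 0 + 0.5 * (3 * one_utopia)    # 1.5

# Bounded verdict: three utopias are only modestly better than one, so risking
# everything for them no longer looks worth it.
sure_thing_b = bounded_value(one_utopia)                        # ~0.632
gamble_b = 0.5 * bounded_value(0) + 0.5 * bounded_value(3.0)    # ~0.475

print(sure_thing, gamble, round(sure_thing_b, 3), round(gamble_b, 3))
```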
1:19:27.9 SC: Well, I guess, personally, I'm in what you'd call the wishy-washy middle ground of not being a nihilist, but not being a moral realist either: being a constructivist who thinks that we have these intuitions and we try to formalize them. And as you say, it very often is the case that you think you're formalizing your intuitions, and then you use your formal system to prove a conclusion you think is morally repugnant. So clearly you've made some mistake there, or you're just incoherent, and neither one of those is a happy place to land. But let me just push one more bit on utilitarianism, because it does seem that, even if you have diminishing returns to value or an upper bound on the total amount of value, the fundamental idea is that you're adding up value or wellbeing or utility or whatever across all these different lives. And that leads to a feeling that, all else being equal, more people is better. And Derek Parfit famously reached what he called the 'repugnant conclusion'; maybe you could tell us what the repugnant conclusion is. And if I'm not mistaken, your attitude is: yes, let's accept this repugnant conclusion, and he was cheating to call it that.
1:20:46.5 WM: So I agree that it was cheating to call it that, and I'll tell you the reason why: again, we're within this area where you have these very strong intuitions and they're all inconsistent with each other, and I actually do think that the repugnant conclusion is maybe the least repugnant of the principles we would have to reject. But let me explain. So again, bearing in mind that utilitarianism is compatible with many views of population ethics, and every moral view has to engage with population ethics: we all face the question of how we should weigh bringing new people into existence, and whether that's good or bad at all. The repugnant conclusion is the following. Imagine you have a population of 10 trillion people, some large number of people, and they have amazing, amazing lives. Let's say it's plus 1000, using that number just to mean really, really good lives, bliss. And now consider some second population, where by 'population' you can just imagine the future of the world, let's say. And they have lives that are barely worth living, let's say plus 0.1. It's good; they think, yeah, I'm happy to have lived, but I'm close to indifferent. [chuckle]
1:22:09.4 WM: Perhaps my life has just been very boring and not much has happened, or perhaps I've had both happiness and suffering in it. But there is a very, very large number of them. Let's say there's a trillion trillion of them, or a trillion trillion trillion; we can make the number very large indeed. Now, if you have this total utilitarian view, which says that even in cases where you're adding people to the population, what you do is just add up the wellbeing across the whole universe, across the whole future, well, then you'll get the conclusion that the very, very large population, with a trillion trillion trillion people whose lives are just barely worth living, has greater total wellbeing than the much smaller, but still very large, population of people with total bliss lives. And that seems repugnant, says Derek Parfit. And he deserves to get to name the conclusion, because he really inaugurated this field, and he really did believe it was repugnant. Before he died, he told me that he would give up on the idea of there being moral truth at all, and he was a pretty committed moral realist, before he would accept the repugnant conclusion.
1:23:32.7 WM: However, you can prove that there is a small number of principles within population ethics that are inconsistent with each other, and I'll give you the argument just very quickly. So start off with these 10 trillion people who live extremely good lives, plus 1000. Call that the A world. Now we're gonna change the A world in two ways. We're gonna double it in size, so it's now 20 trillion people, and the original 10 trillion people are gonna gain just plus one wellbeing, so they're at 1001 now.
1:24:10.3 WM: The other 10 trillion people are gonna have lives that are just a little bit less good, so plus 998. We'll call that world A plus: twice as big, you've made the original people a little better off, and the new people have lives that are extremely, extremely good, but not quite as good as the lives of the people in the A world. Does that make the world better? Maybe I'll just ask you; you can tell me what you think.
[chuckle]
1:24:42.6 SC: Well, actually, no. I mean, it's cheating, 'cause I've read your book and have had time to think about it. And actually, this is the step that I would deny. I think that just adding more people of equal levels of happiness is not necessarily better. And again, I started off the whole thing by saying I don't have the once-and-for-all final answer in my own mind, so I'm happy to change, but that's the weakest link to me.
1:25:07.0 WM: So I agree it's the weakest link, actually, and that if you're gonna reject the repugnant conclusion, it's this step of the argument that you should reject. But let's linger on it. So let's say you're one of the original 10 trillion people. By making this change, you're gonna make your life a little bit better, and the life of everyone else you know: you're gonna move from 1000 wellbeing to 1001 wellbeing. And the cost of that is adding more people who will also have supremely good lives, in fact lives close to the bliss level I stated, plus 998. What's your argument for saying that that's wrong?
1:25:45.6 SC: Sure. And the intuitive pull of the argument is perfectly clear to me. So I don't have an argument that it's wrong; I have the idea that maybe the function that goes from the set of individual people and their happiness to the total goodness of the whole thing, in the consequentialist sense, is highly non-linear, and not just additive over the number of people.
1:26:14.2 WM: That might well be true. Okay, I'm gonna stop [laughter] pressing you on this for the moment. I think you're right that this is the stage to get off the boat, and I think what you can actually say is that, yes, in this case it would be better, but once wellbeing levels start to get lower, then it's no longer better. And in fact, the all-things-considered view that I kind of present or endorse in 'What We Owe the Future' rejects that step. So the view I present in 'What We Owe the Future' does not endorse the repugnant conclusion, even though I'm sympathetic; I don't think it's crazy to endorse it.
1:27:00.9 WM: But okay, let's say that you think, "Yeah, this is better." You've made the existing people better off: 10 trillion people have moved from 1000 to 1001. You've added another 10 trillion people at 998. Seems pretty good. We'll call that the A plus world, and the principle behind saying that it's better is called dominance addition: 'addition' because you're just adding people with good lives, 'dominance' because it's also making the existing people better off. Now we're gonna move from the A plus world to world B. And to do that, we're not adding anyone at all. What we're gonna do is increase total wellbeing and make it more equal. We've got this world of 10 trillion people at 1001 and 10 trillion people at 998, and we're gonna move to a world where everyone is at, say, 999.6. That has increased average wellbeing, increased total wellbeing, and now everyone is equal; there was a little bit of inequality in that former world, and we've removed it. Again, it seems like that world is better. What's the argument for rejecting that move, for saying that it is not making things better?
1:28:14.1 SC: Right. I mean, you have now lowered the average wellbeing compared to the original world.
1:28:20.8 WM: So that's the rub. [laughter] Because, okay, I'm now comparing A and B.
1:28:28.5 SC: But there's more people.
1:28:29.8 WM: Yeah. I've created a larger population, and it's a larger population with lower average wellbeing. Now, you can iterate that. So I had A, to A plus, to B. We can do B plus: now it's 20 trillion going to 40 trillion people, with the average drifting down from 999.6 to, let's say, 996, and so on and so on. You keep iterating, going from A to B, then B to B plus, to C, and so on, and what you're doing is increasing the size of the population and lowering the average wellbeing. Keep doing that and you end up moving from the A world, 10 trillion people at 1000 wellbeing, to the Z world, where everyone has a life that is barely worth living, but there is a very large number of them, and you end up with that world being better. So what are the principles you have to reject? Well, you either have to say that at some point the move from something like the A world to the A plus world is bad, or that the move from the A plus world to the B world is bad. Or you have to deny transitivity, which is the principle that if B is better than A plus, and A plus is better than A, then B is better than A; that seems pretty compelling. Or you have to accept the repugnant conclusion.
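The arithmetic behind those first two moves is easy to check. Here is a small sketch using the numbers from the conversation (the 999.6 level for world B is just an illustrative choice):

```python
# Totals and averages for the A -> A+ -> B sequence of the mere addition argument.

def totals(world):
    """world: list of (number of people, wellbeing per person) groups."""
    people = sum(n for n, _ in world)
    total = sum(n * w for n, w in world)
    return total, total / people

T = 10**13  # 10 trillion

world_A      = [(T, 1000)]             # the original blissful population
world_A_plus = [(T, 1001), (T, 998)]   # originals slightly better off, plus new
                                       # people with very (but less) good lives
world_B      = [(2 * T, 999.6)]        # equalized, total and average above A+

for name, world in [("A", world_A), ("A+", world_A_plus), ("B", world_B)]:
    total, avg = totals(world)
    print(f"{name}: total={total:.4e}, average={avg:.1f}")

# A+ beats A by dominance addition (existing lives improved, good lives added);
# B beats A+ on total, average and equality; yet B's average is below A's.
# Iterating the two moves keeps growing the total while the average drifts
# toward "barely worth living" -- the road to the repugnant conclusion.
```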
1:29:49.0 SC: Yeah.
1:29:49.9 WM: And in light of those, yeah, I think it's hard. My best guess is that the best thing to do is to accept the repugnant conclusion. But as I've been saying, what I think we ultimately want to do is take a compromise between different moral worldviews, including different views of population ethics. And what that ends up giving you, if you really reason it through, is what's called a critical level view. The zero level is the point at which I, as an individual, am indifferent between having ever been born and never having been born. On a critical level view there's some higher level, and it's only above that higher level that it's good to bring someone into existence. So bringing someone into existence with a life that's barely worth living is not a good thing; it might even be bad. But if they have a sufficiently good life, then it's a good thing. And that avoids the repugnant conclusion, and it does so by rejecting this dominance addition principle once you start to get to lives that are not very good.
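A quick sketch of how a critical level changes the bookkeeping; the threshold and population sizes here are invented, purely to show the mechanics:

```python
# Critical-level sketch: a new life counts only for its wellbeing above a
# positive threshold, so vast numbers of barely-positive lives stop winning.

CRITICAL_LEVEL = 10.0   # hypothetical threshold, well above the zero level

def critical_level_value(population):
    """population: list of (number of people, wellbeing per person) groups."""
    return sum(n * (w - CRITICAL_LEVEL) for n, w in population)

blissful_world  = [(10**13, 1000)]   # 10 trillion lives at +1000
repugnant_world = [(10**25, 0.1)]    # vastly more lives, each barely positive

print(critical_level_value(blissful_world))    # large and positive
print(critical_level_value(repugnant_world))   # large and negative

# With CRITICAL_LEVEL = 0 this reduces to total utilitarianism, which would
# rank the huge drab world higher; a positive critical level reverses that.
```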
1:30:56.7 SC: Okay. Thank you for indulging me in my [laughter] poking at utilitarianism, 'cause I've talked about it in a lot of podcast episodes, but never quite as directly as I wanted to. And your answers are extremely illuminating.
1:31:08.5 WM: Thank you very much.
1:31:09.1 SC: And non-dogmatic as well, which is great. So let's wrap up by going back to the real world and being practical. If we have this idea that future generations matter and we can influence them, and therefore that should color our actions in the present, what are the practical things we should do? And actually, let me not be quite that fair yet. [chuckle] Let me start with the really hard questions, like: is there some implication that it's more important to have kids than anything else, because that would bring some extra positive utility into the world?
1:31:47.2 WM: I'll say two things on that. The first is that if you think that, yes, bringing a sufficiently good life into the world is good, then the overwhelming upshot is that extinction of the human race is much worse than we would otherwise think, because the loss would be huge, astronomical: trillions upon trillions of future lives. And in fact, the question of whether you should have kids today would actually just depend on how it impacts the long-term future, including reducing extinction risk. The second thought, though: let's just take that question at face value. Is it good to have kids? There's obviously a very influential view that says no, because of the climate impact. I think that argument is really quite weak and perhaps even harmful, because it only looks at one side of the ledger. It looks at the harms that a child imposes on the world, but not the benefits too, where if you have children, are a good parent, and bring the kids up well, your kids will go on to do lots of good stuff too.
1:33:09.3 WM: They will contribute to society, they will pay taxes. If you bring them up well, they'll be moral changemakers. But people also innovate, so they help create new technology, new ideas that make the world a better place. And if you have the view that it'd be bad for me to have kids, do you think it'd be good if there were just no people at all? Well, probably not, I hope not. But if so, then it's like, okay, where's the line? What's the difference? Or think about the past: suppose there had just been way fewer people in the past, say half as many. Well, we'd still be living as farmers. [chuckle] The amount of technological and intellectual progress we've made is because we've had a lot of minds tackling these problems. So we would still be living as farmers, we wouldn't have anesthetic, we wouldn't have modern prosperity, we wouldn't be able to travel. And the same considerations motivate larger population sizes in the present as well. In fact, the prospect that population is gonna peak around 2050 and then decline after that poses, I think, real challenges for us as a civilization.
1:34:27.0 WM: The standard economic models would say that the technological progress we're used to would actually stagnate, and I think that could be a very bad thing. It could increase extinction risk, could turn the world into a kind of zero-sum affair. And then the final consideration is just the benefit to the kids themselves. If you can be a good parent and bring up your children to have sufficiently good lives, that is a good thing from the perspective of the kids. I'm happy I was born and got a chance to live. And it's a good thing from the perspective of the world. So I think the decision whether to have kids or not is a deeply personal matter, certainly not something that the government should be involved in trying to legislate or dictate. But is having children and bringing them up well one way of being a good citizen and contributing to the world? I think, yeah, absolutely.
1:35:22.9 SC: Well, I understood the logic behind everything you just said without necessarily agreeing with it, but are you sort of wimping out there at the end? Shouldn't we, if we buy everything you say, just conclude that it is a moral imperative to have kids?
1:35:39.3 WM: Well, I think we have a moral imperative to try and do good things, to promote the good, and there are many ways of doing that. And for some people, perhaps the best thing is to have a family and bring up your kids well. There are other ways to contribute as well, which will segue me onto the other action [chuckle] points that I talk about in 'What We Owe the Future'. So one is donations: you can do enormous amounts of good by spending your money well. I set up an organization called Giving What We Can that encourages people to give at least 10% of their income to the charities they believe do the most good. That's a way that I think almost anyone can contribute to making the world better: you can pursue whatever career you're excited about, but if you donate 10%, you're having a really big impact. And there are organizations you can fund that are doing really important work to reduce the chance of the next pandemic, or safely guide artificial intelligence, or reduce the risk of war. The Effective Altruism Long-Term Future Fund is one place where one can give.
1:37:00.5 WM: And then a second thing is career choice. You spend 80,000 hours working in the course of your life, maybe even more. So I set up an organization called 80,000 Hours that encourages people to pursue careers that do good and gives advice on how you can have the biggest impact, because you can have an enormous amount of impact if you choose to use your career in ways that benefit the common good. And so I would encourage people to see this as a menu of options. There are many ways to contribute to the world, and I think that having children should be one. If you want to have kids, great, go ahead; don't feel bad about that, at least if you're gonna bring them up well. If you don't, okay, great; there are other ways to contribute as well. Donate to really effective places. Voting is also something that's extremely important, especially politically well-informed voting. But most of all, if you can pivot your career to work on issues that are gonna be pivotal for the future of the human race, there's an enormous amount of good that you can do.
1:38:10.1 SC: I don't know if you know, but 80,000 Hours, the organization is a sponsor of the Mindscape Podcast, so I've been doing ads for them.
1:38:17.7 WM: Oh, I actually didn't know that, but it makes me really glad to hear, because I'm sure many in your audience are scientifically minded and altruistic people who I think could take up the mantle and really try to put some of these ideas into practice.
1:38:34.8 SC: I do wanna give you the chance to respond to what I think is maybe the most common worry about long-termism, and you sort of came close to talking about it there. If I really thought that, let's say, there was a 1% chance of a pandemic that would wipe out all of humanity, and maybe I do think that, I'm not sure what my credence is, then wouldn't I be tempted to give all of my disposable income to fighting against that and not give any of it to helping starving children around the world? Is there a danger that being long-term in our thinking gets in the way of doing something immediate and tangible and good right here and right now?
1:39:19.5 WM: So this is the trade-off we face. And I call this the horror of effective altruism, the absolute horror of the world we are in at the moment. You, now sitting [chuckle] in your office, and me, now sitting in my office, live in the mother of all philosophical thought experiments, and this same question applies even if you're just focused on the present generation. If I fund bed nets to protect children against malaria, that means I've not funded deworming tablets. If I fund malaria bed nets in one country, it means I've not funded them in another. Some people are dead because of your decision, even if you're donating just thousands of dollars. So anything we do has this unreal opportunity cost, and we need to just grapple with these hard tradeoffs.
1:40:07.0 WM: The next thought, going back all the way to the beginning of the podcast, is to think in terms of humanity as a whole, the point of view of the universe. For humanity as a whole, should it be all of my disposable income, or should I split it? Well, from the point of view of the world, that really is not a large decision; you are just ever so slightly changing the allocation of resources in the world. And how much money at the moment is being spent on these issues that protect future generations? My guess is it's something like 0.001% of global GDP. And what percentage should it be? I don't know. [chuckle] Should it be 1%, should it be 10%? I honestly don't know. I also just honestly don't know at what point long-termism stops being interesting as a framing at all, and instead the thing that's best for the long term is just exactly the same as the thing that's best for the short term, which is building a flourishing society.
1:41:07.1 WM: At the moment, there are more targeted things that are still very good for the near term, even if they're not necessarily the very best for the near term, like preventing the next pandemic, like avoiding dangerous biotech. But what I am saying is that, for the world as a whole, yes, I would love there to be radically more spending to improve the lives of the very poorest people in the world, but I would also love there to be an awful lot more attention paid to issues that impact the entire future of our civilization, starting from this incredibly low baseline that we currently have.
1:41:48.1 SC: That's a very, very good place to end, I think. There's no reason to think that we can't do both of those at once, helping ourselves now and the future. So Will MacAskill, thanks very much for being on the Mindscape podcast.
1:42:00.2 WM: Thank you so much for having me on.
[music]
William MacAskill has created a great deal of buzz about his approach to "longtermism", which is a naive new moral philosophy that values unborn generations far into the future and attempts to influence the current actions of living beings in favor of increasing the "well-being" of generations that don't yet exist and may never exist. This approach, which is essentially a consequentialist utilitarian approach, asks us to make decisions by taking into account the effects on future generations. MacAskill bases his approach on the idea that he knows what is best for everyone who will live in the future, based on the utilitarian fantasy that you can measure well-being, add it up, and then do what maximizes most people's happiness. It is remarkable to hear an argument for a greater good for future generations when utilitarians have been wholly unable to demonstrate a greater good or workable ideal system for those living now. Of course, the slippery and easily manipulable concepts of "well-being" and "greater good" are completely subjective and utterly lack objective grounding. One person may be ecstatic going to the opera once a week; for another that would be like being burned in hell. As we all have different and contradictory ideas of what would make us happy, it takes monumental moral hubris to say that we should decide what is best for future generations when we are so far from being able to agree on what is best for our own.
MacAskill starts by saying he wants to look at morality from the point of view of the universe, when there obviously is no point of view of the universe. The universe does not care what human beings do, and human beings really only care about their own subjective needs, desires, and the things they value. While it is obviously most people's view that we would be better off without immediate worldwide thermonuclear war, we don't need longtermism to tell us that. And it is amazingly arrogant for MacAskill and other so-called "effective altruists" to tell the rest of us how we should spend our money. If people want a new house or car and don't want to contribute to poor people in Afghanistan, that is entirely their right, and we don't need naive busybodies to tell us how to spend our own money or what we should do to worry about future generations. I can just see the expression on Julius Caesar's face when some effective altruist tells him that he shouldn't invade Gaul because it might cause harm to future generations, or changes to the maps of Europe that might hurt someone in two thousand years. He would probably just smile briefly before decapitating the speaker with a quick swipe of his sword.
Let’s stick to the here and now. We have enough to worry about for the living to keep us fully occupied for our lifetimes.
Going to the opera? This is your example of maximizing the "greater good"? Spending your money on a new house instead of helping other people to minimize their suffering, and you talk about hubris. Lol
William MacAskill is a young man who defends utilitarian consequentialism, holding firm to his arguments until the end of the episode, notwithstanding the "pressure" from Sean Carroll.
I congratulate him. Not that I am, or think, like William MacAskill, but because he is and thinks that way, and, I believe, manages, with integrity, to apply those ethical beliefs in the real world.
It is my opinion that thinking and acting for future generations so far off in time does not make sense.
But he presents his arguments for it!
Also, "being good, ethical, moral" is subjective.
Ethical beliefs can have a debatable meaning.
For a better world, in the present and in the future, there should be more young people like William.
Thank you.
Would an authoritarian world government be much less risky for long-term survival than competing short-termist democracies rushing to exploit the world's resources as fast as possible and get one up on each other?
And if we are aiming for a long future of humanity, there is no need to rush and risk overloading our planet in the short term.