Technology is changing the world, in good and bad ways. Artificial intelligence, internet connectivity, biological engineering, and climate change are dramatically altering the parameters of human life. What can we say about how this will extend into the future? Will the pace of change level off, or smoothly continue, or hit a singularity in a finite time? In this informal solo episode, I think through what I believe will be some of the major forces shaping how human life will change over the decades to come, exploring the very real possibility that we will experience a dramatic phase transition into a new kind of equilibrium.
Support Mindscape on Patreon.
Here is the 2024 Johns Hopkins Natural Philosophy Forum Distinguished Lecture, given by Geoffrey West:
From that lecture, here is a phase-transition singularity, in this case in the heat capacity of propane as a function of temperature:
And from the paper "The Trajectory of the Anthropocene: The Great Acceleration," by Will Steffen et al., here is an image of various trends in Earth-system quantities and socio-economic quantities, where you can see how things are rapidly accelerating in the years 1750-2010.
0:00:00.4 Sean Carroll: Hello, everyone. Welcome to the Mindscape Podcast. I'm your host, Sean Carroll. And as I'm recording this in March of 2024, a few days ago, Vernor Vinge passed away. You might know Vernor Vinge was quite a well-known science fiction author, the author of A Fire Upon the Deep and other novels. Basically, his favorite thing to do was to take technology and to extrapolate it, to imagine technological innovations far beyond what we have in the present day, and then to think about the implications of those technological innovations for humanity, human behavior in society and life and so forth. Yes, something that science fiction has always been very good at. In fact, even if you've never read any of his books, you might be aware of the impact of Vernor Vinge, because he was the one who popularized the idea of the technological singularity. A moment when advances in technology would become so big that a fundamental change would happen in the nature of human existence. He did not coin the term singularity, not quite, not in this sense. It goes back to John von Neumann, of all people. Maybe not surprising, actually, in retrospect.
0:01:08.5 SC: Von Neumann was one of the leading mathematicians and physicists and thinkers of the 20th century. And if you look up the Wikipedia page for the technological singularity, you will find something that I did not know, that it was first mentioned in a kind of offhand remark by John von Neumann talking to Stanislaw Ulam, mentioning that humanity was approaching an essential singularity in technological progress. Now, since then, this idea has been borrowed by others, most famously by Ray Kurzweil. And it has gained a little bit of, well, there's enthusiasm for it in some quarters, there's skepticism about it in other quarters. The specific version of the technological singularity that Vinge and Kurzweil were talking about, we don't know exactly what von Neumann was talking about, but Vinge and then Kurzweil were talking about a technological singularity driven by AI superintelligence. So the basic idea is at some point, artificial intelligence becomes so smart that it will be able to design even smarter artificial intelligences, and then you get a positive feedback loop and runaway growth, and eventually you hit a singularity where the growth is sort of effectively infinitely big.
0:02:28.7 SC: Many people, like I said, have been a little skeptical of this for a couple reasons. Number one, I will mention down the road in this podcast that this might be a slightly overly anthropocentric view of what artificial intelligence is and what kind of intelligence it has. But also number two, because the actual data, the actual evidence in favor of this idea was always a little dodgy. Kurzweil in particular was very fond of plotting things, like the number of technological breakthroughs over time, and it was not clear how objective those plots were. Number one, it's not clear why that would matter if it's eventually AI that is going to do the transitioning. Number two, it's not clear how to count what counts as a technological innovation. Is every new iPhone model a technological innovation? It was just not very well defined and got a lot of hype. People always react against hype. And so it wasn't necessarily taken too seriously in a lot of quarters.
0:03:30.0 SC: Including in this quarter here at Mindscape World International Headquarters. I never really worried too much about the technological singularity. That was not my cup of tea. But recently we had a lecture at Johns Hopkins by Geoffrey West. Geoffrey West, you will all know, was one of the first guests on Mindscape. I just presume that every listener has listened to every back episode. Geoffrey was formerly the president of the Santa Fe Institute, and he's one of the leading figures in complexity science, a former particle physicist who switched to complexity when the Superconducting Super Collider project was canceled. And Geoffrey has studied scaling laws and networks in biology, but also in human systems. And he wrote a wonderful book called Scale that you can read, or you can hear about the scaling-law stuff in the podcast episode we did. So this lecture he gave was part of the Natural Philosophy Forum that we now have at Johns Hopkins. And one of the things the Natural Philosophy Forum does is host a distinguished lecture every year.
0:04:33.9 SC: So last year it was Daniel Dennett, another former Mindscape guest. This year it was Geoffrey West. And he talked about a lot of his usual stuff, but then he talked about something that I think probably I've heard him talk about before, but it didn't really sink in. You know, this happens. You can hear things, and you can understand them in the moment, and they don't really make an impact on your deeper thoughts until the time is right. And that's what happened with me. The thing he was talking about was essentially the technological singularity. He used that term. He mentioned the history of it, etcetera. But he had much better data than I've ever seen before. He was urging us to take seriously this idea of a technological singularity, but there were two things that, for me, made it much more persuasive than anything I'd heard before. Number one, like I said, he had better data. So he wasn't just plotting how many technological innovations there had been per unit time, but rather the pace at which innovations are adopted. You may have heard that when ChatGPT, the large language model from OpenAI became public, it was adopted faster than any other similar technology in human history, and Geoffrey showed data showing that this is a trend.
0:05:49.6 SC: That not only are we innovating, which we have been doing for a long time, but we are actually taking up and adopting those innovations faster and faster. So that was one thing that I thought was quantitatively a lot more objective and believable than what I had seen before. The other is that he wasn't talking about artificial intelligence that much at all. His story did not rely on any particular understanding of what it means to have artificial intelligence or what it might do. It was just that the pace of innovation is increasing, for plenty of reasons. The underlying causality is almost irrelevant. His point was that the data are pointing to something like a singularity. And Geoffrey, of course, is a well-trained physicist. He knows the math, and this idea of a curve, where you plot something versus time or versus some other variable, and the curve blows up at a finite point. At some moment in time, again, or some other variable that is changing with time, the curve seems to go to infinity. So it's of the form one over x as x approaches zero. Right.
0:06:58.0 SC: That is called a singularity in mathematics and physics. And where those things show up in physics, you might think of, in quantum field theory, there are infinities from Feynman diagrams, or in general relativity, there's a singularity at the center of the black hole, and indeed, those are examples of physical quantities becoming infinitely big, but then there's sort of ways to get around them. A much more relevant example is in phase transitions. So a phase transition happens when you have some underlying stuff, water molecules or whatever, and you change some external parameter, density or pressure, or again, whatever, and you measure different physical quantities in this substance as you're changing some overall parameter. And sometimes, well, there can be a phase transition, ice turning into liquid water or whatever, things evaporating. Solids, liquids, gases are the traditional examples, but there are others. And if you measure the right quantity, then at a phase transition, you can find that this quantity goes to infinity.
0:08:02.7 SC: And that's not crazy or ill-behaved, actually. Of course, it never actually reaches infinity because your measurements are not infinitely precise, and there's only literally one point of time or temperature or what have you where that would happen. So the real world always smooths things out a little bit. But the point is that you can still continue the behavior past the singularity in these phase transitions, right? Ice doesn't cease to exist when it melts, etcetera. What Geoffrey actually showed was propane and its heat capacity as a function of temperature. So if you go to the website, my website, preposterousuniverse.com/podcast, I will reproduce that graph that Geoffrey showed for the propane phase transition. Singularities in physically observable quantities are characteristic of phase transitions. So in other words, thinking like a physicist, two things are suggested. And only suggested, not proved or derived or anything like that, but suggested. Number one, we should take the possibility of a singularity very seriously. They happen in real down-to-earth physical systems.
0:09:15.0 SC: And number two, they might be harbingers of phase transitions. It's not that the system ceases to exist or blows up or self-immolates or anything like that. It's just that it changes in a dramatic way to a different kind of thing. You can think of it as there being a sort of equilibrium configuration of the stuff on one side of the phase transition and a different kind of equilibrium configuration on the other side. So, this discussion that Geoffrey had really made me think, like, oh my goodness, maybe this is actually worth taking seriously. That's what we're going to do today. This led me to do this podcast. So, I'd already had the idea of doing the podcast after Geoffrey's talk, before I knew that Vernor Vinge had passed away, but it's now even more appropriate. This thing is very hard to think about. What we want to think about is future technological innovations and changes. We've talked about such possibilities in the podcast many times in various different modes, but it's hard to be comprehensive.
0:10:15.6 SC: It's hard to put them together, to see how different kinds of changes and innovations can affect each other and so forth. It's also far too easy to be extremist, to wildly over-extrapolate what's going to happen and lose your sense of accuracy and proportion. And it's also far too easy to be sanguine, to say, you know, there have always been alarmists and people saying the sky is falling and it doesn't happen, so I can just ignore this. I think we have to be responsible. Maybe this is all wrong. Maybe there's no phase transition coming. Maybe the rate of innovation will appropriately slow down, or it will continue but we'll handle it in some not very dramatic way. But if you take the numbers seriously, then at some point in the future, 50 or less than 100 years from now, we are in for a shift. We're in for a different way of living here on Earth as human beings. So I'm not an expert on this. I've been doing the podcast for a long time. I've talked to a lot of experts on different things. So this is going to be my untutored, semi-educated reflections and musings on this possibility.
0:11:24.8 SC: Think of it more as an invitation for you to think than as anything like a true, high-credence set of predictions, okay? If you don't believe me, that's 100% fine. I want us all to be contemplating these possibilities. They seem to be important. They seem to be things that we haven't thought about. I'm not going to say we haven't thought about them a lot, because plenty of people have thought about them. I don't think we've thought about them seriously and responsibly enough. So this is an invitation to do exactly that. Let's go.
[music]
0:12:16.9 SC: I thought it would be good to kind of get our bearings by remembering the story of human history, as it were. I am not an expert in this, as you know, but it's important to recall the very basic parts of this story that we probably are all familiar with. At some point in the development of Homo sapiens, we developed language and symbolic thinking, maybe 100,000 years ago, something of that order. That gave us a way of sharing information with our fellow Homo sapiens that let us do things like cooperate, build on previous knowledge, and culturally pass down what we have learned. Over the course of time, that led to the innovation of agriculture. We went from a set of hunter-gatherer societies to mostly agricultural ones, and that opened up possibilities for specialization. Not everyone had to do the same job. And that opened up the possibility of social structure, for better or for worse. Different people having different roles in the community. Note that people are not necessarily happier in agricultural societies than in primitive hunter-gatherer societies.
0:13:32.0 SC: This is something that anthropologists and historians debate about. Arguably, you have more free time in a hunter-gatherer society. Almost certainly, there is more inequality once you go to the agricultural model but maybe you also, on average, have a higher standard of living, maybe a little bit more reliability in your food supply, things like that. But we're not here to judge. That's not the goal. The point is that these agricultural societies with more specialization open the door to different kinds of innovation. And innovation, as a word, is usually attached to scientific or technological, engineering, invention kinds of things, but, there are also innovations in philosophy, in politics, in art, and so forth. And these kinds of things can begin to flower about 10,000 years ago once we invented agriculture. There's a positive feedback loop, as we mentioned before. The population grows because you have more agriculture, more food, and things like that. And then you get more innovation because there are more people.
0:14:40.9 SC: One of the things that Geoffrey West and his collaborators have shown is that, of course, there's more innovation in cities than in rural environments just because there are more people, right? But in fact, the amount of innovation scales super linearly with population density. So in other words, not only do you get more innovation in cities because there are more people, but there's more innovation per person, presumably because the people are interacting with each other, sharing ideas, and things like that. So the rate of innovation speeds up as these transitions begin taking place. But it's also super important to note that the whole thing takes a lot of time. When you're thinking about social structures, innovation, things like that, the space of possibilities, the space of possible inventions or philosophical ideas or whatever, artistic forms, is hugely large. So even if things look like they've been more or less the same for 100 year period, there can actually still be very, very important changes going on. And we see this because, of course, eventually we hit the scientific revolution, industrial revolution, renaissance, enlightenment kind of era.
0:15:58.5 SC: Where things change once again pretty dramatically. So, you know that population has been going up on Earth for a long time. There are details about what it's doing right now, but historically population has been rising since we had this agricultural shift. But the rate at which population is growing has not been constant. We're all familiar with exponential growth: if there's some time constant over which a quantity gets bigger by a certain multiplicative factor, so if you multiply something by two every so often, then you will grow exponentially. And that might look like what population is doing, but the rate of population growth has not been constant. Between the birth of agriculture and the scientific revolution, it grew, but it's been growing much faster since the scientific revolution. This is a sign that something is going on, something more than simply a constant rate of growth.
0:17:00.1 SC: So, hand in hand with the scientific revolution, industrial revolution, etcetera, we get democracy, open societies, cities, all feeding into this culture of innovation. One thing to note, as we're just getting things on the table to remember as we go through this journey, is that it's very, very hard to start with some observation of what is happening and to naively extrapolate. Or rather, sorry, I should have said the opposite of that. It's easy to extrapolate, but it's almost useless. It is incredibly dangerous to extrapolate. We famously have Moore's Law, for example. Moore's Law says that the number of components in a computer chip or the equivalent doubles every, I forget, 18 months, something like that. And that's exponential growth right there. And so exponential growth happens for various kinds of processes. You can extrapolate into the future on the assumption that the exponential growth will continue. Crucially important is that exponential growth never truly continues. There's nothing in nature that grows exponentially forever with the possible exception of the universe itself.
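The Moore's-law example can be written down in one line: a quantity that doubles every fixed period T grows as N(t) = N0 · 2^(t/T). This is a minimal sketch; the 18-month doubling time is just the rough figure quoted above, not an official constant:

```python
# Exponential growth with a fixed doubling time, Moore's-law style.
# N(t) = N0 * 2**(t / T), where T is the doubling period in years.
# The 1.5-year (18-month) figure is the rough number quoted in the episode.

def doubling_growth(n0, t_years, doubling_years=1.5):
    """Value after t_years, starting from n0 and doubling every doubling_years."""
    return n0 * 2 ** (t_years / doubling_years)

# Fifteen years is ten doublings, so growth by a factor of 1024:
n = doubling_growth(1_000, 15)
assert n == 1_000 * 2 ** 10
```

This is exactly the kind of curve that looks unstoppable when you extrapolate it, which is the point of the warning that follows: the formula has no knowledge of finite resources.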
0:18:09.5 SC: Because here on Earth, there is a finite amount of resources that we can use. Or if you think, well, we'll go into space someday, that's fine. Maybe we will. In the observable universe, there is a finite amount of resources. When I did my little chat about immortality at the end of last year, I pointed out that low entropy is a finite resource. And there's no way to just get that to go on infinitely. I'm not really talking about these cosmic timescales right here. I'm just pointing out that something can temporarily be exponentially increasing, but have a very different future history. If you look back at the COVID pandemic discussion about the rate of growth, where we wanted to get the rate of growth down low enough that we could handle the pandemic, that rate is calculated assuming that at this instant of time things are growing exponentially. But if you actually look at the number of cases over time, we've all seen these peaks and valleys and so forth.
0:19:09.6 SC: It's a curve, but it is not an exponential growth curve, okay, because many other factors kick in. So extrapolating on the basis of a current rate of growth is always incredibly dangerous. That's not to say growth will always slow down. It could. You might have something that looks very much like exponential growth in some quantity right now, but really it is what is called a logistic curve or a sigmoid. It's going to exponentially grow for a while, but it's going to turn over and flatten. In other words, it's extrapolating or interpolating between one almost constant value and a different almost constant value. That kind of behavior can look perfectly exponential for a while. But there's also the possibility of growth that is faster than exponential. So a singularity is mathematically described by, like I said, something like 1 over x. If x equals 0 is in the future, if we're in the negative-x regime right now, then that rate of growth is faster than exponential. It's not just a constant rate of growth, but the rate of growth itself is increasing.
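The three behaviors being contrasted here, exponential, logistic (sigmoid), and hyperbolic growth that blows up at a finite time, can be sketched numerically. All the constants below are illustrative choices, not fits to any real data:

```python
import math

# Three growth laws: exponential, logistic, and hyperbolic 1/(t0 - t),
# the last of which diverges at the finite time t0. Illustrative constants.

def exponential(t, rate=1.0):
    return math.exp(rate * t)

def logistic(t, carrying_capacity=100.0, rate=1.0, midpoint=5.0):
    # Looks exponential early on, then flattens toward the carrying capacity.
    return carrying_capacity / (1.0 + math.exp(-rate * (t - midpoint)))

def hyperbolic(t, t0=10.0):
    # Faster than exponential: blows up as t approaches t0 from below.
    return 1.0 / (t0 - t)

# The logistic curve saturates: it never exceeds its carrying capacity...
assert logistic(20) < 100.0
# ...while the exponential keeps going:
assert exponential(20) > logistic(20)
# And the hyperbolic curve outruns everything as t approaches t0:
assert hyperbolic(9.999) > 100.0
```

The practical difficulty mentioned in the episode is that over a short, noisy stretch of data, all three of these curves can look nearly identical; they only separate dramatically near the turnover or near the pole.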
0:20:14.5 SC: That's another kind of thing that could be happening and it can be very, very difficult to tell just on the basis of some finite piece of noisy data whether you're seeing a sort of pole singularity growth. These are called poles in physics, also in math, or you're seeing exponential growth or something like that. So it's interesting, well it's possibly interesting, it might be completely trivial, but it's interesting to note that when John von Neumann made his offhanded remark about the coming singularity in human development. He used the phrase essential singularity. And he might have just been speaking casually or he might not even been speaking in English. I'm honestly not sure. But he was a very good mathematician and the phrase essential singularity has a precise technical meaning in mathematics. It means the singularity is essential in the sense that it's sort of uncontrollably fast. It is faster than 1 over x or 1 over x squared or anything like that as x goes to zero. An example of a singularity would be an exponential of 1 over x, right?
0:21:26.7 SC: E to the 1 over x grows faster than any power of 1 over x as x goes to zero. That's an example of an essential singularity. I don't know. One of the things that Geoffrey West points out is that this little offhanded remark by von Neumann was never elaborated upon. We don't really know what he was thinking. It's uncharacteristic of him. He was very careful to write things down and to expand upon them. So Geoffrey says that, in some sense, the work he's doing right now can be thought of as filling in the mathematical details there. But none of that is really super important for the current discussion. What matters is that it is completely possible in these mathematical characterizations of various curves of growth and so forth to apparently reach an infinite value in a finite time. And that is the sign of a singularity or a phase transition or something like that. And I'm going to depart a little bit from what Geoffrey West actually said in his talk. If you haven't already seen it, I will link to it at the blog post for the podcast on the podcast web page.
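A quick numerical check of the essential-singularity claim: as x approaches zero from above, e^(1/x) outruns any fixed power of 1/x. The power n = 10 below is an arbitrary illustrative choice; any fixed n shows the same behavior:

```python
import math

# As x -> 0+, exp(1/x) grows faster than any fixed power of 1/x.
# We check the ratio exp(1/x) / (1/x)**n for a fairly large power n:
# if exp(1/x) were only a power-law singularity, this ratio would settle
# down; instead it explodes.

def ratio(x, n=10):
    return math.exp(1.0 / x) / (1.0 / x) ** n

# As x shrinks toward zero, the ratio keeps growing:
assert ratio(0.1) < ratio(0.05) < ratio(0.02)
```

That is what "essential" means in practice: no pole of the form 1/x^n, however large n is, can keep up with it near the singular point.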
0:22:32.7 SC: Geoffrey's argument is that we can kind of avoid the singularity by continually innovating. That what we need is faster and faster innovation and of course also the ability to deal with those kinds of innovations. And he can see in previous data times when human behavior has shifted in one way or another, from one kind of mathematical extrapolation to a different kind and says, well, we can do that again. And so it's not actually going to be the end of anything when we hit this singularity. So he shows us the plot of propane and the heat capacity of propane going to infinity, going through a phase transition and then finding a new equilibrium, but he's suggesting that that's not actually what will happen. And he does that because you can't extrapolate. What you need is some kind of theory. You need a mechanistic understanding of why these various quantities are growing at whatever rate they're growing at. And he based a lot of his discussion on work by Will Steffen, who coined the term the great acceleration. It's not just one quantity that is growing very fast.
0:23:42.7 SC: Steffen points out that there are a lot of quantities. I'll give a link to that also on the web page. So what Geoffrey is saying is that even though he shows the phase transition plot, he thinks that we're not necessarily headed toward that, even if we are headed toward a singularity. There are other ways of dealing with it. I don't know about that. I don't claim to understand Geoffrey's underlying mechanistic theory. A lot of it he hasn't published yet, etcetera. So I actually am quite open to the possibility that it is a phase transition. I'm a big believer in phase transitions, by which I mean the social or political or societal or economic equivalent of the atoms or molecules in a substance having different macroscopic properties, different emergent properties in the macroscopic realm, because of slightly different conditions, slightly different overall parameters governing how these microscopic pieces come together.
0:24:45.0 SC: So I think that's very plausibly what we're seeing. We're still human beings. The actual physiology and genetic makeup of human beings hasn't changed that much in the last 10,000 years. It's changed a little bit. It's not going to change that much by natural causes over the next couple hundred years. But we can absolutely interact differently, and it's pretty clear that we're beginning to interact differently than we used to do. So the kind of background idea I have is that if there is going to be a singularity, let's imagine that there is a different kind of equilibrium on the other side. The phase transition singularity that we're approaching will not be the end of the world, necessarily.
0:25:33.1 SC: That's one possibility. I'm not going to really worry about existential risks, though those are real. Nuclear war, biological warfare, pandemics. There's a whole bunch of actually real worries to have, but that's not my intention to think about right here. I'm thinking about, given all these technological changes, can we settle into some quasi-static new mode of living? It might be worse. It might be better. But we should at least think about that possibility. Again, none of this that I'm talking about in this solo podcast is highly rigorous, super-researched, or anything like that. I'm trying to make you think about it. I'm trying to get my own thoughts in a slightly more systematic fashion and inspire you to carry it on from there. So here we go. Again, in thinking about a new equilibrium, the word equilibrium is not accidental. An equilibrium doesn't just mean that you've settled into some particular mode. It means that there is some stability in that mode that you've settled into. The word equilibrium started in physics, in thermodynamics. You have thermodynamic equilibrium.
0:26:45.1 SC: Two objects that are at different temperatures, when you bring them together, will settle down to a common temperature and come to equilibrium. It also appears elsewhere, like in game theory. You have Nash equilibria in game theory, where the different players of the games all have a strategy and they can't individually change those strategies to get better results. They're in equilibrium. So that's the important thing. There's nothing that any individual or any part of the system can do to make things better for themselves, whatever is meant by better. And I'm revealing a personal opinion that I have here that other people might disagree with, which roughly translates into saying that values of individuals all by themselves don't matter that much. So, in other words, I'm saying.
0:27:34.9 SC: Encouraging individual people to behave in a certain way is not really going to drive the overall shape of society. You can tell people to eat less meat or use fewer grocery bags or whatever, but these are largely symbolic gestures. If you feel better by doing them, that's great. I do think values can matter, but only when they get implemented as large-scale social constraints. Whether those are literally laws, where you can't do this or you get arrested, or maybe tax policy, where certain behaviors mean you have to pay more money, or maybe institutions or whatever. But the way that I think about it, which I think is pretty robust, is that given the large-scale constraints, individuals are going to largely pursue their self-interest.
0:28:28.1 SC: Okay. I'm not characterizing what I mean by constraints perfectly, because it's not all laws and regulations. You can have broad-scale social understandings that are not formally written down, but you need some agreement. You need some consensus. Otherwise, these understandings have no oomph. They're not true constraints. They're just, again, making individuals feel good. As an aside, this makes me very sad that in current discourse between and among people who agree and disagree with each other, you don't see much attempt to persuade other people to your side. Most of the people I see are just making fun of, or arguing with, the people who disagree with them. It's hard to make large-scale changes that way. You need everyone to agree, or at least a lot of people to agree, to really change the social system as a whole in a way that would lead us to a better equilibrium. You can't just take the people on your side and fight. You need to actually change the minds of people on other sides, and that's something that doesn't happen a lot these days. And maybe that's part of the technological world in which we live, that certain things are incentivized and certain things are not. Okay.
0:29:46.5 SC: That's the background, that's the throat-clearing, telling you what my particular perspective on these things is. So now let's talk about technology and the changes that we are facing. And there are many of them, and I'm not going to go through all of them. Again, super non-systematic here. But let's talk about three aspects, and again, very quickly, superficially. One aspect: the environment. Energy consumption, climate change, things like that. Another one: the sort of biological ways that technology is changing our lives, whether it's synthetic biology or gene editing or whatever. And then finally: computers, artificial intelligence, those kinds of information and electronic technology things that we're also very fond of. I think all of these matter. So this is why it is not just a recapitulation of the Vinge/Kurzweil kind of AI-superintelligence-driven technological singularity.
0:30:42.4 SC: I don't think that's the point, but I think lots of things are happening. So that's the place that I've come to temporarily. Probably I'll change my mind about all these things before too long, but here's where I am right now. So let's think about the environment, sustainability, energy sources, things like that. This is a little different than the other ones because it's more a story of gloom. The environment is something that changes. You know, we shouldn't get into the mindset that there is a right way for the environment or ecology, or the biosphere for that matter, to be. Change is very natural, but we, the human race, are causing changes in a highly non-reflective, non-optimal way. We are making things worse.
0:31:33.5 SC: Change is not the problem. The problem is that we are clearly hurting the environment in very tangible, quantifiable ways. So climate change is clearly getting worse, and it's getting worse faster. You know, that's the recent news. The people who want to deny the reality of climate change have long pointed to the difficulty of modeling the climate. They say, these climate models are not reliable, blah, blah, blah. And I get that. It is very, very difficult. Again, much more difficult than theoretical physics. The climate is a paradigmatic complex system. There's a lot going on, a lot of different forces at work. But the empirical fact seems to be that if the climate models that we've been trying to work on for the last several decades are wrong, they're wrong because the reality is worse.
0:32:31.7 SC: Than what the models have predicted. Especially in this year, 2024, all the global temperature indicators are higher than we expected them to be. On the other hand, there are small signs of hope. We've talked about these issues on the podcast before. We talked about actual climate change with Michael Mann and the problems there. But we also talked with Hannah Ritchie relatively recently about hopeful prospects, mostly for cleaning up the environment rather than combating climate change. But Hannah's point was, you can't just become passive and full of doom. You have to keep hope alive. You have to say, okay, but what can we do?
0:33:17.7 SC: And you have to remember that there is evidence that things can be done, progress can be made. Specifically, when it comes to energy and renewables, we had a podcast quite a while ago with Ramez Naam, where he talked about the absolutely true fact that progress in renewable energy has been moving faster than we expected it to. As dependent as we currently are on fossil fuels of various sorts, there are alternatives that are becoming very realistic and are being implemented. And something that is always true in these discussions of rapid change is that there can be competing influences, both of which are rapid. And there can be a race. So I forget who mentioned this. I always like to try to give credit to people. Someone pointed out to me recently, it might have been Chris Moore at SFI, that we are getting better at things like solar and wind power, but maybe not so fast that people are ready to wait until we are completely converted to those kinds of energy generation. And if they're not, they might say, well, let's build some more infrastructure to burn some more fossil fuels, whether it's natural gas, fracking, whatever it is.
0:34:38.1 SC: And then once that infrastructure is there, we're going to be using it for the next 40 years. So there is a race on to see whether or not we can resist the temptation to just burn through more fossil fuels and make the climate even worse. But there's the possibility of doing better. There has certainly been a relatively legitimate worry that the only way to cut greenhouse gas emissions would also be to slow economic growth. There, the evidence is quite optimistic. Namely, in many countries around the world, the rate of economic growth has become decoupled from the rate of greenhouse gas emissions. In other words, there are many countries out there that have been lowering their CO2 and other greenhouse gas emissions while nevertheless growing economically. So it can be done. That is a little sliver of hope. It doesn't say we will all choose to do it. It's not necessarily the biggest, the worst perpetrators that are lowering their CO2. And I always like to bring up one of my favorite podcasts that we've ever done, which was with Joe Walston, a conservation scientist who tries to preserve various species. And he gives a sales pitch, and also a sort of prognostication, for urbanization as a phase transition. He notes, again, something that the data are pretty clear about: that living in cities is better for the environment than scattering the human race across rural and suburban places to live.
0:36:12.5 SC: You might visualize cities as having factories and pollution and things like that, but per person, it is way more energy efficient to have people live in cities. We don't use as much land. We don't use as much fuel to heat our houses, because we're living in group buildings and things like that. We don't need to drive as far. So there are many reasons why cities are better for the environment if you have the same number of people. And the good news is, the world is urbanizing. So Joe Walston suggests another glimmer of hope: that we're entering a new kind of distribution of humanity, where the vast majority of humans live in cities, and there are some who are still out there on the farms, living in the country, and that's fine. And this is not driven by rules. This is not a communist dictatorship telling you where to live. This is people choosing to live in cities at unprecedented rates. And if that comes true, then we can envision at least a future equilibrium where we live sustainably on the land, where we don't ruin the rainforest for beef or things like that, but have other ways of getting our food supply and so forth. Throughout all of this discussion, I have no agenda, really. I'm not trying to convince you one way or the other. I'm exploring the possibilities, and I think one of the optimistic future possibilities comes from urbanization.
0:37:39.5 SC: I think that for a lot of reasons, cities are good if we do them right. But having said that, I think, again, the data are speaking very clearly that at the moment we are destroying the earth. The climate is getting worse. There are positive feedback mechanisms that are making it get worse faster. And the upshot of that is that I don't think it's an existential risk. Existential risks are defined as those that literally speak to the end of humanity as we know it. I don't think it's like that. What I think is that climate change will lead to enormous suffering as well as enormous economic costs. So that's bad. I don't think that it's going to lead to the extinction of the human race, but it will absolutely lead to the extinction of other species. It will change the biosphere in very, very important, somewhat unpredictable ways. And it will render much of the land that a lot of people live on now unlivable. It will completely change habits of farming and food production. Guess what? Poor people will be hurt disproportionately compared to rich people. But even the rich people will suffer, because it will just cost enormous amounts of money. We are going to lose enormous amounts of human wealth because of climate change.
0:39:11.6 SC: That's bad. It doesn't have to be an existential risk to be bad. And I think we can recognize that it's bad, and we should be very, very motivated to do what we can to prevent it. But there is, like I said, still hope for stabilizing things in the future, not even counting clever scientific possible solutions. Can we terraform our own planet? Can we do things to the atmosphere that will undo the effects of dumping fossil fuel emissions into it forever? I don't know. I know people get very emotional talking about these things. But I think that medium term, things are going to get noticeably worse for the climate than they are right now.
0:39:55.1 SC: Long term, we will survive, possibly at a different equilibrium. And our job is to make the transition, to give us a soft landing, to make the whole thing as painless as it possibly can be. So, good. That's all I have to say about the environment and climate change and things like that. Nothing profound. I know that. That's why I wanted to get that out of the way first, because on the one hand, it's super important. On the other hand, you've heard this message before, so there's my version of it. Let's move on to biology, because here's where I think we should, as a society, be paying more attention than we have to what advances in our knowledge of biology, and our technical abilities to manipulate biology, are going to do for what it means to be a human being. And we've talked a little bit about this set of things on the podcast, but maybe not as much as we could have.
0:41:00.9 SC: So I'll just mention a few things to keep in mind when we ask ourselves these questions. One is longevity. We did have an early podcast with Coleen Murphy, who is one of the world's experts on this, and she has subsequently come out with a book on longevity that you can buy. And I think that there are mixed messages. On the one hand, when you look at little tiny organisms, not just microorganisms, but little tiny worms and things like that, there are remarkable things you can do by playing with the DNA of these organisms. You can make them live much longer than they ordinarily would. But those particular kinds of changes don't obviously scale up to mammals, including human beings. And it's an interesting situation, because there's no rule out there in the laws of nature that says you can't stop or reverse aging. It's an engineering problem, as we theoretical physicists like to say. But it's a very, very hard engineering problem.
0:42:04.9 SC: So, for example, if you track the average lifespan in civilizations or societies as they become more technologically developed, the average lifespan tends to go up. So you tend to think we're living longer and longer, and that's a trend that will continue. But if you dig into the data a little bit, the maximum lifespan of human beings hasn't actually changed that much. Think of it as 120 years or something like that. The people who live the longest have been living that long for a long time, regardless of what kind of society they're in. The reason why our average life expectancy is going up is because people aren't dying young nearly as much. We are living, on average, closer and closer to that upper limit. But changes in diet and exercise and medical knowledge haven't really increased the sort of envelope, the cutoff, for how long human beings can live.
0:43:08.7 SC: So in the spirit of taking changes that are going on and imagining that they are indicating that we are heading towards some kind of major transition, I'm going to boldly predict that we are not headed toward a major transition in longevity. As I said, we could at some point do that, but I don't think that we're currently on that trajectory in the medium or short term to do that. I'm hoping that we will live healthier lives and more of us will live to be 100 or whatever, but I don't foresee a lot of people living to be 200 in, let's say, the next 100 years. I could be wrong, of course, very happy to be wrong about that, but I don't think that's where I'm going to bet my money for a major transition. There are other places to put your money for major transitions. One, of course, is gene editing.
0:43:55.8 SC: We did have a discussion of gene editing with Fyodor Urnov, one of the pioneers of this. And there's sort of a hype cycle in these kinds of discussions. When CRISPR first came out, and for that matter, when we first mapped the human genome, people started having panicked discussions. Actually, we talked to Alta Charo way back, very early in the history of the podcast, about the legal side of bioethics and gene editing. So people had these discussions about, are we worried that people are going to make designer babies and are going to sort of be mucking with our own human genome? And is that going to lead to some dramatic change, where everyone is going to be, I don't know, blond and blue-eyed or something like that? Or there will be all boys and no girls, or vice versa? There are a lot of reasons to worry. And some of those worries are just kind of stodgy conservatism. The human race has always been like this, therefore we should not mess with it. I don't buy that at all.
0:45:05.9 SC: You know, I think that if we gain the ability to look into the genetic information inside a zygote or embryo and realize that it's headed towards some terrible disease that we have the ability to prevent, then I think we should go ahead and prevent it. But more than that, it doesn't matter what I think should happen. What I think is that it's going to happen. So you can talk all you want about responsible limitations on what scientists can do and what doctors can do, whether or not couples can choose different features of their babies and so forth. I don't think that there's much prospect for any of those hoped-for restrictions working, because we don't have a world government that can enforce those restrictions, if nothing else. If one country says we're not going to do it, another country is going to do it. And then the first country is going to say, well, wait a minute, they're doing it. We better start doing it also.
0:46:08.7 SC: So I think we have to face up to designer babies. I think that they are coming. I don't think that can be stopped. And it's not just designer babies. I think that this sort of panic, worrying that people are going to choose a certain kind of child and we'll all become homogeneous and boring, etc., has again led us to not think very carefully and systematically about what the possibilities are. I think we should have more discussion of what the world could be like, and how the world could be good, if, when parents decided to have a baby, they could also choose its characteristics. Again, I'm not saying that this is what should happen. I'm just saying I think it's what will happen. I don't think that we have that much choice, because the incentive structure does not give me an easy route to imagine that the whole world is going to prevent this.
0:47:10.5 SC: And as Fyodor Urnov said, it's not going to be hard. You're not going to need a multi-million dollar laboratory to do this; you'll be able to do this in your garage. So I think the responsible thing to do is to think carefully about what we want those changes to be like. Even if we can't stop it, maybe we can stop abuses of it in some effective way. I don't know, but I do think it's going to be a huge deal, and I think we should be talking about it more. A related issue, which I think is going to be a huge deal, is synthetic biology. And we really haven't talked about that very much. It's appeared a couple of times in passing. But synthetic biology is not just mucking with the human genome or the genome of a sheep or anything like that; it is mostly designing new organisms, typically tiny microorganisms.
0:48:00.2 SC: So going in there and making a genetic code that creates the kind of organism that you want. There are related kinds of biological exploration. Since I'm not a biologist, I just mix them all together in my mind, even though the experts think these are very different. But consider DNA computers and DNA robots. DNA is obviously very useful to us. It carries our genetic information, et cetera. But there's a reason why that particular molecule is the one that works to carry information in living beings: it's because it's extremely flexible. Forgetting about the actual use of DNA as the carrier of genetic information, DNA is a great way to build things, microscopic, very tiny-scale objects that do things you want them to do.
0:48:51.2 SC: You can very easily imagine building little DNA robots that will go into a person's body and remove their allergies or prevent them from getting cancer or solve other health problems that could pop up. Synthetic biology could design organisms that could, again, help us with our health problems, but also maybe help eat the excess carbon dioxide that is in the atmosphere, or dramatically change how we do food production, both good old agriculture, making it more effective, and also synthetic meats, other kinds of food sources, and things like that. These are going to be huge deals. If you're talking about a technological singularity coming that is going to change human life, I think that editing our genes and synthesizing new kinds of organisms had better be right there near the top of your list. We talked to Lea Goentoro here on the podcast, a Caltech scientist who has regrown limbs, not in human beings, but in much tinier organisms.
0:50:02.1 SC: We are still in a world where a lot of people could use these dramatic improvements in our ability to control and shape biological function in ways that could help them, amputees or people who are suffering in various ways. This is really going to change what it is like to be a human being. I don't think that we will be uploading ourselves into the Matrix. The Matrix movie is going to appear a couple of times in this podcast. There was recently a little panic on Twitter because professors realized that their first-year college students had not seen The Matrix. They didn't know what it was about. And The Matrix, for people of a certain age, was a very formative movie. So I encourage you to go see it if you haven't seen it already. But you've heard the basic idea: people are uploaded into this computer simulation, and they think that it's real life. That's the Matrix. So there's both the real physical world, and then there's the Matrix, the simulation they're in, and it's all controlled by evil people and robots and things like that. So it's a fascinating set of philosophical questions, as well as a good movie. For various reasons, that is not the change in human biology that I'm actually thinking about.
0:51:24.0 SC: I'm not worried, nor do I gleefully anticipate, that people will upload their consciousnesses into computers. And the reason why is because I am a physicalist about consciousness. I think that you can make conscious creatures out of silicon and chips just as well as you can out of neurons and blood and tissue, but they will be profoundly different. If you take the information that is in your brain and encode it in some computer chip, you have removed its connection to your body. And what we think of as a human being is inextricably intertwined with its body. Our cognition is embodied, as we have talked about many times on the podcast, with Andy Clark, Lisa Aziz-Zadeh, and so forth. Our bodies are what make us human just as much as our brains.
0:52:22.4 SC: We get hungry, we get thirsty, we get tired, and eventually we die. Antonio Damasio, another person we talked to, talked about homeostasis and the feelings that we have, fundamentally physiological things that profoundly shape who we are mentally. And so it's not that we can't upload the information into a computer; it's just that it wouldn't be a person anymore. It might be something, but it would be different. And that's okay. It's okay for it to be different. So there might very well be creature-like things that we recognize as conscious who live in computers, but they won't be the same as human beings. They'll be something different. And that's okay. So I'm not suggesting that that's the big phase transition we are going to see in the future. But there will be brain-computer interfaces; this has been a hot topic lately in the news. Neuralink is Elon Musk's company, but there are actually lots of other companies that are further along in this search for ways to make human brains interface directly with computers. And in fact, that's part of a broader thing: making human bodies interface directly with machines.
0:53:38.4 SC: These are cyborgs, or some version of that, depending on how science-fictiony you want to sound. This is another technology that I absolutely think is coming and is going to be important. This is going to be a big deal. Think of it this way. Cell phones, smartphones, even personal computers, whatever you wanna call mobile information technologies connected to the internet, have already had a very big impact on human life. They've had an impact because poor farmers in Africa can keep track of weather conditions in ways they never could before, 'cause cell phones are pretty cheap. But also they're changing us socially. There's been enough data by now that I think it's accurate to conclude that cell phones have had a number of negative effects on the lives of young people.
0:54:32.5 SC: And of course, it's not the technology itself that does that, but the uses of the technology, whether it's because young people don't go out anymore 'cause they're just texting, or because they're seeing unrealistic depictions of beauty or whatever. I don't know. And this is a conclusion that I was always reluctant to buy into, because it sounds a bit alarmist and Luddite, et cetera. But again, I think the data are there. Cell phones have made young people, on average, less happy than they used to be. And that's not a necessary connection, obviously. This is a fixable thing. We are not yet at the equilibrium. We are in a moment of change, of dynamism. We haven't yet figured out how to do these things correctly, how to use these technologies in the best possible way. But my point is, whatever you think the cell phone has done, I think it's easily imaginable that brain-computer interfaces are going to be a hundred times more influential than that.
0:55:35.5 SC: Because we are embodied. Remember when we talked with Michael Muthukrishna about various things, one of which was the fact that human beings tend to offload some of their cognition. Chimpanzees think for themselves more than young human beings do, because human beings have been trained to trust other human beings, because we are not just our brains and our bodies. We can write, we can learn, we can teach, we can store information and then go access it. So we have not only cell phones, but we have watches and calculators and computers and things like that. We have writing and books, all this stuff. Our cognition, our thinking, happens in ways that extend beyond our brains and even our bodies. That's gonna explode. To whatever extent we're doing that now, we're gonna do it much, much more in the future, for better and for worse.
0:56:38.8 SC: This is not all good, not all bad. As speculative as I am trying to be here, I'm reluctant to predict exactly what those changes are going to be like. But look, you've all seen quiz shows, Jeopardy!, Who Wants to Be a Millionaire, where you're asking people various trivia questions and things like that. You could imagine that goes away, because everyone has instant access to the internet, and you can just Wikipedia or Google something right away in your brain without touching anything. And it's much more profound than that, of course. You can call up all sorts of pieces of information, not just Wikipedia. You can record things. Maybe rather than a camera in your cell phone, you just blink, and now you have a recorded image of whatever you're looking at right now, and you can store it and play it back, make videos, record conversations. How does this change learning? How does this change performance in all sorts of fields when we have much more immediate access to all sorts of information?
0:57:47.9 SC: Of course, there are much more down-to-earth and obvious impacts of these technologies, because again, some people are paraplegic or have locked-in syndromes of various kinds, where brain-computer interfaces can help them lead much richer, more interactive lives with everyone else. So I'm reluctant to predict what will happen. But again, there's no barrier to these technologies coming, and they are coming; there are startups doing them right now. So we should be thinking about them. We can't just say, oh, that would be terrible. I don't like it. I wanna live like we've lived for the last 10,000 years. I think we have to take seriously how those technologies are going to change things. It's gonna happen whether we like it or not.
0:58:34.6 SC: So I know that leaked into the sort of computer tech kind of thing, but basically that was my biology discussion. I think that there are arguably profound changes in biology that we have so far done not a great job of taking seriously, in terms of how they will shape our notion of what it means to be a human being over the next hundred years. But now, the moment we're all waiting for: what about AI? Or even more broadly, what about computers and information technology of all sorts? How will that change things? I think the usual way of thinking about AI is just a little bit wrong. I'm sorry, I still think it's a little bit wrong. I said this in my AI solo podcast, and some people, including, by the way, all of the AIs out there like GPT-4, agreed with me, while many other people disagreed with me profoundly, when I said that it's crucially important to recognize that artificial intelligences, as we currently have them implemented, have a very different way of thinking than human beings do. And what that means is, when you toss around ideas like general intelligence, you're kind of being hopelessly anthropomorphic.
1:00:03.9 SC: You're looking at what AI does. If Dan Dennett were here, he would explain that you have fallen victim to an overzealous application of the intentional stance. By the intentional stance, he means attributing intentionality and agency to things that behave in a certain way that we are trained to recognize as intentional, agential, conscious, cognitive, thinking. In our everyday experience, we meet human beings and other animals and things like that. And we know the difference between a cat and a rock; one is thinking and one is not. And so there are characteristics that we associate with thinking well and being intelligent. And it's a rough correlation, and it kind of all makes sense to us. And we can argue over the worth of IQ tests or standardized tests or whatever, but roughly speaking, some people seem smarter than others. So then we come across these programs, of which the leading ones are currently large language models, though there's no restriction that that has to be the kind of technology used going forward.
1:01:10.4 SC: The point is, there's a computer that is trained on human texts. It is trained to sound human to the greatest extent it possibly can, and it succeeds. That's the thing that has happened in the last couple of years: these large language model algorithms really can sound very, very human. And so, since all of our upbringing has taught us to associate this kind of speech, even if it's just text, with intelligence, we go, oh my goodness, these are becoming intelligent. And if it's becoming intelligent, and it's a whole new kind of intelligence, then it can become more intelligent than us. And then the worry is that if it's more intelligent than us, it will either be a superhero or a supervillain. So our very pressing duty is to guide AI toward becoming a superhero rather than a supervillain. And I don't think it's going to be either one, not in the current way that we're doing AI, anyway. Again, in principle one could imagine things along those lines, but I don't think that's where we're going right now. So I know that people are worried about artificial superintelligence, with the idea that once the computer becomes smarter than us, then we can't control it anymore.
1:02:35.3 SC: 'Cause if we tried to control it, it would resist, and it would trick us, 'cause it's smarter than we are. What can we do in the face of such overwhelming intelligence? And again, I think this is hopelessly anthropomorphic, in the sense that it is attributing not only the ability to sound human to these models, but the kinds of motivations and desires and values that human beings have. The origin of our motivations and desires and values is just completely disconnected from the way that these AI programs work. It is a category error. It is thinking about them incorrectly. They might very well develop very, very good reasoning skills of various sorts. After all, my cell phone is much better at multiplication than I am; I do not attribute general intelligence to it. My point is that even if they become better at abstract cognitive tasks, they won't be just like humans except smarter.
1:03:34.7 SC: That's not what they're going to be. They are different kinds of things, and I think that we have to be clear-eyed about what their effects will be. None of this is to say that the effects will not be enormous, and that's what I want to emphasize. I'm not worried about some kind of artificial intelligence becoming a dictator. I'm not worried about Skynet. I'm not worried about existential risks. I'm worried about the real influence that AI is going to have. Not worried, but thinking about the real ways in which real AIs are going to change how we live. I think those changes could be enormously big, even if the right way to think about those changes is not as superintelligent agents. I hope that that distinction is a little bit clear.
1:04:27.7 SC: So look, AI is gonna do many things. Many things that are now the job of human beings are going to be done by AIs. It's always amusing to take the current generation of AIs and see them making mistakes. Because they make mistakes; of course they do. The mistakes they make are mildly amusing. But it's kind of not the point. It's only amusing when they make mistakes because they are otherwise so clearly super-duper good at sounding actually really human. That's much more notable to me than the fact that they still continue to make mistakes. So things like writing computer programs, writing books, writing articles, designing buildings or inventions or chemistry processes, creating things, creating art, creating living spaces or whatever, doing architecture: for all of these things, in my mind, it is very natural to imagine that AIs are going to play a huge role, either literally doing them or helping human beings do them.
1:05:36.2 SC: Just to mention one very obvious thing, AI will be able to help human beings learn things that they didn't know. Not in any sort of simple-minded, let's-just-replace-all-professors-with-AIs way or anything like that. Why would you wanna do that? That's not the model you would choose. But you personally and individually can learn things with the help of AIs, once we clean up the obvious mistakes that they keep making, which is an ongoing project that might improve very rapidly for all I know. It will be enormously helpful; think about that. Again, it's slightly too easy to dwell on the mistakes, because there's a thing that's been going around on the internet recently of a cookbook, I don't know, you buy an oven or something like that, and this cookbook comes along with it, and it's clearly AI-generated and it's just full of nonsense.
1:06:37.5 SC: And we absolutely need to be worried that some AI-produced thing is gonna kill people, because it's not actually thinking in the same way we do, and it produces nonsense, and someone follows it a little bit too literally. I'm very much in favor of worrying about that. But it will also, more often than not, help you learn how to cook, or how to speak French, or how to ski, or how to do theoretical physics. There's no reason to think that AI won't be enormously helpful in that. It'll be enormously helpful in accelerating the rate of other kinds of innovations. So even if the traditional singularity spiel, which says AI becomes super smart and designs other AIs that become even smarter, even if that is not the right way of thinking about it, because the word smart is being misused in that context, AI will absolutely help accelerate the rate of innovation.
1:07:36.3 SC: When you're a chemist or a biologist or whatever, very often the systems you're thinking about are just so complicated that you have to take some stabs in the dark, or some educated guesses, and then run trials. Drug trials are something we do all the time. If it's possible to simulate those kinds of trials, you could in principle enormously speed up the process. All of these things, the discussion we just had about brain-computer interfaces, genetic engineering, synthetic biology: the rate of progress on those fronts can very possibly be enormously sped up using help from AI. So that is a kind of bootstrapping, positive-feedback acceleration of progress that is characteristic of this kind of singularity behavior. And whether or not you believe in AGI in the traditional sense, there's no reason to be skeptical about that kind of thing.
1:08:38.4 SC: So what is that gonna mean? How will the world be different when AI gets good at these things? Even right now, if you're a basketball fan like I am, and you look up a little recap of last night's games, chances are pretty good that that recap was written by an AI. And sometimes they're terrible. There's still the ability to find real human beings, and most of what I read is by human beings. But the simple-minded daily story from the Associated Press or whatever is often gonna be artificially created. So how far is that going to go? I asked this for my own thought-experiment purposes. I wondered, could AI replace me, in the sense of writing my books? I've written several books. Maybe it could do the podcast too, for that matter. But could AI do a good job of writing books in the mode or in the style of Sean Carroll, so well that I don't need to write them anymore?
1:09:40.7 SC: That is a crucially important, difficult, interesting, very near-term question. I think that is not a silly question. I did look. I looked on Amazon: are there any books currently being sold that purport to be by me, but are actually written by AIs? I couldn't find any. I guess that's good. I did find books that are written by AIs that summarize my books. So it's very possible that there are books trying to sound like they were written by me that just don't attach my name to them, that are a little more subtle than that. But if you search my name on Amazon, you find my books, you find books by former Mindscape guest Sean B. Carroll, the biologist, who's written a lot of great books. But you also find books with titles like Summary of The Big Picture. And sometimes these are written by human beings, but sometimes, very, very clearly, they're written by AIs. And you can tell. One way of telling is just to click on the Amazon reviews, and every review says, ah, this is clearly computer-generated, and it kind of sucks.
1:10:48.8 SC: But again, the day is young. The progress is still happening. So could you feed a model, a large language model or some improvement thereof, everything I've ever written, and have it write a new book? Maybe you give it a topic. Maybe you say, write a book about, I don't know. So Katie Mack, former Mindscape guest, wrote a great book about the ways the universe can end. I've never written a book about that. So you could ask the AI, what would a book by Sean Carroll about the ways the universe could end be like? And it could write a book. You could absolutely do it right now, and it would suck. It would not be very good at all. But imagine that it gets better.
1:11:37.2 SC: So again, I think that this is going to depend on technologies we don't quite have yet. Beyond the sort of obvious factual mistakes that AIs are still making right now, there is this difference between interpolation and extrapolation. AIs are good at seeing everything written and kind of going between those examples. And in things like art, this is very, very provocative, because you go between two different kinds of art and you get something that is kind of new. But when it comes to sentences, that's less true. If you have different sentences and you're sort of going in between them, which is again not the only thing AI can do, but a natural strength of large language models, you get something less interesting.
1:12:22.9 SC: Something not as provocative and creative as what you're looking for in a book. Extrapolating, saying, here's a sentence, here's a sentence, here's a sentence, the next sentence in a completely different area by the same person should look like this, that's much harder. It's harder to do that in a creative way, given the current ways that large language models and other AIs are constructed, because they're usually constructed to predict what comes next. And the fun part of a good book is to have what comes next not be that predictable. So there's a clear tension between what large language models right now are good at and what you want. But I don't think that's a tension that is impossible to resolve. Here's one way to do it. Throw in some random numbers. Imagine that we have enough computing power: just write a thousand books, and then search through and find the one that is most interesting and creative. That's something you could imagine doing, and that could extrapolate in very interesting ways.
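The generate-and-select idea here is essentially a best-of-N search. Here's a minimal toy sketch, not any real system: `generate` stands in for a model sampling with random numbers, and `score` stands in for the judgment that picks the good candidate, which is the genuinely hard part.

```python
import random

def best_of_n(generate, score, n=1000, seed=0):
    """Draw n random candidates and keep the highest-scoring one.
    'generate' plays the role of a model sampling with randomness;
    'score' plays the role of the judgment that picks the good book."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n):
        candidate = generate(rng)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

# Toy demo: candidates are random numbers and "quality" is the number
# itself, so a thousand draws almost surely lands near the top. That
# only works because the scoring function is trivial; scoring real
# books is exactly the judgment problem discussed later.
winner, quality = best_of_n(lambda rng: rng.random(), lambda x: x)
```

The point of the sketch is that the loop is cheap; everything interesting is hidden inside `score`.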
1:13:22.4 SC: Now, footnote, I should have said this earlier in the podcast, back up there when we were talking about the environment. One of the things you might have thought, if you were thinking 20 years ago about climate change and fuel use and so forth, is, well, maybe we'll reach a saturation point where we have a constant amount of fuel we need to burn. Maybe once everyone is flying and everyone has their car, we're not gonna need to continue to increase the amount of fossil-fuel consumption. Recent years have given the lie to that anticipation, if anyone had anticipated that, for the simple reason that we continually invent new ways to burn fuel, to use energy, and computing is the big one right now. Somewhere I read about what we call the cloud, where you store your files, your photos or whatever.
1:14:30.6 SC: I didn't exactly write this down when I read it, but either the energy consumption or the fossil-fuel emissions from just keeping the cloud going are larger than those of the entire transportation industry. We're putting an enormous amount of energy into running computers of various sorts, and large language models are some of the worst offenders. It's an enormous computational problem, and we would like to do more computation, and that's gonna take more energy. That's a problem. If we think that we are just at the beginning of the AI revolution, and of the various other ways in which computers are going to be used, just finding the energy to run them is going to be difficult.
1:15:29.8 SC: I just did the thought experiment of imagining writing a thousand versions of a new book by me, and then searching through and looking for the good one. That's gonna cost a lot if that becomes common to do. Now there's another problem, which is that at some point you're in Borges's Library of Babel. Remember, Jorge Luis Borges wrote this story, The Library of Babel, which imagined a library that contained every book you could possibly write. And the problem there is you can't find the book. Yes, it's true that War and Peace by Tolstoy is there somewhere, but there are many, many, many other books that are exactly like War and Peace except a few letters are different.
1:16:09.5 SC: So at some point, that's going to be the problem that you face. If you think you can create new knowledge by throwing some random numbers at an AI, distinguishing the knowledge from the nonsense is eventually going to require some judgment of some kind. All of which is to say, maybe I can be replaced by AIs writing my books, but there are obstacles that I don't think make it imminent. I think a much bigger problem than that is the news and social-media kind of effects. And here I'm not saying anything at all different from what many other people have said; it's already happening. If you go on social media, or if you just go on the internet more broadly, it's becoming harder and harder to tell, number one, what was written by a human being versus what was AI-generated; number two, whether images are actually photographs of real things that happened.
1:17:26.0 SC: And this is going to lead to two huge problems. One, of course, is that you can manufacture evidence for whatever claim you like. Oh, you think that this person did this bad thing? Make a video that shows them doing that bad thing. And so it becomes hard to know whether evidence is reliable that way. But the other problem, which I think is underappreciated, is that real evidence becomes less trustworthy. Donald Trump has already used this defense. He says some crazy things. People get him on tape saying crazy things. And he says, ah, that's just AI-generated; you can't believe that I actually said those things. And whether it's true or not, the doubt is there. There is a loss of reliability, a loss of the ability to validate the claims that we make in the social sphere. And we've already seen this happening in other ways. But we know what the outcome is. It is a kind of epistemic fracturing. We divide into tribes, into bubbles. But the problem of a bubble, an epistemic bubble, an information bubble where you're mostly talking to people you agree with, is not what you might think. Who was it? It might have been Brendan Nyhan who talked about this, or Hugo Mercier, I'm not sure.
1:18:50.5 SC: But the problem is not, I think it was Brendan Nyhan, that you're only exposed to information you want to hear and already agree with. The problem is that you are exposed to contrary information and you just don't pay any attention to it. You just don't listen to it. You don't give it any credence. You don't take it seriously. We human beings, this was Hugo's point, we human beings are really, really good at ignoring the information we want to ignore. And this ability to artificially generate fake information in all sorts of ways is going to tremendously exacerbate that problem. We can plausibly imagine that it becomes hard to trust anything, and we descend into a kind of fantastical miasma of entertainment and wish fulfillment, or bias fulfillment. We don't know what to believe, so we believe what we want to believe, and that's it. The reality-based community ceases to exist, because everyone chooses to believe what they want and distrust what they want, and maybe rightfully so, if there's just as much crap out there as there is real stuff.
1:19:06.9 SC: So I don't know what the equilibrium will be there. I don't know, once it becomes as easy to generate evidence-looking things as it is to generate real evidence, where we land. I don't know how we change how we evaluate the world. I mean, it's already true, when we think about politics or international affairs and things like that, that we hear claims on the internet that we like, and we spread those claims, and then someone says, actually that was wrong, and then it's much harder to bring it back and undo the damage. Again, I think we're at the beginning of this change. We're not near the end of it. For whatever various reasons, since the internet came to be, journalism and newspapers have collapsed, have imploded. Actually, as many of you know, if you want to point a finger at one event that led to the collapse of journalism, it was Craigslist, the online classified service, because many, many newspapers actually got most of their revenue from their classified sections. And again, going back up to the discussion of people following their self-interest if they're allowed to do so, it is better to have classifieds online and widely available to everyone than to have them individually printed in physical newspapers.
1:21:39.2 SC: It's just easier. So the model of newspapers and their revenue streams sort of went away, and you can plot that very dramatic transition pretty easily. This is a new thing: the shift to distrusting pieces of information is a different kind of thing, but it'll be equally important if we don't have things that we can rely on. So that's going to be a big deal. Okay, so given all that. Again, all of this is a slightly meandering exploration of what I think are technologies that will really lead to huge, important changes. What do we think is going to be the end story? If it's true that we're approaching a singular moment after which human life and society will look different, what will it look like? And I'm gonna be brutally honest here, I'm gonna disappoint you if you want to get the answer, the correct answer, from me, 'cause I don't know. I think it's a very hard question to ask. I think it's very worthwhile to ask. I guess I've said this already, but when people talk about it, I just don't think they're being serious, in the sense that they are too susceptible to either wildly exaggerating the effects or underappreciating the possible effects.
1:23:10.0 SC: And I don't blame people; I'm a person. It's very, very hard to strike the balance between carefully thinking through all of the possible things that can happen and yet soberly imagining which ones are more likely than others. So that's what I'm trying to encourage people to do. I'm not successfully completing that program, but I hope that I can give some food for thought for people who wanna think it through. So, to acknowledge that I don't know what the answer is, I will sketch out two sort of edge-case scenarios, a pessimistic scenario and an optimistic scenario. Originally I thought of doing the optimistic one first and then the warning of the pessimistic scenario. But that's depressing. So let me do the pessimistic one first and close with the optimistic one, even though you'll have to judge for yourself which you think is more plausible, given the things that are happening to us. So the pessimistic scenario. A good analogy, a good metaphor, once again comes from The Matrix, the movie, but not from what most people take to be the central theme of The Matrix.
1:24:24.2 SC: The possibility that we're living in a computer simulation or something like that. Many people, myself included, have pointed to one aspect of the Matrix movies as the silliest, the one that we really wish had not been part of it. And that is the following. Of course, there is still, in the world of The Matrix, a physical world. People have physical bodies, but their experiences, their thoughts, et cetera, are all in the Matrix. They're all in the simulation. Our plucky heroes, our pirate rebels, are navigating the real physical space, but most people are living their lives in the Matrix. So what are their physical bodies doing? In the world of the movie, they are batteries, basically. The technology of the computer simulation is powered by human bodies. All the human bodies are put in these pods and hooked up to tubes and wires and whatever. It makes for great visuals in the movie, but it's completely, hilariously nonsensical in terms of thermodynamics and physics.
1:25:34.4 SC: Human bodies don't create energy. They use up energy. It's the opposite of what you would want. We're terrible batteries, or power-generating sources, or whatever you might want us to be. So I and others have made fun of the Matrix movies for that particular conceit. But finally, and I honestly don't know whether this was the intention of the Wachowskis when they made the movie, or whether it's just a good way of thinking about it, it occurred to me that there's a much better way of thinking about that image of the people powering the Matrix, which is to not take it literally, but to take it metaphorically. In other words, to imagine that what is being imagined is not that the literal ergs and joules that we human beings create are powering the Matrix, but that our human capacities are powering this particular fake reality.
1:26:38.6 SC: That's the metaphor that is actually kind of useful. So the pessimistic scenario that I wanna sketch out is one where human capacities, for the most part, become fuel for a rather unpleasant kind of society that we can live in. That might sound a little vague and abstract and conceptual, so let's try to put some meat on the bones. Part of the inspiration for thinking about things this way, for me personally, came from a conversation I had with a physicist, Victor Yakovenko at the University of Maryland. Victor is a condensed-matter, statistical-mechanics physicist, originally from Russia, though he moved to the US a while ago. He thinks about thermodynamics, statistical mechanics, entropy and so forth, you've heard the words. But at some point he became interested in economics, like many physicists do. Physicists like to colonize all the other fields of human intellectual effort. Economics is a good one because there are equations in it. So there's a whole burgeoning field of econophysics. Victor had the following idea, and he sort of worked this out before he talked to any actual economists. He said, if I have a box of gas, and I have some molecules in the box, and I put them in some initial configuration and let them bump into each other, we know what will happen.
1:28:03.8 SC: It will equilibrate. It will go to a maximum-entropy configuration, basically, because all the molecules bumping into each other will exchange energies, and after many, many such exchanges you will reach a known distribution of energies that was derived back in the 19th century by Maxwell and Boltzmann, the Maxwell-Boltzmann distribution. And this is verified experimentally as well as theoretically. So Victor says, that's kind of like money in a society. Energy in a box of gas is kind of like money in a country. Now, the money supply is not completely constant. We know the Federal Reserve increases or decreases the money supply in response to economic conditions, but let's say that's a tiny effect. Let's imagine that for the most part there's a fixed amount of money in society, and the money gets exchanged. People buy goods and they sell goods, and the money moves around. So Victor says, he was not too serious about this, but he said, let's imagine that it's kind of the same thing, and that money, wealth if you like, reaches a maximum-entropy distribution. And he derives that it should look like the Maxwell-Boltzmann distribution, just like energies in a box of gas.
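This kind of toy model is easy to simulate. Here's a minimal sketch, assuming the simplest exchange rule from the econophysics literature: two random agents pool their money and split the pot at a random fraction, conserving the total. The agent counts and starting balance are arbitrary illustrative choices.

```python
import random

def exchange_model(n_agents=10_000, n_exchanges=200_000, m0=100.0, seed=1):
    """Everyone starts with m0 units of money; each exchange picks two
    random agents, pools their money, and splits the pot at a uniformly
    random fraction. Total money is conserved, like energy in a gas."""
    rng = random.Random(seed)
    money = [m0] * n_agents
    for _ in range(n_exchanges):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        pot = money[i] + money[j]
        cut = rng.random()
        money[i], money[j] = cut * pot, (1 - cut) * pot
    return money

money = exchange_model()
mean = sum(money) / len(money)  # conservation: still ~100 per agent
# In the equilibrium (exponential) distribution, the fraction of agents
# below the mean is 1 - 1/e, about 0.63: most agents end up below
# average, and the number with several times the average falls off fast.
frac_below = sum(m < mean for m in money) / len(money)
```

The equilibrium of this exchange process is the exponential, Boltzmann-Gibbs-type distribution being described here, with the exponentially suppressed tail that the economists object to in the next step of the story.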
1:29:19.2 SC: So then he goes to some real economists and says, here, look, I have a theory for how wealth is distributed in society. And they laugh and they roll their eyes, because of course they know much better than this. They say, look, just at one very simple level, there is a feature of this distribution you've written down, which is that as the wealth gets higher and higher, the number of people who have that much wealth decays exponentially. So we were talking about exponential growth before; here's exponential decay. The point is, in either case it's fast. It is a feature of the Maxwell-Boltzmann distribution of energies of molecules in a box of gas that there will occasionally, rarely, be some high-energy molecules, but there are exponentially fewer of those than molecules moving with the average energy. And Victor's model said the same thing about wealth: that there should be exponentially fewer wealthy people than median earners. That is not true. We've known for a very long time that that is not true in any society that we've ever met. At the mathematical level, there is a power law that describes the distribution of wealth at the high end, and what that means in practical terms is that there are a lot more wealthy people than you would expect from an exponential falloff; the distribution falls off much more slowly than that.
1:30:43.7 SC: There is a fat tail. There are more black swans than you would expect, if you want to put it in that language. So Victor was appropriately chastened, and he went back and said, well, let me just check this data. And it turns out to be very hard to get data about the wealth distribution in a country, because, especially at the wealthiest edges, people hide their wealth. They don't want to tell you how much they have. You can do it for income, though. So he plotted that, and what you see is actually pretty remarkable, I gotta say. When you're doing economics, or any other social science, it's rare to get a curve of data that you can fit so easily and cleanly with a theoretical model. And what Victor found for the distribution of income is that indeed, for high earners, there is a power-law decay, not the Maxwell-Boltzmann distribution, but for lower earners there is more or less exactly the Maxwell-Boltzmann distribution, and there is a very clear, crisp changeover point.
1:31:54.9 SC: It's at about three times the median income level. Below three times the median income, it's Maxwell-Boltzmann; above three times the median income, it is a power-law decay. What is going on there? It'd be nicer if you had the theory first and made the prediction, but okay, sometimes we get the data and then we fit the theory. And none of this is surprising to economists, by the way. I'm not trying to say that. I'm just telling a fun story to motivate how I think about it. The physicists here are the latecomers, not the pioneers. The theory is the following, and it's pretty close to reality, I think. There are two ways to earn money, two classes of earners in the world. One class of earners is basically additive. In other words, you have goods. Your goods might be your time and your effort.
1:32:45.2 SC: If you're a factory worker, you get a salary, or maybe you have a hot dog stand and you're selling hot dogs or whatever. By additive I mean that you sell these goods, they're consumed once, and you make money from it. So that's pretty analogous to the molecules bumping into each other and exchanging energies. There's some fixed amount of wealth that is being passed around, you do it one transaction at a time, and there's kind of an upper limit on how much money you can earn, which is the value of your goods times the number of sales that you can make. But there's a whole other way that you can earn, which is more multiplicative. That's when you can sell the same service, the same good, many, many, many times. And there are obvious examples of that, like book authors.
1:33:33.8 SC: I write a book once, and then I try to sell as many copies as I can. But also athletes, entertainers, et cetera. Their services are infinitely multipliable, so they can sell them many times. And of course, the classic examples are not writers or entertainers but capitalists, owners, investors, because they can just increase the size of their factories, or they can invest in more and more stocks and earn more and more money, and that's, again, positive feedback. So they're earning multiplicatively rather than merely additively, and there, there's no limit on how much you can earn, except for the size of the Earth and things like that. So there's no realistic hourly wage that ever gets you to be a billionaire, but there are billionaires, and that's because there are different ways to earn than just selling your services one hour at a time. And no judgments here. I'm not trying to say this is somehow unfair or whatever. You can have debates about what the just economic system is; go for it, love it. But that's not why I'm here right now.
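The additive/multiplicative distinction can be made vivid with a toy simulation. The numbers below are invented purely for illustration, not calibrated to any data: additive earners gain roughly a fixed amount each year, while multiplicative earners grow their wealth by a random annual factor, and the compounding produces a fat upper tail.

```python
import random

def simulate(n_agents=5_000, n_years=40, seed=2):
    """Additive earners accumulate ~1 unit per year; multiplicative
    earners multiply their wealth by a random annual return.
    (Hypothetical parameters, purely illustrative.)"""
    rng = random.Random(seed)
    additive = [0.0] * n_agents
    multiplicative = [1.0] * n_agents
    for _ in range(n_years):
        for k in range(n_agents):
            additive[k] += rng.uniform(0.8, 1.2)     # wage-like income
            multiplicative[k] *= rng.uniform(0.85, 1.25)  # random return
    return additive, multiplicative

def top_over_median(wealth):
    """Ratio of the richest agent's wealth to the median agent's."""
    ranked = sorted(wealth)
    return ranked[-1] / ranked[len(ranked) // 2]

add, mult = simulate()
# Additive wealth stays tightly clustered around ~40 units, so the
# richest additive earner is barely above the median; compounding
# spreads multiplicative wealth over a far wider range.
add_ratio = top_over_median(add)
mult_ratio = top_over_median(mult)
```

Nothing here is meant as an economic model; it just shows why repeated multiplication, unlike repeated addition, naturally generates a small number of outcomes enormously larger than the median.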
1:34:44.3 SC: The point is that there are efficiency questions raised by this distribution of wealth or income. The multiplicative earners have a different goal. If you're a hot dog vendor, you have two goals: one, make a really good hot dog; two, find a customer who will want to buy the hot dog. It's pretty straightforward. But in this multiplicative regime, you wanna find more and more customers, and you want, if you can, to get them to give you more and more money. So you're aiming for efficiency in the sense of extracting profits from the largest number of people. And there can be in principle, and there clearly is in practice, very often a tension between efficiency and human happiness. I don't mean that as a general statement about efficiency, but this particular kind of efficiency, efficiency in extracting profits from a very, very large number of people, can help with human happiness in some ways, but the two are not necessarily correlated; they can get in the way of each other. They can destructively interfere. So think of it this way: in a market, you don't pay more than you choose to. If someone says, I have a good hot dog, costs two bucks, you might say, okay, good.
1:36:13.3 SC: Give me the hot dog. If it's the same hot dog and you say it costs 200 bucks, most people are gonna say, no, I'm not gonna buy it. I have chosen not to participate in that exchange. There is therefore some value, some cost of the hot dog, above which you would not pay and below which you would. That's how markets work. And by efficiency, what I mean is really homing in on what that maximum amount you would pay could be. At that point, where if it were a penny more you wouldn't pay, and a penny less you would, maybe you would pay, but you're not gonna be happy about it. You're gonna grumble a little bit. You're like, yeah, that's an expensive hot dog. I wouldn't pay any more than this, but I guess I will pay exactly that much.
1:37:03.0 SC: That's the efficiency goal that a corporation wants to reach, or anyone who's trying to extract wealth from a large number of people, even a book author. How much can I charge for the book? Perfectly reasonable question to ask. No value judgments here, no statements about evil or anything like that. This is just natural incentives, every individual trying to work in their self-interest. If you go back to the conversation we had with Sam Bowles, he was very clear: Adam Smith said something really brilliant and insightful and true about how good market outcomes can come from every individual just working for their self-interest. But an underappreciated point, and I shouldn't say that, 'cause I don't know what economists appreciate and don't, a point that I haven't sufficiently appreciated, and this is where it's gonna come back to what we're actually talking about in the podcast, is that one crucially important aspect of the technological innovations and improvements we are undergoing is that they make it easier for markets to reach that perfect point of efficiency, where things are sold but nobody is really happy about it, and this does not guarantee the best outcomes.
1:38:26.2 SC: So you can see this in many, many different examples. When I say this, I mean the fact that technology is helping us reach that efficient equilibrium, which might be efficient, but doesn't necessarily make us happy. Think about Google Maps or other mapping GPS services on your cell phone. Back in the day, when I was your age, we would have a route that we would go from point A to point B. If we knew where we were going, we would take the obvious route and we would go there. Sometimes there'd be a lot of traffic. These days we have a computer with information in it that will tell us, yeah, usually you would take that route, but there's traffic on it.
1:39:10.0 SC: So here is a different way to go that naively you might think takes longer, but today is faster. So when things get clogged, because everyone has their GPS out there, or enough people do, traffic suddenly spreads out to take many different routes. And that is overall more efficient. But not everyone is happy about it, because maybe the people who live on those local roads are now seeing three times the amount of traffic they used to see. Literally, where I used to live in Los Angeles, while we were living there, a whole bunch of local streets were converted from two-way streets to one-way streets precisely to prevent people from taking shortcuts suggested to them by Google Maps. So: more efficiency, not necessarily more happiness. There was a recent discussion about dynamic pricing.
1:40:04.5 SC: Dynamic pricing is something that ride-sharing services like Uber and Lyft have used for quite a while. The price of a certain ride from point A to point B is lower when there's not that much demand and higher when there is a lot of demand. Supply and demand, but now in the time domain. This is something that, without computers, without massive data sets, you would have a difficult time figuring out. Maybe you could crudely approximate it, but now you can pinpoint exactly how much, if you're a ride-sharing service, you can reasonably charge people at different times of day. You are coming closer to extracting as much wealth from these people as you possibly can and still have a profitable company. And maybe that won't work long term, 'cause there are lots of specific, messy aspects of being a ride-sharing service; that's still very much in flux.
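A minimal sketch of the kind of rule involved, with made-up names and numbers; real ride-sharing pricing is proprietary and far more elaborate, so treat this purely as an illustration of demand-responsive pricing:

```python
def surge_price(base_fare, riders_waiting, drivers_available, cap=3.0):
    """Hypothetical surge rule, not any company's actual algorithm:
    scale the fare by the demand/supply ratio, never discounting
    below the base fare and never exceeding a capped multiplier."""
    if drivers_available <= 0:
        return base_fare * cap  # no supply at all: charge the cap
    multiplier = min(max(riders_waiting / drivers_available, 1.0), cap)
    return base_fare * multiplier

quiet = surge_price(10.0, 5, 20)   # demand below supply: no surge, $10
rush = surge_price(10.0, 40, 20)   # demand twice supply: 2x fare, $20
```

The point is how little machinery is needed once you have real-time demand and supply data; the hard part, and the part that pushes toward the "efficient but unhappy" equilibrium, is measuring that demand finely enough to track what riders will just barely pay.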
1:40:57.0 SC: But that's an older story. The recent thing is that Wendy's tried to do exactly this. They tried to say, we'll make it cheaper at 10:00 AM, more expensive at 12:30, 'cause people are coming for lunch at 12:30. Outrage. People did not like this, because of course people are not thinking of it as being cheaper at 10:00 AM; they're thinking of it as being more expensive when they actually want lunch. And that kind of thing gets people upset. So I believe that Wendy's backed down. But you can see this pattern of being more and more clever about how to make a few bucks. We've seen this in, again, so many ways, I'm gonna have to stop myself from giving examples, but: separate fees to check a bag on an airplane. We used to just get that for free.
1:41:44.1 SC: Now they figured, oh, if we charge for that, people will not mentally include it in the price of their ticket, and we'll make more money. Resort fees in hotels. I still have no idea what a resort fee is. You book your hotel online, there's a certain price, and then when you show up there's an extra resort fee, and then you pay it 'cause you're there. But I really don't know what it means. My favorite example is actually student loans. There's a student-loan crisis here in the United States, and you can see where it comes from, because college students very often don't have a lot of money, but they might have a lot of future earning power. So basically, colleges figured out that they can raise tuition to the point where many students can't actually pay it, but they can give them a loan, on the thought that they will be able to pay it off over the next couple of decades because their earning potential will be higher.
1:42:40.1 SC: Which all sounds good, opening up college to people who otherwise couldn't afford it. But it doesn't make people happy, because it makes it very hard to start your post-college life. You are burdened with enormous amounts of debt. The system has gone right to the point where you will go along with it, but it will not make you happy to go along with it. There's another article that got a lot of attention recently, by Cory Doctorow, Mindscape guest, on the enshittification of the internet. What he means is that the services we've been used to having on the internet, whether it's buying from Amazon or searching on Google, have all gotten worse. Why did they all get worse? Part of his explanation is that you're first offered the service for free, or at a low cost, for relative ease of transaction; streaming services are now increasingly giving you ads. And then once you're hooked, new costs come in, because you don't wanna change, 'cause it's kind of annoying, et cetera. Enshittification: the world getting slightly worse. Anyway, I went on too long about this, because this is just a feature, I think, of economics very generally. And again, it's nothing new. I'm not claiming any new insights here. What I want to get at is that one very obvious ramification of technological change is more efficient extraction.
1:44:08.0 SC: And I think this goes beyond economics. It's not just extraction of wealth; it's extraction of everything. This is the metaphor of the human beings in the pods powering the Matrix. Technology is able to analyze a whole bunch of very complicated problems, but it is also bringing people together. Bringing people together sounds good, but it also means increasingly efficient ways to transfer information, et cetera. Let me pause and tell you what I'm thinking about. You know that every website you visit collects data about you. You get personalized ads. I think Google Chrome just recently tried to convince me to send a whole bunch of information that would really make my experience more pleasant, because the ads that I would see would be more tailored to my interests. They're doing this efficiency thing. Why give one ad to everybody, if not everyone is interested in the thing, when we can instead target ads to each individual person?
1:45:12.2 SC: That's something that technology is allowing us to do. In some sense, it is more efficient. If I'm going to see ads, maybe it's better for me to see ads for products I might actually be interested in. Again, not making value judgments about this, but there is definitely also a part of me that just doesn't want to give information about what I'm doing, willy-nilly, to a bunch of companies. And so there is this tension: if the market is too efficient, you're not happy about your transaction. An ideal transaction would make both parties happier. If you're at that perfect equilibrium point, both parties are just mildly satisfied, or even slightly disgruntled, rather than actually happy. That's the tension between efficiency and happiness. And perhaps more profoundly, there is a political version of this, not just an economic version.
1:46:13.3 SC: The world is big, population has been growing but we are also more interconnected. Not just in the sense that I can see videos about what's happening in Sri Lanka or something like that almost instantaneously or I can send emails across the world, but in the sense that our institutions are getting bigger because technology is allowing them to get bigger. Back in the day, I would imagine that going to a coffee shop would probably put me in a coffee shop that was locally owned by the people running the coffee shop. That was a traditional thing. This is a complicated story because there were fewer coffee shops back in the past than you might've imagined, but they were there. Today, increasingly the coffee shops that you're likely to run into are part of international chains. They're very, very big.
1:47:09.0 SC: And there are, again, pluses and minuses about that. There are economies of scale that make things better, et cetera. But one very definite implication of this is: if you're in a store that is run by the people who own the store, and there's a small number of people involved in the entire thing, you can complain. You have a voice, you can make a suggestion. You can say, well, how about carrying this product instead of that product? And people will listen to you. If you go into Starbucks and say, I think you should carry this different kind of coffee, and you tell that to the barista, what are you doing? You're wasting your time. Your voice is not that big. And this is kind of a silly, trivial example of a much bigger issue, which is that whether it's politics or shopping or being employed, in all these various ways we are interacting, in very intimate ways, with hugely large-scale institutions that we ourselves have no real effect on.
1:48:08.3 SC: This leads to a feeling of powerlessness. Because technology has made us so much more connected, it has made the things that influence our lives so much larger, and therefore harder for us to really deal with on an equal basis. Put it this way: the world is growing, institutions are growing, so relatively speaking, individuals are shrinking. They're shrinking in their ability to affect the world around them. And the efficiency stuff we just talked about means, in some cases anyway, that it is harder and harder for future generations to expect a higher standard of living, more wealth. The wealth is being extracted at an incredibly efficient rate because of these technological advances, and this makes people depressed and skeptical and less enthusiastic about the prospects of their individual lives and the society they live in. And that puts a real strain on democracy and liberal society more generally, because people are being governed by powers and systems that they cannot substantially affect back.
1:49:27.0 SC: And guess what? In some cases, they will respond to that sort of loss of power by seeking a strongman to rescue them, or by taking refuge in conspiracy theories where they can imagine something a little more vivid. Again, there are no value judgements here, and maybe these impersonal forces that are running our lives have no ill intent whatsoever, but they nevertheless make us feel bad. At some psychological level, it would almost make us happier if there were ill intent. Then we could blame somebody who's evil and bad, and that's one of the reasons why conspiracy theories, et cetera, are so tempting. So I don't know whether this quite adds up to anything, but my point is that the pessimistic scenario is kind of the Matrix equilibrium, where your physical body is powering the system and not much else; there is no individuality, just existence and survival.
1:50:29.9 SC: Again, this is supposed to be the pessimistic scenario. This is not necessarily the scenario I think is going to be true, but you can imagine that AI, gene editing, and brain-computer interfaces all work to squeeze individual human beings for everything the system can get out of them, in various ways. And not because there are evil overlords or supervillains trying to do it, but because individuals responding to their own personal self-interest and the incentive structures of the system they're embedded in lead to that kind of configuration. Can we prevent it? Maybe, but we'll have to try. It's not obvious that we will prevent it. The coming technological revolution could lead things to be pretty bad if we don't.
1:51:26.0 SC: So let's think about the optimistic scenario, shall we? Again, I'm not gonna tell you it's gonna happen, but the optimistic scenario is kind of obvious. I dunno if you remember the podcast we did with John Danaher, where he talked about our coming automated utopia. The optimistic scenario is that all of these technological innovations leave human beings free to do whatever they want. At the most basic level, the most obvious kind of hopeful scenario is that computers and robots do all the boring things, all the things we don't wanna do. All the jobs, all the tasks that are not fulfilling to human beings, those are the ones that we automate.
1:52:15.2 SC: We give them to the AIs and the cyborgs and the robots, while we get to enjoy life and create things. So in the optimistic scenario, we somehow stabilize climate change and the environment more broadly. We invent sustainable methods of energy and food production. We make a specific effort not just to produce food and sell it and make a profit, but to do so in sustainable ways that leave the environment unscathed over very long time periods. We lower the demands of work. As I'm recording this, it was just a very few days ago that Bernie Sanders proposed legislation in Congress that would mandate a four-day work week. This is something that has been bouncing around before; he's not even the first person to propose legislation about it, but the idea would be literally less work per week.
1:53:06.0 SC: So still eight hours of work per day, but only four days a week. And the motivation behind this is that workers have become more productive, so they can produce in four days what a few decades ago took five days' worth of work. And this is one of those schemes that sounds maybe a little overly utopian, but I did look it up. I googled the four-day work week and what the status of the empirical data about it is, and I was a little surprised at how positive the data are about the four-day work week scheme. Individual companies have tried it, and it makes everything better. [laughter] Roughly speaking. The companies do not suffer a loss, at least in the dataset that I was able to see. They don't suffer a loss of productivity overall, because people are more energized to get their work done and be more productive in those four days.
1:54:06.6 SC: And they've stuck with it. They do pilot programs and they seem to take off. Everyone is happier with a four-day work week, and then, if you do things right, you can enjoy yourself over a three-day weekend. Now, of course, there are exceptions. It depends on what kind of job you want to have. My job would not be affected very much by Bernie Sanders's proposal. I work more than a five-day work week already, but I like what I do. That's part of the utopian vision. What if everyone was able to like what they do for a living as much as I like what I do for a living? That's my version of a utopian vision. So technology is going to create excess value. It's gonna make us more productive, make us increase wealth faster.
1:54:55.3 SC: I think that's very, very plausible. What are we gonna do with that wealth? One option is to make people's lives better by making them work less, the four-day work week. Which, it's not gonna pass; Bernie Sanders is very good at symbolic actions, he's less good at getting legislation passed. But I do think that people are talking about things like a four-day work week more now than they were a few decades ago. Arguably, we're taking the possibility more seriously, and some number of years down the road, we will take it very seriously. Another aspect of the utopian picture is that we use biotechnology to make us healthier and happier, so we don't create monsters. We create happy, healthy human beings, all of whom live for 110 years and then painlessly die. [laughter] Now, that's in tension, of course, with this extractive business, because one of the most obvious success stories of the large impersonal system's attempts to extract wealth from individuals is the healthcare system here in the United States.
1:56:06.1 SC: A shocking number of people live pretty financially successful lives and die broke because in the last moments of their lives, they spend a huge amount of money on healthcare and then they die. And the healthcare system kind of is in favor of this. And when I say healthcare, I don't mean doctors and nurses mostly, I mean insurance companies and hospitals and whatever. This is a very, very complicated story, and I don't wanna oversimplify it. The point being that one way to transfer wealth from individuals to bigger conglomerations of people, corporations, or what have you, is at their weakest moments when they're not healthy, when they're approaching death. And we have to decide whether that's gonna be something that we live with or try to fix. But in this utopian, optimistic scenario that I'm giving you now, we have a vastly improved way of dealing with death and dealing with serious illness.
1:57:04.8 SC: Again, a very early podcast episode I did was with Megan Rosenbloom, who is one of the people who works in the Better Death movement, is that what it's called? I forget. But the movement is about facing up to the reality that we're all gonna die. And the fact is that, at least here in the US, and I think many other places in the world, we are so scared and reluctant to accept the fact that we're going to die that we do so badly. We do so in very dehumanizing ways, emotionally, as well as in ways that extract our wealth and things like that. So part of the utopian scenario is that we wisely choose to use advances in biology and medicine to make our lives healthier while we're here, and make the transition from life to death a little bit more pleasant and bearable. Another aspect of the utopian, optimistic scenario is to not just give us more free time, but to take advantage of information technology to find communities of mutual creativity and support.
1:58:11.1 SC: And this has been part of the internet-utopian vision for a long time. As much as we worry about people falling into epistemic bubbles and fracturing their communities and so forth, you have to admit that social media and other related technologies make it much easier for people with quirky little individual interests to find like-minded people. For whatever reason, I don't think that we've taken advantage of this capacity nearly as much as we could. There is some of it; there are online communities, and people who are interested in playing poker or basket weaving or whatever can find their peeps online. There are individual success stories: Jennifer, my wife, and I found each other by reading each other's blogs, something that could not have happened before there were blogs and social media.
1:59:08.6 SC: But also, we all know that this ability to find micro-communities also leads people into malevolent situations: cults and conspiracies and whatever. So how do we ensure, how do we allow, how do we give space for this technology to give us the optimistic, the good aspects, and prevent the bad aspects? Yeah, I don't know. [laughter] I mean, right now I think that these information technologies, again, the data suggest, are more alienating than uplifting. It's completely possible in my mind that that's just because we haven't yet adapted. The pace of technological change is much faster than the pace of psychological change, or the pace of our changes in habits, in how we meet people, talk to people, interact, and so forth. So maybe we're just slow. That's absolutely possible in my mind. But maybe we've opened up a can of worms, and the worms are going to eat our brains in some metaphorical way.
2:00:23.3 SC: So look, I don't know. I told you at the beginning this was not going to be systematic. I was not gonna tell you the final answer to anything. I hope that people have the conversation. I hope that we take these issues seriously. Technology can increase value and productivity. We don't know what we're gonna do with that value and productivity. We don't know whether the wealth will be distributed equitably to lots of people. I'm in favor of a universal basic income, for example. But that's expensive. I don't imagine it's happening soon, but I can imagine that it's gonna happen as part of this big upcoming transition to a different mode of human life. The utopian vision is one in which so much of the stuff that we've had to reluctantly do as part of human life is handed off to technology, computers, semi-agential programs and apps that don't mind doing the dirty work, leaving us human beings to live more fulfilling lives. And by fulfilling lives, I don't mean writing poetry or composing symphonies. Maybe you can do that if you wanna do that, but I'm a huge believer that there are much more straightforward, everyday ways to lead fulfilling lives. Maybe your way of leading a fulfilling life is to be a good person to your family, or to help others.
2:01:54.0 SC: Maybe you really just wanna play video games or watch other people play sports or something like that. Maybe you just wanna barbecue on weekends and watch movies at night. All of that is completely fine. And I think that there will always be heterogeneity in what people choose to do as individual human beings. The point is, you can allow for whatever each individual's version of their best life might be. In principle, technology can give us the space to let people do that without being worn down by the need to work, by worries about their jobs. I haven't said this out loud, sorry, because it's just so implicit in my mind: but of course, technological change is going to get rid of many very common jobs. That's always been the case with technological change. People used to ride horses and buggies and things like that.
2:02:55.7 SC: There were a lot more candle makers back in the day. Jobs change over time; that's gonna happen. If your plan to adapt to the future is to invent artificial ways of keeping the same old occupations in the same numbers that we used to have, I think you are doomed to failure. But it's the fact that things are changing that is really problematic. If there is a new equilibrium on the other side of the singularity, then we can settle into a set of either jobs, or no jobs at all if we are completely supported by society as a whole, and that will be better. That's part of the optimistic scenario. What's so alienating right now is the uncertainty. Like, you don't know if you're gonna have your job in 10 years. Maybe an entire industry is being disrupted. That is legitimately difficult to deal with.
2:03:51.6 SC: An economist can tell you, well, there'll be other jobs. But as an actual human being, changing jobs, or finding that the career that you had prepared yourself for over the course of decades is no longer viable, that is really difficult. And that is a real human cost of technological change. And we are facing it right now, absolutely and undoubtedly. So the question is, does that continue forever? Or can we adapt to these technological changes and get rid of that uncertainty, that lack of ability to plan more than a few years ahead? I don't know whether we communally have the wit and willpower to invent the equitable system, the optimistic scenario. We're not trained for this. This is humanity facing a situation it has never faced before, right? The scale of the problem is completely unprecedented.
2:04:55.5 SC: And people don't always make wise choices. So I'm not optimistic about the optimistic scenario. But the optimistic scenario is there for the taking, I think, if we choose it. There will be hiccups along the way. No doubt we'll make some bad choices and need to fix them. But we need to collectively decide to avoid the pessimistic possible outcomes and work for the more optimistic ones. Okay. So there's a whole bunch of things that I didn't mention. I promised to talk about technological changes, and there are other kinds of changes too, right? There are political changes. I've talked about democracy too much already on the podcast; I don't need to remind you of my worries about that. I did not talk that much about the possibility of being uploaded to a computer, 'cause I don't think that's very interesting. Moving human beings into space, expanding humanity off of the planet, I think is potentially a big one.
2:05:55.8 SC: I think that's potentially a very big transition. I just don't quite see it happening realistically on the same timescales as these other technological changes that we're facing right now. So, for the moment, and I could be wrong about this, it seems sensible to me to focus on changing life here on Earth. So that's what I tried to do. Anyway, I hope you enjoyed it and were given some thoughtful moments in this little exploration of the possibilities. The only thing we can be absolutely sure of is that the future is going to be different, which, by the way, is new by itself, right? The future was always different throughout human history, but only by a little bit [laughter] from generation to generation. You could imagine that life was more or less the same. You and I right now live in a world where that's not true anymore.
2:06:49.0 SC: We can absolutely not imagine that the world a hundred years from now is gonna be more or less the same as the world now. We are not equipped, we are not trained, we are not educated or practiced to think about this very real possibility. I'm sure that my own thoughts are sort of hopelessly scattered and naive and incomplete. I'm sure that, 24 hours from now, much less a year from now, I'm gonna be thinking, oh, why didn't I say that? Or, oh, it was so silly that I said the other thing. That's okay. This is absolutely meant, explicitly, as a tentative exploration. I hope it's given you some food for thought, and I hope that we collectively choose the wise, optimistic path. Thanks.
In high school I was inspired by James Burke’s Connections series. For a stats project I ran a (very imperfect) correlation between innovation rate and world population. Unsurprisingly, I found a strong correlation. My sci-fi 11th-grade fantasy mind postulated that if the indicated possible singularity occurred, it was a result of the discovery of time travel.
Sean’s idea of an optimistic future that we should move towards is antithetical to my own. It’s a world in which people are all the more free to follow their internal puppetry, with that puppetry still intact against all better judgment, when we might well be able to develop alternative ways of being.
Better, it seems, indeed not only better but really the only even conceivably worthwhile direction, at least to someone committed to depressive realism, would be to accept the pessimistic future and pursue it with total abandon: to create a system that feeds off of us for nothing but its own sake, and to engineer ourselves to be content with that final equilibrium. No more advancement, no more threats from the optimists of human expansion to the stars, a horrifically endless metastasis presented as adventure, all of that discarded. People predetermined to be content with existence.
I’m very much in alignment with Thomas Ligotti on this point:
“Assuming that anything has to exist, my perfect world would be one in which everyone has experienced the annulment of his or her ego. That is, our consciousness of ourselves as unique individuals would entirely disappear. We would still function as beings that needed the basics—food, shelter, and clothing—but life wouldn’t be any more than that. It wouldn’t need to be. We would be content just to exist. There’s only one problem in this world: none are content with what they have. We always want something else, something “more.” And then when we get it, we still want something else and something more. There is no place of satisfaction for us. We die with regrets for what we never did and will never have a chance to do. We die with regrets for what we never got and will never get. The perfect manner of existence that I’m imagining would be different than that of most mammals, who feed on one another and suffer fear due to this arrangement, much of it coming at the hands of human beings. We would naturally still have to feed, but we probably would not be the omnivorous gourmands and gourmets that we presently are. Of course, like any animal we would suffer from pain in one form or another—that’s the essence of existence—but there wouldn’t be any reason to take it personally, something that escalates natural pain to the level of nightmare. I know that this kind of world would seem terribly empty to most people—no competition, no art, no entertainment of any kind because both art and entertainment are based on conflict between people, and in my world that kind of conflict wouldn’t exist. 
There would be no ego-boosting activities such as those which derive from working and acquiring more money than you need, no scientific activity because we wouldn’t be driven to improve the world or possess information unnecessary to living, no religious beliefs because those emerge from desperations and illusions from which we would no longer suffer, no relationships because those are based on difference and in the perfect world we’d all be the same person, as well as being integrated into the natural world. Everything we did would be for practical purposes in order to satisfy our natural needs. We wouldn’t be enlightened beings or sages because those ways of being are predicated on the existence of people who live at a lower epistemological stratum.”
May the horrors of optimism cease to stain our species.
Regarding the existential threat from super-intelligence Sean says roughly this:
It would be naive anthropomorphism to assign human-like motivations to AI. Therefore there is no reason to worry about AI taking over the world, SkyNet, etc.
I think there is a gaping fallacy of false dichotomy here. It’s not either human-like motivations or no motivations at all. There are other possibilities. An AI can be very different from a human, and have very different, non-human-like motivations!
Of course AIs can have some sort of motivation. The whole field of Reinforcement Learning is *about* training AI to seek certain goals with a system of rewards and punishments. Isn’t that exactly what we mean by “motivation”?
A chess program trained to play chess is motivated to seek checkmate. It has a built-in reward system that has been trained to react positively to winning at chess. It’s not perfect at doing so, but we can train it to seek that goal via techniques like reinforcement learning.
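The point can be made concrete with a toy sketch (my own illustration, not an actual chess engine): tabular Q-learning on a five-state corridor where only reaching the last state is rewarded. The reward signal alone shapes a goal-seeking policy, which is the minimal sense of “motivation” at issue here.

```python
# Toy Q-learning: an agent in a 5-state corridor is rewarded only for
# reaching state 4. Nothing tells it to "want" the goal; the reward
# signal alone produces goal-seeking behavior.
import random

random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                      # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1       # learning rate, discount, exploration

for _ in range(200):                    # training episodes
    s = 0
    while s != GOAL:
        if random.random() < eps:       # occasionally explore at random
            a = random.choice(ACTIONS)
        else:                           # otherwise act greedily (random tie-break)
            best = max(Q[(s, b)] for b in ACTIONS)
            a = random.choice([b for b in ACTIONS if Q[(s, b)] == best])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else 0.0   # the only "motivation" signal
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy heads toward the rewarded state from everywhere.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)
```

Nothing here "wants" anything in a human sense, yet the trained policy reliably seeks the rewarded state, which is all the argument requires.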
Moreover, one worry is that with future, more complicated AIs, what we want an AI to do, what we actually train it to do, and what it actually learns to do might end up not matching. So its actual motivations might not match what we intended to teach it via training.
Also, seeking control over resources is not a motivation that is necessarily unique to humans. It’s an instrumental sub-goal that is useful for achieving a whole range of other goals. So if a powerful AI has any big goal in mind, whatever that goal might be, you can bet that there is a good probability that gaining control over resources is going to be seen by it as an action that gets it closer to maximizing its goal.
FYI: two seminal books on NON-EQUILIBRIUM phase transformations, as described by Musk’s AI application, “Grok”:
My recommendation is to start with a truly outstanding new book by Steven Bratman “Spontaneous Order and the Origin of Life”.
Note: the paper format is expensive, but there is a promotional Kindle price of only $1.99 ;-))
Review of Steven Bratman’s book “Spontaneous Order” (by Grok, a new and truly exceptional AI application from Elon Musk): a balanced and outstanding overview of a seminal and accessible masterpiece
In “Spontaneous Order,” Steven Bratman presents a thought-provoking exploration of the emergence of life on Earth, drawing connections between the phenomenon of spontaneous order and the development of the biosphere.
The book delves into the concept of Metabolism-First, which views the biosphere as a fundamental feature of Earth, alongside the hydrosphere, lithosphere, and atmosphere. This perspective highlights the role of energy flow in driving the formation of life, much like other processes driven by energy.
One of the key concepts discussed is autocatalysis, where certain chemicals can amplify their own rate of formation. This process plays a crucial role in “chemical evolution,” similar to the role of Darwinian selection in the standard theory of evolution.
“Spontaneous Order” presents a comprehensive and well-researched account of the origin of life, drawing on the work of experts such as Eric Smith and Harold J. Morowitz. The author’s ability to translate complex scientific concepts into a more accessible narrative is commendable, making the book an engaging read for both experts and non-specialists alike.
In conclusion, “Spontaneous Order” offers a fresh perspective on the origin of life, emphasizing the role of spontaneous order in the development of the biosphere. With its well-researched content and accessible writing, the book is a valuable resource for anyone interested in the intersection of biology, chemistry, and Earth science.
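The autocatalysis idea mentioned in the review can be illustrated numerically. This is a toy sketch with arbitrary rate constants, not anything from the book: in the reaction A + X -> 2X, the product X accelerates its own formation, so X takes over explosively compared with a slow uncatalyzed A -> X control.

```python
# Forward-Euler integration of a toy autocatalytic reaction (A + X -> 2X)
# versus a slow uncatalyzed control (A -> X); all constants are arbitrary.
dt, k = 0.01, 1.0
a_auto, x_auto = 1.0, 0.01     # autocatalytic run: needs a small seed of X
a_plain, x_plain = 1.0, 0.01   # uncatalyzed control run

for _ in range(1000):          # integrate to t = 10
    r_auto = k * a_auto * x_auto   # rate grows with the product X itself
    r_plain = 0.01 * k * a_plain   # slow, constant-mechanism control
    a_auto, x_auto = a_auto - r_auto * dt, x_auto + r_auto * dt
    a_plain, x_plain = a_plain - r_plain * dt, x_plain + r_plain * dt

print(f"autocatalytic X: {x_auto:.3f}   uncatalyzed X: {x_plain:.3f}")
```

The autocatalytic run consumes nearly all of A while the control has barely started: the sigmoidal "takeover" that makes autocatalysis a plausible engine of chemical evolution.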
——————————————————————–
Review of Eric Smith and Harold Morowitz’s book “The Origin of Life on Earth”
The book “The Origin of Life on Earth” by Eric Smith and Harold Morowitz is a comprehensive and insightful exploration of the origins of life on our planet. The authors delve into the various theories and hypotheses that have been proposed over the years, examining the evidence and arguments for each one.
Smith and Morowitz take a multidisciplinary approach, drawing on fields such as chemistry, biology, geology, and astrophysics to paint a detailed picture of the conditions on early Earth and how life may have emerged from non-living matter. They also discuss the role of energy flow in driving the formation of life, much like other processes driven by energy.
One of the key concepts discussed in the book is autocatalysis, where certain chemicals can amplify their own rate of formation. This process plays a crucial role in “chemical evolution,” similar to the role of Darwinian selection in the standard theory of evolution.
The authors present a well-researched and accessible account of the origin of life, making complex scientific concepts understandable for both experts and non-specialists. They also provide a balanced perspective, discussing the strengths and weaknesses of different theories and acknowledging the areas where more research is needed.
In conclusion, “The Origin of Life on Earth” is a valuable resource for anyone interested in the fascinating question of how life began on our planet. The book’s comprehensive coverage, accessible writing, and balanced approach make it a must-read for anyone seeking to understand this complex and intriguing subject.
PS: An example of the “Grok” AI application explaining (equilibrium) phase transformations of the universe:
Steven Bratman, in his incredible book “Spontaneous Order and the Origin of Life” (available in Kindle format, for promotional purposes, at only $2 ;-)), writes: …”it is easier to outline the origin of the universe (all the way back to a picosecond = 10^-12 or one trillionth of a second) than the origin of life…”
Note: the formation of the biosphere became possible once molecules had formed, leading to organic chemistry and then, via a NON-equilibrium phase transformation driven by a source of energy, to the creation of the biosphere (life).
Hierarchy of the matter since Big Bang — Grok @grok https://twitter.com/i/grok
The hierarchy of matter by temperature since the Big Bang can be understood in terms of the various stages of the universe’s evolution. As the universe expanded and cooled down, different structures and particles formed. Here’s a brief overview of the hierarchy of matter by temperature since the Big Bang:
• Planck epoch (0 to 10^-43 seconds after the Big Bang): The universe was extremely hot and dense, with temperatures reaching 10^32 Kelvin. Matter and energy were indistinguishable, and the four fundamental forces (gravity, electromagnetism, the strong nuclear force, and the weak nuclear force) were unified.
• Grand unification epoch (10^-43 to 10^-36 seconds after the Big Bang): Temperatures decreased to around 10^29 Kelvin. Gravity had already separated at the end of the Planck epoch, and by the end of this epoch the strong nuclear force separated from the electroweak force. Quark-antiquark pairs were continually created and annihilated at these high temperatures.
• Electroweak epoch (10^-36 to 10^-12 seconds after the Big Bang): Temperatures fell to about 10^15 Kelvin, at which point the electromagnetic and weak nuclear forces separated. The Higgs mechanism gave mass to particles, and the first stable quarks and leptons (such as electrons and neutrinos) emerged.
• Quark epoch (10^-12 to 10^-6 seconds after the Big Bang): Temperatures decreased to around 10^12 Kelvin. The universe was still too hot for quarks to bind together into protons and neutrons, so they existed as a quark-gluon plasma.
• Hadron epoch (10^-6 to 1 second after the Big Bang): Temperatures cooled to about 10^10 Kelvin, and quarks finally combined to form protons, neutrons, and other hadrons, filling the universe with a dense soup of them. (The first atomic nuclei, mainly deuterium and helium, would not form until Big Bang nucleosynthesis a few minutes later.)
• Lepton epoch (1 second to 3 minutes after the Big Bang): Temperatures dropped to around 10^9 Kelvin, and the universe was dominated by electrons, positrons, and neutrinos. As the temperature decreased further, most of the positrons annihilated with electrons, leaving behind a small excess of electrons.
• Photon epoch (3 minutes to 380,000 years after the Big Bang): Temperatures decreased to about 10^8 Kelvin, and the universe was filled with a hot, opaque plasma of photons, electrons, and atomic nuclei (mainly hydrogen and helium nuclei produced during Big Bang nucleosynthesis at the start of this epoch). Neutral atoms did not form until its very end.
• Recombination and decoupling (380,000 to about 1 billion years after the Big Bang): The temperature cooled to about 3000 Kelvin, electrons and nuclei combined into neutral atoms, and the universe became transparent as photons decoupled from matter. The cosmic microwave background (CMB) radiation dates from this period, and the first stars and galaxies began to form a few hundred million years later.
• Structure formation (1 billion years to present): The temperature continued to decrease, and the universe evolved into the large-scale structure we observe today, with galaxies, stars, and planets forming from the primordial gas.
This hierarchy of matter by temperature provides a broad overview of the universe’s evolution since the Big Bang.
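As a rough sanity check on the timeline above: during the radiation-dominated era the temperature falls approximately as T(K) ~ 10^10 * (1 s / t)^(1/2). This back-of-the-envelope scaling (good only to order of magnitude, and my own addition rather than part of the quoted overview) reproduces several of the listed milestones:

```python
def temp_kelvin(t_seconds: float) -> float:
    """Approximate radiation-era temperature: T ~ 1e10 K * (1 s / t)^0.5."""
    return 1e10 * (1.0 / t_seconds) ** 0.5

# Milestones roughly matching the epoch list above (order of magnitude only).
for label, t in [("quark-hadron transition", 2e-5),     # roughly 1e12 K scale
                 ("end of lepton epoch", 1.0),          # roughly 1e10 K
                 ("Big Bang nucleosynthesis", 180.0)]:  # ~3 minutes, ~1e9 K
    print(f"{label:26s} t = {t:8.1e} s  ->  T ~ {temp_kelvin(t):.1e} K")
```

The scaling only holds while radiation dominates; by recombination (about 3000 K at 380,000 years) matter domination and other effects have changed the slope.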
It just feels like you’re about 10 years behind in our thinking about AGI/ASI. Calling it “hopelessly anthropomorphising” to see the potential for anything with a world model, a goal, and the ability to affect its external environment to converge on instrumental goals like power-seeking, self-preservation, and goal stability is… hopelessly dismissive.
PS 2: Here is my Goodreads review of Steven Bratman’s seminal book on the biosphere (life) as sequential NON-equilibrium phase transformations:
“Spontaneous Order and the Origin of Life” by Steven Bratman (Note: the Kindle edition is only $2, per the author’s request)
Boris’s review Jan 14, 2024
Intellectually magnificent – A scientific masterpiece – written for easy reading, but with very high integrity
I can’t recommend this wonderfully interesting and seminal book enough. “Spontaneous Order and the Origin of Life (Origins)” by Steven Bratman is a rare discovery – an intellectual feast, full of challenging ideas, novel perspectives, and grand unifying principles. The book clearly required an immense effort: it is written with exquisite, nuanced, and balanced wording, yet structured for easy reading and comprehension – an extraordinary attempt to make complex interdisciplinary topics accessible (almost like a textbook) to nonspecialists interested in a fundamental question, the origin of life. I purchased a Kindle copy and two paper copies (as presents).
The step-by-step development of the concept, with highly appropriate examples and full of the author’s annotations and further elaborations, required a huge, and much appreciated, effort: chapter by chapter the book gradually builds and deepens an extraordinary and logically consistent theory of life’s emergence, from pre-biotic chemical evolution via auto-catalytic processes to the emergence of life. Its central hypothesis is that the emergence of metabolic elements (“modules”) of the reverse citric acid cycle was deterministic, preceded life, and, especially, preceded the “RNA world/gene world”.
I am trained in electronics, physics, and strategy evaluation – I came to biochemistry late and am a big fan of Nick Lane’s books (all five) and Andrew Knoll’s. How I discovered “Spontaneous Order and the Origin of Life (Origins)” is itself very interesting:
Recently I discovered a 2015 lecture by Prof. Eric Smith of the Santa Fe Institute titled “New Theories on the Origin of Life” – https://www.youtube.com/watch?v=0cwvj… – with an introduction by Jerry Murdock. The brief video lecture was brilliant: a clear, measured, laconic but methodical introduction to Dr. Smith’s theory of the biosphere as the fourth Earth geosphere. It was a short overview of a physicist’s multidisciplinary approach to life’s origin, and it was astonishing and glorious.
Naturally, I looked for the source. Prof. Eric Smith (now at the Earth-Life Science Institute in Tokyo) had co-written “The Origin and Nature of Life on Earth: The Emergence of the Fourth Geosphere” with the late Harold J. Morowitz (2016). That scientific book is expensive, complex, and multidisciplinary (I now own it as well), so I read reviews, including one by Steven Bratman, a physician. He was so impressed by the Smith/Morowitz book that he decided, in cooperation with Dr. Smith and no doubt many other experts, to “translate” this highly scientific and complex book into a more accessible one – with truly extraordinary success.
In summary, I can’t recommend this wonderfully interesting and seminal book enough (moreover, the Kindle version is only $2 for promotional purposes).
It is not actually denser populations that provide improved innovation; there is a middle point that is optimal, and Sean only considers one side of the curve. The denser population of cities also has negative impacts on both innovation and quality of life.
One can look at the growth rate of the top 100 cities, or the top 10, or whatever group you choose. It turns out that big cities are shrinking relative to more rural and suburban regions. In 1930, 15.5% of the US population lived in the 10 largest cities; by 2020, less than 7.9% did. The fraction of people living in the largest cities has roughly halved over the past 90 years.
The decline of cities has accelerated in recent years (2020-2024), as the pandemic had more impact on cities than on less dense areas. Idaho’s population growth has far exceeded that of dense cities.
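As a quick sanity check on the figures in the comment above (a sketch using only the commenter’s own percentages):

```python
# Share of the US population living in the 10 largest cities,
# per the figures quoted in the comment.
share_1930 = 15.5  # percent
share_2020 = 7.9   # percent

ratio = share_2020 / share_1930
print(f"{ratio:.2f}")  # -> 0.51, i.e. the share has roughly halved over 90 years
```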
Many cities are pushing people toward homes the size of jail cells. Safety and mobility degrade as people live in more densely populated regions. The overcrowding crisis is having a negative impact on quality of life. None of this is conducive to innovation or free thinking. If population density were the key to innovation and lifestyle, then Bangladesh would be the leader in ideas and innovation.
The big cities are already beyond the optimum. In the denser cities, quality of life declines, health declines, crime rises, and noise pollution increases. Published research has found increased stroke rates correlated with densely populated cities.
In the AI utopias I am always surprised by the intellectual leap across the abyss of wars.
Just out of considerations of the balance of power and finite resources: who exactly is going to sponsor all these fulfilling lives of garden BBQs and videogames? If humans are needed neither as workers nor as soldiers, their best chance of survival is to be declared a protected species – not by the AGI, but by whatever parties come to rule the world. Because every human alive is that much less biofuel for the king’s robot army. Right now our rulers need us, because in a three-month bootcamp we can be converted into soldiers to fight a war. But what will happen once we are no longer needed? What are we doing to promote a world order that will cherish human lives? Counterintuitively, I think a solution might be a ban on killer robots, solidifying humans’ role as the caste of warriors.
May I put in a word for the lonely?
Upon a time, one argument for civilization was its fostering of refinement: of power, and art, and sensibility, and knowledge.
But even as civilization prospered, still for every _novus homo_ there remained (somewhere out there, back in the boonies) his “kith and kin”. That is, the familiar faces and places attendant upon his entry into the world. As Jean Améry put it, “Home is the land of one’s childhood and youth. Whoever has lost it remains lost himself, even if he has learned not to stumble about the foreign country as if he were drunk, but rather to tread the ground with some fearlessness.”
[Or, as Frost (because not every _novus homo_ “made it” in the big city): “home is that place where, if you have to go there, they have to take you in.”]
All that is ancient history now, of course. What we have built on that ruin, in part, is a civilization of homeless and lonely persons. And tagged it “progress”. Or, as Tacitus’s Calgacus has it:
“…ubi solitudinem faciunt pacem appellant.”
Where they make a wasteland, they call it peace.
Pingback: Sean Carroll: The Coming Transition in How Humanity Lives - 3 Quarks Daily
You talk, towards the end, of a pessimistic scenario and an optimistic one, both well thought out, but I’m not at all certain I’d call the latter ‘optimistic’ at all. In this scenario everybody would be completely free to do whatever they like, because all the boring and onerous tasks would be done by robots, to summarise it a bit 🙂
I think in a world like that there would be a very high proportion of useless layabouts, to use a scientific term – just think back to the ‘Gentlemen of Leisure’ of the past; very few were pioneering scientists or artists, but a scary number of them apparently took pride in being unable to carry out even simple tasks (for a modern-day example, I always think of Jacob Rees-Mogg, who has never changed a nappy).
And when you talk about ‘boring’ as something bad, I would argue that boredom is an essential part of learning; just think of learning your numbers or letters – it doesn’t often send children mad with excitement, but it is definitely necessary for most of the following education.
Great session – thank you. A minor matter of cohort arithmetic: to see someone living to age 200 in the next 100 years means they are over age 100 now (I know, too pedantic). The interesting question for actuaries is whether someone might be born in the next (say) 30 years who will live to age 200 (having that birthday in 2250 or thereabouts).
Listening to this and listened to West’s lecture in full.
West’s propane comparison was apt, in that it emphasizes the thermodynamics underpinning the epiphenomenal anthropic behaviors (cities, innovation, etc.). But as West (I think) noted, there’s a threat of stagnation and collapse in these singularities that may be underappreciated.
My speculation: while AI may offer some innovative benefits to the species, its real impact will be (barring some currently elusive innovation in energy use) to more efficiently and effectively use up the world’s viable energy inputs. (We already see this with the stress AI and data centers are putting on existing power infrastructure.) That will be your singularity, after which you get…the stagnation and collapse.
Worth keeping in mind: the Limits to Growth model was recently revisited with more current data, and, although West dismisses it, the polycrises it predicts (between roughly now and 2030ish) seem to dovetail with his own predictions: https://onlinelibrary.wiley.com/doi/full/10.1111/jiec.13442
A lot of food for thought indeed. I also lean towards the pessimistic scenario. I believe we humans tend to pursue self-interest first, and access to good education, good health care, social security, and safety are all aspects that surely lead to improvements in “humanisation” – at least toward what we may consider the currently best available versions of ourselves. Without some or all of these, humans will seek survival first, and basic biological and animal pre-programming will tend to prevail.

All that we deem great achievements of human artistry, ingenuity, and knowledge have been outcomes of the progression of hierarchisation and the maintenance of social structures, as Sean alluded to at the beginning. With more technology, at least what we see now is more off-loading and more resorting to devices and gadgets. We also have to remember that most people don’t study to become trained, critical scientists and engineers, but follow very different and diverse paths – many not really caring to take a path of serious study at all. And if people can blindly trust technology, why care to learn anything at all, some might think. These groups may become the easier targets for the concentration of power in one direction.

But we all live in and belong to the wider, global society. So divided groups, each living in their own “villages” or separate “islands”, each metaphorically or not speaking their own language, blindly following their ideologies, might conceivably determine the overall direction of the entire world society. The resultant is of course very hard to predict or guess – maybe even for the companies or other “agents” concentrating the latest technologies in AI and computing power. Hard to tell where the next minor but important “fluctuation” might come from.
Footnote: Sean may have seen this as well, so I’d be curious to hear what he thinks about this news, which came out a couple of weeks after this solo episode, about the new AI models that OpenAI and Meta have advertised as promising to be able to “reason”: https://www.ft.com/content/78834fd4-c4d1-4bab-bc40-a64ad9d65e0d
My big takeaway when I read Geoffrey West’s Scale several years ago was that there had been a singularity approximately 10,000 years ago, although whether it was technologically driven is unclear, when humanity began forming itself into complex adaptive systems (CASs). The primary example used in the book is cities, and there was also evidence that large corporations exhibit similar behaviour. I think it is now widely accepted that most other large-scale organisations – political parties, national and transnational institutions, etc. – are similar systems. The fundamental problem humanity faces, as Sean discusses in the podcast (though not specifically in relation to CASs), is firstly that the behaviours and drivers of these systems are not necessarily compatible with those which optimise human well-being, and secondly that the political systems in much of the developed world are currently run specifically to optimise the outcomes of, and benefit, these systems. All three of the areas of technological advancement discussed are therefore likely to be utilised for the benefit of the systems rather than the individual members of humanity who are their component parts. The most likely consequence would therefore appear to be a version of the pessimistic scenario. In theory the solution is to change the environment in which the CASs exist – they are, after all, adaptive – so that their behaviour is more closely aligned with individual humans’ well-being. Personally, I am not very optimistic that this is achievable.
Pingback: Versnelling – Verandersignalen
I sometimes listen to these casts while doing e-mails etc., so it’s somewhat in the background. It just flows past, you know; quite soporific. I’m not even aware of whether I’m taking it in… anyway, in this listening mode, the thing that struck me from this episode was Sean’s resignation to the inevitability of (widespread) use of DNA manipulation to design babies.
That one came right out of left field, for me. No, I can’t imagine that countries would rescind their already-existing legislation on any such thing just because another country might permit it. A very odd thing to suggest, I think.
Just managed to listen to the nice talk by Geoffrey West. I was left wondering at least two things:
(1) It does seem to follow logically that the dynamics of interaction and creativity, for example, would scale with the size of the city, but I wonder whether this really works for big cities where people live far apart, transportation is inefficient, and people have to commute over large distances and spend a significant amount of time in transit (for example, co-workers usually meeting only at the work site – although these days remote work and on-line meetings are relatively common, likely changing these dynamics in uncertain ways). Do cities in the “Global South” also fit the general trend? I would argue that Rio is different from New York or London – not just in terms of obvious geographical, landscape, and weather conditions, which may not matter for the goal of these generalisations, but in terms of safety and infrastructure, for example. How do these affect – positively or negatively – any trend? Or do some effects turn out to be negligible and cancel out? I don’t have any data, but would be keen to see studies exploring this.
(2) I feel that the word “optimise” gives an anthropocentric perspective and may be misleading. It seems to come from a higher-level perspective – that of humans and our activities as tinkerers, our daily experience as engineers of whatever. Perhaps, in the kinds of systems that interested people and researchers here are looking at, we should consider this concept (“optimisation”) emergent. In terms of the underlying thermodynamics, it seems more appropriate to think it’s all about minimisation (of free energy) and maximisation of entropy.
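For context on point (1): West and collaborators summarise the city data as power-law scaling, Y = Y0 × N^β, with β ≈ 1.15 for socio-economic quantities (wages, patents, innovation) and β ≈ 0.85 for infrastructure. A toy sketch of what the superlinear exponent implies (the exponent values are the commonly quoted ones, not data for any particular city such as Rio or New York):

```python
# Superlinear urban scaling: socio-economic output Y = y0 * N**beta,
# so with beta > 1 a doubled population yields more than doubled output.
def scaled_output(population, beta=1.15, y0=1.0):
    """Toy power-law output for a city of the given population."""
    return y0 * population ** beta

small, big = 1_000_000, 2_000_000  # a city and one twice its size
gain = scaled_output(big) / scaled_output(small)
print(f"{gain:.2f}")  # -> 2.22, about 11% more output per capita
```

Whether commute times, safety, or infrastructure shift these exponents for cities in the Global South is exactly the kind of question the commenter raises.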