Big data is ruling, or at least deeply infiltrating, all of modern existence. Unprecedented capacity for collecting and analyzing large amounts of data has given us not only a new generation of artificial intelligence models, but also everything from medical procedures to recommendation systems that guide our purchases and romantic lives. I talk with computer scientist Tina Eliassi-Rad about how we can sift through all this data, make sure it is deployed in ways that align with our values, and how to deal with the political and social dangers associated with systems that are not always guided by the truth.
Support Mindscape on Patreon.
Tina Eliassi-Rad received her Ph.D. in computer science from the University of Wisconsin-Madison. She is currently Joseph E. Aoun Chair of Computer Sciences and Core Faculty of the Network Science Institute at Northeastern University, External Faculty at the Santa Fe Institute, and External Faculty at the Vermont Complex Systems Center. She is a fellow of the Network Science Society, recipient of the Lagrange Prize, and was named one of the 100 Brilliant Women in AI Ethics.
0:00:00.2 Sean Carroll: Hello everyone and welcome to the Mindscape Podcast. I'm your host, Sean Carroll. There's a kind of history myth that sometimes gets promulgated in, I don't know, elementary schools maybe, or just folk tales we tell each other, according to which, when the first European explorers landed in the New World, the indigenous folks saw them and thought, "Oh my goodness, these are gods coming to visit us and we need to worship them and they're too powerful to deal with." Turns out, nothing like that is actually true. This is a story that the Europeans made up after the fact to make themselves look good and justify some of the things that happened. Nowadays, we are being faced with a new set of visitors from another world, namely, artificial intelligences. Whether it's large language models or some other kind of constructed program that in many ways can act human but has a different set of capacities, and we're learning to deal with them. And unlike the myth of the European explorers landing in the Western Hemisphere, today there are a bunch of people who are quite literally willing to say that these are gods coming to deal with us.
0:01:18.3 SC: I know there's also plenty of skepticism out there, but there are people who think not only that AIs are going to have, or already have, human-level intelligence and agency, but well beyond that, that they'll be superhuman, godlike creatures that we're gonna have to deal with. I am myself not of that opinion. I do not think that that is actually what is going on. But just like the landing explorers, AIs do have different capacities than we do. They're trained of course, they're designed, they're made to in many ways act very human, but they're really not; they're thinking in a different way. They're much better than we are at some things, and not nearly as good at others. So, how do we think about this world in which interacting with AIs, interacting with computerized systems more broadly, is going to be a crucially important part of how we live our lives? Today's guest is Tina Eliassi-Rad, who is a computer scientist whose work spans the space, and that's why I really like it: from very technical stuff, just how do you better detect certain nodes or communities in an abstract network that you have embedded in some sort of data, but then also the human side of how you deal with this stuff, how these computer systems, how these AIs are going to affect our lives and we're going to affect them, all the way up to Human-AI coevolution.
0:02:54.6 SC: Once we build these systems and then we interact with them. And then we use them to decide how to go shopping or decide how to find a romantic partner. Guess what? That affects who we are, how we live our lives, and the survival strategies we're gonna have to move forward in this very brave new world. Again, many positive aspects here. There are things that we don't wanna do, we don't wanna bother doing, or that are hard for us to do as human beings, that we can outsource to the AIs. There are other ways in which it's very dangerous. The biases, the bad things that we have in our own brains can be inherited by the AIs, and they can have new failure modes that we human beings don't have. It's a world that is changing super duper rapidly, obviously, as a lot of research is coming in and a lot of influences are out there. It's not all about necessarily writing the best program. Some people who are very good at writing programs wanna optimize for making the most money, right? And we have to take that into consideration when we consider what to do, how to regulate, how to control, how to optimize for our own actual goals, rather than just seeing what happens next and living with the consequences. So the more informed we are about what the possibilities are and how to deal with them, the more we'll be able to do that. So let's go.
[music]
0:04:31.3 SC: Tina Eliassi-Rad, welcome to the Mindscape Podcast.
0:04:34.1 Tina Eliassi-Rad: Thank you. Thank you for having me.
0:04:36.1 SC: Normally, I like to start the conversation with someone talking about the most basic stuff, the things everyone knows about. For your stuff, I kind of feel like going in reverse order. We'll end with the fun stuff about AI and democracy and things like that. But let's start with understanding graphs and networks and things like that, especially using neural networks to understand things that human brains can't quite wrap their minds around. So what is the most general way of stating what it is that you're trying to understand when it comes to thinking about graphs and networks?
0:05:17.3 TE: Well, when you're trying to understand a phenomenon, usually you have multiple entities, like multiple people, and they have relationships with each other, right? And so when we're looking at graphs, like machine learning with graphs or graph mining, we are trying to find what we call relational dependencies: that the probability of you and me being friends, given that we both like Apple products, is greater than the prior probability of you and me just being friends.
0:05:47.7 SC: Okay.
0:05:47.9 TE: Right? Or the probability of me liking Apple products, given that we're friends, is more than the prior probability of each of us liking an Apple product. So the second one is: we are friends, you influenced me, and so I like Apple products and I buy Apple products, or I buy this headphone or headset. And the first one is that because we like similar things, we become friends. This is the notion of homophily, or birds of a feather flock together. But in a nutshell, for people who work on machine learning on graphs, for network scientists who are interested in understanding phenomena, for network science as an interdisciplinary discipline, it is about these relational dependencies: what can we find? What are the patterns? What are the anomalies in the relationships that get formed?
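To make the relational-dependency idea concrete, here is a minimal Python sketch comparing P(friends | both like Apple products) with the baseline P(friends) on a toy graph. The toy data and the use of networkx are illustrative assumptions, not anything from the conversation.

```python
# Toy check of the relational dependency described above:
# is P(friends | both like Apple) higher than the baseline P(friends)?
import itertools
import networkx as nx

likes_apple = {"ann": True, "bob": True, "cat": True, "dan": False, "eve": False}
G = nx.Graph()
G.add_nodes_from(likes_apple)
G.add_edges_from([("ann", "bob"), ("bob", "cat"), ("ann", "cat"), ("dan", "eve"), ("cat", "dan")])

pairs = list(itertools.combinations(G.nodes, 2))
p_friends = sum(G.has_edge(u, v) for u, v in pairs) / len(pairs)

apple_pairs = [(u, v) for u, v in pairs if likes_apple[u] and likes_apple[v]]
p_friends_given_apple = sum(G.has_edge(u, v) for u, v in apple_pairs) / len(apple_pairs)

print(f"P(friends) = {p_friends:.2f}")                                 # baseline
print(f"P(friends | both like Apple) = {p_friends_given_apple:.2f}")   # homophily signal if larger
```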
0:06:43.3 SC: So for the audience who wasn't there: what Tina is not telling you is that we spent 10 minutes before the podcast struggling with our Apple products to make the recording work, but we still use them. So I guess take whatever.
0:06:56.3 TE: Exactly.
0:06:57.8 SC: Lessons from that. Okay. But I guess in the current era, the issue is you have too much data. Or at least in principle, one would like to imagine having too much data. There's so much stuff, right? Is a large part of the worry how to pick and choose what to pay attention to, what to draw connections between?
0:07:18.8 TE: Yeah, there's some of that. I would say that. So I have this thing I call the paradox of big data, which is like, there's a lot of data, but to predict specifically for what Tina wants, it's difficult, right? You don't have maybe as much information about Tina. Now, if Tina belongs to some majority group, then maybe you can aggregate from the majority and say, "Well, Tina is part of this flock, and so Tina will like whatever this flock likes." But really, I feel like the problem these days is more about exploitation and going with things that are popular than exploration, right? Like in the past, we would go to the library or the bookstore, and you're looking for a book, and you would find other things. And those were basically the cherry on top of the cake. The cream, it's like, "Oh, yeah, I found this." Right? And now we're really not getting that, right? So when you use all these recommendation systems, whether it's Google or Amazon or any of the others, they oftentimes show you what is popular or what they believe you would like.
0:08:25.4 SC: Right.
0:08:26.3 TE: And so in a past life, I worked at Lawrence Livermore National Laboratory, which is a physics laboratory. And when I would do searches there, and this is many years ago, I would get more physics books than when I lived elsewhere, where they wouldn't show me as many physics books, just based on the location, the zip code. And so there's some of that that's going on, and I feel like that is more of the problem: not really serving the individual or exploring as much as possible.
0:08:57.9 SC: So thinking though, purely like a mathematician or a computer scientist, faced with these big networks, how should we think about them? What are the tools that we use to tease out what are the important relationships?
0:09:10.5 TE: Yeah, so it depends on what kind of network it is, right? So in social networks, for example, we know that there are two dominant processes that form social networks. One is the closing of what we call wedges. So if I am friends with you and you are friends with Jennifer, then I will become friends with Jennifer, right? We close that triangle. And in fact, if you and I have, for example, many common friends, or let's say me and Jennifer, in my example, we have many common friends and we are not friends, then there is something going on: there were lots of opportunities that we could become friends, but we chose not to become friends. Now, there's also of course partial observability, and maybe I just didn't observe it. However big your data is, you're not omniscient, you don't see things. But we do expect that a friend of a friend is also a friend. That's one. The other one is this notion of preferential attachment, that everybody wants to connect to a star. So basically those are the two big patterns, and then you look at deviations from them.
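Both processes have simple graph-level proxies. A hedged sketch, using networkx and its Zachary karate club toy graph purely for illustration: common neighbors as the wedge-closing signal, and the built-in preferential attachment score for the hubs.

```python
# Two link-formation signals: triadic closure (common neighbors) and preferential attachment.
import networkx as nx

G = nx.karate_club_graph()  # standard toy social network

# Non-edges ranked by common neighbors: many shared friends but no tie suggests a wedge waiting to close.
wedge_candidates = sorted(
    nx.non_edges(G),
    key=lambda uv: len(list(nx.common_neighbors(G, *uv))),
    reverse=True,
)
print("top wedge-closing candidates:", wedge_candidates[:3])

# Preferential attachment score (degree product): links to "stars" score highest.
pa_scores = sorted(nx.preferential_attachment(G), key=lambda t: t[2], reverse=True)
print("top preferential-attachment candidates:", pa_scores[:3])
```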
0:10:20.8 TE: So there's work that was done a while back by Jon Kleinberg at Cornell, he's a very well-known computer science professor. Think Facebook, for example: who is your romantic partner on Facebook? And he and his colleagues showed that basically you are the center of a flower and you have petals around you. These petals could be your high school buddies or college buddies, et cetera. They have just more triangles in them. And people who fall outside of these petals and have a lot of connections to these petals are either your sibling or your romantic partner. That is, you are introducing them to other facets of your life.
0:11:00.9 SC: Ah yes, okay.
0:11:01.0 TE: And they showed that when those connections stop, when the establishment of those connections stops, it's a leading indicator that you will break up. [laughter] So you were talking about which connections to pay attention to. Those are some of the things that are fun when you look at social networks.
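The measure behind that flower-and-petals picture is often called dispersion, and networkx ships an implementation of it; here is a small sketch of ranking an ego's neighbors by it on a toy graph, with the ego node chosen arbitrarily for illustration.

```python
# Rank a node's neighbors by dispersion: a tie whose mutual friends are spread across
# many different "petals" of the ego network is the partner/sibling signal described above.
import networkx as nx

G = nx.karate_club_graph()
ego = 0  # arbitrary ego node for illustration

ranked = sorted(G.neighbors(ego), key=lambda v: nx.dispersion(G, ego, v), reverse=True)
print("neighbors of", ego, "ranked by dispersion:", ranked[:5])
```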
0:11:18.8 TE: Biological networks are totally different. So in biological networks, it's a whole other ball of wax. You're not looking for common friends; you're looking more for complementarity between different proteins that serve some function.
0:11:35.1 SC: So it's interesting because it seems like an attempt to go from syntax to semantics in some sense. You're going from structure to meaning broadly speaking.
0:11:45.3 TE: You're trying to understand what is going on, what is the underlying process that is happening in this network, and why these links exist. Now, the one thing that makes the study of graphs and networks really interesting is that it is not a closed world. So just because you didn't see a link between me and Jennifer doesn't mean that we're not friends. And so for machine learning, where you need both positive examples and negative examples, which negative examples you pick becomes difficult. Because the edges or the links or the friendships that don't exist may not exist because they don't wanna be friends, or for other reasons. And so what the negative examples are becomes an important aspect of things.
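A minimal sketch of that negative-sampling choice, in a toy link-prediction setup with networkx; the uniform sampling shown is one common choice, not a recommendation.

```python
# Positives are observed edges; "negatives" must be sampled from non-edges, even though
# in an open world some of those pairs may simply be unobserved friendships.
import random
import networkx as nx

G = nx.karate_club_graph()
positives = list(G.edges())

non_edges = list(nx.non_edges(G))
random.seed(0)
negatives = random.sample(non_edges, len(positives))  # uniform sampling: simple but debatable

print("positive pairs:", len(positives), "sampled negative pairs:", len(negatives))
# Any sampled "negative" could be a friendship that was never observed.
```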
0:12:31.4 SC: Well, or as you were giving the example, I was thinking, I don't interact with my romantic partner on social media that much.
0:12:37.6 TE: Exactly.
0:12:38.3 SC: 'Cause we interact in the real world. We don't need that.
0:12:41.4 TE: Indeed, indeed. So there are lots of assumptions being made, obviously, in terms of how the network is being observed. And in fact, this is one of the big differences between computer scientists who study graphs and network scientists, who are typically physicists or social scientists. The network scientists are like, well, there's a distribution and this graph fell from it, versus the machine learning, graph mining folks, who typically don't question where the graph came from. They're like, "Oh, here's data." And they run with it. And it boggles the mind, because you should think about where this data came from, how it was collected, what the errors in collecting it might have been. And in fact, this touches on a sore point for me, because what happens is they don't question the data, right? They just feed it into their machine learning, AI models. And then on the other end, they don't measure any uncertainty. So if you have something like, let's say, a social network that you've observed, there's all this stuff about representation learning, right? Where basically I take Tina in the social network and I represent her as a vector in a Euclidean space. Maybe a vector with 16,000 elements in it.
0:14:05.2 TE: So the cardinality is 16,000 and there's no uncertainty. They're like, "No, Tina falls exactly here." And it just doesn't make sense at all. And so then those kinds of models, given that you didn't start with, "Okay, well my data could have some noise in it."
0:14:21.3 SC: Right.
0:14:21.4 TE: "Some uncertainty in it." And then you don't even capture the uncertainty of the model at the end. It just, there are lots of problems that can occur, including for example, adversarial attacks or your model is not gonna be robust. Let's just speak up.
0:14:38.3 SC: Well, this sounds just like full employment for enthusiastic graduate students. Because how hard could it be? It could be hard, but it's very well-defined, the problem that you just set out: allow for the existence of noise in these descriptions and see how your answers change.
0:14:55.3 TE: Yeah, I think in part, one of the reasons that folks, at least in the CS side, the computer science and the machine learning side, aren't too bothered by it these days is because we are going through this era where prediction is everything, prediction accuracy is everything. And so, there are these benchmarks, and it's basically benchmark hacking or state of the art hacking. And that's basically what is going on, that's the reality of it.
0:15:23.2 SC: Right.
0:15:25.3 TE: And so there's a lot of that kind of engineering going on, as opposed to really thinking about, well, what is the phenomenon that I'm interested in, how is the data coming to me, what are the sources of noise, how should I take them into account, should I even take them into account, and what are the uncertainties in terms of the predictions that I am outputting?
0:15:47.1 SC: Let's help the audience understand the idea of benchmark hacking, 'cause that's probably a cool but important one. What's a benchmark and how do you hack it?
0:15:54.0 TE: Yeah. So basically, you create a bunch of data and you get a buy-in from the community that these are good data sets to test a machine learning or an AI model on. And then there's a leaderboard and you wanna be number one. And so you hack the systems that exist, or you hack your own system, you create your own to be number one, as much as possible. And that's basically what is going on. And I like this metaphor. So my colleague Barabási said, "It's like there are two camps. There's like a toolbox, it's a finite toolbox." And the machine learning, the AI people, the engineers put tools into that toolbox. And because it's finite, it's very competitive. That is, my tool beats your tool, even if it's by 1%, where it's not clear if it's statistically significant or not. And I may be king for only 30 seconds because another tool comes in. And then there are the scientists on the other end that just open the toolbox and say, "Okay, well, what is good for whatever prediction task I want to do?" And then they pick a tool out of that. And so a lot of this benchmark hacking or state-of-the-art hacking happens on the engineering, the AI machine learning side, the computer science side, because you want your tool in that finite toolbox.
0:17:19.9 SC: But on the science side, the physicist or social science side, the people who are interested in these models that create the sets of data you have, there's also, as I understand it, a lot of worry about degeneracy or overdetermination or underdetermination, where very different physical models could give you essentially the same kind of graph or network. How big of a problem is that?
0:17:47.0 TE: It is a very big problem. There are multiple angles to this. So one is, for example, because of all the hype, oftentimes people on the engineering side don't talk about the assumptions that they have made or the technical limitations of their system. And in fact, because of that, we have this reproducibility problem. So not even a replicability problem, but a reproducibility problem, which is just about the code: can I just reproduce your results with your code as you have it, even with your training data, even with how you broke it up into these different folds or whatever? Which is a very, very low bar to pass. But that does not happen, because there are lots of assumptions that are being made, et cetera. And then there's this notion that we are living through this era of big models. So I want a model that has many, many, many parameters, even if I don't need all those many parameters, or, for example, maybe I do care about interpretability, that is, I want to know what the model is actually doing. But again, for that one or two percentage points on the prediction side, you let go of it and you go with the big models.
[laughter]
0:19:11.0 TE: But yes, it's a big, big problem of, for me, the lowest bar would be that we require, at least with federal funding, and in some of the service that I do for the federal government, I've been pushing this, I'm not gonna be a very popular person, but that if you get taxpayer dollars, in your reports to the government, you have to have a section on assumptions and technical limitations.
0:19:35.6 SC: Mm-hm. Okay, good.
0:19:36.7 TE: Because the problem is the way the peer review culture goes is that if I have a technical limitation section in my paper, the reviewer will just copy and paste it and say reject. But the federal government isn't gonna do that. NSF isn't gonna do that. NSF has already given you the money, and you're doing the annual report. And so it has to be. Come on, just be honest. I did not test this method on biological networks, and they're very different than social networks. So like, caution.
[laughter]
0:20:06.8 SC: Well, this is because what you do for a living matters a lot to the real world, and to money, and things like that, unlike the foundations of quantum mechanics that I do. I don't need to worry about people being overly concerned with the results. They're all willing to give me a hard time anyway. Okay, so I have this sort of philosophical mathematical problem, I don't know. If I have a graph, a big graph, so some nodes, some edges that are relationships, and I have a different graph, are there measures of similarity between them? If I add one node to the graph, is it a completely different graph, or is there a metric I can put on there? How much is that even understandable?
0:20:46.1 TE: Yeah, I love that problem. I've thought about that problem a lot. So the issue there is similarity is in the eye of the beholder. And it depends on the task itself. So similarity is an ill-defined problem. And so you can say, okay, well, I can go with something like an edit distance: okay, how many new nodes do I have to add to graph number two, and how many new edges do I have to add or remove, to make it look like the other graph? And then try to solve the computationally hard problem of isomorphism, in fact, alignment. And in many cases you don't need alignment. So, for example, you can think about two networks and you have started a process of information diffusion on them. Like you started a rumor, let's say. And you would just measure: how similarly does this rumor, the same rumor, travel through Network 1 versus Network 2? And if it travels similarly, let's say, I'm gonna throw in some jargon, the stationary distribution of a random walker that is spreading this rumor becomes the same at the end, you would say the networks are similar enough. And so you don't need the sizes to be exactly the same.
0:22:05.1 TE: So it could be, for example, you have a social network of France and a social network of Luxembourg, and you start a rumor in France and in Luxembourg, and they progress the same way. And you would say the networks are similar even though one is much, much bigger than the other.
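A rough sketch of that diffusion-style comparison: let a random walker (the rumor) spread on two graphs of different sizes and compare the distributions it settles into. The graph generator, walk length, and distance used here are illustrative assumptions.

```python
# Compare two graphs by how a random-walk "rumor" settles, not by aligning nodes.
import numpy as np
import networkx as nx

def stationary_distribution(G, steps=200):
    A = nx.to_numpy_array(G)
    P = A / A.sum(axis=1, keepdims=True)        # row-stochastic transition matrix
    P = 0.5 * (np.eye(len(G)) + P)              # lazy walk, guarantees convergence
    pi = np.full(len(G), 1.0 / len(G))          # start the rumor uniformly
    for _ in range(steps):
        pi = pi @ P
    return np.sort(pi)[::-1]                    # sort so node labels don't matter

G1 = nx.barabasi_albert_graph(200, 3, seed=1)
G2 = nx.barabasi_albert_graph(300, 3, seed=2)   # different size, similar growth process

pi1, pi2 = stationary_distribution(G1), stationary_distribution(G2)
k = min(len(pi1), len(pi2))
print("L1 gap between top-k stationary masses:", float(np.abs(pi1[:k] - pi2[:k]).sum()))
```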
0:22:19.6 SC: That makes sense, in fact, because I was gonna ask about when you have a big graph and you somehow coarse grain it. Or you group subgroups into single nodes. You want to somehow have the feeling that it's still representing the same thing, even though you've thrown away a lot of information.
0:22:37.6 TE: Yeah. Now, this problem of grouping nodes, this is a very important problem and has been studied by lots of people within graphs. It's called community detection. Basically, you want to group similar nodes together. Now, you can have different functions that you define for what similarity means there. It could mean that these people just talk to each other more, so there are more connections between them than what you would expect in a random world, or just more connections between them than to other folks. Now, for this kind of community detection, Aaron Clauset, who's a professor at Colorado, showed that there's a no-free-lunch theorem there. And actually, it was Aaron Clauset and others. And I think actually Aaron was the last author. So I think the first author is Leto Peel. But you know how it is, you usually just name your friend.
0:23:28.5 SC: Yeah, I do know.
0:23:29.9 TE: My apologies to the other authors. But they showed a no-free-lunch theorem, which basically means that it is not the case that there is one particular grouping, one particular collection of nodes that you're grouping, that would give you the best or the true communities. You see what I mean? Because when you are doing these groupings of nodes, you have some objective function that you're trying to maximize. And basically, the idea is that there is no one peak there. So there's not one particular community that you can put Tina in and say, okay, Tina belongs here, that's where she has to sit. And so some of that becomes an issue. But this notion of what it means for one network to be similar to another network has its tentacles in community detection, in clustering of nodes, and all of those are ill-defined. So it really is driven by the task at hand.
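One way to see that the "best" communities depend on the objective: run two different community-detection objectives on the same toy graph and compare the partitions. The algorithms and graph below are illustrative choices, not the ones from the no-free-lunch paper.

```python
# Two community objectives on the same graph usually disagree; neither is "the" true grouping.
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()

modularity_groups = community.greedy_modularity_communities(G)         # maximize modularity
label_prop_groups = list(community.label_propagation_communities(G))   # local label agreement

print("modularity-based groups:", [sorted(c) for c in modularity_groups])
print("label-propagation groups:", [sorted(c) for c in label_prop_groups])
```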
0:24:31.5 SC: Okay. I guess I'm spoiled by caring about what probably, in your world, would be the simplest possible case. 'Cause I think about the emergence of space from some set of quantum entanglements or something like that. And it sounds all very fancy and highbrow, but basically, something is entangled with something else if it's next to it. And there's this very simple, or very simple-minded, spatial coherence. But of course, in social networks, I can be connected to people anywhere, and that makes it a more complicated problem.
0:25:03.2 TE: Yeah. And that becomes what we call the small world problem, or the Kevin Bacon number or the Erdős number. You don't have to go that far out to be connected to famous people.
0:25:19.2 SC: And so, how good are we these days at detecting real clusters, communities, figuring out what's going on just from knowing about a graph and the connections between the nodes?
0:25:31.6 TE: For downstream tasks where you can have some, let's say, confusion matrix, where you can draw true positives, false positives, true negatives, false negatives, we're actually very good at it. But if it's about, like, okay, I found these communities, and do these communities make sense? It kind of breaks down into whether it's hard clustering, where you put Tina into just one community, or you put Tina into multiple communities. And then there's a little bit of just eyeballing it, in a way, if you do not have this downstream task where you could say, okay, here are the true positives, here are the false positives, and so on and so forth. But in many cases, it's difficult to place a person in a social network in only one community, because people are multifaceted.
0:26:21.5 SC: Right, but you started with the example of being given recommendations by Amazon or whatever. And sometimes the algorithm fails because it's not picking up our individual idiosyncrasies, it's just giving us the most popular thing. And does that tie into the well-known problem of polarization or extremization from network recommendations, where everyone is pushed to some slightly more extreme set of YouTube videos or Reddit posts or whatever?
0:26:55.6 TE: I think, in part, they just want your attention. And so the objective function is such that they just want to hold your attention. And so they will show you whatever is necessary to keep your attention. And so if they believe that my tie to Brandon is very strong, that we have a strong relationship, and Brandon found these things interesting, then they will show them to me as well, just to test it, to see whether they can capture my attention. And then through that, they can show me more ads, for example.
0:27:29.4 SC: I guess that makes perfect sense. So the point is, if Amazon wants to recommend things to me, it's not maximizing the chance that I want this, it's maximizing its profit. [laughter]
0:27:40.3 TE: Exactly, exactly. And so they kind of go hand in hand. And in fact, this touches on this issue that we have written a couple of times about. There was a Nature Perspective piece a while back, and more recently an AI journal piece on, in a way, like Human-AI coevolution. So if you think about it, when you're using Amazon, when you're using YouTube, when you're using Google, you're providing data for them. We talked about this. And they take that data into account and they make recommendations. Those recommendations then affect what you do in the real life, and then you go back and you provide them more training data. And so there's this kind of feedback loop that goes on and on, and it's oftentimes not captured in terms of who's influencing who most. And one example that I like here is like, think about dating apps. There was a story recently from Stanford that most people are meeting on online dating apps these days. Instead of like through college or through their friends, family, et cetera, or at the local bar. Now, those dating apps have recommendation systems. And based on those recommendation systems, perhaps you meet somebody, you partner up, and you have babies.
0:29:01.0 TE: And so over time, these recommendation systems actually have an impact on our gene pool going forward.
0:29:06.6 SC: Yeah, I've not quite gotten that far.
0:29:08.5 TE: Right, yeah. And these recommendation systems are all about exploitation and not exploration. But maybe you would say that my aunt or my grandmother or my college were also all based on exploitation and not exploration. But there is this notion that there are these algorithms that we can't understand what they're doing, and perhaps 100 years from now, they may influence how our genome is evolving.
0:29:37.5 SC: Well, we are part of the world, and we create the world, and it reflects back on us. Right. It reminds me a little bit of discussions about extended cognition theories where, you count your calculator and your pad of paper and whatever as part of your brain because you keep information there, you do calculations, et cetera. And so, our environment and who we are is being increasingly populated by these artificial algorithms that we put out there.
0:30:06.1 TE: Yeah, I don't know how far we think certain things are going, and society has to design it. For example, the New York Times had this article a while back about a person who's trying to set up a company, an online dating company, where on the first or second dates, which are usually not very good, my avatar and your avatar will go on the date and then they will report back. And only if both avatars are happy, then on the third date, we actually go out on the date. And so how much of our human behavior are these things actually going to take over? Interesting.
0:30:42.6 SC: So I didn't see this article. What's your actual opinion? Is there any chance that that would help?
0:30:47.9 TE: I think, like, I'm an introvert, so I'm like... And also I'm a computer scientist. I'm like, oh, this is great, let somebody else do the dirty work. And then maybe, if it's a good day, I'll get out of my cave and I'll go and talk to them. But extroverts don't like it at all. So my husband, who's an extrovert, is like, "What are you talking about? Am I just a brain in a vat now? What's happening?" So I think it depends on where you are on this extrovert-introvert spectrum.
0:31:14.9 SC: We should also reveal to the audience that Tina has the good or bad fortune of being married to a philosopher.
0:31:20.4 TE: Indeed, indeed. For 30 plus years. It's been fantastic.
0:31:25.4 SC: Yeah. So the evolution, I was going to get to that later, but it's so good we have to talk about it now: coevolution of humans and AI. And my guess, when I heard that phrase, was that we were thinking more about cultural evolution, memes more than genes. But of course they are interconnected with each other. Now that you say it, it's obvious, because our culture reflects our behavior, our behavior affects how we pass genes on to the next generation. So AI is going to be affecting the population genome of human beings.
0:31:58.5 TE: Yeah, and I think, in particular with, for example, generative AI as it's generating content, whether it's text or video or images. And there's this notion, and the late Dan Dennett, who you had on your podcast, a very famous cognitive scientist, called these generative AI models counterfeit people. And he had an Atlantic article a few years back about it. And also, because people treat these generative AI systems, these counterfeit people, as if they're more objective somehow, they know more than me, people tend to give their agency to them. And also, these AI systems evolve faster than us. And so it's not quite clear, not that it's a race, but they're evolving a lot quicker. Their objective functions are different, like attention, money, et cetera, than perhaps the objective functions of people, like maybe the good of the society or the public good, something other than just money or GDP or some measure like that.
0:33:06.1 SC: Are we good enough that we could at least imagine some kind of new equilibrium that we get into when we're tightly coupled with our AIs, that there is some happier state of being we could at least aim for if we're working together well? Or is it too much in flux these days to know much about that?
0:33:27.2 TE: I think these days it's too much in flux. But I think, for example, there are certain things that can be done to improve it. Whenever you or another human being asks me a question, perhaps I would come back with another question. I'm like, did you mean this, Sean, or did you mean that? But, for example, with ChatGPT or these large language models, they never come back and say, did you mean this? The reason is that it reduces their utility. Me as a human being, when I ask the question, I want an answer and I want it now. Or it never comes back and says, "I don't know" or "I'm not sure of it." And maybe you would accept that from a human being, but you don't accept it from a large language model. You're like, oh, you're a tool. You need to tell me. I asked you about this and I want the answer now. So there's some of that going on, but the big tech companies could add those features to make it more equal in terms of this conversation that is going on. But at this point, utility is winning over all these other things.
0:34:28.2 SC: But utility is tricky. I was talking with ChatGPT or whatever the other day, and I was trying to get it to imagine... And maybe I didn't try too hard, I didn't really put too much effort into it, but I was trying to imagine a character in a fictional narrative who was very insulting and who would give out some good insults. And I said, what are some good insults that it could give out? But it wouldn't tell me. It's like, oh, no, you shouldn't give out insults. You should talk to people politely. It's clearly programmed not to go down that road.
0:34:57.2 TE: Yes, it is. There are actually other generative AI systems, especially for programming, I've heard, where it tells you, okay, if you wanna code X, this is how you code it. And then you code it and you're like, oh, that didn't work, you're stupid. To the generative AI, the human says, you're stupid. And then the generative AI says to the human, "You're not a good programmer." And so then there's some kind of a... Then they get at it.
0:35:22.2 SC: Gets in a loop, yeah.
0:35:23.4 TE: But that's only like, for specific ones. You're absolutely right. With ChatGPT, it's not gonna be that kind of antagonistic.
0:35:31.8 SC: And I know this is probably related to the big worry that a lot of people have had about bias in AI algorithms. If you train AI on human discourse and human beings are biased, then of course the algorithm is gonna be biased. It's not because the computer is biased, it's because you've trained it on data that is. And is that something that your tools can help us deal with?
0:35:57.4 TE: You can try to find biases. There's a lot of work on that, on how these large language models are sexist, misogynist. We wrote a report for UNESCO for last year's International Women's Day about how sexist and misogynist these large language models are. The problem is, whenever somebody asks me that question, they say, well, look, humans are biased too. The problem is that I can hold a human accountable. I can sue a human being. Who am I gonna sue? You know what I mean? And especially in America, we are very litigious. And so then this gets into accountability. And in fact, there's a lot of work, and the government, for example, our government, is putting a lot of our tax dollars into trustworthy machine learning, trustworthy AI, et cetera, et cetera. And to me, it rings a little hollow because there's no accountability. How can I trust you if there's no accountability? I feel like they go hand in hand. And so there's some of that going on, which is, who am I gonna sue? Am I gonna sue OpenAI because it's sexist and misogynist, because one of its products is sexist and misogynist? That's not the case right now.
0:37:12.0 SC: Well, and human beings, this is an ongoing cultural flashpoint, so there's a lot of different opinions about it, but human beings might at some point think of something to say that we know is inappropriate, and then we're smart enough or we have enough controls that we don't say it. Is that the kind of thing that it makes sense to try to implement in the context of a large language model?
0:37:39.0 TE: Perhaps. The thing is, at this point what it gives out is what's the most probable and what it believes you will like. So it's a two-place function: what's probable and what you will like. But yes, you could definitely do that. And there's this comedian, unfortunately I forget his name now, but he was saying, "The secret to a long marriage is to never say what comes to your mind first or second, always say the third thing that comes to your mind." And this goes back to what you were just saying. Maybe you should just say the third thing, the third most probable thing. And in fact, along those lines, usually for the students who use these generative AI tools for math problems or math homework, the first answer is usually wrong, because a lot of the answers that have been uploaded into Course Hero, et cetera, are wrong. Usually it's the second answer that's the correct answer.
0:38:32.0 SC: Oh, that's very interesting. Is that actually true or is that a feeling that people have?
0:38:36.0 TE: These are just anecdotal. I haven't had anybody do a systematic study of this, but usually the first answer is not quite there.
0:38:44.0 SC: Well, it's interesting because one of the things we discover, you discover, we in the royal we, thinking about these very, very large data sets, is that sometimes you can predict even more than maybe you thought you'd be able to. I want to ask you about this paper that you wrote about using sequences of life events to predict human lives. That sounds interesting, but also maybe scary. [laughter]
0:39:12.0 TE: Yeah, yeah. So in the true computer science, AI, machine learning sense, we're very good at coming up with names for our systems. So we called it life2vec. So we're just putting your life into a vector space, whether you like it or not.
0:39:29.8 SC: Yeah, that's okay.
0:39:32.3 TE: But you're just a vector in this vector space. Now, basically the idea is that if you look at these large language models, they're analyzing sequences. And as human beings, we also have a life story. That's a sequence. And so I was lucky enough to work with a group of scientists in Denmark. So if America has surveillance capitalism, in Denmark they have surveillance socialism. So there is a department there, a department of statistics they call it, a ministry of statistics, that collects information about people. And so we had information for about six million people who have lived in Denmark from 2008 to 2020, and we were like, well, can we write stories for these people, in a way, and then feed them to what is the heart of these large language models, a transformer model, which is basically just the architecture of a neural network that learns association weights within some context window. And that's what we did. But instead of, for example, the way ChatGPT goes online and gobbles up all this bad data that people have put in, all the misogynistic, sexist data, we didn't do that.
0:40:55.4 TE: So we had very good data from this department of statistics, and we created our own artificial symbolic language. And then we fed that artificial symbolic language for these six million people into a transformer model. And then we were able to predict life events. And so one of them that caught the media's eye was: will somebody between the ages of 35 and 65 pass away in the next four years? And we picked that age range because that's a harder age range to predict for. If you're over 65, then it's easier to predict whether you're going to pass away in the next four years. And if you're younger than 35, it's also easy the other way: you're unlikely to pass away. And so that's one of the things. The other prediction task was, will you leave Denmark? So you can predict for that. But it used similar technology to these large language models, where you have what they call pretraining, where you just learn, based on the data that you have, what's likely to happen next, and then you fine-tune it for whatever prediction tasks you have.
0:42:10.0 SC: What does that mean, an artificial symbolic language? Like literally a human language, or is it some logical encoding?
0:42:18.0 TE: It's a logical encoding, because the data that the department of statistics has in Denmark is all tables. So it is not this kind of sequence. So you could say, like, Tina was born in Copenhagen in December, blah blah blah, and we could generate a natural language, but that's difficult. Why would we do that? So then we generated a vocabulary for this artificial symbolic language. And that was actually a lot of the intellectual property of the work: okay, well, how do you take these tables and then create this artificial symbolic language that you can then give to a transformer model?
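A hedged sketch of that table-to-sequence step: each person's rows become a chronological string of synthetic tokens that a transformer could consume. The fields, token names, and records below are made up for illustration; they are not the actual life2vec vocabulary.

```python
# Turn per-person table rows into token sequences for a sequence model.
records = [
    {"person": 1, "year": 2010, "event": "JOB_ELECTRICIAN"},
    {"person": 1, "year": 2014, "event": "MOVE_COPENHAGEN"},
    {"person": 2, "year": 2011, "event": "JOB_OFFICE"},
    {"person": 2, "year": 2016, "event": "DIAGNOSIS_ASTHMA"},
]

def to_sequence(person_id):
    rows = sorted((r for r in records if r["person"] == person_id), key=lambda r: r["year"])
    # One synthetic token per row; a real system would also encode income bands, sector codes, etc.
    return ["[CLS]"] + [f"{r['event']}@{r['year']}" for r in rows]

vocabulary = sorted({tok for pid in (1, 2) for tok in to_sequence(pid)})
print(to_sequence(1))
print("vocabulary size:", len(vocabulary))
```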
0:43:00.0 SC: And what's the answer? Are we likely to die if we're 38 years old? How do we know?
0:43:06.0 TE: Well, the thing that we found, which was very interesting... I think the accuracy in terms of the model was about 78%, et cetera, and I think that's why people were showing a lot of interest in it. But to me, that wasn't really the takeaway. The takeaway actually was that labor data is a very good indicator of whether somebody in that age range is gonna pass away in the next four years or not. Because health data is very noisy and inconsistent. So even in Denmark, where they have universal health care, it's not like everybody goes to the doctor all the time and you have good data for them.
0:43:48.0 TE: And so some of the indicators of whether you're gonna pass away: one was whether you're male. We know this, males tend to do more crazy things than females. Oh, yes, I can jump over this ravine, no problem. And then the other stuff was basically just which sector you were working in.
0:44:10.2 TE: So if you're an electrician, it's a bad thing. It's not a very good thing. As opposed to, like, an office worker. So the labor data was actually much more helpful than the health data.
0:44:25.5 SC: How important is it to extract causality from these relationships? Maybe risk-minded people just become electricians.
0:44:35.5 TE: Yeah, maybe. We didn't do any kind of causal stuff. Like a lot of the work, a lot of the hype that's happening now in AI and machine learning, it's all on the correlation side, not on the causation side. So we didn't look at that at all, about what causes what. That's very difficult. And I haven't touched the field of causation, in part because I'm married to a philosopher. And so it's like, no, I ain't going there, 'cause every time I try to approach the topic, I just hear nightmares, and so I haven't gone that way.
0:45:09.0 SC: There are some issues there. Yeah. No, absolutely. But I guess it's interesting. Is it too much to draw a general lesson that by looking at these large data sets we might find simpler indicators of what we're looking for than we expected? You might have said, okay, how many calories somebody is ingesting is the important thing to look at. But then you look at the data and you learn, no, what is their job? That's the important thing to look at.
0:45:38.1 TE: Yeah, I think there's some of that. I think the best way of using this is perhaps government policy. When government issues a policy, and then maybe 20 years from that, if you have good data, you could see, okay, what have been some of the correlations that have come about based on this policy? And then maybe the actual social scientists and political scientists can draw some causal diagrams from what we find. Because the one thing is, usually, from computer science, AI, machine learning, we treat causation and correlation as if it's binary, as if it's a coin, this way or that way. But that is really not the case. It's more of a spectrum. And so if you have a model that is producing robust predictions, there is some underlying causal model, you just don't know it. And then maybe that could steer you in the right direction for that kind of work. But we didn't look at that for this particular work.
0:46:45.0 SC: So, human beings of course are examples of complex systems themselves, but this raises the larger question: human beings will eventually die, for whatever reason. Do complex systems have their lifespans? Or maybe they're infinite, I don't know, but they can also change dramatically and die, and that's something else you're interested in trying to tease out in a general way.
0:47:10.0 TE: Yeah, I'm very interested in the feedback that we were talking about, and how do we capture that feedback: for example, when I go and I'm using Amazon and Amazon is making me these recommendations, and then I buy things, I tell my friends, and then all of that data goes back into Amazon. And how much are my contributions or my friends' contributions amplifying what Amazon is doing? And so there's some of that going on, and then there's also the sense in which society is a complex system, and the place of these tools in these systems.
0:47:51.1 TE: So the tools that help us spread misinformation and disinformation make our society unstable, in that you're not quite sure whether what you are reading is true or not. So right now, with the fires in LA, there's a lot of misinformation and disinformation going on, and it's like, "Who do I believe?" And maybe you believe the LA Times and you believe what you read on ca.gov and so on and so forth, but not what you're seeing on Instagram. And so there's this notion of the place of these AI tools within our society and whether they're making our society better or worse. And by better or worse here, I mean stable versus not stable, more chaotic. And I think we can all agree that we would like to live in societies that are more stable than not. So there's some of that that is going on. And I have a new project along those lines which actually touches on philosophy, which is called epistemic instability: what are some stability conditions of what you know?
0:49:05.1 TE: So if you genuinely know that whales are mammals, no matter what I show you, perhaps I won't be able to convince you that a whale laid an egg. You're like, "A whale is a mammal and mammals do not lay eggs." And you're very sure about it. But then you start talking to me and to ChatGPT, and maybe you don't know something as well as you thought. Then you're malleable, then I can change your mind. And now you have groups of people who are talking among themselves and with these generative AI tools, and then basically you go from individuals to groups, to this hypergraph notion...
0:49:52.8 TE: And what I'm interested in is when there are phase transitions in this hypergraph, in terms of what the society believes. Like maybe the society believed that vaccines are good, and now all of a sudden the society doesn't believe that vaccines are good. And what are the leading indicators of those kinds of phase transitions in our society as it's being modeled by conversations, formally represented as these hypergraphs?
0:50:16.0 SC: Yeah, I guess it's a good example. I hadn't quite thought of the vaccine thing. The traditional example that I hear for a sort of social phase transition is opinions about gay marriage, where opinion was universally against it and somewhat rapidly changed to generally for. But the vaccine stuff is more subtle, because it's not that the whole society is going against them, but about half or whatever. There's this political polarization, and there's sort of more than one consensus being built up. Is that just my impression, or is there some idea that the modern informational ecosystem lets us have these larger sub-communities that have their own sets of beliefs, different from other communities?
0:51:00.0 TE: Yeah, I think it's the second one. In the past, when you did have people that tended to be on the fringe, people wouldn't hear them. But now, even if you're on the fringe, because of the information technology that we have, you can connect to other people who are on the fringe, and then you believe, "Oh, no, we're bigger than the fringe. We're actually in the middle." And then that kind of thing spreads.
0:51:27.3 TE: So that is one of the things I'm interested in. Regarding gay marriage, one of the things that was interesting is, I was talking to a philosopher who has taught for a very long time at the Ohio State University. And he was teaching ethics and issues related to gay marriage and abortion, et cetera. And he was saying that with gay marriage, similar to what you were saying, he saw a shift in terms of opinions for or against gay marriage, mostly for. But he didn't see any change when it came to abortion. And I think that had to do with the vagueness of when we call the thing a baby. When is the actual fetus a baby, or whatever? Because we could all agree that maybe the day before you're about to give birth, obviously, you're not gonna do anything, we all believe it's a baby. But that vagueness is something that doesn't shift the opinion on abortion so much, for or against. And I like that vagueness aspect of it. So there are certain things that are vague, and maybe you will never have that kind of phase transition. And then there are certain things, like the vaccine, where there are people on the fringe that our information technology allows to connect to each other, and so it feels like a bigger thing. And then maybe there are other aspects of information that really do make people change their minds just based on talking to other people, and so they're not as sure or as stable in their knowledge.
0:52:54.0 SC: So I like the hypothesis that the vagueness of the proposition makes it harder to have a phase transition. How would we test that hypothesis? Is that something where we can sort of sift through the data and figure out whether or not it's on the right track?
0:53:10.2 TE: So it's a work in progress right now for us. I'm trying to stay away from making it a psychology or a social science problem, 'cause then you get all these confounding factors, and that's why I said it has more tentacles to philosophy, in terms of what people ought to do in terms of their knowledge and how sure they are of their knowledge. And so right now the way that we're representing the knowledge, what you know, is as vectors, 'cause I'm a computer scientist, everything's a vector.
0:53:40.0 SC: Everything's a vector. It's okay. It's all linear algebra.
0:53:42.0 TE: And so basically, how much does this vector move in one direction versus another as you talk with others? So you can build these kinds of simulations, not kind of, you can build these simulations in terms of conversations and see how much the vector space shifts.
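A toy version of that kind of simulation, with made-up parameters: each agent's knowledge is a vector, each conversation nudges one agent toward another (DeGroot-style averaging), and we measure how far the population drifts from where it started.

```python
# Belief vectors drifting under random pairwise "conversations".
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, steps, rate = 50, 16, 1000, 0.1

beliefs = rng.normal(size=(n_agents, dim))
start = beliefs.copy()

for _ in range(steps):
    i, j = rng.integers(n_agents, size=2)                 # pick a random conversation pair
    if i != j:
        beliefs[i] += rate * (beliefs[j] - beliefs[i])    # i moves a little toward j

drift = np.linalg.norm(beliefs - start, axis=1).mean()
print("average belief drift after conversations:", round(float(drift), 3))
```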
0:54:00.0 SC: So, one thing about complex systems is they can survive a long time, like the human body fends off attacks pretty well because it's complex enough to catch things. The other thing is that they can sort of go into this wild negative or positive feedback loop, I guess, and crash, the economy or something like that. So is this something, maybe this question is too vague, but are we learning general-purpose lessons about complex systems, concerning what features they need to be stable versus what features make them delicate?
0:54:36.0 TE: Yeah, so there's a book by Ladyman and Wiesner. And I know that you had James Ladyman on your podcast as well. He's a philosopher at Bristol, and Karoline Wiesner is a mathematician at Potsdam now. It's about what a complex system is, and their book, which came out I think in 2020, talked about complex systems in terms of features, and how there are certain necessary features, and there are certain emergent features, and then there are some functional features, where, for example, our human brain is a complex system, and as you were saying, if it has a shock, it adapts and it still perhaps can function, unless the shock is catastrophic. And so what we are not seeing, if we tie this to, for example, the AI models and how they are operating within this system, is that we don't even know the role of this AI system. How much instability is it causing in the system? How much feedback is it causing in the system? How much memory does it have? Because they are evolving so quickly that it's not quite clear. So this is an open area of study, going through these different features of a complex system and trying to see, okay, well, how do I measure it for, let's say, ChatGPT? In fact, a lot of people say, oh well, "It doesn't have a good memory, based on what I told it yesterday," kind of a thing. So memory is one of those features that a complex system has.
0:56:12.0 SC: Okay, so I guess one of the important applications here that you have talked about explicitly is democracy. Democracy is a complex system, and democracies do fail sometimes. And I guess one way of putting the worry, or at least the interest, is that the introduction of AI as a new feature in some sense opens the possibility of a new instability. It could lead to sort of a runaway disaster that destroys democracy, not to put it in alarmist terms.
0:56:46.4 TE: Yeah. I think where it comes in, in fact this is how it links to my new project on epistemic instability, is that it introduces epistemic instability. Like when my dad was getting his PhD in America back in the '60s, the most trusted man in America was Walter Cronkite. If he said something, you believed him. Now, we don't have such a thing. We don't have a person or an institution where you say, "Okay, I read it here and I believe it." And then there's also, depending on where you are on the left or the right, maybe you believe the New York Times, maybe you believe Fox News. And so because of that, I feel like one of the things that we need to do if we value our democracy is teach our kids critical thinking. Just, like, "Don't believe what you read or what you hear." Question it. Does it make sense? Talk to different people and make your own decision and don't give up your agency. But that's a hard task. Thinking is not easy, and people don't wanna think in the age of TikTok.
[laughter]
0:57:50.0 SC: Well, is that true? Maybe it is true. I'm certainly willing to believe that's true. But again, I always worry about comparing eras, because I was a different person in the '70s and the '70s were also a different time. And I don't know which things are common between different eras and which things are not. Did we really want to think more back in the 1970s than we do in the TikTok era? I don't know.
0:58:15.7 TE: I think there was less distraction, for sure, than there is now. I think the dopamine hits that we get by just scrolling through Instagram, TikTok, et cetera, are something that has been studied, and I'm not a psychologist or a cognitive scientist, but people... It's just like you let your brain go to mush and you just spend hours on it, instead of maybe actually sitting quietly and thinking about a problem. It's boring. It's just...
0:58:48.9 SC: Yeah. Okay, good. So this is another aspect. So, okay, that's actually nice. Despite not really trying to, I think that I see a bunch of threads coming together here. Technology, broadly, not just AI, is giving us new ways to fulfill our own objective functions. Maybe it's a dopamine hit or whatever, but its objective function might not ultimately be our flourishing. So there's absolutely a danger mode there.
0:59:15.6 TE: Yeah. In fact, that's such a perfect thing you said. I always say to my students, what is your objective function? 'Cause we all have an objective function, and that objective function changes over time. And perhaps if all of us just think, okay, did my objective function change from yesterday or from last month or whatever, that would be helpful for society. So as a computer scientist, as a machine learning person, I always think about objective functions. And in fact, I cannot look at a mountain range now and not think, okay, if you drop me there, will I find the peak or not, the global peak? Probably not, but, like, please drop me at a nice place.
1:00:00.4 SC: You've co-evolved with your network. That makes perfect sense to me. Yeah so.
1:00:02.3 TE: Yeah, yeah, so the gradient is with me.
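[Editor's note: Tina's mountain-range image is gradient ascent on a non-convex landscape: follow the local slope and you reach a peak, but which peak depends on where you are dropped. The sketch below is a minimal illustration of that idea; the landscape function, step size, and starting points are all invented for the example.]

```python
# Toy illustration: plain gradient ascent on a bumpy 1-D "mountain range".
# Whether you end up on the global peak depends entirely on where you start.
import numpy as np

def height(x):
    # A made-up multimodal landscape with several local peaks.
    return np.sin(3 * x) * np.exp(-0.1 * x ** 2)

def gradient_ascent(x0, lr=0.01, steps=2000, eps=1e-6):
    x = x0
    for _ in range(steps):
        grad = (height(x + eps) - height(x - eps)) / (2 * eps)  # numeric slope
        x += lr * grad  # take a small step uphill
    return x, height(x)

for start in (-3.0, 0.2, 2.5):
    peak_x, peak_h = gradient_ascent(start)
    print(f"dropped at {start:+.1f} -> climbed to x={peak_x:+.2f}, height={peak_h:.3f}")
# Only the middle starting point reaches the global peak; the others get
# stuck on lower local peaks, which is the "please drop me at a nice place" joke.
```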
1:00:04.8 SC: Exactly, exactly right. So, okay, you've said many things about this already, but I just wanna get it as clear as possible. The trust, the community of trust idea that is so central to a democracy, is one of the things that is in danger of being undermined by AI. You probably saw the story about Instagram having its AI accounts, the sassy Black lesbian lady who was programmed by a bunch of people who are neither Black nor lesbian and is just pure AI. And that one was admitted; they said that was AI. Do you personally worry that people are just going to mostly become friends with non-existent human beings in the long term?
1:00:52.9 TE: As an introvert, I'm fine with that. [laughter] But yeah, no, I think we see this in society now, where people aren't as good at interacting with other people, or they're not as courteous to other people, perhaps, as before. I don't know, maybe I'm of an age now where I'm like, "Oh yeah, people are not as... "
1:01:13.0 SC: We're all grumpy now.
1:01:13.3 TE: "Not as courteous as they were before." But the more you interact with people, the better you get at them, unless you interact with them, the worse you get at them. And so if we don't put a premium on like, "Oh, look Tina can actually pick up the phone and call somebody and get something done." As opposed to just like sending a zillion emails or text messages. I think there's a value to that. And I think there is this notion of trust, even the most introvert among us. There are a few people that we do trust. And so if it comes to a point where you trust an AI system that we don't know how it works and that it's vulnerable to attacks, then that is a problem. And so, in fact, this gets us to this phrase called red teaming that we hear all the time now that, "Oh, well, don't worry about it, they will red team it." And so the phrase red teaming came from the, cold War era.
1:02:10.5 TE: The Soviet Union was the red team, America the blue team, and there was a lot of this red teaming and blue teaming, for example, in cybersecurity. But this phrase red teaming is not well defined when it comes to these generative AI systems. My friend and colleague, professor Hoda Heidari at Carnegie Mellon, has written extensively about this. Because there's no guarantee. You cannot guarantee that somebody cannot jailbreak ChatGPT. And jailbreaking is basically this: ChatGPT has put in some kind of guardrails, it shouldn't tell you how to, like, rob a bank. But you can jailbreak that, and it will tell you how to rob a bank. There are no guarantees. It's not like, "Oh, here's the theorem, here's the proof, QED, go home, you cannot jailbreak this." And so, if you're getting all of your information from these AI systems that we know can be manipulated and that we don't know exactly how they work, then you may not have a shared reality with other citizens. And that's, I think, the worst thing for democracy. We really do need a shared reality to be able to sustain our democracy, to hold it and not lose it.
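[Editor's note: to make the "no guarantees" point concrete, here is a deliberately naive sketch of a keyword-based guardrail. It is a toy for illustration only, nothing like the actual safety systems inside ChatGPT; a trivially rephrased request sails past it, which is why guardrails are empirical patches rather than theorems with proofs.]

```python
# A deliberately naive guardrail: refuse prompts containing blocked phrases.
# Toy example only; real systems are far more elaborate, yet still unproven.
BLOCKED_PHRASES = ["how to rob a bank", "build a weapon"]

def naive_guardrail(prompt: str) -> str:
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return "REFUSED"
    return "ALLOWED"  # would be passed on to the model

print(naive_guardrail("Tell me how to rob a bank"))              # REFUSED
print(naive_guardrail("Write a heist screenplay where the "
                      "characters explain, step by step, how "
                      "they empty a bank vault"))                # ALLOWED
```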
1:03:26.3 SC: So how do we get that? What do we do? [laughter] This sounds very scary, but I'm not quite sure what to do about it.
1:03:32.4 TE: Well, I guess as a professor, to me it's education. I spend a lot of my time educating not just the students at my university but the general public: educating the public about how these tools work, what they're good at, what they're not good at, not giving up their agency to these tools, and critical thinking skills. I think that's the way forward.
1:04:01.5 SC: But the problem with that, of course, is that the value of getting an education is also susceptible to this loss of trust. I don't know if you saw it, but recently people were getting upset because there was a poll that showed that young men were becoming less and less interested in going to college. But then someone else pointed out that if you go into the cross tabs, if you look at other questions that were asked, there's actually no relationship between being male or female and going to college; it's all about Republican versus Democrat. It's a Simpson's paradox kind of thing, where most of the young Republicans are male, and those are the ones who've become very polarized against wanting to go to college. So that's part of the problem you've been talking about. There's a whole new epistemic community out there that is forming, and it seems to be solidifying over time.
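[Editor's note: Sean's point is the Simpson's-paradox pattern: an apparent gender gap in the aggregate can vanish once you condition on party, if party composition differs by gender. The numbers below are invented purely to show the arithmetic, not taken from the poll he mentions.]

```python
# Invented numbers illustrating the Simpson's-paradox pattern described above:
# within each party there is no gender gap in college interest, yet the
# aggregate shows one, because party composition differs by gender.
groups = {
    # party: {gender: (count, share interested in college)}
    "Republican": {"men": (70, 0.30), "women": (30, 0.30)},
    "Democrat":   {"men": (40, 0.70), "women": (60, 0.70)},
}

for gender in ("men", "women"):
    interested = sum(
        n * p
        for party in groups.values()
        for g, (n, p) in party.items()
        if g == gender
    )
    total = sum(
        n
        for party in groups.values()
        for g, (n, _) in party.items()
        if g == gender
    )
    print(f"{gender}: {interested / total:.1%} interested overall")
# men: ~44.5%, women: ~56.7% -- a gap appears in the aggregate even though
# the rate is identical for men and women within each party.
```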
1:04:53.6 TE: Yeah. Perhaps we should think about how we educate people, and maybe they'll see the value of education, and that education is about enlightenment. Education is about empowering yourself. Education isn't a teacher just pouring knowledge into your head; it's about you learning about the world, so you can do better in the world. As a teacher, I just want you to do better. And if you do better, then I will also do better, society will do better, and we'll all do better. So I think part of that is maybe we should rethink how we sell education.
1:05:34.7 SC: Do you think that AI and associated technologies can be a force for good in education?
1:05:41.8 TE: Yeah, I think so. There are certain things that I have heard. For example, and there are some privacy aspects to this, but if you are a college and you are tracking how students are doing on their homework, et cetera, and let's say Tina took calculus and she didn't do very well on differential equations, and now she's taking machine learning, where they're gonna talk about differential equations, then you could tell Tina, "Oh, you know, maybe you should go brush up on differential equations, 'cause they're gonna talk about differential equations."
1:06:14.4 SC: Yeah. Okay.
1:06:14.5 TE: So there's some of that kind of thing, to help you. And then there's also basically personalized tutoring, where I think AI can be helpful.
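[Editor's note: the "brush up on differential equations" idea is essentially a lookup over past performance. A minimal sketch follows; the course name, topics, scores, and threshold are all hypothetical.]

```python
# Hypothetical sketch of the "brush up before the course" idea:
# compare a student's past topic scores against a course's prerequisites.
past_scores = {           # invented record of one student's topic performance
    "linear algebra": 0.92,
    "differential equations": 0.55,
    "probability": 0.80,
}

course_prereqs = {        # invented prerequisite topics for a course
    "Machine Learning": ["linear algebra", "differential equations", "probability"],
}

def topics_to_review(student_scores, prereqs, threshold=0.70):
    """Return prerequisite topics where the student scored below the threshold."""
    return [t for t in prereqs if student_scores.get(t, 0.0) < threshold]

print(topics_to_review(past_scores, course_prereqs["Machine Learning"]))
# ['differential equations']  -> nudge the student to review before the course starts
```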
1:06:27.9 SC: Do you yourself use ChatGPT or something equivalent to help figure things out, to learn about things?
1:06:35.8 TE: I use it for fun. [laughter] Like, give me a bio of Sean Carroll in the King James style, or whatever, just for fun. I haven't used it for any real work stuff or anything.
1:06:53.9 SC: I actually...
1:06:54.0 TE: I don't trust it. That's the problem.
1:06:55.6 SC: You don't trust it. I certainly don't trust it either, but I did realize that there is a good use case. Because in mathematical things, they will often tell you true things, but you don't understand what the point of it is. I was trying to understand type III von Neumann algebras, and so I asked, and I got ChatGPT to explain to me not just what the definition was, but why it was important in this particular case. And that was actually very helpful.
1:07:23.8 TE: Oh, that's great. Yeah. I asked it some stuff about linear algebra and matrix norms, and it was really bad at it. And I was like, wait, what? There's so much about linear algebra. [laughter]
1:07:34.4 SC: I think that's...
1:07:34.5 TE: In the world, you should know about matrix norms.
1:07:36.1 SC: That's the problem. There's too much, like you just said. [laughter] There's too much junk out there. In some sense, if the topic is technical enough that it still knows about it, but obscure enough that all the stuff that's been written about it is sensible, it does well. No one's gonna make up stuff about type III von Neumann algebras. What would be the point?
1:07:53.0 TE: Exactly. Yeah. So I guess maybe the point is, let's not teach linear algebra to kids and then... No, no, no, 'cause the whole of machine learning is basically linear algebra.
1:08:02.3 SC: It's all linear algebra, and quantum mechanics is too. So yeah, linear algebra, kids, that's your lesson for today from Mindscape: learn more linear algebra. I think it's the key to everything. [laughter]
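[Editor's note: for readers who want to follow up on the matrix-norm exchange above, the standard norms are easy to check against numpy; a quick sketch, with an arbitrary example matrix:]

```python
# Quick check of common matrix norms with numpy (the matrix is arbitrary).
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

fro = np.linalg.norm(A, "fro")        # Frobenius norm: sqrt of sum of squared entries
spectral = np.linalg.norm(A, 2)       # spectral norm: largest singular value
one_norm = np.linalg.norm(A, 1)       # max absolute column sum
inf_norm = np.linalg.norm(A, np.inf)  # max absolute row sum

print(f"Frobenius={fro:.4f}, spectral={spectral:.4f}, "
      f"1-norm={one_norm}, inf-norm={inf_norm}")
# Frobenius = sqrt(30) ≈ 5.4772; spectral ≈ 5.4650; 1-norm = 6.0; inf-norm = 7.0
```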
1:08:11.8 TE: Yeah. Exactly. But it's very good at, like, basic admin stuff. So if you show it a picture of some Google Scholar results and say, put these references into BibTeX, it does it for you. Some of that kind of admin stuff it's good at.
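[Editor's note: that reference-wrangling chore can also be scripted. Below is a hedged sketch using the OpenAI Python client; the model name and prompt wording are placeholders, and the generated BibTeX should still be checked by hand, exactly because of the trust issues discussed above.]

```python
# Hedged sketch of the "turn these references into BibTeX" chore.
# Assumes the OpenAI Python client; the model name and prompt are placeholders,
# and the generated entries should still be verified manually.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

references = """
Ladyman, J. and Wiesner, K. What Is a Complex System? Yale University Press, 2020.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Convert plain-text references into BibTeX entries."},
        {"role": "user", "content": references},
    ],
)
print(response.choices[0].message.content)  # BibTeX to paste into the .bib file
```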
1:08:30.4 SC: Yeah. I think that the weird thing is we're trying to use it for creative work, whereas the most obvious use case is for the least creative things that we don't wanna do.
1:08:40.6 TE: Indeed. [laughter] Indeed. Indeed.
1:08:42.6 SC: All right. It's all very complex and it's evolving and there are a lot of degrees of freedom. So Tina Eliassi-Rad, thanks very much for helping us all figure it out.
1:08:52.7 TE: Thank you. Thank you for having me on, Sean.
Thought provoking.
One little niggle on value functions. At one point there is a throwaway comment: "I think we can all agree we want to live in a stable society." In fact, societies work best when they are neither boiling chaos nor frozen stability. Moreover, people will have different ideas about the ideal amount of order in society. Witness the mixed feelings about HOAs, for example.
Should AIs optimize their value function for the individual being answered, or should they employ some agreed value function for society, or, more likely, some compromise between the two? And if it is a compromise, how is that compromise arrived at?