304 | James Evans on Innovation, Consolidation, and the Science of Science

It is a feature of many human activities - sports, cooking, music, interpersonal relations - that being able to do them well doesn't necessarily mean you can accurately describe how to do them well. Science is no different. Many successful scientists are not very good at explaining what goes into successful scientific practice. To understand that, it's necessary to study science in a scientific fashion. What kinds of scientists, in what kinds of collaborations, using what kinds of techniques, do well? I talk with James Evans, an expert on collective intelligence and the construction of knowledge, about how science really works.

James Evans, director of the newly-funded Metaknowledge Center, at the University of Chicago's Mansueto Library, Monday, Feb. 11, 2013. (Photo by Robert Kozloff)

Support Mindscape on Patreon.

James Evans received his Ph.D. in Sociology from Stanford University. He is currently the Max Palevsky Professor of History and Civilizations, Director of Knowledge Lab, and Faculty Director of Computational Social Science at the University of Chicago; External Professor at the Santa Fe Institute; External Faculty at the Complexity Science Hub, Vienna; and Visiting Faculty Researcher at Google.

0:00:00.3 Sean Carroll: Hello, everyone. Welcome to the Mindscape Podcast. I'm your host, Sean Carroll. Something I've figured out after doing science for quite a long time is that science is hard. It is hard to do it. It is hard to do it well. That's probably not really news. That's probably not something that is very controversial out there. But I wonder if people appreciate the ways in which it is hard. I mean, of course, the actual doing of the science can be hard, right? Doing a great experiment, doing a difficult calculation, if you're a theorist or something like that. But there are all these pre-existing difficulties that come long before you get to that. Mostly like, what problem do you choose to work on? What is a promising area of research, right? Resources are finite. The time that you have as a scientist is finite. The money that you have, the students, whatever.

0:00:54.2 SC: You're trying to not only make a big discovery, but also ensure that there are future discoveries to be made, which is the nice way of saying, you got to get a job. You have to impress the people who might be hiring you or promoting you or whatever enough that you will be supported in your aspiration to keep doing science.

0:01:16.0 SC: You wanna be new and innovative, but you don't wanna be so crazy and out there that people don't pay attention to you. And even once you've chosen what to work on, how exactly do you work on it? Do you sit by yourself trying to think very, very hard? There's people like Paul Dirac who very famously thought that all of the good papers ever written in science, all the good science ever done, was done by a single individual all by themselves. But the world right now is very different than it was in Dirac's time. There are big collaborations experimentally, of course, but even among theorists, who are very nimble and can change what they're doing from month to month, it's so easy to communicate with people over the internet or in person. You can start up collaborations with people who don't know the same things you know and might be helpful to you.

0:02:04.3 SC: So who is your set of collaborators and co-authors and helpers? These are all great questions and they're next to, but separate from, questions about is science really making the progress that it could be making? Is it moving fast enough? Is it daring enough and bold enough? Or is it stodgy and set in its ways? Are we leaving food on the table, as it were? So of course, lots of people have opinions about these things. These are issues that many people have talked about for many years. The difference now is we can really collect data in a more or less objective, at least wide-ranging way, rather than just sort of looking at this or that key feature in the history of science or key episode. We can actually look at many, many scientific investigations. Many people, many researchers writing papers. Who are they collaborating with? Are they making advances because they're collaborating with the same people or with a different set of people every time, with people in their field, outside the field? Do old people make the advances 'cause they're very wise?

0:03:10.1 SC: Or do young people make them because they're not restricted by the conventional wisdom? These are all great questions, but don't try to answer them just by thinking and by using your intuition. Look at the data. That's what today's guest likes to do. James Evans is technically a sociologist, at least he's in the sociology department at the University of Chicago, but he does wide-ranging research on computer science topics and all sorts of sociological topics, but in particular, the science of science, understanding how scientific progress gets made, and thinking of it as an example of collective intelligence, right? Science is not just one person. There's lots of scientists doing things. So how do these scientists come together in groups? Where do the groups come from? How big are the groups? What kind of groups make the best decisions moving forward and so forth? And I think that my prediction is that if you're someone who's thought about these things for a little while, what James has to say will in some ways reinforce things you already thought and in other ways surprise you a little bit. These are tricky questions.

0:04:16.8 SC: That's why data is important. That's why doing the science of science is important. In addition to doing the history of science, the philosophy of science, the sociology of science, the psychology of science, etcetera, all of these are important. These days, we do the science of science at an unprecedented level, and we're learning new things, both about how science works, how it can work, how changes like AI are going to help it and hurt it and things like that. So the world is changing. We got to keep up with it. Let's go.

0:05:01.4 SC: James Evans, welcome to the Mindscape podcast.

0:05:04.3 James Evans: Thank you. I'm delighted to be here, Sean.

0:05:07.1 SC: I have to start by saying a couple of months ago, we had Blaise Aguera y Arcas on the podcast talking about the spontaneous emergence of computational life. And it took me a while to ping on the fact that you are a co-author on that paper, even though there's no sociology involved in that paper. So how did that happen?

0:05:25.9 JE: I'm kind of a computational scientist. That's the way I think of myself. Or even really a meta-scientist. I study how science works as systems, and a lot of that comes through an evolutionary lens. And so I've been involved in following artificial life for a while. I actually work with Blaise on his team, his Paradigms of Intelligence team at Google. And there are kind of three levels there where we're looking at the ways in which biological forms, processes, and organisms shape the way in which we think about AI. One part is evolution, one part is neuroscience and cognition, and the third part is collective intelligence. And so I spend even more time helping to cultivate and catalyze work in this space of using principles of collective intelligence, and the diversity underlying collective cognitive populations, to think about building AI.

0:06:27.8 SC: Well, I think it's a good place to start because it's like a personal spin on the more big picture substantive questions we'll be getting to later. But someone like you who researches topics like this, sometimes it can be hard to find a department to hire you. Right? Like, so what do you do really, right? You must have gotten that question before.

0:06:48.8 JE: Right, right. So I think part of it has been that I've been involved in creating a fair number of departments. I began my home in sociology, and I think some of my colleagues think some of my work looks like sociology, especially the work where I'm looking at how it is that scientists and people in the world organize around information. But a lot of it doesn't look like that. I'm studying the complex systems underlying deep neural networks. So I've spent a lot of time building: I built the computational social science program, which has been growing; we built a data science institute, and now a kind of department, at the University of Chicago.

0:07:33.7 JE: We're in the process of trying to create a new college or division about artificial intelligence here. So I think that not having like a fixed home also creates an instability that allows me to create new things, which is fun.

0:07:51.4 SC: But yeah, I completely agree, and I'm in a very similar position myself, but the journey to get there is tricky. And maybe one thing we'll talk about along the way is how do we make on-ramps for young people who wanna do that? When people ask me for advice, my advice is always: be pretty conventional early on in your career, 'cause there's a lot of gatekeeping, a lot of bottlenecks you have to jump through. It's hard to be too quirky as a young and unproven person.

0:08:23.6 JE: Yeah. I would say the one caveat is that you have to be able to present conventional early on, which is to say, I think if you just do conventional early on, you will likely do conventional forever. But if you have a portfolio that you're cultivating, enough of which you can purpose to present as conventional, then that works. And I had to do that in the context of sociology, even though I would consider myself kind of a more multidisciplinary, post-disciplinary scientist.

0:09:00.4 SC: Yeah, no, that's a very good amendment. So when we are thinking about science and how it works, this is one of the things that you do. Just to get a feeling: obviously this is something that people have talked about for many, many years, a lot of it in the context of history or philosophy of science. Are we doing it today in a more data-driven way?

0:09:21.2 JE: Yeah, we are, and I think what's interesting, both in science in general and I would say in artificial intelligence models, is that their performance, their predictive possibility, the ability to create generative digital doubles, is all driven by massive data. And so I think that's a phase transition. It's not just like, oh, we didn't have data and we had some science and we improved it a little bit by increasing the data. It's the difference between systems that work, that predict, and those that do not. So, yeah, we're doing it with large-scale data. And because scientists are incentivized to leave droppings, which is to say our capital is our credit in the system, we leave more droppings than people in business and other contexts, where people are covering their tracks. It's really hard to tell stories about action in those places, micro action, personal action, decisions. But in science, we have to broadcast every step to the world, because that's how we're certified and that's how we get promoted. And it provides a rich landscape for thinking about collective knowledge and individual contributions.

0:10:39.3 SC: That is a great point. My wife is a journalist, a science writer. And one thing you notice there is that scientists, when they give you an interview or whatever, they really care about getting credit in the news article that's being written. And not only for themselves, but like, no, you must list all eight authors of our paper in your 1,000 word article. And the journalists are like, that's just not how we roll. But it's very, very important for academics more broadly, right?

0:11:10.5 JE: Right. And this is the coin of the realm: you contributed to this paper, and we can basically say all the papers that cited your paper kind of get back-propagated to your enlarging avatar of influence.

0:11:26.9 SC: So, as a scientist, let's try to help the audience, most of whom are not professional scientists. One of the things you've got to do, forget about getting jobs or whatever, is decide what research to do, right? How to pick a research problem. And this is hard, both because ideas are scarce and things like that, but also because there's a style question. Am I going to work on something quirky and innovative? Or am I going to fit into kind of a bandwagon? Is that something that we can study quantitatively, sociologically?

0:12:00.8 JE: Yeah, yeah, I've been studying this question, or variants of this question, for 20 years, for two reasons. One, we wanna just understand how it is that science happens, how people make decisions; this is a valued quantity of life. On the other hand, economic growth happens through scientific advance. Credit, like financial credit, is given because we assume that some magic will happen when we give this money to a corporation, that they will reorganize things in a new way and new values will be generated. Credit in ye olde days was extractive: I'm going to give you money and I'm going to take more money from you. And so I think, as a result, it becomes really important for nations, which are typically the units that bankroll science. Countries bankroll science; there is international science, people experience it, but where do scientists get their funds? They get their funds from their country.

0:13:05.3 JE: In most countries, all universities are effectively public; in the US, even though we have "private" universities, they're effectively public too. Almost all science research funding, well, increasingly it's from both private and public organizations, but certainly the vast and most focal element is the public contribution. So I think part of this is: if the country as a whole is relying on its competitive power, its ability to generate new markets and services and new values for life, then it needs collectively to organize people to start exploring different kinds of things, or it's not going to get any return on its investment.

0:13:54.0 JE: And so, yeah, I think it's a big question; even though it's a micro question, it has really collective implications for countries. It's kind of an existential question for countries. So one of the ways in which we've studied it is looking precisely at what it is that people are engaged with relative to the distribution of prior work. And this is where big data matters, right? Because think about it: if you want the modal story, the average story, you don't need much data. You can take a very small sample of a few scientists. If you wanna tell a story about change between one period and another, you need at least twice as much data, right? It's a difference between one period and the next.

0:14:53.1 JE: To tell a story about innovation requires exponentially more data, because you have to understand everything that was expected in order to understand the significance of something that was not expected. You need to understand, in theory, everything that was expected by any scientist in the space to understand what's really new relative to that space. So I would say innovation is something that you can only study with large-scale data. It doesn't even make sense to study without it. You need the full distribution of expectations that normal scientists would have, so that we can understand when and why they're surprised and how their world of technological possibilities updates. And so we've been studying this. Initially, we looked at this from the perspective of networks, where individual concepts or technological components, a method or a gene or even a theoretical concept, were nodes in a complex network. And we would look at how those networks evolved over time: the difference between bridging different parts of the network, the likelihood of bringing a new node into the network.
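[Editor's note: a minimal sketch of the kind of network measure described here, treating papers as sets of concepts and scoring novelty as the bridging of concept pairs that never co-occurred before. The toy corpus, concept labels, and scoring rule are illustrative assumptions, not Knowledge Lab's actual pipeline.]

```python
from itertools import combinations

# Papers represented as sets of concepts; a toy history of prior work.
past_papers = [
    {"gene_expression", "microarray", "clustering"},
    {"clustering", "networks"},
    {"networks", "phase_transition"},
]

# Count how often each concept pair has co-occurred in past papers.
cooccur = {}
for paper in past_papers:
    for pair in combinations(sorted(paper), 2):
        cooccur[pair] = cooccur.get(pair, 0) + 1

def novelty(paper):
    """Fraction of a paper's concept pairs never co-mentioned before:
    a crude proxy for bridging distant parts of the network."""
    pairs = list(combinations(sorted(paper), 2))
    if not pairs:
        return 0.0
    unseen = sum(1 for p in pairs if p not in cooccur)
    return unseen / len(pairs)

new_paper = {"gene_expression", "phase_transition", "networks"}
print(f"novelty: {novelty(new_paper):.2f}")  # 0.67: two of three pairs are new
```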

0:16:03.8 JE: But increasingly, with the emergence of artificial intelligence, natural language processing, and now large language models, inside those large language models is a very thick, deep representation of meanings. So, for example, in the new ChatGPT models there are trillions of parameters. And all those parameters are numbers inside of a graph that represent a complex representation of all the meanings associated with our language. These language models are kind of the love child between, on the one hand, what I'll call the autoregressive language model, where we predict the next word from prior words, and, on the other, these basically kind of meaning pixels. Which is to say, rather than selecting on words, predicting the next word, we're really predicting the next meaning, right?

0:17:04.3 JE: And it turns out that the moment they started doing that, all of a sudden, these models became much more stable, because it's not about the particular fragile word that you're using. It's that I'm selecting this meaning, followed by that meaning, followed by this meaning, followed by this kind of word function, followed by... and all of a sudden, these models worked. And, of course, we added many parameters so that they could self-learn important features. So we use those internal representations as a map of the cultural world. And we can look at the difference between, for example, past research and future research. Or we can plot every piece of research in the past, and we can plot a new piece of research and say exactly how surprising it is relative to that past, relative to the past model.

0:17:55.2 JE: And in fact, I just got a $20 million grant from the government to basically build what I call chronologically trained language models. We're building these models based on the past so that we can evaluate, when a new paper or a new patent or a new product emerges, how surprising or improbable that idea was to this model, which is a machine for probability: it's predicting what the most probable next patent, paper, or product description is. And when really surprising things happen, what's interesting with these models is that we can say not just, oh, that's surprising, everyone knows it's surprising. We can say, well, if this improbable thing is actually now probable, right?

0:18:46.0 JE: Like, no one would have thought of this paper, but it turns out if you write this paper, you do the research behind this paper, and it's true, then what are all of the other things that were improbable before that become probable now, right? What are all the adjacencies that were distant before but are now all of a sudden close to one another inside the space? Which allows things like the government to take fast breaks, which they don't do historically. You know, if one thing becomes true, it's like, oh, okay, here's 10,000 other hypotheses that were impossible or extremely improbable but now are absolutely the next thing that you should study inside the space.
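[Editor's note: a minimal sketch of how such surprise scoring might work, using the standard Hugging Face causal-language-model API. The checkpoint name is a hypothetical placeholder for a model trained only on literature up to some cutoff date; this is not the grant's actual system.]

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint: a causal LM trained only on papers up to 2015.
MODEL = "some-org/scilm-pretrained-to-2015"  # placeholder name, not a real model

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def surprise(abstract: str) -> float:
    """Mean negative log-likelihood per token under the past-only model;
    higher values mean the text was more improbable given prior literature."""
    ids = tokenizer(abstract, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # loss is cross-entropy on next tokens
    return out.loss.item()

print(surprise("We report room-temperature superconductivity in ..."))
```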

0:19:26.8 SC: But maybe, this sounds a little scary to me. I mean, if we could predict ahead of time what the next innovation would be, why would we have to do boring things like experiments and stuff?

0:19:39.1 JE: Oh, no, no, no. My whole point actually is I'm not building a model that's going to predict the optimal next paper. I'm saying, like the experiment in this space is that someone did an experiment and it surprised this whole cultural world model we had. And it allows us to kind of predictively update. Okay, how does the whole world change as a result of this surprising thing that came as a result of experiment?

0:20:08.2 SC: Good, okay, so you're not predicting what the surprise will be, you're sort of ringing the changes on the surprise to figure out how everything shifts in response to it.

0:20:16.6 JE: Exactly. And one other thing that we can do in these models, which I think is exciting, is the same kind of thing that scientists have historically done. So Charles Sanders Peirce, who was, I think, really the great 19th-century American philosopher, had this idea of abduction. He was unsatisfied with the philosophical cartoons of reasoning. One was deduction, which comes out of Aristotle's invention of the syllogism, where you build from established axioms or assumptions or facts and say, well, if these are true, then what necessarily must be true? What other things must be true? That was kind of old-style science. And then Sir Francis Bacon wrote this book called Novum Organum, the new instrument. And it had a picture on the front of a big ship going to America and all this new stuff, you know, and it was like, okay, no, we need to scrap that. We need to do induction. We need to basically learn all the new things in the world and then generalize from those. It's not gonna create something that's provable, but it will allow us to grow and learn collectively.

0:21:29.4 JE: And that was the basis of the scientific revolution and this thing called the experimental philosophy, which is the core of what you're saying. And then Charles Sanders Peirce emerges at the end of the 19th century. He was kind of a failed academic, but academics were all failed back then. You know, tenure was uncertain. He had a funny personality, and so he bumped from place to place.

0:21:53.2 SC: He was at Johns Hopkins for about a year.

0:21:55.4 JE: He was, that's... He was a lot of places for about a year.

0:21:57.9 SC: His only academic position officially. Yes.

0:22:02.0 JE: So he had this idea that as knowledge grows, as you learn more about the universe, your best signal for theoretical development and collective knowledge is surprise. Right? Which is to say, when you run an experiment or have an observation that violates your expectations, that violates your increasingly accurate world model. And initially he kind of thought, well, where do these surprises come from? He also lived at the birth of psychology and was kind of a proto-psychologist. And so he believed that the real model was the subconscious: basically, people would dream and associations would be made in their subconscious mind. And he even has a series of stories, including one about himself, that are very much like Sherlock Holmes, where he is engaged in abduction. Someone steals his watch, and he figures out where they are through this combination of surprises and coming to the best explanation. One of the things that we've studied recently is, well, how does abduction actually occur? How do people identify surprises? Because it turns out that the people who are most likely to identify a surprise are kind of like the priest class of a scientific field. They've read the literature, they know what everyone expects.

0:23:28.8 SC: They know.

0:23:29.0 JE: They're the large language model of the field. And then they see a surprise and they realize, hey, this does not fit in. But that class of person is the least likely to have the intellectual resources to resolve the uncertainty, to develop a new pattern. And so we show in recent work with large-scale data that, on average, those abductive discoveries are mergers. They're conversations between insiders and outsiders. So increasingly you get your resources not through induction, where you just generalize from patterns in your own mind. You have a surprise that occurs in your field, and you survey a range of other fields for intellectual resources, resources that come from the fields most disconnected from yours. That could be literary studies, could be astrophysics, could be molecular biology: people who are not part of your conversational cultural world and who have access to a set of distinct patterns through theory, through data that is accessible to them. And those people are systematically coming in, in an expedition, to solve your problems.

0:24:47.7 JE: That's where disruptive advance occurs. So part of this, in some ways I'm kind of pushing back against the question of like, oh, it's about people choosing where to go. It's really, it's about this collective conversation that occurs between problem makers and problem takers inside this market for ideas.

0:25:09.7 SC: I can definitely think of examples in my own field where some wonderful, brilliant new idea has come along, typically from a young person, and then some older person who knows everything has sort of fitted it into the broader context in a way that suddenly everyone gets excited about.

0:25:27.5 JE: That's a common pattern. Another pattern is what I call expeditions, where you have a whole group of people from an outsider field, physicists coming into molecular biology, and they bring a set of tools, a set of perspectives. And the reason why, in those early works, you don't see teams forming between insiders and outsiders is because teams require a kind of social contract that this thing makes sense. These expeditions are like blitzkrieg. It's like a barbarian coming in with an approach, and they publish it in this field that's never seen anyone like them before.

0:26:07.7 JE: And no one from their field has ever seen this field or has ever published in the space before. And those tend to be, on average, the big hits, the things that really kind of grow and disrupt the knowledge space and really update how people think about problems in advance.

0:26:26.2 SC: It's a tricky thing, I guess, because I think that especially from an outsider's perspective, people are like, well, why isn't there more innovation? Why isn't there more creativity? Isn't there some ossification of your field because you're all working on the same stuff and it's just moving ahead incrementally? Isn't it more fun when someone just completely upsets the apple cart and comes in with the new ideas? And it's not that that's wrong, but I think you need both, right? I mean, you need some apple carts being upset, but you also need the gradual accumulation of conventional wisdom.

0:27:00.5 JE: Well, exactly. If you didn't have the gradual... I mean, you can imagine a version where it's all disruption all the time, and that is a system with no memory and no accumulation. Right? But it's true that there is an oxymoron in creating innovative institutions, because we're basically saying, hey, we're going to try to surf the boundary between order and chaos.

0:27:27.7 SC: Right, very, very hard.

0:27:28.0 JE: And so it's very difficult. I mean, education systems, basically, these are control systems. We've studied syllabi, and it turns out the oldest people in the field have the greatest influence on the things that young people learn. And that's a recipe for reproduction and not growth.

0:27:52.7 SC: And there's also some worry about bandwagons or bubbles. Like something becomes a popular idea and suddenly everyone has to do it. And then a few years later it turns out, well, okay, it wasn't that exciting after all.

0:28:07.0 JE: That's right. Yeah, that's the case where there's no memory in the field, and everybody jumps to the hot new thing. We have a project that we recently worked on where we looked across countries at this style of exploration. And we find, for example, when we compare China and the US, that individually, Chinese scientists move further across topics over their careers within any one particular time, but China as a whole actually has far fewer topics in focus, because on average people are following the trend to the center of this big space. In the US, there are many smaller pockets in this space which are effectively autonomous. People are less likely to move over the course of their career, but that becomes a reservoir of diversity from which abduction becomes collectively possible.

0:29:05.7 SC: I wonder if it's possible to sort of institutionalize and nurture this kind of thing. I mean, we can valorize innovation. But I know just from personal experience, if I write a very, very boring paper in physics that is nevertheless correct in every equation, I can get it published, no problem. There's going to be zero worry that the referee is going to reject it. If I try to be interesting and novel, maybe that's going to be more important down the line. But it's also much harder to get it into the journals, because the referee is like, I've never seen this before, I don't know how to deal with it. And likewise, when you're hiring people, you kind of feel comfortable hiring people who work on the things you work on or whatever. So what can we do to nurture the right amount of difference, innovation, thinking outside of the boxes that we've built?

0:29:57.6 JE: Yeah, so from my perspective, when we've looked at this, you can think of what an economist might call a single-equilibrium solution, which is like: what's the right amount for each person? What's the average balance of innovation versus institutional history and tradition for each particular person? But again, systematically, we find that it's a multi-equilibrium solution. It's really about a complex balance of people who are much more likely to be the priests of the field versus the prophets who are traveling around and coming from outside. So I think one of the challenges is: how do we build an ecology that facilitates multiple kinds of career paths? Because institutions defend and reproduce themselves, I think one of the reasons we talk about innovation all the time is that we need to talk about it, and we need to find novel institutions that resist and push against these natural forces of preservation and memory. I mean, the entire educational and scholarly enterprise, journals, method sequences in graduate school, all these things are run by thought collectives, most of them formed with titles and names that are in some cases 200, 300, 400 years old.

0:31:38.8 JE: Yeah, right. So these are very old associations, and they're powerful associations; they run things like the Nobel Prize committees, and they have disproportionate amounts of power. So the reason we're talking about innovation all the time is just because we fail at innovation most of the time. And in fact, one of the things that we were looking at recently, a project I was writing up just this morning, is that, for example, as scientists age, their appetite for change and innovation shifts, as you were suggesting. And one of the things that we find: the average scientist's citations, and this is like the distance between the paper they are publishing and the papers they cite, age at about a month a year. This is in all fields, over all time, except for the one field whose methods haven't changed in 200 years, mathematics, where citations age at two months a year. And it turns out everyone's favorite paper was published, on average, a year before their first paper. And what happens at about year 10 is scientists basically start...

0:32:56.8 JE: We looked at all their citations and all the contexts, and on average they basically start policing their field. They start criticizing other papers disproportionately, and almost always young papers, young people's papers bringing new ideas and new methods from outside the field or new to the field. So one of the challenges here is: how do we influence this innovation? It turns out that just the demography of your field dramatically impacts the likelihood of churn of new ideas in the field. And it's not the reverse causal direction. It's not like, oh, fields are stalling out and so young people stop coming in. It's the opposite. If you have a high proportion of old people in your field, or even in your department, and you can see this at a micro level, then the field hits rigor mortis and no new ideas come in, and new ideas are getting shot down systematically inside the papers that exist. And those new ideas that are getting shot down are in the same citation context, the same paragraph, as their favorite citations, which are on average the papers published the year before their first paper. Right? So I think part of it is, we actually need to...

0:34:17.7 JE: And you see this across countries. For example, India has the youngest scientific workforce, and there's a lot of innovation there. China's workforce is slightly older, but much younger than the US's, and there's a lot of innovation of a certain kind there. The US scientific workforce is getting older. Japan's scientific workforce is getting even older. And you can see the reflection of those within-country, field-level environments dramatically impacting the likelihood that new ideas will enter and thrive. So one of the ways we manage this is actually by managing the demography. In the US, it became illegal in 1994 for institutions to impose a mandatory retirement age on tenured professors, as age discrimination. And we can show that when that happened, there was just a linear increase in the average age of citations in the field and a decrease in the associated churn of ideas within those fields. So I think that's one of the ways we have to think about the big picture: managing the whole environment, the whole gene pool of ideas.
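[Editor's note: a minimal sketch of the citation-aging measurement described above: citation age as the gap between a paper's year and the mean year of its references, with an ordinary least-squares slope giving the aging rate. The data are made up for illustration.]

```python
from statistics import mean

# Toy data: each paper has a publication year and the years of its references.
papers = [
    {"year": 2000, "ref_years": [1995, 1992, 1998]},
    {"year": 2010, "ref_years": [2001, 1999, 2005]},
    {"year": 2020, "ref_years": [2008, 2012, 2004]},
]

xs = [p["year"] for p in papers]
ys = [p["year"] - mean(p["ref_years"]) for p in papers]  # citation age in years

# OLS slope: extra years of citation age per calendar year, reported in months.
xbar, ybar = mean(xs), mean(ys)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
print(f"citation age grows ~{slope * 12:.1f} months per year")
```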

0:35:38.1 SC: It is interesting because I've noticed that the best popular music is what I was listening to as a teenager. I think that's just an objective fact. Right?

0:35:45.1 JE: Of course, yeah, yeah. And I think we're probably about the same era, so, yeah, the '80s danceable music. I mean, that was like, where is that?

0:35:54.0 SC: And am I remembering correctly these studies of where funding dollars go these days, or where prizes are awarded? At least in the US, they are going to older people than they used to?

0:36:07.0 JE: Absolutely, yeah. There's a linear increase in the age at first NIH grant, for example, that people experience. Prizes on average are highly conservative, and the reason is that they are given by contexts. They are given by the physics community, or by associations which systematically undervalue work that violates the boundaries between those contexts. So the things that are the most disruptive in citations, that attract the biggest bursts of attention, are not on average the same things that are getting the prizes, the Nobel Prizes, etcetera. Yeah.

0:37:00.1 SC: So back to the, like, individual scientists. I know you've written one paper that was kind of provocative to me about why scientists disagree with each other. I mean, the thing about science is, of course, you've established some things as reliable, but then you're also speculating about what's going to happen next, what is interesting, where to go and so forth. And it's remarkable how people can be really devoted and even passionate about insisting that it's going to be going a certain way. Or I guess the way that I like to say it, that is most generous, is every approach has its looming obstacles. And every advocate of every approach says, "Oh, but our obstacles will be overcome. I can see that." Whereas your obstacles are absolutely going to make you stuck. Where does that come from? It's not purely objective. Is it like personality? Are there some cognitive traits that individual scientists have that get in the way?

0:37:57.0 JE: Yeah, there are many forces. So one: the paper that you're referring to, which is forthcoming in Nature Human Behaviour, is one where we basically look at a deep dispositional trait. We look at psychologists; we run the same kind of psychological profiles on psychologists that they run on [laughter] college students around the world.

0:38:21.0 SC: Fair enough.

0:38:22.4 JE: And [laughter] it turns out that there are really stable and strong associations between their relative acceptance of ambiguity and whether or not they're interested in, for example, multimodal findings, findings that aren't just supported by a certain kind of statistical or mathematical model. So there are definitely background proclivities which shape the relative likelihood of whether or not something's attended to. I would say another force really is the zeitgeist of an age, of a moment. There's one paper that I love, by a guy named Paul Forman in 1971. It's a book-length paper called Weimar Culture, Causality and Quantum Theory.

0:39:18.2 SC: Oh yeah, I remember that.

0:39:18.3 JE: It's a great, beautiful paper. It tells a story and shows that, okay, we've got a whole host of people in the Weimar Republic, in the wake of Germany's loss of World War I, who are against causality as an approach. Philosophers are writing against causality, and this nihilism is emerging in the artistic space. And it turns out that there are some physicists who were reading this philosophy, who were writing philosophical tracts against causality, and those were the ones who embraced the quantum revolution, which is arguably an acausal physics.

0:39:57.0 JE: And so it's kind of like this moment, this post-war depression and malaise, this anti-causal enthusiasm, allowed the relative selection of what was otherwise epistemically very unpopular and very unsatisfying. So, you know, epistemic standards in science are the standards by which we credit something as knowledge, right? And that crediting has some strict criteria: we have to demonstrate some empirical accuracy, maybe through experiments or through observations or through quasi-experiments. But there are a host of other hidden epistemic standards, and those hidden standards come from what is familiar to us, what we associate with advance, what's beautiful, what's simple.

0:40:57.8 JE: So we have all these deep preferences about what science should look like, and when science shows up that looks like that, then we are ready to give it awards. We're ready to credit it with advance. And if it doesn't look like that, then it's often hard. And one of the reasons I'm interested in studying that is because I really am interested in collective advance and the way in which science is and can be an engine. And that means we need to critically observe the impact of these kinds of hidden epistemic standards on locking in a certain kind of slow, gradual scientific progress.

0:41:36.6 SC: It's a great point, because I think I came across the paper you're referring to when I was an undergraduate, and I was very dismissive of it, because, you know, quantum mechanics works because it fits the data in some sense. And I wanted to say, who cares about the philosophical predispositions in Weimar Germany? But now, in my twilight years, when I'm more sophisticated, I can absolutely see that there are certain ideas you're open to, a certain speed at which you're willing to change your mind about things, which will be affected by the wider world. And I'm actually, in retrospect, super impressed with the scientists of the '20s, that they were so quick to embrace this very, very different view of the world. And no doubt the wider context had a lot to do with that.

0:42:23.8 JE: Right. Yeah. And I think one of the reasons we can study this is just to identify it, but there's another generative and constructive way in which we can use this. So I spend a lot of time talking with science funders, advising science funders in the US and in Europe and in China and elsewhere. And part of this is the ability to do what I'll call science fiction, right? So if we didn't have a predisposition against this approach, then let us now turn the crank and see what other kinds of approaches would or could have been, or could now be, probable. So if this is the flow of the scientific river, then what are the other tributaries? Let's just imagine that some of the decisions that were made to pursue or not pursue areas were not rational.

0:43:16.5 JE: Let's just assume that. I mean, maybe many of them were rational, obviously, science moves forward, but let's just assume that some of them have to do with these complexities of history. Now we can use these models, these kind of tuned large language models, to basically run science fictions. And my group spends a lot of time now creating conferences that never [laughter] have and never could exist, generating paper topics, patents, proposals whose creators don't exist in the population of our world. But if we had a different structure of education system...

0:43:54.5 SC: Wow.

0:43:55.4 JE: Then these proposals would be very likely to exist. And this allows us to think, with an expanded view, [laughter] about what places we could pivot to. What is the field of action that we can engage in? So I think of this as not just an act of the humanities, but an act of speculation in the most generative tradition of speculation: can we build engines that facilitate and illuminate new possibilities?

0:44:22.6 SC: I'm very curious as to what are the fun conferences that could have been held in other possible worlds that we weren't in, [laughter] and do they teach us anything about what conferences we should organize?

0:44:33.3 JE: Well, this actually... So one of the nice things about science, again, is there's lots of data. People list on their CVs every conference they have participated in since they were a baby, you know? And every year there are new conferences, and I would say about 25% of science funding is convening, [laughter] pulling together people in different places for summer schools. So we have experiments running continuously that allow us to see what the effects of these convenings are in the unfolding space of science. And we have identified, for example, one project we're working on: there's a kind of attentional and status order of science, which is to say...

0:45:23.1 SC: I noticed.

0:45:23.6 JE: You know, electrical engineers cite physicists more than physicists cite electrical engineers; computer scientists cite electrical engineers more than electrical engineers cite computer scientists; information scientists cite computer scientists more than computer scientists cite information scientists. So there are flows and institutions of information, and we show systematically that, on average, when something gets cited against that flow, it's associated with much more disruptive attention in the field. And you might think, well, maybe those were just the best things to explore. But it turns out that it's much more likely, for example, for the things that get discovered against the grain to be accidentally discovered from a person at your institution; it's even more likely for you to be married to the person [laughter] in the lower-status field that you get exposed to there. So it's not like just the best ideas are flowing against the grain.
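[Editor's note: a minimal sketch of the status-order idea: tally citation flows between fields and flag citations that run against the dominant direction. The field labels and counts are invented for illustration.]

```python
from collections import Counter

# Toy citation events as (citing_field, cited_field) pairs.
citations = [
    ("EE", "physics"), ("EE", "physics"), ("physics", "EE"),
    ("CS", "EE"), ("CS", "EE"), ("EE", "CS"),
    ("info_sci", "CS"), ("info_sci", "CS"), ("CS", "info_sci"),
]
flow = Counter(citations)

def against_the_grain(citing, cited):
    """True if this citation runs against the dominant flow direction
    between the two fields."""
    return flow[(citing, cited)] < flow[(cited, citing)]

print(against_the_grain("physics", "EE"))  # True: physics rarely cites EE
print(against_the_grain("EE", "physics"))  # False: this is the usual flow
```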

0:46:25.2 SC: No.

0:46:26.1 JE: It's that occasionally, when we randomly violate the status order, we systematically find disruptive and disruptively useful things for these fields. So there's a whole host of conferences where, in this world, people from this camp are too good to associate with people from that camp; [laughter] they have epistemic standards that are different. And we're building these playful large language model agents that are having conversations and disagreeing with one another, agents who have different dispositions and personalities, and we're building simplexes of these things. Those are spaces where you can basically steer these models, not just through text, but through the simplex of the model space, to pick exactly this particular kind of person: a stoic physicist with a view of this from 1950. And we can put them in a room with someone who has a very different view of the world, and we can see sparks fly. And so, it's ridiculous. And yet at the same time, it's just another extension of this question of expanding the space of speculation. [laughter]

0:47:50.2 SC: Well, yeah. And it's very useful if you want to go forward and figure out how to do things better. I mean, one thing is that science is just bigger now, right? There are more scientists, there are larger collaborations. There are teams that are trying to get things done. What can we say about that in general and in particular, like are big collaborations better in some ways than smaller teams, etcetera?

0:48:17.9 JE: So there are a few layers to this morass, right? One is at the field level. As fields get larger, we find there is a kind of carrying capacity to a field, which you can think of as a conversation, a sustained conversation that occurs in departments and in journals and in all these environments. And as fields get larger, there's an exponential decline in the likelihood that new ideas will enter the canon of most-cited ideas.

0:48:51.1 SC: Sure.

0:48:51.8 JE: So if you have two fields that merge, they won't retain the intellectual diameter of the union of the two independent fields, those two added to one another; they will collapse. All of a sudden there will be forced competitions between ideas, over eyeballs and attention, and they'll collapse to one part of the space. So one challenge we see is: as we grow fields, as we have greater and greater success, as we recruit new people to these kind of mega-fields, that growth in size is massively sublinear in the new ideas that those new individuals bring. Within teams, we see a similar but slightly different dynamic, because teams are constructed project by project, sometimes within larger institutions that have some greater stability. But we find that big teams tend not to produce highly disruptive work. And there appear to be a few reasons for this. So what do these teams do systematically?

0:50:17.4 JE: Well, they produce papers really quickly. So they're more productive on average. That's one of the things that teams have to demonstrate. And how do they perform that productivity and still get a measure of attention? Well, they tend to do what a big production company does. If they're trying to decide between producing Transformers 9 [laughter] and Slumdog Millionaire, they're gonna do Transformers 9. They're gonna bet on the winning horse. And this is what happens in science. They take the huge popular ideas of yesterday and they momentum-invest. They take the next step, and they're gonna get, you know, Transformers 9 is gonna get Transformers 8 receipts minus epsilon.

0:51:04.3 SC: Yep.

0:51:04.4 JE: And they're gonna get most of the market that existed for the prior finding. So small teams are much more nimble. And part of it is because they have to be, because they can't produce Transformers 9. They can't be the biggest hit that builds on AlphaFold2. They can't do that, so they have to do something different. So they dig deeper into the past, they dig farther across fields, and so they're much more predictive of advances that are likely, if they succeed, to occur and be important in 5 or 10 years, basically when whatever bet we've made has really run dry, when we've mined all the good stuff out of this vein. That's when we're like, oh, wow, those small teams had other ideas.

0:51:55.5 JE: And it turns out that when we look at the structure of teams underlying those papers, and we can build the hierarchies within those projects, we find that the flatter the teams, which is to say, the more people involved in the design of the ideas, the slower the production of papers, but it increases the instability of the papers. It makes it more likely for those papers to have fundamentally fused ideas from different fields inside these spaces. So it's a complex bet. It turns out that people do more innovation on average than maximizing their citations would recommend, [laughter] because, as you said, you could publish something if it's true but boring and familiar. But at the same time, every time we measure how much innovation is happening relative to the optimal amount, the amount that would maximize discovery while retaining some memory in the system, we find that more is better. We haven't reached this optimal limit of innovation...

0:53:06.7 SC: Of innovation.

0:53:07.8 JE: Inside almost all the fields that we're looking at. Because, you know, we may have a plan: hey, we're gonna build an institution where 15% of the grants go to completely new things and 85% of the grants go to completely established things. But then even within those proposals, you have established people who are running both the innovative and the non-innovative. The conservatism goes all the way down.

0:53:35.4 SC: Right. [laughter]

0:53:35.5 JE: And so... [laughter]

0:53:38.5 SC: But there's also the fact that most things that might qualify as innovative are bad. Right? Like, there's a lot more ways to be wrong than to be right.

0:53:49.0 JE: Absolutely. Yeah. So this point becomes critical. If you want to increase the likelihood of disruptive success, you have to increase your failure rate.

0:54:02.8 SC: Yeah. Good.

0:54:03.7 JE: Like, if you're in a lab or in a field that is consistently making good bets, then a robot could have made those bets. You're not succeeding; succeeding is violating expectations. And most of those violations will obviously be wrong, because we know things: a lot of physics, a lot of these other fields have insights that work, that are highly descriptive. I will say, this is another place where AI comes in: fields that are fundamentally multi-scale, that are complex. And by complex, I mean self-dissimilar. It's not fractal; it's not like what you see at this level is also true at that level. It's different at these different levels.

0:54:54.5 JE: These kinds of complex systems have in many cases not even been the subject of intensive scientific study, because we don't have scientific traction. And these are the kinds of places where the arbitrary function fitting that the large AI-style models do is actually bringing a really big delta of inference. It's less likely for them to discover new things about simple and basic physics and some of these other areas, where we've had great minds for centuries. But areas at the intersection of, you know, physics and chemistry and biology, complex systems, self-dissimilar systems, these are the spaces where the right, most parsimonious equation might be 500 pages long [laughter] and still be the most parsimonious.

0:55:52.8 JE: I'm not saying the most overfit. That might be the best description, and it might actually be quite predictive and quite general. So all of a sudden, science leaves the capacity, potentially, of the human brain. I think this is one of those science fictions that we have to consider. AlphaFold2 is learning a huge complex function that does not look like any equation that has ever sat inside a biology text. And it kind of killed structure prediction. I mean, it just does it better. And it's not reducible. We're not going to search through the neuroscience of that model and find, oh, it's doing this one thing really well. No, we've squeezed down those models and they're still large.

0:56:49.0 JE: And so, yeah, I think we're entering an interesting era where we're considering the fact that there are going to be different forms and shapes of knowledge, science, and kind of technology or control. Increasingly there are going to be some things like AlphaFold2, which is not science. I mean, it is a regularity. The machine knows what it knows, so it has machine science; we don't have access to that science. There's a great paper called "Catching Crumbs from the Table," on the last page of Nature in 2000, by Ted Chiang, where he describes a world of metahumans. At that time, he was kind of talking about these pharmaceutical creatures that we couldn't understand anymore. And he imagines a world in which all of science is hermeneutics. We're just all interpreting the artifacts that have come to us without understanding, and we're trying to do neuroscience on these artifacts. And for me and a lot of other scientists, this is in fact precisely what we're doing. We're trying to make sense of brains that look like they know things.

0:58:08.2 SC: That's a great point. I recently talked to Jeff Lichtman, a Harvard professor and neuroscientist who is a leader in mapping the connectome of the human brain. And one of the things I thought was interesting, and it was unexpected in our conversation, was that he pushed back on the idea that we're trying to understand the brain. Because he said, look, if you could understand the brain in the sense of coming up with some simplified description of it, then the brain would be simpler. The brain is as simple as it can be, and it's super duper complex 'cause it's doing these things. We need to figure out what it does, but we're not going to find a short description that encapsulates the brain.

0:58:50.2 JE: Right. Yeah, but this represents a new era, if we take that seriously. If that's what our intellectual ventures become about, then science changes its character, it changes its standards, it changes its meaning. And the epistemic standards: does something feel right, does it look right? Does the equation have the right number of variables? Is it elegant? All these things, I won't say they go by the wayside. I think they're important. Obviously, pruned models and simplified understandings allow things to travel, like components, to new areas. So there are all kinds of reasons that simplification is useful. It just may be, for many of these complex self-dissimilar systems, that the components that we learn from these systems are much larger in themselves, more complex than we had previously imagined, and they become objects in the world. Like a fab facility. I was in Silicon Valley when Intel moved the first big chip fabrication facility from the San Jose area to Ireland. And they had made so many changes on that facility, they'd fixed something like 15 million errors, that they had no idea why the fab facility hit the tolerance levels it did.

1:00:23.0 JE: And so when they moved it, they were like, okay, we need to retain everything: its orientation to magnetic north, the relative density of the air, its elevation. They had no idea. And they took it over and it broke, for reasons, again, that they didn't understand and had to fine-tune out. But our new theories are looking more like this, the fab facility that produces chips with high accuracy and low tolerance. And it's unsatisfying as a scientist, in some ways, to look at that fab facility and say, wow, that's what we know. What is that? I don't even know what that is.

1:01:02.6 SC: Yeah, I mean, I guess I kind of get it, and I have very mixed feelings about it.

1:01:06.4 JE: I agree.

1:01:08.1 SC: Yeah, my feelings are not going to affect what's going on. But to put it in other words, and you can tell me whether I'm capturing it correctly: there's a whole new way of understanding things, of modeling things, of being a scientist. Once you have large language models, probably not the right word, but these deep learning models with many, many parameters, you can capture complexity in a way that is unprecedented, and impossible if you need a one-line equation to count as understanding. So you can do that. But so much of science is counterfactual, right? It's saying, well, if things were different, what would happen? And without that simple understanding, and maybe the example of the fab facility is an important one, how good are we at saying we truly understand, or even that the AI truly understands, if we can't say what the effects of changing things would be?

1:02:11.7 JE: So I've got a paper where we look at the impact of AI on science over the last 25 years. And of course, it's gone through different generations: first it was machine learning, then deep learning, and now more recently these large transformer-style foundation models. We look at the impact on scientists and on science, and we find that for scientists, it increases their mobility across scientific ideas, increases their speed of getting papers done, decreases the number of people on those papers, and increases their rate of promotion. For science as a whole, right now it actually narrows the scope of ideas that are discussed, because these are big-data-seeking models, and where do we have big data? On problems that we've been generating data about forever, problems we already know about. So, exactly as you suggest, on average right now these models are not being tuned to origins questions: origin of space, origin of time, origin of language, any of these origins questions that live before the institutions that produced and preserved data. And that's a world of counterfactuals.

1:03:26.0 JE: Now, that being said, it's not obvious to me that AI has to fulfill that role. That is the role it has fulfilled. What I was describing earlier is that with these large data-driven models, we can actually engage in continuously steering counterfactuals in a way that was unprecedented before, creating a factory for counterfactuals. And so I actually think it's possible for AI to enter some of these new areas. But it's going to require a new range of conceptual thinking. And scientists will effectively have to become, in some ways, philosophers of science, where they're selecting between philosophies, navigating their tolerance for kinds of philosophies of hypotheses, and then the machines will produce the hypotheses and in some cases test them.

1:04:15.9 SC: Well, I guess creativity is a big sticking point with the AIs. Is it? There's a school of thought that says they're just not creative in the standard way, that they're remixing all these various ideas that they've been fed from their training data. But I see no reason in principle that an AI couldn't be creative, if only because you could throw some random numbers in there. But then there's a question of how you cleverly throw the random numbers in there. How do you nudge the AI toward being creative in useful ways? Is this just my lack of understanding, or is this a frontier here?
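
A note for readers on the "random numbers" Sean mentions: one standard knob is temperature-scaled sampling over a model's output scores. The sketch below is an editorial illustration only; the scores and the four-option "vocabulary" are invented, not taken from any system discussed in the episode.

    # Minimal sketch of temperature sampling: one common way randomness
    # is injected into a generative model's choices. Scores are made up.
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_with_temperature(logits, temperature):
        # Low temperature: nearly deterministic. High: flatter, more "creative".
        scaled = np.asarray(logits, dtype=float) / temperature
        scaled -= scaled.max()                         # numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()
        return rng.choice(len(probs), p=probs)

    logits = [3.0, 2.5, 0.5, -1.0]    # hypothetical scores over four "ideas"
    for t in (0.2, 1.0, 2.0):
        picks = [sample_with_temperature(logits, t) for _ in range(1000)]
        print(t, np.bincount(picks, minlength=4) / 1000.0)

At low temperature the top-scoring option dominates; at high temperature the distribution flattens. That is the crude version of "cleverly throwing the random numbers in": choosing where, and how much, noise to allow.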

1:04:57.3 JE: This is an interesting frontier. I've got a DARPA grant with Lav Varshney, a computer scientist and really creative engineer at the University of Illinois Urbana-Champaign. He was at IBM when he created the Chef Watson system, which generated really creative combinations of ingredients and recipes that it would then automatically produce.

1:05:22.1 SC: Yeah.

1:05:22.2 JE: So we're spending time looking at the sciences and also building theories of creativity that will facilitate the kinds of abductive creativity that we see in the world. But I think a critical piece of this is what we were talking about earlier: outsiders and insiders, abduction, surprises that arise because our expectations, which come from the literature, and the data, which come from experiments and observations, are separate. So what does that mean? Large language models, large multimodal models, these deep learning models, they're not one model. They're many models. And so we basically have to exploit and accentuate that underlying diversity. We have to separate...

1:06:08.0 JE: ...some of these in turn. So if you just train these models on prediction, you'll find that they actually create little schools of thought inside themselves, but then recombine them at the last moment to make predictions.
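
A loose classical analogy for "many models that recombine at the last moment," offered as an editorial illustration rather than Evans's own formalism: bagging fits diverse sub-models on different resamples of the same data, and their divergent predictions are averaged only at the end.

    # A loose analogy (not Evans's method): bagged regressors as "schools
    # of thought" that diverge individually but recombine at prediction time.
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0, 1, 40)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)   # noisy toy data

    def fit_school(seed):
        # Fit one "school": a cubic polynomial on a bootstrap resample.
        r = np.random.default_rng(seed)
        idx = r.integers(0, x.size, x.size)
        return np.polynomial.polynomial.polyfit(x[idx], y[idx], 3)

    schools = [fit_school(s) for s in range(20)]
    preds = np.array([np.polynomial.polynomial.polyval(x, c) for c in schools])
    truth = np.sin(2 * np.pi * x)

    print("spread across schools:", preds.std(axis=0).mean())
    print("error of one school:  ", np.abs(preds[0] - truth).mean())
    print("error of the ensemble:", np.abs(preds.mean(0) - truth).mean())

The printed spread is the internal diversity of the "schools"; the ensemble average typically, though not always, beats any single school.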

1:06:21.5 SC: Interesting.

1:06:21.9 JE: They're doing science inside themselves. For us to use them to push the boundaries of science, we need to use them to generate really deep expectations, and then we need to use other models to survey the space of experiments and observations, and we need to dynamically identify what the surprises are. So absolutely: if we just use these models to create more data and then use that data to train the models, we'll have a collapse of attention, and the models will not get better, they'll get worse. But if we build an ecology of models, in the same way that we need to cultivate an ecology of scientists with different temperaments, from different fields, with different expertises, those ecologies of models have the potential to create a new world of creativity. It's the same with AI safety: it's not, okay, we have to put the thumbscrews, the restraining bolts, on these robots. No, we do the things we've done with unpredictable intelligences in the past. We make them sign the Magna Carta. We create a set of independent institutions, checks and balances. We need to create an ecology of regulatory AIs, an independent court system, an independent legislative body. That's how safety works.

1:07:48.6 JE: That's how innovation works. Even though those systems are in some ways opposed to each other, we do it collectively. And so we need to harness ecologies of artificial intelligence models and agents to create the same recipe for knowledge generation that we see working, and increasingly working, in the human system.
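
The "train on your own outputs and get worse" warning has a simple statistical caricature, sketched below under toy assumptions (a one-dimensional Gaussian standing in for a whole model family): each generation is fit only to samples from the previous one, and the fitted spread, the model's diversity, decays.

    # Toy caricature of model collapse: each generation is fit only to
    # samples produced by the previous generation, and diversity decays.
    import numpy as np

    rng = np.random.default_rng(42)
    mu, sigma = 0.0, 1.0    # generation 0: the distribution actually in the world
    n = 20                  # a small sample per generation exaggerates the effect

    for gen in range(1, 201):
        data = rng.normal(mu, sigma, n)       # data generated by the previous model
        mu, sigma = data.mean(), data.std()   # next model is fit only to that data
        if gen % 50 == 0:
            print(f"generation {gen:3d}: sigma = {sigma:.5f}")

Roughly speaking, mixing fresh outside data into each generation slows or stops the decay, which is the statistical face of the ecology argument: independent sources have to keep feeding the pool.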

1:08:18.0 SC: That seems to be a common thread here. One thing we didn't get to is a paper you wrote on political polarization among editors of Wikipedia articles, and how you get the best articles when they're not written from a single point of view, when there's some team-of-rivals aspect there. And in some sense, you're saying that's just as true for the computers as it is for the human beings.

1:08:42.0 JE: That's right. And the problem is that the computer model generators would love to be monopolists. And so they're trying to sell a product that is the one true AI product, with all diversity completely embedded within itself. And the answer is that cannot possibly work as a long-term generator of surprise, in the same way that if you have two communities and you force them into one conference room, they will talk about half of the ideas that they talked about when the two communities were separate. We need to create an ecology of AIs that basically enhances the population genetics from which future innovation becomes possible.
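
One crude way to see the conference-room arithmetic, as an invented toy model rather than anything from Evans's papers: give two communities partially overlapping repertoires of ideas, and let the merged room discuss only mutually familiar ground.

    # Toy model: a merged room converges on common ground. Numbers invented.
    import numpy as np

    rng = np.random.default_rng(7)
    ideas = np.arange(100)                    # the universe of possible ideas
    community_a = set(rng.choice(ideas, 50, replace=False))
    community_b = set(rng.choice(ideas, 50, replace=False))

    # Apart, each community works through its own 50-idea repertoire.
    print("separate:", len(community_a), "and", len(community_b), "ideas")
    # Together, conversation gravitates to mutually familiar ideas.
    print("merged:  ", len(community_a & community_b), "ideas in common")

With 50 ideas each drawn from 100, the expected overlap is about 25, half of what either community covered on its own, which is the flavor of the "half the ideas" claim.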

1:09:31.0 SC: I'm glad you just said that, 'cause I think you said a version of it before, but it didn't quite sink in for me. You need the different communities to talk to each other, but they also need to be different communities. Otherwise they just become homogeneous.

1:09:45.0 JE: Yeah. And this is where tradition is absolutely critical, in creating membranes between communities that can self-evaluate and have their own tastes. So I'm not anti-field. In fact, I think fields are absolutely critical in this connected age; they need to hold their own standards. Those standards become resources from which other fields can draw. And if you get to the point where everything is interdisciplinary, everything is post-disciplinary, then you destroy the very intellectual, dispositional, and epistemic assets that themselves become building blocks of new knowledge.

1:10:22.0 SC: It's just mush rather than a bunch of interesting little things.

1:10:25.0 JE: Just mush. No structure. That's right. Yeah. Black hole.

1:10:26.0 SC: Okay, good. It's the end of the podcast, so I'm just going to completely change gears and give you a couple of minutes to talk about the fact that you and my old physics buddy Daniel Holz teach a course at the University of Chicago on the end of the world, on the forthcoming doom. This seems like a bleak topic for a college course. Give us the sales pitch for why we should be thinking about these ideas.

1:10:55.0 JE: Well, if you look at the mammalian historical and prehistorical record, you find that on average a species lasts for about 3.5 million years, and we're about, I don't know, 100,000 to 200,000 years into our lifespan. That means we should have about 3.3 million years left, if we're average. We might be better than average; we were smarter than many of the creatures that went before. But who listening to my voice now thinks that we can project a thriving world in 3 million years, right? We can't even think 20 years out. We're making decisions on a next-year basis, on a two-year basis, on a 15-year competition with some other country. The biggest plans are for 2050, and that's 25 years. I think Dan Holz, who is a physicist who worked at Los Alamos National Laboratory and is very sensitive to the potential for nuclear arms to get into the wrong hands, the wrong place, and to set off a cascade, is thinking things could happen on a very short timeframe that could be cataclysmic. And there are an increasing number of those things, because they're the flip side of the powers we're controlling, artificial intelligence, environmental and nuclear powers, and they can get out of hand.

1:12:28.1 JE: There can be lab leaks. There can be nuclear meltdowns, right? So the more powers we have, the more ways we can destroy ourselves. From my perspective, even if we destroyed ourselves in 10,000 years, it would still be 3.3 million years too soon. So why don't we think longer? Why don't we have a longer time horizon? It's because of the very specific evidence-based epistemic standards that drive scientific advance. We have standards for what represents knowledge. In many fields, in modern economics, if you haven't done a causal identification of a certain flavor, then it's just not knowledge.

1:13:13.8 JE: You can't even make the comment; just get out of the room. So we've got these great scientific standards, which have been historically very useful for advance in our fields, but are ill-fitted for thinking about alternative futures, and these singularity events where something melts down or something explodes or something takes over and it's all over. We have a single experiment that we're running, which is humanity. And so we have to imagine alternatives in a way, and with a precision, that we have never done before, which invites us to explore new epistemic approaches, including things like narrative, science fiction, and stories, to think about possible futures and harms, because of the risks that we're creating through our increased power over the world.

1:14:10.9 JE: So I actually found the class therapeutic. Students came in with anxiety about climate and about other issues, and in talking about them and facing them, talking about policy possibilities and thinking about alternatives, I think it conveyed to all of us a sense of agency and possibility, and also a long view. We don't usually think about this unfolding set of future possibilities, and the people and environments that will be the inheritors of our choices today. So anyway, that was the reason for the class.

1:14:50.0 SC: There's a tricky balance, right? Because you want to emphasize the very real worry that disastrous things could happen, together with the lesson that there are still things we can do about it, still tools we have to prevent it. Be alarmed, but don't despair.

1:15:08.0 JE: Right, exactly. And I think talking it through with experts from around the world, some of whom were despairing, some of whom were busily engaged in their particular projects, gave us an expanded view of how to think about knowledge, extrapolating from the singular experiment which is humanity, in this world, in this universe, you know?

1:15:30.0 SC: All right. You've given us an expanded view. James Evans, thanks very much for being on the Mindscape podcast.

1:15:36.0 JE: Thank you, Sean. It's been a pleasure.

[music]

2 thoughts on “304 | James Evans on Innovation, Consolidation, and the Science of Science”

  1. Great episode, made me think of this MathOverflow answer by Terence Tao on group collaboration https://mathoverflow.net/questions/487041/collaborative-repositories-on-open-problems/487065#487065 that I saw recently (good timing). In it, he talks about the idea of the signal-to-noise ratio of an idea pool, and groups made up of optimists and pessimists to grow and prune the idea tree, respectively. He also invokes Metcalfe's law about the network effect of a group, though there the model is of a complete graph, whereas Evans' point in the podcast, in my interpretation, is that you want cliques that are mostly disjoint, otherwise you get oversmoothing.

  2. I am not so sure that long-term thinking is the key, or even useful, for a successful species that exists beyond the 3.8-million-year mark.
    When one thinks about the most successful species, say nitrogen-fixing bacteria or maybe ants, which exist pretty much everywhere and have for a long, long time, and then one thinks about how much long-term thinking they do, one realizes these have been very successful but don't seem to do much long-term thinking at all, in my humble opinion. 🙂

