Octopuses, artificial intelligence, and advanced alien civilizations: for many reasons, it's interesting to contemplate ways of thinking other than whatever it is we humans do. How should we think about the space of all possible cognitions? One aspect is simply the physics of the underlying substrate, the physical stuff that is actually doing the thinking. We are used to brains being solid -- squishy, perhaps, but consisting of units in an essentially fixed array. What about liquid brains, where the units can move around? Would an ant colony count? We talk with complexity theorist Ricard Solé about complexity, criticality, and cognition.
Support Mindscape on Patreon.
Ricard Solé received his Ph.D. in physics from the Polytechnic University of Catalonia. He is an ICREA Research Professor at the Catalan Institution for Research and Advanced Studies, working at the Universitat Pompeu Fabra, where he is head of the Complex Systems Lab. He is also an External Professor at the Santa Fe Institute, a Fellow of the European Centre for Living Technology, external faculty at the Center for Evolution and Cancer at UCSF, and a member of the Vienna Complex Systems Hub. He is the author of several technical books.
0:00:00.1 Sean Carroll: Hello, everyone, welcome to the Mindscape Podcast. I'm your host, Sean Carroll. You all know, if you're a listener here, that I am on the complexity bandwagon. I think that complex systems are really interesting. I mean, maybe that's just kind of obvious. Maybe everyone thinks complex systems are interesting. The question is, can we make progress thinking about complexity in and of itself? In other words: here is a human being, here is an economy, here is the Milky Way galaxy. These are three complicated systems. Do they have anything in common? Does it make sense to study the idea of complexity as a field of study, rather than just studying the individual examples of it separately? I think that it does make sense, and I think that we don't have the answers yet. We don't have a fully fleshed-out theory for how best to think about complexity. I did a recent podcast with David Krakauer, where I suggested that maybe we should think of complexity science as pre-paradigmatic. He didn't like that; he thinks that there is a paradigm out there, which is great. It's great that people disagree about this. I mean, maybe we actually agree on the substance and are just slightly disagreeing about the words. But one way you can make progress on thinking about complexity is to really narrow in on a particular kind of complex system and study it from all angles.
0:01:24.0 SC: And if that's true, then what better kind of complex system to study than the brain. Or not just the human brain and its biological specificity, but the idea of an intelligent brain, okay? So of course you can study the brain, you can be a neuroscientist, etcetera, etcetera. But you can also take a step back, you can abstract. You can say, "Okay, I think that a human brain is intelligent, it has thoughts, it does reasoning." We are also kind of talking about artificial forms of intelligence. So what are the general principles here? What are the general circumstances under which a system of any sort can be thought of as thinking, as doing cognition, as being intelligent? What is the space of all possible cognitions? And how do you get there? How do you get there in biological evolution? How do you get there in design? What are the different phases? So do you need it to be kind of a solid structure? The brain in a human being is kind of squishy, but it's still mostly solid. The neurons are hooked up to each other in a more or less predictable way. Whereas if you look at the flight of starlings, for example, a flock of birds communicating with their nearest neighbors, can you think about that as a kind of collective intelligence? What about ant colonies or bee colonies? Is there some thinking going on?
0:02:51.9 SC: Is there some intelligence that comes not from individual neurons hooked up in a rigid way, but rather from individual units that can move around and flow past each other in different ways? So you have solid brains, you have liquid brains, you have artificial brains. What's going on? What is the space of all possible ways of thinking? Well, if that's interesting to you, you've come to the right place. That's what we're talking about today with Ricard Solé, who is the head of the Complex Systems Lab at the Catalan Institute for Research and Advanced Studies in Barcelona, and also an external professor at the Santa Fe Institute. He's trained as a physicist, but like many people in this area, he has let his curiosity fly around and he's ended up thinking about exactly this question. How does complexity develop? What is the space of possible cognitions? What kinds of architectures are there for information to flow in a network that you and I would recognize as something intelligent, something doing some kind of cognition?
0:03:54.7 SC: That might help. This kind of thing might help not only in thinking about biology, but in designing intelligent agents, whether it's artificial intelligence in a silicon-based computer, or maybe synthetic biology, where we're in there editing genes to make new kinds of organisms that might qualify as being intelligent. So again, very much along the lines that we support here at Mindscape, which is that basic research into these grand concepts will, if you do it correctly, pay off down the road in specific ways of thinking about really down-to-earth systems. So Ricard is a great guy to talk to about these things. We go over a wild bunch of things. The conversation is a lot of fun. So I hope you enjoy it. Let's go.
[music]
0:04:58.1 SC: Ricard Solé, welcome to the Mindscape Podcast.
0:05:00.4 Ricard Solé: Thanks for having me here.
0:05:02.1 SC: So the space of cognitions is the phrase that has appeared in talks that I've heard you give, and articles you've written. That's just a very exciting concept, the space of cognitions. Do we understand that very well? Is this a well-known thing or are we just trying to develop the concept?
0:05:20.6 RS: Yeah, no, it's still ongoing research. The ambition here: when you speak to people, you hear them speaking about cognition in very different kinds of biological systems, and oftentimes you feel like things should be much better defined. And something that we launched at the Santa Fe Institute was the idea of trying to map the cognition space, by including things that go from what we call the solid brains, which means our brain, for example, with neurons located in specific positions and where all the fun happens in the interconnections.
0:06:02.7 SC: Right.
0:06:03.4 RS: Whereas in nature, you have ant colonies, the immune system. There's plenty of interesting stuff there that doesn't involve that kind of picture of fixed neurons in space. Instead, the units move around and they process information in different ways. So how do we put all this together? That's kind of the ambition.
0:06:24.2 SC: And just so that the audience knows sort of how to triangulate where we are here, this sounds pretty darn interdisciplinary. Like actual brains, but also collective brains in ant colonies or the immune system, and also AI kinds of things. It sounds like a lot of training is involved in figuring this out?
0:06:47.2 RS: Yeah, of course. And in fact, this ambition of mapping the cognition space comes from our interest in understanding whether or not there are general laws for complex systems, and in particular, general laws that define or constrain the possibilities that evolution can explore in terms of language, cognition, sentience. And yeah, this is a very much interdisciplinary effort. And artificial intelligence comes in now as an interesting item because, of course, you can now ask, maybe much better than two decades ago, whether or not so-called artificial intelligence will be similar to ours. And that contributes to the question that I'm really, really fascinated by, which is what kinds of things evolution can generate and whether or not there is a big convergence. So even for artificial intelligence, maybe you'll find in the end the same kind of design principles. We'll see.
0:07:52.3 SC: Well, that's good because I was just gonna ask, you mentioned the possibility of general laws governing behavior of complex systems. That's a pretty frequently mentioned ambition in the complex systems spaces that you and I move in. So what do you think? Are we going to get general rules, or is it more like we should settle for a bunch of specific rules applying to different contexts?
0:08:15.3 RS: I hope we do. Of course, it's not a simple task at all. Even agreeing on what exactly complexity means is sometimes a bit controversial, as you know. For me, the definition comes from emergence, essentially. This idea that, like in a termite nest, termites construct these amazing structures which are 1,000 times larger than the individuals, whereas the individuals are blind and communicate in very simple ways. You can spend your whole life looking at the individuals, and you will never figure out how the collective creates the hyperstructure. And besides that, there are a lot of ideas that come from very different areas, and that's part of the trip we need to figure out: from theoretical and statistical physics to computation theory, we have to blend concepts from all of them. But personally, I think... For example, sometimes people discuss the idea of whether or not life on another planet, or in an alternative biosphere, would be different from the one we know. And I'm pretty convinced that it might seem different, but the basic logic of life is probably universal and unique.
0:09:31.0 SC: Okay. Well, that's a very good open question and hopefully we will get some data on it within our lifetimes. That would be very exciting. So speaking of life then, let's back up. We're talking about the space of all cognitions, that gets us excited, a little bit of foreshadowing, but let's just get to cognition in the first place. Can you say... I don't know how you yourself think about the evolution of cognition. Is it just one in a series of major transitions that happened in evolution, or is it something special?
0:10:03.6 RS: Yeah, that's a very good question. Well, if you think in terms of, again, general laws of complexity, that's where cognition is so important: it is probably part of the explanation of why the biosphere is complex and not, as somebody said, just a field of microbes or very simple replicators. Where is this whole complexity coming from? And the answer is that there's a big payoff in predicting the future, in being able to gather information from the environment and respond to it in adaptive ways. And that propels brains; that's the engine of cognition in evolution. And that can start from the first simple cells, where we still need to figure out when those cells were able to use information and when they were able to do computations, because this is, I would say, at the crossroads between biology and physics. And then, I agree with what you mentioned: across evolution, I think, there have been several cognitive transitions.
0:11:16.9 SC: I always think of it in terms of a... What is the word? A trade-off kind of thing. Where, as you say, you want to be able to predict the future. The world is a scary, unpredictable place; it's complex all by itself, even without organisms in it. And so we might want to process information and make predictions, but that costs energy and it makes us vulnerable in different ways, yet it evolved nevertheless. What do we know about that evolution? How does one little neuron help us?
0:11:50.8 RS: Well, if you look at the basic units, clearly the invention of neurons is one of the big transitions. Each and every single cell we know is a system far from equilibrium that maintains itself by creating gradients between outside and inside, and that's kind of the first step. Neurons not only create this difference of ions between the internal and external medium, they use it to propagate information. And I always tell my students, it's actually beautiful to see neurons as a specialized cell type. You see cells from the liver or the kidney, and they have these special functionalities which have to do with metabolic things. But when you look at neurons, you see something that wants to communicate. That's why the shape is the way it is: it wants information. And that was a big revolution in evolution.
0:12:55.6 SC: So going back then, is it... Is there any sense in which a single cell thinks about the world, a single cell organism, I should say?
0:13:08.1 RS: Well, there's a bit of controversy here, because I personally think that too often, in the last two decades I would say, terms like sentience or understanding are misused for single-cell organisms. I will say that we have a good deal of understanding of what they do, and they do use something that we believe was fundamental in some of the revolutions towards cognition, which is associative learning, this capacity of learning by connecting different kinds of external stimuli. They can do that, and plants can do that. But I personally think it's not much more than that. This is not the kind of memory storage that is so spectacular in other animals.
0:14:05.6 SC: But even that sounds impressive to me. What does it mean to say that a single-celled bacterium, I guess, does associative learning?
0:14:18.7 RS: It is. The thing is that they can gather information, and you can actually engineer that in the lab, so we can put complexity there. They can gather information in a natural way about some kind of signal. And in an environment where some external stress appears correlated with that signal, they eventually become able to respond to the signal on its own, even de-correlated from the original stress they had been habituated to.
0:14:48.9 SC: I see. Okay.
0:14:49.8 RS: So it's kind of like Pavlov's dogs.
0:14:52.3 SC: Yeah.
0:14:52.3 RS: The single-cell version.
0:14:55.3 SC: I see. So there's something bad, and they recognize there's a correlation between this bad thing and this other signal, and then they start responding to the signal, whether or not the actual original bad thing is there, and that's learning. Okay. I get it. That actually makes sense.
0:15:08.6 RS: That's right. Yeah, yeah.
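What Solé describes is essentially classical conditioning at the cellular scale. A standard way to capture its logic, without claiming anything about the actual molecular mechanism he has in mind, is the Rescorla-Wagner learning rule, where an association strength is driven up and down by prediction error. A minimal sketch; all parameter values are illustrative assumptions:

```python
# Minimal Rescorla-Wagner sketch of associative (Pavlovian) learning.
# V tracks how strongly the organism expects the stress given the signal.
# Illustrative parameters only -- not a model of any specific cell.

alpha, beta = 0.3, 1.0   # learning rate, salience of the stress
lam = 1.0                # maximum association the stress can support
V = 0.0                  # initially the signal predicts nothing

# Phase 1: signal and stress occur together (conditioning trials).
for trial in range(20):
    V += alpha * beta * (lam - V)   # prediction error drives learning

print(f"association after pairing: {V:.2f}")      # close to 1.0

# Phase 2: signal alone, stress absent (extinction trials).
for trial in range(20):
    V += alpha * beta * (0.0 - V)   # unconfirmed expectation decays

print(f"association after extinction: {V:.2f}")   # back toward 0.0
```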
0:15:09.5 SC: How do they do that? Where in the cell do they store that information? I get the feeling from talking to you and other people that cells are more complicated than I give them credit for.
0:15:20.3 RS: Yeah, yeah. They are very complicated. And as you probably know, we can actually talk about cells whose genomes are very small, kind of minimal cells, and we still don't know what many of the genes there do. But the thing is that they have these signaling networks that gather information from the membrane. You can imagine them as a kind of cables going from the membrane to the genome or the nucleus, depending on what kind of cell it is. And in a way, it reminds us of a neural network, except that the neural network here is just an analogy; whatever is there has been fixed by evolution. And then they can store information in, for example, switches. Within cells, we have switches that allow them to store bits of information. But...
0:16:15.4 SC: Sorry...
0:16:16.0 RS: Despite...
0:16:17.5 SC: What kind of literal switches are we talking about? Like a chemical that can be one way or the other? I'm not... What kind of... What's a switch?
0:16:24.3 RS: The usual thing is genetic switches.
0:16:26.3 SC: Okay.
0:16:27.4 RS: And actually, we know the logic of that very well. It is usually switches that involve two genes that negatively regulate each other. So I try to inhibit you, you try to inhibit me, to repress each other. And that allows the cell to store memory. It's pretty much, in a way, what happens in electronics.
0:16:43.6 SC: Okay. Is it actually editing the DNA or the RNA or just expressing it differently?
0:16:53.6 RS: Well, in gene regulation, it is expression.
0:16:54.0 SC: Okay.
0:16:56.6 RS: It's whether or not a given protein appears. So in that way, it is kind of binary.
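The mutual-repression switch described here has a well-known mathematical form, the bistable genetic toggle switch of Gardner, Cantor, and Collins. A minimal sketch of those dynamics; the parameter values are illustrative, not measurements from any real circuit:

```python
# Minimal toggle switch: two genes, u and v, each repressing the other.
# With strong enough repression (n > 1) the system is bistable: it settles
# into "u high / v low" or "v high / u low" and stays there -- one stored bit.

def simulate(u, v, alpha=5.0, n=2.0, dt=0.01, steps=20000):
    for _ in range(steps):
        du = alpha / (1.0 + v**n) - u   # u is produced unless v represses it
        dv = alpha / (1.0 + u**n) - v   # v is produced unless u represses it
        u, v = u + dt * du, v + dt * dv
    return u, v

# Same dynamics, two different starting nudges, two different stable states:
print(simulate(u=1.0, v=0.1))   # ends with u high, v low  -> bit = 1
print(simulate(u=0.1, v=1.0))   # ends with u low,  v high -> bit = 0
```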
0:17:01.9 SC: Okay. Good. All right, good. I've learned something already. If our connection dies now, it'd be a worthwhile podcast, but I want more. So do we know when along the evolutionary process the first neuron came to be?
0:17:20.9 RS: Okay. It's an ongoing discussion, because there's very recent research on this by a number of groups. The first neurons had to appear in a context that has to do with multicellularity. Because neurons...
0:17:35.5 SC: Sure.
0:17:35.6 RS: By definition, at least, that's the origin of it. At the beginning there were cells that were able to act as receptors. The cells themselves were able to detect something and secrete some signal. These still exist; there are a lot of cells that do that. But they don't send signals, in principle, to anyone else. The beginning is receptors that are able to gather information in simple ways, but that information is not propagated all around the organism.
0:18:11.0 SC: Got it.
0:18:11.5 RS: For that you need further evolution.
0:18:14.2 SC: Okay. Good. So the first, I always like it when something complicated can be broken down into steps where you can see why each step would make sense. So first, we receive a signal from either the outside world or maybe even inside the organism itself?
0:18:29.0 RS: It could be. Yes.
0:18:29.9 SC: Yeah. And then we just do something with that within ourselves. But then the next step to being a neuron is talking to other neurons, I suppose.
0:18:39.1 RS: Exactly. Exactly. In a way. And it makes sense that this happens eventually. I mean, before the Cambrian explosion, which happened 550 years... I always have to remember: million years ago.
0:18:55.6 SC: Million. Yeah.
0:18:56.5 RS: Before that, we had a biosphere with essentially very simple organisms that performed functionalities like filtering water, pretty, pretty boring stuff. But to evolve the first predators, which in a way was a big revolution that changed everything, you need to have sensors, and the sensors have to integrate the information. And that's probably the key. I think it is a pretty reasonable idea that if you have to move in an environment that is uncertain, not just sit there filtering, you need to integrate information. And the fact that you need to integrate, and in a way predict, is probably the engine that made brains happen.
0:19:42.0 SC: And the idea, is this before or after the idea of predation? I've been told that once animals or once organisms start eating other organisms, a whole bunch of new capacities need to be developed.
0:19:58.3 RS: Yeah, absolutely. We're talking about metazoans, multicellular systems, because predation also exists in the realm of bacteria.
0:20:08.9 SC: Okay.
0:20:09.5 RS: There are predator bacteria too. But for animals, yeah, predation came with eyes and the nervous system. So once this started to be in place, you had a whole biosphere of poor animals just sitting there, unable to escape. And that promoted a huge arms race. Right?
0:20:35.2 SC: Right.
0:20:35.2 RS: Of developing defenses against predators, etcetera. And that changed everything.
0:20:39.8 SC: Yeah. Okay. Good. And the arms race centers, I'm gonna boldly conjecture this and you can correct me, around information in some sense. I'm still trying to, for my own sake, figure out how the process by which organisms got better at using information. I mean, in some sense, a bacterium uses a little bit of information. A gradient. There's more nutrient in one direction than another direction, but it's not really thinking about the information in the way we usually think about it.
0:21:11.7 RS: Yes, yes. Right. I will say that there are two major events here that have to be considered. One is, as I was saying before, that movement was crucial, because without movement you don't really have predators. And that required a nervous system. The other was the development of sensors. And for that, for integrating information, you need another revolution, which is interneurons.
0:21:44.4 SC: Okay.
0:21:44.4 RS: So, elements that are not just detecting signals or executing tasks; they are in the middle, they are connecting. And once you have that, you have this beautiful thing, which is information processing. From there, you can actually jump into the real big complexity. And this is precisely why some systems, like plants... Plants don't have neurons, but they also don't have anything equivalent to interneurons.
0:22:14.8 SC: Yeah. Okay.
0:22:15.4 RS: Information-processing elements. And not having that is a huge limitation in many ways.
0:22:23.9 SC: Got it. So I think, again, that's a concept that I'd never really appreciated the importance of. So it's relatively straightforward to imagine the usefulness of neurons that sense things, likewise, neurons that send out instructions to the rest of the organism. But then there's a revolution when you invent just neurons that only talk to other neurons and can really therefore process information. That's their job.
0:22:50.6 RS: Yes, exactly. Yep.
0:22:52.4 SC: When did that happen? Can we pinpoint that?
0:22:57.5 RS: Well, probably... I mean, before the Cambrian, for sure, we had organisms that have nets, nets of neurons, not brains, not centralized control, but nerve nets like hydra nowadays, or other very simple organisms like jellyfish. So you do have a network and you do have interneurons, but that is typically connected with things that have more to do with locomotion, not exactly complex information processing. But that means it was there at the basis of the Cambrian explosion too.
0:23:40.8 SC: Okay. And I guess that helps explain why the word network keeps appearing in this kind of discussion. I mean, you have cells, they can do some things, but hooking up those cells in an array, in a network is a crucial step in truly processing complex information.
0:23:57.4 RS: Exactly. Because then, again, you can have emergent behavior. You can store memories in complex ways. And processing means that you start to have access to a big space of possibilities, in terms of dimensions. Whereas if you have only sensing and reacting, you're just limited to responding to the environment in a very predictable and simple way.
0:24:26.3 SC: And this is jumping ahead a little bit, but despite the fact that there's a lot of excitement about neural networks and AI and things like that, I take it that a typical computer architecture, like the laptop that I'm using to talk to you with, doesn't have this kind of network structure. There's not subunits that you would recognize as neurons, it's more homogeneous, like there's a memory, there's a CPU and they're doing different things.
0:24:53.6 RS: Yes. The logical architecture is totally different. This is the Von Neumann architecture that goes into our computers, and it has to do with something that is easy to understand: typically, information is processed in a sequential way, which is extremely efficient for the hardware we have, but has very little to do with the brain. That said, I would like to mention one interesting thing that happened when people started to build these computers with very dense arrays of microprocessors. Interestingly, one thing that was found is that the web of connections, wired in a very efficient way to reduce cost and signal delays, turns out to have statistical properties that are pretty much identical to what we observe in parts of the brain cortex.
0:25:53.4 SC: Okay.
0:25:54.4 RS: And again, as we were saying before, if you look for kind of universal laws, it's interesting to see that the engineers, who didn't know anything about the brain cortex, ended up with a scaling law, Rent's rule, with the same kind of behavior as parts of the brain cortex, again suggesting that maybe there really are universal laws.
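Rent's rule, from circuit design, says that the number of external connections T of a block scales as a power law of the number of elements g inside it, roughly T ≈ t·g^p with exponent p < 1; similar Rentian scaling has been reported for cortical networks. A sketch of how such an exponent is estimated; the data below are synthetic and purely illustrative:

```python
# Estimating a Rent exponent p from (block size, terminal count) pairs,
# T = t * g**p, by fitting a line in log-log space. The data here are
# synthetic, generated with p = 0.75 -- purely for illustration.

import math, random

t, p = 2.0, 0.75
blocks = [(g, t * g**p * random.uniform(0.9, 1.1))
          for g in [8, 16, 32, 64, 128, 256]]

xs = [math.log(g) for g, _ in blocks]
ys = [math.log(T) for _, T in blocks]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(f"estimated Rent exponent p = {slope:.2f}")   # close to 0.75
```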
0:26:14.0 SC: And you mentioned the Von Neumann architecture, is that distinct from how a neural network works?
0:26:22.2 RS: It is, totally. Because a neural network is formed by elements that, in the case of living systems, are neurons, which are polarized and send signals from one part to another in one direction. And they are usually organized in multiple layers, and information processing is highly parallel. That's something that happens in the brain but not in a computer, not even in parallel computers.
0:26:55.0 SC: Right, okay. Whereas, maybe explain to our audience what the Von Neumann architecture is in contrast with that kind of network point of view.
0:27:05.8 RS: Well, the Von Neumann architecture is grounded in the fact that you have kind of basic modules: a central processing unit to process information, and memory where you put the data ready to be processed. It's essentially an architecture that we identify very easily in our computers. But as I was saying before, in a way it's inspired by the idea that you have to deal with software, because that was the revolution that happened in Von Neumann's time, and Von Neumann was the one who actually foresaw it. And for a system that is binary and uses the kind of architectures we use, the simplest, nicest, and most powerful way of doing that is this kind of separation between processing and memory.
0:27:55.8 SC: Okay. But nevertheless, that sounds like a sensible thing for human beings to design when you first start designing computers. But you're hinting that as we're pushing our capacities more and learning new ways, we're kind of converging back on a more biological, networked vision.
0:28:18.3 RS: Yes. In fact, we wrote a paper recently that we entitled Evolution of Brains and Computers: The Roads Not Taken. And in that paper, which I wrote with one of my former students, Seoane, we argue that when you look carefully at what artificial neural networks do, and that includes the most common ones being used nowadays, when you look at the way they work, the potential they have, and what's really being delivered, we defend the idea that in order to get to real general artificial intelligence, something that really matches what we do, you probably need to go through some of the paths that our brains have followed in evolution. And in particular, there are several things that I think are extremely different and not yet there in the machines, things that are kind of the singularities of our brain.
0:29:19.6 RS: One is language, complex language; as much as you see that they can use language, it's not the same kind of thing. The other is something that fascinates me, and that is time. Somebody said we are mental time travelers. On the one hand we use memory, and the same architecture we use for memory allows us to do something absolutely amazing, which is thinking not of one future but of many possible futures. That was a revolution, really, in the evolution of humans. And then there's this apparently disconnected but very important thing, which is this capacity of understanding the minds of others, of understanding what the other is thinking, so to speak. Because when you put these three things together, something really singular happens.
0:30:13.5 SC: Yeah, absolutely.
0:30:14.6 RS: Nothing like that is in the artificial intelligence that we have.
0:30:17.4 SC: Okay. Good. I'd like to maybe expand on that a little bit. 'cause I'm a big believer in the importance of the mental time travel stuff. We had Adam Bulley, I don't know if you know him, but he was a guest talking about how that helps distinguish human...
0:30:29.0 RS: Okay.
0:30:30.6 SC: Ways of thinking from other species. But you said the AI way of thinking is not the same kind of thing. So you just identified some features that are true for human cognition. In what way is AI not doing that? Because we all know, I'm on your side here, but anyone who has interacted with ChatGPT knows that it sounds human. So how can it be sounding human if it's doing something so different?
0:30:55.0 RS: Well, I guess it depends who you ask. I mean, I'm impressed by ChatGPT, I don't want to be dismissive here, because I can't be. But it's interesting to see that this is a system that has no past: there's no childhood, no learning, nothing connected with other people, as humans have, and the cultural part is enormously important. But they have been trained by making, so to speak, a cultural compression process, in the course of doing something apparently so trivial, which is predicting the next word. Because it all comes from this idea.
0:31:41.7 SC: Yeah.
0:31:42.1 RS: How do you predict the next word? But it turns out, and I think it's important to try to understand it, that in the process of optimizing this prediction, this system seems to have been generating something that is kind of reasoning, something that mimics reason.
0:32:03.5 SC: Yeah.
0:32:04.2 RS: And I say mimics because, of course, there's no understanding there. But it's interesting to see that we humans are kind of drawn in. And I always think about the origins of artificial intelligence: how do we see the machine operating, and how do we interpret that? I was thinking of when I was a student, and there was this program, ELIZA, which was a very simple program, right?
0:32:34.5 SC: Yeah.
0:32:34.7 RS: Even for us... I remember my colleagues and I programmed that, and even for us, knowing that there was no intelligence, that there was nothing there, it's something that calls to your brain, and you kind of have a feedback with that machine. And ChatGPT, of course, has amplified this in ways that we didn't expect. But again, there's no time there, clearly. And you can actually test ChatGPT a little bit with questions about time and see that there are some troubles there. But as we move more and more into big versions, clearly the lines get blurred.
0:33:19.1 SC: Yeah. It gets better.
0:33:21.3 RS: No. We'll see.
0:33:23.7 SC: We will see. Okay. Let's get back to the biology a little bit, I'm not quite done with that, before we move on. 'Cause you referred to the kinds of structures that are in ordinary biological brains. I presume that what you're gesturing toward is the claim that the brain is a scale-free system. That it's on the edge of criticality, or it is critical, at the edge of chaos. If I'm right about that supposition, explain what all that means and why it matters.
0:33:55.3 RS: Okay. There's one clear thing in the brain in terms of dynamics, which is this critical state. Consider pathological brain states, looking at the dynamics, things that you can actually measure from EEGs or from any kind of non-invasive method; you have time series of changes. In an epileptic state, or some similar pathological state, you can see that the brain is much more organized, much more regular. Not good, not good at all, of course. Whereas in, for example, coma states or locked-in states, you see that there's low activity, much more random, so you have a kind of disordered state. We don't want that either, of course. And the nice thing is that the research has been showing that the healthy brain seems to operate in a critical state, right on the boundary of a phase transition. There's an ongoing discussion about exactly what kind of transition is going on there. But clearly it happens to take advantage of the regularities that you have, because you have internal oscillations, but also of this amazing capacity that you have at the critical point of reacting quickly to any kind of stimulus. That said, this is the dynamical state; how is it connected with actual cognition? Right?
0:35:30.8 SC: Okay.
0:35:31.6 RS: Because, for example, language also exhibits some features of scale-free behavior, and so do other attributes you can find. But how are they connected? In principle, you could imagine an intelligence that doesn't use criticality. So we need to know.
0:35:56.9 SC: You're kind of insinuating or implying that they are connected. But maybe you're saying we don't know yet. That's a conjecture.
0:36:08.2 RS: Yes. Yes. We don't yet have a good theory to connect the computational tasks, what you would describe as functional cognition, with the dynamics.
0:36:19.7 SC: Okay.
0:36:20.2 RS: We don't have that yet.
0:36:21.3 SC: Good. I mean, maybe one thing just to get clear for the listeners, 'cause you and I will use a word like phase transition, but I bet that some people think that that's a process unfolding in time, like water boiling or ice melting. And that's not what you're saying is happening in the brain, I presume.
0:36:41.6 RS: No, no, no. I mean, we do observe phase transitions, in particular in certain experiments you prepare for people, for example looking at an object that has two different interpretations, where you can see the brain flipping between different states. But it's more that, right at this transition, you have this critical state, and you kind of live there.
0:37:04.8 SC: You stay there.
0:37:05.3 RS: And again, that's an interesting thing that seems common to other systems, like RNA viruses that live on the edge of catastrophe... [inaudible] It seems life kind of likes it.
0:37:16.0 SC: Yeah. And I'm never completely clear on whether that should be obvious or surprising, in the sense that there are more ways for the brain to be either completely disordered or completely ordered. There seems to be some need for regulation to keep it right at this critical point, where there's something going on at very different length scales and timescales.
0:37:40.3 RS: Right. As I said, we are still lacking a theory, just as, for example, we lack a neural theory of language, which is something that we really need to figure out. I like the idea, introduced many years ago, that computation in complex systems, in biology in particular, needs to occur somewhere where you can take advantage of order, because you need to store information and have regularities that are predictable, but where, on the other hand, you are open-ended and able to actually manipulate information. Probably the boundary between order and disorder is where this happens. And for a physicist, the natural language is to think that you are at the border of this phase transition between order and disorder.
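The order/disorder boundary Solé sketches has a standard toy model: a branching process in which each active unit triggers, on average, σ others. Below σ = 1 activity dies out quickly, above it activity runs away, and exactly at σ = 1 avalanches of all sizes appear, the scale-free signature reported in cortical recordings. A minimal sketch; the offspring distribution and parameters are illustrative assumptions:

```python
# Branching-process toy model of neural avalanches. sigma is the branching
# ratio: the average number of units each active unit triggers next step.
# sigma = 1 is the critical point, where avalanche sizes become scale-free.

import random

def avalanche_size(sigma, cap=10_000):
    """Total activations in one avalanche started by a single active unit."""
    active, total = 1, 1
    while active and total < cap:
        children = 0
        for _ in range(active):
            # each unit triggers 0, 1, or 2 others, with mean sigma
            children += (random.random() < sigma / 2) + (random.random() < sigma / 2)
        active, total = children, total + children
    return total

for sigma in (0.8, 1.0, 1.2):
    sizes = [avalanche_size(sigma) for _ in range(500)]
    print(f"sigma={sigma}: mean {sum(sizes)/len(sizes):8.1f}, max {max(sizes)}")

# sigma=0.8: activity dies quickly (like the low-activity, disordered states).
# sigma=1.2: runaway excitation that hits the cap (like seizure-like states).
# sigma=1.0: avalanches of all sizes, with no characteristic scale.
```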
0:38:39.7 SC: So let's start applying this to trying to figure out the space of cognition. These are examples of brains and neural systems, and we're kind of familiar with that. But you want to say that there's a whole other world out there, that these are just the solid brains. What about the liquid brains?
0:39:00.0 RS: Yeah, yeah, yeah. The liquid brains are a whole story. We tend to think that we humans have been, unfortunately for our planet, very, very successful. And one of the reasons for that is that we are what we call in ecology ecosystem engineers. We have been able to manage, to transform the planet, changing the flows of energy and matter on massive scales. It's interesting that we have a competitor here, which is the social insects, termites and ants. As Edward Wilson said, if humans were not here, it would be the planet of the ants, because they are another kind of intelligence, though we don't realize it, one that has also been able to do ecological engineering on massive scales. Interestingly, and I want to point this out, when people think about intelligence in the universe, there may be planets where intelligence has emerged, but it's liquid. And liquid intelligence cannot be as complex as the solid kind, so there are not going to be signals being sent from those planets anywhere. But that doesn't mean it's not intelligent, of course, and it can be extremely successful in transforming the planet. So in that respect, one of the things we've been investigating, trying to build a good theory of it, is precisely: what is the power of liquid brains? Brains formed by individuals that move around, which in a way, as Dan Dennett said, is a brain of brains, because every "neuron" here, every ant, in fact has a brain of its own.
0:40:40.7 SC: So by liquid, because I think that people are going to be imagining a glass of water or a cup of coffee, and you're imagining in your mind, a colony of ants or of termites, they're liquid in the sense that the individual pieces move around and interact differently with each other, unlike the neurons in your brain.
0:41:02.0 RS: Exactly. There's no individual identity between pairs of ants; there's no such thing as a stable connection between two given ants. And that totally changes the landscape of possibilities. But the same happens with the immune system: the immune system is a fluid neural network. It's able to learn, it's able to store memory, of course with a very well-defined functionality, dealing with pathogens. But it is a network, and if you make a model of the immune network, you find it's not much different from a standard neural network.
0:41:42.5 SC: Okay. I don't know.
0:41:43.3 RS: So...
0:41:44.4 SC: Yeah.
0:41:45.6 RS: Yeah, it is, I can tell you. And it's interesting to see that on different scales you can find, for example, within an organism, another neural network that is in this case liquid. And there's also the question of some particularly interesting things like the microbiome, this huge ecosystem that we carry around inside, which in a way makes the claim that we are a single species kind of nonsense, because we are carrying around a whole ecology. And we know the microbiome, which you could say is just bacteria, communicates with the immune system and indirectly with the brain. So you see, the separation between the solid and the liquid is not so simple, because they may interconnect.
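The sense in which a network, fluid or solid, can "store memory" can be made concrete with the simplest attractor memory, a Hopfield network: a pattern stored in symmetric Hebbian couplings is recovered from a corrupted version by threshold dynamics. This is a generic illustration of distributed memory, not a model of the immune system specifically:

```python
# Hopfield-style attractor memory: the simplest sense in which a network
# of threshold units "stores" and "recalls" a pattern. Generic sketch only.

import numpy as np

rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=50)           # one stored memory
W = np.outer(pattern, pattern).astype(float)     # Hebbian couplings
np.fill_diagonal(W, 0.0)                         # no self-connections

# Corrupt 20% of the pattern, then let the threshold dynamics relax it.
state = pattern.copy()
flips = rng.choice(50, size=10, replace=False)
state[flips] *= -1

for _ in range(10):                              # synchronous updates
    state = np.where(W @ state >= 0, 1, -1)

print("bits recovered:", int(np.sum(state == pattern)), "/ 50")   # 50 / 50
```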
0:42:39.2 SC: Okay. But you just said, provocatively, that the liquid brains are not going to be as intelligent as the solid ones. And I might've thought that the sort of extra flexibility of having the ants move around and talk to each other gives the capacity for potentially more intelligence than the fixed hard wiring in our brains.
0:43:01.0 RS: It could be, but think about what the ant colony or the termite colony does. In evolution, the emergence of this superorganism in a way serves the same idea: how do you reduce the uncertainty of the environment? How do you predict? One way of doing that is creating a nest, having an internal environment that is stable. But the ultimate goal here is to reproduce the whole story. That's why the life cycle of a nest, of an ant colony if you want, is so similar to that of a multicellular organism. Okay? There is development, not from a single cell but from a queen, and then this grows; you need to monitor your environment, guarantee that you have resources. But all the cognition, in a way, is in play for the colony's reproduction and for being able to reduce uncertainty. Our brains do this in one way, they do it in another, but our way has this amazing potential for memory. And that's just because neurons have identity.
0:44:20.3 SC: Yeah. Okay.
0:44:21.8 RS: A pair of neurons has a specific identity. If you destroy that, and we have kind of a proof of this, the potential for storing memories, for example, is extremely reduced.
0:44:33.1 SC: I see. All right. So then in that case, let me just ask the skeptical question here. Are we even playing fair by talking about ant colonies or termite colonies as brains? Do they live up to the implications of being compared with the human brain?
0:44:53.3 RS: Well, of course. One of the reasons we came up with this term, when we ran this working group at the Santa Fe Institute, was the question: how do we map the space of cognition? How do we label things in a way that is meaningful and provides a kind of basic categorization of what is there? I can understand your question, because calling it a brain is a big claim. But on the other hand, you have a system that can store information and process information, because there is information processing, and in a way it would be kind of unfair not to grant them that. I must say, if I have to tell you the truth, this comes from when I was an undergraduate student and I was reading [inaudible] book [inaudible]. And I was in love with the idea that an ant colony and a brain are very related things. So what could I do?
0:46:00.2 SC: Yeah. What can you do? Do ant colonies talk with other ant colonies?
0:46:05.6 RS: No, they fight with other ant colonies.
0:46:08.0 SC: Okay. [chuckle] They don't gang together into ant colony societies as far as we know.
0:46:13.8 RS: No, no, no. They don't make friends that way.
0:46:18.4 SC: Okay.
0:46:18.8 RS: Not in that way. Instead, some species of ants can expand over vast areas, going through entire countries, making supercolonies.
0:46:30.8 SC: Okay. All right. Well, good to know. So that sounds like an axis or one coordinate on the space of cognitions, liquid versus solid. So I guess one question is, are there gaseous and other forms in that particular dimension? And then secondly, are there other dimensions?
0:46:56.5 RS: Yeah, as you know, in biology, somewhat differently from physics, you will always find exceptions.
0:47:04.7 SC: Sure.
0:47:05.3 RS: And there are organisms on our list that in a way look like outliers. One of them is Physarum. Physarum is this kind of mold that is a single cell, but you can see it with the naked eye. It can be as big as this table, and it is yellow. It's kind of an extraterrestrial thing.
0:47:28.7 SC: Yeah. It sounds a little creepy.
0:47:31.3 RS: Yeah, yeah. Totally creepy. And it's a single cell in the sense that there's a single mass, with a lot of nuclei inside the cell. And in nature it's always moving around and spreading. It's, again, between liquid and solid because...
0:47:54.5 SC: Okay.
0:47:55.7 RS: Clearly they maintain a lot of structure, but it's fluctuating and changing all the time as they organize. If you look closely and take a picture, it seems like a neural network, because they have all these very complex venation patterns as they move around. And they can detect sources of food and make decisions about which one is the richest and go for it; when I say go, I mean changing morphology to exploit that source in the most efficient way. And somebody thought, "Okay, let's use that. What if I put two food sources, and they like flakes, so it is easy to maintain, and I put them at the entrance and the exit of a maze?"
0:48:38.7 SC: Oh yeah.
0:48:39.7 RS: And I put Physarum there, which I can cut into pieces, etcetera. Physarum fluctuates, changes, changes, changes, and in the end you have a single tube that follows the optimal path from entrance to exit. So there are two things to say. One, the beautiful thing, and why this is so different, is that the computation is the shape. Morphology is really what says: I made this optimization, this is the solution. There's nothing similar to that anywhere else.
0:49:09.8 SC: Yeah.
0:49:10.3 RS: The other thing, because sometimes this is sold as, "Well, look, Physarum solves mathematical problems." Well, yes, but it's the humans who define the boundary conditions, and that's a big difference. I put the [inaudible] here and I put Physarum there, and Physarum exploits this special capacity it has, which in a way is least action; that is what is happening here. It's kind of finding the shortest path. And you can use that, but we have to be aware that it can solve problems only if we prepare the problems properly. It's not like it's smart and goes into the forest doing calculations.
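The maze experiment has a well-known mathematical counterpart, the Tero-Kobayashi-Nakagaki "Physarum solver": tubes that carry more flux thicken, unused tubes decay, and the network converges on the shortest path between the food sources. A minimal two-tube sketch of that feedback; the parameter values are illustrative:

```python
# Physarum solver sketch (Tero et al. model): tubes that carry flux get
# thicker, unused tubes decay. Two parallel tubes compete for a unit flow
# between two food sources; the shorter one wins. Toy parameters only.

L = [1.0, 2.0]          # tube lengths: tube 0 is the short path
D = [1.0, 1.0]          # tube conductivities (thickness), equal at start
dt = 0.05

for step in range(400):
    g = [D[i] / L[i] for i in range(2)]             # conductances
    Q = [g[i] / (g[0] + g[1]) for i in range(2)]    # split of the unit flux
    for i in range(2):
        D[i] += dt * (abs(Q[i]) - D[i])             # reinforcement vs. decay

print(f"short tube: D={D[0]:.3f}, flux={Q[0]:.3f}")  # D near 1, carries all flow
print(f"long  tube: D={D[1]:.3f}, flux={Q[1]:.3f}")  # decays toward 0
```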
0:49:52.2 SC: Right. And you're using the correct name for this beastie, but is this what we informally call a slime mold?
0:50:01.5 RS: It's in the same class. Yes.
0:50:01.8 SC: Same class as slime mold. Because I know these stories of slime mold solving mazes, and it is a little creepy, but it's a good example of a different kind of cognition. So, okay. So there are examples that span the space from liquid to solid and in between. So if we're mapping out the multidimensional space of cognitions, what other things should we be thinking about other than the liquidity solidity transition?
0:50:28.1 RS: Well, plants are of course extremely important elements here, because of how plants have evolved. If you think about what I said before, movement has been the engine of building and evolving brains. Plants do not move; plants have this special status that they can gather energy just from the sun, so you don't have to move. They have an enormous morphological plasticity, meaning that if I ask how many organs you have, you can just count them, because it's totally regular. But how many organs does a plant have? Every single leaf is an organ, and they can appear and disappear; it's totally flexible, it depends on what you need. And that makes them enormously different from anything else. They don't have neurons, there have been claims about that, nor anything similar. So my position is that plants are absolutely amazing. They really have terraformed the planet, and they are spectacular in many ways. But I don't think they like Mozart. I think that's a different story.
0:51:43.9 SC: Okay. Very good. But what about different organizational architectures? You talked about the critical brain. Is that just the single right thing to do if you want to be intelligent? Or are there other ways of achieving this that biological systems, whether real or hypothetical, might contemplate?
0:52:08.8 RS: Oh, okay. Well, I wanted to mention that in ant colonies too, at least in some species, we also observe the critical state; you see fluctuations of activity that match it very well. In flocks of birds, the way they change shape has been very well characterized: they live in the critical state. They move around the way they do because of criticality, so you see changes that are very predictable, but they're absolutely ready to react immediately to any kind of external signal. What other kinds of things do you find? I have made a conjecture about that, and one of the things I'm doing as part of my research is trying to put it in a formal way: that anything that is evolvable into cognition will be characterized by two things. One is that, as I mentioned before, you have threshold elements, so that when you receive a signal you weigh how strong it is and you react all or none depending on how big the signal is, as neurons do, for example.
0:53:23.3 RS: And the other is that you will have a multi-layer structure. Again, I don't think it's a surprise that engineers building these artificial neural networks keep using what has been found in the brain cortex, for example multi-layer structures like the visual cortex, and neurons that are threshold elements. Can you really escape from that? I don't think you can. And another thing that helps make a good argument is this whole area of artificial life, where in principle, in silico, you could build things absolutely different from anything you find in nature. Why don't we find any kind of cognitive system that deviates from what we see?
0:54:15.4 SC: Well...
0:54:17.2 RS: I bet that this is because they are universal things.
0:54:21.6 SC: Yeah. That's one of the two options. The other option is that we're just not that imaginative, and so therefore we keep reinventing what we are familiar with.
0:54:30.8 RS: Well, but again, I think artificial life is one way of actually getting a taste that it's not us; it's probably the constraints that are there. And I also want to make the point: why do we see multi-layers of threshold elements in the neural networks of the brain? And then, when you go inside cells and look at how genes interact with each other, the kinds of models you make are models with threshold-element responses. And why is the immune system, when you make a model of the network of immune cell interactions, a threshold network, sometimes with multi-layers? It's a bit suspicious.
0:55:19.2 SC: Well, good. So I've heard people mention this, but you're emphasizing it more strongly. So that's very interesting. The idea of a threshold element... You have some input, but your output is not just proportional to how much input you have. It's like you get no output for a little bit of input and then a lot of output for more. There's a threshold that you cross. So what is it that makes that so crucial to this kind of architecture? Why is that so good? Why is this non-linearity so important?
0:55:47.8 RS: Well, I guess I don't have a complete answer for that. It's something that I'm thinking about. But really, a threshold element means that you can perform the simplest way of integrating signals, making a decision about whether or not the majority of input signals crosses a given boundary. If you isolate just one neuron receiving input from everyone else, that's not very meaningful. But if you think of a network where different parts have to weigh what the state of the system is, threshold elements are really an efficient solution.
0:56:26.1 SC: Okay.
0:56:26.7 RS: Maybe the only one.
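A threshold element in this sense is the classic McCulloch-Pitts unit: weigh the inputs and fire all or none when the weighted sum crosses a threshold. Stacking such units in layers already buys computations that a single unit cannot perform; the textbook example is XOR. A sketch with hand-picked weights, purely for illustration:

```python
# McCulloch-Pitts threshold units: output is all-or-none depending on
# whether the weighted input sum crosses a threshold. Two layers of such
# units compute XOR, which no single threshold unit can. Hand-picked weights.

def unit(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def xor(x1, x2):
    h1 = unit([x1, x2], [1, 1], 1)      # fires if at least one input is on (OR)
    h2 = unit([x1, x2], [1, 1], 2)      # fires only if both inputs are on (AND)
    return unit([h1, h2], [1, -1], 1)   # OR and not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))    # 0 0->0, 0 1->1, 1 0->1, 1 1->0
```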
0:56:28.8 SC: Good. Yeah. So that presumably leads us to lessons for constructing life and cognition, whether it's in AI or robots, etcetera. Are the lessons there clear yet, or are we still learning them?
0:56:45.1 RS: We're still learning. I mean, there have been beautiful achievements with artificial intelligence in the past, for example in relation to language, sometimes showing you that emergence. And I do think that's the key: emergent phenomena are going to be the really relevant story here. Like many years ago, Luc Steels was working with these robots that exchange words, inventing words and reaching agreements. And in the end, you get a situation where your robots have made a lexicon to describe the world, which was more or less programmed. But surprise: in order to make sense of the world, they invent a proto-grammar, which is emergent. It wasn't planned. So I think it's an interesting insight to think that if we allow artificial intelligence systems opportunities for, for example, embodiment, and embodiment is so important in actually reaching cognition, maybe we'll see big advances. But right now, the machines, the networks, just live in... I used to say they live in kind of dark rooms with no world. The world doesn't exist, and they have no body, of course.
0:58:08.0 SC: Right. Well, we have had people on the podcast talking about symbolic versus connectionist approaches to AI and the idea of the AI building a model of the world versus just trying to predict what word comes next in the sentence. And there is this weird tension because the successful implementations have been mostly connectionist. Just huge neural networks that you feed a lot of data into and let it try to predict the next sentence. But my impression is that they don't actually have a model of the world inside. And that seems like a kind of limitation. But I know that that's also controversial in the field.
0:58:51.5 RS: It is controversial, yes. I mean, they don't have a model. The thing is, as I was saying before, we cannot ignore the fact that we do have models of the world and we can have a theory of mind. So it's important to think that in the future, artificial intelligence is not going to be just what people are expecting, the real intelligence. I use this example from a movie I always recommend, I'm very much a movie person: Robot & Frank.
0:59:28.2 SC: Okay.
0:59:29.2 RS: It's an amazing story about this person who has early Alzheimer's, and the kids bring a robot. And it's clear from the beginning that the robot is not intelligent, but it uses natural language, which makes a big difference, and it changes because it learns. But it's not intelligent. And there's a scene that I find so fascinating, where the robot says something like, "Frank, you should erase my memory. I know you don't like it, but I'm not a person." And that's the case. But it doesn't change the fact that, as happens maybe with our pets, for example, where communication is actually very limited, it doesn't matter much, really, because in a way it's like looking at ourselves in a mirror that is evolving in time, as if it were someone else. And we haven't weighed much what the implications of that would be.
1:00:30.3 SC: Good. So I guess to connect to what you said before about embodiment: it won't take too long, I imagine, before we are taking the AIs that we've built, the large language models, and embodying them, putting them in robots and giving them bodies. And maybe, even the scary part to me, giving them hunger [chuckle] and desires to get resources out there in the world in order to persist. Is that a thing that is coming, and should we be scared?
1:01:07.7 RS: I don't think we should be scared, for one reason. You mentioned the right thing. It could be scary if in a way there were goals or emotions, the kinds of potential responses that have to do with something very human. I always find funny all these discussions about the artificial intelligence that will kill us. But my question is: why?
1:01:34.6 SC: Why? Yeah.
1:01:35.2 RS: What is the motivation for that? For humans it's natural, because, as I was saying before, we have a theory of mind. We can understand how others think; you put yourself in the mind of another. And that was probably the origin of consciousness, if I can say so. Selection pushes you into understanding the minds of others, and you're equipped with language and the brain's time machine. For me, it's almost inevitable that in the end you understand that you too are a special individual in the world; you understand yourself. So, sorry, I'm kind of moving around.
1:02:14.0 SC: No, that's good. You should. But it's sort of charmingly optimistic. I'm probably on your side, but I guess the opposite argument would be how much risk are you willing to take?
1:02:28.6 RS: As I said, intention is something that really has to do with a layer of cognition that I think, right now, completely escapes artificial intelligence; it's not there. So you don't have any kind of motivation, which is a really high-level, complex kind of thinking, for why you should be harmful at all.
1:02:54.3 SC: Okay. So by thinking about the space of cognitions liquid and solid brains and things like that, are we led to realize that there are possible cognitions that we haven't yet explored? And can we build them either in silicon or even biologically?
1:03:12.4 RS: Well, that's something that I'm very interested in, for one reason. When we build this space of cognition, we have several candidate ways of drawing it. We use cubes; in evolutionary biology they're known as morphospaces.
1:03:29.5 SC: Okay.
1:03:29.5 RS: You try to find axes that represent relevant properties. For example, how complex is the computational power of one of these systems? How autonomous is it? Or, on the vertical axis, for example, how social is it? And it's interesting that if you put there all the objects you know, which means animals from Topi to humans, and ants, and also robots and AI systems, there's a big void. There's a domain there that is empty. Why is that? It happens like in physics, where sometimes you find that a region of the space is forbidden. Is it forbidden, or is it something that we simply haven't observed, or maybe something that can be engineered? That's one of the fascinating questions that has emerged from this research.
1:04:26.9 SC: Well, we're late in the podcast, so you are allowed to speculate. What do you think?
1:04:33.5 RS: Well, my experience comes from another system, morphogenetic systems. Since we have a synthetic biology wet lab, we can actually play with some things. And in one of the spaces that we created, we also had a big void of morphologies that we couldn't observe. But we were able to engineer some, which means that for us there was a path that maybe evolution was unable to follow, but we could do it. Is it going to happen with cognitions? That's one of the most fascinating things that comes out of the research: finding out that this is an empty place. Why is that? Maybe it's something that we can invent, or maybe it's forbidden.
1:05:24.0 SC: So maybe just for the audience, a little more about the synthetic biology aspect of this. So you're going in as intelligent designers and editing the DNA and making new organisms?
1:05:37.6 RS: Okay. Well, let's not use the term intelligent designers.
[chuckle]
1:05:42.4 RS: Just in case.
1:05:44.4 SC: Okay.
1:05:44.5 RS: You shouldn't do it.
[chuckle]
1:05:45.7 RS: Well, the thing is, synthetic biology allows us to interrogate nature in very interesting ways. It's a powerful tool for biomedical research. It's an amazing way of actually playing with living cells. For us, as complexity theorists, it's a new way of asking questions. For example, and this is an ongoing project: could I transform bacteria, modify them genetically, in such a way that they behave like ants? Ants can solve problems because they communicate in special ways. For example, they can find the shortest path between the nest and a given source of food. Could we tweak bacteria to do that? And if we can, why hasn't nature? Because bacteria could benefit from having extra cognition to do some things, but again, that doesn't seem to be the case. Is it because evolution couldn't get there, or because there are trade-offs, so that it's not worth it to develop more cognition? So it's an amazing way of creating things... Solutions that nature hasn't found, and seeing how far you can go.
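The ant trick Solé mentions, finding the shortest path between nest and food, is classically explained by pheromone reinforcement, and a toy simulation makes the logic concrete. The Python sketch below is not Solé's model; it's a minimal illustration with arbitrary parameters: two routes of different lengths, ants choosing routes in proportion to pheromone, shorter routes accumulating pheromone faster, and evaporation erasing the losing option.

```python
# Toy pheromone-based shortest-path finding on two candidate routes.
import random

lengths = {"short": 1.0, "long": 2.0}    # route lengths (arbitrary units)
pheromone = {"short": 1.0, "long": 1.0}  # start with no bias

EVAPORATION = 0.05  # fraction of pheromone lost each round
N_ANTS = 100
N_ROUNDS = 50

for _ in range(N_ROUNDS):
    for _ in range(N_ANTS):
        # Each ant picks a route with probability proportional to pheromone.
        route = random.choices(list(lengths), weights=list(pheromone.values()))[0]
        # Shorter routes are completed more often per unit time, so they
        # receive more pheromone: deposit 1/length per trip.
        pheromone[route] += 1.0 / lengths[route]
    # Evaporation lets the colony "forget" the worse option.
    for r in pheromone:
        pheromone[r] *= 1.0 - EVAPORATION

print(pheromone)  # "short" ends up with nearly all the pheromone
```

No individual ant computes anything like a shortest path; the answer lives in the shared pheromone field, which is the sense in which the colony, not the ant, is doing the cognition.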
1:07:03.9 SC: Yeah, good. So plenty of work to be done. I like that. I'll give you one last thing to speculate about, since, again, we're late in the podcast. If a colony of ants can be thought of as a collective intelligence, at what point does a group of human beings become a collective intelligence? Is Spain conscious?
1:07:28.5 RS: [chuckle] Well, I think it's not. I will say two things. One, Edward Wilson put it very well: from the point of view of our society, we have nothing to learn from ants. Sometimes it's easy to draw analogies, and we have to be very careful with that. And sometimes it's so disappointing to see how humans behave that you think this collective intelligence is not going to happen. But there's clearly an amazing collective phenomenon here, which is the fact that culture has been co-evolving with brains in humans. And I like what Michael Lachmann, the scientist, told me when we were discussing the trade-offs that you see: it seems that the more complex the organisms in a society are, the less social the whole system is, whereas in insects the society is huge and the individuals turn out to be more stupid. But he told me, look at humans. If you isolate this amazing human brain from the rest, it's absolutely useless, it's worth nothing. Because isolated from culture, from learning, from imitation, from language acquisition, what are we? Nothing. It makes you think about how culture and brains co-evolve.
1:09:09.9 SC: So groups of human beings are not themselves conscious, but we do rely on their input, and on social interactions, to make us who we are?
1:09:17.7 RS: Oh, being social is clearly part of the fact that we are cooperating agents, and it's a huge part of our success as a species. Despite that, as I was saying before, some days it doesn't seem to be the case.
1:09:33.1 SC: I was gonna say, I hope we can keep up that success for a little while longer. So Ricard Solé, thanks very much for being on the Mindscape Podcast.
1:09:39.3 RS: Thank you very much.
[music]
I just wanted to say that if you want to read pretty good sci-fi that takes on this question (what would the cognition of other kinds of beings be like?), the author Adrian Tchaikovsky has a series that begins with the book Children of Time. The first book asks what advanced spider and various insect cognitions would be like, especially if humans met them. The second book tackles octopuses, and something else I don't want to spoil. The third book is about a raven pair with a weird symbiotic relationship, and another thing I don't want to spoil. I've read a couple other books of his, and it seems like "what would X non-human consciousness or culture be like" is a running theme.
There seems to be much concern that man-made robots will one day not only render their human creators obsolete but will actually try to kill them. Near the end of the podcast Ricard Solé notes that it's important to keep in mind that artificial intelligence in the future is unpredictable. He makes reference to the 2012 sci-fi movie 'Robot and Frank', where an aging ex-convict, Frank Weld, lives alone and suffers from Alzheimer's and dementia. His son purchases a robot companion, which is programmed to provide Frank with therapeutic care. Frank, with the aid of his robot, commits one last robbery. When the police become suspicious, the robot sacrifices itself, convincing Frank to erase its memory to help Frank avoid jail.
The video posted below is the trailer from that movie.
https://www.youtube.com/watch?v=Hi9s-__B0TY&t=16s
Very interesting. Do we have a good enough definition of intelligence? Of mind? Of life? Can these be defined? I suppose some may argue it's wrong, but I find all of these are very similar problems (or maybe not problems, but questions, for lack of better vocabulary), certainly commensurate with each other, if not fundamentally equivalent.
I have been wondering: given the geological (and cosmological) timescales versus our own timescales (of human lives), it seems to me that we might perceive ourselves only because that's the way the organisation toward biology turned out on our planet – that is, within the timescales that individual life forms have been "adjusted" to work and to endure. I confess I struggle to put it more clearly, but an imperfect analogy might be our vision: we can't see outside the visible spectrum, and this is at least in part a consequence of evolutionary contingency. To clarify, this is still not quite what I'm trying to say; I guess I mean something like contingency and rate dependency. We belong to our own frame of reference, because that's where we come from. But outside the relatively "fast" rates of our lifetimes, or the lifetimes of societies, can "minds" and/or "intelligence" exist at all? Can they be faster? Slower? Would we observe them? I guess we might, given that we can use our own minds to "see" beyond the biologically imposed limitations of our senses. But this is not exactly the point I am trying to make either. I think it is more along the lines of "are our minds not really different from (philosophical) zombie minds?"
I once had an idea that I could get to the core message Douglas Hofstadter attempted to convey in GEB. Later, reading him further and watching some of his online talks, I thought I may have interpreted some of his thoughts wrongly, because it seemed that he would contend that human mind creativity is uncanny and nearly unique. But then I become more and more convinced that he may have put too much faith in the "uniqueness" of the human mind afterwards, perhaps diverging from his own original position – which I had interpreted to mean that minds are not that special; they can come about as anything else has come about on Earth, following transitions. Which is the way I think it is – not strongly emergent, and essentially physical.
Anyway, this is all to say that I feel Douglas Hofstadter has a memorable sentence that summarises it all, even if we come and go, re-interpreting again and again the messages we had once thought we understood: "I am a strange loop". It may be that we are self-reflecting, recursive images of ourselves, built from interacting with each other. In the long term of geological and cosmological timescales, none of this bewilderment matters anyway; we become irrelevant.
It’s somewhat interesting that no mention was made of Aaron Sloman, who some time ago directly addressed “the structure of the space of possible minds” (see https://www.cs.bham.ac.uk/research/projects/cogaff/sloman-space-of-minds-84.html along with a number of other papers). One wonders if his observations have contributed or could contribute materially to at least some strata in what must be a complex foundation.
Loved the podcast, and wonder if we can see ourselves as the self-perpetuating, resource-securing, habitat-establishing AI of bacteria? I thought it would be a good thought experiment to help young budding biologists explore what life is, and to address the debate of whether or not AI can be considered a part of nature: a natural evolutionary development. I started a search online along these lines but couldn't find specifically this theme; I know similar themes draw a great deal of attention as thought experiments. Maybe it's too simple a thought experiment, but if someone knows of something, I'd love to read it.
Once again, love the podcast. I feel like it cleans the plaque out of my brain. Perhaps you could get a good toothpaste to sponsor the show.
All the best!