308 | Alison Gopnik on Children, AI, and Modes of Thinking

We often study cognition in other species, in part to learn about modes of thinking that are different from our own. Today's guest, psychologist/philosopher Alison Gopnik, argues that we needn't look that far: human children aren't simply undeveloped adults, they have a way of thinking that is importantly distinct from that of grownups. Children are explorers with ever-expanding neural connections; adults are exploiters who (they think) know how the world works. These studies have important implications for the training and use of artificial intelligence.


Support Mindscape on Patreon.

Alison Gopnik received her D.Phil in experimental psychology from Oxford University. She is currently a professor of psychology and affiliate professor of philosophy at the University of California, Berkeley. Among her awards are the Association for Psychological Science Lifetime Achievement Award, the Rumelhart Prize for Theoretical Foundations of Cognitive Science, and a Guggenheim Fellowship. She is a past President of the Association for Psychological Science. She is the author of The Scientist in the Crib, The Philosophical Baby, and The Gardener and the Carpenter, among other works.

0:00:00.6 Sean Carroll: Hello everyone, and welcome to the Mindscape Podcast. I'm your host, Sean Carroll. So here's a question: given a problem, how do you find the solution to that problem? This is one of those questions that sounds deep or profound or something like that, but in fact, you might worry it is a little bit too abstract. How can we hope to answer a question of such imperfect generality, how to find the answer to a problem, if you haven't given me a bit more information about what kind of problem it is you're talking about? But it's not so abstract that we can't make some progress. In general, how do we in fact go about solving problems? Sometimes, hopefully, you'll find a problem that resembles another problem, a problem that you've seen before. So you can either use or maybe adapt the kind of solution that you already knew about. Other times you know nothing about the context of a problem.

0:00:50.9 SC: So maybe you'll just try some things randomly, sort of flail about, just get some information. These kinds of questions as abstract as they are, turn out to be frighteningly relevant to things like building artificial intelligence, right? When you turn on the computer and there's no software on it yet the computer doesn't have any preexisting strategies for solving problems, you have to choose how to build them in. So how should we go about figuring out the best way in different contexts to solve problems? Well, one thing to do is to look at what actual human beings actually do. The lesson of today's conversation with Alison Gopnik is that human beings use different ways to solve problems in different life stages. There's a way that we do it when we're adults, when we're flourishing in the prime of life, and there's a different thing that we do when we're children or babies. Very roughly speaking, babies are a little bit more creative, a little bit more free flowing.

0:01:53.5 SC: They just try a whole bunch of things, their attention is hard to pin down. You might have noticed that if you've ever dealt with babies, and that's a feature, not a bug. The fact that babies have trouble focusing their attention on something is a reflection of the fact that they're trying to learn about the world by interacting with it in many different ways. Whereas adults are optimized for something different. Hopefully by the time you're an adult, by the time you're in your 30s, you've learned a lot about problem solving strategies, and you're more about perfecting the methods that you already know, rather than flailing around randomly and learning new ones. Not that you can't do it, not that it's impossible, but you are better at some techniques than others, and maybe, who knows? We brush up against the possibility in the conversation that later in life, once you're past your prime reproductive cycle, for example, you can go back to being less hidebound.

0:02:56.2 SC: Maybe that would be ideal, and we all know people who don't quite live up to that, but different life stages serve different purposes. I guess the overarching lesson here is that there's a division of labor, not only between different human beings working in groups, but even between different life stages of single human beings. And as we'll talk about, we do in fact learn some lessons that might be very, very relevant to AI and programming computers and thinking about how we should best approach understanding the world. That's what we're here to do at Mindscape. So let's go.

[music]

0:03:47.7 SC: Alison Gopnik, welcome to the Mindscape podcast.

0:03:49.7 Alison Gopnik: Happy to be here.

0:03:51.9 SC: So one of the things, I guess, we could start with from your work that I've derived is the idea that little children, kids, babies, whatever, maybe shouldn't just be thought of as unformed adults. That they actually think in a different way. Is that an okay way to put it?

0:04:11.2 AG: Yeah. That's exactly the right way to put it. So I think a lot of people have always thought about the 35-year-old psychologist or philosopher as sort of the height of all of intelligence. And then we just build up to that amazing person as we get older, and then we fall off as we get older. But of course, that doesn't make very much sense from an evolutionary perspective. And in fact, I think what's become clearer and clearer is that children are really fundamentally a different kind of intelligence than typical adults are. And there's also some interesting questions about elders having a different kind of intelligence as well. So it's more like a trade-off between different kinds of intelligences than it is having one magic thing called intelligence that we have more or less of, and that we have less of to begin with and more of later on.

0:05:07.2 AG: And an idea that I've been working on a lot is an idea that actually comes from computer science, which is the idea of a trade-off between different kinds of intelligences. And in particular a trade-off between what's called exploit intelligence and explore intelligence. And what I've argued is that childhood is really about this kind of exploration intelligence, which is not just different from, but even in tension with, exploit intelligence. And now I have to say, when I first started making these sets of arguments, I was, as you could tell from that last comment, a little snarky about those 35-year-old philosophers and psychologists sitting and thinking that they were the apex of intelligence. But since my own children have gotten older and I have more grandchildren, now I reverse that. My main feeling now is, "Oh my God, those poor 35 year olds, the children and the grandmoms are getting to do all the really fun human stuff."

0:06:07.6 AG: One of my other slogans is that we're basically human up till puberty and after menopause, and in between we're sort of glorified primates. We're doing the things that all the primates do, we're finding our way in the dominance hierarchy, and we're mating and we're getting resources and all that stuff. And it's only when we're little and we're old that we get to do things like theory of mind and discovery about the world and causal inference and cultural transmission and large scale storytelling, all the things that really make us human. So now I feel like, "Okay. Grandmoms and kids should just keep quiet about the fact that we're having all the fun while the 35 year olds are having to do all the work."

0:06:49.1 SC: [chuckle] And so this is gonna be a preview for what's to come in the conversation. But that's basically because we're in that exploratory phase early on and in what is nominally thought of as the prime of our adult lives, we're more focused. We have tasks and we're very good at doing those tasks, but we're less good at being flexible and creative about things.

0:07:10.7 AG: So there's this idea that comes from computer science about what happens when you're trying to solve what's called a high-dimensional task. That means a problem that has lots of different possible solutions that vary along a lot of different dimensions. And one thing you can do is you can just tweak what you're already doing a little bit and see if that makes things a bit better. And that's a very efficient, effective way of trying to solve the problem. So just change where you are a little bit, see if that makes it better, and if it does, keep it and tweak again. And that's the essential exploit strategy. That's the grownup strategy, "Here's the thing that I need to do, I'm gonna focus on trying to do it." But of course, the problem with that is that there might be another solution that's much further out in this space, that's, as it were, in the far reaches of the box, at least, if not actually out of the box.

0:08:01.9 AG: And if you just keep making these little tweaks, you're never gonna get there. So what you could do is you could bounce around, try lots of things sort of independent of whether you think they're gonna be useful or not. Just try things, just see how the world works. And in computer science, they talk about this as the difference between a low-temperature search and a high-temperature search. Think about it. Your audience will...

0:08:25.8 SC: Oh, yeah.

0:08:26.3 AG: Will get this analogy. [chuckle] Think about it as if it was like a molecule of air that's either moving very slowly or bouncing randomly around in the space. And these two things trade off against each other; you can't do both at the same time. So what should you do? The interesting solution that comes out of computer science is that the best strategy is to start out with a big, bouncy, random, wild search, even though of course that's gonna take a lot longer, and you're gonna spend a lot of time thinking about things that actually aren't gonna help you. And then gradually cool off to the more focused search. And people in computer science talk about this as simulated annealing. It's like what happens, again, your audience should know this, when you heat up metal and then cool it to make it more robust. And what I think is that childhood is essentially evolution's way of doing simulated annealing. So if you think about those two descriptions, there's one way that's focused and oriented towards outputs, and then there's this noisy, bouncy, random kind of behavior. You can pretty immediately think which one fits your 4-year-old better.
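The simulated-annealing idea described here can be sketched in a few lines of code. This is a minimal illustration, not code from any study: the bumpy objective function, the cooling schedule, and all the parameters are invented for the example.

```python
import math
import random

def simulated_annealing(f, x0, t_start=5.0, t_end=0.01, steps=5000, seed=0):
    """Minimize f by annealing: start with wild high-temperature moves
    (the explore phase), then cool down to small focused tweaks
    (the exploit phase)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    for i in range(steps):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (i / steps)
        # Hotter temperature means bigger random jumps.
        cand = x + rng.gauss(0.0, t)
        fc = f(cand)
        # Always accept improvements; accept worse moves with
        # probability exp(-(fc - fx) / t), so early on we roam freely.
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
        if fx < best_fx:
            best_x, best_fx = x, fx
    return best_x, best_fx

# A made-up landscape: a shallow basin at x = 0 hides a deeper one at x = 10.
def bumpy(x):
    return min(x ** 2, (x - 10) ** 2 - 5)

# Greedy low-temperature tweaking from x = 0 would stay stuck in the
# shallow basin; the hot early phase lets the search reach the deep one.
best_x, best_val = simulated_annealing(bumpy, 0.0)
```

The point of the cooling schedule is exactly the trade-off in the conversation: high temperature early buys broad coverage of the space at the cost of wasted moves, and low temperature late buys refinement at the cost of flexibility.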

0:09:40.1 AG: And the thought is that, even though that might look from some perspectives as if it's a deficit, because the kids are saying weird, crazy things, they're off in strange pretend lands, they do things that don't seem to make sense at first. If you're in explore mode, then that's really the thing that you wanna do more than anything else. So, of course, in some ways, what this means, and we've actually shown this empirically, is that children are more creative than grownups, at least sometimes. So for instance, we've done a bunch of studies where we give children and grownups a problem that either has a pretty obvious solution you could try, or a much less obvious hypothesis, one that to start out with is less likely. And what we've discovered is that when the solution is the more likely, more obvious one, as you might expect, the grownups are better at getting to the solution. But when it's the unlikely hypothesis, four year olds are actually better than, say, college undergraduates at getting to the right solution.
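One way to picture that child/adult difference is as the temperature of a hypothesis sampler. This is a toy illustration, not the model from the studies: the priors, temperatures, and sample counts are all invented. Low temperature concentrates guesses on the obvious hypothesis; high temperature keeps the unlikely ones in play.

```python
import random

def sample_hypothesis(priors, temperature, rng):
    """Sample a hypothesis index with probability proportional to
    prior ** (1 / temperature): low temperature sharpens the
    distribution, high temperature flattens it."""
    weights = [p ** (1.0 / temperature) for p in priors]
    total = sum(weights)
    r = rng.random() * total
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

priors = [0.7, 0.2, 0.1]          # hypothesis 0 is the "obvious" one
rng = random.Random(42)
adult = [sample_hypothesis(priors, 0.3, rng) for _ in range(10000)]
child = [sample_hypothesis(priors, 3.0, rng) for _ in range(10000)]

adult_obvious = sum(1 for h in adult if h == 0) / len(adult)
child_unlikely = sum(1 for h in child if h != 0) / len(child)
# The low-temperature "adult" sampler almost always tries the obvious
# hypothesis; the high-temperature "child" sampler keeps trying the
# unlikely ones, so it finds them far more often when they're correct.
```

On this picture, neither setting is simply better: which temperature wins depends on whether the true answer is the obvious hypothesis or an unlikely one, matching the experimental result described above.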

0:10:38.0 AG: So in some ways, the four year olds are more creative than adults. But the trouble is, if you look at what we think of when we think about creativity in adults, it's really got two pieces, so it's not really easy to compare. If you look at what a creative solution for an adult involves, the first part is generate lots of possibilities, and the second part is pick out the one that's good, right? Pay attention to the ones that are good and not the ones that aren't. And that first part, generate lots of solutions, think of lots of possibilities, that's where the kids really excel. But for adults, you also have to have the part about choosing the right solution in the end. And that's the part where those executive exploit capacities seem to be more useful. So the great philosopher John Locke, and Sean, I know you hold a chair of natural philosophy, and Locke was a great natural philosopher, said that there were two faculties of human intelligence. One was the faculty of wit, and one was the faculty of judgment. The faculty of wit was generate lots of new ideas, and the faculty of judgment was pick out which ones were actually good ones. And kids seem to have a tremendous amount of the faculty of wit, but not quite so much of the faculty of judgment.

0:11:58.2 SC: Fair enough. Okay. If we're giving evolution credit for coming up with this nice division of temporal labor over our lifetimes. Let's face up to the fact that kids are pretty helpless, like human babies. They take a long time to be able to fend for themselves. Could you talk a little bit about how that fits in with what we know about other species and whether or not it's like an accident or actually optimizing something?

0:12:24.3 AG: Yeah. It's interesting because if you look across an incredibly wide range of species, even including insects, and there's some arguments that this is true even for plants, but certainly true for all the primates and for mammals in general, you see this really striking correlation between how smart an animal is, perhaps from a human perspective, how good it is at learning, how large its brain is, how much it uses its brain, and how long a childhood it has. The first context in which people noticed this was with birds. So if you look at birds like corvids or crows, very, very smart birds, they spend as long as a year or two being fledglings, needing to be taken care of. Whereas if you think about the domestic chicken, chickens are mature within a couple of weeks. And although this gets me in trouble with chicken lovers everywhere, they're not very bright.

0:13:20.6 SC: Not very bright.

0:13:21.2 AG: Or a better way of putting it, is they're extremely good at doing the things that they do well like pecking for grain, and they're very good at doing those from the time they're born. They're not so good at learning. And if you think about it from this explore exploit perspective, that kind of makes sense. So if you're going to be a creature that lives in a changing, unpredictable environment that has to do lots of new things you're going to need a period to be able to do that exploration before you can exploit. And that means that in that period, you're gonna be really bad at doing things, you're gonna be helpless, and you're gonna need the other adult members of the species to look after you, make sure you stay alive, make sure you have enough calories to fuel all of this ferocious learning that's going on.

0:14:13.0 AG: And that seems to be the evolutionary strategy. The calories are a nice part of this because it turns out that if you look at what percentage of your calories your brain uses up, even as an adult, 20% of your calories are going to your brain, which means that your brain is an expensive computing gadget. But when you were four, 60%, in fact almost 70%, of your calories were going to your brain. So basically, your average 4-year-old is something out of Doctor Who or science fiction. It's basically this giant hungry brain that's going around the world, hypnotizing us into feeding it both a bunch of peanut butter sandwiches so its brain can work, and a bunch of data so its brain can work.

0:15:02.7 SC: So that's interesting because you're pointing to an explanation that is not purely physical or even mostly physical. It's not about brain size or anything like that. But if we allow ourselves to poetically attribute intelligent design to evolution, this sort of planned difficulty in taking care of oneself is actually serving the purpose of letting the kids explore, be creative. And that's supposed to pay off later in life.

0:15:37.2 AG: That's right. So the thought is, and of course in biology and evolution it's always complicated and connected to other aspects of animals, and there are smart animals who don't have a long childhood, like cephalopods, octopuses for example. Different kinds of animals seem to solve this explore-exploit problem in different ways. Insects, for instance, seem to do it by a kind of division of labor: some ants and bees are explorers or scouts, and some of them are workers who are actually doing things. So there's different ways of solving it, but it does seem like this is a strikingly general and common solution among a very, very wide range of species. And that suggests that it's got this adaptive value.

0:16:24.8 SC: Is this division of labor thing a general feature? I remember it was Michael Muthukrishna, who I had on the podcast, who mentioned this experiment, which I guess is famous among psychologists, where human children and chimpanzees were asked to poke sticks into a box and get a reward. And then when it's revealed that one of the sticks isn't doing anything, the children didn't learn, and the chimpanzees did, because the children were trusting the grownups. [chuckle]

0:17:00.4 AG: Right. Yeah. And of course, that's another dimension of having this long childhood and having this particular kind of intelligence, which is that we are also social and cultural learners. So we pass on information from one generation to another in a way that no other... Other animals do it more simply, but we do more of it than any other animal does. And again, a lot of that social transmission is taking place in the context of caregivers, people who are moms and dads who are helping the kids. But it's interesting because that is a famous experiment, but it's worth pointing out that again, there's this trade off. So if you just imitate what the other people around you are doing, then there's kind of no point because you're not gonna make any progress. So somebody along the line has to actually innovate as well as imitating.

0:17:54.5 AG: And there's a bunch of work, we've done some of it, and other people have as well, that shows that children, when they're imitating, are kind of balancing, "Well, what do I think about how this actually works? And what do I think about the person who's actually demonstrating it to me?" So, for example, there's a lovely experiment where if the experimenter says, "I'm gonna show you how this works," the children are much more likely to do the kind of over-imitation that you described. So they'll just do what the experimenter does. But if the experimenter says, "Gee, look at this. I don't know how this works. Do you know how this works?"

0:18:28.0 SC: Ah. [chuckle]

0:18:30.1 AG: Then the children are much more likely to explore, including exploring in these kind of unlikely ways, and vice versa. We did some experiments that showed that if you let children discover something about a machine... For instance, a lot of our experiments depend on this wonderfully simple, inexpensive machine, the blicket detector. It's a little box that lights up and plays music when you put some things on it and not others. And it costs like, I don't know, $29.99 or something, which is nice from the grant perspective, and it's been extremely productive. So if you do something like present the child with a little box, let them play with it, and they see that, say, red blocks make it go, and then you have an adult who says, "Oh, blue blocks make the detector go," the children, it's interesting, will split the difference between what they see themselves and what the adult says. They won't just rely on what they see themselves, and they won't just rely on the adult, but they'll produce one solution or the other kind of in proportion to the probability of those two solutions. So I do think the fact that we're social beings matters, but in the course of cultural and social evolution, Michael might have talked about this, we have to balance imitation and innovation. And it's quite complicated to do that. I think that also plays a role in the special intelligence of children.
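That "splitting the difference" behavior can be mimicked with a toy Bayesian model. To be clear, this is not the model from the actual studies: the flat prior, the pseudo-count bookkeeping, and the testimony weight are all invented for illustration.

```python
import random

def posterior_red(own_red_trials, adult_says_blue, testimony_weight=2):
    """Toy posterior that 'red blocks make it go'. Each activation the
    child saw counts as one pseudo-count for 'red'; the adult's claim
    counts as `testimony_weight` pseudo-counts for 'blue'; both
    hypotheses start from a flat prior of one pseudo-count each."""
    red = 1 + own_red_trials
    blue = 1 + (testimony_weight if adult_says_blue else 0)
    return red / (red + blue)

# The child saw red blocks activate the machine 3 times; the adult
# insists that blue blocks are what make it go.
p_red = posterior_red(3, adult_says_blue=True)

# Probability matching: rather than always picking the single best
# answer, children pick answers roughly in proportion to the posterior.
rng = random.Random(0)
choices = ["red" if rng.random() < p_red else "blue" for _ in range(1000)]
red_rate = choices.count("red") / len(choices)
```

With these made-up numbers the posterior lands between pure trust in the adult and pure trust in the child's own eyes, and the sampled choices track it, which is the qualitative pattern described in the experiment.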

0:19:56.1 SC: Well, it makes me think very much of being an advisor to students, especially graduate students where you wanna say, "Look, please do be creative and come up with new ideas, but please also read some of the literature." And there's a balance there, you don't wanna just slavishly follow what other people have already done.

0:20:11.9 AG: Yeah. I think that one of the interesting things that we're working on now is thinking about caregiving, thinking about what happens as you take care of someone else. And particularly thinking about the intelligence of caregiving: thinking about how hard it is to figure out what you should do as a teacher or a therapist or a parent that will allow the person you're caring for to become autonomous. You don't want them to just imitate what you do. You don't want them to just do the same thing, but you also don't want them to get into trouble. You don't want them to do stupid or dangerous things. And I think it's really interesting how caregivers manage to walk that line, no matter what kind of context they're in. Whether you're thinking about being a grandmother of a 2-year-old, or you're thinking about being a supervisor of postdocs, it's fundamentally the same problem.

0:21:13.5 AG: And it's having those caregivers that enables a lot of that cultural transmission to take place. One of the interesting things that has come out, that I've been writing and thinking about recently, is that grandmothers, and grandfathers too, I always talk about grandmothers egocentrically, seem to be playing a particularly important role in that kind of cultural transmission. So there's quite a bit of anthropological evidence that when you look at who gives the songs, who gives the stories, who tells you the big aspects of the culture, it's more likely to be older people than parents. So the parents are trying to deal with the everyday, keep the kids alive, keep them fed. And it's actually the grandparents who are, in my case, reading Narnia books and singing old Broadway show tunes, doing things that are especially designed to serve this function of cultural transmission. And of course, just as we have this exceptionally long childhood, we also have this long post-menopausal grandmotherhood, elderhood, which is not characteristic of other animals, except interestingly for orcas.

0:22:30.3 SC: Oh.

0:22:31.6 AG: So orcas are one of the few examples of a mammal that also continues to live for some substantial period of time after menopause, and has grandmothers that live with the pod. And what the grandmothers do is pass on information about what to eat and how to function. Especially when food gets short, the grandmothers are the ones who lead the pod out, "Oh, okay, I remember 20 years ago there was krill at this site," and help the children and grandchildren to survive. And I think that's another really nice example of a kind of intelligence that's different from what we think of as the standard grownup intelligence.

0:23:12.9 SC: That sounds almost too adorable to be true. How sure are we that the killer whales really put so much responsibility in the grandmother's hands?

0:23:22.4 AG: Well, what I like about this is what it shows is the really important thing to pass on is recipes. That's the thing that the grandmothers are designed by evolution to do.

0:23:36.1 SC: [chuckle] Again, with the advisor thing, just because it's on my mind, we can be egocentric a little bit, that's okay. I do think a very, very common mistake in dealing with students is that the older wise folks are sometimes too good at saying why a new idea won't work. And are you hinting that maybe I'll get better at that as I get older, that I'll be more open to the exploration?

0:24:04.5 AG: Well, I think that's definitely one of the features of elders that seems to be important. And again, to use the analogy of the students, I think part of the reason for that is that the 35-year-old caregiver also has their own agenda, right?

0:24:25.1 SC: Yeah.

0:24:25.3 AG: So they wanna make sure that they get the results that they need for the grant. And the more the older person can say, "Okay, I've gotten what I need," the more open they can be to the possibility that this younger generation is actually going to do well. Now of course, there's big individual differences among grownups about how likely they are to do that or not. But it's interesting even if you look at adult scientists... And one of my other slogans is that it's not that children are little scientists, it's that scientists are big children. [chuckle] And I almost always get a round of applause if I say this at Fermilab or Lawrence Berkeley Lab; everybody agrees with that. And there's a little bit of empirical evidence that shows, for instance, that the labs that get the Nobel prizes are the ones where something unexpected happens. So what happens when you do your experiment and what comes out is totally weird, not what you expected at all? You could say, "Okay, something strange happened. Let's go back to the grant." Or you could say, "Huh, why did that happen?" And it turns out that the labs where, when something unexpected happens, they follow up and try and figure it out do better in the long run than the labs where you're doing the thing that you were supposed to do.

0:25:48.0 SC: Okay. We'll try to let go of graduate students as an example here and think more about the actual human babies. Tell us more about how they get their picture of the world. I know that one crucial step is when they figure out that other people have opinions or beliefs that are different than their own.

0:26:04.5 AG: Yeah. So the big idea about child development, which I and other developmentalists have been arguing for, for the last, I don't know, 20, 25 years, is that you should think about development as being like theory formation in science. And at first that idea seems unlikely. I mean, little kids, they're not really smart scientists. But when you look carefully at the way they understand the world, it looks a lot like the kinds of theories, and the changes in theories, that scientists have. And what that suggests is that from a computational perspective, that's just a really good way of getting information about the world. So what we've shown is that from the time they're very, very little, children are looking for causal relationships, for example, which is one of the things that's really important in a theory.

0:27:01.5 AG: And we can show, again, with our little blicket detectors, these little machines that have unexpected causal properties, that even toddlers are doing the right kinds of inferences about how those kinds of systems work. So they seem to develop these causal models, these kind of what are sometimes called intuitive theories, and then they get new data and they change the theories depending on the data. And we can show that really systematically. We can, for instance, show them data about how that little machine, the little blicket detector, works, and then we can see what kinds of inferences they make. And to a remarkable degree, they make the inferences that you should make if you were being a good scientist.
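Inference of this kind is often modeled as Bayesian updating over causal hypotheses. Here is a minimal sketch under invented assumptions (a deterministic detector and an independent per-block prior), not the actual model used in the papers:

```python
from itertools import combinations

def powerset(items):
    return [set(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def blicket_posterior(blocks, trials, prior=0.3):
    """Posterior over which blocks are blickets. A hypothesis is a set
    of blicket blocks; the detector is assumed deterministic: it lights
    up iff at least one blicket is placed on it. `trials` is a list of
    (blocks_placed, lit) pairs."""
    scores = {}
    for hyp in powerset(blocks):
        # Prior: each block is independently a blicket with prob `prior`.
        p = 1.0
        for b in blocks:
            p *= prior if b in hyp else (1.0 - prior)
        # Likelihood: zero out hypotheses inconsistent with any trial.
        for placed, lit in trials:
            if (len(hyp & placed) > 0) != lit:
                p = 0.0
        scores[frozenset(hyp)] = p
    total = sum(scores.values())
    return {h: p / total for h, p in scores.items()}

# Block A lights the detector on its own; A and B together light it too.
trials = [({"A"}, True), ({"A", "B"}, True)]
post = blicket_posterior(["A", "B"], trials)
p_a = sum(p for h, p in post.items() if "A" in h)   # A is certainly a blicket
p_b = sum(p for h, p in post.items() if "B" in h)   # B falls back to its prior
```

In this toy version, seeing A work on its own "explains away" the joint trial, so B's probability returns to baseline, the kind of backward-blocking pattern that toddlers have been shown to exhibit.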

0:27:49.6 AG: But of course figuring out how machines work is fine; the thing that's most important in all of our lives, arguably, is figuring out how the people around us are working. So way back in the '80s, I and others developed what's come to be called theory of mind research, which is figuring out what it is in babies' minds that enables them to understand your mind. And what we found out, which is typical of what we found out, is that babies both know more to begin with than we would've thought, but they also learn more. So between say one and two, for instance, they seem to start learning that people could want different things. I could want one thing, you could want another. Between three and six, they start to understand that I could think something different than you do. And those are really, really deep, important things to understand. But more recently, one of the things that I think has been very interesting is the development of work on children's intuitive sociology. So they're also figuring things out like, "Oh, when people are allies, they'll behave this way, and...

0:28:58.1 SC: Ah, the next level. Good.

0:29:00.6 AG: Yeah. Right. When they're enemies and they'll behave this other way. And if one creature is dominant, is larger than the other, then they can get their way. And what we're just trying to do now, again, to go back to this caregiving point, is figure out what do they actually think about caregiving? And there's a little bit of evidence that even babies already are identifying, "Okay, this is what it looks like for someone to take care of someone else. This is a good potential caregiver for me." So there's this lovely back and forth between that helplessness that I talked about, and then the fact that if you are that kind of helpless creature, learning about other people, learning about love, learning about what other people are like is gonna be really important for your survival. And of course, as adults, that's still the most important thing that we learn about.

0:29:51.9 SC: I wanna get in the audience's mind, the idea of those blicket detector [chuckle] because clearly it's playing a big role in your psychology experiments. Is this something you buy off the shelf, or do you make them?

0:30:03.0 AG: Yeah. It's funny. When we started doing this 20 years ago, we actually made them. So in those days, in the psychology department, you had a shop, which we don't...

0:30:13.5 SC: Oh my God.

0:30:14.3 AG: Tend to have anymore. But we had a shop, and we went to the folks in the shop and said, "Okay, look, here's what we want. It's very simple. It's a little box. You put different things on top, and sometimes it lights up and plays music, and sometimes it doesn't." And then we had various other variants of this. So we had one with gears, where you flick a switch, and then the gears go, and then that makes something else happen. So they're like little Rube Goldberg machines that we'd put together. Now, this is a kind of interesting anecdotal observation: at the time that we first did this, we thought, "Gee, this would be so much easier to do if we just had screens and a computer, and you could do it that way." The kids didn't wanna have anything to do with it when it was a screen.

0:30:58.6 SC: Ah.

0:31:00.2 AG: They really needed to have the real thing. But now what we're finding is that even three and four year olds... And I think the big difference is that now screens are interactive. So three and four year olds have the idea, and in fact babies really love the idea, that they can act on something: they can touch it, they can talk to it, they can swipe it, and then figure out what the outcome is supposed to be. Although it still looks like we need the real things for the younger kids, the older kids now have the idea, "Oh, okay, a screen is something that has causal powers, just as a little machine does."

0:31:42.5 SC: But still you have these little... Are you still doing the little boxes? I just like the idea that it's a rite of passage in the Gopnik lab for graduate students to build a little box that lights up under different circumstances.

0:31:52.9 AG: Well, that's exactly right. And we have a lab culture about where we get the doorbells that go in it. And a funny story about this: when we first started doing it, we actually had the folks in the shop who were putting these things together. And at one point, one of the machines that we had broke, and my brilliant graduate student and I went in and said, "Look, it's not working. The gear is not making the other gear go." And they said, "Oh, no, that's not how it works. We just programmed it so it does this specific thing when you..." So there's actually no physics behind it at all. It's like a Tesla car. It's just a...

0:32:39.8 SC: Simulation.

0:32:40.2 AG: A computer masquerading as a real physical artifact. And I have to say, we were a bit crushed about this. And also, the way that it actually works most of the time in the lab is that there's a graduate student who's sitting and pressing a button underneath the table that's determining what it does. So.

0:32:58.8 SC: Ah. Wow.

0:33:00.4 AG: From the kids' perspective too, it's not... They have the illusion of actually doing physics, but what they're actually doing is doing experiments.

0:33:07.5 SC: It feels like the Wizard of Oz here. I feel like I've been fooled about all this...

0:33:11.6 AG: Yes.

0:33:11.7 SC: All this stuff going on. Okay. So you say things like, "A child under 1-year-old doesn't have this theory of mind, and a 2-year-old does." So do we know of certain benchmarks or phase transitions in the growth of a child where their cognitive view of the world expands a bit?

0:33:32.7 AG: Yeah. So as I say, from the very beginning, literally from the time they're born, babies are doing things like paying special attention to faces and interpreting them from very early on. They're imitating other people in a way that other primates don't seem to, which suggests that there's something about understanding other people that's really important. By the time they're nine months old, they're doing things like pointing to communicate, and a nine month old will point, and if you don't follow the point, they'll get antsy and go, "Ah, ah. There, there, there." So that suggests that they know something about the fact that other people are looking at the same thing that they are. So sometimes what happens is that people say, "Oh, you don't get theory of mind until you're four or five." But that's not right; you just get different theories, the same way you would if you thought about physics. There's different kinds of theories about different parts of the mind, but very characteristically, you see these changes coming at roundabout a particular age. So at about nine months you see this big revolution, and some of it may just be maturation, but I think something that is increasingly important, and that we haven't thought about enough, is just how active learners even these very young children are. Just how much what they're doing is not just observing statistics, although of course they do that, but also actively experimenting on the world.

0:35:04.1 AG: And if you think about even a newborn, they're looking at you, they're smiling, they're seeing what the effects of their actions are, and by the time you're talking about a toddler, that's like their whole lives. They're just constantly, constantly doing experiments, except when they do it, we call it getting into everything. When physicists do it, we call it being a good experimental physicist. So one of the things that we've been doing a lot recently is trying to think about how the children's learning compares to the learning of machines, for example, something like the large models that have had so much play. So what's the relationship between what AI is doing and what these very little kids are doing?

0:35:52.9 AG: And I think one of the really big dimensions that makes the kids different from the AI systems is that they are going out and actively trying to get information about the world, and then changing what they think based on the information they get, as opposed to the large models, for example, that are basically just... It's interesting to think about this in terms of cultural transmission. The large models, I've argued, really what they do is just pick out patterns in all the things that other people have already figured out. So they're just doing the imitation part of cultural transmission. But the kids are actually going out there and finding out things that are new and experimenting and seeing new outcomes. And we think that may be really the crucial thing that lets them learn as much as they do as quickly as they do, and with as few calories as they do compared to the AI systems.

0:36:45.4 SC: So it is a little bit like the distinction in causal reasoning between simply finding correlations versus being interventionist about things. Going outside the dataset and saying, "If I did this, what would happen next?"

0:37:00.7 AG: Yeah. That's exactly right. For 20 years we've been collaborating with philosophers of science and computer scientists who've been trying to figure out how we do causal inference in science. And one of the really basic ideas is that you can't, or it's very difficult to, make causal inferences just by looking at patterns of statistics or correlation. Well, what makes a causal inference different from just a correlation? This is an old philosophical problem, going back to Hume. And I think the idea that's become most prominent in philosophy of science, and that seems intuitive in science itself, is, "Well, okay, if you wanna really make the causal inference, you have to do an experiment." You have to actually intervene, do something in the world, see what the outcome is. That's what makes something causal.
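The observation-versus-intervention point can be made concrete with a toy simulation (the variables and numbers here are hypothetical illustrations, not anything from the Gopnik lab): a hidden common cause makes two variables correlated under passive observation, while intervening, Pearl's do-operator, reveals that one does not cause the other.

```python
import random

random.seed(0)

def observe(n=100_000):
    """Passive observation: a hidden confounder C drives both X and Y,
    so X and Y are perfectly correlated even though X does not cause Y."""
    xs, ys = [], []
    for _ in range(n):
        c = random.random() < 0.5          # hidden common cause
        x = c                              # X just copies C here
        y = c                              # so does Y
        xs.append(x)
        ys.append(y)
    return xs, ys

def intervene(n=100_000, set_x=True):
    """Intervention: the experimenter sets X directly (the do-operator),
    cutting the arrow from C into X. Y still depends only on C."""
    ys = []
    for _ in range(n):
        c = random.random() < 0.5
        x = set_x                          # forced, ignoring C
        ys.append(c)                       # Y unaffected by our action
    return ys

xs, ys = observe()
# P(Y=1 | X=1) under observation: 1.0 (pure spurious association)
p_obs = sum(y for x, y in zip(xs, ys) if x) / sum(xs)

# P(Y=1 | do(X=1)) under intervention: about 0.5 (no causal effect)
ys_do = intervene(set_x=True)
p_do = sum(ys_do) / len(ys_do)

print(round(p_obs, 2), round(p_do, 2))
```

Only the experiment distinguishes the two situations, which is exactly the Humean point being made above.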

0:37:52.3 AG: And we've been doing some work on this. If you look at even quite young babies, think about something like a busy box, a toy that you use for little babies. A busy box is a thing that has lots of causal possibilities, where you can do lots of experiments. It's cheaper than your standard cyclotron, but it has the same kind of character for babies. It's a way of actually being able to do the kinds of experiments that you need to figure out how the world works. And of course, as I point out, since theory of mind and intuitive psychology is so important, they have a great experimental subject there all the time, which is the caregiver. So if you think about them as being little psychologists, we're the lab rats. So a lot of what babies do, in the terrible twos, for example, is doing something and then looking to see how we react to it.

0:38:57.8 SC: We had Judea Pearl on the podcast, and at one point...

0:39:02.1 AG: Oh. Really?

0:39:02.8 SC: Yeah. And at one point he said, literally like, "Haven't you ever seen a baby? All a baby is doing is like touching things and making a causal map of the world." And I understood what he was saying and agreed with it, but I guess I didn't realize it was quite as literal as apparently it is.

0:39:18.6 AG: No. Absolutely. So what we've done is taken some of the formalisms that Pearl developed, like causal Bayes nets, and then shown how the ways that children are solving these causal problems are just like what you'd expect from causal graphical models. So essentially, and I think we can say that we've demonstrated this, what the children are doing is constructing causal graphical models from data and then using them to determine their interventions. Now, the thing is that both for Judea and for the whole world of causal inference, and for us, and for scientists, it's still a kind of unsolved problem: how do you decide which experiment is really the right experiment to do?
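A minimal sketch of what "constructing causal models from data" could look like for a blicket detector, under two simplifying assumptions of mine (a tiny three-hypothesis space and a deterministic detector), not the lab's actual models:

```python
# Candidate causal structures: which blocks have the power to
# activate the detector. Uniform prior over the toy hypothesis space.
hypotheses = {"A only": {"A"}, "B only": {"B"}, "A and B": {"A", "B"}}
prior = {h: 1 / 3 for h in hypotheses}

def likelihood(blickets, trial):
    """P(outcome | hypothesis) for one trial.
    trial = (set of blocks placed on the detector, did it light up?).
    Deterministic detector: lights up iff some placed block is a blicket."""
    placed, lit = trial
    return 1.0 if bool(placed & blickets) == lit else 0.0

# Observed experiments: A alone lights the machine; B alone does not.
trials = [({"A"}, True), ({"B"}, False)]

# Bayesian update over causal structures after each experiment.
posterior = dict(prior)
for trial in trials:
    unnorm = {h: posterior[h] * likelihood(s, trial)
              for h, s in hypotheses.items()}
    z = sum(unnorm.values())
    posterior = {h: p / z for h, p in unnorm.items()}

print(posterior)  # all the probability mass lands on "A only"
```

With the causal structure identified, the model can then be queried to plan interventions ("put A on the machine to make it go"), which is the sense in which the learned graph guides action.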

0:40:02.1 SC: Sure.

0:40:03.1 AG: So what something like Judea's framework lets you do is say, "Okay, if you get this result, here's the causal graph you should have." But how do you decide what you should test in the first place? And I think one interesting thing that's coming out is that a lot of the old shibboleths about how you do experiments, that you have to keep everything constant, that you can only vary one thing at a time, that's not true for babies, and it may not be the best way of doing science either. And I think this is a wonderful open question: how do you decide what kind of experiments to do? How do you decide how to explore? And we've started doing this with kids and AI agents. What we've done is put them in the same kind of general environment. Minecraft is one of the ones that we use, and I imagine your audience will know Minecraft. Although there's a bit of a funny story about this, which is that I originally started doing this work about how you explore the Minecraft environment because my grandson was mad about Minecraft. And I figured, well, at least this will make me cool, if I can tell them grandmom's doing stuff about Minecraft. Then I gave a big talk about this, I got something called the Rumelhart Prize, it's a big deal, and my grandchildren were in the audience, and I said, "Well, I did this because I wanted to impress Augie."

0:41:33.6 AG: And he came up to me very sweetly after the talk and said, "Grandma, that was a little embarrassing because nobody cool plays Minecraft anymore. We all play Fortnite." So it isn't really cool to be involved in Minecraft if you're a 13-year-old. So I wasn't keeping up with the changes, but in any case, Minecraft is a great game. It continues to be a great game for grownups and for 11 year olds. And what we've done is put kids in that environment and then put AI agents in that environment and just said, "Figure out what's going on, explore." And even relatively young kids will explore that environment in a rational way. They're not just randomly trying things. They're doing things in a way that lets them figure out how the environment works, and they're much, much, much better at it than AI agents are.

0:42:32.8 SC: Interesting. Well, the thing you said about traditional scientific methodology asking us to keep all but one thing constant, changing that one thing, and seeing what the effect is... That sounds sensible to me, but is it just because it's simpler to keep track of what's going on? Maybe it's ultimately less effective than poking things in concert?

0:42:53.6 AG: Yeah. So that's a really interesting technical, formal question. It is true that if you did that, that's a simple normative rule about what to do. But sometimes actually trying to vary different things at once can be more informative, 'cause as you can imagine, it's really hard to keep everything else constant and just change one thing. And thinking about what the kids are doing can be very instructive here, 'cause the kids are often trying lots of different things in different ways. Think about the 2-year-old with the busy box, and yet they seem to be such good learners. So it's a really interesting question for AI, for science, for developmental psychology, how that's possible.

0:43:41.4 SC: And how literally true is it to describe these very young children as good Bayesians?

0:43:49.1 AG: Yeah. I think it is literally true. What we've done is take various kinds of ideas from Bayesian inference, and what we can do is give kids information, for instance, about the baseline probability of a hypothesis, and then we can give them information that will let them update that hypothesis, and they'll do the right updating in the right kind of way. Another discovery, in my lab but even more in labs like Richard Aslin's and Jenny Saffran's back in the '90s, was that children are quite good at doing statistics. Which you might be really surprised at, because after all, grownups are notoriously bad at doing probabilities. But if you do something like show the children that one blicket works eight out of 10 times, and the other one works four out of 10 times, and then you say, "Okay, which one will you use here to make the machine go?", they'll pick the one that has the higher statistical probability of activating, and this is with 18 month olds. In fact, they'll do it even if it's like two thirds versus eight tenths. So they seem to be able to do the math, implicitly, obviously, and unconsciously, to figure out how probabilities work. So there's...
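The 8-out-of-10 versus 4-out-of-10 choice can be sketched as simple Beta-Bernoulli updating. This is one plausible formalization of the implicit computation, my gloss rather than a claim about what the children literally compute:

```python
def beta_mean(successes, failures, a=1, b=1):
    """Posterior mean activation probability under a Beta(a, b) prior
    (uniform by default), i.e. Laplace's rule of succession."""
    return (successes + a) / (successes + failures + a + b)

# One blicket activated the machine 8 out of 10 times, the other 4 out of 10.
p_first = beta_mean(8, 2)    # (8 + 1) / (10 + 2) = 0.75
p_second = beta_mean(4, 6)   # (4 + 1) / (10 + 2) = 0.4166...

# The toddler-style choice: reach for the higher-probability blicket.
best = "first" if p_first > p_second else "second"
print(best, round(p_first, 2), round(p_second, 2))
```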

0:45:10.3 AG: But again, this has led to this puzzle, which is: they really seem to be doing something like Bayesian inference, and yet we know that ideal Bayesian inference is computationally impossible. The Bayesian idea is, you take a hypothesis, you generate a pattern of evidence, you check it against the data that you've got, and you figure out which hypothesis would have been most likely to generate that pattern of data. And the problem is, if you have a big space of hypotheses, that's just gonna take forever, because you're gonna have to try each one of the hypotheses separately. It's called the search problem. And nobody's really quite solved that problem, either in statistics or in AI or in childhood.

0:46:00.5 AG: And the children do seem to be able to solve that problem, because they end up finding out about the world. And it's a really interesting question how they solve it. And my guess at the moment is that something about this active inference might be part of the solution: that it isn't just that you're sitting back having data float over you and then trying to update your hypotheses. You are actually out there being an experimenter and figuring things out. I know that in physics there's always a bit of a tension between the theoreticians and the experimentalists. So I think from the developmental perspective, we're team experimentalists. That's the...

0:46:43.2 SC: Fair enough.

0:46:44.1 AG: That looks like that's really the secret.

0:46:47.8 SC: And how much do we know about the neuroscience of this kind of thing? I mean, are these ideas that babies sort of get better at different kinds of reasoning as they grow older, mirrored in ways that the brain is wiring itself?

0:47:02.8 AG: Yeah. It's quite neat, because one of the oldest developmental neuroscience findings, which is consistent with what we've seen to this day, is that early in development, what you see is lots of new synapses, lots of new connections being formed. And then there's a kind of tipping point where the connections that have been formed are strengthened, they're myelinated, they become more effective, and the ones that haven't been used are pruned. So the weaker ones just disappear. So you have this early brain that is very, very flexible, very good at changing in the light of new experience, very plastic, as neuroscientists say, but not very effective, not very good at actually doing anything. And then you have this later brain that's very good at doing things, but much less good at changing, with much less plasticity. And again, this is another version of this explore/exploit trade off. And empirically, if you look at brain development, you can see that depending on the domain that you're in, you see this pattern of very early proliferation and then later pruning.

0:48:13.6 AG: So if you look at the visual system, for instance, you see this tipping point at around 18 months. So the visual system is getting set in that early period, and then it settles in, which is why, if a baby has vision problems, it's really important to correct them early. If you look at the language areas, it's like five or six where you start seeing this transition, so it's once you've developed a language that it becomes harder for you to learn a new language. And if you look at what's sometimes called executive function, the prefrontal part of the brain, that's the latest one. That's not completely getting set until adolescence, for example. But you do see this general pattern: start out with lots and lots of connections, and then at some point start strengthening and pruning.

0:49:08.3 SC: And I just hate to bring it back to academia once again, but this comports with the idea that the energetic young idea creators are the young faculty and postdocs, not the older senior faculty who've been successful for many decades.

0:49:23.6 AG: Yeah. It's an obvious question that lots of people have asked: "Well, are there things that grownups could do that would make them as creative as the children?" And the first thing to say is, even if you're thinking about a lab, if you wanna get grant money, it's really good to have people who are really, really practiced and know how that system works and are really good at it. And even if you forget about grant money...

0:49:50.4 SC: Oh, good. Yeah.

0:49:51.0 AG: Just actually doing the experiments, if you're a physicist, for example, is a giant enterprise. It really takes a lot of focus and executive energy to be able to get the experiments to run. So you don't want your department chair to be this wildly creative person who's thinking up strange ideas all the time. You want them to be focused. Nevertheless, I think it's interesting, and relevant to this explore/exploit trade off, that adults often have social institutions that allow them to switch back and forth between explore and exploit. Sabbaticals, I think, are a great example in the academic context. We don't expect scientists to be able to just do this on their own. What we think is: going off to a retreat, going off to a sabbatical, going off to a workshop, those are all ways that you can pull yourself out of the context that you're already in and allow yourself to do greater exploration. So it's a pain that we all spend so much time traveling as scientists, but I do think there's a bit of an argument that being in a different place, being in a different context, often doing interdisciplinary work where you're forced to think about things in really different ways, because now you're looking at biologists instead of physicists: all those seem to be ways that grownups set themselves up to be more creative.

0:51:28.0 SC: And does it... You've mentioned already several times the connection between these ideas and AI, large language models, etcetera. How operationalizable are these insights? Can we imagine building better AI models using the lessons we've learned from development?

0:51:45.7 AG: Well, that's what we've been doing. So I'm part of the Berkeley AI research group. And what we're trying to do is take some of these ideas, ideas about active learning, ideas about social learning, ideas about causal model building, and actually implement them in AI systems. And to a certain extent, what's happened is that because the large models have been so effective in lots of ways, there's a tendency to say, "Okay, well, they worked before, so we'll just pour more compute into them, and they'll get bigger, and they'll use up more energy, and then they'll get better and better, and we'll have this mythical thing called AGI." And when I talk to AI groups, which I do a lot now, I have a slide where I say there's no such thing as general intelligence, artificial or natural, which always leads to a sharp intake of breath from the audience. 'Cause the ideology, the model, and this gets back to our very first conversation, is that there's this one thing called intelligence. Some people have more of it, some people have less of it. If you have more and more of it, then you're gonna be more and more effective.

0:53:09.7 AG: That's not the model that comes out of cognitive science at all. What comes out of cognitive science is that we have these trade-offs between different kinds of cognitive capacities. And LLMs, rather strikingly, don't have any of these capacities. They're not going out in the world and doing experiments; they're not creating abstract causal models that they can then use to generalize. And you can see the difference between the amount of data that the children have, and how good they are at generalizing it to new situations, and the large models. The large models need enormous amounts of data, and they're not very good at generalizing, especially to what are called out-of-distribution cases, cases that weren't there in their training. The kids, with much less data, are very good at doing that. So the question is, what are the kids doing, and could that make more effective AI? And I think the thing to say at the moment is, if you start thinking about the comparison to kids, we're very, very far away. The kids are orders and orders of magnitude better than any of the current AI systems that we have. But I think that's the direction that we might want to go in.

0:54:22.8 SC: Well, it's a difficult conversation to have sometimes because the AI... The people who worry that we're close to super intelligence any moment now will say, "Look, we can beat the best human beings at playing chess or playing Go or... " whatever. And you try to make the point as you just did, that there are other aspects of intelligence that they're not good at, but they seem a little bit fuzzier. It's quite... It's a little bit harder to put a benchmark on it. Is that something that we're trying to make progress on?

0:54:53.2 AG: Well, one thing is that, for example, if you look at robotics: notoriously, the AIs are better at playing chess, but they're terrible at actually picking up the pieces. If you made part of the game be that you had to find the pieces that have spilled on the floor and put them back in exactly the right places, they're really quite bad at doing that. Especially if you had, say, a new chess set, like an Alice in Wonderland chess set or something, where you couldn't just identify the pieces from your previous training. And this is the famous Moravec's paradox. This goes back to the beginnings of AI: things that look really hard for humans, like chess and Go, turn out to be relatively easy for AI, and things that look really easy, like perception and action and movement and the kinds of things that kids do when they figure out how the little blicket detector works, turn out to be really hard for AI.

0:55:54.3 AG: I think the whole conversation about intelligence in AI is fundamentally misguided, because really the reason why the systems are as effective as they are is that they're taking advantage of hundreds of thousands of humans who've done things like put text on the web and, in some ways, with reinforcement learning from human feedback, have trained up the systems. My latest metaphor about this is: do you know the story of Stone Soup, the old children's story?

0:56:34.4 SC: No.

0:56:34.7 AG: This is a wonderful children's story. This is another advantage of being a developmental psychologist: you have the wisdom of the stories that grandmothers have told in the past. So this turns out to be a very, very widespread bit of folklore. And here's the story. There's these visitors who come to a village and they say, "We want some food." And the villagers say, "Oh, sorry, we don't have any, we can't share any with you." And the visitors say, "That's okay, we're gonna make stone soup. We have magic stones." So they get a big cauldron, they put a couple of stones in the cauldron, it starts boiling. They say, "See, we're just gonna make stone soup. It's so good. You know, it would be even better if we had an onion and a carrot that we could put in it. But I guess if we don't have it, we don't have it." So one of the villagers says, "I think I have an onion and a carrot somewhere." They go and put it in. They say, "Oh, that's great. See how well this soup is working? When we made it for the king, we put a chicken in, and that was really great. So could you go and... " And of course you can guess what happens.

0:57:35.6 SC: Yeah.

0:57:36.3 AG: The villagers all contribute their bit of food. And then at the end, the villagers say, "This is amazing. This is magic. We got all this soup just from a stone." And I think if you think about a version of that where the computer people, the tech guys, went to the universe of computer users and said, "Oh, we have AGI. We can do it with just a couple of algorithms, gradient descent and transformers, and we'll have AGI any minute now. But you know, we really need a lot of data to make this AGI work." And the users said, "Oh, okay. We have all of our text and pictures that we put on the internet, all of our books, all of our newspapers. We could give you that to make the intelligence." And then the tech guys say, "Oh, that's really great, but our AGI is still saying stupid things. So if we could get reinforcement learning from human feedback, and get people to give us feedback about whether our AGI is doing well or not, that would make it even better."

0:58:42.5 AG: And the people said, "Oh, okay. There's whole villages in Kenya who could do this." And finally the tech guys said, "Oh, that's really... We're getting really good intelligence, but if we could do prompt engineering, so we could figure out exactly what prompt to use, that would make it even more intelligent. Could you do that?" And the users say, "Oh, yeah. We could think a lot about that." And then of course, in the end, what happens is that the tech guys say, "See, we told you we made AGI just from a couple of algorithms." And that's ignoring the fact that the reason why it works is not the algorithms but the data, and that data comes from a bunch of humans who are doing the kind of exploratory, creative intelligence that four year olds do.

0:59:25.1 SC: I do presume that there are other angles on building AI systems that are more oriented towards making causal models of the world, even intervening, putting them in robots. I really don't know.

0:59:38.9 AG: Yeah. That's one of the things that people have thought about, but as I say, it's hard. Trying to do the Bayesian inference is hard because the search problem is hard. There's what's called reinforcement learning, an old idea in psychology: you actually take actions and then see if the actions make you better off or not. So the mouse who runs through the maze gets cheese sometimes but gets a shock other times, and learns to go towards the cheese and away from the shock. That's a technique that played a really important role in, for example, the solution to Go: the Go agent is playing against itself over many, many trials to try and figure out the best strategy. But the problem with reinforcement learning is that it's too narrow. Since all you're thinking about is "am I better off or not?", you are not going to be able to do the kind of exploration that you need to really figure out how the world works.
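The mouse-in-the-maze picture corresponds to textbook tabular Q-learning. Here is a toy corridor version, where the cell layout, rewards, and hyperparameters are all illustrative choices of mine:

```python
import random

random.seed(1)

# A 5-cell corridor: shock at cell 0 (reward -1), cheese at cell 4 (+1).
# Tabular Q-learning learns the value of each (state, action) pair from
# trial and error, exactly the cheese-versus-shock setup described above.
N, START = 5, 2
ACTIONS = (-1, +1)                         # step left / step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2          # learning rate, discount, exploration

for _ in range(500):                       # episodes
    s = START
    while s not in (0, N - 1):
        # Epsilon-greedy action selection.
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = s + a
        r = 1.0 if s2 == N - 1 else (-1.0 if s2 == 0 else 0.0)
        # Bootstrap from the next state unless it's terminal.
        target = r if s2 in (0, N - 1) else \
            r + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# The learned greedy policy: every interior cell heads toward the cheese.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(1, N - 1)}
print(policy)
```

Note that the learned policy only encodes "what pays off"; nothing in this objective rewards finding out how the corridor works, which is the narrowness being criticized.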

1:00:52.3 AG: And so the idea that I think is most promising, and that we've been working on, is an idea that should be very familiar from science: you could do reinforcement learning, but instead of trying to calculate whether you've got higher usefulness or utility, or whether you got more cheese, what you're calculating is, "Did I get more information? Did I find out more about the world, or about the system, when I did this action versus this other action?" And the result is that, to go back to the point we made before, you're gonna do the things that will tell you something about how the world works, even if they don't make you any better off in the short run. And an idea I really like, and that we've been working on, is the idea of a kind of intrinsic reward: the reward you get as a scientist when you're just doing it, 'cause it's so cool when things work out the way that you want. So I think the solution to the problem means having a kind of reinforcement learning agent, but one where the reward isn't cheese or utility or winning the game. It's finding out something new, figuring out more about how the world works.
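One crude way to formalize "reward is information, not cheese" is to score each action by its expected reduction in predictive uncertainty. This Beta-Bernoulli sketch is my construction, not the lab's algorithm; it shows why a curiosity-driven agent prefers an untried lever over a well-understood one:

```python
from math import log2

def entropy(p):
    """Binary entropy (bits) of a Bernoulli success probability."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def expected_info_gain(successes, failures):
    """Expected drop in predictive entropy from one more pull of a lever,
    tracked with a Beta(1, 1) prior over its success rate."""
    a, b = successes + 1, failures + 1
    p = a / (a + b)                          # posterior predictive P(success)
    h_now = entropy(p)
    # Predictive entropy after seeing one more success vs. one more failure.
    h_if_success = entropy((a + 1) / (a + b + 1))
    h_if_failure = entropy(a / (a + b + 1))
    return h_now - (p * h_if_success + (1 - p) * h_if_failure)

# Lever 1 has been pulled 50 times (well understood); lever 2 never tried.
g_known = expected_info_gain(45, 5)
g_novel = expected_info_gain(0, 0)
print(g_novel > g_known)  # True: the information reward favors exploring
```

Swapping this quantity in for cheese or game score turns an ordinary reinforcement learner into the kind of information-seeking agent described above.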

1:02:10.0 AG: And there's been a lot of interesting attempts, both in development and in AI, to figure out how you could make that happen: what kind of signal would tell you that as a result of this action you'd learn more. And one that we're working on now that I think is really interesting is something called empowerment. And the idea behind empowerment is that you get rewarded when you do something and it has a predictable effect on the world. So what you do is try to do as many things as you can where varying the action that you take will vary what happens out there in the world. And when you do that, that's really exciting, that's really cool, that is something you're gonna be likely to try again. But you also want as many different kinds of actions, as many different relationships like that, as you can get.

1:02:52.5 AG: So if you find one, after a while you'll get bored and you'll try and find another one. And I think that's very closely related to the causal learning that I talked about before. You mentioned this, Sean, that causation is all about intervention. So really what you're learning with this empowerment reward is: I'm gonna learn how to intervene, I'm gonna learn how doing things out there in the world ends up having effects. And we have a little bit of evidence that even tiny babies... If you take literally a two or three month old and you put a ribbon between their foot and a mobile, so that they can actually control what happens to the mobile, they'll sit there and try all sorts of different patterns of kicking to see what the consequences are gonna be for the mobile.
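In the AI literature, empowerment is usually defined as the channel capacity from actions to subsequent states, the maximum over action distributions of the mutual information I(A; S'). As a simplified sketch (fixing a uniform action choice rather than maximizing over it, and with toy machines of my own invention), a "systematic" machine whose outcome tracks the button pressed carries more action-outcome information than a random one:

```python
from math import log2

def mutual_information(p_joint):
    """I(A; S') in bits from a joint distribution over (action, outcome)
    pairs, given as a dict {(action, outcome): probability}."""
    p_a, p_s = {}, {}
    for (a, s), p in p_joint.items():
        p_a[a] = p_a.get(a, 0.0) + p
        p_s[s] = p_s.get(s, 0.0) + p
    return sum(p * log2(p / (p_a[a] * p_s[s]))
               for (a, s), p in p_joint.items() if p > 0)

# Two-button machine, buttons pressed uniformly at random.
# Systematic machine: each button reliably produces its own outcome.
systematic = {("b1", "light"): 0.5, ("b2", "music"): 0.5}
# Random machine: the outcome ignores which button was pressed.
random_box = {(a, s): 0.25 for a in ("b1", "b2") for s in ("light", "music")}

print(mutual_information(systematic), mutual_information(random_box))
```

The systematic machine yields a full bit of action-outcome information while the random one yields zero, matching the intuition that controllable effects are the ones worth pursuing.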

1:03:44.0 AG: So they really are like little... And they'll smile and they'll giggle and they'll go back to doing it. It just seems to be an incredibly satisfying enterprise for them. And of course, they're doing the same thing with mom when they're making all sorts of funny faces and seeing if mom gives funny faces back again. So I think that might be a really, really important part of the solution. And again, it speaks to science, because going back to what we were saying before about how systematic your experiments have to be, one of the things that I've noticed is that the powers that be always say, "Well, you don't wanna just have a fishing expedition. This grant is just a fishing expedition." But a lot of times what you actually end up doing is a fishing expedition. You try something, and then kind of unexpectedly it has this particular systematic result. And that's where the gold is, rather than in the causation that you already know about. And empowerment is kind of a way of saying, "Yeah. Keep that fishing expedition going."

1:04:48.3 SC: Well, and there are better and worse ways to go fishing.

1:04:50.7 AG: Right. Exactly. Exactly.

1:04:53.3 SC: We did have Karl Friston also on the podcast, and he has these ideas of the free energy principle and the Bayesian brain. And part of the aspect that gives people pause is he seems to be saying that our brains try to minimize being surprised. And you might think that therefore you should just sit in a dark room and never do anything.

1:05:14.8 AG: Right. Right.

1:05:15.7 SC: But if I understand correctly, he was saying, "No, actually we wanna minimize the net surprise over our lives, and therefore we better explore around and do weird things now so we can anticipate what's coming."

1:05:28.9 AG: Yeah. That's exactly like the exploration-exploitation trade-off that I was talking about. So it's interesting if you think about something like empowerment or all of these kinds of intrinsic rewards. The failure cases are you just sit there and do the same thing over and over again, and maybe those are human failure cases too. So what you want is a coherent story about what's going on around you. You don't wanna just be paying attention to random stuff that's happening, even though that might give you something that's new and surprising, but you also don't wanna just do the same thing over and over again. And we've been looking at... Comparing, for instance, what do children think when you give them a machine that just randomly does different things, like our blicket detector, but a kind of random blicket detector, versus one where there's a systematic relationship between what you do and what comes out, even if what comes out is surprising, versus one that just does something obvious that you know about beforehand. And they really seem to like to play with the machine that is surprising, but surprising in a systematic way. And I think that's true about scientists too.
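The exploration-exploitation trade-off Gopnik refers to is usually formalized as a multi-armed bandit problem. A minimal sketch, using my own toy numbers rather than anything from her studies: an epsilon-greedy agent explores a random option with probability epsilon and otherwise exploits its best current estimate. A high-epsilon agent plays the "child" role (lots of exploration), a low-epsilon agent the "adult" role (mostly exploitation).

```python
import random

def epsilon_greedy(arm_means, epsilon, steps=10000, seed=0):
    """Epsilon-greedy bandit: with probability epsilon pull a random
    arm (explore), otherwise pull the arm with the highest estimated
    payoff so far (exploit). Returns average reward per step."""
    rng = random.Random(seed)
    counts = [0] * len(arm_means)
    estimates = [0.0] * len(arm_means)
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(arm_means))  # explore
        else:
            arm = max(range(len(arm_means)),
                      key=lambda i: estimates[i])  # exploit
        reward = rng.gauss(arm_means[arm], 1.0)  # noisy payoff
        counts[arm] += 1
        # Incremental running mean of this arm's observed rewards.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

arms = [0.1, 0.5, 0.9]  # hidden mean payoffs of three options
child = epsilon_greedy(arms, epsilon=0.5)    # explores a lot
adult = epsilon_greedy(arms, epsilon=0.05)   # mostly exploits
print(child, adult)
```

Over a long run the low-epsilon "adult" earns more per step, but only because somebody already paid the exploration cost of discovering which arm is best, which is exactly the division of labor Gopnik attributes to childhood.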

1:06:39.8 SC: The world has some structure and we do take advantage of that. And I think this is something that we haven't quite as philosophers, physicists, AI researchers, whatever learned how to systematize perfectly.

1:06:51.9 AG: Yeah. One of the things that always strikes me... The reason why I started doing this work in the first place, back when I was a philosophy student, completely a philosophy student, is if you think about going back to Plato and Aristotle, one of the deep philosophical questions is this question of knowledge. It's how we know that there's a world out there that has structure, that has quarks and minds and all sorts of things that we can't immediately observe. And yet all that reaches us from that world is a bunch of disturbances of air at our eardrums and photons at our eyes. How do we ever at any point manage to reconstruct that world from that data? And I think going back to Plato and Aristotle, the two ways of trying to answer that question have been to say, "Well, okay, it just looks as if we're understanding that world from data. It's really there all along."

1:07:47.7 AG: So it's really some kind of innate... There's some kind of innate, evolved structure that's responsible for this. And that's Plato's approach, one of the two. The other approach, going back to Aristotle, which is the approach of the most recent version of AI like the LLMs, is, it just looks as if we're really understanding the structure. All we're doing is pulling out correlations between those photons and those disturbances of air. So all we're doing is just taking the data and pulling out correlations. And we think that that's telling us something about structure, but we have no particular reason to think that. And the great thing about development is that if you look at actual babies and children actually learning, they don't seem to fit either of those pictures. They seem to be able to learn really... And I think also if you look at science, we seem to be able to learn really radically new things about the world, whether it's learning that I might like broccoli and you don't, or whether it's learning about quarks and leptons.

1:08:49.1 AG: And yet it doesn't look as if all we're doing is just finding statistical correlations in the data. It looks as if we're genuinely developing theories that go beyond just the correlations in the data. And I don't think we have a good... Even though we've been doing this for a thousand years, we still don't have a good formal model, computational model, good understanding of how that's possible. And I think looking at the kids, who are clearly doing it, is a really good route to answering that philosophical question. And that's what I've spent my whole career doing.

1:09:19.6 SC: Sometimes looking at what actually happens in the world does help us understand it better. [chuckle] Alison Gopnik, thanks so much for being on the Mindscape podcast.

1:09:27.4 AG: Well, thanks so much for having me, Sean. A great conversation.

[music]
