Language comes naturally to us, but is also deeply mysterious. On the one hand, it manifests as a collection of sounds or marks on paper. On the other hand, it also conveys meaning - words and sentences refer to states of affairs in the outside world, or to much more abstract concepts. How do words and meaning come together in the brain? David Poeppel is a leading neuroscientist who works in many areas, with a focus on the relationship between language and thought. We talk about cutting-edge ideas in the science and philosophy of language, and how researchers have just recently climbed out from under a nineteenth-century paradigm for understanding how all this works.
David Poeppel is a Professor of Psychology and Neural Science at NYU, as well as the Director of the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany. He received his Ph.D. in cognitive science from MIT. He is a Fellow of the American Academy of Arts and Sciences, and was awarded the DaimlerChrysler Berlin Prize in 2004. He is, with Greg Hickok, the co-originator of the dual-stream model of language processing.
0:00:00 Sean Carroll: Hello everyone and welcome to the Mindscape Podcast. I'm your host, Sean Carroll, and let's start with a very quick meta-note about what's going on with the podcast. In particular, I wanted to mention that we now have transcripts of every episode that are going to appear on the webpage, with the individual posts for the episodes. We'll have the entire transcript as soon as it appears. This, of course, costs money. I'm not doing it myself, so a huge thank you to those who are supporting the podcast on Patreon. You're the ones who made that possible. So, I think it's gonna be very good both for accessibility and also for searchability. You can go in and you can see what the podcast is about, even before you listen to it. Search for terms throughout the whole archives. That's gonna be really good.
0:00:45 SC: Moving on to today's show, I want you to think about what is happening in your brain at this very moment. You're listening to my words, or you're reading the words, if you're reading the transcripts. One way or another, there's a signal coming in. Let's just say that you're listening. You're hearing sounds, so there's a vibration going on in your eardrums, words, sentences, and so forth, and you can recognize these words just like you can recognize nonverbal sounds. But there's something else going on. The set of words coming in, we create meaning, we attach to these words, to these sounds that we're hearing some picture of what I'm trying to say, some relationship between these sounds and something out there in the world or some abstract concept, or something like that.
0:01:31 SC: So, how does that happen? How is it that a bunch of sounds, hitting our eardrums, get turned into meaning inside the brain? That's what we're talking about on today's episode with Professor David Poeppel, who is a professor at NYU and also the director of a Max Planck Institute in Frankfurt, Germany. I love this: his Max Planck Institute is called the Institute for Empirical Aesthetics. I have no idea what that means, but I know that what David does is actually study what's going on inside your brain. He studies it not primarily with the standard fMRI technique, where you put someone's head inside a machine (you don't take the brain out) and look at where the blood is flowing to pinpoint where things are happening in the brain. That's very precise in terms of where things are happening, but it's slow; you can't see when things are happening.
0:02:26 SC: Mostly David uses MEG machines, magnetoencephalography machines, to see exactly when a thought is happening inside your brain. In fact, if you have my book, "The Big Picture," you can see an image of my brain, not the conventional wrinkly, crinkly thing you usually see in pictures of the brain, but a very crude image of the quadrants of my brain in which different magnetic fields are appearing as charged particles race around from neuron to neuron. Perfect evidence that I really do have a brain inside my head. So, it turns out that this question we're asking about, how sounds get turned into language and meaning, people have thought about this for a long time. There's a standard model, if you like, but that standard model appeared in the 1800s and has not been updated as much as you would like since then.
0:03:16 SC: So, with his collaborator, Greg Hickok, David Poeppel has suggested an updated model, something called the dual-stream model, which isolates not just one part of the brain but different parts of the brain that are responsible for different aspects of language processing. We'll talk about exactly how that works. And also, just because David is an opinionated guy, we'll talk about all sorts of other issues in neuroscience, including the role of big data, how we're coming along in understanding memory, and so forth. So, let's go...
[music]
0:04:03 SC: David Poeppel, welcome to the Mindscape Podcast.
0:04:06 David Poeppel: Hi there. Nice to be here.
0:04:08 SC: Yeah. It's completely by accident that we met each other. We've known each other for a while. You were a famous participant at the Moving Naturalism Forward meeting back in 2012. But now I happened to be in the area where you live, so, of course, I'm gonna podcast you, and that's great. Let's start by laying the groundwork. I'm trying to think of a way to describe, for the audience, what you do for a living. You're a professor in half of the departments of the various universities that you're a member of. Is it okay to say thought and language?
0:04:41 DP: It's fair enough. The one-liner that I try to remind myself of is that I study how you go from vibrations in the ear to abstractions in the head. So, as we're having this conversation, the only signal you're actually getting is your eardrum vibrating because I'm sending sound your way. And amazingly, that turns into abstract ideas, words, ideas in your head. The fact that that works at all is already amazing and weird, and all the subprocesses are things that I study in my labs.
0:05:14 SC: Yeah. That's a great way of putting it, 'cause really the position of the eardrum, it only vibrates in one dimension. Right? It's only in and out.
0:05:19 DP: That's right.
0:05:20 SC: So, we get one number time stream into our brain, and from that we create everything we think about.
0:05:27 DP: The fact that it works at all is astonishing, and the fact that it works at the speed at which we're doing it is even more astonishing. You have a bunch of... You have tens of thousands of words stored in your head, and we're sitting in my lovely backyard in Connecticut, and it's a little bit noisy, we might see a bear walk by, and yet you can extract complicated information in segments of tens of milliseconds. How does that work at all? And so those are the kinds of problems I worry about, and that includes obviously listening to music, listening to sounds that are not speech, but the major emphasis in my lab is on speech perception and language comprehension.
0:06:06 SC: Just a note for podcast listeners, if a bear does walk by, I will let you know. But we are inside, we're relatively safe. Right?
0:06:13 DP: Ish.
0:06:13 SC: I'm thinking the bear could probably get through these flimsy walls that are surrounding us...
0:06:17 DP: Well, we're looking pretty tasty, so we gotta be careful.
0:06:19 SC: All right. We'll be careful. Good. So, how do you go about doing this? Well, actually let me back up even before we get to how you go about it. What got you here? Did you, when you were 10 years old, start thinking, "I wanna understand how audio signals in my ear turn into abstract thoughts"?
0:06:36 DP: Exactly.
0:06:38 SC: Good for you.
0:06:39 DP: That's exactly what I wanted to know when I was 10, 12, even 16. Yes, as a pubescent boy, those were my main thoughts. No, I got there completely by accident, or partial accident. After college, I really wanted to be an actor, or rather a director. I really wanted to be a director.
0:06:55 SC: You really wanted to direct.
0:06:55 DP: Everybody wants to be a director. I really wanted to be a director partially because my wife is an actress, and she was a successful actress. And then we thought, "Well, one of us should have a job." So, I thought I should perhaps go to graduate school. And I ended up accidentally in a neuroscience lab, working for a distinguished neurobiologist at MIT, learning how to do neurophysiology, and all the different... Using the different tools. But in the back of my head, I liked language. I grew up in a very multilingual environment and it was something I had a gut level intuition about. And people at that point said, "There's a famous language researcher here. You should probably go listen to some of the lectures. His name is Noam Chomsky." I said, "Oh, that's interesting. I'll go to some of those lectures." And I did, and that was sort of like somebody opens the curtains for you. You go to some of those lectures...
0:07:45 SC: A little epiphany, yeah.
0:07:47 DP: And you suddenly have a completely different view of how language can be looked at, studied, investigated, and really that's a game-changing experience. And at that point, I decided to go to graduate school to study how language works, how speech works, and then began to connect it to my interest in biology. It's accidental that... I really did wanna be a director. I'm not kidding. I still wanna be a director.
0:08:09 SC: You're young. You could do it.
0:08:11 DP: But now I'm a director of a Max Planck Institute, so that sort of counts.
0:08:14 SC: No.
0:08:15 DP: But it's not quite...
0:08:16 SC: It doesn't count at all.
0:08:16 DP: It doesn't quite have the same vibe.
0:08:19 SC: So, put Noam Chomsky in perspective for us, since you mentioned him. Obviously, he's a huge name, he's a huge name not only in the academic field of linguistics and psychology but outside. Forget about the outside and the politics. My impression is that... My impression is completely untrustworthy here, so I'm hoping you'll correct it, that Chomsky was extremely influential, but there's a sense of moving beyond him. Or have we simply improved upon what he... Tell us a little bit what he had to say and how we think of it today.
0:08:48 DP: He remains an incredibly influential and very polarizing figure. His influence derives from the fact that he really changed how we do psychology and language sciences and philosophy of mind in the mid-'50s. Based on the series of his early books and papers, he, first of all, effectively got rid of behaviorism. And one can even pinpoint that and...
0:09:16 SC: Tell us what behaviorism is, sorry.
0:09:18 DP: Behaviorism was the dominant view in psychology for decades before that, and it's a very pleasing and simplistic view. Effectively it boils down to there's one principle of the mind, which is the principle of association. It's the basis for all of the theories of conditioning. Conditioning underlies learning. Conditioning underlies effectively all of behavior. So, the term "behaviorism" became used as a catchall phrase for all of psychology, and psychology was based on the principle of association. Now that's a very nice...
0:09:53 SC: So, Pavlov and his dog is a classic.
0:09:54 DP: Pavlovian conditioning and operant conditioning. And then, of course, the most, let's say, egregious direction it went in is Skinner's work. So, Skinner, as you know...
0:10:05 SC: BF Skinner.
0:10:05 DP: BF Skinner, famously a professor at Harvard, worked on the Skinner box, with the presupposition that you could put even your own children in it. And I don't begrudge him that; as a father of three sons, I say, "Yeah." You put them into the box and train them explicitly to respond in selective ways to certain stimuli. And so the stimulus-response paradigm was the dominant paradigm for learning, for memory, for everything in psychology and neuroscience.
0:10:34 SC: Did some of this... I know this is a tangent, but that's okay, we have time. Did some of this reflect the influence of positivism, in the sense that, rather than looking for some underlying mechanisms, we should just look at what happens in the world and describe it as fully as possible?
0:10:46 DP: Yeah. And I think the more sophisticated behaviorists certainly were avid readers of Viennese positivism, I would think. But I think that the most disturbing part of the story of behaviorism is that it's still around.
0:11:01 SC: Of course.
0:11:02 DP: Deeply, and certainly true in my field in the neurosciences. I think it's actually the default position. Now, the influence of Chomsky was to argue, in my view, successfully that you really wanted to have a mentalist stance about psychology. And he had a lot of very interesting arguments. He also made a number of very important contributions to computational theory, to computational linguistics, and obviously to the philosophy of mind. And we're bracketing his political work and his...
0:11:29 SC: Yeah. Which is interesting, but it's separate.
0:11:32 DP: It's completely... It's separate, although I think he doesn't see it as completely separate in a principled sense, but it is separate. And he is, as you may know, very well known in Europe as a political dissident, in some sense, and he's basically not invited on most US channels because he's too obnoxious.
0:11:47 SC: Never, yeah. Of course.
0:11:48 DP: And you just don't hear him... You see him on TV shows in the Netherlands, but in the US, he's in some kind of fringe radio show in Cambridge, Massachusetts, or something like that.
0:11:57 SC: Yeah.
0:11:58 DP: But he is... He's difficult. I actually just finished a chapter last year on, let's say, the influence of Chomsky on the neuroscience of language, because many of us are deeply influenced by that. The fact of the matter is, his role has been both deeply important and moving and terrible, and it's partly because he's so relentlessly un-didactic. If you've ever picked up any of his writings, it's all about the work. He's not there to make it bite-size and fun. He assumes a lot, a lot of technical knowledge and a lot of hard work. And so if you're not into that, you're never gonna get past page one, because it is technical. But that's made it very difficult, because it seems obscurantist to many people.
0:12:40 SC: It's always very interesting, there are so many fields where certain people manage to have huge outside influences despite being really hard to understand. Is it partly the cachet of the reward you feel when you finally do understand something difficult?
0:12:57 DP: I wish that were true. That would mean a lot of people would read, let's say, my boring papers. But I think in the case of Chomsky, it's true because there are superficial misinterpretations and misreadings that are very catchy. The most famous concept is language is innate. Now, such a claim was never made and never said. It's much more nuanced. It's highly technical. It's about what's the structure of the learning apparatus, what's the nature of the evidence that the learner gets. Obviously this is a very sophisticated and nuanced notion, but what comes out is, "Oh, that guy's claim is language is innate," and that's...
0:13:38 SC: But I can remember that and repeat it at cocktail parties.
0:13:40 DP: Exactly. And that's a little bit unfortunate because that's true for all fields. If there are nugget-sized one-liners, they're fun to remember, they're fun to talk about, but they're probably almost always wrong.
0:13:52 SC: That's right.
0:13:52 DP: Stuff is complicated.
0:13:53 SC: Yes, stuff is complicated. Generative grammar is the other phrase that I associate with Chomsky. All of my Chomsky comes from reading "The Language Instinct" by Steven Pinker.
0:14:04 DP: Steve was one of my professors in graduate school, and he did a remarkable job popularizing the language sciences and linguistics, although he's himself actually not a linguist. He's a psychologist. Of course, lots of the interpretation of what people think about Chomsky comes through the lens of how Pinker wrote about it and, of course, that has its own interesting flavors. Steve is a remarkable writer and fabulously interesting thinker, but he has his own lens.
0:14:35 SC: Yes.
0:14:36 DP: And one should probably go to the source and read the actual material to understand.
0:14:39 SC: I'm writing a book about quantum mechanics right now, and everyone should read it and everything I say in it is correct, but it's not necessarily what anyone else thought in the past, even though I tried my best to represent.
0:14:49 DP: No, no. I think in your books, of course, every word is true. It goes without saying.
0:14:53 SC: It's good to know that there's some people out there who like that.
0:14:56 DP: But, yeah, sometimes, especially in, let's say, technical disciplines, you have to do the hard work and you can't cut any corners. You have to actually get into the technical notions and what are the presuppositions, what are the, let's say, hypothesized primitives of a system, how do they work together to generate the phenomena we're interested in, and so on. The concept of generative grammar, it's a notion of grammar that's trying to move away from simply a description or list of factoids about languages, and trying to say, "Well, how is it that you have a finite set of things in your head, a finite vocabulary, and ostensibly a finite number of possible rules, maybe just one rule, who knows, but you can generate and understand an infinite number of possible things?" It's the concept that's typically called "discrete infinity."
0:15:43 SC: Yep, okay.
0:15:44 DP: That's a cool idea, and the idea was to work it out. Well, if that's the parts list, how can you have a system and how can you learn that system, acquire that system? How can that system grow in you, and you become a competent user of it? That's actually subtle, and you can't just...
0:15:58 SC: Of course.
0:16:00 DP: Right? It requires thinking deeply about, what is actually the part that's architecturally given to you as a human being having a human brain. In which sense do we have to be parochial? A human brain has its own properties. And which things are, let's say, general properties of the vertebrate? How do they interact? What do you need? Is there extra special sauce? Do you need God? God forbid. [laughter] And things like that. Of course, this has had a tremendous influence on how we think about the system and how we study it, what kind of methods we can use and what are the bigger questions.
0:16:38 SC: But it's safe to say the mind is not a blank slate. The brain does have functions in there.
0:16:43 DP: Pretty much a good bet. I think you can bet on that.
0:16:46 SC: And when it comes to language, there is something... What I remember from Pinker is that there's, in some sense, a set of switches in the brain, hypothetically, which, by learning different languages, we flip one way or another.
0:16:57 DP: I think that's a pretty fair way to think about it. You can think of it as parameters or something like that. People think about it in many different ways, but we can certainly assume that we have a system... Look, let's back up for a second. Suppose we were talking about the visual system and not the language system, we wouldn't be having this discussion because people would say obviously...
0:17:19 SC: It's a picture.
0:17:20 DP: You have a visual system. You have it because you're a vertebrate brain, and obviously it changes in some specifics as a function of what's around you. This is not newsworthy. But as soon as we talk language, everyone is an expert. Everyone speaks a language. Everyone has a powerful intuition, and the amount of nonsense promulgated, it's astonishing. The level of silliness is kind of...
0:17:44 SC: I'm very sympathetic. I've written three popular books so far. One on the nature of time, one on the Higgs boson, and one on the big picture and the meaning of life. By far, the best Amazon reviews are for the Higgs boson book, because no one has a pre-existing view of the Higgs boson. They're willing to take what you have to say about it. But they think they know the meaning of life, and they think they know how time works, and if you don't agree with what they have to say, they're not gonna be receptive.
0:18:08 DP: Yeah. Again, there's something about... Well, in the arts, I guess, it would be called "connoisseurship." Do we still appreciate the pain in the neck of having to do the hard work to become a connoisseur or have technical knowledge on something? And, well, whatever, it takes a long time, it's hard, and you might not get very good at it, but it turns out to be required. If you wanna become really good at knowing the old masters of the Netherlands, you can become a connoisseur, only if you actually study it. Likewise, with language or the brain or physics, well, damn, sit down, do the work.
0:18:44 SC: Having said that, let's proceed to drastically oversimplify how the brain works and...
0:18:49 DP: Yes, let's drastically do it.
0:18:50 SC: Where did you go after being inspired by Chomsky and started studying how the brain processes what we hear?
0:18:55 DP: Yeah. I studied a lot of the technical aspects of language for a while, and I became, relatively quickly, like everyone else of my generation, I guess, seduced by the possibilities that we now have of recording from the human brain. So, when I was a graduate student, about halfway... Early on in graduate school was the time when the modern brain imaging machines were actually first developed and first rolled out. For many years, we had things like X-rays and CAT scans, but in the late '80s, there was a lot of research using PET scanning, that's positron emission tomography, a very onerous and in some sense invasive technique, to take pictures of the brain while it was processing something, in a complicated way with a lot of analytic steps. And then in the early '90s, there was really a game-changing event, the development of functional magnetic resonance imaging, fMRI we call it. Everybody's probably seen an MRI machine, it's in every hospital. If you've blown out your knee or your shoulder or, God forbid, you've had your head scanned, those are ubiquitous.
0:20:01 DP: And there were a lot of interesting developments both in the physics of magnetic resonance and in engineering and signal processing that allowed us to begin to use these machines to measure and quantify the human brain while something is happening. So, you're lying in this... It looks like a giant hair dryer or something like that, a tube, you're lying in this sausage, and it's really taking pictures. It's called a tomographic technique, or it's an imaging technique. And it was able to take pictures of which parts of your head were, informally speaking, active when you were doing something like listening to words, or listening to a piece of music, or moving your right finger, or looking at a checkerboard. Those are the usual experiments.
0:20:44 SC: And blood flow is the proxy for activity.
0:20:47 DP: Yes, the proxy is blood flow, or actually blood oxygenation.
0:20:49 SC: Okay.
0:20:51 DP: And so that was an interesting technique, because you could take pictures completely non-invasively. It's kind of cool. Imagine, you can take a picture inside someone's head from a meter away. Who knew? With a resolution, by the way, of about a millimeter these days.
0:21:04 SC: I was gonna ask about the resolution.
0:21:06 DP: The spatial resolution now with the machines that we use is on the order of a millimeter, or sometimes even better. There are high-resolution scanners, for instance at 7 Tesla, which is a really pretty substantial magnetic field. And you can scan things with a resolution better than 1 millimeter, which is pretty good. But what do you give up for that wonderful picture? You give up temporal resolution. So now you take a really, really great picture of someone's head, but the dynamics that you're able to capture are very, very slow. What have you given up there? Well, cool picture, but nothing about the change or the actual online processing.
0:21:44 SC: And by the way, a cubic millimeter of brain still has a butt-load of neurons in it, right? It's not neuron-by-neuron imaging.
0:21:50 DP: A butt-load or a shit-ton, it depends on what your units are. If you take a square millimeter of tissue in the brain, in the cortex, in the cerebral cortex, and you look at the cortical column above it, the tissue that's directly above that square millimeter, it goes up about 3 millimeters or so, depending on where you are, and estimates are that it's on the order of 100,000 neurons. And that's just the neurons. There's other junk in there. The parts list of the brain is very complicated. There's lots of little stuff in there. We vastly underestimate what's going on in even, certainly, a cubic millimeter of cortex. We have no idea. It's shameful.
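A quick back-of-envelope check on those numbers, as a minimal sketch; the figures are the rough, order-of-magnitude estimates quoted above, not precise anatomy.

```python
# Rough arithmetic on the column estimate quoted above: ~100,000 neurons
# sitting in the ~3 mm of cortex above a single square millimeter of surface.
column_base_mm2 = 1.0          # square millimeter of cortical surface
column_depth_mm = 3.0          # approximate cortical thickness
neurons_per_column = 100_000   # order-of-magnitude estimate from the conversation

neurons_per_cubic_mm = neurons_per_column / (column_base_mm2 * column_depth_mm)
print(f"~{neurons_per_cubic_mm:,.0f} neurons per cubic millimeter of cortex")
# -> roughly 33,000 neurons in one cubic millimeter, before counting glia,
#    vasculature, and the rest of the "other junk" in the parts list.
```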
0:22:36 SC: But anyway, you were mentioning that we also don't have pinpoint timing of what's going on.
0:22:41 DP: Yeah. Here things get interesting, because we have to... Let's say we're committed to studying the human brain, and there's stuff you can learn from animal research, very important stuff, and you can do other kinds of experiments that we can't and should not do with people. So there, you can use techniques that have even higher spatial resolution, even higher temporal resolution but, except for very extreme medical cases, those are not appropriate. So, we have to use non-invasive techniques.
0:23:07 SC: You're talking about things where animals are sacrificing themselves to the cause of science.
0:23:11 DP: Yes, in very important ways. And if you've ever gone to the dentist, then you should be very grateful for their research. This is a very politically sensitive and complicated topic. To what extent do we support animal research? I'm 100% enthusiastically in favor of careful, responsible, ethically executed, well-managed animal research. There is no alternative for it, and there are currently wild and scary debates about this, in Europe in particular, more active than in the United States. And they're quite terrible, and the debates are irrational, vitriolic, and dangerous, and they are leading to a sharp reduction in careful animal research. Of course, it's not necessary for, as far as I'm concerned, let's say, cosmetics or something like that.
0:24:06 SC: Shampoos.
0:24:06 DP: Shampoos, forget that. But to understand basic principles of physiology... We have no alternative and I think it would be a very peculiar stance not to advance that so much.
0:24:19 SC: But at this point, we do not take human subjects and pry open their brains to look inside.
0:24:22 DP: We do not, and we should not. There are wonderful new techniques that are used, and everyone talks about them. Optogenetics is a particularly exciting one. You can treat cells, you can inject cells with particular light-activated molecules such that you can then control their activity, but you can't do that with people. You can record single cells in animals, but you can only do it under rare conditions in humans, for example, during epilepsy surgery. Look, we have non-invasive techniques that are amazing. We can take MRI pictures of your brain, but then we're sacrificing time. And if you don't believe time matters, how fast do you think our conversation is going?
0:25:08 DP: Our conversation, if you measured it, the mean rate of speech, across languages by the way, it's independent of languages, it's between 4 and 5 Hertz. So, the amplitude modulation of the signal... The signal is a wave. You have to imagine that any signal, but the speech signal, in particular, is a wave that just goes up and down in amplitude or, informally speaking, in loudness. The signal goes up and down, and the speech signal goes up and down four to five times a second.
0:25:35 SC: What does the speech signal mean? Is this what your brain waves...
0:25:38 DP: The speech signal is the stuff that comes out of your mouth. I'm saying, "Your computer is gray." Let's say that was two and a half seconds of speech. It came out of my mouth as a wave form, and that's the wave form that vibrates your eardrum, which is cool. But if you look at the amplitude of that wave form, it's signal amplitude going up and down. It's actually now... This is not debated, it's four to five times per second. This is a fact about the world, which is pretty interesting. So, the speech rate is... Or the so-called modulation spectrum of speech is 4 to 5 Hertz.
0:26:13 SC: Is it very different for different mammals?
0:26:15 DP: It's a good question. We don't know so much about that, because nobody vocalizes as much as we do. It's probably a little different, because it has to do with the cortical processing rate and, of course, the biomechanics of the articulators, so it's likely to be a little different. Music, incidentally, has a modulation spectrum that's a little bit slower. It's about 2 Hertz. That's equivalent to roughly 120 beats per minute, which is pretty cool.
0:26:38 SC: Favorite number, yeah.
0:26:39 DP: Favorite number really comes out when you do the physics of signals. If you take dozens and dozens of hours of music, and you calculate what is the mean across different genres, what is the mean rate that the signal goes up and down, it's 2 Hertz. Cool fact. You should remember it.
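As an illustration only, here is a minimal sketch of how one might estimate a signal's modulation rate, the "going up and down" of the envelope described above. The function name and the toy signal are our own inventions, not the lab's actual analysis pipeline; speech should land around 4 to 5 Hz and music around 2 Hz (2 cycles per second × 60 seconds ≈ 120 beats per minute).

```python
# A minimal sketch (not the actual research pipeline): extract the amplitude
# envelope of an audio signal, then find its dominant slow fluctuation.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def dominant_modulation_hz(waveform, sample_rate):
    # Amplitude envelope via the analytic signal.
    envelope = np.abs(hilbert(waveform))
    # Keep only slow fluctuations (below 20 Hz) before measuring the peak.
    b, a = butter(4, 20.0 / (sample_rate / 2), btype="low")
    envelope = filtfilt(b, a, envelope)
    # Power spectrum of the envelope; ignore the DC component.
    spectrum = np.abs(np.fft.rfft(envelope - envelope.mean())) ** 2
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

# Example: a toy "syllable train", broadband noise amplitude-modulated at 5 Hz.
sr = 16_000
t = np.arange(0, 10, 1 / sr)
toy_speech = (1 + np.sin(2 * np.pi * 5 * t)) * np.random.randn(len(t))
print(dominant_modulation_hz(toy_speech, sr))   # ~5 Hz
```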
0:26:54 SC: It's science.
0:26:54 DP: That's science, besides... The hell with that one. Speech happens very fast, and at that rate... If our mouth opens four to five times per second, that's not fast enough yet because, of course, inside those... That's roughly the rate of syllables. But syllables have internal structure, so that means it must be going even faster, faster than every 100 milliseconds. If you really wanna understand what's going on as stuff comes into your head, whether it's hearing, or vision, or touch, you need devices that can measure things at the rate of milliseconds, or tens of milliseconds, or thousandths of a second. That's absolutely necessary, because that's the speed at which our mind and our perceptual apparatus works. There are other tools that we use...
0:27:36 SC: And an fMRI was...
0:27:37 DP: An fMRI's time resolution is on the order of, at best, a second. But more likely seconds, five seconds, eight seconds...
0:27:44 SC: So, we want a hundred-fold improvement in time resolution.
0:27:46 DP: Now, we need different machines. We have one kind of machine, like MRI, that takes really detailed pictures in space but has miserable temporal resolution. On the other hand, there are other tools. The most well-known one is electroencephalography. It's been around since the 1920s, and those are electrical techniques. They have very good temporal resolution. You can measure things on the outside of the head using electrodes, and now you have very, very high temporal resolution, as high as you want. It depends on your processor. Let's say, one thousand samples per second, so really every millisecond you measure the data. That's still very low for some processes in physics. That would be ultra slow.
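To make the mismatch concrete, here is a toy comparison using round numbers from the conversation; it's a sketch, not instrument specifications.

```python
# Rough comparison of the temporal resolutions discussed above
# (illustrative round numbers only).
methods = {
    "fMRI (one volume every few seconds)": 2.0,    # seconds per sample, roughly
    "EEG/MEG at 1000 samples per second": 0.001,   # 1 millisecond per sample
}
syllable_duration_s = 1 / 4.5    # speech modulation of ~4-5 Hz

for name, seconds_per_sample in methods.items():
    samples = syllable_duration_s / seconds_per_sample
    print(f"{name}: ~{samples:.0f} samples per syllable")
# fMRI: ~0 (whole syllables fall between volumes); EEG/MEG: ~200+ samples.
```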
0:28:28 SC: We're zeptoseconds, yeah, but that's okay.
0:28:31 DP: Yeah. Between friends, what's the big deal? What's a few orders of magnitude? You wanna have those machines, too, because you wanna... Since processing is fast, you wanna be able to understand what's actually happening at those time scales. And for that, my own favorite technique is one called magnetoencephalography. And that measures the magnetic fields generated by current flow in your brain, and it's the most sensitive technique we have to measure the human brain non-invasively. It looks like a giant hair dryer, and that giant hair dryer has little detectors in it. Typically, you don't see them, obviously. They're inside the hair dryer, and they're swimming in liquid helium to keep them at superconducting temperature. And there are little coils, let's say about 150 of them, inside, surrounding your head. And then you can measure the brain activity at, let's say, a millisecond resolution and reconstruct as best you can how fast and where things happen. You really wanna pair these different techniques if you wanna have an increasingly comprehensive picture. And I remember, not that long ago, I stuck you into one of those machines.
0:29:41 SC: In fact, I got a lot of mileage out of that.
0:29:44 DP: I stuck your head into an MEG machine, and we measured your brain response to different tones and a few visual stimuli, and it turns out your brain worked.
0:29:52 SC: You confirmed the existence of my brain. I was very happy about that.
0:29:55 DP: So, the news was good. All is well.
0:29:57 SC: The internet has mixed reviews on the existence of my brain. So, I was glad you could confirm it using science.
0:30:01 DP: We found all the parts and no extra parts.
0:30:03 SC: Exactly. Exactly.
0:30:04 DP: So, it's all good.
0:30:06 SC: And I liked it especially because there's physics, right? The reason why you could get the signal is because a thought is manifested by charged particles being accelerated...
0:30:15 DP: That's right.
0:30:15 SC: And that's where the magnetic field comes from.
0:30:17 DP: It's amazing. Again, amazing that it works, right? The fact that we can have a conversation is a mechanical wave vibrating your ear, which turns into electricity in your auditory periphery, which sends a signal using a code we don't yet understand into different parts of your brain, where it's decoded or represents information using some code that we also don't understand, using electricity which flows around generating magnetic fields. It's wild out there.
0:30:50 SC: You have a lot to do. There's a lot of science left to be done here.
0:30:53 DP: But it's pretty cool that we can do it at all. It suggests that using the insights and tools of physics, and the toolbox of physiology, is the best way to go, if you have the theory.
0:31:09 SC: There you go.
0:31:11 DP: That's the other tool. You need some ideas.
0:31:12 SC: That's the next step, I was gonna say. We have learned something by doing all these things. We've changed how we think about how language is processed. It's not just, "Maybe we can do this." We've made progress.
0:31:22 DP: We've made good progress. It's very hard to measure in this area of research what would constitute compelling progress. In what universe would we say, "Holy cow, we have a true explanation"?
0:31:38 SC: We got it once and for all, yeah.
0:31:39 DP: And partly, that has to do with what do we think is an explanation. And that's a very complicated concept in its own right, whether you're thinking about it from the point of view of philosophy, the sciences, an epistemological idea. But we do have, let's say... What we have for sure is better descriptions, if not better explanations. And the descriptions have changed quite a bit. I don't know what the time scale is that would count as success, but I think it's... We can sort of say that we've had the same paradigm for... We've had the same neurobiological paradigm for about 150 years since Broca and Wernicke, since the 1860s actually. A very straightforward idea...
0:32:22 SC: So, that's older than electromagnetism?
0:32:23 DP: Yes.
0:32:24 SC: That's the same age.
0:32:25 DP: That's the same... And that should worry you.
0:32:27 SC: It's older than statistical mechanics and...
0:32:29 DP: What you should worry about that is that that model is still... What is that model? Let me tell you. The idea was, well, language is some faculty of the mind. It lives, whatever that means, in the left hemisphere typically.
0:32:44 SC: So the model says.
0:32:45 DP: So the model says. And there's a blob on the front part of your brain, in the frontal lobe on the left side, and there's another blob of tissue in the posterior part of your brain, in the temporal lobe. They're connected by a wire, which has the charming name arcuate fasciculus, and that's what you get. You've got an area for production, an area for comprehension, and a wire that connects the two. And if you go and... There's a couple extra wires, but they're not talked about much. If you go to classic neurology textbooks now, and if you have a stroke, the chances are probably better than 10 to one that the neurologist examining you will refer to that figure and that model. And that really should worry you.
0:33:31 SC: And was that model based on people dissecting human brains?
0:33:34 DP: That model was based on, really in some sense, the oldest approach in neuroscience. It was based on patient data, famously the patient, a guy named Leborgne, known in the literature as "Tan." And Broca, who was a neurologist in Paris, examined him, tested him behaviorally, noticed that this guy couldn't say much. Remarkably, a few days after he gave him a thorough examination, the man died, and Broca was able to look at his brain. "Huh. Who knew? Why does it never work out that way?"
0:34:07 SC: Yeah. Well, let's see, I went and got examined in your machine, so I think I'm happy that it doesn't always work out that way.
[laughter]
0:34:13 DP: They found correlations. It's what's called deficit-lesion correlation. So, Broca, in this case, found a lesion in this patient's brain and was able to correlate it with the particular behavioral deficit. He said, "Look, it's interesting. If this part of the brain is broken and this behavior is broken, there must be some kind of causal relationship between a particular brain area and the function it executes." Now, that's a very reasonable hypothesis, subject to subtle things one could change about it, but it seems like a good start. Some years later, the German neurologist, Wernicke, made a similar discovery for a different part of the brain. He found patients that had a lesion or a brain injury due to stroke in the posterior part of the brain, in the temporal lobe. And that patient, or those patients, had trouble understanding. They could talk, they were very fluent, but didn't make any sense. So, the assumption was, "Okay, so their comprehension part is broken."
0:35:10 DP: We have a production part and a comprehension part, and so now we have a kind of understanding. Now, what's the problem with that? First of all, that's not a theory of language. For that, we had to wait another 100 years until the language sciences were more mature. And there, the work of Chomsky in the 1950s played an enormous role. But for those 100 years, from 1861 to, let's say, 1961, the theory of language that was at the basis of how we think about brain and language was pretty naive. It was something we could come up with right now with a piece of paper. Of course, it's amazing how powerful that model was and how hard it was to get beyond it. But now we've known for a...
0:35:51 SC: Sorry, remind me again about the two blobs connected by the wire?
0:35:54 DP: There's an area called Broca's region, Broca's area named after the neurologist Paul Broca, an area called Wernicke's region, after the neurologist Carl Wernicke, and then a set of wires really in between called the arcuate fasciculus. That's basically tissue... Those are literally the wires connecting one blob to the other, and the idea is, well...
0:36:14 SC: Yeah, the role of the two areas.
0:36:16 DP: The role of the two areas is one is for production and one is for comprehension.
0:36:19 SC: Okay, got it.
0:36:20 DP: Very simple idea and very elegant. Very elegant.
0:36:23 SC: If only the world were so simple.
0:36:24 DP: If only. So, models are doomed to be true for a while. In this case, this one is very powerful because it has the elegance of simplicity, but it's also empirically wrong. It's wrong for many reasons: patients with those lesions turned out not to have those syndromes, the brain organization is much more complicated, the wiring diagram is gazillions of times more complicated. We've known for certainly 40 years that it's incorrect. And now, in the last 10 years, I'm happy to say, there are a handful of models that really go well beyond that, and that show us how long the parts list actually is and how much more complicated the structure is. Partly that's because these contemporary models are much more in tune with what we know about the biology of the human brain and, likewise, what we know about the psychology of human language. So, they try to link the, let's say, models of linguistics and psycholinguistics to models of neurobiology. And surely they're inadequate as well, but hopefully they're wrong in an interesting way. And they are de facto the state of the art right now.
0:37:39 SC: One hopes for progress, not for definitiveness, especially when it comes to the brain.
0:37:43 DP: That's a nice way to say it, yes. Look, the brain is a complicated place. You work on big things, really, really, really big places. The place I work on is small, by comparison, it's just the size of two fists really squished together. That being said, my small place is pretty complicated.
0:38:01 SC: Way more complicated. I'm very quick to admit this.
0:38:01 DP: It has a hundred billion parts. So, the current estimate for the human brain, it has 86 billion cells and each cell has... If you think about it in the Facebook sense, each cell has between 1000 and 10,000 friends.
0:38:19 SC: Are they really friends though or are they just acquaintances?
0:38:21 DP: Are they friends? Well, there are likes, there are acquaintances. Who knows how much they really care? But then if you imagine that they're, of course, communicating with each other electrically and chemically, the computational complexity of the problem gets out of hand in a hurry. This is one of the reasons we don't really have the kinds of theories that are successful and adequate at the moment.
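The scale of that combinatorics can be sketched with the rough figures quoted above; these are illustrative orders of magnitude only.

```python
# Order-of-magnitude arithmetic for the connectivity figures above.
neurons = 86e9                        # ~86 billion cells
friends_per_neuron = (1_000, 10_000)  # connections ("friends") per cell

low = neurons * friends_per_neuron[0]
high = neurons * friends_per_neuron[1]
print(f"~{low:.0e} to ~{high:.0e} connections")  # ~9e13 to ~9e14 synapses
```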
0:38:43 SC: Right. So, you are one of the originators of something called the dual-stream model. You complicated the simple model a little bit by imagining there's more than one thing going on. Is it possible to simplify that enough to podcast language?
0:38:58 DP: Sure. That's, of course, the correct model.
0:39:01 SC: Of course, that we know, yeah.
0:39:02 DP: That was an attempt by a colleague of mine, Greg Hickok, and me. We've worked together on this kind of thing for many years, trying to bring together data from patients, patients with stroke, and imaging, and biology, and linguistics to come up with a much more, let's say, biologically realistic, computationally explicit, and theoretically well-motivated idea. And what we really did is we borrowed or, let's say, we adopted and adapted the standard model of vision actually. So, what do you have to do when you see something? The most straightforward way to think about vision is you have to locate things and then you wanna know what they are.
0:39:46 DP: So, there's a "where" system and a "what" system. And that's actually not that much of a simplification, as it turns out. So, there's a whole chunk of brain tissue... The human brain, the vertebrate brain, has enormous amounts dedicated to the visual system, and so there's an enormous amount dedicated to processes that identify objects, the so-called "what" system, and those run, as it turns out, along the temporal lobe. This is sort of the side of the brain under your ears. And this series of regions, of which there are dozens actually, their job is to extract the information that lets you actually identify things. So, how do I know this is a glass of water, that's a glass of wine, this is a chair, that's a hawk flying by? You have to identify the object. That's super useful, you want that.
0:40:36 DP: However, you also want another thing. You need to know where it is relative to where you are. You need a system that allows you, for instance, to regulate your eye movements, figure out where you're reaching to grab something, see that the saber-toothed tiger is coming from the left and not the right. Vision has really been worked out in wonderful detail, very, very elegantly. We have one stream principally responsible for localization information, other things as well, but that's a good summary statement, and another anatomical stream with sub-areas responsible for identifying objects. Very cool idea. By the way, you've now bought yourself an interesting problem. How do you put them together?
0:41:20 SC: Well, I was gonna say that it could have been that there was really only one system that did both functions, but it's two different systems both operationally but also literally where they are in the brain, the actual neurons doing that.
0:41:31 DP: They're literally different streams of information, yeah. This was discovered a long time ago, in the '60s, in the hamster actually, this notion of multiple sensory areas and multiple different streams responsible for different things. And in some sense, it's an engineering idea that you then see replicated in the brain. You want sub-specialization, because the circuitry in these areas really does things optimized for that thing. Let's say you wanna really identify an object, you wanna have very high spatial resolution. You really wanna be able to see the details, analyze its surface, make guesses about how heavy it is, and so on, whereas you don't care so much when you just need to, let's say, move your eyes to the right to run away or something. So, we basically stole that idea.
0:42:16 SC: In the best scientific tradition.
0:42:17 DP: In the sense of adopting and adapting, and said, "Well, what if the speech and language system actually did something that's not that dissimilar, capitalizing on similar computations?" The visual system is pretty old, but it's pretty useful. One of the things you have to do in the language case is, what do you want from the information that you have? Well, one of the things you want is the content, the "what." I need to recognize words, I need to string words together to recognize meaning. I need to be able to tell the difference between "Dog bites Sean" and "Sean bites dog," which would be uncool.
0:42:52 SC: Yeah. I try to stay away from that.
0:42:52 DP: Those are the same set of words, but they mean something completely different, so you need to actually... In this case, the particular ordering has a clear consequence for the interpretation. So, we reasoned that maybe the brain capitalizes on the same computational principles. You have one stream of information that says, "Look, what I really need to do is figure out what am I actually hearing, what are the words, how do I put them together, and how do I extract meaning from that." And you have another stream that really needs to be able to deal with, "Well, how do I translate that into an output stream?" Let's call it a "how" stream or an articulatory interface.
0:43:33 DP: Why would you do such a thing? Well, let's take the simplest case of a word. And so what's a word? Word is not a technical concept, by the way. Word is an informal concept, as you remember from your reading of Steve Pinker's book. The technical term here would be morpheme, the smallest meaning-bearing unit. But we'll call them words, words roughly correspond to ideas. So, what is a word? You have a word that comes in. My word that comes into your head now is, let's say, "computer." And as it comes in, you have to link that sound file to the concept in your head. It comes in, you translate it into a code we don't know, let's say Microsoft brain, and that code then gets linked somehow to the file that is the storage of the word "computer" in your head. Now, in your head somewhere there's a file, an address, that says the word "computer," what it means for you, like, "I'm on deadline," "Oh, this file," or, "Goddamn, my email crashed." But there's many other things. So, you know what it means, you know how to pronounce it, you know a lot about computers, but you also know how to say it. So, it also has to have an articulatory code.
0:44:43 DP: Now, here comes the rub. The articulatory code is in a different coordinate system than all the other ones, because it's in the motor system. It's basically in time and motor coordinates... People call it joint space, because you move articulators: you move your jaw, your tongue, your lips. So, the coordinate system that you use as a controller is quite different than the other ones. You have to have areas of the brain that go back and forth seamlessly and very quickly, because speech is fast, between an articulatory coordinate system for speaking and an auditory coordinate system for hearing. And some coordinate system, yet unspecified, which we don't understand at all, for meaning. You're screwed. So, even something banal, like knowing the word "computer," or "glass," or "milk," is already a deeply complicated theoretical problem.
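A toy sketch of the bookkeeping problem being described: a single word has to tie together codes that live in very different coordinate systems. All field names and values below are illustrative inventions, not a claim about how the brain actually stores a lexical entry.

```python
# Illustrative only: one "word" bundles codes from different coordinate systems.
from dataclasses import dataclass, field

@dataclass
class LexicalEntry:
    spelling: str
    phonological_code: list[str]   # auditory / sound-based representation
    articulatory_code: list[str]   # motor-side plan: jaw, tongue, lips
    semantic_features: dict[str, bool] = field(default_factory=dict)

computer = LexicalEntry(
    spelling="computer",
    phonological_code=["k", "@", "m", "p", "j", "u", "t", "er"],     # schematic
    articulatory_code=["velar-closure", "lip-rounding", "alveolar-tap"],  # schematic
    semantic_features={"artifact": True, "animate": False},
)
# Recognizing or producing the word means translating between these codes,
# quickly and in both directions.
```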
0:45:33 SC: In the visual case, the dual streams... What do we call them in the visual case?
0:45:39 DP: Yeah. You simply call it the what and where. The dual stream is simply that you subdivide problems into multiple streams, like an engineer would.
0:45:44 SC: But those particular streams are what things are and where they are. And then in the audio case, the hearing case, it's what they mean, is that one of the streams?
0:45:53 DP: Well, let's say there's an interface to the meaning system, let's say structure and meaning, and an interface to the motor system. So, we call it a...
0:46:02 SC: Say the words, for example.
0:46:05 DP: We called it a sound to meaning interface and a sound to articulation interface.
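Schematically (and only schematically), those two interfaces can be pictured as two routines fed by the same auditory analysis. The function names below are made up for illustration; nothing here is anatomical or part of the published model's formalism.

```python
# A purely schematic rendering of the two interfaces named above.
def auditory_analysis(sound: str) -> dict:
    # Shared front end: extract spectrotemporal features from the input.
    return {"features": sound}

def sound_to_meaning(analysis: dict) -> str:
    # Ventral-style stream: map features onto stored words and combine them.
    return "lexical / conceptual representation"

def sound_to_articulation(analysis: dict) -> str:
    # Dorsal-style stream: map the same features onto motor/articulatory plans.
    return "articulatory plan"

shared = auditory_analysis("incoming speech waveform")
meaning = sound_to_meaning(shared)
motor_plan = sound_to_articulation(shared)
```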
0:46:10 SC: It does sound like it's a slightly different problem in the sense that language is where these meanings come from, or at least often. Sometimes we just yelp, or scream, or whatever. But the vision problem seems much more straightforward. Any vertebrate is gonna know where things are and what they are. We humans have a special problem when it comes to sound, which is that we wanna interpret these in much more abstract ways.
0:46:35 DP: Yeah. It's quite true that you don't wanna over-analogize here, and there are aspects of this which to me seem quite different. One of the main things you do in the language case is what's technically called "compositionality." It's to take things and put them together. And it's not so obvious how that's true in the vision case, although it may be, I don't wanna overstate my case here. But the kinds of things you have to do in language and vision are different, and certainly the output systems are different, so the eye movement system is not like the speech system, or something like that. But what you could imagine is that certain of the subroutines that are executed along the way are shared. For instance...
0:47:24 SC: That's a very common evolutionary strategy, the brain is always borrowing, the body is always borrowing old systems.
0:47:26 DP: That's right. That's exactly right. You wanna basically recycle stuff. One reason that Greg Hickok and I argued for this particular dual-stream position... Ours is now one of a few. There's a handful of these. We, of course, think ours is still the best, although it's 10 years old by now... But it's de facto one of the standard ones. I'll tell you a fun sociological story about that in a second. One of the things that might be shared is this notion of a coordinate transformation. The reason I'm very attached to this is that the same part of the brain that we argue does this for the speech case, for the speech perception-to-production sensorimotor case, has to do a similar thing in the eye movement case. If you take the problem... Here's a simpler problem that everybody can do for themselves right now. You're sitting... While you're listening to this magnificent podcast, you have a glass of wine in front of you.
0:48:20 SC: And we do, by the way, gentle listeners, have glasses of wine in front of us.
0:48:24 DP: Which we do. And I now really wanna reach for this glass of wine. Terrific red wine, by the way. So, what do I have to do to execute that ostensibly simple thing? The first thing is that the glass falls onto my eyeball, onto my retina in particular, the surface at the back of my eye that does the initial encoding. So, the coordinates of that are retina-centric. That is a particular two-dimensional sheet, and the glass is now falling somewhere on that sheet, and the information now goes into my head. But now, note that I can move my eyes. Now, it's no longer in the coordinates of the retina but in the coordinates of the eyeball. Now, I can move my head. I can also move my trunk. But in the end, what I'm trying to do is reach for the damn glass of wine, so it has to be in coordinates that are relative to my trunk and to my arm. Because I'm gonna reach my arm out, and I need to have knowledge of what the current position of my hand is, where my hand needs to be, how hard my hand needs to grasp.
0:49:29 DP: So, the simple act of reaching for a glass of wine, or a pencil or anything ever, requires a series of transformations, conceptually. It doesn't mean you're literally doing a series of equations in there, but maybe; in any case, you need to transform the information into a suitable format. And so if it comes in, in eye-centered coordinates, and has to go out in hand-centered or muscle-centered coordinates, what gives? How do you do that? That didn't come for free. So, the regions of the brain that do that, in the posterior part of the parietal lobe, but a different part, more on the top of your head, likely are optimized for that kind of computation. And we reasoned, "Well, look, if that's the same kind of mathematical problem, maybe that's really well implemented there, and maybe the speech problem is similar in kind."
0:50:16 DP: Now, obviously, the inputs and outputs are quite different. You're gonna get, let's say, informally speaking, a sound file in and some kind of motor command out, but the kind of problem is the same kind of problem. And then you have to have some kind of basis function, you have to transform it into a different thing. That's why we borrowed the thing from vision and tried to say, "Well, this is computationally similar." And that's why we think this constitutes a form of progress, because we try to be explicit about the set of operations that you really do have to do in order to achieve what's ostensibly almost idiotically simple. But the bottom line is we do not know how we recognize a single word or a single object.
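Here is a minimal, made-up sketch of the coordinate-transformation idea in two dimensions: the same target has to be re-expressed from an eye-centered frame into a trunk-centered and then hand-centered frame before a reach can be planned. The numbers and frame layout are invented for illustration and have nothing to do with real anatomy.

```python
# Toy 2-D example of chaining coordinate frames (eye -> head -> trunk -> hand).
import numpy as np

def frame(angle_rad, tx, ty):
    """Homogeneous transform: rotate by angle, then translate by (tx, ty)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0, 1.0]])

# Where each frame sits relative to the next (all values made up).
eye_in_head   = frame(np.deg2rad(10), 0.03, 0.00)   # gaze rotated 10 degrees
head_on_trunk = frame(np.deg2rad(-5), 0.00, 0.30)   # head sits atop the trunk
hand_on_trunk = frame(0.0,            0.25, 0.10)   # current hand position

glass_in_eye_coords = np.array([0.1, 0.4, 1.0])     # target in eye-centered coords

# Chain the transforms: eye -> head -> trunk, then re-express relative to the hand.
glass_in_trunk = head_on_trunk @ eye_in_head @ glass_in_eye_coords
glass_in_hand = np.linalg.inv(hand_on_trunk) @ glass_in_trunk
print(glass_in_hand[:2])    # the reach vector the motor system actually needs
```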
0:50:57 SC: I do have a question, but first, your sociological story. You promised me.
0:51:01 DP: Okay. This is how science sometimes works, and it has funny parts and slightly less amusing parts. About 10, or maybe by now 15, years ago, when Greg Hickok and I started working this out, one of the things we argued for was this dual stream concept, and another thing we argued for was that things really are much more bilaterally organized. And at that point, we were still extremely young. Well... Ish. Younger than now.
0:51:31 SC: Now you're only mostly young.
0:51:32 DP: Now I'm a little more mature. So, Greg and I wrote the stuff up and we sent it to... The initial reaction to our work was that we were basically crazy because there was a model from the 19th century, it worked, it was clinically useful, and we were accused of being charlatans and the most naive bunch of yahoos who didn't know the first thing about any of the relevant disciplines, which kind of bummed us out.
0:52:01 SC: Yeah.
0:52:02 DP: We were like, "Okay. We were just reading literature and doing our work." It's true that we departed pretty drastically from the standard model at that point, but we thought we were being very motivated by vision, thinking about linguistics, thinking about the biology of the brain. People basically dismissed us as complete nut jobs.
0:52:18 SC: The problem is that people who sound like complete nut jobs... If you do have a tremendously important breakthrough that changes the world, you will be told you're a complete nut job. But most people who sound like complete nut jobs do not have tremendously important breakthroughs that will change the world.
0:52:31 DP: Yeah. We thought we were being pretty careful in our reasoning. Of course, our feelings were... Because we were young, we were both, I think, assistant professors at that point, and our feelings were hurt, understandably, and... But then ironically... People started saying, "Well, maybe they're not so stupid." The data started amassing, and the people started really thinking about it and reading it and paid a lot of attention to this thing. And now, 10 years or 15 years later, it's become more or less a standard thing and now it is the textbook model. But now, ironically, all the young people stand up and basically take shots at us about how naive and how stupid... We never had the good years.
0:53:10 SC: You never got the good years, yeah.
0:53:11 DP: Initially, we were the cranks who just didn't know, and now we're just the old guard who just really isn't on the cutting edge.
0:53:17 SC: You were outdated even before you were accepted.
0:53:20 DP: So, sometimes we, a little wistfully, think, "Don't we ever get a break here?"
0:53:26 SC: The answer is "No, you do not get a break."
0:53:27 DP: You do not get a break. We're obviously happy that people are interested in this, but it would have been nice to have two years where people say, "Good for you, nice idea. It's probably wrong, but good for you."
0:53:37 SC: I know it well. But I'm thinking, is there an obstacle to understanding both vision and auditory signals that we kind of, in the back of our minds, know a little bit too much about how computers work? When we take a picture with a camera, we have pixels and we imagine there's just data of where things are and what color they are, and the brain is much more based on extrapolations from incomplete data both in speech and in vision. Is that getting in the way of solving this important problem of how we go from the basic input stream to the meaning inside?
0:54:14 DP: Well, I think that's not quite right; the pixel or camera metaphor for vision is also wrong. Just as hearing and language comprehension are entirely constructive processes, so is vision.
0:54:34 SC: No. In the brain, it is, yeah. I'm saying, for a camera, it's not.
0:54:35 DP: For a camera, it's not. So, we're taking the camera analogy, we don't wanna take that too far. What we do know and we've known effectively since, I wanna say, Helmholtz but probably earlier.
0:54:46 SC: Helmholtz, yes.
0:54:47 DP: Helmholtz is always right about most things.
0:54:48 SC: Yes.
0:54:50 DP: There's a good line by, I wanna say, David Hubel, the Nobel Prize-winning neuroscientist who discovered fundamental principles of the early visual cortex. He and Torsten Wiesel won the Nobel Prize in 1981. I vaguely remember hearing this in a lecture that David Hubel gave many years ago: most of what we do are footnotes to Helmholtz.
0:55:13 SC: Very nice.
0:55:14 DP: Which I thought was demoralizing but pretty cool.
0:55:18 SC: You know Whitehead's line, that "philosophy is just footnotes to Plato."
0:55:21 DP: Yes.
0:55:21 SC: That's where he got this.
0:55:25 DP: I think the notion that it's entirely constructed, that you need a computational theory, using the word "computational" loosely, is now completely convincing. The way we do things is predictive, for instance; most of what we do is prediction. The data that you get vastly under-determine the percepts and experience that you extract from them, so it's a filling-in process. You take underspecified data, which are probably noisy anyway, and then you build an internal representation that you use for inference and for action. Those are the two things you presumably wanna do most: you wanna not run into things and you wanna think about stuff. There, I think...
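A toy illustration of the "prediction plus noisy, underspecified data" idea: a prior expectation and a noisy observation are combined, each weighted by its reliability. The numbers and the syllable-duration framing are invented for illustration, not taken from any actual model.

```python
# A toy version of "prediction + noisy data -> percept": combine a prior
# expectation with an underspecified, noisy observation, each weighted by
# how reliable it is (precision = 1/variance).
def perceive(prior_mean, prior_var, obs, obs_var):
    w_prior = 1.0 / prior_var
    w_obs = 1.0 / obs_var
    return (w_prior * prior_mean + w_obs * obs) / (w_prior + w_obs)

# e.g., you expect a syllable duration of ~200 ms but the noisy input says 260 ms
print(perceive(prior_mean=200.0, prior_var=400.0, obs=260.0, obs_var=1600.0))  # ~212 ms
```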
0:56:14 SC: Story of my life, not running into things and thinking about stuff.
0:56:17 DP: Yeah. We all have the same issue there.
0:56:18 SC: I often succeed.
0:56:19 DP: That's better than I do. I make both mistakes. I think poorly. This actually brings us back to what we were talking about earlier. The influence of Chomsky and mentalism was the embracing of what was and still is called the computational theory of mind. Very influential. I think people like Fodor played an enormous role in this and subsequent...
0:56:48 SC: Jerry Fodor, the philosopher.
0:56:50 DP: The philosopher Jerry Fodor. And then subsequently, people like Dan Dennett picked it up. It's de facto, as far as I can tell, the most reasonable way to think about the mind. There are no alternatives I find even vaguely credible. Not that they're not all over the interwebs.
0:57:05 SC: You can find them out there, yeah.
0:57:08 DP: But these are... What has made this helpful is the requirement to make things explicit, sort of to think a little bit like a program, to really spell things out. In some sense, it's treating work on the mind and brain, which I don't take to be particularly different from any other discipline, biology or physics, the same way: your first job is to identify the parts list. What is the ontological structure of your domain? "What are actually the primitives?" is one way to put it, or, I just like to say "the parts list." It sounds less complicated. What are the smallest elements that you need to use to generate the phenomena? And then you need to identify the forces or the interactions between the primitives that generate the phenomena under study, and the computational theorists have been very good about that.
0:57:58 DP: Obviously, one can do better because we don't... What the primitives are is a research program, just like in physics. It takes decades and decades. We don't know what the smallest pieces are. 'Cause it turns out to be really difficult. Every time we look, it's a little bit smaller, it's changed a little bit, or it's... For example, now, if you ask someone, "Well, what do you think is the relevant part of the brain?" They're gonna say, "It's a neuron." I'm like, "Well that's a nice idea, but maybe it's a subpart of a subpart of a neuron," or it could turn out that it's gonna be five neurons wired up a certain way, namely, two are capacitor and one's a... Who the hell knows? We simply don't know what the encoding of information is.
0:58:36 SC: Well, even for the neurons, it's progress, in the sense that... I get the impression that at some point in the history of psychology, it was thought that the brain was a bunch of blobs interacting with each other, and just the progress that has been made by thinking of it as a network of neurons, and they're not all connected to each other equally. There's some hierarchical structure there.
0:58:55 DP: That brings up a very interesting and slightly dodgy point. Nobody would argue that the network metaphor isn't a necessary and important research agenda. That being said, isn't that just kicking the can down the road? Before, you said, "Well, it's this piece of tissue and we don't understand it." Now, you say, "Well, it's five pieces of tissue with a bunch of wires interacting, and we don't understand those." So, simply saying, "Well, it's a network," is kind of punting on the problem for me.
0:59:32 SC: It's a tiny, little thing. I definitely get that, yes.
0:59:34 DP: It is very... Look. We're all on board with it. Obviously, everybody agrees. It's a super complicated dynamic system with many interacting parts. We're trying to just figure out how the extremely high-dimensional dynamics of the things work. But it's, to me, not a mechanistic answer. It's a metaphoric extension... It's a metaphor, not a mechanism, to say it's a network.
0:59:57 SC: Well, how far does... There has been progress made. How far does that get us to the stated goal of understanding what happens when we hear words like "freedom" or "love," words that don't refer to objects out there that we can point to, but to abstract concepts?
1:00:14 DP: Very difficult question. The vast majority of standard work, shall we say, refers to, let's say, the so-called "open class" items, or nouns and verbs, chairs and dogs, and bears, and things like that.
1:00:28 SC: No bears yet, by the way.
1:00:30 DP: No bears, although some deer do come by over there; maybe I brought them to New York. So, the issue of abstraction is particularly difficult, because you no longer wanna deal with things like, "Oh, this concept has a bunch of features," where those necessary and sufficient features make you a member of the category "honesty." Now, there are such ideas, namely the concept of embodied cognition, very, very popular these days in psychology and neuroscience. I can't say that I find it coherent, but certainly, be my guest and work it out, and then I'll watch the movie. It's all good. But it gets even more gnarly when you think not just about abstract concepts. Let's back up for one second. How does thought partition, roughly? You might say, "Well, there are concrete things that we think about, like dogs, cats, and tables." And the untutored intuition says, "Well, those are the easy ones." Yeah, not easy either. Then there are abstract things.
1:01:36 SC: But is it that they're not easier or just not easy?
1:01:40 DP: They're not easy and they're not easier. There's no reason to believe that the way we understand "cat" is easier than the way we understand "honesty."
1:01:48 SC: Okay.
1:01:48 DP: I have no reason to believe that.
1:01:50 SC: All right.
1:01:50 DP: So, we have intuitions that because...
1:01:53 SC: My cat does not understand honesty at all, I know that, but I don't know if that's helpful.
1:01:56 DP: Your cat also doesn't understand "cat," actually.
1:02:00 SC: Cat, right.
1:02:01 DP: There are, of course, in our world, concrete things that we reason about. There are abstract things that we make inferences about. But then there are the real juicy bits, which are the small list of words in all our languages, the so-called "closed class" items, that make it all worthwhile, namely things like "and," "or," "under," "through," "not," and that's where the fun begins. And those are the ones you really wanna understand, because that's actually the glue that holds the stuff together. You don't say, "cat," "dog... " Unless you're aphasic, actually. What you really say is, "No, I don't want that quinoa pasta, that gluten-rich pasta."
1:02:47 SC: I've often said those words, yes. But you're sounding pessimistic. You're sounding like... Well, at least...
1:02:51 DP: Well, no. I'm trying to sound realistic about the...
1:02:54 SC: Realistic about the current progress.
1:02:55 DP: I'm trying to say, "Look, I'm not pessimistic about where... " I think we've made wonderful progress, and I think we're piecing the stuff back together slowly. But I'm a little... Okay. Here's what I'm optimistic and pessimistic about. I'm optimistic that we can get a grip, let's say in the next 10 years, on, let's call them, rules or operations or computations that put items together to yield larger items. The fact that a red can, for instance, is a can and not a red: how did you know that? That doesn't follow from first principles. You have to actually figure that out, right? That's not a "gimme." But my hunch is that these elementary operations of composition or combinatorics, which yield larger units that then become the input to the next steps, are something we're gonna get a grip on, believe it or not.
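A toy sketch of the headedness point about "red can": when a modifier combines with a head, the result inherits the head's category, so "red can" is a kind of can, not a kind of red. The dictionary representation is invented purely for illustration and is not a claim about mental or neural coding.

```python
# Toy headed composition: the head projects its category onto the result.
def compose(modifier, head):
    return {
        "form": f"{modifier['form']} {head['form']}",
        "category": head["category"],          # the head determines the category
        "features": {**head["features"], **modifier["features"]},
    }

red = {"form": "red", "category": "ADJ", "features": {"color": "red"}}
can = {"form": "can", "category": "NOUN", "features": {"kind": "container"}}

print(compose(red, can))
# {'form': 'red can', 'category': 'NOUN', 'features': {'kind': 'container', 'color': 'red'}}
```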
1:03:55 DP: What I'm much more, not pessimistic, but just kind of bewildered about, the stuff I think about a lot, is: well, how do you store anything to begin with? If I say "red can," that's great. But where is "red"? Where is "can"? And how do you put "red can" together?
1:04:14 SC: Like literally located in the brain.
1:04:15 DP: Literally located in the brain such that... The remarkable thing is, as we talked about earlier, let's say you have 50,000... Or you're uber-educated, so you probably have 100,000 things in your head. Now, in this...
1:04:29 SC: Many of them are silly and useless.
1:04:30 DP: Many of them useless, but even the useless ones, you pull out at the rate that our conversation is happening. And we said 4 to 5 Hertz is a typical syllable rate. That means we speak something between three to five words per second, super fast. It means, at that time scale, you have to translate the information coming into the periphery into the correct code, go into your bag of words, and pull out the correct item, not the wrong item. We're pretty good; compared to machines, we're unbelievably robust and resilient in noise, and all kinds of stuff. Pull out the right item, and keep doing that sequentially, put them together with the next one, and also with ones that are distant, and get the correct interpretation out at a subsecond scale, all the time. The operations that happen, I'm pretty optimistic we'll get a grip on, because I think there's wonderful research happening on that in many different labs. But how we actually store not just words but anything at all is, to me, deeply puzzling, and I think it's one of the deepest mysteries of neuroscience.
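Back-of-envelope arithmetic on the rates quoted here, taking midpoints of the ranges mentioned (the exact numbers are illustrative only):

```python
# Rough numbers from the discussion: a 4-5 Hz syllable rate, roughly 3-5 words
# per second, and a lexicon of 50,000-100,000 items.
words_per_second = 4            # midpoint-ish of the 3-5 range
lexicon_size = 75_000           # midpoint of 50k-100k
ms_per_word = 1000 / words_per_second
print(f"~{ms_per_word:.0f} ms to select 1 of ~{lexicon_size:,} items, over and over")
# ~250 ms to select 1 of ~75,000 items, over and over
```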
1:05:36 SC: I would have thought, at the most naive level, again perhaps being misled by the analogy with computers, that information is stored in individual neurons as some bits somewhere, like there's a computer memory. My understanding is that that is wrong, and neuroscientists will tell me that it's closer to being stored in the connections between the neurons. But maybe you're gonna tell me that we don't even know that.
1:06:05 DP: Yeah, you're absolutely right. Though I much prefer your first story, that maybe things are stored inside cells, or in the structures of cells, or even possibly in the genome. That would be the really interesting case.
1:06:24 SC: How would you store the color red? I can learn the color red, and it will be stored in my genome?
1:06:28 DP: Well, what do you have all the introns for? We don't know. But, look, the standard story is the one you just said. Our standard story about memory right now is that it's the connections between the cells: the synaptic structure and the synaptic connectivity are what reflect the memory of something, and the modification of those connections between cells is effectively what learning means. So, learning means a modification, a cellular, molecular-level modification, and ultimately a genetic-level modification.
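A minimal caricature of the standard "learning is synaptic modification" story, written as a Hebbian weight update. This is the textbook idea in toy form, not a claim about how any particular memory is actually stored.

```python
import numpy as np

# Hebbian update: strengthen a connection when pre- and postsynaptic activity
# coincide. "Learning" here is literally just a change in the weight matrix.
def hebbian_update(w, pre, post, lr=0.01):
    return w + lr * np.outer(post, pre)   # delta_w proportional to post * pre

pre = np.array([1.0, 0.0, 1.0])           # presynaptic activity pattern
post = np.array([0.0, 1.0])               # postsynaptic activity pattern
w = np.zeros((2, 3))                      # connection strengths
w = hebbian_update(w, pre, post)
print(w)
```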
1:06:57 SC: And Nobel Prizes have been given out for these discoveries.
1:06:58 DP: Numerous. Yes. The first one perhaps to Kandel, most famously, for the idea that the synaptic pattern of things is the basis of this, and this is the standard story. But when you start to dig in a little bit... I think the, let's say, harshest critic is Randy Gallistel from Rutgers, who's a very distinguished psychologist and cognitive scientist, who's worked most of his life on learning from a computational point of view. He's written a very, very fascinating book called "Memory and the Computational Brain," and a bunch of papers in which he basically pulls the rug out from under the neuroscience of memory and says, "Look, show me at least how you store the number 17, or anything for that matter, any bit of information. Where is it, how is it done, and then how is it actually implemented when you use it? How do you... "
1:08:01 DP: So, we have intuitions that, yes, the pattern of connectivity may be activated or deactivated, but you really need some kind of digital storage device. The issue that's very tricky in memory is that, on the one hand, the way human memory works is certainly content addressable. We know that for words, for instance. It's content addressable in the sense that if I say "doctor," you think nurse or toothache or whatever the appropriate semantic field is that it associates with for you. That's what content addressability is. There's a content-based cloud of ideas. But you also want what a computer has, which is address addressability: you go to a location and you pull something out. And so, is that possible? Can we put things there? How would we store, in some digital format, the information we have? How do we actually compute, let's say, with variables?
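A toy contrast between the two kinds of lookup being distinguished here, location-addressed versus content-addressed. The little "semantic field" is invented for illustration.

```python
# Location-addressed: give an address, get back whatever sits there.
memory_by_address = ["doctor", "nurse", "toothache", "seventeen"]
print(memory_by_address[3])                 # -> "seventeen"

# Content-addressed: give part of the content, get back the associated cloud.
semantic_field = {
    "doctor": ["nurse", "toothache", "hospital"],
    "red": ["color", "can", "stop sign"],
}
print(semantic_field["doctor"])             # -> ['nurse', 'toothache', 'hospital']
```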
1:08:52 DP: Take the wonderful examples from the animal world, the Tunisian desert ant, Cataglyphis. It lives in the desert in Tunisia: flat, not much to see, no visual cues. The ant comes out of its burrow and has to walk around to find, let's say, a leaf. These ants are small, not huge brains. The ant walks around, meandering, maybe a meter or two, and finds a bit of leaf. How does the ant walk back, straight back to its hole? How?
1:09:33 DP: How does the ant know how to do that? Okay, how would you do that? You'd have to figure out a bit of math. You have to figure out, "Okay, wait, wait a minute." Here's a simple way that works like Hansel and Gretel: it leaves a little chemical trail. But if it did that, it would wander back along the same path. It doesn't do that. It actually takes a straight vector back and then looks for its hole. So, it must have, first of all, figured out how far it went. It has to count its steps or something, which it turns out it does. It has to keep track of time, because at that point the sun has, of course, changed position, so it needs the solar ephemeris function.
1:10:07 DP: So, you need to actually have an equation in your head into which you plug the values of variables that you then calculate, in order to say, "I have to go back south by southwest by yea much, whatever, 4 feet." Now, that's a kind of simple example, well attested in beautiful experiments, by the way, but it's a compelling and very clear case where a small nervous system does a very simple behavior, namely finding its home after walking out, and in order to do that, it has to plug a value into a variable. So, how do you do the...
1:10:44 SC: It's a little bit abstract right there.
1:10:44 DP: Now, things get a little dodgy.
1:10:46 SC: Many junior high school students struggle with this.
1:10:46 DP: Do not enjoy this concept.
1:10:47 SC: Right.
1:10:48 DP: And so how does that tiny little brain represent a variable that is then able to take a specific value, do its little calculation, and say, "I'm going that way"?
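A sketch of the path-integration computation being described: step counts and compass headings are plugged into variables, summed into a displacement vector, and the home vector is simply the negative of that sum. The stride length and the outbound legs below are made-up values, not measurements.

```python
import numpy as np

# Dead reckoning in miniature: accumulate displacement from (step count, heading)
# legs, then the straight vector back to the nest is minus that total.
def home_vector(steps_and_headings, stride=0.007):   # stride in meters (assumed)
    displacement = np.zeros(2)
    for n_steps, heading_rad in steps_and_headings:
        displacement += n_steps * stride * np.array([np.cos(heading_rad), np.sin(heading_rad)])
    return -displacement

outbound = [(120, 0.3), (80, 1.9), (150, -0.7)]       # (step count, compass heading) legs
print(home_vector(outbound))
```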
1:10:58 SC: Is this the experiment where they put the little ants on stilts?
1:11:00 DP: That is exactly one of them, yeah. There's a whole bunch of experiments. These are mostly by Rüdiger Wehner from Zurich. Beautiful, beautiful experiments.
1:11:09 SC: And when they were on stilts, they got the wrong answer, 'cause they took the number of steps, but they didn't understand how far they were going.
1:11:14 DP: Exactly right. So, that was the evidence that you have a counter, because if your legs are too long and you take the same number of steps, you go too far. So, it makes such a compelling case for a simple kind of computation. If the ant does that, it can't be impossible. There is circuitry in a small brain that allows you to have variable-based computation. Now, if you attribute it to the ant, do you not attribute it to the vertebrate?
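Following up on the hypothetical path-integration sketch above, the stilts logic in one line: with a pure step counter, the homebound error scales with the change in stride length. The stride values here are invented for illustration.

```python
# Same counted steps on longer legs means walking farther before searching for the nest.
calibrated_stride = 0.007   # meters per step on normal legs (made-up value)
stilt_stride      = 0.010   # meters per step on stilts (made-up value)
print(f"overshoot factor ~ {stilt_stride / calibrated_stride:.2f}")   # ~1.43
```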
1:11:42 SC: Right. Right.
1:11:43 DP: That seems weird to me.
1:11:44 SC: And you're hinting that this might be part of the origin of abstraction, the way they do computation?
1:11:51 DP: That's already too far. I would like to... To me, it's a reminder that one of the things we have to have an answer for is, let's say, algebraic computation.
1:12:00 SC: Right. Yeah. So, it's actually good to have a tractable goal. Okay. If we can understand that, which in some primitive sense even ants can do it, that'll be nice.
1:12:09 DP: Yeah. It gives a very specific... It's the kind of problem that language faces, too. For instance, one of the interesting things about language processing is the so-called property of structure dependence. One of the things that makes computers different from human brains, for the most part, is that the way language works, across all languages, is not as a string of pearls but more like an Alexander Calder mobile. It has relationships that depend on where in a structure things are, not just where in a sequence. So, it's not just that I have a sentence with seven words in a row, one, two, three, four, five, six, seven, and all that matters is your nearest neighbor; it matters what their structural relation is. And there's incontrovertible evidence for this; this is not debated. That means you have to have some kind of... Some people give it a tree structure, because that's a good way to visualize it. Other people are deeply offended by trees, and they say, "Well, you can't have trees living in the brain." But that's just a notational variant. You can express it many different ways.
1:13:21 SC: Call them networks, people will love it.
1:13:22 DP: Call them networks, or it can be bracketing, like in algebra, or whatever. You can call it whatever you want. It's just that the relations are non-local. And that's a really deep property, so you have to be able to deal with non-local relationships. The most obvious example is how you deal with, let's say, pronouns and the things they refer to, the so-called antecedents. Say, "Sean was finished with his recording and he asked me to give him another glass of wine." What is the "him"? That could be Sean, but it could be someone else, some other him. It could not be me. But how do you know that on the fly? You know because there's actually a structural relationship between the pronoun and the antecedent; if you use "himself," it can only refer... These are trivial examples, but they highlight a very special property, namely that there are constituents. Constituents are just equivalence classes. Constituents, when I say...
1:14:18 SC: Coarse-grainings.
1:14:19 DP: Yeah. "The red can was on the table," then the red can is a constituent, because I could say, "The transparent glass was on... " So, I can substitute for something else. And the fact that you have such things and that they have, of course, causal force in how we say stuff suggests that they must be in your head somehow, unless you are a dualist. That's fine, too.
1:14:41 SC: No, it's not. No, we don't want to be dualists here. It's okay.
1:14:43 DP: I'm not okay with that. It doesn't work for me.
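A toy rendering of the constituency point from a moment ago: "The red can was on the table" written as a nested structure, with the subject NP constituent swapped for "the transparent glass." The labels and bracketing are just one conventional notation among the equivalent alternatives mentioned above.

```python
# "The red can was on the table" as nested (label, children) tuples.
sentence = ("S",
            ("NP", ("DET", "the"), ("ADJ", "red"), ("N", "can")),
            ("VP", ("V", "was"),
                   ("PP", ("P", "on"),
                          ("NP", ("DET", "the"), ("N", "table")))))

def substitute_first(tree, label, replacement):
    """Swap the first constituent with this label for another of the same category."""
    if not isinstance(tree, tuple):
        return tree, False
    if tree[0] == label:
        return replacement, True
    children, done = [], False
    for child in tree[1:]:
        if done:
            children.append(child)
        else:
            new_child, done = substitute_first(child, label, replacement)
            children.append(new_child)
    return (tree[0],) + tuple(children), done

new_np = ("NP", ("DET", "the"), ("ADJ", "transparent"), ("N", "glass"))
print(substitute_first(sentence, "NP", new_np)[0])
```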
1:14:46 SC: Some of my best friends are dualists. If you have a brain with 85 billion neurons in it, and, as you said, each neuron has thousands of friends, so there's lots of connections, it's a gigantic computational problem, as you said, even at the most functional level, for getting at meaning and abstraction and poetry and music. If physicists were in charge of this, they'd just be throwing gigantic computer power at it and treating it as a big data problem. And I do get the impression this is happening also in neuroscience.
1:15:16 DP: That's absolutely right. I think that many of us, maybe all of us, are seduced by the computational power now available. Everybody and their brother has some wicked GPU in their laptop. It is changing things in interesting ways. Maybe it's good, maybe it's bad. The fact of the matter is that the data sets we now collect are extremely large. That's true. So, when I stick someone in... When I put you in a magnetoencephalography scanner and we record for, let's say, 10 minutes, that's about two gigabytes of data. That's a lot of data. And so obviously you need to automate the analysis needed to figure something out.
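Rough, illustrative arithmetic behind the "two gigabytes in ten minutes" figure. The sensor count, sampling rate, and sample size are assumed typical-ish values, not a description of any particular system.

```python
# Back-of-envelope data volume for a 10-minute MEG recording (all values assumed).
n_sensors = 300            # MEG channels
sample_rate_hz = 1000      # samples per second per channel
bytes_per_sample = 8       # double-precision floats
seconds = 10 * 60

gigabytes = n_sensors * sample_rate_hz * bytes_per_sample * seconds / 1e9
print(f"~{gigabytes:.1f} GB")   # ~1.4 GB, in the ballpark of the figure quoted
```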
1:16:03 DP: But now comes a very important epistemological choice that you have to make: do you approach these data with... Let's call it... Well, it's almost old school: with a hypothesis, like you learned. [laughter] Do you actually have an idea? Were you looking for something? Did you design what you did with a particular thing in mind? Do you have a hypothesis, a model, a theory? Which is very Baconian and boring and... But it has done pretty well the last few hundred years, I'm not gonna lie. Or do you pursue a data-driven or big-data approach? And that's very popular because we can; we have machines that do it well. Certain problems are, let's say, at least descriptively, reasonably well addressed by simply classifying things. If you have a classification problem, then big data is a cool approach. You train your network on gazillions of trials, and then you throw some new data at it, and maybe you even have an unsupervised learning situation, and you get beautiful clusters of things out of it. And I think that's very powerful, and, I'm not gonna lie, we use that in my lab.
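A minimal, generic version of the train-on-many-trials recipe described here: a nearest-centroid classifier on made-up data standing in for brain responses to two word categories. This is purely illustrative, not a real analysis pipeline.

```python
import numpy as np

# Two classes of synthetic "trials" (e.g., responses to two word categories).
rng = np.random.default_rng(0)
class_a = rng.normal(loc=0.0, scale=1.0, size=(500, 20))   # 500 trials, 20 features
class_b = rng.normal(loc=1.0, scale=1.0, size=(500, 20))

centroid_a, centroid_b = class_a.mean(axis=0), class_b.mean(axis=0)

def classify(trial):
    """Assign a new trial to whichever class centroid it sits closest to."""
    return "A" if np.linalg.norm(trial - centroid_a) < np.linalg.norm(trial - centroid_b) else "B"

new_trial = rng.normal(loc=1.0, scale=1.0, size=20)        # an unseen trial drawn like class B
print(classify(new_trial))
```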
1:17:18 SC: And looking for correlations also.
1:17:20 DP: And you basically look... Let's be clear. If you take this approach, you're looking at an orgy of data that's treated almost hypothesis-free. It's a purely correlational approach. It's basically the mother of all regressions. So, what might happen? In one universe, you might get lucky. It might turn out that the regression you build, the giant correlation matrix, which is obscenely large, turns out to give you an interesting fractionation or factorization of the problem. But what are the chances? [laughter] It could happen, but there's a lot of data, weird and messy and noisy. A lot of weird things could happen.
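The "mother of all regressions" in miniature: correlate every stimulus feature with every channel and look for what stands out. Everything below is pure noise, which is the point; with enough cells in the matrix, something always looks interesting. The array sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
stim_features = rng.normal(size=(1000, 50))    # 1000 time points x 50 stimulus features
channels = rng.normal(size=(1000, 200))        # 1000 time points x 200 recording channels

# 50 x 200 = 10,000 correlations, all of them computed from pure noise here.
corr = np.corrcoef(stim_features.T, channels.T)[:50, 50:]
print(corr.shape, np.abs(corr).max())          # the largest one still looks tempting
```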
1:18:03 DP: I'll tell you, I'm not against this at all. Nobody's against having more data; that's kinda silly. But the question is, how do you approach the data you've collected? And I guess I'm very sympathetic... In my own lab, we have vitriolic, energetic debates. I love everybody in my lab, wonderful people, but we don't all agree. And I have lab members who are strongly pushing me on this, and I'm strongly pushing back. And I believe it's a really good idea to design the experiment with a question in mind. If I stick you in a machine, and I'm gonna spend a lot of money, energy and time on recording these really quite sophisticated, complicated data, I wanna know what I'm looking for. A very, very big version of this is the Large Hadron Collider. You don't build that thing just because you say, "Oh, shit. Let's just correlate a bunch of stuff."
1:18:55 SC: Oh, no, but exactly the same debate is going on between targeted searches and blank searches for some anomaly in something somewhere.
1:19:03 DP: Yeah. That's an interesting way to interpret it. I'm sympathetic to finding things by serendipity, but my own feeling, my own hunch, is still that a very well-articulated model, one that can then be shown to be wrong, reduces the nature of the problem. And I'm simply not satisfied by some giant correlation matrix, or by an answer that's a regression. Those are engineering solutions, in my own... As I told you before, my gut feeling is that, in the neurosciences, in the summer of 2018, engineering approaches and big-data approaches are actually replacing the standard approach of the sciences. And that's a little weird, that engineering is superseding the science.
1:19:49 SC: But is it a temporary enthusiasm that we get...
1:19:53 DP: Well, we're all enthusiastic 'cause it's cool to be able to look at gigantic amounts of things. But here is the danger. The danger is that we end up... Maybe it's not a danger. Maybe I'm just an old guy.
1:20:05 SC: Maybe.
1:20:05 DP: An old, angry guy. I think the potential danger is that we become theoretically myopic. That is to say, the kinds of answers that are yielded by this approach are of a certain form and we will only be getting that form of answer, the form of answer that comes out of giant regressions. And that may be theoretically misleading. That is, we might have a generation of science in which the answers that are provided us come from that kind of epistemological stance and prevent us from seeing theoretical alternatives that are not fully aligned with that approach, which I think would be a huge bummer.
1:20:45 SC: So, ironically, by not having a theoretical framework with which to analyze and construct the experiment, you're locking yourself into certain possible things you could discover...
1:20:55 DP: That's right.
1:20:55 SC: And refusing to look for other ones.
1:20:57 DP: Yeah. So, I'm working on this problem with one of my colleagues right now. Our diagnosis of this is that we're not at all against using huge amounts of data; it's exciting. But here's our diagnosis. Suppose you run a big data approach on a neuroscience problem. Let's say I wanna study spoken word recognition, something I actually know a little bit about. And what I'm gonna do is record... I'm just gonna have a thousand people listen to 10,000 words, some giant dataset. I'm gonna crank this thing through my convolutional neural network. And then I'm gonna get some interesting classification scheme out of it, and I can probably look at the different layers.
1:21:37 SC: So, you're looking at what happens in the brain when they're listening to these words. You're just gonna look for correlations.
1:21:43 DP: And I'm just gonna record this, take these data, and correlate everything with everything. And, of course, some correlations will come out; that's great. Now, let's say I end up with a very good model, a terrific model fit, and that model will be characterized by a series of parameters. And it's gonna be great, and let's say it has 14 different things that I had to tweak. Awesome. Now, I have a fantastic model, and I have these 14 values. I've got a Kappa, 'cause you gotta have a Kappa.
1:22:14 SC: Okay. Maybe a Tau, always good. Lambda would...
1:22:16 DP: Lambda is nice. Gotta get a couple of Lambdas. Okay. Now, I have these parameters that I fixed, and they yield the optimal thing. What's my next step? My next step presumably is to do "normal science" on what these parameters are. So, I have now just bitten myself in the butt. I have to say, "Well, I have these 14 things. Now I have to actually figure out what they are, because they're the ones that ostensibly have the force of being explanatory for the model. What gives?" I think that's fine, but... Our argument is that this is a lovely approach, but in the end, you're gonna have to reinvent what you had to do to begin with in order to give a full, comprehensive, explanatory account of the parameters of your machine-learned models.
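One way to picture where the hypothetical exercise ends up: a well-fitting model described by a handful of tuned parameters, each of which still needs a mechanistic interpretation. The parameter names and values below are invented for illustration, including the obligatory kappa and lambda.

```python
# The endpoint of the hypothetical big-data exercise: a model that fits well,
# characterized by tuned quantities whose meaning is still an open question.
fitted_model = {
    "kappa": 0.73,            # some gain term the fit settled on (invented)
    "lambda": 1.8e-3,         # a regularization weight (invented)
    "n_layers": 5,
    "kernel_width_ms": 25,
    "learning_rate": 3e-4,
    # ...plus nine more knobs of the same sort, to make it 14
}

for name, value in fitted_model.items():
    print(f"{name} = {value}  ->  but what is it, mechanistically?")
```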
1:23:05 SC: So, you think that there's still room for human thought and language in trying to understand human thought and language?
1:23:10 DP: I think you need humans... You need just common sense.
1:23:17 SC: Yeah.
1:23:17 DP: A solid common sense, no bullshit. Have a good bullshit detector.
1:23:21 SC: And read all the details of the papers.
1:23:23 DP: And don't worry about... And it turns out you have to actually do the homework. You have to do the homework.
1:23:28 SC: Do the homework. All right. Good advice. We like to leave on good advice, and I can't do better than, "Do the homework." David Poeppel, thank you very much for being on the podcast.
1:23:36 DP: Thank you.
[music]