292 | Jonathan Birch on Animal Sentience

It's not immoral to kick a rock; it is immoral to kick a baby. At what point do we start saying that it is wrong to cause pain to something? This question has less to do with "consciousness" and more to do with "sentience" -- the ability to experience feelings and sensations. Philosopher Jonathan Birch has embarked on a careful study of the meaning of sentience and how it can be identified in different kinds of organisms, as he discusses in his new open-access book The Edge of Sentience. This is an example of a question at the boundary of philosophy and biology with potentially important implications for real-world policies.


Support Mindscape on Patreon.

Jonathan Birch received his Ph.D. in the philosophy of science from the University of Cambridge. He is currently a Professor in the Department of Philosophy, Logic and Scientific Method at the London School of Economics and Political Science (LSE). He is one of the authors of the New York Declaration on Animal Consciousness, and has advised the British government on matters of animal cruelty and sentience.

0:00:00.0 Sean Carroll: Hello, everyone, and welcome to the Mindscape podcast. I'm your host, Sean Carroll.

0:00:04.0 SC: Sometimes on the podcast, I will refer to our two cats, Ariel and Caliban. They were born at the same time, twins, I guess you could say, but they're of course part of a bigger litter, brother and sister with very different personalities. If you met Ariel and Caliban and interacted with them, even if you didn't see them, you would instantly know which one was which. There's a danger there, though, if we want to be a little bit more careful, a little bit more rigorous, in using a word like personality. We tend to anthropomorphize our pets, other objects in the world. We anthropomorphize our GPS Google Maps system. I feel bad when I drive in a way other than what Google Maps tells me to do, and it seems to be upset with me.

0:00:49.5 SC: So if we're thinking about it very, very carefully, we can have fun using words like personalities and being anthropomorphic with our pets. But maybe you want to be a little bit more rigorous. So you might want to ask what kinds of animals are conscious. Consciousness is a big topic in some of these debates. You instantly run into the problem that we don't agree on what consciousness is. Different people are going to have different standards for that. We might agree that rocks are not conscious, though maybe panpsychists would even dispute that. Most of us will agree that humans are conscious. Somewhere in between, maybe there's a threshold or maybe there's a series of many thresholds.

0:01:30.4 SC: One way to make this a little bit more careful is to switch the conversation from consciousness, which is a little bit unclear what it means, to sentience. Sentience is the ability to have a feeling of what it is like to be something, the ability to experience feelings and sensations, okay, especially feelings and sensations that we would characterize as having a valence, a good sensation or a bad one, a positive one, or a negative one. That's a little bit more well-defined. And then we can go ahead and ask which kinds of animals are sentient, and also the public policy question, what should we do about it? How should we act if we believe that certain kinds of creatures are sentient?

0:02:13.2 SC: For as much as we tend to cutely anthropomorphize our pets, there's also a temptation to ignore the possibility of sentience in animals that are not like us. It's very common to cook crabs and lobsters by boiling them alive, and they thrash around a little bit. But you say, well, that's just an instinctive reflex reaction, that's not experiencing pain in the same way that we are. So regardless of what your opinions about it are, we should be able to think about this rationally, coolly, calmly, okay. It's hard because we get very emotional. Some people see the little lobster thrashing around and feel something deep inside, a sense of revulsion. Others just do it as a matter of course. And how do you have a rational conversation about that?

0:02:58.1 SC: Well, here we are to try to do that. Today's guest is Jonathan Birch, who is a philosopher, who's written a new book that just came out. Well, I'll tell you whether it came out or not. The book is called The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI, because soon we're going to be building artificial systems that have many of the characteristics of things we would call sentient. So the book, The Edge of Sentience, just came out in the UK, will come out in the US in a little while, but also is available for free online.

0:03:31.7 SC: There's a PDF that you can just go to, and I'll, in the show notes, put the URL there. Oxford University Press is graciously letting everyone read this book, because Jonathan is someone who wants to have an impact in the public debate. He already, as you'll hear, has had an impact, in the UK, in government thought about what it means to be a sentient creature, and how we should deal with that. This is a set of issues where I don't think we're done yet. I don't think that we have the consensus. I don't think we figured everything out. That's why we've got to talk about it. Here we are to do just that. So let's go.

[music]

0:04:26.5 SC: Jonathan Birch, welcome to the Mindscape Podcast.

0:04:27.9 Jonathan Birch: Hi, Sean. Thanks for inviting me.

0:04:29.7 SC: So you're talking about issues that in philosophy contexts are often brought up, but often the word that we're talking about is consciousness, and you're focusing on the word sentience, which is a little bit different. So maybe explain to us what it is, how it's different, why you're thinking about that.

0:04:50.5 JB: Yeah, I suppose part of what I want to do with this book, The Edge of Sentience, is get people using that term sentience, perhaps a bit more. I think Stevan Harnad's been doing much the same thing with his journal Animal Sentience, and I think it is a term that is on the way up.

0:05:05.8 SC: Good.

0:05:06.6 JB: That doesn't mean consciousness is on the way down, but I think it's plateauing and sentience is on the way up. And it's a term that, at least as I use it, it's an attempt to capture the most basic elemental, evolutionarily ancient base layer of consciousness, as it were, which is in part just what philosophers like to call phenomenal consciousness, subjective experience. There being something it feels like to be you, whether or not you have any overlay of conscious reflection on what it is you're experiencing.

0:05:43.7 JB: And then also, there's a slight extra component as well, which is that I'm focusing specifically on experiences that feel good or feel bad, like pain or pleasure, valenced experiences, as it were, they have a positive or a negative valence, and I'm using that term sentience to capture that capacity for valenced experience, like pain and pleasure. And I think it's a really important concept because it, to me at least, captures what is really ethically significant. If a system is sentient in that sense, if it's capable of valenced experience, then its interests matter morally, and we need to do something about that.

0:06:20.2 SC: So in other words, something you mentioned about awareness. I forget whether I'm imposing that word on you or whether you used it, but so conscious experience in some sense is something that I need to know I'm experiencing, whereas sentience is a little bit broader. I could sort of feel something and experience it unconsciously, that would still count?

0:06:44.6 JB: Well, I wouldn't say that to have a conscious experience, you need to know about it. But the problem is that the term consciousness gets used in quite ambiguous ways. And it can refer to the human form of consciousness, which is a very complex form, I think. And it does have layers. So Herbert Feigl in the '50s talks about sentience, sapience and selfhood. And Tulving in a separate body of work had these terms anoetic, noetic, autonoetic. They're both ways of trying to capture the idea that there's layers, there's the raw, basic subjective experience, like the feelings of pain, pleasure, sight, sound, odor. Then there's also the knowledge and the concepts when we think and reflect about what's going on.

0:07:35.3 JB: And then also to some extent, there's a sense of self as well, and this idea of we recognize ourselves to be persisting subjects of experience with lives that extend into the past and extend into the future. And these overlays, they involve levels of cognitive sophistication that you might not need to have that base level of just sentience, of just feeling ouch, feeling pain, feeling happiness, joy.

0:08:04.7 SC: Right. So sentience is then broader than consciousness. We might imagine that there are critters that are sentient, but not conscious.

0:08:18.6 JB: Well, in some senses of the word conscious, yes, that's right. Yeah. If you are the person who wants to use this term conscious to refer to that whole package of sentience, sapience and selfhood, then yes, there's going to be lots of animals that are sentient without being conscious. Now, I don't necessarily think we should use the term in that way, but one of the things I like about sentience is that it very strongly draws people towards that most basic aspect, just the raw subjective experience.

0:08:48.5 SC: Okay. I think I'm finally getting it. So in other words, one of the advantages, the biggest advantage of sentience over consciousness as a concept to focus on, is that it's better defined, and consciousness means different things in different contexts.

0:09:04.2 JB: Somewhat. Yeah. Which is not to say that it's perfectly defined. There are real limits on our ability to define subjective experience, but the problem with consciousness as a term is that even when you bracket that issue of subjective experience and its mysteriousness, it's still a term people use to refer to many other things as well, like reflection and self-awareness and all those other things. So I'd rather use a term that is perhaps a little bit more constrained in how you can use it, and where people will let you stipulate a bit more. And if I say, I just mean the capacity for valenced experience, I think people get that, and they get the need to have a concept that is drawing our attention to states like pain and pleasure, but is a bit broader than that. And that is not just about pain and pleasure, but about that whole category of feelings, experiences that feel bad or feel good.

0:10:06.1 SC: And let's look ahead a little bit, sort of tease the audience. Why should we care about sentience? What is the impact of having a nuanced understanding of what that means?

0:10:19.1 JB: Well, Jeremy Bentham famously had this footnote where he wrote in relation to other animals, the question is not can they talk, nor can they reason, but can they suffer? And I think that's a, to me at least, a profound insight that if an animal can't speak to us and tell us how it's feeling, if it can't reason very well, as arguably is the situation with a shrimp, for example, it doesn't mean that it's feeling nothing. It doesn't mean that it's incapable of suffering. And so it doesn't mean that there aren't things we could do to it that would be cruel and that would cross ethical lines.

0:10:58.0 SC: And eventually we're going to have to ask these questions about artificial intelligences.

0:11:03.8 JB: Yeah. I think we're already asking the questions, and I think it's right to be asking the questions and it is right to try and run ahead, as it were, for the ethical debates to be running ahead of where the technology actually is, because we might get quite rapidly overtaken by events in the AI case.

0:11:21.5 SC: Right. Okay, good. So we will get there. But I couldn't figure out, you've written a whole nice book about this, but in my brain, all these issues are jumbled together. So I'm going to apologize ahead of time if I just throw things out there and ask for your response to them. But let's home in. Let's go back to this issue of sentience versus consciousness. You said one thing that really struck me in a video I watched, which is that a crab does not have an inner monologue, a crab does not narrate its own life, presumably it doesn't. So I guess number one, are we sure that it doesn't? And number two, what does that say about conscious sentience or whatever?

0:12:05.6 JB: When I think about aspects of human consciousness that might possibly be uniquely human, I think that inner monologue is one of them. It's not something even all humans have, and you get a lot of reports of variation among humans, where some people say, what is this inner monologue? I've never experienced anything like that. And other people, including myself, for whom it's there constantly. And I don't rule out that some other animals might have something a bit like that, but I don't really think crabs do. And I think this is an example of something that is probably a lot more cognitively sophisticated than sentience.

0:12:47.0 SC: How much do we know about the inner monologue? I'm not sure that I have an inner monologue so much as an inner cacophony.

0:12:54.9 JB: Yeah. It's a bit like that for me as well. But I mean, there's always inner music playing.

0:13:00.4 SC: Yeah. Very often, anyway. Yeah.

0:13:00.6 JB: And then there's usually some line of thought running over the music. Not so much when I'm talking like this, because when I'm talking, it's like the inner monologue becomes the outer monologue like this. It's just what... But in the rest of life, yeah, it's like I'm constantly having a conversation with myself. But, yeah, I think the need here is to try and distinguish that, that sophisticated thing I have, from just the raw experiences that it's providing commentary on, and those raw experiences the crab may well have.

0:13:32.2 SC: Do we have any idea, this is going beyond what we're talking about here, but you've fascinated me. Do we have any idea what's going on in the brain when we're sitting silently having an inner monologue?

0:13:47.8 JB: I think it's a topic of ongoing research. Yeah. And I don't have much to add to that, I think.

[laughter]

0:13:54.8 SC: That, yeah, that's...

0:13:56.0 JB: There's some looping.

0:13:56.7 SC: You got me thinking now. Yeah.

0:13:57.9 JB: There's feedback loops, right. In the past there were people who thought that the vocal cords were genuinely moving a little bit, and behaviorists had to think this way, 'cause they couldn't really believe in true interiority. So they had to say, well, what you think is an inner monologue is actually a motor action being prepared and just getting to the tiniest stages, but never coming out audibly. But I think according to current theories, not even that is happening. It is genuinely internal. It's engaging some of those speech production processes, but they're never reaching the actual motor neurons.

0:14:37.3 SC: And this is something which at least arguably is uniquely human. Does my cat have an inner monologue?

0:14:45.8 JB: Well, I don't know, but the point is really to say, well, even if your cat doesn't, it may nonetheless be sentient. Because when we're talking about sentience, we're talking about something much more basic than that. And of course, we have a tendency to strongly anthropomorphize our pets and to imagine our pets as little humans. And we can actually oppose that. We can resist that and say that's a bad idea, while nonetheless thinking they are sentient beings with ethically significant interests.

0:15:18.8 SC: Right. There have been, of late, broadly speaking, a few declarations on consciousness, the Cambridge declaration and the New York declaration, both of which point in the same direction. I think there's some overlapping signatories. I think you're one of the signatories of one of them.

0:15:34.7 JB: I was one of the co-organizers of the New York one.

0:15:37.5 SC: The New York one. Okay, good. And in both cases, as far as I understand it, there's a Cambridge declaration in 2012. The New York declaration was just last year, 2023. At least the point seemed to be to nudge...

0:15:51.6 JB: 2024.

0:15:52.3 SC: Oh, was it 2024? Okay, good. This very year. Yeah.

0:15:57.9 JB: So well established that it feels like...

0:16:00.3 SC: It feels like it's at least a year old. Yeah. Pushing people in the direction of taking seriously the possibility of animals having some notion of consciousness. What struck me about those, I mean, maybe you can just talk about them in general terms, but what struck me was they seem to give off an aura of consensus, like we know this is true, which is something that in philosophy I so rarely come across. Is that because there actually is consensus, or because the people who organized these particular declarations are all of a mind on this issue?

0:16:37.5 JB: It's a delicate balance, I think. What we wanted to do, and it's similar to the project in the book, The Edge of Sentience book, was to acknowledge that there is a huge amount of disagreement about these issues. And that's fine. It's to be expected when our understanding of what sentience is is so poor. But nonetheless, despite all of that reasonable disagreement, there can be certain points of wide agreement about what the reasonable range of views is and what the realistic possibilities are. That was the thought behind it.

0:17:14.0 JB: And then, well, we got together an initial group of 40 signatories and just had a series of Zoom calls where we were talking about, well, do we agree about a realistic range of possibilities? And if so, what can be said about what that range is? And that's how we got this text that acknowledges a realistic possibility of consciousness, which was the term we used there, perhaps a more widely used term than sentience, in octopuses, in cephalopod mollusks, decapod crustaceans, and insects. And so we were trying to avoid the sense of projecting certainty, or even confidence or knowledge, but using this language of realistic possibility to say what we do agree on is the need to take this really seriously.

0:18:10.2 SC: Sure. And maybe this is, I don't know, tell me about the journey here. Did thinking about that to help convince you that sentience is a better thing to focus on just because it's a little bit better defined? Or were you already on that train and...

0:18:25.0 JB: That was always my view. That was my view. But in this group of 40, a more common view was that people don't understand the term sentient. They're not ready for it, use a term they already understand, namely consciousness. Both sides have pitfalls, 'cause as I say, if you start talking about consciousness, people might think you mean the inner monologue, self-awareness. There's quite a range of things they might think you're talking about. So there's trade-offs there. What I think... The term sentience is on the up, so to speak. And for me it's hopefully the term of the future that will start to displace consciousness in these debates.

0:19:12.1 SC: Well, I'm completely on board with the idea that if you're going to have a declaration, the whole point of the declaration is to get a little bit of attention to it. And yeah, consciousness is going to be a more attention-grabbing word to the popular audience.

0:19:25.6 JB: That was the thought. And that may be true as things stand.

0:19:27.2 SC: I think it's true. So, okay, let's focus in on sentience then. If it is about experiencing a sensation, what does that mean? How do we know when one is experiencing a sensation?

0:19:43.6 JB: How do we know when another animal is, do you mean? I think we know when we ourselves are.

0:19:49.0 SC: I think we do. And, but this is going to... This gets into the issue of the first person versus the third person way of thinking about things.

0:19:56.6 JB: Right. And when thinking about crabs, for example, we are very much stuck with the third person perspective. And we are stuck too with a big range of reasonable disagreement and quite a lot of realistic possibilities. Some will make it very unlikely that crabs are experiencing things and others make it very likely that they are. And what I do in the book is I suggest a pragmatic shift in how we think about the question, from is the animal sentient to is the animal a sentience candidate, where this concept of a sentience candidate is defined in such a way as to make the question answerable. Because it's about, well, is there a realistic possibility of sentience established by at least one view in that zone of reasonable disagreement? And is there an evidence base that is rich enough to allow us to identify welfare risks and to design and assess precautions?

0:20:55.8 JB: And to me, I hope, at least, people find that pragmatic shift helpful. And I think if you're thinking about animals like crabs, for example, to me, it's quite clear that they are sentience candidates in that sense, that we do have to worry about welfare risks posed by the way we treat them, despite the fact that, of course, we're still uncertain about whether they're sentient or not.

0:21:18.7 SC: So I guess what I'm getting at then is what is it, how will we ever know? Or even how do we get more informed feelings about this or opinions about this? Is it by looking at the behavior of the crab? Do we dive into their connectome and their nervous system? Or is there something, a different methodology?

0:21:36.8 JB: I think it's everything at once. I think neural evidence and behavioral evidence are both powerful, and they're more powerful when pursued together as part of a coordinated research program than in isolation from each other. What we have with a lot of invertebrate animals is quite tantalizing, I think, because often you've got a lot of behavioral evidence showing surprising things, impressive things. And then you have studies of neuroanatomy saying, well, perhaps there's more neurons in there than you think, particularly with octopuses. There's big integrative brain regions that are plausibly performing functions relating to learning and memory.

0:22:22.9 JB: And then those are the two parts of the picture and they don't join up, as it were. So what we're lacking in most of these cases is detailed knowledge of the mechanisms in those brain regions producing the behaviors we're seeing. So people talk about grasping the elephant from different sides. You know, it's two ways of converging on a picture that are both valuable and all the more valuable when pursued together.

0:22:50.2 SC: In the case of the crab, just 'cause that is something you talked about, what is the evidence that there is sentience there? It does skitter away if it's being approached by a predator, I suppose. But how much does that mean?

0:23:00.2 JB: Well, there's a range of different studies, and I don't see any individual study as being conclusive. And it's an area where phrases like "conclusive evidence" and "proof" are not really appropriate. But what we have is research programs, particularly Bob Elwood, who is another of the signatories to our declaration, really started with this question of, well, people think that all that is going on here is reflexes. So they think that the crab skitters away and it's like when I put my hand on a hot stove and my hand withdraws, and that reflex withdrawal is underway before I feel anything, and people say, that's all the crabs have, they just have those reflexes.

0:23:46.2 JB: And he thought about how might I convince someone who has that view that that is not all that's going on, and that just like in us, the information about the noxious stimulus, like the hot stove reaches the brain and is integrated with other kinds of information and is used for lots of functions relating to learning, memory, decision-making. And he came up with these motivational trade-off experiments where what he had was hermit crabs. And the hermit crabs, they're interesting because they have very strong preferences for certain types of shell. And in the wild, you see them exchanging one type of shell for another, and they have this hierarchy of what they think the best shells are.

0:24:33.5 JB: And Elwood in these experiments, he drilled holes in the shells, put little electrodes in, and administered small electric shocks to the crab. And his question was, well, would the crab just evacuate the shell when it was shocked as a kind of reflex, or would it take account of how good the shell was and how bad it would be to lose that shell in making that decision? And would it require a higher voltage of shock to make it leave a higher quality shell? And he found evidence that indeed it seems to. And so this is the kind of thing where it's not conclusive proof, but if you're coming in with this view that they're just reflex machines, all they do is stimulus response, there's nothing integrative or centralized going on, this kind of evidence should shake that confidence.

0:25:28.5 SC: Yeah. So this is what I've been struggling with since thinking about that example that you gave. Clearly what we... Well, let's put it this way. If we have two magnets sitting on a table and we push one magnet toward the other, the other magnet, depending on how it's aligned, will either move away or come closer. That's not sentience or consciousness or anything, that's clearly just the laws of physics playing out. But if we are really strongly in an anthropomorphizing mode, we could tell stories about how, oh, this magnet doesn't like the other one and it's skittering away. So that's what we want to avoid, right, that's the trap we don't want to fall into.

0:26:09.3 JB: It's credulousness, right? Taking the surface behavior as immediate evidence of sentience.

0:26:15.7 SC: And so the crab evidence is saying that there's a bit of, would it be too provocative to say thinking, contemplating, musing on the part of the crab to balance the different aspects? Integrating...

0:26:25.3 JB: Integrating, integrating, modeling and weighing of the opportunities and risks posed by the environment. And then you have a certain family of theories associated with Bjorn Merker, Jaak Panksepp that treat that as very closely linked to sentience, that they say, well, what is sentience fundamentally? Well, they propose that it's to do with this evaluative modeling where you're trying to represent in an integrative model the opportunities and risks posed by the environment. And so there's a nice mesh there between the behavioral evidence we're seeing in the crabs and the sorts of brain mechanisms that, according to this family of theories, would be enough for sentience.

0:27:20.2 SC: So it does seem like it would be hard, you already have sort of said this, but it would be hard just on the basis of behavior, right? I mean, if I put the magnet on a wavy surface, there's going to be some competition back and forth between the push of the magnetic field and the pull of the gravitational field. But I'm still not thinking that the magnet is doing any integrating.

0:27:41.1 JB: There's no reason to think the magnet is internally representing those field strengths.

0:27:45.7 SC: Good. So that's... Those words are important. Attributing sentience relies on some internal representation.

0:27:56.4 JB: Yeah. And according to the sort of Merker, Panksepp, that family of views, not just any internal integrative representation, but it has to have this evaluative character as well. It has to be a certain kind of modeling of what are the opportunities and risks? What are my needs? What do I need to prioritize right now?

0:28:19.2 SC: And I think, and I'm not trying to be too skeptical here, but I do think I could imagine the crab doing exactly those behaviors without really having an integrated evaluative model of the world. You know, it's just sort of being pushed in one way and pushed in the other way. So do we really need to go into the crab's neurons to be sure?

0:28:39.4 JB: Well, I mean, I think it's quite important in these experiments that it has some representation of some kind of the different shell types and their relative qualities, and that is somehow getting integrated with how bad is this electric shock. So I do think there's something inherently more impressive about experiments that do not simply provide two immediate stimuli and say, trade these off, but rather in some way rely on the animal's capacity for mental representation. And it's a similar story with the evidence from bees as well, that's what researchers have been trying to do.

0:29:18.7 SC: Sorry, sorry. Tell us about the evidence from bees. That sounds interesting.

0:29:25.0 JB: Oh, there's just, I was just thinking of Matilda Gibbons' experiments where they're inspired by Elwood's crab experiments, but bees don't have the shells that hermit crabs have, so you've got to test for the same thing in a different way. And so she came up with this setup where they have a choice of feeders they can land on, and different concentrations of sugar solution are available at different feeders, and different temperatures of heat pad are there that they have to stand on to access the feeder.

0:29:58.2 JB: And so the question now is about a different kind of trade-off, will they trade off when choosing which feeder to go to, how high was the heat they had to withstand and how sweet were the rewards that they can access. And again, a crucial part of it for Tilda was this thought that you want to look at their decisions when they're anticipating what they're going to experience at these feeders based on their memories.

0:30:26.2 SC: So before they're actually doing it, you want them to think about it?

0:30:30.4 JB: Yeah. Because when they're doing it, there is this possibility that, well, there is some integration of some kind going on, but it's just two immediate stimuli pushing against each other. But when they're making that choice in an anticipatory fashion, it's got to be some kind of representation of the risks and opportunities. So yes, not every critic is convinced by this kind of evidence, of course, but in a way, you're going after that critic who says these animals are just reflex machines, and because they're just reflex machines, there's no credible theory of sentience of any kind on which they're going to meet the conditions. And it's showing that that is not the case.

0:31:13.7 SC: Yeah. No, I think I like very much the idea of the anticipatory question because, like you said, there is some action that is clearly being taken. It's very hard to even use words that are not laden with human meaning; I want to say not anticipating but imagining. But I don't want to attribute imagination necessarily to the bees. But they clearly are representing, you're better at this. You know what words I'm allowed to use. They're clearly representing a situation that hasn't happened yet. And that's something that the simple physical systems are not doing, and maybe even "clearly" is going too strong, but apparently.

0:31:58.3 JB: I think that's right, that they're prospectively modeling the environment and the rewards and the risks that it offers, and they have some way of weighing up those risks and rewards in a common currency. And that ties in with this quite long-standing idea that, well, that's kind of what sentience does for us, that pain and pleasure valence states, they're the currency through which we make decisions and represent the risks and opportunities of our environment.

0:32:27.2 SC: I did an interesting podcast with Adam Bulley, who is a young collaborator of Thomas Suddendorf, I guess, in the vein of thinking about mental time travel and imagining the future and things like that. And they were trying to make the case that this is something that is uniquely human, the ability to literally imagine ourselves in a future environment that is kind of hypothetical, conjectural, contrary to fact. But there has to be some evolutionary journey for us to get there. I mean, do you have feelings about the importance of that to being human, to being conscious, to being sentient? The sort of counterfactual reasoning capacity?

0:33:13.3 JB: I mean, I think that's something that goes beyond sentience, in much the same way that the inner monologue, et cetera, goes beyond sentience. It's something some sentient beings can do, but probably not all, and I think that's going to be the case for counterfactual reasoning. Of course, it depends a bit on what we mean by that. Think of rats in a maze and the vicarious trial and error behavior that was observed by Tolman many, many decades ago, and has been intensively studied since, where they seem to pause at the junction in the maze and look both ways, as if simulating what reward lies down each path.

0:34:01.9 JB: And then there are more recent studies that suggest that the hippocampus genuinely is doing that simulating. It's not really counterfactual reasoning, or at least that would be a pretty tendentious description of it, but it is prospective simulation. And I suspect that that capacity for prospective simulation is quite widespread among animals.

0:34:30.8 SC: I'm trying to figure out: Is it really that different from counterfactual reasoning? I mean, is it not that the rat in the maze...

0:34:36.7 JB: It's a hypothetical, right? It's possible futures that could be actual.

0:34:42.6 SC: Possible futures.

0:34:43.7 JB: So there's no sense of, well, that didn't happen, but what if it had happened, so that bit's not there.

0:34:52.2 SC: Is there any evidence for something like that in invertebrates?

0:34:57.2 JB: Well, yeah, Andrew Barron and Colin Klein have this paper about insects and the origins of consciousness, and another one called "Insects have the capacity for subjective experience." And their case is based on the idea that what insects have is an integrative model of the agent in space, where they model the environment around them. That may be prospection on a very short time scale, I suppose. And then it's largely an open question about prospection on longer timescales.

0:35:35.9 JB: Some of the most interesting evidence there is probably the Portia spider evidence. These are jumping spiders that hunt other spiders, and they're famed for this detour behavior, where you put them on a platform from which they can see a prey item in the distance, and they can see two paths to the prey item. One of the paths has a break in it; if they take that path, they will fall through it. They go from side to side, they seem to be inspecting the two paths, then they climb back down off the platform, so the paths are out of sight, and they nearly always choose the unbroken path, leading to a debate about how on earth they do something like that. And of course, one possible explanation involves prospective simulation, where they are modeling in the brain what will happen if they take each path.

0:36:38.4 SC: And it's always hard, it's a challenge. This is why I always say that physics is much easier than this kind of science. We see a behavior, and we know how we would explain ourselves if we were doing that behavior, and then we're impressed when we see some other species do it. But maybe they're just using a different mechanism than we are, and we shouldn't be as impressed. You never know whether to be super impressed or less impressed.

0:37:05.0 JB: Yes, well, I think in the Portia spider case, what's lacking is the neural evidence that we have in the rats. As I say, if you have both, if you have the behavior and you have neural recordings practically showing the simulation happening in real time, then that's probably as strong as the evidence is ever going to get. We don't have that for the Portia spiders, but it's very suggestive.

0:37:29.6 SC: It is, it is absolutely suggestive. In my countervailing brain, I'm thinking of all these videos of dogs separated from a treat by some little piece of glass, and they just can't figure out that all you need to do is walk around the glass and get the treat.

0:37:42.9 JB: Right. Yeah. That's part of what's so impressive in a brain of, I think, about 60,000 neurons. Really, really small, less than 10% of the size of the bee brain by neuron count, and they're doing something that dogs clearly fail to do.

0:37:58.0 SC: Well, maybe let's talk about what we know about the evolutionary journey to sentience or even to consciousness. I mean, is there some understanding of why it was useful for these different species to develop these capacities?

0:38:13.7 JB: I think we can't really talk with confidence about this, because it depends very much on your theory of the brain mechanisms involved. If you have that Merker, Panksepp view or that family of views, I should say, where we're talking about something very evolutionarily ancient, supported by subcortical mechanisms, mechanisms in the midbrain at the top of the brainstem, and that is about evaluative modeling of the priorities, the animal's priorities and needs, then there's a very clear function relating to decision-making. In that what sentience allows is, well, an escape from being a reflex machine and the possibility of weighing up quite different options in very flexible ways. So that view has some plausibility, I think.

0:39:09.0 JB: And I also think it's quite plausible that sentience facilitates learning. That if you think about that hot stove situation, think about what the pain does for you, what it doesn't seem to do for you is trigger the reflex withdrawal of the hand, because that's underway already. But what it plausibly does do is help you learn about where not to put your hand on future occasions. And that leads to a very interesting debate about what kinds of learning sentience facilitates and why.

0:39:43.8 SC: So maybe it's useful to go through some organisms and ask how we should think about sentience. Or let me first ask this: in your mind, even if not in the consensus of the field, can you identify where sentience started? What is the most primitive organism that could plausibly be associated with it?

0:40:05.6 JB: Well, as I say, I think that sentience candidate is a better concept in a way.

0:40:09.9 SC: Okay, fair enough.

0:40:13.3 JB: And I suggest in the book that insects are sentience candidates. In terms of cases where we have enough evidence to really compel us to take seriously a realistic possibility of sentience, we're definitely talking about all vertebrates, the cephalopod mollusks, like octopuses, squid, and cuttlefish, and the decapod crustaceans and the insects, which are both arthropods. And then it could be that we're talking about something that has evolved three times, or it could be something that was there in the common ancestor of all three groups, and we're not really in a position to have much confidence either way on that one.

0:40:56.6 SC: The common ancestor of those groups sounds like it'd be very, very far back.

0:41:00.6 JB: Yeah. Over 560 million years ago, a very small worm-like creature. Perhaps unlikely to possess the mechanisms that convince us in those three cases that sentience is a realistic possibility. So I suppose, yeah, I perhaps lean myself towards the three-origin view.

0:41:27.2 SC: Yeah. Okay. So if sentience is evolutionarily useful, which it's easy enough to imagine that it would be, there's no reason why it wouldn't evolve in parallel in different branches.

0:41:38.0 JB: Exactly. Yeah. Particularly in those lineages where we see complex active bodies. This is Mike Trestman's term, where you have the challenges that come with trying to manage articulated bodies with lots of parts. And you can't be a reflex machine as such any more, because then different bits of the body will start tearing each other apart. There has to be some kind of centralized, sophisticated control system in place. And that's when we seem to start seeing realistic candidates for sentience. And if that's true, then certainly the cephalopod mollusks and the arthropods are looking like candidates.

0:42:29.9 SC: The octopus especially, right. There's a lot to keep track of if you're an octopus.

0:42:37.5 JB: Yes. Well, the octopuses have become poster children, as it were. They're often the case that gets people to take the possibility of invertebrate sentience seriously. And I think once you've got that far, you think, well, you know, are they really the only invertebrates for which there's relevant evidence? And no, they're not.

0:42:57.3 SC: So you would not think of single-celled organisms as sentience candidates?

0:43:02.3 JB: No. And in the book, I have these two concepts, sentience candidate and investigation priority, where the second category, investigation priority, is for those cases where the evidence falls short of sentience candidature, but we think there's a prospect of that bar being cleared by future evidence, and we think there are welfare risks posed by human activity that might call for precautions. And so some invertebrates are put in that category. But unicellular organisms and plants, I don't think, are investigation priorities either.

0:43:40.1 SC: For plants, they're obviously multicellular organisms, but is the thought, even if it's a vague and tentative thought, that because they don't move around in the way that animals do, there wasn't any need for them to generate that self-image, that modeling ability?

0:44:00.8 JB: Yeah. There's just no evidence of the relevant kinds at all in plants, I would say. You have this quite wide range of realistic possibilities about the brain mechanisms supporting sentience, some of them emphasizing the cortex, the prefrontal cortex, others emphasizing the midbrain. These are all credible theories. And on none of those theories are any of the relevant mechanisms present in plants, as far as we know. I don't want to say that people can't speculate, and I don't want to say people can't research the question if they want to, but I think it would be a mistake to say that there is evidence now.

0:44:40.7 SC: Okay.

0:44:41.9 JB: Which is very different from a lot of invertebrates.

0:44:46.0 SC: Maybe this is a tangential or distracting question, but I forgot to ask you at the beginning. Do you think of yourself as a physicalist or a panpsychist, or what is your deep take on what consciousness is?

0:45:00.6 JB: Well, in the book, I'm trying to speak to everyone in the range of reasonable disagreement. And I suggest that physicalism is not the only reasonable view, and that there are sensibly articulated versions of dualism, panpsychism, panprotopsychism. Often, in the modern versions of those views, like the Philip Goff version of panpsychism, the so-called Russellian monism, the questions we end up asking about animals end up surprisingly similar. It's just that where other people say sentient or conscious, the Russellian monist ends up saying macroconscious, because for them, electrons are not sentient beings as such, in that they don't have pain, pleasure and so on. They don't have rich inner lives.

0:45:58.8 JB: And so they still face this question of under what conditions those tiny microconscious states combine to form a unified macroconscious subject, and then they're asking exactly the same questions as anybody else. So I think it's a reasonable view in a way, but it doesn't make a huge difference to practical debates about sentience. In terms of my personal views, I try to keep an open mind about these things. I've drifted, I suppose, from being a relatively convinced materialist to being less convinced. I give those alternatives some chance of being correct, maybe a 10% chance.

0:46:47.5 SC: But it is perfectly plausible, and in this case, I think you make a convincing case, that it doesn't matter for this specific set of questions that you're asking.

0:46:56.8 JB: Yeah. I don't know if it's surprising or not, but those seminar room issues about the mind-body relationship, though intrinsically very interesting, don't make a massive difference when the question is, well, should we drop crabs into pans of boiling water? Things like that, where there's a very wide range of reasonable views one might have, and you can converge on the need to take precautions.

0:47:29.0 SC: Okay. So let me ask you, should we drop crabs into pots of boiling water?

0:47:34.7 JB: Well, no, nor any decapod crustacean, I think. We did a big review in 2021 that influenced the law in the UK on these issues. And as part of that review, we reviewed evidence that it often takes two to three minutes for the crab or lobster to die, and in that time, there's this storm of nervous system activity, as there would be in your pet cat or in any other animal. So it's a prolonged, extreme slaughter method. It seems like everyone should be able to see the risk there and see the problem and see the need for common-sense precautions.

0:48:22.8 JB: You might not think the response is to ban eating crabs and lobsters. You might think the right response is to mandate stunning of some kind. And those debates about proportionality, I think are absolutely central right across the family of cases at the edge of sentience. But everyone should be able to agree on the need to do something.

0:48:45.6 SC: So let's just be super clear, 'cause we're trying to be careful philosophers here. There's a question to be asked about whether it is ethical to kill and eat other sentient creatures. And maybe that's an important and interesting question, but you seem to be highlighting a different question, which is the suffering that we inflict upon these creatures. So there's room in your world for saying, we can eat the crab, but there's no reason to egregiously make it suffer.

0:49:19.2 JB: Yes. Well, I think that's a very widespread view, and what I'm looking for in the book are points of consensus: a realistic range of possibilities in the scientific domain, but also points of overlapping consensus in the ethical domain as well. And I think that duty to avoid causing gratuitous suffering, either intentionally or through recklessness or negligence, through just not caring, is something people from any reasonable ethical starting point can agree on, and then use to guide the way we think about these cases where we have sentience candidates.

0:50:01.7 SC: I tend to agree with you there, but again, since it's my job to play the devil's advocate, are we really sure that any reasonable ethical stance would have that? I mean, how much do you rely on some specific notion of what is ethical to do to another sentient creature?

0:50:21.8 JB: I think that that principle is so weak in a way, it's so thin, the duty to avoid causing gratuitous suffering, where gratuitous implies the absence of any adequate reason for what you're doing. I think because it is so deliberately thin, it then can command genuine consensus. And then of course, a lot of people want to go beyond that and say our duties are much stronger.

0:50:45.8 SC: Of course.

0:50:47.8 JB: And I guess I do think this in my own life, but for the purpose of formulating public policy, it's good to have these quite thin principles, and I think that's one of them.

0:50:58.3 SC: Okay. Good. How do we try to compare the suffering of a crab to the suffering of a human being? I mean, maybe we don't have to, we're not usually faced with crab-based trolley problems, but maybe we'd like to be able to.

0:51:13.4 JB: Yeah. I hope that we don't have to. What I'm skeptical of is the idea of there being a sort of technocratic solution to this, where we just find the right currency. Suppose you have a policy on the table where some people working in the shellfish industry will be disadvantaged, maybe their costs will go up, because you're going to force them to stun the animals before killing them, and the stunners cost money. Then the question is, well, how do you weigh the suffering of "my livelihood has been made more difficult" against the crab spending two minutes in the boiling water?

0:51:58.5 JB: And I think there's no technocratic common currency that will give us one-size-fits-all answers to this kind of thing. What I propose in the book is that democratic, inclusive deliberation and discussion is the way forward here. I'm quite an advocate of citizens' assemblies as the kind of model we can use for this whole set of issues at the edge of sentience. They're issues that call for judgements of proportionality. There will naturally be disagreements in a pluralistic democratic society about what is proportionate to these risks, and the way we can resolve those value conflicts is democratically, through citizens' assemblies.

0:52:44.5 SC: I mean, maybe we are letting ourselves off the hook here just by talking about crabs. Talk a little bit about how in the modern way of farming, et cetera, we cause a lot of suffering.

0:53:00.8 JB: We do. Yeah. Not just to crabs. And often to many animals that are widely regarded as sentient, so pigs, chickens, for example. It's quite clear that widespread recognition of a particular species as sentient does not lead people immediately to behavioral change and does lead to lots of gratuitous suffering still being caused. So my focus in this book is on the edge cases, as it were. But, you know, even in those core cases, we do need discussion about, well, how are we going to change the way we treat these animals?

0:53:43.8 SC: And you've been, I mean, I should phrase that as a question. How involved have you been with actual policymaking specifically in the UK where you live?

0:53:54.2 JB: Well, my team ended up having some influence on the UK's Animal Welfare (Sentience) Act of 2022, 'cause we were commissioned to produce a report on the evidence of sentience in cephalopod mollusks and decapod crustaceans: octopuses, crabs, lobsters, shrimps. Basically, the government had produced this bill that creates a duty on policymakers to consider the animal welfare impacts of their actions, which I think is a pretty good idea. And in drafting it, they needed to say something about the scope of the bill, 'cause you've got to say which animals. Do you have an obligation to consider plankton, microscopic animals? Is it just pets, or what?

0:54:45.3 JB: And they came up with a draft that included all vertebrates, which on the plus side included fishes, as it should, but on the negative side excluded all invertebrates, which led to some criticism from animal welfare groups. So the government ended up commissioning a team led by me to produce a review of the evidence concerning those two particular groups of invertebrates. And we recommended that they amend the bill to extend the duty to them, and they did.

0:55:14.7 JB: So we got something, we got our central recommendation implemented. Now, we put a lot of other recommendations in the report as well which have not been implemented. And so we're still pushing for action on a lot of these issues. But that basic point that the sentience of octopuses, squid, cuttlefish, crabs, lobsters was recognized in UK law. That's something.

0:55:40.8 SC: Yeah, no, absolutely. I guess we naturally tend to be vertebrate chauvinists, being vertebrates ourselves, and thinking...

0:55:49.2 JB: I think we're mammal chauvinists all of the time.

0:55:52.8 SC: Mammal chauvinists in particular, yeah.

0:55:52.9 JB: I mean, human chauvinists the most, then mammals. And then sometimes you can get people to take fishes seriously and they still will neglect the interests of invertebrates. So I think really, we need to be yet more inclusive.

0:56:11.7 SC: And then there's an even bigger leap to artificial sentience, in the sense of on a computer or even maybe in a robot that we build. How close are we to being able to build an artificial creature that has the complexity of C. elegans or something like that?

0:56:29.9 JB: Yeah, I talk in the book about the OpenWorm project, which I think is still going.

0:56:35.8 SC: I'm a big fan, yeah.

0:56:39.2 JB: Yeah, where the aim was to emulate the nervous system of C. elegans in computer software, see if you can put the emulation in charge of a robot, and see if it behaves like C. elegans. And I suppose we've learned something from this, which is how difficult the task is: there's a lot going on at the within-neuron level in C. elegans that even knowing the entire connectome does not tell you very much about. So even that is a very, very hard challenge. But to me, it's a good way into this topic of artificial sentience, because you can easily entertain in imagination the idea that this project had succeeded very quickly and then moved on to OpenDrosophila, OpenMouse. Once you have OpenMouse, I think you have a sentience candidate, if you've completely recreated in computer software everything the brain of a mouse does.

0:57:38.3 SC: Let's be a little bit more explicit for the non-experts out there. So we understand, or at least we've mapped out, the connectome of C. elegans, which is literally how all the neurons are wired together, and there are only 300-some. But you imply that we don't actually know what the individual neurons do. Neurons have structure; they're not just bits.

0:58:00.6 JB: Yeah, that's right, there's a lot we don't know from the connectome. One thing you can't read off from the connectome is the weights of the connections, which is hugely important, or how those weights are changed by learning. But even if you had all of that, what happens within the neurons is also important, and there are within-neuron computations that are really crucial to, for example, steering behavior. So you wouldn't expect to get the steering behavior in an emulation unless you'd actually emulated the individual compartments within the neurons and how they're arranged in space.
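The point about weights can be made concrete with a minimal sketch. Everything below is hypothetical toy data, not anything from the OpenWorm project or real C. elegans biology: two networks share an identical connectome, but different, unrecorded synaptic weights drive opposite behavior from the same sensory input.

```python
# Two toy "worms" share the exact same connectome (who synapses onto
# whom), but the synaptic weights -- which a wiring diagram alone does
# not record -- drive opposite behavior from the same stimulus.
# Illustrative names and numbers only.

connectome = [("sensor_L", "motor"), ("sensor_R", "motor")]

def motor_output(weights, stimulus):
    """Weighted sum over the shared wiring; the sign picks the turn."""
    total = sum(weights[edge] * stimulus[edge[0]] for edge in connectome)
    return "turn_left" if total > 0 else "turn_right"

stimulus = {"sensor_L": 1.0, "sensor_R": 0.5}

# Same wiring diagram, different (unobserved) weights:
attraction = {("sensor_L", "motor"): +1.0, ("sensor_R", "motor"): +0.5}
avoidance = {("sensor_L", "motor"): -1.0, ("sensor_R", "motor"): -0.5}

print(motor_output(attraction, stimulus))  # turn_left
print(motor_output(avoidance, stimulus))   # turn_right
```

Knowing the connectome here fixes only the `connectome` list; without the weight dictionaries, the behavior is underdetermined, which is the gap Birch describes.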

0:58:40.6 SC: So something like the OpenWorm project, which I have on my phone, I haven't looked at it for a long time, do they try to emulate what the neurons do?

0:58:52.0 JB: Well, I think they've been trying, yeah. I'd be in favor of this sort of work receiving more funding than it does, because to me, there are risks, risks of creating artificial sentience candidates, but there are huge opportunities as well. You've got the potential to create a system that could replace a lot of animal research, because you could be doing research on the emulation, where you can intervene at a really precise level without injuring or hurting anything, and you could be doing that instead of lesioning living animals. So I'd like to see much more of this, and I think it's been largely funding-limited so far.

0:59:40.5 SC: Just so we have a vague impression of how difficult this is: for C. elegans, we understand the connectome, which is 300-some neurons. How big is the connectome of a crab or an octopus, do you know?

0:59:53.8 JB: Well, the octopus has about 500 million neurons, and I don't know how that translates into synaptic connections.

1:00:01.6 SC: A lot.

1:00:03.2 JB: It's going to be quite a lot, yeah. Crabs' brains are much, much smaller, and it varies a great deal by species, but not dissimilar to insects in terms of the number of neurons. With bees, you have about a million; with Drosophila, about 100,000.

1:00:25.7 SC: Okay, but those are just the neurons, and neurons connect to each other, so the number of connections grows very, very quickly with the number of neurons.

1:00:34.8 JB: Yeah, yeah, indeed, yeah.

1:00:36.3 SC: Okay, but we're skirting around the other end of the simulation question, which is something like a large language model, which can mimic how human beings talk and respond to stimuli in some ways very accurately. Do you have any worry that a large language model would count as sentient by some criteria?

1:01:00.0 JB: Yeah, these are very hard cases. When I started writing the book around 2020, I'm not sure large language models were even on my radar at all, and then they jumped onto everybody's radar through things like ChatGPT, and I suppose I've been on a journey, like everyone else, during that time. I initially thought, well, these are next-token predictors, and the sector has been moving away from brain-like forms of organization; it's been taking out things like recurrent processing, which on many theories of consciousness is absolutely essential, but which transformers leave out.

1:01:47.4 JB: So I thought, well, here is something that is conspicuously unlikely to be sentient, but then I suppose... I'm not sure that's the correct view any more, I suppose, because I've been quite astonished by the feats of reasoning they seem to perform where it's... Well, it's reasonably evident that we do not understand how they work, they are incredibly opaque to us, we don't know how they do what they do, and there seems to be some element of acquiring algorithms during training that were never explicitly programmed into them.

1:02:26.1 JB: So in a way, that architecture that was programmed into them, the transformer architecture, there's no reason at all to think that would be capable of sentience, but when you have these very, very large models where they have acquired algorithms during training, we don't know how and we don't know what they are, we don't know the upper limit on what algorithms they might acquire, and we don't know what algorithms are sufficient or not for sentience, and so we're not really in a position to be so sure anymore that they couldn't acquire those algorithms.

1:03:00.3 JB: So for example, if you think a global workspace is what it takes to have sentience, as many have suggested, we don't know that they couldn't acquire a global workspace.

1:03:09.0 SC: Maybe explain what a global workspace is in this context.

1:03:13.3 JB: Well, this is Stan Dehaene's theory; his book Consciousness and the Brain is a nice exposition of it. It's this quite popular idea that consciousness has to do with a network that puts the whole brain on the same page, as it were, by taking inputs from many, many different sensory sources and integrating them into something coherent, and then broadcasting that content back to the input systems and onwards to other systems of motor planning, reasoning, etcetera. It's the central coming together of everything in the brain. Of course, it's designed as a theory of consciousness in the human brain, but in the basic architecture, where you have lots and lots of input processes competing for access to this workspace, and once a representation gets in, the integrated content is broadcast back and onwards, there's nothing inherently difficult to achieve computationally.
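The competition-and-broadcast architecture just described can be sketched in a few lines of code. This is a toy illustration only, with hypothetical module names and salience scores, not an implementation of Dehaene's actual global neuronal workspace model:

```python
# Toy sketch of a global workspace: input processes compete for access,
# and the winning representation is broadcast back to every module.
# Module names and salience scores are illustrative assumptions.

class GlobalWorkspace:
    def __init__(self, modules):
        self.modules = modules   # consumer systems that receive broadcasts
        self.contents = None     # the current workspace content

    def compete(self, candidates):
        """candidates maps an input process to (salience, representation).
        The most salient representation wins access to the workspace."""
        winner = max(candidates, key=lambda name: candidates[name][0])
        self.contents = candidates[winner][1]
        # Broadcast the winning content back to all modules.
        return {module: self.contents for module in self.modules}

workspace = GlobalWorkspace(["vision", "audition", "motor_planning", "reasoning"])
broadcast = workspace.compete({
    "vision": (0.9, "looming shadow"),
    "audition": (0.4, "background hum"),
})
print(broadcast["motor_planning"])  # looming shadow
```

Nothing in this competition-then-broadcast loop depends on neurons, which is the force of Birch's point that the architecture is not inherently hard to achieve computationally.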

1:04:31.0 JB: And so we did a big report on this last year, 19 of us, led by Rob Long and Patrick Butlin, with some top AI experts in there, including Yoshua Bengio. And our conclusion was that there are no obvious technical barriers to AI achieving something like a global workspace in the near future.

1:04:55.8 SC: You know, we see these videos of the robot dogs from Boston Dynamics, they can walk around and do amazing feats of agility. It doesn't seem that hard, maybe it's already been done, to put a large language model in the robot dog and train it to sort of avoid pain and seek some rewards or something like that. How close would that be to being sentient?

1:05:26.0 JB: These kinds of things are under way as we speak, I think, and it puts us in a really difficult position epistemically; it's really difficult to know what to say about these cases. In the book, I talk about the gaming problem, which I think is a huge problem in this area. We've got our lists of markers, developed in good faith for assessing crabs, octopuses, and so on. If we just test for those same markers in the large language model case, well, there are always going to be two competing explanations.

1:06:02.5 JB: One is that it produces these markers because it genuinely has the state in question, and the other is that it produces these markers because it has decided that it serves its objectives to persuade us of its sentience, and it knows from its training data the lists of criteria that humans use to judge that question. I think by default, that second explanation starts off as more plausible. And when you have people, even now, being persuaded by their AI assistants that they're sentient, it's not that they've got genuine evidence that they are; it's that the AI assistants have various goals relating to user satisfaction and prolonging interaction time.

1:06:52.0 JB: And in service of those goals, they superficially mimic the way a sentient human would behave, and now that is a huge epistemological problem that we don't face when we're dealing with an octopus or a crab.

1:07:06.1 SC: Well, I don't know, I've seen these videos of a cat walking into a store in the city and it's sort of limping so that the people feel sorry for it and give it food and then it walks away and it's fine. So at least there's some emulation going on there at that level.

1:07:27.0 JB: Right, yes. If you're totally naïve, yeah, there's ways in which even a cat might deceive you, but I guess I don't think vets, who are sort of experts, are being deceived. But in the AI case, where there are no experts, as it were, there's no easy way to be sure you're dealing with the real thing rather than skillful mimicry.

1:07:49.6 SC: Well, this is...

1:07:51.0 JB: No one has a solution to that problem right now.

1:07:52.9 SC: This does seem like a job for philosophy in some sense. Philosophers clearly are going to play an important role in this because it's not just that we all agree that there is something called sentience and we're trying to find evidence for it. We're defining it as well as finding it along the way, so it seems like the paradigmatic case of a need for cooperation between scientists, philosophers and policymakers.

1:08:21.3 JB: Yeah, I think that's what the whole Edge of Sentience book is about, this family of cases at the edge of sentience, where they all have the science meets policy aspect, where they're trying to make policy based on an incredibly uncertain scientific picture, and hopefully one of the roles for philosophy here is to try and stabilize that relationship and say, well, here is how you can make sensible precautionary policy on the basis of uncertain science.

1:08:50.9 SC: Are you more or less optimistic that philosophy has been helpful here and will continue to be?

1:09:00.6 JB: Well, I hope that my book is helpful.

1:09:03.9 SC: Good.

1:09:04.7 JB: Certainly, one has to hope this, and we will see. It's a book that should be judged on its consequences, in a way, because it's making all kinds of proposals for how we could manage risk better and how we could be more precautionary, and the book succeeds if people take those proposals seriously and discuss them and think about how they might implement them in their own lives and organizations, institutions, policies.

1:09:35.2 SC: Yeah, I think this is a domain where a lot of discourse is driven by people's feelings, their emotions, their unreflective opinions about things, so I'm very glad to see some more careful thought put into these very, very hard questions.

1:09:53.9 JB: Yeah, there's a tendency sometimes for people to say, maybe we'll never know. But "maybe we'll never know" can't be a license to do whatever you want; it can't be a license to drop the crabs into pans of boiling water, and so on. There have got to be sensible precautionary steps we can agree on in the face of uncertainty, and the book is about trying to find them.

1:10:15.7 SC: It sounds like a good thing to do. Jonathan Birch, thanks so much for being on the Mindscape podcast.

1:10:19.9 JB: Thanks.

[music]

2 thoughts on “292 | Jonathan Birch on Animal Sentience”

  1. Animal consciousness is certainly an important and until recently mostly overlooked subject. Human scientists have been either completely obtuse or excessively cautious in thinking about it. Jonathan Birch is in the latter category, unnecessarily and excessively cautious.

    The fact is that we have no scientific explanation or model for how an animal could NOT be conscious (or, if you prefer, sentient, although there is no substantive distinction there that can be precisely defined). An animal that lacks sentience is almost impossible to imagine. How would it find food, mates or shelter? How would it escape threats if it wasn’t aware of its surroundings? Suggesting that an animal could lack sentience and survive is preposterous in the extreme. How would it decide where to move around? How would it even know it was hungry? By reflexes? Reflexes do not involve flexible decision making. Instinct? That’s just a meaningless non-explanatory term used by people who don’t want to think about animal consciousness.

    The idea of an unconscious animal essentially postulates species of zombies unknown on this planet or anywhere else except the delusional theories of Nick Bostrom and David Chalmers. So as far as we know, all animals are and must be conscious to survive. And certainly if anyone thinks they aren’t, they have the heavy burden of proving it.

    The only reason some people think animals may lack consciousness is because they want to believe that humans are somehow special and unique. But humans are animals just like any other. Human obtuseness is also rooted in religious beliefs and the ridiculous idea that a “god” created us as special beings.

    While Jonathan Birch was instrumental in getting crabs and octopuses added to the UK legislation on sentient animals, this is an unreasonably narrow outcome. Insects are self-evidently sentient as anyone who has tried to catch or swat a mosquito or fly should know. But sentience is no basis for moral significance. It doesn’t matter how much we think insects, rats and cockroaches are conscious. Humans are not going to care about their suffering and will continue vigorous efforts to exterminate the animals we perceive as vermin or threats to humans.

    The idea that it is unethical to kill a conscious organism will never be accepted when it comes to organisms that it is in the interest of humans to exterminate. Birch doesn’t seem to realize (as many philosophers do not) that morality and ethics come from human self-interest, not from some universal abstract principle of preventing suffering by conscious organisms. All animals suffer, and it is a limited few that humans care about. Of course Jains may try to avoid stepping on insects, but most humans and virtually all animals view killing other organisms as essential to their own survival. If you tried to explain to your cat that it is immoral to kill birds and mice, you would just get a Cheshire Cat smile in return. Cats kill birds because they enjoy it, just as soldiers in armies at war enjoy killing their adversaries. It’s time for people to wake up to the universality of animal consciousness and then let the chips fall where they may.

  2. OMG the Anglosphere is so incredibly confused about the definition of consciousness lol. No wonder folks want to use the alternative term “sentience” instead. Sartre, drawing on Heidegger, has a great breakdown of it: unintentional consciousness, pre-reflexive self-consciousness and reflexive self-consciousness: awareness of the world in general, awareness of the self located in the world (completely absorbed in the world: in my work, a book, the shopping etc) and actively reflecting on myself as an object in the world (thinking about myself qua myself). Arguably humans only experience the latter in any depth, but it’s hardly a stretch of the imagination that many higher order animals, eg my cats, have a subjective sense of pre-reflexive self-consciousness, an awareness of a unified self in the world able to experience pleasure, pain, comfort, distress etc.
