There should be some government program that forces scientists to watch dystopian science-fiction movies, so they can have some idea of the havoc their research is obviously going to cause. I just stumbled across an interview with Nobel laureate Gerald Edelman that has been on the site for a couple of months. (Apparently the Discover website is affiliated with some sort of magazine, to which you can subscribe.)
Edelman won the Nobel for his work on antibodies, but for a long time his primary interest has been in consciousness. He believes (as all right-thinking people do) that consciousness is ultimately biological, and is interested in building computer models of the phenomenon. So we get things like this:
Eugene Izhikevitch [a mathematician at the Neurosciences Institute] and I have made a model with a million simulated neurons and almost half a billion synapses, all connected through neuronal anatomy equivalent to that of a cat brain. What we find, to our delight, is that it has intrinsic activity. Up until now our BBDs had activity only when they confronted the world, when they saw input signals. In between signals, they went dark. But this damn thing now fires on its own continually. The second thing is, it has beta waves and gamma waves just like the regular cortex—what you would see if you did an electroencephalogram. Third of all, it has a rest state. That is, when you don’t stimulate it, the whole population of neurons stray back and forth, as has been described by scientists in human beings who aren’t thinking of anything.
In other words, our device has some lovely properties that are necessary to the idea of a conscious artifact. It has that property of indwelling activity. So the brain is already speaking to itself. That’s a very important concept for consciousness.
Oh, great. We build giant robots, equip them with lasers, and now we teach them how to gaze at their navels, and presumably how to dream. What can possibly go wrong?
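(If you're curious what such a simulation boils down to: Izhikevich's spiking-neuron model is famously cheap to compute, which is how you get to a million neurons. Below is a minimal single-neuron sketch in Python. This is emphatically not the Neurosciences Institute's actual code, just the standard toy model from Izhikevich's 2003 paper with its "regular spiking" parameters.)

    # Minimal Izhikevich (2003) spiking-neuron model: two coupled equations
    # plus a reset rule. Parameters are the standard "regular spiking" set.
    a, b, c, d = 0.02, 0.2, -65.0, 8.0
    v, u = c, b * c          # membrane potential (mV) and recovery variable
    dt, I = 0.5, 10.0        # Euler time step (ms) and constant input current

    spikes = []
    for step in range(2000):               # 1 second of simulated time
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                      # spike threshold: reset v, bump u
            spikes.append(step * dt)
            v, u = c, u + d

    print(len(spikes), "spikes in 1 s")    # the neuron fires tonically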
tacitus,
“If I went to sleep, someone downloaded my mind into a computer and then destroyed my body (or not) before “waking up” the AI copy of myself, I guess that copy would sense that continuity of existence every bit as much as I do when I wake up in the morning.”
The problem, however, is that IT WOULD NOT REALLY BE YOU, unless there is some high-tech link between your soul and your copy that allows you to read your copy's stream of experience. And this leads us to one very fundamental thing which comes from pesky philosophers. Unfortunately, I do not remember the name of the lady from the University of London whose talk on the philosophy of consciousness I casually attended 8 years ago, but I sharply remember her point, and it is as follows: with consciousness we have two realities. One is the biological brain that is the carrier of our consciousness; the other is the subjective experience that makes you you. Even if you map the brain and know how it works, that does nothing to your purely subjective feeling of yourself, your dreams, desires, choices, values. According to her, the philosophical definition of the soul is as follows: the soul is the subject that has that subjective experience; your soul is that “I” to which you refer when you say “I see,” “I feel,” “I choose.”
So, even if you are an atheist, you can talk about souls without committing a major sin against your faith. 🙂 And, coming back to your example: while downloading the info from your brain, try to make sure that you download the soul :-), that is, the thing which makes you you from the standpoint of your subjective perception.
On a serious note, the lady claimed that there is no reasonable conceptual framework to bridge the two realities: the reality of neuroscience and the reality of individual subjective experience. Even if neuroscience answers every question of neuroscience, that will do nothing for the second realm, the realm of individual experience. And when you think about the vastness of this gap, then all assertions like “consciousness is fundamentally biological” or “consciousness is fundamentally social” look more like artifacts of primitive faiths.
sergey,
What if our neurobiological technology improves to the point that we can make a functionally identical biological copy of the individual’s brain, including all connections and thresholds, etc.? Not just a brain model, but an actual brain?
– Does the copy have the same mental characteristics as the original?
– If copying can be done with arbitrary accuracy, will the subjective experience of the copy be the same as that of the original?
– Does a copy of the soul come for free? Or do we have to find a spare one somewhere?
You could think of this as the Star Trek transporter problem.
And speaking of studying the realm of subjective experience, which is the realm of the soul (in the sense in which the enlightened lady I mentioned in my previous post defined that realm), I wonder whether one can use the methods of science there. Perhaps if you are a scientist you can draw on your worldly experience and try to be objective in your studies of your subjective being.
But here comes the paradox: you are attempting to take an OBJECTIVE view of something which is by its very definition SUBJECTIVE, your internal experience.
Thus, the gap between the realms of subjective experience and objective science is so conceptually vast that even the first attempt at bridging it leads to an apparent logical paradox. The vastness of that gap is what makes me agree with certain assertions about religion and science Sean made in some of his earlier writings. Sean said that science does not need God. I am compelled to agree, because I think that all subjects related to religion, including God, belong to the subjective realm, the realm of the soul.
@Bruce [#49] I’ve no doubt your final paragraph would be true of a dispassionate AI (just as it would be of any aliens who might be in the vicinity). But what if there were competing AIs, each given an overriding goal to work for the advantage of one country or group? Things might get rather more fraught, and their interventions more obvious.
That seems the most depressingly inevitable development, at least when (not if) they first appear. But it raises a question regarding consciousness: If these AIs were conscious, and smarter than humans, would they not be very likely to conclude it was better to work round their programmed goals, for the greater benefit of humanity, just as most people in society agree it is beneficial to curb their baser instincts?
Maybe it will turn out that a primary aim of understanding consciousness is to *avoid* giving AI too much of it, in case the AI becomes too sensitive in its conscience (I think it's no coincidence those words sound so similar).
Also, if we assume that consciousness is an individual’s subjective experience of the interactions between a “main” network and a smaller (perhaps overlapping) “monitoring” network whose currents reflect in embryonic form those of the former, then I imagine another problem for AI developers will be to prevent spurious independent monitoring networks springing up and causing what amounts to multiple personalities.
Like heat dissipation, the magnitude of this problem is presumably directly related to the size of the networks, just as math groups of larger order tend to have more subgroups.
Neal J. King,
Even if we assume that you have an absolutely precise copy of you, with exactly the same past subjective experience, that does not do anything whatsoever to YOUR EXPERIENCE. The experience of your copy is your copy's, and yours is yours. If you are dead and your copy is left, mankind could benefit from your knowledge of QM. Good for mankind, but it does nothing for you. You may feel good about it before you die, but once you are dead, what is it to you? And I agree that the Star Trek transporter problem poses very similar questions.
On the other hand, maybe there are laws which would prohibit absolutely precise copying; as you said, you may not be able to copy it for free. Or maybe you would need to get that soul from somewhere to make a precise copy. That kind of scenario is shown in the final part of the movie “Artificial Intelligence”: when the AI boy David asks the extraterrestrial beings (or were they advanced robots?) to bring his long-dead adoptive mother back to life, they tell him that although they can recreate her body from her DNA, the one thing called a soul cannot be recreated, for once it is used it cannot be reused. All they can do is let it stay in her body for a day and no more. “Artificial Intelligence” is a touching movie, and I think it is highly relevant to what we discuss in this thread.
sergey,
Even subjectivity can be studied with a certain scientific methodology:
Feynman told us one time of some dreaming experiments he did for a philosophy class at MIT: He wanted to be able to become self-aware during the dreaming state. He eventually managed to do this, and then did some “reality testing.” “Seeing” a nail on the wall, he noted that when he “saw” himself “touching” the nail (in his dream), he also “felt” the nail: a degree of correlation that you naturally expect in the real world, but which isn’t required in a dream. He then suggested to himself that it ought to be possible that he would “see” the nail at point X, but “feel” it at point Y; and this began to happen. Because this was a dream, it was no longer necessary for X and Y to be the same.
“The experience of your copy is your copy's, and yours is yours. If you are dead and your copy is left, mankind could benefit from your knowledge of QM. Good for mankind, but it does nothing for you.”: I rather think that the experiences of the two brains would be identical, if we can make good enough copies. It could be a problem if both are actually still around (minor problem involved with the slicing & dicing), but if a perfect copy could be made non-destructively, I don’t see an obvious physical basis for saying which one is the “real” brain and which one is the “copied” brain: functional equivalence should be subjective equivalence. And I don’t see any physical entity that could serve as the basis for a soul that could differentiate between these two.
So in the instant case, I would expect that my copy would pick up from the point at which the scan had been made and go on from there, without a hitch.
Neal J. King
Interesting note about Feynman's experiment. But to make it scientific it should be verifiable, and until we get devices that can read minds it is hard to verify. The only way to verify it is to try to do it yourself. Now, the problem is that different people have different mental abilities. It took me some time to learn to become self-aware in my dreams. I usually look at my hands and try to both see and feel them. Sometimes I feel them but do not see them. It is very hard for me to keep the landscape stable in my dreams. It always changes: windows and doors disappear, I move from place to place, etc. I know some other people have dreams with a stable landscape. In any case, the things one person can do in his dreams may not be the same as what another can do, so why should you believe me, Feynman, or anyone else who tells you about their dream experiences? Sometimes those personal experiences have profound implications if we choose to believe in them, for example:
Long ago, when I was a student of physics, a classmate of mine told me that in his half-sleep he often saw himself in a grey world with almost no people. He met only one person in that world, and they talked from time to time. He had several experiences of that sort. Once they saw the body of a third person emerging and tried to bring him into their grey world, but failed. The crux is that he claimed he later met the person from the grey world in the real physical world; they recognized each other, and both were cognizant of their previous experience in the grey world. Should I believe it or not? If he is right, then we get something new and strange added to our physical reality, namely telepathic communication. My personal answer to this was simple: it is noted, but whether true or false, it has meaning only for the people who experienced it. Even if my friend is honest, that does not really add telepathic communication to my own reality, for I still have the same capabilities I had. I can deny the truthfulness of his account because it contradicts my ideas about the laws of nature, and if he was not truthful, that is OK. But suppose he was truthful: what would he care about my idea of the laws of nature? If he personally experienced it, and it is ostensibly at odds with modern physics, whom should he believe: his own eyes, or a wise guy who talks to him about modern physics?
The other hard problem comes when people talk about integral experiences. You may sometimes encounter people who say, “Jesus visited me in my dream, and I experienced profound joy and happiness, and it became completely clear to me what I should do with my life.” We can neither easily reproduce this, nor is there any point in arguing pro or contra with the person who has had the experience.
“I rather think that the experiences of the two brains would be identical, if we can make good enough copies.”
That may be so, but one of these experiences is yours and the other is your copy's, so you have no direct relation to his experience.
“I don’t see an obvious physical basis for saying which one is the “real” brain and which one is the “copied” brain: functional equivalence should be subjective equivalence.”
I agree with the first part, but I do not understand what subjective equivalence is, or why functional equivalence should imply subjective equivalence.
“And I don’t see any physical entity that could serve as the basis for a soul that could differentiate between these two.”
If there is a physical entity that could serve as the basis for a soul, then the first question is whether you could copy it. Leibniz thought of souls as monads. Those cannot be copied, but I am not sure his monads were physical in any sense. Suppose you can copy the physical carriers of souls; then, once the copying is done, and presuming the entities are not linked, there are two entities with their own individual experiences. Each one can love and feel fear and anger in its own right. And what matters to you is what you feel; to your copy, what matters is what he feels.
It is easy for us to talk about our own copies, or copies of our colleagues. But what would we feel about people whom we deeply love, with whom we are deeply connected (a wife, a child), being copied and replaced by their copies? I cannot think easily about it.
sergey,
– Dreams: Different people’s dreams are their own experiences. What can be objectively transferred are methods used to change or manipulate these experiences. The actual “informational” content of the dream is not of interest from this point of view, unless you take the point of view that dream contents have a reality of their own (I don’t, and so I would be politely uninterested in the friends your friend met in his dreams, unless there is some reality-based evidence involved), or reflect current moods and pre-occupations (as the Freudians do).
– Whose experience: If I take a document and photocopy it, the information on both is the same: it doesn’t make much sense to say that the information is only on one of the versions. Since mental contents are informational and not physical, in the same way, if a functionally identical copy of myself were made, both versions should have identical memories. Each will go forward with his own feelings and views, but based on the same past experiences and patterns. Futures will differ, but pasts will be the same.
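To put the photocopy analogy in code, here is a toy sketch (the “memories” are placeholders, obviously): two copies compare equal at the moment of copying, are nevertheless distinct objects, and diverge from then on.

    import copy

    original  = {"memories": ["learned QM", "gave lectures"], "mood": "curious"}
    duplicate = copy.deepcopy(original)      # the non-destructive "scan"

    print(original == duplicate)             # True: identical pasts
    print(original is duplicate)             # False: two distinct individuals

    duplicate["memories"].append("woke up as the copy")
    print(original == duplicate)             # False: futures diverge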
I don’t think it’s so difficult to think about: if the mind and consciousness can be supported completely by a physical platform, then it should be possible to duplicate that platform, and if you do that, you will get multiple individuals with the same past. If you don’t believe that to be true, then you must assert the existence of a unique feature that cannot be duplicated; but since essentially anything physical can be duplicated (in principle), that means you are asserting the existence of individuality based on something non-physical. You may as well call it the soul.
So, after all this running around, either there IS a soul, and consciousness cannot be supported solely on the basis of a physical platform; or there is NOT a soul, and consciousness can be supported solely on a physical platform.
It’s certainly easier to believe that there IS a soul (keeps the relationship bookkeeping more straightforward, going into the future). But that assumption puts a low ceiling on what else we might think about; and we actually have no empirical basis for believing it, because we’ve never been able to put it to any kind of test.
sergey, all this talk about souls is unnecessarily distracting, given the religious and metaphysical connotation of the word. When you use the word soul, you are implying that there is an essence to a human being that exists outside the natural realm and thus can never be discovered or investigated by scientific means. Why not just use the term “mind” instead?
As for animating a perfect copy of your brain: then yes, essentially, that copy is not the original “you”, since you still exist. But that copy has nothing missing. It’s not missing a “soul” or any other essential component that makes you “you”, and while it may be a copy, if it isn’t informed that it’s a copy (say it’s downloaded into your cloned body after your original was killed) then it will have no reason not to believe it is the one with full continuity of existence.
And take my example of a gradual transfer. Embed the new “mind container” chip inside a person’s brain, and slowly transfer brain functions from the biological brain to the chip (perhaps over a period of weeks, while they are sleeping), turning them off in the biological brain as they are transferred — in other words, do a move instead of a copy.
What then?
There were never two copies of the same brain, and once the seamless transfer is complete, the chip is fully in charge of the original physical container (the body). The only difference is that once the body dies, the chip can be transferred to either an AI environment or a newly cloned body. Assuming the chip is able to contain a perfect biological representation of the brain’s physical state and a perfect simulation of the brain’s chemical and electrical activity, then it seems to me reasonable to assume that this gives you the continuity of existence you need to continue beyond the death of your original body. If not, then what is it I am missing?
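If it helps, here is a toy sketch of that move-not-copy invariant (the function names are made up, of course): each function is deleted from the biological brain the instant it lands on the chip, so at no step do two complete brains exist.

    # Toy sketch of a gradual "move" of brain functions to a chip.
    biological = {"vision": "v-state", "memory": "m-state", "language": "l-state"}
    chip = {}

    for name in list(biological):
        chip[name] = biological.pop(name)   # transfer, then switch off the original
        # invariant: exactly one live copy of each function at every step
        assert (name in chip) and (name not in biological)

    print("on chip:", sorted(chip), "| left in brain:", sorted(biological))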
We already use artificial brain implants to replace or enhance brain function. Admittedly they are still very crude—deep brain stimulation and vagus nerve stimulation devices for Parkinson’s disease sufferers and the clinically depressed, respectively—but we certainly don’t see those patients as being any less themselves for it (in fact, quite the opposite, given that the implants restore them to a more normal state). So is there any reason to assume that there is a point where a person has enough artificial implants controlling parts of the brain that they are no longer the same person? I just don’t see it.
Neal J. King,
Bandler and Grinder, the creators of the field of neuro-linguistic programming, have an interesting approach to defining reality. According to them, reality is something which can be perceived/shared by several individuals. This reality is not objective but rather inter-subjective. On this view, our dreams are not real unless they are collectively shared, something which does not happen to most of us.
The modern concept of objective reality, based on the objective readings of mechanical devices, was shaped by Francis Bacon. Our objective reality is quite divorced from our sensory experiences. Our mental apparatus, and the habits we have trained to study this objective reality, do not give us much ammunition to study the reality of our own consciousness. They may even lower the ceiling on what we may possibly find in our individual experiences…
Speaking about the soul: I do not think objective knowledge of it would be more valuable than an individual's control of his or her world of feelings and experiences. We do not want to experience pain or intense fear. A wealth of objective knowledge about the neurological aspects of pain may not be as handy as physical training for a soldier who is wounded and has to run.
Suppose we nail the soul down. Suppose that in some far future, physicists playing with the theories of their time find that an information-processing entity is needed to maintain the consistency of their theory (much as the Higgs particle is needed in the Standard Model). Suppose they write equations that define those entities, find that there is a well-defined infinite series of solutions, that the entities can be naturally numbered, and that exactly one of each must exist for the theory to be consistent. Suppose they then develop a test and find that those things really exist and that each human has a uniquely assigned entity. Suppose they also find that those entities can go to a so-called hell and a so-called paradise in terms of their individual experience. Suppose the new theory is so complex that learning it would take half your life, but learning it does not decrease your chances of going to hell for eternity :-(. On the other hand, the psychologists of that time have derived a set of boring mental exercises which would allow souls to be “saved”, but one has to spend an entire life dedicated to those exercises. Suppose you live in that horrible world where contemporary science tells you that you go either to heaven or to hell for eternity!!! Would you choose to secure the future of your individual experiences by doing boring mental exercises, or to study science, which is fun but does not guarantee that you are saved from hell?
tacitus,
“When you use the word soul, you are implying that there is an essence to a human being that exists outside the natural realm and thus can never be discovered or investigated by scientific means.”
No, no… God forbid I engage in religious proselytizing 🙂. I did not imply anything of that sort. I use the concept “soul” in the sense in which the lady mentioned in my posts defined it: the soul is the subject of individual experience. For you, it is what you perceive to be you when you say “I see,” “I hear,” “I smell”…
What I am trying to point at is that there is a dichotomy between your own experience and the objective structure of the carrier. What I am trying to emphasize is that no matter what the carrier is (a physical brain, a chip, the Matrix, a metaphysical monad, or a particle of exotic matter), you still have two realities: one is the world outside, which you can study objectively, and the other is your subjective world. And that subjective reality may not be open to study by scientific means.
“Why not just use the term “mind” instead?”
This is an extremely interesting question. I think there is a subtle difference between the mind as a function or instrument and the soul as the user of the instrument. One can say “I lost my mind,” which implies there is a difference between you and your mind. Theoretically one could also say “I lost myself,” but I would rather consider that a metaphor, because I am not sure a person can have himself in the first place.
Importantly, our individual experience is wider than mental experience. I do not consider pain to be an experience of the mind. Also, you can experience your existence without having any thought, which was foundational for some philosophers. On the other hand, Descartes spoiled the consensus by saying “I think therefore I am,” creating an equation between soul and mind. I am not comfortable calling the perception of smell, pain, etc. mental activity.
Copying the carrier may copy your past experience, but you are still trapped in your own carrier. The interesting question is what would happen if you slowly transferred brain functions from the biological brain to the chip. I do not know what would happen to me if the functions of my brain were transferred to a chip. Would I be the same, and would I be I? It is easy to argue that being moved from a body to a chip would change your self-perception, but it is hard to argue that it would completely change it. The question is whether a continuous transfer to an entirely different carrier is possible without being broken down at some point.
sergey, I wasn’t accusing you of proselytizing, I just object to the word “soul” in the context of this discussion because it is such a loaded word, coming with all the baggage of the religious use of the term. I understand that the woman you were quoting wasn’t using it in a religious context, but I still believe it’s an unnecessary distraction.
I understand that saying “using your mind” and “using your body” necessarily poses the question “what is it that is using them?”, but labeling that as the “soul” implies a dualistic entity that exists outside both the mind and body acting as the puppet-master, and again you get into trouble with using “soul” here because of the inherent religious meaning of the word. We simply have no evidence for a soul of any kind, and I would argue that even the concept is unnecessary. Perhaps “person” or “personhood” would be a more suitable term for the actor that uses the mind and body, in the naturalistic sense.
I do not consider pain to be an experience of the mind.
Pain may be experienced by your body, but it most certainly affects the mind, otherwise you would not be able to learn to avoid it. This is proven by those who lose the ability to feel pain. Without that instant feedback, it’s much harder to avoid, and to learn to avoid, situations where your body can be harmed.
Also, you can experience your existence without having any thought, which was foundational for some philosophers.
If this is an accurate quote, it seems to me to be little more than philosophers’ psychobabble. If you’re not thinking, then what’s the point of existing?
It is easy to argue that being moved from a body to a chip would change your self-perception, but it is hard to argue that it would completely change it.
If the body remains the same, and the nerve connections to the body remain intact (big “ifs” I know) then there is no reason why there should be any change in self-perception as your brain is moved to a chip. All that’s changed is the substrate upon which your brain is built.
Now, once the body dies and the chip has to be moved into a new body or virtual environment, I would certainly expect there to be a learning process as your mind/brain gets used to its new environment. But this is similar to the process people have to go through when they have a stroke or lose a limb. There is something profoundly different about them that can be very difficult to overcome (e.g. phantom limb syndrome), but the brain, and thus the person, usually adapts to the new situation in time. However, the important thing is that it doesn’t change their personhood.
Regarding the idea of gradual transfer of subjective experience to keep continuity with a copy, a similar experiment could actually be run via fairly safe surgery:
Have a surgeon drill one small hole in the top of your skull. Then have them use a carefully guided needle to apply local anesthetic to your corpus callosum.
At first you would experience one subjective awareness becoming two; that part has already been done for severe epileptics. But later, as the freezing wore off, you’d get the unique experience of two similar “copies” of yourself merging while fully conscious.
Bruce,
How have the epileptics been able to report on two separate awarenesses?
tacitus:
“but labeling that as the “soul” implies a dualistic entity that exists outside both the mind and body acting as the puppet-master, and again you get into trouble with using “soul” here because of the inherent religious meaning of the word.”
Well, yet another friend of mine, an avowed atheist, and even an enemy of God 🙂, called our common boss “a soulless career machine.” I do not think atheists cannot use the word soul and its derivatives. And I think that when they do, it does not mean a dualistic entity that exists outside both the mind and body acting as the puppet-master. But it does imply a dualistic entity *conceptually* opposite to both the mind and body. And yes, it is acting as the puppet-master. I do not wish to assert here that it objectively exists. Personally, I think it probably does, but I have no evidence for that, so why blab about it? And in fact, it does not even matter in the context of this discussion.
Pain being a mental vs. non-mental experience: it is all a matter of definition. You can go with Descartes and equate soul with mind if that makes sense within your conceptual model of the world.
“Also, you can experience your existence without having any thought…. If this is an accurate quote, it seems to me to be little more than philosophers’ psychobabble. If you’re not thinking, then what’s the point of existing?”
You have just dissed Eckhart Tolle! Oprah would be highly displeased with you. :-) On the other hand, Goethe would be displeased with you, Descartes, and Eckhart Tolle alike. As I remember it (I had the chance to read Faust 27 years ago), Goethe thought, and put it in the mouth of Faust, that action should come first, even before perception, thinking, and being. So, here are the options:
“I act therefore I am” Faust
“I think therefore I am” Descartes.
“I am that I am” … an old classical idea from the Bible, and it may be even older than the Bible.
You choose what you like.
>How have the epileptics been able to report on two separate awarenesses?
From what I’ve read/seen they are not actually aware of it unless the issue is forced. For one thing, only one hemisphere gets the ability to speak.
The really strange/creepy thing is that the person’s speaking hemisphere will give totally bullshit explanations for the actions of the other hemisphere’s hand, when the other hemisphere is responding in a straightforward way to stimulus that the dominant/speaking hemisphere can’t see. It supports the notion that most of our actions are subconscious, and that our conscious internal narrative is actually more like the PR officer, not so much in the decision-making loop.
I’ve always been interested in what the non-dominant/non-speaking hemisphere might have to say about its life, but I’ve never seen it dealt with in the articles or documentaries.
Tacitus> “…If you’re not thinking, then what’s the point of existing?”
Spoken like a true geek! But I hope you don’t really believe this, or I feel pity for the quality of your sex life.
Great post and great discussion! To BtC: could you provide any references on these experiments?
Lots. Just google “split brain”.
John #27 wrote:
As I’ve discussed on my blog, it seems to me that the solution is to consider the brain to be a computer that is computing a “virtual” world in which the subjective things we experience really exist. We are part of this virtual world generated by the brain.
The laws of physics combined with the low-entropy initial conditions make computation possible in our universe. This then allows us to build computers that can simulate universes in which the laws of physics are completely different.
E.g. given enough resources, you could simulate a world in which people live on a virtual planet where pigs can fly. These people would really be conscious. They would presumably be unable to work out how gravity works in their world; the rules for gravity would be very complicated as the software would have to make exceptions for pigs.
As long as something is computable (formally describable), you can create a virtual world in which it exists. It doesn’t have to exist in the real world. So, it shouldn’t be a surprise that the brain would have evolved over hundreds of millions of years to effectively compute a world that has to be described using variables that don’t necessarily have an analogue in the real world (e.g. qualia like pain, hunger, thirst etc.).
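To caricature that in code (the rules below are entirely made up): the simulated world’s gravity “law” carries an ad hoc exception, which is exactly what would make it so hard for the inhabitants to reverse-engineer.

    from dataclasses import dataclass

    @dataclass
    class Body:
        species: str

    def gravitational_acceleration(body: Body) -> float:
        """The simulated world's gravity, with an exception bolted on."""
        g = 9.8
        if body.species == "pig":    # the special case the software must carry
            return -0.2 * g          # pigs drift gently upward
        return g

    print(gravitational_acceleration(Body("human")))  # 9.8
    print(gravitational_acceleration(Body("pig")))    # -1.96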
Chalmers:
There is not just one problem of consciousness. “Consciousness” is an ambiguous term, referring to many different phenomena. Each of these phenomena needs to be explained, but some are easier to explain than others. At the start, it is useful to divide the associated problems of consciousness into “hard” and “easy” problems. The easy problems of consciousness are those that seem directly susceptible to the standard methods of cognitive science, whereby a phenomenon is explained in terms of computational or neural mechanisms. The hard problems are those that seem to resist those methods.
The easy problems of consciousness include those of explaining the following phenomena:
the ability to discriminate, categorize, and react to environmental stimuli;
the integration of information by a cognitive system;
the reportability of mental states;
the ability of a system to access its own internal states;
the focus of attention;
the deliberate control of behavior;
the difference between wakefulness and sleep.
All of these phenomena are associated with the notion of consciousness. For example, one sometimes says that a mental state is conscious when it is verbally reportable, or when it is internally accessible. Sometimes a system is said to be conscious of some information when it has the ability to react on the basis of that information, or, more strongly, when it attends to that information, or when it can integrate that information and exploit it in the sophisticated control of behavior. We sometimes say that an action is conscious precisely when it is deliberate. Often, we say that an organism is conscious as another way of saying that it is awake.
There is no real issue about whether these phenomena can be explained scientifically. All of them are straightforwardly vulnerable to explanation in terms of computational or neural mechanisms. To explain access and reportability, for example, we need only specify the mechanism by which information about internal states is retrieved and made available for verbal report. To explain the integration of information, we need only exhibit mechanisms by which information is brought together and exploited by later processes. For an account of sleep and wakefulness, an appropriate neurophysiological account of the processes responsible for organisms’ contrasting behavior in those states will suffice. In each case, an appropriate cognitive or neurophysiological model can clearly do the explanatory work.
[…]
The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.
Of course, no discussion of consciousness would be complete without mentioning Dust Theory for Beginners!
Also, for the more advanced Dust Theory enthusiast, see here (good stuff in the comments).
And one of my favorites, the Hans Moravec classic Simulation, Consciousness, and Existence.
And, because it’s Memorial Day and you probably don’t have anything to do, here’s Daniel Dennett’s entertaining Where Am I?
I can never quite get my head around what the likes of Chalmers and Searle think the hard problem is. I mean, I grasp that it’s not intuitive that “unconscious matter generates subjective experience”, but I don’t think it’s particularly problematic – philosophically speaking (practically, working out how the brain works is obviously a very tricky thing). Basically, the whole “philosophical zombie” idea seems palpably ill formed. As Neal says above, how do you know I’m not a zombie? Partly because we assume zombies don’t exist, and partly because we have interactions which persuade you I have a consciousness at least broadly similar to yours. So why doesn’t that apply to the zombies themselves? If they describe and respond to their qualia in a convincing manner, how on earth can you say they aren’t really experiencing them subjectively? It seems to me to be a recipe for solipsism, something I’m fairly sure the zombie proponents reject.
And as for Chalmers’ thought experiment, again, I don’t see the problem. It’s just that putting something we experience linearly (sort of) and biologically into a repeated, technological framework takes us out of our comfort zone. So for point 2, if we assume that the brain state is reset for the second run, to me it seems obvious it’s a conscious experience even if it’s the same. We just can’t reset our own brains, so it’s impossible for us to repeat a day. But if you assume that we could, and we repeated a day in a similar manner, would you dispute that was a conscious experience? And if the artificial brain isn’t reset, then we wouldn’t expect it to respond identically, because it would have the memory of the previous day. It would probably be pretty damn confused, a la Groundhog Day. As for point 3, again, what’s the problem? Assuming the “brain” can’t tell it’s a replay, why wouldn’t it be conscious, but experiencing the same things as before? A lot of the hang-ups we have with artificial consciousness seem to be tied into hang-ups about free will. And as for point 4, of course the DVD on a shelf isn’t conscious, any more than a dead brain is. Consciousness is a process, not a state.
RE: Dust Theory
There is a mathematical take on the idea of the universe as a simulation by Jürgen Schmidhuber:
http://www.idsia.ch/~juergen/computeruniverse.html
His first paper attempts a formalism for a simulated universe/multiverse and derives some consequences from the postulates:
http://arxiv.org/abs/quant-ph/9904050
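The scheduling trick at the heart of that construction is dovetailing: interleave the execution of all possible programs so that each one eventually gets unbounded running time. A minimal sketch of the idea, with a bare counter standing in for actually interpreting each program:

    def programs():
        """Enumerate all bitstrings in length order: '', '0', '1', '00', ..."""
        yield ""
        for p in programs():
            yield p + "0"
            yield p + "1"

    def dovetail(rounds):
        gen = programs()
        active = []                        # [program, steps run so far]
        for _ in range(rounds):
            active.append([next(gen), 0])  # admit one new program per round...
            for entry in active:           # ...then advance every admitted program
                entry[1] += 1              # stand-in for executing one real step
        return active

    for prog, steps in dovetail(5):
        print(repr(prog), "has run", steps, "steps")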
Ginger Yellow, that’s been my point too. I don’t see that there is any particular unsolvable barrier to creating artificial consciousness. Whether or not we manage to accurately emulate the human brain is likely a matter of whether we can develop the technology to do it. I suspect that we will eventually, but not for a long time yet.
Once we’ve done it, it certainly will cause a lot of queasiness, particularly in religious quarters, over the issue of free will. If you create a machine that has consciousness and free will, what does that mean for the rest of us? I suspect there will be a very large number of people who will argue that the AI’s free will is simply a very good emulation and not the real thing.
Then there is another issue. If we create a conscious, thinking AI human simulation, what rights does it have? If I make an AI copy of my brain, who has the rights over what happens to it after it’s switched on? The original me, or the copy?
If you populate a simulated world with conscious AIs who are programmed to believe they are living out real lives within the simulation, do you have the right to play God over their artificial lives? Such a simulation would be a fantastic tool for all sorts of experimentation, but would it be ethical to do that, or would we be no better than the Nazi scientists who experimented on Jews during WWII?
>Ginger Yellow: “As for point 3, again, what’s the problem? Assuming the “brain” can’t tell it’s a replay, why wouldn’t it be conscious, but experiencing the same things as before?”
You really should read the “dust” links above. If you assume a simulation has an internal experience, the problem is that there is a nearly seamless spectrum of states, from a real-time simulation, to a repeated simulation using recorded inputs and random components, to recorded full synaptic states, to a dead recording, to nothing special at all, including the data encoded as bits scattered around the universe in a form meaningless without a key.
Think of the recorded brain states, stored in an array of memory locations, each representing a step in time. The only distinction between examples 2 and 3 in the thought experiment is between logically generating the next step and simply loading it into the memory region representing the current synaptic state. Given the neural simulation code involved, you could probably generate many partial examples between 2 and 3. As the “dust” idea demonstrates, you can then go from there to… really nothing at all.
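Here is the 2-vs-3 collapse in miniature, with a deterministic toy update standing in for real synaptic dynamics: computing each next state and replaying the recorded states yield bit-for-bit identical traces, so nothing inside the trace can tell the difference.

    def simulate(state, steps):
        """Run the toy 'brain': compute each next state from the last."""
        trace = [state]
        for _ in range(steps):
            state = (state * 1103515245 + 12345) % 2**31   # stand-in dynamics
            trace.append(state)
        return trace

    recording = simulate(42, 10)      # example 2: generate and record the run

    def replay(recording):
        """Example 3: just load each recorded state in turn."""
        return list(recording)

    assert simulate(42, 10) == replay(recording)   # indistinguishable traces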
I agree that it may be our imagination or intuition that is failing in many of these thought experiments, as it does in quantum physics. I also agree that it’s hard to see how the simulation would *not* be internally aware while interacting with a real-world person and reporting on its own consciousness (this is why Chalmers’ zombies are interesting: they’re a deliberately absurd/impossible thought experiment).
Yet something is still wrong. The only easy, sensible solutions to the paradox are that some kind of interaction with the real, quantumly-rooted world is required, or that in some way internal experience derives from causal relationships with the real world, which the pure repeated simulation lacks. But then, what if I give you a hit of muscle relaxant and apply sensory deprivation measures? Experiments have shown you’d suffer permanent mental injury within hours, but putting that aside, does it make sense that you would not be internally aware / conscious during that time? There are no solutions to the paradox left then.
>I mean, I grasp that it’s not intuitive that “unconscious matter generates subjective experience”, but I don’t think it’s particularly problematic…
Put 10 people side by side, holding hands and each thinking of a different word in a sentence. Can you say that there is a gestalt awareness that contains the whole sentence in its consciousness? No, eh? So why would that be true for 10^x atoms, or 10^10 neurons?
But the thought experiment that still bothers me most is the lack of any first-person distinction that I can find between going temporarily offline due to deep anesthesia, concussion, hypothermia, hypoxia, etc. (disorganized or near-zero neural firing), and permanent death, or waking up as a copy. As far as I can tell from that one, if I’m defined as one continuous instance of internal awareness, then I may have “died” many times already, and it’s only from an observer’s perspective that I’m one continuous being. Kind of gets under your skin. We don’t really continuously exist except to other people?!?!
If you want a fun (fairly safe) experiment that you just have to fake epilepsy for, check out the “Wada Test”. That and split-brain patients kill off dualism fairly convincingly.