Another Step Toward Skynet

There should be some government program that forces scientists to watch dystopian science-fiction movies, so they can have some idea of the havoc their research is obviously going to cause. I just stumbled across an interview with Nobel Laureate Gerald Edelman that has been on the site for a couple of months. (Apparently the Discover website is affiliated with some sort of magazine, to which you can subscribe.)

Edelman won the Nobel for his work on antibodies, but for a long time his primary interest has been in consciousness. He believes (as all right-thinking people do) that consciousness is ultimately biological, and is interested in building computer models of the phenomenon. So we get things like this:

Eugene Izhikevich [a mathematician at the Neurosciences Institute] and I have made a model with a million simulated neurons and almost half a billion synapses, all connected through neuronal anatomy equivalent to that of a cat brain. What we find, to our delight, is that it has intrinsic activity. Up until now our BBDs [brain-based devices] had activity only when they confronted the world, when they saw input signals. In between signals, they went dark. But this damn thing now fires on its own continually. The second thing is, it has beta waves and gamma waves just like the regular cortex—what you would see if you did an electroencephalogram. Third of all, it has a rest state. That is, when you don’t stimulate it, the whole population of neurons strays back and forth, as has been described by scientists in human beings who aren’t thinking of anything.

In other words, our device has some lovely properties that are necessary to the idea of a conscious artifact. It has that property of indwelling activity. So the brain is already speaking to itself. That’s a very important concept for consciousness.
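(For the technically inclined: the spiking-neuron model Izhikevich is known for is simple enough to sketch in a few dozen lines of Python. What follows is a minimal toy version, emphatically not Edelman and Izhikevich's million-neuron cat-scale model; the parameters are the published ones from Izhikevich's 2003 paper "Simple Model of Spiking Neurons". Driven by nothing but random background noise, the little network chatters away continually, a loose analogue of the "intrinsic activity" he describes.)

# Toy network of Izhikevich spiking neurons (after Izhikevich 2003,
# "Simple Model of Spiking Neurons"): 1,000 neurons, not a million.
import numpy as np

rng = np.random.default_rng(0)
Ne, Ni = 800, 200                                  # excitatory / inhibitory counts
re, ri = rng.random(Ne), rng.random(Ni)
a = np.concatenate([0.02 * np.ones(Ne), 0.02 + 0.08 * ri])   # recovery time scale
b = np.concatenate([0.20 * np.ones(Ne), 0.25 - 0.05 * ri])   # recovery sensitivity
c = np.concatenate([-65 + 15 * re**2, -65 * np.ones(Ni)])    # post-spike reset potential
d = np.concatenate([8 - 6 * re**2, 2 * np.ones(Ni)])         # post-spike recovery bump
S = np.hstack([0.5 * rng.random((Ne + Ni, Ne)),              # random synaptic weights
               -1.0 * rng.random((Ne + Ni, Ni))])

v = -65.0 * np.ones(Ne + Ni)    # membrane potentials (mV)
u = b * v                       # recovery variables
spikes = 0

for t in range(1000):           # 1000 ms of simulated time, 1 ms steps
    # Noisy "thalamic" background input; no structured signal is ever presented.
    I = np.concatenate([5 * rng.standard_normal(Ne),
                        2 * rng.standard_normal(Ni)])
    fired = np.where(v >= 30)[0]              # neurons that spiked this step
    spikes += fired.size
    v[fired] = c[fired]                       # reset fired neurons
    u[fired] += d[fired]
    I += S[:, fired].sum(axis=1)              # propagate spikes through synapses
    v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)   # two half-steps of the
    v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)   # membrane equation, for stability
    u += a * (b * v - u)

print(f"{spikes} spikes in one simulated second, with no input signal at all")

Even this cartoon version never "goes dark" between stimuli, which is the property Edelman is so excited about.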

[Image: a Terminator robot]
Oh, great. We build giant robots, equip them with lasers, and now we teach them how to gaze at their navels, and presumably how to dream. What can possibly go wrong?


86 thoughts on “Another Step Toward Skynet”

  1. sergey: I’ve met artists who think that too! And you know what? Their opinion on the subject doesn’t matter any more than that of those “philosophers” (they aren’t *real* philosophers of course, they are merely philosophical history majors, which is something entirely different).

    Having a philosopher talk about whether or not consciousness is biologically derived is like asking me to design a working fusion reactor with a 5-gigawatt net energy output. I’m interested in fusion power, but I have no significant technical knowledge, so I’d have no idea how to go about it. Same thing with a historian (specializing in ancient philosophers) and ideas of consciousness. These are people without even the tiniest smidgen of scientific expertise (they’re *historians* for frak’s sake), but they like to stand around and blabber like they’re at the top of every field.

  2. Sergey is right, gopher65 is wrong and tacitus is largely wrong. I state it this way for the sake of brevity and not in order to be rude, so I apologize for the tone.

    The problem of consciousness is essentially philosophical. Empirical science, given its methods, cannot observe consciousness directly: it observes brains and behavior and makes roundabout inferences from these and from the verbal responses of experimental subjects. These last are especially tricky to interpret. Interpretation is an activity fundamentally different from observation: it is a second-person, not a third-person, activity. Scientists are no better at interpreting what people say (and this is the closest thing to a direct access to consciousness other than introspection) than anyone else, and perhaps not as good.

    tacitus says: “In my own view, there has been plenty of evidence found in the last few years that biology is indeed all that’s necessary for a functioning brain to be a conscious entity. There certainly hasn’t been any evidence found to the contrary.”

    What is the evidence to which tacitus refers? We have learned a lot about how the brain works in a physical sense, and none of it brings us any closer to knowing why a brain should be associated with a subjectivity. A bunch of complicated activity involving action potentials is still a lot of complicated activity, no more subjective than any other observed physical events. If we are to answer the question “What is the relation between consciousness and matter?” we must give up clinging to physicalism, which only retards our research. (But notice I am not saying we should stop researching the brain: all such inquiries are of course interesting.) The correlations we find between events in the brain and what people tell us they feel are merely clues. As sergey indicates, the key to consciousness is social interaction.

  3. gopher65:

    Actually, the philosophers whom I referred to were full professors, teaching philosophy and writing papers on Logical Positivism. Lakoff and Johnson are professional linguists, post-positivists. An example of a book by AI experts on the subject is here:

    Pfeifer R. and Bongard J. (2006) How the Body Shapes the Way We Think: A New View of Intelligence, MIT Press, November. (Read the book review by H. Kitano in Nature, 447: 381-382, or A.A. Ghazanfar & H.K. Turesson in Nature Neuroscience, 11(1): 3)

    I agree that people risk looking like blabberers when they intrude into a field where they have no professional experience. But that should also be a concern for physicists talking about philosophy or about consciousness. One of those philosophers, a friend of mine, used to repeat that the greatest obstacle to progress comes from great scientists who believe that their expertise in science automatically makes them great philosophers. And one does not have to be as obnoxious as Lysenko (who killed genetics in the USSR). A much more serious example was Niels Bohr talking about the philosophical interpretation of QM, defining the right-thinking on the subject and preventing other people from considering alternative interpretations of QM for about 20 years (is that so?). My friend used to say (in his lectures on philosophy) that it would have been much better if Niels Bohr had learned a bit of philosophy before blabbing about it, or had at least known the limits of his ignorance.

  4. sergey,

    Niels Bohr was not merely an interpreter of QM, he was one of the guiding lights, probably the main one. The difficulty he had describing QM in words is that the natural language of QM is mathematics – and he was working on the issue before the right mathematical foundation was at all evident. Philosophy is still largely conducted in words; and it is not at all clear to me that alternative interpretations of QM are any better than Bohr’s views.

    How much does your friendly philosophy professor know about using QM? Has he ever calculated a cross-section? The eigenvalues of the hydrogen atom? Einstein/Podolsky/Rosen paradox? Bell’s theorem? If not, maybe he should learn a bit about the limits of his own ignorance.

  5. I met many philosophers who argue that human consciousness is ultimately social. For the experiment in question they would not see any chance of success…

    And why would an AI possessing an accurate simulation of the human brain be unable to socialize? Even people who have very limited social interactions—deaf and blind people, for example, or those with severe autism—are still conscious beings. Providing an AI the means to interact and socialize with the outside world is probably one of the easier problems that would need to be solved.

    In any case, what does it even mean to say that “consciousness is ultimately social”?

    Are you saying that consciousness cannot be reproduced in an AI because it has a supernatural or metaphysical component that science can never investigate, or are you saying that the biology of the brain is simply too complex for us to ever reproduce consciousness in an AI?

    And are you saying that if it were possible to make an exact copy of your human brain, including all senses, neurons, ganglia, and the chemical and electrical activity, in silicon and software (something I admit could not happen for many decades, at least), the result would not be a fully conscious copy of yourself? If it isn’t, then how would it be different? How would a non-conscious AI copy of yourself behave or react? What component would still be missing?

  6. cont’d from #29:

    With regards to the issue of the basis of consciousness: When two people can respond to this question with such different answers as “it’s based in the brain” and “it’s based on social interaction,” the one point of which we can be sure is that they are not understanding the question in the same way at all. It would take quite a bit of discussion even to provide enough framework that the semantics of the two questions could be meaningfully related to each other.

    tacitus’ inquiry in #30 is an example of the divergence in understanding of this issue. More explanations are needed!

  7. Since the argument in this comment thread seems to be about the nature and origin of consciousness, and there is a clear divide, I’d like to hear both sides’ definitions of what consciousness actually is.
    Is it being aware of your surroundings and your own internal state, to some extent? A robotic vacuum cleaner fulfils those requirements. Do you also have to be aware of your own consciousness?

  8. tacitus’ inquiry in #30 is an example of the divergence in understanding of this issue. More explanations are needed!

    That’s true, but I see nothing that can’t be explained by the differences in the initial wiring of our brains combined with the almost impossibly complex set of life experiences our brains have soaked in as we age (no doubt including various levels of physical impairment and repair as the result of ill health and/or physical trauma!).

    If the AI brain you created was a “blank” without any of this development, then I would agree that such an entity might not be regarded as a conscious entity, but if the AI was an exact copy of a human brain (as described in #30) or the developmental stages of the human brain were also part of the emulation, then again, I see no reason why such an entity would not be conscious.

    I don’t want to understate the complexity of creating a conscious AI brain. It may always remain beyond the capabilities of our technology, but it seems way too premature to write it off as a pipe dream because of some seemingly metaphysical quality to consciousness that puts it forever beyond our reach to reproduce.

  9. Since the argument in this comment thread seems to be about the nature and origin of consciousness, and there is a clear divide, I’d like to hear both sides’ definitions of what consciousness actually is.

    That’s a tough question that I am certainly not qualified to answer. That’s why I have been discussing an AI simulation of the human brain, since we are all in agreement (I would hope) that human beings have consciousness.

    For example, are ravens conscious? They have tool-making abilities that demonstrate they are smarter than the average bird, but do you need consciousness to be intelligent? I have no idea.

    No doubt as experiments on neural networks (or whatever supersedes them) advance, there will be a furious debate over whether they exhibit true consciousness or just a clever emulation thereof.

  10. Bruce the Canuck

    >So if, one day, we succeed in fully emulating the functioning of a human brain with a mixture of silicon and software then given the right external stimuli (i.e. education, experience) the result will be indistinguishable from a conscious human being…

    I agree, but there is a major problem left here, and Chalmers and his philosophical ilk shouldn’t be dissed. They have a lot to contribute; the problem is not trivial in the slightest. We could wind up duplicating a human mind, say by mapping the neural network and weighting of a corpse brain at fine detail, and *still* not understand consciousness.

    For example, here’s a thought experiment that (to my memory) I ran across via Chalmers, and that still keeps me up at night:

    1) Take a fresh corpse brain. Scan every synaptic link, weight, etc. (this is closer than you may think to being possible). Now run a simulation of that human brain for one day on a computer, interacting in a virtual world with its human loved ones from its former life. Assuming the mind is some variety of Turing machine, then by all appearances it is truly the same person: it convinces the people that loved it, claims to be conscious, and in fact carries out a lengthy conversation involving introspection about the mystery of its own self-aware conscious existence.

    Probably the neural net requires some random component to operate, maybe even quantum-generated; fine, generate the random numbers required by any means you like.

    2) In that day, assume you recorded every input, as well as the entire stream of random numbers required in the simulation of the mind. Now, assuming the Turing-machine / neural-net doctrine is true, if you replay the simulation using the same real-world inputs and random-number stream, the person should provide outputs in exactly the same way, and its internal processes should be exactly the same, no? (A toy sketch of this determinism follows the list.) So said simulated person is still having a conscious experience of that day, are they not? And I mean internally, the way you and I are now. Yet the day is a mere repetition.

    3) Now assume you recorded every internal step and neural firing in that day, just as you recorded every input and output of the system. You can now replay the day like a video recording, every event of the mind both internal and external, with no neural simulation required. Is the person conscious while the recording is played back?

    4) Now assume the recorded day, both I/O and every synapse firing of every thought, is burned to a super-DVD and left static on a shelf. Is the disk having a conscious experience?
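    To pin down the determinism I’m assuming in step 2, here’s a toy sketch in Python. The “mind” below is a throwaway stand-in function, not a brain model; the point is only that once you fix the inputs and the random-number stream, the whole trajectory replays exactly.

    import numpy as np

    def toy_mind(inputs, seed):
        # The seed stands in for the recorded stream of random numbers.
        rng = np.random.default_rng(seed)
        state, outputs = 0.0, []
        for x in inputs:
            # State evolves from the previous state, the input, and noise.
            state = float(np.tanh(state + x + 0.1 * rng.standard_normal()))
            outputs.append(state)
        return outputs

    recorded_inputs = [0.5, -1.0, 0.3, 0.9]      # the recorded real-world inputs
    first_run = toy_mind(recorded_inputs, seed=42)
    replay = toy_mind(recorded_inputs, seed=42)  # same inputs, same random stream
    print(replay == first_run)                   # True: the trajectory is identical

    If the Turing-machine doctrine holds, the replayed person in step 2 is this same situation scaled up.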

    Consciousness is a hard problem. In fact I’d go further; it is *the* problem, the big enchilada, the problem of our existence and the meaning of life, the one we avoid dealing with as much as we can, partly because so much nonsense has been said about it through history. Neuroscientists and physicists may be too close to the gearwork of replicating it to appreciate just how big of a problem it really is. My own best guess is that the existence of our internal experience implies that the “platonic world” of mathematical proofs, ideas, and such is in some way just as real as atoms and photons.

  11. tacitus,

    I think the question that you are addressing is something like, “What are the physical requirements for a structure that can be the material support for consciousness?”

    I think the question that John is addressing is something like, “Why is there an internal experience corresponding to the externally observable activities that creatures (animals and humans) engage in? Why does it inhere in some structures and not in others?”

    By this rephrasing, I provide my interpretation of the term “consciousness” as being this internal experience, knowable directly in oneself but only by inference in others.

    John’s argument appears to be that, just as consciousness of others is merely an inference, consciousness in oneself arises in social interaction with others. If you were raised by robots that made no attempt to communicate with you, would a concept of self, needs, conflicts, etc., arise? Not to say that you wouldn’t have needs and problems, but you would never have to explain them to anyone; so it would be a do/don’t do issue, not a topic for argument or negotiation.

    According to that school of thought, I would expect that social animals like dogs, intelligent birds, and so on would be conscious. It’s not clear to me that social insects have to “process” very much with each other, so probably not (or not much: consciousness may be a matter of degree).

    With regard to a neural network: First you have to put it in control of something you can interact with. Then you have to see if you like it; in other words, if you would feel bad about zeroing it out.

    I think there is a distinction between the old rule-based Artificial Intelligence programs and the neural-network stuff: I suspect that the developer of the rule-based AI system will never be able to take seriously the end-product, but that might not be true for the neural-net system. So the NN approach has a leg up on the Turing test.

    Bruce: A DVD cannot be interacted with, so I wouldn’t consider it conscious. Also, it’s frozen, so not much could be going on from its point of view either.

  12. Bruce the Canuck

    >A DVD cannot be interacted with, so I wouldn’t consider it conscious.

    But that’s mistaking the problem. The problem is not the “turing test” problem, of how a being appears from the outside. The problem is why there is an internal experience of being conscious associated with it.

  13. Bruce — thanks for the interesting post. I agree that the question of consciousness is tied up with the question of what it means to be human. I can see why Chalmers’ thought experiment would be troubling to people — especially if they are religious in any way. I’ve been in enough discussions with fundamentalist Christians to know that the thought of our conscious mind being nothing more than a highly complex biological machine responding to internal and external stimuli can be anathema to many people.

    Frankly, having thought about it some, it doesn’t bother me. It doesn’t change the reality of who I am, any more than my deciding that I didn’t believe in God did. Some Christians argue that if there was no God, then there would be no reason for anyone to act morally and the world would be a hellish place to live, with everyone raping and pillaging to their own heart’s delight.

    That’s nonsense, of course; the truth is the world is the way it is, whether or not there is a God in charge. (What people would do if they all suddenly discovered that their religion was a sham is another matter entirely.)

    That’s how I feel about this debate over consciousness. If it turns out that we’re just highly complex biological machines that can (one day) be copied and recorded and played back, then it doesn’t change who I am. If free will is merely an illusion, then if it’s a good enough illusion to fool everyone, that’s fine with me! Yes, if we can one day perform those thought experiments in reality, it would be extremely freaky to watch, but I suspect we will be able to deal with it nonetheless.

  14. I think the question that John is addressing is something like, “Why is there an internal experience corresponding to the externally observable activities that creatures (animals and humans) engage in? Why does it inhere in some structures and not in others?”

    By this rephrasing, I provide my interpretation of the term “consciousness” as being this internal experience, knowable directly in oneself but only by inference in others.

    Okay, assuming that is a fair characterization, I still don’t see why John rejects the possibility that we can reproduce a conscious AI from purely physical parts. Our ability to internalize is a product of our evolution and is a key part of our being a social animal and of why we empathize when we see others hurting. Our brain is wired that way, so I still don’t see the need to invoke a metaphysical quality that takes us out of the natural world, which is what John seems to be implying.

    If what he’s saying is that you can’t quantify consciousness in physical terms, then perhaps I can see what he’s getting at, but that still doesn’t mean that consciousness isn’t itself the result of purely naturalistic processes. To argue otherwise is getting dangerously close to the position supporters of Intelligent Design, like the Discovery Institute, take on the issue.

  15. Neal J. King:

    “How much does your friendly philosophy professor know about using QM? Has he ever calculated a cross-section? The eigenvalues of the hydrogen atom? Einstein/Podolsky/Rosen paradox? Bell’s theorem? ”

    Perhaps not, for he was a simple engineer turned philosopher turned professor. A few years ago he was elected a member of the Russian Academy of Sciences and also became the director of one blabby Institute of Philosophy in Siberia. That said, I do not consider him a very informed person; certainly he was weak in physics and math, yet about N. Bohr vs. the philosophy of QM he did not say anything different from what Murray Gell-Mann later said in “The Quark and the Jaguar.” Apologies for using the popular reference; reading it certainly exposes me as a dilettante 🙂 , but I never said that I am a physicist.

    Speaking of the Bell inequalities as well as the Kochen-Specker theorem, which according to some philosophers show the impossibility of a local-realist interpretation of QM: would you like to comment on Chris Isham’s approach to the interpretation of QM, which intends to resolve the issue of realism using topos theory/intuitionistic logic?
    I’m just curious about your opinion on this…

  16. tacitus,

    I think that John would not argue for the impossibility of consciousness being based on a non-brain medium. I think he’s interested in a different issue, which is the nature of consciousness as such. As I was saying originally, the “question of consciousness” can be taken from a variety of angles, and the answer to one angle is not easily related to the answer to another.
    .
    .
    sergey,

    The fact that one person says the same thing that another does about a topic does not mean that their understanding is identical or even similar on that topic. Nothing that you say about your philosopher friend gives me any confidence that he has any degree of comfort with hardcore QM; for me, this is an important point: I have lost interest in what people think about QM who haven’t done homework sets on it. They’ve never had to ask themselves, “Why am I doing this calculation? What does it mean? What the HELL does it mean?”

    With respect to Gell-Mann: He’s certainly a great physicist, but I didn’t find anything of philosophical interest in his popular writing. Maybe his writing style didn’t catch me: I read it very quickly decades ago. He’s very clever and very erudite; but not, as far as I can remember, particularly philosophical. Even Feynman – who claimed to despise philosophy – had more interest in philosophical topics: the experience of dreaming, for example.

    With respect to KST and Isham, etc.: Nope, haven’t read it. The stuff I’m talking about is pretty basic and pretty old.

  17. Bruce the Canuck

    > I can see why Chalmers’ thought experiment would be troubling to people — especially if they are religious in any way…If it turns out that we’re just highly complex biological machines that can (one day) be copied and recorded and played back, then it doesn’t change who I am…

    Well I’m 9/10 atheist, and it still keeps me up at night. I think many people partially embrace atheism for the wrong reason – because it seems to simplify the world by making it entirely atoms and photons. But it doesn’t, really, because the problem of conscious experience remains, and the world seems to be just as much information as physical.

    The playback thought experiment is interesting because steps 3 and 4 are absurd. How can a static copy have a conscious experience? It relates to the question of how you can exist now when your existence is finite – once you are dead and those who remember you have died, in what way were you ever alive except as perhaps a (static?) pattern further back in the stream of time?

    And the answers you settle on for each step very much should have real consequences for your decisions. For example, assume you don’t want to die. (Many people say, well, a good stretch is enough for me, but that’s absurd – any limitation on your life is a sort of passive suicide; barring suffering due to age, can you imagine choosing suicide? So just assume death is the big bad, now and in the future.)

    Well then, general anesthesia and a good hard concussion both appear to be a complete discontinuity in your consciousness (having experienced both, I can say they seemed very unlike sleep). If consciousness is a pattern of causal events running on a neural Turing machine, then an interruption in your consciousness, and the re-booting of it as a simulation of you generated after your biological death, is just as much you as the you that wakes up after minor surgery or a hard knock on the head.

    If you believe that your current stream of conscious awareness is the only you and a copy would not be, then you should fear minor surgery or hard knocks on the head just as much as death. If you believe that an interruption in your current stream of awareness is not death, then you either believe that the information state is the true you, and could survive your biological death, or you believe you are somehow magically attached to your current biological copy, or you believe in a spirit world.

    So assuming you’re an atheist, and that you do not fear general anesthetic, yet do not want to die, and believe a simulation of a perfect scan of your neural network would be you, then why don’t more people sign up for cryonics?

    I admit I haven’t, and the only real conclusions I can come to as to why are that it “seems silly” and may open one to ridicule, or that my rational and emotional beliefs are in conflict. That, or my own beliefs on this issue lead me to the absurd conclusion that I should fear general anesthesia just as much as death, because from a first-person perspective it is death and may as well be forever; it would only be from an observer’s point of view that it isn’t equivalent to permanent death.

  18. Neal J. King:
    What you say about my friend is correct, but I did not bring that guy up as an authority on QM; I brought him into this discussion as a damned historian of science (as gopher65 puts it). Aside from the fact that you personally like N. Bohr’s philosophical interpretation of QM, do you really have any problem with my friend’s claim that N. Bohr’s interpretation of QM suppressed the work on the philosophy of QM for 20 years? Haven’t we got more than 14 interpretations of QM now that the shadow of the great guru has faded?

    I cannot stipulate what should be considered philosophical by a physicist such as your good self, yet I have some vague idea regarding what philosophers consider to be philosophical. And as far as I know, the pesky philosophers’ ilk still believe that they are the ones who are the experts in consciousness, cognition, ideas, logic, etc. During the first half of the 20th century they were all crazy about logic, that is, mathematical logic. That ended with Wittgenstein… Now they are crazy about the analysis of language and the cognitive sciences. In that environment, linguists such as Lakoff and Johnson rule the party. I am talking about the American brand of philosophy, though…

  19. Bruce, Peter F. Hamilton’s “Commonwealth Universe” novels imagine exactly the type of scenario you talk about. People can download copies of themselves to be kept in a secure location should anything happen to their physical body. If they die, a clone is grown and the imprint uploaded into the body, and they resume their life as before (minus the experiences after the imprint was taken). They also have a chip embedded in their brain taking backups so a more up-to-date imprint can be maintained.

    In the later novels, humans begin giving up their bodies altogether and download themselves permanently into cyberspace where they live out virtual lives limited only by their imagination, but they also maintain the option to download a copy into a spare body if they need a physical presence in the real world. Of course, that means there is more than one copy in existence at one time.

    He takes things to the extreme. For example, in one part of the story, there is a man who has cloned himself a dozen or more times, and they all live together in the same house and they share a high-tech psychic link *and* they share the same girlfriend. So she ends up having sex with multiple partners except they are all the same person (or personality) and they all share the sexual experience from multiple vantage points. Go figure!

    In Hamilton’s universe, society has already accepted the continuity of a person through multiple physical deaths and transfers, and he doesn’t really delve much into the philosophical aspects of a portable consciousness (he’s a very plot-driven writer, so that’s really no surprise), but it’s interesting in the way he explores the possibilities of such a technology.

    I get your point about unconsciousness being like death in that it’s a discontinuity. (Now you’re making me feel like I don’t want to go to bed! 🙂 ) If I went to sleep, someone downloaded my mind into a computer and then destroyed my body (or not) before “waking up” the AI copy of myself, I guess that copy would sense that continuity of existence every bit as much as I do when I wake up in the morning.

    It’s funny though, I was thinking about this when I first read the post. I think most people would object viscerally to the idea of downloading their mind onto a chip as a way to achieve continuity of existence. It would be very hard for them to come to terms with the idea that the copy on the chip was actually them, and that it wouldn’t be a completely different person who was taking over once they (the original) had died.

    On the other hand, suppose you were able to embed that chip into a person’s brain, and then gradually and seamlessly transfer all functions of the brain onto the chip, slowly deactivating the biological parts of the brain as they became superfluous. They wouldn’t notice anything strange at all, and at the end of the day, when the body dies, you have the exact same copy held on a chip as you would have if you just did a direct dump all at once (even if it’s done at the moment of death).

    But, psychologically, I suspect the difference would be night and day, and many more people would accept the gradual transfer as allowing for a continuity of being. There would never be two full copies at any point, so you remain a single, unique being, even if your mind is straddled across flesh and silicon.

    As for anesthesia, I too have had plenty in my time for one thing or another, but I think we don’t fear it because we trust that we will wake up from it, since it is extremely rare for something to go that badly wrong. Of course, death is also made more palatable by the hope that we will wake up in a heavenly afterlife (funny how people never think they’re going to hell!). I certainly hope there is some sort of continuity of existence after death (though, as an atheist, I doubt it), but I fear that it will come way too early to expect any sort of technological solution, unless there is some breakthrough in life-extension technology, which I guess is always possible. Ho hum.

  20. Bruce the Canuck

    >…I fear that it will come way too early to expect any sort of technological solution…

    If you look up brain-tissue scanning and automated neural-network recognition, there are a decent number of papers covering actual prototype hardware and software, and several competing research groups. It appears almost technically possible now, just unreasonably slow for scanning a whole human brain. Given that it’s a kind of digitizing, with any decent level of effort you can expect a Moore’s-law kind of speedup. Note that such scanning requires that you be not just dead but preserved, stained, and sliced into brain salami.

    For simulation, look up the IBM Blue Brain project, or the mouse-brain simulation efforts. For a sense of scale, the whole network that makes you would be on the order of 1000 TB ($100 of DVDs in bulk). A 1 cc mouse brain takes about 3 months to scan, albeit only with the detail needed for the network, not enough for the synaptic weights (we’re 1500 cc or so).

    Basically, unless something’s really missing from our understanding of neuron biology, it’s already technically possible to scan a person in and simulate them. It’s the sort of thing a Manhattan Project-scale effort could pull off, although that’d be a foolish way to spend $ given that the cost halves every 2 years anyway. If Moore’s law holds, then the first simulated dead person could occur within a decade, and it would be affordable for a middle-class worker within two. (The back-of-envelope arithmetic is sketched below.)
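    Here is that arithmetic as a few lines of Python, taking my own figures above (3 months per cc of scanning, a 2-year halving time) at face value; they are rough guesses, not numbers from any paper.

    months_per_cc = 3        # quoted mouse-brain scan rate, months per cc
    human_cc = 1500          # rough human brain volume
    naive_months = months_per_cc * human_cc
    print(f"At today's rate: {naive_months} months, about {naive_months / 12:.0f} years of scanning")

    # With the scan rate doubling every 2 years, find when a whole-brain
    # scan drops below one year of work.
    halving_years = 2
    years = 0
    while naive_months / 2 ** (years / halving_years) > 12:
        years += 1
    print(f"Falls below a year of scanning after about {years} years of progress")

    That comes out to roughly two decades, the same ballpark as the guess above.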

    We wouldn’t even need to understand what’s going on that makes us have general intelligence, never mind internal experience. It’s enough to understand neurons, build decent scanners, and have faster computers. In keeping with the original posting, it’s another case of just smart enough to get into trouble.

    Perhaps we geeks aren’t signing up to get our heads frozen only because it’s currently so mocked, and who wants to be the only one to wake up in Second Life 2050, everybody you knew long since dead?

    Note this is an argument for the singularity. If Moore’s law kept holding true, the cost of a virtual worker would halve every 2 years! That means an exponential speedup in all content-generating fields, technical or not. It also means you would have to commit biological suicide to compete in any intellectual field! I recognize how fantastic and whatnot the concept is, and I (nearly) completely doubt it myself, but it appears you only need to believe:

    1) A simulation of a human’s neural net is sufficient to be that person
    2) We understand basic biological neuron function
    3) Moore’s law continues for another 10 years
    4) Somebody somewhere is dumb/smart/foolish/desperate/lonely/depraved enough to do it

    I have a lot of trouble knocking down any of those 4 myself. Yet I don’t seem to believe the conclusion, either.

  21. Bruce,

    I guess your (or Chalmers’) example is not really about the recording of the replay of states of a model of the brain, but about the model itself: You can’t interact with a recording, and it’s a static block of information. But if you download an image of the state at one time, you could re-boot the brain model to that state, which would be like waking up as that person at that time.

    However, just as important as the brain model is the interface to something like a physical body. Think of all the modeled nerve endings that have to terminate in something that provides input/output, and the sensation and control issues. It’s not just a Moore’s-law issue.
    .
    .
    sergey,

    I don’t see how Bohr could suppress other people’s thought. He and Einstein had a famous series of discussions on the meaning of QM, which it is generally agreed that Bohr “won”. What is it about winning an argument that constitutes “suppression”?

    Schrödinger had his interpretation of QM in terms of waves which was incompatible with insights already gained about the nature of atomic physics; the wave-mechanical formalism had to be disentangled from the interpretation to provide a physically consistent theory.

    There are other interpretations of QM than Bohr’s that have developed, although my personal feeling is that most of them don’t add very much useful, for my taste. Probably the most definite is the many-worlds interpretation; but I’m not sure that adding an infinite multiplicity of realities is any improvement over saying that there is some kind of singularity in the evolution of the state vector. You are still left with the question, Why am I in the branch that I am in?, which doesn’t seem much different from a re-statement of the original issue.

    One of my current interests is reading a history of the development of QM: the 9-volume set by Mehra & Rechenberg ( http://www.amazon.com/Historical-Development-1900-1941-1942-1999-Epilogue/dp/B002155E94/ref=sr_1_19?ie=UTF8&s=books&qid=1243149912&sr=1-19 ), which actually discusses many of the individual papers written by the pioneers, in their context as part of the active discussion & development. I’m interested in understanding in more detail how the “standard” conceptual framework of QM was developed: It was not easily arrived at. I believe that a lot of the newer interpretations are attempts to avoid some of the conceptual strangeness of this framework, but do not reflect an understanding of why this strangeness was introduced in the first place. Bohr & friends spent a lot of time wrestling over what could constitute a reasonable kind of explanation for atomic phenomena, building models according to these ideas, and checking them against spectroscopic measurements. The anomalous Zeeman effect was a real headache; and they were continually baffled by the problem of helium. The photon concept was pioneered and promoted by Einstein, and was seriously resisted by Bohr because of apparent incompatibility with the correspondence principle, until the Compton effect was experimentally demonstrated. Probably the most important single step was Heisenberg’s matrix mechanics, but there is a lot of context to the question of what Heisenberg thought he was actually doing. Certainly, he was not thinking about operators and Hilbert space in 1925.

    I will have a firmer idea of what I think about these interpretations of QM when I am clearer on how we got to where we are with respect to the formalism of QM. It is a complex and multi-threaded story.

  22. Bruce the Canuck

    >You can’t interact with a recording…

    Again, the problem IS NOT how the model behaves. The real problem is whether the model has an internal experience at the time of its behavior. A follow-on question is: is the internal experience a requirement of the model acting convincingly like a person, including conversing about its internal experience as we are here? And if it is not a requirement to carry out this conversation, then nothing makes any sense – or at least, I could believe you don’t exist the same way I do, that you’re one of Chalmers’ zombies.

    >However, equally important as the brain model is the interface to something like a physical body….It’s not just a Moore’s-law issue.

    Give a quadriplegic a wrap-around VR headset and a Second Life account. Does he still exist?

    That aside, what you’re saying then is that our internal conscious experience requires interaction with the remainder of the universe; an open system. The recording doesn’t experience its actions because the data stream doesn’t originate in the real world on replay.

    That leads to the idea that in some way our sense of our conscious experience being within ourselves is false, an illusion, and conscious existence (qualia?) is located in the world, not in our brains; thus our senses and brains are only a way station where the qualia have a chance to interact. Our internal lives are an illusion, and the world at large is the only thing that is real. What happens, then, if you get a community of cadaver brains scanned in, and give them free rein to build a virtual society?

    I think the real points here are:

    1) Chalmers et al often do have something to contribute, even if they’re very often wrong

    2) Either Moore’s law will fail soon, or we’re about to go through a technological transition much like the splitting of the atom. That would make Kurzweil one of the guys who correctly foresaw the splitting of the atom but incorrectly thought it would lead to electricity too cheap to meter, rather than Very Bad Things and/or 60 years of balance-of-terror.

  23. Bruce,

    If you can’t interact with something, you can’t evaluate its consciousness. I can only infer the consciousness of another being by interaction; I’m not a mind reader (except of my own mind).

    It is clearly NOT a requirement of “acting convincingly” to actually BE conscious: the recording of the states of the model is not that different from a movie of a person, and no one claims that a film of an actor is conscious. At this moment, you have no way to PROVE that I am NOT, in fact, one of Chalmers’ zombies. However, at this point, I suspect you find it more in line with Occam’s razor to believe that I am actually conscious.

    The quadriplegic example: He still has eyes that provide sensory input to the neural network; and there has to be an output to impart controls to the 2nd-life avatar.

    “The recording doesn’t experience its actions because the data stream doesn’t originate in the real world on replay.” Not exactly what I mean: A recording is not a valid object upon which to project conscious experience, just as I would not consider the question of whether a pair of tossed dice feels a sense of uncertainty as to their result. I would be willing to entertain the question with respect to the instantiation of the brain model itself, which has capabilities and can interact.

    Claiming that “our sense of conscious experience being within ourselves is an illusion” is just avoiding the issue. It’s this sense which is the issue; calling it an illusion does not make any difference. If you are experiencing a nightmare, simply realizing (after waking up) that it is an illusion does not make it any more bearable at the time you are experiencing it, and certainly does not provide any explanation for why you had it.

    If you downloaded images of different brains into models and then let the models interact, you could have a society. This would be a 2nd-life situation.

    I don’t think the possibility you’re talking about is that close: I don’t think the models are good enough to really hold a full brain-scan, and I don’t think we’re near close enough to being able to take a full brain-scan, even if you’re willing to be sliced & diced live for the cause. I think Moore’s law is the least of our worries. I don’t think we know nearly enough neurobiology.

  24. Bruce the Canuck

    >I think Moore’s law is the least of our worries. I don’t think we know nearly enough neurobiology…

    Scanning the neurobiology in terms of raw data is already possible, or nearly so, though not yet economical, and just as with DNA sequencing, it can be expected to become rapidly cheaper. If you assume Moore’s law continues, the problem is reduced to *only* not understanding the low-level neurology. That looks to be a much easier problem than figuring out how to make a general AI from first principles.

    So assume you’re right and the neurobiology takes a few decades; that then leads to a powder keg. You then have a large overhang, where the moment the raw-scan-data-to-network-model and simulation-of-neurobiology problems are solved, virtual people can be run cheaply and at rapid time scales immediately. That’s a recipe for sudden uncontrollable change.

    The scenario of controllable transition is one where the software/neurobiology problem is solved while the hardware only allows 1 test subject, run at high cost and slower than real time. All the scenarios have in common that we’re likely to have functioning models of conscious minds before we understand what we’re doing or even how they actually work, much less how they produce an internal experience.

    Still, Skynet is unlikely. A real AI would seem better able to achieve its ends through near-invisibly subtle manipulation rather than brute force. A lost email there, a leaked memo here, some tweaking of online dating, a little playing with the stock market. People who do the right things or feed it real-world data do slightly better; others do a little worse. It wouldn’t be hard for such a thing to change society greatly in a short time, without violence or awareness of the cause.

