My answer to this year’s Edge Question, “What Do You Think About Machines That Think?”
Julien de La Mettrie would be classified as a quintessential New Atheist, except for the fact that there’s not much New about him by now. Writing in eighteenth-century France, La Mettrie was brash in his pronouncements, openly disparaging of his opponents, and boisterously assured in his anti-spiritualist convictions. His most influential work, L’homme machine (Man a Machine), derided the idea of a Cartesian non-material soul. A physician by trade, he argued that the workings and diseases of the mind were best understood as features of the body and brain.
As we all know, even today La Mettrie’s ideas aren’t universally accepted, but he was largely on the right track. Modern physics has achieved a complete list of the particles and forces that make up all the matter we directly see around us, both living and non-living, with no room left for extra-physical life forces. Neuroscience, a much more challenging field and correspondingly not nearly as far along as physics, has nevertheless made enormous strides in connecting human thoughts and behaviors with specific actions in our brains. When asked for my thoughts about machines that think, I can’t help but reply: Hey, those are my friends you’re talking about. We are all machines that think, and the distinction between different types of machines is eroding.
We pay a lot of attention these days, with good reason, to “artificial” machines and intelligences — ones constructed by human ingenuity. But the “natural” ones that have evolved through natural selection, like you and me, are still around. And one of the most exciting frontiers in technology and cognition is the increasingly permeable boundary between the two categories.
Artificial intelligence, unsurprisingly in retrospect, is a much more challenging field than many of its pioneers originally supposed. Human programmers naturally think in terms of a conceptual separation between hardware and software, and imagine that conjuring intelligent behavior is a matter of writing the right code. But evolution makes no such distinction. The neurons in our brains, as well as the bodies through which they interact with the world, function as both hardware and software. Roboticists have found that human-seeming behavior is much easier to model in machines when cognition is embodied. Give that computer some arms, legs, and a face, and it starts acting much more like a person.
From the other side, neuroscientists and engineers are getting much better at augmenting human cognition, breaking down the barrier between mind and (artificial) machine. We have primitive brain/computer interfaces, offering the hope that paralyzed patients will be able to speak through computers and operate prosthetic limbs directly.
What’s harder to predict is how connecting human brains with machines and computers will ultimately change the way we actually think. DARPA-sponsored researchers have discovered that the human brain is better than any current computer at quickly analyzing certain kinds of visual data, and have developed techniques for extracting the relevant subconscious signals directly from the brain, unmediated by pesky human awareness. Ultimately we’ll want to reverse the process, feeding data (and thoughts) directly to the brain. People, properly augmented, will be able to sift through enormous amounts of information, perform mathematical calculations at supercomputer speeds, and visualize virtual directions well beyond our ordinary three dimensions of space.
Where will the breakdown of the human/machine barrier lead us? Julien de La Mettrie, we are told, died at the young age of 41, after attempting to show off his rigorous constitution by eating an enormous quantity of pheasant pâté with truffles. Even leading intellects of the Enlightenment sometimes behaved irrationally. The way we think and act in the world is changing in profound ways, with the help of computers and the way we connect with them. It will be up to us to use our new capabilities wisely.
@Aleksandar,
“For a human, the “algorithm” works, because a human can see that something is correct without a proof.”
I wholly reject that. Humans can often provide a pretty good guess, but if humans could always see truth without proofs, then why is so much of physics (and even so much of mathematics) counter-intuitive?
Sean Carroll
“Modern physics has achieved a complete list of the particles and forces that make up all the matter we directly see around us, both living and non-living, with no room left for extra-physical life forces”.
You cannot use physics to reach conclusions that are philosophically untenable.
a) Necessarily, consciousness is causally efficacious, at least in the progression of our thoughts. If it were not, then the certainty of my own consciousness would be misplaced, since my certainty would be derived from physical processes rather than from consciousness itself. But this is absurd, since thinking that I am conscious necessitates that I am conscious — that is to say, it is self-contradictory for there to be any doubt on this issue.
b) Consciousness is non-physical. The physical is characterised by the wholly quantitative. Consciousness by the wholly qualitative. Hence consciousness is of an ontologically differing type than the physical even if produced by the latter.
c) The mechanistic philosophy which underpinned the birth of modern science subtracted the qualitative from the physical realm and placed it into the mind. Hence the supposition that there are no colours, sounds and smells out there.
This wasn’t actually a discovery by science; it was a philosophical *stipulation*!
The upshot is that physics — at least as currently construed — cannot *by definition* explain the existence of consciousness.
So to say that physics leaves no room for extra-physical life forces or consciousness means that your physics has gone awry.
Of course, “machine” is a fairly loose term; “we are machines that think” could mean anything.
My question is this: could there be a register machine program which could simulate the physical structures of the human brain (and enough of its environment to include sensory input), and so simulate any externally observable human behaviour, including all of our language about subjective experience?
The follow-up question is: do you think that your own subjective experience could, in principle, be the result of such an algorithm, that is to say, a series of simple instructions run one after the other, with no parallelism? (A minimal sketch of such a machine is given below.)
If so, then what could unify such a process? Let’s say there was a small time gap between each simple instruction; what is it that could make that information even seem to be unified – for example, seeing all the letters of the word “the” simultaneously?
If, on the other hand, our subjective experience could not, even in principle, result from a set of simple instructions running one by one, then how is it that the simulation can report on subjective experience, for example “I feel nauseous”?
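For concreteness, here is a minimal sketch of the kind of register machine I have in mind: a program is just a list of simple instructions executed strictly one at a time, with no parallelism. The two-instruction set is an illustrative choice (a Minsky-style machine), not anything brain-specific.

```python
# Minimal register machine: exactly one instruction executes at a time.
# Instruction set (illustrative):
#   ("inc", r, j)      increment register r, then jump to instruction j
#   ("decjz", r, j, k) if register r is 0 jump to k; else decrement r and jump to j

def run(program, registers):
    pc = 0  # program counter: only one instruction is ever active
    while pc < len(program):
        op = program[pc]
        if op[0] == "inc":
            _, r, j = op
            registers[r] += 1
            pc = j
        else:  # "decjz"
            _, r, j, k = op
            if registers[r] == 0:
                pc = k
            else:
                registers[r] -= 1
                pc = j
    return registers

# Example: add register 1 into register 0, one tiny step at a time.
print(run([("decjz", 1, 1, 2), ("inc", 0, 0)], {0: 3, 1: 4}))  # {0: 7, 1: 0}
```

Machines of this kind are Turing-complete, so the question doesn’t depend on the particular instruction set; any strictly serial machine raises the same puzzle about what could unify the resulting experience.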
Reply to David:
What I meant is that humans can see that the relevant statements are correct without a mathematical proof, by using intuition and experiments.
Robin Herbert
Agreed. And “thinking” also is a loose term.
We could expand the term to include anything that can run an algorithm, down to the simplest computers, or narrow it so that nothing but human brain activity qualifies.
Torbjörn Larsson
I think the intent of the Edge question was to encourage speculation on what human capabilities we might eventually create in machines as the term is normally used. Of course, I imagine anyone could take this sort of open-ended question and go with it where they want. Sean has chosen to blur the distinction between human and machine capabilities by declaring we are machines.
The more interesting questions for me are:
1- How did inanimate matter become alive and eventually able to think?
2- Can we create something that can think but is not alive?
Bravo to the termination of this blog’s system of comment pop-rating. In my own rating system this blog will from now on rate much higher. Thank you.
And so, men would do better addressing each other not as Mr. So And So, but as M/c. So and So. [… But then, how about ladies? Would M/cs. (short for Machiness) do? Any better ideas, here?]
And come to think of it, maybe, when a weight drops vertically down (with a rope going over a pulley) to pull another weight up an inclined plane, maybe this first weight is being altruistic… Is it? … Well, but at least this simple machine shows the character trait of being decisive in its actions. Unlike those weighing scales—very indecisive.
But then, where precisely do we draw the line? Is a rolling stone one of us, too? … Or is it a question that We the Machines are intrinsically incapable of settling? … Or is this whole idea of drawing any lines itself invalid, because QM has already shown that all lines must be blurred [at least in this world, though in some other multiverse there may occur a distinction between us and rolling stones]? … Very thought-provoking…
–Ajit
[E&OE]
Hi Everybody!
AI, DARPA, and theoretical physics in one post! This post is some serious clickbait for the Asperger’s demographic.
The average person uses 100 watts of power, and about 1/5 of that energy goes to power the brain. As a rough guesstimate, let’s put the power at around 20 W for a brain, even though some of us may be brighter or dimmer than the average. I feel that we need to get the hardware right first. A conductive aerogel is the closest thing to a chemical brain in terms of structure, complexity, size, and connections. If AI is only a matter of stem cell transplants that create neurogenesis to cure Parkinson’s and Alzheimer’s disease, then sign me up.
Sean,
“Modern physics has achieved a complete list of the particles and forces that make up all the matter we directly see around us, both living and non-living, with no room left for extra-physical life forces. Neuroscience, a much more challenging field and correspondingly not nearly as far along as physics, has nevertheless made enormous strides in connecting human thoughts and behaviors with specific actions in our brains.”
I disagree with this completely. I have talked to neuroscientists and psychologists. As of today, they do not understand consciousness. They have no idea where consciousness comes from or how far down it goes in the tree of life. Penrose and others have made great efforts, but as yet there is no accepted model. I am sure you know that, according to Penrose, machines will never have *understanding* like ours!
“Of course “machine” is a fairly loose term “we are machines that think” could mean anything.”
Aren’t we nitpicking with this kind of criticism? We have a general idea of what a machine is and what thinking is. A simple definition of a machine is a system that processes energy to do work or compute. Thinking is the processing of information (in a self-aware fashion, if you like).
With that, humans are thinking machines.
That doesn’t mean we are only thinking machines. We are also feeling machines, mobile machines, etc.
@bostontola
I don’t think that thinking has anything to do with the processing of information. Thinking is a purely mental activity in which certain ideas pass through one’s mind. Information only exists by virtue of consciousness.
Re: Aleksandar Mikovic (January 19, 2015 at 2:28 am):
“Reply to David:
What I meant is that humans can see that the relevant statements are correct without a mathematical proof, by using intuition and experiments.”
Thanks for addressing my comments. In response (after which I’ll give up trying to reason people out of positions that reason and observation did not apparently produce, at least based on my own experience):
“Intuition” is the operation of about 73 billion neurons and a quadrillion synapses doing something in the background which you don’t know about, since there are no nerve cells which monitor brain cells. One of those things is probably testing logic statements against all known cases to see if there is a counter-example – something any computer can do, in principle. For example, Einstein intuitively concluding that Quantum Mechanics must not be right, because he knew of no cases of “spooky action at a distance” of the sort implied by the principles of QM.
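To make the counterexample-testing point concrete, here is a toy sketch in Python. The conjecture being tested (Euler’s polynomial n² + n + 41 always yields a prime) is an arbitrary stand-in; nothing here is meant as a model of actual neural machinery.

```python
# Toy sketch of "intuition as counterexample search": test a conjecture
# against known cases, something any computer can do in principle.

def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def first_counterexample(conjecture, cases):
    """Return the first case that falsifies the conjecture, or None."""
    for case in cases:
        if not conjecture(case):
            return case
    return None

# Euler's polynomial produces primes for n = 0..39, then fails:
print(first_counterexample(lambda n: is_prime(n * n + n + 41), range(100)))
# -> 40, since 40**2 + 40 + 41 == 41**2
```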
“Experiment” is what Edison did, and what FDR did during the Great Depression, and what nature does every time it mutates a gene – a process that works just as well when random as when planned (see for example, the cat that invented Lexan by knocking over two laboratory beakers).
And to a previous commenter who misunderstood what “when it works” meant in the context in which I used it: when it appears to have produced a successful result. In the case of nature, survival and reproduction is a successful result; in the case of Edison, when a battery held a charge over a long period; in the case of Gödel, when he found a peer-reviewable counterexample to the proposition “every true statement about a system can be proven true without going outside that system to a more general system”; in general, success is relative to the selection criteria.
Sean says: “Modern physics has achieved a complete list of the particles and forces that make up all the matter we directly see around us, both living and non-living, with no room left for extra-physical life forces.” I guess I fail to see how previously having an incomplete list of particles left open the possibility.
kashyap vasavada,
I think you may have completely misunderstood Sean. He has not said that physics, or any discipline, has explained consciousness. In fact he carefully clarified that he was not claiming that.
Aleksandar wrote: “I am saying that a consequence of the 1st Goedel theorem is that human thinking cannot be reduced to an algorithm.”
No, that is not a consequence of either of Gödel’s incompleteness theorems. You are assuming that human thinking can generate proofs of statements that are undecidable from a given set of axioms, without introducing new axioms. That has never happened.
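For reference, the relevant theorem, in Rosser’s form (which needs only plain consistency rather than ω-consistency; Q here is Robinson arithmetic), says roughly this:

```latex
% Gödel–Rosser incompleteness theorem, stated informally:
\[
  T \text{ consistent and effectively axiomatized},\; T \supseteq \mathsf{Q}
  \;\Longrightarrow\;
  \exists\, R_T :\; T \nvdash R_T \ \text{ and } \ T \nvdash \lnot R_T .
\]
```

Note that it says nothing about anyone, human or machine, “seeing” the truth of R_T; it only rules out a proof or refutation inside T itself.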
Ian wrote: “b) Consciousness is non-physical. The physical is characterised by the wholly quantitative. Consciousness by the wholly qualitative. Hence consciousness is of an ontologically differing type than the physical even if produced by the latter.”
“Consciousness is of an ontologically differing type than the physical” is a strong conclusion to draw if you don’t first show that qualia are non-physical in origin. And at present we have no basis for assuming that they are. What you have given us is just another god-of-the-gaps argument.
Aleksandar wrote: “For a human, the “algorithm” works, because a human can see that something is correct without a proof.”
A challenge for you: Can you see, without a proof, that the twin prime conjecture is true? Or, if it isn’t the case, can you see, without a proof, that it is false?
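For anyone who wants to gather evidence the way a machine (or a human with a pencil) would, here is a quick illustrative sketch; enumeration keeps turning up twin primes in every range we try, but no finite search can settle the conjecture either way.

```python
# Enumerate twin primes below a bound: evidence, not proof.

def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

twins = [(p, p + 2) for p in range(2, 200) if is_prime(p) and is_prime(p + 2)]
print(twins)  # (3, 5), (5, 7), (11, 13), ... but the conjecture stays open
```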
Richard, I have already stated that the physical is wholly constituted by the quantitative. Consciousness is not. So by definition it is not physical.
Moreover, because the mechanistic philosophy stipulated physical reality to be wholly quantitative, and physics is modelled on this mechanistic philosophy, physics (at least as currently conceived) cannot in principle account for consciousness; i.e. the existence of consciousness cannot be derived from more fundamental physical states. Consciousness has to be considered a fundamental existent.
I’m writing about all this in a blog entry which I’ll publish in a week or two.
Ian, you wrote “I have already stated that the physical is wholly constituted by the quantitative. Consciousness is not.” But that is not consistent with the qualifier you included: “even if produced by the latter.”
Okay, suppose you strike those weasel words from your comment. Then you have that qualia must be non-physical in origin. But we have no evidence for that assumption, and that is why it relies on a god-of-the-gaps argument, exploiting the fact that we don’t understand how qualia can emerge from the physics.
You will have to do much better than that to make a convincing argument.
Also, simply citing yourself as stating that the physical world is wholly quantitative while consciousness is not, to support your argument that consciousness is not physical, is circular reasoning.
Aleksandar Mikovic: “What I meant is that humans can see that the relevant statements are correct without a mathematical proof, by using intuition and experiments.”
Humans can often see that a statement is correct, even when it isn’t.
Here I think you are attempting to preserve a sense of mystery by using the word “intuition,” and that the mystery would disappear if you tried to explain what intuition consists of: mining one’s experience, making analogies to other situations of varying relevance, applying known principles in new contexts, etc. None of this is off-limits to a machine, and none of it is ruled out by Gödel’s theorem.
Consciousness and individuality are both concepts which are hard to see defined, let alone manifested, in blocks of processing power/memory and real-world interfaces. A will and a decision-making centre are required, concurrent with a developing and very flexible processor hierarchy. Will the system split into multiple ‘consciousnesses’, for example? If this is not to happen, you will have to define consciousness rules and constraints on your hardware/software very carefully for your hardware to host, or emulate, it. Difficulties defining consciousness will of course prohibit the design of machines which might exhibit some semblance of it. Consciousness entails learning, development and adaptation concurrent with singular will and singular identity. Try drawing up some logic rules to contain and maintain those attributes. There are others; these would be a sensible minimum.
On a different tack, people are apparently very comfortable using the emotional and volitional language of conscious life and applying it retrospectively to help explain aspects of the evolutionary process. The concepts of consciousness had no meaning, if you are a reductionist, before there were conscious entities of the required complexity to host and observe them. Unlike, say, particle energy, a ‘will to survive’ has no defined attribute value when there is no conscious entity around. A primitive organism must display, for example, a definite fear of death, or will to survive, in order to drive the selection process. It must possess a will, and a core capacity to deduce and make decisions, to some degree. These are aspects of consciousness required before the organism can develop enough to host consciousness. Is there not a contradiction here? Is not the whole evolutionary paradigm presumed to be working uphill against a logical fallacy? If conscious attributes like ‘fear’, ‘collaboration’, ‘preference’, etc. had reality and substance before organisms developed to display them, then who (Who) possessed and defined them ‘in the beginning’?
Posted my own response to the Edge question and some of the threads in the comments here:
http://broadspeculations.com/2015/01/19/thinking-about-thinking/
Simon, I’m not sure what your point is. We might as well ask whether DNA replication was a real thing at the time of the Big Bang, without any living cells to host it (or for that matter, without any DNA molecules). Certainly, the potential was there.
As for evolution, selection is not a willed process, so that analogy fails.
darrelle,
I do not think I have misunderstood Sean! By calling us “machines” he ignores a fundamental difference between machines, which work with algorithms, and consciousness, which presumably has a non-algorithmic basis, as emphasized by Penrose. Adding the word “thinking” does not resolve this big difference!