My answer to this year’s Edge Question, “What Do You Think About Machines That Think?”
Julien de La Mettrie would be classified as a quintessential New Atheist, except for the fact that there’s not much New about him by now. Writing in eighteenth-century France, La Mettrie was brash in his pronouncements, openly disparaging of his opponents, and boisterously assured in his anti-spiritualist convictions. His most influential work, L’homme machine (Man a Machine), derided the idea of a Cartesian non-material soul. A physician by trade, he argued that the workings and diseases of the mind were best understood as features of the body and brain.
As we all know, even today La Mettrie’s ideas aren’t universally accepted, but he was largely on the right track. Modern physics has achieved a complete list of the particles and forces that make up all the matter we directly see around us, both living and non-living, with no room left for extra-physical life forces. Neuroscience, a much more challenging field and correspondingly not nearly as far along as physics, has nevertheless made enormous strides in connecting human thoughts and behaviors with specific actions in our brains. When asked for my thoughts about machines that think, I can’t help but reply: Hey, those are my friends you’re talking about. We are all machines that think, and the distinction between different types of machines is eroding.
We pay a lot of attention these days, with good reason, to “artificial” machines and intelligences — ones constructed by human ingenuity. But the “natural” ones that have evolved through natural selection, like you and me, are still around. And one of the most exciting frontiers in technology and cognition is the increasingly permeable boundary between the two categories.
Artificial intelligence, unsurprisingly in retrospect, is a much more challenging field than many of its pioneers originally supposed. Human programmers naturally think in terms of a conceptual separation between hardware and software, and imagine that conjuring intelligent behavior is a matter of writing the right code. But evolution makes no such distinction. The neurons in our brains, as well as the bodies through which they interact with the world, function as both hardware and software. Roboticists have found that human-seeming behavior is much easier to model in machines when cognition is embodied. Give that computer some arms, legs, and a face, and it starts acting much more like a person.
From the other side, neuroscientists and engineers are getting much better at augmenting human cognition, breaking down the barrier between mind and (artificial) machine. We have primitive brain/computer interfaces, offering the hope that paralyzed patients will be able to speak through computers and operate prosthetic limbs directly.
What’s harder to predict is how connecting human brains with machines and computers will ultimately change the way we actually think. DARPA-sponsored researchers have discovered that the human brain is better than any current computer at quickly analyzing certain kinds of visual data, and developed techniques for extracting the relevant subconscious signals directly from the brain, unmediated by pesky human awareness. Ultimately we’ll want to reverse the process, feeding data (and thoughts) directly to the brain. People, properly augmented, will be able to sift through enormous amounts of information, perform mathematical calculations at supercomputer speeds, and visualize virtual directions well beyond our ordinary three dimensions of space.
Where will the breakdown of the human/machine barrier lead us? Julien de La Mettrie, we are told, died at the young age of 41, after attempting to show off his rigorous constitution by eating an enormous quantity of pheasant pâté with truffles. Even leading intellects of the Enlightenment sometimes behaved irrationally. The way we think and act in the world is changing in profound ways, with the help of computers and the way we connect with them. It will be up to us to use our new capabilities wisely.
The difference to me is between evolution and engineering design. Evolution results from a series of micro-adaptations to circumstance layered upon each other. This results in stable, redundant structures, e.g. in bird or insect flight. Engineering design is mostly one-pointed, to solve one specific problem, e.g. build an aircraft. So it is with brains, which have “hidden potentials” that are more like unused layers that are not presently active.
I think the distinction between the organic and the “machine” is bound to evaporate eventually. That applies to brains and everything else.
The first great advance in the mechanisms for evolution was simply the ability to self-replicate on a molecular, or maybe very primitive cellular, level. Reproduction time was on the order of hours, I guess. The next great advance was sexual reproduction, which sped up the ability of organisms to adapt, allowing the development of large plants and animals with reproduction times on the order of days and years. The next great advance is the use by humans of genetic “programming” or “directed” evolution. This will speed up the adaptive process even more, allowing the development of even larger and longer-lived organisms. Perhaps it will only take a fraction of a lifetime for them to travel to the nearest star, moving at a fraction of the speed of light. “Machines” will be integrated into this directed evolution; carbon organisms will be designed to connect practically seamlessly to silicon, steel, etc. counterparts. Maybe the DNA will be designed to generate the machines as well, or maybe the blueprint for design will no longer be held entirely in DNA.
I’ve heard many people, even computer scientists, claim that AI as we think of it is not possible. And perhaps on first inspection it would seem that going from binary information systems to artificial intelligence is a huge (impossible) jump, but it has taken nature a billion years to design and debug our human brain. All life on this planet has essentially culminated in our brains, and to think that after as little as 100 years we would have designed and debugged an artificial intelligence is a little suspect. Even at current rates of discovery, engineering a brain amounts to slow progress, albeit much faster than nature’s. Ultimately, our artificially crafted brains, when complete, will have to be inconceivably more intelligent than our own. Perhaps at some point the process of creating this artificial intelligence will be taken over by that intelligence itself.
I outgrew this sort of argument a long time ago.
A machine by definition is a tool that uses energy for an intended purpose. Declaring we are machines and our thinking is mechanical provides no useful insight. It is simply a statement that thinking obeys physical laws. Whether it resembles in any way the actions of actual machines (even computing machines) in either manner of operation or actual result is obfuscated by the too facile declaration.
http://archive.wired.com/medtech/drugs/magazine/16-04/ff_kurzweil_sb
“The difference to me is between evolution and engineering design.” Oddly enough, I opened this comment section intending to go a bit farther than Dr. Carroll and opine that not only are we biological (nanotech) machines, but that the way we think and do engineering design is generally the same process as biological evolution. (Then the first thing I see is the quoted sentence.)
Granted there are differences, but as I see it the same general process is employed:
A) Generation of “something” (chemicals, ideas, designs), usually randomly (in humans as well as biology – see Edison for example).
B) Application of selection criteria to those somethings in an attempt to promote the more successful (in meeting the criteria) over the less successful: survival and reproduction; peer review; survival in the marketplace, etc.
C) Some form or forms of memory/communication, to transmit the more successful things forward in time. E.g., DNA, written publications, design blueprints.
I see the “purposefulness” of human thinking and design as a meta-process of biological evolution, which has given us drives of curiosity, pride in achievement, etc., all to further the basic criteria of survival and reproduction. I spent 30 years as a design engineer doing projects that were assigned to me, largely because I was paid to do so, which in turn was good for my survival.
Speaking as a design engineer, it is very easy to see this process at work in engineering design. We have all seen designs (e.g., cars, phones) evolve over our lifetimes. In fact, a lot of current design practice involves random searches of possible designs (genetic algorithms, Monte Carlo analyses). It is more speculative to attribute it to the working of our neurons, since we don’t monitor them with any nerve cells to feel what they are doing – but how else would creatures designed by biological evolution do their thinking?
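The generate/select/remember loop sketched in steps A–C above is exactly the skeleton of a genetic algorithm. Here is a minimal toy sketch of it in Python, evolving a bit string toward all ones; every name and parameter here is my own illustrative assumption, not anything from the comment itself.

```python
# Toy genetic algorithm illustrating the A/B/C loop:
#   A) randomly generate variants, B) select by a criterion, C) carry
#   survivors forward as the "memory" of what worked.
import random

random.seed(0)       # for reproducibility
TARGET_LEN = 20      # genome length (illustrative)
POP_SIZE = 30
GENERATIONS = 100

def fitness(genome):
    # Step B: the selection criterion -- here, simply the count of 1-bits.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Step A: random generation of variants by flipping bits.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

# Initial random population.
population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Step C: the fitter half survives unchanged, preserving what worked.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # Step A again: refill the population with mutated copies of survivors.
    offspring = [mutate(random.choice(survivors))
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

best = max(population, key=fitness)
```

Because the fitter half is carried forward unmutated, the best fitness never decreases, and after a hundred generations the population is dominated by near-optimal genomes. Nothing here is specific to bit strings: swap in a different `fitness` and `mutate` and the same loop searches car shapes or circuit layouts.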
“died at the young age of 41, after attempting to show off his rigorous constitution by eating an enormous quantity of pheasant pâté with truffles.”
If you’ve gotta go out, then go out in style… I guess?
I look at entropy as a sort of virtual dimension. I think it’s remarkable that many animal brains can simulate (or is it emulate?) the reversal of entropy and use that information to their advantage. Along the same lines, even though you treat a system of objects as a system of discrete parts, you can always increase the scale of your coordinate system so far that the discrete parts can only be analyzed as a single unit. The ability of points in space to be different is unique, and as valuable as coordinates being able to change. They couldn’t change if they weren’t disordered enough to be discernible from each other in a meaningful or measurable way. The fact that you must alter your formulae when this happens, in order to correctly interpret and analyze it, is reason enough for me. I leave it as an exercise for someone smarter than I to have an epiphany over a glass of Benziger pinot (properly oxidized).
Now off I go. Mistresses of the night (unsuspecting coeds), here I come. Maybe one day augmented cognition will reconcile the fact that I’m 31 and in their undergrad classes, and it won’t be so polarizing. Creepy as fuq or sexy as hell is generally what I get. “So… tell me about your relationship with your father. Barkeep, 2 redbull-vodkas.” Still, better odds than what I had as a correctly aged undergrad. My secret? The quantity of pheasant pâté and truffles that I can eat in their presence; and the redbull-vodka & paternal-complex combo. Really taking advantage of the absence of the voting system here 😀 Now delete this comment and write an awesome paper on why entropy should be a dimension; one of the things I’d like to see before I die.
Where will the breakdown of the human/machine barrier lead us?
These processes are so non-linear, and there are so many iterations, the answer is hopelessly unpredictable. We will use thinking technology to enhance our own thinking abilities. We will integrate man made machines into our brains. We will modify our genes to enhance our brains even more. As we get more intelligent, we will be able to devise and manage even more modification. At some point, the machines merge into one, modifying and improving the integrated whole. That’s if we’re not destroyed in the process.
It is heartening to see that two researchers who operate in two differing approaches to quantizing gravity think alike when it comes to “thinking machines”:
http://edge.org/response-detail/26026
Also, being a physicist myself, my own thoughts on this matter are aligned along the same lines.
We are NOT machines that think, because Goedel has demonstrated that a machine cannot think. The first theorem of Goedel in logic states that a finite system of axioms which includes arithmetic will always have undecidable statements, i.e. statements that cannot be decided to be true or false by using the axioms. Since a thinking machine has to use an algorithm based on finitely many axioms, a person will be able to pose a question to the machine which the machine will not be able to answer, while for the human the answer will be obvious. This argument was first proposed by Goedel in 1951, then by Lucas in 1961 and also by Penrose in 1989 (see his book “The Emperor’s New Mind”).
I do not really know whether there is a soul or not. I once believed that the soul is independent of the body; maybe I was wrong. But all of this is trash talk, because none of us has strong evidence.
@haolin,
“I do not really know whether there is a soul or not. I once believed that the soul is independent of the body; maybe I was wrong. But all of this is trash talk, because none of us has strong evidence.”
Quite to the contrary. The topic of many of Sean’s lectures has been that we have very strong evidence against any soul-based or afterlife-based ideas. The inclusion of souls requires the rejection of pretty much all of particle physics.
We can certainly talk about souls in an intelligible fashion, but first you need to define exactly what you mean by ‘soul’. The concept is far too variable in common parlance to admit rigorous discussion.
@Aleksandar.
“We are NOT machines that think, because Goedel has demonstrated that a machine cannot think.”
Are you saying that thinking is only possible for infallible and all-knowing entities?
But before we can get to this Promised Land, there are two trifling matters we have to get out of the way: 1) avert destruction of human civilization due to planetary dysfunction; 2) avert destruction of Western civilization by the Islamist pet Trojan Horse let in by political correctness. Good luck!
But a spirit can have a material substrate, or maybe spirit is a kind of information. That way, both human brains and artificial intelligences might have spirits, without needing to invoke the supernatural. We could be entering the “age of spiritual machines”, as prophesied in the singularitarian religion, when the spirit of the human brain can be transferred (i.e. reincarnated, as in “again in another body”) into a thinking computer via mind uploading. There are advantages, as the human body is fragile and preprogrammed to age and die in a short time. Computer programs and data, on the other hand, are more robust as they can be easily backed up, and regularly copied to upgraded computers whenever the computer begins to show signs of wear and tear, or accidental destruction.
A very interesting study in popular-ish language is Gilbert Ryle’s book ‘The Concept of Mind’, in which he challenges Descartes’s mind-body duality. Ryle coined the expression ‘the ghost in the machine’. In fact, reductio ad absurdum has it pretty much covered: if ‘the mind’ is not molecules in motion, what on earth is it?
Dear David,
I am saying that a consequence of the 1st Goedel theorem is that human thinking cannot be reduced to an algorithm. Related to your questions about the soul, one needs an appropriate metaphysics in order to define it. A materialistic metaphysics cannot accommodate non-material entities like souls or mathematical laws of nature, hence one needs to use a platonic metaphysics, where all these entities, as well as matter, appear as ideas.
I explained above (to my own satisfaction) the algorithm that thinking consists of: try something, if it works keep it, if it doesn’t, try something else. That’s how Edison invented light bulbs and batteries, and that’s how Goedel found his theorems. (It helps to have over 70 billion neurons and a quadrillion synapses churning away, more processing power by orders of magnitude than any supercomputer we are yet capable of building.) “Dumb” nature used this same process to invent us, in our supreme magnificence (according to us), and therefore is ultimately responsible for all of our theorems – without having used any immaterial spirits or Platonic forms.
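The “try something; if it works, keep it” loop described above can be written down as a very short program. This is a random hill climb on a toy one-dimensional function; the function, step size, and iteration count are my own illustrative assumptions, chosen only to make the loop concrete.

```python
# Random hill climbing: the "try / keep if better / discard otherwise" loop.
import random

random.seed(0)  # for reproducibility

def score(x):
    # The "does it work?" test: higher is better, with a single peak at x = 3.
    return -(x - 3.0) ** 2

x = 0.0
for _ in range(10_000):
    candidate = x + random.gauss(0, 0.1)  # try something (a random nudge)
    if score(candidate) > score(x):       # if it works...
        x = candidate                     # ...keep it; otherwise discard it
```

After many iterations, `x` ends up very close to the peak at 3.0, even though no step of the loop “understands” the function; it only ever compares two scores. That blindness to everything except “did it work?” is exactly the point of the comment.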
The best article I’ve read on the subject! That we biological machines have a functional coevolution of parts, and that this enacts intelligent behavior, is an observation that some futurists have long pushed aside in a glib, and perhaps futile, manner.
I don’t buy the “Gödel’s theorem negates AI” argument on many levels. Mostly, I don’t see why thinking and absolute truth must be linked. We assign truth, rightly or wrongly; correctness is another matter. If we happen to be right often enough, we survive and procreate.
“We are all machines that think”
No we aren’t. Because some of us don’t!
To be truly human a machine would not only have to think like a human it would also have to emote like one as well. The soul, the self, is merely an emotional habit. It is a ripple of energy down well-worn pathways in fatty tissue. To be human a machine would have to become convinced that something that is not real is in fact real. Good luck with that!
@David Kerlick:
“Evolution results from a series of micro adaptations to circumstance layered upon each other. This results in stable, redundant structures, e.g in bird or insect flight. Engineering design is mostly one-pointed, to solve one specific problem, e.g. build an aircraft. So it is with brains that have “hidden potentials” which are more like unused layers that are not presently active.”
The last sentence is a prediction that is rejected by the rest.
Evolutionary functions that lose their earlier usefulness are not protected and deteriorate in a reliable manner. Pseudogenes are part of what is used for molecular clocks in genome dating. Few pseudogenes, or functional pieces of them (say, protein folds), are reused; they are more a “hidden potential” (so-called junk DNA) for the genome.
@James Cross: “Declaring we are machines and our thinking is mechanical provides no useful insight.”
Oh, I don’t know. People still believe in superstitious dualism of various kinds (a majority of the world is religious, say), and if not they may still think humans are ‘special’.
Biological machines are apt and useful descriptions.
@Aleksandar Mikovich: “I am saying that a consequence of the 1st Goedel theorem is that human thinking cannot be reduced to an algorithm.”
Besides the obvious point that an evolutionary algorithm (with a lot of help from random contingency) produced human thinking, so it can be so reduced in some fuzzy form (and yes, Bayesian learning is used by both the genome and us, as JimV describes), I think you are confusing axiomatic mathematics with algorithm use.
The latter, which is the universe of computer science, is much more powerful AFAIK. Gödel’s axioms may well contribute to that for all I know. (To be honest, I don’t think they have any practical usefulness. They are math, not physics.)
Some parts of physics can’t be reduced (at least yet) to axioms, such as quantization, but physicists happily conquer some of those areas with physical algorithms.
JimV said:
“I explained above (to my own satisfaction) the algorithm that thinking consists of: try something, if it works keep it, if it doesn’t, try something else. That’s how Edison invented light bulbs and batteries, and that’s how Goedel found his theorems.”
The problem with this “algorithm” is the part “if it works”, which means establishing that some statement is true or false, which in the case of a machine can fail, due to the existence of undecidable Goedel statements. For a human, the “algorithm” works, because a human can see that something is correct without a proof.
@MB: Emotion is not something that comes into the Turing test of mindhood, just the ability to emulate it. And there are some very simple ways to produce the same result, as has been demonstrated already.
This is not a problem, even in principle, and as shown not at all in practice.