I propose to consider the question, “Can machines think?”
Thus began Alan Turing's 1950 essay Computing Machinery and Intelligence, without doubt the most important single piece written on the subject of what became known as "artificial intelligence".
Straight away Turing insists that we won’t be able to answer this question by merely reflecting upon it. If we went about it this way we’d get all caught up in arguments over what exactly “thinking” means. Instead he proposes a test.
Turing imagines an imitation game with 3 players: (A) a man, (B) a woman, and (C) an interrogator, whose role is to ask questions of the other 2 players and determine which one is the man. Turing then transforms this game by putting a machine in the place of the man (A) and a man in the place of the woman (B). The interrogator is then asked to figure out which is the real man:
We now ask the question, “What will happen when a machine takes the part of A in this game?” Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?”
The machine in question Turing now narrowly defines as a digital computer, which consists of 3 parts: (i) the Store, that is, the memory; (ii) the Executive Unit, the part of the machine that performs the operations; and (iii) the Control, the part of the machine that makes sure the Executive Unit carries out the instructions that make up part of the Store. Digital computers are also "discrete state" machines, that is, they are characterized by clear on-off states.
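For the programmers in the audience, here is a toy sketch of those 3 parts in Python. It is purely illustrative (the tiny instruction set is invented, not anything Turing describes), but it shows the division of labor: instructions and data live in the Store, the Executive Unit performs one operation at a time, and the Control fetches the next instruction and keeps the whole thing moving.

```python
# A toy model of Turing's three-part digital computer (illustrative only).
store = {
    "program": [("LOAD", 5), ("ADD", 7), ("ADD", 10), ("HALT", None)],  # instructions held in the Store
    "accumulator": 0,                                                   # data held in the Store
}

def executive_unit(op, arg, acc):
    """Perform a single operation and return the new accumulator value."""
    if op == "LOAD":
        return arg
    if op == "ADD":
        return acc + arg
    return acc

# The Control: step through the instructions in the Store, one discrete state at a time.
for op, arg in store["program"]:
    if op == "HALT":
        break
    store["accumulator"] = executive_unit(op, arg, store["accumulator"])

print(store["accumulator"])   # 22
```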
This is all a little technical, so maybe a non-computer example will help. We can perhaps best understand digital technology by comparing it to its old rival, analog. Think about a selection of music stored on your iPod versus, say, your collection of vintage '70s 8-tracks. Your iPod has music stored in a discrete state, represented by 1s and 0s. It has an Executive Unit that translates these 1s and 0s into sound, and a Control that keeps the whole thing running smoothly and allows you, for example, to jump between tracks. Your 8-tracks, on the other hand, store music as "impressions" on a magnetic tape, not as discrete-state representations. The "head", in contact with the tape, reverses this process and transforms the impressions back into sound, and you move between tracks by making the head physically move.
Perhaps Turing's choice of digital over analog computers can be said to amount to a bet about how the future of computer technology would play out. By compressing information into 1s and 0s, as representation, you could achieve seemingly limitless Store/Control capacity. Imagine if all the songs on your iPod needed to be stored on 8-tracks! If you wanted to build an intelligent machine using analog, you might as well just duplicate the very biological/analog intelligence you were trying to mimic. Digital technology represented, for Turing, a viable alternative path to intelligence other than the biological one we had always known.
Back to his article. He rephrases the initial question:
Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?
If we were able to answer this question in the affirmative, Turing insisted, then such a machine could be said to possess human level intelligence.
Turing then runs through and dismisses what he considers the most likely objections to the idea that a thinking machine could be built:
The Theological Objection– computers wouldn't have a soul. Turing's reply: Wouldn't God grant a soul to any other being that possessed human-level intelligence? What is the difference between bringing such an intelligent vessel for a soul into the world by procreation and by construction?
Heads in the Sand Objection– The idea that computers could be as smart as humans is too horrible to be true. Turing's reply: Ideas aren't true or false based on our wishing them so.
The Mathematical Objection- There will always be some questions, such as the self-referential paradox "this sentence is false", that a machine cannot answer correctly. Turing's reply: Okay, but at the end of the day humans probably can't answer them correctly either.
The Argument from Consciousness- Turing quotes Professor Geoffrey Jefferson (from his 1949 Lister Oration): "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain- that is, not only write it but know that it had written it." Turing's reply: If this is to be the case, why don't we apply the same prejudice to people? How do I really know that another human being thinks except through his actions and words?
The Argument from Disability- Whatever a machine does, it will never be able to do X. Turing's reply: These arguments essentially make unprovable claims based on induction- I've never seen a machine do X, therefore no machine will ever do X.
Many of these disability arguments, Turing notes, are also alternative arguments from consciousness:
The claim that a machine cannot be the subject of its own thought can of course only be answered if it can be shown that the machine has some thought with some subject matter. Nevertheless, "the subject matter of a machine's operations" does seem to mean something, at least to the people who deal with it. If, for instance, the machine was trying to find a solution of the equation x² – 40x – 11 = 0 one would be tempted to describe this equation as part of the machine's subject matter at that moment. In this sort of sense a machine undoubtedly can be its own subject matter.
Lady Lovelace's Objection- Lady Lovelace was a friend of Charles Babbage, whose plans for his Analytical Engine in the early 1800s were probably the first fully conceived design for a modern computer, and she was perhaps the author of the first computer program. She had this to say:
“The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform”
Turing’s response: this is yet another argument from consciousness. The computers he works with surprise him all the time with results he did not expect.
Argument from the Continuity of the Nervous System: The nervous system is fundamentally different from a discrete state machine therefore the output of the two will always be fundamentally different. Turing’s response: The human brain is analogous to a “differential analyzer” (our old analog computer discussed above), and solutions of the two types of computers are indistinguishable. Hence a digital computer is able to do at least some things the analog computer of the human brain does.
Argument from the Informality of Behavior: Human beings, unlike machines, are free to break rules and are thus unpredictable in a way machines are not. Turing’s response: We cannot really make the claim that we are not determined just because we are able to violate human conventions. The output of computers can be as unpredictable as human behavior and this emerges despite the fact that they are clearly designed to follow laws i.e. are programmed.
Argument from ESP: Given a situation in which the man playing against the machine possesses some not-yet-understood telepathic power, he could always influence the interrogator against the machine and in his own favor. Turing's response: This would mean the game was rigged until we found out how to build a computer that could somehow balance out clairvoyance. For now, put the interrogator in a "telepathy-proof room".
So that, in a nutshell, is the argument behind the Turing test. By far the best-known challenge to this test was made by the philosopher John Searle (any relation to the author has been lost in the mists of time). Searle has so influenced the debate around the Turing test that it might be said that much of the philosophy of mind that has dealt with the question of artificial intelligence has been a series of arguments about why Searle is wrong.
Like Turing, Searle, in his 1980 essay Minds, Brains, and Programs, introduces us to a thought experiment:
Suppose that I’m locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I’m not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles.
Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that ‘formal’ means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch.
With some additions, this is the essence of Searle’s thought experiment, and what he wants us to ask: does the person in this room moving around a bunch of symbols according to a set of predefined rules actually understand Chinese? And our common sense answer is- “of course not!”
Searle's argument is actually even more clever than it seems, because it could have been taken right from one of Turing's own computer projects. Turing had written a computer program that could have allowed a computer to play chess. I say could have allowed because no computer at the time was sophisticated enough to run his program. What Turing did instead was to use the program as a set of rules that he himself followed while playing chess against a human being. He found that by following the rules he was unable to beat his friend. He was, however, able to beat his friend's wife! (No sexism intended.)
Now, had Turing given these rules to someone who knew nothing about chess at all, they would have been able to play a reasonable game. That is, they would have played a reasonable game without any knowledge or understanding of what it was they were actually doing.
Searle's goal is to cast doubt on what he calls "strong AI", the idea that the formal manipulation of symbols (syntax) can give rise to the true understanding of meaning (semantics). He identifies part of our problem in our tendency to anthropomorphize our machines:
The reason we make these attributions is quite interesting, and it has to do with the fact that in artifacts we extend our own intentionality; our tools are extensions of our purposes, and so we find it natural to make metaphorical attributions of intentionality to them; but I take it no philosophical ice is cut by such examples. The sense in which an automatic door “understands instructions” from its photoelectric cell is not at all the sense in which I understand English.
Even with this cleverness, Searle's argument has been shown to have all sorts of inconsistencies. Among the best refutations I've read is one of the early ones, Margaret A. Boden's 1987 essay Escaping the Chinese Room. In gross simplification, Boden's argument is this: look, normal human consciousness is made up of a patchwork of "stupid" subsystems that don't understand or possess, in anything like Searle's sense, what he claims is the foundation stone of true thinking: intentionality, "subjective states that relate me to the rest of the world". In fact, most of what the mind does is made up of these "stupid" processes. Boden wants to remind us that we really have no idea how these seemingly dumb processes somehow add up to what we experience as human-level intelligence.
Still, what Searle has done is make us aware of the huge difference between formal symbol manipulation and what we would call thinking. He made us aware of algorithms (a process or set of rules to be followed in calculations or other problem-solving operations), which would become a simultaneous curse and backdrop of our own day: a day when our world has become mediated by algorithms in its economics, its warfare, in our choice of movies and books and music, in our memory and cognition (Google), in love (dating sites), and now, it seems, in its art (see the excellent presentation by Kevin Slavin, How Algorithms Shape Our World). Algorithms that Searle ultimately understood to be lifeless.
In the words of the philosopher Andrew Gibson on Iamus’ Hello World!:
I don’t really care that this piece of art is by a machine, but the process behind it is what is problematic. Algorithmization is arguably part of this ultra-modern abstractionism, abstracting even the artist.
The question I think should be asked here is how exactly Iamus, the algorithm that composed Hello World!, worked. The composition was created using a very particular form of algorithm known as a genetic algorithm; in other words, Iamus is a genetic algorithm. In very oversimplified terms, a genetic algorithm works like evolution. There is (a) a "population" of randomly created individuals (in Iamus' case, sounds from a collection of instruments). Those individuals are then selected against (b) an environment for the condition of best fit (I do not know, in Iamus' case, whether this best fit was the judgment of classically trained humans, some selection of previous human-created compositions, or something else); the individuals that survive (are chosen as best meeting the environment) are then combined to form new individuals (compositions, in Iamus' case). Along the way, (c) an element of random variation is introduced to individuals to see if it helps them better meet the fit. The survivor that best meets the fit is your end result.
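For the curious, here is a minimal genetic-algorithm sketch in Python meant only to make steps (a) through (c) concrete. The "individuals" are short runs of MIDI-style pitch numbers, and the fitness function (closeness to a simple C-major scale) is an invented stand-in, not Iamus' actual selection criterion, which, as I said above, I do not know.

```python
# A toy genetic algorithm (illustrative only; the musical "fitness" is made up).
import random

TARGET = [60, 62, 64, 65, 67, 69, 71, 72]   # an assumed "environment": a C-major scale

def random_individual():
    return [random.randint(48, 84) for _ in TARGET]        # (a) a randomly created individual

def fitness(individual):
    return -sum(abs(a - b) for a, b in zip(individual, TARGET))   # (b) how well it fits the environment

def crossover(parent1, parent2):
    cut = random.randrange(1, len(parent1))
    return parent1[:cut] + parent2[cut:]                    # combine two survivors into a new individual

def mutate(individual, rate=0.1):
    return [random.randint(48, 84) if random.random() < rate else note
            for note in individual]                         # (c) random variation along the way

population = [random_individual() for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                             # selection: the best-fitting individuals survive
    population = survivors + [mutate(crossover(*random.sample(survivors, 2)))
                              for _ in range(40)]

print(max(population, key=fitness))   # the survivor that best meets the fit
```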
Searle would obviously claim that Iamus was just another species of symbol manipulation and therefore did not really represent something new under the sun, and in a broad sense I fully agree. Nevertheless, I am not so sure this is the end of the story, because following Searle's understanding of artificial intelligence seems to close as many doors as it opens, in essence locking Turing's machine, forever, into his Chinese room. Searle writes:
I will argue that in the literal sense the programmed computer understands what the car and the adding machine understand, namely, exactly nothing. The computer understanding is not just (like my understanding of German) partial or incomplete; it is zero.
For me, the line he draws between intelligent behavior or properties emerging from machines and those emerging from biological life is far too sharp and is based on rather philosophically slippery concepts such as understanding, intentionality, and causal dependence. As Searle puts it:
Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena.
Taking Turing's test and Searle's Chinese Room together leaves us, I think, lost in a sort of intellectual trap. On the one side we have Turing arguing that human thinking and digital computation are essentially equivalent. All experience points to the fact that this is false. On the other side we have Searle arguing that digital computation and the types of thinking done by biological creatures are essentially nothing alike. An assertion that is not obviously true. The problem here is that we lose what are really the two most fundamental questions: how are computation and thought alike, and how are they different?
Instead what we have is something like Iamus, "introduced" to the world along with its creation by the Turing faction as if it were a type of person, when in fact it has almost none of the qualities we consider constitutive of personhood. It is a type of showmanship that put me in mind of the wanderings of the Mechanical Turk, the 18th-century mechanical illusion that for a time fooled the courts of Europe into thinking a machine could defeat human beings at chess. (There was, of course, a short person inside.)
To this, the Searle faction responds with horror and utter disbelief, aiming to prove that such a "soulless" thing as a computer could never, in the words of Turing's Professor Jefferson, "write a sonnet or compose a concerto" that was anything other than "the chance fall of symbols". The problem with this is that our modern-day Mechanical Turks are no longer mere illusions- there is no longer a person hiding inside- and our machines really do defeat grandmasters at chess, and win trivia games, and write news articles, and now compose classical music. Who knows where this will stop, if indeed it will stop, and how many of what have long been considered the most defining human qualities will be successfully replicated, and even surpassed, by such machines.
We need to break through this dichotomy and start asking ourselves some very serious questions. For only by doing so are we likely to arrive at an understanding both of where we are as a technology-dependent species and of where what we are doing is taking us. Only then will we have the means to make intentional choices about where we want to go.
I will leave off here with something shared with me by Andrew Gibson. It is a piece by his friend Amanda Feery, composed for Turing's centenary, entitled "Turing's Epitaph".
And I will leave you with questions:
How is this similar to Iamus’ Hello World!? And how is it different?
I believe the debate over human versus machine consciousness (whether we have consciousness, the nature of consciousness, whether it is appropriate to think of machines as having consciousness, etc.) will evolve with greater advancement in the different subfields of biology and neuroscience, and even artificial intelligence and artificial life… I know this sounds like a non-statement, but what might appear contradictory can sometimes become otherwise. For instance, in biology, Mendelian inheritance was once considered irreconcilable with aspects of Darwinism, but the modern synthesis basically resolved those issues.
On aspects of artificial thinking versus human thinking, I tend to hold a stronger AI stance. One of the criticisms against the Chinese Room thought experiment is the notion of regression. It is plausible to put a Chinese speaker/writer in a room who would be able to output legible Chinese characters, hence fooling the people outside into believing it is a 'machine', but that does not answer the question of 'who' the person writing those replies is.
Neuroscience has constantly surprised us with what it has discovered. For instance, hallucinations amongst individuals who are going blind, or the perceptual and cognitive capabilities of split-brain patients.
In artificial intelligence and artificial life, military inventions are trickling down to civilian use e.g. drones. We have nurse robots. Self-driven cars no longer belong to the realm of sci-fi. Trojan computer viruses are now able to self-destruct after invading computers, leaving no trace of their existence.
All these discoveries and inventions might appear unrelated, but I think it's safe to say they trigger new ways of thinking (even thinking about thinking), and we might be able to resolve the 'consciousness' problem in the not-too-distant future. But that's the optimist in me speaking.
Agreed Charles. It’s beginning to get very interesting.
Rick,
nicely written, again!
I already put forward the main arguments as a response to the preceding part of "Chinese Room".
I think that Searle is completely right. To my impression, he did not fully explore the issues, though. He is right in his claim that a machine that is completely based on symbols cannot understand. Besides the fact that the original version of strong AI (to which Searle responded) is dead anyway (the pseudo-problem of symbol grounding), I would respond based on Wittgenstein's Form of Life issue (§201 in Philosophical Investigations) and related material: meaning is not in the head, the mental is not exclusively private, there is no private language.
The same holds, of course, for machines. The consequence is that we can't program language understanding and everything; we can only implement the *potential* for it! Yet that's a completely different story. The potential for understanding is not based on symbols anymore; it requires probabilization and autonomous associativity. And there is indeed an abstract structure that represents just that. It is called the self-organizing map (SOM). The SOM is NOT a neural network; it is quite different from that. In a SOM there are no symbols any more, even though we programmed the mechanisms using symbols. The software in this case does not rely on explicit rules and predetermined symbols, so it is nothing other than a quasi-materiality, just like the body.
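To give a rough idea of what that looks like in practice, here is a toy sketch of a SOM update step in Python. The grid size, learning rate, and random inputs are arbitrary choices for illustration only; the point is just that each node holds nothing but a weight vector that drifts toward the inputs it happens to "win", with no predetermined symbols for the data anywhere.

```python
# A toy self-organizing map update step (illustrative only; parameters are arbitrary).
import random

GRID = 5     # a 5x5 map of nodes
DIM = 3      # dimensionality of the input data

# Each node starts with a random weight vector.
nodes = [[[random.random() for _ in range(DIM)] for _ in range(GRID)] for _ in range(GRID)]

def best_matching_unit(x):
    """Find the node whose weight vector is closest to the input x."""
    return min(((i, j) for i in range(GRID) for j in range(GRID)),
               key=lambda ij: sum((w - v) ** 2 for w, v in zip(nodes[ij[0]][ij[1]], x)))

def train(x, lr=0.1, radius=1):
    """Move the winning node and its neighbours a little toward the input."""
    bi, bj = best_matching_unit(x)
    for i in range(GRID):
        for j in range(GRID):
            if abs(i - bi) <= radius and abs(j - bj) <= radius:
                nodes[i][j] = [w + lr * (v - w) for w, v in zip(nodes[i][j], x)]

# Feed the map a stream of random inputs; associations form without explicit rules about the data.
for _ in range(1000):
    train([random.random() for _ in range(DIM)])
```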
Particularly I disagree with two statements of yours. You cite “Whatever else intentionality is, it is a biological phenomenon”.
Exactly here we meet reductionism. It is a petitio principii. Yes, it is biological. But that does not explain anything, so you can't claim that it explains enough. Intentionality is the philosophical term for the "mental". The passage continues: "and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena."
Emergence is precisely the concept that denies the possibility to express the emergent pattern in terms of the grounding logical layer (see reaction diffusion system). The patterns in the brain that ultimately are somehow related to the ability to write this text have nothing to do with the biological basis. To claim otherwise would even result in a self-contradiction!!
The second problem I see is buried here.
“For me, the line he draws between intelligent behavior or properties emerging from machines and those emerging from biological life is far too sharp and based on rather philosophically slippery concepts such as understanding, intentionality, causal dependence.”
I can understand that understanding and causal dependence appear slippery. Yet they can be clarified (http://theputnamprogram.wordpress.com/2012/06/24/a-deleuzean-move/#chp7_1_u). Second, if you refer to emergence as a bracket around {biological, machine-like}, then both kinds of entities should show emergence, which is pretty obvious. So let us take a look at emergence again. The result of emergence is a pattern that can't be described on the level of the population of bodies from which it emerged. And taking the perspective of the level where the pattern is, we can't speak about the processes beneath. Besides the issue that the pattern has to be stabilized through selection, which ultimately links to the environment, the outside, the result of emergence is the necessity to give a name to the pattern.
Who is providing the name for the emergent patterns in a brain? A programmer? Surely not. The Creator? Surely not. It is simply the community of speakers. As Wittgenstein says: "It is not that I have a word in my mind and attach some meaning to it".
Rejecting the program of traditional strong AI, as Searle did, does NOT mean rejecting the possibility of machine-based "intelligence". Searle argued against the AI that claimed the central role for symbols. Nothing less, but also nothing more.
The conclusion is that we will see progress with regard to “intelligent” “machines” only if we integrate “Non-Turing Computing”. NTC is, in important ways, just another way to speak about emergent differentiation (more details here: http://theputnamprogram.wordpress.com/2011/10/28/non-turing-computing/)
cheers
Thanks again for your comments, Monnoo.
Though it is quite complex, I recommend readers check out your very interesting article on Non-Turing computing which you reference above. It lays out very well the philosophical case for why we should not believe Turing machines are doing what we would call thinking.
I am still doubtful that we know enough about how human beings compose music or create stories, or about how exactly the individual mind intersects with culture, to be able to draw such sharp lines between what computers are doing and what we are doing.
If there are any neuroscientist or neurophilosophers out there please jump in!
If you are correct, the disturbing thing for me is that we may be able to derive much of our culture (our music, visual representation, perhaps even literature) from highly advanced Turing machines that are able to ransack our cultural inheritance and patch together "culture".
This would mean that our relationship to the higher elements of Wittgenstein's Life-World will have in some ways been severed.
James Cross posted the excellent questions below to the first part of this article. I was afraid they might not be seen and thought they were relevant to the discussion around the current post so I have respectfully copied and pasted them below, along with my response.
James Cross:
“Machines can simulate intelligence because they are extensions of humans but that doesn’t mean they are intelligent. Computers are tools differing from other tools like shovels, picks, and rakes only in their degree of sophistication and complexity.
Iamus was programmed by a human who first made the decision to create a composing computer then established the rules by which the program would operate to create its inventory of sounds. Presumably a human also initiated the program that caused Iamus to create a composition in honor of Turing. At what point in this process did the computer compose on its own rather than merely obey rules established by a human? I would say it never did.
Ultimately the question becomes: is intelligence purely binary? Can intelligence be represented completely by an algorithm?
My hunch is no but I am not 100% certain of it. I do feel that if it is possible we are still quite a ways from achieving it.
If I could write a truly intelligent program, then my program itself should be able to create another intelligent program. Would the program created by my program be the same or different from the one I created? Could it be more intelligent than the one I created? Or is intelligence one single program?”
Rick Searle:
Hi James,
Again, all great questions. I think not so much the claims as the unspoken assumptions behind the claims of the creators of Iamus were pure showmanship.
Nevertheless, things are moving incredibly fast and it seems we already have software that creates other software:
http://money.cnn.com/2007/08/27/technology/intentional_software.biz2/index.htm
I do not think we are necessarily headed towards a world where the intelligence of digital computers, if we should call it that, is wholly distinct from human intelligence. Your earlier "garage band" example and my own experience using consumer algorithms such as Pandora put me in mind of a scenario like the one below, which, if we don't see it ourselves, our children very likely will.
Imagine a rock concert where the concert goers place on themselves a kind of device that measures their physiological response to music. They would in fact be connected to a very advanced, by our standards, AI that would be able to produce a variety of simulated instruments.
The AI would compose/perform music on the fly based upon how the crowd reacts. The music would start out quite chaotic but based on the physiological feedback of the concert goers would become more and more ordered and beautiful as time went on.
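Very roughly, and purely as a speculative toy, the loop might look something like the Python sketch below, where the "crowd response" function is a made-up stand-in for the physiological feedback (here it simply prefers smoother melodies), not a claim about any real system.

```python
# A toy feedback loop for the imagined concert scenario (speculative and illustrative only).
import random

def crowd_response(melody):
    """Pretend physiological feedback: the crowd 'prefers' smaller jumps between notes."""
    return -sum(abs(a - b) for a, b in zip(melody, melody[1:]))

melody = [random.randint(48, 84) for _ in range(16)]   # the music starts out quite chaotic

for step in range(5000):
    candidate = list(melody)
    candidate[random.randrange(len(candidate))] = random.randint(48, 84)  # the AI tries a small change
    if crowd_response(candidate) > crowd_response(melody):                # keep it if the crowd reacts better
        melody = candidate

print(melody)   # more and more ordered as time goes on
```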
This is merely a science-fiction scenario and perhaps impossible for a crowd of concert goers- there is just too much diversity in reactions. It becomes much less inconceivable, I think, if we imagine the same scenario with only one individual connected to such an AI.
For better or worse, that to me seems to be the direction we are currently headed.
Thanks for posting my comments from the other thread over here. I was thinking about doing that myself. I think I posted on the Part I thread probably about the same time you were posting Part II.
I took a look at the link and the Simonyi initiative. I have been in the IT industry for many years and have seen other attempts to do what he is attempting. I am dubious at this point. In reality, much progress in software development has always been about abstracting and simplifying. The first languages were assembly languages, where the developer had to write code instruction by instruction. Then came compilers, which combined many instructions into high-level constructs such as functions. Recent years have seen a variety of efforts to encapsulate entire pieces of common functionality into yet higher-level and simpler-to-use developer interfaces. Although more software may be being written than ever, the number of developers hasn't gone down. Ultimately there remains the problem of translating the business problem, almost always ill-defined from a technical standpoint, into software. Until we have software that can understand and translate what the business people mean rather than doing what they say, the effort will probably not work. Good luck with that, Mr. Simonyi!
I think you are completely correct in trying to get beyond the Turing-Searle debate with your questions.
Although I have argued that intelligence and consciousness are different, I do think there is a relationship between the two in how intelligence operates in living entities. We might argue with Turing that a machine can be thought of as intelligent if it can simulate a human. The machine would not need to be conscious to do this.
Intelligence in living forms, however, I believe arises in conjunction with consciousness or self-awareness. It is interesting that the two forms of intelligence you have been discussing are language and music, both of which originated from sound and both of which probably played a key role in the development of the human capacity for self-awareness (see Mark Changizi's Harnessed). It is unlikely that human intelligence is unrelated to this. The odd thing is that much human intelligence still lies below the surface of consciousness. Many decisions are made before they become conscious. Many ideas seem to come from nowhere after the conscious mind has been ruminating on them for some time. This is still puzzling to me, but I think there must be an evolutionary explanation for why we have self-awareness. It is not likely to be a simple byproduct of intelligence.
Thanks for your comments, James. I will defer to your judgement of the Simonyi initiative.
Agreed that there would have to be many many issues resolved, and much progress in machines and software made, before we could declare any sort of equivalence between what our artificial creations do in terms of relating to the world and what we, along with our fellow animals, do.
To me, the fact that there are so many questions and developments in front of us is a very good thing: much to learn and experience.
DanFair44 also posted something on another part of this blog that I didn’t want people to miss, so I am pasting it here along with my response.
Dan:
Only to increase the ‘n’ in this musical experiment: Ligeti’s Etude #10
(http://www.youtube.com/watch?v=Dp-HPqXm3m4) and Iamus’ Nasciturus
(http://www.youtube.com/watch?v=Uq3iKbCNDCM).
I left my studies in music, so my opinion does not count much here, but I find the first quite close to a physical process, while the second seems to have the structure and intention that many people appreciate in a musical work. Just an idea…
Rick:
Thanks for sharing these, Dan! I am not musically trained either, so it's hard for me to tell the difference in quality between Ligeti's Etude #10 and Iamus' Nasciturus.
It would be interesting to do a sort of musical Turing test to see if someone that was musically trained could tell which composition was created by Iamus, and which by a human composer.
Know of anyone we could give such a test?
Hi Rick,
John Searle really has been a strong defender of his Chinese Room argument. Unfortunately I think it really mischaracterises the process of language comprehension and communication and how that might occur in a machine.
It’s interesting that Monnoo mentions the “symbol grounding” problem (or pseudo-problem? I’m also interested to read about “non-Turing computation”). The problem of symbol grounding is central to the lack of understanding in the Chinese room. If we are given a Chinese-Chinese dictionary without any images, no matter how much we look at it we can’t learn Chinese unless we have ways of connecting or “grounding” the symbols in sensory experience. I hope you don’t mind me indulging in a thought that has occurred to me that I will describe below. 🙂
An interesting extension to the Chinese room (a Chinese hotel?) might be:
Imagine one floor of a building where one room gets a message written in Chinese. The man in that room has a large table of buttons and lights, a system of files, and a guide for creating, filing, and accessing notes which describes what buttons to press according to the symbols received. The buttons on the table are each connected to other rooms in the building. Each room receives a combination of lights according to the buttons pressed. The people in these rooms with the lights read their guides and their files, and press some buttons that return a signal to the man who received the written message, as well as to the people in the other rooms (their combinations of lights flicker on or off according to other rooms' activity). Now the man who received the written message waits for the lights to settle, then refers to his guide and files to write the new message to send back.
It sounds roughly the same as the original Chinese room, right? Except that now we have a number of different rooms that have different guides, different buttons, and different light signals. We've really only multiplied the original Chinese room. Well, let's add a bit extra. Let's call a floor in this building "memory". And imagine we have another floor, let's call it "perception", that has a room where someone receives a raw digital stream of pictures. He can't decode the digital signal, but he has instructions on when to record and file series of 1s and 0s, and when to press buttons on his large table. These instructions are complex: some depend on light signals received from other rooms; some depend on the digital sensory signal; and some depend on files that were recorded in the past. There is another person who receives a digital stream of sounds in the same situation, another receives digital signals for smell, another for taste, and others for the tactile senses. The buttons these people press go to other rooms on the perception floor, and also to a single room on the memory floor. That associated room on the memory floor is the room that "remembers" that particular sensory modality (sight, sound, smell, etc.).
Here we have a "network" of rooms that communicate with each other and influence each other when they press buttons. It might be important to note that the guides here don't directly dictate which buttons are pressed; they instruct how to refer to filed notes and how to create new notes to file. The files in the rooms on the memory floor are influenced by the signals received in the perception rooms. And this whole process is a rough analogy to how the brain modularises its processes. Since the hotel has "memories" that are grounded in sensory input and "perceptual experience", does this hotel "understand" the messages? I think it might.
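Just to make the analogy a little more concrete, here is a very rough sketch of the hotel as message-passing modules in Python. The room names and the lookup-and-file rules are invented for illustration only; it isn't a claim about how a brain, or any real system, actually works.

```python
# A toy "Chinese hotel": rooms that only consult and update their own files (illustrative only).
class Room:
    def __init__(self, name):
        self.name = name
        self.files = {}                                # the room's accumulated notes

    def receive(self, signal):
        """Consult the files for this signal, file a new note, and pass a response along."""
        response = self.files.get(signal, "?")         # "?" means nothing is on file yet
        self.files[signal] = "seen:" + str(signal)     # the files grow with experience
        return response

perception = Room("perception")   # gets the raw digital stream
memory = Room("memory")           # gets whatever perception passes along
language = Room("language")       # composes the reply that goes back out the door

def hotel(message):
    percept = perception.receive(message)
    recalled = memory.receive(percept)
    return language.receive(recalled)

print(hotel("squiggle-squoggle"))
```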
There is a reasonable probability this idea isn’t completely original so anyone please tell me if you know of a similar/identical argument.
In brief:
– I think John Searle is right that passing symbols around, without grounding them in sensory experience (digital streams of pictures and sounds would do), doesn't produce genuine "understanding". I think John is wrong to push the idea any further than that. When the detection of a symbol is associated with memories and sensory experience, we have grounded those symbols – we have achieved what could be called "understanding". It doesn't matter a great deal what format the sensory information comes in; perception decodes it.
– Digital computation and "thinking" are different, but primarily because they refer to different levels of functioning. Thinking is a conscious process (one that is rarely logical) that sits on top of all the subconscious brain processes, including perception and memory; digital computation is absolutely logical and more akin to the firing of individual neurons. The outcome of digital computation can be the same as thinking when digital computation is used to properly model a thinking system such as a brain. The brain, as a physical system, could be simulated on a sufficiently powerful computer. That simulated brain could in theory be just as conscious as a physical brain, although some semblance of "normal" sensory input would be important for proper function.
– While computers are capable of original compositions, without the same memory, cognitive, and emotional capabilities as people, their compositions are not grounded in emotions or experience. The notes mean nothing to the computer. If, one day, a machine is of a level of sophistication that it understands emotions from experience, there is no reason it couldn't compose music and comprehend it. I don't think fooling people into responding emotionally to music composed by an uncomprehending computer would be quite as hard to achieve.
Questions or responses are welcome! 🙂
This is a great argument! In some ways it reminded me of Margaret A. Boden’s 1987 essay Escaping the Chinese Room, though yours was done much more cleverly because it built off of John Searle’s own analogy.
I really have no expertise in this area, so it is very difficult for me to find a way to decide between your views and Monnoo's. You should check out his blog, The Putnam Program, and see what you think. You might also want to check out J. Cross' blog, Broad Speculation. He comes to largely the same view as Monnoo, but from a different angle.
My instinct, though, is that both Monnoo and J. Cross are drawing far too sharp a line between what goes on in a human brain and what is done with computers- but it's just an instinct.
Your idea of viewing thinking as modular is very like the direction I find myself going. While writing my Turing 2 post an example came to me (which I did not use) that might illustrate the direction I was headed.
What is happening in the brain of a person who is an extreme savant- someone who can learn music at an incredible rate, or process numbers well beyond what is normal? (I am using this as merely an abstract case and mean no disrespect to people with such conditions. I am fully aware that not only do they possess emotional lives, but they are more often than not surrounded by a circle of love and care.) The fact is, we don't know, but it seems to me they are able to access automatic processes that are inaccessible to the rest of us, and which sadly come at the price of stunting other parts of what is found in human consciousness on average.
This may seem to contradict the point I just made, but I remember hearing somewhere that those parts of human thought that have the oldest evolutionary origins are actually the HARDEST things to replicate in AIs, whereas the latest arrivals- things like formal symbol manipulation, which we normally think of as unique signatures of human intelligence- are the easiest to replicate. An extreme savant (and again I am speaking largely abstractly) is able to process late arrivals on the evolutionary scene, e.g. math, but has difficulty with processes that have been around since mammals ran under the feet of dinosaurs, e.g. emotional communication and relationships.
What I don't think people are quite ready for is machines that can do all of the things we consider unique to humanity extremely well- sometimes even better than us- while at the same time possessing none of the emotional qualities we share with our fellow animals.
p.s.
Could you give us your name? I didn't want to start off my post with "Hello, Mind Leap" ;>)
Hi Rick,
I had been wondering whether I should keep any level of anonymity in case I end up writing something inflammatory or really stupid. 🙂 But it’s likely I haven’t done that great of a job hiding my identity, and I’m sorry if it comes across as at all rude. My name is Toby.
I’m in the process of looking through Monnoo and James Cross’s blogs. There is some interesting stuff there, and both are examples of very different writing styles. I come from more of an engineering and robotics background, though I’ve done some reading into neuroscience and philosophy of mind. I wouldn’t say I’m an expert either, but after thinking about the issues of minds and machines and coming up with some opinions, looking for information to examine and test those opinions further is a good way for us to go.
The case of autistic savants is interesting. I haven't studied the condition in depth, but I wonder at what age the exceptional mental abilities begin to appear. I would hazard a guess that the exceptional abilities are the result of an incredible amount of sustained time practising a particular skill. A large part of the reason autistic people have a higher prevalence of savant abilities could be a common preference/obsession for focusing on repetitious and systematic tasks. The brain is designed such that sustained focus and practice cause it to become better at those tasks.
These same processes exist in non-autistic people, but are usually much less developed. Some non-autistic people manage to achieve similar levels of aptitude through long years of practice. But generally non-autistic people are probably much more easily distracted. Another interesting phenomenon is synesthesia, the crossing over of senses that, for example, makes people see sounds as colours. I think it's possible this can in some circumstances be related to savant-like abilities, where people "hijack" perceptual centres of the brain, particularly vision, for performing mental calculations. A very large amount of our neocortex is devoted to vision, so practising visualisation and visual memory recall, and learning to use them for computations, could drastically increase the overall brain "resources" available for these tasks.
I would probably say that the hardest things to replicate in AI at the moment seem to be the subconscious processes for sensory perception. The functions of the "primitive" brain are mostly available to our awareness (if not directly controllable), and probably aren't so difficult to reproduce. For example, the pleasure and reward centres are modelled reasonably well by machine reinforcement learning. The parts of the brain responsible for emotions and natural drives do seem to be diminished in autistic people, from what I've read. But the perceptual abilities of savants sound as though they are often extraordinary.
The implications of a machine that is capable of doing what is normally unique to humanity are far-reaching. I think part of what is intrinsic and unique to humanity is our ability to reflect on, control, and express our emotions and feelings. For a machine to do these things and develop a model of human minds, it will probably need to be capable of “emotions”. Generally, though, I think a lot of people would be uncomfortable with a machine that has most or all human capabilities, regardless of its ability to feel or express emotions. But more important might be what such a machine would think or feel about people.
Toby
Thanks for your response, Toby.
I wouldn't worry about saying anything stupid; we all do so from time to time. If we didn't, we wouldn't be- ahem!- human.
I look forward to reading the post on your blog, and if you don’t mind, may bother you on occasion when I have a question about the engineering side of AI.
No worries. 🙂
I’m not too concerned about the embarrassment of saying something unintelligent. But if I decide to seek employment in a job with a public profile and/or where seeming to be intelligent is important, freely speaking my mind, as I’d like to, has a risk of damaging my chances.
I’ll answer whatever questions I can. I’ll be posting on artificial intelligence next, but pretty soon I’ll be covering some thoughts on evolution of rewards and pleasure, as well as values and morality for both people and artificial intelligences. I’d be very happy for you to read any of my posts and share whatever thoughts you have. 🙂
BTW, Iamus seems to have published its first CD of contemporary works, recorded by the LSO and first-rate musicians.
http://www.cdbaby.com/cd/iamus
which can be listened with MP3 quality here:
http://melomics.com/@iamus/cd-iamus
worth listening, by the way.
I wonder how this will transform the music scene. Who do the rights belong to (the programmer or the computer owner)? What if Iamus starts composing pieces by the zillions? What will be the role of the human composer from now on (writing, or just browsing)?
best
Dan
Hey Dan,
Thanks for the info and link- I was not aware of this. I am going to give it a listen over the weekend.
All great questions, and I have no idea. One optimistic scenario, I think, is that it could democratize music creation. Right now a person has to be trained for years to learn music, but perhaps in the future, with the help of programs like Iamus, we will all be able to write a song when we've experienced heartache, or lost someone, or experienced great joy. Let's hope it's something like that and not one of the more pessimistic outcomes.
Yep, democratization would be the right direction. Nowadays everybody can keep images of her childhood, but not long ago you would have needed to hire a professional painter to do so. Maybe you're right and in the future a good ear and deep sensibility will be enough to compose most beautiful music, as computers help us more and more in creative tasks.
I sure hope so.
I think I will put the link to the Iamus stuff you gave me in a short post sometime next week: It’s likely to get lost under my old post.
I’d like to give you credit. Do you have a blog or something else I can link to?
Not really, Rick, I just have an interest in this sort of thing, and like to comment on blogs that treat it seriously, like yours. I have read a lot on AI and evo-devo, but could never build up the momentum to start a blog.
Okay, Dan.
Maybe I’ll get some responses to the post. It’d be great if you took part in any discussion.