
Searle's Chinese Room Argument

PART TWO: The Robot Reply

David Leech Anderson: Text Author, Storyboards
Robert Stufflebeam: Animations, Storyboards
Kari Cox: Animations

Those who offer the "Robot Reply" to Searle's Chinese Room argument hold a particular theory about the nature of language and how it works. Here is one response that such a person might give to Searle's argument:

It should be conceded that Searle's argument is effective in showing that certain kinds of machines -- even machines that pass the Turing Test -- are not necessarily intelligent and do not necessarily "understand" the words that they speak. This is because a computer sitting on a desk with no sensory apparatus and no means of causally interacting with objects in the world will be incapable of understanding a language. Such a machine might be capable of manipulating linguistic symbols, even to the point of producing output that will fool human speakers and thus pass the Turing Test. However, the words produced by such a machine would lack one crucial ingredient: The words would fail to express any meaningful content and thus would fail to be "about" anything.

The problem with a mere PC sitting on a desk is that it has no sensory apparatus and no ability to causally interact with objects in the world. A lone PC may be programmed to utter all manner of fascinating statements about pigs. It may give an eloquent dissertation on pigs and even produce sentimental porcine poetry. However, you could bring a pig into the room and lay it on top of the computer's keyboard and the computer would know nothing about it. Knowing what the word 'pig' means requires that a speaker's use of the word be causally hooked up (in one way or another) to real physical pigs. Without that causal connection, a crucial ingredient necessary for meaningful language is missing.

This response presupposes a particular semantic theory. Semantics deals with the meanings that words, sentences, and other linguistic expressions have and how it is that they come to have those meanings. The semantic theory presupposed by the "Robot Reply" is often called "semantic externalism." This is the view that words, especially words that refer to objects in the world, come to have the meanings that they do, and come to refer to the objects that they do, by virtue of the causal connections that obtain between the words spoken and the objects that the words name. Therefore, a robot (like Commander Data from Star Trek: The Next Generation) that is able to go to a farm, tell the difference between the pigs, the horses, and the chickens, call the pigs by name, and even slop the pigs, demonstrates by its actions that it genuinely understands what a pig is. This is possible because the robot has connected the word 'pig' with real pigs. Light is reflected off the real pig, sent through the robot's video camera eye, and ultimately produces the utterance, "I see a pig." (See the animation in the video below.)

This virtual lab was originally an interactive Flash animation. Flash was retired at the end of 2020. The video below preserves all the content of the original. Read all of the text and watch all of the animation, pausing the video as needed.

There is a causal chain from the pig, through the camera, to the computer processing center, and finally to the speech-center that produces the utterance, "I see a pig." With the robot we have a "word-world relation" established that confers on the word 'pig' its meaning -- which, in this case, is nothing other than the physical pig. Thus, it is argued that the Chinese Room (like a computer sitting on the desk) lacks "word-world" relations and therefore Searle was justified in drawing the conclusion that he did. But a robot would be different. The same conclusion would NOT be justified if the machine in question were a complex robot.

What follows, then, from this line of argument? First, one who offers the "Robot Reply" agrees with Searle that the Turing Test is not a reliable test for understanding a language. The kind of behavior exhibited in the Turing Test is not sufficient to demonstrate linguistic comprehension. Where the Robot Reply parts company with Searle is in rejecting his claim that the Chinese Room argument applies equally to ALL digital computers. Those who offer the Robot Reply believe that the right kind of digital computer -- one that controls a sufficiently complex robot -- would indeed be intelligent and understand a language.

So, what does Searle say to this argument? Does he agree that his argument is not effective against the right kind of robot? No way!! He argues that placing the computer inside a robot will make no difference whatsoever. His argument is an interesting one.

SEARLE REJECTS THE ROBOT REPLY

Searle is not convinced by the robot reply. To see that the addition of a robotic body fails to make a difference, Searle says that one simply needs to extend the thought experiment by placing the Chinese Room inside a robotic body. (Okay, it has to be a pretty big robot, but what difference does that make?) Now, all the computational processing that goes on inside the robot will be accomplished by Searle in the now-modified "Chinese Room." In addition to symbols coming into the room in the form of questions, there will now also be symbols coming into the room from the video cameras that are receiving visual information about pigs in the barnyard.

Searle believes that this new thought experiment defeats those who think that a causal connection between a physical pig and 'pig'-utterances is sufficient for "understanding a language." But why does he think so? Remember, we started with a computer on a desk, taking as input strings of symbols of a language and giving as output strings of symbols of that same language. If we want a computer to pass the Turing Test in English, it must be able to take as input questions in English and give as output answers in English. However, digital computers do not "recognize" (i.e., do not perform computations directly on) the symbols that make up English words and sentences (e.g., P-i-g). They must first convert those symbols into symbols of the only language that computers directly "understand" (i.e., the only language on which they can perform any operations): the binary language that we represent as strings of 0's and 1's (e.g., 00110100, 11101011, . . .).

Let's say that there is a computer on a desk that is currently passing the Turing Test. I, the interrogator, am just now asking the computer the question: "What is a pig?" The computer must take that string of letters and convert it into the strings of 0's and 1's that are associated with those letters of the alphabet. There is an established convention for doing this (the ASCII code), and here it is:

The Alphabet in Binary Code

Letter  Binary Code        Letter  Binary Code
A       01000001           a       01100001
B       01000010           b       01100010
C       01000011           c       01100011
D       01000100           d       01100100
E       01000101           e       01100101
F       01000110           f       01100110
G       01000111           g       01100111
H       01001000           h       01101000
I       01001001           i       01101001
J       01001010           j       01101010
K       01001011           k       01101011
L       01001100           l       01101100
M       01001101           m       01101101
N       01001110           n       01101110
O       01001111           o       01101111
P       01010000           p       01110000
Q       01010001           q       01110001
R       01010010           r       01110010
S       01010011           s       01110011
T       01010100           t       01110100
U       01010101           u       01110101
V       01010110           v       01110110
W       01010111           w       01110111
X       01011000           x       01111000
Y       01011001           y       01111001
Z       01011010           z       01111010

We now must imagine a separate room, call it the "Binary Converter Room," which takes as input strings of symbols from the Roman alphabet and gives as output strings of binary symbols. So the symbols W-h-a-t-i-s-a-p-i-g-? will be converted to the symbols 0-1-0-1-0-1-1-1 . . . Now, imagine that this string of 0's and 1's is sent into Searle's room, where he has books that tell him how to manipulate these binary symbols.
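To make the conversion concrete, here is a minimal sketch of what the Binary Converter Room does, written in Python (the original module contains no code, and the function names here are our own illustration). It simply applies the ASCII table above in both directions:

```python
# A minimal sketch of the "Binary Converter Room": letters are mapped
# to 8-bit ASCII codes and back, exactly as in the table above.
# (Python and these function names are our own illustration.)

def text_to_binary(text: str) -> str:
    """Convert each character to its 8-bit ASCII code."""
    return " ".join(format(ord(ch), "08b") for ch in text)

def binary_to_text(bits: str) -> str:
    """Convert space-separated 8-bit codes back to characters."""
    return "".join(chr(int(byte, 2)) for byte in bits.split())

print(text_to_binary("What is a pig?"))
# 01010111 01101000 01100001 01110100 00100000 01101001 ...
print(binary_to_text("01110000 01101001 01100111"))
# pig
```

Notice that the converter, like the man in the room, needs no knowledge of pigs; it only maps shapes to shapes.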

In the room, Searle will have a second set of books that instruct him in what to do, not with Chinese symbols, but with binary symbols -- 0's and 1's. In one of the books, there will be a sentence written in English that says:

"If you receive this string of shapes: 01010111011010000110000101110100,
0110100101110011, 01100001, 011100000110100101100111,

then send out this string of shapes: 010000001, 011100000110100101100111,
0110100101110011, 01100001, 0110001
00110000101110010011011100111100101100001011100100100100,
01100001011011100110100101101101 0110000101101100"

So now we have a second version of the "Chinese Room." But now we will call it the "Binary Room." Searle's situation is much the same. Before, shapes came into the room that he didn't recognize. They were just shapes. He didn't necessarily know they were symbols. He only knew that when certain shapes came into the room, he was required to send certain other shapes out of the room. Before, the shapes were Chinese symbols. Now they are binary symbols. In both cases, the symbols that came into the room asked questions which you would understand if you spoke that language -- Chinese or binary, respectively. But Searle speaks neither language. He merely knows how to consult the books and "manipulate" the symbols based on their shape and position.
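To see how little the rule books demand of Searle, here is a minimal sketch of a Binary Room rule book containing just the one rule quoted above (again in Python, purely as our own illustration). The lookup succeeds by comparing shapes alone; nothing in it ever decodes the bits:

```python
# A minimal sketch of Searle's rule book in the Binary Room: incoming
# strings of shapes are matched against stored patterns, and the paired
# outgoing shapes are sent back out. Nothing here decodes the bits; the
# matching is purely formal. (The comments give each string's decoding
# for the reader's benefit, not the room's.)

RULE_BOOK = {
    ("01010111011010000110000101110100",   # "What"
     "0110100101110011",                   # "is"
     "01100001",                           # "a"
     "011100000110100101100111"):          # "pig"
    ("01000001",                           # "A"
     "011100000110100101100111",           # "pig"
     "0110100101110011",                   # "is"
     "01100001",                           # "a"
     "0110001001100001011100100110111001111001011000010111001001100100",  # "barnyard"
     "011000010110111001101001011011010110000101101100"),                 # "animal"
}

def binary_room(incoming):
    """Return the outgoing shapes paired with the incoming shapes."""
    return RULE_BOOK[tuple(incoming)]
```

The room "answers" the question correctly, yet the only operation ever performed is string matching on shapes.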

THE ROBOT REPLY & THE BINARY ROOM

Notice that with the Binary Room we have simply duplicated the original Chinese Room. The only thing that is different is the language. In both cases, they are languages that Searle does not speak, so the effect is the same. So what would a person who offers the Robot Reply have to say about the Binary Room? The same thing that they said about the Chinese Room.

A machine controlled by a computer program that merely manipulates symbols (whether those symbols are in Chinese, English, or a binary language) does not understand the meaning of those symbols if there is no causal connection between the machine's use of those symbols and the objects in the world to which the words purportedly refer.

So far so good. Those who offer the Robot Reply say that what will make the difference, what will imbue a symbol like 'pig' with genuine meaning and thereby invest the speaker with genuine understanding, is the ability of the speaker to causally interact with real barnyard animals.

We are now in a better position to understand why Searle thinks that a new thought experiment, putting the Chinese Room inside the head of the robot -- or, (if we are permitted to modify the argument slightly) putting the Binary Room inside the robot's head -- might offer a refutation of the Robot Reply.

We now have a robot that is taking in information about the world in two ways. The robot, like the PC on the desk, continues to be able to process what we call "linguistic" information. Let's consider a question written on a piece of paper. Just as a PC that passes the Turing Test could answer the question: "What is a pig?" so, too, can our robot. In the animation below we see Searle in the Binary Room, processing the question "What is a pig?" However, because the sentence is written in binary, Searle doesn't know that it is a question. It is just a meaningless string of 0's and 1's.

Searle sends a string of 0's and 1's out of the room, not knowing their significance. That string is sent through a binary converter, which produces the English sentence "A pig is a barnyard animal" that the voicebox is instructed to utter.

Searle would argue that while it appears that the robot genuinely understands English and knows about pigs, that is not the case. It is Searle who is processing the question and giving the answer. If he doesn't understand what he's doing (and he doesn't), then the robot can't be said to understand what it is doing either.

"Stop one minute!" the critic will say. The defender of the Robot Reply will insist that we have left out the crucial ingredient. We still don't have the causal interaction with real pigs. Searle says: Okay, add in the pig, and a vision system that "seems" to recognize pigs. But remember, says Searle, the visual information that comes from the video camera is just digital information too. It is just 0's and 1's. And who is doing the processing of that information? Searle, inside the Binary Room, of course. Let's look:

Does the robot "see" a pig? Does it understand what it is seeing? Does it know that it is a pig? To answer those questions, Searle wants you to reflect on his performance inside the room (as you just viewed in the animation above). Inside the room, Searle does not see a picture which he recognizes as a pig. The video camera has taken visual data from the pig that consists primarily of black pixels tracing the outline of the pig. But Searle doesn't even "see" that. Those pixels are converted to a binary code of 0's and 1's. The background is composed of 0's and the outline of the pig is captured by 1's -- as in this picture:

Fig. 4: A simulation of one way that the visual data taken by the video camera might be recorded as strings of 0's and 1's.

This data -- the 0's and 1's -- is sent into the room for Searle to process. It would be a mistake to assume that Searle ever sees all of the 0's and 1's displayed as we have displayed them here. He would just get one string after another. The first three rows are all 0's. So he would get a string something like this:

0000000000000000000000000000000000000000000000000000000000000000000000

followed by two more of the same:

0000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000000000000000000000000000000000

then followed by a string with 0's and 1's in it, something like this (notice row #4 in Fig. 4 above -- we've reduced the total number of 0's and 1's to make the string a more manageable length):

0000000000000000011111111111111111111111111111111100000000000000000000
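Here is a minimal sketch, using a made-up miniature bitmap, of how the camera's output might be flattened into the row-strings Searle receives (again in Python, purely for illustration):

```python
# A made-up miniature bitmap: 1 marks a dark pixel on the pig's
# outline, 0 marks the background. Each row is flattened into a
# single string of 0's and 1's -- the only thing Searle ever sees.

image = [
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 1, 1, 0, 0],
    [0, 1, 0, 0, 0, 0, 1, 0],
    [0, 0, 1, 1, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
]

for row in image:
    print("".join(str(bit) for bit in row))
# 00000000
# 00111100
# 01000010
# 00111100
# 00000000
```

Note that the strings carry no label saying "image"; to the man in the room they are formally identical to the strings that encode English words.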

Searle only sees strings of 0's and 1's. He never sees anything that he could recognize as a pig. To him, the 0's and 1's that come from the video camera are indistinguishable from (and thus just as meaningless as) the 0's and 1's coming from the linguistic input. So we might expect Searle to ask: How does adding a string of 0's and 1's from a video camera bring the robot any closer to understanding what a pig is? Searle, working within what we are now calling our "Digital Room," would behave something like this:

Searle in the room is the one who "processed" the information. But from Searle's perspective, what he accomplished did not require any understanding of the meaning or the significance of the 1's and 0's that he identified and that he wrote.

Searle's argument is this: So long as the only information processing that is going on consists entirely of the manipulation of symbols based solely on their formal properties (viz., their shape and position), then there is no "understanding" present, and being causally hooked up to a pig gives us nothing more than uninterpreted 0's and 1's, which gives us nothing at all. With the behavior of the robot, the lights seem to be on, but when we peek inside we discover that "nobody's home," at least when it comes to understanding what it is doing and saying.

So. What do YOU think of the Robot Reply? Does Searle win this round? Or, can Searle's argument be defeated?

If you want to continue exploring these issues, there are other materials on "The Structure of The Chinese Room Argument".