The Chinese Room Argument Reconsidered: Essentialism, Indeterminacy, and Strong AI
JEROME C. WAKEFIELD
Rutgers University, New Brunswick, NJ, USA
Abstract. I argue that John Searle's (1980) influential Chinese room argument (CRA) against computationalism and strong AI survives existing objections, including Block's (1998) internalized systems reply, Fodor's (1991b) deviant causal chain reply, and Hauser's (1997) unconscious content reply. However, a new essentialist reply I construct shows that the CRA as presented by Searle is an unsound argument that relies on a question-begging appeal to intuition. My diagnosis of the CRA relies on an interpretation of computationalism as a scientific theory about the essential nature of intentional content; such theories often yield non-intuitive results in non-standard cases, and so cannot be judged by such intuitions. However, I further argue that the CRA can be transformed into a potentially valid argument against computationalism simply by reinterpreting it as an indeterminacy argument that shows that computationalism cannot explain the ordinary distinction between semantic content and sheer syntactic manipulation, and thus cannot be an adequate account of content. This conclusion admittedly rests on the arguable but plausible assumption that thought content is interestingly determinate. I conclude that the viability of computationalism and strong AI depends on their addressing the indeterminacy objection, but that it is currently unclear how this objection can be successfully addressed.
Key words: artificial intelligence, cognitive science, computation, essentialism, functionalism, indeterminacy, philosophy of mind, Searle's Chinese room argument, semantics
1. Once More into the Chinese Room
Can computers literally think, understand, and generally possess intentional contents in the same sense that humans do, as some in the artificial intelligence (AI) field hold?1 The claim that they can has come to be known by John Searle's label, "strong AI," in contrast to "weak AI," the claim that computers are merely able to simulate thinking rather than literally think.2
The only systematically developed and potentially persuasive argument for strong AI is based on the doctrine of computationalism (or machine functionalism), which holds that the essence of thinking in the literal sense of thinking that applies to human intentional contents consists of the running of certain syntactically defined programs.3 Thus, computationalists hold that an entity's having a specific kind of intentional content consists of its running the same (or sufficiently similar) Turing machine program with the same (or sufficiently similar) input-
output relations and state transitions that constitutes a person's having that kind of content.4
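Since the argument repeatedly appeals to the notion of a syntactically defined Turing machine program, a minimal sketch may help readers unfamiliar with the machinery. The following Python toy is not from the paper; the states, symbols, and transition rules are invented for illustration. It shows what such a program amounts to: a table of state transitions over uninterpreted symbols, with no semantic interpretation anywhere in the mechanism.

```python
# Illustrative toy only: a Turing-machine-style program given purely as a
# syntactically defined transition table. States, symbols, and rules are
# invented placeholders, not drawn from Searle's or Wakefield's text.

# Transition table: (state, read_symbol) -> (new_state, write_symbol, move)
TABLE = {
    ("q0", "1"): ("q0", "1", +1),    # scan right across the 1s
    ("q0", "_"): ("q1", "1", -1),    # write one more 1 at the end
    ("q1", "1"): ("halt", "1", 0),   # done
}

def run(tape, state="q0", pos=0, max_steps=100):
    """Run the table on a tape of symbols; every operation is purely formal."""
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")
        state, write, move = TABLE[(state, symbol)]
        tape[pos] = write
        pos += move
    return [tape[i] for i in sorted(tape)]

print(run(["1", "1", "_"]))  # ['1', '1', '1']: successor on unary numerals
```

On the computationalist claim sketched above, possessing a given content would consist in running such a table (a vastly more complex one) with the right input-output relations and state transitions, however the table happens to be physically realized.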
Strong AI immediately follows from computationalism. If thinking is constituted by certain kinds of computation, and digital computers are (in principle, modulo performance limitations) universal Turing machines, and universal Turing machines can compute any kind of computable function (Turing-Church thesis), then, in principle, computers can think because, in principle, they can be programmed with the same program that constitutes human thought.5
John Searle's (1991a) Chinese room argument (CRA) is aimed at refuting the computationalist account of content, thus removing the only grounds for believing strong AI.6 Searle constructs a counterexample via a thought experiment (the Chinese room experiment [CRE]), on which his argument rests. The CRE is claimed to show that running a program identical to the program of a person possessing certain thought contents (in Searle's example, Chinese language understanding) does not necessarily confer those contents on the entity so programmed. The twist is that, whereas computationalism is controversially invoked to justify attributing contents to computers, in the CRE it is a human being who performs the steps of the program and yet, according to Searle, cannot be said to have the relevant mental states. The CRA thus purports to show that human thinking cannot consist of running a certain program.
With apologies for the familiarity of the exposition that follows, Searle's counterexample to computationalism's claim that thinking consists of implementation of a syntactically defined program goes as follows. Imagine that an English speaker (the operator) who knows no Chinese is enclosed in a room in the head of a large robot, with an elaborate manual in English that instructs her on what to do in the room, and she devotedly and successfully implements the manual's instructions. The operator receives inputs in the form of sequences of shapes, utterly strange to her, that light up on a console. In accordance with the directions in the manual, when certain shapes light up in a certain sequence on the input console, the operator pushes buttons with certain shapes in a specified sequence on another output console. Thus, she produces specific outputs in response to specific sequences of inputs. The program is fully syntactic in that the manual's rules use only the shapes and sequences of past inputs and syntactically defined manipulations of those sequences to determine the shapes and sequences of the output. The operator follows the manual without any understanding of what any of this might mean.
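The "fully syntactic" character of the manual can be made concrete with a small sketch. The following Python toy is not Searle's formulation; the shapes and rules are invented placeholders. It shows how input-to-output behavior can be dictated entirely by formal pattern matching, with nothing in the mechanism that requires, or supplies, a grasp of what the shapes mean.

```python
# A minimal sketch of the operator's situation (all rules invented): the
# "manual" is a purely formal mapping from input shape-sequences to output
# shape-sequences.

MANUAL = {
    ("#", "&", "@"): ("%", "!"),      # hypothetical rule: these shapes in, those out
    ("@", "#"):      ("&", "&", "@"),
}

def operate(input_shapes):
    """Follow the manual: match the inputted sequence, emit the dictated output.
    Nothing in this lookup requires, or provides, any grasp of what the shapes mean."""
    return MANUAL.get(tuple(input_shapes), ("?",))  # default shape if no rule matches

print(operate(["#", "&", "@"]))  # ('%', '!')
```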
Although the operator does not know it, the shapes on the input and output consoles are characters of the Chinese language, and the manual is a super-sophisticated program for responding appropriately in Chinese to Chinese statements. The input panel feeds in a sequence of Chinese characters corresponding to what the robot has detected people saying to it in Chinese, and the output console controls the robot's speech behavior. For the purpose of reducing computationalism to absurdity, it is assumed that the program implemented by the operator is the same as the program which, per computationalist hypothesis, constitutes Chinese
understanding in humans or (if there are variants) in some particular human, and that the operator is so skilled at following the program that the robot appears to speak fluent Chinese to Chinese speakers that talk to it. Then, according to computationalism, the operator literally understands Chinese, because she implements the same program as is possessed by those who understand Chinese.7
Searle argues, however, that the operator does not understand a word of Chinese, indeed does not even know that she (via the robot's utterances) is speaking a language. She just follows the rules laid out in the manual. The sequences of inputted and outputted signs are meaningless to her. Thus, Searle concludes, understanding must be more than merely implementing the right program, and computationalism is false.
The CRA has had an enormous impact. Even critics admit it is "perhaps the most influential and widely cited argument against strong AI" (Hauser, 1997, p. 199) and "a touchstone of philosophical inquiries into the foundations of AI" (Rapaport, 1988, p. 83). Yet, despite the immense amount of published discussion, I believe that the ways in which the CRA succeeds and fails, and the reasons for its successes and failures, remain inadequately understood. I attempt to remedy this situation by reconsidering Searle's argument in this article. Many readers will consider the CRA already refuted and will doubt that further attention to it is warranted, so before presenting my own analysis I explain at some length why even the best available objections fail to defeat the CRA. I also attend throughout, sometimes in the text but mostly in the notes, to a number of anti-CRA arguments recently put forward in this journal by Hauser (1997).
If correct, my analysis offers some good news and some bad news for strong AI. The good news is that, as presented by Searle, the CRA, even in its most sophisticated and objection-resistant form, is an unsound argument that relies on a question-begging appeal to intuition. Many critics have contended that the CRA begs the question or relies on faulty intuitions, but no one, in my opinion, has offered a convincing diagnosis of why it does so and thus progressed beyond a clash of intuitions. I offer such a diagnosis here that relies on an interpretation of computationalism as a scientific theory about the essential nature of content.8 I argue that such theories are impervious to counterexamples based on appeals to intuitions about non-standard cases (such as the CRE), because such theories by their nature often conflict with such pre-theoretical intuitions. So, the CRA, as stated by Searle, fails.
The bad news for strong AI, according to my analysis, is that the CRA can be transformed into a potentially lethal argument against computationalism simply by reinterpreting it as an indeterminacy argument, that is, an argument that shows that thought contents that are in fact determinate become indeterminate under a computationalist account. The anti-computationalist conclusion of the indeterminacy version of the CRA admittedly rests on assumptions, arguable but in the end difficult to reject, about the determinacy of intentional content. Moreover, I argue that, contrary to Hauser's (1997) claim that the CRA is simply warmed-over in-
determinacy, in fact the CRA is a substantive advance in formulating a persuasive indeterminacy argument against the computationalist account of content. I conclude that strong AI remains in peril from the indeterminacy version of the CRA, and that the future of strong AI rests on somehow resolving the indeterminacy challenge.
2. Failure of Existing Objections to the CRA
Searle's argument rests on the common intuition that the operator in the Chinese room does not understand Chinese, despite her successful manipulation of the robot's verbal behavior using the manual. This intuition is widely accepted as correct, even by many strong AI proponents. If one accepts this intuition, then there would seem to be only three possible kinds of replies that might save computationalism and strong AI from the CRA. Two of them deny that computationalism implies that the operator should understand Chinese. First, it might be argued that not the operator herself but some other entity in the Chinese room situation meets computationalist criteria for understanding Chinese, and that this other entity does understand Chinese. Second, it might be argued that the Chinese room situation does not contain any entity, operator or otherwise, that meets computationalist criteria for understanding Chinese, and that in fact no entity in that situation understands Chinese. Both of these kinds of replies appear in the literature in multiple forms. I consider the first kind of reply in the next two sections, and then turn to the second, in each case selecting for discussion what I consider the most effective recent versions of that kind of response. I then consider the third response, which is to accept the intuition that the operator does not have the usual, conscious understanding of Chinese but argue that the operator unconsciously understands Chinese. I argue that none of these objections succeed. Only then do I consider the alternative reply, which appeals to strong AI proponents but is in my view heretofore without adequate theoretical grounding, that the CRE provides insufficient grounds to believe that the critical intuition it generates is correct, thus fails to establish that the operator does not consciously understand Chinese in the standard sense, thus fails to refute computationalism.
2.1. INTERNALIZING THE CHINESE ROOM
The most common response to the CRE is to distinguish the operator from the broader operator-robot-manual system and to argue that, in focusing on the operator, Searle has selected the wrong entity for his test. Computationalism implies that if an entity is programmed in the same way as a native speaker of Chinese, then the entity understands Chinese. But, one might argue, in the CRE it is not the operator but rather the entire system, including the operator, the robot, and the manual, that is so programmed and thus should understand Chinese. The operator
is just one part of this system, so the intuition that she herself does not understand Chinese is entirely consistent with computationalism, according to this objection.
Searle (1991a) ingeniously attempts to block this systems objection by modifying the CRE so as to eradicate the distinction between the operator and the broader system:
My response to the systems theory is simple: Let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system which he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system which isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him. (p. 512)
In this amended scenario, the manual has been rewritten to apply to input and output sequences of sounds rather than written symbols, and the operator has memorized the manual's rules and internalized in her own head what were formerly the operations in the room in the robot's head. Rather than getting an input sequence of shapes on a screen, the operator simply listens directly to a speaker; and rather than feeding signals to a robot that makes corresponding sounds, the operator discards the robot and utters the sounds herself directly to her interlocutor. We may imagine that the operator has gotten so facile at following the program that she is nearly instantaneous and virtually flawless in her responses, so there is no noticeable difference between her responses and those of someone who fluently speaks Chinese. Placed in a situation where everyone else understands and speaks Chinese (though she does not know what language they are speaking or even that they are speaking a language), she turns in a perfect performance, interacting as if she actually understood Chinese without anyone knowing that she does not.
Under these conditions, it would seem that there is no distinction to be drawn between the operator and the system because the operator is the system. The systems objection would thus seem to become irrelevant. In this scenario, Searle claims, there is no question that strong AI must imply that the operator herself understands Chinese because, per hypothesis, the operator instantiates exactly the same program as a native speaker of Chinese. And yet, Searle further claims, our intuition remains solid that the operator does not understand Chinese; she understands nothing that either she or others say, and does not know the meaning of even one word or sentence of Chinese. She responds correctly not because she understands the meaning of her interlocutor's assertion or her response, but because she perceives that the interlocutor makes certain sounds and, recalling the manual, or perhaps having it so well memorized that it is like a habit that is second nature, she responds in accordance with the manual's rules by making certain specified (meaningless, to her) sounds in return. As Block (1998) notes of the operator: "When you seem to Chinese speakers to be conducting a learned
discourse with them in Chinese, all you are aware of doing is thinking about what noises the program tells you to make next, given the noises you hear and what you've written on your mental scratch pad" (p. 45). She may not even know she is speaking a language; she may think that the entire effort is an experimental test of the limits of nonsense learning (in the tradition of psychologists' cherished nonsense syllables), and that her interlocutors have merely memorized nonsense sequences as test inputs. Searle concludes that instantiating the right program cannot be what confers understanding, because in the amended CRA the operator instantiates such a program but has no understanding.
2.2. BLOCK: RETURN OF THE SYSTEMS OBJECTION
Undaunted by Searle's claim that in the new CRA the operator is the system, Ned Block (1998) argues that a more sophisticated version of the systems objection succeeds against the new CRA. Just as the new CRA internalizes the system within the operator, so Block attempts to internalize the systems objection by distinguishing the operator's meanings from the meanings of the program she has internalized. Block claims that the internalized program understands Chinese even though the operator does not:
But how can it be, Searle would object, that you implement a system that understands Chinese even though you don't understand Chinese? The systems objection rejoinder is that you implement a Chinese-understanding system without yourself understanding Chinese or necessarily even being aware of what you are doing under that description. The systems objection sees the Chinese room (new and old) as an English system implementing a Chinese system. What you are aware of are the thoughts of the English system, for example your following instructions and consulting your internal library. But in virtue of doing this Herculean task, you are also implementing a real intelligent Chinese-speaking system, and so your body houses two genuinely distinct intelligent systems. The Chinese system also thinks, but though you implement this thought, you are not aware of it.... Thus, you and the Chinese system cohabit one body. Searle uses the fact that you are not aware of the Chinese system's thoughts as an argument that it has no thoughts. But this is an invalid argument. Real cases of multiple personalities are often cases in which one personality is unaware of the other. (pp. 46-47)
Block argues that, although the Chinese program is implemented by the operator, Chinese contents occur as states of the program in the operator's brain but not as the operator's contents. Note that Block acknowledges that the operator does not possess Chinese semantic contents; he does not attempt to argue that the operator unconsciously understands Chinese (the unconscious understanding argument is considered in a later section). The operator is unaware of the Chinese meanings of the steps in the program she implements, but that does not mean she unconsciously
understands them, any more than my unawareness of your contents means I unconsciously possess your contents. Rather, Block says, it is like a case of multiple personality disorder in which a brain contains two agents, one of which is unaware of the other's contents; or, it is like the operator's representing a step of the program under one description and the program's representing it under another.
The major challenge for Block is to show how, within a computationalist framework, the program's meanings can be different from the operator's meanings. After all, Searle designed the new CRA to eliminate any such distinction. For the operator to implement the program is for the operator to go through every step of the program and thus to do everything, syntactically speaking, that the program does. Indeed, the program was (per hypothesis) selected on the basis of the very fact that a person's (i.e., a native Chinese speaker's) implementation of the steps of the program constitutes the person's (not the program's) understanding Chinese. So, Block's objection stands or falls with his ability to explain how to create a relevant distinction between the program's and the operator's meanings within a computationalist account of meaning.
Block thinks he can draw such a distinction partly because he misconstrues Searle's argument as weaker than it is. Block suggests that Searle's only ground for denying that the operator understands Chinese is that the operator is unaware of possessing Chinese meanings. He thus claims that Searle's argument is of the form: A (the operator) is unaware of A's understanding the meanings of Chinese words and sentences; therefore, A does not possess Chinese semantic contents. Without assessing this argument regarding the operator's meanings,9 Block observes that it loses whatever force it has when generalized to A's lack of awareness of another entity's contents, as in: A (the operator) is unaware of B's (the program's) understanding of the meanings of Chinese words and sentences; therefore, B does not possess Chinese semantic contents. Thus, Block concludes that Searle's argument, whatever its merits when applied to the operator's understanding, does not support the conclusion that the program itself does not understand Chinese.
However, Searle's argument is more subtle than Block allows. Searle constructs the internalized version of the CRE in such a way that the program exists as thoughts in the operator's mind; each step of the program when it is running is, per hypothesis, a step in the operator's thought process. Thus, if computationalism is correct that the program determines the content, the operator and the program must possess the same content. That, in conjunction with the fact that the operator understands the steps of the program only as syntactic manipulations and not as Chinese meanings (Block concedes this), yields the conclusion that the program cannot understand Chinese. Block rightly observes that Searle argues only that the operator, not the program itself, lacks Chinese understanding. But that is because Searle realizes that, if implemented syntax constitutes semantics, then the fact that the operator does not understand Chinese implies that the program also does not, because in the new CRE, the program and the operator necessarily go through the same syntactically defined steps.
Block tries to justify distinguishing the operator's and program's contents by drawing an analogy between the operator-program relationship and the relationship between personalities in multiple personality disorder. He notes that in such disorders, one personality may not possess the thoughts of another in the same brain.
One might be tempted to object that in such disorders, there are multiple selves, and every content is a content of one of those selves, and that surely the Chinese program is not by itself an agent or self, leaving no agent to possess the claimed Chinese semantic contents. But this riposte would be inconclusive. Strong AI proponents might reject the assumption that semantic contents have to be someone's semantic contents or, less heroically, might insist that the Chinese-understanding program is so complex and capable that it is enough of a self or agent to possess contents. The latter claim is suggested by Block's comment that the Chinese program is a "genuinely distinct intelligent system."
However, there is another, more compelling reason why the multiple-personality analogy is not supportive of Block's analysis: the program and operator are not sufficiently distinct to justify the analogy. Unlike the divergent contents of multiple personalities, based on divergent brain states that implement different programs, the occurrences of the Chinese-understanding program's states are not distinguishable from the occurrences of the operator's states when she is implementing the program. Note that it might also be possible in principle for the very same brain event to simultaneously constitute steps in two different programs (perhaps implemented by two different selves) and thus to realize two different meanings. But, within a computationalist framework, such an occurrence of two different meanings would depend on the brain state's constituting different steps in two different simultaneously running programs that form the context of its occurrence. But in the internalized CRE, there is nothing analogous to such different programs that might make the meaning of a syntactic step different for the operator and the program. The operator implements the program by going through the steps of the program, and thus must possess the same computationalist meanings as the program. Block's multiple-selves analogy fails to tear asunder the meanings Searle's new CRA joins together.
Block also claims that the operator and the program understand a given step's meaning under two different descriptions; the operator understands the step under an English description of its syntactic shape, while the program understands the step under a description in terms of its semantic content in Chinese. These divergent descriptions are claimed to allow for two different meanings despite the fact that the identical step occurs within the operator's implementation and the program's running.
An identical event can be known to two agents under different descriptions. However, according to computationalism, the meaning of a description (which is itself, after all, a semantic content) must be determined by the formal structure of an implemented program, and the operator's and program's implemented formal
structures are, per hypothesis, identical. Block makes much of the fact that the operator describes the program's syntactic steps to herself in English, whereas the program itself is not in English. However, the idea that the language used by the operator to describe the steps of the program should matter to the meanings of the program's steps is antithetical to the core computationalist hypothesis (on which the generalization from the nature of human thought to the ability of computers to think depends) that meaning is invariant over ways of implementing a program. So, given that the operator implements the program, a computationalist cannot hold that the program understands Chinese but the operator does not just because the operator is aware of the program's syntactic steps under an English description.
In any event, the fact that there are two languages is an inessential element of the new CRE. One could reframe the CRE so that it describes a syntactic-programming idiot savant who as a child learned her first language by extrapolating a set of formal syntactic rules from the speech sounds of those around her, and which she subsequently learns to habitually follow without needing a meta-language to describe the rules. Or, the operator could have overlearned the program so that it is habitualized and no English thought need intervene when going from step to step of the program as dictated by the manual. In either case, the operator simply thinks via the program's transformations with no meta-level chatter in another language, like a mathematician who simply sees directly how to transform mathematical expressions without needing to think about it in English. Despite there being only one language involved, such a person would have no semantic understanding despite her perfect linguistic performance.
In sum, the internalized systems objection that Chinese understanding is possessed only by the program and not by the operator fails because, given the internalization of the program, the features that, according to computationalism, yield a semantic content for the program also occur in and yield the same content in the operator. Within the new CRA as Searle constructs it, there is simply no way for the program to understand Chinese without the operator possessing the same understanding, if computationalism is true.
2.3. FODOR: THE DEVIANT CAUSAL CHAIN OBJECTION
Another way to try to evade the CRA is to accept that there is no understanding of Chinese by any entity in the CRE situation, but to argue that this is not a counterexample to computationalism because no entity in the CRE situation satisfies the computationalist criterion for Chinese understanding. Most objections of this sort hold that, for one reason or another, the micro-functioning of the CRE's program inadequately reflects the functioning of a Chinese speaker, so the right program to yield Chinese understanding has not been implemented.
The CRE is designed to avoid this sort of objection. The program, per hypothesis, mimics the program of a Chinese speaker in all significant details. If there is
a syntactically definable program for Chinese understanding, as computationalism implies there must be, then it is precisely matched by the rules in the manual in the Chinese room and, consequently, by the mental processes of the operator who internalizes the manual in the new CRE. So, it might seem that there can be no successful objection based on a claimed lack of correspondence between the Chinese speaker's program and the program implemented by the operator in the CRE.
However, there is one feature of the CRE implementation that is not analogous to the typical Chinese speaker's program, namely, the deliberate conscious implementation of the steps of the program by an operator. Jerry Fodor (1991a, b) puts forward a distinctive version of the micro-functioning objection that focuses on the role of the operator. Fodor accepts that neither the operator nor the overall system understands Chinese: "I do think that it is obvious that Searle's setup doesn't understand Chinese" (1991b, p. 525). He also accepts that the manual's rules exactly mimic the program of a Chinese speaker. But he argues that the program alone would normally understand Chinese by itself, and that it is only the intrusion of the operator into the process that causes the program and the system not to understand Chinese. The reason, he says, is that introduction of the operator renders the implemented program non-equivalent to the original program of the Chinese speaker on whom it was modeled. Fodor thus stands the systems objection on its head; rather than arguing that the operator's interaction with the program yields Chinese understanding, he argues that the introduction of the operator undermines understanding that would otherwise exist, by rendering otherwise equivalent programs non-equivalent.
Recall that in constructing the CRA, Searle assumes for the sake of reducing computationalism to absurdity that the operator-implemented syntactic manipulation is equivalent, as a program, to the syntactic program that (per computationalist hypothesis) constitutes Chinese understanding. Thus, Searle assumes that introducing the operator preserves Turing-machine equivalence. Block (see above) never challenges this assumption, and that makes it impossible for him to distinguish the operator's and program's contents; if the programs are equivalent, computationalism implies they must constitute the same contents.
Fodor challenges the assumption of program equivalence by arguing that equivalence depends not only on the program's formal steps but also on how the transitions between the program's steps are implemented. If such transitions are not direct and involve further mediating states (e.g., conscious, deliberate actions), then, he argues, those mediating states are in effect part of the program, and the program is not equivalent to programs lacking such mediating steps.
Fodor's (1991a) argument against program equivalence between operator-implemented and direct-causation programs relies heavily on the example of perception:

It is, for example, extremely plausible that 'a perceives b' can be true only where there is the right kind of causal connection between a and b.... For ex-
ample, suppose we interpolated a little man between a and b, whose function is to report to a on the presence of b. We would then have (inter alia) a sort of causal link from a to b, but we wouldn't have the sort of causal link that is required for a to perceive b. It would, of course, be a fallacy to argue from the fact that this causal linkage fails to reconstruct perception to the conclusion that no causal linkage would succeed. Searle's argument...is a fallacy of precisely this sort. (pp. 520-521)
That is, imagine that you are looking at a scene and that your experience is just like the experience you would have if you were perceiving the scene. However, in fact someone is blocking your sight and relaying signals to your brain that give you the experience you would have if you were seeing the scene. Fodor observes that the resultant experiences, even though they are the same as would result from a direct perception of the scene and indeed are caused by the scene (a necessary condition for perception under most analyses), would not be considered a genuine case of perception because they are caused by the scene not directly but only indirectly via the intervening cause of an operator's actions. The introduction of the deviant causal chain involving the mediation of an agent yields (for whatever reason) an intuition that genuine perception is not occurring. The very concept of perception has built into it the requirement that the causation of the experience by the scene be direct, not mediated by an operator.
Fodor argues that Searle has created a similar deviant causal chain by adding the program operator as a relayer of inputs and outputs in the CRE. Fodor claims that it is this deviant causal pathway, which is not isomorphic to a real Chinese speaker, in whom state transitions occur without such conscious mediation, that is responsible for the intuition that the operator lacks genuine understanding of Chinese, analogous to the intuition that relayed signals do not constitute genuine perception. Thus, although the intuition that the operator does not understand Chinese is correct, it provides no argument against computationalism because it is not due to the failure of syntax (plus causal relations to the outside world, Fodor adds) to yield semantics. Rather, the intuition is due to the fact that the operator's implementation introduces deviations from the native speaker's program: "All that Searle's example shows is that the kind of causal linkage he imagines, one that is, in effect, mediated by a man sitting in the head of a robot, is, unsurprisingly, not the right kind" (Fodor, 1991a, p. 520).
Fodor (1991b) goes so far as to claim that the intervention of the operator implies that the CRE's setup is not a Turing machine at all, because a transition to a new state of the system is not directly and proximally caused by the prior state: "When a machine table requires that a token of state-type Y succeeds a token of the state-type X, nothing counts as an instantiation of the table unless its tokening of X is the effective (immediate, proximal) cause of its tokening of Y" (1991b, p. 525). Fodor claims this requirement "would surely rule out" systems in which the mechanisms by which S1 tokens bring about S2 tokens involve "a little man who applies the rule 'if you see an S1, write down S2'" (1991b, p. 525). He concludes:
"Even though the program the guy in the room follows is the same program that a Chinese speaker's brain follows, Searle's setup does not instantiate the machine that the brain instantiates" (1991b, p. 525).
Searle's (1991b) response to Fodor contains two elements. First, regarding Fodor's suggestion that the determinants of meaning include not only the CRE's symbol manipulation but also causal relations between symbol occurrences and features of the outside world, Searle answers that the addition of such causal relations will not change the intuition that the operator does not understand Chinese: "No matter what caused the token, the agent still doesn't understand Chinese.... If the causal linkages are just matters of fact about the relations between the symbols and the outside world, they will never by themselves give any interpretation to the symbols; they will carry by themselves no intentional content" (1991b, p. 523).
On this point, Searle is surely correct. Including causal relations to the external world in the operator's manual (i.e., determining syntactic transitions not only by past syntactic inputs but also by what caused the inputs) would not affect the experiment's outcome because, whatever caused the inputs, the processing of the symbols could still proceed without any understanding of the semantic content of the symbols. Just as there is intuitively a step from syntactic structure to semantic content, which is exploited by the CRE, so there is also intuitively a step from the cause of the occurrence of a syntactic structure to semantic content, and a modified CRE could exploit that intuitive gap.
Second, regarding Fodor's crucial claim that the operator's intervention leads to program non-equivalence and even non-Turing-machine status, Searle replies that it is absurd to think that inclusion of an operator who consciously implements a program in itself alters the program and that such a system is not a Turing machine: "To suppose that the idea of implementing a computer program, by definition, rules out the possibility of the conscious implementation of the steps of the program is, frankly, preposterous" (1991c, p. 525). Searle offers two examples to show that such intervention preserves Turing machine equivalence. First, he observes that either clerks or adding machines can be used to add figures, and both are surely instantiating the same addition program. Second, he imagines Martian creatures imported only because they can consciously implement certain computer programs at speeds much faster than computers, and argues that surely they would be considered to literally implement the relevant computer programs.
Searle's examples are persuasive counterexamples to Fodor's claim that Turing-machine instantiation necessarily excludes conscious implementation. However, the fact that Fodor's universal generalization regarding the non-equivalence of consciously implemented and direct-causation programs is false does not imply that he is wrong about the CRE. Even if introduction of conscious implementation sometimes does not violate Turing-machine equivalence, it is still possible that it sometimes does so. Nor is Fodor's generalization the only ground for his objection to the CRE. Rather, Fodor's analogy to perception, where it does seem that in-
troduction of conscious implementation violates program equivalence, is the most compelling aspect of Fodor's argument.
Searle's response does not address the perception analogy. Rather than refuting Fodor's universal generalization and then focusing on the case at hand, Searle instead asserts his own opposite universal generalization on the basis of his above examples. That is, he claims that inserting conscious implementation always preserves program equivalence and is just an instance of Fodor's requirement that one step be directly caused by another: "Even if we accept [Fodor's] requirement that there must be a (immediate proximal) causal connection between the tokenings, it does not violate that condition to suppose that the causal connection is brought about through the (immediate proximal) conscious agency of someone going through the steps" (1991c, p. 525). On the basis of this generalization, Searle concludes that the CRE operator's consciously implemented program is equivalent to the modeled Chinese speaker's program.
However, Searle's examples do not convincingly establish his general conclusion that conscious implementation always preserves program equivalence. The limitation of his reply lies in the fact that both of his examples involve relations between artifactual programs and their conscious implementations, whereas Fodor's example of perception involves conscious implementation of a naturally occurring biological process that may be considered to have an essential nature in standard cases that excludes conscious mediation at some points. The distinction between implementation of artifactually constructed versus naturally occurring programs could be crucial to intuitions regarding program equivalence because (as Searle himself points out in other contexts) artifacts like programs are subject to derived attributions based on what they were designed to do.
How do Searle's examples involve relations to artifacts? The first example relies on an artifact, the adding machine, that has been designed to carry out a program, addition, that is consciously implemented by clerks. The example shows that when a machine is designed to run a program that humans (actually or potentially) consciously implement, the same program is intuitively instantiated (i.e., the programs are intuitively Turing-machine equivalent) based on the artifact's designed function of reproducing the relevant steps of the consciously implemented program. Similarly, the conscious-Martian-calculator example involves the Martians' conscious implementation of human computer programs, which are in turn artifacts designed to substitute for conscious human implementation. The Martians are intuitively understood to be implementing the same program as the computers because that is the Martians' function in being brought to Earth (in this regard they are living artifacts, functionally speaking), and because they (literally) and the computers (by functionally derived attribution) are both understood to have the function of implementing the same program that a human (whether actually or, due to human limitations, only potentially) would consciously implement.10
Fodor's prime example, however, involves no artifacts but rather concerns conscious implementation that intervenes in the naturally occurring perceptual pro-
cess. Fodor is surely correct that an operator's implementation of perception yields, in some intuitive sense, non-genuine perception, even if the process is otherwise identical to normal perception. Searle's adding-machine and Martian-calculator examples refute Fodor's claim that conscious implementation never preserves program equivalence, but Fodor's perception example equally refutes Searle's claim that conscious implementation always preserves program equivalence. The artifact/natural distinction could be critical here in explaining attributions of equivalence. Like perception, processes of human thought, understanding, and so on are also instances of naturally occurring systems in which the essence of the program (assuming such processes consist of programs) could at some points include lack of conscious implementation and could require direct, unmediated causal relations between steps, as Fodor suggests. Searle fails to address this possibility and thus fails to refute Fodor's objection. So, it remains an open question whether the conscious implementation of the Chinese understanding program is inconsistent with genuine Chinese understanding.
How can we assess Fodor's claim that the deviant causal chain introduced by the operator's intervention in the CRE, and not the syntactic nature of the operator's program, is the source of the intuition that there is no Chinese understanding? The only (admittedly imperfect) answer seems to be to examine each possible source of such an intuition within the deviant causal chain. (Note that the issue here concerns conscious implementation of the program, not conscious awareness of the program's steps. A hyper-aware linguistics processor who is consciously aware of every step in linguistic processing can still understand Chinese.)
Precisely which features of the deviant causal chain due to the operator's implementation might undermine the intuition that there is Chinese understanding? A strict analogy to the perception example would suggest the following: The fact that the operator intervenes between the sending of the verbal input from an interlocutor and the initiation of the processing of the input by the program yields non-equivalence. However, this sort of intervention has nothing whatever to do with whether the subject is judged to understand the language. It is the subject's understanding of the arriving sentences, not the causal relation to the emitter of the arriving sentences, that is at issue. Unlike the perception case, there is nothing in the concept of language understanding that changes an understander into a non-understander if, rather than the program directly receiving inputs and directly emitting outputs, an operator mediates between the arrival of verbal inputs and internal processing, or between the results of the processing and outputs. So, if taken literally, the analogy Fodor attempts to forge between language understanding and perception fails. The concept of perception (for whatever reason) is indeed partly about the direct nature of the causal relation between the perceiver and the world, but language understanding is not about the nature of such causal relations. Thus, the intervention of an operator between the world and internal processing in the CRE cannot explain the resulting no-understanding intuition.
Nor can the source of the no-understanding intuition be the sheer occurrence of some conscious implementation in moving from one step of the program to another in the internal language understanding process. Such conscious implementation is not in itself inconsistent with linguistic understanding. Although some steps in linguistic processing, such as immediate understanding of the meanings of expressions uttered in one's native language, are typically involuntary for fluent speakers, the recipient of linguistic input sometimes has to consciously implement the deciphering process (e.g., in learning to understand a second language) and certainly may have to voluntarily formulate the output. In these respects, linguistic performance is different from perception, which is inherently involuntary once sense organs are in a receptive position, and is one-way in that there is no perceptual output.
Nor can the source of the no-understanding intuition be the intervention of an external agent (as in the original, robotic CRE) who does not herself possess the program (which is in the manual). Such intervention by an external agent is salient in Fodor's perception example. However, Searle's new CRE, described above, internalizes the entire program within the operator, so that the operator is directly receiving verbal input, internally implementing all program steps involved in understanding the input, and directly responding herself. Thus, the new CRE eliminates any causal-chain deviance due to an external implementer.
The only remaining explanation of why the deviant causal chain due to conscious implementation would yield the no-understanding intuition lies in precisely which program steps are implemented. Although linguistic responses can involve conscious implementation, some steps appear to be inherently automatic and involuntary. In contrast, the CRE involves such implementation of every step in the program. According to this diagnosis, introducing the operator creates a situation in which certain steps in the internal understanding process that are normally inherently automatic become voluntarily implemented, thus introducing steps that fail to be equivalent to the hypothesized essential nature of the natural program.
This possibility can be addressed by amending the new CRE to include not just internalization but habituation and automation. Imagine that the operator, with the program (syntactically identical to the native speaker's) internalized, overlearns and thus automatically and unreflectively implements the syntactic steps that are implemented automatically by native speakers, with one step directly causing the next without conscious or deliberate intervention. This habituated CRE contains none of the above elements that might allow the deviant causal chain pointed to by Fodor to be the source of our intuition that there is no understanding. Yet, the intuition remains that the operator does not understand a word of Chinese and that (to use Ned Block's example) when the operator seems to be asking for the salt in Chinese, she is really thinking in English about what noises and gestures the program dictates she should produce next.
The intuition that the CRE's operator lacks Chinese understanding is thus independent of all plausible potential sources in the deviant causal chain due to operator implementation. The perception example turns out to be misleading be-
cause the concept of perception involves assumptions about a direct causal link between perceiver and environment that are not present in the concept of language understanding. Consequently, the proposed deviant causal chain cannot be held responsible for the intuition that the CRE's operator does not understand Chinese. Searle's alternative account, that the fact that the operator is implementing a sheerly syntactic program yields the CRE's negative intuition, remains undefeated and the most plausible account.
2.4. DOES THE OPERATOR UNCONSCIOUSLY UNDERSTAND CHINESE?
For those defenders of strong AI who accept that the CRE operator does not understand Chinese in the standard way, a remaining gambit is to suggest that although she does not consciously understand Chinese (because her conscious contents are about syntactic strings), she does unconsciously understand Chinese and has unconscious Chinese semantic contents. As in cases of people who demonstrate unconscious knowledge of languages they are unaware they understand, it is claimed that although the operator cannot consciously access her Chinese understanding, she nonetheless possesses such understanding unconsciously. Thus, for example, Hauser (1997) argues as follows:
Even supposing one could respond passably in Chinese by the envisaged method without coming to have any shred of consciousness of the meanings of Chinese symbols, it still does not follow that one fails, thereby, to understand. Perhaps one understands unconsciously. In the usual case, when someone doesn't understand a word of Chinese, this is apparent both from the first-person point of view of the agent and the third-person perspective of the querents. The envisaged scenario is designedly abnormal in just this regard: third-person and first-person evidence of understanding drastically diverge. To credit one's introspective sense of not understanding in the face of overwhelming evidence to the contrary tenders overriding epistemic privileges to first-person reports. This makes the crucial inference from seeming to oneself not to understand to really not understanding objectionably theory dependent. Functionalism does not so privilege the first-person.... Here the troublesome result for Functionalism...only follows if something like Searle's Cartesian identification of thought with private experiencing...is already (question-beggingly) assumed. Conflicting intuitions about the Chinese room and like scenarios confirm this. Privileging the first person fatally biases the thought experiment. (Hauser, 1997, pp. 214-215)
First, there is no question-begging privileging of the first-person perspective in the CRE. Rather, the thought experiment is an attempt to demonstrate that third-person competence is not sufficient for content attribution and that the first-person perspective is relevant to such attributions. The example is not theory laden, but rather provides a test of various theories.
Note that the unconscious-content account is not the same as Block's internalized systems objection that the operator and program have different contents, and thus is not necessarily subject to the same objections. According to the unconscious-content account, the operator herself, not the program considered independently of the operator, unconsciously possesses the Chinese meanings in virtue of her implementation of the program.
To assess the objection that the operator unconsciously understands Chinese, one has to have some notion of when it can be said that an agent has specific unconscious contents. Searle's (1992) connection principle is relevant here. (Indeed, a potentially important and non-obvious link between Searle's Chinese room and connection principle arguments is suggested.) Searle argues (very roughly) that one cannot be said to possess a genuine unconscious content unless at least in principle the content could become conscious. According to this account, the operator cannot be said to unconsciously understand Chinese if there is nothing about the unconscious contents in the operator's brain that would potentially allow them to come to consciousness as genuine (not merely syntactic surrogates of) semantic contents.
Whether or not Searle has gotten the account of unconscious mentation right, he is certainly correct that something more than third-person dispositions to act as if one has a content is required for attribution of unconscious content. The standard refutations of logical-behaviorist and Turing-test accounts of content show as much. Moreover, the operator in the Chinese room does not possess either of the two features that would typically support such attribution of genuine unconscious Chinese semantic contents. First, the primary method of verifying unconscious content, namely, by first-person report when the contents come into consciousness, would yield the conclusion that the contents are syntactic descriptions, not semantic understandings. Second, there is no need to postulate unconscious understanding for explanatory purposes; in any instance of a conscious practical reasoning sequence (e.g., the operator's belief and desire reasons, "The manual says I should make sound S" and "I want to do what the manual says I should do," lead to the action of uttering sound S), the attribution of unconscious semantic contents is explanatorily superfluous; for example, postulating that the operator unconsciously understands that S means "pass the salt" in Chinese is unnecessary to explain the utterance of S because the utterance is fully explained by the syntactic-based reasons that led to the action. I conclude that attributing unconscious Chinese understanding to the operator cannot be coherently defended.
Defense of strong AI based on the unconscious-content reply may seem more attractive than it is because of a common failure to distinguish between possessing a content unconsciously and not possessing the content at all. Consider, for example, Hauser's (1997) illustration:

During the Second World War, Wrens (Women Royal Engineers) blindly deciphered German naval communications following programs of Turing's devising until machines (called bombes) replaced the Wrens. Like Searle in the
room the Wrens did their appointed tasks without knowing what any of it was for but rather than conclude (with Searle) that neither Wrens nor bombes were really deciphering, Turing conjectured both were doing so and, in so doing, doing something intellectually unawares (Hodges, 1983, p. 211). (Note 22, pp. 222-223)
There is clearly a derived sense in which the Wrens were deciphering German, namely, Turing used them to implement his program, the function of which was to decipher German. There is also a sense in which they were doing this unawares, namely, they had no idea what service they performed in following Turing's program. But the critical question for strong AI and the CRE is whether the Wrens literally understood the German they deciphered or the English meanings of the syntactic strings that emerged from Turing's program. The answer (taking the example's description at face value) is that they had no idea of these meanings, or even that such meanings existed. The sense in which they were unaware of that content is not the sense in which one is unaware of content one possesses unconsciously; it is the sense in which one is just plain ignorant and does not possess the content at all, consciously or unconsciously (e.g., the sense in which my three-year-old son Zachy is unaware that when he moves his hand he is gravitationally influencing Jupiter). The Wrens, it appears, had no idea, conscious or unconscious, of the meanings they were manipulating or even that they were manipulating meanings. Intentional descriptions such as "deciphering" are thus applied to the Wrens only in a non-literal, derived sense, based on the function their actions performed for Turing, and not because semantic contents were possessed unconsciously.
Finally, note that a basic challenge to the unconscious-content reply is to explain what grounds there are for attributing one content rather than another to the unconscious. This portends an issue, indeterminacy of meaning, that is central to the analysis below.
3. Why the Chinese Room Argument is Unsound
3.1. THE ESSENTIALIST OBJECTION TO THE CHINESE ROOM EXPERIMENT
The CRE yields the intuition that the operator does not understand Chinese, from which it is concluded that the operator does not in fact understand Chinese. The CRA uses this result to argue that computationalism cannot be true. This argument is a powerful one for those who share the critical no-understanding intuition about the CRE and take it at face value. This response is widely shared even among Searle's opponents. The objections considered earlier all started from the premise that the operator in the CRE does not understand Chinese, at least consciously.
However, many others in the AI community either do not share the intuition or do not take it at face value. To them, it seems that, irrespective of pre-theoretical intuitions, the operator literally and consciously does understand Chinese in virtue of her following the syntactic program. The dispute thus comes down to a matter of conflicting intuitions, or to a difference over how seriously to take such intuitions. Such conflicts of intuition cannot be resolved unless deeper principles can be cited as to why one intuition or another is or is not good evidence for deciding the issue at stake.
In this section, I am going to develop a new kind of objection to the CRA, which I will dub the essentialist objection. The essentialist objection provides a theoretical rationale for concluding that the common pre-theoretical intuition that the Chinese room operator does not understand Chinese is not an appropriate reason for concluding that she does not in fact understand Chinese. I do not deny that there is such a widely shared intuition. Rather, I argue that there are good reasons why such an intuition cannot be taken at face value and thus that the intuition does not support Searle's broader argument. The essentialist objection is different from the three objections considered earlier because it attacks the premise, on which those objections are based, that the CRE-generated intuition shows that the operator does not consciously understand Chinese. It is also different from the usual rejections of that intuition in allowing that the intuition is broadly shared but providing a theoretical rationale for nonetheless rejecting the intuition as determinative of whether the operator in fact understands Chinese.
Computationalism was never claimed to entirely conform to our pre-theoretical intuitions about intentional content in all possible cases. Nor was it claimed to be a conceptual analysis of what we intuitively mean by "meaning" or "content." Rather, computationalism is best considered a theoretical claim about the essence of content, that is, about what as a matter of scientific fact turns out to constitute content. The claim is that, in standard cases of human thought, to have a certain intentional content is in fact to be in a certain kind of state produced by the running of a certain syntactically defined program. As Block (1998) puts it: "The symbol manipulation view of the mind is not a proposal about our everyday conception.... We find the symbol manipulation theory of the mind plausible as an empirical theory" (p. 47). According to this construal, strong AI is an empirical claim about what constitutes the essence of meaning, in exactly the way that "water is H2O" is an empirical claim about what constitutes the essence of water.
A theory about the essence of the things referred to by a concept often reveals how to extend the concept to new and surprising instances, with consequent realignments of intuitions. In such extensions of concepts to novel cases, the presence of the identified essential property overrides previous intuitions based on superficial properties. This sort of counter-intuitive recategorization is one of the distinctive consequences and scientific strengths of essentialist theorizing. Thus, for example, St. Elmo's fire is not fire, while rust is a slow form of fire; lightning is electricity; the sun is a star; whales are not fish; there are non-green forms of jade; etc. Consequently, proposed counterexamples to essentialist proposals that rely heavily on pre-theoretical intuitions about specific non-standard examples do not carry much weight, because it is not clear beforehand where such examples will fall after the essentialist criterion is applied.
Imagine, for example, rejecting the claim that ice is the same substance as water on the grounds that our pre-theoretical intuitions are clear that nothing solid could be water. Many coherent scientific theories have been rejected on such spurious grounds. For example, the fact that the infant's pleasure in sucking at the breast is pre-theoretically a paradigm case of non-sexual pleasure was cited by many critics as a sufficient refutation of Freud's claim that infantile oral pleasures are sexual. However, Freud's claim was that infantile sucking and standard sexual activities share the same underlying sexual motivational energy source and so, despite pre-theoretical intuitions to the contrary, are essentially the same motivationally and fall under the category "sexual." Whether Freud was right or wrong, his claim could not be refuted simply by consulting powerful pre-theoretical intuitions that the infant's sucking pleasure is non-sexual.
Computationalism holds that the essence of standard cases of human intentional content is the running of certain formal programs, and if that is so, then anything that shares that essence also has intentional content. Searle's CRE presents a non-standard human instance that possesses that essence but violates our pre-theoretical intuitions regarding possession of content. Searle takes our intuitions to be determinative of whether the CRE's operator understands Chinese. The essentialist reply is that Searle's reliance on such intuitions in a very non-standard case is not a persuasive way of refuting computationalism's essentialist claim. The acceptability of that claim depends on whether computationalism can successfully explain standard, prototypical cases of content, as in typical human thought and understanding. If the proposal works there (and the burden of proof is on the computationalist to show that it does), then content can be justifiably attributed to non-standard instances sharing the identified essence (such as the operator in the CRE), whatever the pre-theoretical intuitions about such non-standard instances.
3.2. UNSOUNDNESS OF THE CHINESE ROOM ARGUMENT
The essentialist reply points to a central problem with the CRA: the argument as presented is only as deep as the intuition about the CRE example itself. There is no deeper non-question-begging argument to which the example is used to point that would justify accepting this particular example as a sufficient arbiter of the theory of the essence of intentionality. It is just such strongly intuitive stand-alone counterexamples that are most likely to offer misleading intuitions about essences (e.g., ice is not water; whales are fish; white stones are not jade).
Searle (1997) asserts to the contrary that the CRE provides the basis for a "simple and decisive argument" (p. 11) against computationalism, so it is important to assess whether the essentialist objection to the CRE survives Searle's formulation of the CRA, which goes as follows:
1. Programs are entirely syntactical.
2. Minds have a semantics.
3. Syntax is not the same as, nor by itself sufficient for,
semantics.
Therefore, programs are not minds. Q.E.D. (pp. 11–12)

Premises 1 and 2 are obviously true and the argument appears valid, so the argument's soundness turns entirely on premise 3.11 Clearly, premise 3, being pretty much a straightforward denial of computationalism, begs the question unless some justification is provided. Searle states: "In order to refute the argument you would have to show that one of those premises is false, and that is not a likely prospect" (p. 11). That is an overly demanding requirement, given that Searle claims to demonstrate that computationalism is false. To refute Searle's argument, that is, to show that Searle does not succeed in refuting computationalism, one need only show that Searle does not successfully and without begging any questions establish premise 3.
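For concreteness, here is one natural first-order regimentation of the argument (a reconstruction, not Searle's own formalization, and it reads premise 3 in its strongest form, as the claim that purely syntactic properties never suffice for semantics):

(1′) ∀x (Program(x) → PurelySyntactic(x))
(2′) ∀x (Mind(x) → HasSemantics(x))
(3′) ∀x (PurelySyntactic(x) → ¬HasSemantics(x))
∴ ∀x (Program(x) → ¬Mind(x))

On this reading the argument is formally valid: by (1′) and (3′) no program has semantics, and by (2′) whatever lacks semantics is not a mind. But (3′) is precisely the negation of computationalism's constitutive claim, which is why everything turns on what evidence can be given for it.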
The only evidence offered for premise 3 is the CRE and the associated intuition that the operator does not understand Chinese, as Searle's own explanation indicates: "Step 3 states the general principle that the Chinese Room thought experiment illustrates: merely manipulating formal symbols is not in and of itself constitutive of having semantic contents, nor is it sufficient by itself to guarantee the presence of semantic contents" (p. 12). The CRE is supposed to show that, in at least one case, syntax does not constitute semantics, based on the intuition that the CRE's operator does not understand Chinese. However, the no-semantics-from-syntax intuition is precisely what strong AI proponents are challenging with their computationalist theory of content, so supporting premise 3 by relying on the pre-theoretical intuition that there is no understanding in the non-standard CRE begs the question.
So, strong AI proponents, even those who are pulled in the direction of Searle's intuitions about the Chinese room operator, can justifiably complain that Searle begs the question of computationalism. He does so by choosing for his counterexample a non-standard case where, as it happens, computationalism dictates that traditional intuitions are incorrect, and he does not offer any independent non-question-begging reason for supporting the traditional intuition over the proposed essentialist theory. Think here again of those who vigorously attacked Freud's theory by focusing on the strong intuition that there is nothing sexual about babies sucking at the breast, thus begging the question of whether the theory, which aspired to overturn precisely this sort of pre-theoretical intuition, was correct. Or, imagine someone denying that white jadeite is a form of jade because it is not green. Strong AI proponents may justifiably object that exploiting such pre-theoretical intuitions is an unsound way to critique a theory that by its nature challenges the pre-theoretical understanding of non-standard examples.
3.3. WHAT KIND OF THEORY IS COMPUTATIONALISM?: FAILURE OF THE ONTOLOGICAL REDUCTIONIST REPLY TO THE ESSENTIALIST OBJECTION
The above critique of the CRA depends on interpreting computationalism as an essentialist theory about content. A defender of the CRA might object that computationalism is not this kind of theory at all, but rather a reductionist theory that has to precisely track intuitions. Such an objection might, for example, go as follows:
You make it out as if strong AI is just another essentialist scientific claim. Well, it isn't really. It's a reductionist claim. It is that mental states are nothing but computational states. The problem is that all forms of reductionism have to track the original, intuitive phenomenon on which the reduction was supposed to be based. But they can't do that in the case of the Chinese room. So I don't agree that strong AI was intended to be like the oxidization theory of fire or the atomic theory of matter. It's reductionist in a way that is more like traditional functionalism or behaviorism, where the point is to show that common notions can be systematically reduced to the proposed notions.
The objection suggests that I have mistaken strong AI for an essentialist theory when it is really a reductionist theory. It should first be noted that the term "reduction" itself is subject to the same ambiguity. There is an obvious sense in which an essentialist theory is a reductionist theory, namely, a theoretical reduction. For example, one reduces heat to molecular motion in virtue of the theory that the essence of heat is molecular motion, and one reduces elements to atomic structure in the atomic theory of matter. In such theories, to use the language of the objection, one kind of thing is claimed to be "nothing but" another kind of thing. For example, fire is claimed to be nothing but oxidation and heat is claimed to be nothing but molecular motion in the respective theoretical reductions. Reduction in this sense is nothing but essentialist theorizing, and surely need not precisely track pre-theoretical intuitions, as the earlier examples show.
The objection, however, appears to be that there is another form of reduction, which we might label "ontological reduction." (Admittedly, this phrase has the same ambiguity, but I could not think of any better label.) This form of reduction is not essentialist and does not aim to provide a scientific theory of the nature of the phenomenon in question. Nor is it an analysis of the meaning of our ordinary concept (which computationalism clearly is not). Rather, it aims to show that we can reduce our overall ontology by exhaustively translating statements about one type of thing into statements about another type of thing, and that this elimination of a basic ontological category can be achieved while retaining our crucial intuitive beliefs and without substantial loss of expressive power. Such an account asserts that we can consider things of type A to be nothing but things of type B without loss for ontological purposes; it does not assert that As are literally nothing but Bs (i.e., that As are literally constituted by Bs), because that would involve either a conceptual analytic or theoretical/essentialist claim, neither of which necessarily applies.
For example, the claim that numbers are nothing but sets is not an essentialist theory of what numbers have been scientifically discovered to be, nor is it a conceptual analysis of our ordinary concept of number. Rather, it is a claim that in principle we don't need to postulate numbers as an irreducible ontological category in addition to sets because the language of set theory is sufficient to capture all the distinctions of interest about numbers, so in principle the expressive power of our number ontology can be gotten without any additional ontological assumptions beyond those already implicit in set theory. Similarly, one might argue (I don't agree, but leave that aside) that logical behaviorism and functionalism are claims not about the essence of mental states or the concept of a mental state but rather about the reducibility of talk about mental states to talk about behavioral dispositions or certain causal relations, and thus are claims that mental states can be considered nothing but behavioral dispositions or certain causal relations for ontological purposes.
Suppose for the sake of argument that computationalism is indeed an attempt at ontological reduction in the above sense. Does that imply that computationalism must track pre-theoretical intuitions and must not make anti-intuitive claims about content? I don't believe so.
It is just not true that ontological reductions must exactly track pre-theoretical intuitions and must not yield odd new assertions. For example, the prototypical ontological reduction, the reduction of numbers to sets, implies that the number 2 is a set, certainly an anti-intuitive claim that does not track our number statements. And, depending on which sets one identifies with the natural numbers, there are all sorts of bizarre things one might say that do not track pre-reduction intuitions, such as "2 is a member of 3" or "the null set is a member of 1." Moreover, novel claims about what things fall under the target domain are not excluded; for example, depending on one's account, one might end up saying counter-intuitive things like "the singleton set containing the null set is a number." The existence of some such counter-intuitive results is not a serious objection to the success of the reduction of numbers to sets; neither would counter-intuitive results in the CRE be an objection to computationalism as an ontological reduction.
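To make the coding-dependence concrete (this is a standard observation about the two classical codings of the naturals, added here for illustration), compare:

von Neumann: 0 = ∅, 1 = {∅}, 2 = {∅, {∅}}, 3 = {∅, {∅}, {∅, {∅}}}, ...
Zermelo: 0 = ∅, 1 = {∅}, 2 = {{∅}}, 3 = {{{∅}}}, ...

On the von Neumann coding, each number is the set of all smaller numbers, so "2 is a member of 3" and "the null set is a member of 1" both come out true, and the singleton of the null set simply is the number 1. On the Zermelo coding, each successor is the singleton of its predecessor, so "0 is a member of 2" comes out false even though it is true on von Neumann's coding. Which bizarre-sounding membership claims one ends up asserting is fixed by the choice of coding, not by any pre-theoretical fact about numbers, which is exactly why such claims do not discredit the reduction.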
There are of course limits to the kinds of novel assertions that are acceptable, because the point of an ontological reduction is to capture the target domain as accurately as possible. But the same applies to essentialist theories; the essence must encompass at least the concept's prototypical instances that are the base on which the concept's definition is erected via the postulation of a common essence. So, essentialist and reductionist theories are similar in this respect. Both must overall successfully track the target domain, but neither must precisely track the entire set of intuitions about the target domain.
Thus, an analog of the essentialist reply to the CRA could be constructed even if computationalism were interpreted as an ontological reduction. The strong AI proponent could argue that Searle's objection that intuitions about the CRE are inconsistent with computationalism is like objecting that the reduction of number theory to set theory leads to some pre-theoretically absurd results. Once we successfully reduce the vast majority of important assertions about numbers to assertions about sets within a certain theory, any pre-theoretical absurdity that results (e.g., that the null set is a member of 1) is not an objection to the identification of numbers with sets for ontological reductive purposes, and the analogous point applies to computationalism. I conclude that there is nothing in the distinction between essentialist theories and ontological reductions that defeats the essentialist reply to the CRA.
In any event, computationalism is best considered an essentialist theory rather than an ontological reduction. Computationalism's signature claim that certain computer states are beliefs in the same literal sense that people's intentional states are beliefs is exactly the kind of claim that is characteristic of an essentialist theory but not of an ontological reduction. When such an assertion that does not track pre-theoretical intuitions is generated by an ontological reduction (as in "the null set is a member of 1"), it is clear that the new assertion is not to be taken as a literal discovery but rather as a bizarre and unfortunate side effect of the reduction. This is not the way computationalists view the conclusion that computers literally possess thoughts. They think that this is a discovery generated by an insight into the nature of thought, namely, that in prototypical human cases the essence of thought is syntactically defined programming, which allows them to generalize the category along essentialist lines to computer cases. It is thus more charitable to interpret computationalism as an essentialist theory.
4. The Chinese Room Indeterminacy Argument
I believe that the essentialist objection I have offered above is a valid objection to the CRA as Searle states it. I am now going to try to pull a rabbit (or perhaps I should say a gavagai) out of a hat and show that the CRA can be reinterpreted in such a way as to save it from the essentialist reply. Specifically, I will argue that the CRA continues to pose a potential challenge to computationalism and strong AI if it is construed as an indeterminacy argument, which I'll dub the Chinese room indeterminacy argument (CRIA).
The strong AI proponent thinks that the operator's intentional states are determined simply by the formal program that she follows. How can one argue that this is not true, without simply begging the question (as in the CRA) and insisting that intuitively there is no genuine intentionality in virtue of formal programming alone? The only non-question-begging test I know of for whether a property constitutes genuine intentional content is the indeterminacy test. If the Chinese-understanding program leaves claimed intentional contents indeterminate in a way that genuine intentional contents are not indeterminate, then we can say with confidence that the program does not constitute intentional content.
The essentialist objection shows that non-standard counterexamples such as the Chinese room experiment, no matter what their intuitive force, are not conclusive against computationalism. To be effective, such counterexamples must be targeted at prototypical cases of human thought and must show that in those prototypical cases computationalism cannot offer an adequate account of the essence of thought. This means that if the CRA is to be effective, it must be reinterpreted as an argument about normal speakers of Chinese. The CRIA is exactly this kind of argument. That is, it is an argument that computationalism is unable to account for how anyone can ever understand Chinese, even in standard cases of human thought that intuitively are clear instances of genuine Chinese understanding. The argument attempts to show that in such standard cases, if computationalism is correct, then alternative incompatible interpretations are possible that are consistent with all the syntactic evidence; thus content is indeterminate to a degree that precludes making basic everyday distinctions among meanings.
Such an argument against computationalism obviously must be based on the assumption that the distinctions we commonly make among meanings reflect real distinctions and that there is in fact some interesting degree of determinacy of content in human thought processes (e.g., that there is a real distinction between thinking about rabbits and thinking about rabbit stages or undetached rabbit parts, however difficult it is to state the grounds for the distinction). This determinacy assumption has been accepted not only by Searle (1987) but also by many of his philosophical opponents more sympathetic to the aspirations of cognitive science.12 Admittedly, the CRIA has force only for those who believe that there is some truth about the content of human thoughts with roughly the fineness of discrimination common in ordinary discourse. Consequently, if one steadfastly denies the determinacy-of-content premise, then one escapes the CRIA, but at the cost of rendering one's account of content implausible for most observers.
To my knowledge, Searle has never suggested that the CRA is an indeterminacy argument. Wilks (1982), in a reference to Wittgenstein, implicitly suggested an indeterminacy construal of the CRA, but Searle (1982) did not take the bait. Nonetheless, as Hauser (1997) observes, one might consider the following kind of statement to hint in this direction: "The point of the story is to remind us of a conceptual truth that we knew all along; namely, that there is a distinction between manipulating the syntactical elements of languages and actually understanding the language at a semantic level" (Searle, 1988, p. 214). As Hauser notes, the only plausible grounding for a conceptual claim that semantics is not just syntactic manipulation is some version of Quine's (1960) indeterminacy argument that semantic and intentional content remains indeterminate (i.e., open to multiple incompatible interpretations consistent with all the possible evidence) if the relevant evidence is limited to syntax alone.13
What, then, is the indeterminacy argument that can be derived from the CRA? To construct such an argument, consider a person who possesses the program that according to strong AI constitutes the ability to understand and speak Chinese. The program is syntactically defined, so that to think a certain semantic or intentional content is just to be in a certain syntactic state. However, there is an alternative interpretation under which the individual does not understand a word of Chinese. Rather, her thoughts and utterances can be interpreted as referring to the program's syntactic structures and transitions themselves. These two interpretations are mutually incompatible but, the CRIA shows, are consistent with all the facts about programming that computationalism allows to be used to establish content. The indeterminacy consists, then, of the fact that, consistent with all the syntactic and programming evidence, which strong AI claims exhausts the evidence relevant to fixing content, a person who appears fluent in a language may be meaningfully using the language or may be merely implementing a program in which states are identified syntactically and thus may not be imparting any meanings at all to her utterances. For each brain state with syntactic structure S that would be interpreted by strong AI as a thought with content T, the person could have T or could have the thought "syntactic structure S." For each intention-in-action that would be interpreted as the intention to utter X to express meaning m, the person could just have the intention to utter X to follow the program. These are distinct contents, yet computationalism does not explain how they can be distinct.
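This two-readings point can be made concrete with a toy sketch (my illustration; the rule table, function names, and placeholder shapes are invented and stand in for a full Chinese-conversation program). The sketch captures everything computationalism is entitled to appeal to, namely mappings over uninterpreted symbol shapes, and two mutually incompatible interpretation functions fit those mappings equally well:

# Toy stand-in for the Chinese-room program: a lookup table over
# uninterpreted symbol shapes (placeholder strings, not real Chinese).
RULE_BOOK = {
    "shape-17": "shape-42",  # e.g., a greeting and its canonical reply
    "shape-03": "shape-08",  # e.g., a request and its acknowledgment
}

def operator_step(input_shape: str) -> str:
    """Produce exactly the output shape the rule book dictates."""
    return RULE_BOOK.get(input_shape, "shape-00")

# Interpretation A: each state expresses the standard Chinese meaning.
def semantic_reading(shape: str) -> str:
    return f"a thought with the standard Chinese meaning of {shape!r}"

# Interpretation B: each state is about the program's own syntax.
def syntactic_reading(shape: str) -> str:
    return f"the thought 'the program says to produce {shape!r} next'"

# Every behavioral and syntactic fact is fixed by operator_step alone,
# so both readings fit all the evidence computationalism recognizes.
for s in RULE_BOOK:
    assert operator_step(s) == RULE_BOOK[s]
    print(semantic_reading(s), "| versus |", syntactic_reading(s))

Because the syntactic reading is generated uniformly for every shape, adding further symbols to disambiguate only produces more shapes that are themselves open to the same dual reading; this is the whole-language systematicity that the argument below exploits.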
Recall Block's earlier-cited comments about the Chinese Room operator: that when she seems to be asking for the salt in Chinese, what she is really doing is thinking in English about what noises and gestures the program dictates that she should produce next, and that when she seems to be conducting a learned discourse in Chinese, she is thinking about what noises the program tells her to make next given the noises she's heard and written on her mental scratch pad. Block here in effect notes the two possible interpretations revealed by the CRIA. The person's intentional content leading to his utterance could be "I want to express the meaning 'please pass the salt,' and I can do so by uttering the sentence 'please pass the salt'," or it could be "I want to follow the program, and I can do so by uttering the noise 'pass the salt'." There is no evidence in the program itself that could distinguish which of these two interpretations is correct. The resulting indeterminacy argument might go as follows:

(i) There are in fact determinate meanings of thoughts and intentions-in-action (at least at a certain level of fineness of discrimination); and thoughts about syntactic shapes are typically different (at the existing level of fineness of discrimination) from thoughts that possess the semantic contents typically expressed by those shapes.

(ii) All the syntactic facts underdetermine, and therefore leave indeterminate, the contents of thoughts and intentions-in-action; in particular, the syntactic structure S is ambiguous between a standard meaning M of S and the meaning "the program specifies to be in syntactic structure S." Similarly, an utterance U may possess its standard meaning and be caused by the intention to communicate that meaning, or it may mean nothing and be caused by the intention to utter the syntactic expression U as specified by the program.
(iii) Therefore, the content of thoughts and intentions-in-action cannot be constituted by syntactic facts.
This indeterminacy argument provides the needed support for Searle's crucial third premise, "Syntax is not the same as, nor by itself sufficient for, semantics," in his argument against computationalism. With the shift to the CRIA, Searle's argument becomes potentially sound, modulo the determinacy-of-content assumption.
Hauser (1997) dismisses the indeterminacy interpretation of the CRA as offering "warmed-over indeterminacy trivially applied." He comments: "Troubles about indeterminacy are ill brought out by the Chinese room example anyhow being all mixed up, therein, with dubious intuitions about consciousness and emotions about computers" (p. 216).
The truth is quite the opposite. The CRIA has nothing to do with intuitions about consciousness or emotions. Moreover, it presents a more effective indeterminacy challenge than has previously been presented for computationalist and related doctrines. Earlier arguments all have serious limitations that have made them less than fully persuasive. Even Quine was dismissive of such theoretical proofs of indeterminacy as the Löwenheim–Skolem theorem, complaining that they did not yield a constructive procedure for producing actual examples, so that it was hard to tell just how serious a problem the indeterminacy would be for everyday distinctions. His own gavagai-type example was meant to be more philosophically forceful and meaningful. But it, too, was never entirely convincing because of the local nature of the example, involving just one term or small groups of terms. These sorts of examples left many readers with the lurking doubt that there must be some way of disambiguating the meaning using the rich resources of standard languages, and that the examples could not be carried out for whole-language translation. Consequently, many observers remain less than fully convinced that indeterminacy is a problem. Indeed, many of those trying to naturalize semantics dismiss indeterminacy as unproven and unlikely (Wakefield, 2001).
This is where the CRIA makes a dramatic contribution. It offers the clearest available example of an indeterminacy that can be shown to persist in whole-language translation. This is because of the systematic way in which every sentence in Chinese, with its usual semantic content under the standard interpretation, is translated in the CRIA's alternative interpretation into a sentence about the syntax of the original sentence. Unlike Quine's examples, one has no doubt that there is no way to use further terms to disambiguate the two interpretations, for it is clear that any such additional terms would be equally subject to the indeterminacy. The CRIA offers perhaps the most systematic example of indeterminacy in the literature.
The CRIA poses the following serious and potentially insurmountable challenge to computationalism: What makes it the case that people who in fact understand Chinese do have genuine semantic understanding and that they are not, like the operator in the CRIA, merely manipulating syntax of which the meanings are unknown to them? Even if one claims on theoretical grounds, as some do in response to the original CRA, that the operator's manipulation of syntax does constitute an instance of Chinese understanding, one still has to be able to distinguish that from ordinary semantic understanding of Chinese or explain why they are not different; the syntactic and semantic interpretations of the operator's utterances and thoughts at least prima facie appear to involve quite different sets of contents. But, the CRIA concludes, computationalism cannot explain this distinction. Without such an explanation, computationalism remains an inadequate account of meaning, unless it takes the heroic route of accepting indeterminacy and renouncing ordinary semantic distinctions, in which case it is unclear that it is an account of meaning at all (it should not be forgotten that Quine's indeterminacy argument led him to eliminativism, the renunciation of the existence of meanings). In my view, resolving the dilemma posed by indeterminacy is the main challenge facing computationalism and strong AI in the wake of the CRA.
Hauser (1997) is apparently willing to bite the indeterminacy bullet. He argues that any indeterminacy that can be theoretically shown to infect computationalism is just a reflection of indeterminacy that equally can be theoretically shown to infect actual content as well:
In practice, there is no more doubt about the cherry and tree entries in the cherry farmer's spreadsheet referring to cherries and trees (rather than natural numbers, cats and mats, undetached tree parts or cherry stages, etc.) than there is about "cherry" and "tree" in the farmer's conversation; or, for that matter, the farmer's cogitation. Conversely, in theory there is no less doubt about the farmer's representations than about the spreadsheet's. Reference, whether computational, conversational, or cogitative, being equally scrutable in practice and vexed in theory, the conceptual truth Searle invokes impugns the aboutness of computation no more or less than the aboutness of cogitation and conversation. (pp. 215–216)
But, it is the fact that we do ordinarily understand and make such distinctions between me