Summary & Discussion Questions

 

Searle's "Chinese Room" Argument

 

I. Summary

Searle's article "Is the brain's mind a computer program?" was published in Scientific American in January 1990, along with a response by Paul and Patricia Churchland, entitled "Could a machine think?" The main argument in Searle's article, now known generally as the "Chinese-room argument," was first published in Behavioral and Brain Sciences in 1980 in an article entitled "Minds, Brains, and Programs." It was accompanied by 26 responses, written by philosophers, computer scientists, cognitive psychologists, and neurophysiologists. Searle's argument is intended to show that implementing a computational algorithm that is formally isomorphic to human thought processes cannot be sufficient to reproduce thought. Something more is required. If Searle is correct, then machine functionalism of the sort espoused by Putnam, as well as more sophisticated contemporary versions of functionalism, cannot be correct. It also means that the research program Searle calls "strong AI" (AI = artificial intelligence) is fundamentally misguided, since strong AI assumes that some form of functionalism is correct, i.e., that thinking is nothing more than symbolic manipulation according to formal rules.

 

Searle considers the following thought-experiment. Suppose that a person were given a set of purely formal rules for manipulating Chinese symbols. The person does not speak or understand written Chinese, and so he does not know what the symbols mean, though he can distinguish them by their differing shapes. The rules do not tell him what the symbols mean: they simply state that if a symbol of a certain shape comes into the room, then he should write down a symbol with a certain other shape on a piece of paper. The rules also state which groups of symbols can accompany one another, and in which order. The person sits in a room, and someone hands in a set of Chinese symbols. The person applies the rules, writes down a different set of Chinese symbols as specified by the rules on a sheet of paper, and hands the result to a person waiting outside the room. Unknown to the person in the room, the rules that he applies result in a grammatically correct conversation in Chinese. For example, if someone hands in a set of Chinese symbols that mean, "How do you feel today?" the symbols he writes down (as specified by the rules) mean, "Fine, thank you." In sum, the rules are a complete set of instructions that might be implemented on a computer designed to engage in grammatically correct conversations in Chinese. The person in the room, however, does not know this. He does not understand Chinese.
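
The procedure the person follows is purely formal: the rules pair incoming strings of symbols with outgoing strings, keyed on nothing but the symbols' shapes. Below is a minimal Python sketch of that idea; the handful of rule entries are invented stand-ins for what would in reality have to be an enormous rule book covering arbitrary conversation.

```python
# A toy "rule book": incoming symbol strings are paired with outgoing ones.
# These entries are invented stand-ins, not a real conversational rule set.
RULES = {
    "你今天感觉怎么样？": "很好，谢谢。",  # "How do you feel today?" -> "Fine, thank you."
    "你会说中文吗？": "当然会。",          # "Do you speak Chinese?"  -> "Of course."
}

def chinese_room(incoming: str) -> str:
    """Apply the rules to an incoming string of symbols. The lookup is keyed
    purely on the shapes of the symbols; nothing here inspects what they mean."""
    return RULES.get(incoming, "对不起，请再说一遍。")  # a default reply the rules happen to specify

print(chinese_room("你今天感觉怎么样？"))  # hands out the rule-specified reply: "很好，谢谢。"
```

Nothing in the lookup represents what the symbols mean, and that is exactly the point: the same table could be applied by someone who does not understand a word of Chinese.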

According to Searle, the person in the Chinese room is doing exactly what a computer would be doing if it used the same rules to engage in a grammatically correct conversation in Chinese. Thus, if manipulating Chinese symbols according to formal rules is not sufficient for the person to understand Chinese, it is not sufficient for a computer to understand Chinese, either. Both are engaging in "mindless" symbol manipulation. Searle summarizes his argument as follows (this is the formulation he gives in the Scientific American article):

Axiom 1. Computer programs are formal (syntactic).
Axiom 2. Human minds have mental contents (semantics).
Axiom 3. Syntax by itself is neither constitutive of nor sufficient for semantics.
Conclusion. Programs are neither constitutive of nor sufficient for minds.

 

To specify a language, one needs to specify at least a syntax and a semantics for that language. A syntax for a language lists and categorizes the words and other symbols (e.g., punctuation marks) of that language (a lexicon), and specifies rules that determine the set of permissible combinations of those words (a grammar). It does not say what the words and combinations of words mean. (For example, a syntax might specify that "blorg," "et," and "dring" are words in a certain language, that "blorg" is a name, "et" a predicate, and "dring" a general noun, and that "blorg et" is a permissible combination of these words but "dring blorg" is not.) A semantics would associate an interpretation with these symbols. (For example, a semantics for the language just described might specify that "blorg" refers to Clinton, "et" means "is male," and that "dring" refers to the class of dogs.) To know what a language means, you have to know not only its lexicon and grammar (syntax) but also its semantics. The notions of truth, reference, and meaning are all semantic notions.
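
To make the distinction concrete, here is a small Python sketch of the toy language just described; the function and variable names are ours, chosen only for illustration. The grammar check consults the lexical categories of the symbols alone, while the interpretation sits in a separate table that the check never touches.

```python
# Syntax: a lexicon of categorized symbols and a rule for permissible combinations.
LEXICON = {"blorg": "name", "et": "predicate", "dring": "general noun"}

def grammatical(sentence):
    """Purely syntactic test: a two-word sentence is permissible when a name
    is followed by a predicate. Meaning is never consulted."""
    if len(sentence) != 2:
        return False
    first, second = sentence
    return LEXICON.get(first) == "name" and LEXICON.get(second) == "predicate"

# Semantics: an interpretation pairing each symbol with what it refers to or means.
SEMANTICS = {"blorg": "Clinton", "et": "is male", "dring": "the class of dogs"}

print(grammatical(["blorg", "et"]))     # True  -- well-formed, whatever the words mean
print(grammatical(["dring", "blorg"]))  # False -- ruled out by category and shape alone
```

The grammatical test would give the same verdicts under any reassignment of SEMANTICS, which is the sense in which syntax does not determine semantics.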

It is important to recognize what Searle is not arguing for. First, he is not arguing that machines can't think. He is not a dualist. He asserts explicitly that human beings are simply thinking (biological) machines. Second, Searle is not arguing that thinking organisms necessarily have to be made out of biological materials. It may be possible to produce a thinking machine made of non-biological materials. It is only that, to do so, something more would be required than merely implementing a computer program that has the same relationship between input (perception) and output (behavior) as human cognition. Finally, Searle is not arguing that thinking does not involve symbol manipulation. Thinking may very well involve symbol manipulation, among other things, he believes. It is only that symbol manipulation cannot, by itself, constitute thinking, any more than simulating a functioning automobile on a computer can get you to Phoenix.

What is Searle's positive view? What more, besides implementing an algorithm that is formally isomorphic to the thinking that goes on in the human brain, is required to produce a thinking thing? According to Searle, whether something is thinking depends not only on what computational algorithm it is running (the software) but also on the nature of the thing that is running the algorithm (the hardware). The problem with functionalism, Searle argues, is that it abstracts away too much from the actual physical implementation of the computational processes involved in thinking. A program "running on" a human brain might constitute thinking, but the same program running on a computer made out of "beer cans strung together with wires and powered by windmills," or executed by the population of a large country (such as China), would not constitute thinking. As he puts it, some materials in which an algorithm might be implemented have the "causal power" to produce thinking, and others do not. He does not provide a criterion for deciding which things do and which do not have the "causal power" to produce thinking. That is a matter for future investigation. However, he does think that we have some clear examples of thinking materials (the human brain) and clear examples of non-thinking materials (beer cans strung together with wires and powered by windmills).

 

II. Questions for Discussion

 

  1. (The Systems Reply, Simple Version) Some people have claimed that though the person in the Chinese room does not understand Chinese, the system consisting of the person, the room, the rule book, and the pieces of paper does understand Chinese. Is this a plausible reply?
  2. (The Systems Reply, Robot Version) Some people have claimed that if Searle's Chinese room were contained within a robot that had a voice synthesizer, light and sound detectors and decoders, and if the robot were able to move about in the world and had the right relationships between input (perception of objects and circumstances) and output (behavior, including linguistic behavior, appropriate to the objects and circumstances), then it would understand Chinese. Is this plausible?
  3. Searle is correct that syntax does not determine semantics. However, we are thinking things, and apparently (though this is not beyond dispute) our brain operates by manipulating symbolic entities, or representations. What could possibly give those representations the meaning they have? What, for example, must be true about us if we are thinking of a horse (or a unicorn) when none is present?

