Chinese room argument
Chinese room argument, thought experiment by the American philosopher John Searle, first presented in his journal article “Minds, Brains, and Programs” (1980), designed to show that the central claim of what Searle called strong artificial intelligence (AI)—that human thought or intelligence can be realized artificially in machines that exactly mimic the computational processes presumably underlying human mental states—is false. According to Searle, strong AI conceives of human thought or intelligence as being functionally equivalent to the operation of a computer program, insofar as it consists of the manipulation of certain symbols by means of rules that refer only to the symbols’ formal or syntactic properties and not to their semantic properties (i.e., their meanings). As presented by Searle, the Chinese room argument demonstrates that such manipulation by itself does not afford genuine understanding and therefore cannot be equated with human thought or intelligence.
Searle’s thought experiment features himself as its subject. Thus, imagine that Searle, who in fact knows nothing of the Chinese language, is sitting alone in a room. In that room are several boxes containing cards on which Chinese characters of varying complexity are printed, as well as a manual that matches strings of Chinese characters with strings that constitute appropriate responses. On one side of the room is a slot through which speakers of Chinese may insert questions or other messages in Chinese, and on the other is a slot through which Searle may issue replies. In the thought experiment, Searle, using the manual, acts as a kind of computer program, transforming one string of symbols introduced as “input” into another string of symbols issued as “output.” As Searle the author points out, even if Searle the occupant of the room becomes so adept at processing the messages that his responses always make perfect sense to Chinese speakers, he still would not understand the meanings of the characters he is manipulating. Thus, contrary to strong AI, real understanding cannot be a matter of mere symbol manipulation. Like Searle the room occupant, computers simulate intelligence but do not exhibit it.
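The purely formal character of the manipulation Searle describes can be made concrete with a short program. The following Python sketch is a minimal illustration, not anything drawn from Searle’s article: the rule table and the symbol strings are invented placeholders standing in for the manual and for actual Chinese sentences. What it shows is that such a program consults only the identities of the symbols, never their meanings.

```python
# A minimal sketch of the room as a program: input symbol strings are
# mapped to output symbol strings by lookup alone. The entries below are
# invented placeholders, not actual Chinese-language data.

RULEBOOK = {
    # "If you receive this string of symbols, emit that string of symbols."
    "符串一": "答串甲",
    "符串二": "答串乙",
}

def room(message: str) -> str:
    """Return a reply by purely syntactic lookup. Nothing here inspects
    or represents what any symbol means, which is Searle's point."""
    return RULEBOOK.get(message, "答串丙")  # a default reply string

if __name__ == "__main__":
    print(room("符串一"))  # emits 答串甲 without any grasp of either string
```

However elaborate the rule table is made, the program’s operations remain defined over the symbols’ formal (syntactic) properties alone, which is exactly the feature of strong AI that the thought experiment targets.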
The Chinese room argument ostensibly undermines the validity of the so-called Turing test, based on the work of the English mathematician Alan Turing (1912–54), which proposes that, if a computer could answer questions posed by a remote human interrogator in such a way that the interrogator could not distinguish the computer’s answers from those of a human subject, then the computer could be said to be intelligent and to think. Because the room’s replies are, by hypothesis, indistinguishable from those of a native Chinese speaker, the room would pass such a test even though, on Searle’s view, no understanding takes place within it.
The Chinese room argument has generated an enormous critical literature. According to the “systems response,” Searle the room occupant is analogous not to a computer but only to a computer’s central processing unit (CPU). Searle does not understand Chinese because he is only one part of the computer that responds appropriately to Chinese messages. What does understand Chinese is the system as a whole, including the manual, any instructions for using it, and any intermediate means of symbol manipulation. Searle the author’s reply is that the other parts of the system can be dispensed with. Suppose Searle the room occupant simply memorizes the characters, the manual, and the instructions so that he can respond to Chinese messages entirely on his own. He still would not know what the Chinese characters mean.
Another objection claims that robots consisting of computers and sensors and having the ability to move about and manipulate things in their environment would be capable of learning Chinese in much the same way that human children acquire their first languages. Searle the author rejects this criticism as well, claiming that the “sensory” input the computer receives would also consist of symbols, which a person or a machine could manipulate appropriately without any understanding of their meaning.