
Mechanical Objection

This objection is not part of Turing's original paper, but is increasingly raised as an extension of the previous argument. It holds that a sufficiently capacious machine could be programmed with enough human questions and corresponding human responses to answer any question, thereby passing the Turing Test by brute force without actually understanding human language (Wikipedia - Turing Test).
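As a rough illustration, the brute-force strategy reduces to a lookup table of canned conversation. The sketch below (in Python, with invented table entries and an invented fallback line; no real system is being described) shows how such a program would answer without modeling meaning at any point:

    # A minimal sketch of the brute-force approach: human-written responses
    # keyed by human questions, with no representation of meaning anywhere.
    CANNED_RESPONSES = {
        "how are you?": "I'm doing well, thanks for asking.",
        "what is your favorite color?": "Probably blue, though it changes by the day.",
        # ...a serious attempt would need an entry for every question a judge might ask
    }

    def respond(question: str) -> str:
        # Normalize the question and look it up; nothing here "understands" it.
        return CANNED_RESPONSES.get(
            question.strip().lower(),
            "That's an interesting question; let me think about it.",  # invented stall line
        )

    print(respond("How are you?"))  # -> I'm doing well, thanks for asking.

The scale, not the technique, is what would let such a program pass: it must anticipate essentially every question in advance.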

This objection is used in John Searle's "Chinese Room" argument in conjunction with the previous two objections. In his scenario, a human inside a room follows instructions for manipulating Chinese characters, allowing him to accept a question written in Chinese and return a response. The instructions the human follows are so complete and detailed that the room passes the Turing Test easily. However, Searle argues that the human in no way understands Chinese, and that a computer achieving the same results by the same technique likewise does not understand Chinese (Copeland - Chinese Gym).
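Searle's scenario can itself be read as a program of the same kind: the operator matches incoming symbols against a rule book and copies out the paired response. A minimal sketch, with a hypothetical two-entry rule book standing in for Searle's exhaustive instructions:

    # The room's procedure: pure symbol matching against a rule book.
    # The two rules below are a hypothetical stand-in for Searle's complete
    # instruction set; the operator never consults meanings, only the
    # shapes of the symbols.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "今天天气很好。",     # "How's the weather?" -> "Very nice today."
    }

    def operate(symbols: str) -> str:
        # Match the input and copy out the paired reply.
        return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(operate("你好吗？"))  # the operator need not know what either string says

Every step of the procedure is manifestly meaning-free; the dispute that follows is over whether that fact about the steps settles anything about the system as a whole.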

The major flaw in Searle's reasoning is its assumption that if one part of a system does not understand Chinese, the system as a whole cannot either. Written in a more direct form, the argument states: the human's symbol-manipulation does not enable him to understand Chinese; therefore, the human's symbol-manipulation does not enable the system as a whole to understand Chinese. Copeland calls this the "part-whole fallacy", in which a conclusion about a whole system is inferred from a single part, or vice versa; by analogy, no single neuron understands English, yet that says nothing about whether a brain built from neurons can. Searle himself condemns a similar fallacy, one Alan Turing used to respond to the previous objection, in which a process and its simulation are assumed to be equivalent because they accomplish the same results. Indeed, the second half of Searle's argument essentially states that the human simulates a process he cannot understand, and therefore the system cannot understand either. As Copeland observes, however, this half of Searle's argument is inextricably bound to a fallacy of its own and therefore cannot stand (Copeland - Chinese Gym).
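Rendered as a formal schema (with U(x) a hypothetical predicate for "x understands Chinese" and Part(x, y) for "x is a part of y", both introduced here purely for illustration), the blocked inference is:

    \neg U(\mathrm{operator}) \;\wedge\; \mathrm{Part}(\mathrm{operator}, \mathrm{system}) \;\nvdash\; \neg U(\mathrm{system})

The premises describe a part; the conclusion claims something about the whole, and no rule of inference bridges the gap.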

Copeland himself notes, though, that his refutation of Searle's argument is not a case for comprehension on the part of the system; it shows only that the Chinese Room argument is invalid (Copeland - Chinese Gym). Even in his precise, logical examination he is unable to say conclusively that a particular system can or cannot understand Chinese, citing instead the improbability of such systems. This indecisiveness echoes the divided response to Turing's assumption of genuineness in apparently genuine machine behavior. Searle can say when something does not understand, but not when it does; likewise, Copeland can expound on issues of logic, but the implications of a "perfect simulation" are glossed over and eventually ignored. There is an unwillingness to confront the practical consequences of a "perfect simulation" in Searle's paradigm, whereas Turing at least partly addressed the issue by urging the acceptance of apparently genuine behavior. Searle treats "understanding" and "consciousness" as inherent, undefinable aspects of humanity that can never be replicated, which allows him to dismiss any machine counterpart as a mere simulation, a fake. Turing makes no more of an attempt to define these terms than Searle does, but instead takes the functional view: if a machine can converse exactly as a human does, one can and should talk to it and make use of its capabilities as one would a human's.