According to the “Strong AI” theory, if a digital computer were ever to imitate a human mind so well that we could not tell that we were speaking with a computer, then we ought to admit that the computer too has a mind.  To put it another way, if a computer could ever be programmed to produce the right outputs in response to the right inputs, then it should be regarded as understanding what it is doing in the very same sense that we understand what we are doing.

Philosopher of language John Searle has provided what I consider a wholly convincing refutation of this hand-waving view.  Searle speaks English and knows no Chinese.  Suppose, he says, that he is locked in a room with a set of rules, written in English, for correlating certain sets of Chinese symbols, which are passed into the room, with other sets of Chinese symbols, which he passes out.  And suppose that he masters the rules so well that when the people outside the room pass in Chinese questions, he returns the right Chinese answers.  The people outside may be convinced that he understands the conversation, but in fact he understands nothing of it: he is merely matching the shapes of symbols according to formal rules.  According to Searle, this is all that happens when a computer executes a program.
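
To make the structure of the thought experiment concrete, here is a minimal sketch of the room as a computer program.  Everything in it is a hypothetical stand-in of my own devising (Searle specifies no particular rules or questions), but the essential feature is faithful to his argument: the program correlates input symbols with output symbols purely by their shapes, with nothing anywhere that could count as grasping their meaning.

```python
# The "rule book": formal correlations between incoming and outgoing
# strings of Chinese symbols. The entries are invented examples; nothing
# in the table encodes what any symbol means.
RULE_BOOK = {
    "你叫什么名字？": "我没有名字。",    # "What is your name?" -> "I have no name."
    "你懂中文吗？": "我当然懂中文。",    # "Do you understand Chinese?" -> "Of course I do."
}

def room(symbols_in: str) -> str:
    """Return the output symbols that the rules correlate with the input.

    The lookup is purely formal: it matches strings as shapes, and it
    would work identically if every character were replaced by an
    arbitrary token. Producing the right answers requires no grasp of
    what the symbols mean; that is Searle's point.
    """
    return RULE_BOOK.get(symbols_in, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(room("你懂中文吗？"))  # prints a fluent-looking answer
```

A real program would need vastly more rules than a two-entry lookup table, but adding rules only adds more of the same: formal symbol manipulation, and nothing else.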

Notice that this argument does not show that the human mind cannot be reduced to the operation of a machine; it shows only that it cannot be reduced to the operation of that kind of machine: a machine executing algorithms by digital processes.  Whether it can be reduced to the operation of some other kind of machine is an open question.

But now Searle engages in some hand-waving of his own, because he does not regard the question as open.  “We know,” he says, “that thinking is caused by neurobiological processes in the brain, and since the brain is a machine, there is no obstacle in principle to building a machine capable of thinking.”

Just how do we “know” this?  It is not an argument, but a declaration of faith.