The Chess Room

Introduction.

In 1980, the journal The Behavioral and Brain Sciences published an article by the philosopher John Searle[i], “Minds, Brains, and Programs”, containing a description of the “Chinese Room” thought experiment[ii], criticisms of this experiment by various researchers, and Searle's responses to those criticisms.

The “Chinese Room” argument has turned out to be one of the most discussed arguments in cognitive science.

This article shows the obvious fallacy of this argument.

The objection is so simple that I find it hard to believe it did not come up earlier. It is far more likely that I am reinventing a wheel that has been invented many times before me. If that is indeed the case, I will be sincerely grateful to anyone who finds and shows me that earlier refutation.

The Chinese Room.

Searle tries to prove that even a computer that passes the Turing test[iii] is not actually capable of understanding texts; it is only capable of convincingly pretending that it understands them.

To do this, Searle proposes a thought experiment in which a program that passes the Turing test is run not on a computer but by a person acting as its executor.

Suppose we have a computer running a program that can pass the Turing test in Chinese. Let us translate this program into a book of instructions in English.
Now let us put in a room a person who knows English but does not know Chinese. We provide him with the book of instructions, written in English and therefore understandable to him; office supplies (pencils, paper, erasers); and a paper filing system. Texts written in Chinese characters are pushed under the door of the room. The person reads the book of instructions and follows them step by step; as a result, he writes some other Chinese characters on paper and pushes them back under the door.

From the point of view of an outside observer, the room communicates in Chinese and understands it. If a program running on a computer passes the Turing test, then so will a human executor running the same program without a computer.

Searle states that there is no significant difference between the role of the computer and that of the executor. Both the computer and the executor follow a program that produces behavior which looks like understanding.

But the executor does not actually understand Chinese and has no idea what the conversation is about. This means that the computer playing the same role also does not understand Chinese and likewise has not the slightest idea what the conversation is about.

So, a computer's ability to pass the Turing test does not necessarily mean it can understand language. According to Searle, understanding requires a brain, and without it understanding cannot arise.

The Chess Room.

Let us show that Searle's reasoning is flawed. To do this, we apply Searle's procedure not to a hypothetical program that can pass the Turing test in Chinese, but to a real chess program, for example Stockfish[iv].

As the executor for our thought experiment, we will choose a person who not only cannot play chess, but does not even know that such a game exists.

Let us translate Stockfish into instructions written in a language the executor knows, without revealing what the instructions mean. Let us also transfer the chess databases onto paper, again not as diagrams a human could read, but in an algorithmic format.

Let's run Stockfish on the human brain substrate.

The person in the room receives a sequence of symbols. It actually encodes a chess move or a proposal (“let's play, you have white”, “I offer a draw”), but the person does not know what the sequence means. He carries out calculations according to the algorithm and produces a response sequence of symbols encoding a reply move or a reaction to the proposal.
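To make the symbol exchange concrete, here is a minimal sketch of what the outside of such a room looks like in practice. It assumes the python-chess library and a locally installed Stockfish binary (the path below is a placeholder); nothing in it is part of the thought experiment itself. The only thing that crosses the boundary is short symbol strings such as “e2e4”, exactly the kind of opaque sequences the executor shuffles by hand.

```python
# A minimal sketch, assuming the python-chess library and a local Stockfish
# binary (the path is a placeholder). From the outside, the whole exchange
# consists of short symbol strings such as "e2e4": the same kind of opaque
# sequences the executor in the room manipulates by hand.
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("/usr/local/bin/stockfish")
board = chess.Board()

move_in = "e2e4"                       # symbols pushed under the door
board.push_uci(move_in)

result = engine.play(board, chess.engine.Limit(time=0.1))
move_out = result.move.uci()           # symbols pushed back out, e.g. "c7c5"
print(move_out)

engine.quit()
```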

It is clear that from an outside observer's point of view, the room can play chess with the strength of Stockfish. However, our executor still does not know how to play chess and does not even know that such a game exists. And since, following Searle, we hold that the executor is in no way fundamentally different from the computer, we must, following Searle, conclude that the computer cannot play chess either.

Except that it obviously can.

So, consistent application of Searle's reasoning leads us to an absurd conclusion. This means the reasoning itself is wrong: the absence of some quality in the executor of a program cannot be transferred to the computing system as a whole.

Didn't anyone realize this before?

Of course they did. Right in the original article, the very first counterargument contains an entirely correct refutation.

The systems reply (Berkeley). While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system, and the system does understand the story. The person has a large ledger in front of him in which are written the rules, he has a lot of scratch paper and pencils for doing calculations, he has 'data banks' of sets of Chinese symbols. Now, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part.

Yes, that is exactly right. Let's look at the chess program again. Obviously, the computer by itself does not know how to play chess; it “learns” to when the chess program is run. During the game, the computer accesses data banks and performs many calculations, storing their results in memory. It is the system consisting of the computer, the chess program, the data banks, and RAM that can play chess.

Searle, of course, responded to this counterargument. The chess test has already told us that his answer must be wrong, so we will not analyze it in detail; we will content ourselves with showing where the error lies.

My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese…

Well, let us perform the same operation with the chess program. A serious technical problem arises here: a person is fundamentally unable to memorize the required volume of instructions, to hold in short-term memory the amounts of information needed for the intermediate operations, or to perform those operations at a satisfactory speed. But we are still dealing with a thought experiment, and so we are free to imagine even the impossible.

Let our hero memorize all the Stockfish instructions and all the databases, and carry out all the calculations in his head. Now he is able to start the whole procedure…

…and it becomes obvious that he now can play chess with the strength of Stockfish. He still does not know the names of the pieces, the rules of en passant capture or castling, or the principles of development, but you can play a game against him, which means that he can play. He just plays in a completely non-human way, not at all the way people do.

It is exactly the same story with the original person in Searle's thought experiment. The procedure installs in his head a system that understands Chinese, but does so in a completely non-human, fundamentally different way. His native “language module”, that is, the part of his brain responsible for perceiving and producing speech, has no access to the meaning of the Chinese text; that meaning is available only to the memorized algorithm that processes it. If the person forgets the algorithm, his ability to understand Chinese is lost. If he loses the ability to hold the intermediate results, it disappears as well.

Therefore, Searle's objection is incorrect. Searle does not recognize non-human, algorithmic understanding and mistakenly concludes that after such an operation the person still does not understand Chinese. In the human way he does not, but there is now a system in his brain that does understand Chinese.

What is understanding?

It is useful to consider what exactly we mean when we say that we understand something, and how that understanding is achieved. As Mikhail Leonovich Gasparov wrote, to understand a poem means to be able to retell it in your own words[v]. This is a very good criterion, and it extends well beyond poetry.

Understanding is achieved when a person builds a model of a situation in his mind, with which he can then work: he can look at it from different sides, evaluate, analyze, and so on. In particular, he can independently describe this model in text, that is, “retell it in his own words.”

What exactly does “in your mind” mean? Human consciousness is a set of processes in the frontal lobes of the brain, and the construction of the model occurs on the same substrate. When we understand something, certain neural circuits are activated in our brain. What will and will not be included in these circuits is determined by our life experience, recorded in our memory, which is also localized in the brain. The memories needed to wire these circuits correctly are pulled from long-term into short-term memory, and a dynamic structure arises, made up of brain processes and the contents of short-term memory.

It is with this dynamic structure that the brain works; it is what is called above the model of the situation in consciousness.

The model does not provide understanding; it is understanding. The degree of adequacy of this model determines the level of understanding. You may understand nothing (no model could be built at all), you may understand some part of the message, anywhere from “almost nothing” to “almost everything”, or you may understand incorrectly, that is, build a model that corresponds to a different situation.

A computer can only understand natural language in a similar way: also by building a model and working with it, but its model is structured differently. A computer's model is a dynamic structure consisting of data in memory and the program processes that operate on that data. ABBYY engineers tried to formalize and explicitly describe the construction of such a model, and even succeeded to some extent, but the task as a whole proved beyond their capabilities[vi]. Existing statistical NLP systems build this model in a form that is unreadable for humans, for example as matrices of neural network weights. In this case, too, the model is the understanding, and the degree of adequacy of the model determines the level of understanding.
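For illustration, here is a minimal sketch of what such a machine-built model looks like from the outside. It assumes the Hugging Face transformers library and the pretrained bert-base-chinese model, both used only as an example: the “model of the sentence” is a tensor of activations produced by weight matrices, usable by the program but unreadable by a human.

```python
# A minimal sketch, assuming the Hugging Face `transformers` library and the
# pretrained `bert-base-chinese` model are available. The point is only that
# the system's "model of the sentence" is a tensor of numbers produced by
# weight matrices: usable by the program, unreadable by a human.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModel.from_pretrained("bert-base-chinese")

# "The person in the room does not understand Chinese."
inputs = tokenizer("房间里的人不懂中文。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # the sentence "model": roughly [1, N, 768]
print(model.encoder.layer[0].attention.self.query.weight.shape)  # one weight matrix: [768, 768]
```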

The key part of the Chinese Room is neither the book of instructions nor the person who follows them. The model-understanding of the linguistic situation arises not in the book of instructions, not in the data banks, and for the most part not even in the brain of the executor, but in the records the executor keeps while following the instructions. It exists on the substrate of the stationery with which the executor manually executes the algorithm from the book.
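The chess version makes this concrete. Below is a toy sketch in Python using the python-chess library (an assumption; this is not Stockfish, just a two-ply material search). The dictionary scratchpad plays the role of the executor's scratch paper: every intermediate position evaluation is written into it rather than into the fixed “instruction book” part of the code.

```python
# A toy sketch using python-chess (an assumption; not Stockfish, just a
# two-ply material search). The `scratchpad` dictionary stands in for the
# executor's scratch paper: the working model of the game lives here, not
# in the fixed "instruction book" (the code below).
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board):
    """Static evaluation: material balance from White's point of view."""
    score = 0
    for piece, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece, chess.WHITE))
        score -= value * len(board.pieces(piece, chess.BLACK))
    return score

def search(board, depth, scratchpad):
    """Negamax search; all intermediate results are written to `scratchpad`."""
    key = (board.fen(), depth)
    if key in scratchpad:
        return scratchpad[key]
    if depth == 0 or board.is_game_over():
        value = material(board) if board.turn == chess.WHITE else -material(board)
    else:
        value = float("-inf")
        for move in board.legal_moves:
            board.push(move)
            value = max(value, -search(board, depth - 1, scratchpad))
            board.pop()
    scratchpad[key] = value
    return value

scratchpad = {}            # the "scratch paper"
search(chess.Board(), 2, scratchpad)
print(len(scratchpad), "position notes written onto the scratch paper")
```

If the scratch paper is destroyed in the middle of a game, the model of the position is destroyed with it, even though the instruction book remains intact; this is exactly the point made above about the executor's records.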

Transferring this model into a person's head creates confusion, because a human model-understanding is built completely differently. And of course, installing a computer-style model of understanding in the brain does not make a human-style model appear there. As Marvin Minsky quite rightly noted, on the substrate of the human brain, alongside the ordinary human consciousness that does not know Chinese, a virtual consciousness that does know Chinese arises[vii].

Conclusion.

The persuasive force of the Chinese Room argument rests entirely on the persuasiveness of Searle's flawed procedure. The fallacy is masked by the fact that people, firstly, for the most part do not really understand what “understanding” is; secondly, find it very difficult to imagine a program that can pass the Turing test in Chinese, and even harder to imagine a working Chinese Room in detail; and thirdly, find it very easy, by contrast, to imagine a person who does not know Chinese, since most of us have such a person close at hand. It is this contrast that makes the executor's inability to understand Chinese, the element on which Searle's argument turns, so vivid.

However, the fallacy of the procedure becomes obvious as soon as it is applied to an existing program. Searle himself, of course, never did this. If he had, the famous article would not have appeared, and humanity would not have spent enormous resources discussing this error.

Of course, this work does not prove that Searle's views on strong artificial intelligence are wrong; it only proves that those views are unfounded. From the fact that a computer without a program is not capable of understanding natural language, it does not at all follow that a computer running a program is incapable of it as well.


[i] https://en.wikipedia.org/wiki/John_Searle

[ii] https://en.wikipedia.org/wiki/Chinese_room

[iii] https://en.wikipedia.org/wiki/Turing_test

[iv] https://stockfishchess.org/

[v] https://rus.1sept.ru/article.php?ID=200204301

[vi] https://sysblok.ru/blog/gorkij-urok-abbyy-kak-lingvisty-proigrali-poslednjuju-bitvu-za-nlp/

[vii] https://en.wikipedia.org/wiki/Chinese_room#Virtual_mind_reply
