The Chinese Room is a classic thought experiment in AI, due to John Searle. A man who does not speak Chinese sits in a sealed room with a dictionary whose contents are every possible combination of Chinese characters; for each combination, it prescribes a response. A Chinese speaker writes his half of a conversation on small slips of paper and slides them into the room through a mail slot. The man inside looks up each phrase in his book, copies out the response, and passes it back. From the outside, the Chinese speaker appears to be having a normal conversation. Does the man speak Chinese? Does the book? Does the room?
Borges told us of a book whose pages were infinite in number: no page the first, no page the last, an eternal middle in which all writing was contained. In the Chinese room, Searle's infamous book might have been that self-same Book of Sand; a book with its own memories, which could write new pages into itself, which could synthesize new pages from old ones, a book which WAS its own author. How else could such a book exist? A mind is not a dictionary; at the least, it is a dictionary that is eternally rewriting itself. On the other hand, we can imagine our man in the Chinese room with pen and paper, performing elaborate calculations in an algebra he does not understand, representing the internal state of the book as he feeds its words back through the slot.
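The distinction above can be made concrete. The following is a minimal sketch (all names and phrases are illustrative, not from the text): Searle's static book is a pure lookup with no memory, while the book of sand is a transition function whose state each exchange rewrites.

```python
# Searle's static book: every phrase maps to one fixed reply, forever.
static_book = {
    "ni hao": "ni hao",
    "ni hao ma?": "wo hen hao",
}

def static_room(phrase):
    # Stateless: the thousandth reply is identical to the first.
    return static_book.get(phrase, "...")

# The book of sand: the reply depends on an internal state, and each
# exchange writes a new page -- the book authors itself.
def stateful_room(phrase, pages):
    reply = f"{phrase} ({len(pages)} pages so far)"
    new_pages = pages + [phrase]  # the book writes a new page into itself
    return reply, new_pages
```

Equivalently, the man with pen and paper carries that state outside the book: he threads `pages` through each call by hand, executing an update rule he need not understand.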
There was a time when this distinction was not widely understood, quaint as that now seems. There was a time when lookup tables keyed on cryptographic hashes were seen as sophisticated AI, and now they are merely a data structure.
There was a time when sprawling, branching decision trees were seen as sophisticated AI, and now they are merely a data structure.
There was a time when recurrent neural networks were seen as sophisticated AI, and now they are merely a data structure.
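Each of the three "former AIs" above reduces to data plus a fixed rule. A minimal sketch of all three (illustrative toy values, not from the text):

```python
import hashlib

# 1. A lookup table keyed on a cryptographic hash of the input.
table = {hashlib.sha256(b"hello").hexdigest(): "world"}

def hash_lookup(msg):
    return table.get(hashlib.sha256(msg.encode()).hexdigest())

# 2. A decision tree: nested tuples of (feature, threshold, left, right),
# with bare strings as leaves.
tree = ("x", 0.5, "low", "high")

def decide(node, features):
    if isinstance(node, str):
        return node
    feat, thresh, left, right = node
    return decide(left if features[feat] <= thresh else right, features)

# 3. A minimal recurrent step: the new state is a function of the old
# state and the input -- the one structure here that rewrites itself.
def rnn_step(state, x, w_state=0.5, w_input=1.0):
    return w_state * state + w_input * x

state = 0.0
for x in [1.0, 2.0]:
    state = rnn_step(state, x)
```

Only the third carries state forward between inputs, which is exactly the property the book of sand adds to Searle's dictionary.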
Library of Chadnet | wiki.chadnet.org