John Searle and Strong AI
John Searle described the philosophical position he calls "strong AI" as the claim that a suitably programmed computer, given the right inputs and outputs, would thereby have a mind in the same sense that a human being does. This definition is one of the most widely cited statements of the claim that programmed systems could someday think and solve various problems (Starks, 2019). To refute this position, Searle set out a remarkable argument concerning the foundations of cognitive science and artificial intelligence: his well-known Chinese room argument, published in 1980 to counter his scientific critics.
Effectiveness of Searle’s argument
Searle's argument is effective, and it differs from the traditional AI position in several ways. First, unlike Descartes, who suggested that an immaterial soul interacts with the brain, Searle offers a better explanation. He observes the materialist doctrine that souls do not exist in the physical world human beings live in. The various materialist positions that underwrite strong AI claim only that a computer running the right program would be mental. The word "program" is the emphasis in this context, because any machine can implement a particular program, which seems to provide an answer to the mind-body problem: the concern is simply whether a programmed machine is running the correct software (Baron, 2017). On this view, the human mind is taken to be a piece of software implemented by the human brain. Therefore, in principle, it would be possible to code such a human-like program into a computer and thereby create a machine with a mind.
Secondly, Searle's strategy for showing strong AI to be false is effective and distinctive. He argues that no one can refute the thesis simply by inspecting the mind's program, because it is difficult to understand the program of the human mind; nor can anyone know the brain a priori, before empirical tests are carried out. The main idea is instead to construct a zombie: a machine that is not mental, no matter what kind of program it runs (Baron, 2017). If such a machine existed, the case that strong AI is false would be supported, because no program would ever make it mental. Moreover, the philosopher explains how to make such a machine, and how to assess whether it contains thoughts, by having us implement the machine ourselves. If a person implements the system personally, he or she is in a position to evaluate whether it is mental or not.
In addition, Searle offers an effective illustration against strong AI. He imagines a person placed in a closed room that has two slots. Through slot 1 the person is handed Chinese characters, which he or she cannot read; the person does not know Chinese at all. However, the person has a large rulebook that he or she applies to produce new Chinese characters from the ones provided (John, 2017). With the help of the rulebook, the person finally passes out the new characters through the other slot. This is analogous to a computer program, which takes an input, computes over it, and finally emits an output. Searle continues the illustration by assuming that the rulebook is good enough that people outside the room can converse in Chinese with the person inside.
For instance, they might send the message "How are you?" Following the rulebook, the person would produce a meaningful reply. The people outside could even ask whether the person inside understands Chinese, and the answer would be yes. In fact, the person following the rulebook does not understand Chinese at all; he merely follows rules. The crucial point of the case is that with such a rulebook (program), one would never comprehend the meanings of the characters one manipulates (John, 2017). Searle has thus described a program that can never be mental. Changing the program amounts only to changing the rulebook, which clearly does not produce understanding.
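The rulebook described above can be sketched as an ordinary lookup table. The following minimal Python sketch (the phrases and the fallback reply are illustrative assumptions, not from Searle's paper) shows how a program can produce apparently sensible Chinese replies while representing no meaning anywhere:

```python
# Hypothetical "rulebook": a pure mapping from input strings to output
# strings. Nothing in it encodes meaning; it is only symbol manipulation.
RULEBOOK = {
    "你好吗?": "我很好, 谢谢.",      # "How are you?" -> "I am fine, thanks."
    "你懂中文吗?": "是的, 我懂.",    # "Do you understand Chinese?" -> "Yes, I do."
}

def chinese_room(message: str) -> str:
    """Return whatever reply the rulebook dictates for the given message.

    The function 'answers' questions about understanding Chinese without
    any understanding being involved, mirroring Searle's point.
    """
    return RULEBOOK.get(message, "对不起.")  # fallback rule: "Sorry."

if __name__ == "__main__":
    print(chinese_room("你懂中文吗?"))  # replies "yes" while understanding nothing
```

The design choice mirrors Searle's claim exactly: swapping in a bigger dictionary (a better rulebook) changes the outputs, but no table, however large, adds understanding to the lookup.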