The Chinese Room Experiment
Conclusion
The Turing Test:
Searle is probably right about the Turing Test.
Simulating a human-like conversation probably does not guarantee real human-like understanding.
Certainly, it appears that simulating conversation to some degree does not require a comparable degree of understanding: programs such as the chatterbots of 2008 presumably have no understanding at all.
Weak AI: the computer is a valuable tool for the study of the mind, i.e., it lets us formulate and test hypotheses rigorously.
Strong AI: an appropriately programmed computer really is a mind; it can literally be said to understand and to have other cognitive states.
Searle’s Response to the Systems Reply
1) It is absurd to say that the room and the rules by themselves can provide understanding.
2) What if I memorized all the rules and internalized the whole system? Then there would be just me, and I still wouldn’t understand Chinese.
Counter-response to Searle’s response
If Searle could internalize the rules, part of his brain would understand Chinese. Searle’s brain would house two personalities: English-speaking Searle and the Chinese-speaking system.
Searle is an opponent of strong AI, and the Chinese Room is meant to show what the strong AI thesis claims and why it is wrong.
It is a Gedankenexperiment (a thought experiment), like the Turing Test.
The Complexity Reply
Really a type of systems reply.
Searle’s thought experiment is deceptively simple: a room, a man with no understanding of Chinese, and “a few slips of paper” are supposed to pass for a native Chinese speaker.
It would be incredibly difficult to simulate a Chinese speaker’s conversation. You need to program in knowledge of the world, an individual personality with simulated life history to draw on, and the ability to be creative and flexible in conversation. Basically you need to be able to simulate the complexity of an adult human brain, which is composed of billions of neurons and trillions of connections between neurons.
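To give that scale a rough number, here is a back-of-envelope sketch in Python. The figures used, about 86 billion neurons, on the order of 100 trillion synapses, and 4 bytes to store one connection weight, are commonly cited estimates and modelling assumptions, not data from this presentation.

```python
# Back-of-envelope estimate of the storage needed just to record one
# numeric "weight" per synaptic connection in an adult human brain.
# Neuron and synapse counts are rough, commonly cited estimates; the
# 4-bytes-per-connection figure is an arbitrary modelling assumption.
NEURONS = 86e9              # ~86 billion neurons
SYNAPSES = 1e14             # ~100 trillion synaptic connections
BYTES_PER_CONNECTION = 4    # e.g. one 32-bit weight per connection

total_bytes = SYNAPSES * BYTES_PER_CONNECTION
print(f"Connections per neuron: ~{SYNAPSES / NEURONS:,.0f}")        # ~1,163
print(f"One weight per connection: ~{total_bytes / 1e12:,.0f} TB")  # ~400 TB
```

Even this crude count ignores how the connections are organised; the point is only that “a few slips of paper” wildly understates the job.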
An operator O. sits in a room; Chinese symbols come in which O. does not understand. He has explicit instructions (a program!) in English on how to produce an output stream of Chinese characters from this input, so as to generate “answers” to “questions”. But of course he understands nothing, even though Chinese speakers who see the output find it correct and indistinguishable from the real thing.
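To make the “program” in the story concrete, here is a minimal sketch of pure symbol manipulation: a hypothetical rulebook that maps incoming strings of symbols to outgoing ones by pattern alone. The particular rules and the answer_question function are invented for illustration; the point is that nothing in the code interprets what the symbols mean.

```python
# A toy "rulebook": it maps incoming symbol strings to outgoing ones.
# Neither the operator nor the program interprets the symbols.
RULEBOOK = {
    "你叫什么名字？": "我叫小明。",          # "What is your name?" -> "My name is Xiao Ming."
    "你喜欢茶吗？": "是的，我很喜欢茶。",    # "Do you like tea?"   -> "Yes, I like tea very much."
}

def answer_question(symbols: str) -> str:
    """Produce an "answer" purely by looking up the input pattern.

    No step here involves understanding Chinese; it is a blind mapping
    from one string of tokens to another.
    """
    return RULEBOOK.get(symbols, "对不起，我不明白。")   # fallback: "Sorry, I don't understand."

if __name__ == "__main__":
    print(answer_question("你叫什么名字？"))   # looks fluent to an outside observer
```

However large the rulebook became, the process would still be this kind of blind lookup-and-transform, which is the intuition Searle’s argument trades on.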
The Turing Test
In 1950, the computer scientist Alan Turing wanted to provide a practical test to answer the question “Can a machine think?”
His solution -- the Turing Test:
If a machine can conduct a conversation so well that people cannot tell whether they are talking with a person or with a computer, then the computer can think. It passes the Turing Test.
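As a rough illustration of the test’s protocol (a simplification, not Turing’s exact formulation), the sketch below has a judge read transcripts from two hidden respondents, one human and one machine, and then guess which is the machine. The judge_guess, human_respond and machine_respond callables and the list of questions are placeholders to be supplied by whoever runs it.

```python
import random

def run_turing_round(judge_guess, human_respond, machine_respond, questions):
    """One simplified round of Turing's imitation game.

    The judge sees text transcripts from two anonymous respondents,
    labelled "A" and "B", and must name the label it believes is the
    machine. Returns True if the machine fooled the judge.
    """
    # Randomly hide the human and the machine behind the labels A and B.
    pair = [("human", human_respond), ("machine", machine_respond)]
    random.shuffle(pair)
    assignment = dict(zip(["A", "B"], pair))

    # Put the same questions to both respondents and collect transcripts.
    transcripts = {
        label: [(q, respond(q)) for q in questions]
        for label, (_, respond) in assignment.items()
    }

    guessed = judge_guess(transcripts)  # judge returns "A" or "B"
    machine_label = next(label for label, (kind, _) in assignment.items()
                         if kind == "machine")
    return guessed != machine_label     # the machine passes if the judge guesses wrong
```

In the actual test the questioning is interactive and adaptive, which is what makes sustained deception so hard.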
Searle’s Response to the Robot Reply
1) The robot reply tacitly concedes that there is more to understanding than mere symbol manipulation.
2) Even so, the robot reply still doesn’t work. Imagine that I am inside the head of the robot. I have no contact with the robot’s perceptions or actions; I still only manipulate symbols, and I still have no understanding.
Counter-response to Searle’s response
Combine the robot reply with the systems reply. The robot as a whole understands Chinese, even though Searle does not.
Objections
The Systems Reply
The Robot Reply
What if the whole system were put inside a robot?
Then the system would interact with the world, and that interaction would create understanding.
Searle is part of a larger system. Searle doesn’t understand Chinese, but the whole system (Searle + room + rules) does understand Chinese.
The knowledge of Chinese is in the rules contained in the room.
The ability to implement that knowledge is in Searle.
The whole system understands Chinese.
Submitted by: Kundan Bhatiya, Kritika Patel (MCA, 4th Sem.)
Guided by: Dr. Priyesh Kanungo