P: The Systems Objection: Perhaps the individual in the Chinese room does not understand Chinese, but the entire room does. If so, then the conclusion Searle wants is not guaranteed.
1. What model of the mind is Searle criticizing?
2. What is the difference between ‘weak’ and ‘strong’ AI?
3. What is the difference between syntax and semantics?
4. What is the “Turing Test”?
5. On what basis would Turing say that a machine can think?
S: The Internalization Reply: "Let the individual internalize all of these elements of the system. [...] There isn't anything at all to the system that he does not encompass. [...] All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way that the system could understand because the system is just a part of him."
P: The Invalidity Reply: The above inference form is invalid. Counterexample:
1. Searle is not red, squishy, and bloody all over.
2. There's nothing in Searle's heart that isn't in Searle.
C. So, Searle's heart isn't red, squishy, and bloody all over.
The conclusion is not guaranteed by the premises.
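One way to regiment the inference form at issue (the regimentation and the letters are my own sketch, not Searle's wording): let P be the relevant property (understanding Chinese / being red, squishy, and bloody all over), a the whole (the individual who has internalized the system / Searle), and b the contained part (the internalized system / Searle's heart).
\[
\begin{array}{ll}
(1) & \neg P(a) \\
(2) & \forall x\,\bigl(\mathrm{In}(x,b) \rightarrow \mathrm{In}(x,a)\bigr) \\
\hline
(\mathrm{C}) & \neg P(b)
\end{array}
\]
On the heart reading, (1) and (2) are true and (C) is false; one such instance is enough to show that the form is invalid.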
P: The Internalization is Irrelevant Reply: Even if the individual internalizes all the elements of the system, the individual and the system may have different states. There may be something in the individual that's not in the system (and vice versa). So although the individual may not understand the semantics for the symbols, the room may.
C: The Implementation Objection: Perhaps the program does not understand Chinese, but the implementation of it does (or causes understanding). If so, then the conclusion Searle wants is not guaranteed. Counterexample:
1. Recipes are syntactic.
2. Syntax is not sufficient for crumbliness.
3. Cakes are crumbly.
C. So, implementing a recipe is not sufficient for a cake.
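The counterexample is built to mirror Searle's own derivation (roughly: programs are purely syntactic; syntax is not sufficient for semantics; minds have semantics; so implementing a program is not sufficient for a mind). As a sketch of the shared schema, with substitutions supplied here only for illustration, read R as the syntactic item (program / recipe), F as the target property (semantics / crumbliness), and M as what has that property (a mind / a cake):
\[
\begin{array}{ll}
(1) & R \text{ is purely syntactic} \\
(2) & \text{Syntax is not sufficient for } F \\
(3) & M \text{ has } F \\
\hline
(\mathrm{C}) & \text{Implementing } R \text{ is not sufficient for an } M
\end{array}
\]
On the recipe reading the premises are plausible and the conclusion is false, since following a recipe in a kitchen does produce a crumbly cake. The diagnosis in the quotation below is that the schema conflates the abstract program (or recipe) with its concrete implementation.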
"Implementations of programs [...] "are not purely syntactic. An implementation has causal heft in the real world, and it is in virtue of this causal heft that consciousness and intentionality arise".
Searle's Chinese Room thought experiment has problems, and so it does not give us reason to think that Strong AI is false.
(The goal is to show that Searle's argument against Strong AI fails, not that Strong AI is true.)
• What is the puzzle of representation?
• What kind(s) of representation does Crane think fundamental?
Jim Pryor
6. What is the “systems objection”?
7. What is the “implementation objection”?
8. Is there a difference between the simulation of thought and actual thought? If so, what is it?