Type-identity theorists claim that mental properties are
reducible to physical properties.
Believing that SFU is in Canada just is being in Brain State (say) 137.
Which of these is the 'is' of identity? "Dr Mc is the prof of this class" or
"Dr Mc is hilarious."
Type identity = a one-one relationship between mental state types and physical state types.
Functionalists deny this.
They believe that mental states are multiply realizable.
Strong AI is a kind of functionalism.
An appropriately programmed computer would have mental states.
AND it would have them in the same way that we do.
On this view, human cognitive processing is formal (syntactic) symbol manipulation.
Searle thinks that a computer will not pass the Turing test, and,
If a computer did pass the TT, we would not be required to say that it
has mental states, and,
Even if we were, this would not help to explain how we have mental states.
Strong AI, says Searle, is just wrong that 'mind is to brain as software
is to hardware'.
Searle’s Chinese room thought experiment
Since many of us here can read and write some form of Chinese, let's
change the example. Ex: It's the Wingding room.
Searle in the Chinese (Wingding) room does not understand Chinese
He is responding purely on the basis of syntax.
The systems reply:
It's not Searle in the room that understands Chinese, it's the whole system of which he is a part.
o Searle says "Imagine I internalize all the rules and learn
to write all the strings." In that case, I would include the system, and I
still wouldn't understand Chinese.
Replies to Searle's response: "You wouldn't
understand Chinese, but part of you would."
Yes, you would understand Chinese. Why
think that you would be able to tell from the
inside whether or not you understand?
All thought is conscious thought
Is Searle committed to Descartes' idea?
o The Robot reply - Place Searle in a robot that interacts
appropriately with the world. It gets information from
the environment, that information is processed by
Searle, who causes the robot to act accordingly.
It can walk around things, talk in Chinese, answer
questions in Chinese, read, pick up things etc.
Keep in mind how amazing this would be. You
might be sceptical that this could ever happen,
but that's not the PHIL point.
Searle's response – I still wouldn't understand Chinese.
"Searle, you've misunderstood the
claim." The claim is that the robot
would understand Chinese and be
thinking. It would be using you,
so to speak.
Why think Searle is in a position to
know whether or not the robot has mental states?
Sure, Searle has his own mental
states, but how does that preclude the
robot from having some, too?
How is it that some of our mental states have intentional content?
Semantic externalists would say it's a matter of their being
suitably related to the world.
o Searle thinks otherwise; he thinks that it is something
intrinsic to our brains – they produce intentionality.
What about sensation?
Can only organic brains produce sensation?
The other minds reply: If you don't grant mental states to a computer that passes the
TT, you can't consistently grant them to other people either.
Searle's response: I'm not trying to deal with the solipsist.