CONTEMPORARY PHILOSOPHY: SEARLE (I)
 John Searle (born 31 July 1932) taught for most of his career at UC Berkeley. His essay
“Minds, Brains, and Programs” is a classic, in which he argues against what he calls the thesis of
“Strong Artificial Intelligence”: the claim that machines can actually be intelligent. As such, it is a
contribution to the discussion of what constitutes thinking/understanding that we have looked
at (a bit) in the course before now.
 Searle’s discussion is aimed at what is sometimes called the “Turing Test” for intelligence (or
artificial intelligence [AI]). Alan Turing was a British mathematician who proposed to ‘operationalize’
the notion of intelligence, cutting through the philosophical debates, with a simple
test. A somewhat simplified version of the Turing Test is as follows. Suppose your only mode
of interaction is via a keyboard, with which to type in and eventually submit questions, and via
a printer, which after a certain elapsed time will print out a response to the questions. If you
cannot tell whether the responses are from a human being with a similar set of tools on the other
side, or a computer, then the computer counts as intelligent. (Turing’s actual test is a bit more
complicated but not in ways that matter.) On a common reading, the Turing Test can be boiled
down to a simple claim: If you can carry on a perfectly ordinary conversation with a computer,
then that computer counts as intelligent. This is the criterion (roughly) for the Loebner Prize,
an award given to the most humanlike ‘chatterbot’ in competition.
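To make concrete what a ‘chatterbot’ does, here is a minimal ELIZA-style sketch in Python (the patterns and canned replies are invented for illustration, not taken from any actual Loebner Prize entrant). It produces conversational-looking replies purely by matching and rearranging strings, which is exactly the kind of formal symbol manipulation Searle’s argument targets:

```python
import re

# A toy ELIZA-style responder (illustrative only): each rule pairs a
# regular-expression pattern with a reply template. The program operates
# on the *form* of the input string; no meaning is involved anywhere.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\?$"), "What do you think?"),
]

def respond(line: str) -> str:
    """Return a reply by pattern-matching the input; fall back to a stock phrase."""
    for pattern, template in RULES:
        m = pattern.search(line.strip())
        if m:
            # Splice the matched substring back into the template verbatim.
            return template.format(*m.groups())
    return "Please, go on."

if __name__ == "__main__":
    print(respond("I feel tired today"))  # → Why do you feel tired today?
    print(respond("Nice weather we're having."))  # → Please, go on.
```

The point of the sketch is Searle’s own: a system like this can sometimes sound conversational, yet it plainly attaches no meaning to the symbols it shuffles.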
 Some years later, Searle summarized the heart of his argument as follows (from his presentation
of the argument in Scientific American in 1990):
[A1] Programs are formal (syntactic).
A program uses syntax to manipulate symbols and pays no attention to the semantics of the
symbols. It knows where to put the symbols and how to move them around, but it doesn’t
know what they stand for or what they mean. For the program, the symbols are just physical
objects like any others.
[A2] Minds have mental contents (semantics).
Unlike the symbols used by a program, our thoughts have meaning: they represent things and
we know what it is they represent.
[A3] Syntax by itself is neither constitutive of nor sufficient for semantics.
Therefore, programs are neither constitutive of nor sufficient for minds.
This follows from [A1]–[A3]: programs don’t have semantics; they have only syntax, and by [A3]
syntax by itself is not sufficient for semantics, whereas minds, by [A2], do have semantic contents.
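The inference can be made fully explicit in a short formal sketch. The predicate names below (Program, OnlySyntax, Semantics, Mind) are my own labels, not Searle’s; the Lean 4 theorem simply chains [A1]–[A3] together:

```lean
-- Toy formalization of Searle's argument (predicate names are invented).
variable {Agent : Type}
variable (Program OnlySyntax Semantics Mind : Agent → Prop)

theorem searle_conclusion
    (A1 : ∀ p, Program p → OnlySyntax p)   -- programs are purely syntactic
    (A2 : ∀ m, Mind m → Semantics m)       -- minds have semantic contents
    (A3 : ∀ x, OnlySyntax x → ¬ Semantics x) -- syntax alone never yields semantics
    : ∀ p, Program p → ¬ Mind p := by
  intro p hProg hMind
  exact A3 p (A1 p hProg) (A2 p hMind)
```

Note that the formalization reads [A3] in its strongest form (nothing purely syntactic has semantics); much of the subsequent debate over the Chinese Room concerns whether that strong reading is warranted.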