

Review of The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology

Reviewer: Katharine Beals
Book Title: The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology
Book Author: Jerry A. Fodor
Publisher: MIT Press
Linguistic Field(s): Psycholinguistics
Cognitive Science
Issue Number: 13.712


Fodor, Jerry A. (2001) The Mind Doesn't Work That Way: The Scope and
Limits of Computational Psychology. MIT Press (A Bradford Book),
xi+126pp. Paperback ISBN 0-262-56146-8, $13.95; hardback ISBN
0-262-06212-7.

Katharine Beals, full-time stay-at-home mom.

As its title suggests, this book responds to Steven Pinker's "How the
Mind Works," as well as to Henry Plotkin's "Evolution in Mind." These
authors, Fodor argues, are unduly optimistic about how well the
computational theory of mind (CTM) accounts for human thought. Fodor's
slim volume aims to show the limitations of the CTM, and how much work
remains to be done in understanding human cognition. As he quickly
concedes, he himself has no solution to the problems he highlights.

The book begins with an overview of the CTM. This theory, which dates
back to Alan Turing, holds that the mental representations of our
beliefs, desires and other so-called propositional attitudes have
compositional, syntactic structures that reflect the logical form of
the corresponding propositions ("I believe that John loves Mary," "I
want John to love Mary and Mary to love John," and the like), and that
the mental processes operating on these representations manipulate
their syntax alone, in mechanical computations akin to those of a
Turing Machine. The effect that mental representations have on
mental activity, e.g. how they give rise to other thoughts, is solely
through their logical forms. This accounts for the productivity,
systematicity, and generally truth-preserving character of thought (how
one true thought leads to another). Thinking reduces to computation.

Because mental processes are, by this account, sensitive only to the
local syntax of whatever mental representation they operate on, they
are oblivious to the context in which the mental representation is
embedded. This, as Fodor points out, works just fine with computations
that are in fact local, e.g. strictly deductive inferences like that of
"John loves Mary" from "John loves Mary and Mary loves John."
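Fodor's notion of a "local" computation can be made concrete with a toy sketch (my own illustration, not Fodor's or Pinker's): a conjunction-elimination rule that inspects only the top-level syntax of a single representation and never consults any other belief.

```python
# Toy illustration of a "local" syntactic inference rule.  A mental
# representation is modeled as a nested tuple whose first element
# names its logical connective.

def eliminate_conjunction(rep):
    """Conjunction elimination: from ('and', p, q) infer p and q.

    The rule looks only at the local syntax of `rep`; no other
    beliefs are consulted, so the computation is purely mechanical
    in the sense the CTM requires.
    """
    if isinstance(rep, tuple) and rep[0] == 'and':
        return [rep[1], rep[2]]
    return []  # the rule does not apply to this representation

belief = ('and', ('loves', 'John', 'Mary'), ('loves', 'Mary', 'John'))
print(eliminate_conjunction(belief))
# Each conjunct, e.g. ('loves', 'John', 'Mary'), follows deductively.
```

Nothing outside the tuple matters to the rule, which is exactly why, on Fodor's view, such rules cannot by themselves capture context-sensitive properties like simplicity or centrality.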

But what about the many cases where thought must be sensitive to
factors outside the syntax of the mental representation in question,
for example to the larger belief system in which the representation is
embedded? Fodor's first example, our calculation of the relative
simplicity of competing beliefs, presumably applies whenever we are
deciding which of several novel propositions to espouse. Everything
else being equal, we tend to opt for the simplest one. But a belief's
simplicity is determined not just by its local syntax, but, more
importantly, by how well it meshes with our existing belief system. As
Fodor points out, "the effect that adding a new thought has upon the
simplicity of a theory in situ is context dependent." (p. 26).

Revisions in our belief system may also require calculations about how
central different beliefs are within the system, since we are more
likely to revise or abandon peripheral beliefs than more central ones,
and this also is manifestly context dependent. So too, Fodor claims, is
much of our daily decision making generally, since many of the
parameters we use here are context sensitive.

There is a way for the CTM to preserve its premise that local syntax is
the only property of mental representations that determines their
causal role in mental events. This is to allow that it is not just the
local syntax of a given mental representation upon which mental
processes operate, but the summation of the local syntax of each of the
mental representations in the relevant belief system. But relevant
beliefs, Fodor claims, can in principle come from anywhere in "the
totality of one's epistemic commitments," and this totality, he points
out, "is VASTLY [italics] too large a space to have to search if all
one's trying to do is figure out whether, since there are clouds, it
would be wise to carry an umbrella." (p. 31) Feasible problem solving
requires that "not more than a small subset of even the relevant
background beliefs is actually consulted." (p. 37)

The existence of this kind of thinking, i.e. that which encompasses
global properties of belief systems, has been recognized time and again
by cognitive scientists and philosophers of mind, variously as global,
holistic, or abductive reasoning. It is sometimes formulated as
the frame problem: how do we know which beliefs are relevant to a
particular situation? At the start of "How the Mind Works," Pinker
cites Daniel Dennett's hypothetical example of how this question has
frustrated the modeling of human decision making in robots. A robot
programmed to safely retrieve a battery from a room that also contains
a time bomb, and thus to consider first the side effects of various
strategies for doing so, spends hours calculating every theoretically
possible contingency, stopping only when the bomb explodes. As Fodor
points out, it is ironic that Pinker, while raising the frame problem
at the outset of his book, and characterizing it as an essential
problem about how the mind works, never revisits it later on.

Fodor proceeds to consider the main alternative to the CTM, namely,
connectionism, and argues that this, too, fails to account for
abductive reasoning. Connectionism treats the mind not as a classical
computer, but as a network of primitive nodes. Mental representations
are defined by specific patterns of interconnection and degrees of
strength between connections. The problem with this conception, Fodor
says, is that there is no way to define specific types of nodes across
different networks. Since nodes are primitives, they belong to a
particular node type solely by virtue of the specific network they are
embedded in. There is, then, no notion of an abstract constituent or
concept that can occur in two different mental representations (e.g.
"loves" in "John loves Mary" and "Mary loves John"). Connectionism thus
fails where the CTM succeeds, leaving unexplained the productivity,
compositionality, and systematicity of thought, together with the
mechanics of deductive reasoning. But it also fails to account for
abductive processes. Because there's no way to generalize elements from
one network to another, there's no way to embed the same belief, with
different centrality, in two different belief systems. And since, Fodor
claims, connections in a neural net cannot be added or removed, only
strengthened or weakened, there is no way to express changes in belief
systems vis-a-vis the relevance of one belief to another.

Thus dispensing with connectionism, Fodor returns to the CTM, and
discusses how Pinker et al. implicitly attempt a way out of the frame
problem with a conception of the human mind as massively modular,
consisting of a large number of informationally encapsulated databases.
These modules, to the extent that they effectively "frame" the sets of
beliefs that are relevant to different situations (e.g. having to
remove a battery from a room that contains a time bomb), may define
reasonably narrow search spaces for making decisions. But then there's
a more general problem: how do you decide which module is the relevant
one for a given situation? The frame problem, Fodor concludes, remains
unsolved.

Fodor proceeds to critique the evolutionary grounds on which modularity
has been justified, arguing first against the use of evolutionary
theory to explain how the mind works. He points out that evolutionary
considerations in fact don't, and shouldn't, constrain other fields of
science, even other topics in human physiology. How much do, or should,
notions about selection pressure, for example, inform our understanding
of how the heart works? Also, he argues, there is too little
information about how selection pressures might have changed the
cognitive architecture of our ancestral apes, or about how changes in
the brain are reflected in the mind. Noting that our brains are grossly
similar to those of existing apes, while our minds are grossly
different, he argues that "it's entirely possible that quite small
neurological reorganizations could have effected wild psychological
discontinuities between our minds and the ancestral ape's." (p. 88) The
human mind, rather than being an evolutionary adaptation, may simply be
the result of an evolutionary accident or "saltation," and if so,
principles of evolutionary fitness are irrelevant to how it works.

Nonetheless, Fodor points out, proponents of modularity depend
critically on evolutionary arguments, because the databases within the
mind's purported modules are full of contingent propositions (e.g.
"steer clear of snakes") that are extraordinarily unlikely to have
arisen by evolutionary accident.

Fodor concludes that only a small part of human cognition is currently
within our grasp of understanding -- deductive reasoning, for which the
local syntax of mental representations is sufficient, and other
processes that apply locally within modules (for example, the language
module's linguistic processing). Placing what remains elusive up there
with the mystery of consciousness, Fodor sums it up by asking how
mental processes can be "simultaneously feasible AND [italics]
abductive AND [italics] mechanical." (p. 99)

"The Mind Doesn't Work that Way" is an important book whose major
points must be taken seriously by all those interested in how the mind
works. It gives us reason to view abduction both as a pervasive
phenomenon in human cognition, and as a serious problem that must be
solved before we can say we understand the human mind. And it shows,
quite meticulously, how the two standard theories of cognition, the
computational theory of mind and the connectionist model, have
failed to explain it. Fodor's deep skepticism provides a needed
antidote to the occasionally downright ebullient optimism of Pinker et
al.

But is abduction really as mysterious as Fodor deems it -- so beyond
our current understanding, he says, that we'd best put it aside for now
and work on other things?

Fodor's gloom follows from his assumption that there are only two real
possibilities -- local searches, which are insufficient, or exhaustive
ones, which aren't feasible. But what about some sort of intermediate
search that approximates an exhaustive one -- a search that only
explores those parts of our epistemic commitments where relevant
beliefs are likely to reside, and that occasionally overlooks things?
After all, people make errors of omission all the time, probably far
more often than we realize, and perhaps nearly as often as we perform
abduction. When I'm out hunting for berries, perhaps my mind does not
bother consulting its beliefs about number theory. But perhaps number
theory actually can help locate berries -- it's just that we'll
probably never realize this. I, for one, may never find out even that I
failed to use the optimal berry-picking strategy, let alone why I
failed, and people in general will probably never think of, let alone
bother to investigate, the connection between number theory and berry
picking.

OK, Fodor might reply, but how does one know which parts of one's
epistemic commitments to overlook? Haven't I just explained away the
frame problem by sweeping it under the table? This brings us back to
Fodor's
take on the massive modularity hypothesis. Does it really fail to get
anywhere with the frame problem? True, Pinker doesn't spell out how our
minds decide which module or modules are relevant to a given situation.
But searching among the modules, even if there are tens of thousands of
them, is not quite as bad as searching through every single individual
belief. Furthermore, perhaps the mental organization of modules allows
feasible approximations of exhaustive searches. Fodor claims that
"[y]ou can't decide a priori which of your beliefs bear on the
assessment of which of the others because what's relevant to what
depends on how things are contingently IN THE WORLD [italics]." (p. 32)
But don't we know something about our world, if not innately, then as
we learn and grow? And might we therefore organize our belief systems
to reflect its contingent structure?

Suppose, then, that some modules are subsets of others, and are
organized (innately, or dynamically as we learn) as hierarchical trees,
with more general modules (e.g. one for social situations and another
for nonsocial situations) at the roots. Perhaps each module contains
certain (innate or acquired) criteria, necessary and/or sufficient
and/or fuzzy, and branches of the hierarchy are considered in parallel
until they are ruled out as probably irrelevant. Then perhaps we have a
scheme wherein abduction is both feasible and mechanical -- as well as
an account of why we so often commit errors of omission.

One can even imagine a role for quasi-connectionist networks in all
this -- networks that avoid Fodor's objections to the brand of
connectionism he critiques. Perhaps the lowest-level modules in our
imagined hierarchy, i.e. our most specific databases, are organized as
networks, not of primitive nodes, but of those syntactic mental
representations that are the darlings of the CTM (akin to Pinker's
proposed associative network of phonological features for irregular
verbs in his latest book, "Words and Rules"). This retains all the
advantages of compositional syntax in deductive reasoning, and allows a
given mental representation to recur in different belief systems. It
also provides for the expression of degrees of relevance between, and
the relative centrality of, different beliefs in different systems, and
permits patterns of connection to change without altering the identity
of mental representations. In any case, assuming that there is at least
some relationship between the mind and the brain, actual (neural)
networks must constitute the foundation of any and all systems of
mental representations, for it is precisely through networks of
connection between cells that brains store memories.
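The quasi-connectionist idea can likewise be made concrete with a toy of my own (not a proposal from either book): let the nodes be structured syntactic representations rather than primitives, with weighted edges encoding relevance between beliefs. The same representation can then recur, with different centrality, in two different networks.

```python
# Toy "quasi-connectionist" belief network: nodes are syntactic
# representations (tuples), edges carry relevance weights.  Because
# node identity is independent of the network, the same belief can
# appear in two systems with different weights.

class BeliefNet:
    def __init__(self):
        self.weights = {}               # (rep, rep) -> relevance strength

    def connect(self, a, b, w):
        self.weights[(a, b)] = w        # add or re-weight a connection

    def relevance(self, a, b):
        return self.weights.get((a, b), 0.0)

john_loves_mary = ('loves', 'John', 'Mary')
mary_loves_john = ('loves', 'Mary', 'John')

net1 = BeliefNet()                      # one belief system
net1.connect(john_loves_mary, mary_loves_john, 0.9)

net2 = BeliefNet()                      # a different belief system
net2.connect(john_loves_mary, mary_loves_john, 0.1)

# The identical representation recurs in both nets with different
# relevance; changing a weight never alters the node's identity.
print(net1.relevance(john_loves_mary, mary_loves_john))   # 0.9
print(net2.relevance(john_loves_mary, mary_loves_john))   # 0.1
```

Since the nodes are compositional tuples, the "loves" constituent is trivially shared across representations, sidestepping Fodor's objection that connectionist primitives cannot be typed across networks.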

Also not fully convincing are Fodor's criticisms of the use of
evolutionary theory in explaining how the mind works. Unlike all the
other, far simpler and far more transparent, organs in our body, the
black box that is the brain cries out for all the clues we can gather,
and considerations of evolutionary adaptation may be suggestive, if not
conclusive.

Fodor rightly critiques Pinker for sometimes conflating innateness with
informational encapsulation: just because a belief is innate doesn't
mean it is specific to a particular module. But Fodor then proceeds, in
showing how dependent massive modularity theorists are on evolutionary
explanations, to conflate these properties in his own way. When he
points out that the databases that make up the purported modules are
full of contingent beliefs that couldn't have arisen through
evolutionary accident, he implicitly assumes that these beliefs must,
by virtue of their residence in innate modules, themselves be innate
rather than acquired and subsequently stored there. Furthermore, there
is, in fact, strong evidence for the innateness of a considerable
number of contingent beliefs -- evidence not directly from evolutionary
theory, but from cultural universals. When one notices a ubiquitous
fear of snakes within all the cultures of the world, including those
restricted to the most urban of environments, one has to wonder whether
there is an innate belief to the effect that one should steer clear of
snakes. Cultural universals appear time and again in "How the Mind
Works," but not in "The Mind Doesn't Work that Way", and one has to
wonder how Fodor would explain them.

This is a book that desperately needed an editor. It combines the
formal Latinisms ("conspecifics," "supervene on," "coextensive with"),
the P's and Q's, and the opaque, quasi-mathematical acronyms (E(CTM),
M(CTM)) that are so beloved of philosophers with the philosopher's
aversion to concrete, non-degenerate, real-life examples. It is riddled
with jokey asides (the most egregious of which was the one about the
potato future, which had me scurrying down the garden path to a
pointless footnote) and chatty commentary: (from p. 74) "'... And why
should it matter to anyone whose time is valuable (unlike, it would
appear, that of the present author)?' Temper, temper." Key terms are
left undefined, swimming around in one's head as one tries to follow
the discussion: is a "thought" different from a "mental process," or a
"belief" different from a "mental representation"? And what precisely
is the RTM, or Representational Theory of Mind?

With enough perseverance one eventually figures out what Fodor means,
because he keeps recapitulating, in slightly different terms, his major
points. With a little reorganization, and greater clarity and more
concrete examples at the outset, he could free up a lot of space to do
justice, again with specific examples, to a question that cries out for
much more attention: just how pervasive is abduction in our daily
decision making?

Glynn, Ian (1999). An Anatomy of Thought: The Origin and Machinery of
the Mind. Oxford University Press.

Pinker, Steven (1997). How the Mind Works. Norton.

Pinker, Steven (1999). Words and Rules: The Ingredients of Language.
Basic Books.

Katharine Beals received her Ph.D. in linguistics from the University
of Chicago in 1995. From 1995 to 2000 she worked as a Senior Software
Engineer with the Natural Language Group at Unisys. She is currently at
home with her baby daughter and at work on a book about her deaf,
autistic son, which explores such issues as language modality, cochlear
implants, and language and consciousness in autistic people.