Review of  The Handbook of Brain Theory and Neural Networks

Reviewer: Bilge Say
Book Title: The Handbook of Brain Theory and Neural Networks
Book Author: Michael A. Arbib
Publisher: MIT Press
Linguistic Field(s): Cognitive Science
Issue Number: 14.1823


Date: Tue, 24 Jun 2003 11:30:19 +0300
From: Bilge Say
Subject: The Handbook of Brain Theory and Neural Networks, 2nd ed.

Arbib, Michael A., ed. (2002) The Handbook of Brain Theory and
Neural Networks, 2nd ed., MIT Press.

Bilge Say, Middle East Technical University, Turkey


This comprehensive reference volume is a collection of
around 300 articles by various experts, covering mostly the
fundamentals of neuroscientific brain theory, neural
modeling for neuroscience as well as for higher-level
cognitive processes, and selected applications of
artificial neural networks. These articles are
complemented by articles on the mathematics of the
architectures and dynamics of artificial neural networks.

The Handbook consists of three parts. Part I and Part
II, written by Arbib himself, form an introductory
passage into the bulk of 285 articles in Part III. Part
I is a brief and condensed introduction to the elements
of brain theory and neural networks. The biological
neuron and its artificial models are introduced followed
by a general introduction to the history and styles of
neural modeling and a concluding section on artificial
neural networks as dynamic and adaptive systems.
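As a toy illustration of the kind of artificial neuron model Part I introduces (a sketch of the classic weighted-sum-and-squash unit, not code from the Handbook; the weights and bias below are invented for the example):

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A basic artificial neuron: a weighted sum of inputs plus a bias,
    passed through a logistic (sigmoid) activation function."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-net))

# With these (hand-picked) weights the unit behaves like a soft logical AND:
print(artificial_neuron([1.0, 1.0], [10.0, 10.0], -15.0))  # close to 1
print(artificial_neuron([0.0, 1.0], [10.0, 10.0], -15.0))  # close to 0
```

Networks of such units, with weights adapted by a learning rule rather than set by hand, are the adaptive dynamic systems the introduction describes.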

Part II presents 22 roadmaps organized under 8 major
themes, each roadmap consisting of a subset of the
articles in Part III selected within that interest area.
For each roadmap, Arbib has written an introductory
guided tour, giving some background on the theme of the
roadmap and previewing how each article contributes to
that interest area. Additionally, there is a meta-map
introducing the 8 major themes: Grounding Models of
Neurons and Networks; Brain, Behaviour, and Cognition;
Psychology, Linguistics, and Artificial Intelligence;
Biological Neurons and Networks; Dynamics and Learning
in Artificial Networks; Sensory Systems; Motor Systems;
and Applications, Implementations, and Analysis.

All articles in Part III (each 3-6 pages in length,
double column) have an Introduction section, where the
subject is introduced in an accessible manner and basic
research pointers are given, and a Discussion section at
the end, where a critical evaluation of the research in
the subject area of the article is presented. In between,
major contributions to research in the area of the
article are reviewed, sometimes concentrating on the
author(s)' own work, sometimes presented in a more comparative
framework. Each article's main text is followed by three
intra-Handbook pieces of information: Roadmap, the
roadmaps the article is related to; Background, the
articles in the Handbook that might serve as background
reading for the article; and Related Readings, related
articles in the Handbook. These are further followed by
a list of references, in which major references are marked
by a black diamond.

The sheer size of the Handbook makes it impossible to
review each article, so I have decided to review only the
16 articles in the Linguistics roadmap. Of course, there
will be other articles of interest for each reader; the
roadmaps in Part II make it easier to form one's own
reading path.
Language Processing (Shillcock) is a review of
connectionist language processing models for both
spoken and written language. Written from a cognitive
science perspective, it especially emphasizes the
strengths of such models in modeling change, namely
language acquisition, language breakdown, and language
evolution. The emergence of complex linguistic
behaviour from basic building blocks and the gradedness
of linguistic phenomena are demonstrated by example
models. In Optimality Theory in Linguistics, Zuraw
reviews the basic mechanisms of Optimality Theory and
extensions such as stochastic constraint rankings.
Advantages and disadvantages are presented alongside a
brief review of how Optimality Theory has been inspired
by neural networks. In Constituency and Recursion in
Language, Christiansen and Chater reevaluate
constituency and recursion in language from a
connectionist perspective. They review various ways
that these two concepts can be modeled in connectionist
networks and the implications of such models for
language processing.
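The core evaluation mechanism of Optimality Theory that Zuraw's article reviews can be sketched in a few lines (a toy illustration, not from the Handbook; the constraint names and candidates form a standard textbook-style example):

```python
def ot_winner(candidates, ranking):
    """Optimality-Theoretic evaluation: candidates are compared on their
    constraint-violation counts, constraint by constraint in ranking
    order; the lexicographically smallest violation profile wins."""
    return min(candidates, key=lambda cand: [cand[1][c] for c in ranking])

# Toy tableau for an input like /tak/: delete the final consonant or keep it?
candidates = [
    ("ta",  {"Max": 1, "NoCoda": 0}),   # deletion violates faithfulness (Max)
    ("tak", {"Max": 0, "NoCoda": 1}),   # keeping the coda violates NoCoda
]
print(ot_winner(candidates, ["Max", "NoCoda"])[0])   # faithfulness ranked high: "tak"
print(ot_winner(candidates, ["NoCoda", "Max"])[0])   # markedness ranked high: "ta"
```

The stochastic extensions Zuraw discusses replace this fixed ranking with constraint rankings sampled anew at each evaluation.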

Neurolinguistics (Gordon) introduces neurolinguistics as
the study of all aspects of brain and language: basic
methods and models, and how those models are extended by
data obtained from normal humans as well as those with
language deficits. In Reading, Holden and Van Orden
review psycholinguistic experimental research and
connectionist modeling on reading. They concentrate on
the dichotomy as to whether word recognition is analytic
or synthetic (or holistic) and show how a nonlinear
dynamic view of language processing can help resolve
this issue. In Imaging the Grammatical Brain,
Grodzinsky gives an account of how studies on lesioning
data and functional neuroimaging can help shape
hypotheses on the basic units of analysis in language
and their neural plausibility.

In Speech Processing: Psycholinguistics, Chater and
Christiansen show how connectionist models can bridge
the gap between experimental studies of
psycholinguistics within brain theory and traditional
linguistic accounts of language processing on issues
such as modularity, concentrating on speech segmentation
and aural word recognition. In Speech Production, Byrd
and Saltzman summarize current research on speech
production and show how a dynamic perspective brought
about by connectionist models can blur the distinction
between phonology and phonetics. Fowler, in Motor
Theories of Perception, evaluates the hypothesis that
motor systems are used in perception, taking the motor
theory of speech perception as a principal example. The studies
presented in Language Evolution: The Mirror System
Hypothesis (Arbib) can be taken as evidence for a motor
theory of speech perception as well as for a theory of
language evolution: mirror neurons found in the premotor
cortex of the monkey (akin to the language areas of the
human brain) respond both when the monkey performs an
action and when it perceives someone else performing the
same action.
In Language Evolution and Change, Christiansen and Dale
review the role of connectionist modeling as tests for
hypotheses about language evolution, language variation
and change. Language Acquisition (MacWhinney) is an
exploration of how connectionist models have challenged
(and have been challenged by) Chomskyan assumptions
about a language acquisition device. Almor, in Past
Tense Learning, reviews the past tense debate and the
various theoretical issues raised by connectionist
models of the acquisition of past tense inflectional
forms.

In Speech Recognition Technology, Beaufays et al. review
some neural network models that are used in large
vocabulary, continuous speech recognition. Convolutional
Networks for Images, Speech and Time Series (LeCun and
Bengio) is a presentation of a specific, biologically
inspired neural network architecture successfully used
in applications such as handwritten character or speech
recognition. Likewise, Bourlard and Bengio in Hidden
Markov Models, introduce a specific architecture, namely
hidden Markov models, as a special case of stochastic
finite state automata and outline the benefits of their
hybrid usage with artificial neural networks.
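The view of an HMM as a stochastic finite state automaton that Bourlard and Bengio start from can be illustrated with the standard forward algorithm (a toy sketch, not from the Handbook; the two-state model and its probabilities are invented for the example):

```python
def hmm_forward(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm: the total probability of an observation
    sequence under an HMM, summing over all hidden state paths."""
    # Initialize with the probability of starting in each state and
    # emitting the first observation there.
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        # Propagate through the stochastic transitions, then emit.
        alpha = {s: emit_p[s][o] * sum(alpha[r] * trans_p[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())

# Toy two-state automaton over a two-symbol alphabet {"x", "y"}
states  = ("A", "B")
start_p = {"A": 0.6, "B": 0.4}
trans_p = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit_p  = {"A": {"x": 0.5, "y": 0.5}, "B": {"x": 0.1, "y": 0.9}}

p = hmm_forward(["x", "y", "y"], states, start_p, trans_p, emit_p)
```

In the hybrid systems the article outlines, a neural network typically replaces the emission probability tables with learned estimates.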

It is stated in the Handbook that the second edition is
an extensive update of the first: of the 266 articles in
the first edition, only 9 have remained unchanged; all
others have been updated or completely rewritten. About
one third of the articles in this edition are on new
topics; the shift in coverage has been away from
applications of artificial neural networks and toward
cognitive modeling, including language, and especially
modeling for computational neuroscience.

The publisher has announced that an online version
of the Handbook will become available in August on
CogNet, MIT Press' electronic resources and community
tools site for researchers in cognitive and brain
sciences.


The inspiration for this comprehensive Handbook is stated
by its editor as two great questions: "How does the
brain work?" and "How can we build intelligent
machines?". The collection of articles answers the second
question more indirectly, mostly in the form of
selected applications and mechanisms of neural network
modeling. To me, the main attraction of the Handbook
is the shareability and mutual understandability of
resources and knowledge among three sometimes disparate
approaches that take neural function and form as their
starting points: computational neuroscience ("systematic
use of mathematical analysis and computer simulation to
provide better models of the structure and function of
living brains"); connectionist computational modeling
for cognition (studies modeling human cognitive processes
in terms of artificial neural networks, usually using
more abstract and simpler models than computational
neuroscience); and neural computing ("design of machines
(software and hardware) with circuitry inspired by, but
which need not faithfully emulate, the neural networks of
living brains").
The articles are concise, well-edited, and comprehensible
given the right background. For the interested linguist,
though, some technical background presented in a gentler
and more accessible manner than in the Handbook may be
necessary to understand some of the articles in full.
Thus, complementary resources such as Lytton (2002)
and McLeod et al. (1998) might be advisable. The
roadmaps and the various ways of cross-referencing used
make the Handbook suitable for exploratory as well as
reference reading. The only thing I found missing was a
glossary, or at least some highlighting of definitional
references in the Subject Index, given the vast
terminology covered by the Handbook.


Lytton, W. W. (2002). From Computer to Brain:
Foundations of Computational Neuroscience. Springer.

McLeod, P., K. Plunkett and E. T. Rolls (1998).
Introduction to Connectionist Modelling of Cognitive
Processes. Oxford University Press.

ABOUT THE REVIEWER Bilge Say is a faculty member in the Cognitive Science Program of Middle East Technical University, Turkey. Her research interests lie mainly in computational modeling for cognitive science and computational linguistics.
