LINGUIST List 11.2175

Mon Oct 9 2000

Review: Messing & Campbell: Gesture, Speech & Sign

Editor for this issue: Andrew Carnie <carnie@linguistlist.org>


What follows is another discussion note contributed to our Book Discussion Forum. We expect these discussions to be informal and interactive; and the author of the book discussed is cordially invited to join in. If you are interested in leading a book discussion, look for books announced on LINGUIST as "available for discussion." (This means that the publisher has sent us a review copy.) Then contact Andrew Carnie at carnie@linguistlist.org

Directory

  1. Zouhair Maalej, Review of Messing & Campbell: Gesture, Speech, and Sign

Message 1: Review of Messing & Campbell: Gesture, Speech, and Sign

Date: Sun, 8 Oct 2000 17:28:53 +0200
From: Zouhair Maalej <zmaalej@gnet.tn>
Subject: Review of Messing & Campbell: Gesture, Speech, and Sign

Messing, Lynn & Ruth Campbell (eds.) (1999). _Gesture, Speech and Sign_.
Oxford: Oxford University Press. (227 pp.)

Reviewed by Zouhair Maalej, University of Manouba (Tunisia)

The book is a collection of eleven papers edited by Messing & Campbell. It
includes a Preface (co-authored by the editors), an Introduction (by
Messing), and three subheadings: (A) The Neurobiology of Human
Communication, (B) The Relationships among Speech, Signs, and Gestures, and
(C) Epilogue: A Practical Application, each including thematically related
papers. In the Preface, the editors define the scope of the book as "a
genuinely interdisciplinary scientific study of gesture," thus implying that
the book is about gesture in relation to sign and speech. One frequently
quoted reference by most contributors is McNeill (1985, 1992).

SYNOPSIS

Preface

Messing & Campbell explain that the investigation of gestures in signed
languages (SLs) has lagged behind because of the fear that SLs might lose
their status as "fully formed languages." On the other hand, although
speech-accompanying gestures have not suffered the same disrepute, research
in the area remains fairly scant because gestures have been considered
peripheral to speech.

Introduction: An Introduction to Signed Languages

The introduction is intended to give background information on SLs. Messing
emphasises that American Sign Language (ASL) is a full-fledged language on a
par with spoken ones (with its phonology, phonetics, syntax, morphology, and
lexicon), and notes that the degree of similarity between two SLs cannot be
determined by the similarities between their spoken counterparts. Quoting
Valli & Lucas, Messing shows that ASL has most of the criteria of a
language: productivity, displacement, pragmatic variety, role switching
between addresser and addressee, signer self-correction, exposure to sign
data, metalinguistic dimension, etc. Alongside ASL, we are told, the USA
has a number of visual communication systems such as Manually Coded
English (MCE) (a system "devised by educators to make the morphosyntax of
English visible"), Contact Signing (a contact system between a fluent deaf
signer and a hearing signer knowing MCE or ASL as a second language),
Fingerspelling (a system whereby a sign stands for a letter of the
alphabet), Simultaneous Communication (a bimodal system whereby a message is
simultaneously presented in speaking and through visual modality), and Cued
Speech (a system of handshapes and positions meant to make the sounds of a
spoken language visible).

(A) The Neurobiology of Human Communication

1. Neuropsychology of Communicative Movements

Feyereisen is interested in the way movements are transformed into meanings,
and his objective is to review neuropsychological theories in order to
determine the different components of human manual communication. The author
invokes Kimura (1993) and Lieberman (1991-1992) as emphasising that
vocalisation (language) and manual activity (action) arise from the same
neural systems. However, recent research on the control of action in
neuropsychology suggests that the execution of limb movements is preceded by
mental representations of the movements, and that representations involving
a multiplicity of tasks such as transporting, catching, and manipulating
depend on different cerebral mechanisms. A lesion such as optic ataxia
affects the control of hand direction in space, leaving intact patients'
visual and tactile capacities to identify objects and touch various parts of
their body. Visual agnosia patients, on the other hand, have no trouble
controlling hand movement in space, but are impaired in recognising
objects visually, which suggests that there are "at least two visual
cortical pathways, one for reaching ... and one for recognising objects" (p.
6). Other evidence supporting the specialisation of different neural systems
in action and perception can be found in research on seeing a gesture to
imitate it and seeing it to understand it, where meaningful gestures
activate the left hemisphere whereas meaningless ones activate regions in
the right hemisphere. However, "the systems for gesture imitation and
recognition are not independent" (p. 8).

The second section of the paper deals with apraxia - a disorder of voluntary
action. In neuropsychology, the assessment of apraxia relies on three
elicitation techniques: verbal command, imitation, and actual use of
objects. While apraxia affects manual and facial movements, it is not clear
whether limb and buccofacial apraxia originate in the same control
mechanism. Apraxia is also assessed through information processing models
such as Roy & Hall's (1992) and Rothi et al.'s. Roy & Hall's model is
concerned with accounting for "dissociations of performance in pantomime and
imitation tasks" (p. 12). Results suggest that "some apraxic subjects can
imitate despite inadequate pantomime," and "pantomime may be preserved and
imitation defective" (p. 12). Rothi et al.'s model, however, is concerned
with isolated gestures. The two models seem to agree that "pantomime from
spoken instructions requires semantic processing, whereas imitation can be
performed without comprehension" (p. 13). Feyereisen concluded that "the
route between visual analysis of gesture and motor execution was not as
direct as previously assumed" (p. 14).

The last section of the paper is devoted to the investigation of whether
speech disorders (e.g. aphasia) and gesture impairment (e.g. apraxia) are
governed by the same control mechanism. Although both aphasias and apraxias
are cases of left-hemisphere lesions, there exist aphasics who do not suffer
from apraxia, and vice versa, which suggests proximal localisation of brain
regions responsible for both disorders rather than a unitary mechanism. This
issue is further investigated through speech-related gestures. In cases of
fluent and non-fluent aphasia, "gestures were used to compensate for speech
production difficulties" (p. 16), which argues strongly against the unitary
mechanism hypothesis. Further evidence for the autonomy hypothesis of speech
and gesture production could be sought in Hadar & Butterworth (1997), who
studied what they called "ideational gestures" (iconic, deictic,
conventional, and indefinite) among conceptual, semantic, and phonological
aphasic patients, and concluded that the proportion of iconic gestures was
lower, and that of indefinite gestures higher, in aphasics suffering from
conceptual impairment. Work on normal and pathological ageing shows that the
use of speech regresses and gets superseded by pointing gestures. One
important contribution is McNeill (1992), who suggested the participation of
visuospatial cognition in gesture production.

2. Neural Disorders of Language and Movements: Evidence from American Sign
Language

Corina concentrates on disorders such as aphasia, Parkinson's disease, and
apraxia, and how they impact the use of American Sign Language (ASL). It is
common knowledge that damage to the left hemisphere of the brain occasions
language disorders while damage to the right hemisphere gives rise to
visuospatial impairment. However, in sign aphasic disorders, although the
left hemisphere is involved, deficits are the result of "lesions more
posterior than those observed in users of spoken language" (p. 29).
Paraphasic errors in sign language affect the parameters of handshape,
location, movement, and orientation, and result in phonemic, morphological,
and semantic impairments, suggesting that these errors do not reflect
problems in symbolic conceptualisation or motor behaviour. Right hemisphere
disorders, which were thought to occasion no damage to language skills, have
been shown to be responsible for the disruption of meta-control of language
and discourse abilities in both speakers and signers. Parkinson signers show
a disruption in sign movement and precision of articulation, a reduction of
the articulatory space, loss of articulatory contacts, and hesitation and
slowing in the initiation and execution of movements, which impairs their
overall expressivity. Studies of apraxic patients point to the separability
of sign language behaviour and the ability to perform gestures.

3. Emotional and Conversational Nonverbal Signals

Continuing his study of emotions and facial expressions, Ekman devotes his
paper to emblems, illustrators, manipulators, regulators, and emotional
expressions.
(i) Of all these non-verbal signals, emblems seem to be the most conscious
and voluntary signals. Emblems are a socially learnt and culturally variant
body language, which does not exclude multicultural emblems acquired through
culture contact. However, cultures differ in the number of emblems they
include and the messages for which they exist. Emblems may accompany words,
substitute for words in the flow of speech, comment on words, or occur
instead of words. Emblems can be iconic or gestural (involving hand,
shoulder, head, and facial movements), and are face-to-face phenomena
involving body parts situated between the waist and the head. When they are
simultaneous
with speech, emblems are paralinguistic features that contribute an extra
dimension to the conversation or add emphasis to speech.
(ii) Illustrators are also socially learnt. Drawing on Ekman & Friesen
(1969), Ekman added the last two types of illustrators to what Efron (1968)
suggested. Such illustrators include batons (emphasising words), ideographs
(sketching a path or direction of thought), deictic movements (pointing to
objects), kinetographs (depicting a bodily action), spatial movements
(depicting spatial relationships), pictographs (drawing a picture of their
referents), and rhythmic movements (depicting rhythm or pacing an event).
Illustrators are associated with the hands and even the head and the feet.
Batons are realised, in particular, by facial movements (especially brow
raising and lowering) which are contemporaneous with loudness. Brow raising
seems to be associated with positive emotions while brow lowering tends to
be used with negative emotions. Brow raising and lowering were also found to
be used with statements that ask questions. The same results, notes Ekman,
have been arrived at in research in sign language. The frequency of
illustrators is a function of the speaker's degree of emotional involvement,
decreasing with emotional detachment, boredom, fatigue,
and consciousness of one's speech. One of their pragmatic roles in
conversation is their functioning as help for floor holding and as a
self-priming technique to explain difficult thoughts.
(iii) Manipulators occur when one part of the body semiconsciously strokes,
presses, scratches, licks, bites, or sucks another one. They occur both in
private and in public, reflecting nervousness or simple mania, increasing in
moments of discomfort and decreasing in comfortable situations.
(iv) As their name indicates, regulators monitor the back-and-forth nature
of exchanges between interactants in conversation. Some of the regulators
include "agreement listener responses" and "calls for information," which
involve brow raising and lowering. "Floor holders," on the other hand, are
used by the speaker to prevent interruptions, and may involve holding the
hand or rising from a chair, etc.
(v) Adopting a Darwinian view of emotional expressions, Ekman points out
that they are universal "involuntary signals which provide important
information for others," and are "a sign that an emotion is occurring" (p.
50). As such, they belong in the public domain, tending to be enhanced
in the presence of others (as in child care, mating, coping with rivals,
dealing with predators, etc.). Emotional expressions may provide information
about past events, future actions, likely thoughts and plans, the
expresser's internal states, and the most advisable future act for the
observer. Emotional expressions are seated in the face, voice,
positioning of the head and body. Evidence for facial expressions is
stronger in anger, fear, disgust, contempt, surprise, sadness/distress, and
enjoyment. However, not all facial expressions are emotional expressions.

4. Language from Faces: Uses of the Face in Speech and in Sign

Campbell reports on ongoing work about the perception of face acts "under
specific communicative constraints" (p. 58). Contrary to common beliefs,
Campbell argues that speechreading the face is prominent not only among the
deaf but also among hearing people. Seeing the speaker may facilitate the
intelligibility of speech. Indeed, relying on mouth, cheek, and chin
movements, visual speechreading can "influence auditory speech perception"
(p. 59), as many experiments on the effect of identity (familiarity with the
speaker) and audiovisual speechreading attest. The capacity for
speechreading is impaired after left-hemisphere lesion (affecting
supramarginal speech processing), although the right hemisphere also impacts
it, and rates higher in the perception of faces and visual movements,
showing that "audiovisual speech and silent speechreading do not seem to
lateralize to the left hemisphere as cleanly as does heard speech" (p. 63).
Deaf people are not necessarily good speechreaders (the good ones are said
to have "enhanced low-level visual capacities" (p. 64)), and nothing
indicates that the capacity for speechreading is quasi-null in deaf people
because of their different neurophysiological architecture. As an
alternative to face reading, mouth patterns may be enhanced by hand gestures
as a way of distinguishing phonemes with identical lip-patterns or
disambiguating place of articulation of a consonant. Such patterns find
themselves exaggerated in hearing speakers communicating with signers. Brow
movements and puffing of the cheeks have linguistic meanings whereas
direction of gaze and the pose and movement of the head have syntactic (with
interrogatives, negation, conditionals, relative clauses, etc.) and
pragmatic (deictic) functions. The fact that face acts are used in SL and
speech does not point to identity of neurophysiological architecture.
Invoking Corina's work, Campbell emphasised the "right-hemisphere advantage
in hearing subjects" and the mixed and sensitive lateralization among deaf
people, which Corina has interpreted as carrying linguistic and affective
dimensions in signers.

(B) The Relationships among Speech, Signs, and Gestures

5. Triangulating the Growth Point -- Arriving at Consciousness

Adopting a Vygotskyian framework, McNeill investigates what he calls
"online" gestures that are not "governed by conventions" (p. 78) and which
seem to be in synchrony with speech. McNeill calls this process "growth
points" (GPs) ("minimal psychological units" in Vygotsky's terminology),
which are (i) "theoretical units in which the principles that explain mental
change or growth ... apply to realtime utterance generation;" (ii) "the
initial form of thought out of which a dynamic process of development
emerges;" and (iii) used to model the discontinuity of mental life by a
given idea. Emphasising the importance of the combination of the two
modalities of imagery and language, McNeill explains that "reducing the
growth point to its linguistic and gestural components would destroy the
whole and with it the possibility of discovering units of psychological
processing other than the constituent structures of sentences" (p. 79). GPs
have the following properties: (i) cross-linguistic applicability; (ii)
reducibility to utterance with a gesture; and (iii) context-dependency.
Context-dependency for GPs is the "background from which a psychological
predicate is differentiated" (p. 80). Such a combination of imagery and
language anchors linguistic categories in a visuospatial context (p. 80).
The implication of GPs for meaning is that the same word may have different
psychological input every time the context changes. McNeill takes GPs as
units of "verbalizable consciousness," of which visuospatial cognition and
gestures are part.

6. The Role of Speech-related Arm/Hand Gestures in Word Retrieval

Krauss & Hadar investigate the communicative value of gestures through the
use of arm/hand in word retrieval in memory. While research in this area
attributes the use of arm/hand to defusing tension as a result of
frustration at failure to retrieve a word, Krauss & Hadar propose a
"facilitative" rather than a "communicative" role in lexical retrieval.
Arguing against the communicativeness of gestures, they invoke the fact that
speech-accompanying gestures are often made from the speaker's own
perspective, hence their misleading nature. To the claim that speakers
gesture when they see their addressees, they reply that speakers do gesture
even if their addressees do not see them. They believe that "gestures
originate in the process that precedes conceptualization and construction of
the preverbal message" (p. 103), and offer a model of gesture production on
a par with speech production in three stages: (i) a spatial/dynamic feature
selector, whose job is to take representations activated in spatial or
visual working memory, select spatial/dynamic features, and turn them into
spatial/dynamic specifications for the motor planner; (ii) a motor planner,
which translates the abstract spatial/dynamic specifications into a motor
program; (iii) a motor system, which executes instructions into gestural
movements. Krauss & Hadar argue that facilitation takes place when the
gesture production system participates in lexical search, speculating that
it does so by affecting the speech production system.
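The three-stage model Krauss & Hadar describe (feature selector, motor planner, motor system) can be pictured as a simple pipeline. The sketch below is purely illustrative; all names, data structures, and the toy feature set are invented here and are not part of the authors' model:

```python
from dataclasses import dataclass

# Illustrative sketch of a three-stage gesture-production pipeline in the
# spirit of Krauss & Hadar's model. All identifiers are hypothetical.

@dataclass
class SpatialFeature:
    name: str         # e.g. "upward", "circular"
    magnitude: float  # relative size of the movement

def feature_selector(working_memory: list[str]) -> list[SpatialFeature]:
    """Stage 1: select spatial/dynamic features from spatial working memory."""
    spatial = {"upward", "circular", "grasping"}  # toy feature inventory
    return [SpatialFeature(name=item, magnitude=1.0)
            for item in working_memory if item in spatial]

def motor_planner(features: list[SpatialFeature]) -> list[str]:
    """Stage 2: translate abstract specifications into a motor program."""
    return [f"move:{f.name}:{f.magnitude}" for f in features]

def motor_system(program: list[str]) -> list[str]:
    """Stage 3: 'execute' the motor program as gestural movements."""
    return [step.replace("move:", "gesture ") for step in program]

memory = ["upward", "red", "circular"]  # only spatial items yield gestures
gestures = motor_system(motor_planner(feature_selector(memory)))
print(gestures)  # → ['gesture upward:1.0', 'gesture circular:1.0']
```

The point of the pipeline shape is that non-spatial content ("red" above) never reaches the motor stages, mirroring the claim that gestures originate in spatial/visual working memory rather than in the preverbal message.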

7. The Development of Gesture with and without Speech in Hearing and Deaf
Children

Goldin-Meadow defends the view that the forms and functions of gestures are
in complementary distribution in early language development among hearing
children, and investigates gesture when it takes over communicative tasks in
deaf children. In hearing children, gestures are chronologically traced back
to deictic, conventional, iconic, and metaphoric gestures, and are found to
combine with speech but not with other gestures. Gesture-speech combinations
range from redundant to integrated ones, with the latter signalling a single
communicative act. Children perform both gestures that repeat what speech
says, and gestures that give information different from that conveyed
through speech, known as gesture-speech mismatches. Among deaf children,
gestures develop into a "home sign," which resembles a sign language, and
includes deictic and iconic gestures based on pantomime. Among these
children, iconic gestures are concatenated within a single sentence.

8. Do Signers Gesture?

Emmorey contrasts signs and gestures. Signs belong to lexical categories,
use non-concatenative word-formation processes (like infixation in Semitic
languages), combine with one another, form a pattern of forms (handshapes)
rather than meaning, and exhibit subordinate clause structure and
long-distance dependencies. Meaning in SLs is sensitive to the ordering of
morphological processes and phonological form. Gestures that accompany
speech, however, might be iconic, metaphoric, and deictic. Other possible
gestures are called "beats" or "batons," which are associated with
metanarrative functions, tend to be invariable, and play a
discourse-pragmatic rather than a semantic function. Speech-gesture
synchrony is suggestive of their co-expressive dimension in referring to the
same referent, although some gestures can precede speech (but have never
been observed to follow it). Speech-accompanying gestures have been claimed
to be (i) informative, (ii) facilitative, and (iii) remedial. The
communicative function of gestures has been questioned by Krauss and
co-workers on empirical evidence, whereby (i) speakers gesture even when
they are alone, (ii) preventing gestures from occurring does not seem to
substantially impair communication, and (iii) seeing the speaker's gestures
does not seem to enhance comprehension, which contradicts McNeill's (1992)
claim that gestures are not only communicative but also acts of thought,
thus suggesting that speech and gesture arise from the same computational
stage. To the question whether signers gesture, Emmorey argues that they do
in producing "component gestures" (communicative gestures that are embedded
as part of the utterance) and deictic gestures, with the consequential
effect that they produce them alternately with signs, thus indicating that
gesture (more mimetic than idiosyncratic) and sign are not co-expressive.
Signers also perform body gestures (with face, body, voice), which differ
from other gestures by lending themselves to simultaneous occurrence with
signs. However, a difference is made between affective facial or evaluative
gestures (inconsistent onset and offset patterns and lack of co-occurrence
with specific signs) and facial grammatical gestures. It is also argued that
signers produce "interactive gestures" that monitor discourse co-ordination
(e.g. turn-taking). In the case of signers, ambiguity may arise as to the
status of a gesture within the signing space, i.e. whether a gesture is part
of a sign utterance or alternates with a sign.

9. Signs, Gestures, and Signs

Stokoe & Marschark's paper stands out in this collection by arguing that
what unites speech, sign, and gesture is more important than what divides
them. They emphasise that the division between verbal and non-verbal
languages is flawed as it equates signs with the non-verbal, absurdly
classifying signers with animals, and that the confusion between speech and
language is disastrous because it classifies SLs as non-languages. Alluding
to the grammar of spoken and signed languages, they point out that, like
spoken languages, SLs involve movements of the vocal apparatus. Referring to
their own research, they report that "oral gestures" make signs even "more
expressive than mere words" (p. 163). Speculating about the origin of
body-parts gestures, Stokoe & Marschark argue that they "emerge naturally
from human physiology and its interactions with the world" (p. 167). Signs
and gestures have played various functions among hearing people in
situations (i) where there is no shared spoken language, (ii) where a spoken
language is "disruptive," or (iii) where it is "forbidden" (p. 171).
Agreeing with McNeill, they confirm that gesture and speech occur among both
hearing people and signers, and serve the same functions with both as
attested in studies in first language acquisition, the exception being
deixis among signers, which is predominantly lexicalised. Speaking from an
evolutionary stance, Stokoe & Marschark argue for "an integrated view of
language," where gestures have been "part of the human communication
repertoire" ("language had to begin with gestures", p. 178) when the spoken
or the signed languages were not felt to better serve "social information."
Further, while spoken or signed languages may have undergone changes,
gestures survived change because most of them are cultural or pancultural
ones, even though the frequency of gesturing is associated more with oral
cultures.

10. Two Modes -- Two Languages

Messing contrasts bilinguals knowing a spoken and a SL with those knowing
two spoken languages. Reporting on research done by Newport & Meier (1985),
Messing points out that speaking and signing children go through the same
stages of language acquisition. However, as reported by Meier & Newport
(1990), signing children show greater precocity, which is accounted for by
the fact that (i) the motor co-ordination system for signing develops
earlier than the one for speaking, (ii) the perception of signs is easier
than that of words, and (iii) adults tend to be more attentive to
identifying signs than words. Hearing children reared in a linguistic
environment where two languages are spoken tend to code-mix non-redundantly,
and begin to differentiate the two codes around the age of three, and
code-shift at around the age of four. Hearing children growing up in an
environment where one language is spoken and one is signed do the same, and
tend to tailor their choice to the way they are addressed. There are,
however, variations on code-switching across categories of bimodal
communication. Non-signers and beginning signers did not code-switch in one
experiment, while intermediate and advanced signers did. Messing documented
the motivations for intramodal code-switching and intermodal interactions
from Messing (1996): (i) strategic negotiation, (ii) identity marker, (iii)
domain marker, (iv) compensation, (v) accommodation, and (vi) stylistic
effect. Processing simultaneous communication (simcom) is found to interfere
with the swiftness of speech, and signers are found to perform it more
slowly than hearing people. Like McNeill, Messing emphasises
the synchronicity of gesture and sign with speech.

(C) Epilogue: A Practical Application

11. Embodied Conversational Agents: A New Paradigm for the Study of Gesture
and for Human-Computer Interface

Cassell proposes to give gestures predictive power, replacing the
more descriptive and distributional existing theories. In modelling terms,
Cassell documents human gesture-speech behaviour and human-computer
interaction. Concerning the former, Cassell argues that "gesture and speech
are different communicative manifestations of one single mental
representation" (p. 203). Concerning the latter, Cassell justifies the
introduction of gestures by invoking the use of gestures in human
face-to-face interaction, the use of non-verbal modalities in difficult
situations (e.g. noisy environments), and the attribution of "social
responses, behaviours and internal states to computers" (p. 204). The
simulator integrates emblematic and propositional gestures only. It uses
Hallidayian terminology of information structure (theme/rheme), and includes
semantic and pragmatic components monitoring discourse relations. The
interaction between gesture and speech is similar to that of words with
graphics in verbo-pictorial contexts, where distinct intonational contrasts
are correlated with the theme/rheme distribution of discourse. Gestures play
a similar role to intonation, and are found to be synchronous with rhemes
(or New information). Analogising from human-human conversation, Cassell
implements the model around the discourse domain of banking. The structure
of the simulator includes a domain planner (in charge of the concrete
actions to be executed and pragmatic knowledge) and a discourse planner (in
charge of the communicative actions and the synchronisation with the domain
planner). Space does not allow for a detailed description of this model to
be given here. The implementation of Animated Conversation revealed three
weaknesses: (i) the form of particular gestures was constant regardless of
discourse framework, (ii) too many gestures were generated independently of
discourse co-ordination (turn-taking), intonation, etc., and (iii) some
gestures were missing. Improvement and refinement of the system
are needed and ongoing.
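The two-planner architecture and the rule that gestures synchronise with rhemes (New information) can be sketched in a few lines. This is a hypothetical reconstruction, not Cassell's implementation: the function names, the dictionary-based utterance representation, and the banking example phrasing are all invented for illustration:

```python
# Hypothetical sketch of a discourse planner + domain planner split, with
# gestures attached only to rhemes (New information), as the review describes.
# All identifiers and data shapes here are invented.

def discourse_planner(utterance: dict) -> list[dict]:
    """Plan communicative actions: attach a gesture to the rheme only."""
    theme, rheme = utterance["theme"], utterance["rheme"]
    return [
        {"speech": theme, "gesture": None},       # Given info: speech only
        {"speech": rheme, "gesture": "iconic"},   # New info: speech + gesture
    ]

def domain_planner(actions: list[dict]) -> list[str]:
    """Turn communicative actions into concrete execution steps."""
    steps = []
    for a in actions:
        step = f"say({a['speech']})"
        if a["gesture"]:
            step += f" + gesture({a['gesture']})"
        steps.append(step)
    return steps

# Toy utterance from a banking domain, split into theme/rheme by hand.
utterance = {"theme": "your balance", "rheme": "is 200 dollars"}
print(domain_planner(discourse_planner(utterance)))
# → ['say(your balance)', 'say(is 200 dollars) + gesture(iconic)']
```

The design point the sketch isolates is the synchronisation rule itself: because gesture assignment happens in the discourse planner, before concrete actions are generated, gesture and speech are derived from one representation rather than being merged after the fact.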

CRITICAL EVALUATION

A critical evaluation of papers that include empirical and scientific data
is not easy unless the critic is involved in the same empirical research
being assessed, which would enable him/her to confirm or counter the
results; this is not the case for the current reviewer. The papers in
this book study the form and function of gestures or facial expressions in
speech and SLs (Campbell, Goldin-Meadow, Emmorey, Stokoe & Marschark,
Messing), gestures in speech (Feyereisen, Ekman, McNeill), gestures in SLs
(Corina), gesture and memory (Krauss & Hadar), and gesture and speech in a
simulator (Cassell). While there is agreement as to the pervasiveness of
gestures in speech and SLs, their functions do not attract unanimity among
workers in the field. Disagreement centres on synchronicity vs. precedence
vis-à-vis speech and on the gesture-thought relation vis-à-vis the
onset/offset of speech and gesture.

What remains of this review is devoted to a few general critical remarks:

(i) One criticism of Ekman's categories of non-verbal signs is the overlap
between "emblems" (used to "emphasize and make the spoken word more
interesting," p. 46) and "batons" (which "accent or emphasize a particular
word or phrase," p. 47) within the category of illustrators. The same
overlap is noticeable between "illustrators" (which "can help hold the floor
for the speaker", p. 48) and the regulators known as "floor holders" (and
which are "responses made by the speaker to prevent interruptions," p. 50).
To me, there does not seem to be any difference in the characterisation of
"illustrator" and "regulator," although, as their names indicate, there
should be a difference between the two.

(ii) McNeill's claim that visuospatial cognition and gestures are part of
consciousness is not corroborated by evidence, and goes counter to evidence
arrived at in cognitive linguistics. For instance, Lakoff & Johnson (1999:
5) argue that "real human beings are not, for the most part, in conscious
control -- or even consciously aware of -- their reasoning. Most of their
reason, besides, is based on various kinds of prototypes, framings, and
metaphors." Lakoff & Johnson (1999: 11) also argue that "visual processing
falls under the cognitive, as does auditory processing." Most crucial is the
way Lakoff & Johnson (1999: 10) explicate the "cognitive unconscious": "most
of our thought is unconscious, not in the Freudian sense of being repressed,
but in the sense that it operates beneath the level of cognitive awareness,
inaccessible to consciousness and operating too quickly to be focused on."
Further evidence is adduced by Corts & Pollio (1999: 98), who argue that the
gestures that accompany figurative language happen in spontaneous "bursts,"
which suggests lack of premeditation and consciousness. Concerning the
synchrony/asynchrony of speech and gesture defended by McNeill, Cienki
(1998: 203) provides evidence for the asynchrony of verbal metaphoric
expressions with metaphoric gestures, which have been observed to often
"precede/anticipate" verbal metaphoric expressions.

(iii) Krauss & Hadar espouse a Gricean view of communication as intended
information, but categorically deny (p. 98) any intentionality behind the
use of gestures as communicatively functional. This runs counter to
McNeill's "consciousness hypothesis," which it would be expected to
corroborate, and rejects
empirical evidence pointing to the fact that gestures co-occur with language
in a coherent manner, i.e. gesturing is not a haphazard activity
semantically and pragmatically alien to the speech it accompanies (McNeill,
this volume) or anticipates (Cienki, 1998). Krauss & Hadar (p. 98) are most
certainly abiding by what Austin (1962: 105) said about intention
recognition, i.e. "(i) when the speaker intends to produce an effect it may
nevertheless not occur, and (ii) when he does not intend to produce it, it
may nevertheless occur."

(iv) The gesture-speech combinations observed in first language acquisition
among children, defended by Goldin-Meadow, provide further evidence for
McNeill's argument, thus falsifying Krauss & Hadar's claim that gesture
cannot integrate with speech to form a communicative whole.
If it is the case that such gesture-speech combinations exist among children
in other languages than English, then the status of gesture-speech
co-occurrence has to be rethought in light of this would-be evidence.

(v) It has been stated earlier that Stokoe & Marschark's paper stands out in
terms of content. The paper also stands out in terms of the typing errors
that it includes (e.g. (a) "... more expressive then either...," where "then"
is an obvious "than" (p. 161, paragraph 2, the last line but one); (b) "... he
argues that the of origin of language...," where "of" is de trop (p. 165,
paragraph 2, line 6); (c) "... a sentence S can broken down...," where "be" is
missing (p. 167, paragraph 2, line 6); (d) "... children use gestures all the
time to try make themselves understood ...," where "to" should have been
inserted after "try" (p. 173, paragraph 2, line 2), etc.).

(vi) According to the editors, the book should be accessible, among others,
to upper-level undergraduate students with an interest in gesture. Judging
from the intellectual quality of the papers, however, the book is hardly
accessible to undergraduates.

BIBLIOGRAPHY

Austin, J. L. (1962). _How to Do Things with Words_. Oxford: Clarendon
 Press.
Cienki, Alan (1998). "Metaphoric Gestures and some of their Relations to
 Verbal Metaphoric Expressions." In: Jean-Pierre Koenig (ed.), _Discourse
 and Cognition: Bridging the Gap_. Stanford, California: CSLI Publications,
 189-204.
Corts, Daniel P. & Howard R. Pollio (1999). "Spontaneous Production of
 Figurative Language and Gesture in College Lectures." _Metaphor and
 Symbol_ 14: 2, 81-100.
Efron, D. (1968). _Gesture and Environment_. New York: King's Crown.
Ekman, P. & W. V. Friesen (1969). "The Repertoire of Non-verbal Behavior:
 Categories, Origins, Usage, and Coding." _Semiotica_ 1, 49-98.
Lakoff, George & Mark Johnson (1999). _Philosophy in the Flesh: The Embodied
 Mind and its Challenge to Western Thought_. New York: Basic Books.
McNeill, D. (1992). _Hand and Mind: What Gestures Reveal about Thought_.
 Chicago: University of Chicago Press.

REVIEWER

Zouhair Maalej, Assistant professor of Linguistics, University of Manouba
(Tunis). Research interests include: metaphor, cognitive linguistics,
pragmatics, cognition-pragmatics interface, cognition-culture interface,
(cognitive) stylistics, (critical) discourse analysis, functional
linguistics, translation studies, etc. Publications include work on voice,
perception, and metaphor.

**********************
Dr Zouhair Maalej,
Department of English, Chair,
Faculty of Letters and Human Sciences,
University of Manouba,
Tunis-Manouba, 2010, Tunis, Tunisia.
*********************************************
Office phone: (+216) 1 600 700 Ext. 174
Office Fax: (+216) 1 600 910
Home Telefax: (+216) 1 362 871
E-mail: zmaalej@gnet.tn
URL: http://simsim.rug.ac.be/ZMaalej
**********************************************