
LINGUIST List 25.2309

Mon May 26 2014

Review: Sign Language; Pragmatics: Herrmann & Steinbach (2013)

Editor for this issue: Joseph Salmons <jsalmons@linguistlist.org>

Date: 31-Dec-2013
From: Michael Morgan <MWMBombay@gmail.com>
Subject: Nonmanuals in Sign Language

Book announced at http://linguistlist.org/issues/24/24-2995.html

EDITOR: Annika Herrmann
EDITOR: Markus Steinbach
TITLE: Nonmanuals in Sign Language
SERIES TITLE: Benjamins Current Topics 53
PUBLISHER: John Benjamins
YEAR: 2013

REVIEWER: Michael W Morgan, National Deaf Federation Nepal

The linguistically naive (or at least those less familiar with the linguistics
of sign languages) often assume that sign languages are languages of the hand,
with signs formed by the hands being the sole articulators, and they alone
defining the linguistic system. Thus, not surprisingly, many languages refer
to sign language as 'hand language' (e.g., Japanese and Chinese). Likewise,
beginning sign language learners tend to focus on the hands (literally as well
as figuratively, with eye gaze centering on and following the hands). It is
thus perhaps natural that linguistic analysis of sign languages began in the
1960s with the 'manual' elements of sign language.

Soon however, it became clear that sign languages employed multiple
articulators in addition to the hands, and focus expanded to these other
articulators -- upper torso, head, face (this last including mouth, cheeks,
eyes, eyebrows, facial expression, etc.). Although, as in spoken language
discourse, these other articulators also perform non-linguistic functions
('gesture', affective facial expression, etc.), it became equally apparent
that they participate integrally in the grammar and lexicon of sign languages.
Linguistic research on sign languages expanded to include non-manual elements
(NMSs, non-manual signals, NMMs, nonmanual markers, or simply nonmanuals), and
over the years nonmanuals have become a rich and productive area of research.
The book under review is a collection of seven papers contributing to this
growing field of study.

Articles collected here are updated versions of papers previously published in
the journal “Sign Language & Linguistics” 14:1 (2011). They are based on
presentations originally given at the “Nonmanuals in Sign Languages (NISL)”
conference held at the University of Frankfurt am Main, Germany in April 2009.
The book runs 197 pages, including a short introductory chapter, seven
research articles, and a three-page index.

Annika Herrmann and Markus Steinbach's introductory chapter “Nonmanuals in sign
languages” (pp. 1-6) serves three functions. First, it provides a brief
introduction to the importance of nonmanuals in sign languages. While
generally 'borrowed' from the surrounding speaking community, they have been
incorporated into sign languages in such a way as to become fully a part of
the linguistic systems of those sign languages. Secondly, H & S provide a list
of eleven questions which have to be addressed “to get a comprehensive picture
of nonmanuals in sign languages” (p. 2). Covering diverse issues -- what the
lexical, syntactic, semantic and prosodic restrictions and functions of
nonmanuals may be; how nonmanuals interact with other nonmanuals as well as
with manual signs; why there is variation and optionality in the use of
nonmanuals between and among signers; where nonmanuals fit in the interface
between prosody, syntax, semantics and pragmatics; what the difference is
between grammatical nonmanuals used in sign languages (alone) and affective
nonmanuals used in both signed and spoken languages; how nonmanuals are acquired;
how they are processed; and the origin of nonmanuals -- these questions
provide us with a reasonably complete and coherent research agenda. Third, and
finally, this introduction provides a summary of the contents of the book.

The first research chapter is Sarah Churng's “Syntax and prosodic consequences
in ASL: Evidence from multiple WH-questions” (pp. 7-45). This chapter examines
three types of multiple WH-questions in American Sign Language (ASL): stacked
wh-questions, requiring a pair-list reply (1); and two types of coordinated
wh-questions (2-3):

(1) What foods did you eat for what reasons? (requiring a pair-list reply)

(2) What foods did you eat, and why did you eat at all?

(3) What foods did you eat, and why did you eat each food?

All three of these use exactly the same lexical signs (YOU EAT WHAT WHY), and
it is the 'spread' of the nonmanuals and the occurrence of prosodic pauses
which make each type semantically distinct. Incorporating three facts of
simple wh-questions -- (a) sentence-final location of wh-phrases, (b)
uncertainty as to whether this sentence-final position is shared by
sentence-final focus position, or separate from it, and (c) the fact that this
position may contain the entire wh-phrase in simple, but only the wh-head in
doubled wh-questions -- and showing that both right-movement and left-movement
models of derivation fail to account for all the facts, C proposes a third,
remnant movement, alternative, wherein focus movement is distinct from regular
wh-movement, and focus movement involves first movement to satisfy the focus
feature and then remnant movement of lower projections. Finally, after the
syntax-prosody interface is taken into account, and prosodic reset as a result
of A-bar movement posited, the three types of multiple wh-questions can be
properly derived and differences in the mandatory placement of pauses in each
accounted for. In this way, nonmanuals reflect the process of derivation
within UG, and affect semantic interpretation as well as prosody.

Kadir Gökgöz's chapter “Negation in Turkish Sign Language: The syntax of
nonmanual markers” (pp. 47-72) serves to document manual and nonmanual markers
in negative sentences in Turkish Sign Language (TİD, Türk İşaret Dili), and to
also develop a generative analysis of the syntax of negative sentences. The
following nonmanuals occur in negatives: backward head tilt (occurring in
almost half of all negative sentences in the database), headshake, single
head-turn, brow-raising and brow-lowering. Gökgöz bases his analysis on a
corpus of natural narrative and dialogue, and for each of the nonmanuals
above, percentage distributions across the database (including 56 negative
sentences) are presented and discussed. Since nonmanuals may also occur in
combination, percentages add up to more than 100%. Whether nonmanuals occur
accompanying the negative manual sign alone, or spread in a variety of ways
over additional signs as well is also analyzed. Backward head tilt is a
lexical nonmanual, predominantly occurring with negative manual sign alone,
exceptions resulting from phonological (i.e. non-grammatical) anticipatory
spread. Single head-turn is also lexical. Negative headshake appears to be
lexical in half of cases, in the other half it appears to be grammatical,
although data is insufficient to determine the exact grammatical function. The
nonmanuals analyzed as lexical (head tilt, head-turn and half of the
headshakes) each seem to be associated with different lexical signs. Finally,
brow raise and lowering are analyzed together (i.e. non-neutral as opposed to
neutral brow position). Non-neutral brow position occurs in 71% of all
negatives and is grammatical in nature, spreading over the entire sentence in
80% of cases. Syntactic spread in TİD, in contrast to ASL (Pfau & Quer 2002;
Neidle et al. 2002), appears to mark the syntactic domain of negation (not
only the c-command domain of negation, but also spec-head).

In the chapter “Eye gaze and verb agreement in German Sign Language: A first
glance” (pp. 73-90), Jana Hosemann presents results from an experiment
examining the correlation between verbal agreement and eye gaze in German Sign
Language (DGS, Deutsche Gebärdensprache). Previous work on ASL (Bahan 1996;
Neidle et al. 2000; Thompson et al. 2006) had shown that direction of eye gaze
did indeed correlate with agreement, although Thompson et al. indicated that
the correlation was not valid across all verb classes (and particularly, not
for so-called plain verbs). The results of the experiments on DGS show
considerable variation between signers, and also show that the situation in
DGS is similar to that found by Thompson et al. for ASL, namely that eye
gaze correlated much more highly with spatial and agreeing verbs than with
plain verbs. However, even for agreeing verbs, eye gaze was not as
systematically obligatory in DGS as it is in ASL.

Donna Lewin and Adam Schembri's chapter “Mouth gesture in British Sign
Language: A case study of tongue protrusion in BSL narratives” (pp. 91-110)
examines tongue protrusion (normally glossed as 'th'), analyzing data from ten
BSL (British Sign Language) narratives from two signers. While this mouth
gesture had already been attested in BSL with the meanings 'boring',
'unpleasant', 'too easy', the present study aimed to determine whether it also
occurred in the sense 'lack of control, inattention, and/or unawareness' as in
ASL (Liddell 1980). Mouth movements accompanying signing can be of several
kinds -- 1) adverbial, as in the cases just described; 2) enaction, as when
cheeks puff when signing 'blow up a balloon'; 3) echo phonology, where the
movement of the mouth mirrors some aspect of the movement of the hands; and 4)
other, which includes, among other things, examples of mouthing (full or
partial) spoken words -- and another goal was to examine the frequency and
distribution of tongue protrusion ('th') among these types. Various allomorphs
of the gesture were identified (for further study?), as were the lexical signs
co-occurring with each instance of 'th'. While there was considerable
variation in the frequency of 'th' between the signers studied (with one
producing twice as many instances as the other), for both signers the largest
category was 'other' (just under half for each signer). Echo phonology
accounted for the fewest number of instances overall, with all occurrences
coming from just one of the signers. Enaction and adverbial usage together
amounted to 31-53% of total instances, with the former occurring slightly more
than the latter for each signer. Allomorphs previously unattested were found.
The sense 'too easy' previously attested for BSL was indeed common, but in
addition adverbial usage with a sense similar to that described for ASL but
previously unattested for BSL was also common (although differing slightly in
form from the ASL mouth gesture).

Felix Sze's chapter “Nonmanual markings of topic constructions in Hong Kong
Sign Language” (pp. 111-142) examines a construction widely reported and
researched in other sign languages. Typically, in these other sign languages,
topic constructions are marked by nonmanuals such as head tilt and/or brow
raise, and by prosody (a pause setting the topic off from the rest of the
sentence); the goal of this chapter was to examine whether the same was true
of Hong Kong Sign Language (HKSL). In setting up the study, a wide range of
research on a variety of other sign languages was analyzed, and it was shown
that studies of 'topic' in fact have dealt with at least seven diverse types
and functions of constructions. In addition, the previous studies identified a
wide range of nonmanuals as marking 'topics'. In order to avoid some of this
type of ambiguity, for the present research, topics examined were restricted
to two main types: 'scene-setting' topics and 'aboutness' topics. Both
spontaneous and elicited data were used, and a total of 2346 tokens of
'aboutness' and 217 tokens of 'scene-setting' topics were analyzed, and
occurrence of nonmanual (brow raise and head position) and prosodic (blink,
pause, and lengthening of last sign) markers coded. Tabulated results showed
that only scene-setting topics (but not other types of topics, such as
'aboutness' topics and fronted objects) were regularly (though not
universally) marked in HKSL by nonmanuals, with head position and/or brow
raise marking such topics. However, it was also found that frequency of such
marking varies depending on the type of scene-setting topic: nonmanuals occur
ubiquitously when the scene-setter is a locative expression, more than 75% of
the time when it is an NP or subordinate clause setting up a temporal domain,
but less than half the time when it is a conventional temporal adverb.

Ronnie B. Wilbur in “Nonmanuals, semantic operators, domain marking, and the
solution to two outstanding puzzles in ASL” (pp. 143-173) examines three
nonmanuals: negative headshake, brow lowering, and brow raise. While headshake
and brow lowering occur in single functions (negatives and wh-questions,
respectively), brow raise occurs in a structurally varied range of situations.
Additionally, while negative headshake and brow lowering spread over their
c-command domains, brow raise does not do so. Drawing on previous research
(Pfau & Quer 2002, 2007) which indicates that negative headshake is syntactic
in ASL but morphological (i.e. affixal) in both DGS and LSC (Llengua de Signes
Catalana, Catalan Sign Language), W proposes that negative headshake is
associated with a monadic operator in ASL, attaching to the negative sign when
present (since it is the head of NegP), but obligatorily spreading over the
entire scope of negation when not. In similar fashion, the [+wh] operator is
also monadic, and so brow lowering, the nonmanual marker of [+wh], is
associated with [+wh] in C rather than with the wh- lexical sign itself. This
then accounts for the scope of brow lowering, licensed by the monadic
operator, which in turn is licensed by the semantics. Unlike negative
headshake and brow lowering, brow raise is associated with diverse types
of constructions (topics, focus associates, relative clauses, focused relative
clauses, conditional clauses, yes/no-interrogative, wh-cleft, etc.),
complicating the analysis of its occurrence and syntactic derivation. It is
proposed that this fact can be explained by the fact that, unlike the other
two nonmanuals examined which are licensed by monadic operators, brow raise is
licensed by dyadic operators, and thus occurs only in a narrowly defined
domain and cannot spread over its c-command domain. In addition to
explaining spreading of nonmanuals in each case, the model presented also
provides evidence that ASL has spec,CP on the left.

In the final chapter, “Linguistics as structure in computer animation: Toward
a more effective synthesis of brow motion in American Sign Language” (pp.
175-194), Rosalee Wolfe, Peter Cook, John C. McDonald, and Jerry Schnepp
discuss how linguistic analysis of nonmanuals, and in particular the domains
of brow raise and lowering, might effectively be incorporated into computer
animation of sign language to result in more accurate and acceptable
animation. They present a synthesis system which moves from the gloss stream,
adding, in sequence, morpho-syntactic modifications, phonemes and timing, and
geometric settings and timings to arrive at a 3D animation. Such a system works
well for simple questions requiring only interrogative nonmanual markers. They
then proceed to a case study of brow position (raised, neutral, and lowered),
discussing relevant linguistic findings, and then applying these to refine the
model. Non-linguistic (i.e. affective) brow position is then also discussed,
as is the effect of co-occurring linguistic and non-linguistic processes (e.g.
wh-question brow lowering with angry affect brow lowering). Finally, the role
of the artist in making animated visual messages more effective is discussed
(e.g. facial wrinkles added to highlight brow position). In user tests with
Deaf community members, the technique presented was highly effective, both for
linguistics (animated sentences repeated correctly with proper syntax 100% of
the time) and for non-linguistics (intended emotional state identified
correctly 95% of the time).
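
The staged pipeline summarized above (gloss stream, then morpho-syntactic
modifications, then phonemes and timing, then geometric settings) can be
sketched in a few lines of Python. This is purely an illustrative sketch of
the general architecture; all names, stages, and values below are this
reviewer's assumptions, not taken from the authors' actual system:

```python
# Illustrative sketch of a gloss-to-animation pipeline: each stage enriches
# the representation on its way toward geometric targets for an animator.
from dataclasses import dataclass

@dataclass
class SignFrame:
    gloss: str
    brow: str = "neutral"      # "raised" | "neutral" | "lowered"
    duration_ms: int = 400     # placeholder sign duration

def apply_morphosyntax(glosses, question_type=None):
    """Attach a nonmanual tier: e.g. brow lowering spreads over a wh-question,
    brow raise over a yes/no question."""
    frames = [SignFrame(g) for g in glosses]
    if question_type == "wh":
        for f in frames:           # spread brow lowering over the clause
            f.brow = "lowered"
    elif question_type == "yn":
        for f in frames:
            f.brow = "raised"
    return frames

def assign_timing(frames):
    """Stub timing stage: lengthen the final sign (prosodic lengthening)."""
    if frames:
        frames[-1].duration_ms = int(frames[-1].duration_ms * 1.5)
    return frames

def to_geometry(frames):
    """Map each frame to coarse geometric targets for an animation engine."""
    return [{"gloss": f.gloss, "brow": f.brow, "ms": f.duration_ms}
            for f in frames]

targets = to_geometry(assign_timing(apply_morphosyntax(
    ["YOU", "EAT", "WHAT"], question_type="wh")))
```

As the chapter notes, a strictly sequential model of this kind handles simple
interrogatives well; the interesting complications arise when linguistic and
affective brow positions must be composed in the same stretch of signing.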

The volume presents a diverse and well-rounded picture of research on sign
language nonmanuals no matter how you understand diversity. The sign languages
covered are geographically diverse: North America (three chapters on ASL),
Europe (one each on BSL from the UK and DGS from Germany), and Asia (HKSL from
Hong Kong in eastern Asia and TİD from Turkey in western Asia). They also
focus on a range of non-manuals (eye-gaze, mouth gesture, brow raising and
lowering, headshake, head tilt), and on a range of functions (interrogatives,
negation, verb agreement, adverbials, topic constructions). Articles address
both formal theoretical (the two chapters on ASL and the one on TİD) and
functional (the chapters on mouth gestures in BSL and on topics in HKSL)
issues, as well as practical goals (the final chapter on animation of
nonmanuals). Chapters -- not only those which draw their data from corpus
projects but also those which work within a theoretical framework (generative
grammar) which is sometimes criticized as being 'light' on data -- are
generally data driven. Even when the focus is on more theoretical issues,
interesting new data abound. Most of these articles also present interesting
cross-linguistic comparisons; indeed, several of the chapters take as their
starting point work on nonmanuals done on other sign languages.

For sign language linguists, this is a very welcome addition to the growing
literature on the subject. For general linguists (and for gesture researchers)
it can also serve as an introduction to the breadth of the subject.

This volume is extremely well edited. In addition to the paper text, a number
of videos of illustrating examples are available online (for the chapters on
TİD, HKSL, and on computer animation).

In closing, as someone who started off as a spoken-language linguist before
coming to sign language linguistics some decades ago, and as someone who made
that switch, yes, because of the remarkable complexity of sign languages
themselves, but also in large part because of the questions sign languages
raise about what big-L Language is and about how we should be doing
linguistics, I would raise a question concerning the validity (or at least the
necessary validity) of a statement made by the editors in the introduction:

“In sign languages, nonmanuals have become a genuine part of the grammatical
system because the visual-manual modality, unlike the oral-auditory modality,
offers the unique property to grammaticalize nonmanual and manual gestures.
The reason for this is that gestures use the same articulatory channel that is
also active in the production of signs, whereas spoken languages use a
completely different articulatory and perceptual system. Thus, manual and
nonmanual gestures frequently used in communication CANNOT become an integral
part of the grammatical system of spoken languages.” [emphasis added]

However, rather than exclude the possibility, it would seem more fruitful to
suggest that 'gestures' might in fact be a more integral part of the
linguistic systems of spoken language as well. It is an assumption which
should be questioned ... and tested. Languages (both individual little-l
languages and collective, big-L Language) tend to make use of whatever
'material' they have at their disposal, and so it seems at least possible that
-- aside from spoken language users with vision impairment -- non-vocal,
visual-gestural means are ALSO at their disposal. Thus, to this researcher at
least, it would seem surprising if no language made any linguistic use of this
potential building material. Co-speech gesture is by now a well established
field, if not of linguistics narrowly defined, then at least of the periphery
of linguistics. However, I would like to suggest that in addition to CO-speech
gesture, we should also be open to the possibility of speech gesture, that is,
the existence of gesture (non-vocal elements) which is fully linguistic in
nature.

The mere fact that so-called co-speech gestures seem to be optional is not
necessarily an argument for their exclusion from the linguistic system (and
thus the realm of linguistic inquiry). After all, as we see in some of the
research presented in this volume, many nonmanuals are also optional markers
in sign languages, despite being fully integrated into the linguistic system.
Perhaps then we should allow for the possibility that gesture (or at least
some gesture and other nonmanuals/non-verbals) MIGHT be language, MIGHT be a
part of the linguistic system, and adjust our research agenda accordingly.

Enfield's study of composite utterances in Lao gives us an idea of how
research might integrate gesture within language. “In composite utterances ...
different types and sources of information are complementary and
co-constitutive of a larger whole message. The composite utterance par
excellence involves simultaneous integration of (conventional) speech,
(symbolic indexical) gesture, and (non-conventional, iconic-indexical) visual
representations” (Enfield 2012: 150-151). While the gesture and visual
representations Enfield analyzes are both manual, perhaps research on the
integrative communicative semiotic of which language (narrowly defined) is a
part should be expanded to include nonmanuals as well.

Bahan, Benjamin (1996) “Non-manual realization of agreement in American Sign
Language”. Boston, MA: Boston University PhD dissertation.

Enfield, Nicholas J. (2012) “The anatomy of meaning: Speech, gesture, and
composite utterances” (Language Culture and Cognition 8). Cambridge: Cambridge
University Press.
Liddell, Scott (1980) “American Sign Language syntax”. The Hague: Mouton.

Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, & Robert G. Lee
(2002) “Syntax of American Sign Language: Functional categories and
hierarchical structure.” Cambridge, MA: MIT Press.

Newport, Elissa L., & Ted Supalla (2000) “Sign Language Research at the
Millennium”. In Karen Emmorey & Harlan L. Lane (eds.), “The signs of language
revisited: An anthology to honor Ursula Bellugi and Edward Klima”. Mahwah,
N.J.: Lawrence Erlbaum Associates. 103-114.

Pfau, Roland & Josep Quer (2002) “V-to-Neg raising and negative concord in
three sign languages”. “Rivista di Grammatica Generativa” 27: 73-86.

Pfau, Roland & Josep Quer (2007) “On the syntax of negation and modals in
Catalan Sign Language and German Sign Language”. In Pamela Perniss, Roland Pfau
& Markus Steinbach (eds.), “Visible variation: Comparative studies in sign
language structure”. Berlin: Mouton de Gruyter. 129-161.

Thompson, Robin, Karen Emmorey & Robert Kluender (2006) “The relationship
between eye gaze and verb agreement in American Sign Language: An eye-tracking
study”. “Natural Language and Linguistic Theory” 24: 571-604.

Michael W Morgan is a linguistic typologist specializing in sign languages,
and also a pedagogical linguist focusing on issues relating to the teaching of
sign language and to the use of sign language in deaf education and deaf
literacy programs. Having been part of the teaching faculty of BA-level
programs in sign linguistics for Deaf at Addis Ababa University (Ethiopia) and
at Indira Gandhi National Open University (India), he currently resides in
Kathmandu, Nepal, where he advises the National Federation of the Deaf Nepal
on sign language and deaf education issues. He has created a three-level
curriculum for training and certification of Nepali Sign Language
interpreters, and is also currently working on documenting Nepali Sign
Language, with the eventual aim of creating open-access online reference and
teacher/learner-support materials.