LINGUIST List 21.2981
Mon Jul 19 2010
Review: Syntax: Larson (2010)
Editor for this issue: Joseph Salmons
This LINGUIST List issue is a review of a book published by one of our supporting publishers, commissioned by our book review editorial staff. We welcome discussion of this book review on the list, and particularly invite the author(s) or editor(s) of this book to join in. If you are interested in reviewing a book for LINGUIST, look for the most recent posting with the subject "Reviews: AVAILABLE FOR REVIEW", and follow the instructions at the top of the message. You can also contact the book review staff directly.
Grammar as Science
Message 1: Grammar as Science
From: Ahmad Lotfi <arlotfi@yahoo.com>
Subject: Grammar as Science
Announced at http://linguistlist.org/issues/21/21-258.html
AUTHOR: Richard K. Larson
TITLE: Grammar as Science
PUBLISHER: The MIT Press
Ahmad R. Lotfi, Azad University (Iran)
Grammar as Science is an introductory textbook written by Professor Richard K.
Larson and designed and illustrated by Kimiko Ryokai according to Japanese design
principles, which emphasize the visual and graphic organization of material.
Grammar as Science is intended both for undergraduate students majoring in
linguistics and for non-linguistics majors taking an undergraduate course in
linguistics as an exercise in scientific theorizing and scientific thought.
The text does not emphasize the use of state-of-the-art technical
tools in syntax; instead, it encourages students to do syntax as a medium
for learning skills needed in scientific theory construction in general, and in
framing explicit arguments for theories (including the articulation of
hypotheses, principles and reasoning) in particular. The book is also designed
for use with Syntactica, a software program for creating and investigating
simple grammars, in a ''laboratory science'' course where the participants collect
and experiment with linguistic data.
The book is divided into seven parts (28 units in total), each focusing on some
important aspect of theory construction in syntax including grammars as
theories, choosing between theories, arguing for a theory, and expanding and
constraining the theory. Exercises appear at the end of each part (and sometimes
in the middle of a unit). The text is generously enriched with graphically
designed boxes reviewing the basic principles, data, analyses, tree diagrams, and
scholarly quotes of relevance to the topic under study.
PART I Setting Out (Units 1 and 2)
In the late Middle Ages, grammar (together with logic and rhetoric) played an
important role in the classical liberal arts curriculum. The curriculum itself
is obsolete now, but grammar is still relevant as part of a new science in which
language is studied as a natural object analogous to a bodily organ. Scientific
questions in linguistic studies of grammar include the nature of language
knowledge, how it is acquired, and how it is put to use. As we cannot directly
intuit answers, we approach the problem as a ''black box'' problem. In approaching
a linguistic problem, we begin by dividing it into more manageable parts.
Grammar is an area of research concerned with basic structural elements of a
language and their possible combinations.
PART II Grammars as Theories (Units 3-5)
Grammar could be understood as a scientific theory of linguistic knowledge. In
that case, linguists need to know how to systematically construct such grammars,
how to test them, and how and when to revise and extend them. Phrase structure
rules are introduced as theoretical claims concerning the way speakers of a
language pattern sentences. Well/ill-formedness judgments serve as the data of
syntax to be covered by our theory of language (or grammar). Such judgments
also function as the predictions of a theory. If such predictions are not borne
out, we need to revise and/or extend our grammar.
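The idea of a grammar as a predictive theory can be sketched in a few lines of Python. The rules and tiny lexicon below are my own illustrative assumptions, not Larson's: the point is only that a set of rewrite rules generates a set of strings, and well/ill-formedness judgments then confirm or disconfirm the theory's predictions.

```python
# Toy phrase structure grammar (illustrative, not from the book).
# Nonterminals map to lists of possible expansions; bare words are terminals.
RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["Homer"], ["Bart"]],
    "VP": [["V", "NP"]],
    "V":  [["chased"]],
}

def generate(symbol):
    """Yield every terminal string the grammar derives from `symbol`."""
    if symbol not in RULES:          # a terminal word
        yield [symbol]
        return
    for expansion in RULES[symbol]:
        # Expand each daughter and combine the results left to right.
        parts = [[]]
        for daughter in expansion:
            parts = [p + s for p in parts for s in generate(daughter)]
        yield from parts

predicted = {" ".join(words) for words in generate("S")}
# The grammar predicts these strings to be well formed...
print("Homer chased Bart" in predicted)   # True
# ...and, by omission, predicts this one to be ill formed:
print("chased Homer Bart" in predicted)   # False
```

If a speaker judged "chased Homer Bart" well formed, that prediction would be falsified and the rules would need revision, which is precisely the cycle the book drills.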
PART III Choosing between Theories (Units 6-10)
Grammar construction is hypothetico-deductive: The linguist forms a hypothesis
concerning the speaker's knowledge of language, for instance the structure of a
sentence. Conclusions are deduced, and then checked against language facts. A
theory will be preferred if it is empirically more adequate. Simplicity and ease
of extension are the other criteria we could use for evaluating our grammars. As
far as constituency (grouping words in a sentence as phrases/constituents) is
concerned, we may use a variety of linguistic phenomena such as conjunction,
proform replacement, ellipsis, dislocation and c-command (e.g. with regard to
negative polarity items, reflexive pronouns, and ''each ... the other''
constructions) as constituency tests.
Specific proposals about the structuring of constituents can now be examined
with regard to such structural relations as dominance and prominence
(c-command). Likewise, such criteria as inflection, position, and
meaning/function will be used to decide on the categories to which words and
phrases belong. Finally, Larson examines detailed examples to give the learner
some idea how our constructed grammars could be revised.
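For readers who like executable definitions, the two structural relations just mentioned can be sketched in Python. This is an illustrative toy of my own, not anything from the book or from Syntactica: dominance as the proper-ancestor relation, and c-command in its common "a node c-commands its sister and everything its sister dominates" formulation.

```python
# Minimal constituent-tree nodes with a pointer to the mother node.
class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.parent = None
        self.children = list(children)
        for c in self.children:
            c.parent = self

def dominates(a, b):
    """a dominates b iff a is a proper ancestor of b."""
    n = b.parent
    while n is not None:
        if n is a:
            return True
        n = n.parent
    return False

def c_commands(a, b):
    """a c-commands b iff neither dominates the other and
    a's mother dominates b (the 'sister and below' version)."""
    return (a is not b and not dominates(a, b) and not dominates(b, a)
            and a.parent is not None and dominates(a.parent, b))

# [S [NP Homer] [VP [V chased] [NP Bart]]]
homer = Node("NP")
v     = Node("V")
bart  = Node("NP")
vp    = Node("VP", [v, bart])
s     = Node("S", [homer, vp])

print(c_commands(homer, bart))  # True: the subject c-commands the object
print(c_commands(bart, homer))  # False: the object cannot c-command out of VP
```

The asymmetry printed here is exactly what licenses, e.g., a subject binding a reflexive in object position but not vice versa.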
PART IV Arguing for a Theory (Units 11 and 12)
As members of a community, scientists are expected to be able to clearly explain
which facts they are concerned with, which ideas they've formed about them,
which assumptions they make, and which conclusions their findings lead to. In
other words, they need to know how to argue for their theories. A typical
argument contains a general characterization of the structure under study, a
statement of data, principles that link the data to the structure, and a
conclusion bringing together structures, data, and principles. Specific examples
of these four steps are examined next.
PART V Searching for Explanation (Units 13-18)
The lexicon is introduced with reference to subcategory features, lexical rules,
and cooccurrence restrictions. Phrases, their heads, and more features are then
put in perspective, as phrases inherit features from their heads. Verbal
complements and adjuncts are compared in this respect. Iterability, optionality,
and lexical sensitivity are introduced as the diagnostics of the
complement/adjunct dichotomy. They are also examined in terms of their
incorporation into structural trees.
PART VI Following the Consequences (Units 19-23)
More technical tools are introduced: Sentential complements, complementizers,
the category CP, finite/nonfinite clauses, sentences as TPs, and the empty
category PRO are added to the linguist's toolbox. The invisible elements are
examined in detail and with regard to the differences between the structures
associated with such verb types as ''expect'' and ''persuade.'' NPs are then
compared with sentences with emphasis on the structural similarities including
the complements and adjuncts in Ss and sentence-like NPs, their orderings, and
even PRO in such NPs. A brief but still deep introduction to X-bar theory concludes the part.
PART VII Expanding and Constraining the Theory (Units 24-28)
In the final part, Larson focuses on interrogatives and movement, providing
evidence on wh-movement as a gapped and targeted syntactic operation probed by
some [+Q] feature of the morpheme WH. Then he focuses on the universal
constraints on the movement operation. First, movement proceeds in a stepwise manner
by a series of local movements in order to satisfy the Principle of the Strict
Cycle. Second, with TP and NP as phases, i.e. sentence-like nodes that require
completeness, the Phase Principle requires that an incomplete phase must be
registered at its edge by a trace of the missing element or the element itself.
Finally, crosslinguistic variation is analyzed as parametric with TP and NP as
phase nodes in English, CP and NP as such in Italian, and CP, TP, and NP in German.
We've all heard the story of the slave boy led to discover the basic principles
of geometry merely through Socrates' questions, and Plato's attempt to explain
it as the knowledge the boy remembered from an earlier period of life on earth.
We also know Chomsky's version of the logical problem of knowledge acquisition
and his reconstruction of the Platonic memory in terms of the innate knowledge
of language. As I was proceeding through the 433 pages of Larson's Grammar as
Science, I couldn't help identifying myself with that slave boy in Socrates'
times: Larson puts the reader on a road that leads smoothly and effortlessly to
the most complicated and abstract areas of investigation in modern syntax with
minimal use of assumptions, technical tools, and data from languages other than
English. The way he approaches theorization in grammar is a perfect match for
naturalistic language acquisition by human beings. Well done!
There are a number of details, however, that the author may want to take care of
in a future edition. I list these details in the order of appearance:
(1) On p. 47, there is an exercise with a branching node VP on a tree diagram
for ''Homer chased Bart.'' Neither VPs nor any other phrases have been
introduced up to this point in the book. The exercise itself is quite silent on
the issue, and VPs show up quite frequently on the pages that follow without any
explanation.
(2) On p. 54, Larson writes: ''When the set of expressions generated by some
rules includes *all* (emphasis mine) of the expressions of a language, we'll
call the rules a grammar for the language.'' This seems to be too demanding a
criterion for a grammar of a natural language. In practice, any theory,
irrespective of its academic status, will sooner or later face what Thomas Kuhn
calls anomalies: Absolute truth is a religious rather than a scientific notion.
Applying this limitation to linguistic theories, we need to recognize languages
as collective possessions of speech communities, and their grammars, mental or
theoretical, as systems of knowledge psychologically represented. This means
there can be no single grammar generating all expressions of a language (unless
it is a private language with only one native-speaker).
(3) As the data on un/grammaticality throughout the work (including those at the
bottom of p. 58) suggest, in his approach towards building grammars, Larson is
obsessed with structures (represented by trees/PS-rules). He ignores
well/ill-formedness in terms of agreement, tense inflection, etc., as in ''*Maggie
run,'' ''*Maggie rans,'' or ''*the Maggie ran.'' Grammar seems to be much richer and
also wider in scope than mere word order or phrasal patterns.
(4) Recursive rules on p. 90 all apply to the left branch of the node, and those
on p. 101 to the right one without any mention of left/right-branching
possibilities, and how one should choose between them.
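The reviewer's point here can be made concrete with a small Python sketch of my own (a toy, not anything from the book): a recursive rule expanding on the left daughter and one expanding on the right daughter can derive the same word strings while assigning them different constituent structures, so the choice needs motivating.

```python
# Two toy structure-builders over the same string of words,
# representing constituents as nested lists.

def left_branching(words):
    """[[['old', 'big'], ...]-style nesting: recursion on the left daughter."""
    tree = words[0]
    for w in words[1:]:
        tree = [tree, w]
    return tree

def right_branching(words):
    """['old', ['big', ...]]-style nesting: recursion on the right daughter."""
    if len(words) == 1:
        return words[0]
    return [words[0], right_branching(words[1:])]

ws = ["old", "big", "dog"]
print(left_branching(ws))   # [['old', 'big'], 'dog']
print(right_branching(ws))  # ['old', ['big', 'dog']]
```

Same terminal string, two incompatible claims about constituency, which is exactly why constituency tests are needed to decide between them.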
(5) On p. 131, we read: ''[T]he fact that the proform 'there' can replace 'in the
park' (in ''Homer chased Bart in the park.'') without change of function suggests
that it too is a PP.'' The reasoning seems faulty as we could also substitute the
pronoun 'it' for 'that David will win the game' in ''I know that David will win
the game'' with no change of function. It doesn't follow that the pronoun 'it' is
a CP. Moreover, we could also reverse our reasoning, and consider both 'in the
park' and 'there' as ADVs instead, which seems to be a more natural option.
(6) ''Tensed verbs include garden-variety main verbs ... . They also include
tensed auxiliary verbs like so-called perfective have and progressive be (p. 301).''
It would be more standard to use the term ''perfect'' here, and to reserve
''perfective'' for the aspectual category that presents a situation without
explicit reference to its internal temporal constituency, which is chiefly
expressed by the simple past tense in English.
(7) On p. 229, the theta role THEME is defined as ''Object or individual moved by
action'' (where I understand ''move'' as ''change in physical position''). Then on p.
323, ''Marge'' in ''Homer persuaded Marge of his honesty,'' and also ''Marge to
leave'' in ''Homer expects Marge to leave'' are labeled as THEME. That Marge moves
makes little sense to me, unless ''move'' is understood rather metaphorically like
''move'' in ''His misery moved us all.'' ''Marge to leave'' as a moved entity makes
no sense to me even metaphorically.
(8) Considering TPs and NPs as phases (with phases defined on p. 410 as
sentence-like nodes that require completeness) could be potentially confusing
given the fact that Chomsky (for instance, in his ''Derivation by Phase'' (2001))
understands phases as incremental chunks built from a separate lexical
sub-array, and then considers CPs and v*Ps as phases. I wonder why Larson
prefers to use such terminology as the Phase Principle and phase nodes here
instead of less confusing and more standard expressions ''subjacency'' and
''bounding nodes.'' At the very least, a footnote could be added to clarify things.
In this last part of the review I have concentrated on what I found in need of
revision in Larson's work. This, however, in no way lessens the merits of his
textbook. My experience as a teacher of syntax convinces me that the huge pile
of technicalities loaded in our textbooks usually leaves the author no
opportunity to show the reader how to think syntactically, and how to do syntax
by and for themselves. In the absence of a 'think-it-out-yourself' approach like
Larson's, it is all up to the teacher to help the student experiment with syntax
as (generative) theoreticians do, which is not always possible given the limited
contact hours, and the extended dimensions of generative syntax today. Now I
expect to be able to breathe (a little!) next semester as I leave my students in
Larson's good hands!
ABOUT THE REVIEWER
Dr. Ahmad R. Lotfi is a faculty member of Azad University at Khorasgan
(Esfahan, IRAN) where he offers courses in theoretical linguistics to
graduate students in General Linguistics and TESOL. His research interests
include minimalist syntax, second language acquisition studies in
generative grammar, and Persian linguistics.
Page Updated: 19-Jul-2010
While the LINGUIST List makes every effort to ensure the linguistic relevance of sites listed
on its pages, it cannot vouch for their contents.