Kempson, Ruth, Wilfried Meyer-Viol and Dov Gabbay (2000) Dynamic Syntax: The Flow of Language Understanding. Blackwell Publishers, xii+348pp, paperback ISBN: 0-631-17613-6, $34.95; hardback ISBN: 0-631-17612-8.
Simon Musgrave, Spinoza Project: Lexicon & Syntax, Leiden University
ACKNOWLEDGMENT I am grateful to Crit Cremers for helpful comments on a draft of this review.
OVERVIEW This monograph provides an introductory account of an approach to the syntax of natural language based on incremental parsing of strings of words in temporal/linear sequence. There are several crucial elements in this approach. No syntactic structures of the conventional type are included in the machinery (but I will suggest below that they are nevertheless assumed). The output of the parsing process is not a syntactic representation, but a formula in a logical language. This does not, however, constitute the denotation of the string. Instead, the formula is a representation, perhaps an incomplete one, of the meaning encoded. Pre-final stages of the parse are represented as trees, the nodes of which are decorated with logical formulae, and lexical items are conceived as the transitions between such trees. Each lexical item in turn triggers a process of updating the tree. This may involve the projection of new nodes, changing the formulae which decorate existing nodes, or both. Underspecification is integral to the parsing process. At any point, including the final representation, the logical formula may include metavariables. The reference of these may be resolved by later updates or it may be left to pragmatics to provide appropriate reference (Relevance Theory, Sperber & Wilson 1995, is assumed by the authors to be a suitable model of such processes). In addition, pre-final trees may include nodes whose position in the tree is unspecified. The authors use these mechanisms to analyse syntactic problems including relative clauses, wh- questions and crossover phenomena. The book is addressed firstly to syntacticians but will also be of interest to students of formal semantics and computational linguists. 
The book assumes rather detailed knowledge of at least some areas of the syntactic literature, and the formalism used requires the reader to keep track of complex logical formulae which include portions in standard first-order logic as well as statements in a modal tree logic. The appropriate audience is therefore graduate students and beyond. It is to the credit of the authors that their exposition is, for the most part, lucid and readable.
The book consists of a preface, nine chapters, a bibliography, a general index and an index of symbols. The first chapter offers an argument for the necessity of the approach adopted. Chapters 2 and 3 lay out the basics of the parsing model, with chapter 2 introducing the formal languages used and chapter 3 describing the mechanisms assumed. Chapters 4-7 present analyses of specific problems, respectively relative clauses, wh-questions, crossover phenomena and quantification. Chapter 8 returns to more general issues and sketches some thoughts about the design of language implied by the parsing-based approach. Chapter 9 is a detailed presentation of the formal apparatus.
Synopsis: Chapter 1 (Towards a syntactic model of interpretation) argues for an alternative to what is claimed to be the dominant paradigm in linguistics: that knowledge of language as a formal system must be characterised before it is possible to say anything about how that knowledge is used. The authors (henceforth KMG) propose what they claim is a more common-sense view, that knowledge of language means being able to process strings incrementally in order to extract (some) meaning from them. The only formal structure which exists in such a model is the sequence of partial interpretations which lead to the final representation. This argument is grounded in a discussion of the problems of the semantics of pronouns. Well-known problems preclude a straightforward denotational account of the meaning of pronouns, and similar problems are argued to apply to definite NPs as well. KMG take this to show that the meaning communicated in language must be representational rather than denotational, and that such representations must inevitably include un- or under-specified elements. These representations must underdetermine meaning, but in association with pragmatic processes, they can provide a context-specific content. The task of syntax is to model the process of building up the representation, a propositional formula, which can be interpreted in context.
Chapter 2 (The General Framework) introduces the formal apparatus used. The basis of this apparatus is the use of labelled deductive systems, where deduction is defined for pairs of labels and formulae. In the current model, each tree node is decorated with a label-formula pair. The formulae are statements of a logical form, expressed as terms in a typed lambda calculus. The labels provide the information which controls how the formulae can combine (how the tree is built), and they consist of two types of information. Firstly, the logical type of the term is a part of the label. Secondly, information about the relation of the node to other nodes in the tree is expressed in the label using the tree modalities of the Logic of Finite Trees (Blackburn & Meyer-Viol 1995). These modalities allow a label to refer to daughters of the node (including a distinction between argument daughter and function daughter), to the mother of the node and also to any dominating node or any dominated node. Each node also is annotated with a set of requirements. These are expressed using the resources already described. They can specify the information needed to complete the annotation of the current node, for example, the root node of every tree introduced with the initial word of the string is annotated with a requirement that the node ultimately be decorated with a formula of type t. Requirements can also specify restrictions on other nodes, for example a transitive verb introduces the requirement that its mother node should have a daughter of type e. Thus requirements drive the process of building a tree and an interpretation.
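Since the label-formula machinery may be unfamiliar, a minimal sketch may help. The following Python fragment is my own illustration, not KMG's formalism: a node carries a logical type (part of the label) and a lambda-calculus-style formula, and the types alone decide whether a function daughter can combine with an argument daughter.

```python
# Illustrative only: a node as a label-formula pair, with combination
# driven purely by the logical types (the labels).

class Node:
    def __init__(self, typ, formula):
        self.typ = typ          # e.g. "e", ("e", "t"), ("e", ("e", "t"))
        self.formula = formula  # an atomic term or a callable

def combine(func_node, arg_node):
    """Apply a function daughter to an argument daughter.

    Succeeds only if the function's type is (a, b) and the argument's
    type is a -- the type label controls how formulae may combine."""
    if not (isinstance(func_node.typ, tuple)
            and func_node.typ[0] == arg_node.typ):
        raise TypeError("types do not combine")
    return Node(func_node.typ[1], func_node.formula(arg_node.formula))

# "John sleeps": john' is of type e, sleep' of type (e, t);
# combining them satisfies the root requirement for type t
john = Node("e", "john'")
sleeps = Node(("e", "t"), lambda x: f"sleep'({x})")
root = combine(sleeps, john)   # type "t", formula "sleep'(john')"
```

Nothing here corresponds to KMG's requirements or tree modalities; the sketch only shows how a type label can drive combination in the way the chapter describes.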
Chapter 3 (The Dynamics of Tree Building) describes the processes by which trees are constructed. These are of three types: computational rules, lexical actions and pragmatic actions. The first type of process does not change the information content of the tree. Computational rules are rewrite rules which take either the annotation of a node or the requirements of a node and use them to compute additional facts. Lexical actions map one tree description to another, adding information. (The parsing process is strictly monotonic.) Lexical items are the transitions between one (partial) tree and another, and lexical entries take the form of conditional statements. If any requirements are associated with the item, they make up the antecedent of the conditional. If the condition is met, then the actions associated with the item are carried out; otherwise the transition is aborted. Pragmatic actions add information to a tree which is not contained in the string or is not a result of principles within the language system. These actions can be inferences using external information in association with annotations at a node, or the replacement of metavariables by more complete terms. Examples of how this process operates in detail are given for English and for Japanese. Japanese is chosen because it is verb-final, which means crucial lexical information is available only late in the parsing process, and because it is sometimes analysed as being non-configurational. In the current model, however, the result of the parse must be of the same type in every language: the final tree is a propositional structure, with the combination of nodes driven solely by the logical types of the leaves. The difference between the two languages is in the degree to which the pre-final trees have fixed nodes or unspecified nodes, which equates to a ''different load carried by general rules and the lexicon'' (p. 75).
In a language such as English, general rules allow the hearer to infer the logical type which must annotate some nodes; for example, an NP following a verb can be inferred to be of type e, saturating one argument of a transitive verb of type (e > (e > t)). But in Japanese, this information is provided by lexical items, case markers, which project requirements as to the environment which the NP must finally occupy in the tree. This is very close to the concept of 'constructive case' used in recent work in Lexical-Functional Grammar (Nordlinger 1998), a comparison not made by KMG, although this chapter does include a comparison of their approach with several other current frameworks.
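The conditional format of lexical entries can likewise be illustrated with a toy model (again my own construction, not the book's notation): an entry tests whether a requirement at the current node is met and, if so, performs its actions; otherwise the transition is aborted.

```python
# Illustrative only: a lexical entry as a conditional tree update.
# A node is modelled as a plain dict; "Ty(e)" is a stand-in for a
# requirement that the node be decorated with a formula of type e.

def make_entry(required, actions):
    def transition(node):
        if node.get("requirement") != required:
            return None                       # condition not met: abort
        # the requirement is discharged; otherwise only add information
        new = {k: v for k, v in node.items() if k != "requirement"}
        new.update(actions)
        return new
    return transition

# a proper name satisfies a requirement for a formula of type e
john_entry = make_entry(required="Ty(e)",
                        actions={"formula": "john'", "type": "e"})

updated = john_entry({"requirement": "Ty(e)"})  # actions carried out
aborted = john_entry({"requirement": "Ty(t)"})  # transition aborted
```

The names and the dict encoding are hypothetical; the point is only the if-then shape of an entry and the way an unmet requirement blocks the transition.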
Chapter 4 (Linked Tree Structures) deals with relative clause constructions and also includes a brief discussion of genitive constructions. The crucial move which allows the analysis of relative clauses is to allow one tree to be linked to another, with a formula annotating a node in one tree being identified with a formula annotating a node in the other tree. In a language such as English with head-initial relative clauses, this means that the head noun is linked to the root node of another tree, and some node within that tree is required to be annotated with a copy of the formula which annotates the head. The relation between the trees is defined as a tree modality; that is, there is a rule of LINK introduction (more accurately, a family of rules) which defines how information is shared between the two structures. The interpretation which results is that of a co-ordination. To quote the example given on p. 111, the sentence:
John, who I much admire, has left MIT.
is given an interpretation equivalent to:
John has left MIT and I much admire John.
The various possibilities for relative clause constructions which occur in different languages are accounted for by variation in the nature of the LINK relation and its mode of introduction. In languages which have a relativizer (or several of them), the LINK relation is projected by these lexical items. In languages which do not use relativizers, the relation can be introduced freely, other conditions being met. If a relativizer is used, it may or may not project the copy of the head node formula itself: if it does, then the formula annotates an (initially) unfixed node in the linked tree; if not, a requirement for the copy is projected and this must be fulfilled in the linked tree by some anaphoric device, typically a resumptive pronoun. These mechanisms are argued to extend naturally to the analysis of head-final relatives in, for example, Japanese (see further discussion in the evaluation below) and even to so-called internally-headed relative clauses. Brief sections of this chapter also suggest that the LINK relation can be used to analyse topicalisation structures with resumptive clitics, and to analyse genitive constructions.
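How the LINK relation yields a coordinate interpretation can be sketched very roughly as follows. The encoding is my own invention (KMG's rules operate on tree descriptions, not strings): the linked tree must contain a copy of the head's formula, and the two propositions are then conjoined.

```python
# Illustrative only: the requirement for a copy of the head's formula
# in the linked tree, and the coordinate interpretation that results.

def link(main_prop, head_formula, relative_prop):
    """Build the coordinate interpretation of a head-initial relative.

    relative_prop pairs the terms occurring in the relative clause with
    its proposition; the requirement for a copy of the head is modelled
    crudely as a membership check on that term list."""
    terms, rel = relative_prop
    if head_formula not in terms:
        raise ValueError("requirement for a copy of the head not met")
    return f"{main_prop} & {rel}"

# "John, who I much admire, has left MIT."
result = link("leave'(MIT')(john')", "john'",
              (["john'", "i'"], "admire'(john')(i')"))
```

Here the copy of `john'` inside the relative clause plays the role that, in KMG's system, is filled either by an unfixed node or by a resumptive pronoun.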
Chapter 5 (Wh questions: a general perspective) extends the representationalist perspective of the Dynamic Syntax (DS) approach to the analysis of wh- questions. The authors argue that there are problems with a denotational account of wh- items which are very similar to the problems they identified for anaphora in chapter 1. In particular, they note that although a standard view is that wh- items are operators which bind variables, there are many cases where scope interactions do not fall out as would be predicted on that account. The DS alternative is to treat wh- items as projecting a meta-variable which is a part of the formula annotating the root node of the final tree. Semantically, the meaning of a question is not fully specified; only a question and answer pair allows a complete interpretation. On this basis, there is no difference between fronted wh- constructions and in situ wh- constructions: the lexical item projects a meta-variable which ends up as part of the formula annotating the root node. The difference between the two possibilities is that a fronted wh- item has no fixed tree position when it is parsed; its position is established only as the tree is developed. The bulk of this chapter is devoted to a discussion of expletive wh- phenomena and so-called partial movement:
Was glaubst du, wen Jakob half?
what think you whom Jacob helped
'Who do you think Jacob helped?'
Such structures have been problematic for other frameworks (e.g. Horvath 1997, Johnson and Lappin 1999), but are handled in a straightforward fashion in the DS framework. The basic idea is that the initial wh- item (the wh- expletive) does not introduce a meta-variable itself. Rather, it projects a requirement that some following node introduce the meta-variable. The possibility of additional intermediate expletives and the various possibilities for the full wh- item (position in its clause, locality with respect to the nearest expletive, and whether case-marking affects the full wh- item or an expletive node) can all be analysed as differences in the specification of the path between the expletive and the node which is annotated with the meta-variable. The chapter closes with a return to the problem of scope in questions, which is a non-problem in terms of the analyses proposed. The meta-variable in the logical formula which is the parse of a wh- question has no scope-taking properties and cannot interact with any quantifiers in the formula. When the meta-variable is substituted by a term representing an answer, however, a choice is necessarily made as to whether that term falls within the scope of a quantifier. But this choice has nothing to do with the wh- item.
Chapter 6 (Crossover Phenomena) considers the well-known problems of crossover from the DS perspective. Restrictions on interpretation of strong crossover structures are analysed as the result of interaction between locality constraints on pronominal interpretation, and the dynamic process by which the unfixed node in a relative clause comes to be fixed. The locality restrictions ensure that the only way the head of a relative clause and a pronoun within the relative clause can co-refer is for the unfixed node to be merged with the node annotated by the pronoun. But this means that there cannot be a gap within the relative clause, that is, a node within the relative clause which projects a requirement (in this case, for a formula of type e), because the unfixed node is no longer available to merge with it. These considerations do not apply to weak crossover situations in relative clauses, and the account also predicts that English crossover structures with resumptive pronouns should be acceptable. KMG claim that such pronouns are used frequently in natural data, but note that judgments are divided and may be influenced by pragmatic factors. They give the following examples with accompanying judgments:
?? The man who he agrees he's been overworking is coming on holiday with us. (p. 201)
My son, who he, even, agrees he's been overworking is coming on holiday with us. (p. 203)
In my own idiolect, all the examples presented are impossible, and I therefore have trouble accepting this part of the analysis. However, following discussion of data from other languages (Arabic, Hebrew, Dutch), KMG argue that it is preferable to retain a less restrictive computational system and rely on (a not yet detailed) pragmatics to rule out some possibilities and to account for the gradations in judgments. The discussion of crossover in wh- questions develops this theme further. If the account given for relative clauses is correct, then it is predicted that resumptive pronouns will be possible in wh- crossover structures also, but these are generally judged as worse by English speakers. KMG nevertheless maintain their position, providing data that at least suggests the possibility of resumptive pronouns in English questions under some pragmatic circumstances. They also note that cross-linguistic variation in crossover restrictions cannot be attributed to pragmatic factors, and show that at least some of the observed variation can be explained in terms of the difference between weak and strong pronouns, and differences in the lexical forms in different languages.
Chapter 7 (Quantification preliminaries) motivates the decision of KMG to treat noun phrases as being of type e, rather than as generalized quantifiers (type [[e -> t] -> t]) as in much work following Montague 1974. The discussion also aims to demonstrate how the interpretations of pronouns which show quantificational behaviour (bound variable and E-type effects) arise in the DS model. The initial step in the argument is to show that the construal of indefinite determiners has the same character of pragmatic choice as the construal of pronouns. Specifically, restrictions on the construal of indefinites are not such as would be expected if there were a syntactic relation analogous to a long-distance dependency between the quantifier and the term on which it depends. The solution proposed is that all noun phrases include a quantificational element and an optional part of that element is a scope statement which specifies the scope of that quantification relative to other quantifications. These scope statements are collected as the tree is constructed and are evaluated fully in the formula annotating the root node. Indefinites have no scope statement, and therefore their relative scope is open to pragmatic choice in the final evaluation (cf. the treatment of scope and wh- items in chapter 5). Bound variable interpretations of pronouns fall under the cases in which a scope statement is part of the antecedent term. The evaluation of scope is handled formally by the use of an epsilon calculus in which quantified terms are reconstructed without quantifiers but with sufficient internal structure to allow full evaluation (Hilbert and Bernays 1939 - the text and the bibliography do not match at this point, one of only a handful of editing errors I detected). This has the consequence that E-type interpretations of pronouns arise naturally under general principles of term construction. 
The general point of the discussion in this chapter is that the idea of underspecification in a formula, and the interaction of computational and pragmatic actions which results from this, can be extended to the realm of quantification with interesting results.
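The idea of scope statements can be caricatured in a few lines. The encoding below is a hypothetical simplification of my own (KMG's evaluation procedure, using the epsilon calculus, is far richer): terms carrying a scope statement are ordered by it, while indefinites, which carry none, are slotted in by a contextually supplied choice at final evaluation.

```python
# Illustrative only: scope statements as optional ranks on quantified
# terms; indefinites (rank None) are resolved by pragmatic choice.

def evaluate_scope(terms, pragmatic_ranks=None):
    """Order quantified terms for evaluation at the root node.

    `terms` maps a variable to an optional scope rank; variables with
    no rank (indefinites) receive one from `pragmatic_ranks`, a mapping
    supplied by context, or default to narrowest scope."""
    ranks = dict(terms)
    for v in ranks:
        if ranks[v] is None:
            ranks[v] = (pragmatic_ranks or {}).get(v, float("inf"))
    return sorted(ranks, key=lambda v: ranks[v])

# "every student read a book": 'every' states its scope, 'a' does not,
# so context can give the indefinite either narrow or wide scope
default = evaluate_scope({"x_student": 1, "y_book": None})
wide = evaluate_scope({"x_student": 1, "y_book": None}, {"y_book": 0})
```

The variable names and the rank mechanism are mine; the sketch is meant only to show how leaving the indefinite's statement out keeps both relative-scope readings available until context decides.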
Chapter 8 (Reflections on Language Design) returns to the ideas of chapter 1. I give only a brief summary here, as many of the points covered are discussed in my evaluation below. The first subsection recapitulates the essential points of the framework. The second section, Underspecification and the Formal Language Metaphor, sets out the advantageous consequences of denying a homomorphism between semantic and syntactic structures and revises the notion of compositionality to fit the DS framework. Section 3 establishes that a distinction between well-formedness and acceptability can be made within the framework, despite the extensive reliance on pragmatics, and section 4 suggests ways in which cross-linguistic variation can be handled by the framework. The brief, final section of the chapter touches on philosophical issues and suggests two consequences flowing from the DS approach: parsing is a basic property of the language system, and knowledge of language ''is simply the capacity to construct appropriate decorated structures as interpretations of stimuli of a particular sort'' (p. 267).
Chapter 9 (The Formal Framework) gives full details of the formal apparatus used elsewhere in the volume, and will not be further discussed here.
EVALUATION Before proceeding further, I would like to emphasise that this is an important book and that the ideas presented in it deserve the close attention of syntacticians and semanticists. Not everything can be covered in a single book, and the following comments aim to delineate some areas in which I was left with unanswered questions.
The first of these is the status of rules as a part of the formalism. Syntactic structure in the conventional sense does not exist in the DS framework. Indeed, if KMG's claim that the only structure necessary is the sequence of partial interpretations is correct, then DS is a way of doing compositional semantics rather than a way of doing syntax. But the sort of generalisations which conventional syntax is good at capturing are also needed in the new approach, and their treatment is disparate. For example, the rule of Introduction (p. 80) allows the expansion of a requirement at one node to multiple requirements that can be met at other nodes. In English, this rule allows the requirement at the root node of a clause for a formula of type t to be expanded to requirements for formulae of type e and type e -> t at two additional nodes. This is treated as a language-specific application of a general rule, and as such looks suspiciously like a phrase structure rule of the type S -> NP VP. A very different case crops up in the analysis of relative clauses in Japanese, where certain sequences of words are taken to define clause boundaries (verb + verb, verb + noun). This is handled, in the example discussed (pp. 135-6), by a disjunction in the lexical entry of the noun: where the noun follows a verb, it must head a relative clause, but not elsewhere (of course there are no statements of this type in the language of DS - the actual disjunction depends on the requirement projected by the node which the parsing process is examining). Presumably a similar disjunction will be necessary in the lexical entries of verbs to handle the verb + verb sequences. But these disjunctions appear in the lexical entries of individual lexical items, and the inference is that, as any noun or verb can occur in such an environment, such a disjunction is required in the lexical entry for every noun and every verb. (This might not be as unwieldy as it seems in the preceding statement.
Although KMG do not mention the possibility, it is easy to see how this sort of problem could be handled in a lexicon structured into inheritance hierarchies (Flickinger 1987).) Both cases discussed above are very naturally described in terms of the order of elements in surface strings, that is, in syntactic generalisations. In both the cases mentioned, it seems that such description is a part of the account given by KMG but its status within the framework is not the same in each case, and is not made explicit in either case.
A second issue, discussed by KMG in at least two places (pp. 209-213, pp. 264-266), is the power of the formal apparatus. DS is the process of mapping from a language string to a series of partial trees representing propositional formulae, with lexical items defined by the transitions between these partial trees. The propositional formulae are defined in advance (their form, not their content), and therefore a lexical item can be the informational difference between any two propositional formulae. Many such items would clearly be highly implausible, but they are not excluded from the system in principle. Similarly, the rule of Introduction discussed in the previous paragraph can be applied to license any combination of nodes, provided that the logical types of their annotating formulae can be combined and reduced to the annotation of the original node. This also allows for a very wide range of possibilities. KMG restrict themselves to the use of only some simple logical types, and this in turn restricts both the nature of lexical items and the use of the Introduction rule. But it is not clear to me whether this restriction is intrinsic to the framework or is a decision taken by KMG. If the latter, then the framework could be used in an extremely unrestricted way. The discussions which KMG give on this topic centre on the question of whether the framework should be restricted or whether pragmatics can do the work. The answer they choose, that pragmatics can do a lot of work, is a principled one for the cases they discuss, but the more general question of what the framework actually rules out is not answered clearly.
In particular, the discussions of cross-linguistic variation in chapters 4-6 and pp. 264-66 are strong in showing that the DS framework can deal with a wide range of data, but are less strong on demonstrating what possibilities can be ruled out. But what we might call ''deductive typology'', the cataloguing of what a theory does and does not allow, requires both types of evidence. (This comment applies rather less to chapter 5, on wh- questions, than it does to the other two data-based chapters. The analysis of wh- items, particularly the discussion of partial movement, is the strongest analysis presented in the book in my opinion.) This criticism is, at this point, directed to presentation rather than substance. For example, the analysis of relative clause structures which is presented by KMG does rule out some possibilities, as seen in the discussion of strong crossover (pp. 196-99). It is easy to sympathise with the authors and the choices that they must have had to make as to whether to extend the empirical coverage of their presentation, or to include more theoretical discussion. But I would suggest that detailed discussions of how restrictive or unrestrictive the DS approach really is should have a high priority in future work. Other rigorous syntactic theories have not always distinguished themselves in this regard either, but this only means that the field is open for a serious competitor to prove itself.
Finally, I would have been delighted to see more discussion of the problem of production in the book. Again, an introductory volume cannot cover all aspects of a new approach, but it is disappointing to see the issue mentioned and dealt with in only a few sentences. In an approach which is explicitly based on the processing of language, I would have preferred the question not to be raised at all rather than to be treated in this way: ''Production has to be characterised as in some way parasitic on the process of parsing.'' (p. 267). My first impression is that there is no obvious way in which this program could be carried out using the DS framework. The parsing process is monotonic, but the actions triggered by lexical items depend on conditional implication. Therefore reversing the process is not possible. It may be the case that when the role of word order generalisations is made clearer, a production mechanism will be more straightforward. Again, this is not an area in which other theories necessarily have good track records, but in addition to the opportunity this affords for staking a position, a theory which takes processing as central should have something concrete to say about production as another aspect of language processing.
REFERENCES
Flickinger, Daniel (1987) Lexical Rules in the Hierarchical Lexicon. PhD dissertation, Stanford University.
Hilbert, D. & P. Bernays (1939) Grundlagen der Mathematik II. Berlin: Julius Springer.
Horvath, J. (1997) The status of 'wh-expletives' and the partial wh-movement construction. Natural Language and Linguistic Theory 15: 509-572.
Johnson, D. & S. Lappin (1999) Local Constraints on Economy. Stanford, CA: CSLI Publications.
Montague, R. (1974) The proper treatment of quantification in ordinary English. In R. Thomason (ed.), Formal Philosophy: Selected Papers of Richard Montague, pp. 247-270. New Haven: Yale University Press.
Nordlinger, Rachel (1998) Constructive Case. Stanford, CA: CSLI Publications.
Sperber, Dan & Deirdre Wilson (1995) Relevance: Communication and Cognition. 2nd ed. Oxford: Blackwell.
ABOUT THE REVIEWER Simon Musgrave is a post-doctoral researcher at the University of Leiden. His doctoral thesis is a study of non-subject arguments in Indonesian, using LFG as the theoretical framework. He is currently working on a cross-linguistic database for the Spinoza Project, Lexicon & Syntax, and is part of the East Indonesia research group within the project.