Review of Constructions at Work
AUTHOR: Goldberg, Adele E.
TITLE: Constructions at Work
SUBTITLE: The nature of generalization in language
PUBLISHER: Oxford University Press
Peter Petré, University of Leuven
The main goal of Goldberg's new book ''Constructions at work'' is the
establishment of a psychologically and cross-linguistically realistic model
of our knowledge of language within the framework of construction grammar
(CxG; see Goldberg 1995, Croft 2001). The main claim is that our language
system consists of a series of generalizations, which are CONSTRUCTED on
the basis of input similarities and general cognitive mechanisms.
The book is divided into three parts. Part I provides a basic
introduction to (the advantages of) construction grammar. Part II
concentrates on how and why schematic constructions are learned. Part III
shows how CxG can account for the existence of language-internal and
cross-linguistic generalizations without appeal to stipulations specific to
language.
Part I: Constructions
Chapter 1 is a scene-setter, which briefly outlines the notion of
construction: a pairing of form and meaning/function. A construction is
LEARNED, insofar as some aspect of its form or function is not strictly
predictable from something else; this puts an emphasis on subtle
semantic/pragmatic differences between constructions. In contrast to
Goldberg (1995), this definition is somewhat widened to include
predictable, but highly frequent, entrenched patterns too, an accommodation
to the usage-based creed. Also, constructions are argued to occur at all
levels of grammar, comprising morphemes, words, idioms as well as fully
schematic constructions such as the ditransitive (Subj V Obj1 Obj2). An
actual expression is a combination of several constructions taken from this
inventory.
Chapter 2 is devoted to the following hypothesis:
''There are typically broader syntactic and semantic generalizations
associated with a surface argument structure form than exist between the
same surface form and a distinct form that it is hypothesized to be
syntactically or semantically derived from.'' (p. 25)
By default, a single surface form is argued to equal a single construction.
For instance, all pairings of the semantic roles 'agent', 'recipient' and
'theme' with the grammatical relations of Subject, Primary and Secondary
Object are ditransitives, sharing a sense of ''causing somebody to receive
something''. Derivational theories, by contrast, derive a ditransitive such
as 'Mina bought Mel a book' from a benefactive 'for'-construction ('Mina
bought a book for Mel'), but 'Mina sent Mel a book' from an input
expression 'Mina sent a book to Mel'. Goldberg shows that a derivational
analysis fails to account for the many similarities between the two
ditransitives, as well as for the differences with their respective
prepositional paraphrases: with both ditransitives, questioning of the
recipient argument isn't allowed; adverbs may not separate the two object
arguments; the recipient argument must be animate; etc. Goldberg doesn't
deny that paraphrases can have roughly the same meaning, but this overlap
results mainly from the frame semantics of the verb, which can be fused
with various argument structure constructions, as long as the participants
of the verb and the semantic roles of the construction are semantically
compatible.
In keeping with recent trends in cognitive psychology (Ross and Makin
1999), chapter 3 advocates a usage-based model of categorization, according
to which both item-specific knowledge and generalizations co-exist and are
connected through an abstraction cline. Specific items (exemplars) are not
entirely concrete - attributes are stored only if relevant to the
categorization task at hand - and the first abstractions made are often
based on a couple of items only.
The co-existence of exemplars alongside generalizations is evidenced by the
idiosyncratic behaviour of highly frequent instances of a category, such as
''He is *mere'' and ''The *aghast man'' for the English Adjective, which must
mean that these items are stored separately (Bybee 1995). At the same time,
that generalizations are essential to language is shown by the presence of
broad regularities such as fixed SVO word-order in English, instead of each
(novel) verb having a word-order of its own.
Goldberg explicitly argues that constructions are appropriate theoretical
objects to model this exemplar-based categorization. Constructions capture
the decomposition of expressions into combinations of form-meaning
pairings, which are as abstract as is needed to enable predictions about
future instances. For instance, upon hearing 'the man who sat on every
hill', we needn't retain the semantically unrelated phrase 'sat on every
hill' as part of the representation of the relative clause.
Part II: Learning generalizations
If honeybees can learn abstract concepts, why wouldn't we be able to learn
language? In this spirit, chapter 4 explores how argument structure
constructions are learned from the input. Statistical learning has already
been shown in the case of word boundaries: in a phrase like 'bananas with
milk', the probability of the syllable 'nas' following 'na' is much higher
than that of 'with' following 'nas'. Still, within the generative tradition,
it is claimed that
syntactic structures are innate, because the input is simply not rich
enough. However, others (particularly Tomasello) have emphasized the
conservative nature of children's early language, and the failure to
generalize linking rules beyond the input until the age of 3.5 years, which
suggests that argument structure constructions are learned as well.
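The transitional-probability idea behind this kind of statistical learning can be sketched as a toy computation. The syllable stream and the resulting numbers below are invented purely for illustration; they are not data from the book:

```python
from collections import Counter

def transitional_probs(stream):
    """P(next | prev) for every adjacent syllable pair in the stream."""
    bigrams = Counter(zip(stream, stream[1:]))
    prev_counts = Counter(stream[:-1])
    return {(p, n): c / prev_counts[p] for (p, n), c in bigrams.items()}

# Invented toy corpus: 'ba-na-nas' always surfaces as a whole word, while the
# word that follows it varies, so within-word transitions outscore
# cross-boundary ones.
stream = ["ba", "na", "nas", "with", "milk",
          "ba", "na", "nas", "on", "bread",
          "ba", "na", "nas", "here"]

probs = transitional_probs(stream)
print(probs[("na", "nas")])    # 1.0   -- 'nas' always follows 'na'
print(probs[("nas", "with")])  # ~0.33 -- a word boundary: many continuations
```

A learner tracking such statistics can posit word boundaries wherever the transitional probability dips, which is the segmentation result the review alludes to.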
Following up this idea, Goldberg shows that the input available to children
is not arbitrary, but instead heavily skewed, with construction-dependent
preferences for particular general-purpose verbs ('go', 'put', 'give').
These verbs are readily accessible and as such can function as COGNITIVE
ANCHORS, facilitating the acquisition of argument structure constructions.
This hypothesis is tested by several experiments, which show that both
children and adults can learn a new construction on the basis of only a
small set of skewed input data within three minutes.
Chapter 5 is a cognitive explanation of constraints on productivity. It is
argued that overgeneralizations are minimized by a process of pre-emption,
meaning that a learner can infer that a possible construction A is not
appropriate if, consistently, construction B is heard instead. In this way,
'*she explained me the story' is pre-empted by the frequent occurrence of
'she explained the story to me'. By contrast, 'she sneezed the foam off the
cappuccino' is NOT pre-empted by 'she sneezed', because these expressions
do not mean the same thing at all. Another explanation for the lack of
overgeneralizations appeals to the notion of semantic coherence. It is
argued that speakers will be the more confident in producing a new
instance, the closer it is semantically to already existing instances. For
instance, 'the truck screeched down the street' is acceptable because
'screech' is similar to other verbs found in the intransitive motion
construction (e.g. 'rumble'). At the end of the chapter, a more
comprehensive account is hinted at, according to which speakers combine
several rules of thumb into a strong hypothesis about when the use of a
certain construction is appropriate and when not.
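One way the pre-emption inference might be operationalized is as a simple ratio over heard alternatives. The `preemption_score` helper and the verb counts below are invented for illustration; the book itself proposes no such formula:

```python
from collections import Counter

def preemption_score(observations, verb, target, competitor):
    """Of the contexts where either construction could express the message,
    what share went to the competitor? High values pre-empt the target."""
    counts = Counter(c for v, c in observations if v == verb)
    heard = counts[target] + counts[competitor]
    return counts[competitor] / heard if heard else 0.0

# Invented counts: 'explain' is only ever heard in the 'to'-dative, while
# 'give' alternates freely between the two constructions.
obs = [("explain", "to-dative")] * 20 \
    + [("give", "ditransitive")] * 15 + [("give", "to-dative")] * 5

print(preemption_score(obs, "explain", "ditransitive", "to-dative"))  # 1.0
print(preemption_score(obs, "give", "ditransitive", "to-dative"))     # 0.25
```

The crucial restriction in the text is built into the idea of the score: it is only computed over contexts where both constructions would convey the same message, which is why 'she sneezed' never pre-empts the caused-motion use of 'sneeze'.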
Chapter 6 accounts for why constructional generalizations are learned. In
the case of argument structure constructions, it is hypothesized that the
functional motivation to learn them lies in their value for predicting
overall sentence meaning.
This hypothesis is examined by means of several tests. A first test
involves the cue validity of verbs and constructions -- the probability
that an object is in a particular category (in this case overall sentence
meaning), given that it has a particular feature (or cue). For instance, if
a certain sentence contains either the verb 'put' or the VOL pattern (Verb
Object PP(Location/Path)), which of these will predict the overall sentence
meaning of caused motion best? It is shown that weighted cue validities of
verbs and constructions are very similar. In addition, a sorting task
experiment shows that sorts by construction are as frequent as sorts by
verb. A second test involves category validity, the probability that an
item has a feature, given that it belongs to a certain category (in this
case caused-motion). Here argument structure constructions score much
better than verbs, simply because there are many verbs, but only few
appropriate constructions. Finally, it is shown that the presence of one
construction primes speakers to produce other instances of that construction,
indicating that speakers implicitly learn a small inventory of patterns in
order to facilitate online production.
Chapter 7 explores the importance of information structure and processing
demands in quantifier scope and in constraints on ''islands'', apparently
purely syntactic constructions that cannot be used in unbounded dependencies.
For instance, '#John shouted that she was dating someone new' is not a
felicitous answer to 'Why was Laura so happy?', because of the appearance of
the manner-of-speaking verb 'shouted' (instead of 'said'). This means that
'shouted' is an ''island'' to appropriate answers. ''Island''-phenomena are
argued to result from a clash between two construction types: elements
involved in unbounded dependencies occupy discourse-prominent slots (being
the primary topic or appearing within the focus domain, the asserted part
of the sentence), whereas ''islands'' are generally backgrounded. This
generalization also explains the ''island''-behaviour of ditransitive
recipients, whose low discourse-prominence is at least hinted at by their
frequent omission. They are neither the subject (the primary topic), nor
are they within the focus domain: they are not negated by sentence negation
(''#No, him'' is not an appropriate reply to a sentence like 'She gave her a
ball'). Varying degrees of acceptability among speakers, and exceptions
such as wh-complements, which are islands and yet are within the focus
domain, are additionally explained in terms of higher processing demands.
Similarly, the scope assignment of quantifiers is also explained in terms
of information structure: quantifiers used with topics generally receive
wide scope ('there exists (at least) one x, such that all y ...'), because
wide scope correlates with the givenness of a topical referent.
Chapter 8 explores another case of an apparently unmotivated syntactic
restriction, namely that of Subject Auxiliary Inversion (SAI). It is argued
that SAI constructions (questions, counterfactual conditionals, clauses with
initial negative adverbs, wishes, exclamatives) share more than their
syntax in that they all deviate semantically from a prototypical sentence,
which is POSITIVE (presupposing the truth of the proposition), assertive,
independent and declarative and has predicate focus (e.g. 'His girlfriend
was worried'). Typically, an SAI-construction differs from this profile in
being NON-POSITIVE. Yes/no-questions, for instance, and also wishes/curses,
additionally have a non-declarative speech act force, and counterfactual
conditionals, such as 'Had he found a solution, ...' are non-assertive
(presupposing a non-actual state of affairs) and dependent on the following
main clause. The presence of non-positive polarity also motivates formal
inversion in that the fronting of the first auxiliary conveys that the
polarity involved is not the canonical, positive one.
Chapter 9 argues that cross-linguistic generalizations in argument
realization exist, but are not hard-and-fast constraints. As such, they can
best be explained by appealing to independently motivated pragmatic,
semantic, and processing factors. In this way, the appearance of actors and
undergoers in prominent syntactic slots is explained in terms of their high
degree of salience. Actors are salient because of a general human bias to
focus on (animate/human) agents. Undergoers are salient as the endpoints of
an action, which are generally better attended to than onsets or
intermediaries. Similarly, the tendency to have as many arguments as there
are complements is explained by appealing to Gricean principles of
relevance and economy: (i) expressed referents are interpreted to be
'relevant', and (ii) 'relevant' and 'non-recoverable' semantic participants
must be overtly indicated. By contrast, in many languages recoverable
participants can be freely omitted, and this cannot be explained in terms
of exceptionless hard-wired linking rules. For other argument realization
phenomena, an explanation in terms of analogy is suggested. For instance,
the existence of possessor-subjects in a language 'motivates' the existence
of a ditransitive in which the possessor is expressed formally like a
subject vis-à-vis the theme argument. Finally, Goldberg also discusses more
generally the way in which word-order generalizations can be interpreted in
terms of processing demands, and to what extent the principle of iconicity
can explain certain syntactic phenomena.
Chapter 10 serves to further position Goldberg's approach among others that
carry a ''constructionist'' label. First, certain generative frameworks
share with CxG the basic idea that some type of meaning is directly
associated with some type of form (Hale and Keyser 1997, Borer 2001).
Crucially
different from CxG, however, these theories are derivational, the
constructions they posit have only coarse meanings, and they try to reduce
the importance of the lexicon. For instance, they derive pairs such as
'shelf' - 'shelve' or 'dance' (N) - 'dance' (V) from a single lexeme through
certain kinds of constructions, but Goldberg argues that this fails to
explain the semantic differences between these words in a principled way.
Closer to home is Fillmore's unification-based CxG (1999), whose formalism,
however, is not very useful in describing subtle semantic/pragmatic
differences between constructions, and also Langacker's cognitive CxG
(1987), which is, however, explicitly non-emergentist. Most closely related
to Goldberg is Croft's radical CxG, with which she shares the idea that
categories are construction specific, and that generalizations across
constructions will always be imperfect, but, nevertheless, sufficiently
interesting to be examined in their own right.
Finally, chapter 11 sums up the two main points of the book once again:
language is learnable from the input, and generalizations are learned
because of their usefulness in predicting meaning, in other words, in
communication.
''Constructions at work'' is a ''must have'' volume. Rarely has a linguist
combined expertise of such a high standard from disciplines as different as
theoretical linguistics, psycholinguistics, and cognitive psychology.
Goldberg's collaborators deserve special mention too, as the book also
shows how rewarding teamwork can be. Adding to the quality of the book are
the many parallels Goldberg draws with non-linguistic cognition and
categorization, not shying away from carrying out non-linguistic
experiments herself. Especially the idea that our human brains are huge
statistical, prototype-based categorizing machines, and the consequent
hypothesis that language contains frequency-based, prototype-based
constructions varying in degree of abstractness, are fleshed out
brilliantly.
Another merit of the book is its references. Even if they may be slightly
skewed (for instance, given Goldberg's approval of Croft's radical
Construction Grammar, there are surprisingly few references to his work), as a
rule, Goldberg starts each new topic with a knowledgeable list of them. The
style is very enjoyable as well, and highly accessible to a wide audience
of both undergraduates and professional linguists, which is not a trivial
achievement.
This is not to say the book is entirely without flaws. First, the editing
has overlooked rather more ''slips of the keyboard'' than desirable. While
these slips usually don't hinder the flow of the text, in a couple of cases
the printed text is a little confusing. To give just one example, it seemed
awkward to me to give the sentence ''The teacher assigned one problem to
every student'' (ambiguous) as the prepositional material for comparison to
the ditransitive ''The teacher assigned one student every problem.''
(unambiguous) (p. 32, 160), because the two are not at all paraphrases of
each other. If one started instead from the ditransitive counterpart of the
'to'-construction, that is ''The teacher assigned every student one
problem'', it is actually not clear to me that this sentence is less
ambiguous than the 'to'-construction, which is precisely the claim Goldberg
makes.
Even if this unfortunate presentation of the data doesn't threaten the
general idea that quantifiers tend to have wide scope in topical slots,
ideally some kind of statistical evidence for such a decrease of ambiguity
would have been desirable in this particular case. Maybe these and other
minor sources of possible misunderstanding can be corrected in a second
edition, which I am sure we can expect in the near future.
A further point: in consistently usage-based research, all the examples
should ideally have been taken from attested language material. In a similar
vein, I would
have valued some discussion in the book on the importance of diachrony,
which, I think, can sometimes explain exceptions from a different angle.
These are at present usually discussed in terms of processing difficulties
(see for some recent work in this field Bergs & Diewald, to appear). But of
course, one has to remain realistic, and it needs to be said that Goldberg
has already achieved an incredible amount.
Another minor point is concerned with the general tone of part III. While
this is clearly the best part of the book, consisting almost entirely of
new and ground-breaking linguistic analysis, at the same time, some of the
points made would have deserved somewhat more space. Chapters 8 and 9 in
particular seem somewhat hurried, and the criticism of three generative
approaches within a few pages in chapter 10 probably does not do full
justice to them. At the same time, Goldberg's intellectual sincerity
sets an example for others, when she explicitly mentions problems that
remain unresolved in her own account.
Finally, I have one somewhat more serious point of criticism on Goldberg's
use of grammatical relations (Subject, Object, Oblique etc.) as a basic
unit of analysis. Whereas, usually, her analyses follow the general
consensus, sometimes a more systematic account of what counts as a
grammatical relation seemed desirable. In particular, when Goldberg
compares the cue validity of verbs with that of constructions in chapter 6,
the cue-validity of the caused-motion construction is defined as the degree
to which the formal VOL pattern expresses caused motion. However, I have
some doubts about how 'formal' this VOL pattern actually is: how does the
hearer differentiate between Ls (Locations/Paths) and other obliques in
online perception? For instance, how does a hearer know that 'What is your
foot doing on the table' is an instance of the VOL-pattern, but,
presumably, 'What is your foot doing at this moment' is not; both are
highly similar sequences of V NP PP. Of course, the answer is quite
simple: they can be distinguished on the basis of the lexeme (construction)
following the preposition. To me it seems that the interaction between
lexemes and constructions in determining overall sentence meaning is
somewhat simplified in Goldberg's account, and I think the complexity of
this interaction would have deserved somewhat more attention (the more so
because, after all, lexemes are also constructions).
Despite these points of criticism, I can't stress enough how satisfying and
convincing I found ''Constructions at work''. In a faithful and detailed
manner, the book shows that apparently purely syntactic phenomena can be
accounted for in terms of their function and by appealing to general
cognitive capacities. By doing this, Goldberg brings language and the
extra-linguistic world one step closer together, and reveals how, in a
simple and yet brilliant way, our brains may transform cognition into
communication. In sum, it brings theoretical linguistics down to earth
again, where it belongs.
Bergs, Alex and Gabriele Diewald. (To appear) Constructions and language
change. Selected papers from the Workshop on Constructions and Language
Change, XVII International Conference on Historical Linguistics. Berlin:
Mouton de Gruyter.
Borer, H. (2001) Exo-skeletal vs. endo-skeletal explanations: syntactic
projections and the lexicon. Paper presented at the Explanations in
Linguistics Conference, San Diego, CA.
Bybee, Joan L. (1995) Regular morphology and the lexicon. Language and
cognitive processes 10, 425-55.
Croft, William. (2001) Radical construction grammar: syntactic theory in
typological perspective. Oxford: Oxford University Press.
Fillmore, C.J. (1999) Inversion and constructional inheritance. In G.
Webelhuth, J.-P. Koenig, and A. Kathol (eds.). Lexical and constructional
aspects of linguistic explanation. Stanford, CA: CSLI Publications.
Goldberg, Adele. (1995) Constructions: a construction grammar approach to
argument structure. Chicago/London: University of Chicago Press.
Hale, K. and Keyser, J. (1997) On the complex nature of simple predicators.
In A. Alsina, J. Bresnan, and P. Sells (eds.). Complex predicates.
Stanford, CA: CSLI Publications, 29-65.
Langacker, Ronald W. (1987) Foundations of cognitive grammar, Vol. 1.
Stanford, CA: Stanford University Press.
Ross, B.H. and Makin, V.S. (1999) Prototype versus exemplar models. In R.J.
Sternberg (ed.). The nature of cognition. Cambridge, MA: MIT Press, 205-41.
ABOUT THE REVIEWER
Peter Petré holds an M.A. in Philosophy as well as an M.A. in English and
German Language and Literature. At present, he works as a researcher of the
Fund for Scientific Research - Flanders (FWO) under the supervision of
Prof. Dr. Hubert Cuyckens. His doctorate deals with the interaction between
lexical and constructional change in passive and copula constructions in
English between 800 and 1500, focussing among other things on how changes in
schematic constructions can lead to the loss of lexical items. In a similar
vein, previous research of his focussed on the disappearance of Germanic
prefixes in English from a constructional perspective.