Review of The Language of Word Meaning

Reviewer: Niladri Sekhar Dash
Book Title: The Language of Word Meaning
Book Author: Pierrette Bouillon, Federica Busa
Publisher: Cambridge University Press
Linguistic Field(s): Semantics
Issue Number: 12.1573


Bouillon, Pierrette, and Busa, Federica, eds. (2001) The Language of
Word Meaning. Cambridge University Press. Hardback ISBN 0-521-78048-9,
402pp., $69.95.

Niladri Sekhar Dash, Indian Statistical Institute, Kolkata, India

The volume, consisting of 17 articles, is divided into four
parts. The first part, 'Linguistic Creativity and the
Lexicon' (4 articles), provides the philosophical
foundations for the work presented in the book. The second
part, 'The Syntax of Word Meaning' (6 articles), deals with
the analysis of some fairly standard topics in lexical
semantics: verb semantics, partitive constructions,
adjectives, causal relations, etc. The third part of the
volume, 'Interfacing the Lexicon' (4 articles), presents
contributions on metonymy, metaphors, and idioms, plus a
corpus-based study of nonstandard forms on the generativity
or extension of senses. The last part, 'Building Resources'
(3 articles), contains contributions on the development of
actual resources for natural language processing (NLP) using
current developments in lexical semantics. Each part
contains an introduction by the editors of the
volume that provides a road-map for the readers and
highlights the common issues raised by each set of
papers. Moreover, the volume contains a very scholarly
preface by James Pustejovsky and a general introduction
entitled 'Word Meaning and Creativity' by the editors of the volume.

In the first article entitled 'Chomsky on the Creative
Aspect of Language Use and Its Implication for Lexical
Semantic Studies' (pp. 5-27), James McGilvray tries to
understand how lexical creativity can be captured by a
set of generative rules, keeping Chomsky's notion of
linguistic creativity in the background. In the article he
discusses how word meaning contributes to the creative
aspect of language and reaches the conclusion that lexical
semantics, as done within a research program such as the
Generative Lexicon, is a "branch of syntax, broadly
speaking". Moreover, he argues that the creative aspect of
language use can constrain theories of language in much the
same way as poverty-of-stimulus arguments do. He also
elaborates on the observations made by Chomsky and explores
their consequences for a semantic theory of the lexicon.

In 'The Emptiness of the Lexicon: Critical Reflections on
J. Pustejovsky's "The Generative Lexicon"' (pp. 28-50),
Jerry A. Fodor and Ernie Lepore set out to prove
the uselessness of Pustejovsky's theory of the Generative
Lexicon. Here, they examine the semantic lexicon proposed by
Pustejovsky (1995) and discuss and reject his argument that
the complexity of lexical entries is required to account for
lexical generativity. They also defend a kind of lexical
atomism: though they concede that lexical entries are
typically complex, still they claim that their complexity
does not jeopardize either the proposition that lexical
meaning is atomistic or the identification of lexical
meaning with denotation. The paper was first published in
'Linguistic Inquiry' (29:2) in 1998. The article is
included in this volume in recognition of its importance in
the area under study.

In 'Generativity and Explanation in Semantics: A Reply to
Fodor and Lepore' (pp. 51-74), James Pustejovsky addresses the
remarks made by Fodor and Lepore. His response focuses
on two themes: Fodor and Lepore's misreadings and
misinterpretations of the substance as well as the details
of the theory, and the generally negative and unconstructive
view of the study of semantics and natural language meaning
inherent in their approach. Pustejovsky presents an
internalist view of the lexicon, where qualia structure is
the syntax for lexical description, which in turn provides
the input to the rules of semantic composition. The role of
a syntax of word meaning is precisely that of avoiding
holism, while permitting questions concerning the
well-formedness of concepts, the combinatorial possibilities
of the elements constituting their internal structure
(i.e. qualia), and the relations they bear to each other.

In 'The "Fodor"-FODOR Fallacy Bites Back' (pp. 75-85),
Yorick Wilks comes forward in defence of James Pustejovsky's
proposition. He addresses explicitly the role of inferential
relations as key elements that drive 'intelligent' natural
language understanding and the computational modeling of
language. Here, he tries to show that Fodor and Lepore are
misguided in their attack on Pustejovsky's theory, largely
because their argument rests on a traditional, but
implausible and discredited, view of the lexicon on which it
is effectively empty of content, a view that stands in the
long line of explaining word meaning (a) by ostension and
then (b) by means of a vacuous symbol in a
lexicon, often the word itself after typographic
transfiguration. Both (a) and (b) share the mistaken belief
that to a word there must correspond a simple entity that is its
meaning. He then turns to the semantic rules that
Pustejovsky uses and argues that, although they have novel
features, they are in a well-established Artificial
Intelligence tradition of explaining meaning by reference to
structures that mention other structures assigned to words
that may occur in close proximity to the first.

In 'Type Construction and the Logic of Concepts'
(pp. 91-123), James Pustejovsky tries to pose a set of
fundamental questions regarding the constraints we can place
on the structure of our concepts, particularly as revealed
through language. He outlines a methodology for the
construction of ontological types based on the dual concerns
of capturing linguistic generalizations and satisfying
metaphysical considerations. He then discusses what 'kinds
of things' there are, as reflected in the models of
semantics which we adopt for our linguistic theories. Next,
he argues that the flat and relatively homogeneous typing
models coming out of classic Montague Grammar (Dowty,
1979) are grossly inadequate to the task of modeling and
describing language and its meaning. He outlines aspects of
a semantic theory employing a ranking of types. He
distinguishes first between natural (simple) types and
functional types, and then motivates the use of complex
types (dot objects) to model objects with multiple and
interdependent denotations. This approach is called the
Principle of Type Ordering (PTO). Finally, he explores what
the top lattice structures are within this model, and how
these constructions relate to more classic issues in
syntactic mapping from meaning.

In 'Underspecification, Context Selection, and Generativity'
(pp. 124-148), Jacques Jayez considers the symmetric
dependency in which lexical elements impose certain semantic
profiles on the contexts they fit in. He shows that, although
they are highly underspecified, those profiles cannot be
reduced to a general semantic frame, and that their semantic
adaptability reflects the highly abstract and
similarity-based character (vagueness) of the predicates
that help to define them. To illustrate this mechanism, he
studies three French verbs which give a good idea of the
complexity and flexibility of context selection. The data
presented in this paper show that the relation between
context and interpretation can be conceived in two
ways. Either context provides missing information or lexical
elements themselves indicate the type of contexts in which
they would be maximally appropriate.

In 'Qualia and the Structuring of Verb Meaning'
(pp. 149-167), Pierrette Bouillon and Federica Busa focus on
issues of verb representation as they bear on the problem of
how meaning shifts occur in context. Taking as an example
the polymorphic behaviour of a French verb, they show that
its multiple senses can be derived co-compositionally from
the semantics of the verb and its arguments. Senses need not
be enumerated, but can be derived generatively from richer
representations of words and compositional mechanisms for
combining them. Thus they are able to show that, instead of
enumerating the various syntactic constructions a French
verb enters into, with the different senses that arise, it
is possible to give it a rich underspecified semantic
representation that acquires its specification in context
and will explain both its semantic and syntactic
polymorphism. They also show that their analysis extends to
the Italian data, accounting for subtle differences
involving the strength of the presuppositions associated with
the Italian verb.

In 'Sense Variation and Lexical Semantics, Generative
Operations' (pp. 168-191), Patrick Saint-Dizier outlines
some elements related to sense variation and to sense
delimitation within the perspective of the Generative
Lexicon (Pustejovsky, 1995). He develops the case of
adjectival modification and a few forms of sense variations,
metaphors, and metonymies for verbs and shows that, in some
cases, the qualia structure can be combined with or replaced
by a small number of rules, which seem to capture more
adequately the relationships between the predicate and one
of the arguments. He focuses on the telic role of the qualia
structure, which seems to be the most productive role for
modeling sense variations. In particular, he shows how types
can be added, and how predicates from the telic role
participate in the construction of the semantic representation
of the compound noun + adjective and in the verb-argument
relation. He also shows how telic roles contribute to the
modeling of these sense variations.

In 'Individuation by Partitive Construction in Spanish'
(pp. 192-215), Salvador Climent argues that in Spanish and many
other languages, individual entities can be referred to not
only by means of regular noun-headed phrases but also by
partitive constructions, one of the ways of individuating
referents. The latter show a range of semantic properties
that differ in many ways from those of the former. To
account for that, in this work, the semantic representation
of partitive constructions has been built by
co-composition. Therefore, both the partitive and its
NP-complement contribute in a balanced way to create a
lexical structure for the compound that will be appropriate
for further compositional operations in the same way as
regular NPs do. Moreover, the case of nouns that are
systematically polysemous between referential nouns and
relational partitives is also accounted for, using
containers as the typical case. Such nouns are treated as
Lexical Conceptual Paradigms (LCPs), thus providing a single
underspecified representation able to account in any case
for the appropriate sense of the word in a range of possible
contexts.

In 'Event Coreference in Causal Discourses' (pp. 216-241),
Laurence Danlos presents a study of causal
discourses expressing direct causation. This discourse
study relies on lexical semantic works (concerned mainly
with sentences in isolation) to show that discourse
considerations can shed light on lexical semantics. With the
help of the extended event structure for causative verbs
proposed in Pustejovsky (1995), she shows that they involve
an event coreference relation when the result is expressed
by a causative verb in its transitive use. She defines two
types of event coreference: generalization and
particularization. She shows that discourses expressing
direct causation with a resultative rhetorical relation
involve a generalization relation (which explains their
awkward behavior), while those discourses with an
explanation rhetorical relation involve a particularization
relation (which accounts for their normal behavior). Finally,
she studies discourses in which the result is expressed with
an unaccusative form of a causative verb. Her study leads her to
question the extended event structure for unaccusatives
proposed in Pustejovsky (1995).

In 'Metaphor, Creative Understanding, and the Generative
Lexicon' (pp. 247-261), Julius M. Moravcsik presents a detailed
study of metaphorical expressions (idioms and metaphors). He
tries to understand how words in certain configurations can
mean something different from what they mean in their
literal use, as prescribed by the rules of the language, and at
the same time convey significant insights into what we, in a
given context, take as parts of reality. In the analysis of
the figurative aspects of idiomatic meaning, he relies on
qualia structure and productivity. The analysis
distinguishes idioms from both metaphors and similes. With
the help of the lexical theory, he points to important
differences among these types of figurative speech, as well
as to underlying common ground. His analysis of idioms
introduces us to a kind of non-literal semantic analysis
that helps us to see also why idioms play an important role
in natural languages, and are not of mere ornamental
significance. In the case of metaphors, his theory entails that
metaphor interpretation requires mastery of the rules of
literal language, because selecting the appropriate
underspecified concept for the generation of the imaginative
leap is based on knowing what literal use provides. His
theory rests on the productivity of the lexicon, because the
relation between the underspecified concept and the new
specification is theoretically analogous to the
specification of the meaning of a word.

In 'Metaphor in Discourse' (pp. 262-289), Nicholas Asher and
Alex Lascarides offer a novel scheme for the analysis of
metaphors, which attempts to capture both the conventional
constraints on their meaning, and the ways in which
information in the discourse context contributes to their
interpretation in context. They make use of lexical rules in
a constraint-based grammar to do the former task, and a
formal semantics of discourse, where coherence constraints
are defined in terms of discourse structure, to do the
latter task. The two frameworks are linked together, to
produce an analysis of metaphor that both defines what's
linguistically possible and accounts for the ways in which
pragmatic clues from domain knowledge and rhetorical
structure influence the meaning of metaphor in context. They
show how their scheme can explain data concerning: verbs
involving change of location; the metaphorical shift of
meaning of words that refer to kinds of physical objects
when they are predicated of persons; and the dependence of
metaphorical interpretation upon discourse structure. They
hope that, by using the modern logical tools of formal pragmatics
and semantics, one can make progress on this difficult
subject and that in turn a better understanding of metaphor
will enhance our understanding of lexical meaning and
lexical processes.

In 'Syntax and Metonymy' (pp. 290-311), Jerry Hobbs extends,
in an interesting and possibly controversial manner, the
range of phenomena that are thought of as metonymy: a process
of referring to one entity by describing a functionally
related entity. The author introduces the framework of
'Interpretation as Abduction', in which it is
straightforward to formalize both varieties of metonymic
coercion: deferred ostension and predicate transfer
(Nunberg, 1995). He presents a range of examples of
phenomena that have previously been viewed as syntactic that
can in fact be viewed as a special kind of metonymy, where
the coercion relation is provided by the explicit content of
the sentence itself. The phenomena considered
are: extraposed modifiers, ataxis, container nouns, the
distinction between distributive and collective readings of
plurals, and what may be called 'small clauses in
disguise'. There are also cases where grammatically
subordinated material in sentences does function as the main
assertional claim of the sentence, which are analyzed
accordingly as examples of metonymy where the coercion
relation is provided by the explicit content of the rest of
the sentence. The examples discussed in the paper all lie on
the boundaries between syntax, semantics and pragmatics. By
analyzing all these possible coercions he has been able to
show how a combination of syntax, compositional semantics,
and metonymic interpretation can explain a diverse set of
supposedly syntactic phenomena.

In 'Generative Lexicon Meets Corpus Data: The Case of
Nonstandard Word Uses' (pp. 312-328), Adam Kilgarriff
presents a radically different view which challenges the
contribution of the generative line of research in lexical
semantics. In his experiment with some sample words
('modest', 'disability', 'steering', 'seize', 'sack'
(noun), 'sack' (verb), 'onion', 'rabbit', 'handbag'
etc.) taken from the HECTOR corpus, he evaluates whether the
Generative Lexicon, as a general theory of the lexicon, can
account for the nonstandard uses of words found in text corpora. By
'nonstandard' use, Kilgarriff refers to cases where the
meaning of a word is not found in a standard dictionary
definition of that word. He discusses in detail a number of
nonstandard uses and presents models for their
interpretation that draw on large quantities of knowledge
about how the word has been used in the past. The knowledge
is frequently indeterminate between 'lexical' and 'general',
and is usually triggered by collocations rather than a
single word in isolation. The experiment produced a negative
result for GL, which leads Kilgarriff to argue that
"GL is a theory for some lexical phenomena, not
all" (p. 327). The article shows that when faced with an
actual corpus and real use of words, there is an even
greater need for a framework for lexical semantics with an
actual theoretical vocabulary, an actual set of
compositional rules, and an actual methodology. In the absence
of such a framework, any theory or proposition is just rhetoric.

In 'Generative Lexicon and the SIMPLE Model: Developing
Semantic Resources for NLP' (pp. 333-349), Federica Busa,
Nicoletta Calzolari, and Alessandro Lenci present recent
extensions of Generative Lexicon theory (Pustejovsky,
1995) in the context of the development of large-scale
lexical resources for twelve different European
languages: the SIMPLE (Semantic Information for Multipurpose
Plurilingual LExica) model. Their development of lexical
resources has been guided by an underlying framework for
structuring word meaning and generating concepts, which
satisfies both ontological considerations and the
need to capture linguistic generalizations. They present an
alternative proposal to the current methodology for building
ontologies as their goal is to capture additional aspects of
word meaning that are equally important in language and
equally necessary in the development of a computational
lexicon. They show that their model has a high degree of
generality in that it provides the same mechanisms for
generating concepts independently of their grammatical
category. In addition, their model allows for a fairly broad
and clear coverage of the different types of concepts in the
language, an aspect that is often lacking in existing
lexicons, where the focus is on the representation of the
clear, well-known cases, while the semantics of abstract
entities is neglected.

In 'Lexicography Informs Lexical Semantics: The SIMPLE
Experience' (pp. 350-362), Nilda Ruimy, Elisabetta Gola, and
Monica Monachini show that the common view of
lexicography as a trivial, routine
activity with no real theoretical relevance
is highly misguided. Their argument is
also supported by Pinker (1995) who argues "The world of
words is just as wondrous as the world of syntax, or even
more so. For not only are people as infinitely creative with
words as they are with phrases and sentences, but memorizing
individual words demands its own special
virtuosity" (Pinker, 1995: 127). They draw their evidence
from the practical experience gained in the framework of the
SIMPLE project to argue that as in other sciences, a careful
and large-scale empirical investigation is a necessary step
for testing, improving, and expanding the theoretical
framework for lexicography. In the chapter they present
results from the development of the Italian semantic lexicon
in the framework of the SIMPLE project, which implements
major aspects of Generative Lexicon theory. Their paper
focuses on the semantic properties of abstract nouns as they
are conceptually more difficult to describe. For this
reason, they believe, these abstract nouns have the
capability to be a good test-bed for any semantic
theory. They show that the difficulty of describing abstract
nouns by means of qualia roles seems to be related more to
the intrinsic complexity of abstract entities and properties
rather than to the inadequacy of the Qualia theory. Besides,
they show how the elements of meaning map easily onto the
dimensions expressed via qualia roles, as far as concrete
nouns are concerned. Their methodology is developed to
satisfy the requirements of building large lexicons as it
reveals how a real implementation greatly contributes to the
underlying theory.

In the last article entitled 'Condensed Meaning in
EuroWordNet' (pp. 363-383), Piek Vossen discusses condensed
meaning in the EuroWordNet project, where several wordnets
for different languages are combined in a multilingual
database. Each language-specific wordnet is structured along
the same line as WordNet (Miller et al., 1990). The matching
of the meanings across the wordnets makes it necessary to
account for polysemy (Ravin and Leacock, 2000) in a
generative way and to establish a notion of equivalence at a
more global level. His discussion shows that a well-designed
interlingual or language-neutral ontology may have many
benefits from which all the linked wordnets can profit. He
also shows that the separation of the interlingua from the
language-specific realizations may help to clarify the way
meaning is proliferated in the lexicalized vocabulary of
languages. His discussion shows that the realization of
interpretations, even from a generative point of view, is a
matter of the individual language. However, he opines that each
language represents a unique lexical mapping to these
meanings or aspects of these meanings, even though it is
possible to set up a powerful and predictive system for
deriving complex meanings. Nevertheless, his paper shows
that both the individual wordnets and the multilingual
database as a whole will profit from a generative approach,
which reduces the (often inconsistent) enumeration of
interpretations and improves the mapping across (genetically
or typologically related?) languages.

An overall evaluation of the volume can be found on the CUP
website: "This volume is a collection of original
contributions that address the problem of words and their
meaning. This represents a still difficult and controversial
area within various disciplines: linguistics, philosophy, and
artificial intelligence. Although all of these disciplines
have to tackle the issue, so far there is no overarching
methodology agreed upon by researchers. The aim of the volume
is to provide answers based on empirical linguistics methods
that are relevant across all the disciplines and provide a
bridge among researchers looking at word meaning from
different perspectives."

The volume can be considered an extended analysis and
interpretation of James Pustejovsky's 'Generative Lexicon'
(1995), because all the articles (except the one by Jerry
Hobbs) are written with close reference to this path-breaking
work. Some articles are strongly vocal in their attempts to
refute the thesis of the Generative Lexicon; some are staunch
supporters, arguing that criticism of GL arises from
misunderstanding and misinterpretation; still others discuss
how lexical resource projects (SIMPLE or EuroWordNet) are
designed following the principles proposed in the Generative
Lexicon.

The volume is nicely produced, with high-quality paper and
printing. There are a few spelling mistakes, and a few
abbreviated forms probably need their full
forms. However, it can definitely be claimed that 'the book
has a long life'.

Dowty, D.R. (1979) Word Meaning and Montague Grammar: The
Semantics of Verbs and Times in Generative Semantics and in
Montague's PTQ. Dordrecht: Reidel.

Miller, G., Beckwith, R., Fellbaum, C., Gross, D., and
Miller, K. (1990) Five Papers on WordNet. CSL Report 43,
Cognitive Science Laboratory, Princeton University.

Nunberg, G. (1995) 'Transfers of Meaning'. Journal of
Semantics. 12: 109-132.

Pinker, S. (1995) The Language Instinct: The New Science of
Language and Mind. Middlesex, England: Penguin Books Ltd.

Pustejovsky, J. (1995) The Generative Lexicon. Cambridge,
MA: MIT Press.

Ravin, Y., and Leacock, C. (2000) Polysemy: Theoretical and
Computational Approaches. Oxford: Oxford University Press.

Niladri Sekhar Dash received his BA in English in 1989 and his
MA in Linguistics from Calcutta University in 1991, and
completed a course in Natural Language Processing (NLP) at the
Indian Institute of Technology, Kanpur, in 1994. From 1991 to
1997 he worked as a Language Analyst in various Computational
Linguistics and NLP projects of the MIT, Govt. of India. Since
1997 he has been working as a Linguist in the Computer Vision
and Pattern Recognition Unit of the Indian Statistical
Institute, Kolkata. In 2000, he submitted his doctoral thesis
on corpus-based NLP to Calcutta University. His research
interests include corpus linguistics, annotation, word-sense
disambiguation, word processing, lexical semantics, and
generative morphology.


Format: Hardback
ISBN: 0521780489
ISBN-13: N/A
Pages: 202
Prices: U.S. $ 70
U.K. £ 50