LINGUIST List 12.1938

Tue Jul 31 2001

Review: Iwanska & Shapiro, NLP & KR (2nd rev.)

Editor for this issue: Terence Langendoen <>

What follows is another discussion note contributed to our Book Discussion Forum. We expect these discussions to be informal and interactive; and the author of the book discussed is cordially invited to join in. If you are interested in leading a book discussion, look for books announced on LINGUIST as "available for discussion." (This means that the publisher has sent us a review copy.) Then contact Simin Karimi at or Terry Langendoen at


  1. Gregor Erbach, review of Iwanska & Shapiro, NLP and Knowledge Representation

Message 1: review of Iwanska & Shapiro, NLP and Knowledge Representation

Date: Tue, 31 Jul 2001 11:23:15 +0200
From: Gregor Erbach <>
Subject: review of Iwanska & Shapiro, NLP and Knowledge Representation

Iwanska, Lucja M., and Shapiro, Stuart C. (eds.) (2000) Natural Language
Processing and Knowledge Representation. MIT Press, 459pp., $40.00.

Gregor Erbach, SemanticEdge GmbH, Berlin

[A previous review of this book is posted at: --Eds.]


The book is concerned with the computational nature of natural
language, a new approach which views natural language as a system for
computation and for knowledge representation. The approaches described
in the book do not represent the standard techniques in Natural
Language Processing (NLP) or in knowledge representation, but rather a
new paradigm in which NL (or rather an "NL-like" representation) is
regarded as a formalism for knowledge representation (KR) and
reasoning. This view is opposed to the traditional view, which regards
natural language merely as a front-end to a separate knowledge
representation system.

The book raises some fascinating questions, and provides tentative
answers to them:

(1) Is natural language algorithmic? [yes]
(2) Can NL be considered as a formal language? [yes]
 If so, what is its complexity? [tractable]
(3) What is the relationship between a) representing the
 meaning of NL utterances, and b) the semantics of the
 knowledge representation language? [they should be the same]

The central thesis of the book is that NL is a knowledge
representation system with its own representational and reasoning
mechanism. "All information and knowledge processing tasks supporting
the computer system's intelligent behaviour take place at the NL
level" (p. xiv).

The claimed advantage is that NL-like uniformity of representation and
reasoning yields a powerful computational architecture. NL-driven
computational models, as introduced in the book
- simulate the role of NL in human information and knowledge
 processing, and
- are different from other KR and reasoning systems, which
 are representationally and inferentially impoverished relative to NL.

The book consists of eleven chapters by different authors, and is
organised in two parts. Part 1 introduces several NL-like KR and
reasoning systems. Commonalities among the different formalisms are
- the use of sets
- rich ontologies for the domains of discourse
- specialised inference rules

Part 2 deals with knowledge representation and acquisition for
general-purpose NLP. It addresses the pros and cons of uniform
vs. non-uniform approaches to knowledge representation and
reasoning. Uniform approaches use the same representation language for
every task, but are often representationally and computationally
impoverished. It also discusses knowledge acquisition from texts and
presents progress on fully automatic knowledge acquisition.

Chapter 1 by Lucja M. Iwanska presents the UNO formalism as a model of
NL, and claims that an NL-like representation is expressive and
computationally tractable at the same time. NL utterances are
represented by type equations. UNO has a computable set-theoretic and
interval-theoretic semantics. It is influenced by the constraint logic
programming language LIFE, by the linguistic variable approach of
fuzzy logic, by Montague semantics, by generalised quantifier theory,
among others.
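The set- and interval-theoretic flavour of UNO's semantics can be illustrated with a toy sketch. This is my own illustration, not the chapter's code: adjective meanings are modelled as interval constraints on numeric dimensions (the dimension names and ranges are invented), and conjunction is intersection.

```python
# Toy sketch (not actual UNO code): adjective meanings as closed
# numeric intervals on dimensions, composed by intersection.

def intersect_interval(a, b):
    """Intersection of two closed intervals (lo, hi); None if empty."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

# Illustrative interval constraints for "tall" and "young"
tall = {"height_cm": (180, 250)}
young = {"age_years": (0, 30)}

def compose(*props):
    """Conjoin property constraints, intersecting shared dimensions."""
    out = {}
    for p in props:
        for dim, iv in p.items():
            out[dim] = intersect_interval(out[dim], iv) if dim in out else iv
    return out

# "tall young (person)": both constraints hold at once
print(compose(tall, young))
# {'height_cm': (180, 250), 'age_years': (0, 30)}
```

An unsatisfiable combination (disjoint intervals on the same dimension) surfaces as None, giving a crude notion of inconsistency detection.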

Chapter 2 by David McAllester and Robert Givan presents a new
polynomial time procedure for automated inference. It makes use of
"Montagovian syntax", which is a syntactic variant of FOPL. It is
based on taxonomic relations between expressions that denote sets. The
inference procedures are based on the quantifier-free fragment of
Montagovian syntax, which is more expressive than the quantifier-free
fragment of classical First-Order Predicate Logic. Unfortunately,
there is no clear characterisation of this fragment and its usefulness
for NLP. The authors acknowledge that the formalism is only distantly
related to NL syntax.
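The taxonomic flavour of such inference can be sketched very loosely (the chapter's actual procedure is far richer, and the terms below are invented): given subset ("is-a") assertions between set-denoting terms, entailment reduces to polynomial-time graph reachability.

```python
# Loose sketch of taxonomic inference as reachability over subset
# assertions between set-denoting terms (example facts are invented).
from collections import defaultdict, deque

def closure(subsets):
    """Map each term with outgoing edges to all terms reachable via
    subset edges, by breadth-first search from each node."""
    graph = defaultdict(set)
    for sub, sup in subsets:
        graph[sub].add(sup)
    reach = {}
    for node in list(graph):
        seen, queue = set(), deque([node])
        while queue:
            for nxt in graph[queue.popleft()]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        reach[node] = seen
    return reach

facts = [("dog", "mammal"), ("mammal", "animal"), ("cat", "mammal")]
reach = closure(facts)
print("animal" in reach["dog"])  # True: dog is-a mammal is-a animal
```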

Chapter 3 by David McDonald introduces KRISP, a "new representation of
the literal information conveyed by an NL utterance." KRISP is
"fundamentally a semantic network" with no deductive component and
limited expressive power. The design objectives for KRISP, motivated
by the DARPA TIPSTER information extraction tasks, were comprehension
of real-world texts at real-time processing speed, and
a bi-directional representation which is usable for parsing and
generation. The "semantic representation of any grammatically
well-formed phrase in a text should be a first-class object. [...]
KRISP is an object-oriented repackaging of the lambda calculus with
the insight that the binding of a variable to a value should be given
a first-class representation."

Chapter 4 by Lenhart Schubert and Chung Hee Hwang presents Episodic
Logic (EL), which combines ideas from Montague grammar, Situation
Semantics and Discourse Representation Theory, and adds new ideas
concerning situations, actions, propositions, facts, times,
quantification, tense and aspect. The semantics of EL remains an area
of active research. The research methodology is to start with a very
expressive formalism, investigate its formal and semantic properties,
and produce an experimental implementation. There is an easy mapping
from NL to EL. EL supports both goal-driven and input-driven
inference, and has been tested on small, but realistic text samples.
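The contrast between input-driven and goal-driven inference can be sketched generically as forward versus backward chaining over propositional Horn rules. This bare-bones illustration is mine and has nothing of EL's actual machinery; the rules and atoms are invented.

```python
# Generic forward vs. backward chaining over propositional Horn rules.
# Each head maps to a list of alternative body sets; facts are atoms.
RULES = {"wet": [{"rain"}], "slippery": [{"wet"}]}

def forward(facts, rules):
    """Input-driven: saturate the fact set from incoming input."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for head, bodies in rules.items():
            if head not in facts and any(b <= facts for b in bodies):
                facts.add(head)
                changed = True
    return facts

def backward(goal, facts, rules):
    """Goal-driven: prove a goal by recursively proving subgoals."""
    if goal in facts:
        return True
    return any(all(backward(g, facts, rules) for g in body)
               for body in rules.get(goal, []))

print(sorted(forward({"rain"}, RULES)))       # ['rain', 'slippery', 'wet']
print(backward("slippery", {"rain"}, RULES))  # True
```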

Chapter 5 by Stuart C. Shapiro presents SNePS, a new logic appropriate
for NLU, which is the basis of an NL-using intelligent agent. Types of
expression in SNePS are propositions, rules, acts and
individuals. Among the features of SNePS are
- set-oriented connectives
- unique variable binding: different variables have different names
- set arguments, e.g., sisters({Mary,Sue,Sally})
- "higher-order logic", implemented in first-order logic
- intensional representations
- numerical quantifiers
- contexts (real-world vs. fictional)
- belief revision, using dialogue to resolve contradictions
- circular and recursive rules
Some commonsense reasoning problems that pose difficulties for FOPL
are presented, along with their solutions in SNePS.
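The set-argument feature mentioned above can be given an illustrative reading (this is not actual SNePS syntax or code): a predicate applied to a set, such as sisters({Mary, Sue, Sally}), asserts the relation for every unordered pair drawn from the set.

```python
# Illustrative reading (not actual SNePS) of a set argument:
# sisters({Mary, Sue, Sally}) expands to pairwise atomic facts.
from itertools import combinations

def expand_set_argument(relation, members):
    """Expand relation({a, b, ...}) into one fact per unordered pair."""
    return {(relation, a, b) for a, b in combinations(sorted(members), 2)}

facts = expand_set_argument("sisters", {"Mary", "Sue", "Sally"})
print(sorted(facts))
# [('sisters', 'Mary', 'Sally'), ('sisters', 'Mary', 'Sue'),
#  ('sisters', 'Sally', 'Sue')]
```

The point of treating the set as a single argument, rather than storing the expansion, is economy: one assertion stands in for a quadratic number of pairwise facts.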

Chapter 6 by Bonnie J. Dorr and Claire R. Voss presents a "non-uniform
approach to interlingual machine translation, in which distinct
representational languages are used for different types of knowledge."
The authors claim that non-uniformity of representation across
representation levels has facilitated the identification and
construction of systematic relations at the interface between each
pair of levels.

Chapter 7 by Lucja M. Iwanska argues for uniform KR and reasoning. It
presents a uniform temporal and spatial reasoner with
representational, inferential and computational characteristics
resembling NL. The author has collected hundreds of real-life test and
design examples for quantitative evaluation of the system's
performance in terms of precision and recall.

Chapter 8 by Susan McRoy, Syed S. Ali and Susan M. Haller favours
uniform representations. It uses a propositional semantic network
representation. There is no distinction between object-level
(knowledge derived from utterances) and meta-level (knowledge about
utterances) processing. The representation has been applied in a
general-purpose tutoring system.

Chapter 9 by Sanda M. Harabagiu and Dan J. Moldovan is concerned with
enriching the WordNet taxonomy with contextual knowledge acquired from
texts, using fully automatic procedures. It uses a representation
resembling Sowa's conceptual graphs.

Chapter 10 by Lucja M. Iwanska, Naveen Mata and Kelly Kruger is
concerned with the acquisition of concept (type) definitions from
large-scale, real-life textual corpora. It is based on "processing
simple, easily extractable, parsable and interpretable constructs of
NL conveying taxonomic knowledge". The method ("learning from clear
cases") achieves a high degree of precision and recall, and practically
eliminates human pre- and postprocessing. UNO is used as the
representation formalism.
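The spirit of "learning from clear cases" can be sketched with a minimal pattern matcher that harvests (hyponym, hypernym) pairs from easy, unambiguous constructions of the form "Xs such as A, B and C". The regular expression, the crude singularisation, and the example sentence are all my own, not the chapter's grammar.

```python
# Minimal sketch of harvesting taxonomic pairs from "clear cases":
# constructions like "Xs such as A, B and C" (pattern is illustrative).
import re

def clear_cases(sentence):
    m = re.search(r"(\w+)s such as ([^.]+)", sentence)
    if not m:
        return set()
    hypernym = m.group(1).lower()                    # "Languages" -> "language"
    hyponyms = re.findall(r"[A-Z]\w+", m.group(2))   # capitalised names
    return {(h, hypernym) for h in hyponyms}

text = "Languages such as Polish, Finnish and Hungarian have rich morphology."
print(sorted(clear_cases(text)))
# [('Finnish', 'language'), ('Hungarian', 'language'), ('Polish', 'language')]
```

Because the construction is nearly unambiguous, such extractors trade coverage for precision, which is exactly the bargain the chapter's method makes.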

Chapter 11 by William J. Rapaport and Karen Ehrlich presents a
psychologically motivated method for acquiring dictionary-like
definitions of new words, their different senses and novel uses. New
vocabulary is acquired by determining meaning from context, i.e.,
surrounding text, grammatical information, and background knowledge.
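As a toy illustration of "determining meaning from context" (this is not the authors' algorithm, and the mini-corpus is invented, though the unknown word "brachet" echoes the example commonly used in this line of work): the content words that co-occur with an unknown word give a first, crude profile of its meaning.

```python
# Toy illustration: profile an unknown word by the non-stopword
# neighbours that co-occur with it within a small token window.
from collections import Counter

STOP = {"the", "a", "an", "of", "and", "is", "to", "in", "at", "it"}

def context_profile(word, sentences, window=3):
    """Count non-stopword neighbours of `word` within +/- window tokens."""
    counts = Counter()
    for sent in sentences:
        toks = sent.lower().split()
        for i, t in enumerate(toks):
            if t == word:
                for c in toks[max(0, i - window):i + window + 1]:
                    if c != word and c not in STOP:
                        counts[c] += 1
    return counts.most_common(3)

corpus = ["The brachet chased the hart in the forest",
          "The brachet bayed at the hart"]
print(context_profile("brachet", corpus))
```

Even this crude profile (chasing, baying, harts) points toward "some kind of hunting dog"; the chapter's method adds grammatical information and background knowledge on top of raw co-occurrence.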

Critical evaluation:

It is an absolutely fascinating book, which helps one to see many
issues in NLP and KR in a new light. The numerous examples and
large-scale implementations add substance to the claims made in
the book and provide inspiration for rethinking the challenges.

Although the book is self-contained and contains appendices which
introduce logics, mathematical background, examples and data, it is
not suitable as an introductory text to the field. Rather, it is aimed
at postgraduate students or experienced researchers who know standard
NLP and KR techniques.

Unfortunately, the book fails to give a precise definition of what it
means for a representation to be NL-like. The relation between
surface forms, surface syntax, and NL-like representations is not made
clear. The problem of how to generate NL expressions in different
languages from NL-like representations - which would provide a good
test for NL-likeness - is not addressed. The authors do not make clear
in which sense standard FOPL, which was conceived as a formalism for
representing NL propositions, is not NL-like. In my opinion, the
NL-likeness of a representation is a point on a continuum between
surface strings and formal logic representations, rather than a
binary distinction.

No tests are given for evaluating whether a representation has
sufficient expressivity, although a number of representational and
inferential challenges are given in appendix C. The relationship
between the different formalisms presented in the book needs further
clarification.

The book re-opens the debate over whether "designer logics" or standard
predicate logic is more suitable for NLP. The experience of CYC,
which has managed to represent much of common-sense knowledge (a quarter
million concepts and over a million relations) in FOPL, appears to
suggest that FOPL is expressive enough, and that specialised inference
procedures can support efficient reasoning over this knowledge. Many
of the formalisms presented in the book are indeed syntactic variants
of fragments of FOPL.

For me, as someone who is actively involved in the design and
implementation of knowledge-based dialogue systems, the book has
provided ample inspiration.

About the reviewer

Gregor Erbach is Head of Systems Architecture at semanticEdge in
Berlin, a company which builds knowledge-based dialogue systems. His
research interests include constraint-based grammar formalisms,
natural language parsing and generation, text categorisation and
summarisation, multilingual information retrieval, and dialogue
systems.