LINGUIST List 4.689

Sun 12 Sep 1993

Disc: Constraints and The Linguistic Wars


Directory

  1. JACOBSON, the notion of constraints

Message 1: the notion of constraints

Date: Sun, 12 Sep 93 02:20:13 EDT
From: JACOBSON <LI700013BROWNVM.bitnet>
Subject: the notion of constraints

 Having looked through (just part of) Harris' book (THE LINGUISTICS
WARS), and having read some of the comments on Linguist regarding the
notion of "constraints" (with respect to whether or not Ross and
others were interested in "constraining the power of transformations")
I think it's important to note that three distinct notions of "constraints"
are being conflated in some of the comments (and, at times, in the book).
In fact, I see this conflation very often in linguistic discussions.
 We can distinguish three things. (1) Constraining the set of possible
languages that can be described by grammars which are compatible with
a theory. Thus Theory A is "more constrained" in this sense than
Theory B if the grammars allowed by Theory A can describe a proper subset
of languages which are describable by the grammars allowed by Theory B.
(2) Constraining the set of possible grammars which are compatible
with a theory. In this sense, Theory A is "more constrained" than
Theory B if the grammars allowed in Theory A are a proper subset of
those allowed in Theory B. (3) Constraining what transformations can
do, in the sense of things like Subjacency, the Complex NP Constraint,
etc. This notion of "constraint" has nothing to do with the first two
notions although, of course, one could embed such "constraints" into
theories which are more constrained in either the first or second sense.
(This is different from constraining the set of possible transformations -
a constraint on, e.g., what kinds of rules can be written does indeed
constrain the class of possible grammars, although it may or may not
succeed in constraining the class of possible languages.)
 To clarify, note first that the first two senses of "constraining
the theory" are often conflated. There is certainly a connection between
the two - if one succeeds in "constraining" the set of possible languages
then one must also be constraining the set of possible grammars (although
this can get tricky, since it may be that the set of languages countenanced
by Theory A is a proper subset of those of Theory B without the grammars
countenanced by Theory A being a proper subset of those countenanced
by Theory B - the reason is that Theory A may well allow for grammars
not allowed for by Theory B, but Theory B might allow for other
grammars which can describe all of the languages describable by Theory
A.) The important point here, though, is that the second use of the
term "constraint" does not necessarily imply the first. (A good discussion
of this point is given in Wall's mathematical linguistics textbook.)
A classic example is the following: Theory B says that the grammars of
all natural languages contain only context-free phrase structure rules.
Theory A says that the grammars of natural languages contain only
context-free phrase structure rules in Chomsky Normal Form (these are
rules in which a single non-terminal is rewritten only as exactly two
non-terminals, or as a single terminal). The grammars compatible with
Theory A are a proper subset of those compatible with Theory B, but they
give exactly the same class of languages: any context-free grammar can
be rewritten as an equivalent one in Chomsky Normal Form. (Not that
anyone proposes either of these theories, but they serve to illustrate
an important moral: "constraining" the form of possible rules doesn't
in any way necessarily constrain the class of possible languages.
Unfortunately, this point is often lost in linguistic discussions -
people are constantly proposing constraints on the class of possible
grammars without worrying about whether these constraints may be
vacuous.) The Peters and Ritchie proof regarding transformational
grammars concerned the first notion of "constraint" - it said that
basically any kind of language that could be described via any kind
of formal rule system could also be described by a transformational
grammar (with certain kinds of deletion operations).
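 The CNF point can be checked mechanically. Below is a small sketch (entirely mine, not from any proposal in the literature: the grammar pair, the `generate` helper, and all names are chosen purely for illustration). It pairs a context-free grammar for the language a^n b^n with a Chomsky Normal Form counterpart; the two rule sets are different, yet exhaustive generation up to a length bound yields exactly the same strings.

```python
def generate(rules, start="S", max_len=8):
    """All terminal strings of length <= max_len derivable from start.

    rules maps each nonterminal to a list of right-hand sides (tuples of
    symbols); any symbol not in rules is treated as a terminal.  Pruning
    at max_len is sound here because neither grammar has epsilon rules,
    so sentential forms never shrink during a derivation.
    """
    results, seen = set(), set()
    frontier = [(start,)]
    while frontier:
        form = frontier.pop()
        if form in seen or len(form) > max_len:
            continue
        seen.add(form)
        for i, sym in enumerate(form):
            if sym in rules:                       # leftmost nonterminal
                for rhs in rules[sym]:
                    frontier.append(form[:i] + tuple(rhs) + form[i + 1:])
                break
        else:                                      # all terminals: a sentence
            results.add("".join(form))
    return results

# "Theory B" grammar: plain context-free rules for {a^n b^n : n >= 1}.
rules_cfg = {"S": [("a", "S", "b"), ("a", "b")]}

# "Theory A" grammar: the same language, in Chomsky Normal Form
# (every rule rewrites to two nonterminals or one terminal).
rules_cnf = {
    "S": [("A", "X"), ("A", "B")],
    "X": [("S", "B")],
    "A": [("a",)],
    "B": [("b",)],
}
```

With max_len=8, generate(rules_cfg) and generate(rules_cnf) both come out as {"ab", "aabb", "aaabbb", "aaaabbbb"}: a strictly smaller class of admissible grammars (sense 2) with no narrowing at all of the class of languages (sense 1).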
 When we talk about "constraining transformations", a new confusion
is sometimes introduced - sometimes this is used to mean constraining
the class of possible transformations, and sometimes it is used to mean
constraining what a particular transformation (or set of them) can do.
The first sense does indeed constrain the class of possible grammars
(though it may or may not constrain the class of possible languages).
Emonds' Structure Preserving Constraint (as put forth in Emonds, 1976)
is an example of this, as is the claim that the only kind of movement
transformation allowed is Move Alpha. Neither, however, is embedded
within a theory for which it has been shown that adding these
constraints reduces the class of possible languages.

 But what I find to be the most potentially serious confusion centers
on the use of the term "constraint" in the sense of constraining what a
particular transformation may do. This has no bearing on the question
of constraining the class of possible languages - nor even the class of
possible grammars - at least not without further elaboration of
what constitutes possible grammars.
 To illustrate, consider the actual example of the Complex NP Constraint
(one could substitute Subjacency, or any other similar constraint on
transformations into this discussion and make the same point). Imagine
Theory A and Theory B, both of which allow grammars with a rule of
Wh-Movement. To make that concrete, let's put that in terms of classical
transformational grammar, where the rule would be written something like:
 (1)  X    wh   Y
      1    2    3    ===>
      2+1  0    3
(I don't mean this formalisation to be taken too seriously; I'm just using
it to illustrate the point.) Theory A, moreover, says that the
interpretation of such a rule is that any wh - regardless of where it is -
may be moved to the front. Theory B (very roughly, the theory put forth
by Ross) says that such a rule is possible, but that the interpretation of
X is "constrained" (this is exactly why Ross called his dissertation
"Constraints on Variables in Syntax). It is constrained in such a way that
(roughly) it may not contain a portion of a complex NP (and hence the
wh cannot be within a complex NP and then be moved out of it).
 Keeping all other things equal, both theories allow for the same
number of grammars - they just happen to be different grammars. (Actually,
one could put this in a different way: they allow for the same grammars, if by
"grammar" we mean the representation of the rules - but the interpretation
of this representation is different.) Similarly, the languages allowed
by the two theories might be different (although that remains to be seen,
as it depends what else is allowed in the grammars). For the sake
of discussion, though, let us assume that the two theories do allow
for different languages. There's no reason to conclude that either theory
allows for more or fewer languages than the other - they just allow
for DIFFERENT languages.
 Put differently, imagine two grammars - identical in all other
respects - except that one contains the rule in (1) as interpreted under
Theory A, and one contains the rule in (1) as interpreted under
Theory B (i.e., the theory with the Complex NP Constraint). Moreover,
assume that there is a set of sentences (sentences such as *Who do you
know the man who likes?) which can be gotten by rule (1) in the grammar without
the Complex NP Constraint (and can't be gotten in any
other way). Then the corresponding grammar in Theory B can't get
these sentences at all. This means that the grammar under discussion
within Theory B allows for fewer SENTENCES than the corresponding
grammar within Theory A, but there is no sense in which Theory B allows
for fewer LANGUAGES (nor even for fewer possible GRAMMARS) than Theory A.
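 This two-grammar comparison can be made concrete with a deliberately crude sketch (entirely mine, and no more a serious formalisation than rule (1) itself: the encoding of complex-NP membership as a per-token flag and the name `wh_front` are illustrative assumptions, not Ross's formalism). One and the same rule representation is interpreted two ways - with and without the Complex NP Constraint.

```python
def wh_front(tokens, cnpc=False):
    """Toy Wh-Movement, as in rule (1): front the wh-word.

    tokens is a list of (word, inside_complex_np) pairs; the moved
    wh-word is tagged with a "wh:" prefix.  With cnpc=True (Theory B's
    interpretation), the variable X may not contain a portion of a
    complex NP, so extraction from inside one is blocked.  Returns the
    derived word order, or None if no derivation exists.
    """
    for i, (word, islanded) in enumerate(tokens):
        if word.startswith("wh:"):
            if cnpc and islanded:
                return None              # Theory B: CNPC blocks the move
            rest = [w for j, (w, _) in enumerate(tokens) if j != i]
            return [word[3:]] + rest
    return None

# "you know [the man who likes who]" - the wh sits inside a complex NP,
# as in *Who do you know the man who likes?
island = [("you", False), ("know", False), ("the", True),
          ("man", True), ("who", True), ("likes", True), ("wh:who", True)]

# "you saw who" - no island involved.
plain = [("you", False), ("saw", False), ("wh:who", False)]
```

Here wh_front(island) succeeds under Theory A's interpretation but wh_front(island, cnpc=True) returns None, while both interpretations front the wh-word in plain: the same rule representation, different sets of derivable sentences, and no difference at all in how many grammars either theory admits.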

 I raise this because one often sees this notion of constraining
transformations conflated with the notion of constraining the set
of possible languages (or, the set of possible grammars).
For example, take the book which started this discussion: I don't
mean to trash Harris' book - I liked a lot about it - but he
does make this conflation several times in his discussion on pp. 180-
181. The gist of the discussion is that Generative Semantics did
indeed worry about restricting the power of the grammar [in the
first sense] since it paid a lot of attention to constraints [in the
third sense]. I think a more appropriate response to the attacks
on Generative Semantics being "unrestrictive" would be to note that it
had never been shown that Interpretive Semantics was any more "restrictive"
(in the first sense). Nor was it shown that adding things like
global rules in any way increased the power of the theory in the sense
of allowing for more possible languages. (Indeed, in light of the
Peters and Ritchie results, one might argue that adding global rules
and other devices to the theory couldn't possibly make it "more
powerful" since it already was all-powerful.)

 Pauline Jacobson
 Brown University