Discussion Details

Title: Re: A Challenge to the Minimalist Community
Submitter: Richard Sproat
Description: We thank the people who have responded to our challenge posted in
16.1156, both in private and on the List. A number of the responses
(mostly those offered in private) have been supportive. Others have
raised issues with our challenge. In the interests of brevity, we will
respond to the main objections rather than to individual comments:

1. It is too early to expect P&P to provide a theory that can be
implemented as part of a large-scale parsing system that learns from
data.

RESPONSE: This was our "Objection 3", which we characterized as
a "remarkable dodge". Need we say more?

2. The challenge is the wrong challenge, either because:

A. We rely on the Penn Treebank as our gold standard, whereas
there is no reason to accept the validity of the Penn Treebank
structures; they are not even theoretically interesting.

B. Providing valid structures for sentences is not the only goal, or even
the most reasonable goal, of syntactic theory: a syntactic theory should
also provide grammaticality judgments for sentences and explain
cross-linguistic variation.

C. Statistical approaches have it too easy since they are trained on
data that is similar in genre to the test data.

RESPONSE: If you do not like the Penn Treebank, you are free to use
any other reasonable corpus, and to provide your own annotations
and representations. The task remains the same. Show that a P&P
acquisition system can do at least as well as statistical approaches.

Regarding B, we remind readers that humans do assign structure to
sentences, that assigning structure to sentences is surely a part of
what syntax is about, that humans acquire this knowledge as part of
language acquisition, and that P&P claims to provide an explanation of
how this is achieved. So we are at a loss to understand why inducing
a large-scale working parser from sample data is not a valid test of
P&P.

The claim that statistical approaches have it "too easy" will have some
content when it is accompanied by an implemented P&P device that
matches the performance of machine learning systems. If such a
device cannot be constructed, it suggests not that statistical systems
have it too easy (the same conditions have always been on offer to
those interested in developing a large-coverage P&P parser), but that
the P&P framework is not computationally viable as a model for
language acquisition.
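
For concreteness, comparisons of this kind are conventionally scored with
the PARSEVAL measures: labeled bracketing precision, recall, and F1 of a
system's output trees against the gold-standard annotations. The sketch
below only illustrates that arithmetic for a single sentence; the nested
tuple trees are a simplification for the example, not the Treebank format,
and the code is not the evalb tool.

# Toy illustration only: labeled bracketing precision/recall/F1 computed
# over constituent spans, with trees given as nested tuples such as
# ("NP", "the", "man"). A simplified version of the PARSEVAL measures,
# not the evalb tool or the Treebank format.

def spans(tree, start=0):
    """Return (set of (label, i, j) constituent spans, end position)."""
    label, *children = tree
    result, pos = set(), start
    for child in children:
        if isinstance(child, tuple):       # internal constituent
            child_spans, pos = spans(child, pos)
            result |= child_spans
        else:                              # terminal word
            pos += 1
    result.add((label, start, pos))
    return result, pos

def bracketing_scores(gold, candidate):
    """Labeled bracketing precision, recall, and F1 for one sentence."""
    g, _ = spans(gold)
    c, _ = spans(candidate)
    correct = len(g & c)
    precision, recall = correct / len(c), correct / len(g)
    f1 = 2 * precision * recall / (precision + recall) if correct else 0.0
    return precision, recall, f1

# A candidate parse that attaches the PP to the verb phrase rather than
# to the object noun phrase loses one constituent relative to the gold tree.
gold = ("S", ("NP", "I"),
        ("VP", ("V", "saw"),
         ("NP", ("NP", "the", "man"), ("PP", "with", "the", "telescope"))))
cand = ("S", ("NP", "I"),
        ("VP", ("V", "saw"),
         ("NP", "the", "man"), ("PP", "with", "the", "telescope")))
print(bracketing_scores(gold, cand))   # precision 1.00, recall ~0.86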

3. The challenge could certainly in principle be met by P&P.

RESPONSE: "In principle" doesn't count here. Only "in fact" has any
credibility.

4. The challenge is already being met.

RESPONSE: Oh really? Where? We look forward to seeing convincing
evidence of this.

5. Computational linguistics is about engineering rather than science.
It may be useful for us scientists to be more aware of what is going on
in engineering, and similarly the engineers could gain some insights
from us scientists.

RESPONSE: It is true that computational linguistics often has
engineering applications and that these applications often motivate
computational linguists to address certain problems. But let's not
confuse the issue. Many computational linguists, the two present
authors included, are fully trained linguists who happen to be
interested in how computational methods can yield insights into
language. If this is not science, we do not know what is.

6. Machine learning cannot produce constraints that rule out
ungrammatical sentences. Whereas P&P seeks to characterize the
set of possible natural languages, ML just learns the syntactic patterns
exhibited in a particular corpus.

RESPONSE: Machine learning has achieved induction of robust
grammars that can, in fact, be turned into classifiers able to distinguish
between acceptable and ill-formed structures over large linguistic
domains. The fact that after more than half a century of sustained
research the P&P enterprise and its antecedents have failed to
produce a single broad-coverage computational system for grammar
learning suggests that its notion of Universal Grammar encoded in a
language faculty may well be misconceived. The increasing success of
unsupervised ML techniques in grammar acquisition lends at least
initial plausibility to the proposal that general learning and induction
mechanisms, together with minimal assumptions concerning basic
linguistic categories and rule hypothesis search spaces, are sufficient
to account for much (perhaps all) of the language acquisition task.
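
To give the classifier point a concrete, if deliberately minimal, shape:
even the crudest statistical model estimated from attested data can be
thresholded to separate sentence-like word orders from scrambled ones.
In the sketch below a smoothed bigram model stands in for the far richer
grammars that induction systems actually produce; the corpus, threshold,
and test strings are invented purely for the example.

# Toy illustration only: a smoothed bigram model estimated from a tiny
# invented corpus, thresholded to act as an acceptability classifier.
# Real grammar-induction systems are far more sophisticated; the corpus,
# threshold, and test sentences here are made up for the example.

from collections import Counter
from math import log

corpus = [
    "the dog chased the cat",
    "the cat saw a dog",
    "a dog saw the cat",
    "the dog saw a cat",
]

unigrams, bigrams = Counter(), Counter()
for sent in corpus:
    words = ["<s>"] + sent.split() + ["</s>"]
    unigrams.update(words)
    bigrams.update(zip(words, words[1:]))

vocab = len(unigrams)

def avg_log_prob(sentence):
    """Add-one smoothed bigram log-probability, averaged per transition."""
    words = ["<s>"] + sentence.split() + ["</s>"]
    total = sum(log((bigrams[(p, w)] + 1) / (unigrams[p] + vocab))
                for p, w in zip(words, words[1:]))
    return total / (len(words) - 1)

def acceptable(sentence, threshold=-2.0):
    """Classify as acceptable if the model's average score clears a
    hand-picked threshold (chosen for this toy corpus only)."""
    return avg_log_prob(sentence) > threshold

print(acceptable("the dog chased a cat"))   # True: matches attested patterns
print(acceptable("dog the cat a chased"))   # False: scrambled word order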

7. You should have offered a monetary prize as a financial incentive
for meeting the challenge.

RESPONSE: We don't see why we need to pay people extra for
demonstrating the viability of a "research program" which has
dominated much of the field for decades, but has yet to produce
anything approaching the results that its rivals have achieved
efficiently in a relatively short period of time.

Finally, since our challenge has actually stimulated relatively little
discussion from the P&P community, we suspect the following may
also be one response:

8. Ignore the challenge because it's irrelevant to the theory and
therefore not interesting.

RESPONSE: This is the "answer" we had most anticipated. It does not
bode well for a field when serious scientific issues are dismissed, or
simply met with silence.

Richard Sproat
Shalom Lappin
Date Posted: 05-May-2005
Linguistic Field(s): Computational Linguistics
Syntax
Discipline of Linguistics
LL Issue: 16.1439
