
Discussion Details

Title: Re: A Challenge to the Minimalist Community
Submitter: Emily Bender
Description: I would like to respond to Carson Schütze's motor vehicle analogy,
from LL 16.1439:

 Consider the following analogy. You and I both are given the task of
 designing a motor vehicle that will get someone from point A to point
 B. You come back with a Corvette, I come back with an SUV. Now
 you say, ''Let's go to a racetrack, I'll bet I can drive a circuit faster
 than you, which means I have the better design.'' I will of course
 object: speed was not specified as the desideratum of the vehicle.
 Both vehicles can get a person from A to B. Moreover, the SUV can
 do lots of things the 'vette can't: carry more than 2 people, hold lots
 of luggage, play DVDs for the back seat passengers, transport
 moderate-sized pieces of furniture, host a small business meeting,
 etc. My motivation in designing it was to make it a multi-purpose
 family vehicle. If I were now to go back to the drafting table and
 modify my SUV design so that it keeps all its current features but can
 also go as fast as a Corvette, surely I will have achieved a much
 more difficult task than the person who just designed the Corvette.

If I've understood the point of this analogy, it is that building a
system which can take UG and some natural language input, produce a
grammar, and use that grammar to assign structures to (at least the
grammatical) strings in some corpus of language is somehow outside the
original point of what P&P was trying to do.

I agree with Asudeh here: Even setting aside for a moment the problem
of learning (i.e., the process of getting from UG to a specific
language grammar), the ability to take strings and assign them
structure constitutes at least part of getting from A to B. Most P&P
work (especially work within the Minimalist Program) operates at a
level of abstraction that seems to preclude working out the details of
assigning structures to actual strings. Doing so requires handling not
only the phenomenon of interest, but also its interaction with
everything else required to assign structure (and meaning). Deducing
that wheels and a transmission are both required for travel from A to B
is only part of the solution.

Work in the theoretical frameworks that do interact with computational
linguistics (e.g., LFG, HPSG, CCG) has repeatedly shown the benefits of
getting computers to keep track of all of the parts of a grammar, so
that the linguist can ask questions like: If I switch to this analysis
of case, what other changes does that require in my grammatical system?
Or, at the level of requirements on the formalism (and from the
perspective of HPSG): Is the simple operation of unification enough, or
does an adequate account of the facts of natural language require the
ability to state relational constraints?
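
To make the notion of unification concrete for readers less familiar
with these formalisms, here is a minimal sketch in Python. It is purely
illustrative, not code from any of the systems cited: real HPSG
implementations work with typed feature structures and much more
machinery. The idea is simply that two partial descriptions either
merge into one consistent description or fail on a clash; relational
constraints would go beyond this by letting the value of one feature be
stated as a relation over others.

# Illustrative only: a toy unifier over feature structures represented
# as nested Python dicts. Atomic values must match exactly; complex
# values are merged feature by feature, and any clash fails.

def unify(fs1, fs2):
    """Return the unification of two feature structures, or None on a clash."""
    if fs1 is None or fs2 is None:
        return None
    if not isinstance(fs1, dict) or not isinstance(fs2, dict):
        return fs1 if fs1 == fs2 else None   # atomic values: identity or clash
    result = dict(fs1)
    for feature, value in fs2.items():
        if feature in result:
            merged = unify(result[feature], value)
            if merged is None:
                return None                  # feature clash: unification fails
            result[feature] = merged
        else:
            result[feature] = value
    return result

# Compatible agreement values unify; incompatible ones do not.
subject = {"HEAD": {"AGR": {"PER": "3rd", "NUM": "sg"}}}
verb_requirement = {"HEAD": {"AGR": {"NUM": "sg"}}}
print(unify(subject, verb_requirement))
# -> {'HEAD': {'AGR': {'PER': '3rd', 'NUM': 'sg'}}}
print(unify({"HEAD": {"AGR": {"NUM": "pl"}}}, verb_requirement))
# -> None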

Grammatical models, when considered in all their detailed glory, are
complex enough that it is not possible to reliably follow all of the
implications of any proposed change in one's head or with pen and
paper. The initial development of infrastructure to interpret (and
parse with) grammars in any particular formalism requires an up-front
investment of time. There is also time-consuming work involved in
implementing theoretical ideas in order to test them. However, the
benefits of both of these investments are immense. They allow us to
test our ideas both for consistency with the rest of the grammatical
system and against a wider range of data than is possible without
computer assistance: The current fastest HPSG parser, `cheap'
(developed within the PET platform of Callmeier 2000), can process a
testsuite of 1000 sentences in a matter of minutes. Using the
regression testing facilities of [incr tsdb()] (Oepen 2001), it is possible
to compare the behavior of the current state of the grammar with
earlier test runs, and look for sentences for which there are changes
in predicted grammaticality, number of parses, structure of parses, etc.
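
The regression comparison just described can be pictured with a small
sketch. The code below is illustrative only: the item names, the Result
fields, and the compare_runs function are invented for the example, and
[incr tsdb()] itself stores far richer profiles and supplies its own
comparison tools.

from typing import Dict, NamedTuple

class Result(NamedTuple):
    grammatical: bool   # did the grammar accept the string?
    n_parses: int       # how many analyses were found?

def compare_runs(old: Dict[str, Result], new: Dict[str, Result]) -> None:
    """Report every test item whose behavior changed between two runs."""
    for item in sorted(set(old) | set(new)):
        before, after = old.get(item), new.get(item)
        if before != after:
            print(f"{item}: {before} -> {after}")

old_run = {
    "kim-sleeps":      Result(grammatical=True,  n_parses=1),
    "sleeps-kim":      Result(grammatical=False, n_parses=0),
    "kim-saw-the-dog": Result(grammatical=True,  n_parses=2),
}
new_run = {
    "kim-sleeps":      Result(grammatical=True,  n_parses=1),
    "sleeps-kim":      Result(grammatical=True,  n_parses=1),   # now wrongly accepted
    "kim-saw-the-dog": Result(grammatical=True,  n_parses=1),   # ambiguity reduced
}
compare_runs(old_run, new_run)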

Furthermore, this kind of work is not restricted to monolingual
investigation. As shown by the LFG ParGram (Butt et al. 2002, King et
al. in press) and HPSG Grammar Matrix (Bender et al. 2002) projects, it
is possible to explore issues of universals and variation across
languages in such a way that the proposed ideas can be tested by
using the grammars to parse testsuites (or corpora) of the languages
studied.
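
As a rough illustration of what such cross-linguistic testing involves,
the sketch below runs stand-in grammars of two languages over small
testsuites and reports coverage. Everything here is hypothetical (the
parse functions, the testsuite items, and the coverage measure); the
actual ParGram and Grammar Matrix grammars are run over real testsuites
with the platforms cited above.

from typing import Callable, Dict, List, Tuple

# Each testsuite item pairs a string with its intended grammaticality.
Testsuite = List[Tuple[str, bool]]

def coverage(parse: Callable[[str], int], suite: Testsuite) -> float:
    """Fraction of items where the grammar's verdict matches the intended one."""
    correct = sum((parse(s) > 0) == ok for s, ok in suite)
    return correct / len(suite)

# Hypothetical stand-ins for implemented grammars of two languages,
# each returning the number of parses found for a string.
def parse_english(s: str) -> int:
    return 1 if s in {"the dog sleeps", "dogs sleep"} else 0

def parse_german(s: str) -> int:
    return 1 if s in {"der Hund schläft", "Hunde schlafen"} else 0

suites: Dict[str, Testsuite] = {
    "English": [("the dog sleeps", True), ("dog the sleeps", False)],
    "German":  [("der Hund schläft", True), ("Hund der schläft", False)],
}

for language, parser in [("English", parse_english), ("German", parse_german)]:
    print(language, coverage(parser, suites[language]))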

I do not believe that all syntactic research should take place in the
context of computational implementation. The implemented systems
discussed above have benefitted greatly from theoretical work as well
as contributing to it. At the same time, theoretical inquiry should
not eschew the potential benefits of computational work.

References:

Many of the resources mentioned above are available online
at: http://www.delph-in.net

Bender, Emily M., Dan Flickinger and Stephan Oepen. 2002. The
Grammar Matrix: An Open-Source Starter-Kit for the Rapid
Development of Cross-Linguistically Consistent Broad-Coverage
Precision Grammars. In Carroll, John and Oostdijk, Nelleke and
Sutcliffe, Richard (eds), Proceedings of the Workshop on Grammar
Engineering and Evaluation at the 19th International Conference on
Computational Linguistics. Taipei, Taiwan. pp. 8-14.

Butt, Miriam, Helge Dyvik, Tracy Holloway King, Hiroshi Masuichi, and
Christian Rohrer. 2002. The Parallel Grammar Project. In Carroll,
John and Oostdijk, Nelleke and Sutcliffe, Richard (eds), Proceedings
of the Workshop on Grammar Engineering and Evaluation at the 19th
International Conference on Computational Linguistics. Taipei,
Taiwan. pp. 1-7.

Callmeier, Ulrich. 2000. PET --- A Platform for Experimentation with
Efficient HPSG Processing Techniques. Natural Language
Engineering 6 (1), Special Issue on Efficient Processing with HPSG.
pp. 99-108.

King, Tracy Holloway, Martin Forst, Jonas Kuhn, and Miriam Butt. In
press. The Feature Space in Parallel Grammar Writing. Journal of
Research on Language and Computation, Special Issue on Shared
Representations in Multilingual Grammar Engineering.

Oepen, Stephan. 2001. [incr tsdb()] -- Competence and Performance
Laboratory. User Manual. Technical Report. Computational
Linguistics, Saarland University, Saarbrücken, Germany.

Emily M. Bender
Department of Linguistics
University of Washington
Date Posted: 09-May-2005
Linguistic Field(s): Computational Linguistics; Syntax; Discipline of Linguistics
LL Issue: 16.1454
