Discussion Details

Title: Re: 16.1156, A Challenge to the Minimalist Community
Submitter: Chung-chieh Shan
Description: In response to Richard Sproat and Shalom Lappin's challenge (16.1156),
Peter Hallman (16.1251) draws a contrast between the Principles and
Parameters (P&P) approach and statistical approaches to parsing.

A statistical parser can, within physical limitations, recognize
and learn any statistically significant pattern, not merely those
patterns that occur in human languages.... The P&P framework
seeks to answer the question

(Q) What is a possible human language (type)?

The P&P parser that Sproat and Lappin envision would answer this
question; comparable statistical parsers do not.

He suspects that it would be "unrealistic" for a P&P parser to reach accuracy
comparable to that of current statistical parsers in three years, for two reasons.
First, as the paragraph above concludes, a P&P parser would accomplish
more than current statistical parsers. Second, current P&P theory may not
be "ready to form the basis of a trainable parser".

I am more optimistic about P&P. To me, these same two reasons suggest that
Sproat and Lappin's challenge is realistic rather than unrealistic.

First, a statistical parser is only hindered when it recognizes patterns that
do not occur in human languages. The larger the space of hypotheses to
explore, the less effective machine learning can be. Conversely, many
advances in statistical parsing (going back as far as probabilistic regular
and context-free grammars) are made precisely by better
delineating "those patterns that occur in human languages", such as locality
and hierarchy. In other words, a statistical parser embodies an
(approximate) answer to the question Q, just as a P&P parser or theory
does. A better answer should give rise to a better parser.
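To make the point about probabilistic context-free grammars concrete, here is a minimal sketch: a toy PCFG in Chomsky normal form and a CYK recognizer that scores the best parse of a sentence. The grammar and all rule probabilities are invented for illustration and come from neither Hallman's nor this post; the point is only that the grammar itself encodes hierarchy and locality (each rule relates a span to two adjacent subspans), so the parser's hypothesis space is restricted to exactly "those patterns that occur in human languages" as the grammar delineates them.

```python
from collections import defaultdict

# Toy PCFG in Chomsky normal form (illustrative grammar, hypothetical
# probabilities -- not taken from the discussion).
lexical = {          # (A, word) -> P(A -> word)
    ("NP", "she"): 0.4, ("Det", "the"): 1.0,
    ("N", "dog"): 0.5, ("N", "cat"): 0.5, ("V", "saw"): 1.0,
}
binary = {           # (A, B, C) -> P(A -> B C)
    ("S", "NP", "VP"): 1.0, ("NP", "Det", "N"): 0.6, ("VP", "V", "NP"): 1.0,
}

def cyk(words):
    """Return the probability of the best parse of `words` rooted in S
    (0.0 if the grammar licenses no parse)."""
    n = len(words)
    # chart[i, j, A] = best probability of nonterminal A spanning words[i:j]
    chart = defaultdict(float)
    for i, w in enumerate(words):
        for (a, word), p in lexical.items():
            if word == w:
                chart[i, i + 1, a] = max(chart[i, i + 1, a], p)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):          # split point: only adjacent spans combine
                for (a, b, c), p in binary.items():
                    score = p * chart[i, k, b] * chart[k, j, c]
                    chart[i, j, a] = max(chart[i, j, a], score)
    return chart[0, n, "S"]

print(round(cyk("she saw the dog".split()), 4))   # -> 0.12
```

A better answer to Q, in this setting, is simply a grammar whose rules and probabilities track human-language patterns more closely; the same chart algorithm then yields a better parser.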

Second, the attention that the P&P approach pays to language acquisition
corresponds directly to payoffs in parsing performance. For example, a
parser whose design addresses the poverty of the stimulus should require
less training data, less supervision, or both. Such a parser would be able to
learn from the Penn Treebank better, take advantage of vast amounts of
unlabeled corpora, or both.

In sum, a parser that better "connect[s] typological universals to the
mechanism of language learning" will fare better in accuracy, all other
things being equal. That one linguistic theory may be more "ready" than
another for implementation reflects not just on the focus of different
communities (as Martha McGinnis points out, 16.1251) but also on the
theories themselves. Trying to answer the question Q is no excuse for poor
parsing. All other things being equal, poor (or unknown) parsing
performance indicates failure at (resp. disinterest in) answering Q.
Date Posted: 22-Apr-2005
Linguistic Field(s): Computational Linguistics
Discipline of Linguistics
LL Issue: 16.1288