Discussion Details

Title: Re: A Challenge to the Minimalist Community
Submitter: Carson Schutze
Description: I see my attempt at a simple metaphor has gone awry. And we seem
to be spiraling down into a general discussion of "How can proponents
of theory X ever show that it is right/wrong/nonvacuous, etc." Over the
years such discussions on the List have not been very fruitful, in my
opinion. But I think the Sproat & Lappin challenge raised a much more
specific point that risks getting lost.

[Sorry for consuming so much bandwidth. I don't foresee the need to
say anything further. And let me acknowledge Richard Sproat for
some off-list discussion that helped me to clarify some points; he is of
course not responsible for anything I say below.]

Emily Bender drew the following conclusion from my metaphor:
  If I've understood the point of this analogy, it is that building a system
  which can take UG and some natural language input and produce a
  grammar which can be used to assign structures to (at least the
  grammatical) strings in some corpus of language is somehow outside
  the original point of what P&P was trying to do.

No, that was not the point. The point was that trying to compare the
success of two systems (vehicles) at accomplishing a single task
(going really fast) is pretty meaningless if you totally ignore all the
other things the systems can or cannot do, e.g. support family
transportation needs (something that one of the candidates, the
Corvette, was never designed to do and shows no signs of being able
to do). [Of course opinions differ on whether something shows signs
of being able to do X--see below.] This is not to say that going fast
was not *a* goal in the design of the SUV as well (does anyone ever
design a vehicle with the intent of it NOT being able to go fast?
perhaps a go-kart); it's simply that other desiderata were considered
higher priorities to worry about first (for what many of us consider
principled reasons).

Just to be crystal clear (and I don't pretend to speak for all P&Pers
here): I have no objection to the suggestion that P&P might benefit
from trying to build a wide-coverage parser, or implement aspects of
the theory in some other way, or pursue proofs as to whether it is
capable of (learning to) parse. Others may have strong feelings that
this would be unproductive at this stage; I'm agnostic, and that's not
relevant to my point.

My point is that the comparison, which was fairly explicit in S&L's
original posting, between P&P and statistical (and other, though they
focused on statistical) parsers doesn't make sense. Here's some text
from the challenge:

  What is particularly notable about the Klein-Manning grammar
  induction procedures is that they do what Chomsky and others have
  argued is impossible: They induce a grammar using general statistical
  methods which have few, if any, built-in assumptions that are
  specific to language.

To even debate this, we would have to establish a definition
for "grammar"; earlier in the paragraph this system is described as
inferring a "parser", which, as has been discussed, is crucially not the
same thing under usual interpretations of these terms.
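
To make the distinction concrete, here is a minimal sketch in Python
(a toy grammar and lexicon invented purely for illustration, not
anyone's actual system). The grammar is a declarative rule set that
defines well-formedness; the parser is a procedure that uses those
rules to assign structures. Inducing the second is not the same as
inducing the first.

    # Toy context-free rules and lexicon (hypothetical, for illustration).
    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["Det", "N"]],
        "VP": [["V", "NP"], ["V"]],
    }
    LEXICON = {"the": "Det", "dog": "N", "cat": "N", "saw": "V", "slept": "V"}

    def parse(symbol, words):
        """Return parse trees deriving `words` from `symbol` (possibly none)."""
        if len(words) == 1 and LEXICON.get(words[0]) == symbol:
            return [(symbol, words[0])]
        trees = []
        for rhs in GRAMMAR.get(symbol, []):
            if len(rhs) == 1:
                trees += [(symbol, t) for t in parse(rhs[0], words)]
            else:  # binary rule: try every split point
                for i in range(1, len(words)):
                    for left in parse(rhs[0], words[:i]):
                        for right in parse(rhs[1], words[i:]):
                            trees.append((symbol, left, right))
        return trees

    def grammatical(words):
        """The grammar defines well-formedness independently of any one parse."""
        return bool(parse("S", words))

    print(parse("S", "the dog saw the cat".split()))  # structures assigned
    print(grammatical("dog the saw".split()))         # False: ill-formed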

The important point is the suggestion that some 'alternative(s)' to P&P
can supposedly do "what Chomsky and others have argued is
impossible ... induce a grammar". Here we have a comparison based
on a false premise, it seems to me. What is the evidence that the
Klein/Manning algorithms induce a grammar that has the properties
Chomsky argued required innate structure to learn? All we've been
told about it is that it parses some corpora at some rate less than 80%
but is "quickly converging" on that level of accuracy. No one in P&P
ever claimed that inducing the ability to parse a representative subset
of a corpus of everyday speech to a certain approximation (given POS
tags) required innate linguistic machinery. That's not the basis of any
poverty-of-the-stimulus argument. We haven't even been told whether
this statistical learner systematically distinguishes well-formed from ill-
formed novel input, a sine qua non for the sorts of systems Chomsky
is talking about.
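
Just to be explicit about the shape of such a test, here is a minimal
sketch (the "learner" is a stand-in toy bigram model and the data are
invented; nothing here is the Klein/Manning system): train on a tiny
corpus, then ask whether novel well-formed strings are systematically
scored above their scrambled counterparts.

    from collections import Counter

    # Invented training data standing in for "everyday speech".
    TRAIN = [s.split() for s in
             ["the dog saw the cat", "the cat slept", "a dog slept"]]

    # Toy stand-in learner: count attested adjacent word pairs.
    bigrams = Counter((w1, w2) for s in TRAIN
                      for w1, w2 in zip(["<s>"] + s, s + ["</s>"]))

    def score(words):
        """Fraction of adjacent pairs the model has seen in training."""
        pairs = list(zip(["<s>"] + words, words + ["</s>"]))
        return sum(bigrams[p] > 0 for p in pairs) / len(pairs)

    well_formed = "a cat saw the dog".split()  # novel but grammatical
    ill_formed = "cat the dog saw a".split()   # scrambled version

    # The question is whether the system *systematically* separates
    # the two, not whether it parses some familiar corpus at some rate.
    print(score(well_formed), score(ill_formed))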

Later on we find the following:

  If the claims on behalf of P&P approaches are to be taken seriously,
  it is an obvious requirement that someone provide a computational
  learner that incorporates P&P mechanisms, and uses it to demonstrate
  learning of the grammar of a natural language.

  **With this in mind, we offer the following challenge to the
  community.**

  We challenge someone to produce, by May of 2008, a working P&P
  parser that can be trained in a supervised fashion on a standard
  treebank, such as the Penn Treebank, and perform in a range
  comparable to state-of-the-art statistical parsers.

What are we to make of "with this in mind" as a connective between
the upper (and preceding) paragraphs and the lower? The former
talks about learning a grammar of a natural language. The latter talks
about correctly parsing 90% of examples sampled from some corpus
the system was trained on. Accomplishing the very narrow parsing
task in S&L's challenge hardly tells us anything about whether some
system is or is not able to learn a natural language grammar, so if our
goal is really studying how humans acquire grammars, the challenge
is virtually irrelevant to that goal.
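
For concreteness about what the challenge's figure would measure,
here is a minimal sketch of a PARSEVAL-style labelled bracketing
score, the usual basis for such percentages (the span sets below are
invented). Matching treebank brackets at a high rate is a much
narrower accomplishment than learning the grammar of a natural
language.

    def bracket_f1(gold, predicted):
        """gold, predicted: sets of (label, start, end) constituent spans."""
        matched = len(gold & predicted)
        precision = matched / len(predicted) if predicted else 0.0
        recall = matched / len(gold) if gold else 0.0
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)

    gold = {("S", 0, 5), ("NP", 0, 2), ("VP", 2, 5), ("NP", 3, 5)}
    pred = {("S", 0, 5), ("NP", 0, 2), ("VP", 2, 5), ("PP", 3, 5)}
    print(bracket_f1(gold, pred))  # 0.75: three of four brackets match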

I suppose that someone of the S&L persuasion might sum up the
argument thus [I'm speaking purely hypothetically, following the lead
of S&L in suggesting what "the other side" might say]:

"How do humans learn and parse human language? Chomsky says
this ability relies on innate language-specific knowledge. But *we*
have statistical systems that we claim can achieve part of what
humans do, without any innate language-specific knowledge. We've
solved/are on the verge of solving (part of) the problem you said only
your approach could solve, so you'd better convince us that at the
very least you can indeed solve that problem too. Then we'll have two
promising theories that we can try out on other parts of the bigger
problem."

To show what's wrong with this, despite some trepidation I cannot
resist one final vehicular analogy.

"What makes a car work in its primary function (as a self-propelled
device)? You claim that an engine is absolutely crucial. Now we
observe that one of the properties that cars have is that if you push
them, they will roll for a while (e.g. when the battery is dead). I've built
a contraption (a little red wagon, say) that will roll for a while if you
push it. Therefore, your claim that an engine is necessary to make a
car work is now seriously in jeopardy, because my little red wagon
doesn't have an engine, and look, it rolls almost as well as a fast car,
and better than an SUV. We should explore little red wagons as
alternatives to cars."

To avoid misinterpretation:

engine = innate knowledge
roll on wheels = (learn to) approximately parse a corpus after training
on it
self-propulsion = acquiring human language
car = human: can do lots of things, of which rolling after a push is
one, and obviously not totally unrelated to its critical function of
self-propulsion, but not one of the more difficult things to get it to
do either
SUV = current-day P&P model, according to S&L, who might say it
doesn't roll at all

Carson
Date Posted: 11-May-2005
Linguistic Field(s): Computational Linguistics; Linguistic Theories;
Discipline of Linguistics
LL Issue: 16.1505
