LINGUIST List 9.483

Sun Mar 29 1998

Disc: NLP and Syntax

Editor for this issue: Martin Jacobsen <>


  1. Philip A. Bralich, Ph.D., Re: 9.457, Disc: NLP and Syntax
  2. Dan Maxwell, NLP and Syntax

Message 1: Re: 9.457, Disc: NLP and Syntax

Date: Thu, 26 Mar 1998 15:06:25 -1000
From: Philip A. Bralich, Ph.D. <>
Subject: Re: 9.457, Disc: NLP and Syntax

On Tue, 24 Mar 1998, Pius ten Hacken <> wrote:

>In reaction to Phil Bralich's posting in Vol. 9-383 I would like to
>restate my point briefly as follows:
> The goal of linguistics is an explanatory account of the data. A
>descriptive account of the data is not the goal of linguistics. It is
>not even an intermediate goal. It is only a side effect of the search
>for an explanatory account of the data. As a result, a full
>descriptive account of the data has by itself very little value
>scientifically. It does not indicate any degree of maturity of a
>scientific theory. Therefore a partial explanatory account is better
>for a linguistic theory than a full descriptive account.

I have no idea how anyone can pretend to have an explanatory account
of data they have not yet described. Research that seeks to find an
explanatory set of facts may lead to insights that will lead to
revisions or complete reworkings of previous descriptive accounts, but
if the theory is not itself grounded in a set of descriptive facts it
is just idle speculation. The degree to which the data has been
described is a proper and very ordinary measure of the groundedness
and maturity of any theory. From that basis, discussions of
explanatory adequacy can be undertaken that will have some real value
and real consequences. And of course descriptions that disprove the
hypotheses of particular explanatory accounts can be cause for the
rejection of that account. Trying to separate explanatory and
descriptive theories and then trying to prefer one over the other is
just not sound science. Either our science is grounded in the facts
or we revert to a world where divine revelation to the elect (usually
self-appointed) rather than observation is the means by which we
choose between theories. If that is the science you want you are
welcome to it. Personally, I think it is not good to toss out this
basic principle of science (that a description of basic facts is
required before explanations are offered) simply because it threatens
the hypotheses (explanatory or otherwise) that one would like to
defend.

>The implementation of a syntactic theory as a parser can only test
>its amount of descriptive adequacy. Therefore, there is no reason for
>linguists to accept it as a valid criterion for the evaluation of a
>linguistic theory.

Except as a measure of whether or not that theory is grounded in facts
as all science must be. Let's not forget that every theoretical
mechanism of every serious theory of syntax can be programmed. Thus,
the computer does serve as a measure of a theory's "maturity" in that
it
can demonstrate whether or not a theory can account for the data set
it is designed to account for. Explanatory theories that are not
thoroughly grounded in the facts are speculation at best, a reversion
to divine revelation at worst.

In a final note we are not talking about theories that have been
around for just a couple of years. We are talking about research
efforts that have been around from 10 to 35 years. Asking them to show
their ability to handle the basic facts (as illustrated by the
standards that I have offered) is not asking too much, nor should it
be difficult. To let them off the hook from this basic requirement of
any theoretical effort is also not good science. Are we scientists or
are we apologists? Let's keep the science scientific and insist that
theories account for a body of data as part of their right to be
taken seriously.

Phil Bralich

Philip A. Bralich, Ph.D.
President and CEO
Ergo Linguistic Technologies
2800 Woodlawn Drive, Suite 175
Honolulu, HI 96822

Tel: (808)539-3920
Fax: (808)539-3924

Message 2: NLP and Syntax

Date: Sat, 28 Mar 1998 17:38:59 -0500
From: Dan Maxwell <>
Subject: NLP and Syntax

Yesterday evening the Jim Lehrer news hour included an interview with
sociolinguist Deborah Tannen concerning a new book of hers. One of
the things she said in the interview was that society is too
confrontational and that we need to do more to seek common ground. So
in that spirit let me try to comment on Pius ten Hacken's assertion
that a partial explanatory account is better than a full descriptive
account and that anyone disagreeing with this cannot account for much
of scientific practice.

This might turn out to be true if we can develop a distinction between
what is explanatory and what is descriptive. I think the most hopeful
place to look for such a distinction is in the work of
psycholinguists, who in principle might be able to provide support
for a claim that one account is psychologically real (and therefore
more explanatory in some sense) and another is not.

There are cases in the literature when this has happened. There was
an early textbook problem from I think Samoan which apparently
involved a phonological rule of consonant deletion, based on the
linguistic criteria used at the time. But then psychological
evidence, as reported by Kiparsky in about 1970, seemed to support
something like consonant insertion. Nowadays, I think we would say
that there is no phonological rule at all, just lexically conditioned
distinct allomorphs of a specific morpheme.

Another example is the report from the 1970's that there did not
appear to be any psychological support for transformations. Well,
transformations either disappeared completely or at least became much
less numerous than they were at the time. But I suspect that the
Peters and Ritchie mathematical findings had more to do with this than
the
psycholinguistic ones.

I haven't been following the recent work in psycholinguistic support
for various linguistic theories, so I don't know what they say about
current frameworks, but I would guess that for every framework with an
influential founder there are at least a few who claim to have found
them, or some aspect of them, to be psychologically real. If that is
the situation, then in spite of the lack of consensus (not to say
partisan bickering) that this seems to imply, it would still in
principle be possible for an outsider with no particular ax to grind
to try to evaluate competing claims.

In the absence of such support, the distinction between explanatory
and descriptive seems pretty empty to me. The specific questions
which, according to ten Hacken, have no good answer unless he is
correct seem to me to have fairly prosaic answers which have nothing
to do with this distinction:

(1) "Why do so many articles and conference presentations START with a
presentation of the data rather than ending there?"

They are starting with a set of sentences (data); they aim to finish
with a set of rules or other formal devices that account for these
data. Ideally, they would like their devices to be identical to or at
least similar to devices which other linguists have used for other
languages.

(2) "Why do linguists never use a parser-generator to get the most
efficient CFG (Context Free Grammar) for their data set (or suggest a
more efficient one)?"

Are there parsers/generators that do this? The parsers that I have
heard of create trees or some other formal representation from strings
of words. I thought generators create such formal representations
from rules or something like them. If linguists had some sort of
program that could do something like these tasks, I think some of them
would be interested in seeing what it created and seeing whether they
agreed with its results.
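To make concrete what a parser in this sense does (building a tree
from a string of words according to a CFG), here is a minimal sketch
using the classic CYK chart-parsing algorithm. The grammar, lexicon,
and sentence are invented for illustration, not taken from any
particular linguistic theory:

```python
# Toy CFG in Chomsky normal form: a pair of category labels rewrites
# to the label on the right-hand side.
GRAMMAR = {
    ("NP", "VP"): "S",
    ("Det", "N"): "NP",
    ("V", "NP"): "VP",
}
# Toy lexicon: each word has exactly one category here, for simplicity.
LEXICON = {
    "the": "Det",
    "dog": "N",
    "cat": "N",
    "saw": "V",
}

def cyk_parse(words):
    """Return a bracketed tree (nested tuples) for `words`,
    or None if the grammar licenses no parse."""
    n = len(words)
    # chart[i][j] maps a category label to a tree covering words[i:j+1]
    chart = [[{} for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        chart[i][i][LEXICON[w]] = (LEXICON[w], w)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):  # every possible split point
                for a, ta in chart[i][k].items():
                    for b, tb in chart[k + 1][j].items():
                        label = GRAMMAR.get((a, b))
                        if label:
                            chart[i][j][label] = (label, ta, tb)
    return chart[0][n - 1].get("S")

tree = cyk_parse("the dog saw the cat".split())
```

For the toy sentence this returns a nested tuple rooted in "S", with
an NP and a VP daughter; for a word string the grammar does not
license, it returns None.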

(3) "Why do so many articles and presentations (a) take data from
different languages into account or (b) apply a theory developed for
one language to data from another language?"

(a) They want to develop a linguistic description which is as widely
applicable as possible.

(b) They want to find out if the theory/description they developed for
one language is also applicable to another, and if it is not, how
close it comes to being applicable, i.e., what modification they have
to make for this other language.

"The answers to these and many other questions lies in the above
description of the relationship between explanatory accounts,
descriptive accounts and the goal of linguistic theory."

Given the above answers to the questions raised, I don't see that this
follows. But then I shouldn't presume to speak for all linguists.
Perhaps others would give different answers to ten Hacken's questions.
It's always dangerous to try to interpret other people's behavior the
way ten Hacken and I are doing. Maybe other linguists can tell us
whether the choices they make in the above situations have anything to
do with the distinction between explanatory accounts and descriptive
accounts.

I don't really disagree with anything in Peter Menzel's latest
contribution. To the extent that I am informed on these subjects, I
mostly agree. But I think a couple of his assertions deserve more
comment.

"Psychological reality, in the long run, comes down to neurological
reality."

I certainly agree with this. But there is an interesting parallel to
computer programs: computer programs at the lowest level are a series
of instructions interpreted by the machine. At a higher level they
are nowadays a series of instructions interpretable by humans, at
least humans who know the particular programming language, such as
"C(++)". For the program to work, these instructions have to be
translatable into machine instructions. That is, higher language
reality comes down to a lower language reality. Most higher language
programmers know little or nothing about machine instructions. The
translation from the higher language to the lower is done by another
program - an interpreter or compiler.
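This layering can be seen directly in a modern language runtime: the
CPython interpreter, for instance, ships a standard-library
disassembler (`dis`) that displays the lower-level bytecode
instructions a human-readable function is translated into. A minimal
sketch (the function `add_one` is invented; the exact instruction
names vary between Python versions):

```python
import dis
import io

def add_one(x):
    # The human-readable "higher language" statement.
    return x + 1

# dis.dis writes out the "lower language" instruction listing that the
# interpreter actually executes; capture it in a string buffer.
buf = io.StringIO()
dis.dis(add_one, file=buf)
listing = buf.getvalue()
print(listing)
```

The listing shows instructions for loading the argument, performing
the addition, and returning the value, even though the programmer
never wrote any of them: higher-language reality has come down to a
lower-language reality.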

"A theory that can be formalized to run on a computer is not likely
to correspond to how speakers deal with (learn, use, store, process,
etc) language."

It is true and obvious that human brains are different from computers
in lots of ways. Brains have neurons; computers have silicon.
Language functions in brains are scattered around, presumably
interacting with all sorts of other things, whereas language programs
nowadays are concerned only with language. Computers seem to be a lot
faster at processing than brains. Maybe brains, so far, are better at
using information in one realm to make decisions about another.

I don't think any of this implies that we can't gain some insight
about how language works in the brain by formalizing models that
imitate (i.e., produce the same results as) the brain in a computer.
I think much work in artificial intelligence is also based on this
assumption. I think the parallel between human brains and computer
programs given above is an example of the insights that can be gained
this way. And I think Peter Menzel agrees with this, since he
observes that much work has been done describing certain aspects of
brain functions using mathematical models.

Dan Maxwell 