LINGUIST List 9.502

Tue Mar 31 1998

Disc: NLP and Syntax

Editor for this issue: Martin Jacobsen <>


  1. Peter Menzel, Algorithms and Mind/Brain
  2. manaster, Re: 9.483, Disc: NLP and Syntax

Message 1: Algorithms and Mind/Brain

Date: Mon, 30 Mar 1998 20:38:48 +0100
From: Peter Menzel <>
Subject: Algorithms and Mind/Brain

Because this contribution started out as an answer to Dan Maxwell's
remarks to my earlier contribution, I originally intended to send it
to him personally. But then, as the answer became more complex and
touched upon more general points, I thought that it should be turned
into a discussion contribution, particularly since others may have
misunderstood my remarks in a similar manner.

My argument concerning algorithmic descriptions of language (grammars)
turned on two points: First, there is ample evidence from psychology
that we don't "think algorithmically"; that is, that our thought
processes in, e.g., problem solving and the general storage and
retrieval of information seem to function by constructing models of
the aspect(s) of reality we are trying to understand, remember, etc.
The construction of these models is not algorithmic, as this term is
generally used and understood, because it depends heavily upon each
individual's past experiences, goals, grasp of the particular
situation, etc. Therefore, we cannot generally describe processes in
the mind/brain using algorithms.

This is not to say, by the way, that no brain processes can be
described using algorithms; a number of them have, in fact, been so
described! At the same time, thought processes entailed in e.g.,
problem solving (including analyzing and understanding sentences)
cannot be described using algorithms.

Second, the question of how our mind/brain deals with language can be
answered in one of two ways: Either we argue that language is dealt
with in the same way as any other mind/brain process, or we argue
that it is not. In the first case, any grammar (including syntax)
that aims for psychological (neurological) reality should not be
algorithmic, since the mind/brain does not operate in this way
(cf. above). In the
second case, you have to argue that we deal with language in a way
that is altogether different from that in which we deal with other
mental processes. If you want to argue this way, you must assume a
strong position of modularity; e.g., the one proposed by Fodor. There
is, however, no evidence for this strong position: Neither from an
evolutionary viewpoint nor from a neurological one. That is, what
evidence we do have points strongly in the direction that our
mind/brain does not deal with language in a separate module that is
self-contained and inaccessible to other modules, and which analyses
incoming signals (i.e., speech) "from the bottom up" and without
having access to other information. On the contrary, there is much
evidence that we use all sorts of information and much processing
"from the top down", when decoding (trying to understand) any incoming
information, including language. (The question here gets quite
technical, inasmuch as there is disagreement as to just what is meant
by "self-contained and inaccessible", and how far into the so-called
higher processes of the mind/brain this inaccessibility goes, for
there is general agreement that these "higher processes" are not
strongly modular.)

What makes this matter even more complex is the fact that the brain is
what one might call "generally modular" in that, generally speaking,
certain types of incoming information ARE processed in certain areas
of the brain; thus e.g., visual signals in the visual cortex, language
in Broca's and Wernicke's areas, etc. However, strong modularity of a
Fodorian type, if it is based on what I've called "neurological
reality", must assume, because the modules are said to be
self-contained and impenetrable to outside information, that only
given areas of the brain are used in processing given signals or
information. And this is not the case. The discussions concerning
modularity therefore often turn on questions of what one considers
"higher mental processes" (cf. above).

Two final remarks. First, I did not say (nor did I ever think!) that
neural networks cannot serve as models for the mind/brain. On the
contrary, I have held, taught, and written that they can so serve
(with the fairly strong reservation that they are vastly
oversimplified compared with real brains). Second, no-one, certainly
not ten Hacken or I, has claimed that psychologically realistic
analyses don't have to describe the data. Way back in 1963, Chomsky
made a point about the various levels of adequacy of a linguistic
description or analysis. These were "observational" adequacy
(describing the data), "descriptive" adequacy (accounting for
speakers' linguistic intuitions and capturing significant
generalizations of the language), and "explanatory" adequacy
(explaining the linguistic intuition, allowing a choice between
competing descriptions). Only in the last case can one speak of a
linguistic theory. Such a theory has to account for language
acquisition, psychological reality / speaker intuition (sic),
language universals, etc. It does, of course, have to describe the
data. Now, as I read ten Hacken's remarks, and as I intended mine to
be read, this kind of "adequacy" is what we ought to aim for,
regardless of whether it is as formalistic as that proposed by Bralich
or other computer inspired analyses, or less formalistic, as e.g.,
constructivist grammars. In fact, if what I've said about the
mind/brain is correct, then formalistic/algorithmic analyses of
language are not likely to be explanatorily adequate, because these
analyses disregard some important aspects of the mind/brain.

I hope this clarifies matters somewhat.


Message 2: Re: 9.483, Disc: NLP and Syntax

Date: Sun, 29 Mar 1998 20:00:37 -0500 (EST)
From: manaster <>
Subject: Re: 9.483, Disc: NLP and Syntax

I am not sure whether this idea has been mentioned, since I have not
followed the discussion as closely as I should have. Anyway, the
idea, which any number of people have done work on (and which was one
of the basic ideas I had when I coined the term 'mathematics of
language' in contrast to mere computational or mathematical
linguistics) is that computational tools be used to help test ANY
aspect of linguistic theory. In other words, if somebody's idea of
ling theory is a theory of universals or of acquisition or of language
change or what have you, then THAT is what you implement in a suitable
computer language and test on your favorite computer. A parser can be
used to test a very small (though pace some who have written here in
response to Dr. Bralich, I think a nonempty) part of linguistic theory
only. So I really do not see what the problem is: by all means let us
use whatever information parsing work provides us with in improving
linguistic theories, but let us by no means mistake the part for the
whole.

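As a minimal sketch of the kind of testing described above (not from the original post, and with a grammar and lexicon invented purely for illustration): a theory fragment such as "S -> NP VP" can be implemented as a toy recursive-descent recognizer and checked against acceptability judgments.

```python
# A toy context-free grammar encoding one hypothesis about English
# declaratives: S -> NP VP, NP -> Det N, VP -> V (NP).
# All symbols and lexical entries here are illustrative assumptions.

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"]],
    "VP": [["V", "NP"], ["V"]],
}

LEXICON = {
    "the": "Det", "a": "Det",
    "dog": "N", "cat": "N",
    "sees": "V", "sleeps": "V",
}

def parse(symbol, tokens, pos):
    """Return every token position reachable after expanding
    `symbol` starting at `pos` (a simple recursive-descent search)."""
    if symbol not in GRAMMAR:
        # Terminal category: consume one token if its category matches.
        if pos < len(tokens) and LEXICON.get(tokens[pos]) == symbol:
            return [pos + 1]
        return []
    results = []
    for production in GRAMMAR[symbol]:
        positions = [pos]
        for child in production:
            positions = [q for p in positions
                         for q in parse(child, tokens, p)]
        results.extend(positions)
    return results

def accepts(sentence):
    """True if the grammar derives the whole sentence as an S."""
    tokens = sentence.split()
    return len(tokens) in parse("S", tokens, 0)
```

Here the "theory" being tested is only the three rules above: the recognizer accepts "the dog sees a cat" and rejects "sees the dog", so any acceptability judgment the fragment gets wrong falsifies that fragment — which is the sense in which a parser tests a small but nonempty part of a linguistic theory.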
Alexis MR