LINGUIST List 9.383

Sat Mar 14 1998

Disc: NLP and Syntax

Editor for this issue: Martin Jacobsen <>


  1. Philip A. Bralich, Ph.D., Re: 9.368, Disc: NLP and Syntax
  2. Mark Douglas Arnold, (NLP/Syntax)--->Modularity of Mind

Message 1: Re: 9.368, Disc: NLP and Syntax

Date: Fri, 13 Mar 1998 09:36:11 -1000
From: Philip A. Bralich, Ph.D. <>
Subject: Re: 9.368, Disc: NLP and Syntax

>On Mon, 09 Mar 1998 Piu ten Hacken wrote: Contrary to what Phil
>Bralich suggested at various places in this discussion, silence on
>certain points he made earlier does not automatically mean agreement
>with him or surrender. A bare repetition of previous arguments (such
>as "please look closely at those standards (repeated below)"),
>however, contributes nothing to the deepening of the discussion and
>will annoy the readers. It only shows that people mutually reject the
>relevance of each other's arguments.

The repetition was largely intended for those readers who were new to
the discussion--a summary review to help keep readers involved. You
cannot count on readers both to be ignorant of the earlier posts and
to notice the repetition.

[Bralich said:]
>>This is quite true, but how can you actually study acquisition or
>>processing if you have not yet properly isolated or described what
>>it is that is being acquired or processed? You cannot talk about
>>the acquisition of language or the processing of language if you
>>are still not able to properly describe parts of speech, parts of
>>the sentence, statements, questions, subjects, and so on, and the
>>relationships between them. You simply have not completed the
>>preliminaries. I am not saying that syntax is all of linguistics
>>any more than I am saying that isolating the periodic table is the
>>whole of chemistry, but without the work being substantially
>>completed on those basics neither field is really ready to begin.
>>Any theory that attempts to study these "domains of observation in
>>the real world" must first demonstrate that it has at least
>>isolated and described these domains in some tangible sense. My
>>main argument is that the entire field is starting off at the
>>wrong end of the stick.

[ten Hacken replied:]
>I thank Phil Bralich for his acknowledgement that the entire field
>is on my side, although I suspect this is a slight exaggeration.
>Without exaggeration, however, I can claim that in philosophy of
>science it is generally acknowledged that theory-neutral
>observation and theory-neutral description are impossible. There
>are just too many different things to observe and too many possible
>ways of describing them. In order to make a sensible selection, a
>theory is necessary. (If you think you do it without a theory, your
>theory is just entirely implicit, which is not a recommendation.)
>As a consequence, the question which aspect of language is to be
>explained cannot wait until after the description, because it
>interacts with the description. This interaction influences, for
>instance, the choice of the most urgent data to cover, the criteria
>for a valid coverage, etc. In a theory of linguistics the most
>urgent data to be described need not be the most frequent ones
>(i.e. the ones most urgent for a parser), but they should be the
>ones that provide arguments for decisions on how to extend or
>modify the theory.

I don't think there is much doubt in the field about what there is to
be studied from the point of view of syntax: we have already isolated
sentences, phrases, clauses, parts of speech, and so on to a large
degree, and, largely due to work over the last 35 years or so, we
know what the predictable structural relationships within and among
them are. No one is really saying this isn't so, and we all know that
the existing theories of syntax have all made claims about the nature
of these basic structures. Naturally there will be an interplay
between hypotheses and descriptions of the data as we develop and
hone our theories; however, at some point we have to step back and
ask ourselves which theories, whatever their motivation, are capable
of accounting for the data. That is, which theories, after three
decades of trying, have done the best job, and how can we demonstrate
that?

Now I am not saying that other theories of syntax have to close up
shop and go home. Nor am I saying they will not at some point arrive
at a proper description of the data that does indeed meet their goals
(whether that be a description of processing, acquisition, or
whatever). But I am saying that as long as they do not have a
demonstrably satisfying account of the basics, they can make no
strong claim to being a mature or effective theory. They can call
themselves psychologically motivated, or learning-theory motivated,
or whatever, but they cannot make many claims to being able to
account for the data. We may have researchers around the world
developing new theories of math, but without a certain degree of
success with the basics, they cannot lay claim to being mature
theories.

>Summarizing: Theoretical linguistics and CL have different goals. The
>descriptive goal of CL is not a subset of the goal of theoretical
>linguistics, although there is a significant overlap. Explanation
>does not follow description in theoretical linguistics, but the two
>develop in interaction. Therefore,

I am afraid I am going to have to repeat some of my earlier arguments
here. The only point I am making about programming is that as long as
a theory (whatever its motivation) is made up of theoretical
mechanisms that can be implemented in a programming language, then CL
will serve as an effective test of the actual effectiveness
(maturity) of that theory. If a theory of processing or acquisition
demonstrates that its theoretical mechanisms cannot be implemented in
a programming language, then we can excuse that theory from this
test, though we might still want to be somewhat suspicious, in the
same way we would be suspicious of a theory of math that could not
produce calculators.

>1. a linguistic theory which does not give a description of all the
>phenomena Bralich's parser covers need not be a bad theory;

Correct, but it is also not a mature theory. 

>2. if Bralich's underlying theory is only descriptive it is not as
>good _qua linguistic theory_ as the existing ones. (N.B. I do not say
>anything about the evaluation of Bralich's parser as a parser)

This is actually a very interesting point. I won't argue it, but I do
not know how we would go about rejecting theories that accounted for
the data while accepting those that did not, on the grounds that the
theories that did not work were LESS able to describe the data. It is
a very odd criterion for science.

> >------------------------ Message 2------------------------------- 
> >On Mon, 9 Mar 1998 Samuel L. Bayer wrote: 

>>Relative to the Derek Bickerton/Philip Bralich parser adequacy
>>criteria: the field of computational linguistics has spent quite a
>>number of years developing evaluation criteria for parsers, which I
>>recommend looking at before you start reinventing the wheel. See the
>>journal Computational Linguistics for the last five or six years or
>>so, or for a summary, you can read the chapter that my coworkers and
>>I wrote on comparing the theoretical and corpus-based computational
>>enterprises, in a book edited by John Lawler called Computers and
>>Linguistics, due out in April. 

> > And Philip Bralich responded: 
>Yet his organization as well as all others have yet to be able to
>create a BracketDoctor--a device that generates trees and labeled
>brackets in the accepted style for this industry, the style of the
>Penn Treebank II guidelines, or a MemoMaster, a device that increases
>by many thousands the number of commands that are possible for a
>speech rec navigation and control system. In addition, the standards
>that I propose are largely based on functionality that most people
>have assumed that parsers and theories of syntax had handled years
>ago. The standards I have proposed are meant to demonstrate that
>there is a serious problem in the field; that is, very basic levels
>of functionality (see below) that many believed had already been
>achieved, simply have not been reached.

[Bayer replied:]
>First, I'm rather disappointed in Dr. Bralich's rather blunt
>advertising in a supposedly academic debate, as evidenced in his
>rather obvious plug for his software in the first sentence of this
>paragraph.

I am sorry these demonstrations of our ability are commercial in
nature, but this is hard evidence that we have achieved the things we
claim. All I am saying to the community is "produce something
tangible for others to judge rather than rhetoric." Put a program in
the reviewers' hands.

>>Second, "my organization", the MITRE Corporation, has been an active
>>participant in helping define standards for a sequence of
>>DARPA-sponsored evaluations, MUC (the Message Understanding
>>Conferences, to which Dr. Bralich alludes), which has now been
>>ongoing for almost ten years and has been regarded by virtually
>>everyone in the
>>field as a singular success. In particular, the government funders
>>and organizers of this evaluation, who have real-world needs for
>>digesting and understanding large amounts of text (such as Wall
>>Street Journal and the AP newswire), have judged the program to be
>>extremely successful. The MITRE language research group has
>>participated in these evaluations, as have groups from BBN, SRI, and
>>a number of other major university and industry research labs
>>(consult any of the 7 MUC proceedings for details). How Dr. Bralich
>>can dismiss this community-wide effort so casually is absolutely
>>beyond me.

[Bralich responded:]
>Yes, but information retrieval, which is all that MUC does, has
>little interest in dealing with the sorts of parsing that are being
>discussed. Information Retrieval and Information Extraction could
>both benefit tremendously from the ability to provide the full and
>thorough analyses that our parser currently provides in a less
>dramatic domain than hundreds of documents of unrestricted text.
>However, not having parsers that are up to the task, MUC and others
>(EAGLES, TREC, etc.) have settled on other tasks that are
>interesting in themselves but escape the need for a thorough
>constituent analysis of strings. This luxury is not available to
>those of us who want to work in the area of question/answer,
>statement/response repartee for Natural Language queries of
>databases and the Internet (e.g. "Who was the fourteenth president
>of the United States?" or "What was the close of IBM on Friday?") or
>navigation and control for speech rec and so on: these NLP tools
>simply cannot be approached without meeting the preliminary
>standards I have proposed. And of course the MUC and EAGLES
>standards say nothing about these areas of NLP technology. Before
>asking us to look at other standards, you should at least note that
>the standards proposed by MUC and EAGLES are limited to a very small
>subset of NL and do not at all look at the ability to provide a
>thorough analysis of strings, as any theory of syntax must.

Let me illustrate with some quotes from the MUC-6 web page, which
outlines the tasks to be accomplished, and which you can visit
yourself. (I have no idea why no one in these discussions is
providing the relevant URLs.)

You will note in the following that there is no concern whatsoever
for the ability to do a constituent analysis of a tree, as that is
precisely what is being avoided. Certainly any amount of constituent
analysis would be of value and would not be excluded, but there seems
to be an awareness that it is not available, so different criteria
are chosen. Note also that the information being extracted is not
described in terms of phrases or clauses.


>MUC-6, the sixth in a series of Message Understanding Conferences,
>was held in November 1995. This conference, like the previous five
>MUCs, was organized by Beth Sundheim of the Naval Research and
>Development group (NRaD) of NCCOSC (previously NOSC). These
>conferences, which have involved the evaluation of information
>extraction systems applied to a common task, have been funded by ARPA
>to measure and foster progress in information extraction. A meeting
>in December 1993, following MUC-5, and chaired by Ralph Grishman,
>defined a broader set of objectives for the forthcoming MUCs: to push
>information extraction systems towards greater portability to new
>domains, and to encourage more basic work on natural language
>analysis by providing evaluations of some basic language analysis
>technologies. NYU and NRaD worked together to develop specifications
>for a set of four evaluation tasks: named entity recognition,
>coreference, template elements, and scenario templates (traditional
>information extraction).
>
>Named Entity Recognition: The Named Entity task for MUC-6 involved
>the recognition of entity names (for people and organizations),
>place names, temporal expressions, and certain types of numerical
>expressions. This task is intended to be of direct practical value
>(in annotating text so that it can be searched for names, places,
>dates, etc.) and an essential component of many language processing
>tasks, such as information extraction.
>
>Coreference: The Coreference task for MUC-6 involved the
>identification of coreference relations among noun phrases.
>
>Information Extraction [Template Elements and Scenario Templates]:
>The template-filling task for MUC-6 involved the extraction of
>information about a specified class of events and the filling of a
>template for each instance of such an event. In contrast to MUC-5,
>the effort has been to design relatively simple templates and to
>predefine the "template elements" (for people, organizations, and
>artifacts) which would apply to a wide variety of different event
>[types].

>>In fact, the discovery that Dr. Bralich seems to have so recently
>>made - namely, that many linguistic theories are emperors with no
>>clothes - was a discovery which almost everyone in the MUC program
>>made years ago. Dr. Bralich is absolutely right in observing that
>>"very basic levels of functionality ... that many believed had
>>already been achieved, simply have not been reached", and I believe
>>personally that the field of theoretical linguistics is becoming
>>increasingly irrelevant because of this, since we now have the
>>computational resources to test the accuracy, speed and coverage of
>>these systems. And I sympathize tremendously with Dr. Bralich's
>>crusade. However, I can't begin to express how damaging it is to
>>dismiss a decade of well-thought-out attention to this problem in a
>>closely related field.

However, having demonstrated our claims with a 75,000-word dictionary
and a parser that performs a wide variety of new functions, we can no
longer dismiss theoretical syntax. Certainly, IE and IR will benefit
greatly when we extend our current tools to those areas, because we
provide so much more information about the environments of the "named
entities" and so on.

I am not dismissing the entirety of IR and IE; I am merely saying
that this is a different area. But, truth be told, when the standards
that I have proposed are extended to the areas of IR and IE, there
will unquestionably be improvements in these devices.

>>The value of the approach followed by computational linguistics is
>>that it is task-based, where the task is not parsing, but rather
>>doing something useful with the text: either doing topic
>>identification, person/location/organization identification,
>>identification of events for entry into a database, etc. What the
>>field discovered were two very important things: (1) that many
>>problems which syntactic theories took as central were by and large
>>unimportant because of their infrequent occurrence (like quantifier
>>scope disambiguation), and (2) many problems which syntactic
>>theories ignored were crucial and interesting (like part-of-speech
>>identification and segmentation of text into sentences).

Here you are talking about doing something useful with huge numbers
of documents of unrestricted text, whereas I am speaking (primarily)
about doing question/answer, statement/response repartee, grammar
checking, the improvement of machine translation devices, and of
course about a significant, overnight improvement in navigation and
control devices. Nothing in the MUC standards speaks to any of this.
The MUC standards are actually quite narrow compared to the very wide
realm of what is possible with NLP.

This problem is not unknown in the field. Take a look at what is said
in _The State of the Art in Human Language Technology_, Ron Cole,
Editor-in-Chief, a 1996 report commissioned by the National Science
Foundation. In this report, Ted Briscoe (Section 3.7, p. 1) states,
"Despite over three decades of research effort, no practical domain
independent parser of unrestricted text has been developed." In
addition, in that same report, Hans Uszkoreit and Anne Zaenen state
(Section 3.1, p. 1), "Currently, no methods exist for efficient
distributed grammar engineering [parsing]. This constitutes a serious
bottleneck in the development of language technology products."

Thus, while there is some IE and IR happening without parsers, there
are hundreds of other possible technologies that cannot be developed
with the standards used by MUC. To create these other technologies it
is necessary to meet standards just like those I have proposed, or
the bottleneck will not be broken. In addition, all IR and IE technologies
will be significantly improved once these NLP tools are brought to
bear on that domain.

>By the standards of evaluation which have evolved in computational
>linguistics, Dr. Bralich's standards are unacceptably vague. When he
>says that a system should identify parts of speech, which parts of
>speech does he mean? The Brown Corpus has about 40 of them; linguists
>tend to assume about 7, with some feature breakdown, and no two
>frameworks use the same set.

I am not being vague: use whatever system you want, 40, 7, 100,
whatever, but then take the one week of programming time that is
required to create output in simple English using Noun, Verb,
Adjective, and so on. Specifically, take the output of your parser
and meet the Penn Treebank II guidelines. That, I believe, is the
standard of measurement in this field, which is precisely why we
created a device that can do that.
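For readers unfamiliar with the format under discussion, a minimal
sketch may help. The Python snippet below renders a constituent
analysis as Penn Treebank-style labeled brackets; the tuple encoding
and the function name are my own illustrative inventions, not the
output of any parser mentioned in this thread.

```python
# Toy sketch: render a constituent tree as Penn Treebank-style
# labeled brackets. A tree is a (label, *children) tuple; a leaf is
# a plain word string. This is illustrative only.

def brackets(node):
    """Return the labeled-bracket string for a tree node."""
    if isinstance(node, str):          # leaf: a word
        return node
    label, *children = node
    inner = " ".join(brackets(c) for c in children)
    return f"({label} {inner})"

# "Mary arrested John." as a minimal S -> NP VP analysis
tree = ("S",
        ("NP", ("NNP", "Mary")),
        ("VP", ("VBD", "arrested"),
               ("NP", ("NNP", "John"))))

print(brackets(tree))
# (S (NP (NNP Mary)) (VP (VBD arrested) (NP (NNP John))))
```

Translating a parser's native output into this bracketed style is
essentially a tree-walk of this kind, which is why the translation
step itself is a small amount of work once the analysis exists.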

>What about systems which aren't perfect (since none of them are)? Is
>precision (percentage of answers found which are correct) more
>important than recall (percentage of correct answers which are
>found)? I could observe the same problems with his definition of
>terms in many other places in his standards. The point of the MUC
>evaluation is that the criteria which have been laid down are precise
>enough to measure progress on the given task, and detailed and
>explicit enough to forestall any argument about whether scores are
>comparable. Dr. Bralich's criteria fail on both these counts.

I concur that there is a lack of precision, but this is merely to
make it easier for others to get on board with the standards; once
there is more than one parser that can work with them at all, we will
have to define them further.
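The precision and recall measures glossed in Dr. Bayer's
parentheticals above can be sketched in a few lines; the answer sets
here are invented purely for illustration.

```python
# Sketch of precision/recall as defined in the quoted passage:
# precision = fraction of answers found that are correct,
# recall    = fraction of correct answers that were found.
# The sets below are made-up examples, not real evaluation data.

def precision(found, correct):
    """Fraction of the system's answers that are in the answer key."""
    return len(found & correct) / len(found) if found else 0.0

def recall(found, correct):
    """Fraction of the answer key that the system found."""
    return len(found & correct) / len(correct) if correct else 0.0

found = {"IBM", "Mary", "Honolulu", "Friday"}    # system output
correct = {"IBM", "Mary", "Honolulu", "MITRE"}   # answer key

print(precision(found, correct))  # 3 of 4 found are correct -> 0.75
print(recall(found, correct))     # 3 of 4 correct were found -> 0.75
```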

>Finally, the claim that "these basic levels of functionality should
>be met long before anyone attempts comparisons or evaluations of
>different systems" is simply dogma, and ignores the progress in the
>field which I outlined above. The reason we don't evaluate or compare
>systems based on these criteria is because the criteria are
>spectacularly ill-defined, and in many cases irrelevant.

While there is a lack of precision in some of them, I don't think it
is at all a problem to expect a system to label tense, change active
to passive or passive to active, or answer a simple question. I also
do not find that spectacularly undefined. As a matter of fact, except
in a few areas, we are expecting much more precision and
accountability than anyone else. The ability to label sentence type
and internal clauses is also hardly out of line. I am merely asking
that a parser be able to note if a string is a statement, a question,
or a command, and once it does that, to be able to note tense,
internal clauses, and so on. Again, to be precise, we merely
translate our output to the Penn Treebank output. The translation of
one theory's output into the Penn Treebank style shouldn't take more
than a couple of weeks for an experienced programmer. Why not assign
some students in Link Grammar or any other parsing system out there
the task of turning the output into a Penn Treebank style? Anyone who
knows anything at all about programming would know this is trivially
easy (IF the analysis is done correctly in the first place).
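To make the "statement / question / command" labeling concrete, here
is a deliberately crude sketch. The heuristics (punctuation plus a
handful of question and imperative cues) are entirely my own and far
weaker than what a real parser would use; a genuine system would rest
on a full constituent analysis, as argued above.

```python
# Toy sketch of labeling a string as statement, question, or command.
# The word lists and heuristics are illustrative assumptions only.

WH_WORDS = {"who", "what", "when", "where", "why", "which", "how"}
AUXILIARIES = {"is", "are", "was", "were", "do", "does", "did",
               "can", "could", "will", "would", "should"}
IMPERATIVE_VERBS = {"put", "take", "look", "produce", "run",
                    "open", "close"}

def sentence_type(s):
    stripped = s.strip()
    words = stripped.rstrip(".?!").lower().split()
    if stripped.endswith("?") or (words and words[0] in WH_WORDS | AUXILIARIES):
        return "question"
    if words and words[0] in IMPERATIVE_VERBS:
        return "command"
    return "statement"

print(sentence_type("Who was the fourteenth president of the United States?"))
# question
print(sentence_type("Open the file."))    # command
print(sentence_type("Mary arrested John."))  # statement
```

The point of the sketch is only that the label itself is a simple,
checkable output; producing it reliably for unrestricted text is
where the parser is needed.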

>As I've said, the literature speaks for itself. If I were to refer to
>Chomsky's Minimalist Program, would Dr. Bralich challenge its
>existence simply because I failed to provide a bibliography? I hope
>not. And it's Dr. Bayer, by the way, and, yes, the doctorate is in
>theoretical linguistics.

All I suggest is that the reader go to the MUC page himself and
decide for himself. The tasks that IR and IE set out for themselves
may be of some value in that very limited domain, but they have
absolutely no applicability to the development of other NL tools such
as Q&A, machine translation, and so on. In order to approach these
other areas you absolutely have to have a parser.

>Instead of paging through those many periodicals, ask yourself one
>question. "Why isn't there a single reference to a list of standards
>in this field or from this journal such as proposed below?"
>What if I said, "Instead of paging through those many periodicals
>about GB, why not just make up your own theory? Why isn't there a
>single list of rules in GB that I can refer to, like Cliff Notes?"
>The answer is obvious: because the field is large and complicated,
>and the problems are difficult, and you should read the literature,
>because it's important and relevant and that's what scholarship is
>about. All this is also true of evaluation of language systems. To
>suggest otherwise is insulting to all the theorists who have slaved
>over these difficult problems.

I know GB, I have worked with GB. MUC is no GB. 

>Let's not argue about who's failed; in general, computational
>linguists have made far more progress in evaluation (and in producing
>usable systems) than most people working in the field. I have a
>counterproposal for Dr. Bralich: run the Ergo system against the
>MUC-6 or MUC-7 evaluation set, and tell us what your score is
>relative to the other systems which participated in the evaluation.

These areas are definitely on our agenda, and I can assure you, you
will see my comments and discussion in this area as soon as we do.
There are projects we have to finish first and some preliminary work
required before we do. Hopefully early in 1999 I will begin to post
on IR and IE, and at that time we will undoubtedly ask that similar
standards be extended to that domain as well.

>I won't bother responding to the rest of Dr. Bralich's message; it
>continues to imply that I am alluding to evaluation criteria and
>software systems which don't exist, based on Dr. Bralich's apparent
>unwillingness to read a body of commonly-available literature. It is
>not my duty to educate Dr. Bralich about what's been going on in
>computational linguistics; the literature is there and anyone can
>look at it, and I, for one, won't demean the complexity of the field
>by suggesting that it ought to be summarizable in a one-page note.

Well, I posted the areas of interest from MUC-6 above. These things
have nothing to do with NLP (though NLP could help them). In
addition, the MUC standards apply nowhere outside of IE and IR. The
standards I propose will be of value in the hundreds of other areas
of NLP that are not being approached because of the aforementioned
bottleneck.

Phil Bralich

>-------------------------------- Message 3 -------------------------------

>On Wed, 11 Mar 1998, Peter Menzel wrote: The discussion on this topic
>is a familiar one in linguistics and in science in general. As my
>contribution to it, I should like to clarify some of the underlying
>assumptions made by Bralich and other computational linguists on the
>one hand, and by theoretical linguists like ten Hacken and Arnold on
>the other.

The distinction between theoretical linguists and computational
linguists is a false one--at least as far as Derek and I are
concerned. Our programs are based on a theory of syntax. However, our
theory, like all others, is made up of theoretical mechanisms that can
be translated into a programming language. Our success in this area
does not turn us into computational linguists. We are theoretical
linguists who have used the computer to provide physical evidence of
the efficacy of our theory.

However, I do not see how anyone could come even close to meeting the
standards I propose without first having a fully worked out theory of
syntax. Even if programmers were the ones to develop a program that
met those standards we would have to admit that somewhere in those
lines of code was a true theory of syntax. Even if you just created a
huge series of jury-rigs, either they would not work or they would
merge into a theory. The phenomena to be described are complex,
subtle, and intricate. Only a completely worked out theory of syntax
will result in such programs.

>how can we make certain that the sample of a language we happen to be
>describing is representative of human language, and that in two
>senses: How can we generalize from our sample to the rest of the
>language in question, and how can we generalize to other languages?
>Chomsky's reformulation of our goal as "accounting for the native
>speaker's (linguistic) intuitions" successfully enlarged the
>discipline by including the area I mentioned. In doing this, it also
>moved linguistics squarely into psychology.

Speakers have no problems creating correct passives, actives,
questions and statements. That much we have a responsibility to
account for before going off into less clear areas. All the standards
I proposed are based on functions that are clear and predictable.

>I believe that here, already, we've come to a division of interests
>between computer linguists and what one might want to call "speaker
>linguists"; for the former are not, as a rule, interested in all
>aspects of "native speaker intuition", though they are, of course,
>interested in generalizing their sample to the rest of the language
>in question; and, to a lesser extent, in generalizing to human
>language. With respect to the latter, they mostly seem to assume that
>the language they're describing is representative of human language.
>But, as Chomsky pointed out, and as we all learned in grad school,
>"accounting for native speaker intuition" also has serious
>implications for the acceptability of a proposed analysis.

Native speaker intuitions do not fail us on the standards I have
proposed. They do not deny that "John was arrested by Mary" is the
passive equivalent of "Mary arrested John." To use the vagueness of
speakers' intuitions to cloud that very ordinary reality is to admit
that speakers' intuitions are not a scientific device but a tool to
cloud very ordinary issues. The standards I propose are not
contradicted by speakers' intuitions.

I hope you do not mean to say you can find speakers whose intuitions
will tell you that "John" is not a correct response to the question
"Who gave Mary a book?" when the data set is "John gave Mary a book."
The standards I propose ask nothing more exotic than that. To claim
that there is some special wisdom to be brought to bear on this from
native speaker intuitions is obviously dead wrong.
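The two examples in this passage can be mimicked with trivial pattern
matching, which underlines how predictable the facts are. The sketch
below is a toy: it handles only the simplest "Subject Verb Object."
pattern, assumes a regular past participle ("arrested"), and ignores
agreement, irregular verbs, and real constituent structure entirely.

```python
# Toy sketch of the active->passive mapping and the who-question
# answering discussed above. Patterns and function names are my own
# illustrative assumptions, not any parser's actual machinery.

import re

def to_passive(sentence):
    """Map 'SUBJ VERBed OBJ.' to 'OBJ was VERBed by SUBJ.' (toy)."""
    m = re.fullmatch(r"(\w+) (\w+ed) (\w+)\.", sentence)
    if not m:
        return None
    subj, verb, obj = m.groups()
    return f"{obj} was {verb} by {subj}."

def answer_who(question, fact):
    """Answer 'Who PRED?' from a fact 'SUBJ PRED.' by matching the
    predicate strings exactly (toy)."""
    q = re.fullmatch(r"Who (.+)\?", question)
    f = re.fullmatch(r"(\w+) (.+)\.", fact)
    if q and f and q.group(1) == f.group(2):
        return f.group(1)
    return None

print(to_passive("Mary arrested John."))
# John was arrested by Mary.
print(answer_who("Who gave Mary a book?", "John gave Mary a book."))
# John
```

Scaling this beyond toy patterns to unrestricted text is, of course,
exactly what requires a worked-out theory of syntax rather than
string matching.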

>Now, if there is one thing that recent work in neurophysiology,
>psychology, psychobiology, and related areas has shown, it is that
>the human brain (or that of any other living being we have examined,
>for that matter) does NOT work like a general purpose serial
>computer. As far as I can tell,

Then the burden of proof is on you. Look at the standards I have
proposed and demonstrate that native speakers do not accept them as
valid descriptions of the predictable aspects of structure.

> The most complex case is the one where a researcher believes that
>the human linguistic ability is different from all our other
>abilities and can, indeed, be accounted for by using algorithmic
>models, while our other abilities cannot.

So do you mean to say that math is somehow an aberrant product of the
mind when it works algorithmically, and that science should not use
it except in those cases where it is non-algorithmic? I suppose if we
went to the average human and asked him his native intuitions about
math, we would have a better mathematical theory.

It would be nice if we could claim that these things cannot be
demonstrated in a scientific way. That way, there would be no need to
rely on anything other than brute force to maintain a position.
Unfortunately, our programs more than adequately demonstrate that,
for the area of theoretical syntax at least, you are going to have to
come down to earth and argue from evidence rather than from the "you
can't say that" argument based on the impossibility of doing science
in this area. Algorithms do indeed work to account for products of
the human brain. Both math and syntax prove this.

Phil Bralich

Philip A. Bralich, Ph.D.
President and CEO
Ergo Linguistic Technologies
2800 Woodlawn Drive, Suite 175
Honolulu, HI 96822

Tel: (808)539-3920
Fax: (808)539-3924

Message 2: (NLP/Syntax)--->Modularity of Mind

Date: Fri, 13 Mar 1998 15:40:20 -0500 (EST)
From: Mark Douglas Arnold <>
Subject: (NLP/Syntax)--->Modularity of Mind

While the following comment is outside the (narrow) scope of the
NLP/Syntax thread, I would like to address a comment made by Peter
Menzel (9.368: NLP and Syntax) concerning modularity of mind
vis-a-vis functional localization in the brain. One of Menzel's points is
that the mathematical/formalist approach to linguistic theory must be
significantly altered (abandoned?) because recent brain studies show
that a fundamental formalist assumption---modularity of mind---can't
possibly be right.

Menzel recites the increasingly common argument that modularity of
mind can't be right because recent brain studies show that various
cognitive functions are not localized to specific areas of the
brain. This line of argument is based on a misinterpretation of the
modularity hypothesis, and while it is certainly possible that the
modularity hypothesis will turn out to be wrong, it is important to be
clear about why current brain studies don't argue against modularity.

To put it bluntly, there is nothing inherent in the modularity
hypothesis which requires clear-cut localization of function in the
brain. While it might be the case that certain specific proposals have
tried to adopt a tightly defined mapping between cognitive function
and neural anatomy, there is a difference between the implementation
of an assumption and the assumption itself.

In short, the core assumption behind modularity of mind is that
certain abstract representations (as well as the psychological
mechanisms for manipulating those representations) are essentially
function specific. There is nothing in the hypothesis which requires
that function-specific abstract representations be physically
localized to particular places within the neural anatomy. That is to
say, "module of mind" does not equal "lobe of brain".

Citing the lack of tightly defined functional localization to refute
the modularity hypothesis is the result of a misunderstanding of the
hypothesis itself.
Mark Arnold
