LINGUIST List 3.357

Fri 24 Apr 1992

Disc: Rules

Editor for this issue: <>


  1. , Re: 3.336 Rules
  2. Avery Andrews, Itkonen on Andrews on rules
  3. ,

Message 1: Re: 3.336 Rules

Date: Tue, 21 Apr 1992 13:13:08
From: <>
Subject: Re: 3.336 Rules

In 3.336 Avery Andrews points out (in a discussion of my previous posting):

>A grammatical theory, as a theory about the mere shape of the data in
>somebody's file-card collection, is an utterly uninteresting object.

This may be true, but I do not see its relevance. I cannot believe that I
have come across as a die-hard, corpus-confined inductivist - for one
thing, I do not think that institutional facts, which I have been going on
about, carried much more weight with that stereotyped Prügelknabe of modern
linguistics than they do with my present opponent(s).

To make my rather un-sensational position explicit: I take a grammar to be
a theory, not of a finite set of file cards, but of an infinite set of
possible utterances, predicting institutional properties of them - not only
their acceptable shapes, but also aspects of their possible
interpretations, as accessible through interviews with fallible informants.
A grammatical theory predicts constraints on possible differences between
grammars, and defines analytical tools and descriptive devices with which
to write such grammars.

Is this still an utterly uninteresting object?

If it is, then so is generative grammar of the variety Andrews advocates.
For how could a grammatical theory that in *practice* confines itself to
the traditional kind of data - informant judgments of well-formedness
properties and meaning properties - possibly be anything more than this? As
someone (unfortunately I forget who) once pointed out: there is no help in
preaching individual psychology if you go on practising autonomous
linguistics.

Andrews disagrees with my claim that theories about the mental correlates
of grammar *presuppose* rather than *are* grammatical theories, and that
grammars define what is to be explained rather than give psychological
explanations themselves. The reasons he gives are:

>I don't find this a viable viewpoint for grammatical theory, because
>the actual shape of grammatical systems is determined by all sorts of
>different factors (brute history for example being a dominant and
>*extremely* troublesome one in phonological systems), and there is no
>point in launching into the elaboration of grammatical theories without
>doing one's best to sort out the roles of the known causal factors.

This is at least terminologically surprising. The implicit claim here seems
to be that *grammatical* theory should *not* be a theory about
*grammatical* systems, since their shape "is determined by all sorts of
different factors". Does this mean that a grammatical theory should be a
theory of only those aspects of grammatical systems that have a certain
type of causes? Isn't that at least impractical, given that we know next to
nothing about such causes? And if we thus confine grammatical theory,
shouldn't we then reserve the term 'grammatical' either for the theories,
or (as I have suggested) for the systems?

Terminology aside, we seem to agree that accounting for "grammatical
systems" is one thing, and finding mental causes is another.

Where we disagree is in the assumption (not shared by me) that such causes
can be found through inspection of grammatical systems alone.

Helge Dyvik

Message 2: Itkonen on Andrews on rules

Date: Tue, 21 Apr 92 22:20:34 PDT
From: Avery Andrews <andrews@Csli.Stanford.EDU>
Subject: Itkonen on Andrews on rules

Re Itkonen on Andrews, etc. (Linguist 3.343)

>An empirical inductive generalization
>('All crows are black') is refuted by a single counter-example
>(= a non-black crow), but a rule-sentence (= 'The English
>definite article precedes the noun') is not refuted by a prima
>facie counter-example (= an utterance of 'man the'),

On this sort of standard, I doubt that physics or chemistry would make
it as empirical, since isolated events are irrelevant for them as
well. The problem with isolated events is that you can never
really be sure what caused them, so their relevance to a theory
that is deep enough to be interesting is unclear. What is needed
is replicability, and in linguistics we attain this by studying
phenomena that are widely distributed amongst the population
(e.g. norms). Building on Hoski Thrainsson's point about `dog the'
being OK as long as it's not supposed to be part of an NP, the
production of such a sequence could be due to all sorts of things
other than the usual workings of the mental structures responsible
for ordinary language use.

>Rules, as I use the term, are pretheoretical, low-level enti-
>ties, described, if at all, by (rule-)sentences of the same
>type. (The term 'definite article' is dispensable.) Rules are
>the object of linguistic intuition; thus, intuition is not
>about particular cases.

I'm still confused here. The way linguists get their data
is to say sentences to themselves or others, and ask how they
sound or what they mean. They also look at texts, and make
generalizations about what they find there, and how people
interpret it. All this looks pretty particular to me.
Note in particular that linguists often lose their intuitions
on areas they work on (I've been thinking about COMP-trace
effects recently, and things like `the only thing I know
where is is my shoes' sound almost good to me now). This
behavior is quite different from that with, say, moral or mathematical
intuitions, where thinking about issues often sharpens intuitions
rather than inevitably dulling them.

>This type of knowledge (which I prefer
>to call 'a priori') has of course been learned on the basis of
>observation (of particular cases), but -- to repeat -- not on
>the basis of empirical induction.

If one is talking about the local linguistic norms, what is acquired
isn't knowledge, but a sort of belief: it is quite possible for
people to have small errors in their `knowledge'. For example,
at least in American English, there are a sizeable number of
people (including me) for whom `a couple (of)' doesn't mean
two, like it ought to, but `a few'. Presumably some learners must
have failed to pick up the right meaning, and thereby got a
sub-norm going. Furthermore, although the acquisition process is
presumably not explicit empirical induction, it isn't necessarily
all that different from how we acquire our `tacit knowledge'
of how other aspects of the world work. Does the child have
`a priori' knowledge that if she pinches her brother, he
will cry? What is the essential difference between this, and learning
that if you say `I want a cookie', people will think you mean
that you want a cookie?

I'd rather let someone who knows more mathematics than I do
speak to the supposed resemblance of mathematics and linguistics,
but in my experience they are almost diametrically opposed in this respect.

Message 3:

Date: Thu, 23 Apr 92 10:54:56 EDT
From: <>

Martti Arnold Nyman (Vol-3-343) says:

> As to wh-island effects, I'm afraid sentences of the type
> ?What did Mary wonder to whom Max gave
> rather belong to a virtual world generated by the grammarian.
> I readily admit they aren't generalizations about Lrule(s)
> exemplified by the above ?:ed sentence.

In so far as I can make any sense out of this reply, I wouldn't even
disagree with it; still, since Nyman doesn't seem to begin to understand
what the problem is here, let me expand on my question once more: we find
that speakers *observe* (not violate) the wh-island constraint; we find
that they form yes-no questions by having recourse to hierarchical
structure rather than linear precedence, i.e. they unfailingly produce (1a)
rather than (1b):

(1) a Is [the man who is tall] __ in the room?
 b Is [the man who __ tall] is in the room?

If speakers proceeded on the basis of inductive generalisation or analogy
or some such principle, one would expect a more or less random distribution
over (1a) and (1b) in the acquisition stage, quod non. Hence speakers
possess a certain knowledge or follow certain rules, and one would like to
know where they get this knowledge from. There is a real issue here, not to
be dismissed lightly by reference to "virtual worlds generated by
grammarians". The question now is: how can one tell if rules speakers
follow, such as the wh-island rule or the yes-no question formation rule,
are Grules or Lrules? And what merit is there to making such a distinction
at all?

Guido Vanden Wyngaerd