LINGUIST List 2.686

Tue 22 Oct 1991

Disc: Is language finite?

Editor for this issue: <>


  1. AVERY D ANDREWS, Re: 2.678 Infinite Languages
  2. Henry Kucera, Re: 2.678 Infinite Languages

Message 1: Re: 2.678 Infinite Languages

Date: Sat, 19 Oct 1991 20:48:46 GMT
Subject: Re: 2.678 Infinite Languages
I agree with Jack Hoeksma that the issue of infinite language is
mostly recreational, but I think it can also reveal underlying
attitudes as to what linguistics is about, so ...
I have thought a bit about perpetually accelerating speakers, but
devices that can produce an infinite sequence of tokens in a finite
time would have to be pretty weird. Imagine a box that shows 0
on a screen for half a second, 1 for the next quarter, 0 for the
next eighth, etc. What is it showing when the whole second is up?
All I can think of (on the basis of my pop physics) is that it would have
to be in a quantum-mechanical superposition of both states (so showing 0 or
1 randomly when looked at), but a device that could do that would be
so different in its internal structure from us that I don't see how its
structure or capabilities could have any bearing on the nature
of human linguistic abilities (I believe that there would be other
limitations of a more strictly physical nature on `accelerating speakers',
but will leave that for someone with a physics background to work out,
likewise the possibilities with looping time).
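The accelerating-display puzzle above is essentially Thomson's lamp: the display intervals form a geometric series that sums to exactly one second, while the displayed digit alternates with no limiting value. A minimal numerical sketch (Python used here purely for illustration; the function names are mine):

```python
def elapsed_after(n_switches):
    """Total time consumed by the first n display intervals:
    1/2 + 1/4 + ... + 1/2**n, which approaches 1 second."""
    return sum(1 / 2 ** (k + 1) for k in range(n_switches))

def digit_shown(k):
    """Digit shown during the k-th interval: 0, 1, 0, 1, ...
    The sequence alternates forever, so it has no limit."""
    return k % 2
```

The partial sums converge to 1, but the digit sequence does not converge at all, which is why the question "what shows at t = 1?" has no classical answer.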
On the other hand the `sentence' produced by an unending sequence of
speakers (or one speaker on a truly excellent life-support system ...)
*can't* get finished, so it *can't* get produced (not merely
doesn't happen to be produced). So, being a linguist rather than
a mathematician, I decline to take a professional interest in it.
As for competence vs. performance, it seems to me that Chomsky (1986)
_Knowledge of Language_ has already abandoned this distinction in its
original form. From the perspective of KoL, the Aspects notion of
`competence' is a confused amalgam of `I-Language' (the internal,
language-particular `parameter setting' constituting what the speaker has
learned about their language) and a notion that one might call `idealized
performance': what some of the mental modules involved in language use
would be able to do if freed from various limitations that are seen as
irrelevant to their essential structure and functioning. It seems to me that
Chomsky also tries to deflect attention away from the idealized performance
concept, but I think that it is in fact essential, since idealized
performance, not I-language, is what can be compared to actual performance
for empirical evaluation of theories.
Grossly large sentences belong in idealized performance, I would say, since
they could be produced by mechanisms we actually contain (devices realizing
stacks and state loops, for example), if structurally irrelevant limitations
were removed.
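The claim that grossly large sentences fall out of mechanisms we actually contain can be sketched with toy generators: a finite-state loop suffices for parataxis and edge recursion, while center embedding needs stack-like memory. A hedged illustration (Python; the schemas are the ones discussed below, the function names are my own):

```python
def parataxis(n):
    """A state loop: 'John shouted' plus n conjoined repetitions.
    Only a loop counter is needed, so length is bounded by external
    limitations (time, stamina), not by structure."""
    return "John shouted" + ", and then somebody else shouted" * n

def edge_recursion(n):
    """Right-edge recursion: '(he knows that)^n he lies'.
    Also generable by a simple loop, i.e. a finite-state device."""
    return "he knows that " * n + "he lies"

def center_embedded(n):
    """Center embedding, schematically N^n V^n: every opened clause
    must be closed later in reverse order, so a stack (here faked by
    string repetition) is the natural generating mechanism."""
    return ("N " * n + "V " * n).strip()
```

Removing the "structurally irrelevant limitations" amounts to letting n grow without a fixed bound, while keeping every individual output finite.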
Moving on to Alexis M-R's points (in vol-2-678, Oct 17 1991):
1. Fine, but I see the issue as not whether it is possible to regard the
set of NL sentences as finite, but whether there is any motivation for
doing so. Lacking a clear motivation for any particular finite bound,
why impose one?
2. yes
3. yes
4. Concomitant with following Chomsky in abandoning the original
competence/performance distinction is the possibility of treating
different kinds of `performance effects' differently. E.g., the mode
of failure with center embeddings & cross-serial linkages is
structurally much more interesting than the mode of failure with
boring parataxes such as big instances of the schema
 John shouted, (and then somebody else shouted)*
or with edge recursions like big instances of
 (he knows that)* he lies
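The structural difference between these failure modes lines up with a classical formal-language fact: parataxis and edge recursion are regular, so a finite-state device (e.g. a regular expression) recognizes them at any depth, whereas center embedding is schematically a^n b^n, which no regular device can check. A small sketch (Python's `re` module; the toy schemas are mine):

```python
import re

# "(he knows that)* he lies" is a regular language, so a regular
# expression recognizes it at arbitrary depth.
EDGE = re.compile(r"(he knows that )*he lies$")

def is_edge_recursion(s):
    return EDGE.match(s) is not None

def is_center_embedded(tokens):
    """Accept the schematic center-embedding language N^n V^n, n >= 1,
    e.g. ['N', 'N', 'V', 'V']. A single counter (a degenerate stack)
    suffices here, but no finite-state device does."""
    n = len(tokens)
    if n == 0 or n % 2 or any(t not in ("N", "V") for t in tokens):
        return False
    half = n // 2
    return all(t == "N" for t in tokens[:half]) and \
           all(t == "V" for t in tokens[half:])
```

Cross-serial dependencies (the copy-language pattern) are harder still, falling outside the context-free languages, which is part of why their mode of failure is diagnostically interesting.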
Emmon Bach has an article in some processing-oriented journal to the
effect that *syntactic* processing of X-serial dependencies crashes
when there are more than two of them: when more complex sequences
are accepted and understood, the processing is actually semantics-
rather than syntax- driven (like what Broca's aphasics do (and, I get
the impression, Roger Schank's computer programs)). If substantiated,
a hard limit of 2 on the syntactic processing of these constructions would
surely be an important clue as to how the mechanisms work, and would
intuitively be on the borderline between competence and performance
in the Aspects framework.
Some of the Ross effects, such as the ATB conditions, might also
fall into this borderline category, approaching it historically from
the other direction. Such borderline cases suggest that maybe the
borders need revision (and, of course, Chomsky has always tried to
get people to study the structure of the actual phenomena rather
than quibble over labels and taxonomies).
5. I like it.
6. Infinite sentence lengths are still not attainable by devices like
us, so I'll continue to urge that they be left out of linguistics.
Of course, maybe, someday, somehow, the math of infinite sentences will
prove relevant for linguistics, but, as things presently stand, I do
not see how the ability to accommodate infinite sentence lengths is any
kind of argument in favor of a linguistic theory with essentially
Chomskyan aims.
 Avery Andrews

Message 2: Re: 2.678 Infinite Languages

Date: Mon, 21 Oct 91 16:28:40 EDT
From: Henry Kucera <>
Subject: Re: 2.678 Infinite Languages
 Charles Hockett, in his small book "The State of the Art," has one chapter on
 the finiteness of languages and on the question whether natural languages are
 well-defined objects or not (he concludes they are not). In that chapter he
 draws an analogy between natural languages and games, concluding that
 languages are like football rather than baseball: in baseball the set of
 possible scores is infinite but enumerable, which is not the case in
 football.
 Just curious: Has anybody read this and are there any opinions? Perhaps there
 are and I have missed the postings in which case I apologize. Thanks.