LINGUIST List 7.706

Thu May 16 1996

Disc: Ungrammatical Sentences

Editor for this issue: Anthony Rodrigues Aristar <aristar@tam2000.tamu.edu>


Directory

  1. Sebastian Shaumyan, Disc: Ungrammatical Sentences

Message 1: Disc: Ungrammatical Sentences

Date: Sun, 12 May 1996 18:53:29 EDT
From: Sebastian Shaumyan <shaumyan@minerva.cis.yale.edu>
Subject: Disc: Ungrammatical Sentences

Here is my answer to the comments on my posting of April 9.

Richard DeArmond writes:

>It is necessary to carefully distinguish between grammatical features
>(Chomsky's 'phi'-features) and meaning. Grammatical features occur in
>two classes: the first class includes meaning that is referential in
>the Fregean sense, as opposed to meaning that has sense. All lexical
>items have indirect, but no direct reference.


Linguistics has nothing to do with reference in the Fregean sense. For
linguistics, "morning star" and "evening star" have two different
referents; for logic, these two expressions have one and the same
referent. One must not confuse the linguistic concept of the referent
with the logical one. The linguistic concept of meaning implies the
concept of the linguistic referent. Any morpheme has a referent
according to the formula M : R. What is meaning? Meaning is a
presentation of the way a sign (in our case, a morpheme) relates to
its referent. In other words, the meaning of a sign is a
characterization of its referent; meaning is the relation of the sign
to its referent.
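The distinction can be sketched in a few lines of code. This is a toy illustration of my own, not part of AUG's formalism, and the meaning glosses are invented: a linguistic sign is a bilateral entity pairing a phonic form with a meaning, so two expressions with one logical referent remain two distinct linguistic units.

```python
# Toy sketch: a linguistic sign as a bilateral entity (form + meaning),
# following the formula M : R, where the meaning characterizes the referent.
from dataclasses import dataclass

@dataclass(frozen=True)
class Sign:
    form: str     # the phonic segment
    meaning: str  # a characterization of the referent

# One logical referent (the planet Venus), two linguistic referents:
# the meanings differ, so the signs are linguistically distinct.
morning_star = Sign("morning star", "star seen in the morning sky")
evening_star = Sign("evening star", "star seen in the evening sky")

print(morning_star == evening_star)  # False: distinct linguistic units
```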


Richard DeArmond:

>Derivational morphemes in many if not all cases have sense. I would
>go so far to define derivational morphemes as those that denote sense
>but no direct reference. The second class includes phi-features that
>have no reference. These include Case and agreement. It could be
>claimed that agreement features have reference through their
>antecedent features. This cannot be the case. If both had direct
>reference, there is always the possibility that each could end up
>having different references--this I am certain never happens.
>Second, we have the problem of grammatical gender which is under
>discussion elsewhere in LList. There is often a connection with
>meaning for human and some animate objects, but in most cases
>grammatical gender has no referential property. It is determined by
>the inherent and unpredictable class of the noun.


All morphemes have referents and, consequently, meaning. The
distinction between direct and indirect reference does not make
sense. Rather, we must distinguish two kinds of reference: lexical and
grammatical. Accordingly, we must distinguish two kinds of meaning:
lexical and grammatical. Yes, case and agreement, as well as gender,
have meaning. The meaning of grammatical morphemes is abstract, but
there is no morpheme without a meaning. In the course of the history
of a language some morphemes may lose their meaning, and then they are
no longer morphemes. As examples of morphemes having no meaning,
DeArmond refers to de- in "de-stroy" or re- in "re-ceive". This
analysis is wrong: from the standpoint of contemporary English, de-
and re- in these words are mere syllables rather than morphemes.


On to Robert Beard's comments.

Robert Beard questions the claim in my posting that every morpheme is
a bilateral entity consisting of sign and meaning. So he poses a
rhetorical question:

Robert Beard:

>What is the meaning of the morphemes -at and -al in terms like
>dram-at-ic-al?

My answer is: These are syllables rather than morphemes. One must not
confuse syllables or consonant clusters, which are unilateral entities
(that is, they are mere combinations of phonemes), with morphemes,
which are bilateral entities (that is, they are combinations of sign
and meaning). Beard makes the same error as DeArmond. They are not
alone. Confusion of syllables and consonant clusters with morphemes is
a widespread phenomenon. Suffice it to say that, as I have explained in
my book, the treatment of phoneme clusters and morphemes as objects of
the same order constitutes the very foundation of generative phonology.



Robert Beard quotes the following passage from my posting:


>The grammatical morphemes belong in grammar, while content morphemes
>belong in the lexicon. If this assumption is correct, then we must
>replace the opposition GRAMMAR/SYNTAX VERSUS SEMANTICS by the
>opposition GRAMMAR/SYNTAX VERSUS LEXICON.


Then he writes:

>'Lexeme-Morpheme Base Morphology', SUNY Press, 1995 is the theory of just
>how that works. However, in that book I demonstrate that there must be a
>grammatical level distinct from the lexicon, phonology, syntax, and
>semantics. I further demonstrate how the grammatical level, i.e. the level
>of morphological categories, is mapped onto the remaining four. Such
>mapping must be carried out by algorithms which are equivalent to the
>functional parameters of language.


I wouldn't trust any algorithms, or any mathematics, based on the
confusion of morphemes with syllables and consonant clusters.

My linguistic theory, Applicative Universal Grammar (AUG), uses a
sophisticated mathematical formalism based on Combinatory Logic. But
it does not appeal to mathematics in support of its fundamental
concepts. The conceptual problems have nothing to do with
mathematics. Conceptual analysis is independent of any
mathematics. Mathematics is a tool of deduction. Deduction from what?
The value of deduction depends on the value of the initial ideas to
which it is applied; deduction must be applied to the right
concepts. In this connection I would like to quote mathematicians as
impartial judges of the value of mathematics in science. Thus, after
giving striking examples of the abuse of mathematics in different
sciences, V. Nalimov concludes:

"The use of mathematics in itself does not eliminate absurdities in
publications. It is possible to `dress scientific brilliancies and
scientific absurdities alike in the impressive uniform of formulae and
theorems' (Schwartz, 1962). Mathematics is not the means for
correcting errors in the human genetic code. Side by side with the
mathematization of knowledge, mathematization of nonsense also goes
on; the language of mathematics, strange as it seems, appears fit for
carrying any of these problems" (Nalimov, 1981).

And now let me quote a passage from my own book:

"Successful application of mathematics in various sciences has caused
an aura of prestige about scientific works that use mathematical
symbolism. There is nothing wrong with that, since the right use of
mathematics is really important in any science. A well-motivated use
of mathematical formalism can enhance the value of a mathematical
paper or a book. In case of an abuse of mathematics, however,
mathematical symbolism acquires a purely social function of creating
an aura of prestige about works whose cognitive value is null or, at
best, very low. In this case, the manipulation of mathematical
symbolism, the play with symbols, belongs to phenomena of the same
order as the ritual dances of an African tribe. The manipulation of
mathematical symbolism becomes a sign of affiliation with an exclusive
social group. The magic of the play with symbols can have an effect
comparable to the effect of drugs. The participants of the play get
high and then fall into a trance that makes them feel they are in
exclusive possession of God's truth" (Shaumyan, 1987: 321).


David Powers writes:

>The distinction between syntax and semantics is anything but clear,
>and Shaumyan's notion that semantics subsumes both grammar and
>lexicon is very appealing. [...] Unfortunately, the distinction
>between grammatical and lexical morphs is no more clear cut than is
>the distinction between syntactic and semantic (un)acceptability.
>Many of the examples used in linguistic texts and papers reflect this
>problem. Both the starred and the unstarred examples seem awkward
>and unacceptable out of context, as presented, but may be quite
>natural in context.... As an Australian who does not speak any
>American dialect, it is sometimes difficult for me to know whether
>the frequent reversal of starring patterns which I would place on the
>examples of an American text derive from dialectic differences or
>carelessness on the part of the authors. I tend to suspect it is a
>bit of both, but sometimes my objection can be definitively sheeted
>home to dialect or idiom.


The comments of David Powers reflect a widespread confusion of the
data with the subject matter of a science. The subject matter of
linguistics is the semantic system of language as a theoretical
construct. Grammar as a theoretical construct is an abstraction
resulting from a conceptual analysis of linguistic data. The subject
matter of linguistics is an object obtained by abstraction from
differences between idiolects. There are as many idiolects of Russian
or English as there are speakers of Russian or English. The English
language is an abstraction from all English idiolects, just as the
zoological concept of the dog is an abstraction from all the dogs
of the world. Under a clearly defined subject matter of linguistics,
the distinction between grammatical and lexical meanings does not
present serious problems except for special cases at the interface of
grammar and lexicon. These special cases have a marginal significance;
they in no way undermine the fundamental opposition between
grammatical and lexical meanings.


Benji Wald writes:

>A quick comment on Shaumyan's posting. His position, or at least his
>tree of language, seems to me to threaten a replay of the "linguistics
>wars" between Generative and Interpretive Semantics described in
>Harris's book of that title.


My position in no way threatens a replay of the "linguistics wars"
between Generative and Interpretive Semantics. Here is why.

Applicative Universal Grammar is a semiotic theory of language. The
cornerstone of Applicative Universal Grammar is the concept of the
sound-meaning bond. The sound-meaning bond is defined by the RELEVANCE
PRINCIPLE:


 Every linguistic unit is a combination of a class of meanings with a
 class of phonic segments (or markers based on phonic segments).
 The only distinctions between meanings that are relevant for forming
 a class of meanings are those that correlate with the distinctions between
 the corresponding phonic segments (or markers based
 on phonic segments), and, conversely, the only distinctions between phonic
 segments (or markers based on phonic segments) that are relevant
 for forming a class of phonic segments (or markers based on phonic
 segments) are those that correlate with the distinctions between the
 corresponding meanings: no matter how different, two distinct meanings
 belong in the same class of meanings if they do not correlate with
 two distinct phonic segments (or markers based on phonic
 segments); conversely, no matter how different, two distinct phonic
 segments (or markers based on phonic segments) belong in the same
 class if they do not correlate with two distinct meanings.



 ELEMENTARY EXAMPLES, ILLUSTRATING THE RELEVANCE PRINCIPLE

The English word WASH has different meanings in the context of
expressions WASH ONE'S HANDS and WASH THE LINEN. But the distinction
between the two meanings is irrelevant for the English language
because this distinction does not correlate with a distinction between
two phonic segments: in both cases we have the same phonic segment
WASH. Therefore these two meanings must be regarded not as two
different meanings but as two variants of the same meaning. On the
other hand, the meaning of the Russian word MYT', which corresponds to
the meaning of the English WASH in WASH ONE'S HANDS, and the meaning
of the Russian word STIRAT', which corresponds to the meaning of the
English WASH in WASH THE LINEN, must be regarded as different meanings
rather than variants of the same meaning as in English, because the
distinction between the meanings of Russian MYT' and STIRAT'
correlates with different phonic segments, and therefore is relevant
for the Russian language. As to relevant and irrelevant distinctions
between signs, consider, for instance, the distinction between the
phonic segments NJU and NU. The distinction between these phonic
segments does not correlate with a distinction between their
meanings. Therefore these phonic segments are variants of the same
phonic segment, denoted by the sequence of letters NEW.
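The logic of these examples can be sketched computationally. This is a toy illustration of my own, with invented meaning labels, not a piece of AUG's formalism: two meanings fall into the same class whenever they fail to correlate with two distinct phonic segments.

```python
# Toy sketch of the Relevance Principle: meanings that do not correlate
# with distinct phonic segments collapse into one class (variants of a
# single meaning); meanings tied to distinct segments stay distinct.
from collections import defaultdict

def meaning_classes(observations):
    """Partition meanings by the phonic segment each correlates with.

    observations: (phonic_segment, meaning) pairs.
    Returns a dict mapping each phonic segment to its class of meanings.
    """
    classes = defaultdict(set)
    for segment, meaning in observations:
        classes[segment].add(meaning)
    return dict(classes)

# English: both senses of WASH share one phonic segment, so the
# distinction between them is irrelevant -- one class of meanings.
english = [("wash", "clean one's hands"), ("wash", "launder linen")]

# Russian: the same two senses correlate with distinct segments
# (MYT' vs STIRAT'), so the distinction is relevant -- two classes.
russian = [("myt'", "clean one's hands"), ("stirat'", "launder linen")]

print(len(meaning_classes(english)))  # 1
print(len(meaning_classes(russian)))  # 2
```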

The Relevance Principle explicates the profound idea of Saussure that
language is form, not substance. Saussure's concept of form is
special: it must by no means be confused with the notion of form as
used in the various formal theories that dominate the current
linguistic scene. Saussure's concept of language as form means an
entity emerging through an intimate interaction of sound and meaning;
neither sound separated from meaning nor meaning separated from sound
is part of language.

The consequences of the Relevance Principle are numberless. They
provide insights into the most intimate properties of language as a
symbolic form of the representation of reality. One of the interesting
and important consequences of the Relevance Principle is that it
provides a semiotic base for the Linguistic Relativity Principle
formulated by Whorf. The semiotic approach to the Linguistic
Relativity Principle leads to a seemingly paradoxical conclusion:
linguistic relativity and linguistic invariance presuppose and
complement each other. The relativity principle applies to relativity
itself; relativity is relative. The complementarity of linguistic
invariance and linguistic relativity has a counterpart in physics. The
concept of invariance is central to Einstein's theory of relativity,
which is concerned with finding out what remains invariant under
transformations of coordinate systems. Applicative Universal Grammar
treats language universals as invariant under the transitions from one
relative linguistic system to another. Further argumentation in
support of the complementarity of linguistic relativity and linguistic
invariance can be found in Shaumyan (1980; 1987).

Now, I think, it is clear why my position does not threaten a replay
of the linguistics wars between Generative and Interpretive
Semantics. In a nutshell, the difference is this: Interpretive
Semantics separates sound from meaning, while Generative Semantics
separates meaning from sound. Interpretive Semantics thinks of
syntactic structures as meaningless sequences of symbols generated
from nowhere and only later "interpreted" by semantic rules, that is,
converted into a semantic representation. By contrast, Generative
Semantics practices a reckless analysis of the meaning of linguistic
expressions, disregarding the structure of their sound shapes. Both
Interpretive and Generative Semantics have paid little attention to
the conceptual analysis of linguistic data needed to clearly define
the assumptions justifying their research methods. Rather, they have
been interested in mathematical formalism. But mathematics is no
substitute for a conceptual analysis of data.

Under the Relevance Principle, neither Interpretive nor Generative
Semantics is acceptable. Any linguistic theory which recklessly
neglects the conceptual analysis of constraints determined by the
sound-meaning bond is built on sand.

Which is better, Interpretive or Generative Semantics? Both are worse.

To substantiate my characterization of Interpretive and Generative
Semantics, I must give examples. One can find a detailed in-depth
analysis of Chomsky's theories and their derivatives, illustrated by
copious examples, in Shaumyan 1987. Here I will confine myself to one
elementary example illustrating the difference between Generative
Semantics and semantics based on the Relevance Principle.

Let us consider a semantic analysis of the verb KILL. Generative
Semantics treats KILL as a causative verb, which means "CAUSE BECOME
MINUS ALIVE".

The problem with this analysis is that the availability of this
paraphrase of KILL, or of other paraphrases like "RENDER UNALIVE" or
"DEADEN", does not thereby make the verb causative, because the
causative meaning is not correlated with a phonic counterpart. A real
causative is one with a causative morphonology, as in the pairs SIT :
SET (I SIT BY THE TABLE : I SET THE TABLE) and FALL : FELL (THE TREE
FALLS : THE LUMBERJACK FELLS THE TREE). KILL does not have a
phonologically marked counterpart *KELL; therefore it is not
causative.

The root of this false hypothesis is the implicit false
assumption that the meaning of a word is determined independently of
its correlation with its sound form. A correct semantic analysis must
be based on a correlation of the meaning of an expression with its
sound form.

In conclusion, I must say this: I reject both Interpretive and
Generative Semantics, but I recognize that the enthusiastic following
of Interpretive and Generative Semantics has produced an interesting
and useful body of research. The negative attitude towards any theory
in no way implies the negation of the positive results obtained in the
framework of the theory.

Do I contradict myself? No, I don't. A theory is like a window
through which we look out of a room. A good theory is like a clean
window. A bad theory is like a dirty window. It allows us to see but
it may distort what we see. Still, if we want to see what is going on
outside, we need a window. A dirty window is better than no window. A
bad theory is better than no theory. A bad theory allows us to see
somehow. Without a theory we see nothing. This is why even a bad
theory may help us to get positive and interesting results. A bad work
is bad, and a good work is good no matter what theory its author has
used. Every work must be judged on its own merits independently of
the theoretical principles its author relied on. Of course, in the process
of the analysis of a work we must separate the wheat from the chaff.

Under his false theory, Columbus predicted that he would reach India,
but he discovered America. And this is one of the best things done
under a false theory in our imperfect world.


REFERENCES

Shaumyan, S. (1987). A Semiotic Theory of Language. Bloomington,
 Indiana: Indiana University Press.

Shaumyan, S. (1980). "Semantics, the Philosophy of Science, and
 Mr. Sampson". Forum Linguisticum, Volume V, Number 1, pp. 66-83.
 Published by Jupiter Press for the Linguistic Association of Canada
 and the United States. Adam Makkai, Editor, P.O.B. 101, Lake Bluff,
 Illinois, 60044.

Nalimov, V.V. (1981). In the Labyrinth of Language: A Mathematician's
 Journey. Philadelphia: ISI Press.

Schwartz, J. (1962). "The Pernicious Influence of Mathematics on
 Science". In Logic, Methodology, and the Philosophy of Science:
 Proceedings of the 1960 International Congress. Palo Alto, Calif.:
 Stanford University Press.




- ------------------------------------------------------------------
Sebastian Shaumyan				 119 Whittier Road
Professor Emeritus of Linguistics	 New Haven, CT 06515, U.S.A.
Yale University					 (203) 397-1814
						 FAX: (203) 387-7433
- ------------------------------------------------------------------
