LINGUIST List 8.575

Wed Apr 23 1997

Disc: Falsifiability in OT

Editor for this issue: Anthony Rodrigues Aristar <>


  1. Mohanan K. P., falsifiability in OT

Message 1: falsifiability in OT

Date: Wed, 23 Apr 1997 09:09:07 +0800 (SST)
From: Mohanan K. P. <>
Subject: falsifiability in OT

I haven't seen any responses to Chris Hogan's questions "Is OT a
linguistic theory?" and "What kinds of data would falsify OT?" I think
these questions are important, so let me make an attempt. Before I do so,
however, I need to spell out what I understand by the term "theory", and
distinguish it from "formalism" and "conceptual framework".


THEORY 1: In the physical sciences, when we say "Newton's theory of
gravitation", "quantum theory", and so on, we refer to a constellation of
logically connected empirical laws, definitions and other hypotheses that
apply to the domain in question. Such a constellation is empirical in the
sense that it yields testable assertions about the world. Binding theory
in GB and Kiparsky's (1979) universal syllable template are both theories
in this sense of the term.

THEORY 2: In the logico-mathematical domain, the term "theory" is used
to refer to formal systems (= formalisms). Formalisms such as set theory,
graph theory, number theory, group theory, etc. are not empirical,
because by themselves they do not yield any predictions about the states
of affairs in the world. The same remark applies to the formalisms in
logic such as propositional calculus, predicate calculus, and fuzzy
logic. A formalism is a combination of a formal language and a system of
calculation such that, given certain propositions, we can infer
(= calculate) certain other propositions. Theory 1 is evaluated by
checking its external consistency (i.e., consistency of the predictions
with observations), as well as its internal consistency. Theory 2 is
evaluated by checking its internal consistency alone. (De Morgan's laws
in propositional calculus, unlike Newton's laws in physics, are not laws
about states of affairs in the world.)

Thus, theory 1 is substantive, while theory 2 is formal. In the physical
sciences, theory 1 is typically accompanied by (or implemented in terms
of) theory 2. For instance, the formalisms accompanying Newton's theory
of gravitation include Euclidean geometry and calculus. The formalisms
accompanying Einstein's theory of gravitation include Riemannian geometry
and calculus. Theories in biology and medicine do not seem to have
formalisms attached to them. Thus, competing theories of migraine
(vascular, neural, neurovascular...) are made up of substantive
propositions without an accompanying system of calculations.

If a formalism is designed for use in a particular domain, we may call it
a formal framework for that domain. Proposals for context free and
context sensitive PS rules, structure building and structure changing
rules, transformational and non-transformational grammars, constraints
and repair, extrinsic ordering, rules and constraints, percolation, and
so on, are formal frameworks in this sense: they give us a domain
specific formal language and a calculating system to deduce the
consequences of the laws and representations of the organization of human
language, but they do not tell us anything about the content or substance
of these laws and representations. Underspecification is a formal
framework for phonological laws and phonological representations. A great
deal of research during the early days of generative linguistics was
aimed at developing formalisms appropriate for the grammars of natural
languages. As far as I can tell, the first articulation of theory 1 of
human language was the A-over-A condition, followed by islands, followed by
the RG laws.

The counterparts of physical laws in linguistics are "rules",
"constraints", "principles", "conditions", and so on. (I will use the term
"law" as a neutral cover term, including all these different formalisms
for the statement of regularities in representations.) With the exception
of the discussion in chapter 9, SPE is not a phonological theory 1 of
human language, because it does not contain a set of universal
phonological laws. What SPE gives us is a formalism for the statement of
laws and representations (theory 2). It also gives us a phonological
theory 1 of English, because it contains a set of rules for English.

THEORY 3: The term theory is also used in the sense of a conceptual
framework, which is a vocabulary associated with a set of related
concepts. A conceptual framework allows us to formulate theories, laws
and descriptions of the world. The vocabulary of gravity, force,
acceleration, momentum, distance, time, mass, and so on forms the
conceptual framework within which Newton's theory of gravitation is
formulated. Einstein's framework replaces "force" with "field". The so
called distinctive feature theory is a conceptual framework in this
sense: it gives us a set of concepts ([voice], [nasal], ...) that go into
the formation of empirical laws and representations. Like theory 1,
unlike theory 2, theory 3 is substantive.

In sum, theory 1 in the physical sciences, but not necessarily in biology
and medicine, includes theory 2 (formalism) and theory 3 (conceptual
framework). We can have theory 2 and theory 3 without theory 1. In logic
and mathematics, theory 2 does not entail theory 1. Distinctive feature
theory (theory 3) does not entail theory 1.


The way I see it, OT is primarily a formal framework (= domain specific
theory 2) that provides a system of calculating the interaction between
the laws of the organization of language. In classical propositional
calculus, premises (a)-(c) in (1) yield the unresolvable logical
contradiction in (d):

(1) Derivation 1:
	a) P --> Q
	b) M --> not Q
	c) P & M
	d) therefore Q & not Q

Classical propositional calculus is monotonic. Suppose we construct a
non-monotonic propositional calculus in which axioms are prioritized.
When a combination of axioms results in a potential contradiction, the
more highly ranked axiom determines the conclusion. In such a system, we
can derive (2e) by ranking (2a) higher than (2b):

(2) Derivation 2
	a) P --> Q
	b) M --> not Q
	c) (2a) >> (2b)
	d) P & M
	e) therefore Q

If we rank (2b) higher than (2a), the inference would be "not Q".

If we replace (2d) with (3d), there will be no conflict. Hence the
conclusion will be "not Q":

(3) Derivation 3
	a) P --> Q
	b) M --> not Q
	c) (3a) >> (3b)
	d) M
	e) therefore not Q
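The conflict-resolution scheme of derivations 1-3 can be sketched in a few lines of code. This is a minimal illustration of the prioritized, non-monotonic inference described above, not anything from OT or OL literature; the function and rule names are my own.

```python
# Sketch of "Optimality Logic": if-then rules fire when their premise
# holds, and conflicting conclusions are resolved by rank (earlier in
# the list = higher ranked). Illustrative names only.

def infer(rules, facts):
    """rules: list of (premise, conclusion) pairs, highest-ranked first.
    Returns the conclusion of the highest-ranked rule whose premise is
    among the facts; lower-ranked conflicting rules are overridden."""
    fired = [(rank, concl) for rank, (prem, concl) in enumerate(rules)
             if prem in facts]
    if not fired:
        return None
    fired.sort()          # best (lowest) rank first
    return fired[0][1]    # the top-ranked rule determines the outcome

rules = [("P", "Q"), ("M", "not Q")]   # (2a) ranked above (2b)

# Derivation 2: both P and M hold; the conflict is resolved in favor
# of the higher-ranked rule, so the inference is "Q".
print(infer(rules, {"P", "M"}))   # -> Q

# Derivation 3: only M holds; no conflict, so the inference is "not Q".
print(infer(rules, {"M"}))        # -> not Q
```

Reversing the order of the two rules in the list models ranking (2b) above (2a), in which case the first call returns "not Q" instead.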

We may refer to such a non-monotonic logic as "Optimality Logic" (OL).
Even though OT uses the formalism of constraints and OL uses the
formalism of if-then conditionals, they use equivalent devices for
conflict resolution, and hence may be viewed as variants of the same
formalism.

Like other logico-mathematical systems, OL/OT does not make any claims
about the world: it is simply a formal system for making inferences.
However, we can make empirical claims about the relation between formal
systems and a given domain, such as those in (4) - (8):

(4)	The best formalism for the characterization of human reasoning is
		propositional calculus.
(5)	The best formalism for the characterization of human reasoning is
		predicate calculus.
(6)	The best formalism for the characterization of human reasoning is
		the combination of fuzzy logic and OL.
(7)	The best formalism for the characterization of regularities in
		human languages is that of context sensitive PS rules.
(8)	The best formalism for the characterization of the interaction of
		laws in human languages is that of OT.

While the formal systems of propositional calculus, OL and fuzzy logic by
themselves do not make any empirical claims, the claims in (4)-(8) are
empirical. For instance, it is easy to show that (4) and (5) are not
tenable. Chomsky tried to show that (7) was untenable, while GPSG tried
to show that (7) is tenable if we enrich the formalism with devices like
percolation and non-local subcategorization.

As far as I know, linguists (and probably economists) are the only people
preoccupied with claims of appropriateness of formalisms. If Newton and
Einstein were to make claims of the type that we linguists make, physics
would have controversies over (9) and (10):

(9) The best formalism for gravitational phenomena is Euclidean geometry.
(10) The best formalism for gravitational phenomena is Riemannian geometry.

The bulk of research in theoretical phonology since SPE has been aimed at
developing appropriate formal frameworks for human languages, a
preoccupation initiated by Chomsky's Ph.D. thesis.

If we distinguish between formalisms per se on the one hand, and claims
about the appropriateness of formalisms such as (4)-(8) on the other, it
becomes clear that the empirical content of OT, SPE, Syntactic Structures,
etc. at the level of linguistic theory lies in the claims of
appropriateness rather than in the formalisms themselves.

In terms of the typology of theories that Hogan lists (IA, IB, IIA, IIB),
I guess what I am saying is that OT is a type IIA theory, not a IA. Hogan
does not distinguish between formal frameworks and conceptual frameworks,
so I should add that the type II category relevant for OT is that of the
formal framework, not conceptual framework.


As for Hogan's question, "What kind of data would falsify OT?", the
answer is the same for all frameworks, whether conceptual or formal. No
data by itself would falsify a framework. We show that a framework is
untenable by demonstrating that there exists a body of data which calls
for an analysis inconsistent with the framework in question. The
argument can take two forms. If we are lucky, we can demonstrate that
there exists a body of data for which we can construct a successful
analysis in terms of framework A but not framework B, and hence framework
B should be rejected (e.g. the argument against non-transformational
grammars based on the "respectively" construction.) Alternatively, we
can demonstrate that there exists a body of data for which framework A
yields a simpler analysis than framework B, and hence A is superior to B.
(e.g. The arguments for the distinctive feature classificatory system as
opposed to the IPA classificatory system.)

To take an example, consider how conflict resolution is achieved in the
framework proposed in my "Fields of Attraction" paper in Goldsmith (ed)
"The Last Phonological Rule". This framework uses intrinsic strength
assignment rather than relational ranking assignment, as illustrated in
derivation 4:

(11) Derivation 4
	a) P --> Q 		(strength: 0.7)
	b) M --> not Q	(strength: 0.5)
	c) P & M
	d) therefore Q

The idea here is that the stronger requirement is the winner in a
conflict situation. Since (11a) is stronger than (11b), the former wins
when their inferences are contradictory. The strength assignment
formalism provides for three things that the ranking formalism does not
provide for:

(12)	If two laws are inherently strong (their strengths are close to 1),
	neither of them can be violated even when their requirements are in
	conflict.

(13)	If a law is inherently weak (its strength is close to 0), it can be
	violated even if there is no other law that contradicts it.

(14)	If the combined strength of law X and law Y is greater than that of
	law Z, the combination will win even if law Z is stronger than both
	law X and law Y individually (the ganging up effect).

The ganging up effect is illustrated in derivation 5:

(15) Derivation 5
	a) P --> Q (strength: 0.7)
	b) M --> not Q (strength: 0.5)
	c) S --> not Q (strength: 0.4)
	d) P & M & S
	e) therefore not Q

In the OT/OL formalism, the inherent strength assignments of (15a-c)
translate as "((15a) >> (15b) >> (15c))". Hence the inference will be
"Q", where the winner is (15a). If the ganging up effect is required for
the analysis of a body of data, it will involve an additional formal
device that sanctions the equivalent of ((15b)+(15c)) >> (15a).
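The contrast between strength-based resolution and strict ranking can be made concrete with a small sketch of derivations 4 and 5. This is purely illustrative (the function name and numeric strengths follow the examples above, not any published implementation): each law carries an intrinsic strength, conflicting conclusions are resolved by comparing summed strengths, and the ganging up effect falls out for free.

```python
# Sketch of strength-based conflict resolution: rules are (premise,
# conclusion, strength) triples. When fired rules support conflicting
# conclusions, the conclusion with the greater total strength wins,
# so weaker laws can "gang up" on a stronger one. Illustrative only.

def infer_by_strength(rules, facts):
    """Sum the strengths behind each conclusion whose premise holds,
    and return the best-supported conclusion."""
    support = {}
    for premise, conclusion, strength in rules:
        if premise in facts:
            support[conclusion] = support.get(conclusion, 0.0) + strength
    if not support:
        return None
    return max(support, key=support.get)

rules = [("P", "Q", 0.7),        # (15a)
         ("M", "not Q", 0.5),    # (15b)
         ("S", "not Q", 0.4)]    # (15c)

# Derivation 4: only P and M hold; 0.7 > 0.5, so "Q" wins.
print(infer_by_strength(rules, {"P", "M"}))        # -> Q

# Derivation 5: S also holds; 0.5 + 0.4 = 0.9 > 0.7, so the two
# weaker laws gang up and "not Q" wins.
print(infer_by_strength(rules, {"P", "M", "S"}))   # -> not Q
```

Under strict ranking ((15a) >> (15b) >> (15c)), by contrast, only the highest-ranked fired rule matters, so the second call would still yield "Q"; the summation step is exactly the extra formal device the ganging up effect requires.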

Department of English Language and Literature
National University of Singapore