LINGUIST List 13.1367

Wed May 15 2002

Disc: Falsifiability vs. Usefulness

Editor for this issue: Karen Milligan <karenlinguistlist.org>


Directory

  1. Mike Maxwell, Re: Disc: Falsifiability vs. Usefulness
  2. H.M. Hubey, Re: 13.1354, Disc: Falsifiability vs. Usefulness
  3. Dan Everett, Falsifiability and usefulness
  4. Martha McGinnis, Re: Falsifiability vs. Usefulness

Message 1: Re: Disc: Falsifiability vs. Usefulness

Date: Wed, 15 May 2002 11:21:52 -0400
From: Mike Maxwell <maxwellldc.upenn.edu>
Subject: Re: Disc: Falsifiability vs. Usefulness

In LL 13.1279, the first msg in this thread, Dan Everett wrote:
>Pick your favorite theory. Can you really imagine any
>circumstances under which its founders would admit
>that enough of its statements had been falsified to
>warrant chucking it?

Maybe it's time to bring some real cases into this discussion.

Since I know more about current issues in phonology and morphology
than I do about syntax, let me try an example from that. (Maybe this
was mentioned in the NLLT editorial that Dan referred to in his first
msg--I haven't seen it.) And I will _not_ pick my favorite
theory--quite the contrary!

What would it take to falsify Optimality Theory (OT)?

To make this a bit more specific, what would it take to falsify
current versions of OT as applied to phonology and morphology? The
original version of OT was more constrained (pardon the pun)--all
constraints were innate, there were only "ordinary" constraints on
Input-Output, and so forth. Now there are constraints that can only
be interpreted as learned (at least that's my take!), conjoined
constraints, Output-Output constraints, Base-Reduplicant constraints,
anti-faithfulness constraints, ad infinitum (or at least so it appears
to me). Is the theory falsifiable?

Maybe it doesn't matter whether the theory is falsifiable (and maybe
this is Dan's point). One can argue that the predecessors to OT
Phonology, such as rule-based lexical phonology, were also
unfalsifiable, and that they achieved this status by add-ons,
particularly by constraints (the OCP, for example) imposed on top of
the rules. After a while, the add-ons achieved the status of epicycles
in some linguists' eyes, and they began looking for alternative
theories. Are we approaching that state in OT?

Regardless of where OT stands in this, I guess the main point is that
one can usually deal with counter-examples by patching up a theory.
But after a while, the patches are perceived as epicycles, and at that
point (well, it's not usually a _point_, but that's another issue) the
theory becomes suspect. If there's another theory waiting in the
wings, then is the time to bring it out.

Incidentally, I suspect the reason Steady State cosmology was given up so
completely and quickly (without trying to patch it, see LL 13.1330) is
that (ironically!) it's much harder to add epicycles to astronomy than
it is to add them to theories of cognitive psychology. There's no
reason to suppose that the mind is maximally simple, whereas astronomers
tend to presuppose simplicity.

 Mike Maxwell
 Linguistic Data Consortium
 maxwellldc.upenn.edu

Message 2: Re: 13.1354, Disc: Falsifiability vs. Usefulness

Date: Wed, 15 May 2002 12:42:07 -0400
From: H.M. Hubey <hubeyhmail.montclair.edu>
Subject: Re: 13.1354, Disc: Falsifiability vs. Usefulness



LINGUIST List wrote:

> Date: Wed, 15 May 2002 07:51:39 -0300
> From: "Dan Everett" <dan_everettsil.org>
> Subject: RE: 13.1351, Disc: Falsifiability vs. Usefulness
>
> happy with this, so long as we recognize that it does not entail
> showing something to be wrong or eternal abandonment of the hypothesis
> (for example, someone could convince me that there is a word boundary
> after the penult in the word that I had thought to be a
> counterexample, after I had abandoned my original stress
> hypothesis).

You might simply modify your original statement to "mostly" instead of
"all the time". That would be perfectly reasonable and rational and
can be handled both by probabilistic reasoning and fuzzy logic. But
simple crisp logic says that the original statement has been
falsified.

But falsifiability is not about the implication P=>Q. It is more than
that. It is about the kinds of "statements" (theories) that can be
deemed scientific. It is possible to make a scientific statement which
turns out to be incorrect. The statement "all birds on earth fly" is a
scientific statement but not a correct statement. It is scientific
because it is, in principle, falsifiable. Just find a bird that does
not fly. That is how it is falsifiable.
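[Ed.: Hubey's asymmetry between verifying and falsifying a universal claim can be put in programmatic terms. The sketch below is illustrative only, with invented bird records; it is not from the original post.]

```python
# Illustrative sketch: falsifying a universal claim such as "all birds
# fly" means finding a single counterexample among the observations.
# No finite set of observations can *verify* the claim, but one
# observation suffices to falsify it. (The bird data are invented.)

def first_counterexample(claim, observations):
    """Return the first observation violating `claim`, or None."""
    for obs in observations:
        if not claim(obs):
            return obs
    return None  # not falsified yet -- which is not the same as verified

birds = [
    {"name": "sparrow", "flies": True},
    {"name": "penguin", "flies": False},  # the non-flying bird
    {"name": "albatross", "flies": True},
]

counterexample = first_counterexample(lambda b: b["flies"], birds)
# counterexample is the penguin record, so the universal claim is falsified
```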

But a statement like "the devil is the root of all evil" is not
falsifiable, especially if you do not show people how we would go about
detecting its presence. You might claim that "every time there is evil,
that shows the presence of the devil". But how is it falsifiable?
Nobody can prove that there is no devil.



> I urge linguists who wish to hang on to a vestige of
> falsifiability to accept this much less potent, non-Popperian sense of
> falsifiability, rather than the considerably more questionable and
> pompous notion that Hull, Hempel, Lakatos, and other philosophers of
> science have argued so strongly against.

Science is more than simple logic. But Popper's notion is about
something like the "bare minimum" requirements for science. For
example, I think there are "qualities" to science, like quality of
energy. And those where there is no math are of poor quality. Nobody
has to agree, but the bare minimum requirement at least allows all the
-ologists to call themselves scientists, and to call their -ologies
"sciences".



....Mark
hubeyhmail.montclair.edu

Message 3: Falsifiability and usefulness

Date: Wed, 15 May 2002 12:50:45 -0300
From: Dan Everett <dan_everettsil.org>
Subject: Falsifiability and usefulness

The replies by Hubey and Bradfield in 13.1354 are the most useful ones
to date, I think. They both address Hempel (but omit Hull) and they do
so with sophistication. Bradfield takes issue both with the way I put
the Hempel problem and with the conclusion I draw from it. Hubey
thinks it is easy to show something to be wrong. And he thinks that I
have applied logic to science in an overly restrictive manner,
specifically, that I have failed to refer to Bayesian reasoning and
that this has adversely affected my view of science and
falsifiability. And he agrees with Bradfield that I have drawn the
wrong conclusion from Hempel. Before I address their criticisms, let
me attempt to restate more precisely a couple of the problems I have
with falsifiability.

(1) It is too weak: falsifiability would admit as 'cognitively
significant' (Hempel 1991, 71) or empirically significant statements
that are otherwise useless, e.g. statements of the form 'All swans are
white and the absolute is perfect'.

(2) It is too strong: falsifiability rules out statements such as 'For
every language there exists a set of derivational mappings between the
syntax and the phonology' or 'For every language there exists a
default epenthetic vowel'. Such statements can be very useful, yet
they are not falsifiable. (See Hempel for the non-falsifiability of
statements mixing universal and existential quantifiers, e.g. 'For any
substance there exists a solvent'.)
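[Ed.: Hempel's point about mixed quantifiers can be made explicit in logical notation; the rendering below is a sketch, not a quotation from Hempel.]

```latex
% "For any substance there exists a solvent", with S(y,x) read as
% "y dissolves x":
\forall x\, \exists y\; S(y,x)
% To falsify it, one would have to establish its negation:
\neg\, \forall x\, \exists y\; S(y,x)
  \;\equiv\; \exists x\, \forall y\; \neg S(y,x)
% i.e. exhibit a substance x and then verify, for *every* candidate y,
% that y fails to dissolve x. That is itself a universally quantified
% claim, which no finite set of observations can establish. Hence the
% mixed statement is neither verifiable nor falsifiable by finite
% evidence.
```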

(3) It is too difficult to apply: 'All languages have syllables'. To
prove this or disprove it for any given instance simply requires too
many additional assumptions for it to be said to be a falsifiable
statement in any useful sense. Yet it can be quite a useful statement
for the linguist. (As Hull (1988, 295ff) points out, there are two
kinds of "wriggling" that take place in science. One is definitional -
most terms in most theories contain sufficient "semantic plasticity" to
allow one scientist to tell another 'That doesn't falsify my idea,
because I didn't *mean* that.' The other comes from the constant
evolution of theories: this or that fact just doesn't matter, yet or
anymore, to my statement or theory, as the case may be.)

Bradfield's main point is that Hempel's 'conjunction problem'
(repeated in (1) above) is not a salient problem. As I state above,
though, this problem is salient because it shows that falsifiability
is too weak. It does this because it shows that falsifiability fails
to rule out empirically useless statements such as 'All swans are
white and the absolute is perfect', because they are in fact
falsifiable.

Hubey claims that if one says that "all rabbits are yellow" and then
finds a white rabbit then the claim has been falsified. No, the claim
has *apparently been counterexemplified*. What happens after that is
anyone's guess. For example, I suspect that theoreticians committed to
yellow rabbits will try to find a reason to explain the white rabbit.
And such efforts, based on group solidarity, a priori commitments,
etc., can be quite effective. So much so that falsifiability is in the
eye of the beholder. (For example, how many people need to see the
white rabbit before 'All rabbits are yellow' is falsified? One?
Perhaps they were on LSD? Two? Perhaps they were in on a conspiracy?
One plus a photo? Perhaps the photo was doctored? Etc., etc.)

I am *not* saying, in response to Hubey, that science progresses
logically. Not at all. I am agnostic on the matter. If I were to make
a claim it would be that most people *think* science is progressing
and so I guess it is (after all, I am sitting about 1 degree south of
the equator as I type this, but am nevertheless able to connect
immediately to the internet and work comfortably in an air-conditioned
room). But that judgment has nothing to do with any kind of logic or
with falsifiability.

Dan Everett

P.S. In his comments, Bradfield remarks on the analytic vs. synthetic
statement distinction and says that '2+2=4' is nonempirical because it
is analytic. Now this is not crucial to the discussion on
falsifiability, but it does raise the issue of truth again, since many
people agree that Quine's 'Two dogmas of empiricism' successfully
demonstrated the lack of utility of the analytic/synthetic
distinction. Let me suggest a possible application of the
obliteration of this distinction to linguistics. In Optimality Theory
inviolable constraints are rendered as definitions in the GEN function
and violable constraints are in the EVAL component. The inviolable
constraints, e.g. 'syllables do not dominate feet', contain many
nonfalsifiable statements, though they escape scrutiny by the notion
of falsifiability because they are taken (when anyone bothers to
consider them) as analytic. But if Quine is right (and if my
understanding of him is right, which is far more dubious), then this
distinction cannot be made.
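[Ed.: for readers unfamiliar with the GEN/EVAL division Everett refers to, a deliberately toy sketch follows. All constraint names and candidate forms are schematic inventions for illustration; real OT tableaux are far richer. GEN's inviolable constraints act as definitional filters on the candidate set, while EVAL compares the surviving candidates against a ranked list of violable constraints.]

```python
# Toy sketch of OT's EVAL component: each violable constraint returns a
# violation count (0 = satisfied), and the winner is the candidate with
# the lexicographically fewest violations, highest-ranked constraint
# first. Candidates are schematic CV-skeleton syllable shapes.

def eval_ot(candidates, ranked_constraints):
    """Return the optimal candidate under the given constraint ranking."""
    def profile(cand):
        # Violation profile, ordered from highest- to lowest-ranked.
        return tuple(c(cand) for c in ranked_constraints)
    return min(candidates, key=profile)

# Two classic violable constraints, stated over CV skeletons:
no_coda = lambda syl: 0 if syl.endswith("V") else 1    # *CODA: no final C
onset   = lambda syl: 0 if syl.startswith("C") else 1  # ONSET: initial C

# GEN would generate these (after its inviolable, definitional filters);
# here we simply list a few surviving candidates.
candidates = ["CV", "V", "CVC"]

winner = eval_ot(candidates, [no_coda, onset])  # ranking: *CODA >> ONSET
# winner is "CV": it violates neither constraint
```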



Message 4: Re: Falsifiability vs. Usefulness

Date: Wed, 15 May 2002 13:03:10 +0000
From: Martha McGinnis <mcginnisucalgary.ca>
Subject: Re: Falsifiability vs. Usefulness

Just a brief response to Dan Everett's remark (13.1289) that if
linguistic theories are not subject to falsifiability, then
"linguistics cannot be trying to get at the Truth." To my mind, this
is the strongest objection to an unfalsifiable theory.

There may be individual linguists who stubbornly reject every form of
counterevidence to their theory of choice. I have even heard one
prominent linguist respond to counterevidence by saying that he was
aiming not to make falsifiable claims, but simply to describe patterns
in the data. I can only conclude that these individuals are not
trying to get at the truth about language, though their work may
organize and present data in a way that is "useful" for those who are.
In fact, it's not clear to me what such work could be useful for, if
no one tried to get at the truth. (I take it that Dan isn't talking
about linguistics being useful for passing the time pleasantly, or for
allowing linguists and their publishers to make a living.)

Fortunately, as far as I can tell, such individuals are the exception
rather than the rule. There are many reasons for rejecting apparent
counterevidence -- counterexamples may result from a systematic
interaction with some independent principle (remember Grimm's Law and
Verner's Law), or may be otherwise misleading. However, when actual
counterevidence arises, as it often does, most linguists revise their
theories accordingly -- not simply by adding stipulations, but by
rethinking their generalizations altogether. A notable illustration of
this took place when I was a grad student at MIT. Chomsky had spent
the semester laying out his then-current version of Minimalist theory,
when a student (Susi Wurmbrand) pointed out that his analysis made the
false prediction that English should allow "*What did there a man
buy?". Chomsky gazed silently at the blackboard for a full
minute. Then he turned, grinned, and agreed it was a problem. He
spent the few remaining lectures hastily demolishing and rebuilding
his theory.

In my experience, linguists do this sort of thing constantly, if not
always quite so publicly. So I don't accept Dan's pessimistic view of
the field. On the whole, I think we are searching for the truth.

Regards,
Martha
______________________________________________________
Dr. Martha McGinnis, Assistant Professor
Linguistics Department, SS 820 University of Calgary
2500 University Dr. NW, Calgary, AB T2N 1N4 CANADA
http://www.ucalgary.ca/~mcginnis/