LINGUIST List 13.1288

Wed May 8 2002

Disc: Re: 13.1279: Falsifiability vs. Usefulness

Editor for this issue: Karen Milligan


  1. Kurt S. Godden, Re: Falsifiability vs. Usefulness
  2. Roger Lass, Re: 13.1279, Disc: New: Falsifiability vs. Usefulness
  3. David Ludden, Re: 13.1279, Disc: New: Falsifiability vs. Usefulness

Message 1: Re: Falsifiability vs. Usefulness

Date: Wed, 08 May 2002 10:31:23 -0400
From: Kurt S. Godden
Subject: Re: Falsifiability vs. Usefulness

In LINGUIST 13.1279 Wed May 8 2002, Dan Everett urges us to
question the value of falsifiability, citing this example:

> To see this more clearly, consider the following two statements:
> (1) Sentences of natural language never surpass 1,173 letters in length.
> (2) Agents, more often than not, but not always, are expressed as topics.
> Statement (1) is explicit, strong, clear, and falsifiable. Statement
> (2) is clear, not very explicit, somewhat weak, and difficult to
> falsify. Still, though, (2) seems eminently superior to (1) as advice
> for a new linguist.
> Having a 'strong, falsifiable claim' is like owning a well-crafted
> shovel. Sometimes it can be useful. But sometimes it gets in the
> way. An article or analysis chockablock with strong, falsifiable
> claims is not necessarily a better or more useful article than another
> lacking them. Each article and each claim must be judged on a
> case-by-case basis according to our goals.

Yes, but all things being equal, falsifiable claims are far better than
unfalsifiable claims. The problem with your examples is that all
things are *not* equal. Statements of theory should also be descriptive
and predictive, among other things. While it's good to question your
assumptions, your argument does not at all convince me of anything,
though I did enjoy reading your posting, and hope that it leads to some
lively discussion.

I would also like to comment on the following:
> But it also fails to account for theory shift among its proponents in
> the *absence* of counterexamples. For example, as some Topic/Comment
> pages in Natural Language and Linguistic Theory have pointed out of
> late, a major recent shift in linguistic theory seems to have been
> undertaken not because a particular set of hypotheses was falsified,
> but, rather, because the founder of the theory decided to do something
> else. 

At the risk of offending many, if not most, of the people reading this
list, I have long observed the entire field of linguistics as being very
prone to fads and the cult of personality. And if you *are* offended,
then perhaps there is some truth in what I say.

Kurt Godden, Ph.D.
Principal Member of the Engineering Staff
Advanced Technology Labs, Lockheed Martin Corporation
Camden, NJ 08102 USA

Message 2: Re: 13.1279, Disc: New: Falsifiability vs. Usefulness

Date: Wed, 8 May 2002 16:52:52 +0200
From: Roger Lass
Subject: Re: 13.1279, Disc: New: Falsifiability vs. Usefulness

I think there is some misunderstanding here. Falsifiability is an
*epistemological* criterion; if we're interested in theories as not
simply instrumental, but as having what Popper called
'verisimilitude', then falsifiability is a kind of guarantor of
vulnerability, which in effect means the highest kind of rationality.

But in many fields we don't know enough to make falsifiable
statements, and there are domains (e.g. statistical ones) where
'truths' are tendential, and falsifiability sensu stricto is not to
the point. But rational discussability, simplicity, and coherence with
knowledge from other domains (what Whewell called 'consilience') are
important, and also serve as a kind of guarantor.

If simple-minded Popperian falsifiability were the only kind, then all
science would be subject to the strictures that Lakatos used against
Popper decades ago. But the notion is not that univocal. Any theory
that makes no predictions at all is of course non-falsifiable; but a
theory that makes no predictions, that does not at least suggest the
existence of phenomena that haven't yet been encountered, or doesn't,
at the weakest, impose some kind of mildly predictive strictures on
its domain is not really 'useful' at all, except as a convention to
adopt for working.

A theory can be non-falsifiable but useful if it at least quantifies
or pseudo-quantifies the 'degree of surprise' one ought to feel at a
phenomenon. So while neo-Greenbergian theories of word-order are not
falsifiable in the strong sense, they are strong enough to have the
utility of allowing us not to expect, and therefore to be surprised
at, say, a fairly rigid OV language with prepositions. This is also a
narrowing of domain, and does one thing that all theories ought to do,
whatever their epistemological pretensions: as Popper put it, the
function of a theory is 'to rule out states of affairs in nature'. In
another sense one could also look at the utility of even a
nonfalsifiable theory as inhering precisely in the limitations it
imposes on the phenomenal universe: at its weakest, a theory should at
least be maximally nonpermissive, help to define 'nature' in a domain,
and be a device for disallowing miracles.

Roger Lass
Graduate School in Humanities
University of Cape Town

Message 3: Re: 13.1279, Disc: New: Falsifiability vs. Usefulness

Date: Wed, 8 May 2002 12:17:04 -0500
From: David Ludden
Subject: Re: 13.1279, Disc: New: Falsifiability vs. Usefulness

I found Dan Everett's essay on "Falsifiability vs. usefulness" very
interesting, and I think that he touches on an important issue
concerning how linguistics is pursued as a science. However, I'm not
convinced that Everett makes a meaningful distinction between
falsifiable and useful. Rather, I think the issue he is trying to get
at is the question of how linguists should test hypotheses.

Everett illustrates his distinction between falsifiable and useful
hypotheses with the following examples.

>(1) Sentences of natural language never surpass 1,173 letters in length.
>(2) Agents, more often than not, but not always, are expressed as topics.

For Everett, hypothesis (1) is falsifiable because one need only
produce a sentence from a natural language containing 1,174 or more
letters in order to prove it false. On the other hand, hypothesis (2)
is not falsifiable on Everett's view because providing an instance of
agent-as-nontopic, or any number of instances, does not prove the
hypothesis false. This constitutes the rationalist approach, thinking
in terms of necessary and sufficient conditions. The rationalist
approach yields certain results, but its scope of application is
limited. It is the approach a mathematician would take in testing a
proposed theorem.

However, hypothesis (2) is falsifiable in a different sense. If we
analyzed a sufficiently large representative corpus and found
significantly more instances of agents-as-nontopics than
agents-as-topics, we would conclude that hypothesis (2) is probably
false. This constitutes the empirical approach, thinking in terms of
tendencies and probabilities. The empirical approach yields tentative
results that grow stronger in certainty with replication and
converging evidence. It is the approach used in the natural sciences
in general and the social sciences in particular, where the data are
always noisy.
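The empirical test described above can be sketched concretely. The
following is a minimal illustration, not part of Ludden's post: the
corpus counts are invented for demonstration, and a one-sided exact
binomial test (here computed directly from binomial coefficients) is
one standard way to ask whether agents are topics "more often than
not" at better than chance rates.

```python
from math import comb

# Invented, purely illustrative counts from a hypothetical tagged
# corpus: clauses where the agent is the topic vs. where it is not.
agents_as_topics = 620
agents_as_nontopics = 380
n = agents_as_topics + agents_as_nontopics

# Hypothesis (2) predicts the true proportion of agents-as-topics
# exceeds 0.5. The one-sided exact binomial p-value is the chance of
# seeing a count at least this large if the true proportion were 0.5.
p_value = sum(comb(n, k) for k in range(agents_as_topics, n + 1)) / 2**n

proportion = agents_as_topics / n
print(f"observed proportion of agents-as-topics: {proportion:.2f}")
print(f"one-sided binomial p-value under p = 0.5: {p_value:.3g}")
```

A significantly high proportion would tentatively support the
hypothesis; a proportion significantly below 0.5 would lead us, as
Ludden says, to conclude that hypothesis (2) is probably false. Either
way, the result remains probabilistic and grows stronger only with
replication.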

"Usefulness" is a subjective term, but if we take it to mean something
like "affording a greater understanding," then Everett is correct in
saying that hypothesis (1) is not very useful but that hypothesis (2)
is. Certainly our hypotheses should be useful in this sense;
otherwise, they will be trivial. However, our hypotheses should also
be empirically falsifiable; otherwise, they will be so vaguely worded
that they are meaningless.

David Ludden
Department of Psychology
University of Iowa