LINGUIST List 13.1376

Thu May 16 2002

Disc: Falsifiability vs. Usefulness

Editor for this issue: Karen Milligan <karen@linguistlist.org>


Directory

  1. H.M. Hubey, Re: 13.1367, Disc: Falsifiability vs. Usefulness

Message 1: Re: 13.1367, Disc: Falsifiability vs. Usefulness

Date: Thu, 16 May 2002 10:03:44 -0400
From: H.M. Hubey <hubeyh@mail.montclair.edu>
Subject: Re: 13.1367, Disc: Falsifiability vs. Usefulness



LINGUIST List wrote:

> Date: Wed, 15 May 2002 12:50:45 -0300
> From: "Dan Everett" <dan_everett@sil.org>
> Subject: Falsifiability and usefulness
>
> wrong conclusion from Hempel.

I think I should point out why Hempel (probably) did what he did. The
original simple view of science was: (i) look at the data, (ii) make a
hypothesis, (iii) test the hypothesis.

The "experiment" in (iii) was supposed to "verify" the theory/hypothesis.

When it was shown that the experiment did no such thing (e.g. Einstein's
remark "No amount of experimentation can prove me right; a single
experiment can prove me wrong"), people switched to "confirmation",
i.e. the experiment did not verify but "confirmed" the hypothesis.

Hempel showed logically that confirmation was also flawed. The only
thing left was falsification: nobody can deny that a counterexample to a
general statement falsifies the general statement.

I think this problem can be handled by probability theory or fuzzy logic.
In other words, the "implication" (e.g. in P=>Q) has to be defined in
such a way that it is not equivalent to ~Q=>~P. It is this equivalence
that makes "seeing a black raven" and "seeing a non-black non-raven"
equally confirming. With even a foggy notion of probability theory in
the back of one's mind, one can see that seeing a "black raven" is
different from seeing a "yellow banana" when it comes to confirming the
statement "ravens are black".

The problem is with logic.

Something else has to be added to logic (e.g. experimental evidence)
to account for "confirmation".
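To make that probabilistic intuition concrete, here is a small Bayesian
sketch in Python. All the numbers (universe size, raven count, the
fractions q and f) are invented for illustration; the point is only the
comparison of the two likelihood ratios.

```python
# Toy Bayesian reading of the raven paradox (illustrative numbers only).
# Hypothesis H: "all ravens are black".
# Alternative ~H: a fraction q of ravens are non-black.

N = 1_000_000   # objects in the toy universe (assumption)
r = 1_000       # ravens (assumption)
q = 0.1         # under ~H, fraction of ravens that are non-black
f = 0.9         # fraction of non-ravens that are non-black
                # (non-raven colors are independent of H)

# Evidence 1: pick a raven at random and observe that it is black.
# P(black | raven, H) = 1;  P(black | raven, ~H) = 1 - q
lr_black_raven = 1.0 / (1.0 - q)

# Evidence 2: pick a NON-BLACK object at random; observe it is not a raven.
# Under H every non-black thing is a non-raven; under ~H a few are ravens.
nonblack_nonravens = f * (N - r)
lr_yellow_banana = (nonblack_nonravens + q * r) / nonblack_nonravens

print(f"likelihood ratio, black raven:   {lr_black_raven:.6f}")
print(f"likelihood ratio, yellow banana: {lr_yellow_banana:.6f}")
```

Both ratios come out greater than 1, so both observations "confirm" the
hypothesis, but the banana's ratio is negligibly close to 1, which is
exactly the asymmetry described above.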



> Before I address their criticisms, let
> me attempt to restate more precisely a couple of the problems I have
> with falsifiability.
>
> (1) It is too weak: falsifiability would allow as 'cognitively
> significant' (Hempel 1991, 71) or empirically significant, otherwise
> unuseful statements, e.g. statements of the form 'All swans are white
> and the absolute is perfect'.

I don't really understand the point of this. For the conjunction to be
true, both statements have to be true, and falsifiability applies to
each statement separately. It is more restrictive to make a statement in
which both conjuncts are claimed to be true.


> (2) It is too strong: falsifiability rules out statements such as 'For
> every language there exists a set of derivational mappings between the
> syntax and the phonology' or 'For every language there exists a
> default epenthetic vowel'. Such statements can be very useful, yet
> they are not falsifiable. (See Hempel for the non-falsifiability of
> statements mixing universal and existential quantifiers, e.g. 'For any
> substance there exists a solvent'.)

I think in some cases it may be difficult, but for a finite number of
cases one can show via exhaustive enumeration that the statement is true
(or false). In practice it is usually easier to prove that the statement
is false, simply by finding a counterexample, than to exhaustively
enumerate every possibility to show that it holds in every case.

In the case of "for every substance, there exists a solvent" one would
have to define what a substance is, and then list the solvent for every
such "substance", and in practice there may be too many substances. For
example, is a quark a substance? Is an electron a substance?
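As a sketch of the enumeration point, here is how such a mixed-quantifier
claim over a finite domain can be checked in Python. The substances,
solvents, and the dissolves-relation below are invented for illustration:

```python
# Checking "for every substance there exists a solvent" over a FINITE
# domain.  One counterexample falsifies the claim; verifying it requires
# the full enumeration.  All data here are hypothetical.

def check_forall_exists(substances, solvents, dissolves):
    """Return (True, None) if every substance has some solvent,
    else (False, counterexample)."""
    for s in substances:
        if not any((v, s) in dissolves for v in solvents):
            return False, s        # a single counterexample settles it
    return True, None              # exhaustive check: claim verified

# Hypothetical dissolves-relation: pairs of (solvent, substance).
dissolves = {("water", "salt"), ("water", "sugar"), ("aqua regia", "gold")}

ok, witness = check_forall_exists(["salt", "sugar", "gold", "chalk"],
                                  ["water", "aqua regia", "ethanol"],
                                  dissolves)
print(ok, witness)   # → False chalk
```

The search stops at the first substance with no solvent, which is why
falsifying is typically cheaper than verifying.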

Digressing a bit, what linguistics really needs (especially historical
linguistics) is a set of axioms or postulates from which one may attempt
to derive as many of the (apparent) truths as possible. Then one can
use deduction.

As an example of falsifiability, consider the case of lexical diffusion
(LD) vs. regular sound change (RSC). The word "regular" is not even
defined, and the RSC heuristic is directly falsified by LD. So why is
there no move to see whether one of them (probably RSC) can be slightly
modified, followed by a search for a more general model showing that
both are cases of a more general truth?

And even more to the point, during the last century Peirce studied a
type of "reasoning" he called "abduction" (vs. deduction and induction).
Induction will probably not be too useful in linguistics, and deduction
requires an axiomatic system. The work on "universals" is the search for
axioms. "Abduction" is the search for patterns; in practice it is more
like positing a pattern from a few instances.

But this problem exists in every field. One might say that today it is
the problem of artificial intelligence, where it is called data mining
or knowledge discovery.

What is really needed is mathematical modeling. Given enough time, and
enough mathematical models, axioms/postulates will arise quite
naturally.


> (3) It is too difficult to apply: 'All languages have syllables'. To
> prove this or disprove it for any given instance simply requires too
> many additional assumptions for it to be said to be a falsifiable
> statement in any useful sense. Yet it can be quite a useful statement
> for the linguist.

It may be difficult to apply, but I do not see it in this case. Here, in
fact, one sees the usefulness of axiomatic systems, in which the
primitive objects are clearly listed (e.g. sounds, phonemes) and the
derivation of other objects from the primitives is shown or defined.
Here the problem is in the definition of "syllable". It is like asking
for a proof that 1+1=2; it is here that confusion exists. There can be
no such proof, because 1+1 is the definition of 2.

The problem of "syllable" is one of definition.


> (As Hull (1988,295ff) points out there are two kinds
> of "wriggling" that take place in science. One is definitional - most
> terms in most theories contain sufficient "semantic plasticity" to
> allow one scientist to tell another 'That doesn't falsify my idea,
> because I didn't *mean* that.' The other comes from the constant
> evolution of theories, that this or that fact just doesn't matter
> yet/anymore, to my statement or theory, as the case may be).

That is why axiomatic systems (mostly math and the mathematical
sciences) avoid this problem, and that is the reason I think all science
has to be mathematical. For example, the RSC heuristic of historical
linguistics does not even define "regular", and it uses "sound" instead
of "phoneme" or "phone". Why? These are the primitive objects of
linguistics.

Even physics has leeway for wriggling. The measurements do not always
fit the equations exactly, because of measurement error; thus at the
higher levels all such mathematical modeling of physics is stochastic.
Secondly, the real descriptions are probably nonlinear, while we can
work best with linear models, so at the higher levels almost all the
equations are approximations, and everyone knows it.

One can show "failure" in physics even in freshman physics, but everyone
knows that this is a failure of the math. For example, years ago when I
taught physics a student asked me for the solution of a problem. Imagine
a wall of some height with a piece of paper pasted to the side opposite
the experimenter. The object is to calculate the angle at which a
projectile (e.g. a bullet) must be fired in order to hit the paper on
the other side of the wall. It is obvious that in order to hit the
paper, the bullet must come down vertically (grazing the side of the
wall away from the experimenter). In order for the bullet to come down
vertically, it must be fired vertically upwards, and the equations show
it. But if the bullet is fired vertically, why shouldn't it come down
exactly where the experimenter is and hit him?

It looks like a paradox, and it is. But when you check the equations you
see that the velocity must be infinite, so the bullet must go up to
infinity and then return, and that is why it comes back to the ground
vertically.
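The divergence can be made numerically explicit. The sketch below (with
illustrative values for g, the distance D to the wall, and the wall
height h) solves the trajectory equation
y(x) = x*tan(theta) - g*x^2 / (2*v^2*cos^2(theta)) for a shot that
grazes the wall top and lands a distance eps behind it; solving the two
conditions y(D) = h and y(D + eps) = 0 gives
tan(theta) = h*(D + eps)/(D*eps).

```python
import math

# Illustrative parameters (assumptions, not from the original post).
g, D, h = 9.8, 20.0, 5.0   # gravity, distance to wall, wall height

def launch_params(eps):
    """Launch angle (degrees) and speed (m/s) needed to graze the wall
    top and land a distance eps behind the wall."""
    L = D + eps
    u = h * L / (D * eps)                        # tan(theta)
    v = math.sqrt(g * (1 + u * u) * L / (2 * u)) # required launch speed
    return math.degrees(math.atan(u)), v

for eps in (10.0, 1.0, 0.1, 0.001):
    angle, speed = launch_params(eps)
    print(f"eps={eps:7.3f}  angle={angle:6.2f} deg  speed={speed:8.1f} m/s")
```

As eps shrinks toward zero the angle approaches 90 degrees and the
required speed grows without bound, which is the "infinite velocity"
referred to above.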

So these little problems are understood to be problems of the math, not
of the physics. What is really lacking in linguistics is what some might
term "maturity". Auguste Comte clearly pointed this out over 150 years
ago: the development of science actually had to take place the way it
did.

In conclusion, it is not necessary to use logic in linguistics, or in
physics or engineering. What really distinguishes physics, engineering
and computer science from the other sciences is that they deal with
nonliving, nonvolitional objects and that they are based on mathematical
modeling (which does not include logic). No physicist studies logic.
Philosophy of science is done by philosophers, not scientists. Comte
gives the way science should be done, and the reasons why it developed
the way it did and why it had to be that way.

Nobody reading this would ever get their hair cut by someone who did not
have experience cutting hair. And these people are "licensed". Yet
apparently those who would not get their hair cut by the inexperienced
think that they can learn to do science from those who have not done it.
Apparently doing science is easier than cutting hair.

Apparently the same people who teach how science should be done without
doing it are the ones who also did away with Comte's work by labeling it
and tossing it in the garbage can. I can't understand why.


> Bradfield's main point is that Hempel's 'conjunction problem'
> (repeated in (1) above) is not a salient problem. As I state above,
> though, this problem is salient because it shows that falsifiability
> is too weak. It does this because it shows that falsifiability fails
> to rule out empirically useless statements such as 'All swans are
> white and the absolute is perfect', because they are in fact
> falsifiable.

I don't see how any criterion could rule out useless statements.

There is no semantics in math, only in our minds; all math is syntax.
The same goes for linguistic methods that claim to do semantics: they
are merely doing the syntax of another language.


> Hubey claims that if one says that "all rabbits are yellow" and then
> finds a white rabbit then the claim has been falsified. No, the claim
> has *apparently been counterexemplified*. What happens after that is
> anyone's guess. For example, I suspect that theoreticians committed to
> yellow rabbits will try to find a reason to explain the white rabbit.

But what really happens is not what matters. A white rabbit is a
counterexample to "all rabbits are yellow". I think falsification is a
bare minimum, and it can avoid a lot of useless and confusing things.

Here is an example. Not to pick on the author, but I have been reading
Hock's book on historical linguistics, and I find it a thorough book.
However, the statement I have often read in discussion groups is that
"all sound change is regular". Hock's book seems to be saying either (1)
all sound change is regular except for the changes that are not, or (2)
all regular sound change is regular.

Here is where probabilistic models are much more useful than logical
ones. The former can handle "most sound change is regular", and they can
also handle lexical diffusion. Even nonprobabilistic mathematical models
can handle this problem, simply by showing the two to be approximations
of the truth, or one to be a special case of the other. That would leave
out probability theory and still produce a satisfactory answer, as was
the case with the freshman physics problem above.
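A minimal version of such a probabilistic model can be sketched in a few
lines of Python (the parameters are invented for illustration). Treat a
sound change as applying to each eligible word independently with
probability p per generation; lexical diffusion is then the general
case, and "regular" sound change is the limiting case in which the
change runs to completion:

```python
# Toy one-parameter model uniting lexical diffusion (LD) and regular
# sound change (RSC).  A change hits each eligible word independently
# with probability p in each generation, so the expected fraction of
# the lexicon affected after t generations is 1 - (1-p)^t.
# RSC is the special case p = 1 (or the limit t -> infinity).

def fraction_changed(p, generations):
    """Expected fraction of the lexicon affected after t generations."""
    return 1.0 - (1.0 - p) ** generations

for p in (0.1, 0.5, 1.0):
    trajectory = [round(fraction_changed(p, t), 3) for t in (1, 5, 20)]
    print(f"p={p}: after 1, 5, 20 generations -> {trajectory}")
```

With p = 1 the change is "regular" from the first generation; with
p < 1 it diffuses gradually through the lexicon, approaching regularity
only in the limit. Both patterns are cases of one more general model,
which is the point made above.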


> I am *not* saying, in response to Hubey, that science progresses
> logically. Not at all. I am agnostic on the matter. If I were to make
> a claim it would be that most people *think* science is progressing
> and so I guess it is (after all, I am sitting about 1 degree south of
> the equator as I type this, but am nevertheless able to connect
> immediately to the internet and work comfortably in an air-conditioned
> room). But that judgment has nothing to do with any kind of logic or
> with falsifiability.
>
> Dan Everett

I think falsifiability is something like a gen-ed course: it is a bare
minimum. Learning to read is a bare minimum requirement for "literacy",
but in the modern era simply being able to read no longer makes one a
literate person. Maybe in the 1100s it did; after all, there wasn't much
to read in those days.



>
> P.S. In his comments, Bradfield remarks on the analytic vs. synthetic
> statement distinction and says that '2+2=4' is nonempirical because it
> is analytic. Now this is not crucial to the discussion on

Here is a case in point. If we decide that this is "empirical", then we
have to put 2 bananas next to 2 bananas, combine the two into one pile,
and then count, obtaining 4. Then we have to do this with apples,
oranges, sheep, dogs, cars, trucks, ......

The task is infinite.

And not falsifiable.

But it follows from the definitions: "1+1" is the definition of "2",
and so on.
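The definitional point can be made concrete with a toy Peano encoding in
Python: numbers are built from zero by a successor function, and
addition is defined by recursion, so "1+1" reduces to the very term that
defines "2", with no experiments on bananas involved.

```python
# Toy Peano arithmetic: a number is zero or the successor of a number.
# Here zero is the empty tuple and succ wraps its argument in a tuple.

ZERO = ()
def succ(n): return (n,)

one = succ(ZERO)
two = succ(succ(ZERO))

def add(m, n):
    """Peano addition: m + 0 = m;  m + succ(n) = succ(m + n)."""
    return m if n == ZERO else succ(add(m, n[0]))

print(add(one, one) == two)               # → True, by unfolding definitions
print(add(two, two) == succ(succ(two)))   # → True: 2 + 2 = 4 likewise
```

"1+1 = 2" is not verified against the world here; it falls out of the
definitions of successor and addition, which is the claim in the text.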


- 
M. Hubey

hubeyh@mail.montclair.edu
http://www.csam.montclair.edu/~hubey