Jaakko Hintikka
REFORMING LOGIC (AND SET THEORY)
1. Frege’s mistake
Frege is justifiably considered the most important thinker in the development of our
contemporary “modern” logic. One corollary to this historical role of Frege’s is that his
mistakes are found in a magnified form in the subsequent development of logic. This
paper examines one such mistake and its later history. Diagnosing this history also
reveals ways of overcoming some of the limitations that Frege’s mistake has unwittingly
imposed on current forms of modern logic.
Frege’s mistake concerns the semantics (meaning) of quantifiers. The mistake is
to assume that this semantics is exhausted by the quantifiers’ (quantified variables’)
ranging over a class of values. These values are the members of the domain (universe of
discourse) of the language to which the quantifiers belong. The entire job description of
the quantifiers is to indicate whether or not at least one member of the domain has a
certain (possibly complex) predicate (existential quantifier) and to indicate whether all of
them have one (universal quantifier). In other words, quantifiers are higher order
predicates indicating whether or not a given lower-order predicate is nonempty or
exceptionless. This is in fact precisely how Frege proposes to treat quantifiers in his
logical theory. (See Frege 1984, pp. 153-154, pp. 26-27 of the original.)
This is obviously part of the semantical task of quantifiers. However, it is not the
only one. Quantifiers have another function in language. There is a task that any
language must be capable of fulfilling if it is to serve as a language of science and for that
matter as a language suitable for innumerable purposes in everyday life. This task is to
indicate what depends on what, more explicitly, to express relations of dependencies and
independencies between variables. It is easily seen that the only way of expressing such
dependencies in an ordinary logical language on the first-order level is through formal
dependencies and independencies between quantifiers. That the variable y depends on x
(in the sense of ordinary-life dependence) is expressed by the fact that the quantifier
(Q1y) to which y is bound formally depends on the quantifier (Q2x) to which x is bound. Thus in an
(interpreted) sentence of the form
(1.1) (∀x)(∃y)F[x,y]
the variable y depends on the variable x, as is seen e.g. from the fact that the truth-making
value (“witness individual”) of y depends on the value of x. (Re witness individuals, see
also sec. 9 below.)
Such dependence can be expressed on the second-order level by quantifiers
asserting the existence of a function that embodies this dependence. For instance, (1.1) is
equivalent with
(1.2) (∃f)(∀x)F[x,f(x)]
Here f picks out as its value b=f(a) a truth-making value b of y that corresponds to each
value a of x. It will turn out that this way of expressing the dependence of
variables can also be expressed on the first-order level by means of the dependence
relations of first-order quantifiers. This can be done in IF logic; see section 2 below.
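On a finite domain, the equivalence of (1.1) and (1.2) can be checked by brute force. The following Python sketch, using a hypothetical relation F on a three-element domain, compares the first-order rendering with an exhaustive search over all candidate Skolem functions f:

```python
from itertools import product

# Hypothetical relation F on the finite domain {0, 1, 2}:
# F(x, y) holds when y is the successor of x modulo 3.
DOMAIN = [0, 1, 2]

def F(x, y):
    return y == (x + 1) % 3

# (1.1) (∀x)(∃y)F[x,y]: for every x there is some witness y.
first_order = all(any(F(x, y) for y in DOMAIN) for x in DOMAIN)

# (1.2) (∃f)(∀x)F[x,f(x)]: some function f: DOMAIN -> DOMAIN works
# uniformly. Enumerate all 3**3 candidate functions as dictionaries.
skolem = any(
    all(F(x, f[x]) for x in DOMAIN)
    for f in (dict(zip(DOMAIN, values))
              for values in product(DOMAIN, repeat=len(DOMAIN)))
)

print(first_order, skolem)  # both True: the two renderings agree here
```

Here the witness f(x) = (x + 1) % 3 embodies the dependence of y on x.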
This independence of the two aspects of the semantics of quantifiers of one another is
vividly seen in many-sorted quantification theory. The two quantifiers can range over
different and even exclusive domains, and yet be either dependent or independent of each
other, as the case may be.
It is not anachronistic to call Frege’s neglect of the role of quantifiers as
expressing such dependencies a mistake. Frege’s own co-discoverer of the logic of
quantifiers, C.S. Peirce, was fully cognizant of this dimension of their semantics. In
practice, its most basic manifestation is the importance of quantifier ordering. In Peirce,
this ordering comes up in the form of the distinction between the two players of the
semantical games connected with quantifiers, of whose importance Peirce was aware. Peirce’s
pen-pal Ernst Schröder struggled with the problems of coping with the same aspect of the
meaning of quantifiers in less vivid terms. (See here Hintikka 1996 (b) and the references
given there.)
2. IF logic and scope
One consequence of Frege’s mistake has been pointed out earlier and corrected, at least
in part. (See e.g. Hintikka 1996.) Since part of the task of quantifiers is to express
dependencies between variables, our logic should be able to do this job completely. In
other words, we should be in a position to express any possible pattern of dependencies
and independencies between variables. These interpreted dependencies between
variables are expressed by the formal dependencies between the quantifiers to which they
are bound. Now how are these formal dependencies codified in the usual logical
notation? The obvious answer is: By the nesting of quantifier scopes. But this nesting
relation is of a rather special kind. It is among other features transitive and
antisymmetric. Furthermore, it is linear in the sense that the scopes of two quantifiers
cannot overlap only partially. Hence only such dependence patterns can be formulated in
the received logic of quantifiers when the dependence relation has these special
properties. As a consequence, only some of all possible patterns of dependence and
independence can be expressed in the received first-order logic. Hence this logic does
not fulfill its whole job description. Frege’s mistake thus gave rise to a flaw in the
received first-order logic.
This flaw is corrected in what has come to be called IF logic. (For it, see e.g.
Hintikka 1996 (a), Hintikka and Sandu 1996.) This can for most purposes be
accomplished by introducing an independence-indicating / (“slash”) that makes a
quantifier (Q2y/Q1x) (replacing (Q2y)) independent of another quantifier (Q1x) even when
it occurs in the syntactical scope of (Q1x).
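On a finite domain the effect of the slash can be made concrete. In the following Python sketch (again with a hypothetical relation F), the IF sentence (∀x)(∃y/∀x)F[x,y] requires a witness for y chosen independently of x, i.e. a single value b working for every x, whereas the ordinary (∀x)(∃y)F[x,y] lets the witness vary with x:

```python
# Hypothetical relation on the finite domain {0, 1, 2}:
# F(x, y) holds when y is the successor of x modulo 3,
# so no single y works for every x.
DOMAIN = [0, 1, 2]

def F(x, y):
    return y == (x + 1) % 3

# Ordinary (∀x)(∃y)F[x,y]: the witness y may depend on x.
dependent = all(any(F(x, y) for y in DOMAIN) for x in DOMAIN)

# IF sentence (∀x)(∃y/∀x)F[x,y]: the slash severs the dependence,
# so one constant witness b must serve for every x.
independent = any(all(F(x, b) for x in DOMAIN) for b in DOMAIN)

print(dependent, independent)  # True False: the slash changes truth conditions
```

The two sentences differ in truth value on this model precisely because the dependence pattern, not the range of the variables, has been changed.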
It is thus seen that IF logic is not a special logic alternative to the received logic of
quantifiers. On the contrary, it is our usual Frege-Russell first-order logic that is
unnecessarily restricted in its expressive power and hence should be considered a special
logic among alternatives. In contrast, IF logic is the unrestricted logic of quantifiers.
In this essay, IF logic is not discussed further and is not relied on, either, except as
an object lesson. It is nevertheless in order to point out some consequences of its very
existence.
Once we realize that the nesting of syntactical scopes is not an ideal method of
expressing dependence and independence, we realize also that we have to be careful of
the traditional notion of scope as an explanatory notion in semantics. (Cf. here Hintikka
1997.) The traditional notion combines two things that per se have nothing to do with
each other. Syntactical scope is used to indicate the dependence and independence of
quantifiers and other logical operators of each other. (This might be called dependence
scope or priority scope.) But it also marks the syntactical segment of a sentence (or
discourse) where a variable is bound to a given quantifier. (Binding scope.)
Once the difference between these two is understood, certain problems in the
semantics of natural language are solved. A case in point is the semantics of the so-
called donkey sentences.
(2.1) If Peter owns a donkey, he beats it.
(2.2) If you give each child a gift for Christmas, some child will open it today.
The meaning of (2.1)-(2.2) cannot be expressed in the notation of the received first-order
logic. But if a binding scope is expressed by parentheses ( ) and dependence scope by
brackets [ ], the logical form of these two will be
(2.3) [(∃x)(O(p,x)] ⊃ B(p,x))
(2.4) [(∀x)((∃y)G(x,y)] ⊃ (∃z)O(z,y))
The apparent difficulty with such “donkey” sentences as (2.1) – (2.2) is largely
due to the very same mistake we saw Frege committing. What distinguishes expressions
like (2.3)-(2.4) from familiar ones is conspicuously the use of the dependence-indicating
brackets [ ]. A failure to use them is accordingly a failure to give the dependence-indicating
role of quantifiers its full due.
Much of what has been said of dependence relations between quantifiers can be
said of dependence relations of other logically active notions, including propositional
connectives, epistemic and modal operators, etc. For instance, epistemic logic was held
back for years before it was realized that wh-knowledge can only be adequately
expressed by means of quantifiers that are independent of clause initial epistemic
operators, as e.g. in “It is known who is F”, whose logical form turns out to be
(2.5) K(∃x/K)F[x]
where the stroke / expresses independence. (See here Hintikka 2003.)
In general, by freeing the conventions governing the scope we can achieve the
same result as by introducing an independence indicator. In this way, we will be able to
express patterns of dependence and independence between quantifiers (and propositional
connectives) and constants that cannot be expressed in the received first-order logic.
(Constants may also have to be included in the arguments of Skolem functions.) The fact
that we can thus carry out the liberation of quantifiers by changing only the punctuation
of logical sentences is vivid evidence for the naturalness and indeed indispensability of IF
logic.
It is even possible in this way to turn Tarski’s T-schema into a truth definition.
Let us assume that x is a variable for the Gödel numbers x = g(S) of sentences S. Then
Tarski’s T-schema summarizes all sentences of the form
(2.6) T(a) ↔ S[a]
where T(x) is a truth predicate. Tarski is right in that we cannot have
(2.7) (∀x)(T(x) ↔ S[x])
As I have pointed out on other occasions, this failure is due to the fact that quantifiers and
other logical operators in S[x] should not depend on the variable x, which has a purely
syntactical role in S[x]. Such dependencies can be ruled out by writing instead of (2.7)
(2.8) (∀x)([T(x)] ↔ S[x])
Of course, this is no longer equivalent to any ordinary first-order sentence. The same
thing can be expressed in IF logic by making all the quantifiers and propositional
connectives in (2.8) (other than (∀x)) independent of the initial universal quantifier (∀x).
Either way, our liberated notation enables us to do what Tarski proved impossible to do
by means of the received Frege-Russell first-order logic: convert the T-schema into a
genuine truth definition.
3. From existential instantiation to functional instantiation
Another consequence of Frege’s mistake that is (perhaps unwittingly) repeated by later
logicians looks so insignificant that it has not attracted much attention. It concerns the
formulation of the rules of inference for our basic first-order logic. There it looks very
much as if the meaning of quantifiers is done full justice to (in a context of deduction) by
the usual rules of instantiation. The rule of existential instantiation applies to a formula
(∃x)F[x] with an initial existential quantifier. It allows the replacement of this formula
by F[β] where β can be thought of as standing for a possibly unknown individual of the
kind the given formula says is instantiated. This obviously captures the force of the
existential quantifier as expressing non-emptiness.
Intuitively, the term β operates just like the “John Does” and “Jane Roes” of
lawyers’ jargon. (Wallis thought that such legal usage was the historical
model for algebraic symbols; see Klein 1968, p. 321.) Formally, the term β can be a
“dummy name” or in our deductive practice simply a new individual constant.
Likewise, the usual rule of universal instantiation might seem to capture
adequately the semantical force of a universal quantifier as expressing universality
(exceptionlessness).
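The classical sentence-initial rules can be pictured as simple formula rewriting. The following Python sketch uses a hypothetical nested-tuple representation of formulas; existential instantiation replaces (∃x)F[x] by F[β] for a fresh “dummy name” β:

```python
# Formulas as nested tuples, e.g. ('exists', 'x', ('F', 'x')).
def subst(formula, var, term):
    """Replace occurrences of var by term. For this sketch we assume
    all bound variables are distinct, so no capture handling is needed."""
    if isinstance(formula, str):
        return term if formula == var else formula
    return tuple(subst(part, var, term) for part in formula)

def existential_instantiation(formula, fresh_constant):
    """(∃x)F[x]  =>  F[β] for a new 'dummy name' β.  Note that the rule
    applies only to a sentence-initial existential quantifier."""
    assert formula[0] == 'exists', "rule applies only sentence-initially"
    _, var, body = formula
    return subst(body, var, fresh_constant)

print(existential_instantiation(('exists', 'x', ('F', 'x')), 'b'))  # ('F', 'b')
```

The assertion in the rule enforces precisely the restriction discussed below: an existential quantifier buried inside a formula cannot be instantiated this way.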
But even though these instantiation rules express truth and nothing but the truth
about the meaning of quantifiers, they do not tell us the whole truth. One at first sight
inconspicuous feature of theirs is that they apply only to sentence-initial quantifiers.
They do not apply to quantifiers inside a formula, not even if this formula is assumed to
be in the negation normal form. (This assumption is routinely made in this paper.) Every
logic instructor who has taught her students the usual rule of existential instantiation is
likely to find herself later correcting students who are proposing to apply it to quantifiers
inside a formula, perhaps within the scope of universal quantifiers. At this point, a clever
student could try to embarrass the instructor by asking: “Since the rule of existential
instantiation is obviously based directly on the meaning of the existential quantifier,
surely it ought to be applicable independently of the context. What happens in such an
application is that we merely choose one individual of a certain kind among existing ones
for our attention.”
If the instructor is up to her task, she will point out that the choice of the
“arbitrary individual” β is not absolute, not a once-and-for-all matter, but depends on
other individuals. More specifically, it depends on the values of the universal quantifiers
within the scope of which the existential quantifier occurs (in a sentence that is in the
negation normal form).
This answer points to an important truth. Existential instantiation can take place
inside larger formulas, if we use as an instantiating term a function term that takes into
account the dependence of the existential quantifiers to which it is applied on other
quantifiers in the same sentence. If we heed those dependencies, we can generalize the
rule of existential instantiation. The generalized formulation might run as follows:
Assume that S is a sentence in the negation normal form and that the formula
(3.1) (∃x)F[x]
occurs somewhere in S=S[(∃x)F[x]]. Then S may be replaced by
(3.2) S[F[f(y1,y2,…)]]
where (∀y1), (∀y2),… are all the universal quantifiers within the scope of which (∃x)
occurs in S, and f is a new function constant. If there are no such universal quantifiers,
the function term f(y1, y2, …) is replaced by a new individual constant. The old rule of
existential instantiation is thus a special case of the new one, viz. the case of sentence-
initial existential quantifiers.
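The generalized rule can be sketched as a traversal that tracks the universal quantifiers governing each existential occurrence. The following Python sketch uses a hypothetical nested-tuple representation of formulas in negation normal form, and assumes all bound variables are distinct; it rewrites each occurrence of (3.1) into the term prescribed by (3.2):

```python
from itertools import count

def subst(formula, var, term):
    """Replace var by term throughout (bound variables assumed distinct)."""
    if isinstance(formula, str):
        return term if formula == var else formula
    return tuple(subst(part, var, term) for part in formula)

def skolemize(formula, universals=(), fresh=None):
    """Rewrite every existential (∃x)F[x] in an NNF formula into
    F[f(y1, y2, ...)], where (∀y1), (∀y2), ... are the universal
    quantifiers within whose scope the existential occurs."""
    fresh = fresh if fresh is not None else count(1)
    if isinstance(formula, str):
        return formula
    tag = formula[0]
    if tag == 'forall':
        _, v, body = formula
        return ('forall', v, skolemize(body, universals + (v,), fresh))
    if tag == 'exists':
        _, v, body = formula
        n = next(fresh)
        # a new function constant over the governing universals;
        # a new individual constant when there are none
        term = ('f%d' % n,) + universals if universals else 'c%d' % n
        return skolemize(subst(body, v, term), universals, fresh)
    # connectives and atoms: recurse into the parts
    return (tag,) + tuple(skolemize(p, universals, fresh) for p in formula[1:])

# (∀y)(∃x)F[y,x]  =>  (∀y)F[y, f1(y)]
print(skolemize(('forall', 'y', ('exists', 'x', ('F', 'y', 'x')))))
```

When the existential quantifier is sentence-initial, the list of governing universals is empty and the rewriting produces a plain individual constant, recovering the old rule as a special case.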
More generally, we can stipulate that (Q1y1), (Q2y2), … are all the quantifiers in S
on which the quantifier (∃x) there depends. This formulation can be used also in IF
logic.
Notice that this is a first-order rule in the crucial sense that no quantification over
higher-order entities is involved. The reason why we have considered instantiation by
functions rather than individuals should be obvious. It reflects the fact that witness
individuals may depend on other witness individuals.
By the same token, the rule of existential generalization has to be liberated. It will
allow any function term of the form f(y1,y2,…) to be replaced by a
variable z bound to an existential quantifier (∃z). This quantifier must occur within the
scope of all the quantifiers (∀y1), (∀y2),… . Otherwise its location is free, assuming only
that we are dealing with a formula in the negation normal form.
4. Uses of the rule of functional instantiation
The relative neglect of the generalized rule of existential instantiation can be taken to be an
instance of the same mistake as has been here attributed to Frege. But is it a mistake in the
present context? Defenders of the status quo can try to claim that the rule of functional
instantiation is dispensable, and that its neglect is therefore justified, perhaps in the
interest of theoretical economy.
Admittedly, the rule of functional instantiation is redundant in the received
treatment of first-order logic. In this logic, we can let an existential formula wait in our
logical argumentation until by means of applications of other rules it has been brought to
the surface of our formulas, in other words until it has been brought to a sentence-initial
position. But in principle we have to ask whether this dredging process affects the
semantics of an existential quantifier, including its dependence relation to other
quantifiers. Logicians have been victims of bad luck in that the process of bringing an
existential quantifier to the surface of a sentence does not affect its deductive function in
the received first-order logic. This is bad luck in that it has directed their attention away
from those aspects of the logic of quantifiers that are due to dependence and
independence relations between them, thus making this instance of Frege’s mistake a
mistake.
An example can illustrate the way in which functional instantiation helps to make
logical proofs shorter and more natural. Consider the conditional