Top Banner
An LFI with a transparent truth predicate Eduardo Barrio – Damián Szmuc – Federico Pailos Abstract. We will present an LFI based on Coniglio & Silvestrini [2014] first order three-valued matrix logic called “MPT1”.There are two main differences with between MPT1 and the matrix that here presented, that we will called “MPTTT*”. The first one is that MPTTT* admits a transparent truth predicate. The second one is that the conditionals of the two matrices treat differently conditionals likeφ→ψ, where v(φ)=1/2 and v(ψ). In MPT1, those conditionals receive value 0. In MPTTT*, they are valued with ½. This also has important consequences in the way they treat biconditionals, and those consequences helps MPTTT* dealwith self-reference sentences. In particular, with biconditionals that can be read as expressing in the language “The Liar” or a “Curry sentence”. But MPTTT* matrix is non-monotonic, and this makes harder finding a fixed-point interpretation of the truth predicate, and thus proving the non-triviality of the theory. In order to prove this, and also prove the completeness of it, we will use a three-side disjunctive sequent system, based on the one that Ripley [2012] use to prove the completeness of his paraconsistent truth theory STTT. We will present a semantics for the disjunctive sequents that “traduces” MPTTT into a disjunctive sequent language, and then show that: ΓMPTTT* Δ iff Γ│ Δ │Δ MPTTT** is valid. The, we will show that MPTTT** is non-trivial, and that will show that MPTTT* is it, also is non-trivial. This proof will involve a cut- elimination proof for MPTTT*. This is an induction proof over the index of a cutproof, and adopts notions developed in Paoli [2013]. Finally, we will prove that MPTTT** is complete. The strategy employed will be similar to the one Ripley [2012] uses to show STTT’s completeness. I-INTRODUCTION: The logics of formal inconsistency (LFIs) are powerful paraconsistent logics that encode classical logic and allow us to fix an interesting distinction between contradictions and inconsistencies. These systems, introduced by Carnielli and Marcos [2002], internalize the metatheoretical notion of consistency, expressing it in the object language. Hence one can isolate contradictions in such a way that the application of the principle of explosion is restricted to consistent sentences only, thus avoiding triviality. This is achieved by means of adding to a collection of appropriate axioms and rules already accepted in classical propositional logic a
33

An LFI With a Transparent Truth Predicate 2015

Sep 05, 2015

Download

Documents

agus_margiotta

An LFI With a Transparent Truth Predicate 2015
Welcome message from author
This document is posted to help you gain knowledge. Please leave a comment to let me know what you think about it! Share it to your friends and learn new things together.
Transcript

An LFI with a transparent truth predicate

Eduardo Barrio Damin Szmuc Federico Pailos

Abstract. We will present an LFI based on Coniglio & Silvestrini [2014] first order three-valued matrix logic called MPT1.There are two main differences with between MPT1 and the matrix that here presented, that we will called MPTTT*. The first one is that MPTTT* admits a transparent truth predicate. The second one is that the conditionals of the two matrices treat differently conditionals like, where v()=1/2 and v(). In MPT1, those conditionals receive value 0. In MPTTT*, they are valued with . This also has important consequences in the way they treat biconditionals, and those consequences helps MPTTT* dealwith self-reference sentences. In particular, with biconditionals that can be read as expressing in the language The Liar or a Curry sentence. But MPTTT* matrix is non-monotonic, and this makes harder finding a fixed-point interpretation of the truth predicate, and thus proving the non-triviality of the theory. In order to prove this, and also prove the completeness of it, we will use a three-side disjunctive sequent system, based on the one that Ripley [2012] use to prove the completeness of his paraconsistent truth theory STTT. We will present a semantics for the disjunctive sequents that traduces MPTTT into a disjunctive sequent language, and then show that:

MPTTT* iff MPTTT** is valid.

The, we will show that MPTTT** is non-trivial, and that will show that MPTTT* is it, also is non-trivial. This proof will involve a cut-elimination proof for MPTTT*. This is an induction proof over the index of a cutproof, and adopts notions developed in Paoli [2013]. Finally, we will prove that MPTTT** is complete. The strategy employed will be similar to the one Ripley [2012] uses to show STTTs completeness.

I-INTRODUCTION:

The logics of formal inconsistency (LFIs) are powerful paraconsistent logics that encode classical logic and allow us to fix an interesting distinction between contradictions and inconsistencies. These systems, introduced by Carnielli and Marcos [2002], internalize the metatheoretical notion of consistency, expressing it in the object language. Hence one can isolate contradictions in such a way that the application of the principle of explosion is restricted to consistent sentences only, thus avoiding triviality. This is achieved by means of adding to a collection of appropriate axioms and rules already accepted in classical propositional logic a restricted principle of explosion,

oA, A, A B

where oA means that A is consistent. If A is not consistent, explosion cannot be applied. Classical reasoning can be restored into LFIs. The inferential behavior in consistent fragment of the language of the LFIs is completely classical. But, can we add a truth predicate to aLFI? Most paraconsistent logics cannot handle at the same time a transparent truth predicate and a consistent (or inconsistent) operator. XXXXX AC FALTARAN EJEMPLOS XXXXX We are going to explore the possibility of doing exactly that with a slight modification of MPT1, LPT1s semantic counterpart, one of the most well known LFIs. We will introduce a new conditional that is in principle capable to avoid triviality even in presence of transparent truth and consistency. Modifying the 3-valued matrices of MPT1, we will get stable valuations for the Strengthened Liar and Currys sentence. The main difference with MPT1s conditional is that when the antecedent receive value , and the consequent receives value 0, then the conditional receives (the designated value) . In this way, we may recover biconditionals that mimic problematic instances of the diagonal lemma, even the ones with the truth predicate or the consistent operator.

II.- Reasoning with truth and inconsistencies

Para-complete and para-consistent theories wind up with a non-classical material conditional, where a material conditional A B is defined as A v B.

Para-complete: A A.

Para-consistent: A, A B B.

Hence, in either case, the resulting material conditional is often thought to be inadequate. In the para-complete case, the given conditional detaches (i.e., satisfies Modus Ponens) but fails to support all instances of the given (material) T-schema: Tr() A and its converse can fail. In the para-consistent case, all instances of the given (material) T-schema hold; however, the given conditional fails to detach. As a result of these apparent deficiencies, much of the work in para-consistent and para-complete responses to paradox has focused on supplementing such theories with a suitable conditional, one that both detaches and validates all T-biconditionals [Beall, 2009, Field, 2008, Priest, 2006]. But the task is complicated. What makes the task particularly difficult is Currys paradox, which involves (conditional) sentences that say of themselves that if they are true then absurdity is true (e.g., that everything is true). We need a conditional detachable but Curry-paradoxical-safe.

An alternative strategy could be found adopting the Logics of Formal Inconsistency. These kind of systemsare logics able to internalize, in a precise sense, the notions of consistency and inconsistency at the object-language level. They do it by introducing primitive unary connective. As the strategy followed by [Priest, 2006] and [Beall, 2009], such logics are para-consistent in the following sense: given a contradiction of the form (A A), it is not possible in general to deduce an arbitrary formula B from the contradiction. That is, such logics do not fall into deductive triviality when exposed to a contradiction. An LFI explodes if A, A and oA occur simultaneously, for some arbitrary A, such that oA expresses the fact that A is consistent. Thus, contradictions can be isolated in such a way that the application of the principle of explosion is restricted to consistent sentences only, thus avoiding triviality. This is done in different ways. We are going to explore oneoption: the system LPT introduced by Coniglio&Silvestrini [2014] in which the semantics will be given by 3-valued matrices. This matrix logic will be denoted by MPT. A first-order version of LPT, LPT1, is also presented adding axioms and inferential rules for quantifiers. LPT may be axiomatized by the following schemas of a Hilbert calculus. Consider the positive intuitionistic sentential logic (PISL):

Axiom Schemas

(A1) A(BA)

(A2) (AB)((A(BC))(AC))

(A3) A(B(AB))

(A4) (AB)A

(A4) (AB)B

(A5) A(AB)

(A5) B(AB)

(A6) (AC)((BC)((AB)C))

plus the following axiom-schemas:

(A7) A v (A B)

(A8) A v A

Now, adding the following axioms:

(A9) AA

(A10) oA(A(AB))

(A11) oA(AA)

(A12) o(AB)

(A13) (oAoB)o(AB)

(A14) (AAB)((AB)(BA))

and the Rule of inference:

(MP) infer B from A and AB.

The resulting axiomatic system will be called LPT.

Coniglio&Silvestrini [2014] present a complete and sound semantics for LPT. The truth-tables of MPT can be constructed straightforwardly:

The truth-tables of the defined connectives are given below:

It can also be shown that the LPT is sound and complete with respect to a paraconsistentbivaluation semantics, i.e. truth functions (not truth-functional) that assign, for each sentence of the language, a truth-value 1 or 0. The system LPT1, the first-order version of LPT, will be defined adding axioms and inferential rules for the quantifiers. XXXX CULES?XXX

Could we use LPT1 to talk about truth? More specifically, assuming the diagonal lemma, could we add the unrestricted validity of the T-schema to LPT1 into a semantically self-sufficiency languages without leading to a trivialization? We are going to show that the response in negative: this logic is trivial in presence of the truth predicate.

Theorem 1: Adding the instance of the diagonal lemma known as the Curry sentence C(T(C))- to LPT + CAPTURE + RELEASE, lead to trivialization.

Proof:

1)C(T(C))The Curry Sentence

2)C(C)Release, 1

3)(C(C))(C)Absorption

4)CModus Ponens, 2, 3

5)T(C)Capture, 4

6)CModus Ponens, 1, 5

7)Modus Ponens, 4, 6

Modus Ponens is a valid rule of LPT. What about Absorption? Its also valid, as Absorption is a tautology of MPT, and LPT is complete with respect to MPT.

Corollary 1: Adding all instance of the diagonal lemma with the truth predicate to LPT + CAPTURE + RELEASE, lead to trivialization.

Proof: The Curry sentence is one of those instances.

Now we are going to consider LPT1 and MPT1.

Let ~ be the strong negation operator that is defined in Coniglio& Da Cruz 2013. Can we add to the MTP1 they defined, a transparent truth operator? Lets see why not. Well start with the instance of the Diagonal Lemma known that represents (or is) the Liar sentence:

LT(L).

Its obvious that it cannot get a classical value. Can it get the value ? If that were the case, then well get and equivalence such that the first member of it gets value , but the second one (because of the way the strong negation behave) will get value 0, and so the equivalence will get value 0. But then well have an instance of the Diagonal Lemma that doesnt receive a designated value, which seems an undesirable result.

If we want to avoid this result, some changes need to be made. One inmediate option is to change the meaning of (at least) one of the operators involved in the sentence. We dont want to touch the strong negation, at least if we want something like a consistent operator in the language. But maybe we can do something with the new conditional. In particular, lets see what happens when we define a new one exactly like that one, but such that if the antecedent gets value and the consequent gets value 0, then the conditional receives the value -a designated one. This is how the new new conditional will behave.

1

0

1

1

1

0

1

1

0

1

1

1

It replaces Coniglio& Da Cruz operator, that functions this way:

1

0

1

1

1

0

1

1

0

0

1

1

1

If one adopts this matrix, with this new conditional, one thing you lose is that the strong negation operator can no longer be defined with the (new) conditional and . But one can still add it as a primitive constant, and specify its behavior in terms of the truth table already shown.

Lets see now what will happen with Currys paradox. It begins with an instance of the diagonal lemma like the one shown below, known as the Curry sentence:

C(T(C))

What can its value be? Once again, it cant receive a classical value. But if its value is , then the conditional that is the second term of the equivalence will get the value , and so the biconditional will get the value 1.

Will this matrix, with this new conditional, be a LFI? Yes, it will. As its build up to Coniglio & Da Cruzs one, then validity is understood as designated-value preservation, and 1 and are both designated values. One might think otherwise, because the Explosion Principle will be valid in this new matrix. This is the Explosion Principle: A(AB). It will have no counterexamples. If v(A)= 0, then the whole conditional will have value 1. If v(A)=1, v(A)=0, then v(AB)=1, and then v(A(AB))=1. If v(A)=1/2, v(A)=1/2, then v(AB)=1, 1/2, and then v(A(AB))=1.

Nevertheless, this is not a big issue. According to Coniglio y Da Cruzs definition 3.1, a logic is a LFI if two clauses are satisfied. The second one demands that the gentle explosion principle is satisfied. This is the GEP:

A, A, A B

And in this matrix, it is. A possible counterexample must give a designated value to every premise. But if v(A)=1/2, then v(A)=0. So v(A)=1, 0. But then, either v(A)=1, and so v(A)=0, or v(A)=1, but then v(A)=0. So there is no valuation such that every premise receives a designated value, and the inference will have no counterexamples.

The first condition demands that the Explosion RULE must be invalid. So,

A, A B

We havent still build a proof system for this matrix. Anyway, the idea will be that the proof system reflects what happens in the matrix. And in it, this happens:

A, A B

Theres no stable truth-value assignment that gives 1 to all the premises, nor one that gives 1 to one of them and to the other. But, naturally, there are valuations that assigns to every premise. And, of course, nothing forbids that the conclusion receive value 0 (unless it is a tautology).

As weve seen, the weak negation and the conjunction will be define in the new matrix lets call it MTP*- in the same way as in MTP. The only variation in the basic constants is found in the conditional, and in particular in just one case: where the antecedent receives value and the consequent receives value 0. The new truth-conditions will give the (new) conditional (the designated) value , where in the previous case it receives value 0.

With all these constants, its possible to define disjunction, top, bottom and the biconditional. This last one will, of course, behave different in some cases. Lets take alook at them. The following correspond to the new biconditional:

1

0

1

1

1

0

1

1

0

0

1

1

This will replace the Coniglio& Da Cruz one:

1

0

1

1

1

0

1

1

0

0

0

0

1

On the other side, its no longer possible to define the strong negation in terms of a conditional that has the negated formula has the antecedent, and bottom as the consequent, as Coniglio and Da Cruz did, because that conditional, if the negated formula receives value , will no longer get value 0. Nevertheless, one can specify its meaning in a direct fashion, in this way:

=-, +u,

With this strong negation operator, we can define a consistent operator in the same way as Coniglio y Da Cruz did, using both negations.

Will this new matrix make valid all the axioms of LTP the logic that is correct and complete with respect to the matrix logic MPT? LTP has sixteen axioms, A1-A16. This new semantic will make valid all of them, but two: A2 y A12.

This results are expected, because the two of them involved the (new) conditional, that is such that there are combinations of the values of the antecedent and the consequent that make the conditional false. So there will be less valuations that make false a conditional. And this is the key feature of the conditional, that allows MPT* to get rid of Currys paradox.

On the other side, the failure of A2 corresponds to a failure of Modus Ponens A, ABB- in the new semantics. In order to see it, just take a valuation v such that v(A)=1/2 and v(B)=0. A12 is invalid because now there will be more conditional like the previous one- that get a designated value -1/2, to be more specific. So now not every conditional will be true of false, but, why should they? What may be special with the conditionals? In this new framework, is not the case that every conditional receives value 1 or 0 as many of the formulae of the language.

Anyway, the goal is to build an analogue of LPT lets call it LPT*- that is correct and complete with respect to MPT* -which is analogue to MPT in the way already specified.Unfortunately, MPT* doesnt validate Modus Ponens neither.(Counterexample: L, L).

Let MPTTT be a semantics that works as MPT1s one, but, instead of MPT1s conditional, has MPT*s one. Also, MPTTT will work with a language without the identity predicate, so this is another difference from MPT1. The goal will be to prove that we can have a LFI such that has both a consistent operator and a transparent truth predicate, without getting into trouble. We will see how much close to it we might get.

3-A COMPLETENESS PROOF FOR A LFI WITH A TRANSPARENT TRUTH PREDICATE

We will work with two languages: L and L+. L is the base language, an ordinary first order language without identity plus and , and a consistency operator . L+ is L plus a distinguished unary truth predicate T. To ensure that there will be paradoxes around, the interpretation of L and L+ will be partially constrained. In particular, the individual constants will come divided into two countable sets. The members of first set will function as the usual sort of individual constant, receiving their denotation from each model. We will call them ordinary names. The members of the other set will receive denotations independently. To simplify things, we will not allow the ordinary names to name any formula.[footnoteRef:2]We will fix a quote-name-forming deviceQsuch that Q(A), for any formula A, is a singular term that will denote A in any model. Well write A for the distinguished name of A.[footnoteRef:3]All this restrictions will have as a result that L+ will be a subset of the domain of every model. So we will work only with infinite models. [2: This is just to avoid the problems that contingent self-reference might generate. A slight modification of the system we are about to defend will handle those problems, but also will result in a system a little bit more obscure. Since contingent self-reference is not the special target of this work, we will just keep things as simple as possible, and restrict contingent self-reference the way we have specify.] [3: A key advantage of this name-forming device over the one that Ripley uses in xxxx, is that this kind of procedure is purely syntactic. Ripleys one, on the other hand, depends on a particular meta-linguistic function. But, why not use arithmetic? It has some obvious advantages, but also other major disadvantages. The main one is that such a setting would be too rich to allow for a completeness proof, which is what we ultimately are looking for. ]

Still, if we want to achieve self-reference in S, we need to make some adjustments. Specifically, we will expand the vocabulary with propositional constants. This will also be a key feature in the self-reference sentence formation procedure.In particular, we will have instances of following sentences as formulas:

p(p)

Where p is a variable that range over the new set of propositional variables introduced. The bicondicional sign , of course, represent just a conjunction of conditionals.[footnoteRef:4](p) is a metavariable of a formula of any complexity, which includes an atomic subformula of the form B(p). This guarantee that something like The Liar sentence or Currys sentence are represented in the language. But we need a further restriction. That restriction will be semantical, and is the following. We will select, for each (x) where (x) is a formula with one free variable-, a biconditional of the form [4: Those biconditionals can be read as a way to mimic instances of the diagonal lemma, that are themselves supposed to be a way to achieve self-reference. But we are not especially interested in that particular way to get self-reference, just to achieve it in some way.]

p(p)

Lets call that set of biconditional, Z. And we will restrict our set of models to the ones that assign a designated value to each one of the members of Z.

The following sentence, for example, may belong to that set:

pT( p )

So we will only consider models that assign a designated value to it there will be such, as we have seen. Just assign to p, and then the biconditional will get value 1.If this sentence is part of that set that expresses the Liar sentence-, then the following one wont be:

pT( p )

But of course, there will be, for example, another one that sort of expresses the Truthteller. For example, this one:

q T( q )

In the rest of the paper, we will prove the correctness and completeness of MPTTT.W will use as the target proof theory, a special disjuntive sequent system S. Lets make the following specification with respect to the metalanguage that will be used.Let A and B be any formulas, and , , , , , , , , be any set of formulas. Also, for the -rules, let t be any term, and let a be a variable not occurring in the rules conclusion sequent. Now we need to specify what disjunctive sequents are.

Definition. A disjunctive sequent is satisfied by a model M=D, I iff either I()=0 for some or I()=1/2 for some or I()=1 for some . A sequent is valid iff it is satisfied by every model. A model is a counterexample to a sequent iff it does not satisfy the sequent.

The disjuntivesequents will be finite e.g., it will only include a finite number of formulas. So the correctness and completeness proofs will be relative to the inferences that only involved finite sequents of MPTTT.

MPTTT consequence relation is just LPs one what Ripley XXXX calls a tolerant-tolerant notion of logical consequence. So an inference from to will be valid in MPTTT if and only if there is no model such that every formula in receives a designated value (1, ) and some formula in receive an undesignated value (0). But there is a strong relation between inferences as are usually understand, as a relation between one set of formulae and another[footnoteRef:5] things with the following structure: MPTTT - and the disjuntivesequents we have presented. That relation is the following: [5: It might be defended that inferences, as are usually understand, involved just formulas (and not sets of them) as conclusions. Fair enough. Nothing on the approach defended depends on that. If you think that that is the right position, just read the conclusion as a single formula, or just singletons of a single formula.]

MPTTT iff MPTTT MPTTT is valid[footnoteRef:6] [6: In the rest of the article we will just give away the sub-indexes from the disjunctive sequents.]

The proof just follows from the definition of MPTTT validity and validity from disjunctive sequents.[footnoteRef:7] [7: Still, there may be disjunctive valid sequents that do not have that form. For example, if a formula A receives value in every admissible model, the A A will be valid. But so will A . But our completeness proof, is not about the disjuntivesequents proof system and its semantics, but about semantically valid inferences in MPTTT and inferences that have a proof in this disjunctive sequent system. So, more preciselly, what we will prove is that:MPTTT iff is provable.We will prove that by proving that:(For all sequents ) A sequent is provable iff it is valid.So what about sequents like p (p) ? Some of them will be valid, but they may not have a proof. Nevertheless, if a sequent like that is valid, then also will be valid a sequent like this: p (p) p (p) And if that is the case, then that sequent will also have a proof, and then it will both be the case that MPTTT p (p) and MPTTT p (p). So completeness and soundness will be preserve. Well see all these more in detail in the next pages. ]

The proof system Swe are about to present consists on a number of axioms and rules. A sequent is provable iff it follows from the axioms by some number (possibly 0)of applications of the rules.As we are working with sets, the effects of Exchange and Contraction are built in, and Weakening is built into the axioms.

S has the following axioms and rules:

Axioms:

*For every formula A,

, A , A, A

Is an axiom.

*For every formula A,

, ~A , ~A

Is an axiom.

*For every formula A,

, A , A

Is an axiom.[footnoteRef:8] [8: These last two axiom-schema can be read as saying that every formula receives at least one of the three truth values, associated in turn with one of the three sides of a sequent. ]

*For every formula p (p) that belongs to Z,

, p (p) , p (p)

Is an axiom.

Structural Rules:

Cut 1

, A, A

Cut 2

, A , A

Cut 3

, A , A

Derived Cut

, A, A , A, A , A , A

Operational rules:

Left

, A

, A

Right

, A

, A

Middle

, A

, A

Left ~

, A

, ~A

Right ~

, A, A

, ~A

Left

, A, B

, AB

Middle

, A , A , B , B, A, B

, , , , ,AB, ,

Right

, A , B

, , , , AB

Left

, A , B

, , AB , ,

Middle

, A , B

, , , AB ,

Right

, A , B , B

, AB

Left o

, A

, oA

Right o

, A, A

, oA

Left

, A(t)

, xA(x)

Middle

, A(a), A(a) , A(t)

, , , xA(x), , A

Right

, A(a)

, xA(x)

Left T

, A

, T(A)

Middle T

, A

, T(A)

Right T

, A

, T(A)

As the rest of the connectives , , the disjunction and the biconditional- can be defined in terms of the former, we wont specify rules for them.

We may prove the following result:

(Soundness). If a sequent is provable, then it is valid.

Proof: The axioms are valid, and validity is preserved by the rules, as can be checked without too much trouble.

But of course the difficult part is to prove completeness. Following Ripley 2002 proof of the completeness of his disjunctive sequent system with respect to ST+, we will use the method of reduction trees, that yields, for any given sequent, either a proof of that sequent, or a countermodel of it. (The method provides of a way of building the eventual countermodel.)

We will introduce the notions of subsequent and sequent union, which will be used in the proof:

(Definitions).A sequent S = is a subsequent of a sequent S = (written S S) iff , , .

A sequent S = is the sequent union of a set of sequents {i i i}iI (written S = iI i i i iff = iI {i}, = iI { i}, = iI { i},

The construction starts from a root sequent S0 = 0 0 0, and then builds a tree in stages, applying at each stage all operational rules that can be applied, plus Derived Cut in reverse, from the conclusion sequent to the premise(s) sequent(s). We will have an enumeration of the formulas and also an enumeration of terms, and we will reduced at each stage all the formulas in the sequent, starting from the one with the lowest number, then doing the same with the one with the second lowest number, and move on in that fashion until we finish with the one with the higher number in the enumeration. If the formula appears in more than one side in the sequent, then well start reducing the formula that appears on the left side, then reduce the formula on the middle side, and finally reduce the formula on the right. The final action of each stage n will be an application of the rule of Derived Cut to the n-formula in the enumeration. If we apply a multipremise rule, we will generate more branches. If we apply single-premise rules, we just extend the branch with one more leave. We will only only add formulas at each stage, without erasing any of them, so every branch will be ordered by the subsequent relation. Any branch that has as it topmost sequent an axiom, will be closed. A branch that is not close is open. We repeat the procedure until every branch is closed, or until there is an infinite open branch. If every branch is closed, then the tree itself will be a proof of the root sequent. If there is an infinite open branch Z, we can use it to construct a countermodel to the root sequent.

So, a little bit more formally, stage 0 will just be the root sequent S0 = 0 0 0. If its an axiom, close the branch. For any stage n+1, one of two things might happen:

1-For all branches in the tree after stage n, if the tip is an axiom, close the branch.

2-For open branches: For each formula A in a sequent position in each open branch, if A already occurred in that sequent position in that branch (so A has not been generated during stage n+1), and if A has not already been reduced during stage n+1, then reduced A as follows:

If A is a negation B, then

*if A is in the (left/middle/right) position, extend the branch by copying its current tip and adding B to the (right/middle/left) position.

If A is a conjunction B C, then

*if A is in the left position, extend the branch by copying its current tip and adding both B and C to the left position.

*ifA is in the middle position, split the branch in three: extend the first by copying the current tip and adding B to both the middle and right positions; extend the second by copying the current tip and adding C to the middle and right positions; and extend the third by copying the current tip and adding both B and C to the middle position.

* if A is in the right position, split the branch in two: extend the first by copying the current tip and adding B to the right position; and extend the second by copying the current tip and adding C to the right position.

If A is a universal quantification xB(x), then

*if A is in the left position, extend the branch by copying its current tip and adding B(t) to the left position, where t is the first term in the enumeration not already used in a reduction of A in the left position before stage n + 1.

*if A is in the middle position, split the branch in two: extend the first by copying the current tip and adding B(a) to both the middle and right positions, where a is the first term in the enumeration not to occur anywhere in the current tip; extend the second by copying the current tip and adding B(t) to the middle position, where t is the first term in the enumeration not already used in a reduction of A in the middle position before stage n + 1.

*if A is in the right position, extend the branch by copying its current tip and adding B(a) to the right position, where a is the first term in the enumeration not to occur anywhere in the current tip.

If A is a consistency assertion B, then

*if A is in the left position, extend the branch by copying its current tip and adding B to the middleposition.

*if A is in the right position, extend the branch by copying its current tip and adding B to theright and left positions.

*if A is in the middle position, then do nothing.

If A is a strong negation assertion ~B, then

*if A is in the left position, extend the branch by copying its current tip and adding B to the right position.

*if A is in the right position, extend the branch by copying its current tip and adding B to the middle and left positions.

*if A is in the middle position, then do nothing.

If A is a conditional B C, then

*if A is in the left position, split the branch in two: extend the first by copying the current tip and adding B to right position, and extend the second by copying the current tip and adding C to the left position.

*if A is in the middle position, split the branch in two: extend the first by copying the current tip and adding B to middle position, and extend the second by copying the current tip and adding C to the left position.

* if A is in the right position, extend the branch by copying the current tip and adding B to the left position, and extend adding C to the middle and right positions.

We will also apply the Derived Cut rule at each step. XXXX NO ESTOY SEGURO DE QUE CUT NO SEA ELIMINABLE. PERO EN CUALQUIER CASO, ES VLIDA, AS QUE PODEMOS USARLA A PLACER. Y ES TIL, PORQUE NOS PERMITE DAR CON CONTRAMODELOS. XXXX Just take the nth formula in the enumeration of formulas and call it A. Now extend each branch using the rule of Derived Cut. So for each open branch, if its tip is , split it in three and extend the new branches with , A , A , , A , A and , A , A, respectively.

Now we need to repeat this procedure until every branch is closed, or, if that doesnt happen, until there is an infinite open branch. If the first scenario is the actual one, then the tree itself is a proof of the root sequent each step will just be the result of an application of a structural or operational rule to the previous step. If the second scenario is the actual one, we can use the infinite open branch to build a countermodel. Lets see how to do it.

If in fact there is an infinite open branch, then the Derived Cut rule will have been used infinite many times, and so every formula will appear at some (finite) point in the branch, and will remain in every step afterwards, because no formula is lost in the construction of the reduced tree. Also, every formula will appear in (at least and at most) two places in S. Now, the first step will be to collect all sequents of the infinite open branch B into one single sequent S = = S: S is a sequent of B.

But S cannot be a sequent of our system S, because we are working with finite sequents, and S is infinite. But it can be part of an extension of the proof system S we are working with, that do admit infinite sequents. This extension lets call it S*- will have the same axioms and rules, but without the restriction that every sequent should be finite.

As Derived Cut has been applied infinite many times in the construction of the branch, every formula will occur in exactly two places in S. It cannot occur in the three places, because thenthere is some finite stage n where the formula is for the first time in the branch in the three sides. But then that sequent will be an axiom, and so the branch will be closed. Correspondingly, there will be a model such that no formula in the sequent receives the value associated with the place where it occur (0 if the formula occur in the left, if it occur in the middle, 1 if it occur in the right) so there will be acountermodel to it. Lets see this in detail.

We will explain now how to design a countermodel to S. For each formula en in the sequent, this valuation will givea different value that the one that corresponds to its place in it. But that includes all the formulas in the initial and finite sequent S0. That valuation, then, will also be a countermodel to S0.

So now we need to specify a domain D and an interpretation I. Let D = L+ -the language itself. (This is not essential for the proof, though.)In order to build I, we need to ensure that no formula receives the value corresponding to its location in S. And this is how we will achieve this goal. For n-ary predicates P (including the truth predicate T), let I(P) be as follows: I(P)(I(s1),I(s2),...,I(sn)) = 0, , 1, respectively, i P(s1,s2,...,sn) does not appear in //, respectively. Of course P(s1,s2,...,sn) will appear in exactly two places in the sequent because that will the effect of some application of the Derived Cut rule-, and it cannot appear in the three of them, because otherwise the sequent will be an axiom, and so the branch would eventually have closed. One can design the interpretation of each predicate to easily achieve this. Does P(s1,s2,...,sn) appears in exactly the places where TP(s1,s2,...,sn) appears? Yes. As any formula in sequent that corresponds to an infinite open branch, P(s1,s2,...,sn) appears in exactly two places in the sequent. If TP(s1,s2,...,sn) appears in the only place where P(s1,s2,...,sn) does not appear, then, as TP(s1,s2,...,sn) will eventually be reduced, P(s1,s2,...,sn) will appear in the only place where it doesnt appear until that moment in the sequents of the branch. But then that sequent will be an axiom, and so the branch will be closed. This is the only possibility that we need to consider. Is not as if TP(s1,s2,...,sn) can appear in less places that P(s1,s2,...,sn): as any formula in a sequent corresponding to an infinite open branch, it has to appear in exactly two places.

The rules by which we reduced formulas can be used to show by induction that if none of the components of weak negations, conjunctions and universal quantifications receives the value associated with any place in which it appears in S, neither will the compound.[footnoteRef:9]Take a weak negation A that appears on the left side of the sequent, then A will appear in the right side of it that will be an effect of the eventual reduction of A. So if by inductive hypothesis,I(A) 1, then I(A) 0. Another example: if xA(x) appears in the middle of the sequent, then for some term a, either A(a) appears in both the middle and in the right, or else A(t) appears in the middle of the sequent for every term t. If the first one is the case, then I(A(a)) = 0, and so I(xA(x)) = 0. If the second one is the case, then A(t) will appear in the middle of the sequent for every t, no term t is such that I(A(t)) = 1/2, and so I(xA(x)) 1/2. And either way, we have I(xA(x)) , and so does not receive the value associated with the middle side of the sequent. [9: As the same happens in Ripleys system, well use his examples.]

What happens with the strong negations, the conditionals and the consistency assertions? Letsstartwiththestrongnegations ~A. XXXX AC FUI SUPER DETALLADO CON CADA DEMOSTRACIN, PERO POR SUPUESTO ES DEMENTE PONER TODO ESTO EN EL ARTCULO FINAL. CON PONER UN EJEMPLO DE CADA FRMULA CREO QUE ALCANZA.XXXX No formula like this can appear both on the left and the right sides of the branch, because then they would have appear at both sides in one sequent of the branch. That sequent, then, will be an axiom, and so wont correspond to an infinite open branch. So (i) either ~A is both in the left and in the middle sides of the sequent, or (ii) it is both in the middle and the right side of the sequent. Lets start with (i). At some point, ~A will have been reduced. It appears in the middle side of the sequent, and so nothing is supposed to be done when this happens. But it also appears on the left side. Then A will appear on the right side of the sequent on the next stage of the construction. At some point, an application of Derived Cut will also introduce A in the left or in the middle side, and so they will appear in one of the two sides in S. (It cannot appear on the three sides, because then the sequent will be an axiom.) If A appears on the left and on the right side, then A will get value , and then ~A will get value 1. So neither of them receives a value associated with one the sides where they appear. And the same happens if A appears on the middle and on the right side, because then A, by inductive hypothesis, will get value 0, and ~A will get value 1. So once again neither of them receives a value associated with one the sides where they appear. Now consider (ii). This case is similar to the previous one. But one ~A is reduced, as it is on the right side of the sequent, that will get A on both the left and the middle sides. So by inductive hypothesis, A will get value 1, and ~A will get value 0. So, again, none of these formulae receives the value associated with the sides where they appear.

The cases of consistency assertions A will be very similar to the strong negations ones. No formula like this can appear both on the left and the right side, because then the sequent will be an axiom, and so wont be part of an infinite open branch. So (i) either A is both in the left and the middle sides of the sequent, or (ii) it is both in the middle and the right side of the sequent. Lets start with (i). At some point, A will be reduced. It appears in the middle side of the sequent, so nothing is supposed to be done when this happens. But it also appears on the left side. If that happens, then A will appear on the middle side of the sequent on the next stage of the construction. At some point, an application of Derived Cut will also introduce A in the left or in the right side, and so they will also appear in one of the two sides in S. (It cannot appear on the three sides, because then the sequent will be an axiom.) If A appears on the left, then A will get value 1, an also will A. If it appears on the right, then it will get value 0, and A will get value 1. So neither of them receives a value associated with one the sides where they appear. Now consider (ii). Once A is reduced, as it is on the right side of the sequent, that will get A on both the left and the right sides. So by inductive hypothesis, A will get value 1/2, and A will get value 0. So, again, none of these formulae receives the value associated with the sides where they appear.

Lets turn now to the cases of conditionals of the form BC. These cases are different. We need now to consider three possible situations: (i) either the conditional appears in both the left and the right sides, or (ii) it appears both in the left and in the middle sides, or (iii) it appears on the middle and the right sides. So lets start with (i). This is a little bit tricky. Eventually, BC will be reduced from a sequent like , BC , BC. The reduction of the conditional on the right side will demand to copy the current tip, and also the addition of B in the left, and the addition of C in both the middle and the right sides of the sequent. But as BC appears also in the left side, this demands to split the branch in two, and to extend the first by copying the current tip and adding B to the right position, and extend the second by copying the current tip and adding C to the left position. As we have established, we need to reduce first the ones on the left, then the ones on the middle, and finally the ones of the right side of the sequent. (Remember we are talking about occurrences of the same formula, and which is the order to reduce them at some particular stage n.) The result of reducing first the sequent of the left, and then the one of the right, will be the result of splitting the branch in two, and (1) extending the first by copying the current tip and adding B to the left and right position, but also add C to the middle and right position, and (2) extending the second by copying the current tip and adding C to the left, middle and right position, and also B to the left and right positions. So the two new sequents will have the following appearences:

(1) , BC, B , C , BC, B, C

(2) , BC, B, C , C , BC, C

So these are two new branches. The second one, (2), will be an axiom, because the formula C appears in the three sides of the sequent, and so that particular branch will be closed. But that doesnt happen with (1). The complexity of B and C is less than the one of BC, so the inductive hypothesis can be applied to them. So B will get value , C will get value 0, and so BC will get value . So none of these formulae receives in the valuation a value associated with the sides of the sequent where they appear.

The second case is one where BC appears in both the left and the middle sides. When we build the reduced tree, we start reducing the occurrence of the conditional on the left side of the sequent. So we split thebranch in two, extending the first by copying the current tip and adding B to right position, and extending the second by copying the current tip and adding C to the left position. Then we need to reduce the occurrence of the conditional on the middle side, and we will do that in each branch. So, in each case, we split the branch in two, and extend the first by copying the current tip and adding B to middle position, and extend the second by copying the current tip and adding C to the left position. So we start with a sequent like this:

, BC , BC .

Then we obtain these two new sequents:

(I), BC, C , BC

(II) , BC , BC , B .

The extension of (I) will produce these two new sequents:

(I), BC, C , BC

(I), BC, C , BC, B

On the other hand, the extension of (II) will produce these two new sequents:

(II) , BC, C , BC , B

(II) , BC , BC, B , B

But no formula in (I), (I), (II) and (II) if any of them belong to an infinite open branch- will receive the value associated with any place in which it appears. Lets see why. In the (I)s case, the infinite extension of , BC, C , BC will include occurrences of B in two of the three sequents but, of course, not in the three of them-, and one more occurrence of C on the middle or in the right sequent. We need to make sure that the conditional gets value 1. But we can guarantee that if C appears on the middle or on the right side, that will indeed be the case. And that will just be the case, because the branch is infinitely open. So by inductive hypothesis, C will get value , if it also appears at some point on the right side, or 1, if it eventually appears on the middle side. B, also by inductive hypothesis, will get value 0//1, iff it does not appear nowhere in the left/middle/right side of the sequent.

In (I), the infinite extension of , BC, C, BC, B will include occurrences of B in the left or in the right side, but not in both of them, and will also include cases of C in the middle or in the right, but not in both of them.We need to make sure that BC will receive value 1, but if C in the middle, then by inductive hypothesis, it will get value 1, and so will the conditional. If C in appears on the right side, then by inductivehypothesis, it will get value 1/2, and so the conditional will get value 1.

What about (II)? The infinite extension of , BC, C , BC , B will get similar results as the infinite extension of (I), because either C will not be in the middle, and so willget value , or it will not be on the right. In either case, the conditional will get value 1. Thecase with (II) is a little bit different. In , BC , BC, B , B , by inductive hypothesis, as Bwill not be on the left side in its infinite extension, it will get value 0, and so the conditionalwill get value 1, no matter what value C receives.

Now we need to consider the third case, where the conditional appears on the middle and the right sides, like this: , BC , BC. As the conditional is in the middle side, when we reduce it we will get two new sequents, both of them the result of copying the current tip and adding some formulas: the first one has also B in the middle side; the second one has C on the left side:

(1) , BC, B , BC

(2) , C , BC , BC

As the conditional is in both of them on the right side, eventually it will be reduced. The result, in both cases, will be a new sequent that copy the current tip, and add B to the left side, and C to the middle and the right side of the sequent. So well get:

(1), B , BC, B, C , BC, C

(2), B, C , BC, C , BC, C

In (1), we can apply the inductive hypothesis and obtain the desired result. B will get value 1, because it is in the left and the middle sides, but not in the right. And C, as it is on the middle and the right sides, will get value 0. And so the conditional will get value 0. C, in (2), will be in each side of the sequent, and so the branch will be close. But this is not a problem, because (1) will still be part of an infinite open branch.

By completing the induction along these lines, we can show that we have a model on which no formula receives the value associated with any place in which it appears in S. But, as we know, that includes of the formulas in the initial and finite sequent S0. That valuation, then, will also be a countermodel to S0, which is what we were searching for. So we have just establish that for any sequent S, either it has a proof or it has a countermodel.

Conclusion

XXXX DESPUS DE ESTABLECER ESTO Y LO DEL CONDICIONAL, TENS QUE METER LO DE DERIVED CUT, Y HABLAR ACERCA DEL ORDEN. EN ESTE MOMENTO, LO MS SENSATO ME PARECE INTERCALAR UNA REDUCCIN NORMAL, Y OTRA POR DERIVED CUT.

XXXXX

Notas:

Faltan: (i) Una prueba de no-trivialidad, y (ii) una prueba de que Cut no es eliminable.

El punto (ii) no es necesario. Cut es semnticamente vlido, y si no es eliminable, entonces es vlido tambin desde el punto de vista sintctico. Por qu sera relevante probar que no es eliminable? Porque ent va a haber secuentes que no puedan ser probados sin Cut, y eso justifica este sistema ms complejo, con tres reglas estructurales (en principio) de ms. Pero igual se puede usar Cut donde se lo necesite, porque en cualquier caso va a ser vlido. Y lo vamos a utilizar, en particular, para probar completitud. (Ms especficamente, para construir el contramodelo.)

Si tuviramos prueba de (i), estaramos hechos. El modo ms sencillo de hacer dado que la prueba de punto fijo del predicado de verdad no parece sencilla de conseguir- es probando que hay un secuente que no puede ser probado sintcticamente. Un candidato razonable es el secuente vaco. Y si Cut fuera eliminable i.e., si todo lo que se probara con Cut pudiera probarse sin l-, entonces tendramos la deseada prueba de no-trivialidad, porque el secuente vaco no es probable solo con reglas operacionales porque todas ellas generan un secuente vaco como secuente-conclusin. Y como el sistema es correcto, ese secuente no va a ser semnticamente vlido. Lo que significa que hay un modelo que no lo satisface, lo que significa que no todo modelo del sistema es trivial. Victoria.

(ANTES: No hay que trabajar con secuentes finitos? Si el secuente es infinito, puede que nunca termines de reducirlo, y sin embargo, puede que no te queden todas las frmulas del lenguaje en el infinite open branch. Una opcin: intercalar aplicaciones de derivedcut por cada n aplicaciones del resto de las reglas. As s te asegurs que en el infinite open branch estn todas las frmulas del lenguaje. Otra opcin: que en cada paso apliques todas las reglas a la vez. Pero quizs eso no sea posible. Si, por ejemplo, tens que aplicar infinitas veces la reglas de intro del condicional a la izquierda porque tens infinitos condicionales a la izquierda, lo que te va a quedar (creo) es un rbol con infinitas ramas. Yo no s si eso es un rbol.

Otra opcin: copiar el mtodo de Ripley, y despus aclarar dudas. En particular, cmo hacer con los secuentes infinitarios. Una cosa que se puede hacer es aplicar las reglas no en cualquier orden, sino por el orden en la enumeracin de frmulas (intercalando, tras una reduccin de una frmula, una aplicacin de derivedcut). Por poner un orden, si una frmula aparece en ms de un lugar, reducir primero las que aparecen a la derecha, despus en el medio, y a lo ltimo en la izquierda. De esta forma, eventualmente vamos a dar con una prueba del secuente probable. (El sistema es compacto, creo. Verificar.)

A NON-TRIVIALITY PROOF

MPTTT is non-trivial if and only if there is one unprovable sequent. So lets just consider this sequent:

Pa

That sequent is not an axiom, and cannot be obtained by an application of an operational rule. Its only formula is a consistent sentence, and we have a rule to introduce those sentences on the left side, and also a rule to introduce them on the right. But there is no rule that specify how to introduce it on the middle side of a sequent. But still, in order to show that it cannot be prove, we need to show that it cannot be obtained by an application of neither of the three Cut rules S has.

There are several ways to go from here. But one promising route would be to show that anything that can be proved with an application of Cut, can also be proved without it. If we have that Cut-Elimination proof, the we will also get the desired non-triviality proof. So, is Cut eliminable from S?

One way to prove it is by induction on the index of the sequents, where the index is a sequent is a order pair of the grade and the rank of the sequent.[footnoteRef:10] In order to define these notions, we must first introduce two other preliminary ones. [10: We borrow these notions from Paoli XXXX]

Cutproof. A proof D in S is called a cutproofiff it contains just one application of Cut, whose conclusion S is the endsequent of the proof. It is called a cut-free proof iff it contains no application of Cut at all. The formula A that appears in Cut 1, Cut 2 and Cut 3 is called the cutformula.

The idea is that if it can be prove that every cutproof can be transformed in a cut-free proof, then every proof can be transformed into a cut-free proof. Just take any proof P of the sequent T. P will have a finite number n of applications of Cut. Just take the uppermost and leftmost application of Cut. That sub-proof will be a cutproof. If our hyphotesis is right, then it can be turn into a cut-free proof. No take the uppermost and leftmost application of Cut of this new proof. That sub-proof will be a cutproof. So, by hyphotesis, one can transformed it into a cut-free proof. So now we have a new proof, with two less applications of Cut than the original one. Just apply this procedure n times, and youll get a cut-free proof of the sequent T.

Rank. Let D be a cutproof whose final inference is one of these three:

Cut 1

, A , A

Cut 2

, A , A

Cut 3

, A , A

To define the rank of the sequent S in D --denoted by rD(S)- we need to distinguish three subcases, one for each Cut rule. In the case of Cut 1, the rank of S is so defined:

*If S belongs to the subproof D of D whose endsequent is [, A], rD(S) the maximal length (diminished by one) of an upward path of sequents S1, ,Sns.t. S1=S and Si(1in) contains A in the left side.

*If S belongs to the subproof D of D whose endsequent is [ , A ], rD(S) the maximal length (diminished by one) of an upward path of sequents S1, ,Sns.t. S1=S and Si(1in) contains A in the middle side.

*rD([])=rD([, A])+rD([, A])

In the case of Cut 2, the rank of S is so defined:

*If S belongs to the subproof D of D whose endsequent is [, A], rD(S) the maximal length (diminished by one) of an upward path of sequents S1, ,Sns.t. S1=S and Si(1in) contains A in the right side.

*If S belongs to the subproof D of D whose endsequent is [, A ], rD(S) the maximal length (diminished by one) of an upward path of sequents S1, ,Sns.t. S1=S and Si(1in) contains A in the middle side.

*rD([])= rD([, A])+rD([, A])

In the case of Cut 3, the rank of S is so defined:

*If S belongs to the subproof D of D whose endsequent is [, A], rD(S) the maximal length (diminished by one) of an upward path of sequents S1, ,Sns.t. S1=S and Si(1in) contains A in the left side.

*If S belongs to the subproof D of D whose endsequent is [ , A], rD(S) the maximal length (diminished by one) of an upward path of sequents S1, ,Sns.t. S1=S and Si(1in) contains A in the right side.

*rD([])= rD([, A])+rD([, A])

Rank of a subproof in a cutproof. Let D be a cutproof and D be any of its subproofs (possibly D itself). The rank of D in D is denoted by rD(D) or, when the context is clear, simply r(D)-, and coincides by definition with rD(S), where S is the endsequent of D.

Grade of a subproof in a cutproof. Let D be a cutproof, and D be any of its subproofs (possibly D itself). The grade of D in D is denoted bygD(D) or, when the context is clear, simply g(D)-, and equals the grade of the sequent prove by the proof. The grade of a sequent S that is the conclusion of a proof D is denoted bygD(S), and is the same as the cut-formula A. If A is not a truth assertion (e.g., a sentence with the form T), then gD(A) is the number of logical symbols contained in the cut-formula A. If A = T, then gD(T) = gD().

Index of a subproof in a cutproof.Let D be a cutproof and D be any of its subproofs (possibly D itself). The index of D in D is denoted by iD(D) or, when the context is clear, simply i(D)-, and is the ordered pair gD(D),rD(D). Indexes are ordered lexicographically: that is, i, n j, m iff either ij, or nm.

So lets now look at the cut-elimination proof. We will proceed by induction on the index of D:

[i(D)=0, 0] If the rank of D is 0, then A will be just an atomic formula. There are three kinds of atomic formula: propositional letters like p-, assertions like Pa, or thuth assertions, like T, where gD(T)=0. Since rD(D), then both premise-sequents must be axioms. So they must have one of these forms:

(i) , A , A, A

(ii) , ~A , ~A

(iii) , A , A

(iv) , p (p) , p (p)

The formula that is cut may be A from (i), but it cannot be ~A from (ii), A from (iii), of p (p) p (p) from (iv), because the grade of the last three formulas is different from 0. In the first case, the two premise-sequents will be the same sequent, and will be a sequent with a structure like (i). On the other case, then the formula that is cut will be part of and , and , or and . And in the three cases, the result sequent will be a formula like (i)-(iv), and so will be an axiom and so Cut is not necessary to prove it.

[i(D)= 0, k, 1k]If the grade of D is 0, then A will be just an atomic formula. There are three kinds of atomic formula: propositional letters like p-, assertions like Pa, or thuth assertions, like T, where gD(T)=0. If r(D)0, then either the rank of the left sequent is bigger than 0, or the rank of the right sequent is bigger than 0. There are types of cases to consider, depending on whether we have applied Cut 1, Cut 2 or Cut 3. Lets start with the Cut 1 case.

Cut 1

, A , A

So we need to consider two subcases: either (i) rD(, A)0, or (ii) rD(, A )0.

(i)rD(, A)0. Thus [, A] is the conclusion of an inference where A can either be a principal, or an auxiliary, or a side formula. It is a side formula, the strategy that we will employ will be to push the Cut upwards, in a way such what we will get is a new proof of [] containing cutproofs of grade 0 and a lower rank, and hence a lower index. This will entitle us to exploit the inductive hypotheses. The rank of the sequent in the proof will be indicated on the right part of the sequent. Here we have some examples:

XXXX HAY, DE HECHO, MUCHSIMOS SUBCASES, UNO POR CADA REGLA QUE PUEDAS APLICAR PARA OBTENER EL SECUENTE [, A], con A como side formula.

For example, let = {B}

, B, A (n), B , A (m)

, A, B (n+1) , A , B (m+1)

(n+m+2)

Turns into:

, A , B (n) , A , B (m)

, B (n+m)

, B (n+m+1)

Another example: = {~B}

, B , A, B (n) , B , A, B (m)

, A , ~B(n+1) , A , ~B(m+1)

(n+m+2)

Turns into:

, B , A, B (n) , B , A, B (m)

, B , B (n+m)

, B (n+m+1)

Another example: = {BC}

Let =1 2 3, =1 2 3, =1 2 3.

D=1,A1,B1,B(n1)

E=22,C2,C(n2)

F=33,B,C3(n3)

G=11,A,B1,B(m1)

H=22,C2,C(m2)

I=33,B,C3(m3)

D(n1)E(n2)F(n3)G(m1)H(m2)I(m3)

, A, BC (max(n1, n2, n3)+1), A,BC(max(m1, m2, m3)+1)

, BC(max(n1, n2, n3)+1)+ (max(m1, m2, m3)+1)

Turns into:

1,A1,B1,B(n1)11,A,B1,B(m1)

11, B1,B(n1+m1)22,C2,C(n2) 33,B,C3(n3)

, BC (max((m1+n1), n2, n3)+1)

If A is auxiliary, then the strategy will be basically the same. Lets look at one example.

, A, B , A (m) , B , A , A (n)

, A , BA (m+1) , A , BA (n+1)

, BA (max((m+1),(n+1)))

Turns into:

, A, B , A (m) , B , A , A (n)

, B , A(m+n)

, BA(m+n+1)

If A is principal, then A=T(B). Lets look at one example:

, B (m) , B (n)

, T(B) (m+1) , T(B) (n+1)

(m+1)+(n+1)

Turns into:

, B (m) , B (n)

(m+n)

(ii) rD( , A )0. This subcase will be treated in a symmetric manner than (i)

There cases that applied Cut 2 or Cut 3 are similar than the ones we have already seen.

[i(D)= k, 0, 1k] There are types of cases to consider, depending on whether we have applied Cut 1, Cut 2 or Cut 3. We will look at the case that uses Cut 1. The other two cases are symmetrical. Since r(D)=0, then the cutformula A must be principal in the subinferences whose conclusions are [, A] and [ , A] (and also cannot be generated by another application of Cut, because then the rank of D wont be 0.).

Letsseethefollowingexample: XXXXX PON UN EJEMPLO DE UN CUANTIFICADOR UNIVERSAL, Y OTRO DE UNA AFIRMACIN DE CONSISTENCIA (no se puede con el de consistencia, porque no hay regla operacional que te genere una afirmacin de consistencia en el medio).XXXX

Let= , = , = .

, A(t) (n), A(a), A(a) (m) , A(t)(s)

, xA(x) (n+1) , xA(x) (max(m,s)+1)

((n+1)+(max(m,s)+1)

Turns into:

, A(t) (n), A(t) (s)[footnoteRef:11] [11: If [, A(t)] has a proof, the so has [, A(t)]. Just start from the same axioms, but add in each side the formulas that [, A(t)] lacks in order to became [, A(t)]. XXXX NO ME QUEDA CLARO QUE EL RANK DEBA SER S. PODRA SER MS. PODRA SER MS QUE MAX(M,S)? Creo que no]

(n+s)

[i(D)= k, j, 1j, k] We need to consider two subcases: either (i) rD(, A)0, or (ii) rD( , A )0. We will just consider the case (i). (ii) is symmetrical. Once again, we will need to consider, in each alternative, three subcases: the ones that applied in the final inferential step CUT 1, the ones that apply in that step CUT 2, and, finally, the ones that apply at that stage CUT 3. Well just look at the first subcases. The others two types are symmetrical.

(i) rD(, A)0. Thus [, A] is the conclusion of an inference where A can either be a principal, or an auxiliary, or a side formula. If it is a side or an auxiliary formula, the strategy that we will employ will be to push the Cut upwards, in a way such what we will get is a new proof of [] containing cutproofs of grade 0 and a lower rank, and hence a lower index. This will entitle us to exploit the inductive hypotheses. The rank of the sequent in the proof will be indicated on the right part of the sequent. Here we have an example:

For example, let = {B}

, B, A , B (n) , B , A , B (m)

, A , B(n+1) , A , B(m+1)

(n+m+2)

Turns into:

, B, A , B (n) , B , A , B (m)

, B , B (n+m)

, B

What happen when A is principal? The strategy will be basically the same. Lets see an example:

, A (m1) , B (m2), A (n1) , B (n2)

, BA (max (m1, m2)) , BA (max(n1,n2))

((max (m1, m2))+(max(n1,n2)))

Turns into:

, A , B (m2) , A , B (n2)

, A (m2+n2)) , B (m2), A (m2+ n2)) , B (n2)

, BA(max((m2+n2), m2) , BA (max((m2+n2),

XXXXXX EXPLIC BIEN ESTA TRANSFORMACIN. EXPLIC QUE CADA FORMA DE CUT CON UN INDEX MENOR PUEDE SER REEMPLAZADO POR UNA PRUEBA DEL SECUENTE SIN CUT, POR HIP INDUCTIVA. AS CADA VEZ, HASTA LLEGAR A LA ULTIMA.

XXXXXXX

Cut 2

, A , A

Cut 3

, A , A

XXXX LA ESTRATEGIA SERA, CREO, RECONVERTIR TODA PRUEBA QUE USE CUT EN EL PASO FINAL, EN UNA QUE USE CUT EN ALGN PASO ANTERIOR. FIJATE BIEN CMO LO HACE PAOLI.

XXX P. 101 DE PAOLI.1