Mechanizing Proofs of Computation Equivalence (Extended Abstract)

Marcelo Glusman and Shmuel Katz
Department of Computer Science, The Technion, Haifa, Israel
{marce, katz} [email protected]

Submitted as a Regular Paper for CAV'99

Abstract

A proof-theoretic mechanized verification environment that allows taking advantage of the "convenient computations" method is presented. The PVS theories encapsulating this method reduce the conceptual difficulty of proving a property for all the possible interleavings of a parallel computation by separating two different concerns: proving that certain convenient computations satisfy the property, and proving that every computation is related to a convenient one by a relation which preserves the property. We define one such relation, the equivalence of computations which differ only in the order of independent operations. We also introduce the computation as an explicit semantic object. The application of the method requires the definition of a "measure" function from computations into a well-founded set. We supply two possible default measures, which can be applied in many cases, together with examples of their use. The work is done in PVS, and a clear separation is made between "infrastructural" theories to be supplied as a proof environment library to users, and the specification and proof of particular examples.

Contact author: Shmuel Katz, [email protected]

1 Introduction

1.1 Contributions

This paper presents a proof environment for PVS [13, 17, 12] that supports convenient computations and exploits partial order based on the independence of operations in different processes, for the first time in a mechanized theorem-proving context. Thus theoretic work defining this approach is turned into a proof environment for theorem proving that can be used without having to rejustify basic principles. Besides making convenient computations practical in a theorem-proving tool, we demonstrate what is involved in packaging such a framework into a proof environment for use by nonexperts in the particular theory. The modular structure of the theories should encourage using parts of the environment whenever convenient computations are natural to the problem statement or proof.

In the continuation, basic theories are described in which computation sequences are viewed as 'first-class' objects that can be specified, equivalence of computations based on independence of operations is precisely defined, and a proof method using well-founded sets is encapsulated. Two possible default measures are provided to aid the user in completing the proof obligations, one involving the distance between matching pairs of events, and one for computations that have layers of events. Then we summarize how a user can exploit the environment and describe two generic examples. The first demonstrates ideas of virtual atomicity for sequential sequences of local events, and the second shows the use of the layers measure for a pipelined implementation of insertion sorting.

1.2 Convenient computations and the need for mechanization

Methods that exploit the partial order among independent operations have been used both for model checking and for general theorem proving (see [16] for a variety of approaches).
In particular, ideas of the independence of operations that lead to partial order reductions have either been used for (usually linear) temporal logic based model checking reductions [14, 18, 7], or for theoretical work on general correctness proofs in unbounded domains [11, 15, 8]. For general correctness (as opposed to model checking) no mechanization has been implemented until now, and sample proofs have been hand simulated.

The intuitive idea behind convenient computations is simple. A system defines a collection of linear sequences of the events and/or states (where each sequence is called a computation). We often convince ourselves of the correctness of a concurrent system by considering some "convenient" computations in which events occur in an orderly fashion even though they may be in different processes. It is usually easier to prove properties for these well-chosen computations than for all the possible interleavings of parallel processes. Two computations are called equivalent if they differ only in that independent (potentially concurrent) events appear in a different order. There are classes of properties which are satisfied equally by any two equivalent computations (i.e., either both satisfy the property, or neither does). If we show that any non-convenient computation is equivalent to some convenient one, then we can conclude that any properties of this kind verified for the convenient computations must also be satisfied by the non-convenient ones.

In certain contexts, like sequential consistency of memory models and serializability of database transaction protocols, where the convenient computations' behavior is taken as the correct one by definition, the computation equivalence itself is the goal of the verification effort.
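To make this intuition concrete, here is a tiny illustrative model (ours, not part of the PVS development described in this paper): a computation is a finite sequence of (process, action) events, events of different processes are assumed independent, and one equivalence-preserving step swaps an adjacent independent pair.

```python
# Minimal illustrative model (not the paper's PVS code): a computation is a
# tuple of (process, action) events; events of different processes are
# assumed independent, so swapping an adjacent such pair yields an
# equivalent computation.

def one_swap(comp, t):
    """Exchange the (assumed independent) events at positions t and t+1."""
    assert comp[t][0] != comp[t + 1][0], "only independent events may be swapped"
    c = list(comp)
    c[t], c[t + 1] = c[t + 1], c[t]
    return tuple(c)

# An arbitrary interleaving, and a "convenient" one in which P1's local
# events happen contiguously, obtained by a single swap:
arbitrary = (("P1", "l0"), ("P2", "m0"), ("P1", "l1"))
convenient = one_swap(arbitrary, 1)
# convenient == (("P1", "l0"), ("P1", "l1"), ("P2", "m0"))
```

Any property that is insensitive to the order of independent events (for example, one that depends only on each process's own event order) holds for `arbitrary` exactly when it holds for `convenient`.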

The availability of general purpose theorem proving tools such as PVS opens the way for a mechanized application of this technique. Usually, attempts to carry out mechanized proofs in such tools raise issues that might be overlooked when using "intuitive" formal reasoning. Moreover, proofs can be saved and later, in the face of change, adjusted and rerun rather than just discarded. The down side of mechanized theorem proving methods is the need to prove many facts that are easily understood and believed to be correct by human intuition. Many of these facts are common to all applications of a proof approach, so there is no reason to duplicate effort. General definitions and powerful lemmas can be packed in theories that provide a comfortable proof environment. These theories also clarify the new approach, and generate the needed proof obligations when a particular application instantiates the theories.

1.3 Existing versus proposed verification styles

The PVS tool is a general-purpose theorem prover with a higher-order logic designed to verify and challenge specifications, and is not tailored to any computation model or programming language. It does provide a prelude of theories about arithmetic, inequalities, sets, and other common mathematical structures, and some additional useful libraries. Decidable fragments of logic are treated by a collection of decision procedures that replace many tedious subproofs. However, it has no inherent concept of states, operations, or computations.
Usual verification examples, like proving an invariant in a transition system, involve the definition of states, initial conditions (initial-state functions), and transitions (next-state functions) by the user. To prove invariance of a state property P, one just writes and proves an induction theorem of the form:

(initial(s) ⇒ P(s)) ∧ (P(s) ⇒ P(next(s)))

The computations themselves are not mentioned directly, and the property "in every state, P" (□P of linear temporal logic) is justified (usually implicitly) by such an induction theorem.

As part of the proof environment presented here, we provide precise definitions for computations, conditional independence of operations, computation equivalence, and verification of properties based on computation equivalence and convenient computations using well-founded sets.

We define a "computation" type as a function from discrete time into "steps" (a state paired with an operation), and specify temporal properties as arbitrary predicates over computations. Thus, if c : comps is a function from t : time to steps, and st is a projection function from a step to its state component, then

□P = global_P?(c : comps) : bool = ∀ t ∈ time : P(st(c(t)))

This style is needed so we can reason about computations and their relationships. Note that we are not limited to linear-time properties by this style of expression, since the higher-order logic of PVS allows us to say things like "there exists another computation which is identical to c up to time t1 and at a time t2 later than t1, that computation's state satisfies P".

2 The theories

In this section we describe a hierarchy of theories, whose IMPORTING relationships can be seen in Figure 1. They provide the foundation for reasoning about equivalence among computations. The top level of the hierarchy contains three components: the computation model, the
equivalence notion, and the proof method. An additional default measure component uses the other theories in specific contexts that hide some of the proof obligations when used in an application.

In the computation model component, transition systems and computations over them are defined. The option of providing global restrictions on possible sequences of transitions (for example, in order to introduce various notions of fairness [6, 2]) is also provided. In the equivalence component, theories are presented that encode when transitions are independent, and when computations are defined to be equivalent (in that independent transitions occur in a different order). The proof method component shows how to prove that for an arbitrary set and subset, every element of the set is related to some element of the subset, using well-foundedness. Here the elements are arbitrary, and the relation is given as a parameter. When instantiated with the equivalence relation from the equivalence component, the needed proof rule and its justification are provided.

After presenting these theories in somewhat more detail, two default measures are described, for matching pairs of operations and for layered computations. We then summarize how a user should apply the theories to an application, and describe two examples. (Note: in this abstract, most of the PVS file for the proof environment is provided in an Appendix, for reference. For further details, a pointer to a Web page with the full files and listings of the examples is at http://www.cs.technion.ac.il/~marce. This pointer will be in the paper.)

[Figure 1: The hierarchy of theories. A diagram (an arrow from A to B means "B IMPORTS A") relating the theories step_sequences, execution_sequences, computations, conditional_independence, finite_sets, equivalence_of_comps, subset_reachability, measure_induction, conv_comps, the matching intervals measure or layered measure, and the application example, grouped into the COMPUTATION MODEL, COMPUTATION EQUIVALENCE, PROOF METHOD, and DEFAULT MEASURE components.]

2.1 Computation model

Our model of computation is defined by three parameterized theories: step_sequences, execution_sequences, and computations. Each application using these theories must define types for the specific states and the operations as actual parameters upon instantiation of the theories. The theory step_sequences, based on these two types, defines function types
which are needed to build a transition system: initial conditions, enabling conditions, and next-state functions. It also defines the types time, steps (records with two fields: st: states and op: ops), and step_seqs (functions from time to steps).

In an application, the user defines the initial states, the enabling conditions and the next-state functions for each operation, and then instantiates the theory execution_sequences. This theory defines the subtype of the well-built execution sequences: the ones that start in an initial state, and whose steps are consecutive, i.e., the operations are enabled on the corresponding state, and the state in the following step is their next-state value.

The theory computations has an additional parameter to be provided by a user, namely, a predicate on execution sequences called relevant? which is used to define the subtype comps. This includes only those execution sequences which satisfy the added predicate. This restriction can be used to focus on a special subset of all the possible execution sequences, for example to express fairness assumptions or to analyze just a part of the system's behavior (such as a finite collection of transactions).

2.2 Computation equivalence

Equivalence between computations and independence of operations is formalized by the theories conditional_independence and equivalence_of_comps. The functional independence defined in the first of these theories is over pairs of operations and states, expressing when two operations are independent in a given state. It requires that the execution of either of the two operations doesn't interfere with the enabledness of the other one, and that the result of applying both operations from the given state must be the same regardless of their ordering. The key parameter of the theory is a separate conditional independence relation, also over pairs of operations and states.
This predicate must be symmetric and it must imply the functional independence of the two operations from the given state. (These conditions will appear as proof obligations when the theory is instantiated.) This arrangement allows the user to choose how much independence is to be considered for a particular application.

The theory equivalence_of_comps first defines the result of swapping two independent operations on a given state, as an execution sequence. If the need arises to prove that the result is a legal computation (a relevant? execution sequence), it is passed as a proof obligation to the application, since the relevant? predicate is only defined there.

The rest of the theory deals only with legal computations which are identical up to the swapping of independent operations. The following relations are defined:

- one_swap_equiv?(c1, c2): c1 and c2 are different and differ by a single swap, i.e., c2 is the result of swapping consecutive independent operations in c1 at some time t.
- swap_equiv_n?(c1, c2, n): c1 and c2 differ by up to n activations of single swaps.
- swap_equiv?(c1, c2): this is the transitive closure of one_swap_equiv? and is true iff there is an n s.t. swap_equiv_n?(c1, c2, n).

In the theory, the relation swap_equiv? is proven to be an equivalence relation. This relation is the formalization of the intuitive notion of equivalent computations, and the equivalence classes that it generates in the set of all computations are called interleaving sets in the context of partial order reductions and the temporal logic ISTL* [9, 10].
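These relations can be sketched executably. The following miniature is our own (hypothetical names, a fixed independence relation based on process identity, and finite computations rather than functions from time), not the PVS theory: swap_equiv is computed as the reflexive-transitive closure of single swaps, by breadth-first search over the finitely many reorderings.

```python
# Our executable miniature of one_swap_equiv? and swap_equiv? (not the PVS
# theory): computations are finite tuples of (process, action) events.
from collections import deque

def independent(e1, e2):
    # Assumed independence relation: events of different processes commute.
    return e1[0] != e2[0]

def one_swap_neighbours(comp):
    """All computations reachable from comp by one single-swap step."""
    for t in range(len(comp) - 1):
        if independent(comp[t], comp[t + 1]):
            c = list(comp)
            c[t], c[t + 1] = c[t + 1], c[t]
            yield tuple(c)

def swap_equiv(c1, c2):
    """Breadth-first search for a chain of single swaps from c1 to c2."""
    seen, frontier = {c1}, deque([c1])
    while frontier:
        c = frontier.popleft()
        if c == c2:
            return True
        for d in one_swap_neighbours(c):
            if d not in seen:
                seen.add(d)
                frontier.append(d)
    return False
```

Since every single swap can be undone by swapping the same pair back, the relation computed this way is symmetric as well as reflexive and transitive, mirroring the equivalence-relation lemma of equivalence_of_comps.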

2.3 Proof method

Consider an arbitrary set (or data type) T, with a preorder relation path_to? over its elements, and choose a subset of T: those elements which satisfy a given predicate. We want to prove that from each element in T we can reach one in the chosen subset. We first pick a "measure function" which maps elements from T into elements of a well-founded structure (M, <). In the theory subset_reachability we show that it suffices to prove that each element outside the chosen subset has a path to one with a strictly smaller measure.

The theory conv_comps has parameters that define a computation model, a reduces_to? preorder, a predicate for choosing the conv?enient computations, and a measure function m into a well-founded set. These are used with the subset_reachability theory to provide a sufficient condition:

FORALL (c): NOT conv?(c) => EXISTS (d): reduces_to?(c, d) AND m(d) < m(c)    (1)

from which reduction to convenient computations is proved:

FORALL (c): EXISTS (d): conv?(d) AND reduces_to?(c, d)

It also provides a theorem which defines the added proof obligations that must be discharged in an application in order to verify any property p? for all computations:

FORALL (c): conv?(c) => p?(c)
FORALL (c, d): reduces_to?(c, d) => p?(d) => p?(c)    (2)

In other words, p? must be true for the convenient computations, and must respect the preorder used in the theory. In a wider context, the theory of convenient computations can be used to reduce the verification of properties of general computations to the simpler problem of verification over the convenient computations. The reduces_to? relation can be any preorder for which the required premises ((1) and (2)) can be proven. Since the theory is parametric, other computation models and notions of equivalence can be used, even though here we use the ones defined in this paper.

2.4 Default measures

2.4.1 Matching intervals measure

In [8] the convenient computations method was applied (manually) to the sequential consistency problem. The measure involved intervals of selected events and their length. The measure value was lowered by moving unrelated events out of the interval until all the selected events happened consecutively. We provide a simpler version which can be applied to achieve the same effect. An interval is defined as a pair of points in time (t1, t2), and its distance (length) is t2 - t1 - 1 (thus a consecutive pair (t, t+1) has distance zero). The measure value for a computation is defined as the sum of all the distances of its matching intervals.

To use this measure, the application must supply a predicate match?(c, i) that defines the "matching" intervals i (pairs of points) in a given computation c. In a matching interval we want two events to ideally happen immediately one after the other, in a certain order, even if in many computations there are intervening events. Typical cases are sending and receiving
a value over an empty communication channel, or performing a series of local steps in a process. The minimum value is attained when all the matching intervals have zero distance, and it should be easy to show that computations with a minimal measure must be convenient.

The match? predicate must satisfy the following requirements: (1) Every computation has finitely many matching intervals. This is to make the measure finite. (An alternative not yet implemented is to require that the set of nonzero-distance matching intervals be finite, and sum the distances only over that set.) (2) The matching intervals in two one-swap-equivalent computations are the same, up to the exchange of the end-points affected by the swap. (3) No two matching intervals start in the same time point and no two end together. This is used to simplify the number of cases. (4) Swappable (i.e., independent consecutive) operations cannot appear at the ends of a (zero-distance) matching interval.

The theory also provides and proves a heuristic for finding a computation d which is equivalent to a given computation c and has a smaller measure. Such a d exists if c satisfies, for some time t:

(only_starts_interval?(c, t) AND NOT only_starts_interval?(c, t+1)) OR
(only_ends_interval?(c, t+1) AND NOT only_ends_interval?(c, t))

where only_starts_interval?(c, t) = starts_interval?(c, t) AND NOT ends_interval?(c, t) (and only_ends_interval? is defined similarly). The predicates starts_interval?(c, t) and ends_interval?(c, t) state that there is a matching interval in c that starts (ends) at time t. This means that either an event only starting an interval is followed by one not only starting an interval, or an event only ending an interval is preceded by one not only ending an interval. Due to the other assumptions, if this holds, the relevant pair can be exchanged, yielding a computation with a smaller measure.

2.4.2 Layered measure

Another way of thinking of convenient computations of a program is to define ordered phases or layers of execution [5, 4].
Each event is associated with a layer. If the events in every layer appear contiguously in the computation, without events from a layer getting mixed with events from an earlier layer, the computation is considered convenient. Examples where this approach seems natural are programs with communication-closed layers and distributed snapshot algorithms [3].

The layered_measure theory considers programs with a finite number of layers, where all but the last one must be finite and eventually finish, i.e., for each computation and for each layer in it, there is a time after which all the events belong to subsequent layers. If infinite computations are considered, this can be achieved by applying some sort of fairness assumption.

For each event (computation step) except those associated with the last layer, we count the number of previous events that belong to a (strictly) later layer than the layer of that event. The measure value of a computation is the sum of those counts. Clearly, computations with a zero measure value should be convenient in an application, since no event is preceded by an event from a later layer.

The application must define a natural number lastlayer and a function (layer) that maps a computation and a time point into a natural number less than or equal to lastlayer. This function must meet the following requirements:
- As mentioned before, for every layer below lastlayer there is a time after which there are no more time points belonging to it.
- The layer function is the same for one-swap-equivalent computations, except at the two time points involved in the swap, where the layer values are interchanged.
- For any two consecutive time points t and t+1 such that layer(t) > layer(t+1), the operations at those two steps must be independent, i.e., a swap must be possible. This seemingly strong requirement should be easy to prove if layering is appropriate for the application.

In this theory, it is proven that any assignment of layers satisfying these conditions guarantees that an arbitrary computation with a non-zero measure is equivalent to one with a smaller measure. Thus showing a drop in the measure value is hidden from a user, if the three conditions above can be shown.

3 User input

We now summarize the elements that must be provided when applying the method.

3.1 The user's problem description

First, the computation model must be described by defining the types of the states and operations, the initial states, the operations' enabling conditions, and their next-state functions. Any necessary global restrictions such as fairness or finiteness are then added to define the relevant computations.

Second, the user must define the conditional independence relation between the operations at given states. This is used to instantiate the computation equivalence theory, which will provide the swap_equiv? relation. The theory will generate proof obligations to show that the user's suggested relation is a valid independence relation. Finally, in the proof method theories, the convenient computations must be provided in the instantiation of the theory conv_comps, and a measure function must be defined (either by the user, or using one of the two provided).

These are all the definitions needed to prove computation equivalence.
Aside from the importing assumptions of the theories used, the user is left to prove that for every non-convenient computation there is a reduction to a computation with a lower measure (for the matching intervals measure there is a sufficient condition that makes that proof much easier, and for the layered measure this is already proven).

To prove any property (predicate over computations) for all the computations of an application, the theorem provided in conv_comps leaves the user to prove that the property holds for convenient computations and that equivalence over the user's independence relation preserves the property.

3.2 The user's design decisions and tradeoffs

As in any proof method, experience is essential in successfully applying the elements of this method. Choosing the relevant computations can be critical, especially in proving the importing assumptions of the theories that define measure functions.
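The shape of this remaining obligation can be visualized with a small executable sketch (ours, with hypothetical names; the PVS development proves the general principle once and for all in subset_reachability): if every non-convenient element reduces to one of strictly smaller natural-number measure, as in premise (1), then iterating the reduction must reach a convenient element, by well-foundedness of the naturals.

```python
# Our sketch of the subset_reachability argument (not the PVS theory).

def reach_convenient(c, conv, measure, reduce_step):
    """Iterate measure-decreasing reductions until a convenient element."""
    while not conv(c):
        d = reduce_step(c)
        assert measure(d) < measure(c)  # premise (1): measure strictly drops
        c = d
    return c

# Instantiation with the two-process swap model used earlier: convenient
# means all P1 events precede all P2 events; the measure counts inversions.
def measure(c):
    return sum(1 for i in range(len(c)) for j in range(i + 1, len(c))
               if c[i][0] == "P2" and c[j][0] == "P1")

def conv(c):
    return measure(c) == 0

def reduce_step(c):
    for t in range(len(c) - 1):
        if c[t][0] == "P2" and c[t + 1][0] == "P1":  # independent, swappable
            d = list(c)
            d[t], d[t + 1] = d[t + 1], d[t]
            return tuple(d)
    raise ValueError("already convenient")
```

Each call to reduce_step removes exactly one inversion, so the loop terminates with an equivalent convenient computation.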

When proving that the reduction preserves the property to be verified, and also when proving that the independence relation implies functional independence, it helps to have as small an independence relation as possible. This conflicts with the interest of having more opportunities to swap operations in order to find a computation with a smaller measure.

If we include more computations in the class of convenient computations, it may be easier to show a reduction to a smaller measure for the remaining nonconvenient computations. On the other hand, we then diminish the benefit of the use of equivalence by having to prove the desired properties directly for a larger class of convenient computations.

As seen in the proof obligation (2), the properties that can be verified when the theories are combined in an application are those which are preserved by the reduction relation. A lemma in the theory equivalence_of_comps simplifies this requirement: it suffices to show that two computations which differ only in the order of one pair of independent operations must satisfy p? equally:

FORALL (c, d): one_swap_equiv?(c, d) => p?(d) => p?(c)

This requirement is easy to prove for large classes of properties, e.g., properties that are to hold only in final states. In certain cases, one might need to add "history" variables to the state (without affecting the behavior of the rest of the state components) to support property verification. For example, in order to verify mutual exclusion, a flag that records a violation of the mutual exclusion should be added. This is done so that two computations which differ only by the order of a pair of operations are not considered equivalent if one of them violates the mutual exclusion requirement and the other does not. The original system variables might not suffice to make those operations functionally dependent.

The characterization of the properties which can be proven by this method is a subject worth further research.
In this paper we have focused on the proofs that computations are equivalent, and particularly on showing that every computation is equivalent to one of the convenient computations.

4 Example 1: Using the matching intervals measure

Our first example (a full listing is at the Web page given earlier) shows how a sequence of local actions in a process can be considered atomic. It is typical of many situations where a sequence of local actions can be viewed as virtually atomic [1].

% x: nat=0, s: bool=FALSE
% in_l, in_m : nat
% P1: t: nat       || P2: t: nat        % the t's are local
% l0: t:=in_l      || m0: t:=in_m
% l1: t:=t+x       || m1: await s=TRUE
% l2: x:=t         || m2: t:=t*x
% l3: s:=TRUE      || m3: x:=t
% l4: STOP         || m4: STOP

Here the operation l3 must occur before m1, so in fact we observe all the possible interleavings of the operation m0 (P2's initialization) with the operations l0-l3. The states type contains the two program counters explicitly. The ops type is an enumeration of the labels in the program. The initial? predicate on states is straightforward. The en? enabling condition and the next next-state function are defined in a table format to enhance readability.

In this example we define two operations as independent if they belong to different processes and they satisfy indep_l_m?, a predicate given in a table which in this particular case
is not state-dependent. This table was filled based on our understanding of the semantics of the programming language. The independence relation must be proved to imply functional independence and to be symmetric, as a type-correctness requirement. After that is done, to decide if two operations can be swapped, we only need to look them up in the table. The separation of these two proof concerns is quite useful during the proving process. The convenient computations are chosen as those in which m0 is executed immediately before m1 (and after l0-l3). We choose to use the default measure with matching pairs. Here we can define the match? predicate so that only the pair (m0, m1) matches. Clearly, when the measure is zero, the computation is convenient.

Note that if the instructions in question were in a loop, the definition of matching predicates would have to guarantee that the proper occurrences of the instructions are matched. In anticipation of such cases, the match? predicate states that the interval in question starts with m0 and ends with m1, without any other m1 in between or an m0 before its beginning. This is necessary in order to prove an assumption of the matching intervals measure theory: if there is a sequence m0...m1...m1, we do not want two matching intervals starting with m0, or two that end with m1 in the sequence m0...m0...m1.

There is another condition to satisfy: computations must have a finite number of matching intervals. Again, in anticipation of looping programs, we restrict ourselves to these computations by adding this requirement as the relevant? predicate.
This is harmless in the present example, since we deal with a single matching interval.

The proof that for any nonconvenient computation there is an equivalent one with a smaller measure was accomplished by using the theorem provided in the matching intervals measure theory.

Since our main concern is to prove the computation equivalence, we provided an example of a property to be verified and the related proof obligations (conv_implies_p and one_swap_equiv_preserves_p) only for illustrative purposes, without completing their proofs.

5 Example 2: Using the layered measure

Our second example (also available at the Web page) is a typical representative of the pipelined processing paradigm. In our example, all computations are equivalent to those that execute "one wave at a time," i.e., in which a new input is entered only when all the operations related to the previous inputs have been finished. The program is a pipelined insertion-sort algorithm in which the buffers between the processors can hold a single value. We assume that each processor does its local actions atomically: taking its input, comparing it with the value it holds, and sending the maximum of the two to the next processor in the pipeline. To understand why it is complicated to prove the algorithm without the convenient computations approach, consider a general computation. In a typical state, the first k processors already have a value, and some of them have a nonempty incoming buffer. There could be several such processors whose successor has room in its buffer, so many different operations would be enabled in such a state.
To verify the sorting algorithm we need a general invariant, much harder to find than the one needed for our convenient computations, in which there is at most one possible continuation at a time.

The example is described in the theory pipeline_sort, parametric on the type of the values being sorted, their total-ordering relation, the number of input values (and of processors) NUM, and an array from 0 to NUM-1 holding those values.
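Before the encoding details, the layered measure itself is easy to state concretely. The sketch below is our own executable illustration (not the PVS theory): given the layer number of each step of a finite computation, the measure sums, over all steps, the number of earlier steps with a strictly later layer, so a "one wave at a time" computation has measure zero.

```python
# Our illustration of the layered measure (not the PVS theory): layers[t]
# is the layer of the step at time t.  The measure counts, for each step,
# the earlier steps that belong to a strictly later layer.

def layered_measure(layers):
    return sum(1 for t in range(len(layers)) for u in range(t)
               if layers[u] > layers[t])

# All of wave 1, then all of wave 2: measure 0 (convenient).
# A wave-2 step overtaking a wave-1 step contributes 1 to the measure;
# since layer(t) > layer(t+1) forces the two steps to be independent (the
# third requirement of Section 2.4.2), the offending adjacent pair can be
# swapped, lowering the measure.
```

For example, layer sequence [1, 1, 2, 2] has measure 0, while [1, 2, 1, 2] has measure 1, and one swap of the middle pair repairs it.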

The processors' indexes range from 1 up to NUM. The system state includes a counter of the number of inputs already inserted, and an array of processor states. Since we choose to use the layers approach, we will extend the state and add auxiliary instructions to assign a layer number to each state. Each processor state includes a flag to signal a nonempty input buffer, the input buffer itself, and an integer that holds the layer associated with that input. This number is taken from the global counter when inputting a new value into the first pipeline stage, and is copied to the next pipeline stage when a value is propagated forward, regardless of the result of the comparison between the input and the locally held value.

The layer value of a computation step is the abovementioned number for the normal operations (those of any processor), or the value of the global input counter for an input-new-value operation. The idling operation, enabled only at the end of the whole computation, has a layer value of NUM+1.

The initial states, enabling conditions and next-state functions are coded in a straightforward way. In this case, we imposed no added restrictions when describing the relevant? computations.

The independence relation is defined as TRUE only between operations done by non-adjacent processors (and for two idling steps). It is proved that this relation implies functional independence and is symmetric. Restricting the independence relation greatly simplifies the proofs.

Since the layered measure proves the computation equivalence for all cases in which its assumptions hold, we make these explicit and proceed to prove them, using several sublemmas. In this and the other examples we have done, the proof environment generates the proof obligations, and contains over half of the lines in the PVS specification and also of the prover commands (taken as a rough indication of the proof effort).

References

[1] K. Apt and E. R. Olderog. Verification of Sequential and Concurrent Programs.
Springer-Verlag, 1991.

[2] K. R. Apt, N. Francez, and S. Katz. Appraising fairness in languages for distributed programming. Distributed Computing, 2:226-241, 1988.

[3] K. M. Chandy and L. Lamport. Distributed snapshots: determining global states of distributed systems. ACM Trans. on Computer Systems, 3(1):63-75, Feb 1985.

[4] C. Chou and E. Gafni. Understanding and verifying distributed algorithms using stratified decomposition. In Proceedings of 7th ACM PODC, pages 44-65, 1988.

[5] T. Elrad and N. Francez. Decompositions of distributed programs into communication closed layers. Science of Computer Programming, 2(3):155-173, 1982.

[6] N. Francez. Fairness. Springer-Verlag, 1986.

[7] P. Godefroid. On the costs and benefits of using partial-order methods for the verification of concurrent systems. In D. Peled, V. Pratt, and G. Holzmann, editors, Partial Order Methods in Verification, pages 289-303. American Mathematical Society, 1997. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol. 29.

[8] S. Katz. Refinement with global equivalence proofs in temporal logic. In D. Peled, V. Pratt, and G. Holzmann, editors, Partial Order Methods in Verification, pages 59-78. American Mathematical Society, 1997. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol. 29.

[9] S. Katz and D. Peled. Interleaving set temporal logic. Theoretical Computer Science, 75:263-287, 1990. Preliminary version appeared in the 6th ACM PODC, 1987.

[10] S. Katz and D. Peled. Defining conditional independence using collapses. Theoretical Computer Science, 101:337-359, 1992.

[11] S. Katz and D. Peled. Verification of distributed programs using representative interleaving sequences. Distributed Computing, 6:107-120, 1992.

[12] S. Owre, N. Shankar, J. M. Rushby, and D. W. J. Stringer-Calvert. PVS Language Reference. Computer Science Laboratory, SRI International, Menlo Park, CA, September 1998.

[13] S. Owre, J. Rushby, N. Shankar, and F. von Henke. Formal verification for fault-tolerant architectures: Prolegomena to the design of PVS. IEEE Transactions on Software Engineering, 21(2):107-125, February 1995.

[14] D. Peled. Combining partial order reductions with on-the-fly model checking. Journal of Formal Methods in System Design, 8:39-64, 1996.

[15] D. Peled and A. Pnueli. Proving partial order properties. Theoretical Computer Science, 126:143-182, 1994.

[16] D. Peled, V. Pratt, and G. Holzmann, editors. Partial Order Methods in Verification. American Mathematical Society, 1997. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol. 29.

[17] J. Rushby, S. Owre, and N. Shankar. Subtypes for specifications: Predicate subtyping in PVS. IEEE Transactions on Software Engineering, 24(9):709-720, September 1998.

[18] P. Wolper and P. Godefroid. Partial-order methods for temporal verification. In Proceedings of CONCUR'93 (E. Best, editor), LNCS 715, 1993.