Page 1: Message Passing

Distributed Programming: Reasoning about Synchronous Message Passing

Message Passing

Johannes Åman Pohjola
CSIRO's Data61 (and UNSW)

Term 2 2021

Page 2: Where we are at

In the last lecture, we saw monitors and the readers and writers problem, concluding our examination of shared-variable concurrency.

For the rest of this course, our focus will be on message passing. It's a useful concurrency abstraction on one computer, and the foundation for distributed programming.

In this lecture, we will introduce message passing and discuss simple proof techniques for synchronous message passing.

Page 3: Distributed Programming

In a distributed program, processes can be distributed across machines, so they cannot use shared variables (usually; distributed shared memory is the exception). Processes do share communication channels for message passing.

- languages: Promela (synchronous and asynchronous message passing), Java (RPC)
- libraries: sockets, the Message Passing Interface (MPI), the Parallel Virtual Machine (PVM), etc.

Page 4: Message Passing

A channel is a typed FIFO queue between processes.

                   Ben-Ari   Promela
send a message     ch ⇐ x    ch ! x
receive a message  ch ⇒ y    ch ? y

Synchronous channels

A synchronous channel has queue capacity 0. Both the send and the receive operation block until they both are ready. When they are, they execute at the same time, and assign the value of x to y.

Asynchronous channels

For asynchronous channels, send doesn't block. It appends the value of x to the queue associated with channel ch. The receive operation blocks until ch contains a message. When it does, the oldest message is removed, and its content is stored in y.
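Both behaviours can be tried out directly in a language with channels built in. Below is a small Go sketch (not from the slides): an unbuffered channel gives exactly the synchronous rendezvous described above, while a buffered channel approximates an asynchronous channel (note that Go's buffers are bounded, so a send does block once the buffer is full, unlike the unbounded queue described here).

```go
package main

import "fmt"

func main() {
	// Synchronous channel: capacity 0. The send blocks until a
	// receiver is ready; the handshake assigns the value of x to y.
	sync := make(chan int)
	x := 42
	go func() { sync <- x }() // blocks until the receive below is ready
	y := <-sync
	fmt.Println(y) // 42

	// Asynchronous channel: a buffered channel. Send appends to the
	// queue without blocking (while there is space); receive blocks
	// until a message is available, then takes the oldest one (FIFO).
	async := make(chan int, 2)
	async <- 1
	async <- 2
	fmt.Println(<-async, <-async) // 1 2 (FIFO order)
}
```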

Page 7: Taxonomy of Asynchronous Message Passing

Asynchronous channels may be...

Reliable: all messages sent will eventually arrive.

Lossy: messages may be lost in transit.

FIFO: messages will arrive in order.

Unordered: messages can arrive out-of-order.

Error-detecting: received messages aren't garbled in transit (or if they are, we can tell).

Example

TCP is reliable and FIFO. UDP is lossy and unordered, but error-detecting.

Page 9: Algorithm 2.1: Producer-consumer (channels)

channel of integer ch

producer                    consumer
integer x                   integer y
loop forever                loop forever
p1:  x ← produce            q1:  ch ⇒ y
p2:  ch ⇐ x                 q2:  consume(y)
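Algorithm 2.1 can be sketched in Go over an unbuffered channel, so every send is a synchronous rendezvous with the matching receive. This is a hypothetical rendering, not from the slides: the infinite loops are cut down to a finite stream so the demo terminates, and produce/consume are stand-ins for the abstract operations.

```go
package main

import "fmt"

// producer runs p1-p2: produce a value, then send it over ch.
// ch is unbuffered, so every ch <- x blocks until the consumer's
// matching receive is ready.
func producer(ch chan<- int, n int) {
	for i := 1; i <= n; i++ {
		x := i * i // stand-in for "x ← produce"
		ch <- x    // p2: ch ⇐ x
	}
	close(ch) // end of the finite demo stream
}

// consumer runs q1-q2: receive a value, then consume it (here:
// collect it, and report the whole stream when ch is closed).
func consumer(ch <-chan int, consumed chan<- []int) {
	var ys []int
	for y := range ch { // q1: ch ⇒ y
		ys = append(ys, y) // stand-in for "consume(y)"
	}
	consumed <- ys
}

func main() {
	ch := make(chan int) // synchronous: queue capacity 0
	consumed := make(chan []int)
	go consumer(ch, consumed)
	producer(ch, 4)
	fmt.Println(<-consumed) // [1 4 9 16]
}
```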

Page 10: Conway's Problem

Example

Input on channel inC: a sequence of characters.
Output on channel outC:
- the sequence of characters from inC, with runs of 2 ≤ n ≤ 9 occurrences of the same character c replaced by n and c
- a newline character after every Kth character in the output

Let's use message passing for separation of concerns:

inC → [compress] → pipe → [output] → outC

Page 12: Algorithm 2.2: Conway's problem

constant integer MAX ← 9
constant integer K ← 4
channel of integer inC, pipe, outC

compress                                    output
char c, previous ← 0                        char c
integer n ← 0                               integer m ← 0
inC ⇒ previous
loop forever                                loop forever
p1:  inC ⇒ c                                q1:  pipe ⇒ c
p2:  if (c = previous) and (n < MAX − 1)    q2:  outC ⇐ c
p3:    n ← n + 1                            q3:  m ← m + 1
     else                                   q4:  if m >= K
p4:    if n > 0                             q5:    outC ⇐ newline
p5:      pipe ⇐ i2c(n+1)                    q6:    m ← 0
p6:      n ← 0
p7:    pipe ⇐ previous
p8:    previous ← c
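The pipeline can be sketched in Go with the two stages as goroutines connected by the synchronous channel pipe. This is an illustrative version, not the slides' code: the infinite loops are replaced by finite streams, and an extra flush step emits the final run when the input ends, which the loop-forever original never needs.

```go
package main

import "fmt"

const MAX = 9 // longest encodable run
const K = 4   // newline after every Kth output character

// compress run-length encodes its input: a run of 2..MAX equal
// characters c is sent to pipe as the digit for the run length
// followed by c (i2c(n+1) becomes '0'+n+1).
func compress(inC <-chan byte, pipe chan<- byte) {
	previous := <-inC
	n := 0
	for c := range inC {
		if c == previous && n < MAX-1 {
			n++
		} else {
			if n > 0 {
				pipe <- byte('0' + n + 1) // i2c(n+1)
				n = 0
			}
			pipe <- previous
			previous = c
		}
	}
	if n > 0 { // flush the final run (added for a finite input)
		pipe <- byte('0' + n + 1)
	}
	pipe <- previous
	close(pipe)
}

// output copies pipe to outC, inserting '\n' after every Kth character.
func output(pipe <-chan byte, outC chan<- byte) {
	m := 0
	for c := range pipe {
		outC <- c
		m++
		if m >= K {
			outC <- '\n'
			m = 0
		}
	}
	close(outC)
}

// conway wires up inC → compress → pipe → output → outC.
func conway(input string) string {
	inC, pipe, outC := make(chan byte), make(chan byte), make(chan byte)
	go func() {
		for i := 0; i < len(input); i++ {
			inC <- input[i]
		}
		close(inC)
	}()
	go compress(inC, pipe)
	go output(pipe, outC)
	var out []byte
	for c := range outC {
		out = append(out, c)
	}
	return string(out)
}

func main() {
	fmt.Println(conway("aaabcccc")) // "3ab4", newline, "c"
}
```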

Page 13: Reminder: Matrix Multiplication

Example

( 1 2 3 )   ( 1 0 2 )   (  4 2  6 )
( 4 5 6 ) × ( 0 1 2 ) = ( 10 5 18 )
( 7 8 9 )   ( 1 0 0 )   ( 16 8 30 )

Let p, q, r ∈ N. Let A = (a_{i,j}), 1 ≤ i ≤ p, 1 ≤ j ≤ q, in T^{p×q} and B = (b_{j,k}), 1 ≤ j ≤ q, 1 ≤ k ≤ r, in T^{q×r} be two (compatible) matrices. Recall that the matrix C = (c_{i,k}), 1 ≤ i ≤ p, 1 ≤ k ≤ r, in T^{p×r} is their product, A × B, iff, for all 1 ≤ i ≤ p and 1 ≤ k ≤ r:

c_{i,k} = Σ_{j=1}^{q} a_{i,j} · b_{j,k}

Page 15: Algorithms for Matrix Multiplication

The standard algorithm for matrix multiplication is:

for all rows i of A do:
    for all columns k of B do:
        set c_{i,k} to 0
        for all columns j of A do:
            add a_{i,j} · b_{j,k} to c_{i,k}

Because of the three nested loops, its complexity is O(p · q · r). In case both matrices are square, i.e., p = q = r, that's O(p³).
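The three nested loops translate directly; here is a minimal Go sketch of the standard algorithm, run on the example matrices from the previous slide (the function name is illustrative):

```go
package main

import "fmt"

// matMul returns the product C = A × B of a p×q matrix A and a q×r
// matrix B, using the standard three-nested-loop algorithm: O(p·q·r).
func matMul(A, B [][]int) [][]int {
	p, q, r := len(A), len(B), len(B[0])
	C := make([][]int, p)
	for i := 0; i < p; i++ { // all rows i of A
		C[i] = make([]int, r)
		for k := 0; k < r; k++ { // all columns k of B
			C[i][k] = 0
			for j := 0; j < q; j++ { // sum over the inner dimension j
				C[i][k] += A[i][j] * B[j][k]
			}
		}
	}
	return C
}

func main() {
	A := [][]int{{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}
	B := [][]int{{1, 0, 2}, {0, 1, 2}, {1, 0, 0}}
	fmt.Println(matMul(A, B)) // [[4 2 6] [10 5 18] [16 8 30]]
}
```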

Page 17: Process Array for Matrix Multiplication

[Figure: a 3×3 array of multiplier processes, one per element of A (the coefficients 1..9, row by row). Source processes feed the columns of B in from the north; Zero processes feed initial partial sums of 0 in from the east; B-elements flow south towards Sink processes, and partial sums flow west towards the Result processes. For example, along the bottom row (coefficients 7, 8, 9) the partial sums grow 0,0,0 → 9,0,0 → 9,8,16 → 16,8,30, so that Result process receives the third row of the product.]

Page 18: Computation of One Element

[Figure: the computation of one element, c_{3,3} = 7·2 + 8·2 + 9·0 = 30. The third column of B (2, 2, 0) flows south past the coefficients 7, 8, 9 while the partial sum, initially 0 from the Zero process, grows to 0, 16, and finally 30 on its way west to the Result process.]

Page 19: Algorithm 2.3: Multiplier process with channels

integer FirstElement
channel of integer North, East, South, West
integer Sum, integer SecondElement

loop forever
p1:  North ⇒ SecondElement
p2:  East ⇒ Sum
p3:  Sum ← Sum + FirstElement · SecondElement
p4:  South ⇐ SecondElement
p5:  West ⇐ Sum

Page 20: Algorithm 2.4: Multiplier with channels and selective input

integer FirstElement
channel of integer North, East, South, West
integer Sum, integer SecondElement

loop forever
  either
p1:  North ⇒ SecondElement
p2:  East ⇒ Sum
  or
p3:  East ⇒ Sum
p4:  North ⇒ SecondElement
p5:  South ⇐ SecondElement
p6:  Sum ← Sum + FirstElement · SecondElement
p7:  West ⇐ Sum
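The "either ... or" selective input corresponds directly to Go's select statement over channel receives. Here is an illustrative Go sketch of Algorithm 2.4 (names are ours, and a single round replaces "loop forever" so the demo terminates):

```go
package main

import "fmt"

// multiplier accepts its two inputs in either order using select, then
// passes the B-element south and the updated partial sum west.
func multiplier(coeff, rounds int, north, east <-chan int, south, west chan<- int) {
	for r := 0; r < rounds; r++ {
		var x, sum int
		select { // take whichever input happens to be ready first
		case x = <-north: // p1: North ⇒ SecondElement
			sum = <-east // p2: East ⇒ Sum
		case sum = <-east: // p3: East ⇒ Sum
			x = <-north // p4: North ⇒ SecondElement
		}
		south <- x            // p5: South ⇐ SecondElement
		west <- sum + coeff*x // p6-p7: update Sum, West ⇐ Sum
	}
}

func main() {
	north, east := make(chan int), make(chan int)
	south, west := make(chan int, 1), make(chan int, 1)
	go multiplier(9, 1, north, east, south, west)
	north <- 2 // an element of B arriving from the north
	east <- 0  // the partial sum arriving from the east
	fmt.Println(<-south, <-west) // forwards 2 south; 0 + 9·2 = 18 west
}
```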

Page 21: Multiplier Process in Promela

proctype Multiplier(byte Coeff;
                    chan North;
                    chan East;
                    chan South;
                    chan West)
{
  byte i, Sum, X;  /* i added here: the for-loop variable must be declared */
  for (i : 0..(SIZE-1)) {
    if :: North ? X -> East ? Sum;
       :: East ? Sum -> North ? X;
    fi;
    South ! X;
    Sum = Sum + X*Coeff;
    West ! Sum;
  }
}

Page 22: Algorithm 2.5: Dining philosophers with channels

channel of boolean forks[5]

philosopher i               fork i
boolean dummy               boolean dummy
loop forever                loop forever
p1:  think                  q1:  forks[i] ⇐ true
p2:  forks[i] ⇒ dummy       q2:  forks[i] ⇒ dummy
p3:  forks[i+1] ⇒ dummy
p4:  eat
p5:  forks[i] ⇐ true
p6:  forks[i+1] ⇐ true

NB

The many shared channels make it possible to give forks directly to other philosophers, rather than putting them back on the table.
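The fork-as-process idea can be sketched in Go over unbuffered channels: receiving from a fork's channel picks it up, sending puts it back. This is an illustrative rendering, not the slides' code; note that running all five philosophers concurrently can deadlock exactly as the algorithm above can (everyone holding their left fork), so the demo deliberately runs only two non-adjacent philosophers.

```go
package main

import "fmt"

const N = 5

// fork alternately offers itself (q1: forks[i] ⇐ true) and waits to be
// put back (q2: forks[i] ⇒ dummy).
func fork(ch chan bool) {
	for {
		ch <- true // fork lies on the table; receiving it = picking it up
		<-ch       // wait until the fork is put back
	}
}

// philosopher mirrors p1-p6; meals bounds "loop forever" for the demo.
func philosopher(i, meals int, forks [N]chan bool, done chan<- int) {
	left, right := forks[i], forks[(i+1)%N]
	for m := 0; m < meals; m++ {
		// p1: think
		<-left  // p2: forks[i] ⇒ dummy
		<-right // p3: forks[i+1] ⇒ dummy
		// p4: eat
		left <- true  // p5: forks[i] ⇐ true
		right <- true // p6: forks[i+1] ⇐ true
	}
	done <- i
}

func main() {
	var forks [N]chan bool
	for i := range forks {
		forks[i] = make(chan bool) // synchronous channels
		go fork(forks[i])
	}
	done := make(chan int)
	// Two non-adjacent philosophers: contention-free, so no deadlock.
	go philosopher(0, 1, forks, done)
	go philosopher(2, 1, forks, done)
	a, b := <-done, <-done
	fmt.Println(a+b == 2) // philosophers 0 and 2 both finished: true
}
```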

Page 24: Synchronous Message Passing

Recall that, when message passing is synchronous, the exchange of a message requires coordination between sender and receiver (sometimes called a handshaking mechanism).

In other words, the sender is blocked until the receiver is ready to cooperate.

Page 25: Synchronous Transition Diagrams

Definition

A synchronous transition diagram is a parallel composition P1 ‖ … ‖ Pn of some (sequential) transition diagrams P1, …, Pn, called processes.

The processes Pi
- do not share variables
- communicate along unidirectional channels C, D, … connecting at most 2 different processes, by way of
  - output statements C ⇐ e, for sending the value of expression e along channel C
  - input statements C ⇒ x, for receiving a value along channel C into variable x

Page 26: Edges in (Sequential) Transition Diagrams

For shared-variable concurrency, labels b; f, where b is a Boolean condition and f is a state transformation, sufficed.

Example

ℓ --( t = 1; x ← 5 )--> ℓ′

Now, we call such transitions internal.

Page 27: I/O Transitions

We extend this notation to message passing by allowing the guard to be combined with an input or an output statement:

ℓ --( b; C ⇒ x; f )--> ℓ′

ℓ --( b; C ⇐ e; f )--> ℓ′

Page 28: Example 1

Let P = P1 ‖ P2 be given as:

s1 --( C ⇐ 1 )--> t1   ‖   s2 --( C ⇒ x )--> t2

Obviously, {True} P {x = 1}, but how to prove it?
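Example 1 can be run almost literally in Go, with an unbuffered channel standing in for C (a sketch, not part of the slides):

```go
package main

import "fmt"

func main() {
	C := make(chan int)    // the synchronous channel C
	go func() { C <- 1 }() // P1: C ⇐ 1
	x := <-C               // P2: C ⇒ x
	fmt.Println(x == 1)    // the postcondition x = 1 holds: true
}
```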

Page 29: Semantics: Closed Product

Definition

Given Pi = (Li, Ti, si, ti), for 1 ≤ i ≤ n, with disjoint local variable sets, define their closed product as P = (L, T, s, t) such that L = L1 × … × Ln, s = ⟨s1, …, sn⟩, t = ⟨t1, …, tn⟩ and ℓ --a--> ℓ′ ∈ T iff either

1. ℓ = ⟨ℓ1, …, ℓi, …, ℓn⟩, ℓ′ = ⟨ℓ1, …, ℓ′i, …, ℓn⟩, and ℓi --a--> ℓ′i ∈ Ti is an internal transition, or

2. ℓ = ⟨ℓ1, …, ℓi, …, ℓj, …, ℓn⟩, ℓ′ = ⟨ℓ1, …, ℓ′i, …, ℓ′j, …, ℓn⟩, i ≠ j, with ℓi --( b; C ⇐ e; f )--> ℓ′i ∈ Ti and ℓj --( b′; C ⇒ x; g )--> ℓ′j ∈ Tj, and a is b ∧ b′; f ◦ g ◦ ⟦x ← e⟧

Page 30: Example 1 cont'd

Observe that the closed product is just

⟨s1,s2⟩ --( x ← 1 )--> ⟨t1,t2⟩

so validity of {True} P {x = 1} follows from

|= True =⇒ (x = 1) ◦ ⟦x ← 1⟧

which is immediate.

See the glossary of notation for the meaning of all these strange symbols.

Page 31: Verification

To show that {φ} P1 ‖ … ‖ Pn {ψ} is valid, we could simply prove {φ} P {ψ} for P being the closed product of the Pi.

This can be done using Floyd's method, because there are no I/O transitions left in P.

Disadvantage

As with the standard product construction for shared-variable concurrency, the closed product construction leads to a number of verification conditions exponential in the number of processes.

Therefore, we are looking for an equivalent of the Owicki/Gries method for synchronous message passing.

Page 35: A Simplistic Method

For each location ℓ in some Li, find a local predicate Qℓ, only depending on Pi's local variables.

1. Prove that, for all i, the local verification conditions hold, i.e., |= Qℓ ∧ b =⇒ Qℓ′ ◦ f for each ℓ --( b; f )--> ℓ′ ∈ Ti.

2. For all i ≠ j and matching pairs of I/O transitions ℓi --( b; C ⇐ e; f )--> ℓ′i ∈ Ti and ℓj --( b′; C ⇒ x; g )--> ℓ′j ∈ Tj, show that

   |= Qℓi ∧ Qℓj ∧ b ∧ b′ =⇒ (Qℓ′i ∧ Qℓ′j) ◦ f ◦ g ◦ ⟦x ← e⟧.

3. Prove |= φ =⇒ Qs1 ∧ … ∧ Qsn and |= Qt1 ∧ … ∧ Qtn =⇒ ψ.

Page 36: Proof of Example 1

There are no internal transitions. There's one matching pair, whose proof obligation discharges immediately:

True =⇒ (x = 1) ◦ ⟦x ← 1⟧  ≡  True =⇒ 1 = 1  ≡  True

Page 37: Soundness & Incompleteness

The simplistic method is sound but not complete. It generates proof obligations for all syntactically matching I/O transition pairs, regardless of whether these pairs can actually be matched semantically (in an execution).

Page 38: Example 2

Let P = P1 ‖ P2 be given as:

s1 --( C ⇐ 1 )--> ℓ1 --( C ⇐ 2 )--> t1      (transitions T1 and T2)
‖
s2 --( C ⇒ x )--> ℓ2 --( C ⇒ x )--> t2      (transitions T3 and T4)

We cannot prove {True} P {x = 2} using the simplistic method. Proof obligations for the transition pairs (T1,T4) and (T2,T3) should not, but have to be, discharged, and lead to a contradiction, meaning that no inductive assertion network for applying the simplistic method to this example can be found.

Page 39: Remedy 1: Adding Shared Auxiliary Variables

Use shared auxiliary variables to relate locations in processes by expressing that certain combinations will not occur during execution. Only output transitions need to be augmented with assignments to these shared auxiliary variables.

Pro: easy

Con: re-introduces interference freedom tests for matching pairs ℓi --( bi; C ⇐ e; fi )--> ℓ′i ∈ Ti and ℓj --( bj; C ⇒ x; fj )--> ℓ′j ∈ Tj, and location ℓm of process Pm, m ≠ i, j:

|= Qℓi ∧ Qℓj ∧ Qℓm ∧ bi ∧ bj =⇒ Qℓm ◦ fi ◦ fj ◦ ⟦x ← e⟧

[This method is due to Levin & Gries.]

Page 40: Example 2 cont'd

s1 --( C ⇐ 1 )--> ℓ1 --( C ⇐ 2 )--> t1
‖
s2 --( C ⇒ x )--> ℓ2 --( C ⇒ x )--> t2

Page 41: Example 2 cont'd

s1 --( C ⇐ 1; k ← 1 )--> ℓ1 --( C ⇐ 2; k ← 2 )--> t1
‖
s2 --( C ⇒ x )--> ℓ2 --( C ⇒ x )--> t2

Page 42: Example 2 cont'd

s1 --( C ⇐ 1; k ← 1 )--> ℓ1 --( C ⇐ 2; k ← 2 )--> t1
‖
s2 --( C ⇒ x )--> ℓ2 --( C ⇒ x )--> t2

with assertions

Qs1 ≡ k = 0,  Qℓ1 ≡ k = 1,  Qt1 ≡ k = 2
Qs2 ≡ k = 0,  Qℓ2 ≡ k = 1 ∧ x = 1,  Qt2 ≡ k = 2 ∧ x = 2

Page 43: Levin & Gries-style Proof for Example 2

There are no internal transitions. Four matching pairs of I/O transitions exist, the same as in the simplistic method. The proof obligations are:

|= k = 0 =⇒ (k = 1 ∧ x = 1) ◦ ⟦k ← 1⟧ ◦ ⟦x ← 1⟧   (1)

|= k = 0 ∧ k = 1 ∧ x = 1 =⇒ (k = 1 ∧ k = 2 ∧ x = 2) ◦ ⟦k ← 1⟧ ◦ ⟦x ← 1⟧   (2)

|= k = 1 ∧ k = 0 =⇒ (k = 2 ∧ k = 1 ∧ x = 1) ◦ ⟦k ← 2⟧ ◦ ⟦x ← 2⟧   (3)

|= k = 1 ∧ x = 1 =⇒ (k = 2 ∧ x = 2) ◦ ⟦k ← 2⟧ ◦ ⟦x ← 2⟧   (4)

No interference freedom proof obligations are generated in this example since there is no third process.

Page 44: Levin & Gries-style Proof for Example 2 cont'd

Thanks to contradicting propositions about the value of k, (2) and (3) are vacuously true because their left-hand sides are false. The right-hand sides of the implications (1) and (4) simplify to True, which discharges those proof obligations, e.g., for the RHS of (1):

(k = 1 ∧ x = 1) ◦ ⟦k ← 1⟧ ◦ ⟦x ← 1⟧  ≡  1 = 1 ∧ 1 = 1  ≡  True

Page 45: Remedy 2: Local Auxiliary Variables + Invariant

Use only local auxiliary variables + a global communication invariant I to relate the values of local auxiliary variables in the various processes.

Pro: no interference freedom tests

Con: more complicated proof obligation for communication steps:

|= Qℓi ∧ Qℓj ∧ b ∧ b′ ∧ I =⇒ (Qℓ′i ∧ Qℓ′j ∧ I) ◦ f ◦ g ◦ ⟦x ← e⟧

[This is the AFR method, named after Apt, Francez, and de Roever.]

Page 46: Example 2 cont'd

s1 --( C ⇐ 1 )--> ℓ1 --( C ⇐ 2 )--> t1
‖
s2 --( C ⇒ x )--> ℓ2 --( C ⇒ x )--> t2

Page 47: Example 2 cont'd

s1 --( C ⇐ 1; k1 ← 1 )--> ℓ1 --( C ⇐ 2; k1 ← 2 )--> t1
‖
s2 --( C ⇒ x; k2 ← 1 )--> ℓ2 --( C ⇒ x; k2 ← 2 )--> t2

Page 48: Example 2 cont'd

s1 --( C ⇐ 1; k1 ← 1 )--> ℓ1 --( C ⇐ 2; k1 ← 2 )--> t1
‖
s2 --( C ⇒ x; k2 ← 1 )--> ℓ2 --( C ⇒ x; k2 ← 2 )--> t2

with assertions

Qs1 ≡ k1 = 0,  Qℓ1 ≡ k1 = 1,  Qt1 ≡ k1 = 2
Qs2 ≡ k2 = 0,  Qℓ2 ≡ k2 = 1 ∧ x = 1,  Qt2 ≡ k2 = 2 ∧ x = 2

Page 49: Example 2 cont'd

s1 --( C ⇐ 1; k1 ← 1 )--> ℓ1 --( C ⇐ 2; k1 ← 2 )--> t1
‖
s2 --( C ⇒ x; k2 ← 1 )--> ℓ2 --( C ⇒ x; k2 ← 2 )--> t2

with assertions

Qs1 ≡ k1 = 0,  Qℓ1 ≡ k1 = 1,  Qt1 ≡ k1 = 2
Qs2 ≡ k2 = 0,  Qℓ2 ≡ k2 = 1 ∧ x = 1,  Qt2 ≡ k2 = 2 ∧ x = 2

Define I ≡ (k1 = k2).

Page 50: AFR-style Proof for Example 2

There are no internal transitions. Four matching pairs of I/O transitions exist, the same as in the simplistic method. The proof obligations are:

|= k1 = 0 ∧ k2 = 0 ∧ k1 = k2
   =⇒ (k1 = 1 ∧ k2 = 1 ∧ x = 1 ∧ k1 = k2) ◦ ⟦k1 ← 1⟧ ◦ ⟦k2 ← 1⟧ ◦ ⟦x ← 1⟧   (5)

|= k1 = 0 ∧ k2 = 1 ∧ x = 1 ∧ k1 = k2
   =⇒ (k1 = 1 ∧ k2 = 2 ∧ x = 2 ∧ k1 = k2) ◦ ⟦k1 ← 1⟧ ◦ ⟦k2 ← 2⟧ ◦ ⟦x ← 1⟧   (6)

|= k1 = 1 ∧ k2 = 0 ∧ k1 = k2
   =⇒ (k1 = 2 ∧ k2 = 1 ∧ x = 1 ∧ k1 = k2) ◦ ⟦k1 ← 2⟧ ◦ ⟦k2 ← 1⟧ ◦ ⟦x ← 2⟧   (7)

|= k1 = 1 ∧ k2 = 1 ∧ x = 1 ∧ k1 = k2
   =⇒ (k1 = 2 ∧ k2 = 2 ∧ x = 2 ∧ k1 = k2) ◦ ⟦k1 ← 2⟧ ◦ ⟦k2 ← 2⟧ ◦ ⟦x ← 2⟧   (8)

Page 51: AFR-style Proof for Example 2 cont'd

Thanks to the invariant k1 = k2, (6) and (7) are vacuously true. The right-hand sides of the implications (5) and (8) simplify to True, which discharges those proof obligations, e.g., for the RHS of (8):

(k1 = 2 ∧ k2 = 2 ∧ x = 2 ∧ k1 = k2) ◦ ⟦k1 ← 2⟧ ◦ ⟦k2 ← 2⟧ ◦ ⟦x ← 2⟧
≡  2 = 2 ∧ 2 = 2 ∧ 2 = 2 ∧ 2 = 2
≡  True

Page 52: What Now?

Next lecture, we'll be looking at proof methods for termination (convergence and deadlock freedom) in sequential, shared-variable concurrent, and message-passing concurrent settings.

Next week, we have a break!

After the break, we'll be looking at a compositional proof method for verification, proving properties for asynchronous communication, and, if there's time on Thursday, we'll talk about process algebra.

Assignment 1 is out! Read the spec ASAP!
