Lecture 8: Concurrency: Mutual Exclusion and Synchronization (cont.)
Advanced Operating System, Fall 2009

Transcript
Page 1:

Lecture 8: Concurrency: Mutual Exclusion and Synchronization (cont.)

Advanced Operating System, Fall 2009

Page 2:

Three Environments

1. There is no central program to coordinate the processes; the processes communicate with each other through global variables.
2. Special hardware instructions.
3. There is a central program to coordinate the processes.

Page 3:

Hardware Support

Disable interrupts
  CS
Enable interrupts

Won't work if we have multiprocessors: disabling interrupts affects only the processor that executes the instruction, so processes on other processors can still enter the critical section.
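
For a uniprocessor, a minimal sketch of this pattern in C (illustration only; disable_interrupts()/enable_interrupts() are hypothetical kernel-level primitives, standing in for instructions such as x86 cli/sti that are usable only in privileged mode):

    /* Hypothetical privileged primitives -- assumptions, not a real API. */
    void disable_interrupts(void);
    void enable_interrupts(void);

    static long shared_counter;          /* shared resource */

    void increment_counter(void)
    {
        disable_interrupts();            /* no interrupt-driven preemption on this CPU */
        shared_counter++;                /* critical section */
        enable_interrupts();             /* interrupts (and preemption) allowed again */
    }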

Page 4:

Special Machine Instructions

Modern machines provide special atomic hardware instructions.

Atomic = non-interruptible
Either test a memory word and set its value (test-and-set)
Or swap the contents of two memory words (exchange)
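
As an illustration (not from the slides), C11's <stdatomic.h> exposes a test-and-set primitive; a minimal spin lock built on it:

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked */

    void acquire(void)
    {
        /* Atomically set the flag and return its previous value:
           spin until we observe that it was previously clear. */
        while (atomic_flag_test_and_set(&lock))
            ;                                     /* busy-wait */
    }

    void release(void)
    {
        atomic_flag_clear(&lock);                 /* mark the lock free again */
    }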

Page 5:

Exchange(int register, int memory)

Exchange the contents of a register with the contents of a memory location.

Shared variable: lock, initially 0
Local variable: keyi (one per process)

Process Pi:

  … prefixi
  keyi = 1
  while (keyi ≠ 0) do { exchange(keyi, lock) }
  CSi
  lock = 0
  … suffixi
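
The same loop written with C11 atomics, where atomic_exchange plays the role of the Exchange instruction (a sketch, not part of the original slides):

    #include <stdatomic.h>

    static atomic_int lock = 0;            /* shared: 0 = free, 1 = held */

    void enter_cs(void)
    {
        int key = 1;                       /* local variable keyi */
        while (key != 0)                   /* keep swapping until we pull a 0 out of lock */
            key = atomic_exchange(&lock, key);
    }

    void leave_cs(void)
    {
        atomic_store(&lock, 0);            /* lock = 0 */
    }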

Page 6:

Three Environments

1. There is no central program to coordinate the processes; the processes communicate with each other through global variables.
2. Special hardware instructions.
3. There is a central program to coordinate the processes.

Page 7:

Semaphores

A semaphore is a variable that has an integer value upon which three operations are defined:

1. A semaphore may be initialized to a nonnegative value.
2. The wait operation decrements the semaphore value. If the value becomes negative, then the process executing the wait is blocked.
3. The signal operation increments the semaphore value. If the value is not positive, then a process blocked by a wait operation is unblocked.

Other than these three operations, there is no way to inspect or manipulate semaphores.

Page 8:

Wait(s) and Signal(s)

Wait(s) – also called P(s):
{
  s = s - 1;
  if (s < 0) { place this process in a waiting queue }
}

Signal(s) – also called V(s):
{
  s = s + 1;
  if (s <= 0) { remove a process from the waiting queue }
}
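
A sketch of these primitives with POSIX threads (illustration only). In this formulation the counter never goes negative; waiters simply block while the value is zero, which gives the same observable behaviour as the bookkeeping above:

    #include <pthread.h>

    typedef struct {
        int             value;    /* current semaphore value (kept >= 0 here) */
        pthread_mutex_t m;
        pthread_cond_t  cv;
    } sema_t;

    void sema_init(sema_t *s, int initial)
    {
        s->value = initial;
        pthread_mutex_init(&s->m, NULL);
        pthread_cond_init(&s->cv, NULL);
    }

    void sema_wait(sema_t *s)             /* Wait(s) / P(s) */
    {
        pthread_mutex_lock(&s->m);
        while (s->value <= 0)             /* no permits left: block */
            pthread_cond_wait(&s->cv, &s->m);
        s->value--;
        pthread_mutex_unlock(&s->m);
    }

    void sema_signal(sema_t *s)           /* Signal(s) / V(s) */
    {
        pthread_mutex_lock(&s->m);
        s->value++;
        pthread_cond_signal(&s->cv);      /* wake one blocked process, if any */
        pthread_mutex_unlock(&s->m);
    }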

Page 9:

Semaphore as General Synchronization Tool

Counting semaphore – integer value can range over an unrestricted domain

Binary semaphore – integer value can range only between 0 and 1; can be simpler to implement

Also known as mutex locks.

WaitB(s)   (s is a binary semaphore)
{ if s = 1 then s = 0 else block this process }

SignalB(s)
{ if there is a blocked process then unblock a process else s = 1 }

A counting semaphore can be implemented using binary semaphores.
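
A corresponding sketch of WaitB/SignalB with POSIX threads (illustration only; here SignalB always sets the value to 1 and the woken waiter resets it to 0, which is an equivalent formulation of the rule above):

    #include <pthread.h>
    #include <stdbool.h>

    typedef struct {
        bool            value;    /* true = 1, false = 0 */
        pthread_mutex_t m;
        pthread_cond_t  cv;
    } bsem_t;

    /* e.g. bsem_t s = { true, PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER }; */

    void waitB(bsem_t *s)
    {
        pthread_mutex_lock(&s->m);
        while (!s->value)                 /* s = 0: block this process */
            pthread_cond_wait(&s->cv, &s->m);
        s->value = false;                 /* s was 1: take it, set s = 0 */
        pthread_mutex_unlock(&s->m);
    }

    void signalB(bsem_t *s)
    {
        pthread_mutex_lock(&s->m);
        s->value = true;
        pthread_cond_signal(&s->cv);      /* a blocked process, if any, wakes and resets s to 0 */
        pthread_mutex_unlock(&s->m);
    }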

Page 10:

Note

The wait and signal primitives are assumed to be atomic; they cannot be interrupted and each routine can be treated as an indivisible step.

Page 11:

Mutual Exclusion provided by Semaphores

Semaphore S;   // initialized to 1

Process Pi:

  prefixi
  wait(S);
  CSi
  signal(S);
  suffixi

Reminder:

WaitB(s)   (s is a binary semaphore)
{ if s = 1 then s = 0 else block this process }

SignalB(s)
{ if there is a blocked process then unblock a process else s = 1 }
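
For illustration (not part of the slides), the same pattern with a POSIX semaphore initialized to 1, protecting a shared counter:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t S;                  /* semaphore S, initialized to 1 in main() */
    static long  shared = 0;         /* shared resource */

    static void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            sem_wait(&S);            /* wait(S): enter critical section */
            shared++;                /* CS: only one thread at a time */
            sem_post(&S);            /* signal(S): leave critical section */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        sem_init(&S, 0, 1);          /* value 1 => behaves as a mutex lock */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %ld\n", shared);   /* 200000 with mutual exclusion */
        return 0;
    }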

Page 12:

Message Passing (not covered)

Direct Addressing
  The send and receive primitives carry a specific identifier of the source and destination processes:
  Send(destination, message)
  Receive(source, message)

Indirect Addressing
  Messages are not sent directly from sender to receiver but rather to a shared data structure consisting of queues that can temporarily hold messages. Such queues are generally referred to as mailboxes.
  Thus, for two processes to communicate, one process sends a message to the appropriate mailbox and the other process picks up the message from the mailbox.

[Diagram: sending processes P1 … Pn deposit messages in a shared mailbox; receiving processes Q1 … Qn pick them up.]
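
A minimal mailbox sketch with POSIX threads (illustration only; the lecture does not cover it): a fixed-size queue of integer messages with a nonblocking send and a blocking receive, one of the combinations listed on the next slide:

    #include <pthread.h>

    #define MBOX_SIZE 16

    typedef struct {
        int             msgs[MBOX_SIZE];
        int             head, tail, count;
        pthread_mutex_t m;
        pthread_cond_t  nonempty;
    } mailbox_t;

    /* e.g. mailbox_t mbox = { {0}, 0, 0, 0,
                               PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER }; */

    void mbox_send(mailbox_t *mb, int msg)    /* assumes the mailbox is not full */
    {
        pthread_mutex_lock(&mb->m);
        mb->msgs[mb->tail] = msg;
        mb->tail = (mb->tail + 1) % MBOX_SIZE;
        mb->count++;
        pthread_cond_signal(&mb->nonempty);   /* wake a waiting receiver, if any */
        pthread_mutex_unlock(&mb->m);
    }

    int mbox_receive(mailbox_t *mb)           /* blocks until a message is available */
    {
        pthread_mutex_lock(&mb->m);
        while (mb->count == 0)
            pthread_cond_wait(&mb->nonempty, &mb->m);
        int msg = mb->msgs[mb->head];
        mb->head = (mb->head + 1) % MBOX_SIZE;
        mb->count--;
        pthread_mutex_unlock(&mb->m);
        return msg;
    }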

Page 13:

Message Passing (cont.) (not covered)

When a send primitive is executed in a process, there are two possibilities:
  Either the sending process is blocked until the message is received,
  Or it is not.

When a receive primitive is executed in a process, there are two possibilities:
  If a message has previously been sent, the message is received and execution continues.
  If there is no waiting message, then either
    a) the process is blocked until a message arrives, or
    b) the process continues to execute, abandoning the attempt to receive.

Common combinations:
  Blocking send, blocking receive
  Nonblocking send, blocking receive
  Nonblocking send, nonblocking receive

Page 14:

Mutual Exclusion (not covered)

Main process (the mailbox name is mutex):

  Create_mailbox(mutex)
  Send(mutex, null)          // deposit a single token message

Process Pi:

  prefixi
  Receive(mutex, msg);       // take the token; blocks while another process holds it
  CSi
  Send(mutex, msg);          // return the token
  suffixi

Page 15:

Two classical examples

Producer and Consumer Problem
Readers/Writers Problem

Page 16:

Two classical examples

Producer and Consumer Problem
Readers/Writers Problem

Page 17:

Producer and Consumer Problem

The producer can only put something in when there is an empty buffer.
The consumer can only take something out when there is a full buffer.
The producer and consumer are concurrent processes.

[Diagram: a pool of N buffers, indexed 0 to N-1, filled by the producer and emptied by the consumer.]

Page 18:

Producer and Consumer Problem (cont.)

Global variables:
1. B[0..N-1] – an array of size N (the buffer)
2. P – a semaphore, initialized to N (counts empty slots)
3. C – a semaphore, initialized to 0 (counts full slots)

Local variables:
1. in – a pointer (integer) used by the producer, in = 0 initially
2. out – a pointer (integer) used by the consumer, out = 0 initially

Producer process:

  producer: produce(w)
            wait(P)
            B[in] = w
            in = (in + 1) mod N
            signal(C)
            goto producer

Consumer process:

  consumer: wait(C)
            w = B[out]
            out = (out + 1) mod N
            signal(P)
            consume(w)
            goto consumer

w is a local variable: the producer uses it to hold the item it has produced, and the consumer uses it to hold the item to be consumed.
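
A runnable sketch of this scheme with POSIX semaphores, one producer thread and one consumer thread (illustration only; items are plain integers, and with a single producer and a single consumer no extra mutex is needed):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 8

    static int   B[N];               /* the shared buffer */
    static sem_t P;                  /* empty slots, initialized to N */
    static sem_t C;                  /* full slots, initialized to 0 */

    static void *producer(void *arg)
    {
        int in = 0;
        for (int w = 0; w < 100; w++) {   /* produce(w) */
            sem_wait(&P);                 /* wait for an empty slot */
            B[in] = w;
            in = (in + 1) % N;
            sem_post(&C);                 /* one more full slot */
        }
        return NULL;
    }

    static void *consumer(void *arg)
    {
        int out = 0;
        for (int i = 0; i < 100; i++) {
            sem_wait(&C);                 /* wait for a full slot */
            int w = B[out];
            out = (out + 1) % N;
            sem_post(&P);                 /* one more empty slot */
            printf("consumed %d\n", w);   /* consume(w) */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        sem_init(&P, 0, N);
        sem_init(&C, 0, 0);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }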

Page 19:

Two classical examples

Producer and Consumer Problem
Readers/Writers Problem

Page 20:

Readers/Writers Problem

Suppose a data object is to be shared among several concurrent processes. Some of these processes want only to read the data object, while others want to update it (both read and write).

Readers – processes that only read
Writers – processes that read and write

If a reader process is using the data object, then other reader processes are allowed to use it at the same time.

If a writer process is using the data object, then no other process (reader or writer) is allowed to use it simultaneously.

Page 21:

Solving the Readers/Writers Problem using wait and signal primitives (cont.)

Global variable: wrt is a binary semaphore, initialized to 1; wrt is used by both readers and writers.

For reader processes: mutex is a binary semaphore, initialized to 1; readcount is an integer variable, initialized to 0. mutex and readcount are used by readers only.

Reader processes:

  wait(mutex)
  readcount = readcount + 1
  if readcount = 1 then wait(wrt)
  signal(mutex)
  …
  reading is performed
  …
  wait(mutex)
  readcount = readcount - 1
  if readcount = 0 then signal(wrt)
  signal(mutex)

Writer processes:

  wait(wrt)
  …
  writing is performed
  …
  signal(wrt)
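
A runnable sketch of this solution with POSIX semaphores (illustration only; the shared data object is just an integer and the thread counts are arbitrary):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t wrt;             /* binary semaphore, initialized to 1 */
    static sem_t mutex;           /* protects readcount, initialized to 1 */
    static int   readcount = 0;
    static int   data = 0;        /* the shared data object */

    static void *reader(void *arg)
    {
        sem_wait(&mutex);
        readcount++;
        if (readcount == 1)       /* first reader locks out writers */
            sem_wait(&wrt);
        sem_post(&mutex);

        printf("read %d\n", data);    /* reading is performed */

        sem_wait(&mutex);
        readcount--;
        if (readcount == 0)       /* last reader lets writers in again */
            sem_post(&wrt);
        sem_post(&mutex);
        return NULL;
    }

    static void *writer(void *arg)
    {
        sem_wait(&wrt);
        data++;                   /* writing is performed, exclusively */
        sem_post(&wrt);
        return NULL;
    }

    int main(void)
    {
        pthread_t r[3], w;
        sem_init(&wrt, 0, 1);
        sem_init(&mutex, 0, 1);
        pthread_create(&w, NULL, writer, NULL);
        for (int i = 0; i < 3; i++)
            pthread_create(&r[i], NULL, reader, NULL);
        pthread_join(w, NULL);
        for (int i = 0; i < 3; i++)
            pthread_join(r[i], NULL);
        return 0;
    }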

Page 22:

End of Lecture 8

Thank you!