Unit III: CONCURRENCY AND SCHEDULING
Principles of Concurrency, Mutual Exclusion, Semaphores, Monitors, Readers/Writers problem, Deadlocks (Prevention, Avoidance, Detection), Scheduling (Types of Scheduling, Scheduling algorithms)

Jan 02, 2022 (uploaded by dariahiddleston)
Transcript
Page 1: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Unit III: CONCURRENCY AND SCHEDULING

Principles of Concurrency

Mutual Exclusion

Semaphores

Monitors

Readers/Writers problem.

Deadlocks –

Prevention

Avoidance

Detection

Scheduling

Types of Scheduling

Scheduling algorithms.

Page 2: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Roadmap

• Principles of Concurrency

• Mutual Exclusion: Hardware Support

• Semaphores

• Monitors

• Message Passing

• Readers/Writers Problem

Page 3: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Multiple Processes

• Central to the design of modern Operating Systems is managing multiple processes
– Multiprogramming (multiple processes within a uniprocessor system)
– Multiprocessing (multiple processes within a multiprocessor)
– Distributed Processing (multiple processes executing on multiple, distributed processors)

• The big issue is Concurrency – managing the interaction of all of these processes

Page 4: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Concurrency

Concurrency arises in:

• Multiple applications
– Sharing processing time

• Structured applications
– Extension of modular design (structured programming)

• Operating system structure
– The OS itself is implemented as a set of processes or threads

Page 5: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Key Terms

Page 6: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Interleaving and Overlapping Processes

• Earlier (Ch 2) we saw that processes may be interleaved on uniprocessors

Page 7: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Interleaving and Overlapping Processes

• And not only interleaved but overlapped on multi-processors

Page 8: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Difficulties of Concurrency

• Sharing of global resources
• Optimally managing the allocation of resources
• Difficult to locate programming errors, as results are not deterministic and reproducible

Page 9: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

A Simple Example

void echo()
{
    chin = getchar();
    chout = chin;
    putchar(chout);
}

Page 10: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

A Simple Example: On a Multiprocessor

Process P1                 Process P2
chin = getchar();          .
.                          chin = getchar();
chout = chin;              chout = chin;
putchar(chout);            .
.                          putchar(chout);

Page 11: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Enforce Single Access

• If we enforce a rule that only one process

may enter the function at a time then:

• P1 & P2 run on separate processors

• P1 enters echo first,

– P2 tries to enter but is blocked – P2 suspends

• P1 completes execution

– P2 resumes and executes echo

Page 12: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Race Condition

• A race condition occurs when
– Multiple processes or threads read and write data items
– They do so in a way where the final result depends on the order of execution of the processes
• The output depends on who finishes the race last.

Page 13: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Operating System Concerns

• What design and management issues are raised by the existence of concurrency?
• The OS must
– Keep track of the various processes (PCBs)
– Allocate and de-allocate resources (processor time, memory, files, I/O devices)
– Protect the data and resources against interference by other processes
– Ensure that processes and outputs are independent of processing speed

Page 14: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Process Interaction

Page 15: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Competition among

Processes for Resources

Three main control problems:

• Need for Mutual Exclusion

– Critical sections

• Deadlock

• Starvation

Page 16: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Requirements for Mutual Exclusion

• Only one process at a time is allowed in the critical section for a resource
• A process that halts in its noncritical section must do so without interfering with other processes
• No deadlock or starvation

Page 17: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Requirements for Mutual Exclusion

• A process must not be delayed access to a critical section when there is no other process using it
• No assumptions are made about relative process speeds or number of processes
• A process remains inside its critical section for a finite time only

Page 18: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Roadmap

• Principles of Concurrency

• Mutual Exclusion: Hardware Support

• Semaphores

• Monitors

• Message Passing

• Readers/Writers Problem

Page 19: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Disabling Interrupts

• Uniprocessors only allow interleaving
• Interrupt Disabling
– A process runs until it invokes an operating system service or until it is interrupted
– Disabling interrupts guarantees mutual exclusion
– Will not work in a multiprocessor architecture

Page 20: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Pseudo-Code

while (true) {
    /* disable interrupts */;
    /* critical section */;
    /* enable interrupts */;
    /* remainder */;
}

Page 21: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Special Machine Instructions

• Compare&Swap Instruction
– also called a "compare and exchange instruction"
• Exchange Instruction
• A COMPARE is made between a memory value and a test value; if the values are the same, a swap occurs.

Page 22: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Compare&Swap Instruction

int compare_and_swap (int *word, int testval, int newval)
{
    int oldval;
    oldval = *word;
    if (oldval == testval) *word = newval;
    return oldval;
}

Page 23: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Mutual Exclusion (fig 5.2)

Page 24: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Exchange instruction

void exchange (int register, int memory)
{
    int temp;
    temp = memory;
    memory = register;
    register = temp;
}

Page 25: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Exchange Instruction

(fig 5.2)

Page 26: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Hardware Mutual Exclusion: Advantages

• Applicable to any number of processes on either a single processor or multiple processors sharing main memory
• It is simple and therefore easy to verify
• It can be used to support multiple critical sections

Page 27: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Hardware Mutual Exclusion: Disadvantages

• Busy-waiting consumes processor time
• Starvation is possible when a process leaves a critical section and more than one process is waiting
– Some process could be denied access indefinitely
• Deadlock is possible

Page 28: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Roadmap

• Principles of Concurrency

• Mutual Exclusion: Hardware Support

• Semaphores

• Monitors

• Message Passing

• Readers/Writers Problem

Page 29: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Semaphore

• Semaphore:
– An integer value used for signalling among processes
• Only three operations may be performed on a semaphore, all of which are atomic:
– initialize
– decrement (semWait)
– increment (semSignal)

Page 30: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Semaphore Primitives

Page 31: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Binary Semaphore

Primitives

Page 32: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Strong/Weak Semaphore

• A queue is used to hold processes waiting on the semaphore
– In what order are processes removed from the queue?
• Strong Semaphores use FIFO
• Weak Semaphores don't specify the order of removal from the queue

Page 33: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Example of Strong

Semaphore Mechanism

Page 34: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Example of Semaphore Mechanism

Page 35: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Mutual Exclusion Using Semaphores

Page 36: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Processes Using

Semaphore

Page 37: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Producer/Consumer Problem

• General Situation:
– One or more producers are generating data and placing it in a buffer
– A single consumer is taking items out of the buffer one at a time
– Only one producer or consumer may access the buffer at any one time
• The Problem:
– Ensure that the producer can't add data to a full buffer and the consumer can't remove data from an empty buffer

Page 38: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Functions

• Assume an infinite buffer b with a linear array of elements

Producer:
while (true) {
    /* produce item v */
    b[in] = v;
    in++;
}

Consumer:
while (true) {
    while (in <= out) /* do nothing */;
    w = b[out];
    out++;
    /* consume item w */
}

Page 39: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Buffer

Page 40: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Incorrect Solution

Page 41: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Possible Scenario

Page 42: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Correct Solution

Page 43: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Semaphores

Page 44: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Bounded Buffer

Page 45: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Semaphores

Page 46: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Functions in a Bounded Buffer

Producer:
while (true) {
    /* produce item v */
    while ((in + 1) % n == out) /* do nothing */;
    b[in] = v;
    in = (in + 1) % n;
}

Consumer:
while (true) {
    while (in == out) /* do nothing */;
    w = b[out];
    out = (out + 1) % n;
    /* consume item w */
}

Page 47: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Demonstration Animations

• Producer/Consumer
– Illustrates the operation of a producer-consumer buffer
• Bounded-Buffer Problem Using Semaphores
– Demonstrates the bounded-buffer consumer/producer problem using semaphores

Page 48: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Roadmap

• Principles of Concurrency

• Mutual Exclusion: Hardware Support

• Semaphores

• Monitors

• Message Passing

• Readers/Writers Problem

Page 49: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Monitors

• The monitor is a programming-language construct that provides functionality equivalent to that of semaphores and that is easier to control.
• Implemented in a number of programming languages, including
– Concurrent Pascal, Pascal-Plus,
– Modula-2, Modula-3, and Java.

Page 50: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Chief characteristics

• Local data variables are accessible only by the monitor
• A process enters the monitor by invoking one of its procedures
• Only one process may be executing in the monitor at a time

Page 51: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Synchronization

• Synchronization is achieved by condition variables within a monitor
– only accessible by the monitor
• Monitor Functions:
– cwait(c): Suspend execution of the calling process on condition c
– csignal(c): Resume execution of some process blocked after a cwait on the same condition

Page 52: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Structure of a Monitor

Page 53: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Bounded Buffer Solution

Using Monitor

Page 54: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Solution Using Monitor

Page 55: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Bounded

Buffer Monitor

Page 56: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Roadmap

• Principles of Concurrency

• Mutual Exclusion: Hardware Support

• Semaphores

• Monitors

• Message Passing

• Readers/Writers Problem

Page 57: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Process Interaction

• When processes interact with one another, two fundamental requirements must be satisfied:

– synchronization and

– communication.

• Message Passing is one solution to the second requirement

– Added bonus: It works with shared memory and with distributed systems

Page 58: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Message Passing

• The actual function of message passing is normally provided in the form of a pair of primitives:
• send (destination, message)
• receive (source, message)

Page 59: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Synchronization

• Communication requires synchronization
– Sender must send before receiver can receive
• What happens to a process after it issues a send or receive primitive?
– Sender and receiver may or may not be blocking (waiting for the message)

Page 60: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Blocking send, Blocking receive

• Both sender and receiver are blocked until the message is delivered
• Known as a rendezvous
• Allows for tight synchronization between processes

Page 61: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Non-blocking Send

• More natural for many concurrent programming tasks
• Nonblocking send, blocking receive
– Sender continues on
– Receiver is blocked until the requested message arrives
• Nonblocking send, nonblocking receive
– Neither party is required to wait

Page 62: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Addressing

• The sending process needs to be able to specify which process should receive the message
– Direct addressing
– Indirect addressing

Page 63: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Direct Addressing

• The send primitive includes a specific identifier of the destination process
• The receive primitive could know ahead of time from which process a message is expected
• The receive primitive could use the source parameter to return a value when the receive operation has been performed

Page 64: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Indirect addressing

• Messages are sent to a shared data structure consisting of queues
• Queues are called mailboxes
• One process sends a message to the mailbox and the other process picks up the message from the mailbox

Page 65: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Indirect Process Communication

Page 66: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

General Message Format

Page 67: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Mutual Exclusion Using Messages

Page 68: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Producer/Consumer

Messages

Page 69: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Roadmap

• Principles of Concurrency

• Mutual Exclusion: Hardware Support

• Semaphores

• Monitors

• Message Passing

• Readers/Writers Problem

Page 70: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Readers/Writers Problem

• A data area is shared among many processes
– Some processes only read the data area, some only write to it
• Conditions to satisfy:
1. Multiple readers may read the file at once
2. Only one writer at a time may write
3. If a writer is writing to the file, no reader may read it
• These conditions govern the interaction of readers and writers.

Page 71: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Readers have Priority

Page 72: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Writers have Priority

Page 73: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Writers have Priority

Page 74: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Message Passing

Page 75: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Message Passing

Page 76: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Deadlock

• Permanent blocking of a set of processes

that either compete for system resources

or communicate with each other

• No efficient solution

• Involve conflicting needs for resources by

two or more processes

Page 77: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Deadlock

Page 78: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Deadlock

Page 79: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Deadlock

Page 80: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Reusable Resources

• Used by only one process at a time and

not depleted by that use

• Processes obtain resources that they later

release for reuse by other processes

Page 81: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Reusable Resources

• Processors, I/O channels, main and secondary memory, devices, and data structures such as files, databases, and semaphores
• Deadlock occurs if each process holds one resource and requests the other

Page 82: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Reusable Resources

Page 83: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Reusable Resources

• Space is available for allocation of 200 Kbytes, and the following sequence of events occurs:

P1                       P2
Request 80 Kbytes;       Request 70 Kbytes;
Request 60 Kbytes;       Request 80 Kbytes;

• Deadlock occurs if both processes progress to their second request

Page 84: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Consumable Resources

• Created (produced) and destroyed (consumed)
• Interrupts, signals, messages, and information in I/O buffers
• Deadlock may occur if a Receive message is blocking
• May take a rare combination of events to cause deadlock

Page 85: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Example of Deadlock

• Deadlock occurs if both receives are blocking:

P1                  P2
Receive(P2);        Receive(P1);
Send(P2, M1);       Send(P1, M2);

Page 86: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Resource Allocation Graphs

• A directed graph that depicts a state of the system of resources and processes

Page 87: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Conditions for Deadlock

• Mutual exclusion
– Only one process may use a resource at a time
• Hold-and-wait
– A process may hold allocated resources while awaiting assignment of others

Page 88: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Conditions for Deadlock

• No preemption
– No resource can be forcibly removed from a process holding it
• Circular wait
– A closed chain of processes exists, such that each process holds at least one resource needed by the next process in the chain

Page 89: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Resource Allocation Graphs

Page 90: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Resource Allocation Graphs

Page 91: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Possibility of Deadlock

• Mutual Exclusion

• No preemption

• Hold and wait

Page 92: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Existence of Deadlock

• Mutual Exclusion

• No preemption

• Hold and wait

• Circular wait

Page 93: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Deadlock Prevention

• Mutual Exclusion
– Must be supported by the OS
• Hold and Wait
– Require a process to request all of its required resources at one time

Page 94: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Deadlock Prevention

• No Preemption
– A process must release its resources and request them again
– The OS may preempt a process to require that it release its resources
• Circular Wait
– Define a linear ordering of resource types

Page 95: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Deadlock Avoidance

• A decision is made dynamically whether the current resource allocation request will, if granted, potentially lead to a deadlock
• Requires knowledge of future process requests

Page 96: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Two Approaches to Deadlock Avoidance

• Do not start a process if its demands might lead to deadlock
• Do not grant an incremental resource request to a process if this allocation might lead to deadlock

Page 97: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Resource Allocation Denial

• Referred to as the banker's algorithm
• The state of the system is the current allocation of resources to processes
• A safe state is one in which there is at least one sequence that does not result in deadlock
• An unsafe state is a state that is not safe

Page 98: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Determination of a Safe State

Page 99: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Determination of a Safe State

Page 100: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Determination of a Safe State

Page 101: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Determination of a Safe State

Page 102: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Determination of an Unsafe State

Page 103: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Deadlock Avoidance Logic

Page 104: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Deadlock Avoidance Logic

Page 105: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Deadlock Avoidance

• Maximum resource requirement must be stated in advance
• Processes under consideration must be independent; no synchronization requirements
• There must be a fixed number of resources to allocate
• No process may exit while holding resources

Page 106: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Deadlock Detection

Page 107: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Strategies Once Deadlock Detected

• Abort all deadlocked processes
• Back up each deadlocked process to some previously defined checkpoint, and restart all processes
– The original deadlock may recur

Page 108: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Strategies Once Deadlock Detected

• Successively abort deadlocked processes until deadlock no longer exists
• Successively preempt resources until deadlock no longer exists

Page 109: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Advantages and Disadvantages

Page 110: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Dining Philosophers Problem

Page 111: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Dining Philosophers Problem

Page 112: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Dining Philosophers Problem

Page 113: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Dining Philosophers Problem

Page 114: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Dining Philosophers Problem

Page 115: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

UNIX Concurrency Mechanisms

• Pipes

• Messages

• Shared memory

• Semaphores

• Signals

Page 116: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

UNIX Signals

Page 117: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Linux Kernel Concurrency Mechanisms

• Includes all the mechanisms found in UNIX
• Atomic operations execute without interruption and without interference

Page 118: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Linux Atomic Operations

Page 119: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Linux Atomic Operations

Page 120: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Linux Spinlocks

Page 121: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Linux Semaphores

Page 122: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Linux Memory Barrier Operations

Page 123: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Solaris Thread Synchronization Primitives

• Mutual exclusion (mutex) locks
• Semaphores
• Multiple readers, single writer (readers/writer) locks
• Condition variables

Page 124: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Solaris Synchronization Data Structures

Page 125: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Windows Synchronization Objects

Page 126: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Chapter 9: Uniprocessor Scheduling

Operating Systems: Internals and Design Principles, 6/E
William Stallings

Dave Bremer, Otago Polytechnic, N.Z.
©2008, Prentice Hall

Page 127: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Roadmap

• Types of Processor Scheduling

• Scheduling Algorithms

• Traditional UNIX Scheduling

Page 128: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Scheduling

• An OS must allocate resources amongst competing processes.
• The resource provided by a processor is execution time
– The resource is allocated by means of a schedule

Page 129: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Overall Aim of Scheduling

• The aim of processor scheduling is to assign processes to be executed by the processor over time,
– in a way that meets system objectives, such as response time, throughput, and processor efficiency.

Page 130: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Scheduling Objectives

• The scheduling function should
– Share time fairly among processes
– Prevent starvation of a process
– Use the processor efficiently
– Have low overhead
– Prioritise processes when necessary (e.g. real-time deadlines)

Page 131: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Types of Scheduling

Page 132: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Two Suspend States

• Remember this diagram from Chapter 3

Page 133: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Scheduling and

Process State Transitions

Page 134: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Nesting of

Scheduling Functions

Page 135: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Queuing Diagram

Page 136: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Long-Term Scheduling

• Determines which programs are admitted to the system for processing
– May be first-come-first-served
– Or according to criteria such as priority, I/O requirements or expected execution time
• Controls the degree of multiprogramming
• The more processes there are, the smaller the percentage of time each process is executed

Page 137: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Medium-Term Scheduling

• Part of the swapping function
• Swapping-in decisions are based on the need to manage the degree of multiprogramming

Page 138: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Short-Term Scheduling

• Known as the dispatcher
• Executes most frequently
• Invoked when an event occurs
– Clock interrupts
– I/O interrupts
– Operating system calls
– Signals

Page 139: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Roadmap

• Types of Processor Scheduling

• Scheduling Algorithms

• Traditional UNIX Scheduling

Page 140: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Aim of Short-Term Scheduling

• The main objective is to allocate processor time to optimize certain aspects of system behaviour.
• A set of criteria is needed to evaluate the scheduling policy.

Page 141: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Short-Term Scheduling Criteria: User vs System

• We can differentiate between user and system criteria
• User-oriented
– Response Time
• Elapsed time between the submission of a request until there is output.
• System-oriented
– Effective and efficient utilization of the processor

Page 142: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Short-Term Scheduling Criteria: Performance

• We could differentiate between performance-related criteria and those unrelated to performance
• Performance-related
– Quantitative, easily measured
– E.g. response time and throughput
• Non-performance related
– Qualitative
– Hard to measure

Page 143: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Interdependent

Scheduling Criteria

Page 144: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Interdependent

Scheduling Criteria cont.

Page 145: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Priorities

• Scheduler will always choose a process of

higher priority over one of lower priority

• Have multiple ready queues to represent

each level of priority

Page 146: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Priority Queuing

Page 147: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Starvation

• Problem:
– Lower-priority processes may suffer starvation if there is a steady supply of high-priority processes.
• Solution:
– Allow a process to change its priority based on its age or execution history

Page 148: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Alternative Scheduling

Policies

Page 149: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Selection Function

• Determines which process is selected for execution
• If based on execution characteristics, then important quantities are:
• w = time spent in system so far, waiting
• e = time spent in execution so far
• s = total service time required by the process, including e

Page 150: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Decision Mode

• Specifies the instants in time at which the selection function is exercised.
• Two categories:
– Nonpreemptive
– Preemptive

Page 151: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Nonpreemptive vs Preemptive

• Non-preemptive
– Once a process is in the running state, it will continue until it terminates or blocks itself for I/O
• Preemptive
– The currently running process may be interrupted and moved to the ready state by the OS
– Preemption may occur when a new process arrives, on an interrupt, or periodically.

Page 152: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Process Scheduling Example

• Example set of processes; consider each a batch job
– Service time represents total execution time

Page 153: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

First-Come-First-Served

• Each process joins the Ready queue
• When the current process ceases to execute, the process that has been in the Ready queue the longest is selected

Page 154: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

First-Come-First-Served

• A short process may have to wait a very long time before it can execute
• Favors CPU-bound processes
– I/O-bound processes have to wait until the CPU-bound process completes

Page 155: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Round Robin

• Uses preemption based on a clock

– also known as time slicing, because each

process is given a slice of time before being

preempted.

Page 156: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Round Robin

• A clock interrupt is generated at periodic intervals
• When an interrupt occurs, the currently running process is placed in the ready queue
– The next ready job is selected

Page 157: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Effect of Size of

Preemption Time Quantum

Page 158: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Effect of Size of

Preemption Time Quantum

Page 159: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

‘Virtual Round Robin’

Page 160: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Shortest Process Next

• Nonpreemptive policy
• The process with the shortest expected processing time is selected next
• A short process jumps ahead of longer processes

Page 161: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Shortest Process Next

• Predictability of longer processes is reduced
• If the estimated time for a process is not correct, the operating system may abort it
• Possibility of starvation for longer processes

Page 162: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Calculating Program 'Burst'

• A simple average of observed bursts can serve as the prediction:
  S[n+1] = (1/n)(T[1] + ... + T[n]) = (1/n)T[n] + ((n-1)/n)S[n]
• Where:
– T[i] = processor execution time for the ith instance of this process
– S[i] = predicted value for the ith instance
– S[1] = predicted value for first instance; not calculated

Page 163: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Exponential Averaging

• A common technique for predicting a future value on the basis of a time series of past values is exponential averaging

Page 164: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Exponential Smoothing Coefficients

Page 165: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Use Of Exponential

Averaging

Page 166: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Use Of

Exponential Averaging

Page 167: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Shortest Remaining Time

• Preemptive version of the shortest-process-next policy
• Must estimate processing time and choose the shortest

Page 168: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Highest Response Ratio Next

• Choose the next process with the greatest response ratio R = (w + s) / s, using the w (waiting time) and s (service time) defined for the selection function

Page 169: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Feedback Scheduling

• Penalize jobs that have been running longer
• Don't know the remaining time a process needs to execute

Page 170: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Feedback Performance

• Variations exist; a simple version preempts periodically, similar to round robin
– But this can lead to starvation

Page 171: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Performance Comparison

• Any scheduling discipline that chooses the next item to be served independent of service time obeys the relationship:

Page 172: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Formulas

Page 173: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Overall Normalized

Response Time

Page 174: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Normalized Response

Time for Shorter Process

Page 175: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Normalized Response

Time for Longer Processes

Page 176: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Normalized

Turnaround Time

Page 177: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Fair-Share Scheduling

• A user's application runs as a collection of processes (threads)
• The user is concerned about the performance of the application
• Need to make scheduling decisions based on process sets

Page 178: CONCURRENCY AND SCHEDULING Principles of Concurrency ...

Fair-Share Scheduler