Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment (then and now)

EECS 750 – Spring 2006

Presented by: Shane Santner, TJ Staley, Ilya Tabakh

Agenda

• Intro

• The Paper

• Current state of the art

• Differences between then and now

• Shortcomings of Rate Monotonic Analysis

• Conclusion


Intro

• Scheduling is a problem that has been studied for many years

• Hard real-time scheduling presents its own set of unique problems

• Rate Monotonic Scheduling is one approach to the problem of hard real-time scheduling

Agenda

• Intro

• The Paper

• Current state of the art

• Differences between then and now

• Shortcomings of Rate Monotonic Analysis

• Conclusion

The Paper

Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment

C. L. Liu and James W. Layland

© 1973

Introduction

• Hard Real-Time – Tasks must be guaranteed to complete within a predefined amount of time

• Soft Real-Time – Statistical distribution of response-time of tasks is acceptable

Background

• Rate Monotonic refers to assigning priorities as a monotonic function of the rate (frequency of occurrence) of those processes.

• Rate Monotonic Scheduling (RMS) can be accomplished based upon rate monotonic principles.

• Rate Monotonic Analysis (RMA) can be performed statically on any hard real-time system concept to decide if the system is schedulable.

Background

• We will examine three different types of algorithms:
– Fixed Priority Assignment (Static)
– Deadline Driven Priority Assignment (Dynamic)
– Mixed Priority Assignment

Environment

• Paper makes five assumptions about tasks within a hard real-time system:
– Requests for all tasks with hard deadlines are periodic, with a constant interval between requests
– Each task must complete before the next request for it occurs
– Tasks are independent of each other
– Run-time for each task is constant
– All non-periodic tasks are special
• Initialization tasks
• Failure routines

Environment

• The restrictions previously mentioned can be applied to a controlled environment such as an assembly line where:
– Tasks are periodic
– Tasks are sequential, therefore they complete before the next request is issued
– Tasks are dependent on each other, but in a controlled environment this can be handled
– Run-time for each task is constant
– Power-on initialization would be the only non-periodic task

Fixed Priority Scheduling Algorithm

• Assign the priority of each task according to its period, so that the shorter the period the higher the priority.

• Tasks are defined based on:
– Tasks are denoted: t1, t2, ..., tm
– Request periods: T1, T2, ..., Tm
– Run-times: C1, C2, ..., Cm

RMA Example

Figure 1. Both possible outcomes for static-priority scheduling with two tasks (T1=50, C1=25, T2=100, C2=40)

Case 1: Priority(t1) > Priority(t2)

Case 2: Priority(t2) > Priority(t1)

RMA Example

• Some task sets are not schedulable

Figure 2. Some task sets aren't schedulable (T1=50, C1=25, T2=70, C2=30)

Achievable Processor Utilization

[Figure: least upper bound on achievable processor utilization vs. number of tasks; the bound falls from 1.0 toward ln(2) ≈ 0.693 as the number of tasks grows]

• Processor utilization is defined as the fraction of processor time spent in the execution of the task set:

U = Σ (Ci / Ti), summed over i = 1, ..., m

• The least upper bound on utilization for m tasks under fixed priority scheduling is:

W(m) = m (2^(1/m) − 1)
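These formulas are easy to check numerically. The sketch below (using the (period, run-time) pairs from Figure 1; the function names are my own) computes a task set's utilization and the Liu & Layland least upper bound W(m). Note that utilization above W(m) only makes the bound test inconclusive; it does not prove the set unschedulable:

```python
def utilization(tasks):
    """U = sum(Ci / Ti) over all tasks, given (Ti, Ci) pairs."""
    return sum(c / t for t, c in tasks)

def rm_bound(m):
    """Least upper bound W(m) = m * (2^(1/m) - 1) for m tasks."""
    return m * (2 ** (1 / m) - 1)

tasks = [(50, 25), (100, 40)]   # (period, run-time) from Figure 1
u = utilization(tasks)          # 0.5 + 0.4 = 0.9
bound = rm_bound(len(tasks))    # 2 * (sqrt(2) - 1), about 0.828
# u exceeds the bound, so the bound test is inconclusive here;
# the exact analysis in Figure 1 shows the set is in fact schedulable.
```

As a sanity check, W(1) = 1 (a single task may use the whole processor), and W(m) decreases toward ln(2) as m grows.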

Relaxing the Utilization Bound

• The utilization bound decreases as the number of tasks in the system increases. Under fixed priority scheduling the bound converges to ln(2) ≈ 69.3% processor utilization.

• By incorporating dynamic priority assignment of tasks in a hard real-time system we can achieve 100% processor utilization

• This method is optimal in the sense that if a set of tasks can be scheduled by any algorithm, it can also be scheduled by the deadline driven scheduling algorithm.

• This implies that any set of tasks schedulable under the fixed priority scheduling algorithm is also schedulable under dynamic priority scheduling, with achievable processor utilization approaching 100%.

The Deadline Driven Scheduling Algorithm

• Priorities are assigned to tasks according to the deadlines of their current requests.

• A task is assigned the highest priority if the deadline of its current request is the nearest.

• Conversely, a task is assigned the lowest priority if the deadline of its current request is the furthest.

• At any instant, the task with the highest priority and a yet unfulfilled request is executed.

Deadline Driven Scheduling Algorithm

Figure 3. (T1=50, C1=25, T2=70, C2=30)

• The deadline driven algorithm can schedule this task set, which was not schedulable with the fixed priority algorithm.
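The difference is easy to reproduce with a small discrete-time simulation. The sketch below is my own illustration, not code from the paper: it runs a unit-time preemptive scheduler over one hyperperiod of the task set from Figures 2 and 3, once with rate monotonic priorities (shortest period first) and once with deadline driven priorities (earliest deadline first):

```python
def simulate(tasks, horizon, dynamic):
    """Unit-time preemptive scheduling of (period, run-time) tasks.

    Deadlines equal periods. Returns (task_index, time) pairs for every
    deadline miss; a missed request is dropped in favor of the new one.
    dynamic=False -> rate monotonic, dynamic=True -> earliest deadline first.
    """
    remaining = [0] * len(tasks)   # unfinished work of the current request
    deadline = [0] * len(tasks)    # absolute deadline of the current request
    missed = []
    for t in range(horizon):
        for i, (period, cost) in enumerate(tasks):
            if t % period == 0:            # a new request arrives
                if remaining[i] > 0:       # old request still unfinished
                    missed.append((i, t))
                remaining[i] = cost
                deadline[i] = t + period
        ready = [i for i in range(len(tasks)) if remaining[i] > 0]
        if ready:
            key = (lambda i: deadline[i]) if dynamic else (lambda i: tasks[i][0])
            remaining[min(ready, key=key)] -= 1
    return missed

tasks = [(50, 25), (70, 30)]                       # Figures 2 and 3
rm_misses = simulate(tasks, 350, dynamic=False)    # t2 misses at t = 70
edf_misses = simulate(tasks, 350, dynamic=True)    # no misses
```

Simulating one hyperperiod (LCM of 50 and 70 = 350) is enough here because all tasks are released together at t = 0.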

A Mixed Scheduling Algorithm

• Tasks with small execution periods are scheduled using the fixed priority algorithm. All other tasks are scheduled dynamically and can only run on the CPU when all fixed priority tasks have completed.

• Originally motivated by interrupt hardware limitations:
– Interrupt hardware acted as a fixed priority scheduler
– Did not appear to be compatible with a hardware dynamic scheduler

Comparison and Comments

• The fixed scheduling algorithm has a least upper-bound processor utilization of around 70%. This can be quite restrictive when processor utilization is critical.

• Therefore, the deadline driven scheduling algorithm was developed to increase this bound to a theoretical limit of 100%.

• Finally, the mixed scheduling algorithm will not be able to achieve 100% processor utilization, however it will be significantly better than the limitation of 70% for the fixed scheduling algorithm

Conclusion

• Five key assumptions were made to support the ensuing analytical work.
– Least defensible are:
• All tasks have periodic requests
• Run-times are constant

• These two assumptions should be design goals for any real-time tasks which must receive guaranteed service

Conclusion

• A scheduling algorithm which assigns priorities to tasks in a monotonic relation to their request rates was shown to be optimum among the class of all fixed priority scheduling algorithms.

• The least upper bound of processor utilization for this algorithm is on the order of 70% for large task sets.

• The dynamic deadline driven scheduling algorithm was then shown to be globally optimum and capable of achieving full processor utilization.

• A combination of the two scheduling algorithms appears to provide most of the benefits of the deadline driven scheduling algorithm, and yet may be readily implemented in existing computers with interrupt limitations.

Agenda

• Intro

• The Paper

• Current state of the art

• Differences between then and now

• Shortcomings of Rate Monotonic Analysis

• Conclusion

Current State of the Art

• Rate Monotonic Analysis is the methodology which has evolved from the paper and is defined as:
– A collection of quantitative methods and algorithms that allow engineers to specify, understand, analyze and predict the timing behavior of real-time software systems.

Agenda

• Intro

• The Paper

• Current state of the art

• Differences between then and now

• Shortcomings of Rate Monotonic Analysis

• Conclusion

What has changed?

• The original paper was very restrictive

• Lots of work has been done in order to extend capabilities of RMA

• Rules are made to be broken!

Assumption 1 - Problem

Original Assumption 1: The requests for all tasks for which hard deadlines exist are periodic, with a constant interval between requests.

Limitation: eliminates most real-time systems from consideration by eliminating sporadic events

Solution: periodic task to poll aperiodic events, priority exchange protocol, deferrable server protocol, sporadic server protocol

Priority Exchange Protocol

• Periodic task created to process sporadic events

• If no sporadic events, server exchanges priority with periodic task

• Lets the periodic task run until it completes or until a sporadic task is encountered

• When a sporadic task is encountered it runs at the server's current priority level

• Replenishment of the server’s execution time occurs periodically

Priority Exchange Protocol

Pros:
• Fast average response times

Cons:
• Implementation complexity
• Unnecessary accrual of run-time overhead

Deferrable Server Protocol

• Similar to priority exchange protocol

• If no sporadic requests are pending, suspend server until request arrives

• When sporadic task arrives, if server has allotted time left, task executing is preempted

Deferrable Server Protocol

Pros:

• Low implementation complexity

• Accrues little run-time overhead

Cons:

• Violates fourth assumption (oops)

Sporadic Server Protocol

• Fundamental difference between the deferrable server and the sporadic server is how the servers replenish

• Deferrable server gets full amount replenished at beginning of each period

• Sporadic server replenishes the amount of exec time consumed by each task one period after it is consumed
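The replenishment difference can be modeled with just the budget bookkeeping. The classes below are a simplified sketch (names and structure are my own; a full sporadic server schedules replenishment from the time the server becomes active, which this sketch approximates by the consumption time, matching the slide's description):

```python
class DeferrableServer:
    """Budget refilled to full capacity at every period boundary."""
    def __init__(self, capacity, period):
        self.capacity = capacity
        self.period = period
        self.budget = capacity

    def on_tick(self, t):
        if t % self.period == 0:
            self.budget = self.capacity      # full refill, regardless of usage

    def consume(self, t, amount):
        used = min(amount, self.budget)      # serve up to remaining budget
        self.budget -= used
        return used

class SporadicServer:
    """Each consumed chunk comes back one period after it was consumed."""
    def __init__(self, capacity, period):
        self.capacity = capacity
        self.period = period
        self.budget = capacity
        self.pending = []                    # (replenish_time, amount) pairs

    def on_tick(self, t):
        while self.pending and self.pending[0][0] <= t:
            _, amt = self.pending.pop(0)
            self.budget = min(self.capacity, self.budget + amt)

    def consume(self, t, amount):
        used = min(amount, self.budget)
        self.budget -= used
        if used:
            self.pending.append((t + self.period, used))
        return used
```

With capacity 5 and period 10, consuming 2 units at t = 3 leaves both servers with budget 3; at t = 10 the deferrable server is back to 5, while the sporadic server stays at 3 until t = 13.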

Sporadic Server Protocol

Pros:
• Does not accrue as much run-time overhead
• Does not require as much implementation complexity
• Replenishment scheme does not periodically require maintenance processing

Cons:
• Still violates the fourth assumption, but schedulability has been demonstrated

Relaxing Assumption 1

• With the sporadic server, the "all tasks with hard deadlines are periodic" part of the assumption is relaxed

• The remaining part of the assumption (a constant interval between requests) can be addressed by either taking the minimum possible inter-arrival time as the task period or employing a mode change (discussed under assumption 4)

Assumption 2 – No problem

Original Assumption 2:

Each task must complete before the next request for it occurs

Solution:

The run-time system doesn't have to monitor the periods of all tasks; it could also buffer requests and feed one per period
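The buffering idea can be sketched as a small gate that queues incoming requests and releases at most one per period (a hypothetical illustration; the class and method names are my own):

```python
from collections import deque

class PeriodicGate:
    """Buffers requests and releases at most one per period."""
    def __init__(self, period):
        self.period = period
        self.queue = deque()
        self.next_release = 0    # earliest time the next request may be fed

    def arrive(self, request):
        self.queue.append(request)   # requests may arrive at any rate

    def poll(self, t):
        # Feed one buffered request, but only once per period.
        if t >= self.next_release and self.queue:
            self.next_release = t + self.period
            return self.queue.popleft()
        return None
```

Two back-to-back arrivals are then served one period apart, so the scheduler still sees at most one outstanding request per period.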

Assumption 3 – Big Problem

Original Assumption 3: The tasks are independent, in that requests for a certain task do not depend on the initiation or the completion of requests for other tasks

Limitation: No inter-task communication, no semaphores, no i/o

Solution: priority inheritance protocol, priority ceiling protocol

Priority Inheritance

• Applies if a lower priority task blocks a higher priority task

• Lower priority task inherits the priority of higher task for the duration of its critical section

Priority Inheritance

• Used to address unbounded priority inversion

• Unbounded priority inversion can arise once semaphores are introduced
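The inheritance rule on this slide can be sketched as a toy mutex (an illustrative sketch only; names are my own, and real implementations must also handle wait queues and transitive inheritance):

```python
class Task:
    def __init__(self, name, priority):
        self.name = name
        self.base_priority = priority
        self.priority = priority    # effective (possibly inherited) priority

class PIMutex:
    """Mutex with basic priority inheritance."""
    def __init__(self):
        self.owner = None

    def acquire(self, task):
        if self.owner is None:
            self.owner = task
            return True             # lock taken
        # Blocked: the lower priority owner inherits the blocked task's
        # higher priority for the duration of its critical section.
        if task.priority > self.owner.priority:
            self.owner.priority = task.priority
        return False                # caller must wait

    def release(self):
        self.owner.priority = self.owner.base_priority  # drop inheritance
        self.owner = None
```

While the low priority owner runs at the inherited priority, a medium priority task can no longer preempt it indefinitely, which is exactly what bounds the inversion.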

Priority Ceiling

• Every semaphore is given a ceiling priority which is at least the priority of the highest priority task that can lock the semaphore

• This ensures that the process running will be hoisted to the higher priority and allowed to run

Priority Ceiling

• Developed in order to eliminate deadlock
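The ceiling rule on this slide can be sketched in a few lines (an illustrative sketch of the "hoisting" variant the slide describes, where the locker immediately runs at the ceiling; names are my own, and the uncontended-lock restriction keeps the example minimal):

```python
class Task:
    def __init__(self, name, priority):
        self.name = name
        self.base_priority = priority
        self.priority = priority

class CeilingMutex:
    """Semaphore with a static priority ceiling."""
    def __init__(self, user_priorities):
        # Ceiling: at least the priority of the highest priority task
        # that can ever lock this semaphore.
        self.ceiling = max(user_priorities)
        self.owner = None

    def lock(self, task):
        assert self.owner is None, "sketch handles the uncontended case only"
        self.owner = task
        task.priority = max(task.priority, self.ceiling)   # hoist to ceiling

    def unlock(self):
        self.owner.priority = self.owner.base_priority     # un-hoist
        self.owner = None
```

Because the holder runs at the ceiling, no other task that could lock the same semaphore can preempt it inside the critical section, which is what rules out deadlock cycles.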

Assumption 4

Original Assumption 4: Run-time for each task is a constant upper bound for that task and does not vary with time

Limitation: cannot add or remove tasks from the task set, nor dramatically alter the time required to process a currently existing task

Solution: ceiling priority protocol (under certain conditions)

Assumption 4 conditions

1) For every unlocked semaphore S whose priority ceiling needs to be raised, S's ceiling is raised immediately

2) For every locked semaphore S whose priority ceiling needs to be raised, S's ceiling is raised as soon as S is unlocked

3) For every semaphore S whose priority ceiling needs to be lowered, S's priority ceiling is lowered when all the tasks which may lock S, and which have priorities greater than the new priority ceiling of S, are deleted

4) If task T's priority is higher than the priority ceilings of locked semaphores S1,…,Sk, which it may lock, the priority ceilings of those semaphores must first be raised before adding T

5) A task T which needs to be deleted can be deleted immediately upon the initiation of a mode change; if T has already been initiated, it may be deleted only after completion

6) A task may be added into the system only if sufficient processor capacity exists

Assumption 5

Original Assumption 5: Any aperiodic tasks in the system are special … and do not themselves have hard, critical deadlines

Limitation: no interrupts

Solution: can be relaxed by utilizing techniques in relaxing the 1st and 3rd assumptions

Agenda

• Intro

• The Paper

• Current state of the art

• Differences between then and now

• Shortcomings of Rate Monotonic Analysis

• Conclusion

Shortcomings of RMA

• Tends to be pessimistic

• Scheduling overhead is never taken into account

• Not a good fit if the use case doesn't conform with the assumptions:
– Most of the workload is aperiodic
– The worst case is realized very infrequently
– There is no minimum inter-arrival interval between thread invocations

Agenda

• Intro

• The Paper

• Current state of the art

• Differences between then and now

• Shortcomings of Rate Monotonic Analysis

• Conclusion

Conclusion

• RMS was a good start, but had many drawbacks
– Overly restrictive

• RMA extends RMS, relaxing many restrictions

• RMA is a useful methodology, but should only be considered a guideline

• Not appropriate for all situations

Sources

Heidmann, Paul. Rate Monotonic Analysis Paper. 8 Apr. 2006 <http://www.heidmann.com/paul/rma/PAPER.htm>.

Klein, Mark. "Rate Monotonic Analysis." Software Technology Roadmap. 14 Dec. 2005. Carnegie Mellon Software Engineering Institute. 8 Apr. 2006 <http://www.sei.cmu.edu/str/descriptions/rma_body.html>.

Liu, C. L. & Layland, J. W. "Scheduling Algorithms for Multi-Programming in a Hard Real-Time Environment." Journal of the Association for Computing Machinery 20, 1 (January 1973): 40-61.
