Page 1: RTS

The Limited Preemption Uniprocessor Scheduling of Sporadic Task Systems

Sanjoy Baruah - 2005

Page 2: RTS

• Motivation
– From a feasibility perspective, preemptive scheduling strictly dominates non-preemptive scheduling: every task system that is feasible under non-preemptive scheduling is also feasible under preemptive scheduling, but the converse is not always true.
– Preemptive scheduling carries overhead: context switching, and arbitrating access to shared resources inside critical sections, which may be held by only one task at any instant.
– Objective: reduce the number of preemptions.
– Determine the largest chunk size in which each task can be scheduled non-preemptively. If a task needs a shared resource for a duration smaller than this chunk, access to the resource can be arbitrated by simply having the task use the resource non-preemptively.

Page 3: RTS

• The Formulation of the Problem Statement
– A sporadic task system in which every real-time sporadic task is defined by τi = (ei, di, pi), where
  • ei => worst-case execution requirement
  • di => relative deadline
  • pi => minimum inter-arrival separation
– It is further assumed that the task system is schedulable under preemptive scheduling.
– Objective: determine the largest value qi for each τi ∈ τ such that τ remains feasible if the jobs of τi are scheduled in non-preemptive chunks, each of size no larger than qi.
– Inserted idle times are forbidden.

Page 4: RTS

• The EDF Scheduling Algorithm
– Priority-driven scheduling algorithm: the request with the earlier deadline is assigned the higher priority.
– Optimal on a uniprocessor: if EDF cannot schedule a task set on a uniprocessor, no scheduling algorithm can.
– If all tasks are periodic and have relative deadlines equal to their periods, EDF will feasibly schedule the task set as long as its utilization factor satisfies:
  • U = Σ(i=1 to n) Ci / Pi ≤ 1
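The utilization test above can be sketched in a few lines (an illustrative helper, not from the slides; `edf_feasible` is my own name):

```python
# EDF utilization test for periodic tasks with deadlines equal to periods:
# the set is schedulable iff U = sum(Ci / Pi) <= 1.

def edf_feasible(tasks):
    """tasks: list of (C, P) pairs -- worst-case execution time and period."""
    utilization = sum(c / p for c, p in tasks)
    return utilization <= 1.0

# U = 1/2 + 1/4 = 0.75 is schedulable under EDF; adding a task with
# C/P = 0.5 pushes U to 1.25, which is infeasible on a uniprocessor.
print(edf_feasible([(1, 2), (1, 4)]))          # True
print(edf_feasible([(1, 2), (1, 4), (2, 4)]))  # False
```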

Page 5: RTS

• Algorithm Proposed by the Author
– Define the demand bound function DBF(τi, t) as the largest cumulative execution requirement of all jobs that can be generated by τi with both their arrival times and their deadlines within a contiguous interval of length t.
– DBF over an interval [t0, t0 + t) is maximised when the first job arrives at t0 and successive jobs arrive as soon as possible, i.e. at t0 + pi, t0 + 2pi, …
– Approach: compute the largest values of qi such that the infeasibility conditions are not satisfied.

– Suppose a system τ that is not schedulable under EDF, and derive properties it must satisfy.
– As τ is not schedulable, it must generate a legal collection of jobs on which EDF misses some deadline. Let σ(τ) denote the smallest such legal collection of jobs, tf the instant at which the deadline is missed, and ta the earliest arrival time of any job in σ(τ).
  • The processor is never idled over [ta, tf) in the EDF schedule of σ(τ).
  • At most one job in σ(τ) has a deadline greater than tf.
  • If no job has a deadline > tf, then
    Σ(i=1 to n) DBF(τi, tf − ta) > tf − ta
  • Suppose there is one job with deadline > tf. Let τj denote the task that generates this job, and let [t1, t2] denote the last contiguous time interval during which it executes in non-preemptive mode. Then
    qj + Σ(i=1, i≠j to n) DBF(τi, tf − t1) > tf − t1
– From these we conclude that a restricted sporadic task system τ = {(ei, di, pi, qi)}, 1 ≤ i ≤ n, is not schedulable
  • if there exists a t > 0 such that Σ(i=1 to n) DBF(τi, t) > t,
  • or if there is a j and a t with 0 < t < dj such that qj + Σ(i=1, i≠j to n) DBF(τi, t) > t.
– From these conditions, the value of each qi can be calculated iteratively while performing the feasibility analysis.
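As an illustration, the DBF and the chunk computation can be sketched as follows. This is a simplified sketch, not the paper's exact algorithm: integer task parameters are assumed, candidate values of t are restricted to the other tasks' absolute-deadline points, and the function names are mine.

```python
import math

def dbf(e, d, p, t):
    """Demand bound function of a sporadic task (e, d, p) over an interval of
    length t: jobs with both arrival and deadline inside the interval, times e."""
    if t < d:
        return 0
    return (math.floor((t - d) / p) + 1) * e

def largest_chunk(tasks, i):
    """Largest q_i keeping q_i + sum_{j != i} DBF(tau_j, t) <= t at every
    DBF step point t < d_i (a simplification of the iterative analysis)."""
    _, d_i, _ = tasks[i]
    # candidate t values: points where some other task's DBF increases
    points = sorted({d + k * p
                     for j, (e, d, p) in enumerate(tasks) if j != i
                     for k in range(max(0, (d_i - d) // p + 1))
                     if d + k * p < d_i})
    slack = [t - sum(dbf(e, d, p, t)
                     for j, (e, d, p) in enumerate(tasks) if j != i)
             for t in points]
    return min(slack) if slack else d_i   # no interference before d_i in this sketch

# tau_1 = (1, 4, 4), tau_2 = (2, 10, 10): tau_2 may run non-preemptively
# for up to 3 time units without making tau_1 miss a deadline.
print(largest_chunk([(1, 4, 4), (2, 10, 10)], 1))   # 3
```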

Page 6: RTS

• Conclusion
– In uniprocessor systems, preemptive scheduling dominates non-preemptive scheduling with respect to feasibility, but this ignores the overhead of preemption (context switching, resource sharing, etc.), which can be significant for many applications.
– The author gives an algorithm to determine the largest chunk size that each task can execute non-preemptively such that the system still meets all its deadlines.
– Advantages:
  • If a resource is required for a time shorter than this chunk size, resource access can be arbitrated by simply having the task use the resource non-preemptively, instead of using complex resource-sharing algorithms.
  • Run-time scheduling is simplified, since a task that gains the shared processor is known to run for a certain time before being preempted by another task; context-switching overhead is also reduced.
– Other attempts in this direction:
  • Baker, who assigned preemption levels and allowed tasks to preempt only those at a lower level.

Page 7: RTS

Deadline Fair Scheduling: Bridging the Theory and Practice of Proportionate Fair Scheduling in Multiprocessor Systems

Abhishek Chandra, Micah Adler and Prashant Shenoy - 2001

Page 8: RTS

• Motivation
– Streaming audio, video and multiplayer applications have timing constraints, but unlike hard real-time applications, occasional violations of these deadlines do not have catastrophic consequences.
– A P-Fair scheduler allows an application to request xi time units every yi time quanta and guarantees that over any T quanta, a continuously running application receives between floor[(xi / yi) * T] and ceiling[(xi / yi) * T] quanta of service.
– Simulations have shown that in practical use of P-Fair schedulers, asynchrony in scheduling across the processors and frequent arrivals and departures of tasks can cause the system to be non-work-conserving.
– Most research focuses on theoretical analysis of these schedulers; the authors instead consider the practical issues of implementing the scheduler in a multiprocessor operating-system kernel.
– To make the system work-conserving, the P-Fair scheduler is coupled with an auxiliary work-conserving scheduler.
– Also, since this is a multiprocessor environment, processor affinities are considered so that cached data can be reused for better performance.

Page 9: RTS

• The Formulation of the Problem Statement
– Soft real-time constraints (streaming audio, video, online virtual worlds, multiplayer games)
– Multiprocessor system
– Practical issues of implementing P-Fair scheduling in a multiprocessor operating-system kernel
– Combine DFS with an auxiliary work-conserving scheduler to guarantee work-conserving behavior
– Account for processor affinities

Page 10: RTS

• Proportionate Fair Scheduler
– A P-Fair scheduler allows an application to request xi time units every yi time quanta and guarantees that over any T quanta, a continuously running application receives between floor[(xi / yi) * T] and ceiling[(xi / yi) * T] quanta of service.
– Strong notion of fairness: at any given instant, no application is more than one quantum away from its due share.
– Assumptions:
  • Quantum duration is fixed.
  • The set of tasks in the system is fixed.
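The service bounds above can be computed directly (a minimal sketch; `pfair_bounds` is my own name):

```python
import math

def pfair_bounds(x, y, T):
    """Quanta of service a P-Fair scheduler guarantees over any T quanta to a
    continuously running task that requests x time units every y quanta."""
    share = x / y
    return math.floor(share * T), math.ceil(share * T)

# A task with share 1/3 receives either 3 or 4 quanta over any window of 10.
print(pfair_bounds(1, 3, 10))   # (3, 4)
```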

Page 11: RTS

• Algorithm Proposed by the Authors
– Use a modified definition of P-Fairness.
– Let Φi denote the share of processor bandwidth requested by task i in a p-processor system.
– Then over any T time quanta, a continuously running application should receive between floor[(Φi / ΣjΦj) * pT] and ceiling[(Φi / ΣjΦj) * pT] quanta of service.
– DFS schedules each task periodically depending on its share Φi. It uses an eligibility criterion to determine the tasks eligible for scheduling; once scheduled, a task becomes ineligible until its next period begins. Each eligible task is stamped with an internally generated deadline, and DFS schedules the eligible tasks in earliest-deadline-first order.
– Each task is associated with a share Φi, a start tag Si and a finish tag Fi. When a task executes, its start tag is updated at the end of the quantum to Si + q/Φi, where q is the duration for which the task ran. If a suspended task wakes up, Si is set to the maximum of its present value and the current virtual time.
– The finish tag Fi is updated to Si + q′/Φi, where q′ is the maximum time for which the task can run the next time it is scheduled.
– At each scheduling instance, the scheduler determines the eligible tasks using the eligibility criterion and then computes the deadlines for these tasks; both operations use Si and Fi.
– DFS has been proven to be work-conserving under the assumptions of a fixed task set and synchronized, fixed-length quanta. These need not hold in a typical multiprocessor system.
– DFS is therefore combined with an auxiliary scheduler to make the system work-conserving. The scheduler maintains two queues, one containing the eligible tasks and the other containing all tasks. If there are no eligible tasks for DFS to schedule, the auxiliary scheduler falls back to the second queue and schedules a task that is currently ineligible.
– To take processor affinities into account, instead of using the deadline alone as the sorting key, a combination (e.g. linear) of deadline and affinity (affinity is 0 for the processor on which the task last ran and 1 for all others) can be used. The authors call this the goodness factor, and a processor picks the task with the smallest goodness factor; the scheduling is thus global.
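The start/finish-tag bookkeeping above might be sketched as follows (illustrative only: the class and method names are mine, and the kernel implementation in the paper is more involved):

```python
class DfsTask:
    """Toy model of a DFS task with share phi, start tag S and finish tag F."""

    def __init__(self, phi, quantum):
        self.phi = phi                  # requested share of processor bandwidth
        self.q_next = quantum           # max time the task may run when next scheduled
        self.S = 0.0                    # start tag
        self.F = self.S + self.q_next / self.phi   # finish tag

    def ran_for(self, q):
        """Update tags at the end of a quantum in which the task ran for q."""
        self.S += q / self.phi
        self.F = self.S + self.q_next / self.phi

    def wake_up(self, virtual_time):
        """A task that slept cannot reclaim the time it was suspended."""
        self.S = max(self.S, virtual_time)
        self.F = self.S + self.q_next / self.phi

t = DfsTask(phi=0.5, quantum=1.0)
t.ran_for(1.0)
print(t.S, t.F)    # 2.0 4.0
```

A task with a larger share Φ advances its tags more slowly, so it is scheduled more often, which is the intended proportional-share behavior.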

Page 12: RTS

Early Release Fair Scheduling

James H Anderson and Anand Srinivasan - 2000

Page 13: RTS

• Motivation
– P-Fair scheduling algorithms schedule tasks by breaking them into quantum-length subtasks, which are assigned intermediate deadlines.
– In P-Fair, if a subtask of a task executes early in its window, the task becomes ineligible for scheduling until the start of the window of its next subtask.
– Thus, P-Fair is not work-conserving.

Page 14: RTS

• The Formulation of the Problem Statement
– A collection of periodic real-time tasks
– Multiple processors
– Each task T is associated with a period T.p and an execution cost T.e. Every T.p time units, an invocation of T with cost T.e takes place; this is a job of T. Each job of a task must complete before the next job of the same task can begin.
– T may be allocated to different processors over time, provided it is not scheduled on more than one processor at the same time.
– T.e / T.p is the weight of the task. It is assumed that the weight is strictly less than 1 (a weight-1 task would require a dedicated processor, which makes the scheduling decision easy).

Page 15: RTS

• P-Fair Scheduling Algorithm
– A P-Fair scheduler allows an application to request xi time units every yi time quanta and guarantees that over any T quanta, a continuously running application receives between floor[(xi / yi) * T] and ceiling[(xi / yi) * T] quanta of service.
– Strong notion of fairness: at any given instant, no application is more than one quantum away from its due share.
– Assumptions:
  • Quantum duration is fixed.
  • The set of tasks in the system is fixed.
– The lag of a task T is the difference between the amount of time allocated to the task and what would have been allocated to it in an ideal system:
  • lag(T, t) = (T.e / T.p) * t − allocated(T, t)
– A P-Fair schedule requires −1 < lag(T, t) < 1 for all T and t.
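The lag bound is easy to check numerically (a minimal sketch with my own function names):

```python
def lag(weight, t, allocated):
    """lag(T, t) = (T.e / T.p) * t - allocated(T, t)."""
    return weight * t - allocated

def within_pfair_bounds(weight, t, allocated):
    """P-Fairness requires -1 < lag(T, t) < 1 at every instant t."""
    return -1 < lag(weight, t, allocated) < 1

# A weight-1/2 task that has received 2 of 5 quanta has lag 0.5: still P-fair.
print(within_pfair_bounds(0.5, 5, 2))   # True
# The same task with no allocation at all has lag 2.5: the bound is violated.
print(within_pfair_bounds(0.5, 5, 0))   # False
```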

Page 16: RTS

• ER-Fair Scheduling Algorithm
– The Early-Release Fair scheduling algorithm is derived by dropping the −1 lag constraint:
  • lag(T, t) < 1
– Every P-Fair schedule is ER-Fair, but the converse need not be true.
– Every ER-Fair schedule is periodic:
  • lag(T, t) = 0 for t = T.p, 2T.p, 3T.p, …
– The reason is that for these values of t, (T.e / T.p) * t is an integer. By the constraint lag(T, t) < 1, the lag must then be zero or negative; but a negative lag would imply that the task received more time than it requested. Hence lag(T, t) = 0 at these values of t.
– Baruah et al. showed that a periodic task set has a P-Fair schedule on a system of M processors iff Σ(T.e / T.p) ≤ M. Since every P-Fair schedule is ER-Fair, the same feasibility condition holds for ER-Fair schedules.
– Each subtask Ti of T has an associated pseudo-release and pseudo-deadline (referred to simply as release and deadline):
  • r(Ti) = floor[(i − 1) * T.p / T.e]
    – r(Ti) is the first slot into which Ti can be scheduled.
  • d(Ti) = ceiling[i * T.p / T.e] − 1
  • w(Ti) = [r(Ti), d(Ti)] is the window of subtask Ti.
– The only difference between P-Fair and ER-Fair is the eligibility criterion. In P-Fair, a subtask Ti is eligible at time t if t ∈ w(Ti) and Ti−1 has been scheduled prior to t but Ti has not. In ER-Fair, if Ti and Ti+1 are part of the same job, then Ti+1 becomes eligible for execution immediately after Ti executes.
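The window computation can be sketched with exact rational arithmetic (an illustration, not the paper's code: 1-based subtask index, weight w = T.e / T.p):

```python
import math
from fractions import Fraction

def subtask_window(i, e, p):
    """Window [r(Ti), d(Ti)] of the i-th (1-based) quantum-length subtask of a
    task with execution cost e and period p; w = e/p is the task's weight."""
    w = Fraction(e, p)
    r = math.floor((i - 1) / w)    # first slot into which Ti can be scheduled
    d = math.ceil(i / w) - 1       # last slot into which Ti can be scheduled
    return r, d

# A task with e = 2, p = 3 (weight 2/3) has subtask windows [0, 1] and [1, 2].
print(subtask_window(1, 2, 3), subtask_window(2, 2, 3))   # (0, 1) (1, 2)
```

Using `Fraction` avoids the floating-point rounding that would make `floor`/`ceil` unreliable at exact window boundaries.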

Page 17: RTS

• Conclusion
– The authors propose the ER-Fair scheduling algorithm, obtained by dropping the −1 lag constraint of P-Fair scheduling, whereby some subtasks can execute early, before their windows. This overcomes the non-work-conserving nature of P-Fair schedules.
– A hybrid system can be envisaged in which only a few selected tasks are released early. This might be useful if a small subset of the tasks is subject to stringent response-time requirements.
– It may also be possible to determine dynamically when, and by how much, subtasks are released early.

Page 18: RTS

Bounds on the Performance of Heuristic Algorithms for Multiprocessor Scheduling of Hard Real-Time Tasks

Fuxing Wang, Krithi Ramamritham and John A Stankovic - 1992

Page 19: RTS

• Motivation
– List scheduling can be used to determine feasible non-preemptive schedules on multiprocessor systems. While list scheduling has a good worst-case schedule-length bound, it does not have good average-case performance.
– A heuristic alternative is the H scheduling algorithm. It has good average-case performance with respect to meeting deadlines, but a poor worst-case schedule-length bound.
– The goal is therefore to combine the features of the list and H scheduling algorithms, so that the result performs well both in finding feasible schedules and in its schedule-length bound.

Page 20: RTS

• The Formulation of the Problem Statement
– Assign a set of real-time tasks to processors and additional resources such that all tasks meet their resource requirements and timing requirements.
– Given:
  • A set of m identical processors in a homogeneous multiprocessor system; each processor is capable of executing any task.
  • A set of r resources, such as data sets and buffers. Resources may be discrete, with multiple instances, or continuous (a resource is continuous if a task can request any portion of it), and are renewable (a resource is renewable if its total amount is always fixed and it is not consumed by the tasks).
  • A set of n tasks, each characterized by its worst-case computation time, its deadline and its resource-requirement vector.
– Assume that the tasks are aperiodic, independent and non-preemptable, and that the resources requested by a task are held throughout the task's execution.
– The performance criterion is to minimize the maximum completion time.
– A scheduling problem with these characteristics is computationally hard, so a heuristic approach is considered here.

Page 21: RTS

• The List Scheduling Algorithm
– Every task has a defined priority (which may depend on its deadline, its resource requirements, or some combination of the two).
– Tasks are arranged in a ready queue sorted by priority.
– When a processor becomes idle, it scans the ready queue and selects the first task that does not violate the resource constraints.
– It does not have good average-case performance: moving a lower-priority task up in the schedule while higher-priority tasks are blocked by resource constraints sometimes causes the higher-priority tasks to miss their deadlines.
• The H Scheduling Algorithm
– Priority is calculated as
  • h(Ti) = di + Wi · bi
– The higher the value of h, the lower the priority.
– Tasks are sorted by this priority value.
– Whenever a processor becomes free, it schedules the task with the highest priority.
– Unlike list scheduling, it does not try to be greedy about processor usage.
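The dispatch step of list scheduling might look like this (a toy model with my own names: tasks are (priority, resource set) pairs, and a task "violates resource constraints" if it needs a resource already in use):

```python
def pick_next(ready_queue, in_use):
    """Scan the priority-sorted ready queue; return the first task whose
    resource needs do not conflict with the resources currently in use."""
    for task in ready_queue:                 # already sorted by priority
        priority, resources = task
        if not (resources & in_use):         # no overlap with busy resources
            return task
    return None                              # nothing can start: processor idles

# Lower number = higher priority. With the bus busy, the head task is
# blocked and the scheduler greedily moves a lower-priority task up.
queue = [(1, {"bus"}), (2, {"disk"}), (3, set())]
print(pick_next(queue, {"bus"}))             # (2, {'disk'})
```

This greediness is exactly the behavior the slide criticizes: the promoted lower-priority task may later cause the blocked higher-priority task to miss its deadline.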

Page 22: RTS

• Algorithm Proposed by the Authors
– The authors propose a combination of the list and H scheduling algorithms, calling it the Hk scheduling algorithm.
– It uses the same heuristic as the H scheduling algorithm, but tries to be greedy to a certain degree with respect to processor usage.
– Hk maintains a variable tck that divides the schedule into two parts.
– tck is formally set to the maximum possible value such that in any sub-interval [x, y] of [0, tck), either
  • at least k processors are busy, or
  • fewer than k processors are busy, but no further task can be added because its addition would cause a resource conflict.
– Hk then applies the highest-priority-first rule to schedule a task that can fit into the partial schedule before tck.

Page 23: RTS

• Conclusion
– The time complexity of the Hk scheduling algorithm is very high when k > 2; only H2 has a time complexity comparable to those of the H and list scheduling algorithms. Also, for non-uniform tasks the schedule-length bound does not improve for k > 2, so the authors focused mainly on H2.
– Analyses showed that H2 produces better worst-case schedule-length bounds than the H algorithm, and is almost as good as the H scheduling algorithm at finding feasible schedules.