Chapter 5.2: CPU Scheduling
Operating System Concepts, Silberschatz, Galvin and Gagne ©2005


Chapter 5: CPU Scheduling

Chapter 5.1 Basic Concepts

Scheduling Criteria

Scheduling Algorithms

Chapter 5.2 Multiple-Processor Scheduling

Real-Time Scheduling

Thread Scheduling

Operating Systems Examples

Java Thread Scheduling

Algorithm Evaluation


5.4 Multiple-Processor Scheduling

So far, we’ve only dealt with a single processor.

CPU scheduling becomes more complex when multiple CPUs are available, because the load must be shared among them.

There is no single best solution – no great surprise.


5.4.1 Approaches to Multiple-Processor Scheduling

Asymmetric multiprocessing: here there is one (master) processor that makes all the decisions for scheduling, I/O processing, and other system activities; the other processor(s) execute only user code.

This is a simple approach because only one processor accesses the system data structures, so data sharing among processors is not an issue.

Symmetric multiprocessing (SMP): here, each processor is self-scheduling. The processors may share a common ready queue, or each processor may have its own private queue of ready processes. Either way, each processor has a scheduler that examines a ready queue and dispatches the CPU to a specific process for execution.

Clearly there are sharing issues here, since each processor may update the common data structure or try to access a specific PCB in a queue…

Virtually all modern operating systems support SMP, including Windows XP, Solaris, Linux, and Mac OS X.

Most of our discussion from here on applies to SMPs.


5.4.2 Processor Affinity in Multiprocessors

Big question: once a process starts to execute on one processor, does it continue subsequent executions on the same processor?

If a process starts executing on one processor, its successive memory accesses are normally cached there. This is the norm.

But if the process is migrated to a different processor, that cache must be invalidated and a new cache must/will be established on the new processor… a high cost, and some loss in efficiency too.

So what is 'processor affinity'? (The tendency of a process to favor, or stay on, one processor.)

Most SMP systems therefore try to avoid migrating a process between processors, i.e., they try to preserve processor affinity. If the policy is to try to keep a process running on the same processor, with no guarantee, it is called soft affinity. Some systems (Linux, for example) provide system calls that support hard affinity, where a process may specify that it is not to migrate to a different processor.
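On Linux, hard affinity is exposed through the sched_setaffinity() system call. A minimal sketch (Linux-specific; error handling kept short):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;

    CPU_ZERO(&mask);
    CPU_SET(0, &mask);   /* this process may run only on CPU 0 */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(mask), &mask) == -1) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPU 0; the kernel will not migrate us\n");
    return 0;
}

Soft affinity, by contrast, needs no code at all: it is simply the scheduler's default preference for the last-used processor.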


5.4.3 Issues in Load Balancing SMPs

Very interesting issues here.

We want, of course, to keep these processors busy so as to take advantage of multiple processors.

We know that two processors do not necessarily mean twice the throughput.

Further, when we have multiple processors, we have load balancing issues.

If there is a single common run queue, the issues are small, because a processor will simply be dispatched to the next process in the common queue when its current process is no longer executing (for whatever reason)…

If we have multiple private run queues of ready processes (and, in most modern implementations of SMP, each processor does), load balancing becomes an issue.


Load Balancing in SMP with Private Ready Queues

Here (particularly in SMPs) we have private ready queues. There are two approaches: push migration and pull migration.

Push Migration: here, there is a system task that looks at processor loads and, finding imbalances, moves (pushes) processes from a busy processor to a less busy one.

Pull Migration: here, the load-balancing algorithm pulls a ready task from a busy processor and dispatches it on an idle processor.

The two approaches are often implemented in parallel. In Linux, the load-balancing algorithm executes every 200 msec (push migration) and whenever a processor's run queue is empty (pull migration).
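A sketch of the two policies over per-CPU run queues. The structures and the imbalance threshold below are invented for illustration; Linux's real load balancer is far more elaborate:

#define NCPUS 4

/* Hypothetical per-CPU run queue: just a task count in this sketch. */
struct runqueue { int nr_running; };
static struct runqueue rq[NCPUS];

static void migrate_one_task(struct runqueue *src, struct runqueue *dst)
{
    src->nr_running--;    /* a real kernel would move an actual task */
    dst->nr_running++;
}

/* Push migration: a periodic task finds the busiest and idlest CPUs
 * and pushes work from one to the other if the imbalance is large. */
void push_balance(void)
{
    int busiest = 0, idlest = 0;
    for (int i = 1; i < NCPUS; i++) {
        if (rq[i].nr_running > rq[busiest].nr_running) busiest = i;
        if (rq[i].nr_running < rq[idlest].nr_running)  idlest  = i;
    }
    if (rq[busiest].nr_running - rq[idlest].nr_running > 1)
        migrate_one_task(&rq[busiest], &rq[idlest]);
}

/* Pull migration: a CPU whose queue has gone empty steals a task. */
void pull_balance(int this_cpu)
{
    if (rq[this_cpu].nr_running > 0)
        return;                                /* only idle CPUs pull */
    int busiest = -1;
    for (int i = 0; i < NCPUS; i++)
        if (i != this_cpu &&
            (busiest < 0 || rq[i].nr_running > rq[busiest].nr_running))
            busiest = i;
    if (busiest >= 0 && rq[busiest].nr_running > 0)
        migrate_one_task(&rq[busiest], &rq[this_cpu]);
}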


Downside in Pushing and Pulling

Migration can counteract the advantages of processor affinity… and this may be quite significant!

Never an exact science in systems engineering.

Some implementations always pull a process from a busy processor;

In other systems, processes are moved only if the imbalance exceeds a certain threshold.

Again, never an exact science….


5.4.4 Symmetric Multithreading

Here we offer a different strategy – providing 'logical processors' in lieu of additional physical processors. How does this 'symmetric multithreading' (SMT) work?

Notion of a logical processor: the idea is to create multiple 'logical processors' on a physical processor, making it look to the operating system as if there are several processors, even if there is only a single physical one.

To do so, each logical processor must have its own architectural state.

This means it must have its ‘own’ register set and must be able to handle its own interrupts.

Otherwise, each logical processor still shares the resources of its physical processor, such as cache memory and bus architecture.

On a system with two physical processors, each presenting two logical CPUs, the appearance is that we have four 'processors' available for work.


5.4.4 Symmetric Multithreading (continued)

Recognize that this logical implementation is a hardware feature – not software. The hardware must provide the representation of the architectural state for each logical processor, as well as interrupt handling.

OSs need not be designed differently if they are to run on an SMT system; but certain performance gains are possible if the operating system is aware that it is running on such a system.

From your text: e.g., if both physical processors are idle, a scheduler should first try to schedule separate threads on each physical processor rather than on separate logical processors of the same physical processor.

Otherwise, both logical processors on one physical processor could be busy while the other physical processor remains idle.
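A toy sketch of that placement preference (the two-level busy table and the function are invented purely for illustration):

#define NPHYS 2            /* physical processors */
#define NLOG  2            /* logical processors per physical processor */

/* busy[p][l] is nonzero if logical CPU l of physical CPU p is running. */
int pick_cpu(int busy[NPHYS][NLOG], int *phys, int *log)
{
    /* First choice: a fully idle physical processor, so two runnable
     * threads land on different chips and don't share one core. */
    for (int p = 0; p < NPHYS; p++) {
        int any = 0;
        for (int l = 0; l < NLOG; l++)
            any |= busy[p][l];
        if (!any) { *phys = p; *log = 0; return 1; }
    }
    /* Fallback: any idle logical CPU, even if its sibling is busy. */
    for (int p = 0; p < NPHYS; p++)
        for (int l = 0; l < NLOG; l++)
            if (!busy[p][l]) { *phys = p; *log = l; return 1; }
    return 0;   /* every logical CPU is busy */
}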


5.5 Thread Scheduling

Recall that there are both user-level and kernel-level threads.

We will speak of kernel-level threads and not 'processes' per se; so, then, we actually 'schedule' threads.

Threads at the user level are managed by a thread library, and the kernel is not aware of them. These user-level threads must ultimately be mapped to an associated kernel-level thread before they can be executed by the CPU.

To accommodate this, we must consider the scheduling issues involved in mapping user-level threads onto kernel-level threads.


5.5.1 Contention Scope

User-level threads and kernel-level threads are scheduled differently. With a many-to-one or many-to-many mapping, the thread library schedules user-level threads to run in a scheme known as process-contention scope (PCS). Using PCS, the thread library actually schedules user threads onto an available LWP; here the contention is among threads belonging to the same process.

Then, to decide next which kernel thread to schedule onto a CPU, the kernel uses system-contention scope (SCS), where scheduling takes place among all threads in the system.

It is interesting to note that in some systems a user may specify a thread priority; in general, the scheduler schedules threads according to some priority.

The one-to-one mapping model is found in Windows XP, Solaris, and Linux. This model schedules threads using only the system-contention scope approach.


5.5.2 Pthread Scheduling

We discussed a sample POSIX Pthread program in the last chapter and showed how threads are created.

Pthreads identifies the following two contention-scope values:

PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling.

PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling.

We mentioned process-contention scope (PCS) and system-contention scope (SCS) on the previous slide. The functions that get and set these values, pthread_attr_getscope() and pthread_attr_setscope() (see book), take a parameter that points to the attributes of the thread, plus the contention-scope value (or a place to store it).
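A minimal sketch of those calls in use (POSIX; error handling abbreviated, and note that some systems, Linux among them, accept only PTHREAD_SCOPE_SYSTEM):

#include <pthread.h>
#include <stdio.h>

static void *runner(void *arg)
{
    return NULL;                       /* thread body elided */
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    int scope;

    pthread_attr_init(&attr);

    /* Query the default contention scope. */
    if (pthread_attr_getscope(&attr, &scope) == 0)
        printf("default scope: %s\n",
               scope == PTHREAD_SCOPE_SYSTEM ? "SCS" : "PCS");

    /* Request system-contention scope for the new thread. */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

    pthread_create(&tid, &attr, runner, NULL);
    pthread_join(tid, NULL);
    return 0;
}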

The text presents more detail than is necessary at this time. Our interests are best served by looking at operating system examples, and in particular at how Linux does its scheduling…


5.6.3 Linux Scheduling

In former versions of Linux, the kernel ran a variation of the UNIX scheduling algorithm that was characterized by two problems: these versions (1) did not support SMP systems, so dominant nowadays, and (2) did not scale well as the number of tasks on a system grew.

Later versions of Linux (2.5) saw the scheduler revamped to provide a very fast scheduling algorithm that runs in constant time, O(1).

The newer scheduler provides for greatly improved support for symmetric multi-processors.

These improvements include much-improved support for processor affinity, load balancing in SMP configurations, and more detailed algorithms that address 'fairness' and support tasks that are interactive and require rapid response times.


5.6.3 Linux Scheduling (continued)

Recall that the Linux scheduler is now preemptive.

Linux is priority-based, with two priority ranges: a 'real-time' range of priorities (0 to 99) and (ready for this) a 'nice value' priority range (100 to 140).

Numerically lower values are higher priorities, as you would guess.

Clearly, the real-time priorities are designed for those applications requiring 'real-time' response.

Linux also assigns higher-priority tasks longer time quanta and lower-priority tasks shorter time quanta.

This is shown in the next figure.


The Relationship Between Priorities and Time-Slice Length

[Figure: numeric priority from the real-time range (0-99) through the nice range (100-140), plotted against time-slice length; note the longer time quanta at the high-priority end.]

Clearly, those tasks requiring high priority get the 'nod.' And this is fine – especially for Linux system tasks and other application processes. Let's look at this in a bit more detail…
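A one-line sketch of such a priority-to-quantum mapping, interpolating linearly between endpoint values chosen for illustration (treat 200 ms and 10 ms as stand-ins, not quoted kernel constants):

/* Map numeric priority (0 = highest, 140 = lowest) to a time slice,
 * shrinking linearly from ~200 ms down to ~10 ms. Illustrative only. */
int time_slice_ms(int prio)
{
    const int max_ms = 200, min_ms = 10, max_prio = 140;
    return max_ms - ((max_ms - min_ms) * prio) / max_prio;
}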


List of Tasks Indexed According to Priorities

We know Linux supports SMP, so each processor has its own runqueue data structure, and each processor schedules itself independently of the others. The runqueue that Linux uses consists of two parts: an active array and an expired array.

Both of these are prioritized lists of tasks. Initially, all tasks are entered into the active array in accordance with their assigned priority, and tasks are selected for execution according to their priority.

Real-time / high-priority tasks are assigned a static priority – this won't be changed. Other tasks have priorities that are dynamic – they may change over time. We will discuss this ahead.

So, as we said, all tasks start off in the active array portion of the run queue and are selected for execution according to their priority.

As long as a task has time remaining in its assigned quantum, it is eligible for additional execution on the CPU. But when its quantum has expired, it is no longer eligible for scheduling from the active array and is moved to the expired array. If the task is a real-time task whose quantum has expired, it maintains its priority but is still moved into the expired array.


List of Tasks Indexed According to Priorities (continued)

Scheduling is simple:

Each processor selects the highest priority task from its own run-queue active array structure until it is empty.

Once the active array is empty, scheduling continues by swapping the expired array with the (now empty) active array.
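A toy sketch of that swap, assuming a much-simplified priority array (the field names are invented; the point is that the exchange is two pointer assignments, hence O(1)):

#define MAX_PRIO 140

struct task;                            /* opaque in this sketch */

/* Simplified priority array: one task list per priority level. */
struct prio_array {
    int nr_active;                      /* tasks currently queued here */
    struct task *queue[MAX_PRIO + 1];   /* queue[p]: tasks at priority p */
};

struct runqueue {
    struct prio_array *active;
    struct prio_array *expired;
    struct prio_array arrays[2];        /* backing storage for both */
};

/* When the active array drains, swap the roles of the two arrays:
 * a constant-time pointer exchange -- no tasks are copied. */
void swap_arrays(struct runqueue *rq)
{
    if (rq->active->nr_active == 0) {
        struct prio_array *tmp = rq->active;
        rq->active  = rq->expired;
        rq->expired = tmp;
    }
}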


Linux Implements RT and Nice Scheduling…

As mentioned, in Linux real-time (RT) tasks are assigned static priorities. All other tasks have dynamic priorities, based on their nice values plus or minus 5. The degree of interactivity determines whether 5 is added to or subtracted from the 'nice' value.

If 5 is added to a task's priority number, its priority is lower (it has a higher number), and conversely…

The length of time a task has been ‘sleeping’ while waiting for an I/O is used to determine the task’s degree of interactivity.

More interactive tasks have longer sleep times (more time awaiting the completion of an I/O) and get an adjustment closer to -5, increasing their priority.

Hence the scheduling we are describing definitely favors interactive tasks.

In contrast, tasks with shorter sleep times are more likely CPU-bound jobs and will have a number closer to +5 added to their nice values (thus decreasing their priority).

Recalculation of the dynamic priority occurs only when a task has used up its time quantum and needs more time.

And, of course, it is then moved to the expired array.
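A hedged sketch of that recalculation; the bonus formula and the sleep-time scaling below are illustrative stand-ins, not the kernel's actual arithmetic:

/* Nice priorities occupy 100..140; the dynamic bonus is in [-5, +5]. */
#define NICE_MIN 100
#define NICE_MAX 140

/* Map a task's average sleep time (its interactivity) to a bonus:
 * long sleepers (I/O-bound) approach -5, short sleepers (CPU-bound)
 * approach +5. The linear scaling is invented for this sketch. */
static int interactivity_bonus(int avg_sleep_ms, int max_sleep_ms)
{
    int bonus = 5 - (10 * avg_sleep_ms) / max_sleep_ms;   /* +5 .. -5 */
    if (bonus < -5) bonus = -5;
    if (bonus >  5) bonus =  5;
    return bonus;
}

/* Recompute a task's dynamic priority when its quantum expires. */
int recalc_priority(int static_nice_prio, int avg_sleep_ms)
{
    int prio = static_nice_prio + interactivity_bonus(avg_sleep_ms, 1000);
    if (prio < NICE_MIN) prio = NICE_MIN;
    if (prio > NICE_MAX) prio = NICE_MAX;
    return prio;            /* lower number = higher priority */
}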


Algorithm Evaluation

Selecting an appropriate CPU scheduling algorithm is very complicated. Each algorithm has its own parameters, and each is best for certain computing environments and likely not very good for others.

What must be determined first are the criteria to be used to select an algorithm. Criteria may center on CPU utilization, response time (for interactive environments), overall throughput, or simply being very responsive to a customer!

Every environment will have its own relative importance for each of these measures.

The specifics of determining the best scheduling algorithm for a specific environment are beyond the scope of this course; they are usually taught in an advanced operating systems course that specializes in performance and other considerations.

So we will stop at this section of Chapter 5 and get ready for Chapter 6, which addresses Process Synchronization.


End of Chapter 5.2