Unit 6 Operating System and Real Time Programme

6.1 Operating System

• An “operating system” is a piece of software that acts as a layer between the computer hardware and the application software. It enables the computer hardware to communicate and operate with the computer software. Without an operating system, a computer would be useless.

• Some people broaden their definition of operating system to include the supporting applications. In this case the core piece is known as the kernel.


Operating system types

As computers have progressed and developed, so have the types of operating systems. Below is a basic list of the different types of operating systems and a few examples of operating systems that fall into each of the categories. Many computer operating systems will fall into more than one of the categories below.

a) GUI - Short for Graphical User Interface, a GUI operating system contains graphics and icons and is commonly navigated by using a computer mouse. Below are some examples of GUI operating systems.

System 7.x, Windows 98, Windows CE

b) Multi-user - A multi-user operating system allows for multiple users to use the same computer at the same time and/or different times. Below are some examples of multi-user operating systems.

Linux, Unix, Windows 2000

c) Multiprocessing - An operating system capable of supporting and utilizing more than one computer processor. Below are some examples of multiprocessing operating systems.

Linux, Unix, Windows 2000

Explanation:

Multiprocessing refers to an operating situation where the simultaneous processing of programs takes place. This state of ongoing and coordinated processing is usually achieved by interconnecting two or more computer processors that make it possible to use the available resources to best advantage. Many operating systems today are equipped with a multiprocessing capability, although multiprogramming tends to be the more common approach today.

The basic platform for multiprocessing allows for more than one computer to be engaged in the use of the same programs at the same time. This means that persons working at multiple workstations can access and work with data contained within a given program. It is this level of functionality that makes it possible for users in a work environment to effectively interact via a given program.

There are essentially two different types of multiprocessing. In symmetric multiprocessing, more than one computer processor will share memory capacity and data path protocols. While the process may involve more than one computer station, only one copy of the operating system will be used to initiate all the orders executed by the processors involved in the connection.


The second approach to multiprocessing is known as massively parallel processing. Within this structure, it is possible to harness and make use of large numbers of processors in order to handle tasks. Often, this type of multiprocessing will involve over two hundred processors. Within the environment of MPP, each processor works with individual operating systems and memory resources, but will connect with the other processors in the setup to divide tasks and oversee different aspects of transmissions through data paths.

Multiprocessing is a common situation with corporations that function with multiple locations and a large number of employees. The combination of resources that can result from the use of multiple computer processors makes it possible to transmit data without regard to distance or location, as well as allow large numbers of users to work with a program simultaneously. While the actual creation of a multiprocessing system can be somewhat complicated, the approach ultimately saves a great deal of time and money for larger companies.
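As a minimal sketch of spreading a workload across several processors, the example below uses Python's standard multiprocessing module to run one worker process per available processor. The function count_primes and the input sizes are invented for illustration; the point is only that the operating system can place each worker on its own CPU.

# Minimal sketch: spreading a CPU-bound job across several processors.
from multiprocessing import Pool, cpu_count

def count_primes(limit):
    """Count primes below `limit` by trial division (deliberately slow)."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    jobs = [20_000, 30_000, 40_000, 50_000]
    # One worker process per available processor; the OS can schedule each
    # worker on its own CPU, so the four jobs may run truly in parallel.
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(count_primes, jobs)
    print(dict(zip(jobs, results)))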

What is multiprogramming?

Multiprogramming is one of the more basic types of parallel processing that can be employed in many different environments. Essentially, multiprogramming makes it possible for several programs to be active at the same time, while still running through a single processor. The functionality of multiprogramming in this environment involves a continual process of sequentially accomplishing tasks associated with the function of one program, then moving on to run a task associated with the next program.

Multiprogramming is very different from multiprocessing because even though there may be several programs currently active, the uniprocessor is not simultaneously executing commands for all the programs. Instead, the processor addresses each program, executes a single command, then moves on to the next program in the queue. The previous program remains active, but enters into a passive state until the uniprocessor returns to the front of the queue and executes a second command.

From an end user standpoint, the process of multiprogramming is seamless. As far as actual functionality, the user appears to be using several different applications at the same time. This is because multiprogramming utilizes the uniprocessor to execute commands quickly. The end result is that a user notices little if any lag time when minimizing one application in order to perform a task associated with a different application.

The switching mechanism within multiprogramming is known as an interrupt. Each task is granted a specific amount of time for processing before the operating system moves on to the next program and the next task. In a sense, multiprogramming is about juggling several tasks at one time, quickly performing one piece of the required action, then moving to do something with a different task before returning to the former job.

Memory is important to the proper function of multiprogramming. Capacity should be ample enough to ensure that if one program within the rotating queue encounters a problem, it does not cause delays or impact the operation of other open applications. At the same time, some type of memory protection should be in place. If this is not the case, then a problem with one application can create a cascading effect that shuts down, or at least slows down, the other open applications.
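The interleaving described above can be sketched in a few lines. In the sketch below each "program" is a generator that yields after every unit of work, and a plain loop plays the role of the interrupt mechanism that hands the single processor to the next program in the queue. The program names and step counts are invented.

# Minimal sketch of multiprogramming on a single processor.
from collections import deque

def program(name, steps):
    for i in range(1, steps + 1):
        print(f"{name}: step {i} of {steps}")
        yield  # give the (single) processor back to the system

ready_queue = deque([program("editor", 3), program("compiler", 2), program("mail", 4)])

while ready_queue:
    current = ready_queue.popleft()   # pick the program at the front of the queue
    try:
        next(current)                 # execute exactly one command of it
        ready_queue.append(current)   # it stays active, but waits its turn again
    except StopIteration:
        pass                          # that program has finished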

d) Multitasking - An operating system that is capable of allowing multiple software processes to run at the same time. Below are some examples of multitasking operating systems.

Unix, Windows 2000


Explanation:

Multitasking is the act of doing multiple things at once. It is often encouraged among office workers and students, because it is believed that multitasking is more efficient than focusing on a single task at once. Numerous studies on multitasking have been carried out, with mixed results. It would appear that in some cases, multitasking is indeed an effective way to utilize time, while in other instances, the quality of the work suffers as a result of split attention.

The term initially emerged in the tech industry, to describe a computer's single central processing unit performing multiple tasks. Early computers were capable of performing only one function at once, although sometimes very quickly. Later computers were able to run a wide assortment of programs; in fact, your computer is multitasking right now as it runs your web browser and any other programs you might have open, along with the basic programs which start every time you log on to your operating system.

In the late 1990s, people began to use “multitasking” to describe humans, especially in office environments. A secretary might be said to be multitasking when she or he answers phones, responds to emails, generates a report, and edits a form letter simultaneously. The ability of the human mind to focus on multiple tasks at once is rather amazing; the American Psychological Association calls this the “executive control” of the brain. The executive control allows the brain to delegate tasks while skimming material and determining the best way to process it.

While accomplishing multiple things at once appears more efficient on the surface, it can come with hidden costs. Certain complex higher order tasks demand the full function of the brain; most people wouldn't want brain surgeons multitasking, for example. Insufficient attention can cause errors while multitasking, and switching between content and different media formats can have a detrimental effect as well.

A certain amount of multitasking has become necessary and expected in many industries, and job seekers often list the ability to multitask as a skill on their resumes. Students also find this skill very valuable, since it allows them to take notes while processing lecture information, or work on homework for one course while thinking about another. When you do decide to multitask, make sure to check your work carefully, to ensure that it is of high quality, and consider abandoning multitasking for certain tasks if you notice a decline.

e) Multithreading - Operating systems that allow different parts of a software program to run concurrently. Operating systems that would fall into this category are:

Linux, Unix, Windows 2000

Explanation:

In the world of computing, multithreading is the task of creating a new thread of execution within an existing process rather than starting a new process to begin a function. Essentially, the task of multithreading is intended to make wiser use of computer resources by allowing resources that are already in use to be simultaneously utilized by a slight variant of the same process. The basic concept of multithreading has been around for some time, but gained wider attention as computers became more commonplace during the 1990s.


This form of time-division multiplexing creates an environment where a program is configured to allow processes to fork or split into two or more threads of execution. The parallel execution of threads within the same program is often touted as a more efficient use of the resources of the computer system, especially with desktop and laptop systems. By allowing a program to handle multiple tasks with a multithreading model, the system does not have to allow for two separate programs to initiate two separate processes and have to make use of the same files at the same time.

While there are many proponents of multithreading, there are also those that understand the process as being potentially harmful to the task of computing. The time slicing that is inherent in allowing a fork or thread to split off from a running process is thought by some to set up circumstances where there may be some conflict between threads when attempting to share caches or other hardware resources. There is also some concern that the action of multithreading could lower the response time of each single thread in the process, effectively negating any time savings that are generated by the configuration.

However, multithreading remains one of the viable options in computer multitasking. It is not unusual for a processor to allow for both multithreading as well as the creation of new processes to handle various tasks. This allows the end user all the benefits of context switching while still making the best use of available resources.
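A minimal sketch of the idea, using Python's standard threading module: one process starts two threads of execution that share the same memory and protect it with a lock. The worker function and iteration counts are invented for illustration.

# Minimal sketch of multithreading: two threads of one process share memory.
import threading

counter = 0
lock = threading.Lock()

def worker(name, iterations):
    global counter
    for _ in range(iterations):
        with lock:            # shared data must be protected between threads
            counter += 1
    print(f"{name} finished")

threads = [threading.Thread(target=worker, args=(f"thread-{i}", 100_000)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()                  # wait for both threads of the same process
print("final counter:", counter)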

Operating system listing

Below is a listing of many of the different types of operating systems available today, the platforms they have been developed for, and who developed them.

Operating system    Platform           Developer
AIX / AIXL          Various            IBM
AmigaOS             Amiga              Commodore
BSD                 Various            BSD
Caldera Linux       Various            SCO
Corel Linux         Various            Corel
Debian Linux        Various            GNU
DUnix               Various            Digital
DYNIX/ptx           Various            IBM
HP-UX               Various            Hewlett Packard
IRIX                Various            SGI
Kondara Linux       Various            Kondara
Linux               Various            Linus Torvalds
MAC OS 8            Apple Macintosh    Apple
MAC OS 9            Apple Macintosh    Apple
MAC OS 10           Apple Macintosh    Apple
MAC OS X            Apple Macintosh    Apple
Mandrake Linux      Various            Mandrake
MINIX               Various            MINIX
MS-DOS 1.x          IBM / PC           Microsoft
MS-DOS 2.x          IBM / PC           Microsoft
MS-DOS 3.x          IBM / PC           Microsoft
MS-DOS 4.x          IBM / PC           Microsoft
MS-DOS 5.x          IBM / PC           Microsoft
MS-DOS 6.x          IBM / PC           Microsoft
NEXTSTEP            Various            Apple
OSF/1               Various            OSF
QNX                 Various            QNX
Red Hat Linux       Various            Red Hat
SCO                 Various            SCO
Slackware Linux     Various            Slackware
Sun Solaris         Various            Sun
SuSE Linux          Various            SuSE
System 1            Apple Macintosh    Apple
System 2            Apple Macintosh    Apple
System 3            Apple Macintosh    Apple
System 4            Apple Macintosh    Apple
System 6            Apple Macintosh    Apple
System 7            Apple Macintosh    Apple
System V            Various            System V
Tru64 Unix          Various            Digital
Turbolinux          Various            Turbolinux
Ultrix              Various            Ultrix
Unisys              Various            Unisys
Unix                Various            Bell Labs
UnixWare            Various            UnixWare
VectorLinux         Various            VectorLinux
Windows 2000        IBM / PC           Microsoft
Windows 2003        IBM / PC           Microsoft
Windows 3.x         IBM / PC           Microsoft
Windows 95          IBM / PC           Microsoft
Windows 98          IBM / PC           Microsoft
Windows CE          PDA                Microsoft
Windows ME          IBM / PC           Microsoft
Windows NT          IBM / PC           Microsoft
Windows Vista       IBM / PC           Microsoft
Windows XP          IBM / PC           Microsoft
Xenix               Various            Microsoft

6.2 System Resources

System resources are the parts within a computer that are available to be used by the operating system and other applications. The most notable of the system resources is the amount of memory in use, but CPU time should be considered here as well. Each time an application starts, the application will request memory from the operating system and a slice of CPU time to perform its function. For example, when a computer user starts the word processing application on the computer, they will click the icon for the application and shortly thereafter, the program starts. During the time while the user is waiting for the program to start, the operating system is provisioning system resources to handle this application. It is essentially making room for it among the other processes and applications that may be running at the time the program is started. When the word processor application starts, it sends a request to the operating system to provision the necessary system resources for it to function.


Depending on the amount of memory available, the application may open quickly, or may open a bit slower if less memory is available when the application starts. Sometimes there is not enough memory to get an application running right away, in which case the operating system recognizes the lack of system resources and will make an attempt to store some things in a swap file to allow more memory to be available for the active applications.

The swap file acts like memory but is contained on the hard disk of the computer. When the RAM memory within a computer becomes full, the operating system will page (or write) things out to the computer's swap file, freeing up RAM memory for programs in use. As the swap file continues to grow, it can become full. This will cause the operating system to produce warning messages indicating that the swap file or virtual memory is full, and the user will be instructed to close some programs to free up system resources, allowing the computer to function better. Many times, restarting the computer is the best way to alleviate these warning messages.

If a peripheral is needed, like a printer or disk drive, the hardware being requested will send an Interrupt Request (IRQ) to the CPU. The IRQ is the signal that the peripheral device uses to let the CPU know that it needs to do something. Hardware resources are the memory and CPU time used when peripheral devices, like printers, scanners, and modems, are used. Each time one of these devices is accessed by the user, the device sends a signal to the motherboard to interrupt the CPU so it can operate. Once it is finished performing the requested tasks, the device signals again that it has completed. These signals are known as Interrupt Requests (IRQs), and each device has a specific channel or set of channels that it can use to communicate with the motherboard. If all of the channels for a specified device are used, the device cannot function. Each IRQ channel can only be used by one device, or have one device assigned to it, in a computing system. This helps the motherboard know which devices it should expect on which IRQs. System resources are monitored by the computer's operating system to ensure that the computer runs as efficiently as possible, given the resources available at any time.
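One quick way to observe the resources this section describes is sketched below. It assumes the third-party psutil package is installed (pip install psutil); it is not part of the standard library. The calls report RAM, swap-file and CPU usage.

# Minimal sketch of inspecting RAM, the swap file and CPU time (assumes psutil).
import psutil

mem = psutil.virtual_memory()
swap = psutil.swap_memory()

print(f"RAM  : {mem.used / 2**20:8.0f} MB used of {mem.total / 2**20:8.0f} MB")
print(f"Swap : {swap.used / 2**20:8.0f} MB used of {swap.total / 2**20:8.0f} MB")
print(f"CPU  : {psutil.cpu_percent(interval=1.0):.1f}% busy over the last second")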

Types of resources

• CPU time

• Random access memory and virtual memory 

• Hard disk space

•  Network throughput

• Electrical power
• External devices

• Input/output operations

Resource management

A resource handle is an identifier for a resource that is currently being accessed. Resource handles can be opaque, in which case they are often integer numbers, or they can be pointers that allow access to further information. Common resource handles are file descriptors and sockets.

A situation when a computer program allocates a resource and fails to deallocate it after use is called a resource leak. Resource tracking is the ability of an operating system, virtual machine or other program to terminate the access to a resource that has been allocated but not deallocated after use. When implemented by a virtual machine this is often done in the form of garbage collection. Without resource tracking, programmers must take care of proper manual resource deallocation. They may also use the RAII technique to automate this task.
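The sketch below illustrates the RAII idea in Python terms: the file descriptor returned by os.open is the resource handle, and a context manager guarantees it is deallocated even if an error occurs, so the handle cannot leak. The class name and scratch file path are invented for the example.

# Minimal sketch of RAII-style resource tracking around a file descriptor.
import os

class TempResource:
    def __init__(self, path):
        self.path = path
        self.fd = None            # the (opaque, integer) resource handle

    def __enter__(self):
        self.fd = os.open(self.path, os.O_WRONLY | os.O_CREAT, 0o600)
        return self.fd

    def __exit__(self, exc_type, exc, tb):
        os.close(self.fd)         # deterministic deallocation
        os.remove(self.path)
        return False              # propagate any exception

with TempResource("scratch.tmp") as fd:
    os.write(fd, b"resource handles are just integers here\n")
# At this point the descriptor is closed and the file removed.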


Access to memory areas is often controlled by semaphores, which allows a pathological situation called a deadlock, in which different threads or processes try to allocate resources already allocated by each other. A deadlock usually leads to a program becoming partially or completely unresponsive.

Access to resources is also sometimes regulated by queuing; in the case of computing time on a CPU the controlling algorithm of the task queue is called a scheduler.

6.3 Computer Program

A computer program (or just a program) is a sequence of instructions written to perform a specified task for a computer.[1] A computer requires programs to function, typically executing the program's instructions in a central processor.[2] The program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source code form, from which executable programs are derived (e.g., compiled), enables a programmer to study and develop its algorithms.

Computer source code is often written by professional computer programmers. Source code is written in a programming language that usually follows one of two main paradigms: imperative or declarative programming. Source code may be converted into an executable file (sometimes called an executable program or a binary) by a compiler and later executed by a central processing unit. Alternatively, computer programs may be executed with the aid of an interpreter, or may be embedded directly into hardware.

Computer programs may be categorized along functional lines: system software and application software. Many computer programs may run simultaneously on a single computer, a process known as multitasking.

6.4 Process

In computing, a process is an instance of a computer program that is being executed. It contains the program code and its current activity. Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently.[1][2]

A computer program is a passive collection of instructions, while a process is the actual execution of those instructions. Several processes may be associated with the same program; for example, opening up several instances of the same program often means more than one process is being executed.

Multitasking is a method to allow multiple processes to share processors (CPUs) and other system resources. Each CPU executes a single task at a time. However, multitasking allows each processor to switch between tasks that are being executed without having to wait for each task to finish. Depending on the operating system implementation, switches could be performed when tasks perform input/output operations, when a task indicates that it can be switched, or on hardware interrupts.

A common form of multitasking is time-sharing. Time-sharing is a method to allow fast response for interactive user applications. In time-sharing systems, context switches are performed rapidly. This makes it seem like multiple processes are being executed simultaneously on the same processor. The execution of multiple processes seemingly simultaneously is called concurrency.

For security and reliability reasons, most modern operating systems prevent direct communication between independent processes, providing strictly mediated and controlled inter-process communication functionality.
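A minimal sketch of such mediated inter-process communication, assuming Python's standard multiprocessing module is an acceptable stand-in here: the parent and child processes exchange messages only through an OS-provided pipe, never by touching each other's memory directly.

# Minimal sketch of mediated inter-process communication via a pipe.
from multiprocessing import Process, Pipe

def child(conn):
    request = conn.recv()                 # blocks until the parent sends
    conn.send(f"child received: {request!r}")
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=child, args=(child_end,))
    p.start()
    parent_end.send("hello from the parent process")
    print(parent_end.recv())
    p.join()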

Process management in multi-tasking operating systems

A multitasking* operating system may just switch between processes to give the appearance of many processes executing concurrently or simultaneously, though in fact only one process can be executing at any one time on a single-core CPU (unless using multi-threading or other similar technology).[3]

It is usual to associate a single process with a main program, and 'daughter' ('child') processes with any spin-off, parallel processes, which behave like asynchronous subroutines. A process is said to own resources, of which an image of its program (in memory) is one such resource. (Note, however, that in multiprocessing systems, many processes may run off of, or share, the same reentrant program at the same location in memory, but each process is said to own its own image of the program.)

Processes are often called tasks in embedded operating systems. The sense of 'process' (or task) is 'something that takes up time', as opposed to 'memory', which is 'something that takes up space'. (Historically, the terms 'task' and 'process' were used interchangeably, but the term 'task' seems to be dropping from the computer lexicon.)

The above description applies to both processes managed by an operating system and processes as defined by process calculi.

If a process requests something for which it must wait, it will be blocked. When the process is in the blocked state, it is eligible for swapping to disk, but this is transparent in a virtual memory system, where blocks of memory may actually be on disk and not in main memory at any time. Note that even unused portions of active processes/tasks (executing programs) are eligible for swapping to disk. All parts of an executing program and its data do not have to be in physical memory for the associated process to be active.

______________________________
*Tasks and processes refer essentially to the same entity. And, although they have somewhat different terminological histories, they have come to be used as synonyms.


Today, the term process is generally preferred over task, except when referring to 'multitasking', since the alternative term, 'multiprocessing', is too easy to confuse with multiprocessor (which is a computer with two or more CPUs).

Process States

An operating system kernel that allows multi-tasking needs processes to have certain states. Names for these states are not standardised, but they have similar functionality.

• First, the process is "created" - it is loaded from a secondary storage device (hard disk or CD-ROM...) into main memory. After that the process scheduler assigns it the state "waiting".

• While the process is "waiting" it waits for the scheduler to do a so-called context switch and load the process into the processor. The process state then becomes "running", and the processor executes the process instructions.

• If a process needs to wait for a resource (wait for user input or a file to open ...), it is assigned the "blocked" state. The process state is changed back to "waiting" when the process no longer needs to wait.

• Once the process finishes execution, or is terminated by the operating system, it is no longer needed. The process is removed instantly or is moved to the "terminated" state; once there, it simply waits to be removed from main memory.


Primary process states

The following typical process states are possible on computer systems of all kinds. In most of these states, processes are "stored" in main memory.

a) Created

(Also called New.) When a process is first created, it occupies the "created" or "new" state. In this state, the process awaits admission to the "ready" state. This admission will be approved or delayed by a long-term, or admission, scheduler. Typically in most desktop computer systems, this admission will be approved automatically; however, for real-time operating systems this admission may be delayed. In a real-time system, admitting too many processes to the "ready" state may lead to oversaturation and overcontention for the system's resources, leading to an inability to meet process deadlines.

b) Ready or Running

(Also called waiting or runnable.) A "ready" or "waiting" process has been loaded into main memory and is awaiting execution on a CPU (to be context switched onto the CPU by the dispatcher, or short-term scheduler). There may be many "ready" processes at any one point of the system's execution - for example, in a one-processor system, only one process can be executing at any one time, and all other "concurrently executing" processes will be waiting for execution.

A ready queue is used in computer scheduling. Modern computers are capable of running many different programs or processes at the same time. However, the CPU is only capable of handling one process at a time. Processes that are ready for the CPU are kept in a queue for "ready" processes. Other processes that are waiting for an event to occur, such as loading information from a hard drive or waiting on an internet connection, are not in the ready queue.

c) Blocked

A process that is waiting for some event (such as I/O operation completion or a signal).

d) Terminated

A process may be terminated, either from the "running" state by completing its execution or by explicitly being killed. In either of these cases, the process moves to the "terminated" state. If a process is not removed from memory after entering this state, this state may also be called zombie.

Additional process states

Two additional states are available for processes in systems that support virtual memory. In both of these states, processes are "stored" on secondary memory (typically a hard disk).

e) Swapped out and waiting

(Also called suspended and waiting.) In systems that support virtual memory, a process may be swapped out, that is, removed from main memory and placed in virtual memory by the mid-term scheduler. From here the process may be swapped back into the waiting state.

f) Swapped out and blocked

(Also called suspended and blocked.) Processes that are blocked may also be swapped out. In this event the process is both swapped out and blocked, and may be swapped back in again under the same circumstances as a swapped out and waiting process (although in this case, the process will move to the blocked state, and may still be waiting for a resource to become available).
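The states and transitions described in this section can be summarised as a small state machine. The sketch below encodes only what the text above says; the state names follow the text, and the driver loop at the end is an invented example run.

# Minimal sketch of the process states above as a state machine.
from enum import Enum, auto

class State(Enum):
    CREATED = auto()
    READY = auto()            # "ready" / "waiting"
    RUNNING = auto()
    BLOCKED = auto()
    SWAPPED_OUT_WAITING = auto()
    SWAPPED_OUT_BLOCKED = auto()
    TERMINATED = auto()

TRANSITIONS = {
    State.CREATED: {State.READY},                                  # admitted by the long-term scheduler
    State.READY: {State.RUNNING, State.SWAPPED_OUT_WAITING},       # dispatched, or swapped out
    State.RUNNING: {State.READY, State.BLOCKED, State.TERMINATED}, # preempted, waits, or finishes
    State.BLOCKED: {State.READY, State.SWAPPED_OUT_BLOCKED},       # event arrives, or swapped out
    State.SWAPPED_OUT_WAITING: {State.READY},
    State.SWAPPED_OUT_BLOCKED: {State.BLOCKED},
    State.TERMINATED: set(),
}

def move(current, new):
    if new not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {new.name}")
    return new

state = State.CREATED
for nxt in (State.READY, State.RUNNING, State.BLOCKED, State.READY, State.RUNNING, State.TERMINATED):
    state = move(state, nxt)
    print("now", state.name)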

6.5 Real-time process

In computing, real-time refers to a time frame that is very brief, appearing to be immediate. When a computer processes data in real time, it reads and handles data as it is received, producing results without delay. For example, a website that is updated in real time will allow its viewers to see changes as soon as they occur, rather than waiting for updates to be visible at some later date.

A non-real-time computer process does not have a deadline. Such a process can be considered non-real-time, even if fast results are preferred. A real-time system, on the other hand, is expected to respond not just quickly, but also within a predictable period of time. A good example of a real-time computer system is a car's anti-lock brake system. An anti-lock brake system is expected to release a vehicle's brakes, preventing dangerous wheel locking, in a predictably short time frame.

Unfortunately, there are times when real-time systems fail to respond as desired. A real-time process fails when its task is not completed before its deadline. In computing, there is no grace period given because of other demands on a system. Real-time deadlines must be kept without regard to other factors; they are considered mission critical.

When a process is considered hard real-time, it must complete its operation by a specific time. If it fails to meet its deadline, its operation is without value and the system for which it is a component could face failure. When a system is considered soft real-time, however, there is some room for lateness. For example, in a soft real-time system, a delayed process may not cause the entire system to fail. Instead, it may lead to a decrease in the usual quality of the process or system.

Hard real-time systems are often used in embedded systems. Consider, for example, a car engine control system. Such a system is considered hard real-time because a late process could cause the engine to fail. Hard real-time systems are employed when it is crucial that a task or event is handled by a strict deadline. This is typically necessary when damage or the loss of life may occur as a result of a system failure.

Soft real-time systems are usually employed when there are multiple, connected systems that must be maintained despite shifting events and circumstances. These systems are also used when concurrent access requirements are present. For example, the software used to maintain travel schedules for major transportation companies is often soft real-time. It is necessary for such software to update schedules with little delay. However, a delay of a few seconds is not likely to cause the kind of mayhem possible when a hard real-time system fails.
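A minimal sketch of the hard/soft distinction, assuming the timing is done in user space with Python's time module: a missed hard deadline is treated as a failure, while a missed soft deadline only degrades quality. The task body and the deadline values are invented.

# Minimal sketch: checking a task against a hard or soft deadline.
import time

def run_with_deadline(task, deadline_s, hard=True):
    start = time.monotonic()
    result = task()
    elapsed = time.monotonic() - start
    if elapsed <= deadline_s:
        return result
    if hard:
        raise RuntimeError(f"hard deadline missed: {elapsed:.3f}s > {deadline_s:.3f}s")
    print(f"soft deadline missed by {elapsed - deadline_s:.3f}s; result still usable")
    return result

def sample_task():
    time.sleep(0.05)          # stands in for real work
    return "sensor reading processed"

print(run_with_deadline(sample_task, deadline_s=0.1, hard=True))   # meets its deadline
print(run_with_deadline(sample_task, deadline_s=0.01, hard=False)) # late, but tolerated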

6.6 Scheduling

Scheduling is a key concept in computer multitasking, multiprocessing operating system and real-time operating system designs. Scheduling refers to the way processes are assigned to run on the available CPUs, since there are typically many more processes running than there are available CPUs. This assignment is carried out by software components known as the scheduler and the dispatcher.

The scheduler is concerned mainly with:

• CPU utilization - to keep the CPU as busy as possible.

• Throughput - number of processes that complete their execution per time unit.

• Turnaround - total time between submission of a process and its completion.

• Waiting time - amount of time a process has been waiting in the ready queue.

• Response time - amount of time it takes from when a request was submitted until the first response is produced.
• Fairness - equal CPU time to each thread.

In real-time environments, such as mobile devices for automatic control in industry (for example, robotics), the scheduler also must ensure that processes can meet deadlines; this is crucial for keeping the system stable. Scheduled tasks are sent to mobile devices and managed through an administrative back end.


Types of operating system schedulers

Operating systems may feature up to three distinct types of schedulers: a long-term scheduler (also known as an admission scheduler or high-level scheduler), a mid-term or medium-term scheduler, and a short-term scheduler. The names suggest the relative frequency with which these functions are performed.

a) Long-term Scheduler

The long-term, or admission, scheduler decides which jobs or processes are to be admitted to the ready queue; that is, when an attempt is made to execute a program, its admission to the set of currently executing processes is either authorized or delayed by the long-term scheduler. Thus, this scheduler dictates what processes are to run on a system, and the degree of concurrency to be supported at any one time - i.e., whether a high or low number of processes are to be executed concurrently, and how the split between I/O-intensive and CPU-intensive processes is to be handled. In modern operating systems, this is used to make sure that real-time processes get enough CPU time to finish their tasks. Without proper real-time scheduling, modern GUI interfaces would seem sluggish. [Stallings, 399]

Long-term scheduling is also important in large-scale systems such as batch processing systems, computer clusters, supercomputers and render farms. In these cases, special-purpose job scheduler software is typically used to assist these functions, in addition to any underlying admission scheduling support in the operating system.

b) Mid-term Scheduler

The mid-term scheduler temporarily removes processes from main memory and places them on secondary memory (such as a disk drive) or vice versa. This is commonly referred to as "swapping out" or "swapping in" (also incorrectly as "paging out" or "paging in"). The mid-term scheduler may decide to swap out a process which has not been active for some time, or a process which has a low priority, or a process which is page faulting frequently, or a process which is taking up a large amount of memory, in order to free up main memory for other processes, swapping the process back in later when more memory is available, or when the process has been unblocked and is no longer waiting for a resource. [Stallings, 396] [Stallings, 370]

In many systems today (those that support mapping virtual address space to secondary storage other than the swap file), the mid-term scheduler may actually perform the role of the long-term scheduler, by treating binaries as "swapped out processes" upon their execution. In this way, when a segment of the binary is required it can be swapped in on demand, or "lazy loaded". [Stallings, 394]

c) Short-term Scheduler


The short-term scheduler (also known as the CPU scheduler) decides which of the ready, in-memory processes are to be executed (allocated a CPU) next following a clock interrupt, an I/O interrupt, an operating system call or another form of signal. Thus the short-term scheduler makes scheduling decisions much more frequently than the long-term or mid-term schedulers - a scheduling decision will at a minimum have to be made after every time slice, and these are very short. This scheduler can be preemptive, implying that it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another process, or non-preemptive (also known as "voluntary" or "co-operative"), in which case the scheduler is unable to "force" processes off the CPU. [Stallings, 396]

d) Dispatcher

Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

• Switching context

• Switching to user mode

• Jumping to the proper location in the user program to restart that program

The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency. [Silberschatz, 157]

Scheduling criteria

Different CPU scheduling algorithms have different properties, and the choice of a particular algorithm may favor one class of processes over another. In choosing which algorithm to use in a particular situation, we must consider the properties of the various algorithms. Many criteria have been suggested for comparing CPU scheduling algorithms. Which characteristics are used for comparison can make a substantial difference in which algorithm is judged to be best. The criteria include the following:

• CPU Utilization. We want to keep the CPU as busy as possible.

• Throughput. If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput. For long processes, this rate may be one process per hour; for short transactions, it may be 10 processes per second.
• Turnaround time. From the point of view of a particular process, the important criterion is how long it takes to execute that process. The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.

• Waiting time. The CPU scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.

• Response time. In an interactive system, turnaround time may not be the best criterion. Often, a process can produce some output fairly early and can continue computing new results while previous results are being output to the user. Thus, another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the response. The turnaround time is generally limited by the speed of the output device.

It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time. In most cases, we optimize the average measure. However, under some circumstances, it is desirable to optimize the minimum or maximum values rather than the average. For example, to guarantee that all users get good service, we may want to minimize the maximum response time. Investigators have suggested that, for interactive systems, it is more important to minimize the variance in the response time than to minimize the average response time. A system with reasonable and predictable response time may be considered more desirable than a system that is faster on the average but is highly variable. However, little work has been done on CPU-scheduling algorithms that minimize variance.

Fundamental scheduling algorithms

CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. There are many different CPU scheduling algorithms. In this section, we describe several of them.

a) First In First Out

Also known as First Come, First Served (FCFS), this is the simplest scheduling algorithm; FIFO simply queues processes in the order that they arrive in the ready queue.

• Since context switches only occur upon process termination, and no reorganization of the process queue is required, scheduling overhead is minimal.
• Throughput can be low, since long processes can hog the CPU.
• Turnaround time, waiting time and response time can be high for the same reason.
• No prioritization occurs, thus this system has trouble meeting process deadlines. The lack of prioritization does permit every process to eventually complete, hence no starvation.
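A minimal FCFS sketch: processes are served strictly in arrival order, and per-process waiting and turnaround times are computed. The job names, arrival times and burst times are invented; the long first job shows how FCFS lets one process delay everything behind it.

# Minimal sketch of First Come, First Served scheduling.
def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time), sorted by arrival."""
    clock = 0
    stats = []
    for name, arrival, burst in processes:
        start = max(clock, arrival)          # CPU may sit idle until arrival
        finish = start + burst
        stats.append((name, start - arrival, finish - arrival))  # (waiting, turnaround)
        clock = finish
    return stats

jobs = [("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]
for name, waiting, turnaround in fcfs(jobs):
    print(f"{name}: waiting={waiting}, turnaround={turnaround}")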

b) Shortest remaining time

Also known as Shortest Job First (SJF). With this strategy the scheduler arranges processes with the least estimated processing time remaining to be next in the queue. This requires advance knowledge or estimations about the time required for a process to complete.

• If a shorter process arrives during another process' execution, the currently running process may be interrupted, dividing that process into two separate computing blocks. This creates excess overhead through additional context switching. The scheduler must also place each incoming process into a specific place in the queue, creating additional overhead.
• This algorithm is designed for maximum throughput in most scenarios.
• Waiting time and response time increase as the process' computational requirements increase. Since turnaround time is based on waiting time plus processing time, longer processes are significantly affected by this. Overall waiting time is smaller than FIFO, however, since no process has to wait for the termination of the longest process.
• No particular attention is given to deadlines; the programmer can only attempt to make processes with deadlines as short as possible.
• Starvation is possible, especially in a busy system with many small processes being run.
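A minimal sketch of shortest-remaining-time scheduling, simulated one time unit at a time: whenever a job with a shorter remaining time arrives, it preempts the running one. The job set is invented.

# Minimal sketch of shortest-remaining-time (preemptive SJF) scheduling.
def shortest_remaining_time(jobs):
    """jobs: dict name -> (arrival_time, burst_time). Returns completion times."""
    remaining = {name: burst for name, (_, burst) in jobs.items()}
    completion = {}
    clock = 0
    while remaining:
        ready = [n for n in remaining if jobs[n][0] <= clock]
        if not ready:
            clock += 1                      # CPU idles until the next arrival
            continue
        current = min(ready, key=lambda n: remaining[n])   # least time left wins
        remaining[current] -= 1
        clock += 1
        if remaining[current] == 0:
            completion[current] = clock
            del remaining[current]
    return completion

jobs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 2)}
for name, done in sorted(shortest_remaining_time(jobs).items()):
    arrival, burst = jobs[name]
    print(f"{name}: turnaround={done - arrival}, waiting={done - arrival - burst}")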

c) Fixed priority pre-emptive scheduling

The O/S assigns a fixed priority rank to every process, and the scheduler arranges the processes in the ready queue in order of their priority. Lower priority processes get interrupted by incoming higher priority processes.

• Overhead is not minimal, nor is it significant.
• FPPS has no particular advantage in terms of throughput over FIFO scheduling.
• Waiting time and response time depend on the priority of the process. Higher priority processes have smaller waiting and response times.
• Deadlines can be met by giving processes with deadlines a higher priority.
• Starvation of lower priority processes is possible with large amounts of high priority processes queuing for CPU time.

d) Round-robin scheduling

The scheduler assigns a fixed time unit per process, and cycles through them.

• RR scheduling involves extensive overhead, especially with a small time unit.
• Balanced throughput between FCFS and SJN; shorter jobs are completed faster than in FCFS and longer processes are completed faster than in SJN.
• Fastest average response time; waiting time is dependent on the number of processes, and not average process length.
• Because of high waiting times, deadlines are rarely met in a pure RR system.
• Starvation can never occur, since no priority is given. Order of time unit allocation is based upon process arrival time, similar to FCFS.
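A minimal round-robin sketch: every process in the ready queue receives a fixed time quantum in turn. For simplicity all jobs are assumed to be ready at time 0; the burst times and the quantum are invented.

# Minimal sketch of round-robin scheduling with a fixed time quantum.
from collections import deque

def round_robin(jobs, quantum):
    """jobs: list of (name, burst_time). Returns completion time per job."""
    queue = deque((name, burst) for name, burst in jobs)
    clock = 0
    completion = {}
    while queue:
        name, left = queue.popleft()
        slice_used = min(quantum, left)
        clock += slice_used
        left -= slice_used
        if left > 0:
            queue.append((name, left))     # go to the back of the queue
        else:
            completion[name] = clock
    return completion

print(round_robin([("P1", 10), ("P2", 4), ("P3", 6)], quantum=3))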


e) Multilevel Queue Scheduling

This is used for situations in which processes are easily classified into different groups. For example, a common division is made between foreground (interactive) processes and background (batch) processes. These two types of processes have different response-time requirements and so may have different scheduling needs.

Overview

Scheduling algorithm                    CPU utilization   Throughput   Turnaround time   Response time   Deadline handling   Starvation free
First In First Out                      Low               Low          High              Low             No                  Yes
Shortest remaining time                 Medium            High         Medium            Medium          No                  No
Fixed priority pre-emptive scheduling   Medium            Low          High              High            Yes                 No
Round-robin scheduling                  High              Medium       Medium            Low             No                  Yes

How to choose a scheduling algorithm

When designing an operating system, a programmer must consider which scheduling algorithm will perform best for the use the system is going to see. There is no universal “best” scheduling algorithm, and many operating systems use extended versions or combinations of the scheduling algorithms above. For example, Windows NT/XP/Vista uses a multilevel feedback queue, a combination of fixed priority preemptive scheduling, round-robin, and first in first out. In this system, processes can dynamically increase or decrease in priority depending on whether they have been serviced already, or have been waiting extensively. Every priority level is represented by its own queue, with round-robin scheduling amongst the high priority processes and FIFO among the lower ones. In this sense, response time is short for most processes, and short but critical system processes get completed very quickly. Since processes can only use one time unit of the round robin in the highest priority queue, starvation can be a problem for longer high priority processes.

Operating system scheduler implementations

Windows

Very early MS-DOS and Microsoft Windows systems were non-multitasking, and as such did not feature a scheduler. Windows 3.1x used a non-preemptive scheduler, meaning that it did not interrupt programs. It relied on the program to end or tell the OS that it didn't need the processor so that it could move on to another process. This is usually called cooperative multitasking. Windows 95 introduced a rudimentary preemptive scheduler; however, for legacy support it opted to let 16-bit applications run without preemption.

Windows NT-based operating systems use a multilevel feedback queue. 32 priority levels are defined, 0 through 31, with priorities 0 through 15 being "normal" priorities and priorities 16 through 31 being soft real-time priorities, requiring privileges to assign. 0 is reserved for the operating system. Users can select 5 of these priorities to assign to a running application from the Task Manager application, or through thread management APIs. The kernel may change the priority level of a thread depending on its I/O and CPU usage and whether it is interactive (i.e. accepts and responds to input from humans), raising the priority of interactive and I/O-bound processes and lowering that of CPU-bound processes, to increase the responsiveness of interactive applications. The scheduler was modified in Windows Vista to use the cycle counter register of modern processors to keep track of exactly how many CPU cycles a thread has executed, rather than just using an interval-timer interrupt routine. Vista also uses a priority scheduler for the I/O queue so that disk defragmenters and other such programs don't interfere with foreground operations.

Mac OS

Mac OS 9 uses cooperative scheduling, where one process controls multiple cooperative threads. The kernel schedules the process using a round-robin scheduling algorithm. Then, each process has its own copy of the thread manager that schedules each thread. The kernel then, using a preemptive scheduling algorithm, schedules all tasks to have processor time. Mac OS X[5] uses Mach (kernel) threads, and each thread is linked to its own separate process. If threads are being cooperative, then only one can run at a time. The thread must give up its right to the processor for other processes to run.

Linux

Since version 2.5 of the kernel, Linux has used a multilevel feedback queue with priority levels ranging from 0-140. 0-99 are reserved for real-time tasks and 100-140 are considered nice task levels. For real-time tasks, the time quantum for switching processes is approximately 200 ms, and for nice tasks approximately 10 ms. The scheduler will run through the queue of all ready processes, letting the highest priority processes go first and run through their time slices, after which they will be placed in an expired queue. When the active queue is empty the expired queue will become the active queue and vice versa. From versions 2.6 to 2.6.23, the kernel used an O(1) scheduler. In version 2.6.23, this was replaced with the Completely Fair Scheduler, which uses red-black trees instead of queues.[6]

FreeBSD

FreeBSD uses a multilevel feedback queue with priorities ranging from 0-255. 0-63 are reserved for interrupts, 64-127 for the top half of the kernel, 128-159 for real-time user threads, 160-223 for time-shared user threads, and 224-255 for idle user threads. Also, like Linux, it uses the active queue setup, but it also has an idle queue.[7]

NetBSD

NetBSD uses a multilevel feedback queue with priorities ranging from 0-223. 0-63 are reserved for time-shared threads (default, SCHED_OTHER policy), 64-95 for user threads which entered kernel space, 96-128 for kernel threads, 128-191 for user real-time threads (SCHED_FIFO and SCHED_RR policies), and 192-223 for software interrupts.

Solaris

Solaris uses a multilevel feedback queue with priorities ranging from 0-169. 0-59 are reserved for time-shared threads, 60-99 for system threads, 100-159 for real-time threads, and 160-169 for low priority interrupts. Unlike Linux, when a process is done using its time quantum, it is given a new priority and put back in the queue.

Summary

Operating System        Preemption   Algorithm
Windows 3.1x            None         Cooperative scheduler
Windows 95, 98, ME      Half         Preemptive for 32-bit applications only
Windows NT, XP, Vista   Yes          Multilevel feedback queue
Mac OS pre 9            None         Cooperative scheduler
Mac OS X                Yes          Mach (kernel)
Linux pre 2.5           Yes          Multilevel feedback queue
Linux 2.5-2.6.23        Yes          O(1) scheduler
Linux post 2.6.23       Yes          Completely Fair Scheduler
Solaris                 Yes          Multilevel feedback queue
NetBSD                  Yes          Multilevel feedback queue
FreeBSD                 Yes          Multilevel feedback queue

6.7 Virtual memory

A virtual memory system denotes an area located on a computer's hard drive that allows programs to operate without the need to load them into the physical memory. Computers have basically two kinds of memory systems: random access memory (RAM) and virtual memory (VM). When there is not an adequate amount of physical memory, or RAM, available to run all the applications that a user may have opened at any one time, the system uses virtual memory to make up the difference.

If the computer did not have the ability to access the virtual memory when it exhausted the RAM, the user would receive an error message indicating that other applications would have to be closed in order to load a new program. The virtual memory process works by seeking locations in the physical memory that have not been accessed for a certain period of time. This information is then copied to an area on the hard drive. The available space that is freed up can now be used to load the new program.

This feature is one of the many operations performed automatically by your computer that go unnoticed by the average user. Virtual memory is not only a way the computer creates additional memory for running applications; it also takes advantage of the available system memory resources. This is cheaper than purchasing additional RAM chips. The hard drive of every computer system has an area that is used for virtual memory.

This secondary source of storage, where information is stored and retrieved, is called a paging file. The data exchanged back and forth between the physical memory and the virtual memory system moves in equal-sized blocks called pages. Virtual memory is basically a small paging file, which is located on the hard drive. Simply adding to the size of the paging file can increase the size of the virtual memory system's storage capacity. In contrast, the only way to create more RAM is by purchasing and installing chips with larger memory capacities.
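A minimal sketch of how an address is split into a page number and an offset when memory is managed in equal-sized pages, as described above. The 4 KB page size and the sample address are assumptions chosen for the example.

# Minimal sketch: splitting a virtual address into page number and offset.
PAGE_SIZE = 4096                      # bytes per page (a common choice)

def split_address(virtual_address):
    page_number = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    return page_number, offset

address = 0x0001_2A7F
page, offset = split_address(address)
print(f"address {address:#010x} -> page {page}, offset {offset}")
# If that page is not in RAM, the OS would page it in from the paging
# file on disk before the access can complete.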


One of the disadvantages of virtual memory is that the read and write processing speed is noticeably slower when compared to random access memory. Users who depend significantly on the virtual memory system to run their applications will suffer a decline in the overall performance of their computer system. The fact is hard disks are not built for handling tiny bits of information. The key to optimal system performance is to have more than enough RAM to handle your routine program processing workloads. This will ensure that accessing the virtual memory system is an exception and not the rule.

Memory Management Unit

The computer hardware that is responsible for managing the computer's memory system is called the memory management unit (MMU). This component serves as a buffer between the CPU and system memory. The functions performed by the memory management unit can typically be divided into three areas: hardware memory management, operating system memory management and application memory management. Although the memory management unit can be a separate chip component, it is usually integrated into the central processing unit (CPU).

Generally, the hardware associated with memory management includes random access memory (RAM) and memory caches. RAM is the physical memory installed in the computer; it is the main storage area of the computer where data is read and written while programs run. Memory caches are used to hold copies of certain data from the main memory. The CPU accesses this information held in the memory cache, which helps speed up the processing time.

When the physical memory, or RAM, runs out of memory space, the computer automatically uses virtual memory from the hard disk to run the requested program. The memory management unit allocates memory from the operating system to various applications. The virtual address area, which is located within the central processing unit, comprises a range of addresses that are divided into pages. Pages are secondary storage blocks that are equal in size. The automated paging process allows the operating system to utilize storage space scattered on the hard disk.

Instead of the user receiving an error message that there is not enough memory, the MMU automatically instructs the system to build enough virtual memory to execute the application. Contiguous virtual memory space is created out of a pool of equal-size blocks of virtual memory for running the application. This feature is a major key to making this process work effectively and efficiently, because the system is not required to create one chunk of virtual memory to handle the program requirements. Creating various sizes of memory space to accommodate different size programs causes a problem known as fragmentation. This could lead to the possibility of not having enough free space for larger programs when the total space available is actually enough.

Application memory management entails the process of allocating the memory required to run a program from the available memory resources. In larger operating systems, many copies of the same application can be running. The memory management unit often assigns an application the memory address that best fits its need. It's simpler to assign these programs the same addresses. Also, the memory management unit can distribute memory resources to programs on an as-needed basis. When the operation is completed, the memory is recycled for use elsewhere.

One of the main challenges for the memory management unit is to sense when data is no longer needed and can be discarded. This frees up memory for use by other processes. Automatic and manual memory management has become a separate field of study because of this issue. Inefficient memory management presents a major issue when it comes to optimal performance of computer systems.


Does Adding RAM improve your computer speed?

Computer speed is one of the most sought-after features of both desktops and laptops. Whether gaming, surfing the Web, executing code, running financial reports or updating databases, a computer can never be too fast. Will adding more random access memory (RAM) increase computer speed? In some cases it will, but not in all.

If RAM is the only bottleneck in an otherwise fast system, adding RAM will improve computer speed, possibly dramatically. If there are other problems aside from a shortage of RAM, however, adding memory might help, but the other factors will need to be addressed to get the best possible performance boost. In some cases a computer might simply be too old to run newer applications efficiently, if at all.

In Windows™ systems you can check RAM usage several ways. One method is to press Ctrl + Alt + Del to bring up the Task Manager, then click the Performance tab to see a graph of RAM resources. Third party freeware memory managers will also check memory usage for you. Some even monitor memory to free up RAM when necessary, though this is a stopgap measure.

If your system is low on RAM or routinely requires freeing RAM, installing more memory should improve computer speed. Check your motherboard before heading to your favorite retailer, however. The board might be maxed out in terms of the amount of RAM it will support. It can also happen that existing memory might need to be replaced if, for example, all slots are occupied by 1-gigabyte sticks on a motherboard that will support larger sticks.

If you are a gamer or work with video applications, a slow graphics card might be a contributor to poor performance. A good graphics card should have its own on-board RAM and graphics processor (GPU); otherwise it will use system RAM and CPU resources. Consult the motherboard manual to see if you can improve performance by upgrading to a better card. If your present card is top notch and RAM seems fine, the central processing unit (CPU) is another upgrade that can drastically improve computer speed.

Maintenance issues also affect performance and might need to be addressed to remove bottlenecks. A lack of sufficient hard disk space will slow performance, as will a fragmented drive. Spyware, adware, keyloggers, root kits, Trojans and viruses can also slow a computer by taking up system resources as they run background processes.

In some cases a computer serves fine except for one specific application. Most software advertises minimum requirements, but these recommendations are generally insufficient for good performance. One rule of thumb is to double the requirements for better performance. If your system can only meet minimal requirements, this is likely the problem.

Taking the measures outlined should improve computer speed unless the system is already running at peak and the motherboard cannot be upgraded further. If so, the only alternative is to invest in a new computer that supports newer, faster technology. With prices falling all the time it should be easy to find an affordable buy that will reward you each time you boot up.