
Operating System Concepts


Unit I
OS Basics: - Definition

An O.S. is a program that acts as an intermediary between a user of the computer system and the computer hardware. The purpose of an O.S. is to provide an environment in which a user can execute programs. The primary goal of an O.S. is thus to make the computer system convenient to use. A secondary goal is to use the computer hardware in an efficient manner. So we can say that it is a master control program which is used to co-ordinate all the computing resources of the computer system. It performs the co-ordination and management of all the H/W resources within the computer and also provides an environment for users and applications. It controls the flow of signals from the CPU to various parts of the computer.

It is the first program loaded into the computer's memory after the computer is switched on.

The O.S. performs the following major functions.

(I) O.S. acts as a resource manager =>

The O.S. is the only authority that directly interacts with the computer hardware (H/W). It is responsible for managing all the H/W resources of a computer, which is why it is also considered a resource manager. Acting as resource manager it performs four important operations.

(a) Processor Management: -

It is responsible for the allocation of the processor to the different tasks or processes being performed by the computer system, and also for the deallocation of the processor from tasks or processes in multitasking and multiprocessing environments. Multitasking or multiprocessing means that multiple processes or tasks can be executed simultaneously by the processor. This is done using time-sharing techniques.

(b) Memory Management: -

It is responsible for managing the main memory, i.e. the allocation and deallocation of memory to programs. It is also responsible for protecting the memory given to one program from other programs. It handles programs that exceed the physical limitations of the main memory, i.e. it provides virtual memory to the programs.

(c) Device Management: -

It is responsible for coordinating the operations of the CPU with its peripheral equipment, i.e. the communication between the O.S. and the peripherals (devices). E.g. one important task is to identify the various peripheral devices attached to the CPU. It also deals with spooling and buffering techniques.

(d) Information System: -

It deals with the storage and retrieval of data to and from secondary storage. It also deals with the representation of data on the disk, encoding and decoding of data, compression of data and data locks.

(II) O.S. as extended machine =>

The O.S. is considered to be an extended machine because it provides the interface through which users and their applications interact. It extends the capabilities of the computer hardware to the user. The O.S. hides all the complex details of the operations being performed. A simple I/O operation requires many details, such as the disk block to read, the number of sectors per track, the source address, the destination address and so on, which is very complex for us to handle directly.

The function of the OS is to present the user with the equivalent of an extended machine or virtual machine that is easier to program than the underlying hardware.

Operating System Architecture

O.S.s are classified on the basis of the computer architectures developed so far. Based on these criteria an O.S. can be classified as:

(i) Simple System
(ii) Batch System
(iii) Multiprogrammed Batch System
(iv) Time Sharing System
(v) Parallel System
(vi) Distributed System
(vii) Real Time System
(viii) Personal-Computer System

(i) Simple System

It was one of the simplest architectures of the O.S., where one job was processed at a time. It was a single user system. It used a single program with one processor that interacted with one user at a time. It was very easy to design and implement. The C.P.U. was not fully utilized in this system because it remained idle most of the time.

(ii) Batch System

It is almost the same as the simple system but could be used to process more than one job using a single microprocessor. Its major task was to transfer control automatically from one job to the next. In this system, to speed up processing, jobs with similar needs were batched together and run through the computer as a group, meaning that a user could submit more than one job in a batch that executes automatically one after another. The main idea behind this system was to reduce user interference during the processing or execution of jobs. It also reduced CPU idle time in comparison to the simple system, but the CPU still remained idle some of the time while jobs were being grouped together.

(iii) Multiprogrammed Batch System

The objective of a multiprogramming O.S. is to increase CPU utilization efficiency. The batch processing system tries to reduce CPU idle time through reduced operator interaction, but it could not reduce the idle time due to I/O operations. In this system, when some I/O operation is being performed the CPU does not sit idle but processes the next program or job in the queue. This ensures optimum utilization of the system. In this system the O.S. must make decisions for the users. E.g. all jobs that enter the system are kept in the job pool on the disk. If several jobs are ready to be brought into memory and there is not enough space for all of them, then the O.S. must choose which of them is to be executed first. E.g. UNIX, Windows NT, VMS.

(iv) Time Sharing or Multitasking

Time sharing or multitasking is a logical extension of multiprogramming. In time sharing, the system can process and execute multiple jobs by switching the CPU between them (done via context switches). A time sharing system gives users the illusion that all the programs are running concurrently. This is because the switches occur so frequently that users may interact with each program while it is running.

E.g. IBM O.S. 360

(v) Parallel System (Multiprocessing)

These O.S. were designed to overcome the limitation of the earlier O.S. that could use only a single processor. With the development of multiprocessor systems, O.S. were designed to use multiple processor units that share the computer bus, the clock, and sometimes memory and peripheral devices.

The overall advantage of these systems was increased throughput (performance). By increasing the number of processors, we hope to get more work done in a shorter period of time.

Multiprocessor systems can also save money compared to multiple single-processor systems, because the processors can share the same peripherals, cabinet and power supply.

Another advantage of multiprocessor systems is that they increase the reliability of the system (if one microprocessor fails, the processing continues with the other microprocessors). E.g. UNIX, Windows NT, Sun-Solaris (mainframe).

(vi) Distributed System

These systems were designed to distribute computation among several processors, where every processor has its own memory and its own clock. Every processor can communicate with the other processors using communication lines between them (local bus, telephone lines). There is a variety of reasons for building distributed systems, some of which are the following: -

Resource Sharing: - The advantage of such a system is that resources can be shared among several users. E.g. a user at computer A may share the laser printer available only at computer B.

Computation Speedup: - It increases the computation speed because a particular computation can be partitioned into a number of sub-computations that can run concurrently.

Reliability: - It increases reliability; if one computer fails in a distributed system, the remaining computers can continue working.

Communication: - They also provide a communication facility between the systems, e.g. for exchanging data between several systems.

(vii) Real Time System

They are highly sophisticated and complex but the fastest O.S. in this category, because they have very strict and rigid (fixed) time requirements for the operations performed by the processor. A real time system has well defined, fixed time constraints, and the processing must be done within the defined time limit or the system will fail.

There are two flavors of real time system. The first is the hard real time system, which guarantees that critical tasks complete on time. The second is the soft real time system, where a critical real-time task gets priority over other tasks and retains that priority until it completes.

(viii) Personal-Computer System

These systems were dedicated to a single user. Such computer systems are usually referred to as personal computers (PCs). This became possible due to the decrease in hardware cost. The main disadvantage is that these O.S. were neither multi-user nor multitasking. The goals of these operating systems changed; instead of maximizing CPU and peripheral utilization, the systems were designed for maximizing user convenience.

O.S. Components

We can create a system as large and complex as an O.S. only by partitioning it into smaller pieces. Each of these pieces should be a well-defined portion of the system, with carefully defined inputs, outputs, and function. The major components are: -

(1) Process Management: -

It is responsible for creation and deletion of both user and system processes. It also deals with the suspension and resumption of processes. It provides the mechanisms for process synchronization, process communication, and deadlock handling.

(2) Main-Memory Management: -

It keeps track of which parts of memory are currently being used and by whom. It also decides which processes are to be loaded into memory when memory space becomes available. It is responsible for the allocation and deallocation of memory space as needed.

(3) File Management: -

It deals with the creation and deletion of files and directories. It also supports the primitives (commands) that are used to manipulate files and directories. It is also responsible for the mapping of files onto secondary storage and for the backup of files.

(4) I/O System Management

One of the purposes of an O.S. is to hide the peculiarities of specific hardware devices from the user. This is done by the I/O subsystem, which consists of:

A memory management component including buffering, caching, and spooling

A general device-driver interface

Drivers for specific hardware devices

(5) Secondary-Storage Management

The O.S. is responsible for the following activities in connection with disk management.

Free-space management

Storage allocation

Disk scheduling

(6) Networking

A distributed system is a collection of processors that do not share memory, peripheral devices, or a clock. The processors communicate with one another through various communication lines, such as high-speed buses or telephone lines. The processors in the system are connected through a communication network, which can be configured in a number of different ways and is managed by this component.

(7) Protection System: -

It is responsible for protecting the processes from one another so that a process cannot access the memory of other processes. It provides the mechanisms to ensure that the files, memory segments, CPU, and other resources can be operated on by only those processes that have gained proper authorization from the O.S.

(8) Command-Interpreter System: -

One of the most important system programs for an O.S. is the command interpreter, which is the interface between the user and the operating system. Some O.S. include the command interpreter in the kernel. Other O.S.s, such as MS-DOS and UNIX, treat the command interpreter as a special program that is running when a job is initiated, or when a user first logs on.

System Calls or Services

User programs communicate with the O.S. and request services from it by making system calls. Corresponding to each system call there is a library procedure that a user program can call. The user program passes some specified parameters with the call; the O.S. executes the request if the parameters are correct and returns the status of the execution, i.e. whether the request was completed or not. Application programs can't access the H/W directly; they do so using either of two methods:

(a) By making a direct call to the low level H/W routines.

(b) By calling high level routines then further call low level routines.

These routines are called system calls and are used by programmers and application developers in order to access particular H/W. These system calls are grouped into different categories on the basis of their functions.
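As an illustration, here is a minimal sketch in C of a user program requesting a service through a library procedure that wraps a system call; write() is the POSIX library wrapper for the write call, and a UNIX-like system is assumed:

    #include <string.h>
    #include <unistd.h>   /* write(): library wrapper for the write system call */

    int main(void)
    {
        const char *msg = "hello via a system call\n";
        /* The library procedure traps into the O.S.; the kernel checks the
           parameters (file descriptor, buffer, length) and returns a status. */
        ssize_t n = write(1, msg, strlen(msg));   /* 1 = standard output */
        return (n < 0) ? 1 : 0;                   /* negative means the call failed */
    }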

O.S. Services

1. Program execution services: - The system must be able to load a program into memory and to run it. The program must be able to end its execution, either normally or abnormally (indicating an error).

2. I/O operation services: - A running program may require I/O. This I/O may involve a file or an I/O device. Users cannot control I/O devices directly; therefore the O.S. must provide some mechanism to do I/O.

3. File system services (read and write, create, input, output): - It should be obvious that programs need to read and write files. They also need to create and delete files by name.

4. Communication services (network): - One process may need to exchange information with another process. This communication can occur in two ways: between processes executing on the same computer, or between processes executing on different computers. Communication may be implemented via shared memory, or by the technique of message passing, in which packets of information are moved between processes by the O.S.

5. Error detection services: - The O.S. constantly needs to be aware of possible errors. Errors may occur in the CPU and memory hardware, in I/O devices (a connection failure on a network, or lack of paper in the printer), or in a user program (such as an arithmetic overflow, or an attempt to access an illegal memory location).

6. Accounting services: - In this type of service we want to keep track of which users use how much and what kinds of resources, such as the microprocessor, memory, and any I/O devices.

7. Protection services: - Protection involves ensuring that all access to system resources is controlled, i.e. protecting a process from others. Security of the system from outsiders is also important.

O.S. Calls

System calls provide the interface between a process and the O.S. These calls are generally available as assembly-language instructions, and are usually listed in the manuals used by assembly-language programmers.

1. Process Control: - End, abort, load, execute, create process, terminate process, get process attributes, set process attributes, wait for time, wait event, signal event, allocate and free memory.

2. File manipulation calls: - Create file, delete file, open file, close file, read file, write file, reposition, get file attributes, set file attributes.

3. Device manipulation: - Request devices, release devices, read devices, write devices, reposition, get device attributes, set device attributes, logically attach or detach devices.

4. Information Maintenance: - Get time, get date, set time, set date, get system data, set system data, get process, file, or device attributes, set process, file, or device attributes.

5. Communication: - Create connection, delete connection, send messages, receive messages, transfer status information, attach or detach remote devices.

O.S. Structure (Design)

1. Simple structure (Monolithic systems)
2. Layered structure

3. Virtual Machine

4. Client-Server Model

1. Simple structure: -

O.S. developed earlier followed a very simple structure; truly speaking, they did not have a well defined structure. They started as small, simple and limited systems. MS-DOS is an example of such a system. It was designed and written to provide maximum functionality in minimum space because of the limited H/W available in those days. It was not divided into modules. It was a very simple structure with three layers, which are:

1. The main program which invokes the request

2. Set of service procedures that carry out system calls

3. A set of utility programs for users

2. Layered Structure: -

Later, with the development of advanced and fast H/W architectures, O.S. were redesigned and broken into smaller, more appropriate modules that were not available in MS-DOS. Such O.S.s were developed following a modular design. The modularization of the system was done by following the layered approach, which consists of breaking the O.S. into a number of layers, each built on top of the lower layers. The lowest layer, layer 0, is the H/W, and the highest layer, layer N, is the user interface.

Every layer is an implementation of an abstract object, which is an encapsulation of data and the operations performed on it. The main advantage of the layered approach is modularization. The classic layered design has six layers, viz:

Layer   Description
5       The operator
4       User programs
3       Input/output management
2       Operator-process communication
1       Memory and drum management
0       Processor allocation and multiprogramming

3. Virtual Machine: -

After the development of O.S.s following the layered approach, and with the advent of more sophisticated and complex H/W architectures, O.S. were redesigned to include two important techniques:

I. CPU Scheduling

II. Virtual Memory Technique

An O.S. using both techniques can create the illusion of multiple processes, each executing on its own processor with its own (virtual) memory.

These techniques became very popular and today almost all the O.S. follow them to provide increased system performance and optimum utilization of resources, e.g. VM O.S. from IBM.

4. Client-Server Model: -

In this model of O.S., to request a service such as reading a block of a file, a user process (now known as the client process) sends the request to a server process, which then does the work and sends the results back to the client. In this model all the kernel does is handle the communication between clients and servers. By splitting the O.S. up into parts, each of which handles one facet (aspect) of the system, such as file services, process services, or terminal services, each part becomes small and manageable.

Unit II
Process Management

Process Concept: -

A process is an abstract model of a sequential program in execution. It is an identifiable object in the system having three important components.

1. The code to be executed.

2. The data on which the code is to be executed.

3. Process status.

Every process is uniquely identified by the O.S. through a PID (process identifier), which is dynamically allocated to the process by the O.S.

A process can be created using a create system call.

e.g. LINUX uses fork() and exec() to create a new process.

Every process is created under a main process, which is the first process created in the system, i.e. by the O.S. The newly created process is considered the child process, and the process under which it is created is considered the parent process. This forms a hierarchical structure of processes in the system.
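A minimal sketch in C of this parent/child relationship on LINUX, using the fork() and exec() calls mentioned above; the program run by the child (ls) is just an illustration:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>     /* fork(), execlp() */
    #include <sys/wait.h>   /* waitpid() */

    int main(void)
    {
        pid_t pid = fork();              /* create a child: a copy of the parent */
        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {
            /* child process: replace its image with a new program */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");            /* reached only if exec fails */
            exit(1);
        }
        waitpid(pid, NULL, 0);           /* parent waits for the child */
        printf("child %d finished\n", (int)pid);
        return 0;
    }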

Every process is represented in memory by a complex data structure called the PCB (process control block). It contains seven important kinds of information with respect to a process.

(i) Process Status

The state may be new, wait, ready, running, terminated, blocked and so on.

(ii) CPU Registers

It contains information about the various microprocessor registers used by the process. The registers vary in number and type depending on the hardware architecture, but the basic registers are: -

Accumulator
IP (Instruction Pointer)
SP (Stack Pointer)
DR (Data Register) => AX, AH, AL
Flag Register (status)

(iii) CPU Scheduling Information

It deals with the information related to process priority and other scheduling parameters.

(iv) Memory Management

It deals with the information regarding memory type, capacity, and the location used by the process.

(v) Accounting Information

It deals with the information regarding the amount of CPU time and other resources used by a process, and also includes the job or process number (PID).

(vi) I/O status Information

It includes the information regarding the I/O devices allocated to the process and a list of open files.

(vii) Program Counter

The counter indicates the address of the next instruction to be executed for this process.
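To make the seven kinds of information concrete, here is a simplified, illustrative PCB declaration in C; the field names and sizes are assumptions of the sketch (a real O.S. structure, such as LINUX's task_struct, holds far more):

    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

    typedef struct pcb {
        int           pid;               /* process identifier */
        proc_state    state;             /* (i)   process status */
        unsigned long registers[16];     /* (ii)  saved CPU registers */
        int           priority;          /* (iii) CPU scheduling information */
        void         *page_table;        /* (iv)  memory management information */
        long          cpu_time_used;     /* (v)   accounting information */
        int           open_files[32];    /* (vi)  I/O status: open files */
        unsigned long program_counter;   /* (vii) address of next instruction */
        struct pcb   *next;              /* link for the scheduling queues */
    } pcb;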

Process Status

At any given point of time a process can be in one of the states mentioned below. These states change as the process executes.

(a) New-born

When a process is created and it enters the system.

(b) Ready

The process has acquired all the resources and is waiting for the microprocessor to be assigned.

(c) Running

The process has acquired the microprocessor and instructions are being executed.

(d) Waiting or Block

When a process is waiting for some event to occur, such as an I/O operation.

(e) Terminate

When a process has finished its execution.

A process changes its state many times throughout its lifetime. These names are arbitrary and vary from O.S. to O.S., but the states remain the same. Also, at a given point of time only one process can be running while others may be waiting or ready.

Scheduling Criteria and Algorithms

Scheduling: -

The objective of multiprogramming is to have multiple processes running all the time to maximize CPU utilization. Likewise, the objective of a time sharing system is to switch between processes so that users can interact frequently with running processes. On a uniprocessor system there can never be more than one process actually running, so we have to schedule them: while one process is running the others wait, and when the CPU is free it can be rescheduled. All these functions are performed by one of the most important parts of the O.S., called the scheduler.

Scheduling Queues: -

When a process enters the system it is placed in the job queue, which consists of all the processes in the system. The processes that are active in memory and waiting for the M.P. are kept in the ready queue. The processes waiting for an event or some I/O are kept in the wait/device queue; they are held in the device queue until the required device becomes free, and then they are shifted to the ready queue. Every process migrates between the various scheduling queues throughout its lifetime.

Types of Scheduling: -

There are 3 kinds of scheduling that work in co-ordination with each other to perform the entire scheduling process.

1. Long Term (job scheduling): -

If there are more processes than can be accommodated in primary memory, the O.S. can place some processes in secondary memory (S.M.), and when required a process in S.M. can be loaded into primary memory (P.M.) for further processing. This role is performed by long term scheduling.

2. Short Term Scheduling (CPU Scheduling): -

Once long term scheduling loads a process into P.M., it is further transferred to the M.P. for actual processing. This role is performed by STS (short term scheduling).

3. Mid Term Scheduling: -

It performs the switching between the running process and the ready processes.

Scheduling Algorithms: -

The scheduling algorithms are classified on the basis of the scheduling mechanism:

Non-preemptive scheduling mechanism (co-operative)

Pre-emptive scheduling mechanism

Non preemptive scheduling mechanism or multitasking: -

It is a kind of scheduling mechanism where a process can't be forced to release the M.P. immediately so as to transfer control to a newly started process.

Pre-emptive scheduling mechanism or multitasking: -

It is a kind of scheduling mechanism where the M.P. can be interrupted to suspend the current process and pay attention to a newly started process.

Scheduling Criteria: -

Every scheduling algorithm has its own merits and demerits, and each has properties that differ from one algorithm to another. There are certain criteria for comparing CPU scheduling algorithms. They are:

(i) M.P. Utilization

The algorithm must ensure optimum utilization of the M.P. so as to keep it busy all the time.

(ii) Throughput

It identifies the number of processes that can be completed in a specified time period. For longer processes the throughput may be 2 to 4 processes per hour; for shorter processes it may be 10 processes per second.

(iii) Turnaround Time (should be less)

It is the sum of the periods a process spends waiting in the ready queue and being executed by the M.P., including I/O.

(iv) Wait Time (less)

It is the time spent by a process waiting in the ready queue.

(v) Response Time (less)

It is the time period between the moment a request is submitted and the moment the first response is received.

To have an efficient scheduling algorithm it is desirable to maximize CPU utilization and throughput, and at the same time to minimize the turnaround, waiting, and response times.

Scheduling Algorithms

1. FCFS (First Come First Serve)
2. SJF (Shortest Job First)
3. Priority scheduling
4. Round Robin scheduling
5. Multilevel queue scheduling
6. Multilevel feedback queue scheduling

1. FCFS: -

In FCFS scheduling the processes are allocated the M.P. on a first come, first served basis, i.e. the process that requests the CPU first is allocated the CPU first. It is managed and maintained using a FIFO queue.

In FCFS, when a process enters the ready queue its PCB is linked to the tail of the queue. When the CPU is free it is allocated to the process standing at the head of the queue; the running process is eventually removed, and the next process at the head is allocated the M.P. A process that re-enters is appended to the tail of the queue.

For a given set of processes we can then:

(i) Calculate wait time
(ii) Calculate T.A. (turnaround) time
(iii) Calculate avg. wait time
(iv) Calculate avg. T.A. time
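A minimal sketch in C of these four calculations for FCFS, assuming all processes arrive at time 0; the burst times (24, 3, 3 ms) are illustrative:

    #include <stdio.h>

    int main(void)
    {
        int burst[] = {24, 3, 3};              /* CPU burst of each process, ms */
        int n = sizeof burst / sizeof burst[0];
        int wait = 0, total_wait = 0, total_tat = 0;

        for (int i = 0; i < n; i++) {
            int tat = wait + burst[i];         /* turnaround = waiting + burst */
            printf("P%d: wait=%2d turnaround=%2d\n", i + 1, wait, tat);
            total_wait += wait;
            total_tat  += tat;
            wait += burst[i];                  /* later processes wait for this burst too */
        }
        printf("avg wait=%.2f avg turnaround=%.2f\n",
               (double)total_wait / n, (double)total_tat / n);
        return 0;
    }

With these bursts the order of service matters: served as P1, P2, P3 the average wait is 17 ms, but if the two short jobs were served first it would drop to 3 ms, which is exactly the weakness SJF addresses.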

2. SJF: -

In the SJF algorithm the length of each process's CPU burst is considered. When the CPU is available, it is assigned to the process that has the smallest CPU burst. If two processes have CPU bursts of the same length, then FCFS scheduling is used to break the tie.

3. Priority Scheduling: -

In priority scheduling a priority is associated with every process and the CPU is allocated to the process with the highest priority. If two processes have the same priority then FCFS is used to break the tie.

4. Round Robin scheduling: -

The Round Robin algorithm is one of the most common and popular scheduling mechanisms used by time sharing systems. It is similar to the FCFS mechanism with one exception: in the Round Robin algorithm every process is allocated a fixed small unit of time called a quantum or time slice. This quantum is generally between 10 and 100 ms. The ready queue is treated as a circular queue, and the CPU scheduler goes around it, allocating the CPU to each process for an interval of one quantum. In the Round Robin mechanism the queue works on the FIFO principle: the CPU scheduler picks up the first process from the ready queue, sets a timer to interrupt after one quantum, and dispatches the process.
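A small Round Robin simulation sketch in C; the quantum (4 ms) and burst times are illustrative, and the ready queue is simplified to a fixed array scanned circularly:

    #include <stdio.h>

    int main(void)
    {
        int remaining[] = {24, 3, 3};          /* unfinished burst per process, ms */
        int n = sizeof remaining / sizeof remaining[0];
        int quantum = 4, clock = 0, done = 0;

        while (done < n) {
            for (int i = 0; i < n; i++) {      /* go around the ready queue */
                if (remaining[i] == 0) continue;
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                clock += slice;                /* process runs for one time slice */
                remaining[i] -= slice;
                printf("t=%2d: P%d ran %d ms\n", clock, i + 1, slice);
                if (remaining[i] == 0) done++; /* process finished */
            }
        }
        return 0;
    }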

5. Multilevel queue scheduling: -

In the MLQS algorithm the ready queue is partitioned into several ready queues. Processes are assigned to a queue, generally based on some property such as memory size, process priority or process type. Every queue has its own scheduling algorithm, e.g. one queue might be scheduled by the Round Robin algorithm while another queue may be scheduled by FCFS or SJF. In addition there must be some scheduling mechanism between the queues, which is commonly implemented as priority scheduling. E.g. the queues may be categorized by the nature of the process, where system processes have the higher priority while user processes have lower priority:

(a) System processes

(b) Interactive processes

(c) Interactive editing processes

(d) Batch processes

(e) User processes

6. Multilevel Feedback queue scheduling: -

In the MLQS algorithm the processes are permanently assigned to a particular queue on entering the system and do not move between queues. The MLFBQS mechanism is almost the same, except that a process can move between queues. The main idea is to separate processes with different CPU burst characteristics. If a process uses too much CPU time it will be moved to a lower priority queue, giving control to the other processes standing in the queue. Similarly, a process that has been waiting too long in a lower priority queue may be moved to a higher priority queue; this feature is called ageing.

Multiprocessor Scheduling: -

In multiprocessor scheduling the algorithm or scheduling mechanism is almost the same as that of multilevel feedback queue scheduling, but the queue can be processed by more than one M.P.

Since there is a lot of variation in M.P. types, two approaches can be used to implement MPS. In the first approach the CPUs form a homogeneous system, where all the M.P.s are identical; it is easier to implement than the second approach. In the second approach we use a heterogeneous system having different M.P.s; it is more difficult to implement than the first approach.

Further, if a homogeneous system is used then it is preferred to have only one ready queue that is accessed concurrently by all the M.P.s.

But if it is a heterogeneous system then the use of multiple ready queues, one for each M.P., is recommended.

Real Time Scheduling: -

In real time systems, which have very strict time constraints, implementing a scheduling algorithm is very complex. Real time systems usually use multiple powerful M.P.s to ensure processing within the specified time limit. In the real time scheduling mechanism a process entering the ready queue carries some additional information along with it, which is basically a time limit. This time limit is passed as an argument to the real time scheduler; if the scheduler can process the request within the specified time limit then the request is accepted and further processing continues. But if the request cannot be processed within the specified time limit then it is rejected by the real time scheduler, and it is then processed by another part of the scheduler called the soft real time scheduler. A scheduler that guarantees processing of a request within the specified time limit is considered a hard real time scheduler.

Unit III

Process Synchronization

Process Synchronization: -

Concept: -

It is possible to have multiple processes active in memory. Each process has at least three important components associated with it: its code, its data and its status. It is possible that multiple processes may try to access the same data concurrently, e.g. two programs trying to use or open the same file, database or record. Such a system may result in data inconsistency or unusual behavior, so there should be a mechanism to synchronize all the processes trying to access the same resource or data. This ensures data integrity in the system; the mechanism is called process synchronization and is very complex.

Race Condition: -

If two or more processes share the same data structure in memory then the resultant value cannot be determined, because it depends on which process accesses the data structure first and which accesses it last. When such a situation occurs it is called a race condition. To overcome this problem there should be some mechanism to avoid simultaneous access to the same data.
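A minimal sketch in C that exhibits a race condition; POSIX threads stand in for processes, which is an assumption of the sketch. Two threads increment a shared counter without synchronization, so updates are lost and the final value varies from run to run:

    #include <stdio.h>
    #include <pthread.h>

    long counter = 0;                    /* shared data structure */

    void *worker(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            counter++;                   /* read-modify-write: not atomic */
        return arg;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Expected 2000000, but lost updates usually leave it smaller. */
        printf("counter = %ld\n", counter);
        return 0;
    }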

Critical Section Problem: -

We have seen that every process has three important elements associated with it: the code of the process, the data manipulated by the code, and the status of the process. If there are multiple processes active in memory then every process has its own code, data and status. Certain instructions in the code are used exclusively to modify some shared data; this section of the code is called the critical section. When a process is in its critical section, no other process should be allowed to enter its own critical section; the processing of critical sections must be mutually exclusive in time. This raises the need to design a protocol that satisfies the following criteria.

( I ) Mutual Exclusion (mutex): -

If a process P0 is in its critical section and is being processed by the M.P., then no other process, P1 to Pn, should be allowed to enter its critical section.

( II ) Progress: -

When no process is in its critical section, only those processes that are not in their remainder sections can participate in the decision of which process enters its critical section next.

( III ) Bounded Time:

Every process should be bounded in entering its critical section, so that the others have the chance to enter their critical sections.

Every process is roughly divided into two distinct sections, the critical section and the remainder section, and to ensure proper synchronization between the processes each critical section has two points:

(a) Entry Point

(b) Exit Point

When a process needs to enter its critical section it makes a request at the entry point; it may proceed only if no other process is in its critical section. After the complete processing of the critical section, the process leaves through the exit point and enters the remainder section.

Semaphore: -

When one process modifies a particular data structure, no other process may simultaneously modify it. We can use a semaphore to deal with the n-process critical section problem: the n processes share a semaphore called MUTEX, which is initialized to one (1). The implementation of a semaphore with a wait queue may result in a situation where two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes; when such a stage is reached, these processes are said to be deadlocked. In other words, when every process in a set is waiting for an event that can be caused only by another process in the set, the processes are deadlocked. It is very important to overcome such situations; mainly they are caused by resource acquisition and release. Another possibility related to deadlock is indefinite blocking or starvation. Deadlock and starvation are almost the same, with one exception: in starvation the process waits indefinitely, for an infinite amount of time.
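A minimal sketch in C of the n-process solution using a semaphore; POSIX unnamed semaphores and threads stand in for processes here, which is an assumption of the sketch:

    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    sem_t mutex;                 /* the shared MUTEX semaphore */
    int shared = 0;

    void *proc(void *arg)
    {
        sem_wait(&mutex);        /* entry point: wait (P) operation */
        shared++;                /* critical section */
        sem_post(&mutex);        /* exit point: signal (V) operation */
        return arg;              /* remainder section would follow */
    }

    int main(void)
    {
        pthread_t t[4];
        sem_init(&mutex, 0, 1);  /* MUTEX initialized to one (1) */
        for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, proc, NULL);
        for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
        printf("shared = %d\n", shared);
        sem_destroy(&mutex);
        return 0;
    }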

Classical Problem of Synchronization: -

1. Bounded Buffer (Producer-Consumer Problem): -

In the bounded buffer problem we assume that there is a pool of N buffers, each capable of holding one item. The MUTEX semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to (1). The semaphore empty counts the empty buffers and is initialized to the value N; the semaphore full counts the full buffers and is initialized to the value (0). We can interpret the code as the producer producing full buffers for the consumer, or as the consumer producing empty buffers for the producer.
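A sketch of the bounded buffer in C with the three semaphores just described (mutex = 1, empty = N, full = 0); the value of N and the item counts are illustrative, and POSIX threads again stand in for processes:

    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    #define N 5                       /* pool of N buffers */
    sem_t mutex, empty, full;
    int buffer[N], in = 0, out = 0;   /* circular buffer */

    void *producer(void *arg)
    {
        for (int item = 0; item < 10; item++) {
            sem_wait(&empty);         /* wait for an empty buffer */
            sem_wait(&mutex);         /* enter critical section */
            buffer[in] = item; in = (in + 1) % N;
            sem_post(&mutex);
            sem_post(&full);          /* one more full buffer */
        }
        return arg;
    }

    void *consumer(void *arg)
    {
        for (int i = 0; i < 10; i++) {
            sem_wait(&full);          /* wait for a full buffer */
            sem_wait(&mutex);
            int item = buffer[out]; out = (out + 1) % N;
            sem_post(&mutex);
            sem_post(&empty);         /* one more empty buffer */
            printf("consumed %d\n", item);
        }
        return arg;
    }

    int main(void)
    {
        pthread_t p, c;
        sem_init(&mutex, 0, 1); sem_init(&empty, 0, N); sem_init(&full, 0, 0);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL); pthread_join(c, NULL);
        return 0;
    }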

2. Readers and Writers Problem: -

In the readers and writers problem a data structure such as a file or record is to be shared among several concurrent processes. Some of these processes may only read the content of the shared object, whereas others may update it. The processes interested only in reading are called readers, while the rest, interested in writing, are called writers. If several readers access the shared data object simultaneously, no adverse effect will occur. But if a writer and some other process access the shared object simultaneously, then a problem may occur. To ensure these difficulties do not arise, we require that writers have exclusive access to the shared object. This is called the readers and writers problem.

3. The Dining Philosophers Problem: -

Consider 5 philosophers spending their lives thinking and eating. The philosophers share a common circular table surrounded by 5 chairs, each belonging to one philosopher. In the centre of the table there is a bowl of rice, and the table is laid with 5 single chopsticks. When a philosopher thinks, he does not interact with his colleagues. From time to time a philosopher gets hungry and tries to pick up the two chopsticks that are closest to him.

The closest chopsticks for a philosopher are the chopsticks on his left and right sides. A philosopher may pick up one chopstick at a time, but cannot pick up a chopstick that is already in the hand of a neighbour. When a hungry philosopher has both chopsticks at the same time, he starts eating without releasing his chopsticks. When he has finished eating, he puts down both of his chopsticks and starts thinking again.

The dining philosophers problem is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner.

One simple solution is to represent each chopstick by a semaphore. A philosopher tries to grab a chopstick by executing the wait operation on that semaphore, and releases his chopsticks by executing the signal operation on the same semaphores. This solution guarantees that no two neighbours are eating simultaneously, but there is a possibility of deadlock: if all the philosophers become hungry simultaneously and each of them grabs the chopstick on his left-hand side, then all the elements of the chopstick semaphore array will be equal to 0, and when each philosopher tries to grab his right chopstick he will be delayed forever. The possible remedies are:

1. Allow at most 4 philosophers to be sitting simultaneously at the table.

2. Allow a philosopher to pick up his chopsticks only if both chopsticks are available.

3. Use an asymmetric solution, i.e. an odd philosopher picks up first his left chopstick and then his right chopstick, whereas an even philosopher picks up his right chopstick first and then his left, as sketched below.
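A sketch in C of the asymmetric solution (remedy 3), again using POSIX threads and semaphores as stand-ins for processes; because neighbours never reach for the same chopstick first, the circular wait is broken:

    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    #define N 5
    sem_t chopstick[N];                     /* one semaphore per chopstick */

    void *philosopher(void *arg)
    {
        int i = *(int *)arg;
        int left = i, right = (i + 1) % N;
        /* asymmetry: odd philosophers take left first, even take right first */
        int first  = (i % 2) ? left : right;
        int second = (i % 2) ? right : left;
        sem_wait(&chopstick[first]);
        sem_wait(&chopstick[second]);
        printf("philosopher %d eating\n", i);   /* eat */
        sem_post(&chopstick[second]);
        sem_post(&chopstick[first]);
        return NULL;                            /* think */
    }

    int main(void)
    {
        pthread_t t[N];
        int id[N];
        for (int i = 0; i < N; i++) sem_init(&chopstick[i], 0, 1);
        for (int i = 0; i < N; i++) {
            id[i] = i;
            pthread_create(&t[i], NULL, philosopher, &id[i]);
        }
        for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
        return 0;
    }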

Monitors: -

A monitor is a collection of procedures, variables, and data structures that are all grouped together in a special kind of package or module. Monitors are high level constructs defined for process synchronization; they can be considered programmer-defined operators. A monitor is represented by declaring the variables whose values define the state of the process, together with its body, i.e. the procedures and functions that implement operations on it. Monitors are written by programmers and cannot be used directly by other processes. A procedure defined within a monitor can access only those variables declared locally within the monitor; similarly, the local variables of the monitor can be accessed only by the local procedures. The monitor construct ensures that only one process at a time can be active within the monitor, but it is not sufficiently powerful for implementing some additional synchronization schemes.

Atomic Transactions: -

The mutual exclusion of critical sections ensures that critical sections are executed atomically, that is, if two critical sections are executed concurrently the result is equivalent to their sequential execution in some unknown order. This property is useful in many applications, but there are some cases where we would like to ensure that a critical section forms a single unit of work, which is performed entirely or not performed at all. E.g. in a funds transfer, in which one account is debited and another account is credited, it is important for data consistency that either both operations, the credit and the debit, occur, or neither occurs. These are called atomic transactions.

System Model: -

A collection of instructions that performs a single logical function is called a transaction. A major issue in processing transactions is the preservation of atomicity despite the possibility of failures within the computer system. A transaction is a program unit that accesses, and possibly updates, various data items that may reside on disk in some file. From the user's point of view a transaction is simply a sequence of read and write operations, terminated by either a commit operation or an abort operation. A commit signifies that the transaction has terminated its execution successfully, whereas an abort signifies that the transaction had to cease its execution due to some logical error. A terminated transaction that has completed its execution successfully is called a committed transaction; otherwise it is called an aborted transaction. Once a transaction is committed, it cannot be undone by an abort operation. There are certain factors that ensure atomicity, but they depend on the characteristics of the storage media, which are given below: -

1. Volatile storage.

2. Non-volatile storage: - It usually survives system crashes but is slower than volatile storage.

Factors ensuring atomicity of transactions: -

1. Log based recovery: -

It is one of the ways to ensure atomicity: recording on stable storage information describing all the modifications made to the various data items. The most widely used method for achieving this is called the write-ahead logging mechanism. In this mechanism the system maintains, on stable storage, a data structure called the log. Each log record describes a single write operation of a transaction and has the following fields:

a. Transaction name.

b. Data item name.

c. Old Value.

d. New Value.

The undo and redo operations can easily be implemented using the log, even if a failure occurs.
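A small illustrative C declaration of such a log record, with one field per item above; the names and sizes are assumptions of the sketch:

    /* A write-ahead log record; undo uses old_value, redo uses new_value. */
    typedef struct log_record {
        char transaction_name[16];   /* a. transaction name */
        char data_item_name[16];     /* b. data item name   */
        long old_value;              /* c. old value (used by undo) */
        long new_value;              /* d. new value (used by redo) */
    } log_record;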

2. Check Point: -

When a system failure occurs we must first consult the log to determine those transactions that need to be redone and those that need to be undone. In principle we have to search the entire log to identify these transactions, but this approach has two drawbacks:

a. It is a time consuming task.

b. Most of the transactions that need to be redone have already actually updated the data; redoing them causes no harm, but it makes recovery take longer.

To reduce these overheads the concept of check points was introduced. In this mechanism the system periodically performs check points, which require the following sequence of operations:

1. Output all the log records currently residing in main memory to the stable storage.

2. Output all the modified data residing in volatile storage to the stable storage.

3. Output a log record <checkpoint> onto the stable storage.

Dead Lock Concept: -

In a multitasking environment there may be multiple processes executing on the system and competing for a particular set of resources. A process requests resources, and if the resources are not available at that time the process enters a wait state and moves into the wait or device queue. Further, it may happen that the waiting processes will never again change their state, because the resources required by them are held by other waiting processes. This situation is called deadlock. It is illustrated by the old Kansas law: if two trains are approaching each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone.

Under normal mode of operation every process utilizes the resources in the following sequence: -

(i) Request: - If a resource is required by a process then a request is made by the process. If the resource is available then it is allocated to the process for utilization; otherwise the process enters the wait state, shifting to the wait or device queue.

(ii) Utilization or Use: - The process can operate on the resource. E.g. if a printer is the resource then it is used for printing data by the process.

(iii) Release: - After the process is finished with the resource utilization, it releases the resource so that it can be occupied by the other processes in the wait state. This mode of operation is also termed the request and release mechanism and is implemented by the use of system calls.

E.g. 1:
(a) Resource = file
(b) Request = open file
(c) Use = read or write file
(d) Release = close file

E.g. 2:
(a) Resource = memory
(b) Request = allocate memory
(c) Use = use memory, start read and write
(d) Release = release and free the memory

Deadlock Characterization: -

A process may never finish its execution if the resources it has requested are not available, further preventing other jobs from ever starting. A deadlock situation can arise if the following four conditions hold in the system.

(a) Mutual Exclusion: -

At least one resource must be held in a non-sharable mode, i.e. only one process can utilize the resource at a time. If another process requests the same resource, the requesting process must be delayed until the resource has been released.

(b) Hold and Wait: -

There must be a process that is holding at least one resource and is waiting to acquire additional resources that are currently being held by other processes.

(c) No preemption: -

Resources cannot be preempted, i.e. a resource can be released only voluntarily by the process holding it, after that process has completed its task.

(d) Circular Wait: -

There must exist a set of waiting processes {P0, P1, ..., Pn} such that P0 is waiting for a resource occupied by P1, P1 is waiting for a resource occupied by P2, and so on, with Pn-1 waiting for a resource occupied by Pn, and finally Pn waiting for a resource occupied by P0. All four of these conditions together are used to characterize deadlock.

Methods for handling deadlock: -

When a deadlock is characterized by the presence of the four conditions, namely mutual exclusion, hold and wait, no preemption, and circular wait, some mechanism is required to handle it. There are three important principles that can be used for dealing with deadlock situations or problems:

1. To design and develop a protocol to ensure that the system will never enter a deadlock state.

2. If we allow the system to enter into deadlock state then we must ensure its recovery from deadlock state.

3. Ignore the problem altogether and assume that deadlocks never occur in the system. Almost all O.S. use this solution today, including UNIX.

To ensure that the deadlock problem never occurs, the system can use either a deadlock prevention mechanism or an avoidance mechanism.

The deadlock prevention mechanism is a set of methods for ensuring that at least one of the necessary conditions cannot hold; these methods prevent deadlock by constraining how requests for resources can be made.

The deadlock avoidance mechanism requires that the O.S. be given, in advance, additional information about the resources required by a process. With this additional knowledge we can decide for each request whether or not the process should wait. Each request requires that the system consider the resources currently available, the resources currently allocated to each process, and the future requests and releases of each process, to decide whether the current request can be satisfied or must be delayed.

Deadlock Prevention: -

Each of the four conditions is examined in turn.

1. Mutual Exclusion: - Whether mutual exclusion is required depends on whether the resources are sharable or non-sharable. It is not required for sharable resources, e.g. a read-only file can be accessed simultaneously by multiple processes without causing any harm. If the resources are not sharable then we must ensure mutual exclusion, i.e. that the processes access them mutually exclusively in time.

Sharable devices (hard disk, CD, RAM)

Non-sharable devices (printer, magnetic tapes, floppy disks)

2. Hold and Wait: - In the hold and wait situation, a process holding some of the resources but still waiting for other resources to be allocated may push other processes toward a deadlock state. In order to ensure that no process enters a deadlock state under this situation, a protocol must be designed where every process, before holding any resource, examines the current availability status of all the required resources. If all the required resources are available, then they are allocated to it before its execution.

This protocol has two disadvantages:

(a) The process may be unnecessarily delayed due to the non-availability of all the required resources.

(b) It may result in starvation.

3. No Preemption: - Under the no-preemption condition a resource cannot be taken away for allocation to another process, i.e. there is no preemption of resources that have already been allocated. To overcome this situation we can use a protocol whereby, if some resources are occupied by a process and another process requests one of those resources, then all the resources currently held are preempted, i.e. implicitly released. They are then allocated to the process waiting for them, and the older process can restart later with its old resources when they are released by the new process. This protocol can be applied only to those resources whose state can be easily saved and restored later, e.g. microprocessor registers and memory space.

4. Circular Wait: - In the circular wait situation, processes holding some resources and waiting for other resources occupied by the next process form a loop. To overcome this we design a protocol that groups all the resource types and assigns each a unique number in increasing order; the protocol then ensures that each process requests resources only in increasing order of these numbers, as sketched below. We can thus prevent a process from entering a deadlock state by implementing constraints on how requests are made. These constraints ensure that the necessary condition for deadlock cannot occur, and hence deadlock cannot hold a process. But the possible disadvantages of these methods are low device utilization and reduced system throughput or performance.
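A minimal sketch in C of this resource-ordering protocol; pthread mutexes stand in for numbered resources, an assumption of the sketch:

    #include <stdio.h>
    #include <pthread.h>

    pthread_mutex_t r0 = PTHREAD_MUTEX_INITIALIZER;  /* resource number 0 */
    pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* resource number 1 */

    void *task(void *arg)
    {
        /* Both threads request r0 before r1. If one thread locked r1 first,
           a circular wait (and hence deadlock) would become possible. */
        pthread_mutex_lock(&r0);
        pthread_mutex_lock(&r1);
        printf("task %ld in critical section\n", (long)arg);
        pthread_mutex_unlock(&r1);
        pthread_mutex_unlock(&r0);
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, task, (void *)1L);
        pthread_create(&b, NULL, task, (void *)2L);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }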

Deadlock Avoidance Mechanism: -

The DLA mechanism is an alternative method for avoiding deadlocks in a system. It requires additional information about how resources are to be requested. E.g. if there are a tape drive and a printer, one process might declare that it will first access the printer and then the tape drive, while another process might declare that it will access the tape drive first and then the printer. With this knowledge of the complete sequence of requests and releases of resources by each process, we can decide for each request whether or not a process should wait. Each request requires that the system consider the resources currently available, the resources currently allocated to the processes, and the future requests and releases of resources by each process. This ensures that deadlock does not occur in the system.

Deadlock Detection: -

If a system uses neither a DLP nor a DLA mechanism then a deadlock situation may occur. In such a situation the system must provide two facilities:

1. An algorithm that examines the current state of the system to identify whether a deadlock has occurred.

2. If a deadlock is detected, an algorithm to recover from the deadlock must be provided.

The selection of the algorithm depends on whether the system has only one instance of each resource or several instances of each resource. It also depends on the process type. A deadlock detection and recovery mechanism increases the overhead, which includes the run-time cost of executing the detection algorithm, as well as the losses possible during recovery from a deadlock state.

Deadlock Recovery: -

When the system identifies a deadlock state then there are several possibilities that may occur in the system.

1. The first possibility is to inform the user or operator of the system that a deadlock has occurred, and let the operator deal with the deadlock manually. This can be done in either of two ways.

(a) The 1st way is to let the user decide how to handle the deadlock situation.

(b) The 2nd way is to present the user with a list of options to select from for handling the deadlock situation.

2. The second possibility is to let the system decide how to handle the deadlock situation so as to recover from it automatically. There are two ways of breaking the deadlock: the first way is to abort one or more processes, and the second way is to preempt some of the resources occupied by the other processes.

There are two ways to terminate or abort a process. In both cases the system reclaims all the resources allocated to the terminated processes.

(a) To terminate or abort all the deadlocked processes.

(b) Abort one process at a time until the deadlock cycle is eliminated.

We would follow the second approach, because there might be a situation where terminating the first deadlocked process releases resources that are required by the other processes in the deadlock state.

It is very important to identify the state of a process at the time of termination, because if the process is in the middle of updating a file then terminating it may result in loss of data, leaving the file in an incorrect state. E.g. termination of a print job currently being processed may likewise leave the printer in an incorrect state.

Combined Approach to Deadlock Handling: -

Researchers today have combined all the basic approaches used for handling deadlock, because one approach alone cannot handle the deadlock situation properly. These approaches, i.e. the deadlock prevention, avoidance, detection and recovery mechanisms, are combined together to ensure the optimum utilization of resources in the system among multiple processes. Almost all future generation O.S., and a few of the current ones, employ these techniques.

Unit IV

Memory Management

Introduction: -

We have seen that there can be multiple processes active in memory, handled by the CPU scheduler. All these processes share the memory, so we need to decide on some memory management mechanism to ensure the optimum utilization of the available memory. The selection of a memory management algorithm depends on the hardware architecture of the underlying system. Also, before a program can be executed it needs to be loaded into memory. The program, consisting of a group of instructions, is loaded into memory, and these instructions are fetched by the CPU one by one. After fetching an instruction, it is decoded; then the memory variables or other data structures referenced by the instruction are read from memory, the instruction is executed on the values just read, and the results are written back into memory. It is important to have a memory management subsystem to handle all memory related activities. This subsystem is called an MMU (Memory Management Unit) and is H/W dependent. Memory is viewed as an array of bytes or words, each having a particular address or location.

Logical Vs Physical address space: -

An address seen by the microprocessor or CPU is commonly referred to as a logical address, whereas an address seen by the memory unit is commonly referred to as a physical address. The two addresses may be the same or different. Logical addresses are also called virtual addresses. The set of logical or virtual addresses generated by a program is commonly referred to as the logical address space, and the set of physical addresses corresponding to these logical addresses is referred to as the physical address space. In the execution-time address-binding scheme, the logical addresses and the physical addresses differ from each other. During run time this mapping from virtual to physical addresses is done by the MMU, i.e. the Memory Management Unit, which is a H/W device used by the O.S. The kind of memory management scheme being implemented defines the mapping mechanism.

The MMU mapping scheme is a generalization of the base register scheme, in which the value in a relocation (base) register is added to every logical address generated by a process to form the physical address.
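To make the idea concrete, here is a minimal sketch of the base-register scheme in Python; the base value 14000 is an arbitrary assumption for illustration, not taken from any particular system:

    # Minimal sketch of the base-register (relocation) scheme.
    # The base value is an arbitrary assumption for illustration.
    BASE_REGISTER = 14000   # loaded by the O.S. when the process is dispatched

    def to_physical(logical_address):
        """The MMU adds the relocation base to every logical address."""
        return BASE_REGISTER + logical_address

    # A logical address of 346 is mapped to physical address 14346.
    print(to_physical(346))   # -> 14346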

Memory Management Scheme or Technique or Algorithm or Mechanism: -

1. Swapping: -

The swapping technique is used to handle more processes than the primary memory can hold: the current process being executed is kept in memory, while the rest of the processes are sent to secondary memory. When the processing of the process in primary memory is completed, a new process is loaded from the swap space on secondary memory. The secondary storage used for temporarily keeping these processes is called the backing store. It is so named because it provides backing storage for primary memory, so that primary memory can transfer inactive processes to the backing store and create room for the incoming process.

The primary memory is not completely available for user processes; some part is reserved for the O.S. The swap space on secondary storage is created by the O.S. The algorithms for managing swap space are the same ones used for managing main memory.

The allocation of swap space can be done in two ways: either allocating the space when the process is created, or allocating it when the process is to be swapped out.

2. Paging: -

The advantage of paging is that it overcomes the problem of external fragmentation that occurs in swapping. In paging, a process is allocated memory in fixed-size divisions called pages, which need not be contiguous in physical memory, so fragmentation is avoided. It improves the overall memory utilization and the read/write speed to and from memory. It is commonly used by graphics systems, where every graphical environment is stored in a memory page.

The paging mechanism also involves secondary storage, to overcome the limitation on loading larger programs into memory and executing them.

In the paging mechanism we divide the logical address space into pages and the physical address space into frames. Pages and frames are of the same size, and each page is loaded into exactly one frame. The pages required for processing are loaded into frames of primary memory as and when needed.

The physical or actual address is calculated with the help of a page table which is maintained in memory. The table contains, for each page number, the frame in which that page resides. After getting the frame from the page table, the MMU maps the page onto physical memory; we say pages are mapped to frames. All pages are of the same size, which is decided by the O.S.
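As an illustration of this lookup, the following sketch translates a logical address using an assumed page size of 4096 bytes and an assumed page table; the values are invented for the example:

    # Simplified paging translation: logical address -> (page, offset) -> frame.
    PAGE_SIZE = 4096                      # assumed page size in bytes
    page_table = {0: 5, 1: 9, 2: 3}       # page number -> frame number (assumed)

    def translate(logical_address):
        page = logical_address // PAGE_SIZE      # which page the address falls in
        offset = logical_address % PAGE_SIZE     # position inside that page
        frame = page_table[page]                 # look up the frame in the page table
        return frame * PAGE_SIZE + offset        # physical address

    print(translate(4200))   # page 1, offset 104 -> frame 9 -> 36968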

3. Segmentation: -

The segmentation mechanism is closely related to the paging mechanism. In this mechanism the physical memory is partitioned dynamically into segments of different sizes according to the process: when the process is loaded, a segment is created by the MMU for the process.

The calculation of the physical address is done with the help of a segmentation table, which stores the segment number, the base address of the segment and the limit of that particular segment.
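A similar sketch for segmentation, with an assumed segment table holding (base, limit) pairs; note the limit check, which is what distinguishes it from the simple base-register scheme:

    # Simplified segmentation translation; the table entries are assumed values.
    # Each entry stores (base address, limit) for the segment.
    segment_table = {0: (1400, 1000), 1: (6300, 400)}

    def translate(segment, offset):
        base, limit = segment_table[segment]
        if offset >= limit:                  # offset beyond the segment's limit
            raise MemoryError("segmentation fault: offset out of range")
        return base + offset                 # physical address

    print(translate(0, 53))    # -> 1453
    print(translate(1, 399))   # -> 6699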

Virtual Memory: -

Virtual memory is designed for the execution of oversized programs that cannot fit into physical memory at once. Virtual memory uses secondary storage, i.e. some part of the free space on secondary storage, and simulates it as if it were physical or primary memory. The advantage it offers is that the execution of programs demanding more physical memory than is actually free becomes possible. It also has one major drawback: it reduces the performance of the system, since secondary memory is used during program processing, and processing involving secondary memory is always slow in comparison to processing in primary memory.

Demand Paging: -

A demand paging system is similar to a paging system but also includes swapping. It may also include other memory management techniques if required. In D.P. all the processes reside on secondary memory (S.M.), which is usually a disk; when we need to execute a program, we swap it into memory using a swapper. A lazy swapper never swaps a page into main memory (M.M.) unless that page is needed. D.P. = paging + swapping

D.S. = segmentation + swapping

In D.P., when a process is to be swapped in, the pager identifies the pages that will actually be used by the process. Instead of bringing the entire process in, it brings only the necessary pages into memory; these pages are then swapped in and out by the swapper.

Demand Segmentation: -

D.P. is generally used by most virtual memory systems and requires significant H/W support to implement. If the H/W is not available then V.M. cannot be provided that way. In such situations D.S. is used; e.g. the Intel 80286 does not include paging features but does support segmentation, which is used by the OS/2 O.S.

OS/2 uses segmentation to implement D.S. as a substitute for D.P. OS/2 allocates memory to processes in segments rather than in pages. All these segments are monitored through segment descriptors, which include information about the segment's size, address and protection. A process does not need to have all of its segments in memory to execute; they can be loaded as and when required.

Page Replacement: -

In paging, several pages are allocated to a process to store its code and data. These pages represent individual memory regions identified by unique addresses in the page table. If the degree of multiprogramming is increased, there may be a huge number of processes active in memory, each having its own pages. This can result in over-allocation of memory. E.g. if we run five processes in memory, each having 10 pages but actually using only five, then 5 pages are unnecessarily occupied by each process. It also results in slow I/O due to the involvement of the extra pages. This situation is overcome by page replacement. In PR we can swap out a process, freeing all of its frames and reducing the degree of multiprogramming. With a page replacement mechanism, when no frame is free, a frame that is not currently being used is freed. This is done by writing its contents to the swap space (secondary memory) and modifying the page table to indicate that the page is no longer in memory. The freed frame can then be utilized by a new process or by the process with the faulting page. This mechanism is implemented as the page-fault service routine.

Steps involved in PR (a runnable sketch follows these steps): -

1. Identify the location of the desired page on the disk.

2. Find a free frame:

I. If a free frame is found, use it.

II. Otherwise, use a PR algorithm to select a victim frame.

III. Write the contents of the victim page to the disk, i.e. the swap space, and modify the page table.

3. Read the desired page from the swap space into the freed frame and update the page table (recovery).

4. Restart the user process.
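The steps above can be pictured as one routine. The following is a self-contained toy model; the frame count, the dict-based frames and swap space, and the trivial FIFO victim selection are all assumptions made for illustration:

    # Self-contained toy model of the page-fault service routine above.
    NUM_FRAMES = 3
    frames = {}          # frame number -> page contents currently resident
    page_table = {}      # page -> frame (None means the page is on disk)
    swap_space = {}      # page -> contents saved to the swap space
    fifo_order = []      # trivial FIFO victim selection for step 2-II

    def service_page_fault(page):
        # Step 2: find a free frame, or free one via page replacement.
        free = next((f for f in range(NUM_FRAMES) if f not in frames), None)
        if free is None:
            victim = fifo_order.pop(0)         # Step 2-II: FIFO picks the oldest page
            free = page_table[victim]
            swap_space[victim] = frames[free]  # Step 2-III: write victim to swap space
            page_table[victim] = None          # ...and mark it as no longer in memory
        # Step 3: read the desired page from swap space into the freed frame.
        frames[free] = swap_space.pop(page, "page-%s" % page)
        page_table[page] = free
        fifo_order.append(page)
        # Step 4 (restarting the user process) lies outside this toy model.

    for p in [0, 1, 2, 3]:
        service_page_fault(p)
    print(page_table)   # page 0 was replaced: {0: None, 1: 1, 2: 2, 3: 0}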

Page Replacement Algorithm: -

The selection and implementation of a PR algorithm depends on the page-fault rate: the algorithm with the lowest page-fault rate should be chosen. This is identified by running the algorithm on a particular string of memory references (an array of memory addresses) and computing the number of page faults. The memory address string is called a reference string and is generated artificially by a random number generator. The common page replacement algorithms are: -

1. FIFO algorithm (First In First Out)

2. Optimal algorithm

3. LRU algorithm (Least Recently Used)

1. FIFO algorithm: -

The FIFO page replacement algorithm associates with each page the time when that page was brought into memory; i.e. a time is recorded for every page whenever it is allocated by the O.S. When a page fault occurs and a frame must be freed, the oldest page, the one brought in first, is replaced. First in, first out means the victim page is selected by scanning the resident pages from the oldest to the latest one.
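A minimal simulation of FIFO replacement, counting page faults on an assumed reference string with three frames:

    # FIFO page replacement on an assumed reference string with 3 frames.
    from collections import deque

    def fifo_faults(reference_string, num_frames):
        memory = deque()                 # the oldest page sits at the left end
        faults = 0
        for page in reference_string:
            if page not in memory:       # page fault: the page is not resident
                faults += 1
                if len(memory) == num_frames:
                    memory.popleft()     # evict the page that was loaded earliest
                memory.append(page)
        return faults

    print(fifo_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))   # -> 7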

2. Optimal algorithm: -

The optimal algorithm gives the lowest page-fault rate; it never suffers from a high fault rate. It works on the principle that whenever a page fault occurs, the victim is the page that will not be used for the longest period of time. It ensures the lowest possible page-fault rate for a fixed set of frames and the maximum recovery from a page fault, but it has one drawback: it requires advance knowledge of the future reference string, so in practice it serves mainly as a benchmark for other algorithms.

3. LRU algorithm: -

The LRU algorithm is similar to the FIFO algorithm. The main distinction is that FIFO uses the time when a page was brought into memory, whereas LRU uses the recent past as an approximation of the near future and replaces the page that has not been used for the longest period of time. It associates with every page the time that page was last used; when a page is to be replaced, it chooses the least recently used one.
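The same simulation adapted to LRU; on the same assumed reference string it produces fewer faults than FIFO (6 instead of 7):

    # LRU page replacement on the same assumed reference string.
    def lru_faults(reference_string, num_frames):
        memory = []                      # least recently used page sits at index 0
        faults = 0
        for page in reference_string:
            if page in memory:
                memory.remove(page)      # hit: move the page to the "most recent" end
            else:
                faults += 1
                if len(memory) == num_frames:
                    memory.pop(0)        # evict the least recently used page
            memory.append(page)
        return faults

    print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))   # -> 6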

Thrashing: -

Thrashing is a situation of very high paging activity. It mainly arises when the number of frames allocated to a process falls below the minimum number of frames required by that process. In such a situation there are only two possibilities.

1. The MMU tries to recover from the current page fault situation.

2. If the situation cannot be recovered, the execution of the process is suspended, i.e. the process is killed or terminated. All the pages used or occupied by the process are then released and marked as free in the page table.

This high paging activity is called thrashing. In thrashing, a process spends most of its time in paging and less in execution, so the overall performance of the system is degraded.

Considerations for selection of a page replacement algorithm: -

1. Pre-paging

In demand paging there is a possibility of a large number of page faults occurring when a process is created. Pre-paging is an attempt to prevent this high level of initial paging: the main idea is to bring all the required pages into memory at one time.

2. Page Size

If the size of the page is increased, then the total number of pages available in memory is reduced (due to the bigger size). If fewer pages are available in memory, the number of processes that can be loaded into memory is also reduced, i.e. the degree of multiprogramming decreases, which in turn reduces the possibility of page faults occurring in the system.

3. Inverted page table

It deals with the mapping of virtual addresses to physical addresses. This mapping must be usable by the algorithm being implemented to handle page-fault situations.

4. Program Structures

The structure of the program, i.e. the way the program is written, also matters when selecting a page replacement algorithm, because a bad program structure can result in high page-fault rates.

5. I/O Interlock

There may be situations where some pages are locked in memory for I/O with a device. There must be some mechanism to handle this state of the device, such as device preemption, so as to avoid unnecessary locking of the process.

Unit V

Input/Output Sub-System

Overview of the I/O sub-system: -

The two major operations performed by a computer system are I/O and processing, and I/O forms the major contribution to the workload. There are lots of devices available today that interact with the computer system. This variation in devices requires a lot of methods to be implemented in the O.S.

These methods are responsible for controlling all the I/O operations and I/O devices. This set of methods is implemented as the I/O sub-system of the O.S.

The I/O sub-system faces two big challenges. 1st, the increasing standardization and development of architectures: O.S. designers are responsible for including S/W components for all the standard architectures. 2nd, the wide variety of I/O H/W devices available today, most of which are not backward compatible: O.S. designers try to include S/W support in the I/O sub-system for most devices.

I/O Hardware: -

Every I/O device can be connected to a computer system provided it is supported by the H/W architecture of the computer. Every device that is connected to a computer system requires an interface called a port. There is a wide variety of devices, each with its own specific port requirements,

e.g. serial port, parallel port, graphics port, microphone/speaker port, USB port etc.

The device is attached to this port using some cable or cableless mechanism called a bus. A bus is a set of parallel conducting wires with a definite protocol used to control the data transmission over it.

Every device has electronic circuitry which is responsible for controlling the associated device and the I/O operations to and from it. This circuitry is device specific and controls the overall operation of the device. It is called the device controller.

e.g. monitor = display card (graphics controller)

Hdd = Hard Disk controller

Fdd = floppy disk controller

Serial port = Serial port controller

I/O H/W techniques: -

1. Polling (busy waiting)

2. Interrupt mechanism

3. DMA

Polling: -

Polling is also called busy waiting. It is a technique used to verify the status (busy or free) of a device controller. In polling, the value of the controller's status register is continuously monitored for 0 or 1 (the busy bit). Continuous polling may be inefficient for the system, so an interrupt mechanism is used instead.
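A toy model of polling, where the status register is simulated by a hypothetical read_status_register() function rather than a real port read:

    # Toy model of polling (busy waiting): the CPU repeatedly reads the
    # controller's status register until the busy bit becomes 0.
    # read_status_register() is a hypothetical stand-in for a real port read.
    import random, time

    def read_status_register():
        return random.choice([0, 1])     # 1 = busy, 0 = free (simulated)

    def wait_until_free():
        while read_status_register() == 1:   # busy bit set: keep polling
            time.sleep(0.01)                 # the CPU burns time doing nothing useful
        # device is free: the transfer can now be started

    wait_until_free()
    print("device free, starting transfer")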

Interrupt: -

To overcome the problems raised by continuous polling, the interrupt mechanism is used. When the device becomes free, it notifies the M.P. (microprocessor) by sending an interrupt signal over the interrupt request line (IRQ), which is monitored by the M.P. after the execution of every instruction. When an interrupt is identified, control is transferred through the IVT (interrupt vector table), which is used to identify the device causing the interrupt, and then the requested ISR (interrupt service routine) is executed.

DMA: -

By default, all data is read from and written to the registers of the device controller byte by byte by the M.P.; the polling operation is also carried out by the M.P. This is called PIO (programmed I/O). It is not feasible to spend M.P. time on such operations, so the concept of DMA (Direct Memory Access) was introduced. In DMA, an application can read or write to and from a device directly using the DMA controller, thus eliminating the wastage of M.P. time. The request to the DMA controller contains three arguments: the source address, the destination address, and the number of bytes to transfer (read or write).
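The three arguments of a DMA request can be sketched as a simple record; the class and field names below are illustrative only, since real DMA programming is done through device registers:

    # Sketch of the three arguments carried by a DMA request, as described above.
    from dataclasses import dataclass

    @dataclass
    class DMARequest:
        source_address: int        # where the data currently is
        destination_address: int   # where the data should go
        byte_count: int            # how many bytes to transfer

    # e.g. copy 4 KB from a controller buffer into main memory without the CPU:
    req = DMARequest(source_address=0x9F000, destination_address=0x20000, byte_count=4096)
    print(req)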

Kernel I/O sub-system: -

The I/O sub-system is integrated within the kernel of the O.S. It provides all the services related to I/O: I/O scheduling, buffering, caching, spooling, device reservation and error handling.

In I/O device scheduling, a set of I/O requests is arranged in a proper order and then executed one by one. It improves the overall system performance and ensures the optimum utilization of resources. The efficiency of the scheduling mechanism can be measured by the average waiting time.

Buffer: -

A buffer is a temporary memory area which is used to store data while it is transferred between two devices or between a device and an application. It is mainly used to overcome the speed mismatch between the producer and the consumer of the data.
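A toy bounded buffer between a fast producer and a slower consumer, using Python's standard queue module; the buffer size of 8 is an assumption:

    # Toy bounded buffer smoothing a speed mismatch between a producer
    # (e.g. an application) and a slower consumer (e.g. a device).
    from queue import Queue

    buffer = Queue(maxsize=8)            # temporary holding area of assumed size

    def produce(data):
        buffer.put(data)                 # blocks if the buffer is full

    def consume():
        return buffer.get()              # blocks if the buffer is empty

    for block in range(4):
        produce("block-%d" % block)
    print(consume(), consume())          # -> block-0 block-1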

Spool: -

Spooling is almost the same as buffering but is used to temporarily hold the output data for a device such as a printer. It is used along with secondary storage and overcomes the limitation of the printer due to speed mismatch. The S/W component that implements spooling is called a spooler, and it maintains a queue for every device.

Caching: -

A cache is a special-purpose memory which holds a copy of frequently accessed data. It is faster than primary memory; access to the cached copy is faster and more efficient than access to the original copy.

Error Handling: -

It deals with the errors that occur during read and write operations. They may be generated by the hardware or by an application, so some mechanism is used to protect the data so that a complete system failure does not occur,

e.g. CRC (Cyclic Redundancy Check).

Device Reservation: -

It deals with the reservation of a device in advance so that it can be allocated to a process or a program whenever required. However, it does not ensure the optimum utilization of devices and may result in unwanted delays.

Performance of the I/O sub-system: -

The performance of the system is directly affected by the architecture of the I/O sub-system. The I/O sub-system places a heavy workload on the CPU by executing certain time-consuming operations like device drivers, device scheduling, caching and the interrupt handling mechanism. There is also a huge data transfer during the data copy between device controllers and main memory, and again from main memory to the application data space and kernel buffers.

There are certain criteria and principles that can be employed to improve the efficiency of I/O.

1. Reduce the number of switching operations, i.e. context switches.

2. Reduce the number of times the data must be copied in memory while passing between device and application.

3. Reduce the frequency of interrupts by using large transfers and smart controllers with minimized polling.

4. Increase concurrency by using DMA-capable controllers or channels.

5. Move processing operations into H/W, i.e. device controllers, for concurrent operation with the CPU.

6. Balance the CPU, memory and bus sub-systems for improved and stable I/O performance (load balancing).

Secondary Storage Management: -

Disk Structure: -

Sector: - The smallest logically addressable block of memory on secondary storage; it is a part of a track. A track is composed of multiple sectors, where every sector is uniquely identified by a sector number. The first sector on the disk resides on the first track of the first cylinder and is numbered zero (0). This number gradually increases as we move from the outer to the inner surface.

Cylinder: - A logical grouping of equidistant tracks, one per surface; i.e. the number of tracks in a cylinder equals the number of recording surfaces.

Storage capacity = number of cylinders * tracks per cylinder * sectors per track * bytes per sector.

Track 0: - The first (outermost) track; the boot strap loader, the program that loads the O.S., is loaded from it.
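A worked example of the capacity formula, using an assumed geometry (1000 cylinders, 8 surfaces, 63 sectors per track, 512-byte sectors):

    # Worked example of the capacity formula with an assumed disk geometry.
    cylinders = 1000
    tracks_per_cylinder = 8        # i.e. 8 recording surfaces
    sectors_per_track = 63
    bytes_per_sector = 512

    capacity = cylinders * tracks_per_cylinder * sectors_per_track * bytes_per_sector
    print(capacity)                # -> 258048000 bytes (about 246 MB)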

Disk Scheduling: -

Disks are a fast storage medium that can transfer data at high speed. Their performance is characterized by three important factors.

1. Seek time.

2. Latency time (rotational delay).

3. Bandwidth.

Access time = seek time + latency time

Seek time: - the time it takes for the head to reach the particular track.

Latency time: - the time it takes for the particular sector to rotate under the head.

Bandwidth: - the number of blocks that can be read or written per unit time.
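A worked example of the access-time formula, with assumed values (9 ms average seek, 7200 RPM; average rotational latency is taken as half a revolution):

    # Worked example: access time = seek time + latency time.
    # Values are assumed; average rotational latency is half a revolution.
    rpm = 7200
    avg_seek_ms = 9.0
    avg_latency_ms = 0.5 * (60 * 1000 / rpm)    # half a revolution, in ms

    access_time_ms = avg_seek_ms + avg_latency_ms
    print(round(access_time_ms, 2))             # -> 13.17 ms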

Whenever an application generates an I/O request to or from a disk, it issues a system call to the O.S. This system call requires three arguments: -

1. Whether the operation is I/P or O/P.

2. The disk address for the transfer.

3. The memory address for the transfer including the number of bytes to be transferred.

Scheduling Algorithm: -

1. FCFS (First Come First Serve)

2. SSTF (Shortest Seek Time First)

3. SCAN Algorithm

4. C-SCAN Algorithm

5. Look Algorithm

e.g. current cylinder position = 34

Device queue: p1, p2, p3, p4 (p5 and p6 arrive later)

p1 = 64, p2 = 38, p3 = 99, p4 = 78, p5 = 95, p6 = 75

1. FCFS: -

It is a simple and straightforward algorithm where I/O is performed for the processes waiting in the device queue on a first come, first served basis. It does not improve the I/O efficiency of the system. Considering the above example, the processes will be allocated the device in the following manner.

1. p1 - 64

2. p2 - 38

3. p3 - 99

4. p4 - 78

2. SSTF: -

In the SSTF algorithm the efficiency of I/O is improved in comparison to FCFS. In SSTF, the request whose cylinder has the shortest seek time with reference to the current head position is selected next. Considering the above example, the processes will be allocated the device in the following manner.

1. p2 - 38

2. p1 - 64

3. p4 - 78

4. p3 - 99
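The two orderings above can be reproduced, and the total head movement compared, with the following sketch (head position 34 and the queue values are taken from the example):

    # FCFS vs. SSTF on the example above: head at 34, queue 64, 38, 99, 78.
    def fcfs(head, queue):
        return list(queue)               # requests served in arrival order

    def sstf(head, queue):
        pending, order = list(queue), []
        while pending:
            nearest = min(pending, key=lambda c: abs(c - head))  # shortest seek next
            pending.remove(nearest)
            order.append(nearest)
            head = nearest
        return order

    def head_movement(head, order):
        total = 0
        for cylinder in order:
            total += abs(cylinder - head)
            head = cylinder
        return total

    queue = [64, 38, 99, 78]
    print(fcfs(34, queue), head_movement(34, fcfs(34, queue)))   # [64, 38, 99, 78] 138
    print(sstf(34, queue), head_movement(34, sstf(34, queue)))   # [38, 64, 78, 99] 65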

3. SCAN: -

In the SCAN algorithm the requests are processed sequentially: the head is positioned at one end of the disk and gradually moves towards the other end, fulfilling all the requests for cylinders in between. When the head reaches the other end, it reverses its direction and moves back, again fulfilling all requests for cylinders in between. This is similar to an elevator mechanism, so the algorithm is also known as the elevator algorithm.

Considering the above e.g., let us assume that the head starts by moving towards the outer end, i.e. cylinder number 0.

Order of service: p2, p1, p4, p3; then, on the sweep back from end n towards 0: p5, p6.

4. C-SCAN: -

It is a variant of the SCAN algorithm with one exception: when the head reaches the other end of the disk, after fulfilling all requests for cylinders in between, it returns to the first end at full speed without fulfilling any requests on the way back. (If only about 20% of the disk is utilized, with the data stored towards the outer end, C-SCAN can be more important than SCAN; the reverse condition similarly favours SCAN.)

The requests will be processed in the following manner.

Order of service: p2, p1, p4, p3; then, after the quick jump from end n back to 0: p6, p5.

5. Look Algorithm: -

It is an improvement over the SCAN and C-SCAN algorithms. Once the head is positioned at one end of the disk it starts moving towards the other end, fulfilling all the requests for cylinders in between, but it stops at the last requested cylinder and then bounces back towards the originating end, instead of travelling all the way to the disk edge.

For uniprocessor or multitasking systems we should prefer the SSTF or FCFS algorithm. SSTF can also be used in multiprocessor systems, where a separate device queue is maintained for every M.P.; this increases the overall system I/O efficiency, and it is widely used by current O.S.s.

Order of service (LOOK): p2, p1, p4, p3
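The SCAN and LOOK orderings and head movements for the example can be computed as follows; the initial direction toward cylinder 0 is assumed, as in the text:

    # SCAN vs. LOOK on the example: head at 34, queue 64, 38, 99, 78,
    # initial direction toward cylinder 0 (assumed).
    def scan_movement(head, queue):
        lower = sorted(c for c in queue if c <= head)
        upper = sorted(c for c in queue if c > head)
        order = lower[::-1] + upper          # sweep down first, then reverse upward
        # SCAN always travels to the disk edge (cylinder 0) before reversing.
        movement = head + (max(upper) if upper else 0)
        return order, movement

    def look_movement(head, queue):
        lower = sorted(c for c in queue if c <= head)
        upper = sorted(c for c in queue if c > head)
        order = lower[::-1] + upper
        # LOOK reverses at the last request instead of the disk edge.
        turn = min(lower) if lower else head
        movement = (head - turn) + (max(upper) - turn if upper else 0)
        return order, movement

    queue = [64, 38, 99, 78]
    print(scan_movement(34, queue))   # ([38, 64, 78, 99], 133)
    print(look_movement(34, queue))   # ([38, 64, 78, 99], 65)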

Disk Management: -

Disk formatting: - The process of creating tracks and sectors on the disk is called formatting. In this process all the user data is destroyed and new tracks and sectors are formed. Formatting is of two types.

a) High-level formatting: - In this type of formatting the FAT (File Allocation Table) is cleared and the rest of the disk remains unaffected. In case of high-level formatting the data can be recovered.

b) Low-level formatting: - In this case the entire contents of the disk are cleared by creating new tracks and sectors, and the FAT is also cleared. Data recovery is impossible after low-level formatting.

Boot Block: -

Boot loader: - The small program which locates the O.S., loads it into memory and starts executing it.

Boot strap loader: - The small program which loads the boot loader into memory, which in turn loads the O.S. It is located in the 0th block (that is, the 0th sector of the 0th track).

Sector 0: - The first sector of the disk, which is at the outer end and on the 0th track.

Track 0: - The first track of the disk, on which the boot loader resides.

As soon as we start the computer, the BIOS, after performing the preliminary tests (POST), passes control to the boot strap loader, a small program residing in the 0th block of the disk whose purpose is to search for, locate and load the boot loader of the O.S. The boot loader is also a program; it loads the kernel of the O.S. into memory and transfers control to it.

The boot loader locates the O.S. through references (direct or indirect).

These days the concept of dual booting is implemented: two or more O.S.s can be installed on one computer, and whichever one is to be used can be loaded, e.g. Windows XP, Linux and Windows 98. In case of dual booting there is only one boot loader, not two. The boot loader in this case keeps references to the other O.S.s and provides an interactive way to select which O.S. is to be loaded; when you select an O.S., it loads that O.S.'s kernel into memory from the location it has recorded. It is a special type of boot loader which can contain the references (addresses) of other O.S.s, e.g. LILO (Linux), NTLDR (Windows XP).

There is also third-party software available for dual booting, e.g. boot managers.

Bad Blocks: -

Hot-fix region: - a reserved area of the disk used to recover the data from bad blocks. Clusters with invalid links, i.e. bad blocks, are mapped to clean blocks in the hot-fix area.

Blocks (buffers) can be in three states:

Clean -> empty buffers.

Dirty -> full of data, but the data has not been flushed, i.e. it could not yet be saved to secondary memory (S.M.).

Bad -> buffers that have been corrupted due to some illegal activity.

Swap Space Management: -

Swap space management is a low-level operation performed by the O.S. It is implemented for virtual memory, which uses the disk as an extension of main memory. Since such disk access is much slower than main memory access, swap space management can largely affect system performance. There are certain factors or criteria to be considered to obtain the best throughput from the system:

1. Swap space size and use.

2. Swap space location

Disk Reliability: -

RAID (Redundant Array of Inexpensive Disks)

Disk striping

Disk mirroring

Disk duplexing

Disks are among the most important elements of the system, yet they are less reliable than other components. They have relatively high failure rates, and their failure causes great loss of data and significant downtime while the disk is replaced and the data restored. It is therefore very important to improve the reliability of the disk system using techniques like RAID striping, mirroring and duplexing.

RAID (Redundant Array of Inexpensive Disks): -

Data security and integrity have been important issues since the old days; people have continually been trying to make non-volatile storage secure and accurate. For this purpose, magnetic tapes and expensive disks were earlier used to back up the data.

RAID is a newer technique for making data more secure. Disk drives have continued to get smaller and cheaper in recent years, so it is now cheap to attach a large number of disks to a computer system. Using a large number of small, cheap disks to store data can be more cost effective than using a smaller number of large, expensive disks.

There are 7 RAID levels, which implement different techniques to store data and make the storage stable. The most commonly used RAID levels are: -

1. RAID 0 (striping): -

In this level all the disks are connected in parallel and the data is distributed equally across every disk. The data in this RAID level is not duplicated, i.e. there is no redundancy of data, so it is also called non-redundant striping.

2. RAID 1 (mirroring): -

In this RAID level the data is duplicated on two or more disks together. In case one disk fails, the other will be activated and the system will not go down.

RAID 2 (duplexing): -

In this RAID level there are different controllers for the different disks, unlike RAID 1, in which there is a common controller.

Stable Storage Implementation: -

Stable storage means that information residing on the system storage is never lost. To implement stable storage we need to replicate the needed information on multiple storage devices with independent failure modes (dual disks with one controller, or independent disks with dual controllers). To verify whether the data was written properly or was spoiled, some verification mechanism is used. If a failure occurs during a write, there are three possibilities.

1. Partial failure: - the failure occurred in the middle of the transfer, so only some of the sectors were written.

2. Total failure: - the failure occurred before the disk transfer began, so the previous contents remain intact.

3. Successful completion: - the transfer completed successfully; the failure, if any, occurred after the transfer.

Whenever a failure occurs, the system detects it and invokes a recovery procedure. The recovery procedure restores the partial or incomplete block from its duplicate block; with this it is ensured that the storage is stable, i.e. storage that guarantees no loss of data through a proper recovery mechanism if a failure occurs.
