
PREFACE

Computer use and applications are in the midst of a major paradigm shift from centralized, professionally managed facilities towards “computing for the masses”, characterized by placing control and computing power right at the desks of millions of users. During the past few years, several universities have started undergraduate and postgraduate courses in Computer Science & Engineering and Information Technology, and the subject Operating System is in the course, whether it is undergraduate or postgraduate. The study of the operating system is very important due to the wide scope of its design foundations and tools. In the scenario of the Graphical User Interface based Personal Computer, the present question bank has been written to guide students at examination time. The question bank is written on the basis of the syllabus of the U.P. Technical University, Lucknow and the Indira Gandhi National Open University. I hope that this question bank shall also be very useful to the students of other universities. I have tried my best to organize the contents of this book with sufficient exercises for each chapter. In the present question bank, it is assumed that the reader is familiar with the basic principles of Computer Architecture, Data Structures and a high level programming language like C/C++. In the end, I hope that the questions attempted in the present question bank shall be very helpful in scoring excellent marks in the examination of the Operating System subject. Errors might have crept in despite utmost care, and the author would be very grateful if these are pointed out, along with suggestions to improve the present question bank, at [email protected]. Vipin Saxena


ACKNOWLEDGEMENTS First I bow down at the lotus feet of MAA SARASVATI and pray, “Let noble thought come to me from the universe and, by the eternal blessings of the omniscient almighty, let there persist always in me the strong belief, devotion and determination that anything incomprehensible on this holy earth is comprehensible through team-work with a spirit of cooperation, goodwill and hard work”. It gives me great pleasure to express my deep and sincere gratitude to a man of spiritual personality, philosopher and mentor, Prof. G. Nancharaiah, hon’ble Vice-Chancellor, Babasaheb Bhimrao Ambedkar University, for his soft words and continuous encouragement. I also acknowledge my deep sense of gratitude to Prof. D.S. Chauhan, Vice-Chancellor, U.P. Technical University, Lucknow and Prof. H.P. Dixit, Vice-Chancellor, Indira Gandhi National Open University. I take this opportunity to express my warm regards to Prof. S.K. Singh, Executive Director, UPTEC Computer Consultancy, Lucknow, Prof. D.K. Badhyopadyay, Indian Institute of Management, Lucknow, Prof. R.C. Mittal, Indian Institute of Technology, Roorkee, Col. D.S. Thapa, Sahara Arts and Management Academy, Lucknow and Er. Gaurav Tewari, Co-ordinator, IGNOU Study Centre, UPTEC, Lucknow for their constant encouragement in writing this question bank.


I am delighted to appreciate the help given by my colleagues and friends, who were directly or indirectly involved in the various stages of preparation of the present question bank. There is a dearth of proper words to express my feelings for my parents and my wife Alka, who always encouraged me in all endeavours; I owe much of my academic success to them. My special thanks are also due to Mr. A. Thangraj, Computer Operator, BBAU, Lucknow for clear and efficient composing, and regards to Sri Vibash Pandey for the creative cover design. I thank everyone at Prakashan Kendra, especially Mr. Vivek Malviya, for bringing this question bank into presentable form in record time. In the end, I duly acknowledge the University Grants Commission, India. Vipin Saxena

CHAPTER I

INTRODUCTION In this chapter the most important questions have been considered on the basis of the following syllabus: Operating System and Function, The Evolution of the Operating System, Batch, Interactive, Time Sharing and Real Time System, System Protection, Operating System Structure, System Components, System Structure, Operating System Services.

Question 1.1 What is an operating system? What is the need for an operating system? Explain its different features.

Solution


Before describing the operating system, you must have an idea about the basic resources of a computer system. As you know, every digital computer system consists of several elements, and these elements, namely the processor, storage (memory) and the input and output devices, are the resources used to carry out the work. The input and output devices are for the user or computer expert. The following figure shows the basic elements of a computer system:

Figure 1.1 Basic Elements of a Computer System

The important resources are described below:

Data The first necessary resource is data, given via the keyboard, which is manipulated within the computer and communicated in its altered form to the outside world, either on the screen or on the printer.

Processor It processes and executes a task or program; a processor acts upon data in order to do the work.

Memory It stores a task or program awaiting processing.

All these elements must interact in a controlled way according to a predetermined policy. The hardware of a computer system provides the base of the system, i.e. the processor, storage and the input and output devices. A process is a program in operation; it is the execution of a series of prewritten instructions. To do any task the computer system requires data.

Need of Operating System

The operating system manages the computer system hardware, the processes themselves, the data and the communication with the operator, user or outside world in accordance with the policies set by the individual or organization controlling the computer system. The operating system is the means of controlling the resources and the communication between the elements making up the computer system. In doing so, the operating system must take account of the following:

1. The processor which manages it.
2. The instructions available to it.
3. How data are to be located within storage.
4. How the processor is to be controlled.
5. How the important parts of the system are to be protected from error.

The operating system translates the user’s commands into actions, so it is the interface between the user and the hardware that makes up the computer system. In general, one can say that the primary objective of an operating system is to increase the productivity of a processing resource, such as the computer hardware, or of the computer system’s users.

Question 1.2 Define the System Software. Solution

The first step is to make the coding of instructions simpler by replacing the operation codes with mnemonic codes and also by allowing the use of a symbolic address or data name, so that a program (called an assembler) can process a source program and translate it into the necessary machine instructions with absolute addresses (called an object program), which the computer then executes. The assembler automatically translates the program into machine language.

The assembler, being a program itself, must be loaded into storage by a program called a loader, which also loads the object program. The assembler and loader constitute the system software. The object of such software is to provide a cushion between the programmer and the computer.

Question 1.3 Explain the different ways in which two or more workers can work cooperatively. Clearly explain the different models. How can they be described in respect of an operating system?

Solution

There are different possible ways to divide work between two or more workers. Two workers must agree on a method of dividing the work, cooperating and sharing tools in such a way that they can complete the entire work in minimum time. However the work is organized, an interface between the workers is necessary in order to achieve the goal; with three workers there are two interfaces, one between the first and the second and one between the second and the third, and how well each interface works determines whether the work is completed in minimum time. Consider the following example, as shown in Figure 1.2:

Figure 1.2 Library Figure


A library user needs a particular book which is not available in the library. The user approaches one of the librarians in order to obtain the book. The librarian can obtain the book through an inter-library loan scheme. Once the book arrives at the library, the librarian informs the user that the book is available. Here both people are working to accomplish a particular task: getting information (the book) to the one who wants it.

The following are the important methods:

(a) Interrupt Method

In this model, the librarian’s regular flow of work is interrupted by the borrower. The request prompts an action; while the librarian processes the request, the borrower continues working independently, and the librarian also continues to work independently. When the librarian receives the book through the inter-library loan service, it again becomes necessary to interrupt the borrower, by a telephone message saying that the book is available. The borrower wants to obtain the book, and although he may be doing very important work, he suspends his normal business temporarily to go to the library and obtain the book.

(b) The Polling Method

This is another possible model for the interaction between the librarian and the book borrower. The borrower is anxious about the book and telephones the librarian on a regular basis to enquire whether the desired book has arrived. While this method works, it requires more of the borrower’s time than the interrupt method. The librarian could also ignore the ringing telephone until the book has been received, which wastes the borrower’s time as well. This is a very simple model.

(c) Mail Box Method


A third method for the interaction between the librarian and the book borrower is the mail box method. The borrower fills in a request form and puts it into a pre-designated place, i.e. the librarian’s tray. The librarian checks this tray on a regular basis and processes any requests found there. When the librarian obtains the book, it is placed at an agreed position for the borrower, and the borrower comes to the library from time to time to check for the book. If the book is there, the borrower can take it and use it. In this method the two need not communicate directly with each other.

In comparing the methods, each one has some advantage.

The interrupt method is the most efficient in terms of minimal interaction between the two people; as soon as the book arrives the borrower knows about it. This method is also the fastest.

The polling method may seem unnecessary, since it is neither faster nor more efficient. Consider, however, a borrower who has asked a dozen libraries to search for a dozen different books; then little time is lost in obtaining a reply from each of them.

The mail box approach is the relaxed one. Both people can proceed independently, but it provides neither the least time nor the greatest efficiency.
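These three models of interaction reappear when a processor and its devices must coordinate with one another, as later questions show. The sketch below is only an illustration of the three ideas in code; the names book_arrived, leave_request and check_tray are invented for this example and do not come from the text.

#include <stdbool.h>
#include <stdio.h>

static volatile bool book_arrived = false;   /* set asynchronously, e.g. by the "librarian" */

/* Interrupt model: a handler runs only when the event occurs;
   the borrower (main program) never has to check for it.        */
void book_interrupt_handler(void)
{
    printf("Interrupt: the book is here, go and collect it.\n");
}

/* Polling model: the borrower keeps telephoning (testing the flag),
   which consumes the borrower's time even when nothing has changed. */
void poll_for_book(void)
{
    while (!book_arrived) {
        /* do other small pieces of work, then test again */
    }
    printf("Polling: the book has arrived.\n");
}

/* Mailbox model: a request is deposited in a tray and each side
   inspects the tray at its own convenience; they never meet directly. */
struct tray { int request; bool filled; };

void leave_request(struct tray *t, int req) { t->request = req; t->filled = true; }

bool check_tray(struct tray *t, int *req)
{
    if (!t->filled)
        return false;                        /* nothing waiting in the tray */
    *req = t->request;
    t->filled = false;
    return true;
}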

Question 1.4 Describe the batch operating system in brief.

Solution:

A batch operating system is the logical step in automating the sequencing of the operations involved in program execution and in the mechanical aspects of program development. Such an operating system increases system resource utilization and programmer productivity by reducing or eliminating the component idle times caused by comparatively lengthy manual operations. Several programs are batched together on a single input tape, so that the common operations are performed only once for the whole batch; this reduces the overhead accordingly. In a batch operating system a number of jobs must be executed automatically, without slow human intervention; for this we must provide some instructions, in the form of commands, to execute the batch stream.

Batch processing requires the program, data and appropriate system commands to be submitted together in the form of a job. It does not allow interaction between the user and the executing program. This type of operating system is beneficial for long jobs like statistical analysis, forecasting and large scientific computing programs. Scheduling of batch jobs is very easy to handle: jobs are processed in the order of their submission.

Memory management in a batch operating system is very simple: memory is divided into two areas. One area is permanently resident to the operating system and the other is used for loading transient programs for execution. When one program terminates, the next program is loaded into the same area. A batch operating system does not require any time-critical device management; therefore many serial and batch operating systems use the simple, program-controlled method of I/O. Batch systems provide simple forms of file management, and at the time of execution of programs they access files in serial order.

Question 1.5 What is a multiprogramming operating system? How is it different from a serial operating system? Write down the differences between a multiprogramming operating system and a multitasking operating system.

Solution

The following description is based on the following aspects:

· Processor Scheduling
· Memory Management
· I/O Management
· File Management

Batch processing dedicates the resources of the computer system to a single program at a time. Let us consider two jobs J1 and J2 which have identical behaviour in respect of processor and I/O times. The following figure shows the serial execution of the two jobs: when job J1 completes its execution, job J2 starts execution.

Figure 1.3 Serial Execution of Two Jobs J1 and J2

After this, concurrent execution of the two programs is introduced, as shown in Figure 1.4. In the figure, the processor is assigned to J1, then to J2, and so forth; this is done by the multiprogramming operating system and keeps the processor almost continuously busy. With a single processor, parallel execution of programs is not possible, and at most one program can be in control of the processor at any time. Multiprogramming systems allow two or more programs to compete for system resources at any time. The number of programs actively competing for the resources of a multiprogrammed computer system is called the degree of multiprogramming; a higher degree of multiprogramming generally means higher utilization of the processor.
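As a rough, commonly quoted approximation (not derived in this text), if each program spends a fraction p of its time waiting for I/O, then with a degree of multiprogramming n the processor utilization is about 1 - p^n. For example, with p = 0.8 a single job keeps the processor only about 20% busy, whereas keeping four such jobs in memory raises utilization to roughly 1 - 0.8^4, i.e. about 59%, which illustrates why a higher degree of multiprogramming generally improves utilization.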


An instance of a program in execution is called a process or task. A multitasking operating system is therefore distinguished by its ability to support concurrent execution of two or more active processes. Multitasking is implemented by maintaining the code and data of several processes in memory simultaneously, and it is often coupled with hardware and software support for memory protection in order to prevent erroneous processes from corrupting one another. The term multiprogramming denotes an operating system which, in addition to supporting multitasking, provides sophisticated forms of memory protection and controls the concurrent execution of programs. Multiprogramming operating systems support multiple users and are therefore sometimes known as multiuser systems. Multiprogramming implies multitasking, but multitasking does not imply multiprogramming. Multitasking is one of the mechanisms that a multiprogramming operating system employs in managing the totality of computer system resources, including the processor, memory and I/O devices.

Multiprocessor operating systems manage the operation of computer systems that incorporate multiple processors. They are multitasking systems, because they support multiple tasks running on the different processors.

Question 1.6 Describe the time sharing, real time and combination operating systems.

Solution

The following are the types of the multiprogramming system:

(a) Time Sharing Operating System

Time sharing systems are very popular multiprogrammed, multiuser systems. Most time sharing systems are based on the time-slicing scheduling algorithm. They are very useful in text processing systems and computer aided design systems. In these systems, programs are executed with a rotating priority that increases while a program is waiting and drops after service is granted. A program that does not complete within its time slice is preempted and placed at the end of the queue of waiting programs.

Memory management in these systems provides for isolation and protection of coresident programs. I/O management in a time sharing system must be sophisticated enough to cope with multiple users and devices. File management in a time sharing system must provide protection and access control.

(b) Real Time Operating System

These types of operating systems are very useful where a large number of external events must be processed in a short time. They are used in industrial control, telephone switching equipment, flight control, military applications and real time simulations. Such systems provide a quick event response time.

Real time operating systems rely on specific policies and techniques for doing their jobs. Programmer-defined and controlled processes are commonly encountered in real time systems: a separate process is used to handle a single external event, and the process is activated upon occurrence of the related event. In these systems, each process is assigned a certain level of priority, and priority-based preemptive scheduling is used in the majority of real time systems. Memory management is less demanding than in other types of multiprogramming systems, and processes in real time systems tend to cooperate closely with each other. File management is found only in larger installations of real time systems; its main objective is speed of access.

(c) Combination Operating System

In a combination operating system, the services of different types of operating system are combined and optimized. Some commercial operating systems provide such a combination of services; for example, a time sharing operating system may support interactive users and also incorporate a full-fledged batch monitor. This allows computationally intensive non-interactive programs to be run concurrently with interactive programs; the batch stream may be used as a filler to improve processor utilization while accomplishing a useful service of its own. Nowadays, these types of operating systems are very popular.

Question 1.7 Define the distributed operating system in brief.

Solution

A distributed operating system manages a collection of autonomous computer systems capable of communication and cooperation via their hardware and software interconnections. Distributed computer systems evolved from computer networks in which a number of largely independent hosts are connected by communication links and protocols. The distributed operating system provides a virtual machine abstraction to its users, and its major objective is transparency. These systems provide system-wide sharing of resources like computational capacity, files and I/O devices. Services are provided at each node for the benefit of local clients, and the system may facilitate access to remote resources, communication with remote processes and the distribution of computations.

Question 1.8 Explain the operating system services for a process.

Solution

Let us first define the process. A process is an instance of a program, a subprogram or a macro in execution. The concept of a process may be implicit or explicit in all multiprogrammed operating systems.

System services for process management are provided by the kernels of multiprogramming operating systems. These run-time services are made available as predefined system calls that may be invoked by a user’s process. The following are the important system calls:

(a) Create(processid, attributes) The operating system creates a new process with the specified identifier and attributes. For the Create call, the operating system obtains a new PCB (Process Control Block) from the pool of free memory, fills in its fields with the specified parameters and inserts the PCB into the ready list. Some of the parameters specified at the time of creation of a process are:

* Priority
* Size and Memory Requirements
* Level of Privilege
* Stack Size
* Memory Protection Information with Access Rights
* Other System Dependent Data

(b) Delete(processid)

This service of the operating system is also called Destroy, Terminate or Exit. The operating system destroys the designated process and removes it from the system. A process may be deleted by itself or by another process. The PCB is removed from its place of residence in the appropriate list and is returned to the free pool. This system call is generally invoked as a part of orderly program termination.

(c) Abort(processid) This system call forces the termination of a process. Although a process could conceivably abort itself, the most frequent use of this call is for involuntary terminations, such as the removal of a malfunctioning process from the system. The operating system performs much the same actions as in the Delete system call.

(e) Fork/Join


Another technique for process creation and termination is the Fork/Join pair. Fork is used to split a sequence of instructions into two concurrently executable sequences. Upon reaching the point specified in the Fork, a new process, called the child, is created to execute one branch of the forked code, while the creating (parent) process continues to execute the other. The Fork system call returns the identity of the child process to the parent process. The Join system call is used to merge the two sequences of code divided by the Fork, and is available to a parent process for synchronization with the child process.
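The closest widely available analogue of this Fork/Join pair is the Unix fork()/waitpid() combination. The sketch below illustrates that analogue only; it is not the book's own notation.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();                /* Fork: split into two concurrent sequences */

    if (child < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (child == 0) {
        printf("child: executing one branch of the forked code\n");
        exit(EXIT_SUCCESS);              /* child terminates here */
    } else {
        printf("parent: fork returned the child's identity %ld\n", (long)child);
        waitpid(child, NULL, 0);         /* Join: parent synchronizes with the child */
        printf("parent: child has terminated, the two sequences are merged\n");
    }
    return 0;
}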

(f) Suspend(processid)

This service of the operating system is also called Sleep or Block. The designated process is suspended indefinitely and placed in the suspended state. A process may be suspended by itself or by another process. At the time of suspension, the process surrenders control to the operating system. The operating system responds by inserting the target process’s PCB into the suspended list and updating the PCB accordingly.

(g) Resume(processid)

This service is also known as Wakeup. This call resumes the target process, which was previously suspended. A suspended process cannot resume itself; it depends upon a partner process to issue the Resume. The operating system responds by inserting the target process’s PCB into the ready list, with its state updated. In systems that keep track of the depth of suspension, the operating system decrements the suspend count and moves the PCB to the ready list only when the count reaches zero.

(h) Delay(processid)

This service is also known as Sleep. The target process is suspended for the duration of a specified time period. The time may be expressed in terms of system clock ticks, which are system dependent and not portable, or in standard time units such as seconds and minutes. A process may delay itself or be delayed by another process.

(i) Get_Attributes(processid, attribute_set)

This is an inquiry to which the operating system responds by providing the current values of the process attributes, or of a specified subset of them, from the PCB. The system call is used to monitor the status of a process, its resource usage and accounting information, or other public data stored in its PCB.

(j) Change_Priority(processid, new_priority)

This is an instance of a more general attribute-changing system call, the counterpart of the previous one. It is obviously not available in systems where process priority is static. Run-time modification of a process’s priority may be used to increase or decrease the process’s ability to compete for system resources; we can say that the priority of a process should rise or fall accordingly.
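On Unix-like systems the same idea is exposed through the getpriority()/setpriority() calls, which read and change a process's nice value at run time. The fragment below is only an illustration of that analogue, not of the Change_Priority call itself.

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    /* Read the current scheduling priority (nice value) of this process. */
    int old = getpriority(PRIO_PROCESS, 0);
    printf("current nice value: %d\n", old);

    /* Reduce this process's ability to compete for the processor
       by raising its nice value (lower effective priority).        */
    if (setpriority(PRIO_PROCESS, 0, old + 5) != 0)
        perror("setpriority");

    printf("new nice value: %d\n", getpriority(PRIO_PROCESS, 0));
    return 0;
}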

CHAPTER II

CONCURRENT PROCESSES

Process Concept, Process State Transitions, Interrupts, Principle of Concurrency, The Producer/Consumer Problem, The Critical Section Problem, Semaphore, Classical Problems in Concurrency, Interprocess Synchronization, Process Generation, Process Scheduling


Question 2.1 How do the data channel and the main processor use a buffer?

Solution Consider Figure 1.2, relating to the borrower and the librarian. These two persons exchange data in order to work together. The librarian could hand the book to the borrower directly, but this depends upon their meeting face to face, so one of them may have to hold the book and wait for the other to arrive to take it. It is better for the book to be left in a pre-arranged place for the borrower to find on arrival at the library.

Figure 2.1 Input of Data Through the Medium of a Buffer (the data channel fills a buffer in primary storage, from which the main processor takes the data)

Transferring data through the medium of a buffer works in the same manner. When the main processor and the data channels exchange data, they use a buffer. A buffer is an area of primary storage where data are held temporarily to facilitate their transfer between devices. The data channel obtains data from a peripheral device and places it into the buffer. Once the buffer is full, the data channel signals the main processor, which can then access the data. For output, the processor puts data into the buffer; when the buffer is full, the operating system requests the data channel to deal with it. The data channel accesses the buffer and transfers the data to an output device. Figure 2.1 gives a clear picture of the input of data through the medium of a buffer.

Question 2.2 Explain how interrupts work.

Solution Suppose I am reading a textbook and someone interrupts me; I first finish the sentence and place a pencil mark at that point, and then attend to the call. After finishing the conversation, I can start reading again where I left off, without having to backtrack to the top of the page.

The computer’s processor alternates between running a process and handling interrupts in the same way. When the processor is interrupted, it must complete the current instruction and save the status of whatever task it is executing, so that it can resume that task exactly where it left off. So an interrupt causes the processor to save the vital information on the status of the task that was executing when the interrupt occurred. It then deals with the interrupt signal. Once it has dealt with the interrupt, the processor restores the interrupted task as it was and recommences work on it.

Question 2.3 What is meant by state switching? What must happen in state switching? Explain the concept of the PSW.


Solution

The shift made by the processor between running a process and handling an interrupt is called state switching or context switching. Any process in the computer system can be said to be in a state, or context. When a process is actively executing, it is in the running state. If a process could use the processor but the processor is not available, the process is in the ready state, ready to run if it could. If it has started running but has been interrupted, it is said to be in the blocked state. The instructions which control context switching and which handle interrupts make up the most fundamental level of an operating system; the code which handles an interrupt is known as an interrupt handler. The operating system is therefore program code which controls and manages the hardware and the interaction between the hardware and other software. Each processor has a hardware feature designed to control the fetching and execution of program instructions. It is a special register which contains the address of the next instruction to be fetched and executed. It is called the PSW (Program/Process Status Word); sometimes it is also called the program counter (PC), program/process address counter (PAC) or instruction pointer (IP). It can be viewed in two ways, first as a hardware structure, i.e. a special register, and secondly as the data (the address of an instruction) contained within the register. The size of the register depends upon the number of address bits. The PSW of some processors contains information in addition to the address of the next instruction. The PSW controls the fetching and execution of an instruction.


The operating system uses the data in the PSW to control a context switch. In a context switch, an interrupt occurs and the currently executing process must be suspended. The currently executing instruction completes, and the data in the PSW is saved; this preserves the address of the next instruction to be executed when that process is resumed, together with the condition codes set by the last instruction in that process to have completed. The address of the starting instruction of the appropriate interrupt handler is then moved into the PSW. The processor fetches the instruction indicated by the address in the PSW and executes it. The reverse occurs when an interrupted process resumes: the stored PSW is restored to the PSW register, and the processor fetches the instruction indicated by the address now in the PSW for execution.
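A highly simplified sketch of this save-and-restore step is given below. The structures psw and cpu_context are invented for illustration and do not describe any real processor.

#include <stdint.h>

/* Hypothetical processor state: the PSW (next-instruction address plus
   condition codes) together with the general registers.                */
struct psw {
    uint64_t next_instruction;    /* address of the next instruction to fetch */
    uint32_t condition_codes;     /* set by the last completed instruction    */
};

struct cpu_context {
    struct psw psw;
    uint64_t   registers[16];
};

/* On an interrupt: the current instruction completes (in hardware), then
   the running process's context is saved and the PSW of the appropriate
   interrupt handler is loaded, so the processor fetches handler code next. */
void context_switch(struct cpu_context *current_cpu,
                    struct cpu_context *save_area,
                    const struct cpu_context *handler_entry)
{
    *save_area   = *current_cpu;      /* data in the PSW (and registers) is saved   */
    *current_cpu = *handler_entry;    /* handler's start address moves into the PSW */
}

/* The reverse occurs when the interrupted process resumes. */
void resume_process(struct cpu_context *current_cpu,
                    const struct cpu_context *save_area)
{
    *current_cpu = *save_area;        /* stored PSW restored; execution continues   */
}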

Question 2.4 What must an operating system be able to do a process. What purpose is served by a PCB. Solution

When an operating system takes program code and turns it into a process, it also creates a small item of data in order to keep track of that process and its progress through the system: the operating system creates a process control block (PCB). The PCB contains the process’s unique identifier, start time, startup priority, the address of the first instruction, the status of the process and pointers to any other control blocks related to the process. The operating system updates the PCB to reflect any change in the status of the process. For other control blocks, changes in the status of their objects are likewise noted by the operating system. From the point of view of the operating system, all control blocks of a given type are identical except for their logical connection to the object to be controlled and to related control blocks. The operating system must be able to treat processes as objects: minimally, it must be able to create a process from program code, suspend the process to deal with interrupts (block it), make the process ready to resume execution (ready it), resume executing the process (dispatch it) and finally destroy the process once it has completed.
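Viewed as a data structure, a PCB might look like the C record below; the field names are illustrative assumptions rather than the layout used by any particular operating system.

#include <time.h>

enum process_state { STATE_READY, STATE_RUNNING, STATE_BLOCKED, STATE_SUSPENDED };

struct pcb {
    unsigned int        pid;                /* unique identifier                        */
    time_t              start_time;         /* when the process was created             */
    int                 priority;           /* startup (and current) priority           */
    void               *first_instruction;  /* address of the first instruction         */
    enum process_state  state;              /* ready, running, blocked, suspended       */
    struct pcb         *next;               /* link used by the ready/suspended lists   */
    void               *related_blocks;     /* pointers to other related control blocks */
};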

Question 2.5 How do processes move through the different process state transitions? Clearly explain with the help of a diagram.

Solution

When programmers allocate names to computer programs there may be duplication, and it may even be useful to have two processes created from the same program. To create a process, therefore, the operating system must find a unique identifier for it. Then the process must be made known to all parts of the operating system, and a means of tracking its progress must be established. The operating system is itself a group of processes, and it must switch its attention among several processes.


Figure 2.2 Life Cycle of a Process with State Transitions

The life cycle of a process is shown in Figure 2.2. The operating system selects the program which we want to run, creates a process from it and creates a PCB. When the processor is available for work, the operating system dispatches the process, which begins to run. It runs until it reaches a point where it is interrupted. The user process itself initiates a request for a state transition to blocked; all other interrupts are initiated from outside. The process remains blocked until the awaited event is completed, whereupon the operating system is said to wake the process up, that is, the operating system changes the process’s status from blocked to ready. As soon as the processor is available, the operating system dispatches the unblocked process to continue execution. After the cycle of blocking, waking up and dispatching is complete and the process has finished, the operating system destroys the process and selects a new program to turn into a process. Whenever a process is blocked, the contents of the PSW are saved and the PSW of the appropriate interrupt handler is copied into the PSW register. When a process is dispatched for the first time, the address of its first instruction is put into the PSW. On all subsequent dispatches, the saved address is returned to the PSW so that the process resumes where it left off.

Question 2.6 What is process scheduling? What is the role of the different kinds of schedulers in process scheduling?

Solution


Process scheduling refers to the set of policies and mechanisms built into the operating system that determine the order in which work is carried out. A scheduler is an operating system module that selects the next job to be admitted into the system and the next process to run. The main objective of scheduling is to optimize system performance according to the criteria laid down by the system designer. There are three different types of schedulers, which are shown in Figure 2.3.

Figure 2.3 Different Types of Schedulers (batch jobs enter a batch queue served by the long term scheduler; interactive programs enter the ready queue directly; the medium term scheduler moves processes between the suspended and suspended-and-swapped-out queues and the ready queue; the short term scheduler assigns the CPU to processes in the ready queue, from which completed processes exit)

(a) Long term scheduler

It works with the batch queue and selects the next batch job to be executed. Batch jobs contain all the necessary data and commands for their execution; they also contain system-assigned estimates of their resource needs, such as memory size, expected execution time and device requirements. The primary objective of the long term scheduler is to provide a balanced mix of jobs. This scheduler acts as a first level control on resource utilization: for example, when processor utilization is low, the scheduler admits more jobs to increase the number of processes in the ready queue, and when the utilization factor is very high, the long term scheduler reduces the rate at which batch jobs are admitted. In addition, the long term scheduler is usually invoked whenever a completed job departs the system. Its rate of invocation is lower than that of the other two schedulers.

(b) Medium term scheduler After executing for some time, a running process may become suspended by making an I/O request or by issuing a system call. A suspended process cannot make any progress towards completion until the suspending condition is removed. In the figure above, a portion of the suspended processes are assumed to be swapped out, while the remaining processes are assumed to stay in memory while they are suspended. The medium term scheduler handles the swapped-out processes: when the suspending condition is removed, the scheduler swaps the process back in and puts it into the ready queue for the processor. It also keeps track of the memory requirements of swapped-out processes; the actual size of a process may be recorded at the time of swapping.


The medium term scheduler thus controls the suspended-to-ready transition of swapped processes. This scheduler may be invoked when memory space is vacated by a departing process.

(c) Short term scheduler It allocates ready processes to the processor for execution. The main objective of this scheduler is to maximize system performance. This scheduler controls the transition of processes from the ready to the running state; it must be invoked for each process switch, to select the next process to run.

In summary, scheduling decisions are made at several levels in a batch system:

o When work is waiting to enter the system.
o When work is waiting to be initiated.
o When a suspended process is awaiting activation.
o When a running process is suspended.
o When a process completes.
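As a toy illustration of the short-term scheduler's job (not any real system's code), the sketch below dispatches processes from a ready queue in round-robin fashion, returning each process to the tail of the queue when its time slice expires.

#include <stdio.h>

#define MAX_READY 8

static int ready_queue[MAX_READY];      /* a toy ready queue of process identifiers */
static int count = 0;

static void make_ready(int pid)         /* a process enters the ready queue          */
{
    if (count < MAX_READY)
        ready_queue[count++] = pid;
}

static int dispatch(void)               /* short-term scheduler picks the head       */
{
    if (count == 0)
        return -1;                       /* nothing is ready to run                  */
    int pid = ready_queue[0];
    for (int i = 1; i < count; i++)      /* shift the remaining entries forward       */
        ready_queue[i - 1] = ready_queue[i];
    count--;
    return pid;
}

int main(void)
{
    make_ready(1); make_ready(2); make_ready(3);
    for (int slice = 0; slice < 6; slice++) {
        int pid = dispatch();
        printf("time slice %d: running process %d\n", slice, pid);
        make_ready(pid);                 /* time slice over: back to the tail         */
    }
    return 0;
}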

Question 2.7 Define the Critical Section & Mutual Exclusion.

Solution Critical Section

It is a sequence of instructions with a clearly marked beginning and end. It usually guards the updating of one or more shared variables.


When a process enters a critical section, it must complete all the instructions therein before any other process is allowed to enter the same critical section.

Mutual Exclusion

Only the process executing in the critical section is allowed to access the shared variables; all other processes should be prevented from doing so until the critical section completes. This is called mutual exclusion: a single process temporarily excludes all others from using a shared resource in order to ensure the integrity of the system.

Question 2.8 What is interprocess synchronization?

Solution Interprocess synchronization can be defined as:

· a set of protocols and mechanisms used to preserve system integrity and consistency when concurrent processes share resources that are serially reusable; a serially reusable resource can be used by at most one process at a time;

· the coordination needed when a set of processes have access to a common address space and can use shared variables for a number of purposes.

Consider two processes, KEYBOARD and DISPLAY, which accept input from the keyboard and display information on the screen. The two processes share a common buffer. The process KEYBOARD, in response to keyboard interrupts, receives input and puts it into the buffer. The process DISPLAY echoes the characters from the buffer by displaying them on the computer screen. Each process maintains a pointer to mark its current working position in the buffer. The variable echo is used to keep track of the running number of characters awaiting display.

The process KEYBOARD increments the variable echo for each character input, as shown below:

{Process Keyboard} ………. echo = echo + 1; ……….

And DISPLAY decrements echo for every character displayed, as shown below:

{Process Display}

………. echo = echo - 1; ……….

Echo thus contains the running difference between the number of characters input and the number of characters displayed; echo must always be a non-negative integer.
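The need for synchronization here can be made concrete with a small experiment. The sketch below (an illustration using POSIX threads, not the book's code) lets two threads play the roles of KEYBOARD and DISPLAY; because echo = echo + 1 and echo = echo - 1 are not indivisible operations, updates are lost when they interleave and the final value is usually not the expected 0.

#include <pthread.h>
#include <stdio.h>

#define N 1000000

static volatile long echo = 0;          /* shared counter standing in for echo */

static void *keyboard(void *arg)        /* echo = echo + 1 per character typed  */
{
    (void)arg;
    for (long i = 0; i < N; i++)
        echo = echo + 1;
    return NULL;
}

static void *display(void *arg)         /* echo = echo - 1 per character shown  */
{
    (void)arg;
    for (long i = 0; i < N; i++)
        echo = echo - 1;
    return NULL;
}

int main(void)
{
    pthread_t kb, disp;
    pthread_create(&kb, NULL, keyboard, NULL);
    pthread_create(&disp, NULL, display, NULL);
    pthread_join(kb, NULL);
    pthread_join(disp, NULL);
    /* Without mutual exclusion the two updates interleave, so the result is
       usually wrong: hence the critical sections and semaphores described
       in this chapter.                                                      */
    printf("echo = %ld (expected 0)\n", echo);
    return 0;
}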

Question 2.9 What are semaphores? How many types of semaphores exist to handle interprocess synchronization?

Solution A semaphore consists of two primitive operations, SIGNAL and WAIT, which operate on a special type of variable called a semaphore variable. A semaphore variable contains only an integer value, and that value is accessed or manipulated only through the SIGNAL and WAIT operations. The two primitives may be defined as:


WAIT(s) It decrements the value of its argument semaphore s as soon as the resulting value would be non-negative; if s is not positive, the caller keeps waiting. A busy-waiting form of the operation is shown below:

while (s <= 0)
    ;            /* keep testing */
s = s - 1;

SIGNAL(s) It increments the value of its argument semaphore s as an indivisible operation:

s = s + 1;

There are two kinds of semaphores: a general semaphore, which may take any non-negative integer value, and a binary semaphore, whose variable is allowed to take only the values 0 (busy) and 1 (free). Semaphore operations and declarations of semaphore variables are usually provided as operating system calls or as programming language constructs.

Properties of Semaphores

Ø Semaphores are a relatively simple but powerful mechanism for ensuring mutual exclusion among concurrent processes accessing a shared resource.

Ø When semaphores are used, modification of the code or restructuring of individual processes and modules does not generally require changes in other processes.

Ø Semaphores may be provided in a programming language as a language construct.
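On POSIX systems the WAIT and SIGNAL primitives appear as sem_wait() and sem_post(). The fragment below is an illustration of a binary semaphore (initialized to 1, i.e. free) protecting a critical section; it is not the book's own notation.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;                 /* binary semaphore: 1 = free, 0 = busy  */
static int shared_counter = 0;      /* the shared resource being protected   */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);           /* WAIT: decrement, block while already 0 */
        shared_counter++;           /* critical section                       */
        sem_post(&mutex);           /* SIGNAL: increment, wake one waiter     */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);         /* initial value 1: the resource is free  */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);   /* 200000 */
    sem_destroy(&mutex);
    return 0;
}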


Question 2.10 Explain mutual exclusion through an algorithm.

Solution In mutual exclusion, a process observes the following basic protocol: negotiation protocol; critical section; release protocol. A process that wishes to enter a critical section first negotiates with all interested parties to make sure that no other conflicting activity is in progress and that all concerned processes are aware of the imminent temporary unavailability of the resource. Once consensus is reached, the winning process begins executing the critical section of code. Upon completion, the process informs the other contenders that the resource is available, and another round of negotiations may be started. Let us consider two processes that share a common resource accessed within a critical section. Both processes are cyclic, and each of them also executes some code other than the critical section. The following Pascal-style code consists of two processes p1 and p2 with a simple provision for mutual exclusion:

program/module mutex1;
………
type who = (proc1, proc2);
var turn: who;

process p1;
begin
  while true do
  begin
    while turn = proc2 do {keep testing};
    critical_section;
    turn := proc2;
    other_p1_processing;
  end {while}
end {p1}

process p2;
begin
  while true do
  begin
    while turn = proc1 do {keep testing};
    critical_section;
    turn := proc1;
    other_p2_processing;
  end {while}
end {p2}

{parent process}
begin {mutex1}
  turn := ….;
  initiate p1, p2
end {mutex1}

In the above, the single module is called mutex1. The global variable turn is used to control access to an unspecified shared resource, which may be a piece of code, a data structure or a physical device; it controls the two processes p1 and p2. When process p1 completes its use of the shared resource in its critical section, p1 sets turn to proc2 and so allows process p2 to use the resource next. The code for process p2 is symmetrical; therefore p2


follows the same protocol in acquiring and releasing the resource. If one process wishes to access the shared resource while the other is using it, it is kept waiting in a busy loop until the current occupant completes its critical section and updates turn. As written, the two processes execute in the strictly alternating sequence

p1, p2, p1, p2, p1, p2, …

Question 2.11 What is the role of semaphores in mutual exclusion?

Solution The role of semaphores can easily be understood with the help of the following Pascal program, which contains three processes that share a resource accessed within a critical section. A binary semaphore mutex is used to protect the shared resource in the program/module smutex.

program/module smutex; ………… var mutex : semaphore; {binary} process p1; begin

while true do begin wait (mutex); critical_section; signal(mutex); other_p1_processing; end {while}

end; {p1}


process p2; begin

while true do begin wait (mutex); critical_section; signal(mutex); other_p2_processing; end {while}

end; {p2} process p3; begin

while true do begin wait (mutex); critical_section; signal(mutex); other_p3_processing end {while}

end; {p3} {parent process} begin{smutex} mutex:=1 ; {free} initiate p1,p2,p3 end{smutex}

Execution of the above program is shown in the following table:

Time   P1                 P2                 P3            Mutex (1=free, 0=busy)   In critical section ; attempting to enter
M1     -                  -                  -             1                        - ; -
M2     wait(mutex)        wait(mutex)        wait(mutex)   0                        - ; p1,p2,p3
M3     critical_section   waiting            waiting       0                        p1 ; p2,p3
M4     signal(mutex)      waiting            waiting       1                        - ; p2,p3
M5     other_p1_proc.     critical_section   waiting       0                        p2 ; p3
M6     wait(mutex)        critical_section   waiting       0                        p2 ; p3,p1
M7     waiting            signal(mutex)      waiting       1                        - ; p3,p1
M8     critical_section   other_p2_proc.     waiting       0                        p1 ; p3

The above table is largely self-explanatory. At time M1 the critical section is free, so the mutex variable is 1. At time M2 all three processes are active and ready to enter their respective critical sections; each executes its WAIT statement and the semaphore variable is decremented to 0. At time M3, p1 is the winner and uses the critical section; after it signals, p2 is in the critical section at time M5, and so on. From the table one can see that the three concurrent processes take turns correctly, but some category of scheduling algorithm is still needed; scheduling algorithms are described in the next chapter.

Question 2.12 Explain the producers/consumers problem with bounded and unbounded buffer.

Solution This is a classical problem of concurrent programming. The producers/consumers problem is stated below:

· Given a set of cooperating processes, some of which “produce” data items (producers) to be “consumed” by others (consumers), with a possible disparity between production and consumption rates,

· devise a synchronization protocol that allows both producers and consumers to operate concurrently at their respective service rates, in such a way that produced items are consumed in the exact order in which they are produced (FIFO order).

The producers/consumers problem is discussed below first with an unbounded buffer and then with a bounded buffer.

Producers/consumers with unbounded buffer

In this case the buffer is unbounded. A producer must be the first process to run, in order to provide the first item. A consumer process may run whenever there is at least one item in the buffer that has been produced but not yet consumed. Because the buffer is unbounded, producers may run at any time without restriction. We assume that all items produced and subsequently consumed have an identical but unspecified structure. A simple program is given below:

program/module producer_consumer_unbounded;
…………
var produced: semaphore; {general}

process producer;
begin
  while true do
  begin
    produce;
    place_in_buffer;
    signal(produced);
    other_producer_processing
  end {while}
end; {producer}

process consumer;
begin
  while true do
  begin
    wait(produced);
    take_from_buffer;
    consume;
    other_consumer_processing
  end {while}
end; {consumer}

{parent process}
begin {producer-consumer}
  produced := 0;
  initiate producer, consumer
end {producer-consumer}

In the unbounded buffer case, the producer may run at any time. When an item is produced, it is placed in the buffer and a signal is given on the semaphore produced. The consumer process waits on the produced semaphore before consuming an item from the buffer. Produced is initially 0, which satisfies the requirements: a consumer may absorb only produced items and must wait when no items are available. In this case the capacity of the buffer is unspecified. With several producers, a pointer to the next available buffer slot would have to be global in order to be accessible to all producers, and its consistency could not be maintained in the presence of concurrent updates. This difficulty is removed if we consider the bounded buffer.

Producers/consumers with bounded buffer


In this problem, a consumer may absorb only produced items and must wait when no items are available, while producers may produce items only when there are empty buffer slots to receive them. The Pascal program is given below:

program/module producers_consumers_bounded;
…….
const capacity = …….;
type item = ……;
var buffer: array[1..capacity] of item;
    mayproduce, mayconsume: semaphore; {general}
    pmutex, cmutex: semaphore; {binary}
    in, out: 1..capacity;

process producerX;
var pitem: item;
begin
  while true do
  begin
    wait(mayproduce);
    pitem := produce;
    wait(pmutex);
    buffer[in] := pitem;
    in := (in mod capacity) + 1;
    signal(pmutex);
    signal(mayconsume);
    other_X_processing
  end {while}
end; {producerX}

process consumerZ;
var citem: item;
begin
  while true do
  begin
    wait(mayconsume);
    wait(cmutex);
    citem := buffer[out];
    out := (out mod capacity) + 1;
    signal(cmutex);
    signal(mayproduce);
    consume(citem);
    other_Z_processing
  end {while}
end; {consumerZ}

{parent process}
begin
  in := 1;
  out := 1;
  signal(pmutex);
  signal(cmutex);
  {mayconsume := 0}
  for i := 1 to capacity do signal(mayproduce);
  initiate producers, consumers
end

In this case there is a finite buffer capacity. Producers may produce items only when there are empty buffer slots to receive them. At any time the shared global buffer may be empty, partially filled, or full of produced items ready for consumption. A producer process may run in either of the first two cases, but all producers must be kept waiting when the buffer is full. As they execute, consumers vacate buffer slots; when the buffer is empty, consumers must wait, so they can never get ahead of the producers.

Question 2.13 Explain the readers/writers concurrent programming problem.


Solution This is one of the most important classical problems of concurrent programming. In it, a number of processes use a shared global data structure, and the processes are categorized on the basis of their usage of the resource as either readers or writers. A reader never modifies the shared data structure, but a writer may read and modify it. A number of readers may use the shared data structure concurrently, whereas writers must be granted exclusive access to the data. The readers/writers problem is stated below:

· The readers read a common data structure, and the writers modify the same common data structure.

· Synchronization among readers and writers must ensure the consistency of the common data structure.

The Pascal code is given below: program/module readers/writers ……. var readercount:integer; mutex, write:semaphore;{binary} process readerX begin while true do

begin {obtain permission to enter} wait(mutex); readercount:=readercount+1; if readercount=1 then wait(write); signal(mutex); …….. …….. {reads}


…….. …….. wait(mutex); readercount:=readercount-1; if readercount=0 then signal(write); signal(mutex); other_X_processing end{while}

end{readerX} process writerY begin while true do

begin wait(write); …….. …….. {writes} …….. …….. signal(write); other_Y_processing end{while}

end{writerY} {parent process} begin readercount := 0; signal(mutex); signal(write); initiate readers, writers end

The above contains reader and writer processes. The writer is a very simple process: it waits on the binary semaphore write, which has to grant it permission to enter the critical section and use the shared resource. The reader has two critical sections, one before and one after using the resource. The integer variable readercount is used to keep track of the number of readers actively using the resource. The first reader passes through mutex, increments the number of readers and waits on the write semaphore; while at least one reader is reading the shared data, mutex is free and write is busy. Readers that arrive while at least one reader is actively reading pass quickly through this first critical section and proceed to read. If there are writers waiting, they are prevented from accessing the data by the busy write semaphore. When the last reader finishes, it finds readercount = 0 and admits a waiting writer by signalling the write semaphore. When a writer is in the critical section, write is busy, which keeps the first reader to arrive from completing its first critical section, and subsequent readers are then kept waiting in front of that critical section. When the system is idle, both semaphores are free. In connection with the above, the following points should also be considered:

· A new reader should not start if there is a writer waiting.

· All readers waiting at the end of a write should have priority over the next writer.

Question 2.14 Write a short note on messages under interprocess synchronization.

priority over the next writer. Question 2.13 Write a short notes on Messages under interprocess synchronization. Solution It is a very simple mechanism & suitable for interprocess communication & synchronization. Many multiprogramming operating system support the inter process messages. Sending & receiving a message is a standard form of inter code communication. Generally message is a collection of information that may be exchange between a sending & receiving process. It may contain data execution commands or code to be transmitted

Page 41: Microsoft Word - OS

between two or more processes. It contains the following format

Sender’s Id Receiver’s Id Length Type

………… …………

Figure 2.4 A Message Format

We describe several important issues in message implementation, which are given below: Naming There are two types of naming direct and other is indirect. In the case of direct message communication, process B and A are the identities of the receiver and the source of message, respectively. This is shown below: process A; ……….. send ( B, message) ……….. process B; ……….. receive (A, message)

Message header

Message body Contain Actual Message


In the case of indirect message communication, the first process sends the message to a mailbox and the second process removes the message from the mailbox. In indirect message communication, one sends the message via a mailbox as shown below:

process A; ………. send (mailbox1, message); ………. process B; ………. receive(mailbox1, message); ……….
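On POSIX systems, indirect (mailbox) communication of exactly this kind is provided by message queues. The fragment below is an illustration only; the queue name /mailbox1 and the message text are invented for the example.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
    char buf[64];

    /* Process A: send(mailbox1, message) */
    mqd_t mbox = mq_open("/mailbox1", O_CREAT | O_RDWR, 0600, &attr);
    if (mbox == (mqd_t)-1) { perror("mq_open"); return 1; }
    mq_send(mbox, "the book has arrived", strlen("the book has arrived") + 1, 0);

    /* Process B: receive(mailbox1, message) -- shown in the same program for brevity */
    ssize_t n = mq_receive(mbox, buf, sizeof buf, NULL);
    if (n >= 0)
        printf("received: %s\n", buf);

    mq_close(mbox);
    mq_unlink("/mailbox1");
    return 0;
}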

Copying

A message exchange between two processes transfers the contents of the message from the sender's to the receiver's address space. This may be done either by copying the whole message into the receiver's address space or by simply passing a pointer to the message between the processes; in other words, the message may be passed by value or by reference. In the case of copying, the two processes remain fully decoupled.

Synchronous versus Asynchronous

The exchange of messages between sender and receiver may be synchronous or asynchronous. When it is synchronous, both the receiver and the sender must come together to complete the transfer, and the SEND operation is blocking: when a sender wishes to send a message for which no outstanding RECEIVE has been issued, the sender must be suspended until the receiver accepts the message. In asynchronous message exchange, the sender is not blocked when there is no outstanding RECEIVE. An asynchronous, non-blocking SEND is implemented by having the operating system accept and buffer outstanding messages until matching RECEIVEs are issued, so the sending process may continue execution after sending a message and need not be suspended, regardless of the activity of the receivers.

Length

Message size may be fixed or variable. Fixed-size messages result in lower overhead, by virtue of allowing the related system buffers to be of fixed size, which makes their allocation simple and efficient. Variable-size messages create the problem of managing the operating system's dynamic memory, and this technique is more costly in terms of CPU time than fixed-size message communication.

Question 2.15 Explain the Dining Philosophers algorithm.

Solution This is one of the important classical problems of concurrent programming in operating systems. It consists of five hungry philosophers sitting around a round table, thinking and eating. Each of them has one chopstick, which makes a total of five chopsticks, and each needs two chopsticks to eat, which means they must share the chopsticks. If a philosopher is hungry, he needs to pick up the chopstick on his left side and also the one on his right side in order to eat. After he has finished, he simply puts down the chopsticks, so that the other philosophers can take the chopsticks they need to eat.


starvation. Also, two neighbouring philosophers may try to eat at the same time, which leads to contention for a chopstick. It is also possible that all five of them become hungry at the same time, all pick up their left chopsticks simultaneously, and then wait for their right chopsticks to become available; in that case they will never be able to eat. This is a deadlock situation. A solution of the Dining Philosophers problem, written close to the "C" language, is given below:

#include <prototypes.h>
#define N 5                     /* number of philosophers */
void philosopher(int j)         /* j: which philosopher (0 to N-1) */
{
    while (TRUE) {
        think();                /* philosopher is thinking */
        take_fork(j);           /* take left chopstick */
        take_fork((j+1) % N);   /* take right chopstick */
        eat();
        put_fork(j);            /* put left chopstick back on the table */
        put_fork((j+1) % N);    /* put right chopstick back on the table */
    }
}

The above program can easily be modified so that, after taking the left chopstick, the philosopher checks whether the right chopstick is available; if it is not, he puts the left chopstick down, waits for some time, and then repeats the whole procedure. Nevertheless, if all five philosophers pick up their left chopsticks at the same time, none of them will find the right chopstick available, so they will all put down their left chopsticks, wait, and repeat the same process again, and none of them will ever get a chance to eat. This situation, in which the processes keep running but make no progress, leads to starvation. A deadlock-free variant is sketched below.
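The following deadlock-free variant is not given in the text but follows the same idea: it breaks the symmetry by letting the last philosopher pick up his right chopstick first while all the others pick up the left one first, so a circular wait can never form. The sketch models the chopsticks as pthread mutexes; think() and eat() are only stubs.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#define N 5
static pthread_mutex_t chopstick[N];
static void think(int j) { (void)j; usleep(1000); }
static void eat(int j)   { printf("philosopher %d eating\n", j); }
static void *philosopher(void *arg)
{
    int j = *(int *)arg;
    int left = j, right = (j + 1) % N;
    int first  = (j == N - 1) ? right : left;   /* break the symmetry */
    int second = (j == N - 1) ? left  : right;
    for (int round = 0; round < 3; round++) {
        think(j);
        pthread_mutex_lock(&chopstick[first]);
        pthread_mutex_lock(&chopstick[second]);
        eat(j);
        pthread_mutex_unlock(&chopstick[second]);
        pthread_mutex_unlock(&chopstick[first]);
    }
    return NULL;
}
int main(void)
{
    pthread_t t[N];
    int id[N];
    for (int j = 0; j < N; j++) pthread_mutex_init(&chopstick[j], NULL);
    for (int j = 0; j < N; j++) {
        id[j] = j;
        pthread_create(&t[j], NULL, philosopher, &id[j]);
    }
    for (int j = 0; j < N; j++) pthread_join(t[j], NULL);
    return 0;
}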


Question 2.15 What are threads? Explain different applications of threads. Solution A thread is the basic unit of dispatching within a process. Each thread has:
· Program counter
· Register set
· Saved processor context
· Execution stack space
· Some per-thread static storage for local variables

Threads act like processes in the following ways:

· They can have different states: ready, blocked, running, terminated.

· Only one thread can be executing at a time. · Threads have their own stack and PC. · Threads can create child threads.

In a traditional process, there is a single thread of control and a single PC. A task consists of one or more threads. The following are the benefits of threads:

· It takes much less time to create a new thread in an existing process than to create a brand new task.

· It takes less time to terminate a thread.
· CPU switching among peer threads is inexpensive, unlike context switching between processes.
· A thread shares access to the memory and resources of its task with all other threads in that task, for example:
(i) Open files


(ii) Code section (iii) Data section (iv) Child processes (v) Timers (vi) Signals Since all threads share exactly the same address space, every thread can access every virtual address; this means, for example, that thread A can read, write or even destroy thread B's stack. There is no protection between threads because it is impossible to provide. This might seem a disadvantage, but such protection is not necessary, since all peer threads belong to a single user and are created to cooperate. The following diagram gives a simplified view of threads: Figure 2.5 Threads Representation

[Figure: a task (process) containing several threads; each thread has its own PC and stack, while the task's data section is shared among all of them]


Application of Threads The following are important applications of threads: File Server A good use of threads is in a server, such as a file server on a local area network. The server is really one process in which each connection is a separate thread: as each new file request arrives, a new thread can be created for the file management routine. This technique allows critical data to be shared in the process's global memory and to be available to the peer threads without any special requests. Also, if a thread blocks while waiting for an event such as a disk transfer, other threads can still run. Multiple processes, each with a single thread, would not work in this situation because no other process could run until the blocked process runs again. Since a server handles many requests in a short period of time, many threads are created and destroyed. If a multiprocessor server is used, then different threads within the same task can execute simultaneously in parallel on different processors; however, threads are also useful on a single processor, where they share the CPU via timesharing and simplify the overall structure of a program that performs different functions. WWW Browsers Web pages often contain many small images. Once a connection is made to such a page, the browser sets up a separate connection for each image to the page's home site and requests that image. Setting up these connections one after another can waste a large amount of time, so using multiple threads within the browser allows many images to be requested at the same time, which greatly improves performance.
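The file-server idea above can be sketched in a few lines of C with POSIX threads; this is our own illustration, not the code of any particular server, and handle_request() merely stands in for the real file management routine.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
static void *handle_request(void *arg)
{
    int request_id = *(int *)arg;
    free(arg);
    printf("serving request %d\n", request_id);   /* e.g. read a file block */
    return NULL;
}
int main(void)
{
    for (int i = 0; i < 5; i++) {                 /* pretend five requests arrive */
        int *req = malloc(sizeof *req);
        *req = i;
        pthread_t worker;
        pthread_create(&worker, NULL, handle_request, req);  /* one thread per request */
        pthread_detach(worker);                   /* the server does not wait for it */
    }
    pthread_exit(NULL);                           /* let the detached workers finish */
}

Because every worker shares the server's address space, open files and cached data are available to all of them without any special arrangement, which is exactly the benefit described above.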


Thread design issues:
· If a parent can have multiple threads, can the child also have multiple threads? If a child gets as many threads as the parent does, what happens when a thread is blocked? Are both parent and child blocked?

· What happens if a thread closes a file while another thread is still reading from it?

· How is error reporting handled? When a thread makes a system call, another thread may make a system call before the first has read the error value, wiping out the original error value.

· How is stack management designed? A process with multiple threads has multiple stacks, and if the kernel is not aware of threads it cannot automatically grow a stack, so overflow can occur.

CHAPTER III

CPU SCHEDULING

In this chapter, solutions are given to questions on the following syllabus: Scheduling Concepts, Performance Criteria, Scheduling Algorithms, Algorithm Evaluation, Multiprocessor Scheduling, Multiprocessor Organization. Question 3.1 Explain the First Come First Served (FCFS or FIFO) scheduling technique of the operating system with a suitable example. Solution


It is one of the simplest scheduling strategies: the workload is simply processed in order of arrival of the jobs. Once a request for work has been accepted by the high-level scheduler, it is turned into a process and placed in a queue. This is a case of non-preemptive scheduling, in which a running process may not be displaced by another process. The medium-term scheduler brings the process to the ready state, and at the short-term scheduler level, once the processor is available, the process runs until it completes its entire execution.
Ø FIFO scheduling gives poor performance and is not used on its own in modern systems.
Ø In a FIFO system a short job may be hurt by a long job. This can be seen with the help of the following example. Consider two jobs j1 and j2 with execution times of 20 and 2 time units, respectively. If they are executed in the sequence j1 ---> j2, the turnaround times for j1 and j2 are 20 and 22 time units, respectively, because j2 must wait until j1 completes; the average turnaround time is therefore (20+22)/2, i.e. 21 time units, and the corresponding waiting times are 0 and 20 units. But when the two jobs arrive in the opposite order, j2 ---> j1, j1 waits until j2 completes processing. The turnaround times for j2 and j1 are then 2 and 22 time units, the average turnaround time is (2+22)/2, i.e. 12, which is smaller than for the first sequence, and the average waiting time is only 1. From this example we can say that, under FIFO, short jobs are hurt by long jobs. A small sketch that reproduces this arithmetic is given below.
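The sketch below is our own illustration; it assumes all jobs arrive at time 0 and are served strictly in array order, exactly as in the example.

#include <stdio.h>
int main(void)
{
    int burst[] = { 20, 2 };            /* j1 then j2; swap them for the second sequence */
    int n = 2, clock = 0;
    double turnaround = 0, waiting = 0;
    for (int i = 0; i < n; i++) {
        waiting += clock;                /* time spent waiting before service */
        clock += burst[i];
        turnaround += clock;             /* completion time equals turnaround here */
    }
    printf("average turnaround = %.1f, average waiting = %.1f\n",
           turnaround / n, waiting / n);
    return 0;                            /* prints 21.0 and 10.0 for the order j1, j2 */
}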


Question 3.2 What is the Round Robin (RR) technique? How is it better than FIFO? Solution It is a low-level scheduling strategy in which ready processes are initially dispatched according to some policy, but once a process begins running the short-term scheduler gives it only a time slice in which to run. Therefore this technique is known as time slicing; it is also known as quantum round robin. Processor time is divided into slices, and no process may run for more than one time slice when there are others waiting in the ready queue. If a particular process needs more time to complete execution after using up its time slice, it is placed at the end of the ready queue to await its next allocation.

Ready queue before the time slice:  Px  Pw  Pz  Py
Ready queue after Px's time slice:  Pw  Pz  Py  Px

Figure 3.1 Round Robin Technique

With round robin scheduling the response time of a long process is directly proportional to its resource requirement, since a long process may require more than one time slice. This strategy is used in time-sharing and

multiuser systems where terminal response time is important. This scheduling discriminates against long non-interactive jobs, and its effectiveness depends on a judicious choice of the time slice. The duration of a time slice is a tunable system parameter that may be changed during system generation. The effect on the ready queue is illustrated in Figure 3.1, and a small simulation of the mechanism is sketched below.
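The simulation below is our own illustration; the quantum and the burst values are arbitrary. Remaining service times are reduced one quantum at a time, and an unfinished process simply waits for its next turn, which corresponds to being placed at the tail of the ready queue.

#include <stdio.h>
#define N 4
int main(void)
{
    int remaining[N] = { 6, 3, 1, 7 };   /* Px, Pw, Pz, Py */
    int finish[N] = { 0 };
    int quantum = 2, clock = 0, done = 0;
    while (done < N) {
        for (int i = 0; i < N; i++) {    /* one sweep over the ready queue */
            if (remaining[i] == 0) continue;
            int run = remaining[i] < quantum ? remaining[i] : quantum;
            clock += run;
            remaining[i] -= run;
            if (remaining[i] == 0) { finish[i] = clock; done++; }
        }
    }
    for (int i = 0; i < N; i++)
        printf("process %d finishes at t=%d\n", i, finish[i]);
    return 0;
}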



This technique is better than first in first out because short jobs are not hurt by long jobs: thanks to time slicing, short jobs complete execution first. In this scheduling, too short a time slice results in excessive overhead, while too long a time slice degenerates round robin into first in first out scheduling. It is a preemptive technique in which processes are repeatedly swapped out and in according to the time slicing. Question 3.3 What is Shortest Job First (SJF) scheduling? Solution This is a non-preemptive technique that selects the job having the shortest service/execution time, so the job that needs the least time executes first. Giving preference to short requests quickly reduces the number of waiting requests and tends to minimize the average turnaround time. The processor may simply shut off a job that grossly exceeds the user's estimate, which avoids problems where programs have endless loops. The shortest job executes first, and whenever it completes, the next shortest job is executed. This type of scheduling reduces the number of waiting jobs. Question 3.4 Explain Shortest Remaining Time Next (SRTN) scheduling. Solution In this scheduling a running process can be preempted from the processor by another


process whose estimated time to completion is shorter. As with shortest job first, this method depends on the user's estimates. In this strategy, the remaining service time must be recorded for each unit of work. This type of strategy is used in modern systems. The technique is a provably optimal scheduling discipline in terms of minimizing the average waiting time of a given workload, and an SRTN scheduler can accommodate short jobs that arrive after the commencement of a long job.

Question 3.5 What is the Highest Response Ratio Next (HRRN) algorithm?

Solution

The last two scheduling strategies share the biases and shortcomings of shortest job first scheduling, which favours short jobs at the expense of long ones. To redress this, we calculate a priority, the response ratio, for each unit of work:

p = (t + s) / s,   where s = estimated service time and t = time spent waiting for service

The process with the highest response ratio runs first, and the rest follow according to the queue. Consider two jobs j1 and j2 with estimated service times of 30 and 40 time units, respectively, and waiting times of 10 and 5 time units. The priority of job j1 is (10+30)/30, i.e. about 1.33, and the priority of job j2 is (5+40)/40, i.e. about 1.13. Therefore job j1 will run first, because it has the highest response ratio. Since the ratio grows as a job waits, a long job cannot be postponed indefinitely.


Question 3.6 What are the performance criteria used by different schedulers to maximize system performance?

Solution The following criteria are used to analyze system performance. Processor Utilization It is the average fraction of the time during which the processor is busy, and it is very simple to measure. In the round robin technique, processor utilization is very high. By keeping the processor as busy as possible, the utilization factors of the other components will also be high and give a good return; as utilization approaches 100%, however, the average waiting times and average queue lengths tend to grow excessively. Throughput It refers to the amount of work completed in a unit of time, i.e. the number of jobs executed per unit of time. The higher the number, the more work is apparently being done by the system. Turnaround Time It is defined as the time that elapses from the moment a job is submitted until it is completed by the system. It is the time spent in the system, and it may be expressed as the sum of the job's service/execution time and its waiting time. Waiting Time


It is the time that a job spends waiting for resource allocation due to contention with others in a multiprogramming system. It is the penalty imposed for sharing resources with others, and it can be expressed as the difference between the turnaround time (T) and the actual execution time (x), i.e. W = T - x. Response Time

It is defined as the time that elapses from the moment the last character of a command line launching a program is entered until the first response appears at the terminal; this is called the terminal response time. In real-time systems this time is the latency, defined as the time from the moment an event is signalled until the first instruction of its service routine is executed; this is called the event response time. Question 3.7 Evaluate the performance of the FIFO, SJF and RR scheduling algorithms. Solution The performance of multiprogramming and multiuser operating systems depends largely on their effectiveness in allocating system resources. An active process simultaneously requires main memory, I/O devices, secondary storage and the processor in order to execute. Performance evaluation is an important tool for assessing the effectiveness of existing systems and for estimating the behaviour of new systems as they are being designed. The performance of the three scheduling algorithms is evaluated below:

FCFS The average waiting time in a batch system for the FCFS scheduling is given below:


WFCFS = W0 / (1-r),

where W0 = λx²/2, x² denotes the second moment of the service time distribution and λ is the arrival rate; the quantity W0 is known as the mean residual life of the service time, and r is the offered load. The average waiting time in FIFO is not a function of a job's own service time x: FIFO is a purely nondiscriminatory policy that does not take a job's service requirement into consideration. The above expression shows that the average waiting time increases with increasing load, and the derivative dW/dr shows that the curve rises sharply as r grows. Shortest Job First This is the non-preemptive scheduling that selects the job with the shortest service time. Since the completed job departs the system, this tends to reduce the mean queue length Nq = λW, and a reduction in Nq gives a reduction in W. W reduces to simple expressions for two important special cases: for short jobs with small service times (x -> 0), WSJF(x) = W0, whereas for very long jobs (large service times) WSJF(x) = W0/(1-r)² = WFIFO/(1-r). Short jobs do not wait for anything other than the completion of the job currently in service, while long jobs are subject to larger delays as r increases. Round Robin For an extreme case of RR, the time slice q approaches 0; this is called processor sharing. The average waiting time


for a job that needs x units of service, denoted W(x), is given by WRR(x) = rx/(1-r). The discrimination in the RR system is therefore linear, because the dependence of the average waiting time on the service time is linear. A small numerical illustration of these formulas is given below.
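The following sketch is our own numerical illustration; the arrival rate and mean service time are arbitrary, and an exponential service distribution is assumed, so the second moment is 2*xbar*xbar.

#include <stdio.h>
int main(void)
{
    double lambda = 0.8, xbar = 1.0;          /* arrival rate and mean service time */
    double x2 = 2.0 * xbar * xbar;            /* second moment for exponential service */
    double r = lambda * xbar;                 /* offered load */
    double w0 = lambda * x2 / 2.0;            /* mean residual service time */
    double w_fcfs = w0 / (1.0 - r);
    double w_rr_short = r * 0.1 / (1.0 - r);  /* W_RR(x) for a short job, x = 0.1 */
    double w_rr_long  = r * 5.0 / (1.0 - r);  /* W_RR(x) for a long job,  x = 5.0 */
    printf("r=%.2f  W_FCFS=%.2f  W_RR(0.1)=%.2f  W_RR(5)=%.2f\n",
           r, w_fcfs, w_rr_short, w_rr_long);
    return 0;
}

As expected, RR charges each job a wait proportional to its own service time, while FCFS charges every job the same average wait.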

Question 3.8 What do you understand by a multiprocessor system? Solution Multiprocessor operating systems manage the operation of computer systems that incorporate multiple processors. These systems are multitasking systems because they support the execution of multiple processes on different processors, and multiprocessors provide an appealing architectural alternative for improving the performance of computer systems by coupling a number of low-cost standard processors. Multiprocessor systems can be applied to provide:
· Application speedup, by executing portions of the application in parallel.
· Increased system throughput, by executing a number of different user processes on different processors in parallel. In timesharing environments, throughput can be improved by executing a number of unrelated user processes on different processors in parallel, completing a larger number of tasks in a unit of time without any reprogramming.

Question 3.9 What are the important advantages of multiprocessor systems? Solution


The important advantages of multiprocessor systems are given below: 1. Performance & Computing Power The speedup of an application can be increased by multiprocessing; because the interprocessor communication bandwidth is high, problems with a high degree of interaction can be solved more quickly. 2. Fault Tolerance The inherent redundancy in multiprocessors can be employed to increase availability and to eliminate single points of failure. 3. Flexibility A multiprocessor system can easily be reconfigured to optimize different objectives for different applications, such as increased throughput or application speedup. 4. Modular Growth A modular system design can be adapted to the needs of a specific installation by adding exactly the type of components needed. 5. Functional Specialization Functionally specialized processors can be added to improve the performance of particular applications. 6. Cost/Performance Multiprocessor systems are cost-effective for a wide range of applications.


Question 3.10 On what basis can multiprocessor systems be classified? Solution Parallel computer architectures are commonly classified by their instruction and data streams:
SI -> Single Instruction stream
MI -> Multiple Instruction streams
SD -> operating on a Single Data stream
MD -> operating on Multiple Data streams
These give the following classifications of multiprocessor systems:
· SISD (single instruction stream, single data stream): this class encompasses conventional serial computers.
· SIMD (single instruction stream, multiple data streams): a single instruction may operate on different data in different execution units.
· MISD (multiple instruction streams, single data stream): multiple instructions operate on a single data stream in parallel.
· MIMD (multiple instruction streams, multiple data streams): multiple instructions execute simultaneously, each operating on its own data stream.


Multiprocessor systems may be classified as

· Tightly coupled: the multiprocessor contains globally shared memory to which all processors have access. · Loosely coupled: individual processors have private memories, and there is no shared global memory. Question 3.11 Explain the different types of multiprocessor interconnections. Solution The following are the basic architectures of common multiprocessor types: (a) Bus Oriented Systems

[Figure 3.2 Shared Bus Multiprocessor Interconnection: several processors, each with its own cache, attached through a single shared bus to the shared memory]

One of the simplest ways to construct a multiprocessor is to use a shared bus to connect the processors and the memory, as shown above; the processors communicate with one another and with memory over this bus. Several combinations of this scheme are possible: · Individual processors may or may not have private memory. · I/O devices may be attached to individual processors or to the

shared bus. · Shared memory itself is usually implemented in the form of

multiple physical banks connected to the shared bus. Cache memory is used to reduce contention on the shared bus. Two arrangements of cache are possible in shared-bus systems: 1. Cache is associated with the shared memory, and processors access it

over the bus. 2. Cache is associated with each individual processor. The second arrangement is more popular because a per-processor cache can capture many of the processor's memory references locally.


(b) Crossbar Connected System

[Figure 3.4 Crossbar Interconnection: processors P0, P1, P2, ..., PN-1 connected through an N x N array of crosspoint switches to memory modules M0, M1, M2, ..., MN-1]

This interconnection is shown above. The crossbar itself introduces no contention: it allows all N processors to access the N memories simultaneously, provided that each processor accesses a different memory module. The crosspoint switch is the only source of delay between a processor and a memory. If the processors have no private memories and access the shared memory only through the crossbar, the resulting system is a Uniform Memory Access (UMA) multiprocessor. In crossbar-based multiprocessors, contention can occur only when more than one processor attempts to access the same memory module at the same time. Judicious placement of data can relieve the memory-contention problem: if two processors happen to access different data in the same module, one of them is deferred briefly until the other finishes its data reference and moves on to its next access. This technique does not, however, resolve the contention that arises when several processors attempt to access the same memory location.


(c) Hypercubes The following figure shows a three dimensional hypercube, with a node placed at each vertex:

Figure 3.5 Eight Node Hypercube 3D Topology

The figure is constructed of nodes, each consisting of a processor and its private memory. This interconnection has a number of interesting mathematical properties. Each processor in a hypercube has direct physical links to log2 N other nodes in an N-node system, and the maximum distance between any two nodes is also log2 N. Hypercubes are recursive structures and contain lower-dimensional hypercubes as proper subsets. In the eight-node cube shown, each node has direct links to log2 8 = 3 others, the maximum internode distance is 3, and the system can be partitioned into two disjoint two-dimensional hypercubes in three different ways. Communication between adjacent nodes is direct, and the longest internode delay is bounded by log2 N. This topology suits the many problems whose structure maps nicely onto it. The neighbour property is illustrated by the small sketch below.
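The sketch below is our own illustration of the neighbour property just quoted: in an N = 2^d node cube, the addresses of a node's d neighbours are obtained by flipping one address bit at a time.

#include <stdio.h>
int main(void)
{
    int d = 3;                            /* three-dimensional cube, N = 8 nodes */
    int node = 5;                         /* binary 101 */
    printf("neighbours of node %d:", node);
    for (int bit = 0; bit < d; bit++)
        printf(" %d", node ^ (1 << bit)); /* flip one bit per dimension */
    printf("\n");                         /* prints 4, 7 and 1 (binary 100, 111, 001) */
    return 0;
}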



(d) Multistage Switch Based Systems Processors and memories in a multiprocessor system can be connected by a multistage switch. A generalized network of this kind connects N inputs to N outputs through m = log2 N stages, each stage consisting of N links connected to N/2 interchange boxes. Figure 3.6 shows the structure of a multistage switching network that connects eight processors to eight memories: the network consists of three stages of four switches each. Each switch is a 2x2 crossbar that can directly copy an input to an output, swap inputs and outputs, or copy one input to both output lines; the last capability eases the implementation of broadcasts and multicasts. The switching network can connect any input to any output by making appropriate connections in each of the m stages. Routing in the multistage switch is fixed; it is performed on the basis of the destination address tag that the sender includes with each request for connection. Multistage switching networks provide a form of circuit switching: the switch can simultaneously connect all inputs to all outputs provided no two processors attempt to access the same memory module at the same time; otherwise contention at the memory module and within the switching network may develop and cause traffic to back up. Question 3.12 What are the different types of multiprocessor operating systems? Solution


The following are the three basic types of multiprocessor operating systems. Separate Supervisors In this organization each node contains a separate operating system that manages the local processor, memory and I/O resources, so each processor is managed as an independent system. A few additional services and data structures may be added to support the multiprocessor aspects of the hardware. The most important examples of separate supervisors are the hypercube systems, in which the kernel provides services such as local process and memory management and implements the message-passing primitives, while system-level functions such as allocation of processors to applications and macroscopic scheduling are delegated to a system executive, which may be implemented in a symmetrical or a master/slave fashion. Master/Slave In this approach one processor is dedicated to executing the operating system. The remaining processors are identical and form a pool of computational processors. The master processor schedules the work and controls the activity of the slaves, and most data-structure activity is controlled by the master; slave processors may be able to handle simple local queries directly. The majority of operating system services are provided by the master. These systems are relatively simple to develop, but they have limited scalability. Symmetric In this organization all processors are functionally identical; for allocation purposes they represent a pool of anonymous resources. Other


hardware resources, such as memory and I/O devices, may also be pooled so as to be available to all processors; otherwise the system becomes asymmetric. Because of this symmetry, any processor may execute the operating system, which allows parallel execution of the operating system by several processors. A floating master is a natural first step in implementing the operating system for a symmetric multiprocessor. Question 3.13 Explain the scheduling of multiprocessors with a suitable example. Solution Once processors have been allocated to applications in some manner, the processes still have to be scheduled. A desirable objective in multiprocessor systems is to coschedule processes that interact, so that they run at the same time: processes at the opposite ends of a pipe, the sender and receiver of a message, and several threads of one task are all candidates for coscheduling. Otherwise, considerable time may be wasted as exchanges between out-of-phase parties are attempted. Suppose eight processes are scheduled for execution on four processors as shown in Figure 3.7, and suppose the system is time slicing and schedules the processes in the A, B, C, D group at even time intervals and the processes in the P, Q, R, S group at odd time intervals.

                Processor
Time Slot    0    1    2    3
    0        A    B    C    D
    1        P    Q    R    S
    2        B    C    D    A
    3        P    R    Q    S
    4        B    A    D    C
    5        R    S    Q    P

Figure 3.7 An Example of Multiprocessor Scheduling Now let one or more processes in the first group communicate via synchronous messages with selected processes in the second group. After receiving a time slice, process A sends a message to process P. Since P is not executing, A is blocked for the remainder of its time slice. When P receives the message at the beginning of its own time slice and sends an immediate reply, it is blocked as well until A runs again. If the time slice lasts 50 ms, scheduling that does not take process-group relationships into consideration could restrict message exchanges between A and P to at most one per 100 ms. Individual processors in a multiprocessor system may be uniprogrammed or multiprogrammed; multiprogramming provides the potential to increase throughput.

CHAPTER IV

DEADLOCK

The solutions of the questions in this chapter are based on the following syllabus: System Model, Deadlock Characterization, Prevention, Avoidance and Detection, Recovery from Deadlock, Combined Approach. Question 4.1 What is a deadlock? Solution A deadlock is a situation in which a group of processes is permanently blocked. The problem occurs under mutual exclusion,


where one process holds a resource that another process needs in order to complete its execution.

[Figure: resources a to e; process A holds resource b and requests resource c, while process B holds resource c and requests resource b]

Figure 4.1 A Deadlock Situation Consider a process that requires a resource to complete its execution while, at the same time, another process has gained control, under mutual exclusion, of the very resource the first process requires. This situation is termed a deadlock: neither process can proceed, nor can any process queued behind either of them, until the other gives way. If one process were to give way and allow the other to use the resource, there would be no blocking. The deadlock condition is shown above, where process A needs resource c and process B needs resource b to complete their execution, but resource c is held by process B and resource b is held by process A; neither process can complete its execution. For example, consider two concurrent processes P1 and P2 interleaved in such a way that at some point P1 is granted the use of the printer while P2 manages to seize the disk drive, and each then requests the device held by the other; the two processes are deadlocked. Question 4.2 What are the necessary conditions under which deadlock occurs? Solution The following are the necessary conditions for deadlock:


Mutual Exclusion The shared resources are acquired and used in a mutually exclusive manner, i.e. by at most one process at a time. Hold & Wait Each process continues to hold resources already allocated to it while waiting to acquire other resources. No Preemption Resources granted to a process can be released back to the system only as a result of the voluntary action of that process; the system cannot forcefully revoke them. Circular Waiting Deadlocked processes are involved in a circular chain such that each process holds one or more resources being requested by the next process in the chain. Question 4.3 What is a deadlock situation? How can the deadlock situation be avoided? Solution The possibility of deadlock exists when concurrent processes are not fully disjoint but interact in some way. The following table lists common problems in concurrent execution and the tools used to solve them.

Sl. No.   Problem                                              Tools
1         Mutual exclusion                                     Critical section
2         Exchanging time signals                              Semaphores
3         Exchanging data                                      Message buffer
4         Serializing use of a resource                        Mutual exclusion
5         Controlling access to critical sections or buffers   Semaphores

Figure 4.1 Problems in Concurrent Execution & Tools to Remove Them
The first condition relates to deadlock with partial allocation of resources under mutual exclusion: the operating system allocates resources to a process one resource at a time, and the process holds each allocated resource to the exclusion of any other process requesting it. Mutual exclusion is usually difficult to dispense with, but deadlock can be prevented by denying one or more of the remaining three conditions. Mutual exclusion is a necessity where data will be updated, although the amount of time for which resources are held under mutual exclusion can be reduced. Denying the no-preemption condition gives a simple form of deadlock prevention but can produce very poor performance, whereas non-preemptive operation preserves performance. The hold-and-wait condition can be eliminated by forcing a process to release all resources it holds before acquiring others; in other words, deadlocks are prevented because waiting processes never hold any resources. There are basically two possible implementations of this strategy: (a) The process requests all needed resources prior to

commencement of execution. (b) The process requests resources incrementally in the course of

execution.

To request all resources at the outset, a process must preclaim all of its resource needs. This sometimes requires additional effort in estimating the resource requirements of processes, and it poses a problem for data-driven programs whose actual resource


requirements are determined dynamically at run time. For example, updating the salaries of all programmers may require scanning the entire database just to identify the affected records.

An over-estimation problem is therefore present whenever resource requirements must be stated in advance of execution. An alternative is to acquire resources incrementally as needed; in that case, to prevent deadlock, a process must release all resources it holds before requesting ones that are unavailable. This strategy avoids the disadvantage of preclaiming and holding all resources from the beginning of a process. The no-preemption condition can be denied by allowing preemption. Since preemption is involuntary from the point of view of the affected process, the operating system must be charged with saving the process's state and restoring it when the process is later resumed. Preemption is possible only for certain types of resources, such as the CPU and main memory, since the CPU portion of a process's state is routinely saved during the process-switch operation and the contents of preempted memory pages can be swapped out to secondary storage. Some resources, such as partially updated files, cannot be preempted without corrupting the system.

So the pre-emption is possible only for certain types of resources.

One way to prevent the circular-wait condition is to impose a linear ordering on the different kinds of system resources. The system resources are divided into different classes Cj (j = 1, 2, 3, ..., n), and deadlock is prevented by requiring all processes to request their resources in increasing order of the specified resource classes.


Once a process acquires a resource belonging to class Cj, it can only request resources of class j+1 or higher. Linear ordering of resource classes eliminates the possibility of circular waiting. A minimal illustration of this ordering discipline is sketched below.
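In the following sketch (our own illustration) the two pthread mutexes stand in for two resource classes; because every process locks them in the same increasing order, a circular wait cannot arise.

#include <pthread.h>
#include <stdio.h>
static pthread_mutex_t class1 = PTHREAD_MUTEX_INITIALIZER;  /* e.g. disk    */
static pthread_mutex_t class2 = PTHREAD_MUTEX_INITIALIZER;  /* e.g. printer */
static void *worker(void *name)
{
    /* every process follows the same order: class 1 before class 2 */
    pthread_mutex_lock(&class1);
    pthread_mutex_lock(&class2);
    printf("%s holds both resources\n", (char *)name);
    pthread_mutex_unlock(&class2);
    pthread_mutex_unlock(&class1);
    return NULL;
}
int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, "P1");
    pthread_create(&b, NULL, worker, "P2");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}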

Question 4.4 Consider a system that has ten resources, and suppose that three processes a, b and c are active and using these resources. The following table shows how eight of the ten resources are currently allocated. Describe how the deadlock condition can be avoided.

Process     Has allocated    Still needs
a           3                1
b           5                3
c           0                6

Total allocated:   8
Total free:        2
Grand total:       10

Figure 4.2 Allocation of Resources for Processes a, b and c

Solution As the figure above shows, there are only two free resources available for allocation. If the operating system allocates both free resources to process b, a deadlock will occur: processes a and c cannot proceed because there are no more free resources to allocate to them, and process b is unable to continue because it still needs one more resource, so all three processes would wait on each other. If, instead, the operating system allocates one of the two free resources to process a, then a can proceed to completion and release its four resources, so that a total of five resources becomes available after the completion of a; this is


sufficient to allow process b to run to completion, and when b completes there will be sufficient resources for process c to complete its execution. By using this allocation order, the deadlock condition can easily be avoided. Question 4.5 Write and explain a deadlock detection algorithm. Solution In deadlock detection approaches, the resource allocator simply grants each request for an available resource. The following algorithm detects deadlock: 1. Form Allocated, Requested and Available in accordance with the system state, and unmark all active processes. 2. Find an unmarked process i such that Requested_i <= Available. If one is found, mark process i, update Available (Available := Available + Allocated_i) and repeat this step. When no qualifying process can be found, proceed to the next step. 3. If all processes are marked, the system is not deadlocked.

Otherwise, the system is deadlocked, and the set of unmarked processes is deadlocked.

The above algorithm can be easily implemented on the following system state:


Figure 4.3 System State

[Figure 4.3: a resource allocation graph for two processes P1, P2 and two resource types R1, R2, corresponding to the matrices below]

Let us first define the system data structures Allocated, Requested and Available, which are shown below:

          Allocated        Requested        Available
          R1    R2         R1    R2         R1    R2
   P1      1     1          0     1          0     0
   P2      0     1          1     0

Figure 4.4 System Data Structure

From the above figure, since no resources are available and both processes have nonzero resource requests, the algorithm cannot find a single qualifying process in step 2, which shows that the two processes are in a deadlock situation. A small sketch of the detection procedure, run on this same state, is given below.
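The sketch below is our own C rendering of the detection algorithm, using the matrices of Figure 4.4.

#include <stdio.h>
#define NPROC 2
#define NRES  2
int main(void)
{
    int allocated[NPROC][NRES] = { {1, 1}, {0, 1} };
    int requested[NPROC][NRES] = { {0, 1}, {1, 0} };
    int available[NRES]        = { 0, 0 };
    int marked[NPROC]          = { 0, 0 };
    int progress = 1;
    while (progress) {                        /* step 2: look for a qualifying process */
        progress = 0;
        for (int i = 0; i < NPROC; i++) {
            if (marked[i]) continue;
            int fits = 1;
            for (int r = 0; r < NRES; r++)
                if (requested[i][r] > available[r]) fits = 0;
            if (fits) {                       /* mark it and release its resources */
                marked[i] = 1;
                for (int r = 0; r < NRES; r++)
                    available[r] += allocated[i][r];
                progress = 1;
            }
        }
    }
    for (int i = 0; i < NPROC; i++)           /* step 3: unmarked processes are deadlocked */
        if (!marked[i]) printf("process P%d is deadlocked\n", i + 1);
    return 0;
}

Run on this state, the loop finds no qualifying process, so both P1 and P2 are reported as deadlocked.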



CHAPTER V

MEMORY MANAGEMENT

The solutions of the problems attempted in this chapter are based on the following syllabus: Real Storage, Resident Monitor, Multiprogramming with Fixed Partitions, Multiprogramming with Variable Partitions, Multiple Base Registers, Paging, Segmentation, Paged Segmentation, Virtual Memory Concepts, Demand Paging Performance, Page Replacement Algorithms, Allocation of Frames, Thrashing, Cache Memory Organization and its Impact on Performance. Question 5.1 Define the following in brief: (a) Memory Management (b) Separation of Address Spaces (c) Sharing of Memory (d) Contiguous Allocation (e) Internal Fragmentation (f) External Fragmentation (g) Wasted Memory (h) Time Complexity (i) Memory Access Overhead Solution (a) Memory Management

Memory management is primarily concerned with allocation of physical memory of finite capacity to requesting processes.


No process can be activated without the allocation of memory space. Temporarily inactive processes may be swapped out of memory to make room for others, and the vacated space can then be used to load other processes that are ready for execution, giving the scheduler a better chance of finding useful work. Overall resource utilization and performance therefore depend on memory management. Main memory introduces two conflicting requirements, separation of address spaces and sharing of memory, which are described in parts (b) and (c) of this question. (b) Separation of Address Spaces The memory manager must keep distinct address spaces separated so that no active process can erroneously access the space of another. A memory manager in a multiprogramming environment supports both memory protection, by isolating disjoint address spaces, and sharing of memory, which is described in part (c). (c) Sharing of Memory Sharing of memory allows cooperating processes to access common areas of memory. When a number of processes can be active at once, it also becomes possible for user processes to share resources. In a simple time-sharing system, space is not shared: only one user process is resident in the computer system at a time, and all the others are swapped out awaiting their turn.



[Figure 5.1 Sharing of Memory: memory layout showing the kernel/operating system, a resident user process and unused space, with a fixed starting (lowest) address for the user-process area and the highest address at the other end]

But in other computer systems several processes may be resident in primary storage simultaneously, all of them except one either blocked or ready and waiting to execute. (d) Contiguous Allocation

In this case, each logical object is placed in a set of memory locations with consecutive addresses. A common approach with contiguous allocation is to partition the available physical memory and to satisfy requests for memory by allocating a suitable free partition. When the resident object terminates, its partition is freed and made available for allocation to another requester. (e) Internal Fragmentation

When partitioning is static, memory is wasted in each partition. The memory wasted within a partition, due to the difference between the size of the partition and the size of the object resident within it, is called internal fragmentation. Dynamic partitioning eliminates internal fragmentation by tailoring each partition to the request for space for an object. (f) External Fragmentation



When an object is removed from memory, the freed space can be used for new allocations. After some time in operation, however, dynamic partitioning tends to fragment memory into interspersed areas of allocated and unused memory, so an allocation may fail to find a free region large enough for a request even though the combined size of the free areas exceeds the request by a wide margin. This wasting of memory between partitions, due to scattering of the free space into a number of discontiguous areas, is called external fragmentation. (g) Wasted Memory It is the fraction of physical memory that is unused. By unused memory we mean memory not allocated either to the system or to user objects. Memory can be wasted through internal fragmentation, external fragmentation and the memory manager's own data structures. (h) Time Complexity

It is the combined complexity of allocating and deallocating memory. The space complexity of the data structures used for this purpose is included in the wasted-memory measure. (i) Memory Access Overhead It refers to the duration of the additional operations performed by a given memory management scheme when accessing memory. Question 5.2 Explain the working of a single process monitor.

Solution

The single process monitor is one of the simplest memory management schemes; it was commonly used in PC DOS.


Here, memory is divided into two contiguous parts. The upper part is permanently allocated to the resident portion of the operating system, and the lower part is used for allocation to transient processes, which are loaded and executed one at a time. When one transient process has completed, the operating system loads another one for execution. User processes and the non-resident portion of the operating system are executed in the transient process area (Figure 5.2). The operating system expends little time and effort on managing the memory: it needs to keep track only of the first and last locations available for allocation to transient processes, the first location immediately following the resident portion of the operating system and the last location marking the capacity of memory. A new transient process may be activated upon termination of the running one. The operating system checks that the size of the process image to be loaded lies within the bounds of the available memory; otherwise loading cannot be completed and an error message is generated. Once in memory, the process receives control from the operating system and executes until completion or until it aborts due to some error condition. After completion, the process transfers control back to the operating system by invoking the exit service, and another waiting process may then be loaded into memory for execution. Protection between user processes is not much of an issue in a single process monitor, since only one transient process is resident at a time, but it is desirable to protect the operating system code from being tampered with by the executing transient process; without such protection the system may frequently crash and need rebooting when un-debugged user programs are run.

[Figure 5.2 A Single Process Monitor: the operating system monitor occupies one contiguous part of memory and the transient process area occupies the rest]


A very simple way, used in embedded systems, to protect operating system code from user programs is to place the operating system in read-only memory. Other systems require some hardware assistance, and two protection mechanisms, the fence register and protection bits, are described here. Protection may be accomplished by a dedicated register called the fence register, which is used to draw the boundary between the operating system and the transient process area. When the resident portion of the operating system is in low memory, the fence register is set to the highest address occupied by the operating system code, and each memory address generated by the process is compared against the fence. Another approach to memory protection is to record the access rights in the memory itself. One possibility is to associate a protection bit with each word in memory; the memory may then easily be divided into two zones of arbitrary size by setting all protection bits in one area and resetting them in the other. During system startup, protection bits are set in all locations where the operating system is loaded; user programs may then be loaded and executed in the remaining memory locations. Sharing of code and data in memory does not make much sense in single-process environments. Single process monitors are relatively simple to design and require little hardware support, but the lack of support for multiprogramming reduces the utilization of both processor and memory: processor cycles are wasted because there is no pending work that may be executed while the running process is waiting for an event. Question 5.3 Explain the working of static partition memory allocation. What is the degree of multiprogramming? Explain the partition description table. Clearly explain the concepts of the best-fit and first-fit approaches to allocation.


Solution

In this case, the available physical memory is divided into several partitions in order to support multiprogramming, and each partition may be allocated to a different process. Partitioning may be static or dynamic. Static partitioning means that the division of memory is made at some time prior to the execution of user programs and that the partitions remain fixed thereafter. The number and sizes of the individual partitions are determined during the system generation process, keeping in mind the capacity of the physical memory, the desired degree of multiprogramming and the typical sizes of processes. A process executes within its partition. The number of distinct partitions represents the upper limit on the number of active processes; this is sometimes called the degree of multiprogramming. Partitioned memory is shown in Figure 5.3: there are six partitions in total, one assumed to be occupied by the resident portion of the operating system, three occupied by processes Pi, Pj and Pk, and the remaining two free. To activate a process, the operating system reads its image from disk into a free partition large enough to hold it.


[Figure 5.3 Static Partition Memory: partition boundaries at 0K, 100K, 400K, 500K, 750K, 900K and 1000K; the operating system occupies 0K-100K, processes Pi, Pj and Pk occupy three of the partitions, and two partitions are free]

After becoming resident in memory, the newly loaded process enters the ready state and becomes eligible for execution. Having defined the partitions, the operating system needs to keep track of the status of each partition, free or in use, for allocation purposes. Current partition status and attributes are collected in a data structure called the Partition Description Table (PDT), shown below; each entry records the partition's starting address (base), its size and its status. When partitioning is static, only the status field of each entry varies in the course of system operation; all other fields contain the values defined at partition definition time.

Partition No.   Partition Base   Partition Size   Partition Status
0               0K               100K             Allocated
1               100K             300K             Free
2               400K             100K             Allocated
3               500K             250K             Allocated
4               750K             150K             Allocated
5               900K             100K             Free



Figure 5.4 Partition Description Table

Here partitions 1 and 5 are assumed to be available for allocation, while partitions 0, 2, 3 and 4 are occupied. When a non-resident process is to be created or activated, the operating system attempts to allocate a free partition by consulting the entries of the PDT. If the search is successful, the status of the selected entry is marked as allocated and the process image is loaded into the free partition. Two problems arise here: (a) Partition allocation strategy: how should a specific partition be selected for a given process?

(b) No Suitable Partition: This arises when no suitable partition is available for allocation.

Consider first the case in which some free partitions exist. Allocation of a free partition can be made in different ways, of which first fit and best fit are the most common. In the first-fit approach, the process is allocated the first free partition large enough to accommodate it (the process size must be known to the operating system). In the best-fit approach, the operating system allocates the smallest free partition that meets the requirements of the process. Both approaches have to search the PDT to identify a suitable free partition. First fit is faster, but best fit may achieve higher utilization of memory by choosing the smallest adequate partition and so reducing internal fragmentation. A minimal sketch of both search strategies over the PDT is given below.
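The sketch below is our own illustration; the partition sizes loosely follow Figure 5.4, and an incoming request of 80 KB is chosen so that the two strategies pick different partitions.

#include <stdio.h>
struct partition { int base, size, free; };
static struct partition pdt[] = {
    { 0, 100, 0 }, { 100, 300, 1 }, { 400, 100, 0 },
    { 500, 250, 0 }, { 750, 150, 0 }, { 900, 100, 1 },
};
#define NPART (int)(sizeof pdt / sizeof pdt[0])
static int first_fit(int request)
{
    for (int i = 0; i < NPART; i++)
        if (pdt[i].free && pdt[i].size >= request) return i;   /* first large enough */
    return -1;
}
static int best_fit(int request)
{
    int best = -1;
    for (int i = 0; i < NPART; i++)
        if (pdt[i].free && pdt[i].size >= request &&
            (best == -1 || pdt[i].size < pdt[best].size))
            best = i;                                           /* smallest adequate */
    return best;
}
int main(void)
{
    int request = 80;   /* incoming process of 80 KB */
    printf("first fit -> partition %d, best fit -> partition %d\n",
           first_fit(request), best_fit(request));
    return 0;           /* first fit picks partition 1 (300 KB), best fit picks 5 (100 KB) */
}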


A request to allocate a partition may come from one of two sources: A. Creation of a new process. B. Reactivation of a swapped-out process. Three cases can arise at the time of allocation: 1. No partition is large enough to accommodate the incoming process. If the process to be created is too large to fit into any partition, the operating system produces an error message. This is a configuration error, which can be removed by redefining the partitions or by having the programmer modify the coding of the process. 2. All partitions are allocated. When no partition is free, the operating system defers the loading of the incoming process until a suitable partition becomes available; the handling of this situation is discussed under swapping.

3. Some partitions are free, but none of the free partitions is large enough to accommodate the incoming process. Both deferring and swapping are applicable to this third case as well. Question 5.4 What is swapping? Explain the role of the medium-term scheduler in swapping. Solution Removing suspended processes from memory and later bringing them back is called swapping. It is generally used in multiprogramming systems, and it is also useful for improving processor utilization in a partitioned memory


environment, because it increases the ratio of ready to resident processes. The concept of swapping exists with contiguous allocation, with fixed and dynamically partitioned memory, and with segmented memory management. When the scheduler decides to admit a new process for which no free partition is available, the swapper may be invoked to vacate such a partition. The swapper is an operating system process with the following responsibilities:

· Selection of process to swap out. · Selection of process to swap in. · Allocation and management of swap space.

The swapper performs most of these functions in conjunction with the medium-term scheduler. The choice of a process to swap in is generally based on the time it has spent in secondary storage, its priority and the satisfaction of a minimum swapped-out disk residence time. Swapping requires some provisions and considerations involving the file system, specific operating system services and relocation. A process is prepared for execution and submitted to the operating system in the form of a file that contains the program and its data; the file may also contain attributes such as priority and memory requirements, and is also called a process image. After a process has run, the modifiable portion of its state consists of its data and stack locations together with the processor registers. There are two basic options for the placement of swap files, but in either case swapping space is reserved for each swappable process. System-wide swap file In the system-wide swap file approach, a single large file is created to handle the swapping requirements of all processes. The swap file is placed on a fast secondary storage device. The static address and


size of the swap file also make direct addressing of the swap area on the disk straightforward. The size of the swap file affects the number of active processes in the system, because a newly swappable process can be activated only when sufficient swap space can be reserved for it; otherwise a run-time error results. Dedicated per-process swap files The other option is to have a dedicated swap file for each swappable process in the system. These swap files may be created either dynamically at process creation time or statically at program preparation time. The advantage of this scheme is the removal of the swap-file dimensioning problem and of system overflow errors at run time. In either scheme, swapping is a very lengthy operation relative to the execution of processor instructions.

Question 5.5 What are the differences between static and dynamic relocation under swapping? What is the difference between a virtual and a physical address? Solution Relocation refers to the ability to load and execute a given program at an arbitrary place in memory, so different load addresses may be assigned during different executions of a single relocatable program. One should understand the difference between the virtual addresses used by a program and the physical addresses where the program and its data are actually stored in memory. · Virtual addresses are the identifiers used to reference information within a program's address space.


· Physical addresses are the physical memory locations where information items are actually stored at run time. There are two basic types of relocation under swapping: Static Relocation In this case, relocation is performed before or during the loading of the program into memory. A language translator prepares the object module assuming virtual address 0 as the starting address of the program; when object modules are combined by the linker and the program is loaded, all program locations that need relocation are adjusted by the starting physical address allocated to the program. If the relocation information is lost once the program is in memory, the relocatable program cannot simply be copied from one area of memory into another.

Dynamic Relocation This implies that the mapping from the virtual address space to the physical address space is performed at run time. Process images in systems with dynamic relocation are prepared assuming a starting virtual address of 0, and they are loaded into memory without any relocation adjustments; while the related process is executing, all of its memory references are relocated during instruction execution. This is implemented with the help of base registers: after allocating a suitable partition and loading a process image into memory, the operating system sets a base register to the starting physical load address, a value obtained from the Partition Description Table (PDT). Dynamic relocation is shown in the following figure:


[Figure 5.5 Dynamic Relocation in Swapping: the CPU issues virtual address 1000 for the instruction Move A, 1000; the base register holds 100000, the hardware adds the two values, and the reference goes to physical address 101000 in memory]

In this figure the initial virtual address of the process image is 0, and it is assumed that address 100000 is allocated as the starting address for loading the process image, so the base register holds 100000. The instruction Move A, 1000 loads the contents of virtual address 1000 into the accumulator; the target item actually resides at physical address 101000 in memory, an address produced by the hardware by adding the contents of the base register to the virtual address. Relocation is performed by the hardware and is invisible to the programmer. This approach makes a clear distinction between the virtual and the physical address space, and dynamic relocation makes it possible to move a partially executed process from one area of memory into another without affecting its ability to access instructions and data correctly in the new space. A minimal sketch of this base-register translation, together with a limit check, is given below.
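The sketch below is our own illustration; the base value mirrors the numbers in Figure 5.5, and the limit of 10000 is an assumed process size.

#include <stdio.h>
#define BASE  100000   /* physical load address of the process image */
#define LIMIT 10000    /* assumed size of the process's virtual address space */
static long translate(long virtual_addr)
{
    if (virtual_addr < 0 || virtual_addr >= LIMIT) {
        printf("protection trap: address %ld out of bounds\n", virtual_addr);
        return -1;
    }
    return BASE + virtual_addr;     /* relocation performed by the hardware adder */
}
int main(void)
{
    printf("virtual 1000 -> physical %ld\n", translate(1000));  /* prints 101000 */
    translate(20000);               /* rejected by the limit check */
    return 0;
}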



Question 5.6 Explain the protection and sharing techniques used with swapping. Solution Protection Not only must the operating system be protected from unauthorized tampering by user processes, but each user process must also be prevented from interfering with the others; otherwise a single error could easily corrupt either. For this reason, multiuser operation should be provided only in systems with adequate hardware support. To protect a given system, one must protect the partitions of memory. In general a limit register, also called a bound register, is used together with the base register for protection: its primary function is to detect references outside the program's address space, and it is set to the highest virtual address in the program.

[Figure 5.6 Protection in Swapping: each virtual address generated by the CPU is compared against the limit register; if it is within bounds it is added to the base register and forwarded to memory, otherwise a protection trap is raised]


As shown in Figure 5.6, before being forwarded to memory, each intended memory reference of an executing program is checked against the contents of the limit register; in this way any attempt to access a memory location outside the specified area is detected. This is how protection is achieved. Another approach to protection is to record the access rights in the memory itself. Sharing There are three basic approaches to sharing in systems with fixed partitioning of memory: · Entrust the shared objects to the operating system. · Maintain multiple copies, one per participating partition, of the shared objects. · Use shared memory partitions. The first is the easiest way to implement sharing: the shared objects are entrusted to the operating system, and no additional provision is necessary to support sharing, since all code brought under the operating system umbrella attains the same level of privilege. However, this approach cannot incorporate new shared objects dynamically; they must be included during system generation. If the shared objects cannot be entrusted to the operating system, sharing is quite difficult in systems with fixed partitioning of memory, since the memory partitions are fixed, disjoint and difficult to access by processes to which they do not belong. So one


Sharing There are three basic approaches to sharing in systems with fixed partitioning of memory. These are given below:
· Entrust the shared objects to the operating system.
· Maintain multiple copies, one per participating partition, of the shared objects.
· Use shared memory partitions.
The first approach is the easiest to implement: the shared objects are entrusted to the operating system, and no additional provision is necessary to support sharing. All the code brought under the operating system umbrella attains the same level of privilege. The drawback is that new objects cannot be incorporated dynamically; they must be included in the operating system at system generation time. If the operating system is not entrusted with the shared objects, sharing is quite difficult in systems with fixed partitioning of memory, since the memory partitions are fixed, disjoint and difficult to access by processes not belonging to the operating system. So one can say that static partitioning of memory is not very conducive to sharing.
The second approach to sharing is to keep a separate physical copy of the shared object in each participating partition. A single logical object is thus represented by multiple physical copies, and each process runs using its local copy of the shared object. Since updates are made to individual copies, it is a problem to keep all copies consistent; sharing of code is accomplished more simply by maintaining multiple physical copies of the shared object.
The third approach to sharing is to place the shared data in a dedicated common partition. This may be handled by means of base registers: separate sets of dedicated base-limit register pairs are needed for accessing the private and the shared memory spaces.
Question 5.7 What is the Dynamic partition memory allocation. Explain insertion and deletion techniques in dynamic partitioning.
Solution Internal fragmentation and the other problems occurring with static partitioning may be eliminated with the help of dynamic partitioning. Partitions are created to match the requirements of each particular process. Starting from the initial state of the system, partitions are created dynamically to fit the needs of each requesting process. When a process terminates or becomes swapped out, the memory manager returns the vacated space to the pool of free memory areas. With this type of partitioning, neither the size nor the number of dynamically allocated memory partitions is fixed at system generation time. At process loading time, the memory management module of the operating system creates a suitable partition for allocation to the process and places the process in a contiguous free area of memory; the size of the free area must be equal to or larger than the size of the process. The partition is created by entering its base, size and status


(allocated) & put into the PDT. A copy of this information is also recorded in the PCB. After loading the process image into a created partition, the process may be turned over to operating system for further processing which is done by short term scheduler. If no suitable free area can be allocated then operating system indicates the error message. When resident process terminates or becomes swapped out the operating system terminates the related partition, & the entry has been made in PDT. Here operating system needs to keep track on both partitions & memory. In the case of switching & swapping processes, it is important to know which partition belongs to a given process. Operating system has the information about the starting address & size of each free area of a memory. The operating system has also the updated information regarding the size & address. The following figure shows the dynamic memory allocation with PDT giving information about the starting address, size & status of process. Head 0k 0 0k 100k ALLOCATED

1 - - - 100k p

2 400k 100k ALLOCATED

3 500k 250k ALLOCATED 400k

4 750k 150k ALLOCATED 500k

5 - 750k

6 - 900k

7 -

O.S.

Free

Pi

Pj

Pk

Free

900k 300k

--------

100k

Page 92: Microsoft Word - OS

1000k

PDT Memory Free List

Figure 5.7 Dynamic Memory Allocation
Here PDT entries are used only for created (allocated) partitions; the free areas are described by the free list, and unused PDT entries are available for recording newly created partitions. In the figure, two free areas are recorded in the free list, whose nodes give the physical base address and size of each hole. The following figure shows the PDT and the free list after creation of a 120 KB partition for a new job Pz, carved out of the larger free area; both the PDT and the free-list entries are updated accordingly.

PDT
Entry  Base   Size   Status
0      0K     100K   ALLOCATED
1      100K   120K   ALLOCATED
2      400K   100K   ALLOCATED
3      500K   250K   ALLOCATED
4      750K   150K   ALLOCATED
5      -      -      -
6      -      -      -
7      -      -      -

Memory (0K-1000K): O.S. (0K-100K), Pz (100K-220K), Free (220K-400K), Pi (400K-500K), Pj (500K-750K), Pk (750K-900K), Free (900K-1000K)

Free List: Head -> [base 220K, size 180K] -> [base 900K, size 100K]

Figure 5.8 Modified Allocation of Memory after Insertion of a Process


The following figure shows the data structures after termination of the third partition, Pj, which frees 250 KB:

PDT
Entry  Base   Size   Status
0      0K     100K   ALLOCATED
1      100K   120K   ALLOCATED
2      400K   100K   ALLOCATED
3      -      -      -
4      750K   150K   ALLOCATED
5      -      -      -
6      -      -      -
7      -      -      -

Memory (0K-1000K): O.S. (0K-100K), Pz (100K-220K), Free (220K-400K), Pi (400K-500K), Free (500K-750K), Pk (750K-900K), Free (900K-1000K)

Free List: Head -> [base 220K, size 180K] -> [base 500K, size 250K] -> [base 900K, size 100K]

Figure 5.9 Modified Allocation of Memory after Deletion of a Process


Question 5.8 Write down the algorithm to create a partition under dynamic memory allocation. What are the different approaches to create a partition.
Solution According to the following algorithm, a partition P of size P_SIZE can be created:
(a) Search the free list for a free area F with F_SIZE >= P_SIZE; if none is found, the algorithm terminates with an error indication.
(b) Calculate Difference = F_SIZE - P_SIZE. If Difference <= c (a very small constant), allocate the entire free area for creation of partition P by setting
P_SIZE = F_SIZE and P_BASE = F_BASE
else allocate space for partition P from the front of block F by setting
P_BASE = F_BASE, F_SIZE = F_SIZE - P_SIZE, F_BASE = P_BASE + P_SIZE
(c) Find an unused entry in the PDT, record the base (P_BASE) and size (P_SIZE), and set the status to allocated.
(d) Record the PDT entry number in the PCB of the process T for which the partition P is being created.


The free area F in step (a) can be selected by the First Fit algorithm or by one of its three variants: Next Fit, Best Fit and Worst Fit.
Next Fit Next fit is a modified form of first fit. The pointer into the free list is saved after an allocation and is used to begin the search for the subsequent allocation, so each new search continues from where the last one left off instead of starting from the beginning of the free list as in first fit. Next fit has not been found to be superior to first fit in reducing the amount of wasted memory.
Best Fit A first-fit search is faster because it terminates as soon as a sufficiently large free block is found, whereas best fit searches the entire free list to find the smallest free block that can satisfy the request. Best fit is therefore slower, but its utilization of memory is somewhat better than that of first fit.
Worst Fit Worst fit is the antipode of best fit. It allocates the largest free block, provided the block size exceeds the requested partition size. The idea behind worst fit is to reduce the rate of production of small holes. Worst-fit allocation is, however, not very effective in reducing wasted memory.
Question 5.10 How can you improve the dynamic memory allocation performance. Explain the concept of compaction with protection and sharing under it.
Solution Compaction is an important part of dynamic memory allocation. The creation of small free areas between partitions is called external fragmentation, and it is typical of dynamic allocation of variable-size partitions. When memory becomes seriously fragmented, the only way to cure it is to relocate some or all partitions to one end of memory and to combine the holes into one large free area. The affected processes must be suspended and copied from one area of memory to another. This is called memory compaction.


The technique of compaction is shown in the following figure:
[Figure: the memory layout before and after compaction, with partition boundaries at 0K, 100K, 220K, 370K, 750K, 900K and 1000K. Under the selective move only one partition (Pk) is copied, leaving a single 530 KB contiguous free hole; under the global move all partitions are relocated to one end of memory.]

Figure 5.10 Compaction Technique

The first layout indicates the selective move, in which a 530 KB contiguous free hole is created and only the PDT entries of the moved partition are changed. The global compacting approach relocates all partitions to one end of memory, as shown in the second layout. The number of locations copied from one place to another is 150 KB. Two memory references, one read and one write, are necessary to move each word, so with a 0.5 microsecond memory cycle time the movement of 150 K words takes only
150 x 10^3 x 2 x 0.5 x 10^-6 = 0.15 seconds


Protection and Sharing in Compaction There is no significant difference between protection and sharing under dynamic partitioning of memory and under static partitioning. One difference is that dynamic partitioning allows partitions in physical memory to overlap, as shown below.
[Figure: two partitions, A (base 4000, size 2000) and B (base 5500, size 2500), overlap in physical memory; the region between 5500 and 6000 is the shared area.]

Figure 5.11 Sharing in Compaction

A single physical copy of a shared object may thus be accessible from two distinct address spaces: partitions A and B overlap. Sharing of code is generally more restrictive than sharing of data; shared code must either be reentrant or be executed under mutual exclusion. Let us consider an example in which a subroutine SUB is shared by two processes A and B whose partitions overlap in physical memory.
[Figure 5.12(a): process A runs with base register 4000; its address space is 2000 locations, with SUB starting at virtual address 1500. The instruction CALL SUB at virtual address 100 (CALL 1500) is relocated to physical address 5500, and the target of the instruction JMP $+50 at virtual address 1550 within SUB is relocated to physical address 5600.]

Figure 5.12(a) Accessing Code for Process A

[Figure 5.12(b): process B runs with base register 5500; its address space is 2500 locations, with SUB starting at virtual address 0. The instruction CALL SUB at virtual address 1800 (CALL 0) is relocated to physical address 5500, and the target of the JMP $+50 instruction at virtual address 50 is relocated to physical address 5600.]

Figure 5.12(b) Accessing Code for Process B
Consider that the system uses dynamic relocation and dynamic memory allocation. The sizes of the address spaces of the two processes are 2000 and 2500 locations, respectively. The shared subroutine SUB occupies 500 locations and is placed in locations 5500 to 5999 of physical memory. The subroutine starts at virtual addresses 1500 and 0 in the address spaces of A and B, respectively.


The figures above also show the references to SUB within the two processes. CALL SUB at virtual address 100 of process A is mapped to the physical address 5500 at run time by adding the contents of A's base register. Similarly, CALL SUB at virtual address 1800 in B is mapped to 5500 at run time by adding the contents of B's base register. The example thus shows proper referencing. Here $ denotes the address of the JMP instruction itself; at run time both JMP $+50 references map to the same physical address, 5600.

Question 5.12 What is Segmentation. Explain the address translation in the segmentation. Briefly write the concept of segmentation description table.
Solution External fragmentation has a negative impact in the form of wasted memory, but its effect is smaller when the average size of an allocation request is smaller. The operating system cannot reduce the size of a process, but the memory required by a process can be requested in smaller pieces: the process is divided into blocks that may be placed into noncontiguous areas of memory. This technique is called segmentation. Different segments are formed at program translation time by grouping together logically related items. A typical process may have separate code, data and stack segments, and processes may share data or code by placing them in dedicated segments. Different segments may be placed in separate, noncontiguous areas of physical memory, but a single segment must be placed in a contiguous area of physical memory; thus segmentation has properties of both contiguous and noncontiguous allocation. The technique suits programs built from logically related entities, such as subroutines and local or global data areas.


Consider the example of a program consisting of the four segments DATA, STACK, CODE and SHARED, which are given below:
DATA SEGMENT
    datum x   dw xx
    datum y   dw yy
DATA ENDS
STACK SEGMENT
    ds 500
STACK ENDS
CODE SEGMENT
    psub  -------------
          -------------
    main  -------------
          -------------
CODE ENDS
SHARED SEGMENT
    ssub1 ----------
          ----------
    ssub2 ----------
          ----------
SHARED ENDS


Figure 5.12 Segmentation Technique
Except for SHARED, the name of each segment is chosen to indicate the type of information it contains. The STACK segment is assumed to consist of 500 locations. The SHARED segment contains two subroutines, ssub1 and ssub2, and the CODE and SHARED segments contain executable instructions. The segment map of the program, together with a simple linker-produced load module, is shown below:

Segment Map
Segment no.  Size  Type
0            d     data
1            500   stack
2            c     code
3            s     code

Figure 5.13 Segment Map with Load Module
The subroutine ssub2 in segment SHARED is assumed to start at offset 100. While offset 100 fetches the first instruction of subroutine ssub2 within the segment SHARED, the same relative offset may designate an entirely unrelated datum in the DATA segment. So addresses in segmented systems have two components, which are given below:


(a) Segment name/number
(b) Offset within the segment
To simplify processing, segment names are usually mapped to segment numbers. This mapping is static and is performed by system programs in the course of preparation of the process image. A simple linker load module for the segmented program is also shown in Figure 5.13 above. If the SHARED segment is assigned number 3, the subroutine ssub2 may be uniquely identified by its virtual address (3, 100), where 100 is the offset within segment number 3 (SHARED).
Address Translation in Segmentation Physical memory in segmented systems is organized as a linear array of locations, so the two-dimensional virtual (segment, offset) addresses must be converted into one-dimensional physical addresses. In segmented systems the items belonging to a single segment reside in one contiguous area of physical memory, and within each segment virtual addresses start at 0. When a segmented process requests memory, the operating system attempts to allocate memory for each of its segments, using dynamic partitioning to create a separate partition for each particular segment. The base and size of a loaded segment are recorded as a tuple called the Segment Descriptor, and all segment descriptors of a given process are collected in a table called the Segment Descriptor Table (SDT). The following figure shows address translation in a segmented system:


Figure 5.14 Address Translation in Segmentation
The figure shows the placement of the segments in physical memory and the resulting SDT formed by the operating system; each segment descriptor defines the physical base address and size of one segment. The segment number provided in the virtual address is used to index the segment descriptor table and obtain the physical base address of the related segment, and the physical address is formed by adding the offset of the desired item to that base. For the virtual address (3, 100), segment number 3 is used to index the SDT and obtain the physical base address 20000. If the offset is within the bounds of the segment, base and offset are added to produce the target physical address, which is 20100. The size of a segment descriptor table is related to the size of the virtual address space of a process. The SDT of the running process is located by a dedicated hardware register called the segment descriptor table base register (SDTBR). Since the size of the SDT may vary, another dedicated hardware register, the segment descriptor table limit register (SDTLR), marks the end of the SDT pointed to by the SDTBR.
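The SDT lookup just described can be sketched in C as follows (only segment 3 uses the base from the example above; the other bases and all the sizes are illustrative):

/* A sketch of address translation in a segmented system: the segment
   number indexes the SDT, the offset is checked against the segment
   size, and the segment base is added to form the physical address. */
#include <stdio.h>

struct segment_descriptor { unsigned long base, size; };

struct segment_descriptor sdt[4] = {
    { 30000, 1000 },   /* segment 0: DATA   (illustrative) */
    { 10000,  500 },   /* segment 1: STACK  (500 locations) */
    { 40000, 2000 },   /* segment 2: CODE   (illustrative) */
    { 20000, 1000 }    /* segment 3: SHARED, base 20000     */
};

long translate(unsigned seg, unsigned long offset)
{
    if (seg >= 4 || offset >= sdt[seg].size)
        return -1;                        /* out of bounds: protection trap */
    return (long)(sdt[seg].base + offset);
}

int main(void)
{
    printf("%ld\n", translate(3, 100));   /* virtual address (3,100) -> 20100 */
    return 0;
}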


Question 5.13 Explain the role of Segment Descriptor Caching in the Segmentation.
Solution The performance of segmented systems depends on the duration of the address translation process. Most memory references can be mapped with the aid of registers; the rest must use the SDT in memory. This scheme depends on the operating system's ability to select the proper segment descriptors for storing into registers. Memory references may be categorized as accesses to:
1. Instructions
2. Data
3. Stack
Keeping the current code, data and stack segment descriptors in registers may therefore accelerate address translation: depending on its type, a particular memory reference is mapped using the appropriate register. The CPU status lines are used to select the appropriate segment descriptor register (SDR). The size field of the selected segment descriptor is used to check whether the intended reference is within the bounds of the target segment; if so, the base field is added to the offset to produce the physical address. The SDRs are initially loaded from the SDT, and whenever the running process makes an intersegment reference, the corresponding segment descriptor is loaded into the appropriate register from the SDT.
Question 5.14 Explain the protection and sharing under segmentation.
Solution Protection Protection is necessary in segmented systems. The legal address space is the collection of segments defined by the SDT. Except for shared segments, separation of distinct address spaces is enforced by placing different segments in disjoint areas of memory. The protection discussed for dynamic partitions is also applicable to segmented systems. Stack segments require both reading and writing, whereas


code segments may be executed but not written (read-only or execute-only access), and data segments can be read-only, write-only or read-write. So the access rights to different portions of a single address space may vary according to the type of stored information; access-right bits are therefore included in the segment descriptors.
Sharing Sharing is an important strength of segmentation. Shared objects, code or data, are placed in separate dedicated segments, and a shared segment is mapped, via an appropriate descriptor, into the segment descriptor table of each process authorized to use it. Let us consider a code segment VS that is assumed to be shared by three processes P1, P2 and P3.
[Figure: the segment descriptor tables SDT1, SDT2 and SDT3 of processes P1, P2 and P3, each entry holding base, size and access rights; all three contain a descriptor for the shared code segment VS (execute-only in SDT1 and SDT2, read-write in SDT3), while the private data segments DATA1, DATA2 and DATA3 reside in separate areas of memory.]


Figure 5.15 Sharing in Segmentation
In Figure 5.15, the segment descriptor tables SDT1, SDT2 and SDT3 of the three processes are shown. The segment VS may have different virtual (segment) numbers in the three address spaces, and the different processes have different access rights to it: processes P1 and P2 can only execute the shared segment VS, while process P3 is allowed both reading and writing. Each participating process can execute the shared code from VS using its own private data segment. The VS segment might, for example, be an editor, where a single copy serves all the users of a time-sharing system: users 1, 2 and 3 can have their respective text buffers stored in the segments DATA1, DATA2 and DATA3, while the code segment descriptor register points to VS in all cases. The current instruction to be executed by a particular process is indicated by its program counter, which is saved and restored as part of each process's state; these per-process counters are what allow several processes to share the code while each is at a different point of execution. Sharing is thus encouraged in segmented systems, and it helps to increase memory utilization.



Question 5.15 Define noncontiguous allocation.
Solution Noncontiguous allocation means that memory is allocated in such a way that parts of a single logical object may be placed in noncontiguous areas of physical memory. Address translation is performed during the execution of instructions.
Question 5.16 Explain the concepts of paging. Explain the address translation under paging technique. Describe the working of Page Map Table.
Solution Paging is a technique that removes the requirement of contiguous allocation of physical memory. Physical memory is conceptually divided into a number of fixed-size slots called page frames. The virtual address space of a process is likewise split into fixed-size blocks of the same size, called pages. Allocation of memory consists of finding a sufficient number of unused page frames for loading of the requesting process's pages. An address translation scheme is used to map virtual pages to their physical counterparts, the page frames; each page is mapped separately. The basic principle is shown in Figure 5.16.


[Figure: the virtual address space of the sample process consists of four pages, numbered 0 to 3; the Page Map Table maps page 0 to frame FFD, page 1 to frame 100, page 2 to frame 103 and page 3 to frame FFF, and the instruction LDA 003200 is translated to the physical address FFF200 in memory.]

Figure 5.16 Address Translation of Paging
Consider a 16 MB sample system in which virtual and physical addresses are assumed to be 24 bits long each, and let the page size be 4096 bytes, so that physical memory can accommodate 4096 page frames of 4096 bytes each. 1 MB of physical memory is set aside for the resident operating system, and the remaining 15 MB are available for allocation to user processes. Since 1 MB consists of 256 page frames, the 15 MB comprise 15 x 256 = 3840 page frames, which are available


for allocation to processes. Each page is 1000 H bytes long, and the first user-allocatable page frame starts at physical address 100000 H. The virtual address space of the sample user process is 14,848 bytes long, and it is divided into four virtual pages, numbered 0 to 3. The mapping from virtual to physical addresses in a paging system is performed at the page level. Each virtual address is divided into two parts: the page number and the offset within that page. Since pages and page frames have identical sizes, offsets within each are identical and need not be mapped. In this example each 24-bit virtual address may be divided into a 12-bit page number and a 12-bit offset within the page, which is sufficient to uniquely identify each byte within a page (4096 bytes). In paging, address translation is performed by means of the Page Map Table (PMT), which is created at process loading time to record the correspondence between virtual and physical addresses. As shown in the figure, there is one entry in the PMT for each virtual page of the process. Since the offsets are not mapped, only the physical base address, i.e. the page frame number, needs to be stored in a PMT entry. If virtual page 0 is placed in the physical page frame whose starting address is FFD000 H, with each frame being 1000 H bytes long, the corresponding page frame number is FFD H; this value is stored in the first entry of the PMT. All other PMT entries are filled with the page frame numbers of the frames where the corresponding pages are loaded. When the virtual address 03200 H is issued, it is split into the page number (003 H) and the offset within the page (200 H). The page number is used to index the PMT and obtain the corresponding physical frame number, FFF H. This value is finally concatenated with the offset to produce the physical address FFF200 H, which is used to reference the target item in memory.
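A minimal C sketch of this translation, using the page size of the example (the frame numbers for pages 1 and 2 are taken from the figure):

/* A sketch of paged address translation: the 24-bit virtual address is
   split into a 12-bit page number and a 12-bit offset; the PMT supplies
   the frame number, and the offset is passed through unchanged. */
#include <stdio.h>

#define PAGE_SHIFT  12                 /* 4096-byte (1000 H) pages */
#define OFFSET_MASK 0xFFFu

/* PMT of the sample process: pages 0-3 -> frames FFD, 100, 103, FFF. */
unsigned pmt[4] = { 0xFFD, 0x100, 0x103, 0xFFF };

unsigned long translate(unsigned long vaddr)
{
    unsigned page   = vaddr >> PAGE_SHIFT;     /* upper 12 bits */
    unsigned offset = vaddr & OFFSET_MASK;     /* lower 12 bits, not mapped */
    return ((unsigned long)pmt[page] << PAGE_SHIFT) | offset;
}

int main(void)
{
    printf("%lX\n", translate(0x03200));       /* prints FFF200 */
    return 0;
}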


From the above discussion we can say that in paging systems memory is allocated in fixed-size partitions called page frames. The page size is influenced by several factors; in most commercial implementations page sizes vary between 512 bytes and 8 KB.
Question 5.17 Explain the concepts of Memory Map Table in the paging.
Solution In paging, the operating system has to keep track of the status of every page frame. The table used for this purpose is called the Memory Map Table (MMT); it records, for each physical page frame, whether the frame is FREE or ALLOCATED. The MMT for the example considered in the previous question is shown below:

Frame  Status
000    ALLOCATED
 .         .
0FF    ALLOCATED
100    ALLOCATED
101    FREE
102    FREE
103    ALLOCATED
 .         .
FFC    FREE
FFD    ALLOCATED
FFE    FREE
FFF    ALLOCATED

Figure 5.17 Memory Map Table
The MMT describes the status, FREE or ALLOCATED, of every page frame, so it has a fixed number of entries equal to the number of page frames in the system. The number of page frames is given by
f = m/p
where f is the number of page frames, m is the capacity of the installed physical memory and p is the page size. Since m and p are integer powers of 2, f is an integer. If a process of size s is to be loaded, the operating system must allocate n free page frames, where


n = ⌈s/p⌉
i.e. s/p rounded up to the next integer. If the size of a given process is not a multiple of the page size, the last page frame may be partly unused; this is known as page fragmentation or page breakage. An allocation of memory thus reduces to finding n free page frames; there is no notion of first fit or best fit. After selecting n free page frames the operating system loads the process pages into them and constructs the page map table of the process. So there is one MMT per system and as many PMTs as there are active processes.
Question 5.18 How the page allocation can be made under the technique of paging.
Solution The efficiency of memory allocation depends on how quickly free page frames can be found. If the n free page frames are randomly distributed in memory, the average number of MMT entries that must be examined is
x = n/q
where q is the probability that a given frame is free. If u is the percentage of unused memory, then q = u/100 (0 <= q <= 1). The number of MMT entries searched, x, is thus directly proportional to n: x = kn, where k = 1/q, so k >= 1.


For example, in a system where the unused memory is 50%, the probability that a given page frame is free is q = 0.5; to find 10 free frames we get x = 10/0.5 = 20, so on average 20 MMT entries must be examined. The number of entries examined grows with the amount of memory in use.
Question 5.19 What is Translation Lookaside Buffer. What are the advantages of this. Also explain the hardware support under paging.
Solution Let us first describe the hardware support for the paging technique. Hardware support for paging is related to storing the mapping tables and to the speed of mapping from virtual to physical addresses. Each PMT can be large enough to accommodate the maximum size allowed for the address space of a process. The extent of the PMT of the running process is defined by a dedicated hardware register, the Page Map Table Limit Register (PMTLR), which is set to the highest virtual page number defined in the PMT of the running process. The PMT itself is located through the Page Map Table Base Register (PMTBR), which points to the base address of the PMT of the running process. The values of these two registers are defined at process loading time and stored in the related PCB. Address translation then requires two memory references: one to access the PMT for mapping and a second to reference the target item in physical memory. The idea of the Translation Lookaside Buffer (TLB) is to use a high-speed associative memory for storing a subset of frequently used page map table entries; it is also known as a mapping cache. The role of the cache in mapping is shown in Figure 5.18.


The TLB contains pairs of virtual page numbers and the corresponding page frame numbers where the related pages are stored in physical memory. The page number is necessary to identify each particular entry because the TLB contains only a subset of the PMT entries.
[Figure: the page-number portion of the virtual address is presented to the translation lookaside buffer; on a hit the page frame number is obtained directly and combined with the offset, while on a miss the PMT in memory must be consulted to complete the mapping.]

Figure 5.18 Working of Translation Lookaside Buffer
The figure shows that address translation begins by presenting the page-number portion of the virtual address to the TLB. If the desired entry is found, the corresponding page frame number is combined with the offset to produce the physical address. If not, the PMT in memory must be accessed to complete the mapping. This is done by first consulting the PMTLR to verify that the page number provided in the virtual address is within the bounds of the related process's address space; if so, the page number is added to the contents of the PMTBR to obtain the address of the corresponding PMT entry, where the physical frame number is stored. This value is then concatenated with the offset portion of the virtual address to obtain the physical memory address.
The effective memory access time teff in systems with run-time address translation is the sum of the address translation time tTR and the access time needed to fetch the target item from memory:
teff = tTR + tM
When a TLB is used to assist address translation, tTR becomes
tTR = h.tTLB + (1 - h)(tTLB + tM) = tTLB + (1 - h).tM
where h is the TLB hit ratio, i.e. the proportion of address translations satisfied by the TLB (0 <= h <= 1), tTLB is the TLB access time and tM is the main-memory access time. So
teff = tTLB + (1 - h).tM + tM = tTLB + (2 - h).tM

For example, if the TLB is 10 times as fast as main memory (tTLB = 0.1 tM) and the hit ratio is 90%, then teff = 0.1 tM + (2 - 0.9) tM = 1.2 tM.
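A small C sketch that evaluates this formula for a few hit ratios (the access times are illustrative values in nanoseconds):

/* Evaluates teff = tTLB + (2 - h) * tM for several TLB hit ratios. */
#include <stdio.h>

int main(void)
{
    double tM   = 100.0;            /* main-memory access time, ns (illustrative) */
    double tTLB = 10.0;             /* TLB access time, ns (one tenth of tM) */
    double hits[] = { 0.80, 0.90, 0.98 };

    for (int i = 0; i < 3; i++) {
        double h    = hits[i];
        double teff = tTLB + (2.0 - h) * tM;
        printf("h = %.2f   teff = %.1f ns (%.2f tM)\n", h, teff, teff / tM);
    }
    return 0;                       /* for h = 0.90 this prints 120.0 ns = 1.20 tM */
}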

From the above, we can see the effectiveness of the TLB.
Question 5.20 Explain the protection and sharing under paging technique.
Solution In the paging technique, distinct address spaces are placed in disjoint areas of physical memory. The PMT limit register is used to detect and abort attempts to access memory beyond the bounds of a process's address space, and modification of the PMTBR and PMTLR is possible only by means of privileged instructions. Access bits in the PMT entries may allow read-only, execute-only or other restricted forms of access. Paging is entirely transparent to the programmer. Protection in a paging system may also be provided by protection keys; in that case the page size should correspond exactly to the size of the memory block protected by a single key, which still allows the pages belonging to a single process to be scattered throughout memory.


By including access-right bits along with the protection keys, access to a given page may be restricted when necessary. A single physical copy of a shared page can therefore be mapped into as many distinct address spaces as needed. This mapping is performed through dedicated entries at the page level and must be recognized and supported by system programs. Like shared data, shared code must have the same within-page offsets in all address spaces. Unlike segmentation, paging is managed entirely by the operating system. There is no need for compaction in paging, and allocation and deallocation of memory in paged systems is very simple. Utilization of physical memory is high with paging, especially when the page size is small and scheduling can make optimal use of memory. The memory-management overhead of paging consists of the following:
1. A per-process Page Map Table.
2. A system-wide Memory Map Table (MMT).
3. Fragmentation (page fragmentation), which on average equals one half of a page per resident process.
Question 5.21 What is virtual memory. Explain the address translation scheme under virtual memory.
Solution Virtual memory is a memory management scheme in which only a portion of the virtual address space of a resident process may actually be loaded into physical memory. The sum of the virtual address spaces of the active processes in a virtual memory system can therefore exceed the capacity of the available physical memory. The details of virtual memory management are generally transparent to programmers. The speed of program execution in a virtual memory system is bounded above by the


execution speed of the same program run under non-virtual memory management. This is due to the extra delays caused by fetching missing portions of the program's address space at run time: the execution speed in a virtual memory system can at best equal, but never exceed, the execution speed of the same program with virtual memory turned off. Virtual memory can be implemented as an extension of paged or segmented memory management, or as a combination of both; accordingly, address translation is done by page map tables, segment descriptor tables, or both. The term real memory is used to denote physical memory. Consider the virtual address space V = {0, 1, ..., v-1} and the physical memory space M = {0, 1, ..., m-1}. In many large systems the virtual address space is larger than the physical memory (V > M), but the reverse (V < M) can be found in some older mini- and microcomputer systems. The operating system dynamically allocates real memory to portions of the virtual address space, and the address translation mechanism must be able to associate virtual names with physical locations. At any time the mapping hardware realizes the function F: V -> M, where
F(x) = r, if item x is in real memory at location r
F(x) = missing-item exception, if item x is not in real memory
The address translation hardware of a virtual memory system must thus detect whether the target item is in real memory or not. The type of missing item depends on the basic memory management scheme, and may be a segment or a page. To describe the operation of virtual memory, consider the following paged example.


[Figure: the Page Map Table of a six-page process, each entry holding a presence bit and a frame number — page 0 is IN (frame y), pages 1 and 2 are OUT, page 3 is IN (frame x), page 4 is IN (frame z) and page 5 is OUT; main memory holds pages P(0), P(3) and P(4), while the parallel File Map Table gives the secondary-memory (disk) addresses of all six pages P(0)-P(5).]

Figure 5.19 Operation of Virtual Memory

The detection of a missing item is done by adding a presence indicator, a single bit, to each entry of the Page Map Table. The presence bit, when set, indicates that the corresponding page is in memory; when it is cleared, the corresponding virtual page is not in real memory. Before loading the process, the operating system clears all presence bits in the related Page Map Table. When specific pages are brought into memory, the corresponding presence bits are set; when a page is evicted from main memory, its presence bit is reset. This is shown in the Page Map Table of the figure above. The virtual address space is assumed to consist of only six pages, and the complete process image is present in secondary memory. The Page Map Table contains an entry for each virtual page of the related process. For each page present in real memory the presence bit is set (IN) and the PMT entry points to the physical frame that contains the corresponding page; if the presence bit is cleared (OUT), the PMT entry is invalid and the corresponding page is not in real memory. Address translation checks the presence bit during the mapping of each memory reference. If the bit is set, the mapping is completed as usual; if the presence bit in the PMT is reset, the mapping cannot be completed. In paged virtual memory systems this exception is called a page fault. When it happens, the faulting process must be suspended until the missing page is brought into main memory.
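The presence-bit check can be sketched in C as follows (the frame numbers are illustrative; only the IN/OUT pattern follows the six-page example above):

/* A sketch of mapping with a presence bit: a cleared bit means the page
   is not in real memory, so a page fault must be raised. */
#include <stdio.h>

struct pmt_entry { int present; unsigned frame; };

/* PMT of the six-page example: pages 0, 3 and 4 are in memory. */
struct pmt_entry pmt[6] = {
    {1, 7}, {0, 0}, {0, 0}, {1, 2}, {1, 5}, {0, 0}
};

#define PAGE_SIZE 4096

long translate(unsigned page, unsigned offset)
{
    if (page >= 6 || !pmt[page].present)
        return -1;                        /* page fault: suspend the process,
                                             fetch the page via the FMT, retry */
    return (long)pmt[page].frame * PAGE_SIZE + offset;
}

int main(void)
{
    printf("%ld\n", translate(3, 100));   /* page 3 is present: mapping completes */
    printf("%ld\n", translate(1, 100));   /* -1: page fault on page 1 */
    return 0;
}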


The disk address of the faulted page is provided by the File Map Table (FMT). This table is parallel to the Page Map Table. When processing a page fault, the operating system uses the virtual page number provided by the mapping hardware to index the File Map Table and obtain the related disk address. The format of the FMT is shown in the figure above.
Question 5.22 What is the management of the virtual memory.
Solution Assume that paging is the underlying memory management scheme. The implementation of virtual memory then requires maintenance of one Page Map Table (PMT) per process and, since the virtual address space of a process may exceed the capacity of real memory, the operating system also has to maintain the system-wide Memory Map Table (MMT). The memory manager uses the FMT to load missing items into main memory; one FMT is maintained for each active process, and its base may be kept in the process control block. The FMT has a number of entries identical to that of the related PMT. A pair of registers, the Page Map Table base and page map length registers, may be provided in hardware to assist the address translation process and to reduce the size of the PMT for smaller processes. The allocation of real page frames to the virtual address space of a process requires the following policies in virtual memory:
1. Allocation policy How much real memory to allocate to each active process.
2. Fetch policy Which items to bring, and when to bring them, from secondary storage into main memory.
3. Replacement policy When a new item is to be brought in and there is no free real memory, which item to evict in order to make room for the new one.


4. Placement policy Where to place an incoming item.
Question 5.23 What are the replacement policies. Explain the following replacement policies for the paging: (a) First In First Out, or what is Belady's anomaly, (b) Least Recently Used, (c) Optimal Resource Use.
Solution A process that has referenced a missing item cannot continue to execute until the missing item is brought into memory. When the memory manager has no unused page frames in physical memory to allocate to the incoming item, there are two options for handling the situation:
1. The faulting process may be suspended until some memory becomes available.
2. A resident page may be evicted to make room for the incoming one.
The first option is rarely used, because missing items are really faults of the virtual memory manager rather than of the process, and suspending the process has an adverse effect on scheduling and turnaround times. Moreover, with all faulted processes holding on to their already allocated real memory, free page frames are not likely to be produced very fast. So eviction is commonly used to free the memory needed to load the missing item, and a replacement policy governs the choice of the page to evict. The important replacement algorithms are the following:
First In First Out (FIFO Type) This algorithm replaces the resident page that has spent the longest time in memory. Whenever a page is to be evicted, the oldest page


is identified and removed from main memory. The memory manager has to keep track of the relative order of loading of pages into main memory; to implement this, a FIFO queue of resident pages is maintained. This is shown below. Consider a running process that has been allocated three real page frames. The memory reference string produced by the program, in hexadecimal notation, is
14489, 1448B, 14494, 14496, ……, 14499, 2638E, 1449A, ……
The referenced pages are obtained by omitting the two least significant digits of each address, i.e.
……, 144, 144, 144, 144, A1, 144, 144, 263, 144, ……
which can be compressed, by dropping immediate repetitions, to
….., 144, A1, 144, 263, 144, ………
The behaviour of FIFO replacement on this reference string is shown below:

Reference String:  144  A1 144 263 144 168 144  A1 179  A1  A2 263

Page Frames
PF0:               144 144 144 144 144 168 168 168 179 179 179 179
PF1:                -   A1  A1  A1  A1  A1 144 144 144 144  A2  A2
PF2:                -   -   -  263 263 263 263  A1  A1  A1  A1 263

Page Faults
IN:                144  A1  -  263  -  168 144  A1 179  -   A2 263
OUT:                -   -   -   -   -  144  A1 263 168  -  144  A1

FIFO Queue
Front:             144  A1  A1 263 263 168 144  A1 179 179  A2 263
                    -  144 144  A1  A1 263 168 144  A1  A1 179  A2
Rear:               -   -   -  144 144  A1 263 168 144 144  A1 179

Figure 5.20 FIFO Replacement Algorithm Implementation
Starting with initially empty memory, the first two page references cause pages 144 and A1 to be brought into memory as a result of page faults. The third page reference is made to page 144, which is already in main memory, so its mapping does not result in a page fault. The fourth memory reference faults again, and page 263 is brought in. The first page replacement is made when page 168 is referenced: using the FIFO policy, the memory manager consults its FIFO queue in order to choose the victim. In the lower part of the figure the oldest page, 144, is at the rear of the FIFO queue, so page 144 is removed from memory, 168 is brought into the vacated page frame, and the FIFO queue is updated. Since the very next page reference is again to page 144, it must be brought back immediately after its removal. The same situation is encountered at the next page reference, when page A1 is evicted to make room for page 144, only to be brought back in immediately afterwards. In total this run produces nine page faults. FIFO has relatively poor performance, since it tends to throw away frequently used pages simply because they have stayed longest in memory. A second problem is that FIFO may actually increase the number of page faults when more real page frames are allocated to the program; this is known as Belady's anomaly. Therefore FIFO is not the first choice of operating systems for page replacement.
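A short C sketch that simulates FIFO replacement on the reference string above (the pages enter the frames in load order, so round-robin replacement of the frames is equivalent to the FIFO queue):

/* Simulates FIFO page replacement with three frames and counts faults. */
#include <stdio.h>
#include <string.h>

#define FRAMES 3

int main(void)
{
    const char *refs[] = { "144","A1","144","263","144","168",
                           "144","A1","179","A1","A2","263" };
    const char *frame[FRAMES] = { 0 };
    int next = 0, faults = 0;            /* next: position of the oldest page */

    for (int i = 0; i < 12; i++) {
        int hit = 0;
        for (int j = 0; j < FRAMES; j++)
            if (frame[j] && strcmp(frame[j], refs[i]) == 0) hit = 1;
        if (!hit) {
            frame[next] = refs[i];       /* evict the oldest resident page */
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);   /* prints 9, as in Figure 5.20 */
    return 0;
}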


Least Recently Used (LRU Type) This algorithm replaces the least recently used resident page. It generally performs better than FIFO because it takes the pattern of program behaviour into account: the page unused for the longest time is assumed to be the least likely to be referenced in the near future. In the example below, no page is replaced immediately before being referenced again. LRU belongs to the larger class of stack replacement algorithms, which do not suffer from Belady's anomaly. LRU can be implemented with a stack of resident page numbers: whenever a resident page is referenced, it is removed from its current stack position and placed at the top of the stack, and when a page eviction is in order, the page at the bottom of the stack is removed from memory. In this example, when a page fault occurs in the attempt to reference page 168, the most recently used page before it, 144, is at the top of the stack, and the least recently used page, A1, is found at the bottom of the stack and removed from memory. This is shown below:

Reference String:  144  A1 144 263 144 168 144  A1 179  A1  A2 263

Page Frames
PF0:               144 144 144 144 144 144 144 144 144 144  A2  A2
PF1:                -   A1  A1  A1  A1 168 168 168 179 179 179 263
PF2:                -   -   -  263 263 263 263  A1  A1  A1  A1  A1

Page Faults
IN:                144  A1  -  263  -  168  -   A1 179  -   A2 263
OUT:                -   -   -   -   -   A1  -  263 168  -  144 179

Reference Stack
Top:               144  A1 144 263 144 168 144  A1 179  A1  A2 263
                    -  144  A1 144 263 144 168 144  A1 179  A1  A2
Bottom:             -   -   -   A1  A1 263 263 168 144 144 179  A1

Figure 5.21 LRU Replacement Algorithm Implementation
Maintenance of the page-referencing stack requires updating it on every page reference, whether or not the reference results in a page fault. The overhead of searching the stack, moving the referenced page to the top and updating the rest of the stack accordingly must therefore be added to every memory reference. On this reference string LRU produces 8 page faults, fewer than the 9 of the FIFO approach.
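A short C sketch of LRU on the same reference string; a last-use timestamp per frame stands in for the reference stack (this is an equivalent, simpler bookkeeping, not the stack itself):

/* Simulates LRU page replacement with three frames and counts faults. */
#include <stdio.h>
#include <string.h>

#define FRAMES 3

int main(void)
{
    const char *refs[] = { "144","A1","144","263","144","168",
                           "144","A1","179","A1","A2","263" };
    const char *frame[FRAMES] = { 0 };
    int last_use[FRAMES] = { 0 };
    int faults = 0;

    for (int t = 1; t <= 12; t++) {
        const char *r = refs[t - 1];
        int hit = -1, victim = 0;
        for (int j = 0; j < FRAMES; j++) {
            if (frame[j] && strcmp(frame[j], r) == 0) hit = j;
            if (last_use[j] < last_use[victim]) victim = j;  /* least recently used */
        }
        if (hit >= 0) {
            last_use[hit] = t;           /* move the page to the "top of the stack" */
        } else {
            frame[victim] = r;           /* evict the LRU page */
            last_use[victim] = t;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);   /* prints 8, as in Figure 5.21 */
    return 0;
}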

Optimal Resource Technique (OPT Type) This algorithm removes the page that will be referenced in the most distant future. Since it requires future knowledge of the reference string, the OPT algorithm is not realizable; its significance is theoretical, as a bound against which realizable algorithms can be compared. The behaviour of OPT on the same reference string is shown below:

Reference String:  144  A1 144 263 144 168 144  A1 179  A1  A2 263

Page Frames
PF0:               144 144 144 144 144 144 144 144 144 144 144 144
PF1:                -   A1  A1  A1  A1  A1  A1  A1  A1  A1  A1 263
PF2:                -   -   -  263 263 168 168 168 179 179  A2  A2

Page Faults
IN:                144  A1  -  263  -  168  -   -  179  -   A2 263
OUT:                -   -   -   -   -  263  -   -  168  -  179  A1


In this approach the number of page faults is seven, which is lower than for both FIFO and LRU, and OPT does not suffer from Belady's anomaly.
Question 5.24 What is the page fault frequency. Explain the algorithm for counting page fault.
Solution The Page Fault Frequency (PFF) approach lets the allocation module define an upper and a lower page-fault-rate threshold for each process. The actual page fault rate of a running process is measured and recorded in its PCB. When a process exceeds the upper page fault frequency threshold, more real pages are allocated to it; as soon as the process responds by dropping to the lower page fault frequency threshold, allocation of new page frames to it is stopped. This approach to page replacement is called PFF. The PFF parameter P is defined as P = 1/T, where T is the critical inter-page-fault time; P is measured in page faults per millisecond. The PFF algorithm is given below:
1. The operating system defines a system-wide (or per-process) critical page fault frequency P.
2. The operating system measures virtual (process) time and stores the time of the most recent page fault in the related PCB.
3. When a page fault occurs, the operating system acts according to the following conditions:
(a) If the last page fault occurred less than T = 1/P ms ago, the process is operating above the PFF threshold, and a new page frame is allocated from the pool to house the needed page.


(b) Otherwise the process is operating below the threshold P, and a page frame occupied by a page whose referenced and modified (written-into) bits are not set is freed to accommodate the new page.
(c) The operating system periodically sweeps and resets the referenced bits of all resident pages. Pages found to be unreferenced, unmodified and not shared since the last sweep are released, and the freed page frames are returned to the pool for future allocations.
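A rough C sketch of the PFF decision taken at each page fault (the structure, the helper stubs and the virtual-time values are illustrative only):

/* A sketch of the page-fault-frequency (PFF) decision: if the previous
   fault happened less than T = 1/P ms ago, the process is faulting too
   often and receives an extra frame; otherwise an unreferenced,
   unmodified frame is reused. */
#include <stdio.h>

struct process {
    double last_fault_time;      /* virtual time of the most recent fault, ms */
    int    allocated_frames;
};

/* Stub helpers, standing in for the real memory manager. */
static void allocate_frame_from_pool(struct process *p)
{ p->allocated_frames++; printf("above threshold: frame added\n"); }

static void reuse_unreferenced_frame(struct process *p)
{ (void)p; printf("below threshold: reuse an unreferenced, unmodified frame\n"); }

/* PFF decision taken when a page fault occurs at virtual time `now`. */
void on_page_fault(struct process *p, double now, double P /* faults per ms */)
{
    double T = 1.0 / P;                          /* critical inter-fault time */
    if (now - p->last_fault_time < T)
        allocate_frame_from_pool(p);             /* faulting too frequently */
    else
        reuse_unreferenced_frame(p);
    p->last_fault_time = now;
}

int main(void)
{
    struct process p = { 0.0, 3 };
    on_page_fault(&p, 0.4, 1.0);   /* fault 0.4 ms after the last one, P = 1/ms */
    on_page_fault(&p, 2.5, 1.0);   /* next fault 2.1 ms later */
    return 0;
}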

Question 5.25 Explain the working of segmentation and paging both in the virtual memory system.
Solution Most of the examples so far have been based on paging, but it is also possible to implement virtual memory in the form of demand segmentation. Such an implementation inherits the benefits of the sharing and protection provided by segmentation, while paging is more convenient for the management of main and secondary memory. Both segmented and paged implementations of virtual memory have their advantages and disadvantages, and some computer systems combine the two approaches in order to enjoy the benefits of both. The most popular approach is to divide each segment into pages of fixed size for the purpose of allocation. The principle of address translation with combined segmentation and paging is shown in the following figure:
[Figure: the virtual address consists of a segment number, a page number and an offset; the segment number, checked against the SDT limit and the access rights, indexes the SDT to locate the page map table of that segment, the page number indexes that PMT, whose presence bit is tested, and the selected page frame is combined with the offset to address memory. An invalid segment number raises a nonexistent-segment exception, and a cleared presence bit raises a page fault.]

Figure 5.23 Segmentation and Paging in Virtual Memory
In the above figure both a segment descriptor table and page map tables are required for mapping. Each entry of the SDT contains a base, a size and access rights; the access rights are thus recorded as part of the segment descriptors. The presence bit in a PMT entry indicates whether the corresponding page is in real memory or not. Each virtual address in a combined system basically consists of three fields: the segment number, the page number and the offset within the page. When a virtual address is presented to the mapping hardware, the segment number is used to locate the corresponding page map table. If the presence bit of the selected PMT entry is set, the mapping is completed by obtaining the page frame address from the PMT and combining it with the offset part of the virtual address; if the target page is absent from real memory, the mapping hardware generates a page fault exception. At both mapping levels, length fields are used to verify that the memory references of the running process lie within the confines of its address space.


The combination of segmentation and paging requires two memory accesses to compute the mapping of each virtual address. Hardware designers of such systems must therefore assist the operating system by providing support in the form of mapping registers and translation lookaside buffers.

CHAPTER VI

I/O MANAGEMENT AND DISK SCHEDULING

The solutions of the problems are based on the following syllabus: I/O Devices and Organization of the I/O Function, I/O Buffering, Disk I/O, Operating System Design Issues.
Question 6.1 Write down the responsibilities of the file management system.
Solution File management is used to manage the data that reside on secondary storage. Logically related data items on secondary storage are organized into named collections called files. A file may contain a report, an executable program or a set of commands to the operating system. The file management system is supposed to hide all device-specific aspects of file manipulation from users and to provide them with an abstraction of a simple, uniform space of named files. A file appears to users as a linear array of characters or of record structures. The common responsibilities of the file management system are given below:


· Mapping of access requests from the logical to the physical file-address space.
· Transmission of file elements between main and secondary storage.
· Management of secondary storage, such as keeping track of the status, allocation and deallocation of space.
· Support for protection and sharing of files, and for the recovery and possible restoration of files after system crashes.
The file management system can be implemented as one or more layers of the operating system. Its basic services, such as the transmission of blocks of data, are also needed to support the management of virtual memory and swapping.
Question 6.2 What are the important file services of the file management system.
Solution Some of the typical file-related services that users may invoke by means of the command language are given below:
General file manipulation
CREATE filename
DELETE filename(s)
RENAME oldfilename, newfilename
ATTRIBUTES filename(s), attributes
COPY source_filename(s), destination_filename(s)
Directory manipulation
DIR dirname
MAKE_DIR dirname
REMOVE_DIR dirname
CHANGE_DIR dirname


Volume/media manipulation
INITDISK drivename
MOUNT drivename/volumename
DISMOUNT volumename
VERIFY volumename
BACKUP source_file(s)/volume, destination_file(s)/volume
SQUEEZE volumename
The file manipulation commands contain facilities for the creation and deletion of files, like CREATE and DELETE. Creation of a file may also be performed indirectly, by invoking system programs that subsequently manipulate the newly created file; deletion of files is usually done one at a time. Renaming and attribute changes are essentially directory operations: RENAME allows the user to change any or all components of a file name, including the type and version number, while the ATTRIBUTES command is available in some systems to change the attributes, such as the type of access, of an already existing file. The COPY command is frequently used to copy one file into another; the COPY operation can also be used to print files or to transfer a file to a remote site. In most systems this command creates a separate physical copy of the source file, possibly in a different directory or under a new name. The other important services are also shown above.
Question 6.3 What is the disk organization. What is the procedure to evaluate the disk access time.
Solution A number of input/output devices can be used for file storage in a computer system. Unlike main memory, where the unit of data transfer is one word, file storage devices have a pronounced variance in the average time


necessary to access a given block of data; the order of magnitude of this variance depends on the physical implementation of the particular device. The physical organization of a magnetic disk is shown below.
Figure 6.1 Moving Head Disk
The medium for data storage is the magnetic oxide coating of a disk platter; one or more platters provide data storage and retrieval in a single disk drive. Disks are said to be removable or fixed; removable disks are housed in a disk cartridge or a floppy-disk cover. Once a cartridge is in the drive, fixed and removable disks operate in a similar manner. Unlike magnetic tapes, disk platters are rotated constantly by the drive mechanism, at a speed of 3000 rpm or higher; floppy disks rotate at about 300 rpm and may be stopped completely between accesses. Data are read and written by means of read/write (r/w) heads mounted on a head assembly in such a way that they can be brought into close proximity with the portion of the disk where the target data reside. Data are stored on the magnetic disk surface in the form of concentric circles called tracks. The collection of tracks on all surfaces that lie at the same distance from the disk spindle is called a cylinder, and a number of blocks of data, called sectors, are recorded on each track. Disks are either of the fixed-head or of the moving-head variety. Fixed-head disks have a separate r/w head for each track; a given sector is accessed by activating the head on the appropriate track when the target sector passes under it. The time spent waiting for the desired sector to come around is called the rotational latency; on average it equals one half of the disk revolution time, which is on the order of milliseconds for typical disk rotational speeds. Moving-head disks are characterized by having only one or just a few r/w heads per surface. Removable disks are usually of the moving-head variety, in order to allow the head assembly to retract


from the cartridge before it is replaced. With moving heads, reading a sector requires that the head assembly first be moved to the corresponding cylinder; after this, the head on the proper track is activated when the target sector passes under it. The access time of a moving-head disk therefore includes the head-positioning time, also known as the seek time, in addition to the rotational latency. Hardware-related delays in transferring data between disk and memory are thus a combination of three primary factors, which are given below:
* Seek Time It is the time necessary for the r/w heads to travel to the target cylinder.
* Rotational Latency It is the time spent waiting for the target sector to appear under the r/w heads.
* Transfer Time It is the time necessary to transfer a sector between the disk and the memory buffer.
The first two components represent the disk access time, or latency, incurred when accessing a disk sector; the transfer time is the smallest of the three. It is therefore more convenient to transfer a larger amount of data per single disk access, since the disk access overhead is then amortized over a larger number of bytes.
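The three components can be combined in a small worked C sketch (the seek time and sectors-per-track figure are illustrative assumptions; only the 3000 rpm speed comes from the text above):

/* Average disk access time = seek time + rotational latency + transfer time. */
#include <stdio.h>

int main(void)
{
    double rpm = 3000.0;                          /* rotational speed */
    double revolution_ms = 60000.0 / rpm;         /* 20 ms per revolution */
    double avg_seek_ms = 10.0;                    /* illustrative average seek */
    double rot_latency_ms = revolution_ms / 2.0;  /* half a revolution on average */

    int sectors_per_track = 32;                   /* illustrative */
    double transfer_ms = revolution_ms / sectors_per_track;   /* one sector */

    double access_ms = avg_seek_ms + rot_latency_ms + transfer_ms;
    printf("average access time = %.2f ms\n", access_ms);     /* about 20.6 ms */
    return 0;
}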


Question 6.4 What are the relationship between the disk controller and the disk drive.
Solution Disks are electromechanical devices. The differences between the disk controller and the disk drive can be understood from Figure 6.2 given below. A controller is capable of handling several drives with similar characteristics; these are selected by the DRIVE SELECT lines shown in the figure. The HEAD SELECT lines are used to activate a specific head on the selected drive. The DIRECTION signal, required for moving-head drives, designates the direction, IN or OUT, in which the heads should be moved from the current position, while the STEP line provides a timed sequence of step pulses: one pulse moves the heads one cylinder, and a predetermined number of pulses moves the head assembly from its present cylinder to the target cylinder. The READ and WRITE signals are used to activate the selected r/w head, and the DATA IN and DATA OUT lines carry the input or output stream of bits when a READ or a WRITE operation is in progress, respectively. TRACK 00 is a drive-supplied signal that indicates when the head assembly is on cylinder 0, the outermost or home position. The INDEX signal indicates when the drive electronics senses the cylinder or track address mark. The DISK CHANGE signal, provided for removable media, alerts the operating system to media changes: the operating system must then invalidate, in main memory, all information regarding the related drive, such as directory entries and free-space tables. Other signals, including RESET, FAULT and MISCELLANEOUS IN and OUT, are shown below. The primary functions of the basic disk controller are to:

[Figure: the interface signals between the disk controller and a disk drive — Drive Select, Head Select, Direction (In/Out), Step, Read, Write, Data Out, Data In, Reset (Fault Clear), Track 00, Index, Ready, Fault, Volume Changed, and Miscellaneous In/Out.]

Figure 6.2 Disk Controller & Disk Drive 1. Convert the higher level commands like SEEK and READ a

sector into a sequence or properly timed drive specific commands.

2. Provide serial to parallel conversion and signal conditioning

necessary to convert from byte or word format into analog bit serial streams expected and produced by disk drives.

3. Perform the error checking and control. A disk driver allows reading and writing of disk sectors specified

by means of the three component physical disk addresses of the form:

<cylinder number, head number, sector number> Question 6.5 Explain the contiguous allocation of the disk

space. Solution


With contiguous allocation, disk space is allocated in contiguous areas of the disk in response to run-time requests: the blocks of a file are placed in contiguous blocks on the disk. The following figure shows the contiguous allocation of a file.
[Figure: a small disk of 20 blocks, numbered 0-19; the file's logical blocks LB0-LB4 occupy the contiguous physical blocks 10-14.]
Figure 6.3 Contiguous Allocation of Disk Space

The starting address and file size, recorded in the directory, are sufficient to access any block of a contiguous file. Sequential access is both simple and fast, because logically adjacent blocks are also physically adjacent on the disk. Random access to contiguous files is also fast, because the address of the target disk block can easily be calculated from the file's starting address recorded in the directory. The address of logical block LB2 of the file shown in the figure is obtained by adding the file's starting address, 10, and the desired offset, 2, to produce the target disk address, 12. Contiguous allocation requires keeping track of the clusters of contiguous free blocks and implementing some policy for satisfying requests for allocation and deallocation of blocks as files are created and deleted. The addresses and sizes of free disk areas can be kept in a separate list or recorded in unused directory entries; when a new file is created, the operating system can search the unused directory entries in order to locate a free area of suitable size.
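The random-access calculation for a contiguous file is a single addition, as this minimal C sketch shows (the directory-entry structure is illustrative):

/* Random access to a contiguous file: target disk block =
   starting block recorded in the directory + logical block offset. */
#include <stdio.h>

struct dir_entry { unsigned start_block, size_in_blocks; };

long block_of(const struct dir_entry *f, unsigned logical_block)
{
    if (logical_block >= f->size_in_blocks)
        return -1;                          /* beyond the end of the file */
    return (long)f->start_block + logical_block;
}

int main(void)
{
    struct dir_entry file = { 10, 5 };      /* file of Figure 6.3: blocks 10-14 */
    printf("%ld\n", block_of(&file, 2));    /* LB2 is at disk block 12 */
    return 0;
}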


Deallocation of space is almost trivial in systems with contiguous allocation that keep track of at least part of the free space in unused entries of the basic file directory: a file can be deleted simply by marking its entry as absent in the TYPE or PRESENCE field. From the above, one can say that contiguous allocation provides fast file access without any intervening disk accesses to locate the target blocks. The major problem with contiguous allocation is the disk (external) fragmentation produced by the variable sizes of files; a form of internal fragmentation is present in all file systems, due to the allocation of space in whole blocks. Contiguous allocation also requires the file size to be pre-declared, that is, estimated at the time of creation.
Question 6.6 Explain the chaining in the noncontiguous allocation.
Solution Chaining is the disk-based version of the linked list. A few bytes of each disk block are set aside to point to the next block in the sequence; both files and the free-space list can be handled in this way. A possible chained implementation of the BFD file is shown below:

0

1 2 3 18 (LB1)

4

5 6 7 (LB4)

8 9

10 3 (LB0)

11 12 13

14

15 16 17 18 19 (LB2)

19 7 (LB3)

Page 139: Microsoft Word - OS

Figure 6.3 Chaned Allocation of Disk space in Noncontigous Allocation in the figure, the physical blocks addresses 10, 3, 18, 18 and 7 are in the order. Chained files are well suited for sequential access because the block being processed contains the address of the next block in line. The use of the multisector transfers for the chained files is difficult because even when adjacent logical blocks are physically contiguous. Chaining of the free block is very simple and convenient. With a single pointer to the free chain kept in the memory, allocation of single blocks can be easily performed. Allocation of blocks in groups requires disk accessing to determine the addresses of each subsequent free block. Blocks are commonly deallocated in the groups upon the deletion of the files. With the help of tail pointer to the free chain of the memory, this operation can be performed with a single disk access that updates the link of the tail block. This is done when a free chain is appended to it otherwise, one disk access per each block freed may be needed to update the pointers accordingly. The major advantage is the simplicity and very little storage overhead. It does not produce the external fragmentation. Disk compaction may not be needed because the disk space needed for pointers is below 1%. The main disadvantage of the chaining is slow random access to files and inability to utilize multisector transfers. Recovering of the chained files on damaged disks is an extremely painful experience. Question 6.7 What are the system services related to the file management. Solution The following are the run time file services defined below in brief:
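To make the cost of chained access concrete, the sketch below walks a chained file block by block in C. The block size, the in-block pointer layout and the function names are assumptions made for the example; an in-memory array stands in for the disk so that the fragment is self-contained.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE   64          /* small blocks keep the demo readable */
#define NUM_BLOCKS   20
#define END_OF_CHAIN 0xFFFFFFFFu

/* In-memory stand-in for the disk shown above: the last 4 bytes of
 * every block hold the physical number of the next block in the chain. */
static uint8_t disk[NUM_BLOCKS][BLOCK_SIZE];

static void set_next(uint32_t blk, uint32_t next)
{
    memcpy(disk[blk] + BLOCK_SIZE - sizeof next, &next, sizeof next);
}

static uint32_t get_next(const uint8_t *buf)
{
    uint32_t next;
    memcpy(&next, buf + BLOCK_SIZE - sizeof next, sizeof next);
    return next;
}

/* Read logical block 'lbn' of a chained file that starts at physical
 * block 'start'. Every preceding block must be visited first, which is
 * exactly why random access to chained files is slow. */
static int read_chained(uint32_t start, unsigned lbn, uint8_t out[BLOCK_SIZE])
{
    uint32_t cur = start;
    for (unsigned i = 0; ; i++) {
        if (cur == END_OF_CHAIN || cur >= NUM_BLOCKS)
            return -1;                           /* offset beyond end of file */
        memcpy(out, disk[cur], BLOCK_SIZE);      /* one "disk access"         */
        if (i == lbn) {
            printf("LB%u is in physical block %u\n", lbn, cur);
            return 0;
        }
        cur = get_next(out);                     /* one extra access per hop  */
    }
}

int main(void)
{
    uint8_t buf[BLOCK_SIZE];
    /* Chain from the figure above: 10 -> 3 -> 18 -> 19 -> 7 */
    set_next(10, 3); set_next(3, 18); set_next(18, 19);
    set_next(19, 7); set_next(7, END_OF_CHAIN);
    read_chained(10, 3, buf);                    /* prints: LB3 is in physical block 19 */
    return 0;
}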


Question 6.7 What are the system services related to the file management?
Solution
The following run-time file services are described below in brief:

CREATE
The CREATE routine works as per the following specification:

routine CREATE(filename, attributes);
begin
  search directory for filename;
  if found report duplicate, or create new version, or overwrite;
  locate a free directory entry; if none, allocate a new one;
  allocate space for the file; (all, some or none)
  record allocated blocks in the directory;
  record file attributes in the directory;
end;

The main purpose of the above routine is to create a file. The first step is to search the symbolic directory specified by the pathname in order to find out whether the given name already exists. When file extensions are supported, files with the same name but different extensions are regarded as distinct files. The file name is recorded in the related symbolic file directory.

OPEN
The following shows the OPEN system service. Its main purpose is to establish a connection between the calling program and the specified file. This is done after verifying that the target file exists and that the caller is authorized to access it in the desired mode. The system responds by creating a file control block (FCB).

routine OPEN(filename, access_mode): connection_ID;
{call: connection_ID := OPEN(filename, access_mode);}
begin
  search directory for filename;
  if not found indicate error and return;
  verify authority to access the file in the desired mode;
  if not authorized indicate error and return;
  create file control block FCB;
  create connection_ID for the FCB;
  initialize file marker;
  return connection_ID;
end;

SEEK
The SEEK command is processed by updating the file marker to point to the byte or to the record whose logical address is supplied by the caller. The SEEK routine is shown below:

routine SEEK(connection_ID, logical_position);
begin
  verify legitimate connection_ID;
  calculate desired position;
  update the file marker;
end;

The connection_ID is first checked to ensure that an authorized user is making the request. The SEEK command has no effect when applied to a sequentially structured file; some systems simply ignore such calls and others treat them as errors.

READ
The following routine shows the specification of READ:


routine READ(connection_ID, num_bytes, in_buffer): status;
{call: status := READ(connection_ID, num_bytes, in_buffer);}
begin
  verify legitimate connection_ID;
  verify file open for read; {access authorization}
  synchronize with other active users if necessary; {sharing}
  calculate number & addresses of sectors to read; {mapping}
  verify target addresses are within the file boundary; {protection}
  issue read command(s) to device driver, multisector if possible;
  verify outcome;
  copy num_bytes of data from internal buffers to in_buffer; {deblocking}
  update file marker;
  return status;
end;

These are the functional steps performed by the file system when processing a READ request. To allow for the possibility of errors while reading data, the service is structured as a function that indicates the outcome of its operation by returning STATUS. The user supplies the connection_ID; if it is valid and the user is authorized to read the related file, the system checks whether the file is being accessed concurrently by other users. This check may be performed by traversing the list of file control blocks.

WRITE
The following routine shows the basic structure of WRITE:

routine WRITE(connection_ID, num_bytes, out_buffer): status;
{call: status := WRITE(connection_ID, num_bytes, out_buffer);}
begin
  verify legitimate connection_ID;
  verify file open for write; {access authorization}
  synchronize with other active users if necessary; {sharing}
  if file is extended, allocate required blocks; {allocation of space}
  update directory if new blocks added;
  calculate number and addresses of sectors to write;
  copy num_bytes of data from out_buffer to internal buffers; {blocking}
  issue write command(s) to device, multisector if possible;
  verify outcome;
  update file marker;
  return status;
end;

The only important difference from the READ service is that WRITE can be used to extend the file. The file system calls the space-allocation module to provide the required number of free blocks. Depending upon the space-management policy in use, the directory and file index blocks may have to be updated to reflect the new acquisitions. In any case, the file marker is updated after each disk write.

CLOSE
The CLOSE system service breaks the connection between the caller and the file whose connection_ID is passed as a parameter. This service releases the related file control block to the free pool from which FCBs are allocated.

Question 6.8 What are the advantages and disadvantages of the disk caching?
Solution
The following are the advantages of disk caching:
· Improved effective disk access time, by satisfying requests for cached blocks from memory.
· Improved response time to applications for cached blocks on reads and, optionally, on delayed writes, where the I/O requestor is not forced to wait for the completion of a disk write.
· Elimination of some disk accesses, for example when the same block is written and read in some order several times while it is in the buffer cache. Moreover, with intelligent caching, temporary work files created by many programs, such as compilers, may spend their entire lifetime in the cache and never be written to disk if their blocks are freed in the cache when the file is erased.
· Reduced server and network loading with client disk caching in distributed systems.

The main disadvantage of disk caching is the potential corruption of the file system due to loss of power or to system crashes if delayed writes are allowed. Poorly implemented or poorly parametrized disk caches have also been found to underperform noncached systems under certain conditions.
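As a rough illustration of how a buffer cache satisfies repeated requests from memory, here is a minimal C sketch of a block cache keyed by block number. The hash-table size, the dirty flag used for delayed writes and all function names are assumptions made for the example, not part of any specific operating system.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE  512
#define HASH_SIZE   64            /* assumed size; real systems tune this */

struct cached_block {
    uint32_t blkno;
    int      dirty;               /* set by delayed writes                */
    uint8_t  data[BLOCK_SIZE];
    struct cached_block *next;    /* hash-chain link                      */
};

static struct cached_block *hash_table[HASH_SIZE];

/* Stand-in for the disk driver; a real system would issue a disk read here. */
static void disk_read(uint32_t blkno, uint8_t *buf)
{
    (void)blkno;
    memset(buf, 0, BLOCK_SIZE);
}

/* Return the cached copy of a block, reading it from disk only on a miss. */
struct cached_block *cache_get(uint32_t blkno)
{
    struct cached_block *b;
    unsigned h = blkno % HASH_SIZE;

    for (b = hash_table[h]; b != NULL; b = b->next)
        if (b->blkno == blkno)
            return b;                      /* hit: no disk access needed */

    b = malloc(sizeof *b);                 /* miss: bring the block in   */
    if (b == NULL)
        return NULL;
    b->blkno = blkno;
    b->dirty = 0;
    disk_read(blkno, b->data);
    b->next = hash_table[h];
    hash_table[h] = b;
    return b;
}

/* A delayed write only marks the block dirty; it is flushed to disk later. */
void cache_write(struct cached_block *b, const void *src, size_t n)
{
    memcpy(b->data, src, n < BLOCK_SIZE ? n : BLOCK_SIZE);
    b->dirty = 1;
}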

CHAPTER VII

CASE STUDY OF UNIX OPERATING SYSTEM

In this chapter, important questions related to the Unix operating system have been attempted. The key features of Unix are clearly described in the questions.

Question 7.1 What is the Unix operating system? Explain the important features of Unix.
Solution
Unix is a popular multiuser, time-sharing operating system used as a program development and document preparation environment. It is written in the high level language C. Some of the major features of Unix are described below:

Multiuser Operation
Unix supports multiple users on suitable installations with memory management hardware and the appropriate communication interfaces. Local users and remote users have access to login facilities, and file transfer between Unix hosts is available in network configurations.

Portability
Unix is portable and available on a wide range of different computer hardware. The design of Unix confines hardware-dependent code to a few modules in order to facilitate porting.

Device Independence
Files and I/O devices are treated in a uniform manner by means of the same set of system calls. I/O redirection and stream-level I/O are fully supported at both the command language and the system call levels.

Tools and Tool-Building Utilities
Unix provides a good way to build tools by supplying useful building blocks that can be combined according to each user's needs and preferences. Unix utilities are simple and concentrate on accomplishing a single function well, using the facilities provided by I/O redirection and pipes.

Hierarchical File System
The sharing and cooperation among users that is desirable in program development environments is facilitated by Unix. The hierarchical file system of Unix spans volume boundaries, virtually eliminating the need for volume awareness among its users.

Question 7.2 Explain the different important commands available in the Unix operating system.
Solution


Unix users invoke commands with the help of a command language interpreter called the shell. The shell is also a programming language suitable for the construction of command files, called shell scripts. All programs run under the shell start out with three predefined files: standard input (normally the terminal keyboard), standard output and error output. The symbols > and < are used to indicate redirection of the standard output and input. In Unix the ls command is used to display the directory, but with the redirection operator, as in ls > vipin, the output is diverted to a file named vipin. The input redirection operator, as in vi < script, instructs the program vi (the editor) to take its input from the file named script.

If we are doing C programming under the Unix environment, we can use the editors vi, ed or ex to write the program. Inside vi, text is entered by pressing the i key (insert). The program is saved by pressing the ESC key followed by :wq; if we do not want to save the file, we press :q!. After writing the program, one can compile it as follows:

cc vipin.c

By default this creates an executable file named a.out, which can be run simply by typing a.out (with cc -o vipin vipin.c the executable is named vipin instead). The following are some important commands with their interpretation:

rm: This command is used to remove (delete) files. E.g. rm vipin.c will delete the file named vipin.c.

mv: This command is used to move files or to rename a file. E.g. mv vipin.c saxena.c will rename the existing file vipin.c to saxena.c; the original name vipin.c is lost and all its contents are now available as saxena.c.

cp: This command is used to copy the contents of one file into another. E.g. cp vipin.c saxena.c will copy the contents of the file vipin.c to saxena.c. In this case the original is not lost and we have two files, vipin.c and saxena.c, with the same contents.

cat: This command is used to concatenate and display one or more files. E.g. cat vipin.c saxena.c will concatenate the two files vipin.c and saxena.c.

ls: This command displays the directory. E.g. $ ls will list the contents of the current directory.

mkdir: This command creates a personal directory. E.g. $ mkdir vipin will create a directory named vipin.

rmdir: This command removes an existing personal directory. E.g. $ rmdir vipin will delete the directory named vipin.

cd: It is used to change the working directory. E.g. $ cd vipin will change the current directory to the subdirectory vipin.

The description of other commands is given below:


mount: attach a device; add its file system to the tree of directories
umount: remove the file system contained on a device
ncheck/icheck/dcheck: verify the integrity of the file system
fsck: verify the integrity of the file system
dump: back up devices or files selectively
restor: restore a dumped file system
chmod: change the attributes (access modes) of files

Question 7.3 What is process management & communication in Unix (Unix system calls)? Explain process control through system calls.
Solution
Unix system calls are executed in response to statements placed in user programs. When invoked, a Unix system call acts as an extension of the calling program. Input and output operations appear to users as synchronous and unbuffered. A new process can be created by means of the fork() system call:

process_ID = fork()

The calling process is split into two related but separate processes, called the parent and the child. These have independent copies of the original memory image and share all open files. fork() spawns a new process that shares all or some of its memory with the process that executed fork. The process that executes fork continues to execute on the same processor; the forked process is assigned to another processor if one is available, otherwise it is queued to await execution. The parent and child processes differ in that the process ID returned to the parent is the ID of the child (and is never 0), whereas the child process always receives an ID of 0 from fork. This information may be used by each process to determine its own identity, using the following sequence of code:

{parent process}
………                              {parent creates a child}
process_ID := fork();
if process_ID <> 0 then           {parent & child}
begin                             {parent's code continues here}
  …………
  done_ID := wait(status);        {parent waits for child's termination}
end
else
begin                             {child's code continues here}
  …………
  exit(status);                   {child terminates execution}
end; {if}
{parent continues here after child's termination}
………….

Figure 7.1 Process Creation in Unix

In the above, the parent process creates the child process by invoking the fork system call. After completion of the fork, the two processes continue to execute on their own. Different processes can exchange timing signals by means of the matching KILL and SIGNAL calls. The basic format of KILL is given below:

kill(process_ID, signal_ID)
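For readers who prefer a compilable version of Figure 7.1, the following is a minimal C sketch of the same fork/wait pattern using the standard POSIX calls fork(), wait() and exit(); the messages printed are illustrative only.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* parent creates a child          */

    if (pid < 0) {
        perror("fork");                 /* creation failed                 */
        return 1;
    }
    if (pid != 0) {                     /* parent: fork returned child ID  */
        int status;
        pid_t done = wait(&status);     /* parent waits for child to exit  */
        printf("parent: child %d finished with status %d\n",
               (int)done, WEXITSTATUS(status));
    } else {                            /* child: fork returned 0          */
        printf("child: my pid is %d\n", (int)getpid());
        exit(42);                       /* child terminates execution      */
    }
    return 0;
}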


Process Control Blocks in Unix
In Unix, code segments are shared by means of a separate table called the text table. A pointer to the related entry of the text table is kept in each entry of the process table. The relationship is shown below:

Figure 7.2 Process Control Blocks in Unix
[The figure shows a parent process and a child process. The process table and the text table are resident; the parent's control block and data segment, the child's control block and data segment, and the shared code segment are swappable.]

The resident and swappable portions of a parent and a child process are shown in the above figure. The resident parts are kept in the process table, whose entries also point to the location of the swappable part of the control block of the related process. In the resident portion, memory is allocated to requesting processes using the first-fit algorithm. Expansion of a data segment is implemented by allocating a new, larger area and copying the contents of the old one into it. When the requirements of active processes exceed the capacity of the available memory, one or more processes are swapped out to disk. Since code segments are never modified, only data segments and the swappable parts of process control blocks are actually copied to secondary memory. Ready-to-run processes are ranked on the basis of their time on secondary storage and their size; the process that has spent the longest time on secondary storage is swapped in first, and a slight penalty is charged to larger programs.

Each directory contains at least two entries: a link to its parent directory and a pointer to itself. A file's descriptor contains the following information:
¨ The user & group ID of the file owner.
¨ File protection bits.
¨ The physical address of the file contents.
¨ The times of creation, last use & last modification of the file.
¨ The number of links to the file (usage count).
¨ The file type (ordinary, directory or special).

Question 7.4 Explain the working of a shell program.
Solution


Shell programming is a very important and very powerful system facility; the shell can be used as a programming language in itself. To apply a command such as ls to more than one directory, the use of a loop is required:

for i
do
  ls -l $i | grep '^d'
done

The for loop means that the body of the loop is executed once for each of the parameters. As a precaution a user might wish to test whether any parameters have been passed at all:

if test $# = 0
then
  ls
else
  for i
  do
    ls -l $i | grep '^d'
  done
fi

In the above, test $# tests the number of parameters passed; if this is zero then the current directory is listed, and if it is non-zero then the directories named as arguments are searched. This small shell program should give some idea of what can be done. When a programmer should use the shell, rather than writing a program in assembler or a high level language, is not always obvious. If a normal programming language can be used for a problem, then the execution-speed penalty of the shell's high level features may favour the use of the programming language; if the application involves operations that are standard Unix commands, then the shell may be more appropriate.


CHAPTER VIII

SECURITY AND PROTECTION

This chapter is not in the syllabus of the U.P. Technical University but is in the course of IGNOU as part of CS-13. The solutions of the problems are based on the following syllabus: Security Threats and Goals, Penetration Attempts, Security Policies and Mechanisms, Authentication, Protection and Access Control, Formal Models of Protection, Cryptography, Worms and Viruses.

Question 8.1 What are the important security threats?
Solution
The major security threats are:

(a) Unauthorized Disclosure of Information
Disclosure of information to unauthorized parties results in a loss of privacy. Revelation of a credit card number, a proprietary product design, a list of customers, a bid on a contract or military data can be used by adversaries in numerous ways; depending upon the nature of the information, anyone can use it to produce losses.

(b) Unauthorized Alteration or Destruction of Information
The undetected alteration of information that cannot be recovered is potentially equally dangerous. Even without external leakage, the loss of vital data can put a company out of business. Coupled with information disclosure, alteration or destruction can further aggravate a bad situation.

(c) Unauthorized Use of Service
This can result in a loss of revenue to the service provider, and it can be exploited to gain illegal access to information.

(d) Denial of Service to Legitimate Users
This implies some form of disruption of the computer system that results in partial or complete loss of service, for example through programs that multiply and spread themselves, called computer worms. In several instances computer worms have overloaded major computer networks and brought them to a virtual halt, impairing online transaction systems.

The goal of computer security is to guard against and eliminate potential threats. A secure system should maintain the availability and privacy of data; the data maintained by the system should be correct, available and private. Data integrity means:

(a) Protection from unauthorized modification.
(b) Resistance to penetration.
(c) Protection from undetected modification of data.

Data correctness is a more general notion than security.

Question 8.2 Explain the different kinds of penetration attempts.
Solution
Some of the important penetration attempts are given below:

(a) Logged-on terminal


The terminal is left unattended by the user, and an intruder can use it to access the system with full access to all data available to the legitimate user whose identity is assumed.

(b) Password
A password may be obtained by an intruder for illegal access in a number of ways, including guessing, stealing, trial & error, or knowledge of vendor-supplied passwords used for system generation and maintenance.

(c) Browsing
Users may be able to uncover information that they are not authorized to access simply by browsing through the system files.

(d) Trapdoors
These are secret points of entry without access authorization. They are sometimes left by software designers to allow them to access and possibly modify their programs after installation and production use.

(e) Electronic eavesdropping
This may be accomplished via active or passive wiretaps, or by means of electromagnetic pickup of screen radiation.

(f) Mutual trust
Too much trust or careless programming has led to failures to check the validity of parameters passed by other parties; a caller may thereby gain unauthorized access to protected information.

(g) Trojan horse
A Trojan horse program may be written so as to steal user passwords. Such programs are easy to plant in systems with public terminal rooms, by leaving a copy active on a terminal where it simulates the log-off/log-on screen.

(h) Computer worms
These programs can swamp computer systems by spreading through a network.

(i) Computer viruses
These are pieces of code that infect other programs and perform harmful acts, such as deletion of files or corruption of the boot block.

(j) Trial & error
Passwords in a computer system are usually stored encrypted, but usernames or IDs are not. Passwords can be guessed by trial & error, and when some passwords are obtained they can be used for breaking into the computer system.

(k) Search of waste
Searching through discarded material can be used to uncover passwords or to peruse deleted files, volumes or tapes. Erasure of files is commonly performed by updating the directory entries and returning the data blocks to the free space, so useful information may be reconstructed by scanning the free blocks.

Question 8.3 What are the bases on which security policies are considered?
Solution
Security policies encompass the following:
a. How information can enter & exit the system.
b. Who is authorized to access what information & under what conditions.


c. What are the permissible flows of information within the system.

Security policies are based on the following principles:

(i) Least Privilege
Each subject should be allowed access only to the information essential for completing its tasks. For example, hospital accountants need not have access to patients' medical records, and doctors need not be allowed access to accounting data.

(ii) Separation of Duties
Sensitive operations should not be entrusted to a single person; where two or more people could otherwise be in conflict, the work should be divided so that it takes two people with two different keys to complete it.

(iii) Rotation in Roles
Sensitive operations should not be permanently entrusted to the same person; rotation of roles helps to uncover wrongdoing.

In addition to the above, the choice of a security policy is very important. Choosing a policy is a process that consists of risk assessment and cost assessment, where the cost includes the increased cost of equipment, personnel and reduced performance. Most computer-related security policies belong to one of the following categories:

A. Discretionary Access Control (DAC)
Policies are defined by the owner of the data, who may pass access rights on to other users; the creator of a file, for example, can specify the access rights of other users. This form of access control is common in file systems. It is, however, vulnerable to Trojan horse attacks.

B. Mandatory Access Control (MAC)
These restrictions are not subject to user discretion, and they can also limit the damage done by Trojan horses. Here users are classified according to their level of authority or clearance, and data are classified into security classes according to their level of confidentiality. For example, military documents are categorized as unclassified, confidential, secret & top secret; a user is required to have a clearance equal to or above that of a document in order to access it.


1. Possession of a secret, i.e. a password.
2. Possession of an artifact.
3. Unique physiological or behavioral characteristics of the user.

One of the important bases of authentication, the password, is described below. It is the most common authentication mechanism based on the sharing of a secret. Each user has a password, which may be assigned by the system or by an administrator; many systems allow users to change their passwords subsequently. The system stores all user passwords and uses them to authenticate the users. At login time, the system requests the user to supply the secret password. Passwords require no special hardware and are very easy to implement, but they offer only limited protection, since they can be easy to obtain or guess. Unencrypted password files stored in the system are vulnerable, and user-chosen passwords are often dictionary words or proper names, chosen so that they are easy to remember; such passwords can be broken by exhaustive trial and error. System-chosen passwords are generally random combinations of letters and numbers: very hard to guess, but also hard to remember, so users tend to store them in a handy place near the terminal.

Various techniques are available for the protection of passwords. One technique is to make the password scheme multilevel: users are required to supply additional passwords, at the system's request, at random intervals during computer use. A second technique is to have the system issue a dynamic challenge to the user after login. The challenge can take the form of a random number generated by the computer, to which the user has to apply a secret transformation, such as squaring or incrementing; failure may be used to detect unauthorized users. The number of consecutive login attempts may also be controlled by disabling a user's account after a certain number of unsuccessful attempts; the user may subsequently reinstate the account by establishing his identity with the system manager. This approach carries a danger: a malicious user who knows the identifications of other users or of administrators can disable their accounts by making several failed login attempts, and such a user can easily knock out the whole system by disabling the accounts of all administrators. For this reason some systems prohibit such disabling.
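The challenge-response idea described above can be sketched in a few lines of C. The "secret transformation" here is squaring the challenge modulo a constant, purely as an illustration; real systems use cryptographic functions, and all names below are invented for the example.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define MODULUS 9973u                      /* assumed small modulus for the demo */

/* Secret transformation shared by the system and the legitimate user. */
static unsigned transform(unsigned challenge)
{
    return (challenge * challenge) % MODULUS;   /* "squaring" transformation */
}

/* The system issues a random challenge and checks the user's response. */
static int authenticate(unsigned (*user_response)(unsigned))
{
    unsigned challenge = (unsigned)rand() % MODULUS;
    unsigned expected  = transform(challenge);
    return user_response(challenge) == expected;   /* 1 = accept, 0 = reject */
}

/* A legitimate user knows the transformation; an intruder does not. */
static unsigned legitimate_user(unsigned c) { return (c * c) % MODULUS; }
static unsigned intruder(unsigned c)        { return c + 1; }

int main(void)
{
    srand((unsigned)time(NULL));
    printf("legitimate user accepted: %d\n", authenticate(legitimate_user));
    printf("intruder accepted:        %d\n", authenticate(intruder));
    return 0;
}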

Question 8.5 What is the Access Matrix Model of protection?
Solution
The protection of objects in primary memory and in secondary memory poses different requirements. A computer system is viewed as consisting of a set of subjects, such as processes, that operate on and manipulate a set of objects. Objects include hardware objects and software objects such as files. From the software point of view, each object is an abstract data type, and operations on an object may transform its state. The protection mechanism should ensure that:

(a) No process is allowed to apply a function inappropriate to a given object type.
(b) Each process is permitted to apply only those functions that it is explicitly authorized to apply to a specific object.

The authority to execute an operation on an object is called an access right. We use the concept of a protection domain, which specifies a set of objects and the types of operations that may be performed on each object. A protection domain is a collection of access rights, each of which is a pair <object identifier, right set>. Domains need not be static, since their elements can change as objects are deleted or created and access rights are modified. Domains may also overlap: a single object can participate in multiple domains, with different access rights given in each. A process executes in one protection domain at any point in time; this binding is not static, and a process may switch between different protection domains in the course of its execution. In a flexible protection system, not all parts and phases of a program need be given equal and unrestricted access to all objects that the program has access rights to.

The access matrix represents all access rights of all subjects to all objects in the computer system. The model is a two-dimensional matrix in which both hardware and software objects are included, for example:

Object:        File1          File2     File3            Printer
Domain D1      Read, Write              Execute          Output
Domain D2      Read           Read
Domain D3                     Write     Execute, Copy    Output


Blank entries indicate no access rights. A process executing in domain D2, for example, can access object File2 only in read-only mode; File3 is shared, being accessible in domain D3 and also executable in domain D1. The access matrix is a very useful model, but a single central matrix is not a practical way to store access rights; the information captured and expressed by the access matrix is instead stored by means of the common access control mechanisms described below:

(1) Access Hierarchies
When a user program needs to perform an operation outside its protection domain, it calls the operating system. At the control transfer point, such as the supervisor call instruction, the operating system can check the user's authority and grant or deny execution accordingly. As shown in Figure 8.1, protection rings are used to define domains of access. At any given time each process runs in a specific protection ring, in the range [0, r-1]. Rings are used in Multics. The access privileges of ring j are a subset of those of ring i for 0 <= i <= j <= r-1, so inner rings have higher access rights. Protection barriers in the form of call gates are invoked by hardware when a less privileged outer ring needs to call on a service running in an inner (more privileged) ring.

The concept of an access hierarchy is not unique to hardware; it is also used in software. Figure 8.2 shows a block-structured language. Identifiers declared in block A are accessible in all of A's nested blocks. A statement contained in the inner block D may legally reference all identifiers declared in D's outer blocks A & B, but not those of the disjoint block C. Statements in block A do not have access to variables declared in blocks B & D, and variables declared in block D cannot be accessed from block B.

Figure 8.1 Protection Rings
[The figure shows concentric rings numbered from 0 (innermost, most privileged) to r-1 (outermost, least privileged).]

Figure 8.2 Blocks of a Structured Language
[The figure shows block A containing blocks B and C, with block D nested inside B.]

(2) Access Lists
Access lists are one way of recording access rights in a computer system. An access list is an exhaustive enumeration of the specific access rights of all entities (domains or subjects) that are authorized to access a given object; the access list for a specific object thus contains all nonempty cells of the column of the access matrix associated with that object. For the access matrix shown above, the access list for the printer and the capability list for domain D1 are:

Access list for Printer:        Capability list for Domain D1:
  D1: Output                      File1: read/write
  D3: Output                      File3: execute
                                  Printer: output


Figure 8.3 Access List and Capability List

Many variations of the access list scheme are used to store access information in file systems, and access lists may be combined with other schemes to strengthen protection. For lengthy lists of authorized users, especially for public files, some systems divide users into groups and store only the aggregate group access rights; this scheme saves storage.
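As an illustration of how an access list is consulted, the following C sketch stores, for one object, the list of <domain, rights> pairs and answers whether a requested operation is allowed. The rights encoding and the names are assumptions made for the example only.

#include <stdio.h>

/* Generic access rights encoded as bit flags (assumed encoding). */
enum { R_READ = 1, R_WRITE = 2, R_EXECUTE = 4, R_OUTPUT = 8 };

/* One entry of an object's access list: a domain and its right set. */
struct acl_entry {
    int domain;
    unsigned rights;
};

/* Return nonzero if 'domain' holds all rights in 'wanted' for the object. */
static int access_allowed(const struct acl_entry *acl, int n,
                          int domain, unsigned wanted)
{
    for (int i = 0; i < n; i++)
        if (acl[i].domain == domain)
            return (acl[i].rights & wanted) == wanted;
    return 0;                       /* domain not on the list: no rights */
}

int main(void)
{
    /* Access list for the printer of Figure 8.3: D1 and D3 may output. */
    struct acl_entry printer_acl[] = { {1, R_OUTPUT}, {3, R_OUTPUT} };

    printf("D1 output: %d\n", access_allowed(printer_acl, 2, 1, R_OUTPUT)); /* 1 */
    printf("D2 output: %d\n", access_allowed(printer_acl, 2, 2, R_OUTPUT)); /* 0 */
    return 0;
}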

(3) Capability Lists
Instead of maintaining per-object access lists, the system can maintain per-subject lists of access rights, called capability lists. Capability-based systems combine addressing and protection functions in a single mechanism that is used to access all system objects. A capability is a token or ticket that gives permission to access a specific object in a specified manner. Capability mechanisms can:

(1) Address both primary & secondary memory.
(2) Access both hardware & software resources.
(3) Protect objects in both primary & secondary memory.

When used to access primary memory, capabilities operate in a manner similar to segmentation. Capability-based systems keep capabilities, together with their access rights, in capability lists. The figure below shows capability-based addressing: an address specifies a capability and an offset within the related object.

Figure 8.4 Capability List
[The figure shows an address consisting of a capability and an offset. The capability, held in the process capability list, carries the access rights (e.g. RW) and an object identifier that indexes a global object table holding the base address and length of the object.]

The capability points to a data structure that describes the access rights to the object and the address of the object; the address portion is a pointer to a system-wide table of objects that contains the base address and the object size. Capabilities are thus a very general mechanism that can cover both hardware and software objects, in primary as well as secondary storage.

The capability list of a process may be implemented in a hierarchical manner. A process that creates an object receives a capability for it, so capabilities are associated with objects from their creation. Capability lists are themselves protected objects, accessible only to the operating system. The following operations may be performed on capabilities in a capability list:

(a) Move a capability to a different location in a list.
(b) Delete a capability.
(c) Restrict the access rights portion of a capability.
(d) Pass a capability as a parameter.


(e) Transmit a capability to another subject.

Figure 8.5 Aliasing for Revocation of Access Rights
[The figure shows the capability lists of two processes, X and Y; both capabilities refer to an alias entry, which in turn points to the object's descriptor in the global object table.]

Association of protection information with subjects complicates the revocation of access rights in capability-based systems. One solution is to have all copies of a capability point to an indirect pointer, called an alias, to the system descriptor of the related object; this allows the owner to revoke the access rights of all other subjects by severing the indirect pointer.

Question 8.6 Explain the different models of protection for the security of the operating system.
Solution
Protection is necessary whenever objects are shared; overall system efficiency can be improved by sharing common utilities. Many protection schemes are based on the principle of least privilege. The following are the important models of protection:

(1) The Bell-LaPadula Model


This model provides mandatory access control based on the concept of security classes. A four-level hierarchy of security classes is used:

Level 0 -> Unclassified
Level 1 -> Confidential
Level 2 -> Secret
Level 3 -> Top Secret

Each object belongs to a security class and each subject is assigned a clearance (authority level). Information is permitted to flow within a class or upward, but not downward: for entities a & b, where c(e) denotes the security class of entity e, the relation c(a) <= c(b) means that a is in the lower security class and that information may flow from a to b. Protection in this model is viewed as a set of subjects, a set of objects & an access matrix; each entry of the access matrix can contain a set of generic access rights such as read, write, execute, read/write & append. The operation of the model is described by the following primitive operations:

1. Get access rights
2. Release access rights
3. Give permission to confer access rights
4. Rescind permission to confer access rights
5. Create object
6. Delete object
7. Change security level

In the model, the operating system gets the access rights for a particular security class, may give or rescind permission to confer access rights, and, for a particular class, creates an object, uses it, deletes it and may then change the security level.

The information flow mechanism enforces the following two properties:

(a) Simple Security Property
A subject may read only from objects whose security classification is equal to or less than its own clearance. A read is allowed only when c(o) <= c(s), where c(s) is the clearance of subject s & c(o) is the security class of object o.

(b) The *-Property
A subject s may have write access only to an object o whose classification level is equal to or greater than its own clearance. A write is allowed only when c(s) <= c(o).

It has been demonstrated that each of the seven primitive operations defined above preserves both the simple security property & the *-property.
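The two Bell-LaPadula properties reduce to a pair of comparisons, sketched below in C. The numeric levels follow the four-level hierarchy given above; the function names are chosen only for the example.

#include <stdio.h>

/* Security levels of the four-level hierarchy described above. */
enum level { UNCLASSIFIED = 0, CONFIDENTIAL = 1, SECRET = 2, TOP_SECRET = 3 };

/* Simple security property: no read up. */
static int may_read(enum level subject_clearance, enum level object_class)
{
    return object_class <= subject_clearance;
}

/* *-property: no write down. */
static int may_write(enum level subject_clearance, enum level object_class)
{
    return subject_clearance <= object_class;
}

int main(void)
{
    /* A SECRET-cleared subject may read CONFIDENTIAL data but not write it. */
    printf("read  CONFIDENTIAL: %d\n", may_read(SECRET, CONFIDENTIAL));   /* 1 */
    printf("write CONFIDENTIAL: %d\n", may_write(SECRET, CONFIDENTIAL));  /* 0 */
    printf("write TOP_SECRET:   %d\n", may_write(SECRET, TOP_SECRET));    /* 1 */
    return 0;
}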


(2) Lattice Model of Information Flow
This model views the protection system as a set of subjects, a set of objects & a set of security classes; both static & dynamic assignment of security classes is considered. Information flow is defined by a lattice, denoted (SC, <=), where <= is a binary relation. A flow policy (SC, <=) is a lattice if it is a partially ordered set in which there exist a least upper bound operator + & a greatest lower bound operator *. Partial ordering means that the relation <= is reflexive, transitive & antisymmetric. For security classes A & B, the relationship A <= B means that A is the lower security class & that information may flow from A to B.

Consider a linear ordering on n classes, SC = {0, 1, 2, ..., n-1}, such that for all i, j in [0, n-1], i + j = max(i, j) & i * j = min(i, j); the lowest security class, low, is 0 & the highest security class, high, is n-1. With four classes the security classes are linked as follows:

unclassified <= confidential <= secret <= top secret

so information may flow from unclassified to confidential, from confidential to secret and from secret to top secret:

unclassified -> confidential -> secret -> top secret

Figure 8.6 combines a linear lattice & a subset lattice to show the flow of information in a system with two authority levels (0 & 1) & two departments, medical (m) & financial (f). Here medical information may flow only into objects that contain medical information & belong to an equal or higher security class. The lattice model allows the construction of very rich structures by combining linear & subset lattices.

Figure 8.6 Lattice Model of Protection
[The figure shows the lattice with nodes (0,{}), (0,{f}), (0,{m}), (0,{f,m}), (1,{}), (1,{f}), (1,{m}) and (1,{f,m}); information flows upward from (0,{}) towards (1,{f,m}).]

There are two types of information flow in programs: (a) explicit and (b) implicit. Explicit flow results from the execution of assignment statements of the form y = f(x1, ..., xn); the flow is xi -> y for 1 <= i <= n. From the security standpoint the assignment is permissible if c(x1) + ... + c(xn) <= c(y), that is, if the least upper bound of the operand classes does not exceed the class of the target; if this relationship is satisfied by the security policy then the assignment may be completed, otherwise it should not be executed. Implicit flow of information occurs when a subject may deduce something about confidential information that it is not authorized to access; implicit flows may occur as a result of the execution of conditional statements. The lattice model is a powerful abstraction that can be used to express a variety of practical security policies, and it provides a foundation for compile-time, run-time or combined automatic verification of information flow.


(3) Take and Grant Model
This is a graph-based model which describes a restricted class of protection systems. The main advantage of this model is that the safety of take-grant systems is decidable even if the number of subjects and objects that can be created is unbounded; by using the transformation rules, the safety of the protection system is decidable in time that grows linearly with the size of the initial protection graph. The model consists of:

a. a set of subjects
b. a set of objects
c. a set of generic rights

In this model the access rights may be read, write and execute, together with two special rights, take (t) and grant (g), defined as follows: if a subject s has the t right for an object o, then it can take any of o's rights; if a subject s has the g right for an object o, then it can share (grant) any of its own rights with o. These rights resemble capabilities in the sense that a subject with a read right to an object o can take (read) o's capabilities, and a subject with a write capability to o can write (grant) its capabilities to o. The model is described as a graph G whose vertices represent the system's subjects and objects and whose edges are labelled with access rights. The model defines the following five primitives:

1. Create subject
2. Create object
3. Take
4. Grant
5. Remove (access rights)

Application of these operations changes the system state. The model describes the transfer of access rights in a system, and it captures many aspects of existing systems, especially those based on capabilities.

Question 8.7 What is cryptography?
Solution
Encryption techniques are used to strengthen security. A cryptographic system is shown in the figure below: the plain text P is encrypted with an encryption method E parameterized by a key k to produce the cipher text C = Ek(P); the cipher text travels over an insecure communication channel and is decrypted with a decryption method D parameterized by a key d to recover the plain text, P = Dd(C).

[Figure: Plain text P -> Encryption E (key k) -> Cipher text C = Ek(P) -> insecure communication channel -> Decryption D (key d) -> Plain text P = Dd(C).]


Figure 8.7 Model for Cryptography technique

In the above figure, the original text is called the plain text or clear text. It is encrypted using some encryption method parameterized by a key, and the result is called the cipher text. The cipher text may be stored, transmitted through a communication medium such as wires or radio links, or carried by a messenger. The plain text is recovered by decrypting the message using the decryption key. When the encryption key k and the decryption key d are the same shared secret (or easily derived from one another), the system is said to be symmetric. This whole technique is known as cryptography. There are three basic types of code-breaking attacks:

I. A cipher-text-only attack occurs when an adversary comes into possession of only the cipher text.
II. A known-plain-text attack occurs when the intruder has some matched portions of the cipher text & plain text.
III. The most dangerous is the chosen-plain-text attack, in which the attacker has the ability to encrypt pieces of plain text of his own choosing.

Good encryption algorithms are designed to resist all of these attacks and so secure the plain text.

Question 8.8 What is conventional cryptography? Explain three important techniques to secure the plain text.
Solution
In conventional cryptography a substitution cipher can be obtained by using a function that operates on the plain text & the key to produce the cipher text. A binary function is applied pair-wise to letters of the plain text & of the key, the letters of the key being used in sequence. The following are the three important techniques:

Function-Based Substitution Cipher


The following shows the exclusive-OR (XOR) function applied to a numeric representation of the plain text & of the key. Letters are numbered starting with A = 1 and ending with Z = 26. After assigning these numbers to the plain text we take the key, here the three-letter combination EFG, which is repeated as often as necessary; let the plain text be JULIUSCAESAR. With the key EFG, applying the exclusive-OR relationship pair-wise hides the plain text and produces the cipher text shown below:

                  J  U  L  I  U  S  C  A  E  S  A  R   Plain text
Plain text       10 21 12 09 21 19 03 01 05 19 01 18
Key EFG          05 06 07 05 06 07 05 06 07 05 06 07
------------------------------------------------------
Cipher text      15 19 11 12 19 20 06 07 02 22 07 21   (Plain XOR Key)
                  O  S  K  L  S  T  F  G  B  V  G  U   Cipher text

In the plain text J is 10 and in the key E is 5; the remainder of the cipher text is produced by the pair-wise XOR of the letters of the plain text & of the key. The plain text is recovered from the cipher text with the help of the key EFG, by taking the exclusive OR with the cipher text once more. This is shown below:

                  O  S  K  L  S  T  F  G  B  V  G  U   Cipher text
Cipher text      15 19 11 12 19 20 06 07 02 22 07 21
Key EFG          05 06 07 05 06 07 05 06 07 05 06 07
------------------------------------------------------
(Cipher XOR Key) 10 21 12 09 21 19 03 01 05 19 01 18   Plain text
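A few lines of C reproduce the table above. The A = 1 ... Z = 26 numbering and the repeating key follow the description in the text; the function name is chosen for the example.

#include <stdio.h>
#include <string.h>

/* Encrypt (or, because XOR is its own inverse, decrypt) 'text' in place
 * using a repeating key, with letters numbered A = 1 ... Z = 26. */
static void xor_cipher(char *text, const char *key)
{
    size_t klen = strlen(key);
    for (size_t i = 0; text[i] != '\0'; i++) {
        int p = text[i] - 'A' + 1;            /* numeric value of plain letter */
        int k = key[i % klen] - 'A' + 1;      /* numeric value of key letter   */
        text[i] = (char)('A' + (p ^ k) - 1);  /* back to a letter              */
    }
}

int main(void)
{
    char msg[] = "JULIUSCAESAR";
    xor_cipher(msg, "EFG");
    printf("cipher text: %s\n", msg);         /* prints OSKLSTFGBVGU           */
    xor_cipher(msg, "EFG");                   /* applying it again decrypts    */
    printf("plain text:  %s\n", msg);         /* prints JULIUSCAESAR           */
    return 0;
}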


One Time Pad Technique
A substitution cipher can be made unbreakable by using a long, nonrepeating key; such a key is called a one-time pad. It may be formed, for example, from the words of a book, starting at a specific place known to both sender and receiver. One-time-pad ciphers are unbreakable because they give no information to the cryptanalyst. The primary difficulty with one-time pads is that the key must be as long as the message itself, so key distribution becomes a problem, and a different pad must be used for each communication. In the example below the plain text is taken in its ASCII representation, and the key FOREXAMPLEST is also taken in ASCII:

Plain text             J  U  L  I  U  S  C  A  E  S  A  R
Plain text ASCII       74 85 76 73 85 83 67 65 69 83 65 82
Key FOREXAMPLEST       70 79 82 69 88 65 77 80 76 69 83 84
------------------------------------------------------------
Cipher text (XOR)      12 26 30 12 13 18 14 17 09 22 18 06

The plain text is recovered from the cipher text with the help of the key FOREXAMPLEST, by taking the exclusive OR with the cipher text once more:

Cipher text            12 26 30 12 13 18 14 17 09 22 18 06
Key FOREXAMPLEST       70 79 82 69 88 65 77 80 76 69 83 84
------------------------------------------------------------
Plain text ASCII (XOR) 74 85 76 73 85 83 67 65 69 83 65 82


Columnar Transposition Technique
Another class of ciphers, called transposition ciphers, operates by reordering the plain-text symbols. For example, let the plain text be ENCRYPTION IS PERFORMED BY WRITING THE PLAIN TEXT and let the key be CONSULT; the key letters are numbered according to their relative order in the alphabet, and the plain text is written under the key in columnar fashion:

C O N S U L T      KEY WORD
1 4 3 5 7 2 6      COLUMN NUMBER
E N C R Y P T      PLAIN TEXT
I O N I S P E
R F O R M E D
B Y W R I T I
N G T H E P L
A I N T E X T

Cipher text: EIRBNAPPETPXCNOWTNNOFYGIRIRRHTTEDILTYSMIEE

The encryption key is a word or phrase that contains no repeating letters, here CONSULT. Encryption is performed by writing the plain text into the matrix row by row, with each row length corresponding to the key length (here 7). The matrix must be full, so any left-over space in the last row is padded with filler characters. The columns are numbered in some prearranged way, here by the relative ordering of the keyword letters in the alphabet, and the cipher text is obtained by reading the columns in that order, all letters from top to bottom in column 1 followed by the other columns in sequence.
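The column-reading step can be expressed compactly in C. The sketch below reproduces the CONSULT example; it assumes the plain text already fills the matrix exactly (as it does here) and performs no padding, and the function name is invented for the illustration.

#include <stdio.h>
#include <string.h>

/* Columnar transposition: write 'plain' row by row under the key,
 * then read the columns in the alphabetical order of the key letters. */
static void transpose(const char *key, const char *plain, char *cipher)
{
    size_t klen = strlen(key), plen = strlen(plain);
    size_t rows = plen / klen;            /* assumes plen is a multiple of klen */
    size_t out = 0;

    for (char order = 'A'; order <= 'Z'; order++)        /* columns in key order */
        for (size_t col = 0; col < klen; col++)
            if (key[col] == order)
                for (size_t row = 0; row < rows; row++)
                    cipher[out++] = plain[row * klen + col];
    cipher[out] = '\0';
}

int main(void)
{
    char cipher[64];
    transpose("CONSULT", "ENCRYPTIONISPERFORMEDBYWRITINGTHEPLAINTEXT", cipher);
    printf("%s\n", cipher);   /* EIRBNAPPETPXCNOWTNNOFYGIRIRRHTTEDILTYSMIEE */
    return 0;
}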


Question 8.9 What are public and private keys? Explain the Rivest-Shamir-Adleman (RSA) algorithm of cryptography.
Solution
One of the important uses of cryptography is to provide security of message communication over insecure channels such as computer networks. In conventional cryptography the key is a secret shared between a sender & receiver pair. Public-key cryptography removes the need for such a shared secret: public-key algorithms are based on the use of a public enciphering transformation E and a private deciphering transformation D. For each user A, the private transformation DA is described by a private key, and the public key EA is derived from the private key using a one-way transformation. If P is the plain text, the two transformations are related as

DA(EA(P)) = P

Encryption uses the public key and decryption is performed with the private key. The Rivest-Shamir-Adleman (RSA) algorithm is based on number theory and is widely used; breaking it is believed to be computationally infeasible. In RSA one chooses d and then finds the unique integer e such that ed mod f(n) = 1. The RSA algorithm is given below:

1. Choose two large primes p & q, each greater than 10^100.
2. Calculate n = pq and f(n) = (p-1)(q-1).
3. Choose d to be a large random integer that is relatively prime to f(n), i.e. gcd(d, f(n)) = 1.
4. Find e such that ed mod f(n) = 1.

These parameters may be used to encipher a plain text P, where 0 <= P < n; if the plain text is longer, it must be broken into blocks smaller than n. The cipher text is obtained as C = P^e mod n, and C may be decrypted as P = C^d mod n, so that encryption and decryption are inverses of each other.


The encryption and decryption for a small example are shown below:

Encryption: C = P^e mod n (with e = 3)        Decryption: P = C^d mod n (with d = 7)

Figure 8.8 Encryption & Decryption by RSA

For example, as per the RSA algorithm, consider the two values p = 3 & q = 11. This gives n = pq = 3 x 11 = 33 and f(n) = (3-1)(11-1) = 20. The private key d is chosen as 7, which is relatively prime to 20 (the two have no common factors). The number e is found from 7e mod 20 = 1, giving e = 3. These parameters are used to encipher the plain text SAMPLE; the receiver B obtains the original plain text by decrypting the cipher text with his own private key d = 7. This is shown below:

      P    P^3      P^3 mod 33 = C    C^7              C^7 mod 33 = P
S     19   6859     28                13492928512      19   S
A     1    1        1                 1                1    A
M     13   2197     19                893871739        13   M
P     16   4096     4                 16384            16   P
L     12   1728     12                35831808         12   L
E     5    125      26                8031810176       5    E

       Sender A                                  Receiver B

The security of RSA is based on the difficulty of factoring large numbers; no breaking of the RSA cipher has been reported in over a decade.
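The whole worked example reduces to modular exponentiation. The sketch below implements it in C for the toy parameters above (p = 3, q = 11, e = 3, d = 7); real RSA uses primes hundreds of digits long and a big-number library.

#include <stdio.h>

/* Compute (base^exp) mod m by repeated squaring. */
static unsigned long long power_mod(unsigned long long base,
                                    unsigned long long exp,
                                    unsigned long long m)
{
    unsigned long long result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main(void)
{
    const unsigned long long n = 33, e = 3, d = 7;   /* from p = 3, q = 11      */
    const char *word = "SAMPLE";

    for (const char *c = word; *c != '\0'; c++) {
        unsigned long long p = (unsigned long long)(*c - 'A' + 1);  /* A=1..Z=26      */
        unsigned long long cipher = power_mod(p, e, n);             /* C = P^e mod n  */
        unsigned long long plain  = power_mod(cipher, d, n);        /* P = C^d mod n  */
        printf("%c: P=%2llu  C=%2llu  decrypted=%2llu\n", *c, p, cipher, plain);
    }
    return 0;
}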


Question 8.10 Write a brief note on the concept of Digital Signatures.
Solution
Digital signatures are an important application of cryptography. A digital signature is a way of marking an electronic message, and digital signatures are used in computerized commercial and financial transactions. Suppose B receives a message M signed by A. The digital signature must satisfy the following requirements:

A. It must be possible for B to validate A's signature on M.
B. It must be impossible for anyone to forge A's signature.
C. It must be impossible for A to repudiate the message M.

Digital signatures may be implemented by using RSA public-key encryption in a way that provides both secrecy and authenticity of messages. Sender and receiver each perform a double transformation of the message. The sender first applies his private transformation to obtain DA(P) and then enciphers the result using B's public key EB; the doubly transformed message C = EB(DA(P)) is sent to B. The receiver first applies his private deciphering transformation and then applies A's public transformation to recover P as follows:

EA(DB(C)) = EA(DB(EB(DA(P)))) = EA(DA(P)) = P

Figure 8.10 The Concept of Digital Signature
[The figure shows the message path P -> DA (A's private key) -> EB (B's public key) -> communication channel -> DB (B's private key) -> EA (A's public key) -> P.]

The process of the digital signature is shown in Figure 8.10. The above is an implementation using RSA, and the scheme ensures both authenticity and secrecy of the message. To meet the third requirement for digital signatures, B can store the intermediate appearance of the message after decryption with its private key, i.e. DA(P). If A denies having sent the message, B can produce both the plain text P and DA(P). Digital signatures may also be implemented using conventional cryptography, but the most reliable implementation is the one based on the RSA algorithm.

CHAPTER IX

DISTRIBUTED OPERATING SYSTEM

This chapter is not in the course of the U.P. Technical University syllabus; it is written on the basis of the IGNOU course structure. All the important algorithms related to the following syllabus have been covered: Definition of the Distributed Operating System, Algorithms for Distributed Processing, Mutual Exclusion in Distributed Systems, Coping with Failures, Models of Distributed Systems, Remote Procedure Calls, Distributed File Systems.

Question 9.1 Define the distributed operating system. What are the benefits of the distributed operating system?
Solution
A distributed computer system is a collection of autonomous computer systems capable of communication & cooperation via their hardware & software interconnections. Distributed computer systems are chiefly characterised by:

a. Absence of shared memory.
b. Unpredictable internode communication delays.
c. No global system state observable by the component machines.

The key objective of a distributed operating system is transparency. The following are the important benefits of a distributed operating system:

1. Resource sharing & load balancing.
2. Communication & information sharing.
3. Incremental growth.
4. Reliability, availability, fault tolerance.
5. Performance.

Distribution basically has the following three dimensions:
(a) Hardware
(b) Control
(c) Data

System resources generally fall into one of two categories:
1. Physical resources, like processors & devices.
2. Logical resources, like files & processes.

Distributed algorithms & distributed processing involve multiple processes that execute on different nodes. These processes need to communicate in order to cooperate, and to synchronise in order to maintain system integrity while they access shared resources. Centralised control requires an arbiter node that manages a set of system resources and has complete information about the state of those resources. The problems with centralised control in distributed operating systems are:

a) A single point of failure.
b) Traffic congestion at the control node.
c) Delays in obtaining permits.

Hierarchical and decentralised control can eliminate most of these problems. Distribution of data may be achieved by partitioning, replication or both; the flexibility to partition & replicate arbitrary subsets of data is an important benefit of distributed processing. In a distributed system, cooperating processes may reside on different nodes, so interprocess communication & synchronization are accomplished by exchanging messages. The two basic components of distributed control algorithms are:

1. Processes
2. Communication paths

A number of distributed algorithms are based on a request/reply protocol in which a process sends a message to a group of other processes and then collects & analyses their replies. A communicating process should be able to wait on any one of a set of events. Communicating processes are often clients of higher-level communication protocols that provide for logical peer-to-peer communication, broadcasting & multicasting.


Question 9.2 Explain the following in the distributed operating system:
a. Important assumptions
b. Desirable properties
c. Time and ordering of events
d. The clock condition
Solution
a. Common Assumptions in Distributed Operating Systems
The following are the important assumptions made in distributed operating systems:

a) Messages exchanged by a pair of communicating processes are received in the order in which they were sent. This implies the absence of message overtaking in end-to-end transit and is sometimes referred to as the pipelining property.
b) Every message is delivered to its destination error-free in finite time, and there is no duplication of messages.
c) The network is logically fully connected, in the sense that each site can communicate directly with every other site.

b. Desirable Properties in Distributed Operating Systems
The following are the important desirable properties of distributed algorithms:

a) All nodes should have an equal amount of information.
b) Each node should make decisions solely on the basis of local information; the algorithm must ensure that different nodes make consistent & coherent decisions.


c) All nodes expand approximately equal effort in reaching the decision.

d) Failure of a node should not result in the breakdown of the

algorithm by affecting the ablity of other nodes to make their decisions & to access the available resources.

The cost of exchanging a message includes: a) Communication cost & delay incurred in the physical transfer

between two end points. b) The computational components i.e. message formation,

buffering & protocol processing at both ends. c. Time & Ordering of Events Sending & receiving of messages each constitutes a distinct event. This happend before relationship, denoted by à and is defined as follows: 1) If a & b are events in the same process & a before b then a à

b. 2) If a represents the sending of a message by one process & b the

receipent of the same message by another process then a à b. 3) The relation is transitive if a à b & b à c then a à c. Two

document events are said to be concurrent if no ordering exists between them i.e. if neither a happend before b nor b happend before a then a & b are concurrent.

Logical clocks provide a way to implement the happened-before relationship in a distributed system.
d. The Clock Condition
The clock condition is defined by: for any events a and b, if a → b then C(a) < C(b). From the definition of the → relation it follows that the clock condition is satisfied if the following two conditions hold:
C1. If a and b are events in process Pi and a comes before b, then Ci(a) < Ci(b).
C2. If a is the sending of a message M by process Pi and b is the receipt of that message by process Pj, then Ci(a) < Cj(b).
The conditions C1 and C2 can be satisfied by implementing the logical clocks so that the following hold:
· Process Pi increments its logical clock Ci between any two successive events.
· Messages are time stamped by the sending process in the following way: if event a is the sending of message M by process Pi, then the message M carries the time stamp TM = Ci(a).
· The receiving process adjusts its logical clock upon receipt of a message M as follows: upon receipt of message M, process Pj sets Cj to max(Cj + 1, TM + 1).
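The three rules above translate almost directly into code. The following is a minimal sketch of a Lamport logical clock in Python; the class and method names are illustrative, not taken from the text.

    class LamportClock:
        # Lamport logical clock satisfying conditions C1 and C2
        def __init__(self):
            self.c = 0                          # current logical time Ci

        def local_event(self):
            self.c += 1                         # C1: tick between successive events
            return self.c

        def send(self):
            self.c += 1                         # sending a message is itself an event
            return self.c                       # the returned value is the time stamp TM

        def receive(self, tm):
            self.c = max(self.c + 1, tm + 1)    # C2: move past the sender's time stamp TM
            return self.c

Because the receiver always advances its clock beyond the time stamp it receives, Ci(send) < Cj(receive) holds for every message, which is exactly condition C2.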

Question 9.3 Explain mutual exclusion in the distributed operating system. Describe with the help of an algorithm.
Solution Distributed processing uses messages for interprocess communication, so mutual exclusion is easily achieved by using a variation of the message-based technique given in the following program:

type message = record ... end;
const null = ------- ;             [empty message]

process user_x;
var msg : message;
begin
  while true do
  begin
    receive(mutex, msg);
    critical_section;
    send(mutex, msg);
    other_x_processing
  end [while]
end; [user x]
____________
____________
[other user processes]
____________
____________
[body of parent process]
begin [msg_mutex]
  create_mailbox(mutex);
  send(mutex, null);
  initiate users
end. [msg_mutex]

Figure 9.1 Mutual Exclusion in Distributed Operating System
Distributed algorithms are based on a circulating permit message, also called a token. For simplicity, one can assume that a distributed system consists of N nodes and that there is exactly one process at each node that may wish to enter a critical section. The requirement for mutual exclusion in a distributed system is to guarantee that at most one process may be in the critical section at any given point in time. Distributed algorithms must also ensure that the different requests are granted in the order in which they are made. For example, consider two processes P1 and P2 and one scheduling process P0. Suppose P1 sends a request to P0 and then sends a message to P2. Upon receiving the latter message, P2 sends its own request to P0. P2's request may reach P0 before P1's request does, since the pipelining property holds for each individual link but provides no guarantee across different links; P0 may then grant P2's request first and thus violate the stated requirement.
(Figure: P1 sends a request to P0 and a message to P2; P2's subsequent request may overtake P1's request at P0.)
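For comparison, the message-based scheme of Figure 9.1 can be imitated on one machine with a shared queue standing in for the mailbox. The sketch below is only an illustration of the idea, not the pseudocode of the figure; the names user, mutex and the loop bound are assumptions.

    import threading, queue

    mutex = queue.Queue()            # the "mailbox"; it holds at most one token
    mutex.put(None)                  # parent process places the initial (empty) token

    def user(x):
        for _ in range(3):
            token = mutex.get()      # receive(mutex, msg): block until the token arrives
            print("user", x, "in critical section")
            mutex.put(token)         # send(mutex, msg): pass the token on
            # other_x_processing would go here

    threads = [threading.Thread(target=user, args=(i,)) for i in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()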

Question 9.4 Explain the functioning of the Lamport algorithm in a distributed processing system.
Solution Assuming the pipelining property and delivery of all messages, the solution requires time stamping of all messages; in addition, each process maintains a request queue, initially empty, that contains request messages ordered by the relation →. The Lamport algorithm is described under the following points:
1. (Initiator i) When process Pi desires to acquire exclusive ownership of the resource, it sends the time stamped message request (Ti, i), where Ti = Ci, to every other process and records the request in its own queue.
2. (All other j, j ≠ i) When process Pj receives the request (Ti, i) message, it places the request on its own queue and sends a time stamped reply (Tj, j) to process Pi.

3. Process Pi is allowed to access the resource when the following two conditions are satisfied:
(a) Pi's request is at the front of its queue.
(b) Pi has received a message from every other process with a time stamp later than (Ti, i).

4. Process Pi releases the resource by removing the request from its own queue and by sending a time stamped release message to every other process.

5. Upon receipt of Pi's release message, process Pj removes Pi's request from its request queue.
The description of the above algorithm is very simple, and its correctness follows from rule 3. The relation → provides a total ordering of events in the system and in the per-process request queues, and rule 3(a) permits only one process to access the resource at a time. The solution is deadlock free because the time stamp ordering of requests precludes the formation of wait-for loops. The communication cost of the algorithm is 3(N-1) messages: (N-1) request messages, (N-1) reply messages and (N-1) release messages. Since the request and release notifications are effectively broadcasts, the algorithm performs better in a broadcast-type network such as a bus.
Question 9.5 Explain the Ricart and Agrawala algorithm of the distributed operating system with time and ordering of events and clock conditions.


Solution This algorithm is a more efficient version of Lamport's algorithm. It is based on identical communication assumptions and on the total ordering of events provided by the → relation. The algorithm is described below:
1. (Initiator i) When process Pi wishes to acquire the resource, it sends the time stamped message request (Ti, i), where Ti = Ci, to every other process and records the request in its own queue.
2. (All other j, j ≠ i) When process Pj receives the request message, it acts as follows:
(a) If Pj is not currently requesting the resource, it returns a time stamped reply.
(b) If Pj is currently requesting the resource and the time stamp of its own request (Tj, j) precedes (Ti, i), process Pi's request is retained and the reply is deferred. Otherwise a time stamped reply message is returned.
3. Process Pi is allowed to access the resource when the following two conditions are satisfied:
(a) Pi's request message is at the front of the queue.
(b) Pi has received a reply from every other process with a time stamp later than (Ti, i).
4. When process Pi releases the resource, it sends a reply message for each pending request message.
The description of the above algorithm is easily understandable. The algorithm is more efficient since it requires at most 2(N-1) messages to service a request.
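The reply-deferral rule of the algorithm can be sketched as below. This is an illustrative, single-process Python simulation of the per-node bookkeeping only; there is no real network, and the Node class, its method names and the direct calls between peers are assumptions made for the sketch. Lamport's algorithm differs mainly in that a reply is always returned immediately and a separate release message is broadcast on exit.

    class Node:
        # per-node state for the Ricart-Agrawala reply-deferral rule
        def __init__(self, ident, peers):
            self.id = ident
            self.peers = peers            # other Node objects, filled in once all exist
            self.clock = 0                # Lamport clock
            self.requesting = False
            self.my_ts = None             # (Ti, i) of the outstanding request
            self.replies = 0
            self.deferred = []            # requests answered only at release

        def request(self):
            self.clock += 1
            self.requesting, self.my_ts, self.replies = True, (self.clock, self.id), 0
            for p in self.peers:          # send request(Ti, i) to every other node
                p.on_request(self.my_ts, self)

        def on_request(self, ts, sender):
            self.clock = max(self.clock, ts[0]) + 1
            if self.requesting and self.my_ts < ts:
                self.deferred.append(sender)   # own request precedes: defer the reply
            else:
                sender.on_reply()              # otherwise reply at once

        def on_reply(self):
            self.replies += 1
            if self.replies == len(self.peers):
                print("node", self.id, "enters the critical section")

        def release(self):
            self.requesting = False
            for p in self.deferred:            # answer every pending request
                p.on_reply()
            self.deferred = []

A two-node usage of the sketch: a, b = Node(1, []), Node(2, []); a.peers, b.peers = [b], [a]; then a.request() lets node 1 enter once node 2 replies, and a.release() answers any deferred requests.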


Question 9.6 How many types of failures occur in the distributed operating system.
Solution One of the major benefits of distributed processing is resilience to failures and increased system availability. However, due to its dependence on multiple nodes and communication links, a distributed system may also be less reliable than a single site centralised system. The common failures in distributed systems are listed below:
· Communication link failures
· Node failures
· Lost messages
For proper functioning, the rest of the system must
(1) detect failures;
(2) determine the cause, i.e. identify the type of failure and the failed component;
(3) reconfigure the system so that it can continue to operate;
(4) recover when the failed component is repaired.

On temporarily idle links, failures may be detected more readily by exchanging & timing "Are you alive" & "I am alive" messages. The use of time outs is a common technique for detecting the missing response or acknowledgements. The choice of a specific time out value presents some practical problems. Too short a time out may trigger false alarms by declaring as missing messages that are just delayed. Short time outs require the communication subsystem to deal with the duplicate messages.
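A minimal sketch of timeout-based failure detection is given below (Python, UDP datagrams; the probe text, the address argument and the specific constants are made-up placeholders, since the text does not prescribe any particular interface). It illustrates the trade-off just described: the timeout value and retry count decide how quickly a failure is declared and how likely false alarms and duplicate probes become.

    import socket

    def is_alive(address, timeout=0.5, retries=3):
        # probe a node with "Are you alive" datagrams and wait for "I am alive"
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(timeout)               # too short: false alarms and duplicate probes
        try:
            for _ in range(retries):        # a few retries before declaring a failure
                s.sendto(b"Are you alive", address)
                try:
                    data, _ = s.recvfrom(64)
                    if data == b"I am alive":
                        return True
                except socket.timeout:
                    continue                # probe or reply lost or late; try again
            return False                    # declared failed after all retries time out
        finally:
            s.close()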


Because of the possibility of lost messages, it is common to go through a few retries before declaring a failure to communicate. When a predetermined number of retries fails to be acknowledged within the time out limit, the meaning is that a failure has occurred. In a distributed system there is no direct way for a node to determine the cause of a failure, such as a link or a node, on the basis of a missing response alone, although some system component must engage in detection of the type of failure so that appropriate action may be taken. Failure of a node can result in a simple stoppage, called fail-stop. Reconfiguration in the case of link failures consists of choosing an alternate path and updating the corresponding routing information at all affected nodes. Recovery from link failures thus consists of a relatively simple updating of the routing information. Node recovery is more complex, since it requires updating of state information and possibly replaying of missed messages by some of the active nodes. In some systems a recovering node slowly brings itself up to date by querying other nodes for state information.
Question 9.7 What is a lost token and how can it be regenerated. Explain with the help of Misra's algorithm.
Solution Messages are a primary form of interprocess communication in a distributed system. Messages are exchanged between endpoints through mailboxes. A distributed, message-based solution to the mutual exclusion problem is provided by creating a special type of message called a token, which circulates among the nodes in a mutually exclusive manner. In the circulating-permit scheme, each node that wishes to access the shared resource follows a simple protocol.


The protocol is defined by: receive(token); use_resource; send(token). The above scheme breaks down when the token is lost. Many schemes for detecting token loss depend on time outs and require all process identities to be known to all nodes. Misra's algorithm regenerates the lost token; it is described below. Consider that the nodes in the system form a logical ring. The algorithm uses two independently circulating tokens, each of which serves to detect the possible loss of the other. Call them ping and pong, with values nping and npong respectively; the value of each token counts the number of times the two tokens have met. The values of the two tokens have opposite signs. Initially the token values are nping = 1 and npong = -1. The two tokens circulate around the ring in opposite directions. Whenever they meet at a node, the value of nping is incremented by 1 and the value of npong is decremented by 1, so the numeric values of the two tokens are related by the invariant nping + npong = 0. Preservation of this invariant indicates the presence of both tokens in the system; otherwise a token is lost and must be regenerated. Also consider that each process Pi maintains a variable ni that records the value of the last token that the process has seen. Initially ni is set to 0. The algorithm operates as per the following points:
1. When the two tokens meet at a node, the node sets their associated values as follows: nping = nping + 1; npong = npong - 1;


2. When a node i receives the token (ping, nping) it acts as follows:

(a) If ni ≠ nping, the node sets ni = nping and relays the token; otherwise

(b) If ni = nping, the pong token is lost and needs to be regenerated. The node sets

nping = nping + 1; npong = -nping; and sends both tokens along: send(ping, nping) and send(pong, npong). When node i receives the message (pong, npong), it acts as in step 2, just reversing the roles of ping and pong.
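The per-node rule can be sketched compactly as follows (Python; the relay callback that forwards a token to the next node on the ring and the node dictionary are assumptions, not part of the text). The handler for pong is identical with the roles and signs reversed.

    def on_ping(node, nping, relay):
        # node['n'] remembers the value of the token this node saw last
        if node['n'] != nping:
            node['n'] = nping               # step 2(a): record the value and relay
            relay('ping', nping)
        else:
            nping += 1                      # step 2(b): the other token was lost
            npong = -nping                  # regenerate it, keeping nping + npong = 0
            node['n'] = nping
            relay('ping', nping)
            relay('pong', npong)

For example, with node = {'n': 0}, calling on_ping(node, 1, lambda kind, v: print(kind, v)) simply records and relays the token; a second call with the same value regenerates pong.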

Misra's algorithm is important for the regeneration of a lost token and is widely used in the design of software.
Question 9.8 What is election of a successor. Clearly explain with the help of the Chang and Roberts algorithm.
Solution The Chang and Roberts algorithm is an interesting and widely applicable technique based on selective extinction of messages. It is assumed that all nodes in the system are connected via a logical unidirectional ring. The nodes are numbered 1 to N and each node knows its own number. Any node can start the election when it detects failure of the control (coordinator) node. The node marks itself as a participant and communicates the election message, together with its number, to the next node down the ring (say, its left neighbor). Each recipient of the election message becomes a participant, compares the received number to its own and sends the greater of the two numbers to its neighbor. The overhead is reduced by this selective extinction of messages as


soon as the bids that they carry become recognized as unwinnable. The election is completed when a participating node receives its own identifier. That node becomes the elected new coordinator and, if necessary, transmits its identity to the other nodes. The steps of the algorithm are defined below:
1. When node i detects failure of the current coordinator, it marks itself as a participant and sends the election message to its closest neighbor in the direction of the ring flow: send(election i).

2. When node j receives an election message, say (election k), it acts as follows:
(i) If k > j, node j marks itself as a participant and relays the election message, i.e. send(election k).
(ii) If k < j and the node is not marked as a participant, it sends the election message with its own number, send(election j).
(iii) If k = j, node j is elected, and it sends the message send(elected j).
3. (Other nodes) Upon receiving the (elected j) message, node l unmarks itself as a participant, records the identity of the new coordinator and, if j ≠ l, relays the message send(elected j).
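The three steps can be written as a single message handler per node. The following Python fragment is illustrative only; the send_to_neighbor callback, the node dictionary and the message tuples are assumptions. It shows the selective extinction of lower-numbered bids.

    def on_message(node, msg, send_to_neighbor):
        # node is a dict: {'id': j, 'participant': False, 'leader': None}
        kind, k = msg
        if kind == 'election':
            if k > node['id']:                            # step 2(i): relay the larger bid
                node['participant'] = True
                send_to_neighbor(('election', k))
            elif k < node['id'] and not node['participant']:
                node['participant'] = True                # step 2(ii): substitute own number
                send_to_neighbor(('election', node['id']))
            elif k == node['id']:                         # step 2(iii): own bid came back
                send_to_neighbor(('elected', node['id']))
            # otherwise (k < id and already a participant): extinguish the message
        else:                                             # an 'elected' announcement
            node['participant'] = False                   # step 3
            node['leader'] = k
            if k != node['id']:
                send_to_neighbor(('elected', k))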

In step 2(ii), if the node is already a participant, it simply extinguishes the received message, because its own bidding message, which carries a higher number, is already on the ring. The correctness of the algorithm follows from the observation that the unidirectional nature of the ring ensures that the message with the highest number must make a complete circuit and be seen by all other nodes. Upon reaching its participating originator, the message must carry the highest active number in the system. The unique node numbering ensures that the election is unambiguous regardless of which node initiated it. With respect to performance, the fastest election occurs when all the nodes initiate the election at the same time, completing in an amount of time proportional to one round around the ring, or (n-1) internode delays. The worst case in terms of timing is when the left neighbor of the maximum-numbered active node is the sole initiator of the election. In this case two circuits around the ring, yielding 2(n-1) message propagation delays, are incurred for the election: one round for the election message to reach the maximum node and another for it to circulate its modified message uncontested.
Question 9.9 Explain in brief the different models and configurations of the distributed systems.
Solution The following are the four basic models and configurations of distributed systems:
a. The Host Based Model
This model is an outgrowth of time sharing systems. A popular variation of this model is clustering of the processors and disks via a high speed local interconnect, as in VAX clusters. It is also known as the minicomputer model. Host based distributed systems retain full local autonomy for the participating nodes and use the network to provide remote logins, file transfer and electronic mail. Users of this model have to obtain node specific passwords for each node whose resources they intend to access, except for electronic mail.
b. The Processor Pool Model


In this model, at log in time, the system assigns the user to a processor from the pool, taking into account considerations like proximity, system state and individual processor utilization. Users may be allowed to designate a specific processor for a log in. The terminals are often serviced by and connected to the network via dedicated front end processors, sometimes called terminal interface processors (TIP). Separation of terminals from the processors requires high bandwidth TIP to processor pool links and thus restricts this model to implementations on local area networks.
c. The Workstation/Server Model
In this, a workstation is dedicated to each user, at least for the duration of a log in session. Common system and shared resources are provided in the form of specialized servers. Servers and user workstations are connected through a LAN. The following figure shows a typical workstation/server based distributed system.
Figure 9.2 Workstation Model of Distributed Processing
In this, common services are invoked using the client/server model, with servers running the server portion. A server is a collection of the hardware and software components that provide a common service to a priori unknown clients. Some of the more common server types include file servers, compute servers, print servers, name servers, authentication servers, network gateway servers, terminal servers and electronic mail servers. Depending on the size of the system, servers may be implemented on dedicated machines. Graphics is a very attractive aspect of the workstation model. User workstations typically provide the user interface and the processing power to execute some applications and most operating system commands locally. The workstation may also contain local secondary storage like a hard disk.


This model of distributed processing is implemented by extending, rather than redesigning, the existing hardware and software. A common baseline workstation operating system, like Unix, is augmented with a software layer that can distinguish whether a resource requested by an application is local or remote. Local requests are completed as usual, and remote ones are routed to the appropriate server.
d. The Integrated Model
In this model, distribution of resources and services is fully transparent to the application programs. File names are global, and there is a single set of system primitives to access them regardless of their geographic location. The use of a uniform procedure call mechanism erases syntactic differences between local and remote calls. In this model, individual computer systems may be workstations or multiuser systems; each runs the same set of software and deals with its own applications. When a user requests a program to be executed, the system selects a computer system on which it is to run, with some affinity for the initiating site, and then locates and loads the executable program file onto it.
Question 9.10 What are remote procedure calls (RPC). What are the important advantages of RPC.
Solution In this approach, a single program is partitioned so that its pieces can execute on different nodes of a distributed system. Procedures are natural boundaries for program division because they are largely self-contained logical entities. A remote procedure call (RPC) facility must be provided to make the partitioned program work in a distributed environment; it extends the familiar mechanism for transfer of control and data within a single computer. When a remote procedure is invoked, the calling environment is suspended and the parameters are passed across the network to the node where the procedure is to execute,


called the callee. When the procedure finishes and produces its results, the callee's node returns them across the network to the caller's node. Remote procedure calls have a number of desirable properties:
· Clean and simple semantics facilitate correct implementation of distributed computations.
· RPC makes distribution largely transparent to the programmers.
· Existing applications developed for a single computer can be comparatively easily ported to a distributed environment.

· Procedures are one of the most common and well understood mechanisms for communication between parts of the algorithm in a single computer system.

· Remote procedures are a natural fit for the client/server model of distributed computing. Client routines can call on the services by remotely invoking the execution of the appropriate procedures on the server. Servers may provide common services by means of the public server procedures that a number of potential clients can call.

The major issues for implementing the RPC are: · Transfer of Control · Binding · Transfer of Data Question 9.11 What are the basic components of RPC. Solution The program structure of the remote procedure call is based on the concept of stubs. The following figure shows the basic components of the RPC.


Figure 9.3 Basic Components of RPC
In this, the caller machine contains a user stub and the RPC run time package. The callee machine has its own RPC run time routines, a server stub and the called routine, labeled server. The user's remote procedure call is linked on the user machine to the user stub, so the user's invocation of the remote procedure is converted into a local call to the corresponding procedure in the user stub. The user stub is responsible for placing a specification of the target procedure and its arguments into communication packets. The user stub then invokes the local RPC run time via a local procedure call to transmit those packets reliably to the callee machine. The RPC run time is responsible for the reliable delivery of packets; its functions include retransmissions, acknowledgements, packet routing and optional encryption. The server (callee) RPC run time receives the packets, passes them to the appropriate server-stub procedure and supplies the received parameters. When the called procedure completes execution, it returns control and results to the server stub. The server stub passes the results to the server RPC run time for transmission to the caller's machine. The caller's RPC run time passes the result packets to the user stub. Upon unpacking them, the user stub completes the process by making a local return to the calling routine. RPC semantics thus follow those of a single machine procedure call. Many RPC implementations provide automatic tools, like interface languages, to facilitate generation of stubs. A program module that calls procedures from an interface is said to import the interface. Automatic stub generators are software utilities that can use these interfaces to generate the appropriate stubs with little or no user involvement. Implementation of stubs is much easier in programming languages and environments that support separate compilation. In a single computer system, binding or linking consists of the following:


1. Locating the component modules.
2. Resolving address references to produce an executable program image.
Binding may be performed at compile time or at run time. In RPC, the RPC package is responsible for determining the machine address of the callee and for specifying to it the procedure to be invoked. These operations entail locating the exporter and local binding therein. A simple way to identify exporters is to broadcast the interface and then choose among the appropriate respondents, but this approach places a burden on the communication links. Another effective approach is to have exporters advertise their wares by registering them with a broker. Either at compile time or at run time, the caller's node initiates the binding by communicating with the binding server. For the flow of data, RPC is supposed to pass all the parameters at the time of a call. The caller and the callee execute on different nodes and in separate address spaces, so all the parameters must be passed explicitly and by value. In systems without shared memory, global variables and call by reference cannot be supported by RPC. One of the major issues in RPC implementations is support for heterogeneity: differences in software, hardware or both can lead to incompatible representations of data types at different machines in a distributed system. When RPC is used for bulk data transfers, such as files and pages of virtual memory, the caller/server interaction consists of a series of packet exchanges, and custom RPC run time protocols can place an additional burden on server design. Client failures can cause the orphan processes, created for procedure execution, to linger at the server. RPC semantics can assume one of the following types:
· Exactly once
· At most once
· At least once


· Maybe
Process creation is a frequent operation at servers that have to maintain multiple concurrent client sessions.
Question 9.12 Write a brief note on RPC versus message passing.
Solution Remote procedure calls are a mechanism for distributing an application, on procedure call boundaries, across multiple machines. Messages are an interprocess synchronization and communication mechanism suitable for use by cooperating but separate and largely autonomous processes. So the two mechanisms serve different purposes and are not direct substitutes for each other. When one of these mechanisms is used for other than its primary purpose, some overlap exists and a choice between them may have to be made. The following points compare RPC and message passing:
Transfer of Control As in a single computer system, RPCs are synchronous. Messages provide both synchronous and asynchronous varieties of the sender/receiver relationship. Asynchronous messages can improve concurrency, at the cost of having to deal with the buffering of messages that are sent but not yet received.
Binding RPCs rely on multimachine binding for proper processing of remote invocations. Message passing does not require binding; it requires a target port or a recipient to be identified by the naming mechanism at run time. The asymmetric nature of RPC requires a caller to be bound to one server and to await its reply.


This property is not convenient for implementing broadcasts and multicasts, which messages can handle quite naturally.
Transfer of Data RPCs impose some sort of parameter typing and typically require all parameters to be furnished at the time of the call. Messages are less structured and allow the sender and the receiver to freely negotiate the data format, so messages tend to be easier to adapt to heterogeneous environments.
Fault Tolerance Failures are difficult to deal with in both environments. Messages have the convenient property that a receiver can act on a message logically independently of the failure or termination of the sender; an operation such as updating a shared variable may still complete despite the sender's failure after sending the message. In the RPC model, failure of the client creates an orphan process that is typically aborted by the server.
Question 9.13 What is a distributed file system.
Solution It is based on the workstation model, in which servers provide common services. A service is a software entity that implements a well defined function and executes on one or more machines. A server is a subsystem that provides a particular type of service to clients unknown a priori. A given machine becomes a server by running the service, exporting the client interface and announcing the fact by registering with a name service. Depending on the requirements imposed by running the service, the server hardware can range from a more powerfully configured workstation to a dedicated machine optimized for the task at hand.


A file server can provide a number of valuable services, which are described below in brief:
Management of Shared Resources Entrusting administration of shared resources, like files and databases, to a server results in improved efficiency, security and availability.
Management of Backup and Recovery Backup and recovery procedures require specialized tools, skills and daily attention that may be unreasonable to expect from individual workstation users.
User Mobility Maintaining user files on a server rather than on a specific workstation facilitates user mobility and provides advantages like continuation of work in case of workstation failures, working away from the office, pooling of workstations in environments like universities, and support for portable computers.
Diskless Workstations These are desirable in some situations because they can reduce the cost of entry-level workstations, provide added security by preventing users from copying sensitive system data, and simplify system administration and software standardization by executing all applications from a server.


MODEL QUESTIONS OF U.P. TECHNICAL UNIVERSITY
1. (a) State three services provided by an operating system.
(b) Why is spooling required in a batch operating system.
(c) What are real time systems. What are the difficulties a programmer

must overcome in writing an operating system for a real time system.

(d) Why are distributed systems desirable. How does it differ from parallel system.

2. (a) What is the main advantage of multiprogramming. State two

security problems that can result in a multiprogramming environment.

(b) Describe the differences among short term, long term and middle term scheduling.

(c) Explain the layered approach in the operating system. What is the advantage of layered approach to the system design.

(d) Why are the system calls required. What is the difference between trap and interrupt.


3. (a) Which of the following instructions should be privileged.
(i) Set value of timer
(ii) Turn off interrupts
(iii) Read the clock
(iv) Change to monitor mode
4. (a) What is the critical section. How can we obtain a solution to

the critical section problem. (b) What are semaphore. How can it be implemented. (c) What do you mean by the context switching. 5. (a) Briefly explain the producer consumer problem with respect

to the cooperating processes. (b) What are interrupts. How is it handled. (c) Explain the concept of the synchronization with respect to

any classical problem. 6. (a) Consider the following processes: Process Arrival Time Burst Time P1 0.0 millisecs 6 millisecs P2 0.5 millisecs 4 millisecs P3 1.0 millisecs 2 millisecs Find the average turnaround time and the average waiting time with respect to FCFS, SJF, SRT scheduling algorithms. (b) What is deadlock. How is it different from starvation. (c) How can you prevent deadlock. Explain. 7. (a) Consider the following snapshot of a system Allocation Max Available r1 r2 r3 r4 r1 r2 r3 r4 r1 r2 r3 r4 P1 0 0 1 2 0 0 1 2 2 1 1 0

Page 206: Microsoft Word - OS

P2 2 0 0 0 2 7 5 0 P3 0 0 3 4 6 6 5 6 P4 2 3 5 4 4 3 5 6 P5 0 3 3 2 0 6 5 2 (b) Three processes are the resource units that can be reserved

and released only one at a time. Each process needs a maximum of 2 units. Show that a deadlock can not occur.

(c) Evaluate the Banker’s Algorithm for its usefulness in real life.

8. (a)What is paging. How is it different from segmentation. (b) Explain the working of multi partition allocation with respect to memory Management.

(c) What is the difference between external and internal fragmentation?

How can external fragmentation be overcome. 9. (a) What is virtual memory. What is its advantage. (b) Why do page faults occur. What actions are taken by OS to

service page faults. (c) What is thrashing. How can it be overcome. 10. (a) How is process management handled w.r.t. Unix Operating

System. (b) What should be the design objectives w.r.t. I/O

management. (c)Explain what is meant by I/O buffering. 11. (a) Explain the memory management system in Unix. (b) What is disk scheduling. (c) Briefly explain the organisation of I/O devices in an OS.


12. What are main advantages of multiprogramming. Describe the essential properties of batch system interactive system and timesharing system. 13. What are the various ways of process communication. What do you understand by critical section. 14. What are Long term and Short term and Medium term schedulers. What is the degree of multiprogramming. How is degree of multiprogramming related to stability of a system. 15. Describe the preemptive and non-preemptive form of priority

scheduling. 16. What are necessary conditions for deadlock. Describe

Deadlock prevention method of handling deadlock. 17. What is Bankers algorithm. Write safety and Resource

Request algorithm for the the same. 18. Explain the first fit, best fit and worst fit allocation algorithm

and compare them. 19. Describe segmentation and its implementation. 20. What is external fragmentation and internal fragmentation.

Describe one solution of external fragmentation. 21. Describe FCFS form of disk scheduling with an example. 22. Write short notes on :

(a) Evolution of operating system. (b) Real time system. (c) Buffering and spooling. (d) Process state transition.


23.(a)What are the different services provided by an operating system. Explain in brief.

(b) What is process. How it can changes its states in system.

24.(a) Describe inter process communication with the help of message system.

(b) Prove that in the bakery algorithm the following property holds: if Pi is in its critical section and Pk (k ≠ i) has already chosen its number[k] ≠ 0, then (number[i], i) < (number[k], k).

25. (a) Write a bounded buffer monitor in which the buffers are

embedded within monitor itself. (b) What do you understand by interrupt. How are they

handled while execution of a process.

26. (a) What are the performance criteria of a CPU scheduling algorithm. Explain Round – Robin scheduling

(b) Consider the following snapshot of a system:
          Allocation    Max        Available
          A B C D       A B C D    A B C D
    P0    0 0 1 2       0 0 1 2    1 5 2 0
    P1    1 0 0 0       1 7 5 0
    P2    1 3 5 4       2 3 5 6
    P3    0 6 3 2       0 6 5 6
    P4    0 0 1 4       0 6 5 6
Answer the following questions using the banker's algorithm.

(i) What is the content of the matrix Need? (ii) Is the system in a safe state? (iii) If a request from process P1 arrives for (0,4,2,0)

can the request be granted immediately.

27. (a) Consider the following set of process, with the length of the CPU-burst time given in milliseconds.


Process    Burst Time    Priority
P1         10            3
P2         1              1
P3         2              3
P4         1              4
P5         5              2
The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.

(i) Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, a nonpreemptive priority (a smaller priority number implies a higher priority), and RR (quantum = 1) scheduling.

(ii) What is the turn around time of each process for each of the scheduling algorithm in part (i).

(iii) What is the waiting time of each process for each of the scheduling algorithm in part (i).

(iv) Which of the scheduling in part (i) results in the minimal average waiting time (over all processes).

(b) What are the necessary conditions for a deadlock. Explain

each in brief.

28. What do you understand by virtual memory. Consider the addresses requested by a process having a page size of 100 bytes:
0745, 0012, 0130, 0237, 0090, 0398, 0060, 0444, 0239, 0377, 0001, 0367, 0259, 0179, 0200, 0010, 0199, 0700, 0078, 0180
for a memory with three frames.

(a) Get the memory reference string (b) Calculate the page fault rate for : (i) First in first out page replacement algorithm. (ii) Optimal page replacement algorithm. (iii) Least recently used page replacement algorithm.

(c) which of above algorithm do you suggest to implement & why.

29. (a) What do you understand by fragmentation. Explain each type in detail. What are the different techniques to remove fragmentation in the case of MFT and MVT systems.

(b) What is cache memory. How does it affect the system's performance. What are the different techniques of cache mapping. Explain the 4-way set associative cache mapping technique.

30. (a) Compare Unix and Dos operating system. (b) What are disk scheduling system. Consider that a

process requests for following tracks in a disk : 98,183,37,122,14,124,65,67

(i) Draw the track chart for FCFS, SSTF, SCAN, & C- SCAN

scheduling. (ii) Calculate total track movement in each case. (iii) Suggest the best scheduling. 31. (a) What do you understand by:

(i) seek time (ii) latency time (iii) transfer time


(b) Multithreading is a commonly used programming technique in UNIX system. Describe three ways that threads could be implemented.

MODEL QUESTION PAPER-1

OPERATING SYSTEM (MCA-242)

Time: Three hours Maximum Marks: 100 Note: Attempt all questions. Each carry equal marks. 1. Answer any four:

(a) What is an O.S. Describe the role of an O.S. as a Resource manager.

(b) Define swapping with practical example. When do we need it.

(c) Define Disk scheduling and give the names of various schemes.

(d) What is UNIX security policy for files. (e) What does a file system consist of. (f) Differentiate between static and dynamic library.

2. Answer briefly any five of following:

(a) Compare Multilevel queue scheduling with Multilevel feedback queue scheduling.

(b) Differentiate between Global replacement and local replacement strategy.

(c) Distinguish between blocking and non blocking I/O. Also explain what is an asynchronous system call.

(d) How does buffering support copy semantics. (e) Differentiate between UNIX semantics and session

semantics.


(f) Differentiate between loosely coupled system and lightly coupled system.

3. (a) If the block size of a file system is 512kbytes and assume that a file size is going to be between 256K Bytes to 768Kbytes,what can be problem with the current file system and suggest the solution to get rid of this problem. (b) What are the various modes of operation in which CPU operates. Explain the difference between the two. (c)What is the degree of multiprogramming. How long should it be. (d) What is OVERLAYS. Describe various memory allocation schemes in brief. (e) Describe kernel I/O subsystem. (f) Define a shell. What do you understand by a user’s environment.

3.(a) Explain how logical addresses are mapped into physical addresses in segmented memory management where each segment is in turn divided into equal sized pages. (b)Consider the following sequence of memory reference from a 460 word program:

10,11,104,170,73,309,185,245,246,434,458,364 (i) Show the page reference string for this sequence

assuming a page size of 100 words with 200 words of primary memory available to the program.

(ii) Compute number of page faults assuming FIFO and LRU page replacement policies.

3. Answer any TWO

(a) Define user services, turn around time, sharing and protection with respect to batch processing system.

(b) Describe Demand paging and Thrashing. What are the benefits of Demand paging?


(c) What are the various modes if I/O transfer. Describe DMAI/O.

4. (a) Define the critical section problem. Suggest a solution of the problem using software techniques. Any disadvantage. (b) Define a semaphore. Implement either the dining philosophers' correct solution or the readers/writers problem using semaphores.

5. Answer any TWO

(a) Define the context of process. How file descriptor and inode of a file are related to each other. Explain with the help of diagram.

(b) Define address binding? Explain the process of compile time, load time and execution time? Differentiate between logical and physical address? Can they be equal? If yes, under what condition.

(c) Describe FCFS, SJF and round robin methods of CPU scheduling.

6. Consider the following jobs to be executed on a single processor. Job No. Arrival Time CPU Burst Time (ms)

J1    0    4
J2    2    5
J3    5    6
J4    6    2
Compute the average waiting time for the jobs using
(i) SJF (ii) Preemptive SJF (iii) Round Robin, quantum = 1 ms

7. (a) Describe combined scheme of disk space allocation using 15 direct pointers, one single indirect, one double indirect, one triple


indirect. If block size is 2048 bytes and address bus size is 32 byte, what can be the maximum size of a single file. (b) What are the two fundamental versions of UNIX. Describe the process of users logging into UNIX OS in detail? Write a shell program to create a sub-directory(if this sub-directory does not exist already) in/users/workers/directory in the name of the date for the day when this program would be run. For e.g. /users/work/13-05-02.

8. (a) Give two protocols to prevent hold and wait condition and

one protocol avoid circular wait condition. (b) Give the strategies for recovery from deadlock once it has been

detected.

9. Compare and contrast the following allocation method for file (i) Contiguous (ii) linked (iii) indexed

10. Discuss the following directory structures

(i) Single level and two level (ii) Tree structured (iii) Acyclic graph structure and general graph structure

with respect to sharing and deletion.

11. Write short notes on any one (a) Disk scheduling algorithms (b) I/O Interrupts, status check, DMA technique of Data

transfer.


MODEL QUESTION PAPER-2

OPERATING SYSTEM (MCA-242) Time : Three hours Maximum Marks :100

Note : Attempt all questions. Each carry equal marks.

1. (a) What are the main services of the operating system. (b) Discuss the role of the Operating system as a resource manger in the life cycle of a process. (c) Explain Multiprogramming and its advantages.

2. (a) What is the purpose of the command interpreter. Why is it

usually separate from the kernel. (b) Explain the time- sharing and Real time system. (c) Draw the architecture of an operating system. 3. (a) Explain process control block (PCB), explain each component of PCB. (b) What do you mean by concurrent process. What are

semaphores. Give example; discuss its use in process synchronization.

4. (a) Explain deadlock. Explain the resource allocation graph algorithm.

(b) Explain the concept of critical section. How useful to get concurrency. (c) Explain procedures and consumers problem in term concurrency.

5. (a) Assume you have the following jobs to execute with one processor

Job    Run time    Priority
P1     10          3
P2     1           1
P3     2           3
P4     1           4
P5     5           2
The processes are assumed to have arrived in the order P1, P2, P3, P4 and P5, all at time 0.

(i) Draw four Gantt Charts illustrating the execution of these Process using FCFS, SJF a non Preemptive priority(a smaller priority number implies a higher priority) and Round Robin (quantum =1) scheduling.

(ii) What is the average turnaround time of each process for each of the above scheduling algorithms.

(iii) What is the waiting time of each process for of the above scheduling algorithms.

(b) Explain the difference between preemptive and non preemptive scheduling.

6. (a) What are the performance criteria for scheduling algorithms. (b) What is implementation problem for SJF scheduling

algorithms. (c) Explain the different type of scheduler.

7. (a) If a logical address space of 32 pages of 2048 words is mapped in to a physical memory of 64 frames then how many bits are these in:


(i) Logical address (ii) Physical address

(b) When does page faults occur. Describe the action taken by the Operating system when a page fault occurs. (c)Consider the following segment table:

Segment Base Length

0    219     600
1    2300    14
2    90      100
3    1327    580
4    1952    96

What are the physical addresses for the following logical addresses. (a) 0,430 (b) 1,10 (c) 2,500 (d) 3,40 (e) 4,112

8. (a) Explain the difference between internal and external fragmentation. (b) How many page fault occur for LRU page replacement

algorithm for the following reference string, for four page frames.

1,2,3,4,5,3,4,1,6,7,8,9,7,8,9,5,4,5,4,2. (c) Explain the concept of thrashing.

9. (a) Explain two disk scheduling algorithms.

(b) Describe the I/O system and also explain the working of DMA.

10. (a) How were the design goals of UNIX different from those of other operating systems in the early stages of UNIX development.

(b) Give various security features available in UNIX operating system.


MCA

FOURTH SEMESTER EXAMINATION, 2001-2002

OPERATING SYSTEM Time : Three Hours Total Marks : 100 Note: (1) Attempt ALL questions.

(2) All questions carry equal marks. 1. Attempt any FOUR parts :-

(5x4=20) (a) What is an Operating system. Discuss its any four

main functions. (b) Discuss the three properties of each of the following

types of operating systems:-

(i) Batch. (ii) Time sharing (iii) Real time

(c) Why is it expensive to switch between processes? Is it less expensive to switch between threads? Justify your answer.

(d) Discuss the following:-

(i) Multiprogramming (ii) Multitasking (iii) Time sharing

(e) Discuss the concurrency problem in detail.


(f) Three batch jobs A, B, C arrive at a computer centre at almost the same time. They have estimated running times of 10, 4 and 8 minutes. Their priorities are 3, 5 and 4 respectively, with 5 being the highest priority. For each of the following scheduling algorithms, determine the mean process turnaround time. Ignore process switching overhead. All jobs are CPU bound:-
(i) Round Robin
(ii) Pre-emptive multiprogramming.

2. Attempt any two parts of the following:- (10x2=20)

(a) Identify and discuss three methods of switching from user mode to supervisor mode.

(b) (i)Explain the steps that an OS goes through when CPU receives an interrupt. (ii)What is paging. Explain with proper example.

(c) Fill in the boxes below to get the solution for the readers-writers problem, using a single binary semaphore, mutex (initialized to 1), and busy waiting:-

int R = 0, W = 0;
reader() {
L1: wait(mutex);
    if (W == 0) {
        R = R + 1;
        signal(mutex);
    } else {
        /* reader does not read */
        R = R - 1;
        goto L1;
    }
    wait(mutex);
    R = R - 1;
    signal(mutex);
}
writer() {
L2: wait(mutex);
    if (R == 0) {
        signal(mutex);
        goto L2;
    }
    W = 1;
    signal(mutex);
    wait(mutex);
    W = 0;
    signal(mutex);
}

3. Attempt any TWO parts of the following :- (10x2=20)

(a) A soft real time system has two periodic events with periods of 50 and 200 milliseconds each. Suppose that the two events require 25 and 50 milliseconds of CPU time respectively. A new event takes ‘x’ milliseconds of CPU time and has period of ‘p’ milliseconds. Which combination of ‘x’ and ‘p’ is schedulable.

(b) A computer has 6 tape drives, with 'n' processes

competing for them. Each process may need two drives. For which maximum value of ‘n’ is the system deadlock free.


(c) Sets of processes are going to execute on a single CPU. They are:
Process    Arrival Time    CPU Time
1          0               14
2          3               12
3          5               7
4          7               4
5          19              7
Determine the sequence of execution for the Round Robin (quantum size = 4) CPU scheduling algorithm.

(d) Five jobs are waiting to be run. Their expected run

times are 9,6,3,5 and X. In what order should they be run to minimize average response time. (Your answer will depend on X).

4. Attempt any FOUR parts of the following :- (5x4=20)

(a) Consider a swapping system in which memory consists of the following holes size in memory order :

10K, 4K, 20K, 18K, 7K, 9K, 12K, and 15K. Which hole is taken for successive segment requests of 12K, 10K, 9K, for worst fit.

(b) If an instruction takes one microsecond and a page

fault takes an additional ‘n’ microsecond , give a formula for the effective instruction time if page fault occur every ‘k’ instructions.

(c) If a FIFO page replacement is used with four page

frames and eight pages, how many page faults will occur with the reference string 0,1,7,2,3,2,2,1,0,3, if the four frames are initially empty.

(d) How long does it take to load a 64 K program from

a disk whose average seek time is 30m-sec, whose rotation time is 20 m-sec , and whose track is (i) For a 2K page size. (ii) For a 4k page size. The pages are spread randomly around a disk.

(e) Explain the difference between internal and external

fragmentation. Which one occurs in paging system. (f) A computer provides each process with 65,536

bytes of address space divided into pages of 4096 bytes. A particular program has a text size of 32,768 bytes , a data size of 16,386 bytes , and a stack size of 15,870 bytes. Will this program fit in the address space? If the page size were 512 bytes , would it fit? Remember that a page may not contain parts of two different segments.

5. Attempt any TWO parts:- (10x2=20)

(a) Consider the following I/O scenarios on a single user PC :- (i) A mouse used with a graphical user

interface. (ii) A disk drive containing user files. (iii) A tape drive on multitasking OS. For each of these I/O scenarios would you design, The OS to use buffering, spooling and cashing.

(b) (i) Why are output files for the printer normally

spooled on disk before being printed.


(ii) If a disk controller writes the bytes it

receives from the disk to memory as fast as it receives them, with no internal buffering, is interleaving conceivably useful? Discuss.

(c) Disk requests come into the disk driver for cylinders 10, 22, 20, 2, 40, 6 and 38, in that order. A seek takes 6 msec per cylinder moved. How much seek time is needed for the "elevator algorithm".

IGNOU QUESTION PAPERS

ADCA/MCA (III Yr) Term- End Examination

December, 1997 CS-13 : OPERATING SYSTEMS

Time : 3 hours Maximum Marks : 75


Note : Question 1 is compulsory. Answer any three from the rest. 1(a) Assume that in the system shown in the following figure, Process P1 does not own a disk drive and it request two disk drives simultaneously. Illustrate that situation by means of the general resource graph and use the deadlock detection algorithm to evaluate the resulting system state. 15

Figure 1 : System State

(b) Discuss whether each of the following programming techniques and program actions is good or bad with regard to the degree of locality of page reference it is likely to exhibit. Explain your reasoning and, where applicable, state roughly the number of distinct loci of activity (hot spots) that you expect the execution to generate.



(i) Sequential processing of one dimensional array (ii) Sequential processing of two dimensional array (iii) Hashing (iv) Interrupt servicing (v) Indirect addressing (vi) Procedure invocation 10 (c) Discuss why shared bus multiprocessors are generally regarded as having limited scalability. 5 2(a) Write a program /algorithm that solves the readers/ writer problems by using monitors and also explain it. 8 (b) Compare and contrast the semaphore to the same problem [2(a)] in terms of the type of data abstraction and readability of the code. 7 3.(a) When do page-faults occur? Describe the action taken by the O.S. when page fault occurs. 8 (b) Describe what is Belady’s anomaly and provide an example that illustrates anomalous behaviour of FIFO. 7 4. Discuss RSA algorithm (related to cryptography) and explain its working through one example. 15 5(a) Point out and discuss the major differences in resource management system requirement between uni-processor and multiprocessor O.S. 8


(b) Discuss the operation of a multistage switch-based system. 6. Provide a detailed step-by-step comparison of Lamport's and Ricart and Agrawala's algorithms for mutual exclusion. Identify key differences and explain where the saving in the number of messages required by the latter algorithm comes from. Assess and compare the difference, if any, in the typical duration of unavailability of the target resource due to synchronization caused by each algorithm. 15

ADCA/MCA (III Yr) Term-End Examination June, 1998 CS-13 : OPERATING SYSTEMS

Times : 3 hours Maximum Marks : 75 Note : Question 1 is compulsory. Answer any three from the rest. 1. (a) Identify and discuss all operations and parameters that influence the effective memory access time. In a virtual-memory system, Indicate the most significant changes that may be expected with future technological improvements, such as introduction of the secondary storage devices with access times one or more orders of magnitude faster than that of contemporary devices. 10 (b) A disk has 305 cylinders, four heads and 17 sectors of 512 bytes each per track. The disk is rotated at 3000 rpm and it has a moving head assembly with an average head positioning time of 30 ms. The peak data transfer rate that the drive can sustain is 4 mbps.


Calculate the best and the worst-case times needed to transfer 20 consecutive and 20 randomly distributed blocks (sectors) from such a disk. Indicate the dominant factors in determining the transfer times and the variability between the best-case and the worst-case figures. 15 (c) Give several reasons why the study of concurrency is appropriate in O.S. design. 5

2.(a) Write a program/algorithm that solves the readers and writers problem with conditional critical regions. 10

(b) Compare and contrast the semaphore to the same problem (2(a)) in terms of data abstraction and readability of the code. 5

3. Describe the function of a translation lookaside buffer (TLB) in a paging system and discuss the issues and operations involved in TLB management in O.S. In particular, discuss the options and trade-off in TLB entry allocation and replacement policies. Indicate which of these operations are time-critical and therefore suitable candidates for implementation in hardware. 15

4. (a) Discuss the basic model of cryptography. 5

(b) Discuss the Data Encryption Standard (DES) algorithm. What are its limitations? 10

5. Discuss processor management and scheduling in multiprocessor operating system design. 15


6.(a) Discuss the complexities involved in interprocess communication and synchronization in a distributed system compared to centralized system. 6

(b) Explain the algorithm for election of a successor in a distributed system. Also discuss the correctness and performance of the algorithm. 9

MCA(III Yr) Term-End Examination

December, 1998 CS-13: OPERATING SYSTEMS

Time : 3 hours Maximum Marks : 75
Note: Question 1 is compulsory. Answer any three from the rest.
1(a) Write an algorithm that solves the producer/consumer problem with a bounded buffer. How is it different from the unbounded buffer algorithm? Explain. 8
(b) Explain the two primitive operations of a semaphore and give the busy-wait implementation of them. 8


(c) Discuss a detailed step-by-step comparison of Lamport’s and Ricart & Agrawala’s algorithms for mutual exclusion. 8 (d) Write the functional specifications of file CREATE and file READ. 6 2(a) Discuss the performance evaluation of various scheduling algorithms. 10 (b)Explain the following process management OS calls FORK/JOIN, ABORT, CHANGE-PRIORITY. 5 3(a) Write about different formal models of protection and how they are different on the basis of access control. 8 (b)What is the primary goal of Authentication and how can you achieve that through the password mechanism. 7 4(a) Point out and discuss the major differences in resource management requirements between uniprocessor operating systems. 8 (b) Discuss various types of multiprocessor interconnections. Also discuss about their operation, scalability, scheduling, interprocess communication and complexity. 7 5(a) Explain the algorithm for regeneration of Lost Token in a distributed system. 8


(b) Describe the operation of and discuss the relative advantages and disadvantages of circuit switching, message switching and packet switching. 7
6(a) How can you prevent a system from deadlock? Explain. 8
(b) Write an algorithm to determine whether a given system is in a deadlock and explain. 7


ADCA/MCA(III Yr)

Term-End Examination June, 1999

CS-13 : OPERATING SYSTEMS

Time:3 hours Maximum Marks :75

Note: Question 1 is compulsory. Answer any three from the rest.

1(a) Explain First Come, First Served (FCFS) and Round Robin scheduling algorithms. 9
(b) Consider the following set of processes, with the length of the CPU burst time given in milliseconds:
Process    Burst time
P1         10
P2         29
P3         3
P4         7
P5         12
All five processes arrive at time 0, in the order given. Draw Gantt charts illustrating the execution of the processes using FCFS, SJF and RR (quantum = 1) scheduling. What is the turnaround time of each process for each of the scheduling algorithms. Also find the average waiting time for each algorithm. 12
(c) Write about the performance evaluation of the FCFS, SJF and Round Robin scheduling algorithms. 11
2(a) What is a scheduler? Explain the primary objective of scheduling. How many types of schedulers coexist in a complex operating system? Explain. 10
(b) Explain common performance measures and optimization criteria that schedulers may use in attempting to maximize system performance. 10
3. Describe the necessary conditions for a deadlock occurrence. Discuss deadlock avoidance using the Banker's Algorithm, and also discuss data structures for implementing this algorithm. 15
4(a) Explain the important features of a monitor that are lacking in semaphores. 8
(b) Show how a monitor can be implemented with semaphores. 7


5(a) Mention the advantages of multiprocessors and explain the different classifications of parallel computer architectures. 8 (b) Outline the basic architectures of crossbar-connected and hypercube multiprocessor interconnections. Compare their features with respect to scalability. 6(a) Discuss the common failures in distributed systems. 8 (b) Describe an algorithm for election of a successor and evaluate its performance. 7


ADCA/MCA (III Yr)

Term-End Examination December, 1999

CS-13 : OPERATING SYSTEMS

Time : 3 hours Maximum Marks : 75

Note : Question 1 is compulsory. Answer any three from the rest.

1(a) Given a set of cooperating processes, some of which "produce" data items (producers) to be "consumed" by others (consumers), with a possible disparity between production and consumption rates, describe a synchronization protocol that allows both producers and consumers to operate concurrently at their respective service rates in such a way that produced items are consumed in the exact order in which they are produced (FIFO). 10 (b) Devise and explain a deadlock detection algorithm and discuss its performance. 10 (c) Discuss the relative time and space complexities of the individual implementations of the message facility and propose an approach that you consider to be the best trade-off in terms of versatility versus performance. 10


2(a) Threads are convenient mechanisms for exploiting concurrency within an application. Discuss the support for the above statement. 5 (b) Explain the need for the Process Control Block (PCB) fields. 5 (c) Discuss why Round Robin scheduling is often regarded as a fair scheduling discipline. 5 3(a) Explain how monitors provide structured data abstraction in addition to concurrency control. 8 (b) Devise and explain Lamport's Bakery algorithm. 7 4(a) Devise and explain the page-fault frequency algorithm. 5 (b) Explain the overall performance of static partitioned contiguous memory allocation with respect to principles of operation, swapping, relocation, protection and sharing. 5 5(a) Explain the system programmer's view of the file system. 5 (b) List and interpret the security policies and mechanisms. 5 (c) Compare and contrast the Bell-LaPadula model and the Lattice model of information flow. 5 6(a) Discuss the implementation issues and considerations involved in processing and memory management in multiprocessor operating systems. 9 (b) Explain why shared-bus multiprocessors are generally regarded as having limited scalability. 6
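For question 3(b), a minimal C sketch of Lamport's Bakery algorithm for N processes; the shared arrays are assumed visible to all contenders, and the memory barriers a real machine would need are omitted, as in the classical presentation.

#include <stdbool.h>

#define N 4                            /* number of competing processes (assumed) */

volatile bool choosing[N];             /* process i is currently taking a ticket  */
volatile int  number[N];               /* ticket numbers; 0 = not interested      */

/* true if (number[j], j) precedes (number[i], i) lexicographically */
static bool precedes(int j, int i) {
    return number[j] < number[i] ||
          (number[j] == number[i] && j < i);
}

void bakery_lock(int i) {
    choosing[i] = true;
    int max = 0;
    for (int j = 0; j < N; j++)
        if (number[j] > max) max = number[j];
    number[i] = max + 1;               /* take a ticket larger than any seen   */
    choosing[i] = false;

    for (int j = 0; j < N; j++) {
        if (j == i) continue;
        while (choosing[j])
            ;                          /* wait until j has finished choosing   */
        while (number[j] != 0 && precedes(j, i))
            ;                          /* wait while j holds an earlier ticket */
    }
    /* critical section follows the call */
}

void bakery_unlock(int i) {
    number[i] = 0;                     /* give the ticket back */
}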

ADCA/MCA (III Yr)


Term-End Examination June, 2000

CS-13 : OPERATING SYSTEMS

Time : 3 hours Maximum Marks : 75 Note : Question 1 is compulsory. Answer any three from the rest. 1(a) Hierarchical directories are more complex to manage than flat directories, but their advantages are considered to outweigh their drawbacks by many system designers. Explain why. 6 (b) Design algorithms/functional specifications for the basic range of file-related system services given below: (I) CREATE (II) SEEK (III) READ (IV) WRITE 12 (c) Discuss the queuing implementation of semaphores. The algorithm should be close to a 'C' language implementation. 12 2(a) Any synchronization problem that can be solved with semaphores can be solved with messages, and vice versa. Explain the reasoning behind your answer. 6 (b) Discuss the performance evaluation of First Come First Served (FCFS), Shortest Job First (SJF) and Round Robin (RR) scheduling algorithms with appropriate examples. 9
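For 1(c), a sketch of the queuing implementation in near-C form; block(), wakeup() and current() stand for kernel primitives that suspend and resume processes and are assumptions of this sketch (declared but deliberately left undefined). The bodies of the wait and signal operations are also assumed to run atomically, e.g. with interrupts disabled.

struct process;                            /* opaque process descriptor              */
void block(struct process *p);             /* assumed primitive: suspend p           */
void wakeup(struct process *p);            /* assumed primitive: make p ready again  */
struct process *current(void);             /* assumed primitive: the running process */

#define MAXWAIT 64

struct pqueue {                            /* FIFO queue of blocked processes */
    struct process *items[MAXWAIT];
    int head, tail, count;
};

static void enqueue(struct pqueue *q, struct process *p) {
    q->items[q->tail] = p;
    q->tail = (q->tail + 1) % MAXWAIT;
    q->count++;
}

static struct process *dequeue(struct pqueue *q) {
    struct process *p = q->items[q->head];
    q->head = (q->head + 1) % MAXWAIT;
    q->count--;
    return p;
}

struct semaphore {
    int value;
    struct pqueue waiters;
};

/* wait / P: no busy-waiting; the caller sleeps on the queue instead */
void sem_wait_q(struct semaphore *s) {
    s->value--;
    if (s->value < 0) {
        enqueue(&s->waiters, current());
        block(current());
    }
}

/* signal / V: hand the semaphore to the longest-waiting process, if any */
void sem_signal_q(struct semaphore *s) {
    s->value++;
    if (s->value <= 0)
        wakeup(dequeue(&s->waiters));
}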


3(a) Describe and discuss the Chang and Roberts algorithm for election of a successor in a distributed operating system and explain the performance issues. 8 (b) State all the requirements of a "good" solution to the mutual exclusion problem in distributed systems. 7 4(a) Discuss various issues related to hardware support for the paging scheme of memory management. 6 (b) Contrast demand segmentation with demand paging. Explain the trade-offs involved in page allocation for non-contiguous memory allocation. 9 5(a) Compare and contrast different authentication techniques. 7 (b) Compare and contrast the formal models of protection: the Access-Control matrix with the Take-Grant model. 4 (c) Compare and contrast the public-key cryptography technique with the conventional cryptography technique. 4 6(a) Describe and discuss Misra's algorithm for regeneration of a lost token and explain the performance issues. 9 (b) Explain the implementation scheme of concurrency control and deadlocks in distributed systems. 6



ADCA/MCA (III Yr) Term-End Examination December, 2000 CS-13 : OPERATING SYSTEMS

Time : 3 hours Maximum Marks : 75 Note : Question 1 is compulsory. Answer any three from the rest. 1.(a) Write an algorithm that solves the readers/writers problem using monitors. 10 (b) Devise an algorithm for deadlock detection. Discuss the operational aspects of this algorithm. 10 (c) Discuss the common performance measures and optimization criteria that schedulers use in attempting to maximize system performance. 10 2.(a) Explain the operations involved in a process switch. 5 (b) Devise and explain Dekker's solution to the mutual exclusion problem. 10
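For 2(b), a minimal C sketch of Dekker's two-process solution; the flags and turn variable are assumed to be in shared memory, and modern memory-ordering issues are ignored, as in the classical treatment.

#include <stdbool.h>

volatile bool wants[2] = {false, false};   /* wants[i]: process i wants to enter */
volatile int  turn = 0;                    /* whose turn it is to insist         */

void enter_region(int i) {                 /* i is 0 or 1 */
    int other = 1 - i;
    wants[i] = true;
    while (wants[other]) {                 /* contention: the other wants in too */
        if (turn == other) {
            wants[i] = false;              /* back off while it is not our turn  */
            while (turn == other)
                ;                          /* busy-wait for our turn             */
            wants[i] = true;               /* and try again                      */
        }
    }
    /* critical section follows the call */
}

void leave_region(int i) {
    turn = 1 - i;                          /* let the other process go next time */
    wants[i] = false;
}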


3.(a) Compare and contrast Lamport's algorithm with Ricart and Agrawala's algorithm for mutual exclusion in distributed systems. 10 (b) Explain why shared-bus multiprocessors are generally regarded as having limited scalability. 5

4.(a) Explain why instruction interruptibility is needed in virtual-memory systems. Discuss the issues involved in implementing instruction interruptibility. 8

(b) Explain the address translation in segmented systems. 7

5.(a) Discuss how the lattice model provides a powerful abstraction that can be used to express a variety of practical security policies. 7

(b) Explain the following: 2x4=8

(i) Digital signatures

(ii) RSA algorithm

6.(a) Describe and discuss the merits of the key advantages of distributed computing claimed by its proponents. 7

(b) Describe the operation of and discuss the relative advantages and disadvantages of circuit switching, message switching and packet switching. 8


MCA(III Yr)

Term-End Examination January, 2001

CS-13 : OPERATING SYSTEMS Time : 3 hours Maximum Marks : 75 Note : Question 1 is compulsory. Answer any three from the rest. 1(a) Write an algorithm for implementing the Dining Philosophers problem using semaphores. Also describe the problem and the algorithm in detail. 15 (b) What is the time-stamping scheme of a distributed system for mutual exclusion? Explain the functioning of the scheme through a diagram. 5 (c) Describe the algorithm proposed by Ricart and Agrawala for distributed mutual exclusion. Also distinguish between this algorithm and Lamport's algorithm on the following lines: * Correctness of the algorithm * Deadlock * Communication cost 10 2. Discuss various machine-level implementations of mutual exclusion in general. Also discuss the suitability and efficiency of these algorithms. 15 3(i) Explain the advantages and disadvantages of segmented and paged implementation of virtual memory. Explain through a


diagram, the principles of address translation in combined segmentation and paging. What is the drawback of this translation scheme? 9 (ii) Describe Belady's anomaly and provide an example that illustrates the anomalous behaviour of FIFO. 6 4(i) What is thrashing? What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what are the techniques to prevent it? 10 (ii) Discuss the difference between preemptive and non-preemptive scheduling. 5 5(i) Discuss scheduling and interprocessor communication suitable for a hypercube multiprocessor operating system. 7 (ii) What is RPC (remote procedure call)? What are the major issues in implementing RPC? Describe them briefly. 8 6(i) Describe the functioning of DES. What are its advantages and disadvantages? 7 (ii) Describe a deadlock detection and recovery algorithm for a centralized operating system. 8
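For question 1(a) of this paper, a minimal C sketch of the Dining Philosophers using POSIX semaphores; deadlock is avoided by the usual trick of acquiring the lower-numbered fork first, and the number of philosophers and eating rounds are illustrative assumptions.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                               /* philosophers and forks (assumed) */

static sem_t fork_sem[N];                 /* one binary semaphore per fork */

static void *philosopher(void *arg) {
    int i = *(int *)arg;
    int left = i, right = (i + 1) % N;
    int first  = left < right ? left : right;   /* lower-numbered fork  */
    int second = left < right ? right : left;   /* higher-numbered fork */
    for (int round = 0; round < 3; round++) {
        /* think */
        sem_wait(&fork_sem[first]);       /* acquire forks in global order, so */
        sem_wait(&fork_sem[second]);      /* no circular wait can ever form    */
        printf("philosopher %d eats\n", i);
        sem_post(&fork_sem[second]);
        sem_post(&fork_sem[first]);
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++)
        sem_init(&fork_sem[i], 0, 1);
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}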


ADCA/MCA (III Yr)

Term-End Examination June, 2001

CS-13 : OPERATING SYSTEMS

Time : 3 hours Maximum Marks : 75


Note : Question 1 is compulsory. Answer any three from the rest. 1(i) Write an algorithm for the bounded-buffer producer/consumer problem using a monitor. Also describe the problem and the algorithm in detail. 10 (ii) Discuss in detail the distribution of control and the degree of functional specification of the individual processing elements in the three major classes of multiprocessor operating systems. 10 (iii) A process references five pages A, B, C, D and E in the following order: A, B, C, D, A, E, B, C, E, D. Assume that the replacement algorithm is LRU and FIFO, and find out the number of page transfers during this sequence of references, starting with an empty main memory with 3 frames. 10 2. Discuss various machine-level implementations of mutual exclusion in general and of semaphores in particular. Also discuss the suitability and efficiency of these algorithms. 15 3.(a) Summarize the characteristics of all forms of memory management along the following lines: * H/W requirement and its functionality * Effective memory access time * Wasted memory 9 (b) Differentiate between the lattice model and the remaining formal models of protection. 6
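The counts asked for in 1(iii) can be verified with a short simulation; a C sketch, assuming 3 initially empty frames and counting every page fault as a page transfer.

#include <stdio.h>

#define FRAMES 3

static const char refs[] = "ABCDAEBCED";   /* reference string from the question */

/* Simulate one policy: lru = 0 gives FIFO, lru = 1 gives LRU. */
static int simulate(int lru) {
    char frame[FRAMES];
    int stamp[FRAMES];                     /* load time (FIFO) or last-use time (LRU) */
    int faults = 0, used = 0;

    for (int t = 0; refs[t] != '\0'; t++) {
        char page = refs[t];
        int hit = -1;
        for (int f = 0; f < used; f++)
            if (frame[f] == page) hit = f;

        if (hit >= 0) {                    /* page already resident */
            if (lru) stamp[hit] = t;       /* refresh recency only under LRU */
            continue;
        }
        faults++;
        if (used < FRAMES) {               /* a free frame is still available */
            frame[used] = page;
            stamp[used] = t;
            used++;
        } else {                           /* evict the frame with the oldest stamp */
            int victim = 0;
            for (int f = 1; f < FRAMES; f++)
                if (stamp[f] < stamp[victim]) victim = f;
            frame[victim] = page;
            stamp[victim] = t;
        }
    }
    return faults;
}

int main(void) {
    printf("FIFO page transfers: %d\n", simulate(0));
    printf("LRU  page transfers: %d\n", simulate(1));
    return 0;
}

Under these assumptions the simulation reports nine transfers for each policy on this particular string.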


4(a) Explain the routing schemes in multistage switch-based systems. 6 (b) Discuss the assumptions, properties and differences among the four major models of distributed computing. 9 5(a) Discuss the importance of ordering of events in distributed systems. Is ordering of events important in centralized systems as well? Why or why not? 6 (b) Describe the Bully algorithm for the election of a successor. Also discuss the performance of the algorithm. 9 6(a) What are the necessary conditions for deadlock in a centralized environment? Explain through examples. Do we have the same conditions for deadlock in distributed systems? 7 (b) Describe, through examples, two concurrency control protocols that can avoid deadlocks and are used in distributed systems. 8


ADCA/MCA (III Yr)

Term-End Examination December, 2001

CS-13 : OPERATING SYSTEMS Time : 3 hours Maximum Marks : 75 Note : Question 1 is compulsory. Answer any three from the rest. 1(a) Write an algorithm/program using the file system calls (open, create, read, write, lseek, close, unlink) that determines the length of a file without using a loop in the code. 10 (b) Explain the following 3 primary forms of explicit interprocess interaction:


(1) Interprocess synchronization (2) Interprocess signaling (3) Interprocess communication. Also discuss the need for interprocess synchronization. 10 (c) What is a Translation Lookaside Buffer (TLB)? Describe the function of the TLB in a paging system and also discuss the issues and operations involved in TLB management by the operating system. 10 2(a) Explain how the "threads" approach improves the performance of an operating system. 8 (b) "Any synchronization problem that can be solved with semaphores can be solved with messages, and vice versa." Explain the reasoning behind your answer. 7 3(a) Discuss the multiprocessor classification based on Flynn's scheme. Also mention the advantages of multiprocessors. 8 (b) Explain the merits and demerits of distributed processing. 7 4(a) Compare contiguous allocation and non-contiguous allocation with respect to the following measures: (a) Wasted memory (b) Time complexity (c) Memory access overhead. 8
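Question 1(a) of this paper asks for a loop-free file-length program; a minimal C sketch that assumes the POSIX calls named in the question and obtains the length by seeking to the end of the file instead of reading it in a loop.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);       /* open the existing file         */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    off_t length = lseek(fd, 0, SEEK_END);  /* offset of end-of-file = length */
    close(fd);
    if (length < 0) {
        perror("lseek");
        return 1;
    }
    printf("%s is %lld bytes long\n", argv[1], (long long)length);
    return 0;
}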


(b) Explain the role of the file map table (FMT) in the management of virtual memory. 7 5(a) Explain the following with respect to disk space management: (1) Chaining (2) Indexing. 8 (b) Explain Saltzer and Schroeder's general design principles for protection mechanisms. 7 6. Write short notes on: (1) Artifact-based authentication (2) Hypercubes (3) Remote procedure calls (4) Distributed shared memory. 15


ADCA/MCA (III Yr) Term-End Examination

June, 2002 CS-13 : OPERATING SYSTEMS

Time : 3 hours Maximum Marks : 75 Note : Question 1 is compulsory. Answer any three from the rest. 1(a) Explain the drawbacks of the busy-wait implementation of semaphores. How can we overcome these by using the queuing implementation of semaphores? Discuss. 10 (b) Explain the Rivest, Shamir, Adleman (RSA) public key algorithm. 10 (c) Explain the anatomy of disk address translation. 10 2(a) What is the difference between a program and a process? Explain the four general categories of process states with the help of a process state-transition diagram. 7 (b) Explain the following scheduling algorithms:


(i) Shortest Remaining Time Next (SRTN) scheduling (ii) Time-slice scheduling (iii) Event-driven scheduling (iv) Multiple-level queue scheduling 8 3(a) Explain Dekker's solution to the mutual exclusion problem. 7 (b) Explain how a monitor can be implemented with semaphores. 8 4(a) Explain the following common algorithms for selection of a free area of memory for creation of a partition: (i) First fit (ii) Best fit (iii) Worst fit 7 (b) Write short notes on: (i) Memory compaction (ii) Hierarchical address translation tables 8 5(a) Explain the following 3 levels of device abstraction and disk storage addressing techniques which are commonly identifiable in implementations of the file management system: (i) File-relative logical addressing (ii) Volume-relative logical addressing (iii) Drive-relative physical addressing 8 (b) Explain the biometric authentication mechanism. 7
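For question 1(b), a toy C sketch of the RSA mechanics with deliberately tiny primes; p = 61, q = 53 and e = 17 are illustrative assumptions, and real RSA uses very large primes plus padding, neither of which is shown here.

#include <stdio.h>

/* modular exponentiation: (base^exp) mod m, by square-and-multiply */
static long long mod_pow(long long base, long long exp, long long m) {
    long long result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main(void) {
    long long n = 61 * 53;          /* public modulus n = p*q = 3233          */
    long long e = 17;               /* public exponent, coprime to (p-1)(q-1) */
    long long d = 2753;             /* private exponent: d*e mod 3120 == 1    */

    long long message = 65;                      /* plaintext as a number < n */
    long long cipher  = mod_pow(message, e, n);  /* encryption: c = m^e mod n */
    long long plain   = mod_pow(cipher, d, n);   /* decryption: m = c^d mod n */

    printf("message=%lld cipher=%lld decrypted=%lld\n", message, cipher, plain);
    return 0;
}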


6(a) Explain the bus-oriented and multistage switch-based architectures for multiprocessor interconnections. 8 (b) Explain Ricart and Agrawala's algorithm for mutual exclusion in distributed systems. 7

ADCA/MCA (III Yr) Term-End Examination

December, 2002


CS-13 : OPERATING SYSTEMS

Time : 3 hours Maximum Marks : 75 Note : Question 1 is compulsory. Answer any three from the rest. 1(a) Write and explain one of the deadlock detection algorithms and evaluate its performance. 10 (b) Explain the logic of Dekker's solution to the mutual exclusion problem and also discuss whether it is suitable to be implemented in a multiprocessor system with shared memory. 10 (c) Describe Belady's anomaly and provide an example that illustrates the anomalous behaviour of FIFO. 10 2(a) Compare and contrast implicit and explicit tasking of processes. Also highlight the advantages of explicit tasking. 7 (b) Two identified disadvantages of semaphores are: (i) semaphores are unstructured; (ii) semaphores do not support data abstraction. Explain the reasoning behind these claims, and suggest alternative mechanisms that support or enforce more structured forms of interprocess communication and synchronization. 8 3(a) Write short notes on the following:


(i) Two-phase locking (ii) Wait-die and Wound-wait (iii) Busy waiting 8 (b) Compare and contrast Remote Procedure Call (RPC) with message passing in a distributed OS environment. 7 4(a) Explain the difference between internal fragmentation and external fragmentation. Which one occurs in a paging system? Also explain, through a diagram, the principle of address translation in the paging scheme. 8 (b) What are the PAGE MAP TABLE (PMT), MEMORY MAP TABLE (MMT) and FILE MAP TABLE (FMT)? Explain how they are associated in the management of virtual memory. 7 5(a) Explain why shared-bus multiprocessors are generally regarded as having limited scalability. 5 (b) State and discuss the differences in resource management requirements between uniprocessor and multiprocessor operating systems. 5 (c) What is disk caching? What are its advantages? 5 6. Write short notes on: (i) Bell-LaPadula model (ii) Chaining and indexing allocation strategies for disk space (iii) Rivest, Shamir, Adleman (RSA) algorithm 15
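Question 1(a) of this paper asks for a deadlock detection algorithm; a minimal C sketch for the single-instance-per-resource case, where detection reduces to finding a cycle in the wait-for graph (the number of processes and the edges below are illustrative assumptions).

#include <stdbool.h>
#include <stdio.h>

#define P 4                                /* number of processes (assumed) */

/* wait_for[i][j] == true means process i waits for a resource held by j */
static bool wait_for[P][P];

static bool visited[P], on_stack[P];

/* depth-first search: an edge back to a process on the stack is a cycle */
static bool has_cycle(int i) {
    visited[i] = true;
    on_stack[i] = true;
    for (int j = 0; j < P; j++) {
        if (!wait_for[i][j]) continue;
        if (on_stack[j]) return true;
        if (!visited[j] && has_cycle(j)) return true;
    }
    on_stack[i] = false;
    return false;
}

int main(void) {
    /* example wait-for edges (assumed): P0 -> P1 -> P2 -> P0, P3 independent */
    wait_for[0][1] = wait_for[1][2] = wait_for[2][0] = true;

    bool deadlock = false;
    for (int i = 0; i < P && !deadlock; i++)
        if (!visited[i] && has_cycle(i))
            deadlock = true;

    printf(deadlock ? "deadlock detected\n" : "no deadlock\n");
    return 0;
}

With multiple instances per resource type a cycle is no longer sufficient evidence, and a reduction over the Allocation and Request matrices, in the style of the Banker's algorithm safety check, is used instead.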
