Complete Operating System notes

Post on 16-Apr-2017

Unit-I

Q) What is an Operating System?

Ans: From the user's point of view, an OS should make the computer easy to use and convenient for running applications. Hence, an operating system is an interface between the user and the computer hardware.

From the system's point of view, the OS must ensure that system resources are utilized efficiently. Hence, an operating system is a resource manager.

An abstract view of the components of a computer system is shown below.

Q) OS objectives and functions

Ans: An OS has 3 objectives (or functions):

1) Convenience 2) Efficiency 3) Ability to evolve

1) Convenience: The OS provides the following services so that users can use the system conveniently.

a) Program creation: editors, debuggers, etc. These are not strictly part of the OS, but they are accessible through it.

b) Program execution: to execute a program, 1) it must be loaded into memory, and 2) I/O devices, files, and other resources must be initialized. The OS handles all these tasks.

c) Access to I/O devices: each I/O device has its own instructions and control signals. The OS takes care of these details and provides a simple interface to the programmer.

d) Controlled access to files:

i) provides protection mechanisms

ii) provides a common interface for accessing files stored on different secondary storage media (hard disk, magnetic tape) with different file systems

e) System access: protects data and resources from unauthorized users and resolves conflicts in case of contention for resources.

f) Error detection and response: many internal and external hardware errors and software errors occur while a computer is running. The OS handles them by ending the offending program, retrying the operation that caused the error, or printing an error message to the user.

g) Accounting: the OS collects usage statistics for the various resources, which is useful for improving performance and planning future enhancements.

2) Efficiency: The OS decides how much processor time to give each process, which processes must be in main memory, when a process may use an I/O device, and who may access which files. In other words, the OS manages resources such as the CPU, memory, I/O devices, and secondary storage efficiently.

Only part of the OS (called the kernel) is loaded into main memory; the rest of the OS stays on the hard disk.

The CPU executes OS code, which directs the CPU to the process it should execute. Once that process has run, the CPU again executes OS code to decide what to do next.

3) Ability to evolve: An OS will evolve for the following reasons:

i) Hardware upgrades and new types of hardware

ii) New services

iii) Bug fixes

Q) Evolution of OS

Ans: Serial Processing (1940s - mid-1950s)

There was no OS; the user interacted directly with the computer hardware. These were single-user systems.

Input and output: paper tape or punched cards.

Software used: assemblers, compilers, linkers, loaders, device drivers, and libraries of common subroutines.

Main disadvantages: low CPU utilization and high setup time.

1) Simple Batch System

1. Users do not have direct access to the machine.
2. A monitor controls job processing. Special cards (starting with a $ sign) indicate what to do. The user program is prevented from performing I/O.
3. The resident monitor (the part of the monitor that always resides in memory) holds initial control; control transfers to the job and then back to the monitor.
4. Automatic job sequencing works as follows: the user submits a job to the computer operator. The operator batches the jobs sequentially and places the entire batch on an input device (e.g., a card reader). The monitor reads one job at a time from the input device and places the current job in the user program area. Control then passes to the current job; when the job completes, control returns to the monitor.
5. The monitor handles job setup and reduces setup time by batching jobs with similar requirements. Job Control Language (JCL) instructions tell the monitor how to execute the job. The monitor reads a $FTN card and loads the appropriate compiler from tape. During execution of the user program, each input instruction causes one data card to be read. After successful or unsuccessful completion of the user job, the monitor scans input cards until it encounters the next JCL card.

Hardware features desirable in a batch system:

1. Memory protection
2. Timer
3. Privileged instructions
4. Interrupts

2) Multiprogrammed Batch System (memory layout as shown below)

If there is enough main memory to hold the OS and two programs, then when one job needs to wait for I/O, the CPU can switch to the other job. This is known as multiprogramming (or multitasking).

Additional hardware desirable:

1. Interrupt-driven I/O and Direct Memory Access (DMA)
2. Memory management: since several jobs must be in main memory at the same time, some form of memory management is required

3) Time-Sharing Systems are multiprogramming systems that handle multiple interactive users: the OS gives a quantum of CPU time to each user program.

If 'n' users are present, each user gets roughly 1/n of the CPU time.

Problems to be handled in a time-sharing system:

1. Since multiple jobs are in memory, they must be protected from modifying each other's data.
2. The file system must be protected by giving access only to authorized users.
3. Contention for resources must be handled.

Q) Difference between batch multiprogramming and time sharing

Principal objective: batch multiprogramming - maximize processor use; time sharing - minimize response time.

Source of instructions to the OS: batch multiprogramming - JCL instructions provided with the job; time sharing - commands entered at the terminal.

===================================================================

Unit-II

Q) What is a process? A process is a program in execution. A process is an active entity and resides in main memory.

Q) Explain the structure of a process in memory

A process contains:

1. Program code, sometimes known as the text section
2. Process stack, which contains temporary data (such as function parameters, return addresses, and local variables)
3. Heap, the memory allocated dynamically during process run time
4. Data section, which contains global variables

Although two processes may be associated with the same program, they are considered two separate execution sequences: their data, heap, and stack sections differ even though the text sections are equivalent.

Q) Explain process states

As a process executes, it changes state. Each process can be in one of the following states:

1. New: the process is being created

2. Running: instructions are being executed

3. Waiting: the process is waiting for some event to occur (such as I/O completion)

4. Ready: the process is waiting to be assigned to a processor

5. Terminated: the process has finished execution

Q) What is a PCB? Each process is represented in the operating system by a process control block (PCB), also called a task control block. It contains information about:

1. Process state: the state may be new, ready, running, waiting (blocked), or terminated
2. Program counter: this register stores the address of the next instruction to be executed
3. CPU registers: accumulators, stack pointers, index registers, and general-purpose registers; the values of these registers are saved if the process is interrupted
4. CPU scheduling information: process priority, scheduling parameters, etc.
5. Memory management information: the values of the base and limit registers, segment tables, page tables, etc.
6. Accounting information: amount of CPU time used, time limits, job or process number, etc.
7. I/O status information: list of I/O devices allocated to the process, list of open files, and so on

Q) Operations on Processes

1. Process Creation

The operating system is responsible for the creation of new processes.

Reasons for creating a new process:

1. A batch job is submitted by a user.
2. In an interactive or time-sharing environment, a process is created when a user logs on.
3. The operating system creates a process to provide a service, e.g., to manage printing, so that the user need not wait until printing completes. Here the OS creates a process on behalf of the user.
4. An existing process creates another process. The creating process is called the parent process, and the new process is called the child process (or subprocess). A new process can in turn create other processes, forming a tree of processes. The OS identifies each process by a unique integer called the process identifier (pid). In the Solaris operating system, at the top of the tree is the Sched process with pid = 0. This process can create several child processes; in the figure below it creates three: 1) the Init process, which is the parent of all user processes, 2) the Pageout process, and 3) the Fsflush process.

A subprocess may obtain resources:

1. Directly from the OS
2. The parent shares some of its resources among several of its children
3. The parent partitions its resources among its children

The parent process may pass initialization data to the child. For example, the name of an image file and the name of the output device may be passed to a display process (the child process).

The parent process may execute:

1. Concurrently with the child
2. Waiting until some or all of its children have terminated

The address space of the child process may be:

1. A duplicate copy of the parent's (same program and data)
2. A new program loaded into it
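On a Unix-like system, the parent/child pattern described above can be sketched with fork() and wait(); a minimal illustration (the child simply exits, though it could instead call exec() to load a new program):

```python
import os

# A minimal sketch (Unix-only): the parent creates a child with fork().
# The child's address space starts as a duplicate of the parent's.
pid = os.fork()
if pid == 0:
    # Child process: exits with an arbitrary status code.
    os._exit(7)
else:
    # Parent process: waits until the child terminates.
    _, status = os.waitpid(pid, 0)
    child_code = os.WEXITSTATUS(status)
```

Here the parent chooses to wait for its child rather than run concurrently with it.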

2. Process Termination

A process terminates when it finishes executing its final statement and asks the operating system to delete it (using the exit() system call in Unix or TerminateProcess() in the Win32 API).

All the resources of the process (open files, I/O buffers, and physical memory) are deallocated by the OS.

A parent may terminate a child process for a variety of reasons:

1. The child has exceeded its usage of some resource.
2. The task assigned to the child is no longer required.
3. The parent is terminating, and the OS (e.g., VMS) does not allow a child to continue after its parent terminates.

Q) What is cascading termination?

When a parent is terminating and the OS (e.g., VMS) does not allow a child to continue after its parent terminates, all of the parent's children must be terminated as well. This is called cascading termination, and it is initiated by the OS.

Q) What happens to a child when its parent terminates in Unix?

The Init process becomes the parent of all its children.

Q) What is a context switch? When the PCB of the currently executing process is saved and the operating system loads the PCB of the next process to run on the CPU, this is called a context switch. Context switching is pure overhead and takes a noticeable amount of time.

Q) Basic concepts of threads

A thread consists of a thread ID, a program counter, a stack, and a set of registers. Threads are also called lightweight processes. A process with multiple threads makes a great server. Threads share many resources with the other threads belonging to the same process, so a context switch among threads of the same process is cheap: it involves switching only the register set, the program counter, and the stack.
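The sharing described above can be illustrated with a small sketch (Python's threading module; the counter value and thread count are arbitrary):

```python
import threading

# Threads of the same process share global data (the `counter` below),
# while each thread has its own stack and program counter.
# The lock prevents the threads from corrupting the shared counter.
counter = 0
lock = threading.Lock()

def work():
    global counter
    for _ in range(1000):
        with lock:
            counter += 1

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because all four threads update the same variable, the final value is 4 x 1000; separate processes would each see their own private copy instead.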

Q) Explain the two modes of CPU execution

Protection of memory and I/O can be provided via two modes of CPU execution: user mode and kernel mode.

In kernel (privileged, supervisor) mode, the OS has access to privileged instructions. Privileged instructions can access I/O devices, control interrupts, and manipulate memory (page tables, TLB, etc.).

Privileged instructions are instructions that can only be executed in kernel mode.

All user-level processes run in user mode. Certain critical operations are not allowed to user processes; the user processes must use system calls to perform them. When a system call occurs, the OS enters kernel mode and uses privileged instructions to perform the desired service for the user-level process.

For example, for input or output, a process makes a system call telling the operating system to read or write a particular area, and this request is satisfied by the operating system.
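As a rough illustration, Python's os.read and os.write are thin wrappers over the read() and write() system calls; here a pipe stands in for a file or device:

```python
import os

# Each call below crosses into kernel mode: the kernel performs the
# privileged I/O on behalf of the process and then returns to user mode.
r, w = os.pipe()
os.write(w, b"hi")      # write() system call
data = os.read(r, 2)    # read() system call
os.close(r)
os.close(w)
```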

Q) Explain interprocess communication

Ans: Cooperating processes require an interprocess communication (IPC) mechanism to exchange data and information. There are two communication models: (a) message passing and (b) shared memory, as shown below.

(a) Shared-Memory Systems

1) Communicating processes must establish a region of shared memory.
2) The shared memory region resides in the address space of the creating process.
3) Other processes that wish to communicate using the shared memory segment must attach it to their own address space.
4) Processes exchange information by reading and writing data in the shared areas.
5) Shared memory systems are convenient for communication.
6) Shared memory systems are faster and provide maximum speed because:

i) System calls are required only to establish the shared memory region.
ii) Once the shared memory is established, all accesses are treated as routine memory accesses; no assistance from the kernel is required.
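A minimal sketch of the idea using Python's multiprocessing.shared_memory (for brevity, the writer and the reader both run in one interpreter here; real IPC would attach from a second process using the region's name):

```python
from multiprocessing import shared_memory

# A system call creates the shared region once; after that, reads and
# writes are ordinary memory accesses with no kernel intervention.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"                               # writer side

reader = shared_memory.SharedMemory(name=shm.name)   # attach by name
data = bytes(reader.buf[:5])                         # reader side

reader.close()
shm.close()
shm.unlink()   # release the region when done
```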

(b) Message-Passing Systems

1. Are useful for exchanging smaller amounts of data
2. Are easy to implement for inter-computer communication
3. Are more time consuming than shared memory, since they are implemented using system calls and need kernel intervention

To send messages, a communication link must exist between the processes. The communication link can be implemented physically or logically.

Different methods of logically implementing a link:

1. Direct or indirect communication
2. Synchronous or asynchronous communication
3. Automatic or explicit buffering

1a. Direct communication

A link is established automatically between every pair of processes that want to communicate.

A link is associated with exactly two processes.

Addressing:

i) Symmetric addressing: the sender process and the receiver process must name each other to communicate. The send() and receive() primitives are:

send(P, message) - send a message to process P
receive(Q, message) - receive a message from process Q

ii) Asymmetric addressing: only the sender names the receiver. The primitives are:

send(P, message) - send a message to process P
receive(id, message) - receive a message from any process

The disadvantage of both forms of addressing is limited modularity: changing the identifier of a process requires finding all references to the old identifier and modifying them.

1b. Indirect communication

1. Messages are sent to and received from mailboxes (also called ports).
2. Each mailbox has a unique id (an integer value).
3. Two processes can communicate only if they have a shared mailbox.
4. A link is established between two processes if they share a mailbox.


5. A link may be associated with more than two processes.
6. A mailbox may be owned by a process or by the OS.

a) If the mailbox is owned by a process, we can distinguish between the owner (which can only receive messages) and the user (which can only send messages). When the process that owns a mailbox terminates, the mailbox disappears.

b) If the mailbox is owned by the OS, the OS must provide mechanisms to:

i) create a new mailbox
ii) send and receive messages through the mailbox
iii) delete the mailbox
iv) pass ownership to other processes

2. Synchronous or asynchronous communication

Message passing may be either blocking (synchronous) or non-blocking (asynchronous).

1. Blocking send: the sending process is blocked until the message is received by the receiving process or mailbox.
2. Non-blocking send: the sending process sends the message and resumes operation.
3. Blocking receive: the receiver blocks until a message is available.
4. Non-blocking receive: the receiver retrieves either a valid message or a null.
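The blocking behavior can be sketched with a bounded queue standing in for a mailbox (the capacity and message names are illustrative):

```python
import queue
import threading

# A bounded queue models a mailbox of capacity 1: put() blocks when the
# mailbox is full (blocking send) and get() blocks when it is empty
# (blocking receive).
mailbox = queue.Queue(maxsize=1)

def sender():
    mailbox.put("msg1")   # succeeds immediately
    mailbox.put("msg2")   # blocks until the receiver frees a slot

t = threading.Thread(target=sender)
t.start()
received = [mailbox.get(), mailbox.get()]   # blocking receives
t.join()
```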

3. Automatic or explicit buffering

Messages exchanged by communicating processes reside in a temporary queue. Such queues can be implemented in 3 ways:

i) Zero capacity: queue length = 0; the link cannot hold any waiting messages, so the sender must block until the receiver receives the message.

ii) Bounded capacity: queue length is finite (say n). When the queue is full, the sender must block until space is available in the queue.

iii) Unbounded capacity: queue length is infinite; the sender never blocks.

===============================================================================

Q) What is deadlock?

A set of processes is deadlocked when every process in the set is waiting for a resource that is currently allocated to another process in the set. Here, process P1 is allocated resource R2 and P2 is allocated R1.

P1 requires R1 and P2 requires R2.

Processes P1 and P2 will wait forever. This situation is called deadlock.
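The situation above can be modeled as a directed graph of request and assignment edges; with single-instance resources, a cycle in this graph means deadlock. A small sketch (node names are illustrative):

```python
# Detect a cycle in a directed graph with depth-first search.
# Edges: P -> R is a request edge, R -> P is an assignment edge.
def has_cycle(edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:      # back edge found: cycle
            return True
        if node in done:
            return False
        visiting.add(node)
        if any(dfs(n) for n in graph.get(node, [])):
            return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(dfs(n) for n in list(graph))

# P1 holds R2 and requests R1; P2 holds R1 and requests R2.
deadlocked = has_cycle([("P1", "R1"), ("R1", "P2"), ("P2", "R2"), ("R2", "P1")])
```

Removing any one of the four edges breaks the cycle, and has_cycle returns False.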

Q) What are the four conditions necessary for deadlock to occur?

1. Mutual exclusion: at least one resource must be held in a non-sharable mode. If any other process requests this resource, that process must wait for the resource to be released.

2. Hold and wait: a process must be simultaneously holding at least one resource and waiting for at least one resource that is currently held by some other process.

3. No preemption: once a process is holding a resource, that resource cannot be taken away from the process until the process releases it.

4. Circular wait: a set of processes P0, P1, P2, ..., PN must exist such that every P[i] is waiting for a resource held by P[(i + 1) mod (N + 1)].

Q) Methods for handling deadlocks

1. Use deadlock prevention or deadlock avoidance protocols so that the system never enters a deadlocked state.
2. Allow the system to enter a deadlocked state, detect it, and recover from it.
3. Ignore the problem and pretend that deadlocks never occur.

To ensure that the system never enters a deadlocked state, the system can use:

1. Deadlock prevention 2. Deadlock avoidance

==============================================================================

Deadlock Prevention

1. Mutual exclusion: we cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources are intrinsically non-sharable (e.g., a printer).

2. Hold and wait

To ensure that the hold-and-wait condition never occurs in the system, two protocols can be used:

Protocol 1: All the resources a process will request must be allocated before the process begins execution.

Protocol 2: A process can request resources only when it holds none. If a process requires additional resources, it must first release all the resources currently allocated to it.

Example: consider a process that copies data from a DVD drive to a file on disk, sorts the file, and then prints the results to a printer.

If Protocol 1 is used, it must request the DVD drive, disk file, and printer at the beginning and hold them until the end.

Disadvantages:

1. Starvation: a process may wait forever, because at least one resource that it needs may always be allocated to some other process.

2. Low resource utilization: the process holds the printer from beginning to end even though the printer is used only at the end.

If Protocol 2 is used, the process initially requests only the DVD drive and the disk file. It copies from the DVD drive to the disk and then releases both. It then requests the disk file and the printer.

Disadvantage: there is a chance that the data may no longer be on the disk file when it is requested again.

3. No preemption

To ensure that this condition does not hold, the following protocol is used.

Protocol: if a process (say A) requests some resources:

Case 1: if the resources are available, allocate them.

Case 2: if the resources are allocated to some other process (say B) that is itself waiting for additional resources, then preempt the desired resources from the waiting process B and allocate them to the requesting process A.

Process B can be restarted only when it is allocated both the additional resources it is requesting and the resources that were taken from it and given to process A.

Case 3: if the resources are neither available nor held by a waiting process, then process A waits.

This protocol is applied to resources such as CPU registers and memory space, whose state can easily be saved and later restored.

4. Circular wait

To ensure that the circular-wait condition never occurs:

1. Each resource type is assigned a unique integer number.

2. Each process must request resources in increasing order of enumeration.

We define a one-to-one function F: R -> N, where R is the set of resource types and N is the set of natural numbers.

Suppose a process has initially requested some resource type Ri.

Protocol 1: After that, the process can request a resource type Rj if and only if F(Rj) > F(Ri).

Protocol 2: If a process requests a resource type Rj, it must first release all resources Ri for which F(Ri) >= F(Rj).

Example: let F(tape drive) = 1, F(disk drive) = 5, and F(printer) = 12.

A process can request any number of tape drives, disk drives, and printers.

Protocol 1: if a process A already holds a disk drive, A can now request only the printer; it cannot request a tape drive.

Protocol 2: in order to request a tape drive, process A must first release the disk drive, and only then request the tape drive.
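The ordering protocol can be sketched as a simple check, using the F values from the example (the function name is illustrative):

```python
# F assigns each resource type a unique number; under Protocol 1 a new
# request is legal only if its number is greater than the number of every
# resource the process already holds.
F = {"tape drive": 1, "disk drive": 5, "printer": 12}

def may_request(held, resource):
    return all(F[resource] > F[r] for r in held)

ok = may_request(["disk drive"], "printer")          # legal
illegal = may_request(["disk drive"], "tape drive")  # must release disk first
```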

If either of the above two protocols is used, the circular-wait condition never occurs. We can prove this by contradiction.

Proof: Assume a circular wait exists. Let the set of processes involved in the circular wait be P0, P1, ..., Pn, where P0 is waiting for resource R0, which is held by P1.

P1 is waiting for resource R1, which is held by P2, so F(R0) < F(R1).

P2 is waiting for resource R2, which is held by P3, so F(R1) < F(R2), and so on.


Finally, Pn is waiting for resource Rn, which is held by P0, so F(Rn) < F(R0).

By transitivity, F(R0) < F(R0), which is impossible. Hence our assumption that a circular wait exists is false.

===============================================================================

Q) Resource-Allocation Graph

Deadlocks can be understood more clearly through the use of resource-allocation graphs, which have the following properties:

1. Resource types are represented as square nodes. Dots inside a square indicate the number of instances of that resource (e.g., two dots might represent two laser printers).
2. Processes are represented as circles.
3. Request edges: if P1 has requested R1, a directed edge from P1 to R1 is a request edge.
4. Assignment edges: a directed edge from R2 to P1 indicates that resource R2 has been allocated to process P1, i.e., P1 is currently holding R2. A request edge is converted into an assignment edge when the request is granted.

If a resource-allocation graph contains a cycle AND each resource type has only a single instance, then a deadlock exists. If a resource type has more than one instance, a cycle in the graph indicates the possibility of a deadlock but does not guarantee one.

==================================================================

Q) Deadlock Avoidance

For each resource request, the system can decide whether the request should be granted or not. To make this decision, the system must have information such as:

1. The resources currently available
2. The resources currently allocated to each process
3. The future requests and releases of each process
4. The maximum number of resources each process may need

Given this information, it is possible to construct an algorithm that ensures the system will never enter a deadlocked state. There are two deadlock-avoidance algorithms:

1. Resource-Allocation Graph Algorithm
2. Banker's Algorithm

Safe state: a system is in a safe state if there exists a safe sequence of processes P0, P1, P2, ..., PN such that the resources Pi may still request can be satisfied by the currently available resources plus the resources held by all processes Pj with j < i. All safe states are deadlock-free.

Unsafe state: if no safe sequence exists, the system is in an unsafe state, which MAY lead to deadlock.

1. Resource-Allocation Graph Algorithm

Resource-allocation graphs can be used for avoidance only if there is a single instance of each resource type. In this case, unsafe states can be recognized and avoided by adding claim edges, denoted by dashed lines, which point from a process to a resource it may request in the future. All claim edges are added before the process starts. When the process actually makes a request, the claim edge Pi -> Rj is converted into a request edge; when the resource is released, the assignment edge changes back to a claim edge. This approach works by denying any request that would produce a cycle in the resource-allocation graph, taking claim edges into account.

Consider, for example, the resource-allocation graph as shown.

If P2 requests resource R2, the claim edge P2 -> R2 becomes a request edge.

The resulting resource-allocation graph would contain a cycle, so the request cannot be granted.

Q) Banker's Algorithm (deadlock-avoidance algorithm) with example

There are 12 tape drives. Let the current state of the system be as shown below:

Process   Allocated   Max Need   Need (= Max Need - Allocated)
P0        5           10         5
P1        2           4          2
P2        2           9          7

Available = 12 - (5 + 2 + 2) = 3

Resource-Request Algorithm: when process P2 requests 1 more tape drive, we run the resource-request algorithm to decide whether the request should be granted. The request is granted only if, after granting it, all processes in the system can still complete. We check:

1. Is the request of P2 <= need of P2? 1 <= 7, so TRUE.
2. Is the request of P2 <= Available? 1 <= 3, so TRUE.
3. Pretend the request is granted. The state becomes:

Process   Allocated    Max Need   Need
P0        5            10         5
P1        2            4          2
P2        2+1 = 3      9          7-1 = 6

Available = 3 - 1 = 2

Now run the safety algorithm to check whether the resulting system state is safe.

Safety algorithm:

1. Let WORK = Available = 2.
2. Find an unfinished process whose need <= WORK.

Check P0: need of P0 = 5, WORK = 2. Is 5 <= 2? FALSE.

Check P1: need of P1 = 2, WORK = 2. Is 2 <= 2? TRUE, so P1 can finish. When P1 finishes, WORK = WORK + Allocated to P1 = 2 + 2 = 4.

Again check P0: need of P0 = 5, WORK = 4. Is 5 <= 4? FALSE.

Check P2: need of P2 = 6, WORK = 4. Is 6 <= 4? FALSE.

Neither P0 nor P2 can complete, so the resulting state is unsafe. Therefore the request for 1 tape drive by P2 is NOT granted.
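The worked example above can be reproduced with a short sketch of the safety algorithm for a single resource type (the function name and list encoding are illustrative):

```python
# Returns a safe sequence of process indices, or None if the state is
# unsafe. `available` is a count; `allocated` and `need` are per-process.
def safe_sequence(available, allocated, need):
    work = available
    finish = [False] * len(allocated)
    order = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(len(allocated)):
            if not finish[i] and need[i] <= work:
                work += allocated[i]   # process i finishes and releases all
                finish[i] = True
                order.append(i)
                progressed = True
    return order if all(finish) else None

# Before the request: Allocated = [5, 2, 2], Need = [5, 2, 7], Available = 3
before = safe_sequence(3, [5, 2, 2], [5, 2, 7])   # safe: P1, then P0, then P2
# After pretending to grant P2 one drive: the state is unsafe
after = safe_sequence(2, [5, 2, 3], [5, 2, 6])
```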

2. Banker's Algorithm

For resource types with more than one instance, the resource-allocation graph method does not work, so we use the Banker's algorithm.

When a process starts up, it must state in advance the maximum allocation of resources it may request. When a request is made, the scheduler determines whether granting the request would leave the system in a safe state. If not, the process must wait until the request can be granted safely.

The Banker's algorithm relies on several key data structures (where n is the number of processes and m is the number of resource types):

o Available[m] indicates how many instances of each resource are currently available.
o Max[n][m] indicates the maximum demand of each process for each resource.
o Allocation[n][m] indicates the number of each resource currently allocated to each process.
o Need[n][m] indicates the remaining resources needed of each type by each process. (Note that Need[i][j] = Max[i][j] - Allocation[i][j] for all i, j.)

To simplify the discussion, one row of the Need matrix, Need[i], can be treated as a vector corresponding to the needs of process i, and similarly for Allocation and Max.

Safety Algorithm

To apply the Banker's algorithm, we first need an algorithm for determining whether or not a particular state is safe. The algorithm proceeds as follows:

1. Let Work and Finish be vectors of length m and n, respectively. Work is a working copy of the available resources; Finish is a vector of booleans indicating whether a particular process can finish. Initialize Work = Available and Finish[i] = false for all i.

2. Find an i such that both (a) Finish[i] == false and (b) Need[i] <= Work. Such a process has not yet finished but could finish with the currently available resources. If no such i exists, go to step 4.

3. Set Work = Work + Allocation[i] and Finish[i] = true. This corresponds to process i finishing and releasing its resources back into the pool. Then loop back to step 2.

4. If Finish[i] == true for all i, the state is safe, because a safe sequence has been found.

Resource-Request Algorithm (the Banker's Algorithm)

Now that we have a tool for determining whether a particular state is safe, we can decide whether a new request should be granted: pretend the request has been granted, then check whether the resulting state is safe. If so, grant the request; if not, deny it. When a request is made (that does not exceed the currently available resources):

1. Let Request[n][m] indicate the number of resources of each type currently requested by each process. If Request[i] > Need[i] for any process i, raise an error condition.

2. If Request[i] > Available, the process must wait for resources to become available; otherwise the process can continue to step 3.

3. Check whether the request can be granted safely by pretending it has been granted and testing whether the resulting state is safe. If the resulting state is safe, grant the request; otherwise the process must wait until its request can be granted safely.

The procedure for granting a request (or pretending to, for testing purposes) is:

Available = Available - Request[i]
Allocation[i] = Allocation[i] + Request[i]
Need[i] = Need[i] - Request[i]
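The pretend-grant-then-check procedure can be sketched for a single resource type (a simplification of the full vector form; names and the embedded safety loop are illustrative):

```python
# Decide whether granting `request` units to process i leaves the system
# in a safe state. State is copied, so the caller's lists are untouched.
def request_granted(i, request, available, allocated, need):
    if request > need[i] or request > available:
        return False
    # Pretend the request is granted.
    available -= request
    allocated = allocated[:]
    need = need[:]
    allocated[i] += request
    need[i] -= request
    # Safety check: can every process still finish in some order?
    work, finish = available, [False] * len(allocated)
    progressed = True
    while progressed:
        progressed = False
        for j in range(len(allocated)):
            if not finish[j] and need[j] <= work:
                work += allocated[j]
                finish[j] = True
                progressed = True
    return all(finish)

# Tape-drive state from the example: Allocated=[5,2,2], Need=[5,2,7], Available=3
deny = request_granted(2, 1, 3, [5, 2, 2], [5, 2, 7])   # unsafe: denied
grant = request_granted(1, 1, 3, [5, 2, 2], [5, 2, 7])  # safe: granted
```

In a real implementation, a denied request leaves the state unchanged, and the requesting process simply waits and retries.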

Unit III Memory management

Just as processes share the CPU, they also share physical memory. The memory management unit of the OS takes care of memory allocation, deallocation, and related issues. A program must be brought into memory for it to run. Addresses are of two types:

i) Relocatable (relative) addresses, expressed with respect to the beginning of the program
ii) Absolute addresses

Q) Address Binding

Ans: Binding means mapping the logical address space to the physical address space. Address binding can happen at three different stages:

Compile time: if it is known at compile time where in memory the program will be placed, the compiler generates absolute addresses. Otherwise, the compiler generates relocatable addresses.

Load time: the loader binds the relocatable addresses generated by the compiler to absolute addresses; hence binding is done at load time. If binding is done at compile or load time, the physical and logical addresses are the same.

Execution time: if address binding is done at run time, the process can be moved during its execution from one memory segment to another. Here we call the logical addresses virtual addresses. Run-time mapping of virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU).

Q) Logical vs Physical Address Space

Ans: A logical address is generated by the CPU and is also referred to as a virtual address. A physical address is the address seen by the memory unit. The set of all logical addresses is called the logical address space; the set of all physical addresses is called the physical address space.

Q) Memory-Management Unit (MMU)

Ans: The MMU is a hardware device that maps virtual to physical addresses. In the simplest MMU scheme, the value in the relocation register is added to every address generated by the CPU before the contents of memory are accessed. The user program deals only with logical addresses; it never sees the real physical addresses.

Q) Dynamic Loading

Ans: Since physical memory is limited, it may not be possible for an entire program to reside in main memory, so we can use dynamic loading. It is the responsibility of users to design their programs to take advantage of dynamic loading. Only the main function is loaded into main memory initially; when main() calls another function, it checks whether that function is already in main memory. If not, the loader loads the desired function into main memory and updates the program's address table.

A routine is not loaded until it is called. This gives better memory-space utilization, since an unused routine is never loaded.

Q) Dynamic Linking
+ Linking is postponed until execution time
+ A small piece of code, the stub, is used to locate the appropriate memory-resident library routine
+ The stub replaces itself with the address of the routine and executes the routine
+ The operating system checks whether the routine is in the process's memory address space
+ Dynamic linking is particularly useful for libraries

Q) Overlays

Ans: Overlays are needed when a process is larger than the amount of main memory allocated to it. Only the instructions and data that are needed at any given time are kept in main memory. Overlays can be implemented by the user, but programming the design of an overlay structure is complex. Overlays for a two-pass assembler are shown in the figure.

Q) SwappingAns A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued executionBacking store ndash fast disk large enough to store copies of all memory images of all usersPriority based scheduling uses a variant of swapping policy called roll out roll in If a higher priority process arrives the memory manager swaps out lower priority process and then swaps in higher priority process When the higher priority process finishes the lower priority process can be swapped in again to main memory A process that is swapped out will be swapped back into the same memory space it occupied previouslyMajor part of the swap time is transfer time We can swap idle process only and cannot swap a process that is waiting for IO

Q) Contiguous memory allocation
Ans: Each process is contained in a single contiguous section of memory.
1. Fixed Size Partition (or) Single-partition allocation

Divide main memory into fixed-sized partitions; each partition may contain exactly one process. The relocation register contains the value of the starting physical address and the limit register contains the range of logical addresses. Every address generated by the CPU is checked as follows:

If logical address < limit register, then the logical address is added to the relocation register to get the corresponding memory address;

else a trap to the OS is generated. Since every address is checked, we protect the OS and other user programs from being modified by the running process.
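The check above can be sketched in code (a minimal Python illustration; the function name and the sample register values are ours, not from the notes):

```python
def translate(logical, limit, relocation):
    """Hardware check: compare against the limit register, then relocate."""
    if logical < limit:
        return logical + relocation          # physical address
    raise MemoryError("trap to OS: logical address out of range")

# A process with a 3000-address range loaded at physical address 14000:
print(translate(250, limit=3000, relocation=14000))  # 14250
```

Any logical address at or beyond the limit (e.g. 3500 here) raises the trap instead of returning an address.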

2 Multiple-partition allocation

1) Fixed Size Partitions: Divide main memory into fixed-sized blocks. Here the memory allocated to a process may be larger than required. The difference between allocated memory and requested memory is called internal fragmentation: unused memory inside the partition.

0: P1 Block0 (addresses 0 to 3)
4: P1 Block1 (addresses 4 to 7)
8: Block2 (addresses 8 to 11)
12: Block3 (addresses 12 to 15)

If P1 requires just 5 addresses, it is still allocated 2 blocks, i.e. 8 addresses, so the remaining 3 addresses are left unused.
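The arithmetic can be checked with a small helper (an illustrative Python sketch; block size 4 as in the example above):

```python
def internal_fragmentation(request, block_size):
    blocks = -(-request // block_size)      # ceiling division: blocks needed
    return blocks * block_size - request    # unused addresses inside the partition

print(internal_fragmentation(5, 4))  # 2 blocks = 8 addresses allocated, 3 unused
```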

2) Variable Size Partitions: Here main memory is divided into partitions of variable sizes. The operating system maintains information about:

a) allocated partitions b) free partitions (holes)

In the beginning, all main memory is empty and is considered one large block of available memory, a hole.

Exactly the memory required by a process is given to it. When a process terminates, it releases its memory, which can be allocated to another process.

Memory is allocated to processes until finally no available block of memory (hole) is large enough to hold the next process. External fragmentation exists when there is enough total memory to satisfy a request but the available memory is not contiguous: memory is wasted outside the partitions. One solution to the problem of external fragmentation is:

a. Compaction: shuffle the memory contents so as to place all free memory together in one large block.

===============================================================================
Q) Dynamic Storage-Allocation Problem (or) Most commonly used strategies to select a free hole from the set of available holes
Ans: To satisfy a request of size n from a list of free holes, the following 3 policies can be used:

1. First-fit: Allocate the first hole that is big enough.

2. Best-fit: Allocate the smallest hole that is big enough; must search the entire list if it is not ordered by size. Produces the smallest leftover hole.

3. Worst-fit: Allocate the largest hole; must also search the entire list. Produces the largest leftover hole.
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
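The three policies can be sketched as follows (a hedged Python illustration; the hole list is an invented example):

```python
def first_fit(holes, n):
    """Index of the first hole that is big enough, else None."""
    return next((i for i, h in enumerate(holes) if h >= n), None)

def best_fit(holes, n):
    """Index of the smallest hole that is big enough, else None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Index of the largest hole, provided it is big enough, else None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212), best_fit(holes, 212), worst_fit(holes, 212))  # 1 3 4
```

For a request of 212, first-fit picks the 500 hole, best-fit the 300 hole, and worst-fit the 600 hole.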

===============================================================================Q) Non-Contiguous Memory allocation

1. Paging: Paging is a memory-management scheme that permits non-contiguous memory allocation.

0: P1 Block0 (addresses 0 to 3)
4: P2 Block1 (addresses 4 to 7)
8: Block2 (addresses 8 to 11)
12: P1 Block3 (addresses 12 to 15)

The logical address space of a process can be non-contiguous. Ex: P1 is allocated Block0 and Block3.

1. Divide physical memory into fixed-sized blocks called frames (size is a power of 2).
2. Divide logical memory into blocks of the same size, called pages.
3. A logical address is divided into 2 parts:

Page number (p): used to index the page table.
Page offset (d): displacement within the page.

4. A page table is allocated to each process (a pointer to the page table is stored in the PCB of the process). The page table translates logical to physical addresses: page 0 is in frame 1, page 1 is in frame 4, etc.
5. Internal fragmentation may occur due to paging.

6. If the size of the logical address space is 2^m and the page size is 2^n, then the higher-order m-n bits of the logical address give the page number and the n lower-order bits give the displacement within the page.

Ex: size of logical address space = 8 = 2^3, so m = 3;

page size = 4 = 2^2, so n = 2.

So m-n = 1 bit for the page number (to represent pages 0 and 1),

and the remaining n = 2 bits for the displacement within the page.
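Because the page size is a power of 2, the split is just two bit operations (an illustrative Python sketch using the m = 3, n = 2 example above):

```python
def split_logical(addr, n):
    """Return (page number, offset) for a page size of 2**n."""
    return addr >> n, addr & ((1 << n) - 1)

print(split_logical(6, 2))   # logical address 6 -> page 1, offset 2
```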

Q) Implementation of Page Table

The page table can be kept as:

1) As a set of dedicated registers: efficient if the page table is small.

2) In memory: the page-table base register (PTBR) points to the page table and the page-table length register (PTLR) indicates its size. Changing page tables then requires changing only the values of these registers.
Advantage of keeping the page table in memory: a. less context-switch time.

b. Disadvantage: two memory accesses are required, one to access the page table and another to access the required memory address.

This problem can be solved by using a special fast-lookup hardware cache called associative memory or translation look-aside buffer (TLB).

i. Each entry in the TLB consists of 2 parts:

I) page number field

II) Value field or frame number field

ii. When a logical address is generated by the CPU, its page number is presented to the TLB. The page number is compared with all the entries of the TLB simultaneously.

If the page number is found in the TLB, its frame number is immediately available -> TLB hit. If the page number is not found, the page table is checked -> TLB miss.

By using a TLB, the search is fast, but TLB hardware is expensive, so the TLB is kept small and contains only a few frequently used page-table entries.
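The lookup logic can be modelled as follows (a toy Python model; a real TLB compares all entries in parallel in hardware, and the page-table contents here are invented):

```python
def lookup(page, tlb, page_table):
    if page in tlb:
        return tlb[page], "hit"       # frame number immediately available
    frame = page_table[page]          # extra memory access on a miss
    tlb[page] = frame                 # cache the translation for next time
    return frame, "miss"

page_table = {0: 1, 1: 4, 2: 3, 3: 7}
tlb = {}
print(lookup(1, tlb, page_table))  # (4, 'miss')
print(lookup(1, tlb, page_table))  # (4, 'hit')
```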

Q) Protection in Paging
Ans: A valid/invalid bit is attached to each entry in the page table. The OS sets this bit for each page to allow or disallow access to the page. When the bit is set to invalid, the page is not in the process's logical address space, so an access generates a trap to the OS.

Q) Shared pages in Paging

Reentrant code is code that never changes during execution, so it can be shared.

1. One copy of read-only (reentrant) code is shared among processes (e.g. text editors, compilers, window systems). Ex: pages ed1, ed2, ed3 are shared among processes P1 and P2.

2 Shared code must appear in same location in the logical address space of all processes

Each process keeps a separate copy of its private code and data. Ex: data1 and data2.

The pages for the private code and data can appear anywhere in the logical address space

==============================================================================Q) Segmentation

Segmentation is a memory-management scheme that supports the user's view of memory.

When the user program is compiled the compiler generates segments like

1) The code segment

2) Global variables segment

3) Heap memory segment

4) Stack segment etc

Each entry in segment table has

the segment base (starting physical address of the segment) and the segment limit (which specifies the length of the segment).

Logical address is divided into 2 parts

<segment-number s, offset d>

Segment table is indexed by segment number

Segment-table base register (STBR) stores the location of segment table in main memory

Segment-table length register (STLR) stores number of segments used by a program

The segment number s is used to find the entry in the segment table; the required entry is s entries from the beginning of the table. Once the required entry is found, the offset d is compared with the limit:

if (offset < limit), then the offset is added to the base entry to generate the physical address; otherwise a trap to the OS is generated.
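In code, the translation reads (a minimal Python sketch; the segment-table values are an invented example, not from these notes):

```python
def seg_translate(s, d, segment_table):
    base, limit = segment_table[s]
    if d < limit:
        return base + d              # physical address
    raise MemoryError("trap: offset beyond segment limit")

# Hypothetical table: segment 0 at base 1400 (length 1000), segment 1 at 6300 (length 400)
table = {0: (1400, 1000), 1: (6300, 400)}
print(seg_translate(1, 53, table))   # 6300 + 53 = 6353
```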

Q) Shared segments
Ans: Code sharing occurs at the segment level. Shared segments must have the same segment number.

Allocation is a dynamic storage-allocation problem: use best fit / first fit; this may cause external fragmentation.

Protection: protection bits (read/write/execute privileges) are associated with segments. Putting an array in a separate segment lets the hardware check for illegal array indexes.

===============================================================================
Q) Virtual memory

Ans: In paging and segmentation, a program will execute only if the entire process is in main memory. With virtual memory, only part of the program needs to be in memory for execution.

1. A program larger than the available main memory can still run, since only part of the program is loaded into main memory. The logical address space can therefore be much larger than the physical address space.

2 Allows address spaces to be shared by several processes

Virtual memory can be implemented via demand paging or demand segmentation.

The large blank space between the heap and the stack is part of the virtual address space. Virtual memory allows files and memory to be shared by 2 or more processes through page sharing, as shown in figure (b). Figure (a) depicts a virtual memory that is larger than physical memory.

==============================================================================

Q) Demand paging
Ans: Demand paging is the technique of loading pages from disk into main memory only when a page is needed. Hence, using a smaller amount of physical memory, we get faster response.

Demand paging is similar to PAGING + SWAPPING

Instead of swapping a whole process, the pager brings into memory only those pages that are needed.

To distinguish between pages that are in memory and pages that are on disk, a valid/invalid bit is used.

The valid/invalid bit is attached to each entry in the page table. When the bit is set to valid, the page is legal and in memory. When the bit is set to invalid, either the page is illegal or the page is on disk.

When the CPU generates a logical address whose page-table entry is set to invalid, a page fault occurs, and the required page is on disk. A section of the hard disk called swap space is used to hold pages that are not present in memory.

Q) Procedure for Handling a Page Fault

1) The CPU generates a logical address; if the valid/invalid bit for the page containing this address is set to invalid, a page fault occurs.

2) The page fault causes a trap to the operating system.

3) Check whether the logical address is within the logical address space of the process, i.e. the PTBR and PTLR are checked:
if (the required logical address is not within the logical address space of the process)
    terminate the process;
else
    the page is legal but not in memory; it is on disk.

4) To bring the required page into memory: find a free main-memory frame, map the address to a disk block, fetch the disk block, and load it into the free frame.
5) When the required page has been brought into memory, update the page table to indicate that the page is in memory.
6) Restart the instruction interrupted by the illegal-address trap. The process continues as if the page had always been in memory.

Q) What is pure demand paging?
Ans: If no page belonging to the executing process is in main memory, the process will fault for every page it needs; page faults occur until every page that is needed is in memory. This scheme is known as pure demand paging: NEVER BRING A PAGE IN UNTIL IT IS REQUIRED.

Q) Page Replacement
Ans:
1. Find the location of the desired page on the disk.
2. Find a free frame:
   i) If there is a free frame, use it.
   ii) If there is no free frame, use a page-replacement algorithm to find some page in memory that is not really in use, and select it as the victim.
   iii) If the modify bit associated with the victim frame is set to 1, the page is written to disk; otherwise there is no need to write it to disk. Change the valid/invalid bit in the victim page's page-table entry to invalid, to indicate that the page is no longer in memory.
3. Read the desired page into the victim frame and change the frame and page tables.
4. Restart the user process.

=================================================================================================
Q) Page Replacement Algorithms
Goal: produce a low page-fault rate. The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

1. First-In First-Out (FIFO) Page Replacement: uses the time when a page was brought into memory; the page that was brought in first is replaced, i.e. the oldest page is chosen for replacement. We create a FIFO queue to hold all pages in memory.

When a page is brought into memory, we insert it at the tail of the queue, and we replace the page at the head of the queue.

FIFO: 15 page faults.
The problem with FIFO is Belady's anomaly. For the reference string 1,2,3,4,1,2,5,1,2,3,4,5:
- with 1 frame allocated to the process, FIFO gives 12 page faults
- with 2 frames, FIFO gives 12 page faults
- with 3 frames, FIFO gives 9 page faults
- with 4 frames, FIFO gives 10 page faults
As the number of frames allocated to a process increases, page faults should decrease; but with FIFO replacement, page faults can increase.
2. Optimal Page Replacement Algorithm (OPT): replace the page that will not be used for the longest period of time. It is used for measuring how well other algorithms perform. OPT is not practical to implement, as it is not possible to know what the future references will be.
Ex:
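Belady's anomaly can be reproduced with a short simulation (a Python sketch of FIFO replacement, run on the reference string above):

```python
from collections import deque

def fifo_faults(refs, frames):
    memory, order, faults = set(), deque(), 0
    for p in refs:
        if p not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(order.popleft())   # evict the oldest page
            memory.add(p)
            order.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print([fifo_faults(refs, f) for f in (1, 2, 3, 4)])  # [12, 12, 9, 10]
```

Note the jump from 9 faults with 3 frames to 10 faults with 4 frames: more memory, more faults.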

Optimal page replacement 9 page faults

3. Least Recently Used (LRU) Page Replacement: replace the page that has not been used for the longest period of time. LRU makes use of the time of last use of each page. Ex:

Number of page faults = 12

LRU requires hardware assistance to determine the time of last use of the page We can use 1) Stack or 2) Counter to implement LRU page replacement algorithm

1) Stack implementation: keep a stack of page numbers in doubly linked form. When a page is referenced, it is moved to the top of the stack. Each such move requires changing 6 pointers.

2) Counter implementation: a time-of-use field is added to each page-table entry. The CPU maintains a counter that is incremented on every page reference; the counter value is copied into the time-of-use field of the referenced page. The page with the smallest time-of-use value is the least recently used.
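The counter implementation can be simulated as follows (a Python sketch; the dictionary stands in for the per-page time-of-use field):

```python
def lru_faults(refs, frames):
    time_of_use, clock, faults = {}, 0, 0
    for p in refs:
        clock += 1                      # counter incremented on every reference
        if p not in time_of_use:
            faults += 1
            if len(time_of_use) == frames:
                victim = min(time_of_use, key=time_of_use.get)  # smallest time of use
                del time_of_use[victim]
        time_of_use[p] = clock          # copy counter into the time-of-use field
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10
```

Unlike FIFO, LRU never exhibits Belady's anomaly: with 4 frames the same string gives 8 faults.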

Disadvantages of the LRU page replacement algorithm: updating the counter or stack must be done for every memory reference, HENCE LRU IS SLOW, and the implementation requires hardware assistance.
==============================================================================
Q) LRU approximation page replacement
Ans:
1) Using a reference bit
2) Additional-reference-bits algorithm
3) Second-chance algorithm
4) Enhanced second-chance algorithm
5) Counting-based page replacement

It has two schemes: 1) Least Frequently Used (LFU) 2) Most Frequently Used (MFU)

6) Page-buffering algorithms

Q) What is Thrashing?
Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur under both global and local page replacement, as described below. A solution to thrashing is the working-set strategy: if we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know how many frames a process needs, we use the locality model of process execution.

Here a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes locality.
The Working Set model uses a working set and a parameter Δ, the working-set window. The set of pages in the most recent Δ page references is the working set. Let WSS_i be the working-set size for process P_i, i.e. P_i needs WSS_i frames. Let D be the total demand for frames; then D = Σ WSS_i,

and let m be the total number of available frames.
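A small sketch of the computation (illustrative Python; the reference string, working-set sizes, and frame count are invented examples):

```python
def working_set(refs, t, delta):
    """Pages in the most recent `delta` references ending at time t."""
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [1, 2, 1, 3, 2, 4, 4, 4, 3, 4]
print(sorted(working_set(refs, t=9, delta=4)))   # [3, 4]

wss = {"P1": 3, "P2": 4}      # hypothetical working-set sizes WSS_i
D, m = sum(wss.values()), 6   # total demand vs available frames
print("suspend a process" if D > m else "may start another process")
```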

If D > m, thrashing occurs because some processes will not have enough frames. So:
if (D > m) the OS selects a process to suspend; else the OS may initiate another process.
The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.
========================================================================
Q) How does thrashing occur in global and local page replacement?
Ans: In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, causing a page fault for some other process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more processes queue up for the paging device,

the ready queue empties and the CPU becomes idle. So the OS introduces new processes, as the CPU is free; this further increases the number of page faults.

In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs, only the frames allocated to that process can be replaced. When a process starts thrashing, other processes are not affected. But since processes still queue up for the paging device, the average service time for a page fault increases; hence the effective access time increases even for a process that is not thrashing.

Q) Page Table Structure (or) Structure of the page table in memory

Ans: 1) Hierarchical paging 2) Hashed page tables 3) Inverted page tables

Hierarchical Paging: If the page table is large, we break it up into multiple page tables, i.e. the page table itself is paged. A simple technique is a two-level page table.
Two-level paging example: a logical address (on a 32-bit machine with a 4K page size) is divided into:
1. a page number consisting of 20 bits
2. a page offset consisting of 12 bits
Since the page table is paged, the page number is further divided into:
1. a 10-bit outer page number p1
2. a 10-bit displacement p2
Thus a logical address is as follows:

| p1 (10 bits) | p2 (10 bits) | d (12 bits) |

where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table.
Two-Level Page-Table Scheme
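The split for this 32-bit layout is again a pair of shifts and masks (a Python sketch; the sample address is invented):

```python
def split_two_level(addr):
    """Split a 32-bit address into 10-bit p1, 10-bit p2, 12-bit offset d."""
    return addr >> 22, (addr >> 12) & 0x3FF, addr & 0xFFF

print(split_two_level(0x00403004))  # (1, 3, 4)
```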

Address-Translation Scheme Address-translation scheme for a two-level 32-bit paging architecture

Hashed Page Tables (or hash table): Hashed page tables are common in address spaces larger than 32 bits. Each entry in the hash table contains a linked list of elements that hash to the same location. The hash table is indexed by hash value. Each element consists of 3 fields:

1 Page number2 Frame number3 Pointer to next element in linked list

The algorithm works as follows: the page number is passed to the hash function to get a hash value. Locating that value in the hash table is easy, as the table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address; if there is no match, the next entries in the linked list are searched.
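A toy model of the chained lookup (a Python sketch; the mod-8 hash function and the mappings are our own choices):

```python
def ht_lookup(page, table, size=8):
    for p, frame in table[page % size]:   # walk the chain at this hash value
        if p == page:
            return frame
    return None                           # page not mapped

table = [[] for _ in range(8)]
table[5 % 8].append((5, 2))     # page 5 -> frame 2
table[13 % 8].append((13, 7))   # page 13 collides with page 5 (13 mod 8 = 5)
print(ht_lookup(5, table), ht_lookup(13, table), ht_lookup(6, table))  # 2 7 None
```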

Inverted Page Table

An inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table has:

<process-id, page number>

A logical address has 3 parts: <process-id, page number, offset>.

The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, then <i, offset> gives the physical address. If there is no match, the address is illegal.
Advantages and disadvantages:

The inverted page table decreases the memory needed to store page tables, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to

limit the search to one (or at most a few) page-table entries. Inverted Page Table Architecture

Unit-IV CPU Scheduling

Basic concepts: To maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns it to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle: Process execution begins with a CPU burst, followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

An I/O-bound program has many short CPU bursts. A CPU-bound program has a few long CPU bursts.

Schedulers: modules in the OS that make scheduling decisions.
- Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
- Medium-term scheduler: selects which processes should be swapped in/out of memory.
- Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (CPU) scheduler.

Dispatcher: gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

1. switching context
2. switching to user mode
3. jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency; it must be kept small.
Ready Queue: the ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of processes.

Pre-emptive Scheduling

CPU scheduling decisions take place under one of four conditions

1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state (for example, in response to an interrupt)
3. When a process switches from the waiting state to the ready state (say, at completion of I/O)
4. When a process terminates

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one.

If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive or cooperative. In non-preemptive scheduling, a process runs until it gives up the CPU itself, i.e. when it waits for I/O or when it completes. Otherwise the system is said to be preemptive.

Windows used non-preemptive scheduling up to Windows 3.x and started using preemptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X and preemptive scheduling since then. Note that preemptive scheduling is only possible on hardware that supports a timer interrupt.

Note that pre-emptive scheduling can cause problems when two processes share data because one process may get interrupted in the middle of updating shared data structures

Preemption can also be a problem if the kernel is busy executing a system call (e.g. updating critical kernel data structures) when the preemption occurs. Solution: wait until the system call has either completed or blocked before allowing the preemption. But this solution is problematic for real-time systems.

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. But this should be done only for very short pieces of code that take little CPU time.

Scheduling criteria used to decide among CPU scheduling algorithms:
1. CPU utilization: in a real system, CPU utilization ranges from about 40% to 90%.
2. Throughput: number of processes completed per unit time.
3. Turnaround time: time when the process completed minus time when the process was submitted; it includes waiting time and burst time.
4. Waiting time: amount of time a process spends waiting for the CPU.
5. Response time: time when the first response came minus time when the process was submitted.

Optimization criteria: max CPU utilization, max throughput,

min turnaround time, min waiting time, min response time.

CPU Scheduling Algorithms
1) First-Come First-Served (FCFS) Scheduling: FCFS is non-preemptive; processes are scheduled in the order they arrive. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages:
1. Average waiting time is quite long.

2. Convoy effect (small processes wait in the queue for a big process to leave the CPU).
Example:
Process  Burst Time
P1       24
P2       3
P3       3

Suppose that the processes arrive in the order P1 P2 P3

The Gantt chart for the schedule is: P1 (0-24) | P2 (24-27) | P3 (27-30)

Waiting time: P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.
2) Shortest-Job-First (SJF) Scheduling: schedule the process with the shortest CPU burst time; if the burst times of two processes are equal, FCFS is used.
Advantages: average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either preemptive or non-preemptive:
1. Non-preemptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Preemptive SJF (or Shortest-Remaining-Time-First, SRTF): if a newly arrived process is shorter than the remaining time of the currently executing process, preempt the currently executing process.
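The FCFS waiting times can be computed mechanically (a Python sketch reproducing the example above):

```python
def fcfs_waiting(bursts):
    waits, t = [], 0
    for b in bursts:          # served strictly in arrival order
        waits.append(t)       # each process waits for everything before it
        t += b
    return waits

w = fcfs_waiting([24, 3, 3])
print(w, sum(w) / len(w))     # [0, 24, 27] 17.0
```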

Example:
Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

SJF (non-preemptive) Gantt chart: P1 (0-7) | P3 (7-8) | P2 (8-12) | P4 (12-16)

Average waiting time = [0 + (8-2) + (7-4) + (12-5)] / 4 = 4
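Non-preemptive SJF can be simulated as follows (a Python sketch reproducing the example above; ties are broken FCFS by list order):

```python
def sjf_waiting(procs):
    """procs: list of (name, arrival, burst). Returns waiting time per process."""
    pending, t, waits = sorted(procs, key=lambda p: p[1]), 0, {}
    while pending:
        # Among arrived processes (or the next arrival if the CPU is idle),
        # pick the one with the shortest next CPU burst.
        ready = [p for p in pending if p[1] <= t] or [pending[0]]
        job = min(ready, key=lambda p: p[2])
        name, arrival, burst = job
        t = max(t, arrival)
        waits[name] = t - arrival
        t += burst
        pending.remove(job)
    return waits

w = sjf_waiting([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(w, sum(w.values()) / len(w))   # average waiting time 4.0
```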

Example of Preemptive SJF

Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

SJF (preemptive) Gantt chart: P1 (0-2) | P2 (2-4) | P3 (4-5) | P2 (5-7) | P4 (7-11) | P1 (11-16)

Average waiting time = (9 + 1 + 0 + 2)/4 = 3
3) Priority Scheduling: a priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. preemptive 2. non-preemptive.

Problem: starvation; low-priority processes may never execute. Solution: aging; as time progresses, increase the priority of the process.

4) Round Robin (RR): each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance:
1. q large: RR behaves like FCFS.
2. q small: q must still be large with respect to context-switch time, otherwise the overhead is too high.
Example of RR with time quantum = 4:
Process  Burst Time
P1       24
P2       3

P3       3
The Gantt chart is: P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30)

Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 = 5.66
5) Multilevel Queue Scheduling: the ready queue is partitioned into separate queues. For example, the ready queue can be divided into 2 queues:

1 foreground (interactive) queue2 background (batch) queue

Each queue has its own scheduling algorithm. For example, the Round Robin scheduling algorithm can be used in the foreground queue, and

the FCFS scheduling algorithm can be used in the background queue. Scheduling must also be done between the queues; this can be done in 2 ways:
1. Fixed priority scheduling

The foreground queue has the highest priority: all processes in the foreground queue must be completed (the queue must be empty) before processes in the background queue are given the CPU, as shown in the figure below. Disadvantage: starvation.

2. Time slice: each queue gets a certain amount of CPU time, which it can schedule among its processes, e.g. 80% to the foreground queue (RR) and 20% to the background queue (FCFS).

6) Multilevel Feedback Queue Scheduling: a process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes stay in the highest-priority queue. If a process takes too much CPU time, it is moved to a lower-priority queue. If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues Q0, Q1, Q2:

1. Q0: time quantum 8 milliseconds
2. Q1: time quantum 16 milliseconds
3. Q2: FCFS

Scheduling

1. A process entering the ready queue is put in Q0. When it gains the CPU, it receives 8 ms; if it does not finish in 8 ms, it is moved to Q1.
2. At Q1 the process receives an additional 16 ms; if it still does not complete, it is moved to Q2.

Q) Synchronization
Ans: Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of a process-synchronization problem is the Producer-Consumer (or Bounded-Buffer) problem: the producer process must not add data when the buffer is full, and the consumer process must not take data when the buffer is empty.

The code for the producer process can be written as follows:
while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ;   /* do nothing: buffer is full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed. in is an index that always points to the next empty slot after the last filled slot; out is an index that points to the first filled slot, at the head of the circular queue.

The code for the consumer process is shown below:
while (true) {
    while (counter == 0)
        ;   /* do nothing: buffer is empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The above producer and consumer code produces a RACE CONDITION: the outcome of the execution of several co-operating processes depends on the order in which their accesses to shared data take place.
For example, let counter = 5. If the producer produces one item and the consumer consumes one item, the correct value of counter is still 5. But if the machine instructions behind counter++ and counter-- interleave, the result can be counter = 4 or counter = 6, i.e. an inconsistent result, depending on which store executes last.

Solution to avoid the race condition: make sure that only one process at a time updates the shared variable (here, counter).
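The lost update can be demonstrated by spelling out one bad interleaving of the load/modify/store instructions behind counter++ and counter-- (a Python sketch; the register names r1, r2 are illustrative):

```python
def bad_interleaving(counter):
    r1 = counter        # producer: load counter (5)
    r2 = counter        # consumer: load counter (5), before the producer stores!
    r1 = r1 + 1         # producer: increment -> 6
    r2 = r2 - 1         # consumer: decrement -> 4
    counter = r1        # producer: store -> counter = 6
    counter = r2        # consumer: store -> counter = 4 (producer's update lost)
    return counter

print(bad_interleaving(5))  # 4, though the correct final value is 5
```

Swapping the last two stores yields 6 instead; either way the result is inconsistent, which is why only one process at a time may update counter.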

Q) Critical section problem
Ans: Each process has a segment of code called its critical section. Critical-section code is the code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical sections at the same time. For this to happen:
1. Each process must request permission to enter its critical section: the entry section.
2. The code at the end of the critical section is the exit section.
3. The rest of the code after the critical section is the remainder section.

A solution to the critical-section problem must satisfy three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes that are not in their remainder sections take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: after a process requests entry, there is a bound on how long it can be made to wait; no process waits indefinitely.

Q) Peterson's solution to the critical-section problem (software-based solution)
Ans: The two processes share int turn and boolean flag[2]; flag[i] = TRUE means process i is ready to enter its critical section.

Time  Process 1                              Process 2
T0    do { flag[1] = TRUE; turn = 2;         do { flag[2] = TRUE;
T1                                           turn = 1;
T2    while (flag[2] && turn == 2);          while (flag[1] && turn == 1);
      (turn is now 1, so the loop            (loops continuously until flag[1]
      condition is false)                    becomes false)
T3    enters critical section

T4    flag[1] = FALSE;                       enters critical section

T5    enters remainder section;              flag[2] = FALSE;
      } while (TRUE);                        enters remainder section;
                                             } while (TRUE);

10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure, which depends on its type.

File Attributes

1. Name: name of the file, for user understanding.
2. Identifier: used by the OS to identify a file (e.g. inode number).
3. Type: text file, executable file, binary file, etc.
4. Location: location of the file on the hard drive.
5. Size: current size of the file.
6. Protection: controls who can read, write, or execute the file.
7. Time, date, and user identification: when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. The directory structure is kept on disk.

File Operations

The file abstract data type supports many common operations:
1. Creating a file: find free space on disk and add an entry to the directory.
2. Writing a file: write data at the current file-position pointer and update the pointer.
3. Reading a file: read data at the current file-position pointer and update the pointer.
4. Repositioning within a file (file seek): change the pointer to a given value.
5. Deleting a file: free the space allocated to the file and remove its directory entry.
6. Truncating a file: delete the data and update the file size.

Open File Table

Most operating systems require that files be opened before access and closed after access. Information about all files currently open in the system is stored in a system-wide open file table maintained by the OS; information about the files opened by a particular process is stored in that process's own open file table. The open file table holds the following information:

1. File pointer - records the current position in the file for the next read or write.
2. File-open count - how many times the file has been opened (possibly simultaneously by different processes) and not yet closed. When this counter reaches zero, the entry can be removed from the table.
3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With an advisory lock, software developers must make sure locks are acquired and released appropriately.
4. With a mandatory lock, the OS ensures that locks are honoured.
5. UNIX uses advisory locks and Windows uses mandatory locks.

File Types

Implementing file types: a common technique is to include the type as part of the file name. The name is split into 2 parts:

1. File name 2. Extension

The file name and extension are separated by a period. The system can use the extension to indicate the type of the file and the type of operations that can be done on it. For example, when a user opens Microsoft Word and clicks Open in the File menu, the user may specify only the file name without an extension; the application looks for a file with the given name and the extension it expects. Extensions of this kind are hints to applications and are not enforced by the operating system.

UNIX systems use a crude magic number stored at the beginning of some files to indicate the type of file. In the Mac OS X operating system, a file produced by a word processor records that word processor's name as its creator; when the user double-clicks the file, that word processor opens it. Some common file types are shown below.
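Both techniques, extension hints and magic numbers, can be sketched together. The signatures below are real, well-known file signatures; the detection function and its fallback are illustrative, not any OS's actual implementation:

```python
import os
import tempfile

# A few well-known magic numbers (file signatures)
MAGIC = {
    b"\x7fELF": "ELF executable",
    b"%PDF": "PDF document",
    b"PK\x03\x04": "ZIP archive",
}

def detect_type(path):
    """Check the leading bytes first; fall back to the extension hint."""
    with open(path, "rb") as f:
        head = f.read(4)
    for sig, kind in MAGIC.items():
        if head.startswith(sig):
            return kind
    return "unknown (extension hint: %s)" % os.path.splitext(path)[1]

fd, path = tempfile.mkstemp(suffix=".pdf")
os.write(fd, b"%PDF-1.4 ...")
os.close(fd)
print(detect_type(path))  # PDF document
os.remove(path)
```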

File Structure: Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and the location of the first instruction.

If the OS supports many file formats, its size and complexity increase. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, including where to find the first executable statement).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes.

Internally, files are organized in logical units such as a single byte, a data record, or a structure.

The number of logical units which fit into one physical block determines the packing. For example, if a student file is organized internally as records of 64 bytes each, and the disk uses blocks of 512 bytes, then packing = 512/64 = 8, so 8 records can be stored per block.

Internal fragmentation may occur due to this
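The packing arithmetic above can be sketched directly; the 64-byte record and 512-byte block sizes are the example's assumptions, and the 100-byte case is added to show where internal fragmentation appears:

```python
BLOCK_SIZE = 512      # physical block size in bytes
RECORD_SIZE = 64      # one student record (example figure)

packing = BLOCK_SIZE // RECORD_SIZE            # records per block
wasted = BLOCK_SIZE - packing * RECORD_SIZE    # internal fragmentation per block
print(packing, wasted)                         # 8 records per block, 0 bytes wasted

# With 100-byte records the per-block waste becomes visible:
packing100 = BLOCK_SIZE // 100
print(packing100, BLOCK_SIZE - packing100 * 100)  # 5 records, 12 bytes wasted
```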

10.2 Access Methods

The information in a file can be accessed in the following ways: 1. Sequential access 2. Direct access (or relative access)

1. Sequential access: records are read and written in order, one record after another; we cannot randomly access an arbitrary record. A sequential access file emulates magnetic tape operation and generally supports the following operations:
read next - read a record and advance to the next position
write next - write a record and advance to the next position
rewind - return to the beginning of the file
skip n records - may or may not be supported; n may be limited to positive numbers or to ±1

2. Direct access (or relative access): the file is made up of fixed-length records, so we can randomly access any record.

There is no restriction on the order of reading or writing.

Direct access files are useful when we need to access large amounts of data randomly, e.g., a database.

Operations supported (n is the relative block number, i.e., relative to the beginning of the file):
1. read n - read record number n
2. write n - write record number n
3. jump to record n - n could be 0 or the end of the file
4. query current record - used to return to this record later
Sequential access can be easily emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct access method. We construct an index for the file, which contains pointers to the various blocks. To find a record, we first search the index and then use the pointer to access the file directly.

For example, if a student file has 120,000 records sorted by student id, we can create an index holding the first student id of each block. If 64 records can be stored in each block, the number of blocks needed is 120000/64 = 1875 (roughly 2000), so the index has about 2000 entries. The index is kept in main memory so that searching is fast.

If the index itself becomes too large, we create an index for the index file (e.g., a B-tree).
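The index lookup described above can be sketched as follows; the student ids and block contents are made-up illustrative data, scaled down from the 120,000-record example:

```python
import bisect

# index[k] = first student id stored in block k (ids are sorted)
index = [100, 180, 260, 340]
blocks = [                      # illustrative "on-disk" block contents
    [100, 120, 150],
    [180, 200, 230],
    [260, 300, 310],
    [340, 400, 410],
]

def find_record(student_id):
    # Binary-search the in-memory index for the only block that could
    # hold the id, then scan just that one block on "disk".
    k = bisect.bisect_right(index, student_id) - 1
    if k < 0:
        return None
    return student_id if student_id in blocks[k] else None

print(find_record(300))  # 300 (found in block 2)
print(find_record(105))  # None (falls in block 0 but not present)
```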

Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

Three major methods of allocating disk space are

1. Contiguous 2. Linked 3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file records the address of the starting block (i.e., b) and the length (n).

Advantages

1. Access is easy: for direct access to block i of a file starting at block b, we read disk block b+i directly. Since the file is stored sequentially, sequential access is also easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

Disadvantages

1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a run of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to it. Solution: copy all the files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire space may not be used by the file at the beginning, which leads to wasted space.

If the size of the file is not known in advance, then: (i) if we allocate too little space, the file cannot be extended; a solution is to find a larger hole, copy the contents of the file to the new space, and release the old space, which works only as long as such a hole exists and is itself time consuming; (ii) if we overestimate the amount of space needed, much of the space is unused, which leads to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme. A run of contiguous blocks is allocated at first; if the file needs more blocks, another run of contiguous space, known as an extent, is added. The directory entry then records the location of the first extent, a block count, and a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2. Linked allocation: each file is a linked list of disk blocks, and the blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9, continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0.

To add more data to an existing file, a free block is found, the data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. It is easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is more.
2. The number of disk seeks required is more.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct access: to access the ith block, i disk reads are required.
5. Not reliable: a lost or damaged pointer breaks the chain.

The solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk accesses, reduces the space wasted on pointers (fewer pointers are needed), and simplifies free-list management. Disadvantage: internal fragmentation; more space is wasted when a cluster is only partially full.

The solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the file allocation table (FAT). A section of disk at the beginning of each volume contains the FAT, which has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by the table value 0.

Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Random access time is improved.

Disadvantage: the number of disk seeks is larger. The disk head must move to read the FAT and find the location of a block, then move to the location of the block itself.
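The FAT chain walk can be sketched as follows; the table contents mirror the 5-block example file above (9 → 16 → 1 → 10 → 25), with -1 standing in for the special end-of-file value:

```python
EOF = -1                     # stand-in for the end-of-file table value
fat = [0] * 32               # one entry per disk block; 0 = unused block

# The example file occupies blocks 9 -> 16 -> 1 -> 10 -> 25
fat[9], fat[16], fat[1], fat[10], fat[25] = 16, 1, 10, 25, EOF

def file_blocks(start):
    """Follow the FAT from the file's first block until the EOF marker."""
    chain, b = [], start
    while b != EOF:
        chain.append(b)
        b = fat[b]           # each entry names the next block of the file
    return chain

print(file_blocks(9))  # [9, 16, 1, 10, 25]
```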

3 Indexed allocation

Each file has its own index block, which is an array of disk-block addresses. The ith entry in the index block points to the ith block of the file.

The directory contains the address of the index block

When the file is created all pointers in the index block are set to nil

To add more data to existing file free block is found and data is written to it The address of block is put in the index block

Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

Disadvantage

1. Seek time is more.
2. The number of disk seeks required is more.
3. Not reliable: losing the index block loses the file.
4. The index block should be kept in memory. If this memory is not available, then we have to read the index block and then the desired data block, i.e., 2 disk accesses are required. With multilevel indexes, accessing a block near the end of a large file may require reading several index blocks before the data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store these one or two pointers.

Various mechanisms followed for size of index block

1. Linked scheme: the index block is one disk block. It contains a small header with the name of the file and a set of the first 100 disk-block addresses; for large files, the last address points to another index block.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

3. Combined scheme: for example, say there are 15 pointers in the file's inode. The first 12 point to blocks that contain file data (direct blocks). The next 3 point to indirect blocks: the first points to a single indirect block, an index block containing addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block, as shown below.
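The capacity of the combined scheme can be worked out from the pointer counts. A sketch, assuming 4 KB blocks and 4-byte block addresses (so one index block holds 1024 pointers); these sizes are illustrative assumptions, not fixed by the scheme:

```python
BLOCK = 4096                 # block size in bytes (assumed)
PTR = 4                      # size of one disk address in bytes (assumed)
ppb = BLOCK // PTR           # pointers per index block = 1024

direct = 12                  # data blocks reachable directly from the inode
single = ppb                 # via the single indirect block
double = ppb ** 2            # via the double indirect block
triple = ppb ** 3            # via the triple indirect block

max_blocks = direct + single + double + triple
print(max_blocks)                    # 1074791436 addressable blocks
print(max_blocks * BLOCK // 2**30)   # 4100 GiB, i.e. about 4 TiB maximum file size
```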

Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

The free space list can be implemented as

1. Bit vector (or bit map) 2. Linked list 3. Grouping 4. Counting 5. Space maps

1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, on a disk where blocks 2, 3, 4, 5, and 8 are free, the free-space bit map would be 001111001000...

Advantages: simple, and it is easy to find contiguous runs of free blocks.

Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient unless kept in main memory. The bit map of a small disk can be kept in main memory, but this may not be possible for a large disk.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1 TB disk could cause thousands of blocks of bit map to be updated, because the freed data blocks may be scattered all over the disk.
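A first-fit scan over such a bit map can be sketched as follows (1 = free, 0 = allocated, matching the example above):

```python
# Bit map for 12 blocks; blocks 2, 3, 4, 5 and 8 are free (bit = 1)
bitmap = [0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0]

def allocate(n):
    """First-fit: find n contiguous free blocks and mark them allocated."""
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit else 0      # length of the current free run
        if run == n:
            start = i - n + 1
            for b in range(start, start + n):
                bitmap[b] = 0            # mark the blocks allocated
            return start
    return None                          # no hole large enough

print(allocate(3))   # 2: blocks 2, 3, 4 are taken
print(allocate(2))   # None: only single free blocks (5 and 8) remain
```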

2. Linked list: the free-space list is implemented by linking all the free disk blocks together, with a pointer to the first free block stored in a special location on the disk. For example, on a disk where blocks 2, 3, 4, 5, and 8 are free, we keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, block 5 points to block 8, and so on. Advantage: no wasted space. Disadvantage: contiguous space cannot be found easily; to traverse the free-space list we must read each block, which requires substantial I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last one contains the addresses of another n free blocks, and so on. Advantage: a large number of free blocks can be found quickly.

4. Counting: the free-space list keeps, for each run of free blocks, the address of the first free block and the number n of free contiguous blocks that follow it. Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory, updates it, and finally updates the free-space list on disk.

Synchronization Hardware
The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. The TestAndSet() hardware instruction
2. The Swap() instruction

1. The TestAndSet() instruction atomically returns the old value of a Boolean variable and sets it to TRUE.

Solution using TestAndSet: the shared variable lock is initialized to FALSE. A process spins on TestAndSet(lock) until it returns FALSE, enters its critical section, and sets lock = FALSE on exit.

2. The Swap() instruction atomically exchanges the contents of two Boolean variables.

Solution using Swap: a shared Boolean variable lock is initialized to FALSE, and each process has a local Boolean variable key.

Both the TestAndSet and Swap instructions satisfy the mutual exclusion requirement but unfortunately do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck of waiting forever for its turn. Since there is no guarantee about the relative speeds of the processes, a very fast process could release the lock, run through its remainder section, and re-acquire the lock before a slower process gets a chance; as more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.

A solution using TestAndSet that satisfies bounded waiting uses two shared data structures: boolean lock and boolean waiting[n], where n is the number of processes in contention for critical sections.

Bounded-waiting mutual exclusion with TestAndSet( )

The exit section scans the waiting array in order (starting with the process after the one leaving) for a process that is waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process while continuing to block all the others; if no process is waiting, it releases the lock.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through two standard atomic operations: wait() and signal().

Wait primitive:

    wait(S) {
        while (S <= 0)
            ; // do nothing
        S--;
    }

Signal primitive:

    signal(S) {
        S++;
    }

Each process that wishes to use a resource performs a wait() operation; when a process releases a resource, it performs a signal() operation.

Semaphore usage:
1. A binary semaphore (mutex lock) can take the values 0 or 1.

2. A counting semaphore can take any integer value and is used to count the remaining instances of a resource. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources; when the counter is 0 (or negative, in some implementations), a process blocks until another process frees a resource and increments the semaphore with a signal() call. For example, with 3 resources and 4 processes, the counting semaphore starts at 3. If process P1 requires a resource, it performs wait(); the semaphore becomes 2. If P2 requires a resource, it performs wait(); the semaphore becomes 1. If P3 requires a resource, it performs wait(); the semaphore becomes 0. If P4 now performs wait(), it waits until a resource is available.
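The 3-resources/4-processes scenario can be tried directly with Python's threading.Semaphore, which implements the counting behaviour described above:

```python
import threading

resources = threading.Semaphore(3)   # 3 identical resources

# P1, P2, P3 each acquire one resource
for p in ("P1", "P2", "P3"):
    ok = resources.acquire(blocking=False)
    print(p, "got a resource:", ok)          # True each time

# P4 finds the counter at 0; a non-blocking acquire fails (a blocking
# acquire would wait here instead)
print("P4 got a resource:", resources.acquire(blocking=False))  # False

resources.release()                           # some process signals
print("P4 retry:", resources.acquire(blocking=False))           # True
```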

3. Semaphores can also be used to synchronize the order of certain operations. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2. We create a semaphore named synch, shared by the two processes and initialized to zero. In process P1 we insert the code:

    S1;
    signal(synch);

and in process P2 we insert the code:

    wait(synch);
    S2;

Because synch was initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal().

Semaphore implementation: while one process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). A semaphore of this type is called a spinlock, because the process spins while waiting. Each semaphore has an integer value and maintains a queue of waiting processes.

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process (placing it in the semaphore's waiting queue); it is later restarted by a wakeup() operation, which moves it from the waiting queue to the ready state.

Problems due to semaphores:
1. Deadlock: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes.

2. Starvation: one or more processes remain blocked forever and never get a chance to enter the critical section. For example, if the algorithm for selecting which process to remove from the waiting queue in the signal() call is left unspecified and a LIFO queue is used, the first process that started waiting may never get a chance.

3. Priority inversion: let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource held by process L, so L is running. When M arrives, it pre-empts L, and M runs; thus M runs before H even though M has lower priority than H. To avoid this, a priority-inheritance protocol is used: L temporarily inherits H's priority, so M cannot pre-empt it.

Q) Classical problems of synchronization

1. The bounded buffer problem (also called the producer-consumer problem)
2. The readers-writers problem
3. The dining philosophers problem

These problems are used to test newly proposed synchronization schemes.

1. The bounded buffer problem. Consider:
n buffers, each of which holds one item;
a producer process which creates the items (one at a time);
a consumer process which consumes them (one at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is needed. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:

    do {
        // produce an item
        wait(empty);
        wait(mutex);
        // add the item to the buffer
        signal(mutex);
        signal(full);
    } while (1);

Consumer process:

    do {
        wait(full);
        wait(mutex);
        // remove an item from the buffer
        signal(mutex);
        signal(empty);
        // consume the item
    } while (1);
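The scheme above maps directly onto Python's threading primitives; a sketch with n = 3 buffers and ten items:

```python
import threading
from collections import deque

N = 3
buffer = deque()
empty = threading.Semaphore(N)   # counts empty slots, initialized to n
full = threading.Semaphore(0)    # counts filled slots, initialized to 0
mutex = threading.Lock()         # mutual exclusion on the buffer itself

consumed = []

def producer(items):
    for item in items:
        empty.acquire()          # wait(empty)
        with mutex:              # wait(mutex) ... signal(mutex)
            buffer.append(item)
        full.release()           # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()           # wait(full)
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()          # signal(empty)

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)  # [0, 1, ..., 9]; at most 3 items are ever buffered
```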

2. The readers-writers problem: a data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: no new reader need wait merely because a writer is waiting;
writers priority: if a writer is waiting to access the data, no new readers may start reading.

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses an integer and two semaphores:
1. readcount = 0 (an integer counting how many processes are currently reading the shared data)
2. mutex = 1 (a semaphore ensuring mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:

    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);      // the first reader locks out writers
    signal(mutex);
    // reading is performed
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);    // the last reader lets writers in
    signal(mutex);

Writer process:

    do {
        wait(wrt);
        // writing is performed
        signal(wrt);
    } while (TRUE);
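The reader entry/exit protocol above can be sketched as a small class around Python semaphores; the sequence at the bottom walks through it single-threaded to show the counting:

```python
import threading

class ReadersPriorityLock:
    def __init__(self):
        self.readcount = 0
        self.mutex = threading.Semaphore(1)  # guards readcount
        self.wrt = threading.Semaphore(1)    # exclusive access for writers

    def reader_enter(self):
        with self.mutex:
            self.readcount += 1
            if self.readcount == 1:
                self.wrt.acquire()           # first reader locks out writers

    def reader_exit(self):
        with self.mutex:
            self.readcount -= 1
            if self.readcount == 0:
                self.wrt.release()           # last reader lets writers in

rw = ReadersPriorityLock()
rw.reader_enter(); rw.reader_enter()          # two concurrent readers
print(rw.wrt.acquire(blocking=False))         # False: a writer would block
rw.reader_exit(); rw.reader_exit()
print(rw.wrt.acquire(blocking=False))         # True: a writer may now proceed
rw.wrt.release()
```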

3. The dining philosophers problem: n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5]; // initially all values are 1

Philosopher i:

    do {
        wait(chopstick[i]);
        wait(chopstick[(i + 1) % 5]);
        // eat
        signal(chopstick[i]);
        signal(chopstick[(i + 1) % 5]);
        // think
    } while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it can create a deadlock: if all five philosophers pick up their left chopsticks at the same time, each waits forever for the right one.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: in asymmetric multiprocessing (ASMP), the CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, and so on. These systems lack the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within a time constraint.

Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that response requirements will be met under all conditions.
2. A soft real-time system cannot guarantee that response requirements are met under all conditions; examples are digital audio and multimedia systems and digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control, i.e., it spreads its decision-making across the computers in the system, so that failures of individual computers or of the network do not cripple its operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location Modern desktop systems support multiprogramming Common examples are Linux FreeBSD Windows 8 and the Macintosh operating system

HANDHELD SYSTEMS (palmtop computers): two of the most popular operating systems for handhelds are Symbian OS and Android. Such devices have very limited memory.


CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system The components of a cluster are usually connected to each other through fast local area networks each node (computer used as a server) running its own instance of an operating system

Q) Modern operating system

New developments in hardware applications and security threats lead to the development of modern operating system

New Hardware developments like increased machine speed high speed network increased size and variety of memory devices

New Applications developments like multimedia applications internet and web access and client server computing

Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, basic CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and it is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as systems nowadays commonly contain multiple processors, an SMP operating system provides greater efficiency when multiple processors are present. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

SMP schedules processes or threads across all of the processors

Advantages

1. Increased performance: if some portion of a program can run in parallel, the performance of an SMP OS increases.
2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can scale with the number of processors used.

4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux operating system
Ans: Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization. Components of a Linux system:


The Linux operating system has primarily three components, as shown in the diagram below:

Kernel - the core part of Linux, responsible for all major activities of the operating system. It interacts directly with the hardware and hides low-level hardware details from system and application programs.

System Library - system libraries are special functions or programs through which application programs use the kernel's features.

System Utility - system utility programs are responsible for specialized, individual-level tasks such as updating log files and accepting login requests from terminals.

Kernel Modules: sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support: 1. Module management: supports loading modules into memory and letting them talk to the rest of the kernel. 2. Driver registration: allows modules to inform the rest of the kernel that a new driver is available. 3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel mode vs user mode: kernel code executes in a special privileged mode, called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require context switches, so it is very efficient and fast. The kernel runs processes and provides system services to them, including protected access to hardware. Code which is not required to run in kernel mode lives in the system libraries. User programs and other system programs work in user mode, with no direct access to system hardware or kernel code; they use the system libraries to reach kernel functions for low-level tasks.

Basic features: the following are some of the important features of the Linux operating system.

Portable - portability means the software works on different types of hardware in the same way. The Linux kernel and application programs support installation on many kinds of hardware platforms.

Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multiuser system, meaning multiple users can access system resources such as memory, application programs, and devices at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system It can be used to do various types of operations call application programs etc


Security - Linux provides user security through authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture
Linux system architecture consists of the following layers:

Hardware layer - Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - Core component of the operating system; interacts directly with hardware and provides low-level services to upper-layer components.

Shell - An interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP operating system
1. Extensibility - layered architecture

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.

2. Portability - XP can be moved from one hardware architecture to another with relatively few changes.

Written in C and C++. Processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.
4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
5. Performance - XP subsystems can communicate with one another via high-performance message passing.

Preemption of low-priority threads enables the system to respond quickly to external events. Designed for symmetric multiprocessing.

6. International support - supports different locales via the national language support (NLS) API.

XP Architecture
1. Layered system of modules.
2. Protected mode - hardware abstraction layer (HAL), kernel, executive.
3. User mode - collection of subsystems:

a. Environmental subsystems emulate different operating systems. b. Protection subsystems provide security functions.

XP architecture


Q) Windows network operating system
Example: Windows 2000 operating system

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne, Operating System Concepts, Addison-Wesley.



10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

e) System access: protects data and resources from unauthorized users and resolves conflicts in case of contention for resources.

f) Error detection and response: Many internal and external hardware errors and software errors occur while a computer is running. The OS handles these errors by ending the program, retrying the operation that caused the error, or printing an error message to the user.

g) Accounting: The OS collects usage statistics of various resources for improving performance and for future enhancements.

2) Efficiency: The OS decides how much processor time is to be given to a process, which processes must be in main memory, and when an I/O device can be used by a process, and it controls access to files. In this way the OS manages resources like the CPU, memory, I/O devices, and secondary memory efficiently.

Only part of the OS (called the kernel) is loaded into main memory; the rest of the OS resides on the hard disk.

The CPU executes OS code, which directs the CPU which process to execute. Once the process has executed, the CPU again runs OS code to know what to do next.

3) Ability to evolve: The OS will evolve due to the following reasons:

    i) Hardware upgrades and new types of hardware

    ii) New Services

    iii) Bug Fixes

    Q) Evolution of OS

Ans: Serial Processing (1940 - mid 1950)

There was no OS. The user directly interacted with the computer hardware. Single-user system.

Input and output: paper tape or punched cards.

Software used: assemblers, compilers, linkers, loaders, device drivers, libraries of common subroutines.

Main disadvantages: low CPU utilization, high setup time.

    1) Simple batch system

1. Users do not have direct access to the machine.

2. The monitor controls job processing. Special cards (those that start with a $ sign) indicate what to do. The user program is prevented from performing I/O.

3. The resident monitor (the part of the monitor that always resides in memory) holds initial control; control transfers to the job and then back to the monitor.

4. Automatic job sequencing is done as follows: The user submits the job to the computer operator. The computer operator batches the jobs sequentially and places the entire batch on an input device (e.g., a card reader). The monitor reads one job at a time from the input device and places the current job in the user program area. Control is then passed to the current job. When the job is completed, control returns back to the monitor.

5. The monitor handles job setup and reduces setup time by batching jobs with similar requirements. Job Control Language (JCL) instructions are given to the monitor to execute the job. The monitor reads the $FTN card and loads the appropriate compiler from tape. During execution of the user program, any input instruction causes one data card to be read. After successful or unsuccessful completion of the user job, the monitor will scan all input cards until it encounters the next JCL card.

Hardware features desirable in a batch system:

1. Memory protection
2. Timer
3. Privileged instructions
4. Interrupts

    2) Multi-programmed Batch System memory layout is as shown below

If there is enough main memory to hold the OS and two programs, then when one job needs to wait for I/O, the CPU can switch to the other job. This is known as multiprogramming or multitasking.

Additional hardware desirable:

1. Interrupt-driven I/O and Direct Memory Access (DMA).
2. Memory management: since several jobs must be in main memory at the same time, some form of memory management is required.

3) Time-sharing systems are multiprogramming systems that can handle multiple interactive users. The OS gives a quantum of CPU time to each user program.

If 'n' users are present, each user is given 1/n of the CPU time.

Problems to be handled in a time-sharing system:

1. Since multiple jobs are in memory, they must be protected from modifying each other's data.

2. The file system must be protected by giving access only to authorized users.

3. Contention for resources must be handled.

    Q) Difference between Batch Multi-programming and Time sharing

Batch multi-programming vs time sharing:
- Principal objective: maximize processor use (batch) vs minimize response time (time sharing).
- Source of instructions to the OS: JCL instructions provided with the job (batch) vs commands entered at the terminal (time sharing).

    ===================================================================

    Unit-II

Q) What is a process?
Ans: A process is a program in execution. A process is an active entity and resides in main memory.

Q) Explain the structure of a process in memory.
A process contains:

1. Program code, which is sometimes known as the text section.
2. Process stack, which contains temporary data (such as function parameters, return addresses, and local variables).
3. Heap, which is memory allocated dynamically during process run time.
4. Data section, which contains global variables.

Although two processes may be associated with the same program, they are considered two separate execution sequences, as the data, heap, and stack sections differ even though the text sections are equivalent.

Q) Explain process states.
As a process executes, it changes state. Each process can be in one of the following states:

1. New: the process is being created.
2. Running: instructions are being executed.
3. Waiting: the process is waiting for some event to occur (such as I/O completion).
4. Ready: the process is waiting to be assigned to a processor.
5. Terminated: the process has finished execution.
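The legal moves among these five states can be written down as a small transition table; a minimal Python sketch of the standard five-state model described above (the table itself is an illustration, not code from any particular OS):

```python
# Allowed state transitions in the five-state process model.
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running"},                            # dispatched by scheduler
    "running": {"ready", "waiting", "terminated"},   # preempted, I/O wait, exit
    "waiting": {"ready"},                            # I/O or event completes
    "terminated": set(),
}

def can_move(src, dst):
    """Return True if the model allows a process to go from src to dst."""
    return dst in TRANSITIONS[src]
```

Note that a waiting process cannot go directly back to running; it must first become ready and be dispatched again.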

Q) What is a PCB?
Each process is represented in the operating system by a process control block (PCB), also called a task control block. It contains information about:

1. Process state: the state may be new, ready, running, waiting (blocked), or terminated.
2. Program counter: this register stores the address of the next instruction to be executed.
3. CPU registers: accumulators, stack pointers, index registers, and general-purpose registers. The values of these registers are saved if the process is interrupted.
4. CPU-scheduling information: process priority, scheduling parameters, etc.
5. Memory-management information: the values of the base and limit registers, segment tables, page tables, etc.
6. Accounting information: amount of CPU time used, time limits, job or process number, etc.
7. I/O status information: list of I/O devices allocated to the process, list of open files, and so on.

    Q)Operations on Processes

    1 Process Creation

The operating system is responsible for the creation of new processes.

Reasons for new process creation:

1. When a batch job is submitted by a user.
2. In an interactive environment or a time-sharing system, a process is created when a user logs on.
3. The operating system creates a process to manage printing, so that the user need not wait till printing completes. Here the OS creates a process on behalf of the user.
4. When a process creates another process. The creating process is called the parent process and the new process is called the child process (or subprocess).

A new process can in turn create other processes, forming a tree of processes. The OS identifies a process by a unique process identifier (pid), which is a unique integer. In the Solaris operating system, at the top of the tree is the Sched process with pid = 0. This process can create several child processes; in the figure below it creates three: 1. the Init process, which is the parent process for all user processes; 2. the Pageout process; 3. the Fsflush process.

A subprocess may obtain resources:

1. Directly from the OS.
2. The parent may share some of its resources among several of its children.
3. The parent process may partition its resources among its children.

The parent process may pass initialization data to the child. For example, the name of an image file and the name of the output device may be passed to a display process (the child process).

The parent process may execute:

1. Concurrently with the child, or
2. Waiting till some or all of its children have terminated.

The address space of the child process may be:

1. A duplicate copy of the parent (same program and data), or
2. A new program loaded into it.
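On a POSIX system, fork() creates exactly this duplicate-address-space child. A minimal sketch (the `filename` variable and the pipe message are made-up illustrations of "initialization data" the child inherits):

```python
import os

filename = "image.png"        # data the child will inherit from the parent
r, w = os.pipe()

pid = os.fork()
if pid == 0:
    # Child: a duplicate copy of the parent's address space, so it sees
    # the same `filename` variable without any explicit message passing.
    os.close(r)
    os.write(w, filename.encode())
    os._exit(0)               # child terminates
else:
    # Parent: executes concurrently, then waits for the child to terminate.
    os.close(w)
    data = os.read(r, 100).decode()
    _, status = os.waitpid(pid, 0)
```

Here the parent chose to wait (option 2 above); dropping the `waitpid` call would let it run fully concurrently with the child.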

    2 Process Termination

A process terminates when it finishes executing its final statement and requests the operating system to delete it (using the exit() system call in Unix and TerminateProcess() in the Win32 API).

All the resources of the process (open files, I/O buffers, and physical memory) are deallocated by the OS.

A parent may terminate a child process for a variety of reasons:

1. The child has exceeded its usage of some of the resources.
2. The task assigned to the child is no longer required.
3. The parent is terminating, and the OS (e.g., VMS) does not allow a child to continue if its parent is terminating.

    Q) What is cascading termination

When a parent is terminating and the OS (e.g., VMS) does not allow a child to continue, all of its children (and their children) are terminated as well. This is called cascading termination and is initiated by the OS.

    Q) What happens to child when parent terminates in Unix

    Init process becomes the parent of all its children

Q) What is a context switch?
The PCB of the currently executing process is saved, and the operating system loads the PCB of the next process that has to run on the CPU. This is a heavy task and takes a lot of time.

    Q) Basic Concepts of threads

A thread consists of a program counter, a stack, a set of registers, and a thread ID. Threads are also called lightweight processes. A process with multiple threads makes a great server. Threads share a lot of resources with the other threads belonging to the same process, so a context switch among threads of the same process is cheap: it involves switching only the register set, the program counter, and the stack.
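Because threads of one process share its data section, they can communicate through ordinary variables, but concurrent updates must be synchronized; a minimal Python sketch:

```python
import threading

counter = 0                       # shared: lives in the process's data section
lock = threading.Lock()

def worker():
    global counter
    for _ in range(10000):
        with lock:                # serialize updates to the shared variable
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                      # wait for all threads of this process
```

Each thread has its own stack and registers, yet all four increment the same `counter`, which is exactly the sharing that makes thread switches cheaper than process switches.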

Q) Explain the two modes of CPU execution.
Protection of memory and I/O can be provided via two modes of CPU execution: user mode and kernel mode.

In kernel (privileged, supervisor) mode, the OS has access to privileged instructions. Privileged instructions can access I/O devices, control interrupts, and manipulate memory (page table, TLB, etc.).

Privileged instructions are instructions that can only be executed in kernel mode.

All user-level processes run in user mode. Some critical operations are not allowed to be performed by user processes directly; user processes must use system calls to perform them. When a system call occurs, the OS enters kernel mode and uses privileged instructions to perform the desired service for the user-level process.

For example, for input or output, a process makes a system call telling the operating system to read or write a particular area, and this request is satisfied by the operating system.
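Python's os module exposes such thin wrappers over the underlying system calls; a small sketch (the file contents are arbitrary):

```python
import os
import tempfile

# os.write / os.lseek / os.read wrap the write(2) / lseek(2) / read(2)
# system calls: the process traps into kernel mode, the kernel performs
# the privileged I/O, and control returns to user mode with the result.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello kernel")
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 100)
os.close(fd)
os.unlink(path)
```

The user program never touches the disk controller itself; each call crosses the user/kernel boundary and comes back with the result.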

Q) Explain interprocess communication.
Ans: Cooperating processes require an interprocess communication (IPC) mechanism to exchange data and information. There are two communication models: (a) shared memory and (b) message passing, as shown below.

(a) Shared-Memory Systems

1. Communicating processes must establish a region of shared memory.
2. The shared-memory region resides in the address space of the creating process.
3. Other processes that wish to communicate using the shared-memory segment must attach it to their own address space.
4. Processes can exchange information by reading and writing data in the shared areas.
5. Shared-memory systems are convenient for communication.
6. Shared-memory systems are faster and provide maximum speed, because:

i) System calls are required only to establish the shared-memory regions.
ii) Once shared memory is established, all accesses are treated as routine memory accesses. No assistance of the kernel is required.
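These steps can be sketched with Python's multiprocessing.shared_memory module. For brevity both the creating handle and the attaching handle live in one script here, but attaching by name is exactly what a second process would do:

```python
from multiprocessing import shared_memory

# Step 1-2: one process creates the shared region (a system call).
creator = shared_memory.SharedMemory(create=True, size=16)

# Step 3: another process attaches to the same region by name.
attacher = shared_memory.SharedMemory(name=creator.name)

# Steps 4-6: after setup, exchange is ordinary memory access, no kernel call.
creator.buf[:5] = b"hello"            # writer side
received = bytes(attacher.buf[:5])    # reader side

attacher.close()
creator.close()
creator.unlink()                      # destroy the region when done
```

Only the create/attach/unlink calls involve the kernel; the reads and writes in between are plain memory operations, which is why shared memory is the fastest IPC model.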

(b) Message-Passing Systems

1. Useful for exchanging smaller amounts of data.
2. Easy to implement for inter-computer communication.
3. More time-consuming than shared memory, since message passing is implemented using system calls and needs kernel intervention.

To send messages, a communication link must exist between the processes. The communication link can be implemented physically or logically.

Different methods for logically implementing a link:

1. Direct or indirect communication
2. Synchronous or asynchronous communication
3. Automatic or explicit buffering

    1a Direct communication

    A link is established automatically between every pair of processes that want to communicate

    A link is associated with exactly two processes

    Addressing

i) Symmetry in addressing: the sender process and the receiver process must name each other to communicate. The send() and receive() primitives are:
send(P, message) - sends a message to process P
receive(Q, message) - receives a message from process Q

ii) Asymmetry in addressing: only the sender names the receiver process. The primitives are:
send(P, message) - sends a message to process P
receive(id, message) - receives a message from any process

A disadvantage of both types of addressing is limited modularity: changing the identifier of a process requires finding all references to the old identifier and modifying them.

    1b Indirect communication

1. Messages are sent to and received from mailboxes (or ports).
2. Each mailbox has a unique id (an integer value).
3. Two processes can communicate only if they have a shared mailbox.
4. A link is established between two processes if they have a shared mailbox.


5. A link may be associated with more than two processes.
6. A mailbox may be owned by a process or by the OS.

a. If the mailbox is owned by a process, we can distinguish between the owner (who can only receive messages) and the user (who can only send messages). When the process that owns a mailbox terminates, the mailbox disappears.

b. If the mailbox is owned by the OS, the OS must provide mechanisms to:
i. Create a new mailbox
ii. Send and receive messages through the mailbox
iii. Delete the mailbox
iv. Pass ownership to other processes

2. Synchronous or asynchronous communication
Message passing may be either blocking (synchronous) or non-blocking (asynchronous):

1. Blocking send: the sending process is blocked until the message is received by the receiving process or mailbox.
2. Non-blocking send: the sending process sends the message and resumes operation.
3. Blocking receive: the receiver blocks until a message is available.
4. Non-blocking receive: the receiver retrieves either a valid message or a null.
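These combinations can be illustrated with a bounded queue standing in for the mailbox; a minimal Python sketch (the messages and the capacity of 1 are arbitrary):

```python
import queue

mailbox = queue.Queue(maxsize=1)   # bounded-capacity link (n = 1)

mailbox.put("m1")                  # succeeds immediately: there is space

# Non-blocking send: on a full mailbox it fails at once instead of waiting.
try:
    mailbox.put_nowait("m2")
    overflowed = False
except queue.Full:
    overflowed = True              # a blocking put() would have waited here

# Blocking receive: returns as soon as a message is available.
got = mailbox.get()
```

A blocking `put()` in place of `put_nowait()` would suspend the sender until the receiver drained the mailbox, which is exactly the synchronous-send behaviour described above.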

3. Automatic or explicit buffering
Messages exchanged by communicating processes reside in a temporary queue. Such queues can be implemented in three ways:

i. Zero capacity: queue length = 0; the link cannot have any messages waiting in it. The sender must block until the receiver receives the message.

ii. Bounded capacity: queue length is finite (say n). When the queue is full, the sender must block until space is available in the queue.

iii. Unbounded capacity: queue length is infinite; the sender never blocks.

===============================================================================

Q) What is a deadlock?

A set of processes is deadlocked when every process in the set is waiting for a resource that is currently allocated to another process in the set. Here process P1 is allocated resource R2 and P2 is allocated R1.

P1 requires R1 and P2 requires R2.

Processes P1 and P2 will wait forever. This situation is called a deadlock.

    Q) What are the four conditions that are necessary for deadlock to occur

    1 Mutual Exclusion - At least one resource must be held in a non-sharable mode If any other process requests this resource then that process must wait for the resource to be released

    2 Hold and Wait - A process must be simultaneously holding at least one resource and waiting for at least one resource that is currently being held by some other process

    3 No preemption - Once a process is holding a resource then that resource cannot be taken away from that process until the process releases it

4. Circular Wait - A set of processes P0, P1, ..., PN must exist such that every P[i] is waiting for a resource held by P[(i + 1) % (N + 1)].

    Q)Methods for handling deadlocks

1. By using deadlock-prevention or deadlock-avoidance protocols, ensure the system never enters a deadlocked state.
2. Allow the system to enter a deadlocked state, detect it, and recover from it.
3. Ignore the problem and pretend that deadlocks never occur.

To make sure that the system never enters a deadlocked state, the system can use:

1. Deadlock prevention
2. Deadlock avoidance

    ==============================================================================

    Deadlock Prevention

1. Mutual exclusion: we cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources are inherently non-sharable (e.g., a printer).

    2 Hold and Wait

To make sure that the hold-and-wait condition never occurs in the system, two protocols can be used:

Protocol 1: all the resources a process requests must be allocated before the process begins execution.

Protocol 2: a process can request resources only when it has none. If a process requires additional resources, it must first release all the resources currently allocated to it.

Example: consider a process that copies data from a DVD drive to a file on disk, sorts the file, and then prints the results to a printer.

If protocol 1 is used, it must request the DVD drive, disk file, and printer at the beginning and must hold them till the end.

Disadvantages:

1. Starvation: a process may wait forever, because at least one resource that it needs is always allocated to some other process.

2. Resource utilization is low: the process will hold the printer from beginning to end even though it is used only at the end.

If protocol 2 is used, the process will initially request the DVD drive and disk file. It copies from the DVD drive to the disk and then releases both. It then requests the disk file and printer.

Disadvantage: there is a chance that the data may not remain on the disk file between the two steps.

    3 No pre-emption

To make sure that this condition does not hold, the following protocol is used.

Protocol: if a process (say A) requests some resources:

Case 1: if the resources are available, allocate them.

Case 2: if the resources are allocated to some other process (say B) that is itself waiting for additional resources, preempt the desired resources from the waiting process B and allocate them to the requesting process A. Process B can be restarted only when it is re-allocated both the resources that were taken away from it and the additional resources it is requesting.

Case 3: if the resources are neither available nor held by a waiting process, process A waits.

This protocol is applied to resources whose state can easily be saved and restored, such as CPU registers and memory space.

    4 Circular Wait

To make sure the circular-wait condition never occurs:

1. Each resource type is assigned a unique integer number.

2. Each process must request resources in increasing order of enumeration.

We define a one-to-one function F: R → N, where R is the set of resource types and N is the set of natural numbers.

Suppose a process has requested a resource of type Ri at the beginning.

Protocol 1: after that, the process can request a resource of type Rj if and only if F(Rj) > F(Ri).

Protocol 2: if a process requests a resource of type Rj, it must first release all resources Ri for which F(Ri) >= F(Rj).

Example: let F(tape drive) = 1, F(disk drive) = 5, and F(printer) = 12.

A process can request any number of tape drives, disk drives, and printers.

Protocol 1: if a process A has already requested a disk drive, A can now request only a printer; it cannot request a tape drive.

Protocol 2: in order to request a tape drive, process A must first release the disk drive.

    If the above two protocols are used then the circular wait condition never occurs We can prove this by contradiction

Proof: assume a circular wait exists. Let the set of processes involved in the circular wait be {P0, P1, ..., Pn}, where P0 is waiting for resource R0, which is held by P1.

P1 holds R0 and is waiting for resource R1, which is held by P2, so F(R0) < F(R1).

P2 holds R1 and is waiting for resource R2, which is held by P3, so F(R1) < F(R2).

...

Pn holds Rn-1 and is waiting for resource Rn, which is held by P0, so F(Rn-1) < F(Rn). But P0 holds Rn while waiting for R0, so F(Rn) < F(R0).

Hence by transitivity F(R0) < F(R0), a contradiction. So our assumption that a circular wait exists is FALSE.
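In practice, this resource-ordering rule becomes a lock-ordering discipline: always acquire resources in increasing F-number. A minimal sketch using the example numbering above (tape = 1, disk = 5, printer = 12; the lock objects are illustrative stand-ins for real resources):

```python
import threading

# The ordering function F from the example above.
F = {"tape": 1, "disk": 5, "printer": 12}
locks = {name: threading.Lock() for name in F}

def acquire_in_order(*names):
    """Acquire resources strictly by increasing F-number.

    Every thread that follows this discipline requests in the same global
    order, so no cycle (and hence no circular wait) can ever form.
    """
    ordered = sorted(names, key=F.get)
    for n in ordered:
        locks[n].acquire()
    return ordered

order = acquire_in_order("printer", "tape", "disk")
for n in order:
    locks[n].release()
```

Even though the caller named the printer first, the acquisition happens tape → disk → printer, satisfying protocol 1.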

===============================================================================

Q) Resource-Allocation Graph
Deadlocks can be understood more clearly through the use of resource-allocation graphs, which have the following properties:

1. Resource types are represented as square nodes on the graph. Dots inside a square node indicate the number of instances of that resource (e.g., two dots might represent two laser printers).

2. Processes are represented as circles.

3. Request edges: if P1 has requested R1, a directed edge from P1 to R1 is a request edge.

4. Assignment edges: a directed edge from R2 to P1 indicates that resource R2 has been allocated to process P1 and that P1 is currently holding R2. Note that a request edge is converted into an assignment edge when the request is granted.

If a resource-allocation graph contains a cycle and each resource has only a single instance, then a deadlock exists. If a resource category contains more than one instance, then a cycle in the resource-allocation graph indicates the possibility of a deadlock but does not guarantee one.

===============================================================================

Q) Deadlock Avoidance
For each resource request, the system can decide whether the request should be granted or not. To make this decision, the system must have information such as:

1. Resources currently available
2. Resources currently allocated to each process
3. Future requests and releases of each process
4. Maximum number of resources each process may need

Given this information, it is possible to construct an algorithm that makes sure the system will never enter a deadlocked state. There are two deadlock-avoidance algorithms:

1. Resource-allocation-graph algorithm
2. Banker's algorithm

Safe state: a system is in a safe state if there exists a safe sequence of processes <P0, P1, ..., PN> such that the resource requests of each Pi can be satisfied by the currently available resources plus the resources held by all Pj with j < i. All safe states are deadlock-free.

    Unsafe state If a safe sequence does not exist then the system is in an unsafe state which MAY lead to deadlock

1. Resource-Allocation-Graph Algorithm: resource-allocation graphs can be used for avoidance only if the number of instances of each resource type is one. In this case, unsafe states can be recognized and avoided by adding claim edges, denoted by dashed lines, which point from a process to a resource that it may request in the future. All claim edges are added at the beginning, before the process starts. When a process makes a request, the claim edge Pi → Rj is converted to a request edge; when the resource is released, the assignment edge changes back to a claim edge. This approach works by denying any request that would produce a cycle in the resource-allocation graph, taking claim edges into account.

Consider, for example, the resource-allocation graph as shown.

If P2 requests resource R2, then the claim edge P2 → R2 will be converted into a request edge as follows.

The resulting resource-allocation graph would have a cycle in it, and so the request cannot be granted.

Q) Banker's algorithm (deadlock-avoidance algorithm) with example
There are 12 tape drives. Let the current state of the system be as shown below:

Process   Allocated   Max Need   Need = Max Need - Allocated
P0        5           10         5
P1        2           4          2
P2        2           9          7

Available = 12 - (5 + 2 + 2) = 3

Resource-Request Algorithm
Now, when process P2 makes a request for 1 tape drive, we run the resource-request algorithm to check whether the request should be granted. The request is granted only if, after granting it, all the processes in the system can still complete. We check:

1. Is the request of P2 <= need of P2? 1 <= 7, so TRUE.
2. Is the request of P2 <= Available? 1 <= 3, so TRUE.
3. Pretend the request is granted for P2. The current state is now:

Process   Allocated    Max Need   Need
P0        5            10         5
P1        2            4          2
P2        2+1 = 3      9          7-1 = 6

Available = 3 - 1 = 2

Now run the safety algorithm to check whether the system is in a safe state.

Safety Algorithm
1. Let WORK = Available = 2.
2. Find an unfinished process whose need <= WORK.

Check P0: need of P0 = 5, WORK = 2. Is 5 <= 2? FALSE.

Check P1: need of P1 = 2, WORK = 2. Is 2 <= 2? TRUE, so P1 can finish. When P1 finishes, WORK = WORK + allocation of P1 = 2 + 2 = 4.

Now again check if P0 can complete: need of P0 = 5, WORK = 4. Is 5 <= 4? FALSE.

Check if P2 can complete: need of P2 = 6, WORK = 4. Is 6 <= 4? FALSE.

So neither P0 nor P2 can complete: the system would be in an unsafe state. Therefore the request for 1 tape drive by P2 is not granted.

2. Banker's Algorithm: for resources that have more than one instance, the resource-allocation-graph method does not work, so we use the Banker's algorithm. When a process starts up, it must state in advance the maximum allocation of resources it may request. When a request is made, the scheduler determines whether granting the request would leave the system in a safe state. If not, the process must wait until the request can be granted safely.

The Banker's algorithm relies on several key data structures (where n is the number of processes and m is the number of resource categories):

o Available[m]: how many resources of each category are currently available.
o Max[n][m]: the maximum demand of each process for each resource category.
o Allocation[n][m]: the number of resources of each category allocated to each process.
o Need[n][m]: the remaining resources needed of each type for each process. (Note that Need[i][j] = Max[i][j] - Allocation[i][j] for all i, j.)

For simplification of the discussion, one row of the Need matrix, Need[i], can be treated as a vector corresponding to the needs of process i, and similarly for Allocation and Max.

Safety Algorithm
In order to apply the Banker's algorithm, we first need an algorithm for determining whether or not a particular state is safe. This algorithm determines whether the current state of a system is safe, according to the following steps:

1. Let Work and Finish be vectors of length m and n, respectively. Work is a working copy of the available resources; Finish is a vector of booleans indicating whether a particular process can finish. Initialize Work = Available and Finish[i] = false for all i.

2. Find an i such that both (A) Finish[i] == false and (B) Need[i] <= Work. This process has not finished, but could finish with the given available working set. If no such i exists, go to step 4.

3. Set Work = Work + Allocation[i] and set Finish[i] = true. This corresponds to process i finishing up and releasing its resources back into the work pool. Then loop back to step 2.

4. If Finish[i] == true for all i, then the state is a safe state, because a safe sequence has been found.

Resource-Request Algorithm (the Banker's Algorithm)
Now we have a tool for determining whether a particular state is safe. This algorithm determines whether a new request is safe, and grants it only if it is safe to do so. When a request is made (one that does not exceed the currently available resources), pretend it has been granted, and then see whether the resulting state is safe. If so, grant the request; if not, deny it, as follows:

    1. Let Request[n][m] indicate the number of resources of each type currently requested by each process. If Request[i] > Need[i] for any process i, raise an error condition: the process has exceeded its maximum claim.

    2. If Request[i] > Available, then process i must wait for resources to become available; else the process can continue to step 3.

    3. Check whether the request can be granted safely, by pretending it has been granted and then seeing if the resulting state is safe. If the resulting state is safe, grant the request; otherwise the process must wait until its request can be granted safely. The procedure for granting a request (or pretending to, for testing purposes) is:

    Available = Available - Request[i]
    Allocation[i] = Allocation[i] + Request[i]
    Need[i] = Need[i] - Request[i]
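    The whole request procedure, including the pretend-grant and rollback, can be sketched as a self-contained function (the data in the check below is a classic textbook example, an assumption rather than something from these notes):

    ```python
    def try_request(pid, request, available, max_claim, allocation):
        """Banker's resource-request algorithm: grant the request only if safe."""
        n, m = len(allocation), len(available)
        need = lambda i: [max_claim[i][j] - allocation[i][j] for j in range(m)]
        if any(request[j] > need(pid)[j] for j in range(m)):
            raise ValueError("process has exceeded its maximum claim")
        if any(request[j] > available[j] for j in range(m)):
            return False                                # must wait for resources
        # pretend the request has been granted
        for j in range(m):
            available[j] -= request[j]
            allocation[pid][j] += request[j]
        # safety check on the resulting state
        work, finish = list(available), [False] * n
        progress = True
        while progress:
            progress = False
            for i in range(n):
                if not finish[i] and all(need(i)[j] <= work[j] for j in range(m)):
                    for j in range(m):
                        work[j] += allocation[i][j]
                    finish[i] = progress = True
        if all(finish):
            return True                                 # safe: keep the grant
        # unsafe: roll back the pretended grant, the process must wait
        for j in range(m):
            available[j] += request[j]
            allocation[pid][j] -= request[j]
        return False
    ```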

    Unit III Memory management

    Just as processes share the CPU, they also share physical memory. The memory-management component of the OS takes care of memory allocation, deallocation, and related issues. A program must be brought into memory for it to run. Addresses are of two types:

    i) Relocatable (relative) addresses, expressed with respect to the beginning of the program. ii) Absolute addresses.

    Q) Address Binding
    Ans: Binding means mapping the logical address space to the physical address space. Address binding can happen at three different stages:

    Compile time: If it is known at compile time where in memory the program will reside, the compiler generates absolute addresses;

    otherwise the compiler generates relocatable addresses.

    Load time: The loader binds the relocatable addresses generated by the compiler to absolute addresses; hence binding is done at load time. If binding is done at compile or load time, physical and logical addresses are the same.

    Execution time: If address binding is done at run time, the process can be moved during its execution from one memory segment to another. Here we call the logical addresses virtual addresses. Run-time mapping of virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU).

    Q) Logical vs Physical Address Space
    Ans: Logical address: generated by the CPU; also referred to as a virtual address. Physical address: the address seen by the memory unit. The set of all logical addresses is called the logical address space; the set of all physical addresses is called the physical address space.

    Q) Memory-Management Unit (MMU)
    Ans: The MMU is a hardware device that maps virtual to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by the CPU before the contents of memory are read. The user program deals only with logical addresses; it never sees the real physical addresses.

    Q) Dynamic loading
    Ans: Since physical memory is small, it may not be possible for the entire program to be in main memory, so we can use dynamic loading. It is the responsibility of users to design their programs to take advantage of dynamic loading. Only the main function is loaded into main memory; when main() calls another function, it first checks whether that function is in main memory. If not, the loader loads the desired function into main memory and updates the program's address table.

    A routine is not loaded until it is called. We achieve better memory-space utilization, as an unused routine is never loaded.

    Q) Dynamic Linking
    Ans: Linking is postponed until execution time. A small piece of code, the stub, is used to locate the appropriate memory-resident library routine. The stub replaces itself with the address of the routine and executes the routine. The operating system is needed to check whether the routine is in the process's memory address space. Dynamic linking is particularly useful for libraries.

    Q) Overlays
    Ans: Overlays are needed when a process is larger than the amount of main memory allocated to it. Only the instructions and data that are needed at any given time are kept in main memory. Overlays can be implemented by the user; the programming design of an overlay structure is complex. Overlays for a two-pass assembler are shown in the figure.

    Q) Swapping
    Ans: A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. Backing store: a fast disk, large enough to store copies of all memory images of all users. Priority-based scheduling uses a variant of this swapping policy called roll out, roll in: if a higher-priority process arrives, the memory manager swaps out a lower-priority process and swaps in the higher-priority process; when the higher-priority process finishes, the lower-priority process can be swapped back into main memory. A process that is swapped out will be swapped back into the same memory space it occupied previously. The major part of the swap time is transfer time. We can swap only idle processes; we cannot swap a process that is waiting for IO.

    Q) Contiguous memory allocation
    Ans: Each process is contained in a single contiguous section of memory.
    1. Fixed-size partition (or) single-partition allocation

    Divide the main memory into fixed-sized partitions; each partition may contain exactly one process. The relocation register contains the value of the starting physical address, and the limit register contains the range of legal logical addresses. Every address generated by the CPU is checked as follows:

    If logical address < limit register, then the logical address is added to the relocation register to get the corresponding memory address;

    else a trap to the OS is generated. Since every address is checked, we can protect the OS and other user programs from being modified by a running process.
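    The relocation-plus-limit check above can be sketched as follows (the register values are invented for illustration):

    ```python
    RELOCATION_REG = 14000   # starting physical address of the partition (example value)
    LIMIT_REG = 3000         # range of legal logical addresses (example value)

    def translate(logical_address):
        """Hardware address check: relocate legal addresses, trap otherwise."""
        if logical_address >= LIMIT_REG:
            raise MemoryError("trap to OS: logical address out of range")
        return logical_address + RELOCATION_REG
    ```

    For example, logical address 100 maps to physical address 14100, while address 3500 raises the trap.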

    2 Multiple-partition allocation

    1) Fixed-size partitions: Divide the main memory into fixed-sized blocks. Here the memory allocated to a process may be larger than required. The difference between allocated memory and requested memory is called internal fragmentation: unused memory inside the partition.

    0    P1   Block 0 (addresses 0 to 3)
    4    P1   Block 1 (addresses 4 to 7)
    8         Block 2 (addresses 8 to 11)
    12        Block 3 (addresses 12 to 15)

    If P1 requires just 5 addresses, it is still allocated 2 blocks, i.e., 8 addresses; the remaining 3 addresses are left unused.

    2) Variable-size partitions: Here main memory is divided into partitions of variable sizes. The operating system maintains information about:

    a) allocated partitions b) free partitions (holes)

    In the beginning all main memory is empty and is considered one large block of available memory a hole

    Only the exact memory required by a process is allocated. When a process terminates, it releases its memory, which can then be allocated to another process.

    Memory is allocated to processes until no available block of memory (hole) is large enough to hold the next process. External fragmentation exists when there is enough total memory to satisfy a request but the available memory is not contiguous: memory is wasted outside the partitions. One solution to the problem of external fragmentation is:

    a. Compaction: shuffle the memory contents so as to place all free memory together in one large block.

    ===============================================================================Q) Dynamic Storage-Allocation Problem (or) Most commonly used strategies to select a free hole from the set of available holes
    Ans: To satisfy a request of size n from a list of free holes, the following three policies can be used:

    1 First-fit Allocate the first hole that is big enough

    2. Best-fit: Allocate the smallest hole that is big enough; the entire list must be searched, unless it is ordered by size. Produces the smallest leftover hole.

    3. Worst-fit: Allocate the largest hole; the entire list must also be searched. Produces the largest leftover hole.

    First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.

    ===============================================================================Q) Non-Contiguous Memory allocation

    1. Paging: Paging is a memory-management scheme that permits non-contiguous memory allocation.

    0    P1   Block 0 (addresses 0 to 3)
    4    P2   Block 1 (addresses 4 to 7)
    8         Block 2 (addresses 8 to 11)
    12   P1   Block 3 (addresses 12 to 15)

    The logical address space of a process can be non-contiguous. Ex: P1 is allocated Block 0 and Block 3.

    1. Divide physical memory into fixed-sized blocks called frames (the size is a power of 2).
    2. Divide logical memory into blocks of the same size, called pages.
    3. A logical address is divided into 2 parts:

    Page number (p): the page table is indexed by the page number. Page offset (d): the displacement within the page.

    4. A page table is allocated to each process (a pointer to the page table is stored in the PCB of the process). The page table translates logical to physical addresses: page 0 is in frame 1, page 1 is in frame 4, etc.
    5. Internal fragmentation may occur due to paging.

    6. If the size of the logical address space is 2^m and the page size is 2^n, then the higher-order m-n bits of the logical address give the page number and the n lower-order bits give the displacement within the page.

    Ex: size of logical address space = 8 = 2^3, so m = 3;

    page size = 4 = 2^2, so n = 2.

    So m-n = 1 bit for the page number (values 0 and 1),

    and the remaining n = 2 bits for the displacement within the page.
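    The bit-level split in the example (m = 3, n = 2) can be checked with a short sketch:

    ```python
    M, N = 3, 2      # logical address space 2**3 = 8, page size 2**2 = 4

    def split(addr):
        """Split a logical address into (page number, offset)."""
        page = addr >> N                   # high-order m - n bits
        offset = addr & ((1 << N) - 1)     # low-order n bits
        return page, offset
    ```

    For instance, logical address 6 (binary 110) splits into page 1, offset 2.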

    Q) Implementation of Page Table

    Page table can be kept as

    1) As a set of dedicated registers: efficient if the page table is small.

    2) In memory: the page-table base register (PTBR) points to the page table, and the page-table length register (PTLR) indicates the size of the page table. Changing the page table then requires changing only the values in these registers.
    Advantage of keeping the page table in memory: a. Less context-switch time.

    Disadvantage: b. Two memory accesses are required: one memory access to read the page-table entry and another to access the required memory address.

    This problem can be solved by using a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB)

    i Each entry in TLB consists of 2 parts

    I) page number field

    II) Value field or frame number field

    ii. When a logical address is generated by the CPU, its page number is presented to the TLB. The page number is compared with all the entries of the TLB simultaneously.

    If the page number is found in the TLB, its frame number is immediately available: a TLB hit. If the page number is not found, the page table must be consulted: a TLB miss.

    With a TLB, the search is fast; but TLB hardware is expensive, so the TLB is kept small and holds only a few frequently used page-table entries.
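    The hit/miss path can be sketched with a small dictionary acting as the cache (the page-table contents are invented for illustration):

    ```python
    page_table = {0: 1, 1: 4, 2: 3, 3: 7}   # page -> frame (illustrative values)
    tlb = {}                                 # small cache of recent translations
    hits = misses = 0

    def lookup(page):
        """Return the frame for a page, consulting the TLB first."""
        global hits, misses
        if page in tlb:
            hits += 1                # TLB hit: frame number available immediately
            return tlb[page]
        misses += 1                  # TLB miss: fall back to the page table
        frame = page_table[page]
        tlb[page] = frame            # cache the entry for future references
        return frame
    ```

    A second reference to the same page is then served from the TLB without touching the page table.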

    Q) Protection in Paging: A valid/invalid bit is attached to each entry in the page table. The OS sets this bit for each page to allow or disallow access to the page. When the bit is set to invalid, the page is not in the process's logical address space; an access then generates a trap to the OS.

    Q) Shared pages in Paging

    Reentrant code is code that never changes during execution; hence reentrant code can be shared.

    1. One copy of read-only (reentrant) code is shared among processes (i.e., text editors, compilers, window systems). Ex: ed1, ed2, ed3 are shared among processes P1 and P2.

    2 Shared code must appear in same location in the logical address space of all processes

    Each process keeps a separate copy of the private code and data EX data1 and data2

    The pages for the private code and data can appear anywhere in the logical address space

    ==============================================================================Q) Segmentation

    Segmentation is a memory-management scheme that supports the user's view of memory.

    When the user program is compiled the compiler generates segments like

    1) The code segment

    2) Global variables segment

    3) Heap memory segment

    4) Stack segment etc

    Each entry in segment table has

    a segment base (the starting physical address of the segment) and a segment limit (the length of the segment)

    Logical address is divided into 2 parts

    segment-number offset

    Segment table is indexed by segment number

    Segment-table base register (STBR) stores the location of segment table in main memory

    Segment-table length register (STLR) stores number of segments used by a program

    The segment number (say s) is used to find the entry in the segment table; the required entry is s locations from the beginning of the table. Once the required entry in the segment table is found, the offset (d) is compared with the limit.

    If (offset < limit), then the offset is added to the base entry to generate the physical address; otherwise a trap to the OS is generated.
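    The lookup just described can be sketched as follows (the base/limit values are invented for illustration):

    ```python
    segment_table = [
        (1400, 1000),   # segment 0: base 1400, limit 1000 (example values)
        (6300, 400),    # segment 1
        (4300, 400),    # segment 2
    ]

    def translate(segment, offset):
        """Translate (segment, offset) to a physical address, trapping on overflow."""
        base, limit = segment_table[segment]    # entry s locations into the table
        if offset >= limit:
            raise MemoryError("trap to OS: offset exceeds segment limit")
        return base + offset
    ```

    For example, offset 53 in segment 2 maps to physical address 4353, while offset 400 in segment 1 traps.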

    Q) Shared Segments: Code sharing occurs at the segment level. Shared segments must have the same segment number.

    Allocation: a dynamic storage-allocation problem; best fit or first fit is used, which may cause external fragmentation.

    Protection: protection bits (read/write/execute privileges) are associated with segments. Placing an array in a separate segment lets the hardware check for illegal array indexes.

    ===============================================================================Q) Virtual memory

    Ans: In paging and segmentation, a program will execute only if the entire process is in main memory. With virtual memory, only part of the program needs to be in memory for execution.

    1. A program larger than the available main memory can still run, as only part of the program is loaded into main memory. The logical address space can therefore be much larger than the physical address space.

    2 Allows address spaces to be shared by several processes

    Virtual memory can be implemented via Demand paging Demand segmentation

    The large blank space between the heap and the stack is part of the virtual address space. Virtual memory allows files and memory to be shared by 2 or more processes through page sharing, as shown in figure (b). Figure (a) depicts virtual memory that is larger than physical memory.

    ==============================================================================

    Q) Demand paging
    Demand paging is the technique of loading pages from disk to main memory only when a page is needed. Hence, using a smaller amount of physical memory, we get a faster response.

    Demand paging is similar to PAGING + SWAPPING

    Instead of swapping a whole process the pager brings only those pages needed into memory

    To distinguish between pages that are in memory and pages that are on the disk validinvalid bit is used

    Validinvalid bit is attached to each entry in page table When this bit is set to valid the page is legal and in memory When this bit is set to invalid either the page is illegal or the page is in disk

    When the CPU generates a logical address whose page-table entry is set to invalid, a page fault occurs, meaning the required page is on disk. A section of the hard disk called swap space is used to hold pages that are not present in memory.

    Q) Procedure for Handling a Page Fault

    1) CPU generates a logical address and if the validinvalid bit for the page that has this logical address is set to invalid page fault occurs

    2) Page fault causes a trap to operating system

    3) Check whether the logical address is within the logical address space of the process (i.e., PTBR and PTLR are checked). If the required logical address is not within the logical address space of the process,

    terminate the process; else

    the reference was valid, and the page is simply not in memory but on disk.

    4) To bring the required page into memory: find a free main-memory frame, map the memory address to a disk block, fetch the disk block, and load it into the free frame. 5) When the required page is brought into memory, update the page table to indicate that the page is in memory. 6) Restart the instruction interrupted by the trap. The process continues as if the page had always been in memory.

    Q) What is pure demand paging? Ans: If no page belonging to the executing process is in main memory, the process will fault for every page it needs; page faults occur until every needed page is in memory. This scheme is known as pure demand paging: NEVER BRING A PAGE UNTIL IT IS REQUIRED.

    Q) Page Replacement
    1. Find the location of the desired page on the disk.
    2. Find a free frame:

    i) If there is a free frame, use it.
    ii) If there is no free frame, use a page-replacement algorithm to select a victim frame holding a page that is not really in use.
    iii) If the modify bit associated with the victim frame is set to 1, the page is written to disk; else there is no need to write it to disk. Change the valid/invalid bit of the victim page's page-table entry to invalid, to indicate that the page is no longer in memory.

    3. Read the desired page into the freed frame, and update the frame and page tables.
    4. Restart the user process.

    =================================================================================================Q) Page Replacement Algorithms
    Goal: produce a low page-fault rate. The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

    1. First-In First-Out (FIFO) page-replacement algorithm: uses the time when a page was brought into memory. The page that was brought into memory first is replaced, i.e., the oldest page is chosen for replacement. We create a FIFO queue to hold all pages in memory.

    When a page is brought into memory we insert it at the tail of the queue and we replace the page at the head of the queue

    FIFO: 15 page faults.
    A problem with FIFO is Belady's anomaly. For the reference string 1 2 3 4 1 2 5 1 2 3 4 5:
    When the number of frames allocated to the process is 1, FIFO gives 12 page faults.
    When the number of frames allocated to the process is 2, FIFO gives 12 page faults.
    When the number of frames allocated to the process is 3, FIFO gives 9 page faults.
    When the number of frames allocated to the process is 4, FIFO gives 10 page faults.
    As the number of frames allocated to a process increases, page faults should decrease; but with FIFO replacement they can increase. This is Belady's anomaly.
    2. Optimal Page-Replacement Algorithm: Replace the page that will not be used for the longest period of time. It is used for measuring how well other algorithms perform. It is not practical to implement OPT, as it is not possible to know what the future references will be. Ex:

    Optimal page replacement 9 page faults
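    FIFO fault counts are easy to check by simulation. A minimal sketch (the 15-fault reference string below is the standard textbook one, assumed to match the notes' omitted example; the second string is the Belady's-anomaly example):

    ```python
    def fifo_faults(refs, nframes):
        """Count page faults for FIFO replacement on a reference string."""
        frames, faults = [], 0
        for page in refs:
            if page not in frames:
                faults += 1
                if len(frames) == nframes:
                    frames.pop(0)        # evict the page at the head of the queue
                frames.append(page)      # insert the new page at the tail
        return faults

    belady = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
    # fifo_faults(belady, 3) == 9 but fifo_faults(belady, 4) == 10: Belady's anomaly
    ```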

    3 Least Recently used (LRU) Page replacement algorithm Replace the page that has not been used for longest period of time LRU makes use of time of last use of the page Ex

    Number of page faults = 12

    LRU requires hardware assistance to determine the time of last use of the page We can use 1) Stack or 2) Counter to implement LRU page replacement algorithm

    1) Stack implementation: keep a stack of page numbers in doubly linked form. When a page is referenced, it is moved to the top of the stack. This implementation requires up to 6 pointers to be changed per reference.

    2) Counter implementation: a time-of-use field is associated with each allocated frame. The CPU maintains a counter that is incremented on every memory reference; the counter value is copied into the time-of-use field of the referenced page.
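    Independent of the stack or counter hardware, the LRU policy itself can be simulated directly (a sketch; the reference string in the check is the standard textbook one, assumed to correspond to the 12-fault figure above):

    ```python
    def lru_faults(refs, nframes):
        """Count page faults for LRU replacement on a reference string."""
        frames, faults = [], 0        # front of the list = least recently used
        for page in refs:
            if page in frames:
                frames.remove(page)   # already resident: refresh its recency
            else:
                faults += 1
                if len(frames) == nframes:
                    frames.pop(0)     # evict the least recently used page
            frames.append(page)       # this page is now the most recently used
        return faults
    ```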

    Disadvantages of the LRU page-replacement algorithm: the counter or stack must be updated on every memory reference, HENCE LRU IS SLOW, and the implementation requires hardware assistance.
    ==============================================================================Q) LRU approximation page replacement
    1) Using a reference bit
    2) Additional-reference-bits algorithm
    3) Second-chance algorithm
    4) Enhanced second-chance algorithm
    5) Counting-based page replacement

    Counting-based replacement has two schemes: 1) Least Frequently Used (LFU) and 2) Most Frequently Used (MFU).

    6) Page-buffering algorithms
    Q) What is Thrashing?
    Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur when global (or local) page replacement is used, as described below. The solution to thrashing is the working-set strategy: if we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know how many frames a process needs, we use the locality model of process execution.

    Here, a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes locality. The working-set model uses a working set and a parameter ∆, the working-set window: the set of pages in the most recent ∆ page references is the working set. Let WSSi be the working-set size for process Pi (i.e., Pi needs WSSi frames), and let D be the total demand for frames, D = sum of all WSSi,

    and let m be the total number of available frames.

    If D > m, thrashing occurs, because some processes will not have enough frames. So, if (D > m),

    the OS selects a process to suspend; else the OS may initiate another process. The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.
    ========================================================================Q) How does thrashing occur in global and local page replacement?
    Ans: In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, possibly causing a page fault for another process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more and more processes queue up for the paging device,

    the ready queue empties and the CPU becomes idle. So the OS introduces new processes, as the CPU is free; this further increases the number of page faults.

    In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs, only the frames allocated to that process are replaced. When a process starts thrashing, other processes are not directly affected. But since processes spend most of their time queued for the paging device, the average service time for a page fault increases; hence the effective access time increases even for a process that is not thrashing.
    Q) Page-Table Structure (or) Structure of the page table in memory

    Hierarchical Paging Hashed Page Tables Inverted Page Tables

    Hierarchical Paging: If the page table is large, we break it up into multiple page tables, i.e., the page table itself is paged. A simple technique is a two-level page table.
    Two-level paging example: A logical address (on a 32-bit machine with a 4K page size) is divided into: 1. a page number consisting of 20 bits, and 2. a page offset consisting of 12 bits. Since the page table is paged, the page number is further divided into: 1. a 10-bit outer page number p1, and 2. a 10-bit inner page number p2. Thus a logical address is as follows:

    page number | page offset
    p1 (10 bits) | p2 (10 bits) | d (12 bits)

    where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table. Two-Level Page-Table Scheme

    Address-Translation Scheme Address-translation scheme for a two-level 32-bit paging architecture

    Hashed Page Tables (or) Hash table: Hashed page tables are common in address spaces larger than 32 bits. Each entry in the hash table contains a linked list of elements that hash to the same location. The hash table is indexed by hash value. Each element consists of 3 fields:

    1. Page number 2. Frame number 3. Pointer to the next element in the linked list

    The algorithm works as follows: the page number is passed to a hash function to get a hash value. Locating the hash value in the hash table is easy, as the table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address. If there is no match, the next entries in the linked list are searched.
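    The same chained lookup can be sketched with explicit buckets of (page, frame) pairs (the bucket count and entries are invented for illustration):

    ```python
    NBUCKETS = 16
    buckets = [[] for _ in range(NBUCKETS)]   # each bucket: chain of (page, frame)

    def insert(page, frame):
        buckets[hash(page) % NBUCKETS].append((page, frame))

    def lookup(page):
        """Walk the chain in the page's bucket, comparing page numbers."""
        for p, frame in buckets[hash(page) % NBUCKETS]:
            if p == page:
                return frame
        raise KeyError("page not in table (page fault)")
    ```

    Pages 5 and 21 hash to the same bucket (21 mod 16 = 5), so the second lookup has to walk past the first chain entry.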

    Inverted Page Table

    The inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table has:

    Process-id, Page number

    A logical address also has 3 parts: Process-id, Page number, and Offset. The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, then i is the frame number, which is combined with the offset to form the physical address. If there is no match, the address is illegal. Advantages and disadvantages:

    It decreases the memory needed to store each page table, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to limit the search to one, or at most a few, page-table entries. Inverted Page Table Architecture

    Unit-IV CPU Scheduling

    Basic concepts: To maximize CPU utilization, some process must be running at all times.

    In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for IO, the OS takes the CPU away from that process and assigns it to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

    CPU-IO burst cycle: Process execution begins with a CPU burst, followed by an IO burst, which is followed by another CPU burst, then another IO burst, and so on, as shown in the figure.

    An IO bound program has many short CPU bursts A CPU-bound program has few long CPU bursts

    Schedulers: modules in the OS that make scheduling decisions.
    Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
    Medium-term scheduler: selects which processes should be swapped in/out of memory.
    Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term scheduler (or CPU scheduler).

    Dispatcher: gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

    1. switching context, 2. switching to user mode, 3. jumping to the proper location in the user program to restart that program.

    The time taken to stop one process and start another is known as dispatch latency; this must be kept small.
    Ready queue: The ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of processes.

    Pre-emptive Scheduling

    CPU scheduling decisions take place under one of four conditions

    1. When a process switches from the running state to the waiting state. 2. When a process switches from the running state to the ready state, for example in response to an

    interrupt. 3. When a process switches from the waiting state to the ready state, say at completion of IO. 4. When a process terminates.

    For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one. If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive, or cooperative. In non-preemptive scheduling, a process runs until it gives up the CPU itself, i.e., when it waits for IO or when it completes. Otherwise, the system is said to be pre-emptive.

    Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

    Note that pre-emptive scheduling can cause problems when two processes share data because one process may get interrupted in the middle of updating shared data structures

    Preemption can also be a problem if the kernel is busy executing a system call (e.g., updating critical kernel data structures) when the preemption occurs. One solution is to wait until the system call has either completed or blocked before allowing the pre-emption; but this solution is problematic for real-time systems.

    Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. This should be done only for very short pieces of code that take little CPU time.

    Scheduling criteria used to compare CPU scheduling algorithms:
    1. CPU utilization: in a real system, CPU utilization ranges from 40% to 90%.
    2. Throughput: the number of processes completed per unit time.
    3. Turnaround time: time when the process completed minus time when the process was submitted; i.e., it includes waiting time and burst time.
    4. Waiting time: the amount of time a process spends waiting for the CPU.
    5. Response time: time when the first response came minus time when the process was submitted.

    Optimization criteria: max CPU utilization, max throughput, min turnaround time, min waiting time, min response time.
    CPU Scheduling Algorithms
    1) First-Come, First-Served (FCFS) Scheduling: FCFS is non-pre-emptive; processes are scheduled in the order they arrive. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
    Advantages: easy to implement.
    Disadvantages: 1. The average waiting time is often quite long.

    2. Convoy effect (small processes wait in the queue for a big process to leave the CPU).
    Example:
    Process   Burst Time
    P1        24
    P2        3
    P3        3

    Suppose that the processes arrive in the order P1 P2 P3

    The Gantt Chart for the schedule is P1 P2 P3

    0        24    27    30
    Waiting time for P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.
    2) Shortest-Job-First (SJF) Scheduling: Schedule the process with the shortest CPU burst time. If the burst times of two processes are the same, FCFS is used.
    Advantages: the average waiting time decreases.
    Disadvantages: it is difficult to know the length of the next CPU burst.
    SJF can be either pre-emptive or non-pre-emptive:
    1. Non-pre-emptive SJF: once the CPU is given to a process, it cannot be pre-empted until it completes its CPU burst.
    2. Pre-emptive SJF, or Shortest-Remaining-Time-First (SRTF): if a newly arrived process is shorter than what remains of the currently executing process, pre-empt the currently executing process.
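    The FCFS waiting times in the example above follow directly from summing the bursts of the earlier processes; a minimal sketch:

    ```python
    def fcfs_waiting_times(bursts):
        """Waiting time of each process when all arrive at time 0, FCFS order."""
        waits, elapsed = [], 0
        for burst in bursts:
            waits.append(elapsed)     # a process waits for all earlier bursts
            elapsed += burst
        return waits

    waits = fcfs_waiting_times([24, 3, 3])      # P1, P2, P3
    average = sum(waits) / len(waits)           # (0 + 24 + 27) / 3 = 17
    ```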

    Example:
    Process   Arrival Time   Burst Time
    P1        0.0            7
    P2        2.0            4
    P3        4.0            1
    P4        5.0            4

    SJF (non-preemptive) Gantt Chart P1 P3 P2 P4

    0    7    8    12    16
    Average waiting time = [0 + (8-2) + (7-4) + (12-5)] / 4 = 4

    Example of Preemptive SJF

    Process   Arrival Time   Burst Time
    P1        0.0            7
    P2        2.0            4
    P3        4.0            1
    P4        5.0            4

    SJF (preemptive) P1 P2 P3 P2 P4 P1

    0    2    4    5    7    11    16
    Average waiting time = (9 + 1 + 0 + 2) / 4 = 3
    3) Priority Scheduling: A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. pre-emptive and 2. non-pre-emptive.

    Problem: starvation (low-priority processes may never execute). Solution: aging (as time progresses, increase the priority of the process).

    4) Round Robin (RR): Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is pre-empted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n-1)q time units.
    Performance: 1. If q is large, RR behaves like FIFO. 2. If q is small, q must still be large with respect to the context-switch time, otherwise the overhead is too high.
    Example of RR with time quantum = 4:
    Process   Burst Time
    P1        24
    P2        3
    P3        3

    The Gantt chart is:
    P1   P2   P3   P1   P1   P1   P1   P1

    0    4    7    10    14    18    22    26    30
    Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 = 5.66
    5) Multilevel Queue Scheduling: The ready queue is partitioned into separate queues. For example, the ready queue is divided into 2 queues:

    1 foreground (interactive) queue2 background (batch) queue

    Each queue has its own scheduling algorithm. For example, the round-robin scheduling algorithm can be used in the foreground queue,

    and the FCFS scheduling algorithm in the background queue. Scheduling must also be done between the queues. This can be done in 2 ways:
    1. Fixed-priority scheduling

    The foreground queue has the highest priority: all processes in the foreground queue must be completed, and the queue must be empty, before the processes in the background queue are given the CPU, as shown in the figure below. Disadvantage: starvation.

2. Time slice: each queue gets a certain amount of CPU time, which it can schedule amongst its processes, e.g., 80% to the foreground queue (RR) and 20% to the background queue (FCFS).

6) Multilevel Feedback Queue Scheduling: A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes stay in the highest-priority queue. If a process uses too much CPU time, it is moved to a lower-priority queue; if a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues: Q0, Q1, Q2.

1. Q0: time quantum 8 milliseconds 2. Q1: time quantum 16 milliseconds 3. Q2: FCFS

    Scheduling

1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 milliseconds; if it does not finish in 8 milliseconds, it is moved to queue Q1. 2. At Q1 the process receives an additional 16 milliseconds; if it still does not complete, it is moved to queue Q2.

Q) Synchronization
Ans: Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of a process-synchronization problem is the Producer-Consumer (or Bounded-Buffer) problem: we must make sure that the producer process does not add data when the buffer is full, and that the consumer process does not take data when the buffer is empty.

The code for the producer process can be written as follows:

while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ;   /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed. in is an index that always points to the first empty slot after the last filled slot; out is an index that always points to the first filled slot, at the head of the circular queue.

The code for the consumer process is as shown below:

while (true) {
    while (counter == 0)
        ;   /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in nextConsumed */
}

The above producer and consumer code produces a RACE CONDITION: the outcome of the execution of several co-operating processes depends on the order in which their accesses to shared data take place. For example, suppose counter = 5.

Given counter = 5, if the producer produces one item and the consumer consumes one item, the correct final value is counter = 5. But under an unlucky interleaving of the ++ and -- instructions the result is counter = 4, an inconsistent result; and if the store T5 is executed before the store T4, then counter = 6, also inconsistent.

Solution to avoid the race condition: make sure only one process at a time updates the shared variable (here, counter).

Q) Critical section problem
Ans: Each process has a segment of code called the critical section. Critical-section code is the code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical-section code at the same time. For this to happen:
1. Each process must request permission to enter its critical section (entry section).
2. The end of the critical section is the exit section.
3. The rest of the code after the critical section is the remainder section.

A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes that are not in their remainder section may take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: there is a bound on how long a process waits to enter its critical section after requesting entry; no process waits indefinitely.

Q) Peterson's solution for the critical-section problem (software-based solution)
Ans: The two processes share two variables: int turn and boolean flag[2]. flag[i] = TRUE means process i is ready to enter its critical section; turn indicates whose turn it is to enter. A sample interleaving:

Time  Process 1                              Process 2
T0    do { flag[1] = TRUE; turn = 2;         do { flag[2] = TRUE;
T1                                           turn = 1;
T2    while (flag[2] && turn == 2);          while (flag[1] && turn == 1);
      turn is now 1, so the loop exits       loops continuously until flag[1] becomes FALSE
T3    enters critical section                (still waiting)
T4    flag[1] = FALSE;                       enters critical section
T5    executes remainder section;            flag[2] = FALSE;
      } while (TRUE);                        executes remainder section;
                                             } while (TRUE);

10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure, which depends on its type.

    File Attributes

1. Name: the name of the file, for user understanding.
2. Identifier: used by the OS to identify a file (e.g., inode number).
3. Type: text files, executable files, binary files, etc.
4. Location: the location of the file on the storage device.
5. Size: the current size of the file.
6. Protection: controls who can read, write, or execute the file.
7. Time, date, and user identification: when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. The directory structure is kept on disk.

    File Operations

The file Abstract Data Type supports many common operations:
1. Creating a file: find free space on disk and add an entry to the directory.
2. Writing a file: write data at the current file-position pointer location and update the pointer.
3. Reading a file: read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek): change the pointer location to a given value.
5. Deleting a file: free the space allocated to the file and remove its entry from the directory.
6. Truncating a file: delete the data and update the file size.

    Open File Table

Most operating systems require that files be opened before access and closed after access. Information about all files currently open in the system is stored in the system-wide open-file table, maintained by the OS; information about the files opened by a particular process is stored in that process's open-file table. The open-file table has the following information:

1. File pointer: records the current position in the file for the next read or write.
2. File-open count: how many times the file has been opened (possibly simultaneously by different processes) and not yet closed. When this counter reaches zero, the file's entry can be removed from the table.

3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With advisory locking, software developers must make sure locks are acquired and released appropriately.
4. With mandatory locking, the OS ensures that locks are respected.
5. UNIX uses advisory locks; Windows uses mandatory locks.

    File Types

Implementing file types: a common technique is to include the type as part of the file name. The name is split into 2 parts:
1. File name
2. Extension

The file name and extension are separated by a period. The system can use the extension to indicate the type of file and the type of operations that can be done on that file. Extensions are hints used by applications rather than something enforced by the operating system: when a user opens Microsoft Word and chooses File > Open, the user may specify only the file name without the extension, and the application looks for a file with the given name and the extension it expects. UNIX systems instead use a crude magic number stored at the beginning of the file to indicate the file type. In the Mac OS X operating system, a file produced by a word processor has the word processor's name recorded as its creator, so when the user double-clicks the file, that word processor opens it. Some common file types are as shown below.

File Structure: Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and the location of the first instruction.

If the OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load and where to find the first executable statement).

    Macintosh files have two forks - a resource fork and a data fork The resource fork contains information relating to the UI such as icons and button images and can be modified independently Data fork contains the code or data

    Internal File structure

    Disk files are accessed in units of physical blocks typically 512 bytes

Internally, files are organized in logical units: a single byte, a data-record size, or a structure size.

The number of logical units that fit into one physical block determines the packing. If a student file is organized internally as records, each record takes 64 bytes, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e., 8 records can be stored per block.

    Internal fragmentation may occur due to this

10.2 Access Methods

The information in a file can be accessed in the following ways: 1. Sequential access 2. Direct access (or relative access)

1. Sequential access: we cannot randomly access an arbitrary record; records are read and written sequentially, one record after another.

A sequential-access file emulates magnetic-tape operation and generally supports the following operations:
1. read next: read a record and advance to the next position.
2. write next: write a record and advance to the next position.
3. rewind.
4. skip n records: may or may not be supported; n may be limited to positive numbers or to +/- 1.

2. Direct access (or relative access): the file is made up of fixed-length records, so we can randomly access any record.

    No restriction on order of reading or writing

    Direct access files are useful when we need to access large amounts of data randomly

    Ex database

Operations supported include (n is the relative block number, i.e., relative to the beginning of the file):
1. read n: read record number n.
2. write n: write record number n.
3. jump to record n: n could be 0 or the end of the file.
4. query current record: used to return to this record later.
5. Sequential access can easily be emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct-access method. We construct an index for the file containing pointers to the various blocks; to search the file, we first consult the index and then use the pointer to access the file directly.

If a student file has 120,000 records sorted by student id, we can create an index consisting of the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120,000 records = 120000/64 = 1875 ≈ 2000 blocks, so the index has about 2000 entries. The index is kept in main memory so that searching becomes faster.

    If the index itself becomes large we create an index for index file (Ex B-tree)

    Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

    Three major methods of allocating disk space are

    1 Contiguous 2 Linked

    3 Indexed

1 Contiguous allocation: Each file occupies a set of contiguous blocks on disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, …, b+n-1.

    The directory entry for each file indicates the address of starting block(ie b) and length (n)

    Advantages

1. Direct access to block i is easy: it is simply block b+i. Since the file is stored sequentially, sequential access is easy too.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

    Disadvantages

1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even when there are enough free blocks for a new file, they may not be contiguous and hence cannot be allocated to the file. Solution: copy all files onto another disk or tape and then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time-consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire space may not be used by the file at the beginning, leading to wasted space.

If the size of the file is not known in advance, then (i) if we allocate too little space, the file cannot be extended; the solution is to find a larger hole, copy the contents of the file to the new space, and release the previous space, which is possible only as long as such a hole exists and is time-consuming; (ii) if we overestimate the amount of space needed, much of the space goes unused, leading to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous-allocation scheme: a chunk of contiguous blocks is allocated initially, and if the file needs more space, another chunk of contiguous blocks, known as an extent, is added. The directory entry then records the location and block count of the initial area plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2 Linked allocation: Each file is a linked list of disk blocks, and the blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; the pointer is nil for an empty file, and the size field of an empty file is 0.

To add more data to an existing file, a free block is found, the data is written to it, and the block is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. Easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is greater.
2. The number of disk seeks required is greater.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct-access files: to access the ith block, i disk reads are required.
5. Not reliable: if a pointer is lost or damaged, the rest of the file is lost.

A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk-access time, reduces the space wasted on pointers (only a few pointers are needed), and simplifies free-list management. Disadvantage: internal fragmentation; more space is wasted when a cluster is only partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file-allocation table (FAT). A section of the disk at the beginning of each volume contains the FAT, which has one entry for each disk block and is indexed by block number. The FAT entry for a block contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

    Advantages

1. Easy to find space for a new file or to add more data to an existing file.
2. Random-access time is improved.

Disadvantage: the number of disk seeks increases; the disk head must move to the FAT to find the location of a block, and then move to the location of the block itself.

    3 Indexed allocation

Each file has its own index block, an array of disk-block addresses; the ith entry in the index block points to the ith block of the file.

    The directory contains the address of the index block

    When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found and the data is written to it; the address of the block is then put in the index block.

    Advantages

1. Easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

    Disadvantage

1. Seek time is greater.
2. The number of disk seeks required is greater.
3. Not reliable: losing an index block loses access to the file.
4. The index block should be kept in memory. If this memory is not available, we have to read the index block and then the desired data block, i.e., 2 disk accesses are required; with linked index blocks, accessing a block near the end of the file may require reading several index blocks before the needed data block.

5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

    Various mechanisms followed for size of index block

1. Linked scheme: the size of an index block is one disk block. An index block contains a small header with the name of the file and a set of the first 100 disk-block addresses; for large files, the last address contains the address of another index block.

2. Multilevel index: to access a block, the OS uses a first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

    3 combined scheme

For example, say there are 15 index-block pointers in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data (direct blocks). The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing the addresses of the file's data blocks.

    The second points to double indirect blocks and third points to triple indirect blocks as shown below

    Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

    The free space list can be implemented as

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1 Bit vector (or bit map): Each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000.

    Advantages simple and easy to get contiguous files

    Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a large disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit map to be updated, because those data blocks are scattered all over the disk.

2 Linked List: The free-space list is implemented by linking all the free disk blocks together; a pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on. Advantage: no waste of space. Disadvantage: we cannot get contiguous space easily; to traverse the free-space list we must read each block, which requires substantial I/O time.

3 Grouping: The first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free; the last one contains the addresses of another n free blocks, and so on. Advantage: a large number of free blocks can be found quickly.
4 Counting: The free-space list can be maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it. Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if counts are generally > 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

5 Space maps: ZFS divides disk space into chunks of manageable size called metaslabs; each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory, updates it, and then updates the free-space list on disk.

Q) Synchronization Hardware
Ans: A hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. The TestAndSet() hardware instruction
2. The Swap() instruction

    1 Test and Set instruction is as shown

Solution using TestAndSet: the shared variable lock is initialized to false.

    2 Swap() instruction definition is as follows

Solution using Swap: the shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee about the relative speeds of the processes, a very fast process could release the lock, whip through its remainder section, and re-acquire the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The figure below illustrates a solution using TestAndSet that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

    Bounded-waiting mutual exclusion with TestAndSet( )

On exiting its critical section, a process scans the waiting array in order (starting with the next process after itself) for a process that is waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process while continuing to block all the others; if no process is waiting, it releases the lock.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: wait() and signal().

Wait primitive:
    wait(S) {
        while (S <= 0)
            ;   /* do nothing */
        S--;
    }

Signal primitive:
    signal(S) {
        S++;
    }

Each process that wishes to use a resource performs the wait() operation; when a process releases a resource, it performs the signal() operation.

Semaphore usage:
1. A binary semaphore (or mutex lock) can take the values 0 or 1.

2. A counting semaphore can take any integer value and is used to count the remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources; when the counter = 0 (or negative, in some implementations), a process blocks until another process frees a resource and increments the counting semaphore with a signal() call. For example, suppose there are 3 resources and 4 processes, so the counting semaphore = 3. If process P1 requires a resource, it performs wait(); now the semaphore = 2. If P2 requires a resource, it performs wait(); now the semaphore = 1. If P3 requires a resource, it performs wait(); now the semaphore = 0. If P4 requires a resource and performs wait(), P4 must wait until a resource becomes available.

    3 Semaphores can also be used to synchronize certain operations

Semaphore Implementation: With the definition above, while one process is in its critical section, any other process that tries to enter must loop continuously in wait(), wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process spins while waiting. As an example of using semaphores for ordering, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

    First we create a semaphore named synch that is shared by the two processes and initialize it to zero Then in process P1 we insert the code

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal. Each semaphore has an integer value and maintains a queue of waiting processes.

Semaphore implementation with no busy waiting: to overcome busy waiting, the waiting process is blocked (placed in the semaphore's waiting queue); it is later restarted by a wakeup() operation (i.e., moved from the waiting queue to the ready state) when some other process executes signal().

Problems due to semaphores:
1. Deadlocks: deadlock occurs when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes, as shown below.

2. Starvation: one or more processes remain blocked forever and never get a chance to enter the critical section. For example, this can happen if we do not specify the algorithm for adding processes to the waiting queue, or for selecting the one to be removed in the signal() call; if a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M, H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it preempts L, and M runs. Thus M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority (priority inheritance) so that M does not preempt it.
=====================================================================
Q) Classical problems on synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining Philosophers Problem

    These problems are used to test newly proposed synchronization scheme

1 The Bounded-Buffer Problem: Consider n buffers, each holding one item; a producer process that creates items (one at a time); and a consumer process that consumes them (one at a time).

    Producer process cannot produce an item if all the buffers are full and Consumer process cannot consume if all the buffers are empty As both processes modify the contents of buffer synchronization must be there For that we use 3 Semaphores

1. empty = n (counts the number of empty buffers)
2. full = 0 (counts the number of filled buffers)
3. mutex = 1 (provides mutual exclusion for access to the buffer)

    Producer process Consumer processdo

    hellip produce an item wait(empty) wait(mutex) hellip

    do wait(full) wait(mutex) hellip remove an item from buffer hellip

    add item to buffer hellip signal(mutex) signal(full)

    while (1)

    signal(mutex) signal(empty) hellip consume the item hellip while (1)

2 The Readers-Writers Problem: A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

1. Readers priority: a new reader need not wait just because a writer is waiting.
2. Writers priority: if a writer is waiting to access the data, no new readers may start reading.

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses a shared counter and 2 semaphores:
1. int readcount = 0 (counts the number of processes currently reading the shared data)
2. semaphore mutex = 1 (used for mutual exclusion when readcount is updated)
3. semaphore wrt = 1 (common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    ...
    /* reading is performed */
    ...
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);

Writer process:
    do {
        wait(wrt);
        ...
        /* writing is performed */
        ...
        signal(wrt);
    } while (TRUE);

3 The Dining Philosophers Problem: n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems; it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5];   /* initially all values are 1 */

Philosopher i:
    do {
        wait(chopstick[i]);
        wait(chopstick[(i+1) % 5]);
        ...
        /* eat */
        ...
        signal(chopstick[i]);
        signal(chopstick[(i+1) % 5]);
        ...
        /* think */
        ...
    } while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock: if all five philosophers pick up their left chopsticks at the same moment, each waits forever for the right one.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems have processors dedicated to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint.

    Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement is met under all conditions. Examples: digital audio and multimedia systems, digital telephones.

    DISTRIBUTED SYSTEMS execute parts of a computation in different systems at the same time It uses distributed control ie it spreads its decision-making actions across different computers in the system so that failures of individual computers or the network does not cripple its operation

    A distributed operating system appears as a uniprocessor system even though it has multiple processors The users may not know where their programs are being run or where their files are located that should all be handled automatically by the operating system

    Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

    DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location Modern desktop systems support multiprogramming Common examples are Linux FreeBSD Windows 8 and the Macintosh operating system

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. They have very little memory.


    CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system The components of a cluster are usually connected to each other through fast local area networks each node (computer used as a server) running its own instance of an operating system

    Q) Modern operating system

    New developments in hardware applications and security threats lead to the development of modern operating system

    New Hardware developments like increased machine speed high speed network increased size and variety of memory devices

    New Applications developments like multimedia applications internet and web access and client server computing

    Developments in modern operating system can be categorized as

    1 Microkernel architecture assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and it is well suited to distributed environments.

    2 Multithreading a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks. Example: a database server that listens for and processes many client requests.

    3 Symmetric multiprocessing As nowadays there are often multiple processors in a single system, an SMP operating system provides greater efficiency when multiple processors are present. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

    SMP schedules processes or threads across all of the processors

    Advantages

    1 Increased performance: if some portion of the program can be run in parallel, performance increases. 2 Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working. 3 Incremental growth: one can add a new processor to increase performance. 4 Scaling: cost can be made dependent on the number of processors used.

    4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

    5 Object oriented design adds modularity to the kernel. The OS can be customized without affecting system integrity.

    Q) Linux Operating System It is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. Linux is a multi-user, multitasking system. Its main design goals are speed, efficiency and standardization. Components of Linux System:


    Linux Operating System has primarily three components as shown in the below diagram

    Kernel - Kernel is the core part of Linux It is responsible for all major activities of this operating system It interacts directly with hardware Kernel hides low level hardware details to system or application programs

    System Library - System libraries are special functions or programs. These are used by application programs to access the Kernel's features.

    System Utility - System Utility programs are responsible to do specialized individual level tasks like updating log file accepting login requests from terminals etc

    Kernel Modules Sections of kernel code that can be compiled loaded and unloaded independent of the rest of the kernel A kernel module may typically implement a device driver a file system or a networking protocol

    There are three components to Linux module support: 1 Module management supports loading modules into memory and allows them to talk to the rest of the kernel. 2 Driver registration allows modules to inform the rest of the kernel that a new driver has become available. 3 Conflict resolution protects the reserved resources of one driver from accidental use by another driver.

    Kernel Mode vs User Mode Kernel component code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, does not require any context switch, and hence is very efficient and fast. The kernel runs each process, provides system services to processes, and provides protected access to hardware. Code which is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code. User programs and utilities use the system libraries to invoke kernel functions for low-level tasks.

    Basic Features Following are some of the important features of the Linux operating system:

    Portable - Portability means that software works on different types of hardware in the same way. The Linux kernel and application programs support installation on any kind of hardware platform.

    Open Source - Linux source code is freely available and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

    Multi-User - Linux is a multi-user system, meaning multiple users can access system resources like memory and application programs at the same time.

    Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

    Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

    Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system. It can be used to perform various types of operations, call application programs, etc.


    Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

    Architecture The Linux system architecture consists of the following layers:

    Hardware layer - Hardware consists of all peripheral devices (RAM HDD CPU etc)

    Kernel - Core component of Operating System interacts directly with hardware provides low level services to upper layer components

    Shell - An interface to the kernel. It takes commands from the user and executes the kernel's functions.

    Utilities - Utility programs that give the user most of the functionality of an operating system.

    Q) Windows XP operating system

    1 Extensibility - layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.

    2 Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

    3 Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.

    4 Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.

    5 Performance - XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. It is designed for symmetric multiprocessing.

    6 International support - supports different locales via the national language support (NLS) API.

    XP Architecture 1 Layered system of modules. 2 Protected mode - hardware abstraction layer (HAL), kernel, executive. 3 User mode - collection of subsystems:

    a Environmental subsystems emulate different operating systems. b Protection subsystems provide security functions.

    XP architecture


    Q) Windows network operating system Example: the Windows 2000 operating system.

    All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne, Operating System Concepts, Addison-Wesley.


    • 101 File Concept Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines or records. A file has a structure which depends on its type.

      3 The Resident Monitor (the part of the monitor that always resides in memory) holds initial control; control transfers to the job and then back to the monitor.

      4 Automatic job sequencing is done as follows: the user submits a job to the computer operator. The computer operator batches the jobs sequentially and places the entire batch on an input device (e.g. a card reader). The monitor reads one job at a time from the input device and places the current job in the user program area. Control is then passed to the current job; when the job is completed, control returns to the monitor.

      5 The monitor handles job setup and reduces setup time by batching jobs with similar requirements. Job Control Language (JCL) instructions are given to the monitor to execute the job. The monitor reads the $FTN card and loads the appropriate compiler from tape. During execution of the user program, any input instruction causes one data card to be read. After successful or unsuccessful completion of the user job, the monitor scans input cards until it encounters the next JCL card.

      Hardware features desirable in Batch system

      1 Memory protection 2 Timer 3 Privileged instructions 4 Interrupts

      2) Multi-programmed Batch System memory layout is as shown below.

      If there is enough main memory to hold the OS and two programs, then when one job needs to wait for I/O, the CPU can switch to the other job.

      This is known as multiprogramming or multitasking.

      Additional hardware desirable are

      1 Interrupt-driven I/O and Direct Memory Access (DMA) 2 Memory management: since several jobs must be in main memory at the same time, some form of memory management is required.

      3) Time sharing Systems are multiprogramming systems that can handle multiple interactive users. The OS gives a quantum of CPU time to each user program.

      If 'n' users are present, each user is given 1/n of the CPU time.

      Problems to be handled in time sharing system

      1 Since multiple jobs are in memory, they must be protected from modifying each other's data.

      2 File System must be protected by giving access to only authorized users

      3 Contention for resources must be handled

      Q) Difference between Batch Multi-programming and Time sharing

      Criterion                      Batch multi-programming              Time sharing
      Principal objective            Maximize processor use               Minimize response time
      Source of instructions to OS   JCL instructions provided with job   Commands entered at terminal

      ===================================================================

      Unit-II

      Q) What is a process A process is a program in execution. A process is an active entity and resides in main memory.

      Q) Explain structure of a process in memory A process contains:

      1 Program code, which is sometimes known as the text section. 2 Process stack, which contains temporary data (such as function parameters, return addresses and local variables). 3 Heap, which is the memory allocated dynamically during process run time. 4 Data section, which contains global variables.

      Although two processes may be associated with the same program, they are considered two separate execution sequences: the data, heap and stack sections differ even though the text sections are equivalent.

      Q) Explain process states As a process executes it changes state. Each process can be in one of the following states:

      1 New: the process is being created. 2 Running: instructions are being executed. 3 Waiting: the process is waiting for some event to occur (such as I/O completion). 4 Ready: the process is waiting to be assigned to a processor. 5 Terminated: the process has finished execution.

      Q) What is a PCB Each process is represented in the operating system by a process control block (PCB), or task control block. It contains information about:

      1 Process state: the state may be new, ready, running, waiting or terminated. 2 Program counter: this register stores the address of the next instruction to be executed. 3 CPU registers: accumulators, stack pointers, index registers and general purpose registers. The values of these registers are saved if the process is interrupted. 4 CPU scheduling information: process priority, scheduling parameters, etc. 5 Memory management information: the values of the base and limit registers, segment tables, page tables, etc. 6 Accounting information: amount of CPU time used, time limits, job or process number, etc. 7 I/O status information: list of I/O devices allocated to the process, list of open files, and so on.
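      The fields above can be sketched as a simple structure. This is an illustrative model only; the field names are hypothetical and not taken from any real kernel:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative process control block; field names are hypothetical."""
    pid: int
    state: str = "new"            # new / ready / running / waiting / terminated
    program_counter: int = 0      # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU register values
    priority: int = 0             # CPU-scheduling information
    base: int = 0                 # memory management: base register value
    limit: int = 0                # memory management: limit register value
    cpu_time_used: float = 0.0    # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=7)
pcb.state = "ready"               # the state field changes over the process lifetime
```

      The OS keeps one such block per process; saving and restoring `registers` and `program_counter` is exactly what a context switch does.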

      Q)Operations on Processes

      1 Process Creation

      Operating System is responsible for creation of new process

      Reasons for a new process creation

      1 When a batch job is submitted by a user. 2 In an interactive environment or a time sharing system, a process is created when a user logs on. 3 The operating system creates a process to manage printing, so that the user need not wait till printing completes; here the OS creates a process on behalf of the user. 4 When a process creates another process. The creating process is called the parent process and the new process is called the child process or sub-process.

      A new process can in turn create other processes, forming a tree of processes. The OS identifies a process by a unique process identifier (pid), which is a unique integer. In the Solaris operating system, at the top of the tree is the Sched process with pid = 0. This process can create several child processes; in the figure below it creates three: 1 the Init process, which is the parent process for all user processes, 2 the Pageout process, and 3 the Fsflush process.

      A subprocess may obtain resources in several ways:

      1 Directly from the OS. 2 The parent may share some resources among several of its children. 3 The parent process may partition its resources among its children.

      The parent process may pass initialization data to the child. For example, the name of an image file and the name of the output device may be passed to a display process (child process).

      The parent process may execute:

      1 Concurrently with the child, or 2 wait till some or all of its child processes have terminated.

      The address space of the child process may be:

      1 A duplicate copy of the parent (same program and data), or 2 have a new program loaded into it.
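      On Unix the parent/child relationship above can be demonstrated with fork(). A minimal POSIX-only sketch (the exit status 7 is an arbitrary illustration):

```python
import os

pid = os.fork()                   # create a child: a duplicate of the parent
if pid == 0:
    # Child process: runs with a copy of the parent's address space.
    os._exit(7)                   # terminate with a status the parent can read
else:
    # Parent process: waits until the child terminates.
    _, status = os.waitpid(pid, 0)
    child_exit_code = os.WEXITSTATUS(status)
```

      fork() returns 0 in the child and the child's pid in the parent, which is how the two copies take different branches.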

      2 Process Termination

      A process terminates when it finishes executing its final statement and requests the operating system to delete it (using the exit() system call in Unix and TerminateProcess() in the Win32 API).

      All the resources of the process (open files IO buffers and physical memory) are deallocated by OS

      A parent may terminate a child process for a variety of reasons

      1 The child has exceeded its usage of some of the resources. 2 The task assigned to the child is no longer required. 3 The parent is terminating, and the OS (e.g. VMS) does not allow a child to continue if its parent is terminating.

      Q) What is cascading termination

      The parent is terminating, and the OS (e.g. VMS) does not allow a child to continue if its parent is terminating, so the children are terminated as well. This is called cascading termination, and it is initiated by the OS.

      Q) What happens to child when parent terminates in Unix

      Init process becomes the parent of all its children

      Q) What is a context switch The PCB of the currently executing process is saved, and the operating system loads the PCB of the next process that has to run on the CPU. This switching is pure overhead and takes time.

      Q) Basic Concepts of threads

      A thread consists of a thread ID, a program counter, a stack and a set of registers. Threads are also called lightweight processes. A process with multiple threads makes a great server. Threads share many resources with the other threads belonging to the same process, so a context switch among threads of the same process is cheap: it involves switching only the register set, the program counter and the stack.
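      The sharing described above can be sketched directly: every thread of one process sees the same data. The worker function and list names here are illustrative:

```python
import threading

results = []                      # shared by all threads of this process
lock = threading.Lock()           # protects the shared list

def worker(n):
    with lock:
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Every thread wrote into the same list: no copying between address spaces.
```

      Contrast this with separate processes, where each one would get its own copy of `results` and explicit IPC would be needed.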

      Q) Explain the two modes of CPU execution Protection of memory and I/O can be provided via two modes of CPU execution: user mode and kernel mode.

      In kernel (privileged, supervisor) mode, the OS has access to privileged instructions. Privileged instructions can access I/O devices, control interrupts, and manipulate memory (page table, TLB, etc.).

      Privileged instructions are instruction that can only be executed in kernel mode

      All user-level processes run in user mode. Some critical operations are not allowed to be performed by user processes; they must use system calls instead. When a system call occurs, the OS enters kernel mode and uses privileged instructions to perform the desired service for the user-level process.

      For example, for input or output, a process makes a system call telling the operating system to read or write a particular area, and this request is satisfied by the operating system.

      Q) Explain inter-process communication Ans Cooperating processes require an inter-process communication (IPC) mechanism to exchange data and information. There are two communication models: (a) shared memory and (b) message passing, as shown below.

      (a) Shared-Memory Systems

      1 Communicating processes must establish a region of shared memory. 2 The shared memory region resides in the address space of the creating process. 3 Other processes that wish to communicate using the shared memory segment must attach it to their own address space. 4 Processes can exchange information by reading and writing data in the shared areas. 5 Shared memory systems are convenient for communication. 6 Shared memory systems are faster and provide maximum speed because: i) system calls are required only to establish the shared memory region; ii) once shared memory is established, all accesses are treated as routine memory accesses and no assistance from the kernel is required.
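      A minimal sketch of the idea using Python's multiprocessing, where a shared integer stands in for a shared-memory segment (the fork start method and value 99 are illustrative assumptions, POSIX-only):

```python
import multiprocessing as mp

def producer(shared):
    # Once the region is attached, this write is an ordinary memory access.
    shared.value = 99

ctx = mp.get_context("fork")      # POSIX fork keeps this sketch simple
shared = ctx.Value('i', 0)        # establishing the shared region needs kernel help
p = ctx.Process(target=producer, args=(shared,))
p.start()
p.join()
result = shared.value             # the parent sees the child's write
```

      Note that only the creation of `shared` involves the kernel; the read and write themselves are plain memory operations, which is why shared memory is fast.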

      (b) Message passing systems

      1 Are useful for exchanging smaller amounts of data. 2 Are easy to implement for inter-computer communication. 3 Are more time consuming than shared memory, as they are implemented using system calls and need kernel intervention.

      To send messages, a communication link must exist between the processes. The communication link can be implemented physically or logically.

      Different methods of logically implementing a link are:

      1 Direct or indirect communication 2 Synchronous or asynchronous communication 3 Automatic or explicit buffering

      1a Direct communication

      A link is established automatically between every pair of processes that want to communicate

      A link is associated with exactly two processes

      Addressing

      i) Symmetry in addressing: the sender process and receiver process must name each other to communicate. The send() and receive() primitives are: send(P, message) - sends a message to process P; receive(Q, message) - receives a message from process Q. ii) Asymmetry in addressing: only the sender names the receiver process. The primitives are: send(P, message) - sends a message to process P; receive(id, message) - receives a message from any process.

      A disadvantage of both types of addressing is limited modularity: changing the identity of a process requires finding all references to the old identity and modifying them.

      1b Indirect communication

      1 Messages are sent to and received from mailboxes or ports. 2 Each mailbox has a unique id (integer value). 3 Two processes can communicate only if they have a shared mailbox. 4 A link is established between two processes if they have a shared mailbox.

      (Diagram: processes P1 and P2 communicating through a shared mailbox R1.)

      5 A link may be associated with more than two processes. 6 A mailbox may be owned by a process or by the OS.

      a If the mailbox is owned by a process, we can distinguish between the owner (which can only receive messages) and the user (which can only send messages). When the process that owns a mailbox terminates, the mailbox disappears.

      b If the mailbox is owned by the OS, the OS must provide mechanisms to: i create a new mailbox; ii send and receive messages through the mailbox; iii delete the mailbox; iv pass ownership to other processes.

      2 Synchronous or asynchronous communication Message passing may be either blocking (synchronous) or non-blocking (asynchronous):

      1 Blocking send: the sending process is blocked until the message is received by the receiving process or mailbox. 2 Non-blocking send: the sending process sends the message and resumes operation. 3 Blocking receive: the receiver blocks until a message is available. 4 Non-blocking receive: the receiver retrieves either a valid message or a null.
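      Blocking send and receive can be sketched with a bounded queue standing in for a mailbox; the names and message contents are illustrative:

```python
import queue
import threading

mailbox = queue.Queue(maxsize=1)  # bounded-capacity mailbox
replies = queue.Queue()

def receiver():
    msg = mailbox.get()           # blocking receive: waits until a message arrives
    replies.put("got " + msg)

t = threading.Thread(target=receiver)
t.start()
mailbox.put("hello")              # send: blocks only while the mailbox is full
t.join()
reply = replies.get()
```

      A non-blocking variant would use `mailbox.put_nowait(...)` and `mailbox.get_nowait()`, which raise instead of waiting.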

      3 Automatic or explicit buffering Messages exchanged by communicating processes reside in a temporary queue. Such queues can be implemented in three ways:

      i Zero capacity: queue length = 0; the link cannot have messages waiting in it, so the sender must block until the receiver receives the message. ii Bounded capacity: queue length is finite (say n); when the queue is full, the sender must block until space is available in the queue. iii Unbounded capacity: queue length is infinite; the sender never blocks.

      ===============================================================================

      Q) What is deadlock

      A set of processes is deadlocked when every process in the set is waiting for a resource that is currently allocated to another process in the set. Here process P1 is allocated resource R2 and P2 is allocated R1.

      P1 requires R1 and P2 requires R2.

      Processes P1 and P2 will wait forever. This situation is called a deadlock.
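      The P1/P2 situation can be modeled as a directed graph (request edges process -> resource, assignment edges resource -> process) and checked for a cycle. A sketch; the adjacency-list encoding is an illustrative assumption:

```python
def has_cycle(graph):
    """DFS cycle check on a directed graph given as adjacency lists."""
    visited, in_stack = set(), set()

    def dfs(node):
        visited.add(node)
        in_stack.add(node)
        for nxt in graph.get(node, []):
            if nxt in in_stack or (nxt not in visited and dfs(nxt)):
                return True
        in_stack.discard(node)
        return False

    return any(dfs(n) for n in graph if n not in visited)

# P1 requests R1; R1 is assigned to P2; P2 requests R2; R2 is assigned to P1.
g = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
deadlocked = has_cycle(g)         # the cycle means both processes wait forever
```

      With single-instance resources this cycle check is exactly the deadlock test discussed under resource-allocation graphs later in these notes.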

      Q) What are the four conditions that are necessary for deadlock to occur

      1 Mutual Exclusion - At least one resource must be held in a non-sharable mode If any other process requests this resource then that process must wait for the resource to be released

      2 Hold and Wait - A process must be simultaneously holding at least one resource and waiting for at least one resource that is currently being held by some other process

      3 No preemption - Once a process is holding a resource then that resource cannot be taken away from that process until the process releases it

      4 Circular Wait - A set of processes P0, P1, P2, ..., PN must exist such that every P[i] is waiting for a resource held by P[(i + 1) mod (N + 1)]

      Q)Methods for handling deadlocks

      1 By using deadlock prevention and avoidance protocols, ensure the system will never enter a deadlocked state. 2 Allow the system to enter a deadlocked state, detect it, and recover from it. 3 Ignore the problem and pretend that deadlock never occurs.

      To make sure that the system must not enter a deadlocked state the system can use

      1 Deadlock prevention 2 Deadlock avoidance

      ==============================================================================

      Deadlock Prevention

      1 Mutual Exclusion We cannot prevent deadlocks by denying the mutual exclusion condition, because some resources are inherently non-sharable (e.g. a printer).

      2 Hold and Wait

      To make sure that the hold-and-wait condition never occurs in the system two protocols that can be used are

      Protocol 1 All the resources requested must be allocated before process begins execution

      Protocol 2: A process can request resources only when it has none. If a process requires additional resources, it must first release all the resources that are currently allocated to it.

      Example Consider a process that copies data from DVD drive to a file on disk sorts the file and then prints the results to a printer

      If Protocol1 is used it must request the DVD drive disk file and printer at the beginning and must hold them till the end

      Disadvantages

      1 Starvation: a process may wait forever because at least one resource that it needs is always allocated to some other process. Hence starvation is possible.

      2 Resource utilization is low: the process holds the printer from beginning to end even though it is used only at the end.

      If Protocol 2 is used the process will initially request DVD drive and disk file It copies from the DVD drive to disk and then releases both the DVD drive and disk file It then requests the disk file and printer

      Disadvantage: there is a chance that the data may not remain on the disk file during the interval when the process holds no resources.
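      Protocol 1 (acquire everything up front) can be sketched with locks standing in for the three devices in the example; the names and the returned string are illustrative:

```python
import threading

dvd = threading.Lock()
disk = threading.Lock()
printer = threading.Lock()

def copy_sort_print():
    # Protocol 1: request all resources before execution begins, so the
    # process never holds one resource while waiting for another.
    with dvd, disk, printer:
        return "done"             # all three are held from start to finish

outcome = copy_sort_print()
```

      The resource-utilization cost is visible here: the printer lock is held for the entire task even though printing happens only at the end.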

      3 No pre-emption

      To make sure that this condition does not hold the following protocol is used

      Protocol If a process (say A) requests some resources

      Case 1 If resources are available then Allocate them

      Case 2 if resources are allocated to some other process(say B) that is waiting for additional resources

      then Preempt the desired resources from the waiting process (B) and allocate them to requesting process(A)

      Process B can be restarted only when it is reallocated the resources that were preempted from it (and given to process A) together with the additional resources it was requesting.

      Case 3 if resources are neither available nor held by a waiting process then Process A waits

      This protocol is applied to resources like CPU register and memory space as the state of the resources can be saved

      4 Circular Wait

      To make sure Circular Wait condition never occurs

      1 Each Resource is assigned a unique integer number

      2 Each Process must request resources in an increasing order of enumeration

      We define a one-to-one function F: R -> N, where R is the set of resource types and N is the set of natural numbers.

      Suppose a process has requested a resource of type Ri at the beginning.

      Protocol 1: After that, the process can request a resource of type Rj if and only if F(Rj) > F(Ri).

      Protocol 2: If a process requests a resource of type Rj, it must first release all resources Ri for which F(Ri) >= F(Rj).

      Example: Let F(tape drive) = 1, F(disk drive) = 5 and F(printer) = 12.

      A process can request any number of tape drives, disk drives and printers.

      Protocol 1: If a process A has already requested the disk drive, A can now request only the printer; it cannot request the tape drive.

      Protocol 2: In order to request the tape drive, process A must first release the disk drive, and only then request the tape drive.

      If the above two protocols are used then the circular wait condition never occurs We can prove this by contradiction

      Proof: Assume a circular wait exists. Let the set of processes involved in the circular wait be P0, P1, ..., Pn, where P0 is waiting for resource R0, which is held by P1.

      P1 is allocated R0 and is waiting for resource R1, which is held by P2; by the protocol, F(R0) < F(R1).

      P2 is allocated R1 and is waiting for resource R2, which is held by P3; so F(R1) < F(R2).

      ...

      Pn is allocated Rn-1 and is waiting for resource Rn, which is held by P0; so F(Rn-1) < F(Rn). Finally, P0 holds Rn while waiting for R0, so F(Rn) < F(R0).

      Hence by transitivity F(R0) < F(R0), which is impossible. Hence our assumption that a circular wait exists is FALSE.
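      The ordering discipline can be sketched in code, using the F values from the example above (tape = 1, disk = 5, printer = 12); the function and dictionary structure are illustrative:

```python
import threading

F = {"tape": 1, "disk": 5, "printer": 12}        # resource numbering
locks = {name: threading.Lock() for name in F}

def acquire_in_order(*names):
    """Always take locks in increasing order of F, which breaks circular wait."""
    ordered = sorted(names, key=lambda n: F[n])
    for n in ordered:
        locks[n].acquire()
    return ordered

order = acquire_in_order("printer", "tape")      # requested out of order...
for n in order:
    locks[n].release()
# ...but acquired tape (F=1) before printer (F=12), as the protocol requires.
```

      Because every process acquires in the same global order, no two processes can each hold a resource the other wants.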

      ===============================================================================Q) Resource Allocation graphDeadlocks can be understood more clearly through the use of Resource-Allocation Graphs having the following properties

      1 Resource types are represented as square nodes on the graph. Dots inside the square nodes indicate the number of instances of that resource (e.g. two dots might represent two laser printers). 2 Processes are represented as circles. 3 Request edges: if P1 has requested R1, a directed edge from P1 to R1 is a request edge. 4 Assignment edges: a directed edge from R2 to P1 indicates that resource R2 has been allocated to process P1 and that P1 is currently holding R2. Note that a request edge is converted into an assignment edge when the request is granted.

      If a resource-allocation graph contains a cycle AND each resource has only a single instance, then a deadlock exists. If a resource category contains more than one instance, then a cycle in the resource-allocation graph indicates the possibility of a deadlock, but does not guarantee one.

      ==================================================================

      Q) Deadlock Avoidance For each resource request, the system can decide whether the request should be granted or not. To make this decision, the system must have information such as:

      1 the resources currently available; 2 the resources currently allocated to each process; 3 the future requests and releases of each process; 4 the maximum number of resources each process may need.

      Given this information, it is possible to construct an algorithm that ensures the system will never enter a deadlocked state. There are two deadlock-avoidance algorithms:

      1 Resource-Allocation Graph Algorithm 2 Banker's Algorithm

      Safe State A system is in a safe state if there exists a safe sequence of processes P0, P1, P2, ..., PN such that the resources each Pi may still request can be satisfied by the currently available resources plus the resources held by all processes Pj with j < i. All safe states are deadlock free.

      Unsafe state If a safe sequence does not exist then the system is in an unsafe state which MAY lead to deadlock

      1 Resource-Allocation Graph Algorithm The resource-allocation graph algorithm can be used only if the number of instances of each resource type is one. Unsafe states are recognized and avoided by adding claim edges, denoted by dashed lines, which point from a process to a resource that it may request in the future. All claim edges are added at the beginning of the process. When a process makes a request, the claim edge Pi -> Rj is converted to a request edge. When a resource is released, the assignment edge changes back to a claim edge. This approach works by denying requests that would produce cycles in the resource-allocation graph, taking claim edges into account. Consider for example the resource-allocation graph as shown:

      taking claim edges into effectConsider for example the resource allocation graph as shown

      If P2 requests resource R2, then the claim edge P2 -> R2 is converted into a request edge as follows:

      The resulting resource-allocation graph would then have a cycle in it, so the request cannot be granted.

      Q) Banker's Algorithm (deadlock avoidance algorithm) with example There are 12 tape drives. Let the current state of the system be as shown below (Need = Max Need - Allocated):

      Process   Allocated   Max Need   Need
      P0        5           10         5
      P1        2           4          2
      P2        2           9          7

      Available = 12 - (5 + 2 + 2) = 3

      Resource-Request Algorithm: Now, when process P2 requests 1 tape drive, we run the resource-request algorithm to check whether the request should be granted. The request is granted only if, after granting it, all processes in the system can still complete. For that we check:

      1. Is the request of P2 <= need of P2? 1 <= 7, therefore TRUE.
      2. Is the request of P2 <= Available? 1 <= 3, therefore TRUE.
      3. Pretend the request is granted for P2.

      Now the current state is as shown below:

      Process   Allocated   Max Need   Need = Max Need - Allocated
      P0        5           10         5
      P1        2           4          2
      P2        2+1=3       9          7-1=6

      Available = 3 - 1 = 2. Now run the safety algorithm to check whether the system is in a safe state.

      Safety Algorithm:
      1. Let WORK = Available = 2.
      2. Find an unfinished process whose Need <= WORK.

      Check P0: Need of P0 = 5, Work = 2. Is 5 <= 2? FALSE.
      Check P1: Need of P1 = 2, Work = 2. Is 2 <= 2? TRUE, therefore P1 can finish.
      If P1 finishes, Work = Work + Allocated to P1 = 2 + 2 = 4.
      Now again check if P0 can complete: Need of P0 = 5, Work = 4. Is 5 <= 4? FALSE.
      Check if P2 completes: Need of P2 = 6, Work = 4. Is 6 <= 4? FALSE.
      So neither P0 nor P2 can complete, and the system would be in an unsafe state. The request for 1 tape drive by P2 is therefore not granted.

      2. Banker's Algorithm: For resource types that have more than one instance, the resource-allocation graph method does not work, so we use the Banker's algorithm. When a process starts up, it must state in advance the maximum allocation of resources it may request. When a request is made, the scheduler determines whether granting the request would leave the system in a safe state. If not, the process must wait until the request can be granted safely.

      The Banker's algorithm relies on several key data structures (where n is the number of processes and m is the number of resource categories):

      o Available[m] indicates how many resources of each type are currently available.
      o Max[n][m] indicates the maximum demand of each process for each resource.
      o Allocation[n][m] indicates the number of each resource allocated to each process.
      o Need[n][m] indicates the remaining resources needed of each type for each process. (Note that Need[i][j] = Max[i][j] - Allocation[i][j] for all i, j.)

      For simplicity of discussion, note that one row of the Need matrix, Need[i], can be treated as a vector corresponding to the needs of process i, and similarly for Allocation and Max.

      Safety Algorithm: In order to apply the Banker's algorithm, we first need an algorithm for determining whether or not a particular state is safe. It works as follows:

      1. Let Work and Finish be vectors of length m and n respectively. Work is a working copy of the available resources; Finish is a vector of booleans indicating whether a particular process can finish. Initialize Work = Available and Finish to false for all elements.
      2. Find an i such that both (A) Finish[i] == false and (B) Need[i] <= Work. This process has not finished, but could with the given available working set. If no such i exists, go to step 4.
      3. Set Work = Work + Allocation[i] and set Finish[i] to true. This corresponds to process i finishing up and releasing its resources back into the work pool. Then loop back to step 2.
      4. If Finish[i] == true for all i, then the state is a safe state, because a safe sequence has been found.
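      The safety algorithm above can be sketched in Python (a minimal illustration; the function name is our own, and the matrices are lists of per-resource-type vectors):

```python
def is_safe(available, allocation, need):
    """Safety algorithm: return (safe?, safe sequence) for the given state.
    available: vector of length m; allocation, need: n x m matrices."""
    work = list(available)              # step 1: Work = Available
    n = len(allocation)
    finish = [False] * n
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            # Step 2: find an unfinished process whose need fits in work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Step 3: pretend process i finishes, releasing its resources
                for j in range(len(work)):
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progress = True
    # Step 4: safe iff every process could finish
    return all(finish), sequence
```

      On the 12-tape-drive example above, the original state (Available = 3) is safe with sequence <P1, P0, P2>, while the pretended state after granting P2's request (Available = 2) is unsafe.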

      Resource-Request Algorithm (The Banker's Algorithm): Now we have a tool for determining if a particular state is safe or not. This algorithm determines if a new request is safe, and grants it only if it is safe to do so. When a request is made (that does not exceed currently available resources), pretend it has been granted, and then see if the resulting state is a safe one. If so, grant the request; if not, deny it. The steps are:

      1. Let Request[n][m] indicate the number of resources of each type currently requested by processes. If Request[i] > Need[i] for any process i, raise an error condition.
      2. If Request[i] > Available, then process i must wait for resources to become available; else the process can continue to step 3.
      3. Check whether the request can be granted safely, by pretending it has been granted and then seeing if the resulting state is safe. If the resulting state is safe, grant the request; else the process must wait until its request can be granted safely.

      The procedure for granting a request (or pretending to, for testing purposes) is:

      Available = Available - Request[i]
      Allocation[i] = Allocation[i] + Request[i]
      Need[i] = Need[i] - Request[i]
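      The three steps can be sketched for the single-resource-type case (as in the tape-drive example); the function name is hypothetical and the safety check is inlined:

```python
def request_resources(pid, request, available, allocation, need):
    """Banker's resource-request check for one resource type (scalars).
    Returns True only if the pretended grant leaves the system safe."""
    if request > need[pid]:
        raise ValueError("process exceeded its maximum claim")  # step 1
    if request > available:
        return False                    # step 2: must wait
    # Step 3: pretend the grant
    available -= request
    allocation[pid] += request
    need[pid] -= request
    # Safety check on the pretended state
    work = available
    finish = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finish):
            if not done and need[i] <= work:
                work += allocation[i]   # process i finishes, releases all
                finish[i] = True
                progress = True
    return all(finish)
```

      With the example state (Allocated = [5, 2, 2], Need = [5, 2, 7], Available = 3), a request of 1 drive by P2 is denied, while the same request by P1 would be granted.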

      Unit III Memory management

      Just as processes share the CPU, they also share physical memory. The memory management unit of the OS takes care of memory allocation, deallocation, and related issues. A program must be brought into memory for it to run. Addresses are of two types:

      i) Relocatable (or relative) addresses, defined with respect to the beginning of the program
      ii) Absolute addresses

      Q) Address Binding
      Ans: Binding means mapping the logical address space to the physical address space. Address binding can happen at three different stages:

      Compile time: If it is known at compile time where in memory the program will be placed, the compiler generates absolute addresses. Otherwise, the compiler generates relocatable addresses.

      Load time: The loader binds the relocatable addresses generated by the compiler to absolute addresses; hence binding is done at load time. If binding is done at compile or load time, physical and logical addresses are the same.

      Execution time: If address binding is done at run time, the process can be moved during its execution from one memory segment to another. Here we call logical addresses virtual addresses. Run-time mapping of virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU).

      Q) Logical vs Physical Address Space
      Ans: Logical address - generated by the CPU; also referred to as virtual address. Physical address - the address seen by the memory unit. The set of all logical addresses is called the logical address space; the set of all physical addresses is called the physical address space.

      Q) Memory-Management Unit (MMU)
      Ans: The MMU is a hardware device that maps virtual to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by the CPU before it is sent to memory. The user program deals only with logical addresses; it never sees the real physical addresses.
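      The relocation-register scheme can be sketched as follows (a minimal model; the base and limit values are illustrative, not from these notes):

```python
RELOCATION = 14000   # relocation register: where the process is loaded
LIMIT = 3000         # limit register: size of the process's address space

def translate(logical):
    """MMU relocation scheme: every CPU-generated address is checked
    against the limit, then the relocation register is added."""
    if logical >= LIMIT:
        raise MemoryError("trap to OS: address outside process space")
    return logical + RELOCATION
```

      For example, logical address 346 maps to physical address 14346, while an out-of-range address traps to the OS.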

      Q) Dynamic Loading
      Ans: Since physical memory is small, it may not be possible for the entire program to be in main memory, so we can use dynamic loading. It is the responsibility of users to design their programs to take advantage of dynamic loading. Only the main function is loaded into main memory; when main() calls another function, it first checks whether that function is already in main memory. If not, the loader loads the desired function into main memory and updates the program's address table.

      A routine is not loaded until it is called. We achieve better memory-space utilization, as an unused routine is never loaded.

      Q) Dynamic Linking
      + Linking is postponed until execution time.
      + A small piece of code, called a stub, is used to locate the appropriate memory-resident library routine.
      + The stub replaces itself with the address of the routine and executes the routine.
      + The operating system is needed to check whether the routine is in the process's memory address space.
      + Dynamic linking is particularly useful for libraries.

      Q) Overlays
      Ans: Overlays are needed when a process is larger than the amount of main memory allocated to it. Only the instructions and data that are needed at any given time are kept in main memory. Overlays can be implemented by the user; programming the design of an overlay structure is complex. Overlays for a two-pass assembler are as shown in the figure.

      Q) Swapping
      Ans: A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. Backing store - a fast disk large enough to store copies of all memory images of all users. Priority-based scheduling uses a variant of swapping called roll out, roll in: if a higher-priority process arrives, the memory manager swaps out a lower-priority process and swaps in the higher-priority one; when the higher-priority process finishes, the lower-priority process can be swapped back into main memory. A process that is swapped out will be swapped back into the same memory space it occupied previously. The major part of swap time is transfer time. We can swap only idle processes; we cannot swap a process that is waiting for I/O.

      Q) Contiguous memory allocation
      Ans: Each process is contained in a single contiguous section of memory.

      1. Fixed Size Partition (or) Single-partition allocation
      Divide the main memory into fixed-sized partitions; each partition may contain exactly one process. The relocation register contains the value of the starting physical address; the limit register contains the range of logical addresses. Every address generated by the CPU is checked as follows:

      If logical address < limit register, then the logical address is added to the relocation register to get the corresponding memory address; else a trap to the OS is generated.

      Since every address is checked, we can protect the OS and other user programs from being modified by the running process.

      2. Multiple-partition allocation

      1) Fixed Size Partitions: Divide the main memory into fixed-sized blocks. Here memory allocated to a process may be larger than required. The difference between allocated memory and requested memory is called internal fragmentation - unused memory inside the partition.

      0    P1   Block 0 (addresses 0 to 3)
      4    P1   Block 1 (addresses 4 to 7)
      8         Block 2 (addresses 8 to 11)
      12        Block 3 (addresses 12 to 15)

      If P1 requires just 5 addresses, it is still allocated 2 blocks, i.e. 8 addresses, so the remaining 3 addresses are left unused.

      2) Variable Size Partitions: Here main memory is divided into partitions of variable sizes. The operating system maintains information about:

      a) allocated partitions
      b) free partitions (holes)

      In the beginning all main memory is empty and is considered one large block of available memory, a hole. Exactly the memory required by a process is given to it. When a process terminates, it releases its memory, which can then be allocated to another process. Memory is allocated to processes until finally no available block of memory (hole) is large enough to hold the next process.

      External fragmentation exists when there is enough total memory to satisfy a request, but the available memory is not contiguous - wastage of memory outside the partitions. One solution to the problem of external fragmentation is:

      a. Compaction: shuffle the memory contents so as to place all free memory together in one large block.

      ===============================================================================
      Q) Dynamic Storage-Allocation Problem (or) Most commonly used strategies to select a free hole from the set of available holes
      Ans: To satisfy a request of size n from a list of free holes, the three policies below can be used:

      1. First-fit: Allocate the first hole that is big enough.
      2. Best-fit: Allocate the smallest hole that is big enough; must search the entire list if it is not ordered by size. Produces the smallest leftover hole.
      3. Worst-fit: Allocate the largest hole; must also search the entire list. Produces the largest leftover hole.

      First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
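      The three policies can be sketched together (a simplified model that only picks a hole by size; real allocators also track addresses and split the chosen hole):

```python
def find_hole(holes, n, strategy):
    """Pick a hole index for a request of size n from a list of hole sizes.
    strategy is 'first', 'best', or 'worst'. Returns None if no hole fits."""
    candidates = [i for i, size in enumerate(holes) if size >= n]
    if not candidates:
        return None
    if strategy == 'first':
        return candidates[0]                              # first big-enough hole
    if strategy == 'best':
        return min(candidates, key=lambda i: holes[i])    # smallest leftover
    if strategy == 'worst':
        return max(candidates, key=lambda i: holes[i])    # largest leftover
    raise ValueError("unknown strategy")
```

      For holes of sizes [100, 500, 200, 300, 600] and a request of 212, first-fit picks the 500 hole, best-fit the 300 hole, and worst-fit the 600 hole.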

      ===============================================================================
      Q) Non-Contiguous Memory allocation

      1. Paging: Paging is a memory management scheme that provides non-contiguous memory allocation.

      0    P1   Block 0 (addresses 0 to 3)
      4    P2   Block 1 (addresses 4 to 7)
      8         Block 2 (addresses 8 to 11)
      12   P1   Block 3 (addresses 12 to 15)

      The logical address space of a process can be noncontiguous. Ex: P1 is allocated Block 0 and Block 3.

      1. Divide physical memory into fixed-sized blocks called frames (size is a power of 2).
      2. Divide logical memory into blocks of the same size called pages.
      3. A logical address is divided into 2 parts:
         Page number (p) - the page table is indexed by page number.
         Page offset (d) - displacement within the page.
      4. A page table is allocated to each process (a pointer to the page table is stored in the PCB of the process). The page table translates logical to physical addresses, e.g. page 0 is in frame 1, page 1 is in frame 4, etc.
      5. Internal fragmentation may occur due to paging.
      6. If the size of the logical address space is 2^m and the page size is 2^n, then the higher-order m-n bits of the logical address give the page number and the n lower-order bits give the displacement within the page.

      Ex: size of logical address space = 8 = 2^3, so m = 3; page size = 4 = 2^2, so n = 2. So m-n = 1 bit for the page number (to represent pages 0 and 1), and the remaining n = 2 bits for the displacement within the page.
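      The bit-splitting and page-table lookup described above can be sketched as follows (the function names and page-table contents are illustrative):

```python
M, N = 3, 2   # logical address space = 2^M = 8, page size = 2^N = 4

def split(logical):
    """Split a logical address into (page number, offset):
    the high M-N bits and the low N bits."""
    page = logical >> N                   # high-order M-N bits
    offset = logical & ((1 << N) - 1)     # low-order N bits
    return page, offset

def translate_paged(logical, page_table):
    """Translate via a page table mapping page number -> frame number."""
    page, offset = split(logical)
    return page_table[page] * (1 << N) + offset
```

      E.g. logical address 6 (binary 110) splits into page 1, offset 2; with page 1 stored in frame 4, it maps to physical address 4*4 + 2 = 18.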

      Q) Implementation of Page Table

      The page table can be kept as:

      1) A set of dedicated registers - efficient if the page table is small.
      2) In memory - the page-table base register (PTBR) points to the page table, and the page-table length register (PTLR) indicates the size of the page table. Changing page tables requires changing only the values in these registers.

      Advantage of keeping the page table in memory:
      a. Less context-switch time.
      Disadvantage:
      b. Two memory accesses are required: one to read the page-table entry and another to access the required memory address.

      This problem can be solved by using a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB).

      i. Each entry in the TLB consists of 2 parts:

      I) page number field
      II) value field (frame number field)

      ii. When a logical address is generated by the CPU, its page number is presented to the TLB. The page number is compared with all the entries of the TLB simultaneously.

      If the page number is found in the TLB, its frame number is immediately available -> TLB hit. If the page number is not found, the page table is checked -> TLB miss.

      With a TLB the search is fast, but TLB hardware is expensive, so the TLB size is kept small. Hence the TLB contains only a few frequently used page-table entries.
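      The hit/miss behaviour can be modelled with a small bounded cache (real TLBs compare all entries in parallel in hardware; the FIFO eviction used here is an illustrative choice, not mandated by the notes):

```python
class TLB:
    """Tiny TLB model: a bounded page -> frame cache with FIFO eviction."""
    def __init__(self, capacity, page_table):
        self.capacity = capacity
        self.page_table = page_table   # backing page table in "memory"
        self.entries = {}              # insertion-ordered in Python 3.7+
        self.hits = self.misses = 0

    def frame_for(self, page):
        if page in self.entries:       # TLB hit: frame number immediate
            self.hits += 1
            return self.entries[page]
        self.misses += 1               # TLB miss: walk the page table
        frame = self.page_table[page]
        if len(self.entries) >= self.capacity:
            oldest = next(iter(self.entries))
            del self.entries[oldest]   # evict the oldest entry
        self.entries[page] = frame
        return frame
```

      A second reference to the same page is served from the TLB without touching the page table, which is the whole point of the cache.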

      Q) Protection in Paging
      A valid/invalid bit is attached to each entry in the page table. The OS sets this bit for each page to allow or disallow access to the page. When the bit is set to invalid, the page is not in the process's logical address space; hence a reference to it generates a trap to the OS.

      Q) Shared pages in Paging

      Reentrant code is code that never changes during execution, so it can be shared.

      1. One copy of read-only (reentrant) code is shared among processes (e.g. text editors, compilers, window systems). Ex: ed1, ed2, ed3 is shared between processes P1 and P2.
      2. Shared code must appear in the same location in the logical address space of all processes.

      Each process keeps a separate copy of the private code and data, e.g. data1 and data2. The pages for the private code and data can appear anywhere in the logical address space.

      ==============================================================================
      Q) Segmentation

      Segmentation is a memory management scheme that supports the user's view of memory. When the user program is compiled, the compiler generates segments like:

      1) The code segment
      2) Global variables segment
      3) Heap memory segment
      4) Stack segment, etc.

      Each entry in the segment table has a segment base (starting physical address of the segment) and a limit (the length of the segment). A logical address is divided into 2 parts: segment number and offset. The segment table is indexed by segment number. The segment-table base register (STBR) stores the location of the segment table in main memory, and the segment-table length register (STLR) stores the number of segments used by the program.

      The segment number 's' is used to find the entry in the segment table; the required entry is 's' locations from the beginning of the segment table. Once the required entry is found, the offset 'd' is compared with the limit. If offset < limit, then the offset is added to the base entry to generate the physical address; otherwise a trap to the OS is generated.
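      The translation step can be sketched as follows (the segment table contents are illustrative values, not from these notes):

```python
# Hypothetical segment table: segment number -> (base, limit)
SEGMENTS = {0: (1400, 1000), 2: (4300, 400), 3: (3200, 1100)}

def seg_translate(s, d):
    """Segmentation address translation: index the segment table by s,
    check the offset d against the limit, then add the base."""
    base, limit = SEGMENTS[s]
    if d >= limit:
        raise MemoryError("trap to OS: offset beyond segment limit")
    return base + d
```

      For example, byte 53 of segment 2 maps to 4300 + 53 = 4353, while an offset beyond a segment's limit traps to the OS.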

      Q) Shared Segments
      Code sharing occurs at the segment level. Shared segments must have the same segment number.

      Allocation: segments give rise to the dynamic storage-allocation problem - use best fit or first fit; may cause external fragmentation.

      Protection: protection bits are associated with segments - read/write/execute privileges. Placing an array in a separate segment lets the hardware check for illegal array indexes.

      ===============================================================================
      Q) Virtual memory
      Ans: In paging and segmentation, a program will execute only if the entire process is in main memory. With virtual memory, only part of the program needs to be in memory for execution.

      1. A program larger than the available main memory can still run, since only part of the program is loaded into main memory. The logical address space can therefore be much larger than the physical address space.
      2. It allows address spaces to be shared by several processes.

      Virtual memory can be implemented via demand paging or demand segmentation.

      The large blank space between the heap and the stack is part of the virtual address space. Virtual memory allows files and memory to be shared by two or more processes through page sharing, as shown in figure (b); figure (a) depicts a virtual memory that is larger than physical memory.

      ==============================================================================

      Q) Demand paging
      Demand paging is a technique of loading pages from disk to main memory only when a page is needed. Hence, using a smaller amount of physical memory, we get a faster response.

      Demand paging is similar to PAGING + SWAPPING. Instead of swapping in a whole process, the pager brings into memory only those pages that are needed.

      To distinguish between pages that are in memory and pages that are on disk, a valid/invalid bit is used. It is attached to each entry in the page table: when this bit is set to valid, the page is legal and in memory; when it is set to invalid, either the page is illegal or the page is on disk.

      When the CPU generates a logical address whose page-table entry is set to invalid, a page fault occurs; hence the required page is on disk. A section of the hard disk called swap space is used to hold pages that are not present in memory.

      Q) Procedure for Handling a Page Fault

      1) The CPU generates a logical address; if the valid/invalid bit for the page containing this address is set to invalid, a page fault occurs.
      2) The page fault causes a trap to the operating system.
      3) Check whether the logical address is within the logical address space of the process (the PTBR and PTLR are checked). If the required logical address is not within the logical address space of the process, terminate the process; else the page is legal but not in memory - it is on disk.
      4) To bring the required page into memory: find a free main-memory frame, map the address to a disk block, fetch the disk block, and load it into the free frame.
      5) When the required page is brought into memory, update the page table to indicate that the page is in memory.
      6) Restart the instruction interrupted by the illegal-address trap. The process continues as if the page had always been in memory.

      Q) What is pure demand paging?
      Ans: If no page belonging to the executing process is in main memory, the process will fault for every page it needs; page faults occur until every page that is needed is in memory. This scheme is known as pure demand paging: never bring a page into memory until it is required.

      Q) Page Replacement
      1. Find the location of the desired page on the disk.
      2. Find a free frame:
         i) If there is a free frame, use it.
         ii) If there is no free frame, use a page-replacement algorithm to select a victim page in memory that is not really in use.
         iii) If the modify bit associated with the victim frame is set to 1, the page is written to disk; otherwise there is no need to write it to disk. Change the valid/invalid bit of the victim page's page-table entry to invalid, to indicate that the page is no longer in memory.
      3. Read the desired page into the newly freed frame, and update the frame and page tables.
      4. Restart the user process.

      =================================================================================================
      Q) Page Replacement Algorithms
      Goal: Produce a low page-fault rate. The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

      1. First In First Out (FIFO) Page Replacement Algorithm
      Uses the time when a page was brought into memory: the page that was brought in first is replaced, i.e. the oldest page is chosen for replacement. We create a FIFO queue to hold all pages in memory. When a page is brought into memory, we insert it at the tail of the queue, and we replace the page at the head of the queue.

      FIFO: 15 page faults.

      A problem with FIFO is Belady's anomaly. For the reference string 1 2 3 4 1 2 5 1 2 3 4 5, under FIFO replacement:
      - with 1 frame allocated to the process, number of page faults = 12;
      - with 2 frames, number of page faults = 12;
      - with 3 frames, number of page faults = 9;
      - with 4 frames, number of page faults = 10.
      As the number of frames allocated to a process increases, page faults should decrease; but with FIFO replacement, page faults can increase.

      2. Optimal Page Replacement Algorithm
      Replace the page that will not be used for the longest period of time. It is used for measuring how well other algorithms perform. It is not practical to implement the OPT algorithm, as it is not possible to know what the future references will be. Ex:

      Optimal page replacement 9 page faults
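      The FIFO fault counts quoted above, including Belady's anomaly, can be checked with a short sketch (the function name is our own):

```python
from collections import deque

def fifo_faults(reference_string, n_frames):
    """Count page faults under FIFO replacement: on a fault, evict the
    page at the head of the queue (the one resident longest)."""
    frames = deque()
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()       # replace the oldest page
            frames.append(page)        # newest page goes to the tail
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
```

      Running it confirms 9 faults with 3 frames but 10 faults with 4 frames: more frames, more faults.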

      3. Least Recently Used (LRU) Page Replacement Algorithm
      Replace the page that has not been used for the longest period of time. LRU makes use of the time of last use of each page. Ex:

      Number of page faults = 12.

      LRU requires hardware assistance to determine the time of last use of a page. We can use either 1) a stack or 2) a counter to implement LRU page replacement.

      1) Stack implementation - keep a stack of page numbers in doubly linked form. When a page is referenced, it is moved to the top of the stack; this move requires 6 pointers to be changed.

      2) Counter implementation - a time-of-use field is added to each page-table entry. The CPU maintains a counter that is incremented on every memory reference; on each page reference, the counter value is copied to the time-of-use field of the referenced page.
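      The stack implementation can be sketched with an ordered dictionary standing in for the doubly linked stack (a referenced page moves to the most-recent end; the victim comes from the least-recent end; the function name is our own):

```python
from collections import OrderedDict

def lru_faults(reference_string, n_frames):
    """Count page faults under LRU replacement using an ordered dict
    as the 'stack' of page numbers."""
    frames = OrderedDict()
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # referenced: move to top of stack
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)  # evict least recently used page
            frames[page] = True
    return faults
```

      On the reference string 1 2 3 4 1 2 5 1 2 3 4 5, LRU gives 10 faults with 3 frames and 8 with 4 frames; unlike FIFO, adding frames never increases LRU's fault count.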

      Disadvantages of the LRU page replacement algorithm: updating the counter or stack must be done for every memory reference, HENCE LRU IS SLOW, and the implementation requires hardware assistance.
      ==============================================================================
      Q) LRU approximation page replacement
      1) Using a reference bit
      2) Additional-reference-bits algorithm
      3) Second-chance algorithm
      4) Enhanced second-chance algorithm
      5) Counting-based page replacement - has two schemes:
         1) Least Frequently Used (LFU)
         2) Most Frequently Used (MFU)

      6) Page-buffering algorithms

      Q) What is Thrashing?
      Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur when either global or local page replacement is used, as described below. The solution to thrashing is the working-set strategy.

      If we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know how many frames a process needs, we use the locality model of process execution. Here a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes locality.

      The working-set model uses a working set and a parameter Δ, the working-set window. The set of pages in the most recent Δ page references is the working set. Let WSSi be the working-set size for process Pi, i.e. Pi needs WSSi frames. Let D be the total demand for frames, D = Σ WSSi, and let m be the total number of available frames.

      If D > m, thrashing occurs, because some processes will not have enough frames. So: if D > m, the OS selects a process to suspend; else the OS may initiate another process. The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.
      ========================================================================
      Q) How does thrashing occur in global and local page replacement?
      Ans: In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, causing a page fault for some other process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more processes queue up for the paging device, the ready queue empties and the CPU becomes idle. Seeing the CPU free, the OS introduces new processes, which further increases the number of page faults.

      In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs, only frames allocated to that process are replaced. When a process starts thrashing, other processes are not directly affected; but since processes wait in the queue for the paging device most of the time, the average service time for a page fault increases. Hence the effective access time increases even for a process that is not thrashing.

      Q) Page Table Structure (or) Structure of page table in memory

      Hierarchical Paging, Hashed Page Tables, Inverted Page Tables

      Hierarchical Paging: If the page table is large, we break it up into multiple page tables, i.e. the page table itself is paged. A simple technique is a two-level page table.

      Two-Level Paging Example: A logical address (on a 32-bit machine with 4K page size) is divided into:
      1. a page number consisting of 20 bits
      2. a page offset consisting of 12 bits
      Since the page table is paged, the page number is further divided into:
      1. a 10-bit page number
      2. a 10-bit page offset
      Thus a logical address is as follows:

      p1 (10 bits) | p2 (10 bits) | d (12 bits)

      where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table.

      Two-Level Page-Table Scheme
      Address-Translation Scheme: address-translation scheme for a two-level 32-bit paging architecture.
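      The 10 | 10 | 12 split of a 32-bit address can be sketched with a few shifts and masks (the function name is our own):

```python
def split_two_level(addr):
    """Extract (p1, p2, d) from a 32-bit address with 4K pages:
    p1 indexes the outer page table (top 10 bits), p2 indexes the
    inner page table (next 10 bits), d is the 12-bit page offset."""
    d = addr & 0xFFF            # low 12 bits
    p2 = (addr >> 12) & 0x3FF   # next 10 bits
    p1 = addr >> 22             # top 10 bits
    return p1, p2, d
```

      The widths add up to 32 bits, so the maximum address decomposes into the maximum value of each field.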

      Hashed Page Tables: Hashed page tables are common in address spaces larger than 32 bits. Each entry in the hash table contains a linked list of elements that hash to the same location. The hash table is indexed by hash value. Each element consists of 3 fields:

      1. Page number
      2. Frame number
      3. Pointer to the next element in the linked list

      The algorithm works as follows: the page number is passed to a hash function to get a hash value. Locating that value in the hash table is easy, as the table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address; if there is no match, the next entries in the linked list are searched.

      Inverted Page Table

      An inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table has:

      Process-id, Page number

      A logical address has 3 parts: Process-id, Page number, Offset. The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the ith entry, then frame number i together with the offset gives the physical address. If there is no match, the address is illegal.

      Advantages and disadvantages: it decreases the memory needed to store page tables, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to limit the search to one - or at most a few - page-table entries.

      Inverted Page Table Architecture

      Unit-IV CPU Scheduling

      Basic concepts: To maximize CPU utilization, some process must be running at all times. In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns it to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

      CPU-I/O burst cycle: Process execution begins with a CPU burst, followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on, as shown in the figure. An I/O-bound program has many short CPU bursts; a CPU-bound program has few long CPU bursts.

      Schedulers: a module in the OS for scheduling decisions.
      - Long-term scheduler (or job scheduler) - selects which processes should be brought into the ready queue.
      - Medium-term scheduler - selects which processes should be swapped in/out of memory.
      - Short-term scheduler (or CPU scheduler) - whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (CPU) scheduler.

      Dispatcher: gives CPU control to the process selected by the short-term scheduler. This function involves:

      1. switching context
      2. switching to user mode
      3. jumping to the proper location in the user program to restart that program

      The time taken to stop one process and start another is known as dispatch latency; this must be kept small.

      Ready Queue: The ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of processes.

      Pre-emptive Scheduling

      CPU scheduling decisions take place under one of four conditions:

      1. When a process switches from the running state to the waiting state.
      2. When a process switches from the running state to the ready state, for example in response to an interrupt.
      3. When a process switches from the waiting state to the ready state, say at completion of I/O.
      4. When a process terminates.

      For conditions 1 and 4 there is no choice - a new process must be selected. For conditions 2 and 3 there is a choice - continue running the current process, or select a different one.

      If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive, or cooperative. Under non-preemptive scheduling, a process runs until it gives up the CPU itself, i.e. when it waits for I/O or when it completes. Otherwise the system is said to be pre-emptive.

      Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

      Note that pre-emptive scheduling can cause problems when two processes share data, because one process may get interrupted in the middle of updating shared data structures.

      Preemption can also be a problem if the kernel is busy servicing a system call (e.g. updating critical kernel data structures) when the preemption occurs. One solution is to wait until the system call has either completed or blocked before allowing the pre-emption, but this solution is problematic for real-time systems.

      Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. This should be done only on very short pieces of code that take little CPU time.

Scheduling Criteria (used to compare CPU scheduling algorithms):

1. CPU utilization: in a real system, CPU utilization typically ranges from 40% to 90%.
2. Throughput: number of processes completed per unit time.
3. Turnaround time: time the process completed minus time the process was submitted; it includes waiting time and burst time.
4. Waiting time: amount of time a process spends waiting for the CPU.
5. Response time: time of the first response minus time the process was submitted.

Optimization criteria: maximize CPU utilization, maximize throughput, minimize turnaround time, minimize waiting time, minimize response time.

CPU Scheduling Algorithms

1) First-Come, First-Served (FCFS) Scheduling

FCFS is non-pre-emptive. Processes are scheduled in the order they arrive. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.

Advantages: easy to implement.

Disadvantages:
1. Average waiting time is quite long.

2. Convoy effect: small processes wait in the queue for a big process to leave the CPU.

Example:
Process   Burst Time
P1        24
P2        3
P3        3

Suppose that the processes arrive in the order: P1, P2, P3.

The Gantt chart for the schedule is: | P1 | P2 | P3 |

0 24 27 30

Waiting time for P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.

2) Shortest-Job-First (SJF) Scheduling

Schedule the process with the shortest CPU burst time. If the CPU burst times of two processes are the same, FCFS order is used.

Advantages: average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.

SJF can be either pre-emptive or non-pre-emptive:
1. Non-pre-emptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Pre-emptive SJF, or Shortest-Remaining-Time-First (SRTF): if a newly arrived process is shorter than the remaining time of the currently executing process, pre-empt the currently executing process.

Example:
Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

SJF (non-pre-emptive) Gantt chart: | P1 | P3 | P2 | P4 |

0 7 8 12 16

Average waiting time = [0 + (8-2) + (7-4) + (12-5)]/4 = 4
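The non-pre-emptive SJF example above can be checked in C. The helper below (the name sjf_total_wait is chosen here for illustration, not from the notes) repeatedly picks, among the arrived and unfinished processes, the one with the shortest burst:

```c
/* Total waiting time under non-pre-emptive SJF for up to 16 processes.
   arrival[] and burst[] describe n processes; ties go to the lower index. */
int sjf_total_wait(int n, const int arrival[], const int burst[]) {
    int done[16] = {0};
    int time = 0, completed = 0, total_wait = 0;
    while (completed < n) {
        int pick = -1;
        /* among arrived, unfinished processes choose the shortest burst */
        for (int i = 0; i < n; i++)
            if (!done[i] && arrival[i] <= time &&
                (pick == -1 || burst[i] < burst[pick]))
                pick = i;
        if (pick == -1) { time++; continue; }  /* CPU idle until next arrival */
        total_wait += time - arrival[pick];    /* waiting = start - arrival */
        time += burst[pick];
        done[pick] = 1;
        completed++;
    }
    return total_wait;
}
```

For the four processes of the example this yields 0 + 3 + 6 + 7 = 16, i.e., an average of 4, matching the calculation above.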

      Example of Preemptive SJF

Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

SJF (pre-emptive) Gantt chart: | P1 | P2 | P3 | P2 | P4 | P1 |

0 2 4 5 7 11 16

Average waiting time = (9 + 1 + 0 + 2)/4 = 3.

3) Priority Scheduling

A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. pre-emptive, 2. non-pre-emptive.

Problem: starvation — low-priority processes may never execute.
Solution: aging — as time progresses, increase the priority of the process.

4) Round Robin (RR)

Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.

Performance:
1. If q is large, RR behaves like FIFO.
2. If q is small, q must still be large with respect to the context-switch time, otherwise the overhead is too high.

Example of RR with time quantum = 4:
Process   Burst Time
P1        24
P2        3

P3        3

The Gantt chart is: | P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |

0 4 7 10 14 18 22 26 30

Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 ≈ 5.66.

5) Multilevel Queue Scheduling

The ready queue is partitioned into separate queues. For example, the ready queue is divided into 2 queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, Round Robin can be used in the foreground queue and FCFS in the background queue.

Scheduling must also be done between the queues. This can be done in 2 ways:
1. Fixed-priority scheduling

The foreground queue has the highest priority: all processes in the foreground queue must complete and the queue must be empty before the processes in the background queue are given the CPU, as shown in the figure below. Disadvantage: starvation.

2. Time slice: each queue gets a certain amount of CPU time which it can schedule among its processes, e.g., 80% to the foreground queue (RR) and 20% to the background queue (FCFS).

6) Multilevel Feedback Queue Scheduling

A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes stay in the highest-priority queue. If a process uses too much CPU time, it is moved to a lower-priority queue. If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues: Q0, Q1, Q2.

1. Q0: time quantum 8 milliseconds
2. Q1: time quantum 16 milliseconds
3. Q2: FCFS

      Scheduling

1. A process entering the ready queue is put in Q0. When it gains the CPU, it receives 8 milliseconds. If it does not finish in 8 milliseconds, it is moved to Q1.
2. At Q1 the process receives an additional 16 milliseconds. If it still does not complete, it is moved to Q2.
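As a small illustration of these rules, the sketch below (the function name is hypothetical) computes which queue a process finishes in, given its total CPU demand, for the 8 ms / 16 ms / FCFS configuration above:

```c
/* Returns the queue (0, 1, or 2) in which a process with the given total
   CPU demand (in ms) finishes, for quanta of 8 ms (Q0) and 16 ms (Q1). */
int finishing_queue(int cpu_ms) {
    if (cpu_ms <= 8)  return 0;   /* finishes within Q0's 8 ms quantum    */
    if (cpu_ms <= 24) return 1;   /* 8 ms in Q0 + up to 16 ms more in Q1  */
    return 2;                     /* demoted to the FCFS queue Q2         */
}
```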

Q) Synchronization

Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of a process synchronization problem is the Producer-Consumer (or Bounded-Buffer) problem: the producer process must not add data when the buffer is full, and the consumer process must not take data when the buffer is empty.

The code for the producer process can be written as follows:

    while (true) {
        /* produce an item in nextProduced */
        while (counter == BUFFER_SIZE)
            ;  /* do nothing */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

Here an integer variable counter is initialized to zero. Counter is incremented every time a new item is added to the buffer and decremented every time an item is removed. in is an index that always points to the next empty slot after the last filled slot; out is an index that points to the first filled slot, at the head of the circular queue.

The code for the consumer process is as shown below:

    while (true) {
        while (counter == 0)
            ;  /* do nothing */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
    }

The above producer and consumer code produces a RACE CONDITION. When the outcome of the execution of several co-operating processes depends on the order in which accesses to shared data take place, it is called a race condition. For example, let counter = 5.

Given counter = 5, if the producer produces one item and the consumer consumes one item, the correct value of counter is still 5. But if the machine instructions of counter++ and counter-- interleave badly, counter may end up as 4, an inconsistent result; if T5 is executed before T4 (a different interleaving), counter ends up as 6, also inconsistent.

Solution to avoid the race condition: make sure only one process at a time updates the shared variable (here, counter).
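This fix can be seen concretely with POSIX threads: two threads each increment a shared counter 100,000 times. With the mutex shown, the final value is exactly 200,000; remove the lock/unlock pair and the interleaved load-add-store steps of counter++ would make the result unpredictable. (A sketch; run_counter_demo and ITERS are names chosen here for illustration.)

```c
#include <pthread.h>

enum { ITERS = 100000 };
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&lock);    /* one thread updates counter at a time */
        counter++;                    /* load, add, store without interleaving */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

long run_counter_demo(void) {
    counter = 0;
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter;                   /* 2 * ITERS when properly locked */
}
```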

Q) Critical section problem

Ans: Each process has a segment of code called its critical section. Critical-section code is code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical sections at the same time. For this to happen:
1. Each process must request permission to enter its critical section — the entry section.
2. The end of the critical section is the exit section.
3. The rest of the code after the critical section is the remainder section.

A solution to the critical-section problem must satisfy three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes not in their remainder sections take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: there is a bound on how long a process waits to enter its critical section after requesting entry; no process waits indefinitely.

Q) Peterson's solution for the critical-section problem (software-based solution)

Ans: Two processes share the variables flag[1..2] and turn. A possible interleaving:

Time   Process 1                           Process 2
T0     do {                                do {
       flag[1] = TRUE;                     flag[2] = TRUE;
       turn = 2;
T1                                         turn = 1;
T2     while (flag[2] && turn == 2);       while (flag[1] && turn == 1);
       (turn is now 1, so the loop         (loops continuously until
       condition is false)                 flag[1] becomes false)
T3     Enter critical section
T4     flag[1] = FALSE;                    Enter critical section
T5     Enters remainder section            flag[2] = FALSE;
       } while (TRUE);                     Enters remainder section
                                           } while (TRUE);
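Peterson's algorithm can be exercised with two POSIX threads. The sketch below uses C11 atomics for flag and turn, because the plain variables of the textbook pseudocode are not safe on modern hardware that reorders memory operations; names such as run_peterson_demo are chosen here for illustration.

```c
#include <stdatomic.h>
#include <pthread.h>

static atomic_int flag[2];             /* flag[i]: thread i wants to enter   */
static atomic_int turn;                /* whose turn it is to yield          */
static int shared = 0;                 /* protected by the critical section  */

static void *peterson_worker(void *arg) {
    int me = *(int *)arg, other = 1 - me;
    for (int i = 0; i < 50000; i++) {
        atomic_store(&flag[me], 1);    /* entry section: I want to enter     */
        atomic_store(&turn, other);    /* ... but you may go first           */
        while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
            ;                          /* busy-wait (spin)                   */
        shared++;                      /* critical section                   */
        atomic_store(&flag[me], 0);    /* exit section                       */
    }
    return NULL;
}

int run_peterson_demo(void) {
    shared = 0;
    atomic_store(&flag[0], 0);
    atomic_store(&flag[1], 0);
    int id0 = 0, id1 = 1;
    pthread_t t0, t1;
    pthread_create(&t0, NULL, peterson_worker, &id0);
    pthread_create(&t1, NULL, peterson_worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return shared;                     /* 2 * 50000 if mutual exclusion held */
}
```

The sequentially consistent atomic operations supply the ordering guarantees that make the two-flag-plus-turn handshake correct.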

10.1 File Concept

Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

      File Attributes

1. Name: name of the file, for user understanding.
2. Identifier: used by the OS to identify a file (e.g., inode number).
3. Type: text files, executable files, binary files, etc.
4. Location: location of the file on the disk.
5. Size: current size of the file.
6. Protection: controls who can read, write, or execute the file.
7. Time, date, and user identification: when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. The directory structure is kept on disk.

      File Operations

The file Abstract Data Type supports these common operations:
1. Creating a file: find free space on disk and add an entry to the directory.
2. Writing a file: write data at the current file-position pointer location and update the pointer.
3. Reading a file: read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek): change the pointer to a given value.
5. Deleting a file: free the space allocated to the file and remove its directory entry.
6. Truncating a file: delete the data and update the file size.

      Open File Table

Most operating systems require that files be opened before access and closed after access. Information about all files currently open in the system is stored in a system-wide open-file table maintained by the OS, and information about the files opened by a process is stored in that process's open-file table. The open-file table has the following information:

1. File pointer: records the current position in the file for the next read or write.
2. File-open count: how many times the file is currently open (simultaneously, by different processes) and not yet closed. When this counter reaches zero, the entry can be removed from the table.

3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With a mandatory lock, the OS ensures that locks are acquired and released appropriately.
4. With an advisory lock, software developers are responsible for acquiring and releasing locks appropriately.
5. UNIX uses advisory locks and Windows uses mandatory locks.

      File Types

Implementing file types: a common technique is to include the type as part of the file name. The name is split into 2 parts:

1. file name
2. extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on it. For example, when a user opens Microsoft Word and chooses Open from the File menu, the user may specify only the file name without the extension; the application looks for a file with the given name and the extension it expects. In this sense extensions are hints to applications rather than something enforced by the operating system. UNIX uses a crude magic number stored at the beginning of some files to indicate the type of file. In the Mac OS X operating system, a file produced by a word processor has the word processor's name recorded as its creator; when the user double-clicks the file, that word processor opens it. Some common file types are shown below.

File Structure: some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so the OS can determine where in memory to load the file and the location of the first instruction.

If the OS supports many file formats, its size and complexity increase. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, including finding the first executable statement).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently; the data fork contains the code or data.

Internal File Structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units such as a single byte, a data-record size, or a structure size.

The number of logical units which fit into one physical block determines the packing. If a student file is organized internally as records, each record takes 64 bytes, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e., 8 records can be stored per block.

Internal fragmentation may occur due to this, when the last block of a file is only partially filled.
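The packing arithmetic is simple enough to express directly; the helpers below (names hypothetical) compute the records per block and the bytes lost to internal fragmentation in each full block:

```c
/* Packing: how many fixed-size records fit in one physical block. */
int records_per_block(int block_size, int record_size) {
    return block_size / record_size;
}

/* Bytes left unused at the end of each block (internal fragmentation). */
int wasted_per_block(int block_size, int record_size) {
    return block_size % record_size;
}
```

With 64-byte records and 512-byte blocks the fit is exact (8 records, 0 bytes wasted); with, say, 100-byte records, 5 records fit and 12 bytes per block are wasted.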

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: records are read and written sequentially, one record after another; we cannot randomly access an arbitrary record.

A sequential-access file emulates magnetic tape operation and generally supports the following operations:
1. read next: read a record and advance to the next position
2. write next: write a record and advance to the next position
3. rewind
4. skip n records: may or may not be supported; n may be limited to positive values or to +/- 1

2. Direct access (or relative access): the file is made up of fixed-length records so that we can randomly access any record. There is no restriction on the order of reading or writing. Direct-access files are useful when we need to access large amounts of data randomly, e.g., a database.

Operations supported include (n is a relative block number, i.e., relative to the beginning of the file):
1. read n: read record number n
2. write n: write record number n
3. jump to record n: n could be 0 or the end of the file
4. query current record: used to return to this record later
5. Sequential access can easily be emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct-access method. We construct an index for the file which contains pointers to the various blocks. To find a record, we first search the index and then use the pointer to access the file directly.

If a student file has 120,000 records sorted by student id, we can create an index consisting of the first student id in each block. If 64 records fit in each block, the number of blocks needed to store 120,000 records = 120000/64 = 1875 blocks, so the index has 1875 entries. The index is kept in main memory so that searching is fast.

If the index itself becomes too large, we create an index for the index file (e.g., a B-tree).
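The index lookup just described amounts to a search over the first-id-per-block table. A sketch (the function name is hypothetical) using binary search to find the block that should contain a given id:

```c
/* index[] holds the first student id stored in each block, in sorted order.
   Returns the block whose id range covers the given id (binary search);
   the caller then reads that one block and scans its records. */
int block_for_id(const int index[], int nblocks, int id) {
    int lo = 0, hi = nblocks - 1, ans = 0;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (index[mid] <= id) { ans = mid; lo = mid + 1; }
        else                  { hi = mid - 1; }
    }
    return ans;
}
```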

Q) Allocation Methods

Ans: The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

      Three major methods of allocating disk space are

1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

      The directory entry for each file indicates the address of starting block(ie b) and length (n)

      Advantages

1. Direct access to block i is easy: it is simply block b+i. Since the file is stored sequentially, sequential access is also easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

      Disadvantages

1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to it. Solution: copy all files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even if the final size is known, the entire space may not be used at the beginning, which leads to wasted space.

If the size of the file is not known in advance:
(i) If we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This is possible as long as such space exists, but it is time consuming.
(ii) If we overestimate the amount of space needed, much of the space goes unused, leading to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous-allocation scheme. A chunk of contiguous blocks is allocated at first; if the file needs more blocks, another chunk of contiguous space, known as an extent, is added. A file's blocks are then tracked as a location and a block count, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2. Linked allocation

Each file is a linked list of disk blocks, and the blocks may be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0.

To add more data to an existing file, a free block is found, the data is written to it, and the block is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. Easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is greater.
2. The number of disk seeks required is greater.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct-access files: to access the ith block, i disk reads are required.
5. Not reliable: a lost or damaged pointer breaks the chain.

A solution to disadvantage 3 is to group multiple blocks into clusters (e.g., 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk accesses, reduces the space wasted on pointers (fewer pointers are needed), and simplifies free-list management. Disadvantage: internal fragmentation increases, since more space is wasted when a cluster is partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation on linked allocation is the file-allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

      Advantages

1. Easy to find space for a new file or to add more data to an existing file.
2. Random-access time is improved.

Disadvantage: the number of disk seeks increases. The disk head must move to the FAT to find the location of a block, then move to the location of the block itself.
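Reading a file under FAT allocation means following table entries from the starting block. A sketch of the traversal, using the example chain 9 → 16 → 1 → 10 → 25 from the linked-allocation discussion (FAT_EOF is a stand-in for the real end-of-file marker):

```c
enum { FAT_EOF = -1 };

/* Follow a FAT chain: fat[b] holds the number of the block that follows b,
   or FAT_EOF at the last block of the file. Fills chain[] with the block
   numbers in file order and returns how many blocks were visited. */
int fat_chain(const int fat[], int start, int chain[], int max) {
    int n = 0, b = start;
    while (b != FAT_EOF && n < max) {
        chain[n++] = b;
        b = fat[b];       /* one table lookup per data block */
    }
    return n;
}
```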

      3 Indexed allocation

Each file has its own index block, which is an array of disk-block addresses. The ith entry in the index block points to the ith block of the file.

The directory contains the address of the index block. When the file is created, all pointers in the index block are set to nil. To add more data to an existing file, a free block is found, the data is written to it, and the address of the block is put into the index block.

      Advantages

1. Easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

Disadvantages

1. Seek time is greater.
2. The number of disk seeks required is greater.
3. Not reliable.
4. The index block should be kept in memory; if that memory is not available, we must read the index block and then the desired data block, i.e., 2 disk accesses per access. For a large file, accessing a block near the end of the file may require reading several index blocks before the needed data block.

5. The pointer overhead of an index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

Various mechanisms are used for the size of the index block:

1. Linked scheme: an index block is one disk block. It contains a small header giving the name of the file and a set of the first 100 disk-block addresses; the last address points to another index block (for large files).
2. Multilevel index: to access a block, the OS uses a first-level index to find a second-level index block, which in turn points to the file block. This approach can be continued to a third or fourth level.

3. Combined scheme: for example, the file's inode keeps 15 pointers. The first 12 point to blocks that contain file data; the next 3 point to indirect blocks. The first of these points to a single indirect block, an index block containing the addresses of the file's data blocks; the second points to a double indirect block, and the third to a triple indirect block, as shown below.

Q) Free space management

Ans: To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

      The free space list can be implemented as

1. Bit vector (bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000...

Advantages: simple, and it is easy to find contiguous free blocks.

      Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient unless kept in main memory. The bit map of a small disk can be kept in main memory, but that may not be possible for a large disk.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit-map data to be updated, because the freed data blocks may be scattered all over the disk.
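Finding a free block with a bit vector is a linear scan (real systems scan a machine word at a time rather than bit by bit). A sketch, representing the map as a string of '0'/'1' characters for readability:

```c
/* bits[i] == '1' means block i is free.
   Returns the first free block number, or -1 if none is free. */
int first_free_block(const char *bits, int nblocks) {
    for (int i = 0; i < nblocks; i++)
        if (bits[i] == '1')
            return i;
    return -1;
}
```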

2. Linked list: the free-space list is implemented by linking all the free disk blocks together, with a pointer to the first free block stored in a special location on the disk. For example, if blocks 2, 3, 4, 5, and 8 are free, we keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on. Advantage: no waste of space. Disadvantage: contiguous space cannot be found easily; traversing the free-space list requires reading each block, which takes substantial I/O time.

3. Grouping: the first free block stores the addresses of n other free blocks. The first n-1 of these blocks are actually free; the last one contains the addresses of another n free blocks, and so on. Advantage: a large number of free blocks can be found quickly.

4. Counting: the free-space list can be maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it.

Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, but the overall list is shorter as long as the counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity (allocations and frees), in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Synchronization Hardware

The hardware solution to synchronization is to provide atomic operations: operations that execute as a single instruction, without interruption. Two such operations are:
1. the TestAndSet() hardware instruction
2. the Swap() instruction

1. The TestAndSet() instruction is defined as shown below.

Solution using TestAndSet: the shared variable lock is initialized to false.

2. The Swap() instruction is defined as follows.

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee about the order in which they will enter, and any one process could be unlucky enough to wait forever for its turn. (Since there is no guarantee about the relative speeds of the processes, a very fast process could theoretically release the lock, whip through its remainder section, and re-lock the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The figure below illustrates a solution using TestAndSet that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for the critical section.

      Bounded-waiting mutual exclusion with TestAndSet( )

On exit, a process first looks, in order (starting with the next process on the list), for a process that has been waiting; if it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby letting that specific process into the critical section while continuing to block all the others.

Q) What is a semaphore?

Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: wait() and signal().

Wait primitive:

    wait(S) {
        while (S <= 0)
            ;  /* do nothing */
        S--;
    }

Signal primitive:

    signal(S) {
        S++;
    }

Each process that wishes to use a resource performs a wait() operation on the semaphore; when a process releases the resource, it performs a signal() operation.

Semaphore usage:
1. A binary semaphore (mutex lock) can take the values 0 or 1.

2. A counting semaphore can take any integer value and is used to count the remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources; when the counter reaches 0 (or goes negative, in some implementations), a process blocks until another process frees a resource and increments the semaphore with a signal() call.

For example, suppose there are 3 resources and 4 processes, so the counting semaphore starts at 3:
If process P1 requires a resource, it performs wait(); the semaphore becomes 2.
If process P2 requires a resource, it performs wait(); the semaphore becomes 1.
If process P3 requires a resource, it performs wait(); the semaphore becomes 0.
If process P4 requires a resource, it performs wait(); P4 waits until a resource is available.

      3 Semaphores can also be used to synchronize certain operations

Semaphore Implementation

When one process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spinlock, as the process "spins" while waiting.

Semaphores can also be used to order operations. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First we create a semaphore named synch, shared by the two processes, and initialize it to zero. In process P1 we insert the code:

    S1;
    signal(synch);

and in process P2 we insert the code:

    wait(synch);
    S2;

Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal().

Each semaphore has an integer value and maintains a queue of waiting processes.

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e., the process is moved from the semaphore's waiting queue back to the ready state).
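The blocking behaviour can be sketched at user level with a mutex and a condition variable standing in for the kernel's waiting queue. This is an illustrative counting semaphore (csem and its functions are names chosen here), not the kernel's actual implementation:

```c
#include <pthread.h>

/* Minimal counting semaphore mirroring the wait()/signal() pseudocode. */
typedef struct {
    int value;
    pthread_mutex_t m;
    pthread_cond_t c;
} csem;

void csem_init(csem *s, int value) {
    s->value = value;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->c, NULL);
}

void csem_wait(csem *s) {            /* wait(S): block while S <= 0, then S-- */
    pthread_mutex_lock(&s->m);
    while (s->value <= 0)
        pthread_cond_wait(&s->c, &s->m);  /* sleep: no busy waiting */
    s->value--;
    pthread_mutex_unlock(&s->m);
}

void csem_signal(csem *s) {          /* signal(S): S++, wake one waiter */
    pthread_mutex_lock(&s->m);
    s->value++;
    pthread_cond_signal(&s->c);
    pthread_mutex_unlock(&s->m);
}

int csem_trywait(csem *s) {          /* non-blocking: 1 if acquired, else 0 */
    pthread_mutex_lock(&s->m);
    int ok = s->value > 0;
    if (ok) s->value--;
    pthread_mutex_unlock(&s->m);
    return ok;
}
```

With an initial value of 3 (the three-resource example above), three acquisitions succeed and the fourth would block (csem_trywait reports this without blocking).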

Problems due to semaphores:
1. Deadlock: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes, as shown below.

2. Starvation: one or more processes get blocked forever and never get a chance to enter the critical section. For example, this can happen if we do not specify the algorithm for adding processes to the waiting queue, or for selecting the one to be removed by the signal() call: if a LIFO queue is used, the first process to start waiting may never get a chance.

3. Priority inversion: let processes L, M, and H have priorities L < M < H. Suppose process H is waiting for a resource held by process L, so process L is running. When process M arrives, it pre-empts L, and M runs. Now M runs before H even though M has lower priority than H. To avoid this, L is temporarily given H's high priority (priority inheritance), so that M cannot pre-empt it.
=====================================================================
Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem

Consider:
1. n buffers, each of which holds one item
2. a producer process which creates the items (1 at a time)
3. a consumer process which processes them (1 at a time)

The producer process cannot produce an item if all the buffers are full, and the consumer process cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (counts the number of empty buffers)
2. full = 0 (counts the number of filled buffers)
3. mutex = 1 (provides mutual exclusion for access to the buffer)

Producer process:
do {
    ... produce an item ...
    wait(empty);
    wait(mutex);
    ... add the item to the buffer ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:
do {
    wait(full);
    wait(mutex);
    ... remove an item from the buffer ...
    signal(mutex);
    signal(empty);
    ... consume the item ...
} while (1);
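A runnable sketch of this scheme, using Python's threading.Semaphore in place of the abstract wait()/signal() operations (the producer/consumer function names and the deque-based buffer are illustrative choices, not part of the classical statement):

```python
import threading
from collections import deque

N = 5                            # number of buffer slots
buffer = deque()                 # the shared bounded buffer
empty = threading.Semaphore(N)   # counts empty slots (empty = n)
full = threading.Semaphore(0)    # counts filled slots (full = 0)
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer (mutex = 1)

def producer(items):
    for item in items:           # ... produce an item ...
        empty.acquire()          # wait(empty)
        mutex.acquire()          # wait(mutex)
        buffer.append(item)      # add the item to the buffer
        mutex.release()          # signal(mutex)
        full.release()           # signal(full)

def consumer(count, out):
    for _ in range(count):
        full.acquire()           # wait(full)
        mutex.acquire()          # wait(mutex)
        out.append(buffer.popleft())  # remove an item from the buffer
        mutex.release()          # signal(mutex)
        empty.release()          # signal(empty)

consumed = []
p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10, consumed))
p.start(); c.start()
p.join(); c.join()
print(consumed)                  # items arrive in production order
```

Because the buffer is FIFO and the semaphores prevent overflow and underflow, the consumer always receives the items in the order they were produced, whatever the thread interleaving.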

2. The Readers-Writers Problem

A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

- readers priority: a new reader need not wait just because a writer is waiting
- writers priority: if a writer is waiting to access the data, no new readers may start reading

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses two semaphores and a counter:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both readers and writers)

wrt provides mutually exclusive access to the shared data.

Reader process:
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);      /* the first reader locks out writers */
signal(mutex);
... reading is performed ...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);    /* the last reader lets writers in */
signal(mutex);

Writer process:
do {
    wait(wrt);
    ... writing is performed ...
    signal(wrt);
} while (TRUE);
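A minimal Python sketch of the readers-priority scheme (the reader/writer function names and the list used as the shared data are illustrative; threading.Semaphore stands in for the abstract semaphores):

```python
import threading

readcount = 0                   # number of processes currently reading
mutex = threading.Semaphore(1)  # protects readcount
wrt = threading.Semaphore(1)    # exclusive access for writers
shared_data = []

def reader(out):
    global readcount
    mutex.acquire()             # wait(mutex)
    readcount += 1
    if readcount == 1:
        wrt.acquire()           # first reader locks out writers
    mutex.release()             # signal(mutex)
    out.append(list(shared_data))   # ... reading is performed ...
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()           # last reader lets writers in
    mutex.release()

def writer(item):
    wrt.acquire()               # wait(wrt)
    shared_data.append(item)    # ... writing is performed ...
    wrt.release()               # signal(wrt)

writer("v1")
seen = []
reader(seen)
print(seen)
```

Only the first reader actually waits on wrt; later readers slip past it, which is exactly why a steady stream of readers can starve a waiting writer in this version.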

3. The Dining-Philosophers Problem

n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick that is already in the hand of a neighbour. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore.
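A runnable Python sketch of the chopstick-per-semaphore idea (the philosopher function and log list are illustrative; the demo runs the philosophers one at a time, which sidesteps the deadlock risk of the naive solution):

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]  # all initially 1

def philosopher(i, log):
    chopstick[i].acquire()              # wait(chopstick[i])
    chopstick[(i + 1) % N].acquire()    # wait(chopstick[(i+1) % 5])
    log.append(i)                       # ... eat ...
    chopstick[i].release()              # signal(chopstick[i])
    chopstick[(i + 1) % N].release()    # signal(chopstick[(i+1) % 5])
                                        # ... think ...

meals = []
# Run the philosophers one after another here; if all five ran concurrently
# and each grabbed its left chopstick first, this naive solution could deadlock.
for i in range(N):
    t = threading.Thread(target=philosopher, args=(i, meals))
    t.start()
    t.join()
print(meals)
```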

semaphore chopstick[5];   /* initially all values are 1 */

Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    ... eat ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    ... think ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it can create a deadlock (for example, if every philosopher picks up his left chopstick at the same moment).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS In asymmetric multiprocessing (ASMP), the CPUs are not all equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks: for example, one processor may be dedicated to disk operations, another to video operations, and so on. Unlike an SMP system, they do not have the flexibility to assign processes to the least-loaded CPU.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraints.

      Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples:

      Digital audio or multimedia systems Digital telephones

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control: it spreads its decision-making actions across the computers in the system, so that failures of individual computers or of the network do not cripple its operation.

A distributed operating system appears to be a uniprocessor system even though it has multiple processors. Users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

      Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

      DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location Modern desktop systems support multiprogramming Common examples are Linux FreeBSD Windows 8 and the Macintosh operating system

HANDHELD SYSTEMS (palmtop computers): two of the most popular operating systems for handhelds are Symbian OS and Android. Handheld systems have very limited memory.


CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

      Q) Modern operating system

New developments in hardware, in applications, and in security threats have led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, Internet and Web access, and client-server computing.

      Developments in modern operating system can be categorized as

1. Microkernel architecture assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as systems nowadays often contain multiple microprocessors, an SMP operating system provides greater efficiency when several processors are available. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

      SMP schedules processes or threads across all of the processors

      Advantages

1. Increased performance: if some portion of a program can run in parallel, the performance of an SMP OS increases.
2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can depend on the number of processors used.

      4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5. Object-oriented design adds modularity to the kernel. The OS can be customized without affecting system integrity.

Q) Linux Operating System

Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system, and its main design goals are speed, efficiency, and standardization.

Components of the Linux System


The Linux operating system has primarily three components:

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system, interacts directly with hardware, and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs that application programs use to access the kernel's features.

System Utility - System utility programs perform specialized, individual-level tasks such as updating log files and accepting login requests from terminals.

Kernel Modules - Sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode

Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require any context switch; hence it is very efficient and fast. The kernel runs each process and provides system services to processes, including protected access to hardware. Code that is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code; they use the system libraries to invoke kernel functions for low-level tasks.

Basic Features

Following are some of the important features of the Linux operating system:

Portable - Portability means that software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on almost any kind of hardware platform.

Open Source - Linux source code is freely available, and development is community based. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multiuser system: multiple users can access system resources such as memory and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system: multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used to perform various types of operations, call application programs, and so on.


Security - Linux provides user security using authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture

The Linux system architecture consists of the following layers:

Hardware layer - Consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - The core component of the operating system; interacts directly with hardware and provides low-level services to upper-layer components.

Shell - An interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System

1. Extensibility - layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.

2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.

4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.

5. Performance - XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

6. International support - XP supports different locales via the national language support (NLS) API.

XP Architecture
1. Layered system of modules.
2. Protected mode - hardware abstraction layer (HAL), kernel, executive.
3. User mode - collection of subsystems:
   a. Environmental subsystems emulate different operating systems.
   b. Protection subsystems provide security functions.

      XP architecture


Q) Windows Network Operating System

Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.


• 10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

        ===================================================================

        Unit-II

Q) What is a process

A process is a program in execution. A process is an active entity and resides in main memory.

Q) Explain the structure of a process in memory

A process contains:
1. Program code, sometimes known as the text section.
2. Process stack, which contains temporary data (such as function parameters, return addresses, and local variables).
3. Heap, which is memory allocated dynamically during process run time.
4. Data section, which contains global variables.

Even when two processes are associated with the same program, they are considered two separate execution sequences, as the data, heap, and stack sections differ even though the text sections are equivalent.

Q) Explain process states

As a process executes, it changes state. Each process can be in one of the following states:
1. New: the process is being created.
2. Running: instructions are being executed.
3. Waiting: the process is waiting for some event to occur (such as I/O completion).
4. Ready: the process is waiting to be assigned to a processor.
5. Terminated: the process has finished execution.

Q) What is a PCB

Each process is represented in the operating system by a process control block (PCB), also called a task control block. It contains information about:
1. Process state: the state may be new, ready, running, waiting (blocked), or terminated.
2. Program counter: this register stores the address of the next instruction to be executed.
3. CPU registers: accumulators, stack pointers, index registers, and general-purpose registers. The values of these registers are saved if the process is interrupted.
4. CPU-scheduling information: process priority, scheduling parameters, etc.
5. Memory-management information: the values of the base and limit registers, segment tables, page tables, etc.
6. Accounting information: the amount of CPU time used, time limits, job or process number, etc.
7. I/O status information: the list of I/O devices allocated to the process, the list of open files, and so on.
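The fields above can be pictured as a simple record. A hypothetical Python sketch (this is an illustration of the idea, not a real kernel structure; the field names are assumptions mirroring the list):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                        # unique process identifier
    state: str = "new"                              # new/ready/running/waiting/terminated
    program_counter: int = 0                        # address of next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                               # CPU-scheduling information
    open_files: list = field(default_factory=list)  # I/O status information

p = PCB(pid=42)
p.state = "ready"          # e.g. the process is admitted to the ready queue
print(p.pid, p.state)
```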

Q) Operations on Processes

        1 Process Creation

The operating system is responsible for the creation of new processes.

Reasons for creating a new process:

1. A batch job is submitted by a user.
2. In an interactive environment or time-sharing system, a process is created when a user logs on.
3. The operating system creates a process to manage printing, so that the user need not wait until printing completes. Here the OS creates a process on behalf of the user.
4. A process creates another process. The creating process is called the parent process, and the new process is called the child process or subprocess.

A new process can in turn create other processes, forming a tree of processes. The OS identifies each process by a unique process identifier (pid), which is a unique integer. In the Solaris operating system, at the top of the tree is the Sched process with pid = 0, which creates several child processes:
1. The Init process, the parent process of all user processes.
2. The Pageout process.
3. The Fsflush process.

A subprocess may obtain resources in several ways:

1. Directly from the OS.
2. The parent shares some of its resources among several of its children.
3. The parent partitions its resources among its children.

The parent process may pass initialization data to the child. For example, the name of an image file and the name of the output device may be passed to a display process (child process).

The parent process may either:

1. Execute concurrently with the child, or
2. Wait until some or all of its child processes have terminated.

The address space of the child process may be:

1. A duplicate copy of the parent's (same program and data), or
2. Have a new program loaded into it.
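Both options can be seen with the Unix fork() call, sketched here in Python (os.fork is POSIX-only; the exit status 7 is an arbitrary demo value):

```python
import os

pid = os.fork()                      # child is a duplicate of the parent
if pid == 0:
    # Child: same program and data as the parent. It could instead call
    # os.execv() here to load a new program into its address space.
    os._exit(7)
else:
    _, status = os.waitpid(pid, 0)   # parent waits until the child terminates
    print(os.WEXITSTATUS(status))    # the child's exit status
```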

2. Process Termination

A process terminates when it finishes executing its final statement and requests the operating system to delete it (using the exit() system call in Unix, or TerminateProcess() in the Win32 API). All the resources of the process (open files, I/O buffers, and physical memory) are then deallocated by the OS.

A parent may terminate a child process for a variety of reasons:

1. The child has exceeded its usage of some of its resources.
2. The task assigned to the child is no longer required.
3. The parent is terminating, and the OS (e.g., VMS) does not allow a child to continue if its parent is terminating.

Q) What is cascading termination

If a parent is terminating and the OS (e.g., VMS) does not allow a child to continue, all of the parent's children (and their descendants) are terminated as well. This is called cascading termination and is initiated by the OS.

Q) What happens to a child when its parent terminates in Unix

The Init process becomes the parent of all its children.

Q) What is a context switch

The PCB of the currently executing process is saved, and the operating system loads the PCB of the next process to run on the CPU. This switch is pure overhead and takes a noticeable amount of time.

        Q) Basic Concepts of threads

A thread consists of a thread ID, a program counter, a register set, and a stack. Threads are also called lightweight processes. A process with multiple threads makes a great server. Threads share many resources with the other threads belonging to the same process, so a context switch among threads of the same process is cheap: it involves switching only the register set, the program counter, and the stack.
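The sharing can be seen directly: threads of one process update the same global data, so they each have their own stack but must still coordinate access to shared variables. A small Python sketch (the worker function and counts are illustrative):

```python
import threading

counter = 0                  # lives in the shared data section of the process
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:           # each thread has its own stack and registers,
            counter += 1     # but updates to shared data still need a lock

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)               # all four threads incremented the same variable
```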

Q) Explain the two modes of CPU execution

Protection of memory and I/O can be provided via two modes of CPU execution: user mode and kernel mode.

In kernel (privileged, supervisor) mode, the OS has access to privileged instructions. Privileged instructions can access I/O devices, control interrupts, and manipulate memory (page tables, the TLB, etc.).

Privileged instructions are instructions that can be executed only in kernel mode.

All user-level processes run in user mode. Some critical operations are not allowed to be performed directly by user processes; they must use system calls instead. When a system call occurs, the OS enters kernel mode and uses privileged instructions to perform the desired service for the user-level process.

For example, for input or output a process makes a system call telling the operating system to read or write a particular area, and this request is satisfied by the operating system.

Q) Explain inter-process communication

Ans: Cooperating processes require an inter-process communication (IPC) mechanism to exchange data and information. There are two communication models: (a) message passing and (b) shared memory.

(a) Shared-Memory Systems

1. Communicating processes must establish a region of shared memory.
2. The shared-memory region resides in the address space of the creating process.
3. Other processes that wish to communicate using the shared-memory segment must attach it to their own address space.
4. Processes exchange information by reading and writing data in the shared area.
5. Shared-memory systems are convenient for communication.
6. Shared-memory systems are faster and provide maximum speed, because:
   i. System calls are required only to establish the shared-memory region.
   ii. Once shared memory is established, all accesses are treated as routine memory accesses; no assistance from the kernel is required.
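A minimal sketch of the idea on POSIX systems, using an anonymous mmap region shared across fork() (the 8-byte size and the value 1234 are arbitrary demo choices):

```python
import mmap
import os
import struct

# An anonymous shared mapping: after fork(), parent and child see the same bytes.
shm = mmap.mmap(-1, 8)

pid = os.fork()
if pid == 0:
    struct.pack_into("q", shm, 0, 1234)   # child writes into the shared region
    os._exit(0)

os.waitpid(pid, 0)                        # wait for the child to finish
value, = struct.unpack_from("q", shm, 0)  # parent reads it back as an ordinary
print(value)                              # memory access, with no kernel help
```

The only system calls involved set up the region and synchronize on the child's exit; the data exchange itself is a plain memory read and write.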

(b) Message-Passing Systems

1. Useful for exchanging smaller amounts of data.
2. Easy to implement for inter-computer communication.
3. More time-consuming than shared memory, as message passing is implemented using system calls and needs kernel intervention.
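A pipe is a simple kernel-mediated message channel; a Python sketch of send/receive between a parent and child (the message text is arbitrary, and every step here is a system call, illustrating point 3):

```python
import os

r, w = os.pipe()                # the kernel-managed communication link
pid = os.fork()
if pid == 0:                    # child: the sending process
    os.close(r)
    os.write(w, b"hello")       # send(parent, message) -- a system call
    os.close(w)
    os._exit(0)

os.close(w)                     # parent: the receiving process
msg = os.read(r, 5)             # receive(child, msg): blocks until data arrives
os.waitpid(pid, 0)
print(msg.decode())
```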

To send messages, a communication link must exist between the processes. The link can be implemented physically or logically.

The different methods for logically implementing a link are:

1. Direct or indirect communication
2. Synchronous or asynchronous communication
3. Automatic or explicit buffering

        1a Direct communication

        A link is established automatically between every pair of processes that want to communicate

        A link is associated with exactly two processes

        Addressing

i) Symmetry in addressing: the sender and receiver processes must name each other to communicate. The send() and receive() primitives are:
   send(P, message) - send a message to process P
   receive(Q, message) - receive a message from process Q

ii) Asymmetry in addressing: only the sender names the receiver. The primitives are:
   send(P, message) - send a message to process P
   receive(id, message) - receive a message from any process

The disadvantage of both types of addressing is limited modularity: changing the identifier of a process requires finding all references to the old identifier and modifying them.

1b. Indirect communication

1. Messages are sent to and received from mailboxes (also called ports).
2. Each mailbox has a unique id (an integer value).
3. Two processes can communicate only if they have a shared mailbox.
4. A link is established between two processes if they have a shared mailbox.


5. A link may be associated with more than two processes.
6. A mailbox may be owned by a process or by the OS.

a. If the mailbox is owned by a process, we can distinguish between the owner (who can only receive messages) and the user (who can only send messages). When the process that owns a mailbox terminates, the mailbox disappears.

b. If the mailbox is owned by the OS, the OS must provide mechanisms to:
   i. Create a new mailbox
   ii. Send and receive messages through the mailbox
   iii. Delete the mailbox
   iv. Pass ownership to other processes

2. Synchronous or asynchronous communication

Message passing may be either blocking (synchronous) or non-blocking (asynchronous):

1. Blocking send: the sending process is blocked until the message is received by the receiving process or mailbox.
2. Non-blocking send: the sending process sends the message and resumes operation.
3. Blocking receive: the receiver blocks until a message is available.
4. Non-blocking receive: the receiver retrieves either a valid message or a null.

3. Automatic or explicit buffering

Messages exchanged by communicating processes reside in a temporary queue. Such queues can be implemented in three ways:

i. Zero capacity: queue length = 0; the link cannot have messages waiting in it, so the sender must block until the receiver receives the message.
ii. Bounded capacity: queue length is finite (say n); when the queue is full, the sender must block until space is available in the queue.
iii. Unbounded capacity: queue length is infinite; the sender never blocks.

===============================================================================

Q) What is deadlock

A set of processes is deadlocked when every process in the set is waiting for a resource that is currently allocated to another process in the set. Suppose, for example, that process P1 has been allocated resource R2 and process P2 has been allocated R1, while P1 requires R1 and P2 requires R2. Processes P1 and P2 will then wait forever. This situation is called a deadlock.

        Q) What are the four conditions that are necessary for deadlock to occur

        1 Mutual Exclusion - At least one resource must be held in a non-sharable mode If any other process requests this resource then that process must wait for the resource to be released

        2 Hold and Wait - A process must be simultaneously holding at least one resource and waiting for at least one resource that is currently being held by some other process

        3 No preemption - Once a process is holding a resource then that resource cannot be taken away from that process until the process releases it

4. Circular Wait - A set of processes {P0, P1, P2, ..., PN} must exist such that every P[i] is waiting for P[(i + 1) % (N + 1)].

Q) Methods for handling deadlocks

1. Use a deadlock-prevention or deadlock-avoidance protocol so that the system will never enter a deadlocked state.
2. Allow the system to enter a deadlocked state, detect it, and recover.
3. Ignore the problem and pretend that deadlocks never occur.

To make sure that the system never enters a deadlocked state, the system can use:

1. Deadlock prevention
2. Deadlock avoidance

        ==============================================================================

        Deadlock Prevention

1. Mutual Exclusion: we cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources are inherently non-sharable (e.g., a printer).

        2 Hold and Wait

To make sure that the hold-and-wait condition never occurs in the system, two protocols can be used:

Protocol 1: all the resources a process needs must be requested and allocated before the process begins execution.

Protocol 2: a process can request resources only when it has none. If a process requires additional resources, it must first release all the resources currently allocated to it.

Example: consider a process that copies data from a DVD drive to a file on disk, sorts the file, and then prints the results on a printer.

If Protocol 1 is used, the process must request the DVD drive, disk file, and printer at the beginning and hold them until the end.

Disadvantages:

1. Starvation: a process may wait forever, because at least one resource it needs may always be allocated to some other process.

2. Low resource utilization: the process holds the printer from beginning to end, even though the printer is used only at the end.

If Protocol 2 is used, the process initially requests the DVD drive and disk file. It copies from the DVD drive to the disk file and then releases both. It then requests the disk file and the printer.

Disadvantage: there is a chance that our data may not remain on the disk file between the two steps.

        3 No pre-emption

To make sure that this condition does not hold, the following protocol is used:

Protocol: if a process (say A) requests some resources:

Case 1: if the resources are available, allocate them.

Case 2: if the resources are allocated to some other process (say B) that is itself waiting for additional resources, preempt the desired resources from the waiting process B and allocate them to the requesting process A. Process B can be restarted only when it is re-allocated both the additional resources it was requesting and the resources that were taken away and given to process A.

Case 3: if the resources are neither available nor held by a waiting process, process A waits.

This protocol is applied to resources such as CPU registers and memory space, whose state can easily be saved and restored.

        4 Circular Wait

To make sure the circular-wait condition never occurs:

1. Each resource type is assigned a unique integer number.

2. Each process must request resources in an increasing order of enumeration.

We define a one-to-one function F: R → N, where R is the set of resource types and N is the set of natural numbers.

Suppose a process has been allocated a resource of type Ri.

Protocol 1: after that, the process can request a resource of type Rj if and only if F(Rj) > F(Ri).

Protocol 2: if a process requests a resource of type Rj, it must first release all resources Ri for which F(Ri) >= F(Rj).

Example: let F(tape drive) = 1, F(disk drive) = 5, and F(printer) = 12.

        A process can request any number of tape drives disk drives and printers

Protocol 1: if a process A has already been allocated a disk drive, A can now request only a printer; it cannot request a tape drive.

Protocol 2: in order to request a tape drive, process A must first release the disk drive, and only then request the tape drive.
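The ordering idea is exactly how lock ordering is used in practice: take locks only in increasing F() order, whatever order the caller names them in. A small Python sketch (the helper name and the use of threading.Lock for "resources" are illustrative; the F values come from the example above):

```python
import threading

# F assigns each resource type a unique number, as in the example above.
F = {"tape drive": 1, "disk drive": 5, "printer": 12}
locks = {name: threading.Lock() for name in F}

def acquire_in_order(names):
    """Acquire the named resources strictly in increasing F() order."""
    ordered = sorted(names, key=lambda n: F[n])
    for n in ordered:
        locks[n].acquire()
    return ordered

# Whatever order a process asks in, the locks are taken in F() order,
# so no circular wait can form among processes using this helper.
held = acquire_in_order(["printer", "tape drive", "disk drive"])
print(held)
for n in held:
    locks[n].release()
```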

If the above protocols are used, then the circular-wait condition can never occur. We can prove this by contradiction.

Proof: assume a circular wait exists. Let the set of processes involved in the circular wait be {P0, P1, ..., Pn}, where P0 is waiting for resource R0, which is held by P1.

P1 holds R0 and is waiting for resource R1, which is held by P2; so F(R0) < F(R1).

P2 holds R1 and is waiting for resource R2, which is held by P3; so F(R1) < F(R2).


Pn holds Rn-1 and is waiting for resource Rn, which is held by P0; so F(Rn) < F(R0).

Hence, by transitivity, F(R0) < F(R0), which is impossible. Hence our assumption that a circular wait exists is false.

===============================================================================

Q) Resource Allocation Graph

Deadlocks can be understood more clearly through the use of resource-allocation graphs, which have the following properties:

1. Resource types are represented as square nodes on the graph. Dots inside a square node indicate the number of instances of that resource (e.g., two dots might represent two laser printers).
2. Processes are represented as circles.
3. Request edges: if P1 has requested R1, a directed edge from P1 to R1 is a request edge.
4. Assignment edges: a directed edge from R2 to P1 indicates that resource R2 has been allocated to process P1 and that P1 is currently holding R2. Note that a request edge is converted into an assignment edge when the request is granted.

        If a resource-allocation graph contains a cycle AND each resource type has only a single instance then a deadlock exists If a resource type has more than one instance a cycle in the resource-allocation graph indicates the possibility of a deadlock but does not guarantee one
        ==================================================================
        Q) Deadlock Avoidance
        For each resource request the system can decide whether the request should be granted or not To make this decision the system must have information like

        1 resources currently available
        2 resources currently allocated to each process
        3 future requests and releases of each process
        4 maximum number of resources each process may need

        Given this information it is possible to construct an algorithm that ensures the system will never enter a deadlocked state There are two deadlock-avoidance algorithms
        1 Resource-Allocation Graph Algorithm
        2 Banker's Algorithm

        Safe State A system is in a safe state if there exists a safe sequence of processes <P0 P1 P2 ... Pn> such that for each Pi the resources Pi may still request can be satisfied by the currently available resources plus the resources held by all Pj with j < i All safe states are deadlock free

        Unsafe state If a safe sequence does not exist then the system is in an unsafe state which MAY lead to deadlock

        1 Resource-Allocation Graph Algorithm This algorithm can be used only if there is a single instance of each resource type Unsafe states are recognized and avoided by adding claim edges denoted by dashed lines which point from a process to a resource it may request in the future All claim edges are added before the process starts When the process actually requests the resource the claim edge Pi -> Rj is converted to a request edge and when the resource is released the assignment edge changes back to a claim edge A request is granted only if converting the request edge to an assignment edge does not produce a cycle in the graph (taking claim edges into account)

        Consider for example the resource allocation graph as shown

        If P2 requests resource R2 then the claim edge P2 -> R2 is converted to a request edge as follows

        The resulting resource-allocation graph would have a cycle in it so the request cannot be granted
        ===============================================================================
        Q) Banker's Algorithm or Deadlock avoidance algorithm with example
        Ans There are 12 tape drives Let the current state of the system be as shown below

        Process   Allocated   Max Need   Need = Max Need - Allocated
        P0        5           10         5
        P1        2           4          2
        P2        2           9          7

        Available = 12 - (5+2+2) = 3
        Resource-Request Algorithm
        Now when process P2 requests 1 more tape drive we run the resource-request algorithm to check whether the request should be granted The request is granted only if after granting it all the processes in the system can still complete For that we check
        1 Is the request of P2 <= Need of P2
          1 <= 7 therefore TRUE
        2 Is the request of P2 <= Available
          1 <= 3 therefore TRUE
        3 Pretend the request is granted for P2
        Now the current state is as shown below

        Process   Allocated   Max Need   Need = Max Need - Allocated
        P0        5           10         5
        P1        2           4          2
        P2        2+1=3       9          7-1=6

        Available = 3-1 = 2
        Now run the safety algorithm to check whether the system is in a safe state

        Safety Algorithm
        1 Let WORK = Available = 2
        2 Find an unfinished process such that Need of that process <= WORK

        Check P0 Need of P0 = 5 Work = 2 Is 5 <= 2 FALSE

        Check P1 Need of P1 = 2 Work = 2 Is 2 <= 2 TRUE Therefore P1 can finish
        If P1 finishes Work = Work + Allocated to P1 = 2 + 2 = 4

        Now again check if P0 can complete Need of P0 = 5 Work = 4 Is 5 <= 4 FALSE

        Check if P2 can complete Need of P2 = 6 Work = 4 Is 6 <= 4 FALSE
        So neither P0 nor P2 can complete Hence the system is in an unsafe state
        The request for 1 tape drive by P2 is not granted
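        The walkthrough above can be checked with a small sketch of the safety algorithm for a single resource type (illustrative Python, not part of the original notes):

```python
# Illustrative safety algorithm for one resource type, checking the
# 12-tape-drive example: allocations 5/2/2, needs 5/2/7.
def is_safe(available, allocated, need):
    work, finish = available, [False] * len(allocated)
    progress = True
    while progress:
        progress = False
        for i in range(len(allocated)):
            if not finish[i] and need[i] <= work:
                work += allocated[i]    # process i finishes, releases all
                finish[i] = True
                progress = True
    return all(finish)

assert is_safe(3, [5, 2, 2], [5, 2, 7])        # before the request: safe
assert not is_safe(2, [5, 2, 3], [5, 2, 6])    # after granting P2: unsafe
```

        The safe state admits the sequence P1, P0, P2; after pretending to grant P2's request no process other than P1 can finish, which is exactly the unsafe state found above.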

        2 Banker's Algorithm For resources that have more than one instance the resource-allocation graph method does not

        work so we use the Banker's algorithm When a process starts up it must state in advance the maximum allocation of resources it may

        request When a request is made the scheduler determines whether granting the request would leave the

        system in a safe state If not then the process must wait until the request can be granted safely The Banker's algorithm relies on several key data structures (where n is the number of processes and

        m is the number of resource categories )
        o Available[ m ] indicates how many resources of each category are currently available
        o Max[ n ][ m ] indicates the maximum demand of each process for each resource category
        o Allocation[ n ][ m ] indicates the number of resources of each category allocated to each process
        o Need[ n ][ m ] indicates the remaining resources needed of each category for each process ( Note that Need[ i ][ j ] = Max[ i ][ j ] - Allocation[ i ][ j ] for all i j )

        For simplification of discussions we make the following observations
        o One row of the Need matrix Need[ i ] can be treated as a vector corresponding to the needs of process i and similarly for Allocation and Max

        Safety Algorithm In order to apply the Banker's algorithm we first need an algorithm for determining whether or not a

        particular state is safe This algorithm determines if the current state of a system is safe according to the following steps

        1 Let Work and Finish be vectors of length m and n respectively Work is a working copy of the available resources Finish is a vector of booleans indicating whether a particular process can finish Initialize Work = Available and Finish to false for all elements

        2 Find an i such that both (A) Finish[ i ] == false and (B) Need[ i ] <= Work This process has not finished but could finish with the given available working set If no such i exists go to step 4

        3 Set Work = Work + Allocation[ i ] and set Finish[ i ] to true This corresponds to process i finishing up and releasing its resources back into the work pool Then loop back to step 2

        4 If finish[ i ] == true for all i then the state is a safe state because a safe sequence has been found

        Resource-Request Algorithm ( The Bankers Algorithm ) Now we have a tool for determining if a particular state is safe or not This algorithm determines if a new request is safe and grants it only if it is safe to do so When a request is made ( that does not exceed currently available resources )

        pretend it has been granted and then see if the resulting state is a safe one If so grant the request and if not deny the request as follows

        1 Let Request[ n ][ m ] indicate the number of resources of each type currently requested by processes If Request[ i ] > Need[ i ] for any process i raise an error condition since the process has exceeded its maximum claim

        2 If Request[ i ] > Available then process i must wait for resources to become available

        else the process can continue to step 3

        3 Check whether the request can be granted safely by pretending it has been granted and then seeing if the resulting state is safe
        If the resulting state is safe

        grant the request
        else

        the process must wait until its request can be granted safely
        The procedure for granting a request (or pretending to for testing purposes) is

        Available = Available - Request

        Allocation = Allocation + Request Need = Need - Request
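        The full algorithm can be sketched for multiple resource types, following the Available / Allocation / Need structures described above (an illustrative Python sketch, not part of the original notes; function names are hypothetical):

```python
# Illustrative multi-resource Banker's algorithm. Rows are processes,
# columns are resource categories.
def is_safe(available, allocation, need):
    work, finish = available[:], [False] * len(allocation)
    changed = True
    while changed:
        changed = False
        for i in range(len(allocation)):
            if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True            # process i can finish
                changed = True
    return all(finish)

def grant(i, req, available, allocation, need):
    """Resource-request algorithm: pretend to grant, then test safety."""
    if any(r > n for r, n in zip(req, need[i])):
        raise ValueError("process exceeded its maximum claim")
    if any(r > a for r, a in zip(req, available)):
        return False                        # must wait: not enough available
    avail = [a - r for a, r in zip(available, req)]
    alloc = [row[:] for row in allocation]
    nd = [row[:] for row in need]
    alloc[i] = [a + r for a, r in zip(alloc[i], req)]
    nd[i] = [n - r for n, r in zip(nd[i], req)]
    return is_safe(avail, alloc, nd)

# The 12-tape-drive example expressed as one-element vectors:
assert grant(1, [1], [3], [[5], [2], [2]], [[5], [2], [7]])       # safe
assert not grant(2, [1], [3], [[5], [2], [2]], [[5], [2], [7]])   # unsafe
```

        Granting one more drive to P1 leaves the safe sequence P1, P0, P2 intact, while granting it to P2 produces the unsafe state worked out earlier.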

        Unit III Memory management

        Just as processes share the CPU they also share physical memory The memory management unit of the OS takes care of memory allocation deallocation and other issues A program must be brought into memory for it to run Addresses are of two types

        i) Relocatable or relative addresses relative to the beginning of the program
        ii) Absolute addresses

        Q) Address Binding
        Ans Binding means mapping the logical address space to the physical address space Address binding can happen at three different stages
        Compile time If it is known at compile time where in memory the program will reside the compiler generates absolute addresses

        Otherwise the compiler generates relocatable addresses

        Load time The loader binds the relocatable addresses generated by the compiler to absolute addresses hence binding is done at load time If binding is done at compile time or load time the physical and logical addresses are the same

        Execution time If address binding is done at run time the process can be moved during its execution from one memory segment to another Here logical addresses are called virtual addresses Run-time mapping of virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU)

        Q) Logical vs Physical Address Space
        Ans Logical address - generated by the CPU also referred to as virtual address
        Physical address - address seen by the memory unit
        The set of all logical addresses is called the logical address space The set of all physical addresses is called the physical address space

        Q) Memory-Management Unit (MMU)
        Ans The MMU is a hardware device that maps virtual to physical addresses In the MMU scheme the value in the relocation register is added to every address generated by the CPU before it is sent to memory The user program deals only with logical addresses it never sees the real physical addresses

        Q) Dynamic Loading
        Ans Since physical memory is small it may not be possible for the entire program to be in main memory so we can use dynamic loading It is the responsibility of users to design their programs to take advantage of dynamic loading Only the main function is loaded into main memory When main() calls another function it checks whether that function is in main memory If not the loader loads the desired function into main memory and updates the program's address table

        Routine is not loaded until it is called We achieve better memory-space utilization as unused routine is never loaded

        Q) Dynamic Linking
        + Linking is postponed until execution time
        + A small piece of code called a stub is used to locate the appropriate memory-resident library routine
        + The stub replaces itself with the address of the routine and executes the routine
        + The operating system checks whether the routine is in the process's memory address space
        + Dynamic linking is particularly useful for libraries

        Q) Overlays
        Ans Overlays are needed when a process is larger than the amount of main memory allocated to it Only the instructions and data that are needed at any given time are kept in main memory Overlays are implemented by the user and programming the overlay structure is complex Overlays for a two-pass assembler are as shown in the figure

        Q) Swapping
        Ans A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution
        Backing store - a fast disk large enough to store copies of all memory images of all users
        Priority-based scheduling uses a variant of the swapping policy called roll out roll in If a higher priority process arrives the memory manager swaps out a lower priority process and swaps in the higher priority process When the higher priority process finishes the lower priority process can be swapped back into main memory A process that is swapped out will be swapped back into the same memory space it occupied previously The major part of the swap time is transfer time We can swap an idle process only and cannot swap a process that is waiting for IO

        Q) Contiguous Memory Allocation
        Ans Each process is contained in a single contiguous section of memory
        1 Fixed Size Partition (or) Single-partition allocation

        Divide the main memory into fixed sized partitions Each partition may contain exactly one process The relocation register contains the value of the starting physical address and the limit register contains the range of logical addresses Every address generated by the CPU is checked as follows

        If logical address < limit register then the logical address is added to the relocation register to get the corresponding memory address

        else a trap to the OS is generated Since every address is checked we can protect the OS and other user programs from being modified by the running process
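        The hardware check above can be sketched as (an illustrative Python helper, not part of the original notes; the register values are hypothetical):

```python
# Illustrative sketch of the relocation + limit register check performed
# by hardware on every CPU-generated address.
def translate(logical, relocation, limit):
    if logical >= limit:            # address outside the partition
        raise MemoryError("trap to OS: logical address out of range")
    return relocation + logical     # base + offset = physical address

# A partition starting at 14000 with a 1000-address logical range:
assert translate(100, 14000, 1000) == 14100
```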

        2 Multiple-partition allocation

        1) Fixed Size Partitions Divide the main memory into Fixed sized blocks Here memory allocated to a process may be larger than required The difference between allocated memory and requested memory is called internal fragmentation Internal fragmentation is unused memory inside the partition

        0   P1   Block0 (0th address to 3rd address)
        4   P1   Block1 (4th address to 7th address)
        8        Block2 (8th address to 11th address)
        12       Block3 (12th address to 15th address)

        If P1 requires just 5 addresses it is still allocated 2 blocks ie 8 addresses so the remaining 3 addresses are left unused

        2) Variable Size Partitions Here main memory is divided into partitions of variable sizes The operating system maintains information about

        a) allocated partitions b) free partitions (hole)

        In the beginning all main memory is empty and is considered one large block of available memory a hole

        Exact memory required by process is only given When a process terminates it releases memory which can be allocated to another process

        Memory is allocated to processes until finally no available block of memory (or hole) is large enough to hold the next process
        External fragmentation exists when there is enough total memory to satisfy a request but the available memory is not contiguous it is wastage of memory outside the partitions
        One solution to the problem of external fragmentation is

        a Compaction shuffle the memory contents so as to place all free memory together into one large block

        ===============================================================================Q) Dynamic Storage-Allocation Problem (or) Most commonly used strategies to select a free hole from the set of available holesAns To satisfy a request of size n from a list of free holes below 3 policies can be used

        1 First-fit Allocate the first hole that is big enough

        2 Best-fit Allocate the smallest hole that is big enough must search entire list if it is not ordered by size Produces the smallest leftover hole

        3 Worst-fit Allocate the largest hole must also search the entire list Produces the largest leftover hole
        First-fit and best-fit are better than worst-fit in terms of speed and storage utilization
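        The three strategies can be sketched as (illustrative Python functions, not part of the original notes; the hole sizes are hypothetical). Each returns the index of the chosen hole, or None if no hole is large enough:

```python
# Illustrative first-fit / best-fit / worst-fit hole selection.
def first_fit(holes, n):
    for i, h in enumerate(holes):
        if h >= n:
            return i                         # first hole big enough
    return None

def best_fit(holes, n):
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None    # smallest adequate hole

def worst_fit(holes, n):
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return max(fits)[1] if fits else None    # largest hole

holes = [100, 500, 200, 300, 600]
assert first_fit(holes, 212) == 1    # 500 is the first hole that fits
assert best_fit(holes, 212) == 3     # 300 leaves the smallest leftover
assert worst_fit(holes, 212) == 4    # 600 leaves the largest leftover
```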

        ===============================================================================Q) Non-Contiguous Memory allocation

        1 Paging Paging is a memory management scheme that provides non-contiguous memory allocation

        0   P1   Block0 (0th address to 3rd address)
        4   P2   Block1 (4th address to 7th address)
        8        Block2 (8th address to 11th address)
        12  P1   Block3 (12th address to 15th address)

        Logical address space of a process can be noncontiguous Ex P1 is allocated Block0 and Block3

        1 Divide physical memory into fixed-sized blocks called frames (size is a power of 2)
        2 Divide logical memory into blocks of the same size called pages
        3 The logical address is divided into 2 parts

        Page number (p) - the page table is indexed by page number
        Page offset (d) - displacement within the page

        4 A page table is allocated to each process (a pointer to the page table is stored in the PCB of the process) The page table translates logical to physical addresses eg page 0 is in frame 1 page 1 is in frame 4 etc
        5 Internal fragmentation may occur due to paging

        6 If the size of the logical address space = 2^m and the page size = 2^n then the higher order m-n bits of the logical address give the page number and the n lower order bits give the displacement within the page

        Ex size of logical address space = 8 = 2^3 so m = 3

        page size = 4 = 2^2 so n = 2

        So m-n = 1 bit for the page number (to represent pages 0 and 1)

        and the remaining n = 2 bits for displacement within the page
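        Because the page size is a power of 2, the split is just a shift and a mask (an illustrative Python sketch, not part of the original notes):

```python
# Illustrative split of a logical address into page number (high m-n bits)
# and offset (low n bits).
def split_address(logical, n):
    page = logical >> n                 # high-order bits: page number
    offset = logical & ((1 << n) - 1)   # low-order n bits: displacement
    return page, offset

# m = 3, n = 2 as in the example above: pages of 4 addresses
assert split_address(0b110, 2) == (1, 2)   # address 6 -> page 1, offset 2
assert split_address(0b011, 2) == (0, 3)   # address 3 -> page 0, offset 3
```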

        Q) Implementation of Page Table

        Page table can be kept as

        1) Set of dedicated registers efficient if the page table is small

        2) Page table can be kept in memory The page-table base register (PTBR) points to the page table and the page-table length register (PTLR) indicates the size of the page table Changing page tables requires changing only the values in these registers
        Advantage of keeping the page table in memory less context switch time

        Disadvantage two memory accesses are required one memory access to read the page table entry and another to access the required memory address

        This problem can be solved by using a special fast-lookup hardware cache called associative memory or translation look-aside buffers(TLBs)

        i Each entry in TLB consists of 2 parts

        I) page number field

        II) Value field or frame number field

        ii When a logical address is generated by CPU its page number is presented to TLB The page number is compared with all the entries of TLB simultaneously

        If the page number is found in the TLB its frame number is immediately available -> TLB hit
        If the page number is not found check the page table -> TLB miss

        By using a TLB the search is fast but TLB hardware is expensive so the TLB is kept small and contains only a few frequently used page table entries
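        A TLB can be modeled as a small cache consulted before the page table (an illustrative Python sketch, not part of the original notes; the mappings are hypothetical):

```python
# Illustrative model of a TLB in front of a page table.
page_table = {0: 1, 1: 4, 2: 3, 3: 7}   # page number -> frame number
tlb = {}                                # small translation cache

def lookup(page):
    """Return (frame, hit) where hit tells whether the TLB answered."""
    if page in tlb:
        return tlb[page], True            # TLB hit: no page-table access
    frame = page_table[page]              # TLB miss: extra memory access
    tlb[page] = frame                     # cache the translation
    return frame, False

assert lookup(1) == (4, False)   # first reference misses
assert lookup(1) == (4, True)    # repeated reference hits
```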

        Q) Protection in Paging A valid/invalid bit is attached to each entry in the page table The OS sets this bit for each page to allow or disallow access to the page When the bit is set to invalid the page is not in the process's logical address space so any access generates a trap to the OS

        Q) Shared pages in Paging

        Reentrant code is the code that never changes during execution Reentrant code can be shared

        1 One copy of read-only (reentrant) code is shared among processes (ie text editors compilers window systems) Ex ed1 ed2 ed3 are shared among processes P1 and P2

        2 Shared code must appear in same location in the logical address space of all processes

        Each process keeps a separate copy of the private code and data EX data1 and data2

        The pages for the private code and data can appear anywhere in the logical address space

        ==============================================================================Q) Segmentation

        Segmentation is a memory management scheme that support userrsquos view of memory

        When the user program is compiled the compiler generates segments like

        1) The code segment

        2) Global variables segment

        3) Heap memory segment

        4) Stack segment etc

        Each entry in the segment table has

        a segment base (the starting physical address of the segment) and a segment limit (which specifies the length of the segment)

        Logical address is divided into 2 parts

        segment-number offset

        Segment table is indexed by segment number

        Segment-table base register (STBR) stores the location of segment table in main memory

        Segment-table length register (STLR) stores number of segments used by a program

        The segment number (say s) is used to find the entry in the segment table The required entry is s entries away from the beginning of the segment table Once the required entry in the segment table is found the offset (d) is compared with the limit

        If (offset < limit) then the offset is added to the base to generate the physical address else a trap to the OS is generated
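        The translation above can be sketched as (an illustrative Python sketch, not part of the original notes; the (base, limit) values are hypothetical textbook-style numbers):

```python
# Illustrative segmentation translation with a (base, limit) table.
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]

def translate(s, d):
    base, limit = segment_table[s]
    if d >= limit:                       # offset beyond the segment
        raise MemoryError("trap to OS: offset exceeds segment limit")
    return base + d

assert translate(2, 53) == 4353    # segment 2, offset 53
assert translate(0, 999) == 2399   # largest legal offset in segment 0
```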

        Q) Shared Segments Code sharing occurs at the segment level Shared segments must have the same segment number

        Allocation this is the dynamic storage allocation problem use best fit or first fit which may cause external fragmentation

        Protection protection bits are associated with segments giving read/write/execute privileges Placing an array in a separate segment lets the hardware check for illegal array indexes

        ===============================================================================
        Q) Virtual Memory

        Ans In paging and segmentation a program will execute only if the entire process is in main memory But here in virtual memory only part of the program needs to be in memory for execution

        1 A program that is larger than the available main memory can still run as only part of the program is loaded into main memory The logical address space can therefore be much larger than the physical address space

        2 Allows address spaces to be shared by several processes

        Virtual memory can be implemented via Demand paging Demand segmentation

        The large blank space between the heap and the stack is part of the virtual address space Virtual memory allows files and memory to be shared by two or more processes through page sharing as shown in figure (b) Figure (a) depicts virtual memory that is larger than physical memory

        ==============================================================================

        Q) Demand Paging
        Demand paging is the technique of loading pages from disk into main memory only when a page is needed Hence less physical memory is used and response is faster

        Demand paging is similar to PAGING + SWAPPING

        Instead of swapping a whole process the pager brings only those pages needed into memory

        To distinguish between pages that are in memory and pages that are on disk a valid/invalid bit is used

        A valid/invalid bit is attached to each entry in the page table When this bit is set to valid the page is legal and in memory When this bit is set to invalid either the page is illegal or the page is on disk

        When the CPU generates a logical address whose page table entry is set to invalid a page fault occurs If the address is legal the required page is on disk A section of the hard disk called swap space holds the pages that are not present in memory

        Q) Procedure for Handling a Page Fault

        1) The CPU generates a logical address and if the valid/invalid bit for the page containing this address is set to invalid a page fault occurs

        2) Page fault causes a trap to operating system

        3) Check whether the logical address is within the logical address space of the process ie the PTBR and PTLR are checked
        if (the required logical address is not within the logical address space of the process)
            terminate the process
        else
            the page is legal but not in memory it is on disk

        4) To bring the required page into memory find a free main memory frame locate the page in the swap space on disk and read that disk block into the free frame
        5) When the required page has been brought into memory update the page table to indicate that the page is in memory
        6) Restart the instruction that was interrupted by the trap The process continues as if the page had always been in memory

        Q) What is pure demand pagingAns If no page belonging to the executing process is in main memory the process will fault for every page it needs Page faults will occur until every page that is needed is in memory This scheme is known as pure demand paging NEVER BRING A PAGE UNTIL IT IS REQUIRED

        Q) Page Replacement
        1 Find the location of the desired page on the disk
        2 Find a free frame
        i) If there is a free frame use it
        ii) If there is no free frame use a page replacement algorithm to select some page in memory that is not really in use as the victim
        iii) If the modify bit associated with the victim frame is set to 1 the page is written to disk Change the valid/invalid bit of the victim page's page table entry to invalid to indicate that the page is no longer in memory
        else there is no need to write the victim to disk
        3 Read the desired page into the freed frame and update the frame and page tables
        4 Restart the user process

        =================================================================================================
        Q) Page Replacement Algorithms
        Goal produce a low page-fault rate The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string

        1 First in First out (FIFO) Page replacement algorithm Uses the time when a page was brought into memory The page that was brought first into memory is replaced ie oldest page is chosen for replacement We create a FIFO queue to hold all pages in memory

        When a page is brought into memory we insert it at the tail of the queue and we replace the page at the head of the queue
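        The policy can be simulated with a short sketch (illustrative Python, not part of the original notes); it also demonstrates Belady's anomaly on the classic reference string 1 2 3 4 1 2 5 1 2 3 4 5:

```python
from collections import deque

# Illustrative FIFO page-replacement simulation.
def fifo_faults(refs, frames):
    memory, order, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:            # no free frame
                memory.discard(order.popleft())  # evict the oldest page
            memory.add(page)
            order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
assert fifo_faults(refs, 3) == 9
assert fifo_faults(refs, 4) == 10   # more frames yet MORE faults
```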

        FIFO 15 page faults
        A problem with FIFO is Belady's anomaly For the reference string
        1 2 3 4 1 2 5 1 2 3 4 5
        when 3 frames are allocated to the process FIFO causes 9 page faults but when 4 frames are allocated it causes 10 page faults As the number of frames allocated to a process increases page faults should decrease but with FIFO replacement page faults can increase
        2 Optimal Page Replacement Algorithm
        Replace the page that will not be used for the longest period of time This algorithm is used for measuring how well other algorithms perform It is not practical to implement OPT as it is not possible to know what the future references will be Ex

        Optimal page replacement 9 page faults

        3 Least Recently used (LRU) Page replacement algorithm Replace the page that has not been used for longest period of time LRU makes use of time of last use of the page Ex

        Number of page faults = 12

        LRU requires hardware assistance to determine the time of last use of the page We can use 1) Stack or 2) Counter to implement LRU page replacement algorithm

        1) Stack implementation keep a stack of page numbers in a doubly linked form When a page is referenced it is moved to the top of the stack This implementation requires up to 6 pointers to be changed

        2) Counter implementation A time-of-use field is added to each page table entry The CPU maintains a counter that is incremented for every memory reference When a page is referenced the counter value is copied into the time-of-use field of that page
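        The LRU policy itself (independent of the stack or counter hardware) can be simulated with an ordered dictionary whose insertion order tracks recency (an illustrative Python sketch, not part of the original notes):

```python
from collections import OrderedDict

# Illustrative LRU simulation: the least recently used page sits at the
# front of the OrderedDict, the most recently used at the back.
def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)        # referenced: now most recent
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict least recently used
            memory[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
assert lru_faults(refs, 3) == 12
```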

        Disadvantages of the LRU page replacement algorithm updating the counter or stack must be done for every memory reference HENCE LRU IS SLOW and its implementation requires hardware assistance
        ==============================================================================
        Q) LRU Approximation Page Replacement
        1) Using a reference bit
        2) Additional-reference-bits algorithm
        3) Second-chance algorithm
        4) Enhanced second-chance algorithm
        5) Counting-based page replacement

        Has two schemes1) Least Frequently Used (LFU)2) Most Frequently Used (MFU)

        6) Page Buffering Algorithms
        Q) What is Thrashing
        Ans A process is thrashing if it is spending more time paging than executing Thrashing may occur when global (or) local page replacement is used as described below The solution to thrashing is the working set strategy If we provide a process with as many frames as it needs thrashing can be avoided Since it is not possible to know how many frames a process needs we use the locality model of process execution

        Here a locality is a set of pages that are actively used together The locality model states that as a process executes it moves from locality to locality If we allocate enough frames to a process to accommodate the size of its current locality it will not fault again until it changes locality
        The working set model uses a working set and a parameter ∆ = the working-set window
        The set of pages in the most recent ∆ page references is the working set
        Let WSSi be the working-set size for process Pi ie Pi needs WSSi frames
        Let D be the total demand for frames then D = Σ WSSi

        and let m be the total number of available frames

        If D > m thrashing occurs because some processes will not have enough frames So
        if (D > m)
            the OS selects a process to suspend
        else
            the OS may initiate another process
        The working set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible The main difficulty is keeping track of the moving working-set window
        ========================================================================
        Q) How does thrashing occur in global and local page replacement
        Ans In global page replacement when a page fault occurs for a process any frame in main memory can be replaced causing a page fault for another process that needs the replaced frame Faulting processes must use the paging device to swap pages in and out As more processes queue up for the paging device

        the ready queue empties and the CPU becomes idle So the OS introduces new processes since the CPU is free which further increases the number of page faults

        In local page replacement each process is allocated certain frames in main memory and when a page fault occurs only frames allocated to that process are replaced When a process starts thrashing other processes are not directly affected But since processes queue up for the paging device the average service time for a page fault increases Hence the effective access time increases even for a process that is not thrashing
        Q) Page Table Structure Or Structure of the page table in memory

        Hierarchical Paging Hashed Page Tables Inverted Page Tables

        Hierarchical Paging If the page table is large we break it up into multiple page tables ie the page table itself is paged A simple technique is a two-level page table
        Two-Level Paging Example A logical address (on a 32-bit machine with 4K page size) is divided into
        1 a page number consisting of 20 bits
        2 a page offset consisting of 12 bits
        Since the page table is paged the page number is further divided into
        1 a 10-bit page number
        2 a 10-bit page offset
        Thus a logical address is as follows

        page number | page offset
        p1 | p2 | d
        10 | 10 | 12

        where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table Two-Level Page-Table Scheme

        Address-Translation Scheme Address-translation scheme for a two-level 32-bit paging architecture

        Hashed Page Tables (or) Hash Table Hashed page tables are common in address spaces > 32 bits Each entry in the hash table contains a linked list of elements that hash to the same location The hash table is indexed by hash value Each element consists of 3 fields

        1 Page number2 Frame number3 Pointer to next element in linked list

        The algorithm works as follows Page number is passed to hash function to get hash value Locating the hash value in hash table is easy as Hash table is indexed by hash value Page number is then compared with field 1 in the first element in linked list If there is a match the corresponding frame number is used to get physical address If there is no match next entries in linked list are searched
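        The chained lookup described above can be sketched as (an illustrative Python class, not part of the original notes; a simple modulo stands in for the hash function):

```python
# Illustrative hashed page table: each bucket holds a chain of
# (page, frame) pairs that hash to the same slot.
class HashedPageTable:
    def __init__(self, buckets):
        self.buckets = buckets
        self.table = [[] for _ in range(buckets)]

    def insert(self, page, frame):
        self.table[page % self.buckets].append((page, frame))

    def lookup(self, page):
        for p, f in self.table[page % self.buckets]:  # walk the chain
            if p == page:
                return f
        return None                                   # unmapped: page fault

pt = HashedPageTable(4)
pt.insert(3, 10)
pt.insert(7, 11)                 # 7 % 4 == 3: chained behind page 3
assert pt.lookup(3) == 10
assert pt.lookup(7) == 11
assert pt.lookup(5) is None
```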

        Inverted Page Table

        An inverted page table has one entry for each frame of physical memory Each entry in the inverted page table has

        Process-id Page number

        The logical address also has 3 parts process-id page number and

        offset The process-id and page number of the logical address are compared with each entry in the inverted page table If a match is found at the ith entry then frame number i combined with the offset gives the physical address If there is no match the address is illegal
        Advantages and disadvantages

        Decreases the memory needed to store page tables but increases the time needed to search the table when a page reference occurs Solution use a hash table to limit the search to one or at most a few page-table entries
        Inverted Page Table Architecture

        Unit-IV CPU Scheduling

        Basic concepts: To maximize CPU utilization, some process must be running at all times.

        In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for IO, the OS takes the CPU away from that process and assigns the CPU to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

        CPU-IO burst cycle: Process execution begins with a CPU burst, which is followed by an IO burst, which is followed by another CPU burst, then another IO burst, and so on, as shown in the figure.

        An IO bound program has many short CPU bursts A CPU-bound program has few long CPU bursts

        Schedulers: modules in the OS that make scheduling decisions.
        - Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
        - Medium-term scheduler: selects which processes should be swapped in/out of memory.
        - Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (CPU) scheduler.

        Dispatcher: gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

        1. switching context
        2. switching to user mode
        3. jumping to the proper location in the user program to restart that program

        The time taken to stop one process and start another is known as dispatch latency; this must be kept small.
        Ready Queue: The ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of the processes.

        Pre-emptive Scheduling

        CPU scheduling decisions take place under one of four conditions

        1. When a process switches from the running state to the waiting state
        2. When a process switches from the running state to the ready state, for example in response to an interrupt
        3. When a process switches from the waiting state to the ready state, say at completion of IO
        4. When a process terminates

        For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one. If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive or cooperative. Under non-preemptive scheduling, a process runs until it itself gives up the CPU, i.e. when it waits for IO or when it completes. Otherwise the system is said to be pre-emptive.

        Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

        Note that pre-emptive scheduling can cause problems when two processes share data because one process may get interrupted in the middle of updating shared data structures

        Preemption can also be a problem if the kernel is busy implementing a system call ( eg updating critical kernel data structures ) when the preemption occurs Solution wait until the system call has either completed or blocked before allowing the pre-emption But this solution is problematic for real-time systems

        Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. But this should be done only for very short pieces of code that take little CPU time.

        Scheduling criteria (used to compare CPU scheduling algorithms):
        1. CPU utilization: in a real system, CPU utilization ranges from 40% to 90%.
        2. Throughput: the number of processes completed per unit time.
        3. Turnaround time: the time when the process completed minus the time when the process was submitted, i.e. it includes waiting time and burst time.
        4. Waiting time: the amount of time a process spends waiting for the CPU.
        5. Response time: the time when the first response came minus the time when the process was submitted.

        Optimization criteria: max CPU utilization, max throughput, min turnaround time, min waiting time, min response time.

        CPU Scheduling Algorithms
        1) First-Come First-Served (FCFS) Scheduling: FCFS is non-pre-emptive. Processes are scheduled in the order they arrive. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
        Advantages: easy to implement.
        Disadvantages: 1. The average waiting time is quite long.

        2. Convoy effect (it occurs as small processes wait in the queue for a big process to leave the CPU).
        Example: Process / Burst Time: P1 24, P2 3, P3 3

        Suppose that the processes arrive in the order P1 P2 P3

        The Gantt Chart for the schedule is P1 P2 P3

        0 24 27 30. Waiting time: P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.
        2) Shortest-Job-First (SJF) Scheduling: schedule the process with the shortest burst time. If the CPU burst times of two processes are the same, FCFS is used.
        Advantages: the average waiting time decreases.
        Disadvantages: it is difficult to know the length of the next CPU burst.
        SJF can be either pre-emptive or non-pre-emptive:
        1. Non-pre-emptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
        2. Pre-emptive SJF, or Shortest-Remaining-Time-First (SRTF): if a newly arrived process is shorter than what remains of the currently executing process, pre-empt the currently executing process.
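The FCFS waiting-time computation above (each process waits for the total burst time of all processes ahead of it, when everything arrives at time 0) can be sketched with a short helper; the function name is illustrative:

```c
/* FCFS waiting times for processes that all arrive at time 0:
   wait[i] is the sum of the bursts of the i processes ahead of it. */
void fcfs_waits(const int burst[], int n, int wait[]) {
    wait[0] = 0;
    for (int i = 1; i < n; i++)
        wait[i] = wait[i - 1] + burst[i - 1];
}
```

For bursts {24, 3, 3} this reproduces the waiting times 0, 24, 27 and the average of 17.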

        Example:
        Process  Arrival Time  Burst Time
        P1       0.0           7
        P2       2.0           4
        P3       4.0           1
        P4       5.0           4

        SJF (non-preemptive) Gantt Chart P1 P3 P2 P4

        0 7 8 12 16. Average waiting time = [0 + (8-2) + (7-4) + (12-5)]/4 = 4.

        Example of Preemptive SJF

        Process  Arrival Time  Burst Time
        P1       0.0           7
        P2       2.0           4
        P3       4.0           1
        P4       5.0           4

        SJF (preemptive) P1 P2 P3 P2 P4 P1

        0 2 4 5 7 11 16. Average waiting time = (9 + 1 + 0 + 2)/4 = 3.
        3) Priority Scheduling: a priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. pre-emptive 2. non-pre-emptive.

        Problem: starvation, i.e. low-priority processes may never execute. Solution: aging, i.e. as time progresses, increase the priority of waiting processes.

        4) Round Robin (RR): each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
        Performance: 1. if q is large, RR behaves like FIFO; 2. if q is small, q must still be large with respect to the context-switch time, otherwise the overhead is too high.
        Example of RR with time quantum = 4: Process / Burst Time: P1 24, P2 3,

        P3 3. The Gantt chart is: P1 P2 P3 P1 P1 P1 P1 P1

        0 4 7 10 14 18 22 26 30. Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 ≈ 5.66.
        5) Multilevel Queue Scheduling: the ready queue is partitioned into separate queues. For example, the ready queue is divided into 2 queues:
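The RR example above can be checked with a small simulation for the simple case where all processes arrive at time 0 (so cycling over the process array in order matches the ready-queue order); the helper name and the 16-process cap are illustrative assumptions:

```c
#include <string.h>

/* Round-robin simulation, all processes arriving at time 0.
   wait[i] = completion time - burst time for process i.
   Supports up to 16 processes for this sketch. */
void rr_waits(const int burst[], int n, int quantum, int wait[]) {
    int rem[16], t = 0, done = 0;
    memcpy(rem, burst, n * sizeof(int));
    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (rem[i] == 0)
                continue;                          /* already finished */
            int slice = rem[i] < quantum ? rem[i] : quantum;
            t += slice;                            /* run one quantum (or less) */
            rem[i] -= slice;
            if (rem[i] == 0) {                     /* finished: record wait */
                wait[i] = t - burst[i];
                done++;
            }
        }
    }
}
```

For bursts {24, 3, 3} with q = 4 this yields waits 6, 4, 7, matching the 17/3 average above.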

        1 foreground (interactive) queue2 background (batch) queue

        Each queue has its own scheduling algorithmFor ex Round Robin scheduling algorithm can be used in foreground queue

        FCFS can be used in the background queue.
        Scheduling must also be done between the queues. This can be done in 2 ways:
        1. Fixed-priority scheduling: the foreground queue has the highest priority. All processes in the foreground queue must be completed and the queue must be empty; only then are the processes in the background queue given the CPU, as shown in the figure below. Disadvantage: starvation.

        2. Time slice: each queue gets a certain amount of CPU time which it can schedule amongst its processes, e.g. 80% to the foreground queue in RR and 20% to the background queue in FCFS.

        6) Multilevel Feedback Queue Scheduling: a process can move between queues. The idea is to separate processes according to their CPU bursts: IO-bound and interactive processes stay in the highest-priority queue. If a process takes more CPU time, it is moved to a lower-priority queue. If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues Q0, Q1, Q2:

        1. Q0: time quantum 8 milliseconds 2. Q1: time quantum 16 milliseconds 3. Q2: FCFS

        Scheduling

        1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 milliseconds; if it does not finish in 8 milliseconds, it is moved to queue Q1.
        2. At Q1 the process again receives 16 additional milliseconds; if it still does not complete, it is moved to queue Q2.

        Q) Synchronization
        Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

        A classic example of process synchronization is the Producer-Consumer (or Bounded-Buffer) problem: make sure that the producer process does not add data when the buffer is full, and the consumer process does not take data when the buffer is empty.

        The code for the producer process can be written as follows:

        while (true) {
            /* produce an item in nextProduced */
            while (counter == BUFFER_SIZE)
                ;  /* do nothing */
            buffer[in] = nextProduced;
            in = (in + 1) % BUFFER_SIZE;
            counter++;
        }

        Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed. in is an index that always points to the next empty slot after the last filled slot; out is an index that always points to the first filled slot, at the head of the circular queue.

        The code for the consumer process is as shown below:

        while (true) {
            while (counter == 0)
                ;  /* do nothing */
            nextConsumed = buffer[out];
            out = (out + 1) % BUFFER_SIZE;
            counter--;
        }

        The above producer and consumer code produces a RACE CONDITION. When the outcome of the execution of several co-operating processes depends on the order in which their accesses to shared data take place, we have a race condition. For example, suppose counter = 5. counter++ and counter-- are each implemented as several machine instructions (load, add or subtract, store), and these instructions can interleave. If the producer produces one item and the consumer consumes one item, the correct value is counter = 5; but one interleaving leaves counter = 4 and another leaves counter = 6, i.e. inconsistent results.

        Solution to avoid the race condition: make sure that only one process at a time updates the shared variable (here, counter).
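One common way to enforce "one process at a time" is a mutex around every update of the shared counter. A minimal sketch using POSIX threads (the function names run_pair, producer, and consumer are illustrative, and the real bounded-buffer code would also move data, not just bump the counter):

```c
#include <pthread.h>
#include <stddef.h>

static int counter = 5;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    pthread_mutex_lock(&lock);
    counter++;                       /* critical section */
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *consumer(void *arg) {
    pthread_mutex_lock(&lock);
    counter--;                       /* critical section */
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Run one produce and one consume concurrently; because the updates
   are serialized by the mutex, the counter always ends back at 5. */
int run_pair(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return counter;
}
```

Without the mutex, the interleavings described above could leave the counter at 4 or 6.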

        Q) Critical section problem
        Ans: Each process has a segment of code called its critical section. Critical-section code is code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical-section code at the same time. For this to happen:
        1. Each process must request permission to enter its critical section: the entry section.
        2. The code marking the end of the critical section: the exit section.
        3. The rest of the code after the critical section: the remainder section.

        A solution to the critical-section problem must satisfy the following three requirements:
        1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
        2. Progress: only processes not in their remainder sections may take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
        3. Bounded waiting: there is a bound on the number of times other processes may enter their critical sections ahead of a waiting process; no process waits indefinitely.

        Q) Peterson's solution for the critical-section problem (software-based solution)
        Ans: The two processes share two variables: flag[i] (TRUE when process i wants to enter its critical section) and turn (whose turn it is to enter). A sample interleaving:

        Time   Process 1                              Process 2
        T0     do { flag[1] = TRUE; turn = 2;         do { flag[2] = TRUE;
        T1                                            turn = 1;
        T2     while (flag[2] && turn == 2);          while (flag[1] && turn == 1);
               since turn is now 1, the loop          loops continuously until flag[1]
               does not run                           becomes FALSE
        T3     enters critical section
        T4     flag[1] = FALSE;                       enters critical section
        T5     enters remainder section               flag[2] = FALSE;
               } while (TRUE);                        enters remainder section
                                                      } while (TRUE);
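Peterson's entry/exit protocol can be written compactly in C (indices 0 and 1 here instead of 1 and 2; shown single-threaded just to illustrate the protocol, since a faithful demo needs two concurrent processes, and on modern hardware it would also need memory barriers):

```c
#include <stdbool.h>

static volatile bool flag[2];   /* flag[i]: process i wants to enter */
static volatile int turn;       /* whose turn it is to defer to */

void enter(int i) {
    int other = 1 - i;
    flag[i] = true;             /* announce intent to enter */
    turn = other;               /* politely let the other go first */
    while (flag[other] && turn == other)
        ;                       /* busy-wait (spin) */
}

void leave(int i) {
    flag[i] = false;            /* exit section: allow the other in */
}
```

Setting turn to the *other* process is the key trick: if both processes try to enter at once, whichever wrote turn last loses and spins, which gives both mutual exclusion and bounded waiting.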

        10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

        File Attributes

        1. Name: the name of the file, for user understanding.
        2. Identifier: a unique tag used by the OS to identify the file (e.g. inode number).
        3. Type: text files, executable files, binary files, etc.
        4. Location: the location of the file on the storage device.
        5. Size: the current size of the file.
        6. Protection: controls who can read, write, or execute the file.
        7. Time, date, and user identification: records when the file was created, last modified, and last used, etc. This data is used for protection, security, and usage monitoring.

        All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier locates the other file attributes. The directory structure is kept on disk.

        File Operations

        The file Abstract Data Type supports many common operations:
        1. Creating a file: find free space on disk, add an entry to the directory.
        2. Writing a file: write data at the current file-position pointer location and update the pointer.
        3. Reading a file: read data at the current file-position pointer location and update the pointer.
        4. Repositioning within a file (file seek): change the pointer location to a given value.
        5. Deleting a file: free the space allocated to the file and remove its entry from the directory.
        6. Truncating a file: delete the data and update the file size.

        Open File Table

        Most operating systems require that files be opened before access and closed after access. Information about all files currently open in the system is stored in an open-file table. This system-wide open-file table is maintained by the OS, and information about the files opened by a process is stored in that process's open-file table. The open-file table has the following information:

        1. File pointer: records the current position in the file for the next read or write.
        2. File-open count: how many times the current file has been opened (simultaneously by different processes) and not yet closed. When this counter reaches zero, the file's entry can be removed from the table.

        3. Disk location of the file
        4. Access rights

        Some systems provide support for file locking:
        1. A shared lock is for reading only.
        2. An exclusive lock is for writing as well as reading.
        3. An advisory lock: software developers must make sure locks are acquired and released appropriately.
        4. A mandatory lock: the OS makes sure locks are acquired and released appropriately.
        5. UNIX uses advisory locks and Windows uses mandatory locks.

        File Types

        Implementing file types: a common technique is to include the type as part of the file name. The name is split into 2 parts:

        1. File name 2. Extension

        The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on it. For example, when a user opens Microsoft Word and clicks Open in the File menu, the user may specify only the file name without the extension; the application looks for a file with the given name and the extension it expects. In this sense, extensions are hints to applications rather than something enforced by the operating system. A UNIX system uses a crude magic number stored at the beginning of the file to indicate the type of file. In the Mac OS X operating system, a file produced by a word processor has the word processor's name recorded as its creator; when the user double-clicks the file, that word processor opens it. Some common file types are shown below.

        File Structure: Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and what the location of the first instruction is.

        If the OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, finding the first executable statement, etc.).

        Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently; the data fork contains the code or data.

        Internal File structure

        Disk files are accessed in units of physical blocks typically 512 bytes

        Internally, files are organized in logical units such as a single byte, a data-record size, or a structure size.

        The number of logical units which fit into one physical block determines the packing. For example, if a student file is organized internally as records of 64 bytes each, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e. 8 records can be stored per block.

        Internal fragmentation may occur due to this

        102 Access Methods

        The information in a file can be accessed in the following ways:
        1. Sequential access
        2. Direct access (or relative access)

        1. Sequential access: records are read and written sequentially, one record after another; we cannot randomly access an arbitrary record.

        A sequential-access file emulates magnetic-tape operation and generally supports the following operations:
        - read next: read a record and advance the tape to the next position
        - write next: write a record and advance the tape to the next position
        - rewind
        - skip n records: may or may not be supported; n may be limited to positive numbers or may be limited to +/- 1

        2. Direct access (or relative access): a file is made up of fixed-length records, so that we can randomly access any record.

        No restriction on order of reading or writing

        Direct access files are useful when we need to access large amounts of data randomly

        Example: databases.

        The operations supported include (n is the relative block number, i.e. relative to the beginning of the file):

        1. read n: read record number n
        2. write n: write record number n
        3. jump to record n: n could be 0 or the end of the file
        4. query current record: used to return to this record later
        5. Sequential access can be easily emulated using direct access; the inverse is complicated and inefficient.

        3. Other access methods can be built on top of the direct-access method. We construct an index for the file which contains pointers to the various blocks. To search the file, we first access the index and then use the pointer to access the file.

        If a student file has 120,000 records sorted by student id, we can create an index consisting of the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120,000 records = 120000/64 = 1875 ≈ 2000 blocks, so the index has about 2000 entries. The index is kept in main memory so that searching becomes faster.
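The block count above is a ceiling division (round up when the records do not fill the last block exactly); a tiny helper makes the arithmetic explicit, with an illustrative name:

```c
/* Ceiling division: the number of blocks needed to hold n records
   when r records fit in each block. */
int blocks_needed(int n, int r) {
    return (n + r - 1) / r;
}
```

blocks_needed(120000, 64) gives exactly 1875; one extra record would push it to 1876 blocks.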

        If the index itself becomes large, we create an index for the index file (e.g. a B-tree).

        Q) Allocation Methods

        The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

        Three major methods of allocating disk space are

        1 Contiguous 2 Linked

        3 Indexed

        1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

        If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

        The directory entry for each file indicates the address of the starting block (i.e. b) and the length (n).

        Advantages

        1. Direct access to block i is easy: its address is simply b + i. Since the file is stored sequentially, sequential access is also easy. Hence access is easy.

        2. Supports both direct and sequential access.
        3. The number of disk seeks required is minimal.
        4. Seek time is minimal.

        Disadvantages

        1. It is difficult to find space for a new file (or to add more data to an existing file). First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

        2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape and then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

        3. The size of the file must be known when the file is created. Even if the final size is known, the entire size may not be used by the file at the beginning, which leads to wasted space.

        If the size of the file is not known in advance, then:
        (i) If we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This is possible as long as such space exists, but it is time consuming.
        (ii) If we overestimate the amount of space needed, much of the space goes unused, which leads to internal fragmentation.

        To minimize these drawbacks, some operating systems use a modified contiguous-allocation scheme. A chunk of contiguous blocks is allocated at first; if the file needs more blocks, another set of contiguous blocks, known as an extent, is added. The file's location is then recorded as a location and a block count, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable size, external fragmentation occurs.

        2. Linked allocation: each file is a linked list of disk blocks, and the disk blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0.

        To add more data to an existing file, a free block is found, data is written to it, and it is linked to the end of the file. To read a file, we read blocks by following the pointers from block to block.

        Advantages:
        1. It is easy to find space for a new file or to add more data to an existing file.
        2. The size of the file need not be declared when the file is created.
        3. There is no external fragmentation.

        Disadvantages:
        1. Seek time is more.
        2. The number of disk seeks required is more.
        3. Pointers use a noticeable percentage of the file's disk space.
        4. Inefficient for direct access: to access the ith block, i disk reads are required.
        5. Not reliable: pointers may be lost or damaged.

        The solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk accesses, reduces the space wasted on pointers (fewer pointers are needed), and simplifies free-list management. Disadvantage: internal fragmentation; more space is wasted when a cluster is partially full.

        The solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

        A variation of linked allocation is the use of a file-allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

        Advantages

        1. It is easy to find space for a new file or to add more data to an existing file.
        2. Random access time is improved.

        Disadvantage: the number of disk seeks is more. The disk head must move to the start of the volume to read the FAT and find the location of the block, then move to the location of the block itself.
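Following a FAT chain can be sketched in a few lines, using the earlier example file stored in blocks 9, 16, 1, 10, 25; the end-of-file marker -1 and the function name are illustrative assumptions:

```c
/* Walk a FAT chain from a file's first block and count its blocks.
   fat[b] holds the number of the next block; -1 marks end-of-file. */
int chain_length(const int fat[], int first) {
    int n = 0;
    for (int b = first; b != -1; b = fat[b])
        n++;                         /* visit one block of the file */
    return n;
}
```

Each step of the walk reads one FAT entry, which is exactly why the FAT is cached in memory in practice.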

        3 Indexed allocation

        Each file has its own index block. The index block is an array of disk-block addresses; the ith entry in the index block points to the ith block of the file.

        The directory contains the address of the index block

        When the file is created all pointers in the index block are set to nil

        To add more data to an existing file, a free block is found and data is written to it; the address of the block is put in the index block.

        Advantages

        1. It is easy to find space for a new file or to add more data to an existing file.
        2. Supports direct (random) access efficiently.
        3. No external fragmentation.

        Disadvantage

        1. Seek time is more.
        2. The number of disk seeks required is more.
        3. Not reliable.
        4. The index block must be kept in memory. If this memory is not available, then we have to read the index block and then the desired data block, i.e. 2 disk accesses are required. With a linked index scheme, to access a block near the end of the file we may need to read all the index blocks before reading the needed data block.

        5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

        Various mechanisms are used for the size of the index block:

        1. Linked scheme: the size of an index block is one disk block. The index block contains a small header holding the name of the file and a set of the first 100 disk-block addresses; the next address holds the address of another index block (for large files).

        2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

        3 combined scheme

        For example, say there are 15 pointers in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing the addresses of the file's data blocks.

        The second points to a double indirect block and the third points to a triple indirect block, as shown below.
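The maximum file size under this combined scheme is a simple sum; the block size (4 KB) and pointer size (4 bytes) below are illustrative assumptions, not values from the notes:

```c
#include <stdint.h>

/* Maximum file size (in blocks) of the combined scheme, assuming a
   4 KB block and 4-byte block pointers (so 1024 pointers per index block):
   12 direct + single indirect + double indirect + triple indirect. */
uint64_t max_file_blocks(void) {
    uint64_t p = 4096 / 4;                   /* pointers per index block */
    return 12 + p + p * p + p * p * p;
}
```

With these assumptions the total is 12 + 1024 + 1024² + 1024³ blocks, i.e. roughly 4 TB of addressable file data, which is why three indirection levels suffice in practice.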

        Q) Free space management

        To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

        The free space list can be implemented as

        1. Bit vector (or bit map)
        2. Linked list
        3. Grouping
        4. Counting
        5. Space maps

        1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000...
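Finding the first free block is then a bit scan; a minimal sketch with an illustrative function name, storing the map MSB-first so bit 0 of the disk is the high bit of byte 0:

```c
/* Scan a free-space bit map (bit = 1 means free, as described above)
   and return the first free block number, or -1 if no block is free. */
int first_free(const unsigned char map[], int nblocks) {
    for (int i = 0; i < nblocks; i++)
        if (map[i / 8] & (1u << (7 - i % 8)))   /* test bit i, MSB first */
            return i;
    return -1;
}
```

For the 12-bit map 001111001000 (bytes 0x3C, 0x80) the first free block is block 2, matching the example.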

        Advantages: simple, and it is easy to find contiguous free blocks for a file.

        Disadvantages

        1. The bit map requires extra space.
        2. The bit map is inefficient if kept on disk. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a large disk in main memory.
        3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1 TB disk could cause thousands of blocks of bit map to be updated, because those data blocks are scattered all over the disk.

        2. Linked list: the free-space list is implemented by linking all the free disk blocks together, with a pointer to the first free block stored in a special location on disk. For example, for a disk where blocks 2, 3, 4, 5, and 8 are free, we keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
        Advantage: no waste of space.
        Disadvantage: contiguous space cannot be found easily; to traverse the free-space list we must read each block, which requires substantial IO time.

        3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free; the last one contains the addresses of another n free blocks, and so on.
        Advantage: a large number of free blocks can be found quickly.
        4. Counting: the free-space list can be maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it.

        Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, but the overall list is shorter as long as counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

        5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

        Q) Synchronization Hardware
        A hardware solution to synchronization is to provide atomic operations. These operations execute as a single instruction, without interruption. Two such operations are:
        1. the TestAndSet() hardware instruction
        2. the Swap() instruction

        1. The TestAndSet instruction is as shown below.

        Solution using TestAndSet: the shared variable lock is initialized to false.

        2. The Swap() instruction definition is as follows.

        Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

        Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee of the relative speeds of the processes, a very fast process could theoretically release the lock, whip through its remainder section, and re-lock the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

        The figure below illustrates a solution using test-and-set that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes contending for critical sections.

        Bounded-waiting mutual exclusion with TestAndSet( )

        On exit, a process scans the waiting array in order (starting with the next process after itself) for a process that has been waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process into the critical section while continuing to block all the others.
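The TestAndSet instruction and the simple (unbounded-waiting) spin lock built on it can be modeled in C; note this is only a software model of the semantics, since real atomicity comes from the hardware instruction (or from C11 atomic_flag), and the function names are illustrative:

```c
#include <stdbool.h>

/* Software model of the atomic TestAndSet instruction: return the old
   value of *target and set it to true, as one indivisible step on real
   hardware (this single-threaded model does not itself enforce that). */
bool test_and_set(volatile bool *target) {
    bool old = *target;
    *target = true;
    return old;
}

static volatile bool lock_flag = false;   /* shared lock, initially free */

void spin_lock(void) {
    while (test_and_set(&lock_flag))
        ;                                 /* busy-wait until we see "was free" */
}

void spin_unlock(void) {
    lock_flag = false;
}
```

A process acquires the lock only when test_and_set returns false (the lock *was* free); every later caller sees true and spins, which is exactly the mutual-exclusion property, without any bounded-waiting guarantee.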

        Q) What is a semaphore?
        Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1. wait() 2. signal()

        wait(S) {
            while (S <= 0)
                ;  /* do nothing */
            S--;
        }

        signal(S) {
            S++;
        }

        Each process that wishes to use a resource performs wait() operation When a process releases a resource it performs signal() operation

        Semaphore usage:
        1. A binary semaphore (or mutex lock) can take the values 0 or 1.

        2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining resources: the counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources; when the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the semaphore with a signal() call.
        For example, suppose there are 3 resources and 4 processes. The counting semaphore starts at 3.
        If process P1 requires a resource, it performs wait(); now the counting semaphore = 2.
        If process P2 requires a resource, it performs wait(); now the counting semaphore = 1.
        If process P3 requires a resource, it performs wait(); now the counting semaphore = 0.
        If process P4 requires a resource, it performs wait(); P4 waits until a resource is available.
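The counting logic of the P1..P4 example can be modeled with a toy, deliberately non-atomic structure (a real semaphore must make wait() and signal() atomic and must block the caller instead of returning failure; the names csem, csem_trywait, and csem_signal are illustrative):

```c
/* Toy model of a counting semaphore: value = remaining resources. */
typedef struct { int value; } csem;

/* Non-blocking wait(): returns 1 if a resource was acquired,
   0 if none are left (a real wait() would block the caller here). */
int csem_trywait(csem *s) {
    if (s->value > 0) { s->value--; return 1; }
    return 0;
}

/* signal(): release one resource back to the pool. */
void csem_signal(csem *s) {
    s->value++;
}
```

Initialized to 3, three acquisitions succeed (the semaphore counts down 2, 1, 0), the fourth fails exactly as P4 must wait above, and a signal() makes a resource available again.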

        3 Semaphores can also be used to synchronize certain operations

        Semaphore Implementation: When one process is in its critical section, any other process that tries to enter its critical section must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process "spins" while waiting.

        Semaphores can also enforce an ordering of operations. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

        First we create a semaphore named synch that is shared by the two processes and initialize it to zero Then in process P1 we insert the code

        S1;
        signal(synch);

        and in process P2 we insert the code

        wait(synch);
        S2;

        Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal(). Each semaphore has an integer value and maintains a queue of waiting processes.

        Semaphore implementation with no busy waiting: To overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue to the ready state).
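        The block()/wakeup() idea can be sketched with a condition variable. The textbook version keeps a negative count plus an explicit waiting queue; this hedged sketch uses the equivalent condition-variable form (Python's Condition maintains the wait queue for us), and then runs the S1-before-S2 ordering example from above:

```python
import threading

class BlockingSemaphore:
    """wait() blocks the caller instead of spinning; signal() wakes one waiter."""
    def __init__(self, value=0):
        self.value = value
        self.cond = threading.Condition()   # holds the queue of blocked threads

    def wait(self):
        with self.cond:
            while self.value == 0:
                self.cond.wait()            # block(): sleep, no CPU spinning
            self.value -= 1

    def signal(self):
        with self.cond:
            self.value += 1
            self.cond.notify()              # wakeup(): move one waiter to ready

# The S1-before-S2 ordering example from the text:
order = []
synch = BlockingSemaphore(0)

def p2():
    synch.wait()            # blocks until P1 signals
    order.append("S2")

t = threading.Thread(target=p2)
t.start()
order.append("S1")          # P1 executes S1 ...
synch.signal()              # ... then signals synch
t.join()
print(order)
```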

        Problems due to semaphores:
        1. Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes.

        2. Starvation: one or more processes get blocked forever and never get a chance to enter the critical section. For example, this can happen if we do not specify the algorithms for adding processes to the waiting queue, or for selecting one to be removed from the queue in the signal() call. If a LIFO queue (a stack) is chosen, the first process that started waiting may never get a chance.

        3. Priority inversion: Let processes L, M, H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it preempts L, and M runs. Here M runs before H even though M has lower priority than H. To avoid this, priority inheritance is used: L temporarily inherits H's priority, so that M cannot preempt L.
        =====================================================================
        Q) Classical problems of synchronization

        1. The Bounded Buffer Problem (also called the Producer-Consumer Problem)
        2. The Readers-Writers Problem
        3. The Dining Philosophers Problem

        These problems are used to test newly proposed synchronization schemes.

        1. The Bounded Buffer Problem
        Consider n buffers, each able to hold one item; a producer process, which creates items (one at a time); and a consumer process, which consumes them (one at a time).

        The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

        1. empty = n (the empty semaphore counts the number of empty buffers)
        2. full = 0 (the full semaphore counts the number of filled buffers)
        3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

        Producer process:
            do {
                ...
                produce an item
                ...
                wait(empty);
                wait(mutex);
                ...
                add the item to the buffer
                ...
                signal(mutex);
                signal(full);
            } while (1);

        Consumer process:
            do {
                wait(full);
                wait(mutex);
                ...
                remove an item from the buffer
                ...
                signal(mutex);
                signal(empty);
                ...
                consume the item
                ...
            } while (1);
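        The pseudocode above maps almost line for line onto Python threads and semaphores. This is a sketch with a single producer and a single consumer; the buffer size of 5 and the 20 items are illustrative assumptions:

```python
import threading
from collections import deque

N = 5                                   # number of buffer slots
buffer = deque()
empty = threading.Semaphore(N)          # empty = n
full = threading.Semaphore(0)           # full = 0
mutex = threading.Lock()                # mutex = 1
ITEMS = 20
consumed = []

def producer():
    for item in range(ITEMS):
        empty.acquire()                 # wait(empty)
        with mutex:                     # wait(mutex) ... signal(mutex)
            buffer.append(item)         # add the item to the buffer
        full.release()                  # signal(full)

def consumer():
    for _ in range(ITEMS):
        full.acquire()                  # wait(full)
        with mutex:
            consumed.append(buffer.popleft())   # remove an item
        empty.release()                 # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)
```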

        2. The Readers-Writers Problem
        A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time; a writer must have exclusive access (i.e., it cannot share the file with either a reader or another writer). Two versions of the readers-writers problem:

        Readers priority: a new reader need not wait just because a writer is waiting. Writers priority: if a writer is waiting to access the data, no new readers can start reading.

        A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

        A solution to the readers-priority version uses a counter and 2 semaphores:
        1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
        2. mutex = 1 (a semaphore used for mutual exclusion when readcount is updated)
        3. wrt = 1 (a semaphore common to both writers and readers)

        wrt provides mutually exclusive access to the shared data.

        Reader process:
            wait(mutex);
            readcount++;
            if (readcount == 1)
                wait(wrt);      // the first reader locks out writers
            signal(mutex);
            ...
            reading is performed
            ...
            wait(mutex);
            readcount--;
            if (readcount == 0)
                signal(wrt);    // the last reader lets writers in
            signal(mutex);

        Writer process:
            do {
                wait(wrt);
                ...
                writing is performed
                ...
                signal(wrt);
            } while (TRUE);
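        A runnable sketch of the readers-priority algorithm above, transcribed into Python threads; the thread counts and the values written are illustrative assumptions:

```python
import threading

readcount = 0
mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # exclusive access to the shared data
shared = []
log = []

def reader(i):
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:           # first reader locks out writers
        wrt.acquire()
    mutex.release()
    log.append(("read", i, list(shared)))   # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:           # last reader lets writers in
        wrt.release()
    mutex.release()

def writer(value):
    wrt.acquire()
    shared.append(value)         # writing is performed
    wrt.release()

threads = [threading.Thread(target=writer, args=(v,)) for v in (1, 2)]
threads += [threading.Thread(target=reader, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(shared)
```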

        3. The Dining Philosophers Problem
        n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems; it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

        semaphore chopstick[5]; // initially all values are 1

        Philosopher i:
            do {
                wait(chopstick[i]);
                wait(chopstick[(i+1) % 5]);
                ...
                eat
                ...
                signal(chopstick[i]);
                signal(chopstick[(i+1) % 5]);
                ...
                think
                ...
            } while (1);

        This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (for example, if all five philosophers pick up their left chopsticks at the same moment).

        Q) Types of operating systems

        MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

        In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


        ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), not all CPUs are equal (there are master and slave CPUs). ASMP systems have processors dedicated to specific tasks. For example, one processor may be dedicated to disk operations, another to video operations, etc. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

        REAL TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint

        Two kinds of real-time systems have evolved

        1. A hard real-time system can guarantee that response requirements will be met under all conditions. 2. A soft real-time system cannot guarantee that response requirements are met under all conditions. Ex:

        Digital audio or multimedia systems Digital telephones

        DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control, i.e., it spreads its decision-making actions across the computers in the system, so that failures of individual computers or of the network do not cripple its operation.

        A distributed operating system appears as a uniprocessor system even though it has multiple processors The users may not know where their programs are being run or where their files are located that should all be handled automatically by the operating system

        Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

        DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location Modern desktop systems support multiprogramming Common examples are Linux FreeBSD Windows 8 and the Macintosh operating system

        HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. They have very little memory.


        CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system The components of a cluster are usually connected to each other through fast local area networks each node (computer used as a server) running its own instance of an operating system

        Q) Modern operating system

        New developments in hardware applications and security threats lead to the development of modern operating system

        New Hardware developments like increased machine speed high speed network increased size and variety of memory devices

        New Applications developments like multimedia applications internet and web access and client server computing

        Developments in modern operating system can be categorized as

        1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc). Other OS services are provided by processes called servers that run in user mode. This approach decouples kernel and server development, and is well suited to distributed environments.

        2 Multithreading a process is divided into threads that can run concurrently Useful for applications that perform a number of independent tasks Example database server that listens and processes many client requests

        3 Symmetric multiprocessing As now-a-days there are multiple microprocessors in a single system SMP operating system provides greater efficiency when there are multiple processors The processes share the same main memory and io facilities All processors can perform the same functions (hence term symmetric)

        SMP schedules processes or threads across all of the processors

        Advantages

        1. Increased performance: if some portion of a program can be run in parallel, the performance of an SMP OS increases.

        2. Increased availability: as all processors can perform the same tasks, if one processor fails the others keep working.
        3. Incremental growth: one can add a new processor to increase performance.
        4. Scaling: cost can depend on the number of processors used.

        4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

        5. Object-oriented design: adds modularity to the kernel. The OS can be customized without affecting system integrity.

        Q) Linux operating system: It is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. Linux is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization. Components of the Linux system:


        Linux Operating System has primarily three components as shown in the below diagram

        Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system and interacts directly with the hardware. The kernel hides low-level hardware details from system and application programs.

        System Library - System libraries are special functions or programs. They are used by application programs to use the kernel's features.

        System Utility - System utility programs are responsible for specialized, individual-level tasks, like updating log files and accepting login requests from terminals.

        Kernel Modules - Sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

        Three components of Linux module support:
        1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel.
        2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
        3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

        Kernel Mode vs User Mode
        Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, does not require any context switch, and hence is very efficient and fast. The kernel runs each process and provides system services to processes, including protected access to hardware. Code which is not required to run in kernel mode lives in the system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code. User programs and utilities use the system libraries to access kernel functions for the system's low-level tasks.

        Basic Features
        Following are some of the important features of the Linux operating system:

        Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on any kind of hardware platform.

        Open Source - Linux source code is freely available and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.

        Multi-User - Linux is a multiuser system, meaning multiple users can access system resources (memory, RAM, application programs) at the same time.

        Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

        Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

        Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system. It can be used to do various types of operations, call application programs, etc.

        Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

        Architecture: The Linux system architecture consists of the following layers:

        Hardware layer - Hardware consists of all peripheral devices (RAM, HDD, CPU, etc).

        Kernel - Core component of the operating system; interacts directly with the hardware and provides low-level services to upper-layer components.

        Shell - An interface to the kernel; takes commands from the user and executes the kernel's functions.

        Utilities - Utility programs giving the user most of the functionality of an operating system.

        Q) Windows XP operating system
        1. Extensibility - layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.
        2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++. Processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
        3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.
        4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
        5. Performance - XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.
        6. International support - XP supports different locales via the national language support (NLS) API.

        XP Architecture:
        1. Layered system of modules
        2. Protected mode - hardware abstraction layer (HAL), kernel, executive
        3. User mode - collection of subsystems:
           a. Environmental subsystems emulate different operating systems
           b. Protection subsystems provide security functions

        XP architecture


        Q) Windows network operating system. Example: the Windows 2000 operating system.

        All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.



        • 10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has structure, which depends on its type.

          Operating System is responsible for creation of new process

          Reasons for a new process creation

          1. When a batch job is submitted by a user.
          2. In an interactive environment or a time-sharing system, a process is created when a user logs on.
          3. The operating system creates a process to manage printing, so that the user need not wait till printing completes. Here the OS creates a process on behalf of the user.
          4. When a process creates another process. The creating process is called the parent process and the new process is called the child process or subprocess.

          A new process can in turn create other processes, forming a tree of processes. The OS identifies a process by a unique process identifier (pid), which is a unique integer. In the Solaris operating system, at the top of the tree is the Sched process with pid = 0. This process creates several child processes. In the below figure it creates three child processes:
          1. The Init process, which is the parent process for all user processes
          2. The Pageout process
          3. The Fsflush process

          Subprocess may obtain resources

          1. Directly from the OS
          2. The parent may share some of its resources among several of its children
          3. The parent process may partition its resources among its children

          Parent process may pass initialization data to child For ex name of the image file and name of the output device may be passed to a display process (child process)

          Parent process may execute

          1. concurrently with its children, or
          2. wait till some or all of its children have terminated

          Address space of child process may be

          1. a duplicate copy of the parent (same program and data), or
          2. have a new program loaded into it
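          The two cases above correspond to fork() (duplicate address space) and exec() (new program loaded). A minimal POSIX sketch in Python, assuming a Unix-like host where "echo" resolves via PATH:

```python
import os

pid = os.fork()                 # child begins as a duplicate of the parent
if pid == 0:
    # Replace the child's address space with a new program.
    # ("echo" resolving via PATH is an assumption about the host system.)
    os.execvp("echo", ["echo", "child says hello"])
child_pid, status = os.waitpid(pid, 0)   # parent waits for the child
code = os.WEXITSTATUS(status)            # child's exit status
print("child", child_pid, "exited with", code)
```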

          2 Process Termination

          A process terminates when it finishes executing its final statement and requests the operating system to delete it (using the exit() system call in Unix and TerminateProcess() in the Win32 API).

          All the resources of the process (open files IO buffers and physical memory) are deallocated by OS

          A parent may terminate a child process for a variety of reasons

          1. The child has exceeded its usage of some of the resources
          2. The task assigned to the child is no longer required
          3. The parent is terminating, and the OS (e.g., VMS) does not allow a child to continue if its parent is terminating

          Q) What is cascading termination

          Parent is terminating and OS (Ex VMS) does not allow a child to continue if parent is terminating This is called Cascading Termination initiated by OS

          Q) What happens to child when parent terminates in Unix

          Init process becomes the parent of all its children

          Q) What is a context switch?
          When the PCB of the currently executing process is saved, the operating system loads the PCB of the next process that has to run on the CPU. A context switch is pure overhead, and it takes noticeable time.

          Q) Basic Concepts of threads

          A thread consists of a program counter, a stack, a set of registers, and a thread ID. Threads are also called lightweight processes. A process with multiple threads makes a great server. Threads share most resources with the other threads belonging to the same process, so a context switch among threads of the same process is cheap: it switches only the register set, the program counter, and the stack.
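          The sharing described above can be seen directly: in this small sketch, two threads of one process append to the same list because they share the process's address space:

```python
import threading

shared = []                          # lives in the process's address space

def worker(name):
    for i in range(3):
        shared.append((name, i))     # same data, visible to all threads

t1 = threading.Thread(target=worker, args=("A",))
t2 = threading.Thread(target=worker, args=("B",))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(shared))
```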

          Q) Explain the two modes of CPU execution
          Protection of memory and I/O can be provided via two modes of CPU execution: user mode and kernel mode.

          In kernel (privileged, or supervisor) mode, the OS has access to privileged instructions. Privileged instructions can access I/O devices, control interrupts, and manipulate memory (page table, TLB, etc).

          Privileged instructions are instruction that can only be executed in kernel mode

          All user-level processes run in user mode. Some critical operations cannot be performed by user processes directly; user processes must use system calls to perform them. When a system call occurs, the OS enters kernel mode and uses privileged instructions to perform the desired service for the user-level process.

          For example, for input or output, a process makes a system call telling the operating system to read or write a particular area, and this request is satisfied by the operating system.
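          For instance, Python's os module exposes thin wrappers over these system calls; in this sketch every I/O step crosses into kernel mode (the temporary-file name is chosen by the library):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()     # open() system call under the hood
os.write(fd, b"hello kernel")     # write() system call: the kernel does the I/O
os.close(fd)                      # close() system call
fd = os.open(path, os.O_RDONLY)   # open() system call
data = os.read(fd, 100)           # read() system call
os.close(fd)
os.unlink(path)                   # unlink() system call removes the file
print(data)
```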

          Q) Explain inter-process communication
          Ans: Cooperating processes require an inter-process communication (IPC) mechanism to exchange data and information. There are two communication models: (a) message passing, (b) shared memory, as shown below.

          (a) Shared ndash Memory Systems

          1) Communicating processes must establish a region of shared memory.
          2) The shared-memory region resides in the address space of the creating process.
          3) Other processes that wish to communicate using the shared-memory segment must attach it to their own address space.
          4) Processes can exchange information by reading and writing data in the shared areas.
          5) Shared-memory systems are convenient for communication.
          6) Shared-memory systems are faster and provide maximum speed, because:
             i) System calls are required only to establish the shared-memory regions.
             ii) Once shared memory is established, all accesses are treated as routine memory accesses; no assistance of the kernel is required.
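          A minimal sketch of shared-memory IPC, assuming a Unix-like host: the kernel is involved once to set up the mapping, after which parent and child exchange data with ordinary memory reads and writes:

```python
import mmap
import os

# An anonymous mapping is MAP_SHARED by default in CPython on Unix,
# so parent and child see the same bytes after fork().
buf = mmap.mmap(-1, 16)
pid = os.fork()
if pid == 0:                 # child: ordinary memory writes, no per-access syscalls
    buf[0:5] = b"hello"
    os._exit(0)
os.waitpid(pid, 0)           # parent waits, then reads the region directly
data = bytes(buf[0:5])
buf.close()
print(data)
```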

          (b) Message passing systems

          1. Are useful for exchanging smaller amounts of data.
          2. Are easy to implement for inter-computer communication.
          3. Are more time-consuming than shared-memory systems, as they are implemented using system calls and need kernel intervention.
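          By contrast, a message-passing sketch (here over a Unix pipe, an assumption about the host) pays a system call for every send and receive:

```python
import os

r, w = os.pipe()             # the kernel-managed communication link
pid = os.fork()
if pid == 0:                 # child: the sending process
    os.close(r)
    os.write(w, b"ping")     # send(message) is a system call
    os._exit(0)
os.close(w)
msg = os.read(r, 4)          # receive(message) is also a system call
os.close(r)
os.waitpid(pid, 0)
print(msg)
```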

          To send messages communication link must exist between them Communication link can be physically or logically implemented

          Different methods of logically implementing a link are:

          1. Direct or indirect communication
          2. Synchronous or asynchronous communication
          3. Automatic or explicit buffering

          1a Direct communication

          A link is established automatically between every pair of processes that want to communicate

          A link is associated with exactly two processes

          Addressing

          i) Symmetry in addressing: The sender process and receiver process must name each other to communicate. The send() and receive() primitives are:
             send(P, message) - sends a message to process P
             receive(Q, message) - receives a message from process Q

          ii) Asymmetry in addressing: Only the sender names the receiver process. The send() and receive() primitives are:
             send(P, message) - sends a message to process P
             receive(id, message) - receives a message from any process

          A disadvantage of both types of addressing is limited modularity: changing the id of a process requires finding all references to the old id and modifying them.

          1b Indirect communication

          1. Messages are sent to / received from mailboxes (also called ports).
          2. Each mailbox has a unique id (an integer value).
          3. Two processes can communicate only if they have a shared mailbox.
          4. A link is established between two processes if they have a shared mailbox.

          [Figure: processes P1 and P2 communicating through a shared mailbox R1]

          5. A link may be associated with more than two processes.
          6. A mailbox may be owned by a process or by the OS.

          a. If the mailbox is owned by a process, we can distinguish between the owner (which can only receive messages) and the user (which can only send messages). When the process that owns a mailbox terminates, the mailbox disappears.

          b. If the mailbox is owned by the OS, the OS must provide mechanisms to:
             i. Create a new mailbox
             ii. Send and receive messages through the mailbox
             iii. Delete the mailbox
             iv. Pass ownership to other processes

          2 Synchronous or asynchronous communicationMessage passing may be either blocking(synchronous) or non-blocking(Asynchronous)

          1. Blocking send: the sending process is blocked until the message is received by the receiving process or mailbox.
          2. Non-blocking send: the sending process sends the message and resumes operation.
          3. Blocking receive: the receiver blocks until a message is available.
          4. Non-blocking receive: the receiver retrieves either a valid message or a null.

          3. Automatic or explicit buffering: Messages exchanged by communicating processes reside in a temporary queue. Such queues can be implemented in 3 ways:

          i. Zero capacity: queue length = 0; the link cannot have messages waiting in it. The sender must block until the receiver receives the message.
          ii. Bounded capacity: queue length is finite (say n). When the queue is full, the sender must block until space is available in the queue.
          iii. Unbounded capacity: queue length is infinite; the sender never blocks.
          ===============================================================================
          Q) What is deadlock?

          A set of processes is deadlocked when every process in the set is waiting for a resource that is currently allocated to another process in the set Here the process P1 is allocated resource R2 and P2 is allocated R1

          P1 requires R1 and P2 requires R2

          Process P1 and P2 will wait forever This situation is called deadlock

          Q) What are the four conditions that are necessary for deadlock to occur

          1 Mutual Exclusion - At least one resource must be held in a non-sharable mode If any other process requests this resource then that process must wait for the resource to be released

          2 Hold and Wait - A process must be simultaneously holding at least one resource and waiting for at least one resource that is currently being held by some other process

          3 No preemption - Once a process is holding a resource then that resource cannot be taken away from that process until the process releases it

          4. Circular Wait - A set of processes {P0, P1, P2, ..., PN} must exist such that every P[i] is waiting for P[(i + 1) % (N + 1)].

          Q)Methods for handling deadlocks

          1. Use deadlock prevention or deadlock avoidance protocols to ensure the system never enters a deadlocked state.
          2. Allow the system to enter a deadlocked state, detect it, and recover.
          3. Ignore the problem and pretend that deadlocks never occur.

          To make sure that the system must not enter a deadlocked state the system can use

          1 Deadlock prevention 2 Deadlock avoidance

          ==============================================================================

          Deadlock Prevention

          1 Mutual Exclusion We cannot prevent deadlocks by denying the mutual exclusion condition because some resources are nonsharable (Ex Printer)

          2 Hold and Wait

          To make sure that the hold-and-wait condition never occurs in the system two protocols that can be used are

          Protocol 1 All the resources requested must be allocated before process begins execution

          Protocol2 A process can request resources only when it has none If a process requires additional resources it must release all the resources that are currently allocated

          Example Consider a process that copies data from DVD drive to a file on disk sorts the file and then prints the results to a printer

          If Protocol1 is used it must request the DVD drive disk file and printer at the beginning and must hold them till the end

          Disadvantages

          1 Starvation A process may wait forever because at least one resource that is need is always allocated to some other process Hence Starvation is possible

          2. Resource utilization is low: the process will hold the printer from beginning to end, even though the printer is used only at the end.

          If Protocol 2 is used the process will initially request DVD drive and disk file It copies from the DVD drive to disk and then releases both the DVD drive and disk file It then requests the disk file and printer

          Disadvantage: there is a chance that our data may not remain on the disk file after we release it.

          3 No pre-emption

          To make sure that this condition does not hold the following protocol is used

          Protocol If a process (say A) requests some resources

          Case 1 If resources are available then Allocate them

          Case 2 if resources are allocated to some other process(say B) that is waiting for additional resources

          then Preempt the desired resources from the waiting process (B) and allocate them to requesting process(A)

          Process B can be restarted only when it is allocated the additional resources it is requesting and regains the resources that were preempted and given to process A.

          Case 3 if resources are neither available nor held by a waiting process then Process A waits

          This protocol is applied to resources like CPU register and memory space as the state of the resources can be saved

          4 Circular Wait

          To make sure Circular Wait condition never occurs

          1 Each Resource is assigned a unique integer number

          2 Each Process must request resources in an increasing order of enumeration

          We define a one-to-one function F R rarr N where R is the set of resource types and N is the set of natural numbers

          Suppose a process has requested a resource type, say Ri, at the beginning.

          Protocol 1: After that, the process can request a resource type Rj if and only if F(Rj) > F(Ri).

          Protocol 2: If a process requests a resource type Rj, it must first release all resources Ri for which F(Ri) >= F(Rj).

          Example Let F (tape drive) =1 F(disk drive) = 5 and F(printer)=12

          A process can request any number of tape drives disk drives and printers

          Protocol 1: If process A already holds a disk drive, A can now request only a printer; it cannot request a tape drive.

          Protocol 2: In order to request a tape drive, process A must first release the disk drive.
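The increasing-order rule of Protocol 1 can be sketched in code. This is a minimal illustration, not part of any real OS API; the numbering F is the one assumed in the example above.

```python
# Hedged sketch of the circular-wait prevention rule (Protocol 1):
# a process may request a resource only if its number exceeds
# the number of every resource it currently holds.
F = {"tape drive": 1, "disk drive": 5, "printer": 12}

def may_request(held, wanted):
    """Return True if requesting `wanted` respects the increasing-order rule."""
    return all(F[wanted] > F[r] for r in held)

print(may_request({"disk drive"}, "printer"))     # True
print(may_request({"disk drive"}, "tape drive"))  # False
```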

          If either of the above two protocols is used, the circular-wait condition never occurs. We can prove this by contradiction.

          Proof: Assume a circular wait exists. Let the set of processes involved in the circular wait be P0, P1, …, Pn, where P0 is waiting for resource R0, which is held by P1.

          P1 holds R0 and is waiting for resource R1, which is held by P2; so F(R0) < F(R1).

          P2 holds R1 and is waiting for resource R2, which is held by P3; so F(R1) < F(R2).


          Pn holds Rn−1 and is waiting for resource Rn, so F(Rn−1) < F(Rn). Finally, P0 holds Rn and is waiting for R0, so F(Rn) < F(R0).

          Hence by transitivity F(R0) < F(R0), which is impossible. Hence our assumption that a circular wait exists is FALSE.

          ===============================================================================

          Q) Resource Allocation Graph

          Deadlocks can be understood more clearly through the use of resource-allocation graphs, which have the following properties:

          1. Resource types are represented as square nodes on the graph. Dots inside a square node indicate the number of instances of that resource. (Ex: two dots might represent two laser printers.)

          2. Processes are represented as circles.

          3. Request edges: if P1 has requested R1, a directed edge from P1 to R1 is a request edge.

          4. Assignment edges: a directed edge from R2 to P1 indicates that resource R2 has been allocated to process P1 and that P1 is currently holding R2. Note that a request edge is converted into an assignment edge when the request is granted.
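With single-instance resources, a deadlock corresponds to a cycle over these request and assignment edges. A minimal sketch of cycle detection over such a graph (node and edge names are made up for illustration):

```python
# Sketch: detect a cycle in a single-instance resource-allocation graph
# with depth-first search. Edges are request (P -> R) and assignment
# (R -> P) edges, given as (from, to) pairs.
def has_cycle(edges):
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
        graph.setdefault(v, [])
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = {n: WHITE for n in graph}

    def dfs(n):
        color[n] = GRAY
        for m in graph[n]:
            if color[m] == GRAY or (color[m] == WHITE and dfs(m)):
                return True        # back edge found: cycle
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1 -> deadlock
print(has_cycle([("P1", "R2"), ("R2", "P2"), ("P2", "R1"), ("R1", "P1")]))  # True
```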

          If a resource-allocation graph contains a cycle AND each resource has only a single instance, then a deadlock exists. If a resource category contains more than one instance, then a cycle in the resource-allocation graph indicates the possibility of a deadlock but does not guarantee one.

          ==================================================================

          Q) Deadlock Avoidance

          For each resource request, the system can decide whether the request should be granted or not. To make this decision, the system must have information like:

          1. resources currently available
          2. resources currently allocated to each process
          3. future requests and releases of each process
          4. maximum number of resources each process may need

          Given this information, it is possible to construct an algorithm that ensures the system will never enter a deadlocked state. There are two deadlock-avoidance algorithms:

          1. Resource-Allocation Graph Algorithm
          2. Banker's Algorithm

          Safe State: A system is in a safe state if there exists a safe sequence of processes P0, P1, P2, …, PN such that the resources Pi may still request can be satisfied by the currently available resources plus the resources held by all processes Pj, where j < i. All safe states are deadlock free.

          Unsafe state: If a safe sequence does not exist, then the system is in an unsafe state, which MAY lead to deadlock.

          1. Resource-Allocation Graph Algorithm: Resource-allocation graphs can detect deadlocks only if each resource type has a single instance. In this case unsafe states can be recognized and avoided by adding claim edges, denoted by dashed lines, which point from a process to a resource that it may request in the future. All claim edges are added before the process starts executing. When a process makes a request, the claim edge Pi → Rj is converted to a request edge. When a resource is released, the assignment edge changes back to a claim edge. This approach works by denying any request that would produce a cycle in the resource-allocation graph, taking claim edges into account.

          Consider for example the resource-allocation graph as shown.

          If P2 requests resource R2, then the claim edge P2 → R2 is converted to a request edge as follows:

          The resulting resource-allocation graph would have a cycle in it, and so the request cannot be granted.

          Q) Banker's Algorithm (deadlock-avoidance algorithm) with example

          There are 12 tape drives. Let the current state of the system be as shown below.

          Process | Allocated | Max Need | Need = Max Need − Allocated
          P0      | 5         | 10       | 5
          P1      | 2         | 4        | 2
          P2      | 2         | 9        | 7

          Available = 12 − (5 + 2 + 2) = 3

          Resource-Request Algorithm: Now, when process P2 requests 1 tape drive, we run the resource-request algorithm to check whether the request may be granted. The request is granted only if, after granting it, all processes in the system can still complete. For that we check:

          1. Is the request of P2 <= need of P2? 1 <= 7, therefore TRUE.
          2. Is the request of P2 <= Available? 1 <= 3, therefore TRUE.
          3. Pretend the request is granted for P2.

          Now the current state is as shown below:

          Process | Allocated | Max Need | Need = Max Need − Allocated
          P0      | 5         | 10       | 5
          P1      | 2         | 4        | 2
          P2      | 2 + 1 = 3 | 9        | 7 − 1 = 6

          Available = 3 − 1 = 2. Now run the safety algorithm to check whether the system is in a safe state.

          Safety Algorithm:
          1. Let WORK = Available = 2.
          2. Find an unfinished process whose Need <= WORK.

          Check P0: Need of P0 = 5, Work = 2. Is 5 <= 2? FALSE.

          Check P1: Need of P1 = 2, Work = 2. Is 2 <= 2? TRUE. Therefore P1 can finish. When P1 finishes, Work = Work + Allocated to P1 = 2 + 2 = 4.

          Now again check if P0 can complete: Need of P0 = 5, Work = 4. Is 5 <= 4? FALSE.

          Check if P2 can complete: Need of P2 = 6, Work = 4. Is 6 <= 4? FALSE.

          So neither P0 nor P2 can complete. The system would be in an unsafe state, so the request for 1 tape drive by P2 is not granted.
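The worked example above can be re-run as a small program. This is a sketch for the single-resource-type case (12 tape drives), with the process data taken from the tables above:

```python
# Safety check for a single resource type: returns True if some order
# exists in which all processes can finish (a safe sequence).
def is_safe(available, allocated, need):
    work, finish = available, [False] * len(allocated)
    while True:
        for i, done in enumerate(finish):
            if not done and need[i] <= work:
                work += allocated[i]      # process i finishes, releases its drives
                finish[i] = True
                break
        else:
            return all(finish)            # no further progress possible

# Original state: Available = 3, P0/P1/P2 as in the table -> safe
print(is_safe(3, [5, 2, 2], [5, 2, 7]))   # True

# After pretending P2's request for 1 drive is granted -> unsafe
print(is_safe(2, [5, 2, 3], [5, 2, 6]))   # False: only P1 can ever finish
```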

          2. Banker's Algorithm: For resource types that have more than one instance, the resource-allocation graph method does not work, so we use the Banker's algorithm. When a process starts up, it must state in advance the maximum allocation of resources it may request. When a request is made, the scheduler determines whether granting the request would leave the system in a safe state. If not, the process must wait until the request can be granted safely.

          The Banker's algorithm relies on several key data structures (where n is the number of processes and m is the number of resource categories):

          o Available[m] indicates how many instances of each resource are currently available.
          o Max[n][m] indicates the maximum demand of each process for each resource.
          o Allocation[n][m] indicates the number of each resource allocated to each process.
          o Need[n][m] indicates the remaining resources needed of each type for each process. (Note that Need[i][j] = Max[i][j] − Allocation[i][j] for all i, j.)

          For simplification of the discussion, we make the following notational observation:

          o One row of the Need matrix, Need[i], can be treated as a vector corresponding to the needs of process i, and similarly for Allocation and Max.

          Safety Algorithm: In order to apply the Banker's algorithm, we first need an algorithm for determining whether or not a particular state is safe. This algorithm determines if the current state of a system is safe according to the following steps:

          1. Let Work and Finish be vectors of length m and n respectively. Work is a working copy of the available resources; Finish is a vector of booleans indicating whether a particular process can finish. Initialize Work = Available and Finish to false for all elements.

          2. Find an i such that both (A) Finish[i] == false and (B) Need[i] <= Work. This process has not finished, but could with the given available working set. If no such i exists, go to step 4.

          3. Set Work = Work + Allocation[i] and set Finish[i] to true. This corresponds to process i finishing up and releasing its resources back into the work pool. Then loop back to step 2.

          4. If Finish[i] == true for all i, then the state is a safe state, because a safe sequence has been found.

          Resource-Request Algorithm (the Banker's Algorithm): Now we have a tool for determining whether a particular state is safe. This algorithm determines if a new request is safe, and grants it only if it is safe to do so. When a request is made (that does not exceed currently available resources), pretend it has been granted and then see if the resulting state is safe. If so, grant the request; if not, deny it, as follows:

          1. Let Request[n][m] indicate the number of resources of each type currently requested by processes. If Request[i] > Need[i] for any process i, raise an error condition.

          2. If Request[i] > Available for any process i, then that process must wait for resources to become available; otherwise the process can continue to step 3.

          3. Check whether the request can be granted safely by pretending it has been granted and then seeing if the resulting state is safe. If the resulting state is safe, grant the request; otherwise the process must wait until its request can be granted safely.

          The procedure for granting a request (or pretending to, for testing purposes) is:

          Available = Available − Request[i]
          Allocation[i] = Allocation[i] + Request[i]
          Need[i] = Need[i] − Request[i]
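The full multi-resource algorithm above can be sketched with plain lists as the Available, Allocation and Need vectors. This is an illustrative sketch, not a production scheduler; the example data at the bottom is the 12-tape-drive state from earlier (one resource type):

```python
# Element-wise vector helpers for the Banker's algorithm.
def leq(a, b):
    return all(x <= y for x, y in zip(a, b))

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def is_safe(available, allocation, need):
    """Safety algorithm: can every process finish in some order?"""
    work, finish = available[:], [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i in range(len(allocation)):
            if not finish[i] and leq(need[i], work):
                work = add(work, allocation[i])   # i finishes, releases resources
                finish[i] = True
                progressed = True
    return all(finish)

def request(i, req, available, allocation, need):
    """Resource-request algorithm: grant req for process i only if safe."""
    if not leq(req, need[i]):
        raise ValueError("process exceeded its maximum claim")
    if not leq(req, available):
        return False                               # must wait: not enough resources
    # Pretend the request is granted, then test safety.
    avail2 = sub(available, req)
    alloc2 = [row[:] for row in allocation]
    alloc2[i] = add(alloc2[i], req)
    need2 = [row[:] for row in need]
    need2[i] = sub(need2[i], req)
    if is_safe(avail2, alloc2, need2):
        available[:], allocation[i], need[i] = avail2, alloc2[i], need2[i]
        return True
    return False                                   # unsafe: deny, process waits

available = [3]
allocation = [[5], [2], [2]]
need = [[5], [2], [7]]
print(request(2, [1], available, allocation, need))  # False: would be unsafe
print(request(1, [1], available, allocation, need))  # True: still safe
```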

          Unit III Memory management

          Just as processes share the CPU, they also share physical memory. The memory management unit of the OS takes care of memory allocation, deallocation and related issues. A program must be brought into memory for it to be run. Addresses are of two types:

          i) relocatable (or relative) addresses, expressed with respect to the beginning of the program; ii) absolute addresses.

          Q) Address Binding

          Ans: Binding means mapping the logical address space to the physical address space. Address binding can happen at three different stages.

          Compile time: If it is known at compile time where in memory the program will be placed, the compiler generates absolute addresses. Otherwise the compiler generates relocatable addresses.

          Load time: The loader binds the relocatable addresses generated by the compiler to absolute addresses; hence binding is done at load time. If binding is done at compile or load time, physical and logical addresses are the same.

          Execution time: If address binding is done at run time, the process can be moved during its execution from one memory segment to another. Here we call logical addresses virtual addresses. Run-time mapping of virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU).

          Q) Logical vs Physical Address Space

          Ans: Logical address – generated by the CPU; also referred to as a virtual address. Physical address – the address seen by the memory unit. The set of all logical addresses is called the logical address space; the set of all physical addresses is called the physical address space.

          Q) Memory-Management Unit (MMU)

          Ans: The MMU is a hardware device that maps virtual to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by the CPU before it is sent to memory. The user program deals only with logical addresses; it never sees the real physical addresses.

          Q) Dynamic Loading

          Ans: Since physical memory is small, it may not be possible for an entire program to be in main memory, so we can use dynamic loading. It is the responsibility of users to design their programs to take advantage of dynamic loading. Only the main function is loaded into main memory; when main() calls another function, it checks whether that function is in main memory. If not, the loader loads the desired function into main memory and updates the program's address table.

          A routine is not loaded until it is called. We achieve better memory-space utilization, as an unused routine is never loaded.

          Q) Dynamic Linking

          + Linking is postponed until execution time.
          + A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
          + The stub replaces itself with the address of the routine and executes the routine.
          + The operating system is needed to check whether the routine is in another process's memory address space.
          + Dynamic linking is particularly useful for libraries.

          Q) Overlays

          Ans: Overlays are needed when a process is larger than the amount of main memory allocated to it. Only the instructions and data that are needed at any given time are kept in main memory. Overlays can be implemented by the user; programming the design of an overlay structure is complex. Overlays for a two-pass assembler are as shown in the figure.

          Q) Swapping

          Ans: A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. Backing store – a fast disk large enough to store copies of all memory images of all users. Priority-based scheduling uses a variant of this swapping policy called roll out, roll in: if a higher-priority process arrives, the memory manager swaps out a lower-priority process and swaps in the higher-priority process. When the higher-priority process finishes, the lower-priority process can be swapped back into main memory. A process that is swapped out will be swapped back into the same memory space it occupied previously. The major part of the swap time is transfer time. We can swap only an idle process; we cannot swap a process that is waiting for IO.

          Q) Contiguous Memory Allocation

          Ans: Each process is contained in a single contiguous section of memory.

          1. Fixed Size Partition (or) Single-partition allocation

          Divide the main memory into fixed-sized partitions; each partition may contain exactly one process. The relocation register contains the value of the starting physical address; the limit register contains the range of logical addresses. Every address generated by the CPU is checked as follows:

          If logical address < limit register, then the logical address is added to the relocation register to get the corresponding memory address; else a trap to the OS is generated.

          Since every address is checked, we protect the OS and user programs from being modified by the running process.
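The limit-and-relocation check above can be sketched in a few lines. The register values here are made-up example numbers, not real hardware state:

```python
# Sketch of the hardware protection check: every CPU-generated logical
# address is compared with the limit register, then relocated.
LIMIT = 300         # range of the partition's logical addresses (assumed)
RELOCATION = 14000  # starting physical address of the partition (assumed)

def translate(logical):
    if logical < LIMIT:
        return RELOCATION + logical     # relocate into the partition
    raise MemoryError("trap to OS: address out of range")

print(translate(120))  # 14120
```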

          2 Multiple-partition allocation

          1) Fixed Size Partitions: Divide main memory into fixed-sized blocks. Here the memory allocated to a process may be larger than required. The difference between the allocated memory and the requested memory is called internal fragmentation – unused memory inside the partition.

          0   P1  Block0 (0th address to 3rd address)
          4   P1  Block1 (4th address to 7th address)
          8       Block2 (8th address to 11th address)
          12      Block3 (12th address to 15th address)

          If P1 requires just 5 addresses, it is still allocated 2 blocks, i.e. 8 addresses; so the remaining 3 addresses are left unused.

          2) Variable Size Partitions: Here main memory is divided into partitions of variable sizes. The operating system maintains information about:

          a) allocated partitions b) free partitions (holes)

          In the beginning all of main memory is empty and is considered one large block of available memory, a hole.

          Exactly the memory required by a process is given to it. When a process terminates, it releases its memory, which can then be allocated to another process.

          Memory is allocated to processes until finally no available block of memory (hole) is large enough to hold the next process. External fragmentation exists when there is enough total memory to satisfy a request, but the available memory is not contiguous – wastage of memory outside the partitions. One solution to the problem of external fragmentation is:

          a. Compaction: shuffle the memory contents so as to place all free memory together in one large block.

          ===============================================================================

          Q) Dynamic Storage-Allocation Problem (or) most commonly used strategies to select a free hole from the set of available holes

          Ans: To satisfy a request of size n from a list of free holes, the following 3 policies can be used:

          1. First-fit: Allocate the first hole that is big enough.

          2. Best-fit: Allocate the smallest hole that is big enough; must search the entire list if it is not ordered by size. Produces the smallest leftover hole.

          3. Worst-fit: Allocate the largest hole; must also search the entire list. Produces the largest leftover hole.

          First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
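The three placement policies can be sketched over a list of free hole sizes. The hole sizes below are made-up example values:

```python
# Sketch of the three hole-selection policies. Each returns the index of
# the chosen hole, or None if no hole is large enough.
def first_fit(holes, n):
    return next((i for i, h in enumerate(holes) if h >= n), None)

def best_fit(holes, n):
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None   # smallest hole that fits

def worst_fit(holes, n):
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return max(fits)[1] if fits else None   # largest hole overall

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # 1 -> the 500 hole (first big enough)
print(best_fit(holes, 212))   # 3 -> the 300 hole (smallest leftover)
print(worst_fit(holes, 212))  # 4 -> the 600 hole (largest leftover)
```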

          ===============================================================================

          Q) Non-Contiguous Memory Allocation

          1. Paging: Paging is a memory management scheme that provides non-contiguous memory allocation.

          0   P1  Block0 (0th address to 3rd address)
          4   P2  Block1 (4th address to 7th address)
          8       Block2 (8th address to 11th address)
          12  P1  Block3 (12th address to 15th address)

          The logical address space of a process can be noncontiguous. Ex: P1 is allocated Block0 and Block3.

          1. Divide physical memory into fixed-sized blocks called frames (size is a power of 2).
          2. Divide logical memory into blocks of the same size called pages.
          3. A logical address is divided into 2 parts:

          Page number (p) – the page table is indexed by page number.
          Page offset (d) – the displacement within the page.

          4. A page table is allocated to each process (a pointer to the page table is stored in the PCB of the process). The page table translates logical to physical addresses: page 0 is in frame 1, page 1 is in frame 4, etc.
          5. Internal fragmentation may occur due to paging.

          6. If the size of the logical address space = 2^m and the page size = 2^n, then the higher-order m − n bits of the logical address give the page number and the n lower-order bits give the displacement within the page.

          Ex: size of logical address space = 8 = 2^3, so m = 3; page size = 4 = 2^2, so n = 2.

          So m − n = 1 bit for the page number (to represent pages 0 and 1), and the remaining n = 2 bits for the displacement within the page.
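The page-number/offset split from the example above (m = 3, n = 2) can be sketched with shifts and masks:

```python
# Sketch of the address split for the example above: the top m - n = 1 bit
# is the page number and the low n = 2 bits are the offset.
m, n = 3, 2

def split(logical):
    page = logical >> n                 # high-order m - n bits
    offset = logical & ((1 << n) - 1)   # low-order n bits
    return page, offset

print(split(6))  # (1, 2): address 6 = 0b110 -> page 1, offset 2
```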

          Q) Implementation of Page Table

          The page table can be kept as:

          1) A set of dedicated registers – efficient if the page table is small.

          2) In memory – the page-table base register (PTBR) points to the page table, and the page-table length register (PTLR) indicates its size. Changing the page table then requires changing only the values in these registers.

          Advantage of keeping the page table in memory: a. less context-switch time.

          Disadvantage: b. two memory accesses are required – one memory access for the page table and another to access the required memory address.

          This problem can be solved by using a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB).

          i. Each entry in the TLB consists of 2 parts:

          I) a page number field

          II) a value field, or frame number field

          ii. When a logical address is generated by the CPU, its page number is presented to the TLB. The page number is compared with all the entries of the TLB simultaneously.

          If the page number is found in the TLB, its frame number is immediately available – a TLB hit. If the page number is not found, check the page table – a TLB miss.

          By using a TLB, the search is fast; but TLB hardware is expensive, so the TLB size is kept small. Hence the TLB contains only a few frequently used page table entries.
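The hit/miss flow above can be sketched with dictionaries standing in for the TLB and the page table. The page size and mappings below are made-up example values:

```python
# Sketch of address translation with a small TLB in front of the page table.
PAGE_SIZE = 4096
page_table = {0: 1, 1: 4, 2: 7}   # page -> frame (assumed mappings)
tlb = {0: 1}                      # a few recently used entries

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)
    if page in tlb:                # TLB hit: frame number is immediate
        frame = tlb[page]
    else:                          # TLB miss: fall back to the page table
        frame = page_table[page]
        tlb[page] = frame          # cache the translation for next time
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 4 -> 16388
```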

          Q) Protection in Paging: A valid/invalid bit is attached to each entry in the page table. The OS sets this bit for each page to allow or disallow access to the page. When the bit is set to invalid, the page is not in the process's logical address space; an access then generates a trap to the OS.

          Q) Shared pages in Paging

          Reentrant code is code that never changes during execution, so it can be shared.

          1. One copy of read-only (reentrant) code is shared among processes (e.g. text editors, compilers, window systems). Ex: ed1, ed2, ed3 shared between processes P1 and P2.

          2. Shared code must appear in the same location in the logical address space of all processes.

          Each process keeps a separate copy of its private code and data, e.g. data1 and data2.

          The pages for the private code and data can appear anywhere in the logical address space.

          ==============================================================================

          Q) Segmentation

          Segmentation is a memory management scheme that supports the user's view of memory.

          When the user program is compiled the compiler generates segments like

          1) The code segment

          2) Global variables segment

          3) Heap memory segment

          4) Stack segment etc

          Each entry in the segment table has a segment base (the starting physical address of the segment) and a limit (the length of the segment).

          Logical address is divided into 2 parts

          segment-number offset

          Segment table is indexed by segment number

          Segment-table base register (STBR) stores the location of segment table in main memory

          Segment-table length register (STLR) stores number of segments used by a program

          The segment number (say 's') is used to find the entry in the segment table; the required entry is 's' locations from the beginning of the table. Once the required entry in the segment table is found, the offset ('d') is compared with the limit.

          If (offset < limit), then the offset is added to the base entry to generate the physical address.
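The limit check and base addition can be sketched directly. The (base, limit) pairs below are made-up example values, not taken from any real system:

```python
# Sketch of segmentation translation: segment table entries are
# (base, limit) pairs, indexed by segment number.
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]  # assumed values

def translate(s, d):
    base, limit = segment_table[s]
    if d < limit:
        return base + d
    raise MemoryError("trap to OS: offset beyond segment limit")

print(translate(2, 53))  # segment 2, offset 53 -> 4300 + 53 = 4353
```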

          Q) Shared Segments

          Code sharing occurs at the segment level. Shared segments must have the same segment number.

          Allocation is a dynamic storage-allocation problem: use best fit / first fit, which may cause external fragmentation.

          Protection: protection bits (read/write/execute privileges) are associated with segments. Keeping an array in a separate segment lets the hardware check for illegal array indexes.

          ===============================================================================

          Q) Virtual Memory

          Ans: In paging and segmentation, a program will execute only if the entire process is in main memory. With virtual memory, only part of the program needs to be in memory for execution.

          1. A program larger than the available main memory can still run, as only part of the program is loaded into main memory. The logical address space can therefore be much larger than the physical address space.

          2 Allows address spaces to be shared by several processes

          Virtual memory can be implemented via demand paging or demand segmentation.

          The large blank space between the heap and the stack is part of the virtual address space. Virtual memory allows files and memory to be shared by 2 or more processes through page sharing, as shown in figure (b). Figure (a) depicts a virtual memory that is larger than physical memory.

          ==============================================================================

          Q) Demand Paging

          Demand paging is the technique of loading pages from disk into main memory only when a page is needed. Hence, using a smaller amount of physical memory, we get faster response.

          Demand paging is similar to PAGING + SWAPPING

          Instead of swapping a whole process the pager brings only those pages needed into memory

          To distinguish between the pages that are in memory and the pages that are on disk, a valid/invalid bit is used.

          A valid/invalid bit is attached to each entry in the page table. When this bit is set to valid, the page is legal and in memory. When it is set to invalid, either the page is illegal or the page is on disk.

          When the CPU generates a logical address whose page table entry is set to invalid, a page fault occurs, and the required page must be fetched from disk. A section of the hard disk called swap space is used to hold pages that are not present in memory.

          Q) Procedure for Handling a Page Fault

          1) The CPU generates a logical address, and if the valid/invalid bit for the page containing this address is set to invalid, a page fault occurs.

          2) The page fault causes a trap to the operating system.

          3) Check whether the logical address is within the logical address space of the process (i.e. the PTBR and PTLR are checked). If the required logical address is not within the logical address space of the process, terminate the process; else the page is legal but not in memory – it is on disk.

          4) To bring the required page into memory: find a free main memory frame, map the address to a disk block, fetch the disk block, and load it into the free frame.
          5) When the required page is brought into memory, update the page table to indicate that the page is in memory.
          6) Restart the instruction interrupted by the illegal-address trap. The process continues as if the page had always been in memory.

          Q) What is pure demand paging?

          Ans: If no page belonging to the executing process is in main memory, the process will fault for every page it needs; page faults occur until every page that is needed is in memory. This scheme is known as pure demand paging: NEVER BRING A PAGE IN UNTIL IT IS REQUIRED.

          Q) Page Replacement

          1. Find the location of the desired page on the disk.
          2. Find a free frame:
             i) If there is a free frame, use it.
             ii) If there is no free frame, use a page replacement algorithm to select a victim page in memory that is not really in use.
             iii) If the modify bit associated with the victim frame is set to 1, write the page to disk; else there is no need to write it to disk. Change the valid/invalid bit of the victim page's page table entry to invalid, to indicate that the page is no longer in memory.
          3. Read the desired page into the freed frame, and update the frame and page tables.
          4. Restart the user process.

          =================================================================================================

          Q) Page Replacement Algorithms

          Goal: produce a low page-fault rate. The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

          1. First In First Out (FIFO) Page Replacement: Uses the time when a page was brought into memory. The page that was brought into memory first, i.e. the oldest page, is chosen for replacement. We create a FIFO queue to hold all pages in memory.

          When a page is brought into memory, we insert it at the tail of the queue, and we replace the page at the head of the queue.

          FIFO: 15 page faults.

          A problem with FIFO is Belady's anomaly. For the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: when the number of frames allocated to the process is 3, FIFO causes 9 page faults; when the number of frames is 4, FIFO causes 10 page faults. As the number of frames allocated to a process increases, page faults should decrease, but with FIFO replacement the page faults can increase.

          2. Optimal Page Replacement Algorithm: Replace the page that will not be used for the longest period of time. Used for measuring how well other algorithms perform. It is not practical to implement OPT, as it is not possible to know what the future references will be. Ex:

          Optimal page replacement: 9 page faults.

          3. Least Recently Used (LRU) Page Replacement: Replace the page that has not been used for the longest period of time. LRU makes use of the time of last use of each page. Ex:

          Number of page faults = 12.

          LRU requires hardware assistance to determine the time of last use of a page. We can use 1) a stack or 2) a counter to implement the LRU page replacement algorithm.

          1) Stack implementation – keep a stack of page numbers in doubly linked form. When a page is referenced, it is moved to the top of the stack. Each such move requires changing 6 pointers.

          2) Counter implementation – a time-of-use field is added to each page table entry. The CPU maintains a counter that is incremented on every memory reference; on each page reference, the counter value is copied into the time-of-use field of the referenced page.
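Regardless of implementation, the fault counts of FIFO and LRU can be checked with a small simulation (the reference string is the one from the Belady example above):

```python
# Sketch comparing FIFO and LRU page faults on a reference string.
from collections import OrderedDict

def fifo_faults(refs, frames):
    queue, faults = [], 0
    for p in refs:
        if p not in queue:
            faults += 1
            if len(queue) == frames:
                queue.pop(0)               # evict the oldest page
            queue.append(p)
    return faults

def lru_faults(refs, frames):
    cache, faults = OrderedDict(), 0       # ordered oldest-use -> newest-use
    for p in refs:
        if p in cache:
            cache.move_to_end(p)           # mark as most recently used
        else:
            faults += 1
            if len(cache) == frames:
                cache.popitem(last=False)  # evict the least recently used
            cache[p] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10  (Belady's anomaly)
print(lru_faults(refs, 3), lru_faults(refs, 4))    # 10 8  (no anomaly)
```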

          Disadvantages of the LRU page replacement algorithm: updating the counter or stack must be done for every memory reference. HENCE LRU IS SLOW, and implementation requires hardware assistance.

          ==============================================================================

          Q) LRU Approximation Page Replacement

          1) Using a reference bit
          2) Additional-reference-bits algorithm
          3) Second-chance algorithm
          4) Enhanced second-chance algorithm
          5) Counting-based page replacement, which has two schemes:
             1) Least Frequently Used (LFU)
             2) Most Frequently Used (MFU)
          6) Page-buffering algorithms

          Q) What is Thrashing?

          Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur when global (or) local page replacement is used, as described later. The solution to thrashing is the working-set strategy: if we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know how many frames a process needs, we use the locality model of process execution.

          Here a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes locality.

          The working-set model uses a working set and a parameter Δ, the working-set window. The set of pages in the most recent Δ page references is the working set. Let WSSi be the working-set size of process Pi, i.e. Pi needs WSSi frames. Let D be the total demand for frames, D = Σ WSSi, and let m be the total number of available frames.

          If D > m, thrashing occurs, because some processes will not have enough frames. So:

          If (D > m), the OS selects a process to suspend; else the OS may initiate another process.

          The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.

          ========================================================================

          Q) How does thrashing occur in global and local page replacement?

          Ans: In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, possibly causing a page fault for some other process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more processes queue up for the paging device,

          the READY queue empties and the CPU becomes idle. So the OS introduces new processes while the CPU is free, which further increases the number of page faults.

          In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs, only the frames allocated to that process will be replaced. When one process starts thrashing, other processes are not directly affected. But since the processes will be in the queue for the paging device most of the time, the average service time for a page fault increases; hence the effective access time increases even for a process that is not thrashing.

          Q) Page Table Structure, or structure of the page table in memory

          1. Hierarchical Paging 2. Hashed Page Tables 3. Inverted Page Tables

          Hierarchical Paging If Page table is large then we break up the page table into multiple page tables ie page table is paged A simple technique is a two-level page tableTwo-Level Paging Example A logical address (on 32-bit machine with 4K page size) is divided into 1 a page number consisting of 20 bits 2 a page offset consisting of 12 bitsSince the page table is paged the page number is further divided into 1 a 10-bit page number 2 a 10-bit page offset Thus a logical address is as follows page number page offset

          p1 | p2 | d
          10 | 10 | 12

          where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table. Two-Level Page-Table Scheme

          Address-Translation Scheme: address-translation scheme for a two-level 32-bit paging architecture.
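As an illustration (not part of the original notes), the 10/10/12 split above can be sketched in Python; the function name and the sample address are made up:

```python
# Hypothetical sketch: splitting a 32-bit logical address for two-level
# paging into (p1, p2, d) with a 10-bit p1, 10-bit p2, 12-bit offset.

def split_address(addr):
    d = addr & 0xFFF            # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF   # next 10 bits: index into inner page table
    p1 = (addr >> 22) & 0x3FF   # top 10 bits: index into outer page table
    return p1, p2, d

print(split_address(0x00403004))  # (1, 3, 4)
```

The outer table is indexed by p1, the selected inner table by p2, and d is added to the frame's base address.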

          Hashed Page Tables (or) Hash table: hashed page tables are common in address spaces > 32 bits. The hash table is indexed by hash value, and each entry in the hash table contains a linked list of elements that hash to the same location. Each element consists of 3 fields:

          1. Page number
          2. Frame number
          3. Pointer to the next element in the linked list

          The algorithm works as follows: the page number is passed to a hash function to get a hash value. Locating the hash value in the hash table is easy, as the table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address; if there is no match, the next entries in the linked list are searched.
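The lookup just described can be sketched as follows; this is an illustrative toy (the hash function, table size, and page/frame values are all made up):

```python
# Minimal sketch of a hashed page table with chaining.

TABLE_SIZE = 16

def make_table():
    return [[] for _ in range(TABLE_SIZE)]

def insert(table, page, frame):
    table[page % TABLE_SIZE].append((page, frame))  # toy hash: page mod size

def lookup(table, page):
    # walk the chain at the hashed slot, comparing page numbers
    for p, f in table[page % TABLE_SIZE]:
        if p == page:
            return f
    return None  # no match: page fault

t = make_table()
insert(t, 5, 42)
insert(t, 21, 7)     # 21 % 16 == 5: collides with page 5, same chain
print(lookup(t, 21)) # 7
```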

          Inverted Page Table

          An inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table has:

          Process-id Page number

          A logical address also has 3 parts: Process-id, Page number, and

          Offset. The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, then frame number i combined with the offset gives the physical address; if there is no match, the address is illegal.
          Advantages and disadvantages

          It decreases the memory needed to store each page table, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to

          limit the search to one (or at most a few) page-table entries. Inverted Page Table Architecture
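The linear search over (process-id, page number) pairs can be sketched as below; the process names, page numbers, and page size are illustrative, not from the notes:

```python
# Illustrative inverted page table: one entry per frame, searched linearly.

table = [("P1", 0), ("P2", 3), ("P1", 2)]  # entry at index i describes frame i

def translate(pid, page, offset, page_size=4096):
    for frame, entry in enumerate(table):
        if entry == (pid, page):
            # frame number i plus offset gives the physical address
            return frame * page_size + offset
    raise ValueError("illegal address")  # no match

print(translate("P1", 2, 100))  # frame 2 -> 2*4096 + 100 = 8292
```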

          Unit-IV CPU Scheduling

          Basic concepts: to maximize CPU utilization, some process must be running at all times.

          In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns it to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

          CPU-I/O burst cycle: process execution begins with a CPU burst, which is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

          An I/O-bound program has many short CPU bursts; a CPU-bound program has a few long CPU bursts.

          Schedulers: modules in the OS that make scheduling decisions.
          - Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
          - Medium-term scheduler: selects which processes should be swapped in/out of memory.
          - Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (CPU) scheduler.

          Dispatcher: gives CPU control to the process selected by the short-term scheduler. This function involves the following:

          1. switching context
          2. switching to user mode
          3. jumping to the proper location in the user program to restart that program

          The time taken to stop one process and start another is known as dispatch latency. This must be kept small.
          Ready Queue: the ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are PCBs of processes.

          Pre-emptive Scheduling

          CPU scheduling decisions take place under one of four conditions

          1. When a process switches from the running state to the waiting state
          2. When a process switches from the running state to the ready state, for example in response to an

          interrupt
          3. When a process switches from the waiting state to the ready state, say at completion of I/O
          4. When a process terminates

          For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different

          one. If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-

          preemptive or cooperative. In non-preemptive scheduling, a process runs until it gives up the CPU itself, i.e., when it waits for I/O or when it completes. Otherwise, the system is said to be pre-emptive.

          Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to Mac OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

          Note that pre-emptive scheduling can cause problems when two processes share data because one process may get interrupted in the middle of updating shared data structures

          Preemption can also be a problem if the kernel is busy with a system call (e.g., updating critical kernel data structures) when the preemption occurs. Solution: wait until the system call has either completed or blocked before allowing the pre-emption. But this solution is problematic for real-time systems.

          Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. But this should be done only for very short pieces of code that take little CPU time.

          Scheduling criteria used to evaluate a CPU scheduling algorithm:
          1. CPU utilization: in a real system, CPU utilization ranges from 40% to 90%.
          2. Throughput: number of processes completed per unit time.
          3. Turnaround time: time when the process completed minus time when the process was submitted; i.e., it includes waiting time and burst time.
          4. Waiting time: amount of time a process spends waiting for the CPU.
          5. Response time: time when the first response came minus time when the process was submitted.

          Optimization criteria: max CPU utilization, max throughput, min turnaround time, min waiting time, min response time.

          CPU Scheduling Algorithms
          1) First-Come First-Served (FCFS) Scheduling
          FCFS is non-pre-emptive: processes are scheduled in the order they arrive. FCFS is implemented with a queue; when the CPU is free, it is allocated to the process at the head of the queue.
          Advantages: easy to implement.
          Disadvantages:
          1. Average waiting time is quite long.

          2. Convoy effect (it occurs as small processes wait in the queue for a big process to leave the CPU).
          Example:
          Process   Burst Time
          P1        24
          P2        3
          P3        3

          Suppose that the processes arrive in the order P1 P2 P3

          The Gantt Chart for the schedule is P1 P2 P3

          0         24   27   30
          Waiting time for P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.
          2) Shortest-Job-First (SJF) Scheduling
          Schedule the process with the shortest burst time; if the CPU burst times of two processes are the same, FCFS is used.
          Advantages: average waiting time decreases.
          Disadvantages: it is difficult to know the length of the next CPU burst.
          SJF can be either pre-emptive or non-pre-emptive:
          1. Non-pre-emptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
          2. Pre-emptive SJF (or Shortest-Remaining-Time-First, SRTF): if the newly arrived process is shorter than what remains of the currently executing process, pre-empt the currently executing process.

          Example:
          Process   Arrival Time   Burst Time
          P1        0              7

          P2        2              4
          P3        4              1

          P4        5              4

          SJF (non-preemptive) Gantt Chart P1 P3 P2 P4

          0    7    8    12    16
          Average waiting time = [0 + (8-2) + (7-4) + (12-5)]/4 = 4
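The non-preemptive SJF schedule above can be checked with a short sketch (my own illustration; the function name is made up): at each scheduling point it picks the arrived, unfinished process with the shortest burst.

```python
# Non-preemptive SJF: returns per-process waiting times.

def sjf_waits(procs):  # procs: list of (name, arrival, burst)
    t, done, waits = 0, set(), {}
    while len(done) < len(procs):
        ready = [p for p in procs if p[0] not in done and p[1] <= t]
        if not ready:
            # CPU idle: jump to the next arrival
            t = min(p[1] for p in procs if p[0] not in done)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])
        waits[name] = t - arrival   # waited from arrival until dispatch
        t += burst                  # runs to completion (non-preemptive)
        done.add(name)
    return waits

w = sjf_waits([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(w, sum(w.values()) / 4)  # waits 0, 6, 3, 7 -> average 4.0
```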

          Example of Preemptive SJF

          Process   Arrival Time   Burst Time
          P1        0              7
          P2        2              4
          P3        4              1
          P4        5              4

          SJF (preemptive) P1 P2 P3 P2 P4 P1

          0   2   4   5   7   11   16
          Average waiting time = (9 + 1 + 0 + 2)/4 = 3
          3) Priority Scheduling
          A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority); if two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. preemptive  2. non-preemptive.

          Problem: starvation, i.e., low-priority processes may never execute.
          Solution: aging, i.e., as time progresses, increase the priority of the process.

          4) Round Robin (RR)
          Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
          Performance:
          1. If q is very large, RR degenerates to FIFO (FCFS).
          2. If q is small, q must still be large with respect to the context-switch time; otherwise the overhead is too high.
          Example of RR with Time Quantum = 4:
          Process   Burst Time
          P1        24
          P2        3
          P3        3
          The Gantt chart is: P1 P2 P3 P1 P1 P1 P1 P1

          0    4    7    10    14    18    22    26    30
          Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 = 5.66
          5) Multilevel Queue Scheduling
          The ready queue is partitioned into separate queues. For example, the ready queue is divided into 2 queues:

          1. foreground (interactive) queue
          2. background (batch) queue

          Each queue has its own scheduling algorithm. For example, the Round Robin scheduling algorithm can be used in the foreground queue and

          the FCFS scheduling algorithm in the background queue. Scheduling must also be done between the queues. This can be done in 2 ways:
          1. Fixed-priority scheduling

          The foreground queue has the highest priority: all the processes in the foreground queue must be completed, and the queue must be empty, before the processes in the background queue are given the CPU, as shown in the figure below. Disadvantage: starvation.

          2. Time slice: each queue gets a certain amount of CPU time which it can schedule among its processes, e.g., 80% to the foreground queue (RR) and 20% to the background queue (FCFS).

          6) Multilevel Feedback Queue Scheduling
          A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes stay in the highest-priority queue, a process that takes more CPU time is moved to a lower-priority queue, and a process that waits too long in a lower-priority queue is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues Q0, Q1, Q2:

          1. Q0: time quantum 8 milliseconds
          2. Q1: time quantum 16 milliseconds
          3. Q2: FCFS

          Scheduling

          1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 milliseconds; if it does not finish in 8 milliseconds, it is moved to queue Q1.
          2. At Q1 the process receives 16 additional milliseconds; if it still does not complete, it is moved to queue Q2.

          Q) Synchronization
          Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

          An example of a process synchronization problem is the Producer-Consumer (or Bounded-Buffer) problem: make sure that the producer process does not add data when the buffer is full, and that the consumer process does not take data when the buffer is empty.

          The code for the producer process can be written as follows:

          while (true) {
              /* produce an item in nextProduced */
              while (counter == BUFFER_SIZE)
                  ;   /* do nothing: buffer is full */
              buffer[in] = nextProduced;
              in = (in + 1) % BUFFER_SIZE;
              counter++;
          }

          Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed from the buffer. in is an index that always points to the next empty slot after the last filled slot; out is an index that points to the first filled slot, at the head of the circular queue.

          The code for the consumer process is shown below:

          while (true) {
              while (counter == 0)
                  ;   /* do nothing: buffer is empty */
              nextConsumed = buffer[out];
              out = (out + 1) % BUFFER_SIZE;
              counter--;
          }

          The above producer and consumer code produces a RACE CONDITION. A race condition occurs when the outcome of execution of several co-operating processes depends on the order in which accesses to shared data take place. For example, let counter = 5.

          Given counter = 5, if the producer produces one item and the consumer consumes one item, the correct value of counter is still 5. But if the machine instructions for counter++ and counter-- interleave, the result can be counter = 4, and if instruction T5 is executed before T4, counter = 6; both are inconsistent results.

          Solution to avoid the race condition: make sure that only one process at a time updates the shared variable (here, counter).
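That rule can be demonstrated with Python threads and a lock (my own sketch, not from the notes; the lock plays the role of the mutual-exclusion mechanisms discussed later):

```python
# Two threads update a shared counter; the lock makes each update atomic,
# so the interleaving problem above cannot occur.

import threading

counter = 0
lock = threading.Lock()

def producer_like(n):
    global counter
    for _ in range(n):
        with lock:          # critical section: counter++
            counter += 1

def consumer_like(n):
    global counter
    for _ in range(n):
        with lock:          # critical section: counter--
            counter -= 1

t1 = threading.Thread(target=producer_like, args=(100000,))
t2 = threading.Thread(target=consumer_like, args=(100000,))
t1.start(); t2.start(); t1.join(); t2.join()
print(counter)  # always 0 with the lock held during each update
```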

          Q) Critical section problem
          Ans Each process has a segment of code called the critical section. Critical section code is the code that accesses the variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute critical section code at the same time. For this to happen:
          1. Each process must request permission to enter its critical section (entry section).
          2. The end of the critical section is marked (exit section).
          3. The rest of the code after the critical section is the remainder section.

          A solution to the critical section problem must satisfy the following three requirements:
          1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
          2. Progress: only processes that are not in their remainder sections take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
          3. Bounded waiting: every process must eventually be allowed to enter its critical section; no process may wait indefinitely.

          Q) Peterson's solution for the critical section problem (software-based solution)
          Ans
          Time   Process 1                              Process 2
          T0     do {                                   do {
                   flag[1] = TRUE; turn = 2;              flag[2] = TRUE;
          T1                                              turn = 1;
          T2       while (flag[2] && turn == 2);          while (flag[1] && turn == 1);
                   (turn is now 1, so the loop exits)     (loops until flag[1] becomes FALSE)
          T3       enters critical section
          T4       flag[1] = FALSE;                       enters critical section
          T5       enters remainder section               flag[2] = FALSE;
                 } while (TRUE);                          enters remainder section
                                                        } while (TRUE);

          10.1 File Concept
          Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure, which depends on its type.

          File Attributes

          1. Name: name of the file, for user understanding.
          2. Identifier: used by the OS to identify a file (e.g., inode number).
          3. Type: text files, executable files, binary files, etc.
          4. Location: location of the file on the hard drive.
          5. Size: current size of the file.
          6. Protection: controls who can read, write, or execute the file.
          7. Time, date, and user identification: when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

          All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. The directory structure is kept on disk.

          File Operations

          The file Abstract Data Type supports many common operations:
          1. Creating a file: find free space on disk, add an entry to the directory.
          2. Writing a file: write data at the current file-position pointer location and update the pointer.
          3. Reading a file: read data at the current file-position pointer location and update the pointer.
          4. Repositioning within a file (file seek): change the pointer location to a given value.
          5. Deleting a file: free the space allocated to the file and remove its entry from the directory.
          6. Truncating a file: delete the data and update the file size.

          Open File Table

          Most OSes require that files be opened before access and closed after access. Information about all files that are currently open in the system is stored in an open file table. This

          system-wide open file table is maintained by the OS, and information about the files opened by a process is stored in that process's own open file table. The open file table has the following information:

          1. File pointer: records the current position in the file, for the next read or write.
          2. File-open count: how many times the file has been opened (possibly simultaneously by

          different processes) and not yet closed. When this counter reaches zero, the file's entry can be removed from the table.

          3. Disk location of the file.
          4. Access rights.

          Some systems provide support for file locking:
          1. A shared lock is for reading only.
          2. An exclusive lock is for writing as well as reading.
          3. With a mandatory lock, the OS makes sure the locks are acquired and released appropriately.
          4. With an advisory lock, software developers must make sure the locks are acquired and released appropriately.
          5. UNIX uses advisory locks and Windows uses mandatory locks.

          File Types

          Implementing file types: a common technique is to include the type as part of the file name. The name is split into 2 parts:

          1. File name  2. Extension

          The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on it. For example, a user opens Microsoft Word, clicks on the File menu and the Open option, and specifies only the file name without the extension; the application looks for a file with the given name and the extension it expects. In this sense extensions are merely hints and need not be enforced by the operating system. UNIX uses a crude magic number stored at the beginning of the file to indicate the type of file. In the Mac OS X operating system, a file produced by a word processor has the word processor's name recorded as its creator; when the user double-clicks the file, Word opens it. Some common file types are shown below.

          File Structure: some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine

          where in memory to load the file and what the location of the first instruction is.

          If the OS supports many file formats, the size and complexity of the OS increase. UNIX treats all files as sequences of bytes with no internal structure (with the exception of

          executable binary programs, which it must know how to load, including finding the first executable statement).

          Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

          Internal File structure

          Disk files are accessed in units of physical blocks, typically 512 bytes.

          Internally, files are organized in logical units such as a single byte, a

          data record, or a structure.

          The number of logical units that fit into one physical block determines the packing. For example, if a student file is organized internally as records of 64 bytes each, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e., 8 records can be stored per block.

          Internal fragmentation may occur due to this
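The packing arithmetic above can be sketched as follows (my own illustration; the 100-byte record size is a hypothetical contrast case, not from the notes):

```python
# Records per block and per-block internal fragmentation.

def packing(block_size, record_size):
    per_block = block_size // record_size           # whole records per block
    wasted = block_size - per_block * record_size   # leftover bytes per block
    return per_block, wasted

print(packing(512, 64))   # (8, 0): records fit exactly, no waste
print(packing(512, 100))  # (5, 12): 12 bytes wasted per block
```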

          10.2 Access Methods

          The information in a file can be accessed in the following ways:
          1. Sequential access
          2. Direct access (or) Relative access

          1. Sequential access: records are read and written sequentially, one record after another; we cannot randomly access an arbitrary record.

          A sequential access file emulates magnetic tape operation and generally supports the following operations:
          - read next: read a record and advance to the next position
          - write next: write a record and advance to the next position
          - rewind
          - skip n records: may or may not be supported; n may be limited to positive numbers, or

          limited to +/- 1

          2. Direct access (or) Relative access: a file is made up of fixed-length records, so we can randomly access any record.

          No restriction on order of reading or writing

          Direct access files are useful when we need to access large amounts of data randomly

          Ex database

          Operations supported include (n is a relative block number, i.e., relative to the beginning of the file):

          1. read n: read record number n
          2. write n: write record number n
          3. jump to record n: n could be 0 or the end of file
          4. query current record: used to return to this record later

          Sequential access can be easily emulated using direct access; the inverse is complicated and

          inefficient.

          3. Other access methods can be built on top of the direct access method. We construct an index for the file which contains pointers to the various blocks; to search the file, we first access the index and then use the pointer to access the file.

          If a student file has 120,000 records sorted by student id, we can create an index consisting of the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120,000 records = 120000/64 = 1875 blocks, so the index has 1875 entries. The index is kept in main memory so that searching becomes faster.

          If the index itself becomes large we create an index for index file (Ex B-tree)
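The index lookup described above can be sketched in Python (an illustration with invented student ids; `find_block` and the id spacing are my own):

```python
# Index over blocks: the index holds the first student id in each block,
# and a binary search on it tells us which block to read from disk.

import bisect

RECORDS_PER_BLOCK = 64

ids = list(range(0, 4000, 2))  # hypothetical sorted ids: 0, 2, 4, ...
index = [ids[i] for i in range(0, len(ids), RECORDS_PER_BLOCK)]

def find_block(student_id):
    # rightmost index entry <= student_id gives the block number
    return bisect.bisect_right(index, student_id) - 1

print(find_block(130))  # id 130 lies in the second block (block 1)
```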

          Q) Allocation Methods

          To allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

          Three major methods of allocating disk space are

          1 Contiguous 2 Linked

          3 Indexed

          1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

          If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

          The directory entry for each file indicates the address of the starting block (i.e., b) and the length (n).

          Advantages

          1. Direct access to block i is easy: it is simply block b+i. Since the file is stored sequentially, sequential access is also easy.

          2. Supports both direct and sequential access.
          3. The number of disk seeks required is minimal.
          4. Seek time is minimal.

          Disadvantages

          1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

          2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to it. Solution: copy all the files onto another disk or tape and then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

          3. The size of the file must be known when the file is created. Even when the final size is known, the entire space may not be used at the beginning, which wastes space.

          If the size of the file is not known in advance, then (i) if we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This is possible as long as such a hole exists, but it is time consuming. (ii) If we overestimate the amount of space needed, much of the space goes unused, leading to internal fragmentation.

          To minimize these drawbacks, some OSes use a modified contiguous allocation scheme: a contiguous chunk of space is allocated at first, and if the file needs more blocks, another chunk of contiguous space, known as an extent, is added. The directory entry then holds the location and block count, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable size, external fragmentation occurs.

          2. Linked Allocation: each file is a linked list of disk blocks, and the blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, the user sees blocks of 512-4 = 508 bytes. To create a new file, we simply create a new entry in the directory; the pointer is nil for an empty file, and the size field of an empty file is 0.

          To add more data to an existing file, a free block is found, data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.

          Advantages
          1. Easy to find space for a new file or to add more data to an existing file.
          2. The size of the file need not be declared when the file is created.
          3. There is no external fragmentation.

          Disadvantages
          1. Seek time is greater.
          2. The number of disk seeks required is greater.
          3. Pointers use a noticeable percentage of the file's disk space.
          4. Inefficient for direct access files: to access the i-th block, i disk reads are required.
          5. Not reliable: pointers may be lost or damaged.

          A solution to disadvantage 3 is to group multiple blocks into clusters (e.g., 4 blocks = 1 cluster) and allocate clusters instead of blocks to a file. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk accesses, reduces the space wasted on pointers (only a few pointers are needed), and simplifies free-list management. Disadvantage: internal fragmentation, i.e., more space is wasted when a cluster is partially full.

          A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

          A variation of linked allocation is the use of a file allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

          Advantages

          1. Easy to find space for a new file or to add more data to an existing file.
          2. Random access time is improved.

          Disadvantage: the number of disk seeks is greater. The disk head must move to read the FAT and find the location of a block, then move to the location of the block itself.
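The FAT chaining described above, using the 5-block example file from the linked-allocation section, can be sketched as follows (a toy in-memory table; the EOF marker value is my own choice):

```python
# FAT sketch: fat[i] holds the next block of the file occupying block i,
# 0 marks a free block, and EOF marks the last block of a file.

EOF = -1

fat = [0] * 32
# file starts at block 9 and occupies blocks 9 -> 16 -> 1 -> 10 -> 25
fat[9], fat[16], fat[1], fat[10], fat[25] = 16, 1, 10, 25, EOF

def file_blocks(start):
    blocks, b = [], start
    while b != EOF:
        blocks.append(b)
        b = fat[b]      # follow the chain through the table
    return blocks

print(file_blocks(9))   # [9, 16, 1, 10, 25]
```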

          3 Indexed allocation

          Each file has its own index block, which is an array of disk-block addresses. The i-th entry in the index block points to the i-th block of the file.

          The directory contains the address of the index block

          When the file is created all pointers in the index block are set to nil

          To add more data to existing file free block is found and data is written to it The address of block is put in the index block

          Advantages

          1. Easy to find space for a new file or to add more data to an existing file.
          2. Supports direct (random) access efficiently.
          3. No external fragmentation.

          Disadvantage

          1. Seek time is greater.
          2. The number of disk seeks required is greater.
          3. Not reliable.
          4. The index block must be kept in memory. If this memory is not available, we have to read the index

          block and then the desired data block, i.e., 2 disk accesses are required. To access a block near the end of a large file, we may need to read a chain of index blocks before reading the needed data block.

          5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

          Various mechanisms followed for size of index block

          1. Linked scheme: the size of an index block is one disk block. The index block contains a small header with the name of the file and a set of the first 100 disk-block addresses; the last address holds the address of another index block, for large files.

          2. Multilevel index: to access a block, the OS uses a first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

          3 combined scheme

          For example, say there are 15 pointers in the file's inode. The first 12 of these pointers hold addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing addresses of the file's data blocks;

          the second points to a double indirect block, and the third points to a triple indirect block, as shown below.
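The capacity of such a combined scheme is easy to compute. The sketch below assumes 4 KB blocks and 4-byte block pointers; these numbers are illustrative, not taken from the notes:

```python
# Back-of-the-envelope capacity of the combined (inode) scheme:
# 12 direct pointers, plus single, double, and triple indirect blocks.

BLOCK = 4096
PTRS = BLOCK // 4              # pointers per index block = 1024

direct = 12 * BLOCK            # 12 direct pointers  -> 48 KB
single = PTRS * BLOCK          # single indirect     -> 4 MB
double = PTRS ** 2 * BLOCK     # double indirect     -> 4 GB
triple = PTRS ** 3 * BLOCK     # triple indirect     -> 4 TB

print(direct + single + double + triple)  # maximum file size in bytes
```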

          Q) Free space management

          To keep track of free disk space the system maintains free space list To create a file we search free space list and allocate the required space to new file

          The free space list can be implemented as

          1. Bit vector (or) Bit map
          2. Linked list
          3. Grouping
          4. Counting
          5. Space maps

          1. Bit vector (or) Bit map: each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000...

          Advantages simple and easy to get contiguous files

          Disadvantages

          1. The bit map requires extra space.
          2. The bit map is inefficient if kept on disk. Bit maps of smaller disks can be kept in main memory, but it may

          not be possible to keep the bit map of a large disk in main memory.
          3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB

          disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.
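A bit-vector free list for the example above can be sketched with a simple list of bits (my own illustration; a real implementation would pack the bits into words):

```python
# Bit vector free-space management: bit 1 = free, bit 0 = allocated.

bitmap = [0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0]  # blocks 2,3,4,5,8 free

def first_free(bm):
    for i, bit in enumerate(bm):
        if bit == 1:
            return i
    return None  # disk full

def allocate(bm):
    i = first_free(bm)
    if i is not None:
        bm[i] = 0   # mark the block allocated
    return i

print(first_free(bitmap))                 # 2
print(allocate(bitmap), first_free(bitmap))  # 2 3
```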

          2. Linked List: the free-space list is implemented by linking all the free disk blocks together; a pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
          Advantage: no waste of space.
          Disadvantage: cannot get contiguous space easily, i.e., to traverse the free-space list we must read each block, which requires substantial I/O time.

          3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last one contains the addresses of another n free blocks, and so on.
          Advantage: a large number of free blocks can be found quickly.
          4. Counting: the free-space list can be maintained by keeping the address of the first free block and the number (n) of free contiguous blocks that follow it.

          Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if the count is > 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

          5. Space maps: ZFS divides disk space into chunks of manageable size called metaslabs, and each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

          Q) Synchronization Hardware
          Ans The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
          1. The TestAndSet() hardware instruction
          2. The Swap() instruction

          1 Test and Set instruction is as shown

          Solution using TestAndSet: the shared variable lock is initialized to false.

          2. The Swap() instruction is defined as follows.

          Solution using Swap(): The shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.
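The Swap()-based lock can be sketched the same way; again the internal threading.Lock merely simulates the instruction's hardware atomicity.

```python
import threading

_atomic = threading.Lock()
lock = [False]          # shared Boolean variable lock, initialized to false
counter = 0

def swap(a, b):
    """Atomically exchange the contents of two one-element lists."""
    with _atomic:
        a[0], b[0] = b[0], a[0]

def worker():
    global counter
    key = [True]                    # each process has a local variable key
    for _ in range(1000):
        key[0] = True
        while key[0]:
            swap(lock, key)         # spin until swap() hands back false
        counter += 1                # critical section
        lock[0] = False             # release the lock

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000
```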

          Both the TestAndSet() and Swap() instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee about the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee of the relative speeds of the processes, a very fast process could release the lock, run through its remainder section, and re-acquire the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

          The figure below illustrates a TestAndSet()-based solution that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes contending for the critical section.

          Bounded-waiting mutual exclusion with TestAndSet( )

          On exit, a process first scans the waiting array in order (starting with the next process on the list) for a process that has been waiting; if it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process while continuing to block all the others.

          Q) What is a semaphore?
          Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through two standard atomic operations: 1. wait() 2. signal()

          Wait primitive:

          wait(S) {
              while (S <= 0)
                  ;   // do nothing
              S--;
          }

          Signal primitive:

          signal(S) {
              S++;
          }

          Each process that wishes to use a resource performs a wait() operation; when a process releases a resource, it performs a signal() operation.

          Semaphore Usage: 1. A binary semaphore (mutex lock) can take only the values 0 and 1.

          2. A counting semaphore can take any integer value. It is used to count the number of remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources. When the counter reaches 0 (or goes negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal() call.
          For example, suppose there are 3 resources and 4 processes, so the counting semaphore starts at 3:
          If process P1 requires a resource, it performs wait(); the counting semaphore becomes 2.
          If process P2 requires a resource, it performs wait(); the counting semaphore becomes 1.
          If process P3 requires a resource, it performs wait(); the counting semaphore becomes 0.
          If process P4 requires a resource, it performs wait(); P4 waits until a resource is available.
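The 3-resources / 4-processes example above can be run directly with Python's built-in counting semaphore; the process names and sleep time below are illustrative.

```python
import threading
import time

pool = threading.Semaphore(3)   # counting semaphore initialized to 3
guard = threading.Lock()
in_use = []
max_in_use = 0

def worker(name):
    global max_in_use
    pool.acquire()              # wait(): the 4th process blocks here
    with guard:
        in_use.append(name)
        max_in_use = max(max_in_use, len(in_use))
    time.sleep(0.05)            # use the resource
    with guard:
        in_use.remove(name)
    pool.release()              # signal(): free one resource

threads = [threading.Thread(target=worker, args=("P%d" % i,)) for i in range(1, 5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_in_use)  # never exceeds 3
```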

          3. Semaphores can also be used to synchronize the order of certain operations.

          Semaphore Implementation: With the definition above, while one process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spinlock, as the process "spins" while waiting. Semaphores can also enforce ordering: for example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

          First, we create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code:

          S1;
          signal(synch);

          and in process P2 we insert the code:

          wait(synch);
          S2;

          Because synch was initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal(). Each semaphore has an integer value and maintains a queue of waiting processes.
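The ordering example above is runnable as-is with Python semaphores: synch starts at 0, so P2's wait() blocks until P1 has executed S1 and signalled.

```python
import threading

synch = threading.Semaphore(0)  # initialized to 0, as in the notes
trace = []

def p1():
    trace.append("S1")          # statement S1
    synch.release()             # signal(synch)

def p2():
    synch.acquire()             # wait(synch): blocks until P1 signals
    trace.append("S2")          # statement S2

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start()                      # start P2 first to show it really waits
t1.start()
t1.join()
t2.join()
print(trace)  # ['S1', 'S2'] in every run
```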

          Semaphore implementation with no busy waiting: To overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue to the ready state).

          Problems due to semaphores: 1. Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can be freed only by one of the other (blocked) processes, as shown below.

          2. Starvation: one or more processes is blocked forever and never gets a chance to enter the critical section. This can happen, for example, if we do not specify the algorithm for adding processes to the waiting queue or for selecting the one to be removed by the signal() call: if a LIFO queue is chosen, the first process that starts waiting may never get a chance.

          3. Priority inversion: Let processes L, M, H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it pre-empts L, and M runs. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority (priority inheritance), so that M cannot pre-empt L.
          =====================================================================
          Q) Classical problems of synchronization

          1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
          2. The Readers-Writers Problem
          3. The Dining-Philosophers Problem

          These problems are used to test newly proposed synchronization schemes.

          1. The Bounded-Buffer Problem: Consider

          n buffers, each holding one item; a producer process, which creates the items (one at a time); a consumer process, which processes them (one at a time).

          The producer process cannot produce an item if all the buffers are full, and the consumer process cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

          1. empty = n (counts the number of empty buffers)
          2. full = 0 (counts the number of filled buffers)
          3. mutex = 1 (provides mutual exclusion for access to the buffer)

          Producer process:

          do {
              ...
              // produce an item
              ...
              wait(empty);
              wait(mutex);
              ...
              // add the item to the buffer
              ...
              signal(mutex);
              signal(full);
          } while (1);

          Consumer process:

          do {
              wait(full);
              wait(mutex);
              ...
              // remove an item from the buffer
              ...
              signal(mutex);
              signal(empty);
              ...
              // consume the item
              ...
          } while (1);
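The producer-consumer pseudocode above maps directly onto Python's semaphores; the buffer size and item count here are illustrative.

```python
import threading

N = 5                               # number of buffer slots
buffer = []
empty = threading.Semaphore(N)      # counts empty slots, initialized to n
full = threading.Semaphore(0)       # counts filled slots, initialized to 0
mutex = threading.Lock()            # mutual exclusion for buffer access
consumed = []

def producer():
    for item in range(20):
        empty.acquire()             # wait(empty): block if no slot is free
        with mutex:                 # wait(mutex) ... signal(mutex)
            buffer.append(item)     # add the item to the buffer
        full.release()              # signal(full)

def consumer():
    for _ in range(20):
        full.acquire()              # wait(full): block if the buffer is empty
        with mutex:
            item = buffer.pop(0)    # remove an item from the buffer
        empty.release()             # signal(empty)
        consumed.append(item)       # consume the item

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(20)))  # True: items arrive in FIFO order
```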

          2. The Readers-Writers Problem: A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

          Readers priority: no new reader waits merely because a writer is waiting. Writers priority: if a writer is waiting to access the data, no new readers can start reading.

          A solution to either version may cause starvation: in the readers-priority version, writers may starve; in the writers-priority version, readers may starve.

          A semaphore solution to the readers-priority version uses the following shared data:
          1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
          2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
          3. wrt = 1 (a semaphore common to both writers and readers)

          wrt provides mutually exclusive access to the shared data.

          Reader process:

          wait(mutex);
          readcount++;
          if (readcount == 1)
              wait(wrt);
          signal(mutex);
          ...
          // reading is performed
          ...
          wait(mutex);
          readcount--;
          if (readcount == 0)
              signal(wrt);
          signal(mutex);

          Writer process:

          do {
              wait(wrt);
              ...
              // writing is performed
              ...
              signal(wrt);
          } while (TRUE);
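The readers-priority solution above can be sketched in Python; a Lock stands in for the wrt semaphore, and the shared value, thread counts and the seen list are illustrative.

```python
import threading

readcount = 0                       # number of active readers
mutex = threading.Lock()            # protects readcount
wrt = threading.Lock()              # exclusive access for writers
shared = {"value": 0}
seen = []

def reader():
    global readcount
    with mutex:
        readcount += 1
        if readcount == 1:
            wrt.acquire()           # first reader locks writers out
    seen.append(shared["value"])    # reading is performed
    with mutex:
        readcount -= 1
        if readcount == 0:
            wrt.release()           # last reader lets writers back in

def writer(v):
    with wrt:                       # wait(wrt) ... signal(wrt)
        shared["value"] = v         # writing is performed

w = threading.Thread(target=writer, args=(42,))
w.start()
w.join()
readers = [threading.Thread(target=reader) for _ in range(3)]
for t in readers:
    t.start()
for t in readers:
    t.join()
print(seen)  # [42, 42, 42]: all three readers could read concurrently
```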

          3. The Dining-Philosophers Problem: n philosophers sit around a table, alternately thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopstick on his left and the one on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

          semaphore chopstick[5];   // initially all values are 1

          Philosopher i:

          do {
              wait(chopstick[i]);
              wait(chopstick[(i+1) % 5]);
              ...
              // eat
              ...
              signal(chopstick[i]);
              signal(chopstick[(i+1) % 5]);
              ...
              // think
              ...
          } while (1);

          This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (for example, if every philosopher picks up his left chopstick at the same moment).

          Q) Types of operating systems

          MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

          In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


          ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), not all CPUs are equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. Unlike an SMP system, these systems don't have the flexibility to assign processes to the least-loaded CPU.

          A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within a time constraint.

          Two kinds of real-time systems have evolved

          1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
          2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples: digital audio and multimedia systems, digital telephones.

          DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e., decision-making is spread across the different computers in the system, so that failures of individual computers or of the network do not cripple operation.

          A distributed operating system appears to its users as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

          Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

          DESKTOP SYSTEMS: A desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8 and the Macintosh operating system.

          HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. Handheld devices have very little memory.


          CLUSTERED SYSTEMS: A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

          Q) Modern operating system

          New developments in hardware, applications and security threats have led to the development of modern operating systems.

          New hardware developments include increased machine speed, high-speed networks, and increased size and variety of memory devices.

          New application developments include multimedia applications, Internet and Web access, and client/server computing.

          Developments in modern operating system can be categorized as

          1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and is well suited to distributed environments.

          2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

          3. Symmetric multiprocessing: as systems nowadays often contain multiple microprocessors, an SMP operating system provides greater efficiency when multiple processors are present. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

          SMP schedules processes or threads across all of the processors

          Advantages

          1. Increased performance: if some portion of a program can be run in parallel, the performance of an SMP OS increases.
          2. Increased availability: since all processors can perform the same task, if one processor fails the others continue working.
          3. Incremental growth: one can add a new processor to increase performance.
          4. Scaling: cost can scale with the number of processors used.

          4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

          5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

          Q) Linux Operating System: Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system; its main design goals are speed, efficiency and standardization.
          Components of Linux System:


          Linux Operating System has primarily three components as shown in the below diagram

          Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system and interacts directly with the hardware, hiding low-level hardware details from system and application programs.

          System Library - System libraries are special functions or programs through which application programs use the kernel's features.

          System Utility - System utility programs perform specialized, individual-level tasks such as updating log files or accepting login requests from terminals.

          Kernel Modules: sections of kernel code that can be compiled, loaded and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system or a networking protocol.

          There are three components to Linux module support:
          1. Module management: supports loading modules into memory and letting them talk to the rest of the kernel.
          2. Driver registration: allows modules to inform the rest of the kernel that a new driver is available.
          3. Conflict resolution: protects the resources reserved by one driver from accidental use by another driver.

          Kernel Mode vs User Mode: Kernel component code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, requires no context switches, and is therefore very efficient and fast. The kernel runs processes, provides system services to them, and provides protected access to hardware. Code that is not required to run in kernel mode lives in the system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code; they use the system libraries to invoke kernel functions for low-level tasks.
          Basic Features: Following are some of the important features of the Linux operating system.

          Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on almost any kind of hardware platform.

          Open Source - Linux source code is freely available, and Linux is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

          Multi-User - Linux is a multi-user system: multiple users can access system resources such as memory and application programs at the same time.

          Multiprogramming - Linux is a multiprogramming system: multiple applications can run at the same time.

          Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

          Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used to perform various kinds of operations, call application programs, and so on.


          Security - Linux provides user security using authentication features such as password protection, controlled access to specific files and encryption of data.

          Architecture: The Linux system architecture consists of the following layers:

          Hardware layer - Hardware consists of all peripheral devices (RAM HDD CPU etc)

          Kernel - Core component of Operating System interacts directly with hardware provides low level services to upper layer components

          Shell - An interface to the kernel. It takes commands from the user and executes the kernel's functions.

          Utilities - Utility programs that give the user most of the functionality of an operating system.

          Q) Windows XP Operating System
          1. Extensibility - layered architecture:

          The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.
          2. Portability - XP can be moved from one hardware architecture to another with relatively few changes:

          It is written in C and C++, with processor-dependent code isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
          3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.
          4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
          5. Performance - XP subsystems can communicate with one another via high-performance message passing:

          Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

          6. International support - supports different locales via the national language support (NLS) API.
          XP Architecture:
          1. Layered system of modules.
          2. Protected mode - hardware abstraction layer (HAL), kernel, executive.
          3. User mode - collection of subsystems:

          a Environmental subsystems emulate different operating systems b Protection subsystems provide security functions

          XP architecture


          Q) Windows network Operating systemExample Windows 2000 operating system

          All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.




            10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines or records. A file has a structure, which depends on its type.

            Q) What happens to a child when its parent terminates in Unix?

            The init process becomes the parent of all the orphaned children.

            Q) What is a context switch? The PCB of the currently executing process is saved, and the operating system loads the PCB of the next process that has to be run on the CPU. This is a heavyweight task and it takes significant time.

            Q) Basic Concepts of threads

            A thread consists of a thread ID, a program counter, a stack and a set of registers. Threads are also called lightweight processes. A process with multiple threads makes a great server. The threads share many resources with the other threads belonging to the same process, so a context switch among threads of the same process is cheap: it involves switching only the register set, the program counter and the stack.
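The sharing described above can be illustrated with a minimal Python sketch: all threads of one process see the same list (shared address space), while each runs on its own stack; the names below are illustrative.

```python
import threading

results = []                    # shared data, visible to every thread
guard = threading.Lock()

def task(tid):
    with guard:                 # protect the shared list
        results.append(tid)     # all threads write into the same list

threads = [threading.Thread(target=task, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 2, 3]
```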

            Q) Explain the two modes of CPU execution. Protection of memory and I/O can be provided via two modes of CPU execution: user mode and kernel mode.

            In kernel (privileged, supervisor) mode, the OS has access to privileged instructions. Privileged instructions can access I/O devices, control interrupts, and manipulate memory structures (page tables, TLB, etc.).

            Privileged instructions are instructions that can only be executed in kernel mode.

            All user-level processes run in user mode. Certain critical operations are not allowed to be performed directly by user processes; instead, a user process must use system calls to perform them. When a system call occurs, the OS enters kernel mode and uses privileged instructions to perform the requested service for the user-level process.

            For example, for input or output, a process makes a system call telling the operating system to read or write a particular area, and the request is satisfied by the operating system.

            Q) Explain inter-process communication. Ans: Cooperating processes require an inter-process communication (IPC) mechanism to exchange data and information. There are two communication models: (a) shared memory and (b) message passing, as shown below.

            (a) Shared-Memory Systems

            1) Communicating processes must establish a region of shared memory.
            2) The shared-memory region resides in the address space of the creating process.
            3) Other processes that wish to communicate using the shared-memory segment must attach it to their own address space.
            4) Processes exchange information by reading and writing data in the shared areas.
            5) Shared-memory systems are convenient for communication.
            6) Shared-memory systems are faster and provide maximum speed, because:
               i) system calls are required only to establish the shared-memory region;
               ii) once shared memory is established, all accesses are treated as routine memory accesses and no kernel assistance is required.

            (b) Message-Passing Systems

            1. Are useful for exchanging smaller amounts of data.
            2. Are easy to implement for inter-computer communication.
            3. Are more time-consuming than shared memory, as they are implemented using system calls and need kernel intervention.

            To send messages, a communication link must exist between the processes. The communication link can be implemented physically or logically.

            Different methods of logically implementing a link are:

            1 Direct or indirect communication2 Synchronous or asynchronous communication3 Automatic or explicit buffering

            1a Direct communication

            A link is established automatically between every pair of processes that want to communicate

            A link is associated with exactly two processes

            Addressing

            i) Symmetric addressing: the sender and receiver processes must name each other to communicate. The send() and receive() primitives are:
            send(P, message) - sends a message to process P
            receive(Q, message) - receives a message from process Q

            ii) Asymmetric addressing: only the sender names the receiver process. The send() and receive() primitives are:
            send(P, message) - sends a message to process P
            receive(id, message) - receives a message from any process

            A disadvantage of both forms of addressing is limited modularity: changing the identifier of a process requires finding all references to the old identifier and modifying them.

            1b Indirect communication

            1. Messages are sent to and received from mailboxes (also called ports).
            2. Each mailbox has a unique id (an integer value).
            3. Two processes can communicate only if they have a shared mailbox.
            4. A link is established between two processes if they have a shared mailbox.

            5. A link may be associated with more than two processes.
            6. A mailbox may be owned by a process or by the OS.

            a. If a mailbox is owned by a process, we can distinguish between the owner (who can only receive messages) and the user (who can only send messages). When the process that owns a mailbox terminates, the mailbox disappears.

            b. If a mailbox is owned by the OS, the OS must provide mechanisms to:
            i. create a new mailbox;
            ii. send and receive messages through mailboxes;
            iii. delete a mailbox;
            iv. pass ownership to other processes.

            2. Synchronous or asynchronous communication: message passing may be either blocking (synchronous) or non-blocking (asynchronous).

            1. Blocking send: the sending process is blocked until the message is received by the receiving process or mailbox.
            2. Non-blocking send: the sending process sends the message and resumes operation.
            3. Blocking receive: the receiver blocks until a message is available.
            4. Non-blocking receive: the receiver retrieves either a valid message or a null.
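The blocking and non-blocking primitives above can be sketched with Python's queue module acting as a bounded mailbox; the mailbox size and message are illustrative.

```python
import queue

mailbox = queue.Queue(maxsize=1)

mailbox.put("hello")        # blocking send: would block if the mailbox were full
msg = mailbox.get()         # blocking receive: waits until a message is available

try:
    mailbox.get_nowait()    # non-blocking receive on an empty mailbox
    outcome = "message"
except queue.Empty:
    outcome = "null"        # returns a "null" immediately instead of waiting
print(msg, outcome)  # hello null
```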

            3. Automatic or explicit buffering: messages exchanged by communicating processes reside in a temporary queue. Such queues can be implemented in three ways:

            i. Zero capacity: the queue has length 0, so the link cannot hold any waiting messages; the sender must block until the receiver receives the message.

            ii. Bounded capacity: the queue has finite length (say n); when the queue is full, the sender must block until space is available in the queue.

            iii. Unbounded capacity: the queue has (potentially) infinite length; the sender never blocks.
            ===============================================================================
            Q) What is deadlock?

            A set of processes is deadlocked when every process in the set is waiting for a resource that is currently allocated to another process in the set. Suppose process P1 has been allocated resource R2 and process P2 has been allocated R1, while P1 requires R1 and P2 requires R2. Processes P1 and P2 will wait forever; this situation is called a deadlock.

            Q) What are the four conditions that are necessary for deadlock to occur

            1 Mutual Exclusion - At least one resource must be held in a non-sharable mode If any other process requests this resource then that process must wait for the resource to be released

            2 Hold and Wait - A process must be simultaneously holding at least one resource and waiting for at least one resource that is currently being held by some other process

            3 No preemption - Once a process is holding a resource then that resource cannot be taken away from that process until the process releases it

            4. Circular Wait - A set of processes {P0, P1, P2, ..., PN} must exist such that every P[i] is waiting for P[(i + 1) % (N + 1)].

            Q)Methods for handling deadlocks

            1. Use deadlock-prevention or deadlock-avoidance protocols, so that the system never enters a deadlocked state.
            2. Allow the system to enter a deadlocked state, detect it, and recover.
            3. Ignore the problem and pretend that deadlocks never occur.

            To make sure that the system never enters a deadlocked state, the system can use:

            1 Deadlock prevention 2 Deadlock avoidance

            ==============================================================================

            Deadlock Prevention

            1. Mutual Exclusion: We cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources are inherently non-sharable (e.g., a printer).

            2 Hold and Wait

            To make sure that the hold-and-wait condition never occurs in the system two protocols that can be used are

            Protocol 1: All the resources a process will need must be requested and allocated before the process begins execution.

            Protocol 2: A process can request resources only when it has none. If a process requires additional resources, it must first release all the resources that are currently allocated to it.

            Example: Consider a process that copies data from a DVD drive to a file on disk, sorts the file, and then prints the results to a printer.

            If Protocol 1 is used, the process must request the DVD drive, disk file and printer at the beginning and hold them until the end.

            Disadvantages

            1. Starvation: a process may wait forever, because at least one of the resources it needs is always allocated to some other process.

            2. Low resource utilization: the process holds the printer from beginning to end even though the printer is used only at the end.

            If Protocol 2 is used, the process initially requests the DVD drive and the disk file. It copies from the DVD drive to the disk, then releases both the DVD drive and the disk file. It then requests the disk file and the printer.

            Disadvantage: there is a chance that our data may not remain on the disk file in the meantime.

            3 No pre-emption

            To make sure that this condition does not hold the following protocol is used

            Protocol If a process (say A) requests some resources

            Case 1 If resources are available then Allocate them

            Case 2 if resources are allocated to some other process(say B) that is waiting for additional resources

            then Preempt the desired resources from the waiting process (B) and allocate them to requesting process(A)

            Process B can be restarted only when it is allocated both the additional resources it is requesting and the resources that were preempted from it and given to process A.

            Case 3 if resources are neither available nor held by a waiting process then Process A waits

            This protocol is applied to resources such as CPU registers and memory space, whose state can easily be saved and restored.

            4 Circular Wait

            To make sure Circular Wait condition never occurs

            1 Each Resource is assigned a unique integer number

            2 Each Process must request resources in an increasing order of enumeration

We define a one-to-one function F : R → N, where R is the set of resource types and N is the set of natural numbers

Suppose a process has requested a resource type Ri at the beginning

Protocol 1: after that, the process can request a resource type Rj if and only if F(Rj) > F(Ri)

Protocol 2: if a process requests a resource type Rj, it must first release all resources Ri whose F(Ri) >= F(Rj)

            Example Let F (tape drive) =1 F(disk drive) = 5 and F(printer)=12

            A process can request any number of tape drives disk drives and printers

Protocol 1: if a process A has already been allocated the disk drive, A can now request only the printer; it cannot request the tape drive

Protocol 2: in order to request the tape drive, process A must first release the disk drive, and then it can request the tape drive
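The numbering rule above can be sketched in a few lines (a hypothetical helper, not from the notes; the F values are the ones from the example):

```python
# Resource-ordering rule (Protocol 1): a process holding some resources may
# request a new one only if its F number exceeds that of every held resource.
F = {"tape drive": 1, "disk drive": 5, "printer": 12}  # example numbering from the text

def may_request(held, resource):
    """Return True if requesting `resource` respects the increasing-order rule."""
    return all(F[resource] > F[r] for r in held)

held = {"disk drive"}
print(may_request(held, "printer"))     # True  (12 > 5)
print(may_request(held, "tape drive"))  # False (1 < 5: must release the disk drive first)
```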

            If the above two protocols are used then the circular wait condition never occurs We can prove this by contradiction

Proof: assume a circular wait exists. Let the set of processes involved in the circular wait be P0, P1, ..., Pn, where P0 is waiting for resource R0, which is held by P1

P1 is allocated R0 and P1 is waiting for resource R1, which is held by P2, so F(R0) < F(R1)

P2 is allocated R1 and P2 is waiting for resource R2, which is held by P3, so F(R1) < F(R2)

...

Pn is allocated Rn−1 and Pn is waiting for resource Rn, which is held by P0, so F(Rn−1) < F(Rn). Finally, P0 holds Rn and is waiting for R0, so F(Rn) < F(R0)

Hence by transitivity F(R0) < F(R0), which is impossible. Hence our assumption that a circular wait exists is FALSE

===============================================================================
Q) Resource Allocation Graph
Deadlocks can be understood more clearly through the use of resource-allocation graphs, which have the following properties

1 Resource types are represented as square nodes on the graph. Dots inside a square node indicate the number of instances of that resource (e.g. two dots might represent two laser printers)

2 Processes are represented as circles

3 Request edges: if P1 has requested R1, a directed edge from P1 to R1 is a request edge

4 Assignment edges: a directed edge from R2 to P1 indicates that resource R2 has been allocated to process P1 and that P1 is currently holding resource R2. Note that a request edge is converted into an assignment edge when the request is granted
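With single-instance resources, a deadlock corresponds exactly to a cycle in this graph, which can be found by depth-first search. A minimal sketch (the example graph is made up, not from the notes):

```python
# Deadlock detection in a single-instance resource-allocation graph:
# a deadlock exists iff the directed graph of request edges (P -> R)
# and assignment edges (R -> P) contains a cycle.

def has_cycle(graph):
    """Depth-first search for a cycle in a directed graph given as {node: [successors]}."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}

    def dfs(n):
        color[n] = GRAY
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GRAY:   # back edge -> cycle found
                return True
            if color.get(m, WHITE) == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(graph))

# P1 holds R1 and requests R2; P2 holds R2 and requests R1 -> cycle -> deadlock
g = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(g))  # True
```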

If a resource-allocation graph contains a cycle AND each resource has only a single instance, then a deadlock exists. If a resource category contains more than one instance, then a cycle in the resource-allocation graph indicates the possibility of a deadlock but does not guarantee one
==================================================================
Q) Deadlock Avoidance
For each resource request, the system can decide whether the request should be granted or not. To make this decision, the system must have information like

1 resources currently available
2 resources currently allocated to each process
3 future requests and releases of each process
4 maximum number of resources each process may need

Given this information, it is possible to construct an algorithm that makes sure the system will never enter a deadlocked state
There are two deadlock-avoidance algorithms. They are

1 Resource-Allocation Graph Algorithm
2 Banker's Algorithm

Safe State: a system is in a safe state if there exists a safe sequence of processes P0, P1, P2, ..., PN such that, for each Pi, the resources Pi can still request can be satisfied by the currently available resources plus the resources held by all Pj with j < i. All safe states are deadlock free

            Unsafe state If a safe sequence does not exist then the system is in an unsafe state which MAY lead to deadlock

1 Resource-Allocation Graph Algorithm: the resource-allocation graph approach can be used for avoidance only if there is a single instance of each resource type. In this case, unsafe states can be recognized and avoided by adding claim edges, denoted by dashed lines, which point from a process to a resource that it may request in the future. All the claim edges for a process are added before the process starts executing. When a process makes a request, the claim edge Pi → Rj is converted to a request edge; when a resource is released, the assignment edge changes back to a claim edge. This approach works by denying any request that would produce a cycle in the resource-allocation graph, taking claim edges into account

Consider for example the resource-allocation graph as shown

If P2 requests resource R2, then the claim edge P2 → R2 is converted to a request edge as follows

The resulting resource-allocation graph would have a cycle in it, and so the request cannot be granted
===============================================================================
Q) Banker's Algorithm or Deadlock avoidance algorithm with example
There are 12 tape drives. Let the current state of the system be as shown in the figure below
Process    Allocated    Max Need    Need = Max Need − Allocated
P0         5            10          5
P1         2            4           2
P2         2            9           7

Available = 12 − (5 + 2 + 2) = 3
Resource-Request Algorithm
Now when a request for 1 tape drive is made by process P2, we run the resource-request algorithm to check whether the request should be granted. The request is granted only if, after granting it, all the processes in the system can still complete. For that we check
1 Is the request of P2 <= need of P2?

1 <= 7, therefore TRUE
2 Is the request of P2 <= Available?

1 <= 3, therefore TRUE
3 Pretend the request is granted for P2

Now the current state is as shown below
Process    Allocated    Max Need    Need = Max Need − Allocated
P0         5            10          5
P1         2            4           2
P2         2+1 = 3      9           7−1 = 6

Available = 3 − 1 = 2
Now run the safety algorithm to check whether the system is in a safe state

Safety Algorithm
1 Let WORK = Available = 2
2 Find an unfinished process whose need <= WORK

Check P0: Need of P0 = 5, Work = 2. Is 5 <= 2? FALSE

Check P1: Need of P1 = 2, Work = 2. Is 2 <= 2? TRUE, therefore P1 can finish
If P1 finishes, Work = Work + Allocated to P1 = 2 + 2 = 4

Now again check whether P0 can complete: Need of P0 = 5, Work = 4. Is 5 <= 4? FALSE

Check whether P2 can complete: Need of P2 = 6, Work = 4. Is 6 <= 4? FALSE

So neither P0 nor P2 can complete, and the system would be in an unsafe state. Hence the request for 1 tape drive by P2 is not granted
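The hand-run safety check above can be sketched in code (a simplified version for a single resource type; the numbers are the tape-drive values from the example):

```python
def is_safe(available, allocated, need):
    """Safety algorithm for one resource type: can every process finish?"""
    work = available
    finish = [False] * len(allocated)
    progressed = True
    while progressed:
        progressed = False
        for i in range(len(allocated)):
            if not finish[i] and need[i] <= work:
                work += allocated[i]   # process i finishes, releasing its drives
                finish[i] = True
                progressed = True
    return all(finish)

# State after pretending P2's request for 1 drive is granted:
print(is_safe(2, [5, 2, 3], [5, 2, 6]))   # False -> unsafe, deny the request
# Original state before the request:
print(is_safe(3, [5, 2, 2], [5, 2, 7]))   # True  -> safe
```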

2 Banker's Algorithm: for resource types that have more than one instance, the resource-allocation graph method does not work, so we use the Banker's algorithm

When a process starts up, it must state in advance the maximum allocation of resources it may request

When a request is made, the scheduler determines whether granting the request would leave the system in a safe state. If not, the process must wait until the request can be granted safely

The Banker's algorithm relies on several key data structures (where n is the number of processes and m is the number of resource types):

o Available[ m ] indicates how many instances of each resource are currently available
o Max[ n ][ m ] indicates the maximum demand of each process for each resource
o Allocation[ n ][ m ] indicates the number of each resource allocated to each process
o Need[ n ][ m ] indicates the remaining resources needed of each type for each process (note that Need[ i ][ j ] = Max[ i ][ j ] − Allocation[ i ][ j ] for all i, j)

For simplification of discussion, one row of the Need matrix, Need[ i ], can be treated as a vector corresponding to the needs of process i, and similarly for Allocation and Max

            Safety Algorithm In order to apply the Bankers algorithm we first need an algorithm for determining whether or not a

            particular state is safe This algorithm determines if the current state of a system is safe according to the following steps

            1 Let Work and Finish be vectors of length m and n respectively Work is a working copy of the available resources Finish is a vector of booleans indicating whether a particular process can finish Initialize Work = Available and Finish to false for all elements

2 Find an i such that both (A) Finish[ i ] == false and (B) Need[ i ] <= Work. This process has not finished but could finish with the given available working set. If no such i exists, go to step 4

            3 Set Work = Work + Allocation[ i ] and set Finish[ i ] to true This corresponds to process i finishing up and releasing its resources back into the work pool Then loop back to step 2

4 If Finish[ i ] == true for all i, then the state is a safe state, because a safe sequence has been found

            Resource-Request Algorithm ( The Bankers Algorithm ) Now we have a tool for determining if a particular state is safe or not This algorithm determines if a new request is safe and grants it only if it is safe to do so When a request is made ( that does not exceed currently available resources )

            pretend it has been granted and then see if the resulting state is a safe one If so grant the request and if not deny the request as follows

1 Let Request[ n ][ m ] indicate the number of resources of each type currently requested by processes. If Request[ i ] > Need[ i ] for any process i, raise an error condition (the process has exceeded its maximum claim)

2 If Request[ i ] > Available, then process i must wait for resources to become available

            else the process can continue to step 3

            3 Check to see if the request can be granted safely by pretending it has been granted and then seeing if the resulting state is safe If resulting state is safe

            grant the requestelse

then the process must wait until its request can be granted safely
The procedure for granting a request (or pretending to, for testing purposes) is

            Available = Available - Request

            Allocation = Allocation + Request Need = Need - Request
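The resource-request procedure above can be sketched for multiple resource types as follows (function names and the sample values are illustrative, not from the notes):

```python
def is_safe(available, allocation, need):
    """Safety algorithm over vectors of resource types."""
    work = list(available)
    finish = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i, row in enumerate(need):
            if not finish[i] and all(row[j] <= work[j] for j in range(len(work))):
                work = [w + a for w, a in zip(work, allocation[i])]  # i finishes
                finish[i] = True
                progressed = True
    return all(finish)

def request(i, req, available, allocation, need):
    """Resource-request algorithm: grant only if the pretended state is safe."""
    if any(req[j] > need[i][j] for j in range(len(req))):
        raise ValueError("process exceeded its maximum claim")
    if any(req[j] > available[j] for j in range(len(req))):
        return False                      # must wait for resources
    # Pretend to grant the request, then test safety
    avail2 = [a - r for a, r in zip(available, req)]
    alloc2 = [row[:] for row in allocation]
    need2 = [row[:] for row in need]
    alloc2[i] = [a + r for a, r in zip(alloc2[i], req)]
    need2[i] = [n - r for n, r in zip(need2[i], req)]
    return is_safe(avail2, alloc2, need2)

avail = [3]; alloc = [[5], [2], [2]]; need = [[5], [2], [7]]  # tape-drive example
print(request(2, [1], avail, alloc, need))  # False -> P2 must wait
print(request(0, [1], avail, alloc, need))  # True  -> safe to grant P0 one drive
```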

            Unit III Memory management

Just as processes share the CPU, they also share physical memory. The memory management part of the OS takes care of memory allocation, deallocation and related issues. A program must be brought into memory for it to run. Addresses are of two types

i) relocatable or relative addresses, given with respect to the beginning of the program
ii) absolute addresses

Q) Address Binding
Ans Binding means mapping the logical address space to the physical address space. Address binding can happen at three different stages

Compile time: if you know at compile time where in memory the program is going to be loaded, then the compiler generates absolute addresses

            Otherwise compiler generates relocatable addresses

Load time: the loader binds the relocatable addresses generated by the compiler to absolute addresses; hence binding is done at load time. If binding is done at compile or load time, physical and logical addresses are the same

Execution time: if address binding is done at run time, the process can be moved during its execution from one memory segment to another. Here we call logical addresses virtual addresses. Run-time mapping of virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU)

Q) Logical vs Physical Address Space
Ans Logical address – generated by the CPU, also referred to as a virtual address
Physical address – the address seen by the memory unit
The set of all logical addresses is called the logical address space; the set of all physical addresses is called the physical address space

Q) Memory-Management Unit (MMU)
Ans The MMU is a hardware device that maps virtual to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by the CPU before the contents of memory are read. The user program knows only logical addresses; it never sees the real physical addresses

Q) Dynamic loading
Ans Since physical memory is small, it may not be possible for the entire program to be in main memory, so we can use dynamic loading. It is the responsibility of users to design their programs to take advantage of dynamic loading. Only the main function is loaded into main memory; when main() calls another function, it is first checked whether that function is in main memory. If not, the loader loads the desired function into main memory and updates the program's address table

A routine is not loaded until it is called. We achieve better memory-space utilization, as an unused routine is never loaded

Q) Dynamic Linking
+ Linking is postponed until execution time
+ A small piece of code, called a stub, is used to locate the appropriate memory-resident library routine
+ The stub replaces itself with the address of the routine and executes the routine
+ The operating system checks whether the routine is in the process's memory address space
+ Dynamic linking is particularly useful for libraries

Q) Overlays
Ans Overlays are needed when a process is larger than the amount of main memory allocated to it. Only the instructions and data that are needed at any given time are kept in main memory. Overlays can be implemented by the user; the programming design of an overlay structure is complex. Overlays for a two-pass assembler are as shown in the figure

Q) Swapping
Ans A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution
Backing store – a fast disk large enough to store copies of all memory images of all users
Priority-based scheduling uses a variant of this swapping policy called roll out, roll in: if a higher-priority process arrives, the memory manager swaps out a lower-priority process and swaps in the higher-priority process. When the higher-priority process finishes, the lower-priority process can be swapped back into main memory. A process that is swapped out will be swapped back into the same memory space it occupied previously. The major part of swap time is transfer time. We can swap an idle process only; we cannot swap a process that is waiting for IO

Q) Contiguous memory allocation
Ans Each process is contained in a single contiguous section of memory
1 Fixed Size Partition (or) Single-partition allocation

Divide the main memory into fixed-sized partitions; each partition may contain exactly one process. The relocation register contains the value of the starting physical address; the limit register contains the range of logical addresses. Every address generated by the CPU is checked as follows

If logical address < limit register, then the logical address is added to the relocation register to get the corresponding memory address

else a trap to the OS is generated
Since every address is checked, we can protect the OS and other user programs from being modified by a running process
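The check above can be sketched as follows (the register values are made-up examples, not from the notes):

```python
LIMIT = 1000        # limit register: range of valid logical addresses
RELOCATION = 14000  # relocation register: partition's starting physical address

def translate(logical):
    """Map a logical address to a physical one, trapping on out-of-range access."""
    if logical < LIMIT:
        return RELOCATION + logical
    raise MemoryError("trap to OS: addressing error")

print(translate(346))  # 14346
```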

            2 Multiple-partition allocation

1) Fixed Size Partitions: divide main memory into fixed-sized blocks. Here the memory allocated to a process may be larger than required; the difference between the allocated memory and the requested memory is called internal fragmentation. Internal fragmentation is unused memory inside the partition

0    P1    Block0 (0th address to 3rd address)
4    P1    Block1 (4th address to 7th address)
8          Block2 (8th address to 11th address)
12         Block3 (12th address to 15th address)

If P1 requires just 5 addresses, it is still allocated 2 blocks, i.e. 8 addresses, so the remaining 3 addresses are left unused

2) Variable Size Partitions: here main memory is divided into partitions of variable sizes. The operating system maintains information about

            a) allocated partitions b) free partitions (hole)

            In the beginning all main memory is empty and is considered one large block of available memory a hole

Only the exact memory required by a process is allocated to it. When a process terminates, it releases its memory, which can then be allocated to another process

Memory is allocated to processes until finally no available block of memory (hole) is large enough to hold the next process
External fragmentation exists when there is enough total memory to satisfy a request but the available memory is not contiguous – wastage of memory outside the partitions
One solution to the problem of external fragmentation is

            a Compaction shuffle the memory contents so as to place all free memory together into one large block

            ===============================================================================Q) Dynamic Storage-Allocation Problem (or) Most commonly used strategies to select a free hole from the set of available holesAns To satisfy a request of size n from a list of free holes below 3 policies can be used

            1 First-fit Allocate the first hole that is big enough

            2 Best-fit Allocate the smallest hole that is big enough must search entire list if it is not ordered by size Produces the smallest leftover hole

3 Worst-fit: allocate the largest hole; must also search the entire list. Produces the largest leftover hole
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization
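The three strategies can be sketched over a list of free hole sizes (the hole list and helper names are illustrative):

```python
def first_fit(holes, n):
    """Index of the first hole big enough for a request of size n, else None."""
    return next((i for i, h in enumerate(holes) if h >= n), None)

def best_fit(holes, n):
    """Index of the smallest hole that is big enough (smallest leftover)."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Index of the largest hole (largest leftover)."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # 1 -> the 500 hole
print(best_fit(holes, 212))   # 3 -> the 300 hole
print(worst_fit(holes, 212))  # 4 -> the 600 hole
```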

            ===============================================================================Q) Non-Contiguous Memory allocation

1 Paging: paging is a memory management scheme that provides non-contiguous memory allocation

0    P1    Block0 (0th address to 3rd address)
4    P2    Block1 (4th address to 7th address)
8          Block2 (8th address to 11th address)
12   P1    Block3 (12th address to 15th address)

The logical address space of a process can be non-contiguous. Ex: P1 is allocated Block0 and Block3

1 Divide physical memory into fixed-sized blocks called frames (size is a power of 2)
2 Divide logical memory into blocks of the same size called pages
3 A logical address is divided into 2 parts
Page number (p) – the page table is indexed by the page number
Page offset (d) – displacement within the page

4 A page table is allocated to each process (a pointer to the page table is stored in the PCB of the process). The page table translates logical to physical addresses: page 0 is in frame 1, page 1 is in frame 4, etc
5 Internal fragmentation may occur due to paging

6 If the size of the logical address space = 2^m and the page size = 2^n, then the higher-order m−n bits of the logical address give the page number and the n lower-order bits give the displacement within the page

Ex: size of logical address space = 8 = 2^3, so m = 3
page size = 4 = 2^2, so n = 2
So m−n = 1 bit for the page number, to represent pages 0 and 1
remaining n = 2 bits for the displacement within the page
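The m = 3, n = 2 example above can be checked with a small sketch:

```python
m, n = 3, 2  # 3-bit logical addresses, pages of 2^2 = 4 addresses

def split(addr):
    """Split a logical address into (page number, offset): top m-n bits, low n bits."""
    page = addr >> n                 # high-order m-n bit(s)
    offset = addr & ((1 << n) - 1)   # low-order n bits
    return page, offset

print(split(0b110))  # (1, 2): page 1, offset 2
```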

            Q) Implementation of Page Table

            Page table can be kept as

1) As a set of dedicated registers – efficient if the page table is small

2) The page table can be kept in memory; a page-table base register (PTBR) points to the page table and a page-table length register (PTLR) indicates the size of the page table. Changing page tables requires changing only the values in these registers
Advantage of keeping the page table in memory: less context-switch time
Disadvantage: two memory accesses are required – one memory access to read the page table entry and another to access the required memory address

This problem can be solved by using a special fast-lookup hardware cache called associative memory or a translation look-aside buffer (TLB)

            i Each entry in TLB consists of 2 parts

            I) page number field

            II) Value field or frame number field

            ii When a logical address is generated by CPU its page number is presented to TLB The page number is compared with all the entries of TLB simultaneously

If the page number is found in the TLB, its frame number is immediately available → TLB hit
If the page number is not found, the page table in memory is checked → TLB miss

By using a TLB the search is fast, but TLB hardware is expensive, so the TLB size is kept small. Hence the TLB contains only a few frequently used page table entries
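A TLB lookup in front of an in-memory page table can be sketched as follows (the mappings are made-up values, not from the notes):

```python
page_table = {0: 1, 1: 4, 2: 3, 3: 7}   # page -> frame (in memory)
tlb = {0: 1, 2: 3}                      # small cache of recent entries

def lookup(page):
    """Return (frame, outcome); a miss falls back to the page table and caches."""
    if page in tlb:
        return tlb[page], "TLB hit"
    frame = page_table[page]            # TLB miss: extra memory access
    tlb[page] = frame                   # cache the entry for next time
    return frame, "TLB miss"

print(lookup(2))  # (3, 'TLB hit')
print(lookup(1))  # (4, 'TLB miss')
print(lookup(1))  # (4, 'TLB hit') -- now cached
```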

Q) Protection in Paging
Ans A valid/invalid bit is attached to each entry in the page table. The OS sets this bit for each page to allow or disallow access to the page. When the bit is set to invalid, the page is not in the process's logical address space, so an access generates a trap to the OS

            Q) Shared pages in Paging

            Reentrant code is the code that never changes during execution Reentrant code can be shared

1 One copy of read-only (reentrant) code is shared among processes (i.e. text editors, compilers, window systems). Ex: ed1, ed2, ed3 are shared between processes P1 and P2

            2 Shared code must appear in same location in the logical address space of all processes

            Each process keeps a separate copy of the private code and data EX data1 and data2

            The pages for the private code and data can appear anywhere in the logical address space

            ==============================================================================Q) Segmentation

Segmentation is a memory management scheme that supports the user's view of memory

            When the user program is compiled the compiler generates segments like

            1) The code segment

            2) Global variables segment

            3) Heap memory segment

            4) Stack segment etc

            Each entry in segment table has

segment base (the starting physical address of the segment) and segment limit (specifies the length of the segment)

            Logical address is divided into 2 parts

            segment-number offset

            Segment table is indexed by segment number

            Segment-table base register (STBR) stores the location of segment table in main memory

            Segment-table length register (STLR) stores number of segments used by a program

The segment number (say 's') is used to find the entry in the segment table: the required entry is 's' locations from the beginning of the segment table. Once the required entry is found, the offset ('d') is compared with the limit

If (offset < limit), then the offset is added to the base entry to generate the physical address
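The translation just described can be sketched in code (the segment-table values are illustrative, not from the notes):

```python
# (base, limit) per segment; values are made up for the example
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]

def translate(s, d):
    """Translate (segment number, offset) to a physical address, trapping on overflow."""
    base, limit = segment_table[s]
    if d < limit:
        return base + d
    raise MemoryError("trap: offset beyond segment limit")

print(translate(2, 53))   # 4353
print(translate(0, 999))  # 2399
```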

Q) Shared Segments
Code sharing occurs at the segment level. Shared segments must have the same segment number

Allocation: segmentation poses a dynamic storage-allocation problem – use best fit / first fit; may cause external fragmentation

Protection: protection bits associated with segments give read/write/execute privileges; keeping an array in a separate segment lets the hardware check for illegal array indexes

            ===============================================================================Q) Virtual memory ndash

            Ans In paging and segmentation a program will execute only if the entire process is in main memory But here in virtual memory only part of the program needs to be in memory for execution

1 A program that is larger than the available main memory can still run, as only part of the program is loaded into main memory. The logical address space can therefore be much larger than the physical address space

            2 Allows address spaces to be shared by several processes

            Virtual memory can be implemented via Demand paging Demand segmentation

The large blank space between the heap and the stack is part of the virtual address space. Virtual memory allows files and memory to be shared by 2 or more processes through page sharing, as shown in figure (b). Figure (a) depicts a virtual memory that is larger than physical memory

            ==============================================================================

Q) Demand paging
Demand paging is the technique of loading pages from disk into main memory only when a page is needed. Hence, using a smaller amount of physical memory, we get faster response

            Demand paging is similar to PAGING + SWAPPING

            Instead of swapping a whole process the pager brings only those pages needed into memory

To distinguish between pages that are in memory and pages that are on disk, a valid/invalid bit is used

The valid/invalid bit is attached to each entry in the page table. When this bit is set to valid, the page is legal and in memory; when it is set to invalid, either the page is illegal or the page is on disk

When the CPU generates a logical address whose page table entry is set to invalid, a page fault occurs, and the required page is on disk. A section of the hard disk called swap space is used to hold pages that are not present in memory

            Q) Procedure for Handling a Page Fault

1) The CPU generates a logical address; if the valid/invalid bit for the page containing this logical address is set to invalid, a page fault occurs

            2) Page fault causes a trap to operating system

3) Check whether the logical address is within the logical address space of the process, i.e. the PTBR and PTLR are checked
if (the required logical address is not within the logical address space of the process)
    terminate the process
else
    the page is legal but not in memory; the page is on disk

4) To bring the required page into memory, find a free main memory frame, map the logical address to the corresponding disk block, fetch the disk block and load it into the free frame
5) When the required page is brought into memory, update the page table to indicate the page is in memory
6) Restart the instruction interrupted by the illegal address trap. The process will continue as if the page had always been in memory

Q) What is pure demand paging?
Ans If no page belonging to the executing process is in main memory, the process will fault for every page it needs. Page faults will occur until every page that is needed is in memory. This scheme is known as pure demand paging: NEVER BRING A PAGE IN UNTIL IT IS REQUIRED

Q) Page Replacement
1 Find the location of the desired page on the disk
2 Find a free frame
   i) If there is a free frame, use it
   ii) If there is no free frame, use a page replacement algorithm to select a victim page in memory that is not really in use
   iii) If the modify bit associated with the victim frame is set to 1, the page is written to disk; otherwise there is no need to write it to disk. Change the valid/invalid bit of the victim page's page table entry to invalid, to indicate the victim page is no longer in memory
3 Read the desired page into the freed frame and update the frame and page tables
4 Restart the user process

=================================================================================================
Q) Page Replacement Algorithms
Goal: produce a low page-fault rate
The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string

            1 First in First out (FIFO) Page replacement algorithm Uses the time when a page was brought into memory The page that was brought first into memory is replaced ie oldest page is chosen for replacement We create a FIFO queue to hold all pages in memory

            When a page is brought into memory we insert it at the tail of the queue and we replace the page at the head of the queue
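The FIFO scheme can be sketched with a queue; running it on the reference string 1 2 3 4 1 2 5 1 2 3 4 5 reproduces Belady's anomaly, with faults rising from 9 (3 frames) to 10 (4 frames). A sketch, not from the notes:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement with `nframes` frames."""
    frames, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.popleft())  # evict the oldest page
            frames.add(p)
            queue.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -- more frames, yet more faults
```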

FIFO: 15 page faults
A problem with FIFO is Belady's anomaly. For the reference string
1 2 3 4 1 2 5 1 2 3 4 5
When the number of frames allocated to a process is 1, the number of page faults = 12 under FIFO replacement
When the number of frames allocated to a process is 2, the number of page faults = 12 under FIFO replacement
When the number of frames allocated to a process is 3, the number of page faults = 9 under FIFO replacement
When the number of frames allocated to a process is 4, the number of page faults = 10 under FIFO replacement
As the number of frames allocated to a process increases, page faults should decrease; but under FIFO replacement, going from 3 to 4 frames increases the page faults. This is Belady's anomaly
2 Optimal Page Replacement Algorithm
Replace the page that will not be used for the longest period of time. This is used for measuring how well other algorithms perform. It is not practical to implement the OPT algorithm, as it is not possible to know what the future references will be
ex

            Optimal page replacement 9 page faults

            3 Least Recently used (LRU) Page replacement algorithm Replace the page that has not been used for longest period of time LRU makes use of time of last use of the page Ex

            Number of page faults = 12

            LRU requires hardware assistance to determine the time of last use of the page We can use 1) Stack or 2) Counter to implement LRU page replacement algorithm

1) Stack implementation – keep a stack of page numbers in a doubly linked form; when a page is referenced, it is moved to the top of the stack. This implementation requires up to 6 pointers to be changed

2) Counter implementation – a time-of-use field is added to each page table entry. The CPU maintains a counter that is incremented on every page reference, and the counter value is copied into the time-of-use field of the referenced page
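The counter idea can be sketched directly: record each page's time of last use and evict the minimum. This is a software sketch of the hardware mechanism, reusing the Belady reference string from the FIFO discussion:

```python
def lru_faults(refs, nframes):
    """Count page faults under LRU replacement with `nframes` frames."""
    last_use, faults = {}, 0
    for t, p in enumerate(refs):          # t plays the role of the CPU's counter
        if p not in last_use:
            faults += 1
            if len(last_use) == nframes:
                victim = min(last_use, key=last_use.get)  # least recently used
                del last_use[victim]
        last_use[p] = t                   # copy counter into time-of-use field
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))  # 10
```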

Disadvantages of the LRU page replacement algorithm: updating the counter or stack must be done for every memory reference, HENCE LRU IS SLOW, and the implementation requires hardware assistance
==============================================================================
Q) LRU approximation page replacement
1) Using a reference bit
2) Using the additional-reference-bits algorithm
3) Second-chance algorithm
4) Enhanced second-chance algorithm
5) Counting-based page replacement

This has two schemes: 1) Least Frequently Used (LFU) 2) Most Frequently Used (MFU)

6) Page Buffering Algorithms
Q) What is Thrashing?
Ans A process is thrashing if it is spending more time paging than executing. Thrashing may occur when either global or local page replacement is used, as described below. The solution to thrashing is the working-set strategy
If we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know how many frames a process needs, we use the locality model of process execution

Here a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes locality
The working-set model uses a working set and a parameter ∆ = the working-set window
The set of pages in the most recent ∆ page references is the working set
Let WSSi be the working-set size for process Pi, i.e. Pi needs WSSi frames
Let D be the total demand for frames; then D = Σ WSSi

and let m be the total number of available frames

If D > m, thrashing occurs, because some processes will not have enough frames. So
if (D > m)
    the OS selects a process to suspend
else
    the OS may initiate another process
The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window
========================================================================
Q) How does thrashing occur in global and local page replacement?
Ans In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, possibly causing a page fault for some other process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more and more processes queue up for the paging device,

the ready queue empties and the CPU becomes idle. Seeing a free CPU, the OS introduces new processes, which further increases the number of page faults.

In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs only the frames allocated to that process can be replaced. When a process starts thrashing, other processes are not directly affected. But since processes queue for the paging device most of the time, the average service time for a page fault increases; hence effective access time increases even for a process that is not thrashing.

Q) Page Table Structure (or) Structure of the page table in memory

1. Hierarchical Paging
2. Hashed Page Tables
3. Inverted Page Tables

Hierarchical Paging: If the page table is large, we break it up into multiple page tables, i.e., the page table itself is paged. A simple technique is the two-level page table.
Two-level paging example: a logical address (on a 32-bit machine with a 4K page size) is divided into:
1. a page number consisting of 20 bits
2. a page offset consisting of 12 bits
Since the page table is paged, the page number is further divided into:
1. a 10-bit page number (p1)
2. a 10-bit page offset (p2)
Thus a logical address is as follows:
page number | page offset

p1 | p2 | d
10 | 10 | 12

where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table.
Two-Level Page-Table Scheme:

Address-Translation Scheme: the address-translation scheme for a two-level 32-bit paging architecture is shown in the figure.

Hashed Page Tables (or hash table): Hashed page tables are common in address spaces larger than 32 bits. The hash table is indexed by hash value, and each entry in the table contains a linked list of elements that hash to the same location. Each element consists of 3 fields:

1. Page number
2. Frame number
3. Pointer to the next element in the linked list

The algorithm works as follows: the page number is passed to the hash function to get a hash value. Locating that entry is easy because the hash table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address. If there is no match, the next entries in the linked list are searched.

            Inverted Page Table

An inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table has:

1. Process-id
2. Page number

The logical address also has 3 parts: process-id, page number, and offset. The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, frame number i combined with the offset gives the physical address. If there is no match, the reference is an illegal address.
Advantages and disadvantages:

It decreases the memory needed to store page tables but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to limit the search to one, or at most a few, page-table entries.
Inverted Page Table Architecture:

            Unit-IV CPU Scheduling

Basic concepts: To maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns it to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle: Process execution begins with a CPU burst, followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

            An IO bound program has many short CPU bursts A CPU-bound program has few long CPU bursts

Schedulers: a module in the OS that makes scheduling decisions.
1. Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
2. Medium-term scheduler: selects which processes should be swapped in/out of memory.
3. Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (CPU) scheduler.

Dispatcher: gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency; this must be kept small.
Ready Queue: The ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of the processes.

            Pre-emptive Scheduling

            CPU scheduling decisions take place under one of four conditions

1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state, for example in response to an interrupt
3. When a process switches from the waiting state to the ready state, say at completion of I/O
4. When a process terminates

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: either continue running the current process or select a different one.

If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive or cooperative. In non-preemptive scheduling, a process runs until it gives up the CPU itself, i.e., when it waits for I/O or when it terminates. Otherwise the system is said to be pre-emptive.

Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

            Note that pre-emptive scheduling can cause problems when two processes share data because one process may get interrupted in the middle of updating shared data structures

Preemption can also be a problem if the kernel is busy servicing a system call (e.g., updating critical kernel data structures) when the preemption occurs. Solution: wait until the system call has either completed or blocked before allowing the pre-emption. But this solution is problematic for real-time systems.

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling them on exiting. But this should be done only on very short pieces of code that take little CPU time.

Scheduling criteria used to compare CPU scheduling algorithms:
1. CPU utilization: in a real system, CPU utilization ranges from 40% to 90%.
2. Throughput: number of processes completed per unit time.
3. Turnaround time: time when the process completed minus time when the process was submitted; i.e., it includes waiting time and burst time.
4. Waiting time: amount of time a process spends waiting for the CPU.
5. Response time: time when the first response was produced minus time when the process was submitted.

Optimization criteria: maximize CPU utilization and throughput; minimize turnaround time, waiting time, and response time.

CPU Scheduling Algorithms
1) First-Come, First-Served (FCFS) Scheduling
FCFS is non-pre-emptive: processes are scheduled in the order they arrive. FCFS is implemented with a queue; when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages:
1. Average waiting time is quite long.

2. Convoy effect (small processes wait in the queue for a big process to leave the CPU).
Example:
Process  Burst Time
P1       24
P2       3
P3       3

            Suppose that the processes arrive in the order P1 P2 P3

The Gantt chart for the schedule is:
| P1 | P2 | P3 |
0    24   27   30
Waiting time: P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.

2) Shortest-Job-First (SJF) Scheduling
Schedule the process with the shortest burst time. If the CPU burst times of two processes are the same, FCFS is used.
Advantages: average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either pre-emptive or non-pre-emptive:
1. Non-pre-emptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Pre-emptive SJF, or Shortest-Remaining-Time-First (SRTF): if a newly arrived process is shorter than what remains of the currently executing process, pre-empt the currently executing process.

Example:
Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4

SJF (non-preemptive) Gantt chart:
| P1 | P3 | P2 | P4 |
0    7    8    12   16
Average waiting time = [0 + (8-2) + (7-4) + (12-5)]/4 = 4

            Example of Preemptive SJF

Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4

SJF (preemptive) Gantt chart:
| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16
Average waiting time = (9 + 1 + 0 + 2)/4 = 3

3) Priority Scheduling
A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the longer the CPU burst, the lower the priority. There are two schemes: 1. preemptive 2. non-preemptive.

Problem: starvation (low-priority processes may never execute).
Solution: aging (as time progresses, increase the priority of waiting processes).

4) Round Robin (RR)
Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance:
1. If q is large, RR behaves like FIFO.
2. If q is small, q must still be large with respect to the context-switch time; otherwise the overhead is too high.
Example of RR with time quantum = 4:
Process  Burst Time
P1       24
P2       3
P3       3
The Gantt chart is:
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |

0    4    7    10   14   18   22   26   30
Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 = 5.66

5) Multilevel Queue Scheduling
The ready queue is partitioned into separate queues. For example, the ready queue is divided into 2 queues:

            1 foreground (interactive) queue2 background (batch) queue

Each queue has its own scheduling algorithm. For example, Round Robin can be used in the foreground queue and FCFS in the background queue.
Scheduling must also be done between the queues. This can be done in 2 ways:
1. Fixed priority scheduling

The foreground queue has the highest priority. All processes in the foreground queue must be completed and the queue must be empty before processes in the background queue are given the CPU, as shown in the figure below. Disadvantage: starvation.

2. Time slice: each queue gets a certain amount of CPU time which it can schedule among its processes, e.g., 80% to the foreground queue (RR) and 20% to the background queue (FCFS).

6) Multilevel Feedback Queue Scheduling
A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes stay in the highest-priority queue. If a process uses too much CPU time, it is moved to a lower-priority queue. If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues Q0, Q1, Q2:

            1 Q0 ndash time quantum 8 milliseconds 2 Q1 ndash time quantum 16 milliseconds 3 Q2 ndash FCFS

            Scheduling

1. A process entering the ready queue is put in Q0. When it gains the CPU, the process receives 8 ms. If it does not finish within 8 ms, it is moved to Q1.
2. At Q1, the process receives an additional 16 ms. If it still does not complete, it is moved to Q2.

Q) Synchronization
Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of process synchronization is the Producer-Consumer (or Bounded-Buffer) problem: the producer process must not add data when the buffer is full, and the consumer process must not take data when the buffer is empty.

The code for the producer process can be modified as follows:
while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ;  /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed. in is an index that always points to the next empty slot after the last filled slot; out is an index that points to the first filled slot, at the head of the circular queue.

The code for the consumer process is as shown below:
while (true) {
    while (counter == 0)
        ;  /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The above producer and consumer code produces a race condition. A race condition occurs when the outcome of execution of several co-operating processes depends on the order in which accesses to shared data take place. For example, let counter = 5.

If the producer produces one item and the consumer consumes one item, the correct value of counter is still 5. But counter++ and counter-- each compile into several machine instructions (load, modify, store), so an unlucky interleaving can leave counter = 4, an inconsistent result; a different interleaving can leave counter = 6, also inconsistent.

Solution: to avoid the race condition, ensure that only one process at a time updates the shared variable (here, counter).

Q) Critical section problem
Ans: Each process has a segment of code called its critical section, which contains accesses to the variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical-section code at the same time. For this to happen:
1. Each process must request permission to enter its critical section: the entry section.
2. The code at the end of the critical section: the exit section.
3. The rest of the code after the critical section: the remainder section.

A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes not in their remainder sections take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: every process that requests entry must eventually be allowed into its critical section; no process may wait indefinitely.

Q) Peterson's solution to the critical-section problem (software-based solution)
Ans: The trace below shows two processes running Peterson's algorithm (flag[] and turn are shared; flag[i] = TRUE means process i wants to enter):

Time  Process 1                              Process 2
T0    do { flag[1] = TRUE; turn = 2;         do { flag[2] = TRUE;
T1                                           turn = 1;
T2    while (flag[2] && turn == 2);          while (flag[1] && turn == 1);
      /* turn is now 1, so the loop          /* loops until flag[1]
         does not run */                        becomes FALSE */
T3    enters critical section
T4    flag[1] = FALSE;                       enters critical section
T5    enters remainder section               flag[2] = FALSE;
      } while (TRUE);                        enters remainder section
                                             } while (TRUE);

10.1 File Concept
Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

            File Attributes

1. Name: name of the file, for user understanding.
2. Identifier: used by the OS to identify a file (e.g., inode number).
3. Type: text files, executable files, binary files, etc.
4. Location: location of the file on the hard drive.
5. Size: current size of the file.
6. Protection: controls who can read, write, or execute the file.
7. Time, date, and user identification: records when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. The directory structure is kept on disk.

            File Operations

The file abstract data type supports many common operations:
1. Creating a file: find free space on disk, add an entry to the directory.
2. Writing a file: write data at the current file-position pointer location and update the pointer.
3. Reading a file: read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek): change the pointer location to a given value.
5. Deleting a file: free the space allocated to the file and remove its directory entry.
6. Truncating a file: delete the data and update the file size.

            Open File Table

Most OSes require that files be opened before access and closed after access. Information about all files currently open in the system is stored in a system-wide open file table maintained by the OS, and information about the files opened by a process is stored in that process's own open file table. The open file table has the following information:

1. File pointer: records the current position in the file for the next read or write.
2. File-open count: how many times the file has been opened (simultaneously, by different processes) and not yet closed. When this counter reaches zero, the file's entry can be removed from the table.

3. Disk location of the file
4. Access rights

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With advisory locks, software developers must make sure locks are acquired and released appropriately.
4. With mandatory locks, the OS itself ensures that locks are acquired and released appropriately.
5. UNIX uses advisory locks and Windows uses mandatory locks.

            File Types

Implementing file types: a common technique is to include the type as part of the file name. The name is split into 2 parts:

1. File name
2. Extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on it. For example, when a user opens Microsoft Word and chooses Open from the File menu, the user specifies only the file name without the extension; the application looks for a file with the given name and the extension it expects. Extensions are thus hints used by applications and need not be enforced by the operating system. UNIX systems use a crude magic number stored at the beginning of some files to indicate the file type. In the Mac OS X operating system, a file produced by a word processor records that word processor's name as its creator; when the user double-clicks the file, the word processor opens it. Some common file types are shown below.

File Structure: Some files contain an internal structure which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and the location of the first instruction. If the OS supports many file formats, its size and complexity increase. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load and where to find the first executable statement, etc.).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

            Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units such as a single byte, a data-record size, or a structure size. The number of logical units that fit into one physical block determines the packing. For example, if a student file is organized internally as records of 64 bytes each and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e., 8 records can be stored per block.

            Internal fragmentation may occur due to this

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: we cannot randomly access an arbitrary record; records are read and written in order, one after another. A sequential-access file emulates magnetic-tape operation and generally supports the following operations:
- read next: read a record and advance to the next position
- write next: write a record and advance to the next position
- rewind
- skip n records: may or may not be supported; n may be limited to positive numbers or to +/- 1

2. Direct access (or relative access): the file is made up of fixed-length records, so we can randomly access any record; there is no restriction on the order of reading or writing. Direct-access files are useful when we need to access large amounts of data randomly, e.g., a database.
The operations supported include (n is the relative block number, i.e., relative to the beginning of the file):
1. read n: read record number n
2. write n: write record number n
3. jump to record n: n could be 0 or the end of the file
4. query current record: used to return to this record later
Sequential access can easily be emulated on top of direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct-access method. We construct an index for the file containing pointers to the various blocks; to find a record, we first search the index and then use the pointer to access the file directly.

Suppose a student file has 120,000 records sorted by student id. We can create an index consisting of the first student id in each block. If 64 records can be stored per block, the number of blocks needed to store 120,000 records is 120000/64 = 1875, so the index has 1875 entries. The index is kept in main memory so that searching becomes faster.

If the index itself becomes large, we create an index for the index file (e.g., a B-tree).

            Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

            Three major methods of allocating disk space are

1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If a file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file records the address of the starting block (i.e., b) and the length (n).

            Advantages

1. Direct access to block i is easy: the block is simply b+i. Since the file is stored sequentially, sequential access is also easy. Hence access is easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

            Disadvantages

1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to it. Solution: copy all the files onto another disk or tape and then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire space may not be used by the file at the beginning, which leads to wasted space.

If the size of the file is not known in advance, then:
(i) If we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This is possible as long as such space exists, but it is time consuming.
(ii) If we overestimate the amount of space needed, much of the space goes unused, leading to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous-allocation scheme. A contiguous chunk of space is allocated initially; if the file needs more blocks, another chunk of contiguous space, known as an extent, is added. The directory entry then records a location and block count, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

Linked Allocation
Each file is a linked list of disk blocks, and the blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0.

To add more data to an existing file, a free block is found, the data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. Easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is high.
2. The number of disk seeks required is high.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct-access files: to access the i-th block, i disk reads are required.
5. Not reliable if pointers are lost or damaged.

The solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simpler, improves disk throughput by reducing disk accesses, reduces the space wasted on pointers because fewer pointers are needed, and simplifies free-list management. Disadvantage: internal fragmentation; more space is wasted when a cluster is partially full.

The solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the file-allocation table (FAT). A section of disk at the beginning of each volume contains the FAT, which has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

            Advantages

1. Easy to find space for a new file or to add more data to an existing file.
2. Random access time is improved.

Disadvantage: the number of disk seeks increases; the disk head must first move to the FAT to find the location of a block and then move to the block itself.

            3 Indexed allocation

Each file has its own index block, which is an array of disk-block addresses; the i-th entry in the index block points to the i-th block of the file.

            The directory contains the address of the index block

            When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found and the data is written to it; the address of the block is then recorded in the index block.

            Advantages

1. Easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

            Disadvantage

1. Seek time is high.
2. The number of disk seeks required is high.
3. Not reliable.
4. The index block must be kept in memory. If this memory is not available, we must read the index block and then the desired data block, i.e., 2 disk accesses are required. With multilevel indexes, accessing a block near the end of the file may require reading several index blocks before the needed data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

Various mechanisms are used to handle the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header with the name of the file and a set of the first 100 disk-block addresses; the last address points to another index block, for large files.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

            3 combined scheme

For example, say there are 15 pointers in the file's inode. The first 12 of these point to blocks that contain file data. The next 3 point to indirect blocks: the first points to a single indirect block, an index block containing addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block, as shown below.

            Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

            The free space list can be implemented as

            1 Bit vector (or) Bit map2 Linked list3 Grouping4 Counting 5 Space maps

1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000...

Advantages: simple, and it is easy to find contiguous free blocks.

            Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a large disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1 TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.

2. Linked list: the free-space list is implemented by linking all the free disk blocks together, with a pointer to the first free block stored in a special location on the disk. For example, consider again a disk where blocks 2, 3, 4, 5, and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: contiguous space cannot be found easily; to traverse the free-space list, we must read each block, which requires substantial I/O time.

3. Grouping: The first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last one contains the addresses of another n free blocks, and so on.
Advantage: the addresses of a large number of free blocks can be found quickly.

4. Counting: The free-space list can be maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it. Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only when counts are greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Q) Synchronization hardware
The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction, without interruption. Two such operations are:
1. the TestAndSet() instruction
2. the Swap() instruction

1. The TestAndSet() instruction atomically returns the old value of its target variable and sets the variable to true.

Solution using TestAndSet: the shared variable lock is initialized to false. A process spins, calling TestAndSet(&lock) until it returns false, then enters its critical section, and sets lock back to false when it leaves.

2. The Swap() instruction atomically exchanges the contents of two Boolean variables.

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key. A process sets key to true and repeatedly swaps lock and key until key becomes false, at which point it holds the lock and may enter its critical section.

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee as to the relative rates of the processes, a very fast process could theoretically release the lock, whip through its remainder section, and re-lock the lock before a slower process got a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

A test-and-set solution that does satisfy bounded waiting uses two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

Bounded-waiting mutual exclusion with TestAndSet(): on leaving its critical section, a process scans in cyclic order (starting with the next process) for a process that is waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process while continuing to block all the others; if no process is waiting, it releases the lock.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1. wait()  2. signal()

Wait primitive:
    wait(S) {
        while (S <= 0)
            ;   // do nothing
        S--;
    }

Signal primitive:
    signal(S) {
        S++;
    }

Each process that wishes to use a resource performs a wait() operation. When a process releases a resource, it performs a signal() operation.

Semaphore usage:
1. A binary semaphore (or mutex lock) can take the values 0 or 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter a critical section and use one of the resources. When the counter reaches 0 (or goes negative, in some implementations), a process requesting a resource blocks until another process frees a resource and increments the counting semaphore with a signal() call.

For example, suppose there are 3 resources and 4 processes, so the counting semaphore starts at 3:
- P1 requires a resource and performs wait(); the semaphore is now 2.
- P2 requires a resource and performs wait(); the semaphore is now 1.
- P3 requires a resource and performs wait(); the semaphore is now 0.
- P4 requires a resource and performs wait(); P4 waits until a resource becomes available.

3. Semaphores can also be used to synchronize certain operations. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2. First we create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code

    S1;
    signal(synch);

and in process P2 we insert the code

    wait(synch);
    S2;

Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal.

Semaphore implementation
With the wait() shown above, while one process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spinlock, as the process spins while waiting.

Semaphore implementation with no busy waiting: each semaphore has an integer value and maintains a queue of waiting processes. To overcome busy waiting, we block the waiting process; a waiting process is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue back to the ready state).

Problems due to semaphores
1. Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes.

2. Starvation: one or more processes are blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithm for adding processes to the waiting queue, or for selecting the one to be removed by the signal() call. If a LIFO queue is chosen, the first process to start waiting may never get a chance.

3. Priority inversion: let processes L, M, H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it preempts L, and M runs. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority (priority inheritance) so that M does not preempt L.
=====================================================================
Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem
Consider:

- n buffers, each able to hold one item
- a producer process, which creates the items (one at a time)
- a consumer process, which processes them (one at a time)

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (counts the number of empty buffers)
2. full = 0 (counts the number of filled buffers)
3. mutex = 1 (provides mutual exclusion for access to the buffer)

Producer process:
    do {
        ...produce an item...
        wait(empty);
        wait(mutex);
        ...add the item to the buffer...
        signal(mutex);
        signal(full);
    } while (1);

Consumer process:
    do {
        wait(full);
        wait(mutex);
        ...remove an item from the buffer...
        signal(mutex);
        signal(empty);
        ...consume the item...
    } while (1);

2. The Readers-Writers Problem
A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers may read the file at the same time, but a writer must have exclusive access (it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

- readers priority: no reader waits merely because a writer is waiting
- writers priority: if a writer is waiting to access the data, no new readers may start reading

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses:
1. int readcount = 0 (an integer counting how many processes are currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    ...reading is performed...
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);

Writer process:
    do {
        wait(wrt);
        ...writing is performed...
        signal(wrt);
    } while (TRUE);

3. The Dining-Philosophers Problem
n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

    semaphore chopstick[5];   // initially all values are 1

Philosopher i:
    do {
        wait(chopstick[i]);
        wait(chopstick[(i + 1) % 5]);
        ...eat...
        signal(chopstick[i]);
        signal(chopstick[(i + 1) % 5]);
        ...think...
    } while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it can create a deadlock if every philosopher picks up his left chopstick at the same moment.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, and so on. These systems lack the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within a time constraint.

            Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that response requirements will be met under all conditions.
2. A soft real-time system cannot guarantee that response requirements will be met under all conditions. Examples: digital audio and multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control; that is, decision-making is spread across the different computers in the system, so that failure of individual computers or of the network does not cripple operation.

A distributed operating system appears to its users as a uniprocessor system, even though it has multiple processors. Users need not know where their programs run or where their files are located; the operating system handles all of that automatically.

            Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

DESKTOP SYSTEMS: A desktop system is a personal computer (PC) system used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. Such systems have very little memory.


CLUSTERED SYSTEMS: A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

            Q) Modern operating system

New developments in hardware, in applications, and in security threats have led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, Internet and Web access, and client-server computing.

            Developments in modern operating system can be categorized as

1. Microkernel architecture assigns only a few functions to the kernel (IPC, basic CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as there are now often multiple microprocessors in a single system, an SMP operating system provides greater efficiency. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric). SMP schedules processes or threads across all of the processors.

Advantages:

1. Increased performance: if some portion of a program can run in parallel, performance improves.
2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can scale with the number of processors used.

            4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5. Object-oriented design adds modularity to the kernel. The OS can be customized without affecting system integrity.

Q) Linux operating system
Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization.

Components of the Linux system


The Linux operating system has primarily three components:

Kernel - The core part of Linux, responsible for all major activities of the operating system. It interacts directly with the hardware and hides low-level hardware details from system and application programs.

System libraries - Special functions or programs used by application programs to access the kernel's features.

System utilities - Programs responsible for specialized, individual-level tasks such as updating log files or accepting login requests from terminals.

Kernel modules: sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects the resources reserved by one driver from accidental use by another driver.

Kernel mode vs user mode
Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require context switches, so it is very efficient and fast. The kernel runs processes and provides system services to them, including protected access to the hardware. Code that is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code; they use the system libraries to invoke kernel functions for low-level tasks.

Basic features
The following are some of the important features of the Linux operating system:

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on many kinds of hardware platforms.

Open source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of Linux, and it is continuously evolving.

Multi-user - Multiple users can access system resources (memory, applications, etc.) at the same time.

Multiprogramming - Multiple applications can run at the same time.

Hierarchical file system - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system, perform various operations, call application programs, and so on.


Security - Linux provides user security using authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture
The Linux system architecture consists of the following layers:

Hardware layer - All peripheral devices (RAM, HDD, CPU, etc.).

Kernel - The core component of the operating system; interacts directly with the hardware and provides low-level services to the upper-layer components.

Shell - An interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP operating system
1. Extensibility — layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability — XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3. Reliability — XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.
4. Compatibility — applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run without changing the source code.
5. Performance — XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.
6. International support — XP supports different locales via the national language support (NLS) API.

XP architecture:
1. Layered system of modules.
2. Protected mode — hardware abstraction layer (HAL), kernel, executive.
3. User mode — collection of subsystems:
   a. environmental subsystems emulate different operating systems
   b. protection subsystems provide security functions


Q) Windows network operating system
Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading, MA: Addison-Wesley.





3) Other processes that wish to communicate using the shared-memory segment must attach it to their address space.
4) Processes can exchange information by reading and writing data in the shared areas.
5) Shared-memory systems are a convenient way to communicate.
6) Shared-memory systems are faster and provide maximum speed because:

i) system calls are required only to establish the shared-memory regions;
ii) once shared memory is established, all accesses are treated as routine memory accesses, and no assistance from the kernel is required.

(b) Message-passing systems

1. are useful for exchanging smaller amounts of data;
2. are easier to implement for inter-computer communication;
3. are more time-consuming than shared memory, as they are implemented using system calls and need kernel intervention.

To send messages, a communication link must exist between the processes. The link can be implemented physically or logically.

Different methods for logically implementing a link are:

1. Direct or indirect communication
2. Synchronous or asynchronous communication
3. Automatic or explicit buffering

              1a Direct communication

              A link is established automatically between every pair of processes that want to communicate

              A link is associated with exactly two processes

Addressing

i) Symmetry in addressing: the sender and receiver processes must name each other to communicate. The send() and receive() primitives are:
    send(P, message) - send a message to process P
    receive(Q, message) - receive a message from process Q

ii) Asymmetry in addressing: only the sender names the receiver. The primitives are:
    send(P, message) - send a message to process P
    receive(id, message) - receive a message from any process; id is set to the name of the sender

The disadvantage of both forms of addressing is limited modularity: changing the id of a process requires finding all references to the old id and modifying them.

1b. Indirect communication

1. Messages are sent to and received from mailboxes (also called ports).
2. Each mailbox has a unique id (an integer value).
3. Two processes can communicate only if they have a shared mailbox.
4. A link is established between two processes if they have a shared mailbox.


5. A link may be associated with more than two processes.
6. A mailbox may be owned by a process or by the OS.

a. If the mailbox is owned by a process, we can distinguish between the owner (which can only receive messages) and the user (which can only send messages). When the process that owns a mailbox terminates, the mailbox disappears.

b. If the mailbox is owned by the OS, the OS must provide mechanisms to:
    i. create a new mailbox
    ii. send and receive messages through the mailbox
    iii. delete the mailbox
    iv. pass ownership to other processes

2. Synchronous or asynchronous communication
Message passing may be either blocking (synchronous) or non-blocking (asynchronous):

1. Blocking send: the sending process is blocked until the message is received by the receiving process or mailbox.
2. Non-blocking send: the sending process sends the message and resumes operation.
3. Blocking receive: the receiver blocks until a message is available.
4. Non-blocking receive: the receiver retrieves either a valid message or a null.

3. Automatic or explicit buffering
Messages exchanged by communicating processes reside in a temporary queue. Such queues can be implemented in 3 ways:

i. Zero capacity: queue length = 0; the link cannot hold waiting messages, so the sender must block until the receiver receives the message.
ii. Bounded capacity: queue length is finite (say n); when the queue is full, the sender must block until space is available in the queue.
iii. Unbounded capacity: queue length is infinite; the sender never blocks.
===============================================================================
Q) What is deadlock?

A set of processes is deadlocked when every process in the set is waiting for a resource that is currently allocated to another process in the set. For example, suppose process P1 is allocated resource R2 and P2 is allocated R1, while P1 requires R1 and P2 requires R2. Processes P1 and P2 will wait forever; this situation is called deadlock.

              Q) What are the four conditions that are necessary for deadlock to occur

              1 Mutual Exclusion - At least one resource must be held in a non-sharable mode If any other process requests this resource then that process must wait for the resource to be released

              2 Hold and Wait - A process must be simultaneously holding at least one resource and waiting for at least one resource that is currently being held by some other process

              3 No preemption - Once a process is holding a resource then that resource cannot be taken away from that process until the process releases it

4. Circular Wait - A set of processes {P0, P1, P2, ..., PN} must exist such that every P[i] is waiting for P[(i + 1) % (N + 1)].

Q) Methods for handling deadlocks

1. Use deadlock-prevention or deadlock-avoidance protocols, ensuring the system never enters a deadlocked state.
2. Allow the system to enter a deadlocked state, detect it, and recover.
3. Ignore the problem and pretend that deadlocks never occur.

To make sure that the system never enters a deadlocked state, the system can use:

1. Deadlock prevention
2. Deadlock avoidance

              ==============================================================================

              Deadlock Prevention

1. Mutual Exclusion: We cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources (e.g., a printer) are intrinsically non-sharable.

              2 Hold and Wait

              To make sure that the hold-and-wait condition never occurs in the system two protocols that can be used are

              Protocol 1 All the resources requested must be allocated before process begins execution

              Protocol2 A process can request resources only when it has none If a process requires additional resources it must release all the resources that are currently allocated

Example: Consider a process that copies data from a DVD drive to a file on disk, sorts the file, and then prints the results to a printer.

If protocol 1 is used, the process must request the DVD drive, disk file, and printer at the beginning, and must hold them until the end.

Disadvantages:

1. Starvation: a process may wait forever, because at least one resource it needs is always allocated to some other process.

2. Low resource utilization: the process holds the printer from beginning to end even though it is used only at the end.

If protocol 2 is used, the process initially requests the DVD drive and disk file, copies from the DVD drive to the disk, and then releases both. It then requests the disk file and printer.

Disadvantage: there is a chance that the data may not remain in the disk file between the two phases.

              3 No pre-emption

              To make sure that this condition does not hold the following protocol is used

              Protocol If a process (say A) requests some resources

              Case 1 If resources are available then Allocate them

              Case 2 if resources are allocated to some other process(say B) that is waiting for additional resources

              then Preempt the desired resources from the waiting process (B) and allocate them to requesting process(A)

Process B can be restarted only when it is reallocated both the resources that were preempted from it and the additional resources it was requesting.

              Case 3 if resources are neither available nor held by a waiting process then Process A waits

This protocol is applied to resources whose state can easily be saved and restored, such as CPU registers and memory space.

              4 Circular Wait

              To make sure Circular Wait condition never occurs

              1 Each Resource is assigned a unique integer number

              2 Each Process must request resources in an increasing order of enumeration

We define a one-to-one function F: R → N, where R is the set of resource types and N is the set of natural numbers.

Suppose a process has initially requested a resource of type Ri.

Protocol 1: After that, the process can request a resource of type Rj if and only if F(Rj) > F(Ri).

Protocol 2: If a process requests a resource of type Rj, it must first release all resources Ri for which F(Ri) >= F(Rj).

              Example Let F (tape drive) =1 F(disk drive) = 5 and F(printer)=12

              A process can request any number of tape drives disk drives and printers

Protocol 1 If a process A has already requested a disk drive, then A can request only the printer and cannot request a tape drive

Protocol 2 In order to request a tape drive, process A must first release the disk drive and can then request the tape drive

              If the above two protocols are used then the circular wait condition never occurs We can prove this by contradiction

Proof Assume a circular wait exists. Let the set of processes involved in the circular wait be P0, P1, ..., Pn, where P0 is waiting for resource R0 which is held by P1

P1 is allocated R0 and P1 is waiting for resource R1 which is held by P2, so F(R0) < F(R1)

P2 is allocated R1 and P2 is waiting for resource R2 which is held by P3, so F(R1) < F(R2)


Pn is allocated Rn−1 and Pn is waiting for resource Rn which is held by P0, so F(Rn−1) < F(Rn); and since P0 holds Rn while waiting for R0, F(Rn) < F(R0)

Hence by transitivity F(R0) < F(R0), which is impossible. Hence our assumption that a circular wait exists is FALSE
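The ordering protocol above can be sketched in Python. The numbering F comes from the tape drive / disk drive / printer example in these notes; the helper name can_request is our own:

```python
# Resource numbering from the example: F(tape drive)=1, F(disk drive)=5, F(printer)=12
F = {"tape drive": 1, "disk drive": 5, "printer": 12}

def can_request(held, resource):
    """Protocol 1: a new request is legal only if F(new resource)
    exceeds F(r) for every resource the process already holds."""
    return all(F[resource] > F[r] for r in held)
```

A process holding the disk drive may still request the printer, but a request for a tape drive would be refused, which is exactly what rules out a cycle.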

===============================================================================
Q) Resource Allocation Graph
Deadlocks can be understood more clearly through the use of resource-allocation graphs, which have the following properties

1 Resource types are represented as square nodes on the graph. Dots inside a square node indicate the number of instances of that resource (Ex two dots might represent two laser printers)

2 Processes are represented as circles

3 Request edges - If P1 has requested R1, a directed edge from P1 to R1 is a request edge

4 Assignment edges - A directed edge from R2 to P1 indicates that resource R2 has been allocated to process P1 and that P1 is currently holding R2. Note that a request edge is converted into an assignment edge when the request is granted

If a resource-allocation graph contains a cycle AND each resource type has only a single instance, then a deadlock exists. If a resource type has more than one instance, then a cycle in the resource-allocation graph indicates the possibility of a deadlock but does not guarantee one
==================================================================
Q) Deadlock Avoidance
For each resource request, the system can decide whether the request should be granted or not. To make this decision the system must have information like

1 resources currently available
2 resources currently allocated to each process
3 future requests and releases of each process
4 maximum number of resources each process may need

Given this information, it is possible to construct an algorithm that makes sure the system will never enter a deadlocked state. There are two deadlock-avoidance algorithms
1 Resource-Allocation Graph Algorithm
2 Banker's Algorithm

Safe State A system is in a safe state if there exists a safe sequence of processes P0, P1, P2, ..., PN such that the resource requests of each Pi can be satisfied by the currently available resources plus the resources held by all processes Pj where j < i. All safe states are deadlock free

              Unsafe state If a safe sequence does not exist then the system is in an unsafe state which MAY lead to deadlock

1 Resource-Allocation Graph Algorithm Resource-allocation graphs can be used for avoidance only if there is a single instance of each resource type. In this case unsafe states can be recognized and avoided by adding claim edges, denoted by dashed lines, which point from a process to a resource that it may request in the future. All the claim edges are added at the beginning of the process. When a process makes a request, the claim edge Pi → Rj is converted to a request edge. When a resource is released, the assignment edge changes back to a claim edge. This approach works by denying any request that would produce a cycle in the resource-allocation graph, taking claim edges into account
Consider for example the resource-allocation graph as shown

If P2 requests resource R2, then the claim edge P2 → R2 is converted to a request edge as follows

The resulting resource-allocation graph would have a cycle in it, so the request cannot be granted
Q) Banker's Algorithm or Deadlock avoidance algorithm with example
There are 12 tape drives. Let the current state of the system be as shown below

Process   Allocated   Max Need   Need = Max Need − Allocated
P0        5           10         5
P1        2           4          2
P2        2           9          7

Available = 12 − (5 + 2 + 2) = 3

Resource-Request Algorithm
Now when process P2 requests 1 tape drive, we run the resource-request algorithm to check whether the request should be granted. The request is granted only if, after granting it, all the processes in the system can still complete. For that we check
1 Is the request of P2 <= need of P2? 1 <= 7, therefore TRUE
2 Is the request of P2 <= Available? 1 <= 3, therefore TRUE
3 Pretend the request is granted for P2. Now the current state is as shown below

Process   Allocated   Max Need   Need = Max Need − Allocated
P0        5           10         5
P1        2           4          2
P2        2+1=3       9          7−1=6

Available = 3 − 1 = 2
Now run the safety algorithm to check whether the system is in a safe state

Safety Algorithm
1 Let WORK = Available = 2
2 Find an unfinished process such that need of that process <= WORK

Check P0: need of P0 = 5, WORK = 2. Is 5 <= 2? FALSE
Check P1: need of P1 = 2, WORK = 2. Is 2 <= 2? TRUE. Therefore P1 can finish
If P1 finishes, WORK = WORK + allocated to P1 = 2 + 2 = 4
Now again check if P0 can complete: need of P0 = 5, WORK = 4. Is 5 <= 4? FALSE
Check if P2 can complete: need of P2 = 6, WORK = 4. Is 6 <= 4? FALSE

So neither P0 nor P2 can complete, and the system would be in an unsafe state. Hence the request for 1 tape drive by P2 is not granted

2 Banker's Algorithm For resources that have more than one instance, the resource-allocation graph method does not work, so we use the Banker's algorithm
When a process starts up, it must state in advance the maximum allocation of resources it may request
When a request is made, the scheduler determines whether granting the request would leave the system in a safe state. If not, the process must wait until the request can be granted safely
The Banker's algorithm relies on several key data structures (where n is the number of processes and m is the number of resource categories)
o Available[m] indicates how many instances of each resource are currently available
o Max[n][m] indicates the maximum demand of each process for each resource
o Allocation[n][m] indicates the number of each resource allocated to each process
o Need[n][m] indicates the remaining resources needed of each type for each process (Note that Need[i][j] = Max[i][j] − Allocation[i][j] for all i, j)
For simplification of discussion, one row of the Need matrix, Need[i], can be treated as a vector corresponding to the needs of process i, and similarly for Allocation and Max

Safety Algorithm In order to apply the Banker's algorithm, we first need an algorithm for determining whether or not a particular state is safe. This algorithm determines if the current state of a system is safe according to the following steps
1 Let Work and Finish be vectors of length m and n respectively. Work is a working copy of the available resources. Finish is a vector of booleans indicating whether a particular process can finish. Initialize Work = Available and Finish to false for all elements
2 Find an i such that both (A) Finish[i] == false and (B) Need[i] <= Work. Such a process has not finished but could finish with the given available working set. If no such i exists, go to step 4
3 Set Work = Work + Allocation[i] and set Finish[i] to true. This corresponds to process i finishing and releasing its resources back into the work pool. Then loop back to step 2
4 If Finish[i] == true for all i, then the state is a safe state, because a safe sequence has been found

Resource-Request Algorithm (The Banker's Algorithm) Now that we have a tool for determining if a particular state is safe, this algorithm determines if a new request is safe and grants it only if it is safe to do so. When a request is made (that does not exceed currently available resources), pretend it has been granted and then see if the resulting state is a safe one. If so, grant the request; if not, deny the request, as follows
1 Let Request[n][m] indicate the number of resources of each type currently requested by processes. If Request[i] > Need[i] for any process i, raise an error condition
2 If Request[i] > Available, then process i must wait for resources to become available; else the process can continue to step 3
3 Check whether the request can be granted safely, by pretending it has been granted and then seeing if the resulting state is safe. If the resulting state is safe, grant the request; else the process must wait until its request can be granted safely
The procedure for granting a request (or pretending to, for testing purposes) is

Available = Available − Request
Allocation = Allocation + Request
Need = Need − Request
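The safety and resource-request algorithms above can be sketched in Python for a single resource type, matching the 12-tape-drive example (the function names are ours):

```python
def is_safe(available, allocation, need):
    """Safety algorithm: return a safe sequence of process indices, or None."""
    n = len(allocation)
    work = available                # working copy of Available
    finish = [False] * n
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            # A process can finish if its remaining need fits in Work
            if not finish[i] and need[i] <= work:
                work += allocation[i]   # it releases everything it holds
                finish[i] = True
                sequence.append(i)
                progressed = True
    return sequence if all(finish) else None

def request_resources(pid, request, available, allocation, need):
    """Resource-request algorithm: pretend-grant, keep only if still safe."""
    if request > need[pid]:
        raise ValueError("process exceeded its declared maximum")
    if request > available:
        return False                    # must wait: not enough free resources
    # Pretend the request is granted
    new_avail = available - request
    new_alloc = allocation[:]; new_alloc[pid] += request
    new_need = need[:];        new_need[pid] -= request
    return is_safe(new_avail, new_alloc, new_need) is not None
```

With Allocation = [5, 2, 2], Need = [5, 2, 7] and Available = 3, the initial state is safe (sequence P1, P0, P2), but granting P2 one more tape drive leaves no safe sequence, so the request is denied, as in the worked example.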

              Unit III Memory management

Just as processes share the CPU, they also share physical memory. The memory management unit of the OS takes care of memory allocation, deallocation, and other issues. A program must be brought into memory for it to be run. Addresses are of two types
i) relocatable or relative addresses, with respect to the beginning of the program
ii) absolute addresses

Q) Address Binding
Ans Binding means mapping the logical address space to the physical address space. Address binding can happen at three different stages
Compile time If it is known at compile time where in memory the program will be placed, the compiler generates absolute addresses. Otherwise the compiler generates relocatable addresses
Load time The loader binds the relocatable addresses generated by the compiler to absolute addresses. Hence binding is done at load time. If binding is done at compile or load time, physical and logical addresses are the same
Execution time If address binding is done at run time, the process can be moved during its execution from one memory segment to another. Here logical addresses are called virtual addresses. Run-time mapping of virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU)

Q) Logical vs Physical Address Space
Ans Logical address - generated by the CPU, also referred to as virtual address
Physical address - address seen by the memory unit
The set of all logical addresses is called the logical address space. The set of all physical addresses is called the physical address space

Q) Memory-Management Unit (MMU)
Ans The MMU is a hardware device that maps virtual to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by the CPU before it is sent to memory. The user program deals only with logical addresses; it never sees the real physical addresses

Q) Dynamic Loading
Ans Since physical memory is small, it may not be possible for the entire program to be in main memory, so we can use dynamic loading. It is the responsibility of users to design their programs to take advantage of dynamic loading. Only the main function is loaded into main memory; when main() calls another function, it checks whether that function is in main memory. If not, the loader loads the desired function into main memory and updates the program's address table

A routine is not loaded until it is called. We achieve better memory-space utilization, as an unused routine is never loaded

Q) Dynamic Linking
+ Linking is postponed until execution time
+ A small piece of code called a stub is used to locate the appropriate memory-resident library routine
+ The stub replaces itself with the address of the routine and executes the routine
+ The operating system checks whether the routine is in the process's memory address space
+ Dynamic linking is particularly useful for libraries

Q) Overlays
Ans Overlays are needed when a process is larger than the amount of main memory allocated to it. Only the instructions and data that are needed at any given time are kept in main memory. Overlays can be implemented by the user; programming the design of an overlay structure is complex. Overlays for a two-pass assembler are as shown in the figure

Q) Swapping
Ans A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution
Backing store - a fast disk large enough to store copies of all memory images of all users
Priority-based scheduling uses a variant of swapping called roll out, roll in. If a higher-priority process arrives, the memory manager swaps out a lower-priority process and then swaps in the higher-priority process. When the higher-priority process finishes, the lower-priority process can be swapped back into main memory. A process that is swapped out will be swapped back into the same memory space it occupied previously
The major part of the swap time is transfer time. We can swap only an idle process; we cannot swap a process that is waiting for IO

Q) Contiguous Memory Allocation
Ans Each process is contained in a single contiguous section of memory
1 Fixed size partition (or) single-partition allocation
Divide main memory into fixed-sized partitions. Each partition may contain exactly one process. The relocation register contains the value of the starting physical address; the limit register contains the range of logical addresses. Every address generated by the CPU is checked as follows

If logical address < limit register, then the logical address is added to the relocation register to get the corresponding memory address

else a trap to the OS is generated. Since every address is checked, we can protect the OS and other user programs from being modified by the running process
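The limit-and-relocation check can be sketched in a few lines of Python; the register values here are assumed for illustration:

```python
RELOCATION = 14000   # relocation register: starting physical address (assumed)
LIMIT = 3000         # limit register: range of legal logical addresses (assumed)

def translate(logical):
    # Every CPU-generated address is checked against the limit register
    if logical < LIMIT:
        return logical + RELOCATION   # relocation register is added
    raise MemoryError("trap to OS: address out of range")
```

For example, logical address 346 maps to physical address 14346, while an address of 4000 or more raises the trap.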

              2 Multiple-partition allocation

1) Fixed size partitions Divide main memory into fixed-sized blocks. Here memory allocated to a process may be larger than required. The difference between allocated memory and requested memory is called internal fragmentation; internal fragmentation is unused memory inside the partition

0    P1   Block0 (addresses 0 to 3)
4    P1   Block1 (addresses 4 to 7)
8         Block2 (addresses 8 to 11)
12        Block3 (addresses 12 to 15)

If P1 requires just 5 addresses, it is still allocated 2 blocks, ie 8 addresses. So the remaining 3 addresses are left unused

2) Variable size partitions Here main memory is divided into partitions of variable sizes. The operating system maintains information about

              a) allocated partitions b) free partitions (hole)

              In the beginning all main memory is empty and is considered one large block of available memory a hole

Exactly the memory required by a process is given. When a process terminates, it releases its memory, which can then be allocated to another process

Memory is allocated to processes until finally no available block of memory (hole) is large enough to hold the next process. External fragmentation exists when there is enough total memory to satisfy a request but the available memory is not contiguous; it is wastage of memory outside the partitions. One solution to the problem of external fragmentation is

              a Compaction shuffle the memory contents so as to place all free memory together into one large block

===============================================================================
Q) Dynamic Storage-Allocation Problem (or) Most commonly used strategies to select a free hole from the set of available holes
Ans To satisfy a request of size n from a list of free holes, the following 3 policies can be used

              1 First-fit Allocate the first hole that is big enough

2 Best-fit Allocate the smallest hole that is big enough. We must search the entire list if it is not ordered by size. Produces the smallest leftover hole

3 Worst-fit Allocate the largest hole. We must also search the entire list. Produces the largest leftover hole
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization
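The three strategies can be sketched in Python; the hole sizes in the usage note are assumed for illustration:

```python
def first_fit(holes, n):
    """Index of the first hole that is big enough, else None."""
    for i, size in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    """Smallest hole that is big enough (smallest leftover hole)."""
    fits = [(size, i) for i, size in enumerate(holes) if size >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Largest hole (largest leftover hole)."""
    fits = [(size, i) for i, size in enumerate(holes) if size >= n]
    return max(fits)[1] if fits else None
```

For holes of sizes [100, 500, 200, 300, 600] and a request of 212, first-fit picks the 500 hole, best-fit the 300 hole, and worst-fit the 600 hole.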

===============================================================================
Q) Non-Contiguous Memory Allocation

1 Paging Paging is a memory management scheme that provides non-contiguous memory allocation

0    P1   Block0 (addresses 0 to 3)
4    P2   Block1 (addresses 4 to 7)
8         Block2 (addresses 8 to 11)
12   P1   Block3 (addresses 12 to 15)

              Logical address space of a process can be noncontiguous Ex P1 is allocated Block0 and Block3

1 Divide physical memory into fixed-sized blocks called frames (size is a power of 2)
2 Divide logical memory into blocks of the same size called pages
3 A logical address is divided into 2 parts
Page number (p) - the page table is indexed by page number
Page offset (d) - displacement within the page
4 A page table is allocated to each process (a pointer to the page table is stored in the PCB of the process). The page table translates logical to physical addresses: page 0 is in frame 1, page 1 is in frame 4, etc
5 Internal fragmentation may occur due to paging
6 If the size of the logical address space is 2^m and the page size is 2^n, then the higher-order m−n bits of the logical address give the page number and the n lower-order bits give the displacement within the page

Ex size of logical address space = 8 = 2^3, so m = 3

page size = 4 = 2^2, so n = 2

So m−n = 1 bit for the page number (to represent 0, 1)

and the remaining n = 2 bits for displacement within the page
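The page-number/offset split above is just bit arithmetic, as this small sketch shows (using the m = 3, n = 2 example):

```python
M, N = 3, 2   # logical address space 2^3 = 8 addresses, page size 2^2 = 4

def split(addr):
    assert addr < (1 << M), "address outside logical address space"
    page = addr >> N                   # high-order m-n bits: page number
    offset = addr & ((1 << N) - 1)     # low-order n bits: displacement
    return page, offset
```

For instance, logical address 6 (binary 110) splits into page 1, offset 2.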

              Q) Implementation of Page Table

              Page table can be kept as

1) A set of dedicated registers - efficient only if the page table is small

2) In memory - a page-table base register (PTBR) points to the page table, and a page-table length register (PTLR) indicates its size. Changing the page table requires changing only the values in these registers
Advantage of keeping the page table in memory: less context-switch time
Disadvantage: two memory accesses are required - one memory access to read the page table entry and another to access the required memory address

This problem can be solved by using a special fast-lookup hardware cache called associative memory or translation look-aside buffer (TLB)

              i Each entry in TLB consists of 2 parts

              I) page number field

              II) Value field or frame number field

ii When a logical address is generated by the CPU, its page number is presented to the TLB. The page number is compared with all the entries of the TLB simultaneously

If the page number is found in the TLB, its frame number is immediately available -> TLB hit
If the page number is not found, check in the page table -> TLB miss

With a TLB the search is fast, but TLB hardware is expensive, so the TLB size is kept small. Hence the TLB contains only a few frequently used page table entries
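The hit/miss behaviour can be sketched with a dictionary standing in for the associative hardware. Pages 0 and 1 use the frame numbers from the example above (frame 1 and frame 4); the other entries are assumed:

```python
tlb = {}                                # small cache: page number -> frame number
page_table = {0: 1, 1: 4, 2: 3, 3: 7}  # page 0 in frame 1, page 1 in frame 4, ...

def lookup(page):
    if page in tlb:                 # TLB hit: frame number immediately available
        return tlb[page], "hit"
    frame = page_table[page]        # TLB miss: extra access to the page table
    tlb[page] = frame               # cache the entry for next time
    return frame, "miss"
```

The first lookup of a page is a miss; repeating it hits in the TLB and skips the page-table access.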

Q) Protection in Paging A valid/invalid bit is attached to each entry in the page table. The OS sets this bit for each page to allow or disallow access to the page. When the bit is set to invalid, the page is not in the process's logical address space, and an access generates a trap to the OS

              Q) Shared pages in Paging

              Reentrant code is the code that never changes during execution Reentrant code can be shared

              1 One copy of read-only (reentrant) code is shared among processes (ie text editors compilers window systems) Ex ed1ed2ed3 is shared among Process P1 and P2

              2 Shared code must appear in same location in the logical address space of all processes

              Each process keeps a separate copy of the private code and data EX data1 and data2

              The pages for the private code and data can appear anywhere in the logical address space

==============================================================================
Q) Segmentation

Segmentation is a memory management scheme that supports the user's view of memory

              When the user program is compiled the compiler generates segments like

              1) The code segment

              2) Global variables segment

              3) Heap memory segment

              4) Stack segment etc

              Each entry in segment table has

segment base (the starting physical address of the segment) and segment limit (the length of the segment)

              Logical address is divided into 2 parts

              segment-number offset

              Segment table is indexed by segment number

              Segment-table base register (STBR) stores the location of segment table in main memory

              Segment-table length register (STLR) stores number of segments used by a program

The segment number (say 's') is used to index the segment table; the required entry is 's' locations from the beginning of the segment table. Once the required entry is found, the offset ('d') is compared with the limit

If (offset < limit), then the offset is added to the base entry to generate the physical address; else a trap is generated
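The base-and-limit check per segment can be sketched as follows; the segment table contents are assumed example values:

```python
# Segment table: index = segment number, entry = (base, limit); assumed values
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]

def translate(s, d):
    base, limit = segment_table[s]   # entry is s locations into the table
    if d >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + d                  # physical address = base + offset
```

Here (segment 2, offset 53) maps to physical address 4353, while (segment 1, offset 500) traps because the offset exceeds that segment's limit of 400.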

Q) Shared Segments Code sharing occurs at the segment level. Shared segments must have the same segment number

Allocation is a dynamic storage-allocation problem: use best fit or first fit, which may cause external fragmentation

Protection: protection bits are associated with segments, giving read/write/execute privileges. Keeping an array in a separate segment lets the hardware check for illegal array indexes

===============================================================================
Q) Virtual Memory

Ans In paging and segmentation, a program can execute only if the entire process is in main memory. With virtual memory, only part of the program needs to be in memory for execution

1 A program larger than the available main memory can still run, as only part of the program is loaded into main memory. The logical address space can therefore be much larger than the physical address space

              2 Allows address spaces to be shared by several processes

              Virtual memory can be implemented via Demand paging Demand segmentation

The large blank space between heap and stack is part of the virtual address space. Virtual memory allows files and memory to be shared by 2 or more processes through page sharing, as shown in figure (b). Figure (a) depicts virtual memory that is larger than physical memory

              ==============================================================================

Q) Demand Paging
Demand paging is the technique of loading pages from disk into main memory only when a page is needed. Hence, using a smaller amount of physical memory, we get faster response

              Demand paging is similar to PAGING + SWAPPING

              Instead of swapping a whole process the pager brings only those pages needed into memory

              To distinguish between pages that are in memory and pages that are on the disk validinvalid bit is used

              Validinvalid bit is attached to each entry in page table When this bit is set to valid the page is legal and in memory When this bit is set to invalid either the page is illegal or the page is in disk

When the CPU generates a logical address whose page table entry is set to invalid, a page fault occurs, meaning the required page is on disk. A section of the hard disk called swap space is used to hold pages that are not present in memory

              Q) Procedure for Handling a Page Fault

              1) CPU generates a logical address and if the validinvalid bit for the page that has this logical address is set to invalid page fault occurs

              2) Page fault causes a trap to operating system

3) Check whether the logical address is within the logical address space of the process, ie the PTBR and PTLR are checked
If (the required logical address is not within the logical address space of the process) terminate the process
else the page is legal but not in memory; it is on disk

4) To bring the required page into memory, find a free main memory frame, map the address to a disk block, fetch the disk block, and load it into the free frame
5) When the required page is brought into memory, update the page table to indicate the page is in memory
6) Restart the instruction interrupted by the illegal address trap. The process continues as if the page had always been in memory

              Q) What is pure demand pagingAns If no page belonging to the executing process is in main memory the process will fault for every page it needs Page faults will occur until every page that is needed is in memory This scheme is known as pure demand paging NEVER BRING A PAGE UNTIL IT IS REQUIRED

Q) Page Replacement
1 Find the location of the desired page on the disk
2 Find a free frame
i) If there is a free frame, use it
ii) If there is no free frame, use a page replacement algorithm to find a victim page in memory that is not really in use
iii) If the modify bit associated with the victim frame is set to 1, the page is written to disk. Change the valid/invalid bit of the victim page's page table entry to invalid, to indicate the page is no longer in memory; else there is no need to write to disk
3 Read the desired page into the freed frame and update the frame and page tables
4 Restart the user process

=================================================================================================
Q) Page Replacement Algorithms
Goal Produce a low page-fault rate. The algorithms are evaluated by running them on a particular string of memory references (reference string) and computing the number of page faults on that string

              1 First in First out (FIFO) Page replacement algorithm Uses the time when a page was brought into memory The page that was brought first into memory is replaced ie oldest page is chosen for replacement We create a FIFO queue to hold all pages in memory

              When a page is brought into memory we insert it at the tail of the queue and we replace the page at the head of the queue

FIFO: 15 page faults
The problem with FIFO is Belady's anomaly. For the reference string
1 2 3 4 1 2 5 1 2 3 4 5
When the number of frames allocated to the process is 1, the number of page faults = 12 with FIFO replacement
When the number of frames is 2, the number of page faults = 12
When the number of frames is 3, the number of page faults = 9
When the number of frames is 4, the number of page faults = 10
As the number of frames allocated to a process increases, page faults should decrease. But with FIFO replacement, page faults can increase. This is Belady's anomaly
2 Optimal Page Replacement Algorithm Replace the page that will not be used for the longest period of time. It is used for measuring how well other algorithms perform. It is not practical to implement OPT, as it is not possible to know what the future references will be. Ex

              Optimal page replacement 9 page faults
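The FIFO fault counts and Belady's anomaly can be reproduced with a short simulation (a sketch; the function name is ours):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement on a reference string."""
    frames, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:           # memory full:
                frames.discard(queue.popleft())  # evict the oldest page
            frames.add(p)
            queue.append(p)                      # newest page at the tail
    return faults
```

Running it on the reference string 1 2 3 4 1 2 5 1 2 3 4 5 gives 9 faults with 3 frames but 10 faults with 4 frames, which is exactly Belady's anomaly.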

              3 Least Recently used (LRU) Page replacement algorithm Replace the page that has not been used for longest period of time LRU makes use of time of last use of the page Ex

              Number of page faults = 12

              LRU requires hardware assistance to determine the time of last use of the page We can use 1) Stack or 2) Counter to implement LRU page replacement algorithm

1) Stack implementation - keep a stack of page numbers in a doubly linked form. When a page is referenced, it is moved to the top of the stack. This implementation requires 6 pointers to be changed

2) Counter implementation - a time-of-use field is allocated to each frame. The CPU maintains a counter, which is incremented on every page reference. The counter value is copied into the time-of-use field of the referenced page
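As a sketch of the recency bookkeeping (simulating it in software rather than using the hardware stack or counter), LRU can be modelled with Python's OrderedDict:

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults for LRU replacement on a reference string."""
    frames = OrderedDict()          # insertion order = recency order
    faults = 0
    for p in refs:
        if p in frames:
            frames.move_to_end(p)   # referenced: now most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict least recently used page
            frames[p] = True
    return faults
```

On the reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 with 3 frames, this reports the 12 page faults quoted above.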

Disadvantages of the LRU page replacement algorithm: updating the counter or stack must be done for every memory reference. Hence LRU is slow, and its implementation requires hardware assistance
==============================================================================
Q) LRU Approximation Page Replacement
1) Using reference bit
2) Using additional-reference-bits algorithm
3) Second-chance algorithm
4) Enhanced second-chance algorithm
5) Counting-based page replacement

              Has two schemes1) Least Frequently Used (LFU)2) Most Frequently Used (MFU)

6) Page-buffering algorithms
Q) What is Thrashing
Ans A process is thrashing if it is spending more time paging than executing. Thrashing may occur when global (or) local page replacement is used, as described below. The solution to thrashing is the working-set strategy: if we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know how many frames a process needs, we use the locality model of process execution

Here a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate its current locality, it will not fault again until it changes locality
The working-set model uses a working set and a parameter Δ = the working-set window. The set of pages in the most recent Δ page references is the working set
Let WSSi be the working-set size of process Pi, ie Pi needs WSSi frames
Let D be the total demand for frames; then D = Σ WSSi
and let m be the total number of available frames

If D > m, thrashing occurs because some processes will not have enough frames. So
If (D > m) the OS selects a process to suspend
else the OS may initiate another process
The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window
========================================================================
Q) How does thrashing occur in global and local page replacement
Ans In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, causing a page fault for some other process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more processes queue up for the paging device,

the ready queue empties and the CPU becomes idle. The OS then introduces new processes because the CPU is free, and this further increases the number of page faults.

In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs, only the frames allocated to that process are replaced. When a process starts thrashing, other processes are not affected. But since processes will be in the queue for the paging device most of the time, the average service time for a page fault increases; hence the effective access time increases even for a process that is not thrashing.
Q) Page table structure (structure of the page table in memory)

1. Hierarchical paging
2. Hashed page tables
3. Inverted page tables

Hierarchical paging: if the page table is large, we break it up into multiple page tables, i.e., the page table itself is paged. A simple technique is a two-level page table.
Two-level paging example: a logical address (on a 32-bit machine with a 4K page size) is divided into:
1. a page number consisting of 20 bits
2. a page offset consisting of 12 bits
Since the page table is paged, the page number is further divided into:
1. a 10-bit page number
2. a 10-bit page offset
Thus a logical address is as follows:
page number | page offset

p1 | p2 | d

10 | 10 | 12
where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table.
Two-level page-table scheme:
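As an illustrative sketch, splitting a 32-bit logical address into the p1, p2, and d fields above is simple bit manipulation (the example address is arbitrary):

```python
PAGE_OFFSET_BITS = 12   # 4K pages -> 12-bit offset
INNER_BITS = 10         # p2: index within an inner page table
OUTER_BITS = 10         # p1: index into the outer page table

def split_address(addr):
    """Split a 32-bit logical address into (p1, p2, d)."""
    d = addr & ((1 << PAGE_OFFSET_BITS) - 1)
    p2 = (addr >> PAGE_OFFSET_BITS) & ((1 << INNER_BITS) - 1)
    p1 = addr >> (PAGE_OFFSET_BITS + INNER_BITS)
    return p1, p2, d

p1, p2, d = split_address(0xDEADBEEF)
```

Recombining the fields as (p1 << 22) | (p2 << 12) | d reproduces the original address, which is a quick way to check the split.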

              Address-Translation Scheme Address-translation scheme for a two-level 32-bit paging architecture

Hashed page tables (hash table): hashed page tables are common in address spaces > 32 bits. Each entry in the hash table contains a linked list of elements that hash to the same location; the hash table is indexed by hash value. Each element consists of 3 fields:

1. Page number
2. Frame number
3. Pointer to the next element in the linked list

The algorithm works as follows: the page number is passed to the hash function to obtain a hash value. Locating that value in the hash table is easy, since the table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address; if not, the remaining entries in the linked list are searched.
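A minimal sketch of this structure, with each bucket holding a chained list of (page number, frame number) pairs (the bucket count and example mappings are invented for illustration):

```python
NUM_BUCKETS = 16

# table[h] is the linked list (here a Python list) of entries hashing to h
table = [[] for _ in range(NUM_BUCKETS)]

def insert(page, frame):
    table[hash(page) % NUM_BUCKETS].append((page, frame))

def lookup(page):
    """Return the frame for `page`, or None if unmapped (page fault)."""
    for p, f in table[hash(page) % NUM_BUCKETS]:
        if p == page:       # compare with field 1 of each chained element
            return f
    return None

insert(0x2A, 7)
insert(0x2A + NUM_BUCKETS, 9)   # collides into the same bucket, chained after
```

Collisions land in the same bucket and are resolved by walking the chain, exactly as the algorithm above describes.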

              Inverted Page Table

An inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table has:

              Process-id Page number

A logical address also has 3 parts: Process-id, Page number,

Offset. The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, then <i, offset> gives the physical address. If there is no match, the reference is an illegal address.
Advantages and disadvantages:

Decreases the memory needed to store each page table, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to

limit the search to one, or at most a few, page-table entries.
Inverted page table architecture:

              Unit-IV CPU Scheduling

Basic concepts
To maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns it to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle
Process execution begins with a CPU burst, followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

An I/O-bound program has many short CPU bursts; a CPU-bound program has a few long CPU bursts.

Schedulers: modules in the OS that make scheduling decisions.
1. Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
2. Medium-term scheduler: selects which processes should be swapped in/out of memory.
3. Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (CPU) scheduler.

Dispatcher: gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

1. switching context
2. switching to user mode
3. jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency; it must be kept small.
Ready queue
The ready queue may be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of the processes.

              Pre-emptive Scheduling

              CPU scheduling decisions take place under one of four conditions

1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state, for example in response to an interrupt
3. When a process switches from the waiting state to the ready state, say at the completion of I/O
4. When a process terminates

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one.
If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive or cooperative. In non-preemptive scheduling, a process runs until it gives up the CPU itself, i.e., when it waits for I/O or when it completes. Otherwise the system is said to be pre-emptive.

Windows used non-preemptive scheduling up to Windows 3.x and began using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

Note that pre-emptive scheduling can cause problems when two processes share data, because one process may be interrupted in the middle of updating shared data structures.

Preemption can also be a problem if the kernel is busy with a system call (e.g., updating critical kernel data structures) when the preemption occurs. Solution: wait until the system call has either completed or blocked before allowing the pre-emption. But this solution is problematic for real-time systems.

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. But this should be done only for very short pieces of code that take little CPU time.

Scheduling criteria used to compare CPU scheduling algorithms:
1. CPU utilization: in a real system, CPU utilization ranges from about 40% to 90%.
2. Throughput: the number of processes completed per unit time.
3. Turnaround time: the time at which the process completed minus the time at which it was submitted; it includes both waiting time and burst time.
4. Waiting time: the amount of time a process spends waiting for the CPU.
5. Response time: the time of the first response minus the time at which the process was submitted.

Optimization criteria:
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time

CPU Scheduling Algorithms
1) First-Come, First-Served (FCFS) scheduling
FCFS is non-pre-emptive: processes are scheduled in the order they arrive. FCFS is implemented with a queue; when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages:
1. Average waiting time is often quite long.

2. Convoy effect (small processes wait in the queue for a big process to leave the CPU).
Example:
Process  Burst Time
P1       24
P2       3
P3       3

              Suppose that the processes arrive in the order P1 P2 P3

The Gantt chart for the schedule is:
P1 | P2 | P3

0 24 27 30
Waiting time for P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.
2) Shortest-Job-First (SJF) scheduling
Schedule the process with the shortest CPU burst time; if the burst times of two processes are equal, FCFS is used.
Advantages: average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either pre-emptive or non-pre-emptive:
1. Non-pre-emptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Pre-emptive SJF, or Shortest-Remaining-Time-First (SRTF): if a newly arrived process is shorter than what remains of the currently executing process, the currently executing process is pre-empted.

Example:
Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

              SJF (non-preemptive) Gantt Chart P1 P3 P2 P4

0 7 8 12 16
Average waiting time = [0 + (8-2) + (7-4) + (12-5)]/4 = 4
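As a rough sketch, the non-preemptive SJF schedule above can be reproduced with a small simulation (process data taken directly from the example):

```python
def sjf_nonpreemptive(procs):
    """procs: list of (name, arrival, burst). Returns {name: waiting_time}."""
    remaining = sorted(procs, key=lambda p: p[1])   # order by arrival time
    time, waiting = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                  # CPU idle until the next arrival
            time = remaining[0][1]
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest burst
        waiting[name] = time - arrival
        time += burst                  # runs to completion (non-preemptive)
        remaining.remove((name, arrival, burst))
    return waiting

w = sjf_nonpreemptive([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
```

Summing the four waiting times and dividing by 4 reproduces the average of 4 computed above.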

              Example of Preemptive SJF

Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

SJF (preemptive) Gantt chart:
P1 | P2 | P3 | P2 | P4 | P1

0 2 4 5 7 11 16
Average waiting time = (9 + 1 + 0 + 2)/4 = 3.
3) Priority scheduling
A priority number (an integer) is associated with each process, and the CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is itself a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. preemptive 2. non-preemptive.

Problem: starvation (low-priority processes may never execute).
Solution: aging (as time progresses, increase the priority of waiting processes).
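The pre-emptive SJF (SRTF) Gantt chart shown earlier can likewise be checked with a tick-by-tick simulation, a sketch using the same example data:

```python
def srtf(procs):
    """procs: list of (name, arrival, burst). Returns {name: completion_time}."""
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: a for name, a, _ in procs}
    done, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1                  # CPU idle this tick
            continue
        n = min(ready, key=lambda x: remaining[x])  # shortest remaining time
        remaining[n] -= 1              # run the chosen process for one tick
        time += 1
        if remaining[n] == 0:
            done[n] = time
            del remaining[n]
    return done

procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
c = srtf(procs)
waiting = {n: c[n] - a - b for n, a, b in procs}   # turnaround minus burst
```

The completion times match the Gantt chart (P3 at 5, P2 at 7, P4 at 11, P1 at 16), giving the average waiting time of 3.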

4) Round Robin (RR)
Each process gets a small unit of CPU time, the time quantum, usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once, and no process waits more than (n-1)q time units.
Performance:
1. If q is very large, RR behaves like FIFO (FCFS).
2. If q is small, q must still be large relative to the context-switch time; otherwise the overhead is too high.
Example of RR with time quantum = 4:
Process  Burst Time
P1       24
P2       3

P3       3
The Gantt chart is:
P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1

0 4 7 10 14 18 22 26 30
Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 ≈ 5.66.
5) Multilevel queue scheduling
The ready queue is partitioned into separate queues. For example, the ready queue may be divided into 2 queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, round-robin scheduling can be used in the foreground queue and

FCFS scheduling in the background queue. Scheduling must also be done between the queues, which can be done in 2 ways:
1. Fixed-priority scheduling

The foreground queue has the highest priority: all processes in the foreground queue must be completed (the queue must be empty) before any process in the background queue is given the CPU, as shown in the figure below. Disadvantage: starvation.

2. Time slice: each queue gets a certain share of the CPU time, which it schedules amongst its own processes, e.g., 80% to the foreground queue (RR) and 20% to the background queue (FCFS).

6) Multilevel feedback queue scheduling
A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes stay in the highest-priority queue. If a process uses too much CPU time, it is moved to a lower-priority queue; if a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues: Q0, Q1, Q2.

1. Q0: time quantum 8 milliseconds
2. Q1: time quantum 16 milliseconds
3. Q2: FCFS

              Scheduling

1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 milliseconds; if it does not finish in 8 milliseconds, it is moved to queue Q1.
2. At Q1 the process receives 16 additional milliseconds; if it still does not complete, it is moved to queue Q2.

Q) Synchronization
Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of process synchronization is the producer-consumer problem, also called the bounded-buffer problem. The requirement: the producer process must not add data when the buffer is full, and the consumer process must not take data when the buffer is empty.

The code for the producer process can be modified as follows:
while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed. in is an index that always points to the next empty slot after the last filled slot; out is an index that points to the first filled slot, at the head of the circular queue.

The code for the consumer process is shown below:
while (true) {
    while (counter == 0)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The producer and consumer code above produces a RACE CONDITION. A race condition is a situation where the outcome of executing several co-operating processes depends on the order in which their accesses to shared data take place. For example, suppose counter = 5.

Given counter = 5, if the producer produces one item and the consumer consumes one item, the correct final value of counter is 5. But if the unsynchronized increment and decrement interleave, counter may end up 4, an inconsistent result; and if T5 is executed before T4, counter ends up 6, also inconsistent.

Solution to avoid the race condition: make sure that only one process at a time updates the shared variable (here, counter).
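As an illustrative sketch (using Python threads rather than processes), protecting the shared counter with a mutual-exclusion lock removes the race:

```python
import threading

counter = 0
lock = threading.Lock()

def producer(times):
    global counter
    for _ in range(times):
        with lock:              # only one thread updates counter at a time
            counter += 1

def consumer(times):
    global counter
    for _ in range(times):
        with lock:
            counter -= 1

p = threading.Thread(target=producer, args=(50_000,))
c = threading.Thread(target=consumer, args=(50_000,))
p.start(); c.start()
p.join(); c.join()
# with equal produces and consumes, counter ends back at 0
```

Without the lock, the interleaved `counter += 1` and `counter -= 1` read-modify-write sequences could lose updates, exactly the inconsistency described above.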

Q) Critical section problem
Ans: Each process has a segment of code called its critical section. Critical-section code is code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute critical-section code at the same time. For this to happen:
1. Each process must request permission to enter its critical section (entry section).
2. The end of the critical section is the exit section.
3. The rest of the code after the critical section is the remainder section.

A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes not in their remainder sections take part in deciding which process enters its critical section next.
3. Bounded waiting: every requesting process must eventually be allowed to enter its critical section; no process must wait indefinitely.

Q) Peterson's solution to the critical-section problem (software-based solution)
Ans:
Time | Process 1                        | Process 2
T0   | do { flag[1] = TRUE; turn = 2;   | do { flag[2] = TRUE;
T1   |                                  | turn = 1;
T2   | while (flag[2] && turn == 2);    | while (flag[1] && turn == 1);
     | (turn is now 1, so the loop      | (loops continuously until flag[1]
     | does not run)                    | becomes FALSE)
T3   | enters critical section          |
T4   | flag[1] = FALSE;                 | enters critical section
T5   | enters remainder section         | flag[2] = FALSE;
     | } while (TRUE);                  | enters remainder section
     |                                  | } while (TRUE);
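A didactic Python sketch of the same algorithm, here for two threads incrementing a shared counter. This rests on an assumption: CPython's interpreter makes these simple operations effectively sequentially consistent, whereas real compilers and CPUs may reorder memory operations, so production code should use proper locks instead.

```python
import sys
import threading

sys.setswitchinterval(0.0001)   # speed up thread handoff during busy-waits

flag = [False, False]           # flag[i]: thread i wants to enter
turn = 0
counter = 0                     # shared variable protected by the algorithm

def worker(me, iterations):
    global turn, counter
    other = 1 - me
    for _ in range(iterations):
        flag[me] = True         # entry section: declare interest
        turn = other            # politely give the other thread priority
        while flag[other] and turn == other:
            pass                # busy-wait
        counter += 1            # critical section
        flag[me] = False        # exit section

t0 = threading.Thread(target=worker, args=(0, 1_000))
t1 = threading.Thread(target=worker, args=(1, 1_000))
t0.start(); t1.start()
t0.join(); t1.join()
```

Because at most one thread can satisfy the entry condition at a time, both increments of every iteration survive and counter ends at the full total.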

10.1 File Concept
Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure, which depends on its type.

              File Attributes

1. Name: the name of the file, for user understanding.
2. Identifier: used by the OS to identify a file (e.g., an inode number).
3. Type: text files, executable files, binary files, etc.
4. Location: the location of the file on the drive.
5. Size: the current size of the file.
6. Protection: controls who can read, write, or execute the file.
7. Time, date, and user identification: when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier locates the other file attributes. The directory structure is kept on disk.

              File Operations

The file abstract data type supports many common operations:
1. Creating a file: find free space on disk and add an entry to the directory.
2. Writing a file: write data at the current file-position pointer location and update the pointer.
3. Reading a file: read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek): change the pointer location to a given value.
5. Deleting a file: free the space allocated to the file and remove its entry from the directory.
6. Truncating a file: delete the data and update the file size.

              Open File Table

Most OSes require that files be opened before access and closed afterwards. Information about all currently open files in the system is stored in a system-wide open-file table maintained by the OS, and information about the files opened by a process is stored in that process's own open-file table. The open-file table has the following information:

1. File pointer: records the current position in the file for the next read or write.
2. File-open count: how many times the file has been opened (possibly simultaneously by different processes) and not yet closed. When this counter reaches zero, the file's entry can be removed from the table.

3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With an advisory lock, software developers must make sure locks are acquired and released appropriately.
4. With a mandatory lock, the OS itself makes sure locks are acquired and released appropriately.
5. UNIX uses advisory locks; Windows uses mandatory locks.

              File Types

Implementing file types: a common technique is to include the type as part of the file name. The name is split into 2 parts:

              1 File Name 2 Extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on it. For example, when a user opens Microsoft Word and chooses the Open option on the File menu, the user may specify only the file name; the application looks for a file with the given name and the extension it expects. In this sense extensions are hints to applications rather than something enforced by the operating system. UNIX uses a crude magic number stored at the beginning of a file to indicate the type of the file. In the Mac OS X operating system, a file produced by a word processor has the word processor's name as its creator attribute; when the user double-clicks the file, that word processor opens it. Some common file types are shown below.

File structure: some files have an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and the location of the first instruction.

If an OS supports many file formats, the size and complexity of the OS increase. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, whose format it must know in order to load them and find the first executable statement, etc.).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

              Internal File structure

              Disk files are accessed in units of physical blocks typically 512 bytes

Internally, files are organized in logical units such as a single byte, a data-record size, or a structure size.

The number of logical units that fit into one physical block determines the packing. For example, if a student file is organized internally as records of 64 bytes each, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e., 8 records can be stored per block.

              Internal fragmentation may occur due to this
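This packing arithmetic, and the internal fragmentation left in a partially filled last block, can be sketched as:

```python
BLOCK_SIZE = 512      # physical block size in bytes
RECORD_SIZE = 64      # logical record size

packing = BLOCK_SIZE // RECORD_SIZE        # records per block

def blocks_needed(num_records):
    """Blocks required for num_records, and bytes wasted in the last block."""
    blocks = -(-num_records // packing)    # ceiling division
    used_in_last = (num_records % packing or packing) * RECORD_SIZE
    wasted = BLOCK_SIZE - used_in_last     # internal fragmentation
    return blocks, wasted
```

For instance, 20 records need 3 blocks, and the last block holds only 4 records, wasting 256 bytes, which is the internal fragmentation mentioned above.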

              102 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: we cannot randomly access an arbitrary record.

              Records are read and written sequentially one record after another

A sequential-access file emulates magnetic-tape operation and generally supports the following operations:
read next: read a record and advance to the next position.
write next: write a record and advance to the next position.
rewind.
skip n records: may or may not be supported; n may be limited to positive numbers, or may be limited to +/- 1.

2. Direct access (or relative access)
A file is made up of fixed-length records, so we can randomly access any record.

              No restriction on order of reading or writing

              Direct access files are useful when we need to access large amounts of data randomly

Example: databases.

The operations supported include the following, where n is a relative block number, i.e., relative to the beginning of the file:

1. read n: read record number n.
2. write n: write record number n.
3. jump to record n: n could be 0 or the end of the file.
4. query current record: used to return to this record later.
Sequential access can easily be emulated on top of direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct-access method. We construct an index for the file containing pointers to the various blocks; to search the file, we first access the index and then use the pointer to access the file.

Suppose a student file has 120,000 records sorted by student id. We can create an index consisting of the first student id in each block. If 64 records can be stored per block, the number of blocks needed to store 120,000 records is 120000/64 = 1875 ≈ 2000 blocks, so the index has about 2000 entries. The index is kept in main memory so that searching becomes faster.

If the index itself becomes too large, we create an index for the index file (e.g., a B-tree).
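A sketch of this two-step lookup, using a block-level index held in memory (the student ids here are hypothetical, generated only for illustration):

```python
import bisect

RECORDS_PER_BLOCK = 64

def build_index(sorted_ids):
    """index[i] = smallest student id stored in block i of the sorted file."""
    return sorted_ids[::RECORDS_PER_BLOCK]

def find_block(index, student_id):
    """Return the block number that may contain student_id."""
    return bisect.bisect_right(index, student_id) - 1

ids = list(range(0, 120_000 * 3, 3))   # 120,000 sorted ids: 0, 3, 6, ...
index = build_index(ids)               # one entry per block, kept in memory
block = find_block(index, 9_000)       # then read only that one block
```

Only the small index is searched in memory; a single disk block is then read, instead of scanning the whole file.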

              Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

              Three major methods of allocating disk space are

              1 Contiguous 2 Linked

              3 Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on the disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file indicates the address of the starting block (i.e., b) and the length (n).

              Advantages

1. Direct access to block i of the file is easy: it is just disk block b + i. Since the file is stored contiguously, sequential access is also easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

              Disadvantages

1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though enough free blocks exist for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time-consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire space may not be used by the file at the beginning, which leads to wasted space.

If the size of the file is not known in advance, then:
(i) If we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This is possible as long as such space exists, but it is time-consuming.
(ii) If we overestimate the amount of space needed, much of the space goes unused, leading to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme: a contiguous chunk of space is allocated at first, and if the file needs more blocks, another chunk of contiguous space, known as an extent, is added. The location of a file is then recorded as a location and a block count, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable size, external fragmentation occurs.

2. Linked allocation
Each file is a linked list of disk blocks; the blocks may be scattered anywhere on the disk. The directory contains a pointer to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0.

To add more data to an existing file, a free block is found, the data is written to it, and the block is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. It is easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is more.
2. The number of disk seeks required is more.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct-access files: to access the i-th record, i disk reads are required.
5. Not reliable: pointers may be lost or damaged.

The solution to disadvantage 3 is to group multiple blocks into clusters (e.g., 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk access time, reduces the space wasted on pointers (fewer pointers are needed), and simplifies free-list management. Disadvantage: internal fragmentation increases, as more space is wasted when a cluster is only partially full.

The solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the file-allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

              Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Random-access time is improved.

Disadvantage: the number of disk seeks is more; the disk head must move to read the FAT and find the location of a block, then move to the location of the block itself.
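A minimal sketch of following a FAT chain, using the block numbers from the linked-allocation example above (the end-of-file marker value is an assumption; real FAT variants use reserved bit patterns):

```python
EOF = -1   # assumed end-of-file marker for this sketch

# fat[b] = number of the block that follows block b; 0 = unused block
fat = [0] * 32
fat[9], fat[16], fat[1], fat[10], fat[25] = 16, 1, 10, 25, EOF

def file_blocks(start):
    """Return the blocks of a file, in order, by following the FAT chain."""
    blocks, b = [], start
    while b != EOF:
        blocks.append(b)
        b = fat[b]
    return blocks
```

The directory entry needs to store only the starting block (9 here); the chain 9 → 16 → 1 → 10 → 25 is recovered entirely from the table.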

              3 Indexed allocation

Each file has its own index block, which is an array of disk-block addresses: the i-th entry in the index block points to the i-th block of the file.

              The directory contains the address of the index block

              When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found, the data is written to it, and the address of the block is put in the index block.

              Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

              Disadvantage

1. Seek time is more.
2. The number of disk seeks required is more.
3. Not reliable.
4. The index block must be kept in memory. If this memory is not available, we must read the index block and then the desired data block, i.e., 2 disk accesses are required. With linked index blocks, accessing a block near the end of a large file may require reading several index blocks before the needed data block.

5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

              Various mechanisms followed for size of index block

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header with the name of the file and a set of the first 100 disk-block addresses; for large files, the last address contains the address of another index block.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

              3 combined scheme

For example, say there are 15 index-block pointers in the file's inode. The first 12 pointers hold the addresses of blocks that contain file data (direct blocks). The next 3 pointers point to indirect blocks: the first points to a single indirect block, an index block containing the addresses of the file's data blocks.

The second points to a double indirect block, and the third points to a triple indirect block, as shown below.
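Under stated assumptions (4 KB blocks and 4-byte block addresses, both invented here for illustration; the 12 direct + 3 indirect pointer layout is from the text), the maximum file size this combined scheme can address works out as:

```python
BLOCK = 4096                    # bytes per block (assumed)
ADDR = 4                        # bytes per block address (assumed)
PTRS = BLOCK // ADDR            # addresses that fit in one index block

direct = 12 * BLOCK                     # 12 direct blocks
single = PTRS * BLOCK                   # one single indirect block
double = PTRS * PTRS * BLOCK            # one double indirect block
triple = PTRS * PTRS * PTRS * BLOCK     # one triple indirect block

max_file_size = direct + single + double + triple
```

With these parameters the triple indirect level dominates, and the total comes to a little over 4 TB.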

              Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

              The free space list can be implemented as

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000…

Advantages: simple, and it is easy to find contiguous free blocks for a file.

              Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. Bit maps of smaller disks can be kept in main memory, but it may not be possible to keep the bit map of a large disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1 TB disk could cause thousands of blocks of bit maps to be updated, because the data blocks may be scattered all over the disk.
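A small sketch of a bit-vector free list (1 = free, 0 = allocated), built from the example above:

```python
def make_bitmap(num_blocks, free_blocks):
    """Build a bit vector: bit b is 1 if block b is free, else 0."""
    bits = [0] * num_blocks
    for b in free_blocks:
        bits[b] = 1
    return bits

def first_free(bits):
    """Index of the first free block, or -1 if no block is free."""
    for i, bit in enumerate(bits):
        if bit == 1:
            return i
    return -1

bitmap = make_bitmap(12, [2, 3, 4, 5, 8])
```

Joining the bits as a string gives "001111001000", matching the example, and the first free block found is block 2.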

2. Linked list: the free-space list is implemented by linking all the free disk blocks together. A pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.

Advantage: no waste of space.

Disadvantage: contiguous space cannot be found easily. To traverse the free-space list we must read each block, which requires substantial I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these are actually free; the last one contains the addresses of another n free blocks, and so on.

Advantage: the addresses of a large number of free blocks can be found quickly.

4. Counting: the free-space list can be maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it. Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if the counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.
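The counting representation can be sketched by collapsing runs of contiguous free blocks into (address, count) pairs (the function name is illustrative):

```python
# Sketch: the counting representation stores each run of contiguous free
# blocks as (address of first free block, number of contiguous free blocks).

def to_counting(free_blocks):
    runs = []
    for b in sorted(free_blocks):
        if runs and b == runs[-1][0] + runs[-1][1]:
            # Block extends the current run: bump its count.
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)
        else:
            # Gap in the block numbers: start a new (address, count) entry.
            runs.append((b, 1))
    return runs

print(to_counting([2, 3, 4, 5, 8]))  # [(2, 4), (8, 1)]
```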

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list on disk is updated.

Q) Synchronization Hardware

The hardware solution to synchronization is to provide atomic operations. These operations execute as a single instruction, without interruption. Two such operations are:

1. The TestAndSet() instruction
2. The Swap() instruction

1. The TestAndSet() instruction is defined as shown below.

Solution using TestAndSet: the shared variable lock is initialized to false.

2. The Swap() instruction is defined as follows.

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever. (Since there is no guarantee about the relative speeds of the processes, a very fast process could release the lock, run through its remainder section, and re-acquire the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The figure below illustrates a solution using TestAndSet that satisfies bounded waiting. It uses two shared data structures: boolean lock and boolean waiting[n], where n is the number of processes in contention for critical sections.

              Bounded-waiting mutual exclusion with TestAndSet( )

On exit, a process scans the waiting array in order (starting with the next process in the list) for a process that is waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting a specific process while continuing to block all the others.
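The exit-section scan described above can be sketched as follows (the waiting array and process index are illustrative):

```python
# Sketch of the exit-section scan from the bounded-waiting algorithm:
# starting from the process after i, find the next process that is waiting.

def next_waiting(waiting, i):
    """Index of the next waiting process after i (cyclically), or None."""
    n = len(waiting)
    j = (i + 1) % n
    while j != i and not waiting[j]:
        j = (j + 1) % n
    return j if waiting[j] else None

# Process 1 leaves its critical section; processes 0 and 3 are waiting.
print(next_waiting([True, False, False, True], 1))  # 3 is admitted next
```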

Q) What is a semaphore?

Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through two standard atomic operations: 1. wait()  2. signal()

Wait primitive:

wait(S) {
    while (S <= 0)
        ;  // do nothing
    S--;
}

Signal primitive:

signal(S) {
    S++;
}

Each process that wishes to use a resource performs the wait() operation; when a process releases a resource, it performs the signal() operation.

Semaphore Usage:

1. A binary semaphore (or mutex lock) can take the values 0 or 1.

2. A counting semaphore can take any integer value. It is used to count the number of remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is greater than 0, a process can enter the critical section and use one of the resources. When the counter reaches 0 (or goes negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal() call.

For example, suppose there are 3 resources and 4 processes; the counting semaphore is initialized to 3.
- If process P1 requires a resource, it performs wait(); the semaphore becomes 2.
- If process P2 requires a resource, it performs wait(); the semaphore becomes 1.
- If process P3 requires a resource, it performs wait(); the semaphore becomes 0.
- If process P4 requires a resource, it performs wait(); P4 waits until a resource is available.
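The 3-resources, 4-processes example can be reproduced with Python's threading.Semaphore as a stand-in for the counting semaphore: acquire() plays the role of wait() and release() the role of signal().

```python
import threading

# Counting semaphore initialized to the number of resources (3), as above.
sem = threading.Semaphore(3)

assert sem.acquire(blocking=False)  # P1: semaphore 3 -> 2
assert sem.acquire(blocking=False)  # P2: semaphore 2 -> 1
assert sem.acquire(blocking=False)  # P3: semaphore 1 -> 0
print(sem.acquire(blocking=False))  # P4: no resource left -> False (would wait)

sem.release()                       # some process signals a freed resource
print(sem.acquire(blocking=False))  # now P4 succeeds -> True
```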

              3 Semaphores can also be used to synchronize certain operations

Semaphore Implementation

While one process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). A semaphore implemented this way is called a spin lock, because the process spins while waiting.

Semaphores can also be used to enforce an ordering. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First, we create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code:

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal(). Each semaphore has an integer value and maintains a queue of waiting processes.
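The S1-before-S2 ordering can be demonstrated with Python threads and a semaphore initialized to 0 standing in for wait() and signal() (the thread and variable names are illustrative):

```python
import threading

# Enforce "P1 runs S1 before P2 runs S2" with a semaphore initialized to 0.
synch = threading.Semaphore(0)
order = []

def p1():
    order.append("S1")   # S1
    synch.release()      # signal(synch)

def p2():
    synch.acquire()      # wait(synch): blocks until P1 signals
    order.append("S2")   # S2

t2 = threading.Thread(target=p2); t2.start()
t1 = threading.Thread(target=p1); t1.start()
t1.join(); t2.join()
print(order)  # ['S1', 'S2'] regardless of which thread is scheduled first
```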

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process (placing it on the semaphore's waiting queue); the waiting process is later restarted by a wakeup() operation, which moves it from the waiting queue to the ready state.

Problems due to semaphores:

1. Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can be freed only by one of the other (blocked) processes, as shown below.

2. Starvation: one or more processes is blocked forever and never gets a chance to enter the critical section. This can happen, for example, if we do not specify the algorithms for adding processes to the waiting queue, or for selecting the one to be removed by the signal() call; if a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it preempts L, and M runs. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority so that M does not preempt it.

=====================================================================
Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem. Consider:

- n buffers, each of which holds one item
- a producer process, which creates items (one at a time)
- a consumer process, which consumes them (one at a time)

The producer process cannot produce an item if all the buffers are full, and the consumer process cannot consume if all the buffers are empty. Since both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:

do {
    ...
    produce an item
    ...
    wait(empty);
    wait(mutex);
    ...
    add the item to the buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:

do {
    wait(full);
    wait(mutex);
    ...
    remove an item from the buffer
    ...
    signal(mutex);
    signal(empty);
    ...
    consume the item
    ...
} while (1);
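The producer and consumer above can be sketched with Python threads, using the same three semaphores (the buffer size and item count are illustrative):

```python
import threading

# Bounded-buffer sketch with the semaphores described above:
# empty = n, full = 0, mutex = 1.
N_BUFFERS, N_ITEMS = 3, 10
buffer, consumed = [], []
empty = threading.Semaphore(N_BUFFERS)
full = threading.Semaphore(0)
mutex = threading.Semaphore(1)

def producer():
    for item in range(N_ITEMS):
        empty.acquire()                 # wait(empty)
        mutex.acquire()                 # wait(mutex)
        buffer.append(item)             # add item to buffer
        mutex.release()                 # signal(mutex)
        full.release()                  # signal(full)

def consumer():
    for _ in range(N_ITEMS):
        full.acquire()                  # wait(full)
        mutex.acquire()                 # wait(mutex)
        consumed.append(buffer.pop(0))  # remove an item from buffer
        mutex.release()                 # signal(mutex)
        empty.release()                 # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)  # items arrive in order: [0, 1, ..., 9]
```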

2. The Readers-Writers Problem. A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

- readers priority: a new reader need not wait just because a writer is waiting
- writers priority: if a writer is waiting to access the data, no new readers may start reading

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses a counter and two semaphores:

1. readcount = 0 (counts the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore used for mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers; it provides mutually exclusive access to the shared data)

Reader process:

wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
...
reading is performed
...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

Writer process:

do {
    wait(wrt);
    ...
    writing is performed
    ...
    signal(wrt);
} while (TRUE);
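A runnable sketch of the readers-priority solution above, with Python threads (the shared data, value written, and thread counts are illustrative):

```python
import threading

# Readers-priority sketch with readcount, mutex, and wrt as described above.
readcount = 0
mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # exclusive access for writers
shared, log = [0], []

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()
    log.append(shared[0])        # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()            # last reader lets writers in
    mutex.release()

def writer(v):
    wrt.acquire()
    shared[0] = v                # writing is performed
    wrt.release()

threads = [threading.Thread(target=reader) for _ in range(3)]
threads.append(threading.Thread(target=writer, args=(42,)))
for t in threads: t.start()
for t in threads: t.join()
print(shared[0])  # 42; every read saw either 0 or 42, never a torn value
```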

3. The Dining-Philosophers Problem. n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5];  // initially all values are 1

Philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    ...
    eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    ...
    think
    ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: in asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, and so on. Unlike SMP systems, they lack the flexibility to assign processes to the least-loaded CPU.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraints.

              Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples: digital audio and multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control; that is, decision-making is spread across the different computers in the system, so that failures of individual computers or of the network do not cripple operation.

A distributed operating system appears to its users as a uniprocessor system even though it has multiple processors. The users may not know where their programs run or where their files are located; all of that is handled automatically by the operating system.

              Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

DESKTOP SYSTEMS: a desktop system is a personal computer (PC) system used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): two of the most popular operating systems for handhelds are Symbian OS and Android OS. Handheld systems have very little memory.


CLUSTERED SYSTEMS: a computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

              Q) Modern operating system

New developments in hardware, in applications, and in security threats have led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, Internet and Web access, and client-server computing.

Developments in modern operating systems can be categorized as:

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: since modern systems often contain multiple microprocessors, an SMP operating system provides greater efficiency when multiple processors are present. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric). SMP schedules processes or threads across all of the processors.

Advantages:

1. Increased performance: if some portion of a program can run in parallel, the performance of an SMP OS increases.
2. Increased availability: since all processors can perform the same tasks, if one processor fails, the others keep working.
3. Incremental growth: a new processor can be added to increase performance.
4. Scaling: cost can scale with the number of processors used.

4. Distributed operating systems: allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux Operating System

Linux is open source: its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization.

Components of a Linux System


The Linux operating system has primarily three components, as shown in the diagram below.

Kernel - The kernel is the core part of Linux, responsible for all major activities of the operating system. It interacts directly with the hardware and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs used by application programs to access the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks such as updating log files or accepting login requests from terminals.

Kernel Modules: sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode

Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require a context switch, so it is very efficient and fast. The kernel runs each process and provides system services and protected access to hardware. Code that is not required to run in kernel mode lives in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code; they use the system libraries to invoke kernel functions for low-level tasks.

Basic Features

Following are some of the important features of the Linux operating system:

Portable - Portability means software works on different types of hardware in the same way. The Linux kernel and application programs support installation on any kind of hardware platform.

Open Source - Linux source code is freely available, and development is community based. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system: multiple users can access system resources (memory, application programs, etc.) at the same time.

Multiprogramming - Linux is a multiprogramming system: multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used for various types of operations, such as calling application programs.


Security - Linux provides user security via authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture

The Linux system architecture consists of the following layers:

Hardware layer - consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - the core component of the operating system; interacts directly with hardware and provides low-level services to the upper-layer components.

Shell - an interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System

1. Extensibility - layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.

2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating-system resources.

4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.

5. Performance - XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

6. International support - supports different locales via the national language support (NLS) API.

XP Architecture:
1. A layered system of modules.
2. Protected mode - hardware abstraction layer (HAL), kernel, executive.
3. User mode - a collection of subsystems:
   a. Environmental subsystems emulate different operating systems.
   b. Protection subsystems provide security functions.

XP architecture


Q) Windows Network Operating System

Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.

Visit my blogs: enthusiaststudent.blogspot.in, mtechmessenger.blogspot.in



10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure, which depends on its type.


5. A link may be associated with more than two processes.
6. A mailbox may be owned by a process or by the OS.

a. If the mailbox is owned by a process, we can distinguish between the owner (which can only receive messages) and the user (which can only send messages). When the process that owns a mailbox terminates, the mailbox disappears.

b. If the mailbox is owned by the OS, the OS must provide mechanisms to:
   i. create a new mailbox
   ii. send and receive messages through the mailbox
   iii. delete the mailbox
   iv. pass ownership to other processes

2. Synchronous or asynchronous communication. Message passing may be either blocking (synchronous) or non-blocking (asynchronous):

1. Blocking send: the sending process is blocked until the message is received by the receiving process or mailbox.
2. Non-blocking send: the sending process sends the message and resumes operation.
3. Blocking receive: the receiver blocks until a message is available.
4. Non-blocking receive: the receiver retrieves either a valid message or a null.
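The blocking and non-blocking variants can be illustrated with Python's queue.Queue, which supports both modes (the mailbox name and capacity are illustrative):

```python
import queue

# Sketch: a mailbox as a bounded message queue. put()/get() default to
# blocking (synchronous); block=False gives the non-blocking variants.
mailbox = queue.Queue(maxsize=2)   # bounded capacity: n = 2

mailbox.put("m1")                  # queue not full: send returns immediately
mailbox.put("m2")
print(mailbox.full())              # True: a further blocking send would wait

try:
    mailbox.put("m3", block=False) # non-blocking send on a full queue
except queue.Full:
    print("send failed, sender resumes")  # instead of blocking

print(mailbox.get())               # blocking receive -> "m1"
print(mailbox.get(block=False))    # non-blocking receive -> "m2"
try:
    mailbox.get(block=False)       # empty mailbox: raises instead of blocking
except queue.Empty:
    print("no message available")
```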

3. Automatic or explicit buffering. Messages exchanged by communicating processes reside in a temporary queue. Such queues can be implemented in 3 ways:

i. Zero capacity: queue length = 0; the link cannot hold waiting messages, so the sender must block until the receiver receives the message.

ii. Bounded capacity: queue length = some finite n; when the queue is full, the sender must block until space is available in the queue.

iii. Unbounded capacity: queue length is infinite; the sender never blocks.

===============================================================================
Q) What is deadlock?

A set of processes is deadlocked when every process in the set is waiting for a resource that is currently allocated to another process in the set. Here, process P1 is allocated resource R2 and process P2 is allocated R1.

P1 requires R1 and P2 requires R2.

Processes P1 and P2 will wait forever. This situation is called deadlock.

Q) What are the four conditions that are necessary for deadlock to occur?

                1 Mutual Exclusion - At least one resource must be held in a non-sharable mode If any other process requests this resource then that process must wait for the resource to be released

                2 Hold and Wait - A process must be simultaneously holding at least one resource and waiting for at least one resource that is currently being held by some other process

                3 No preemption - Once a process is holding a resource then that resource cannot be taken away from that process until the process releases it

4. Circular Wait - A set of processes {P0, P1, P2, ..., PN} must exist such that every P[i] is waiting for P[(i + 1) % (N + 1)].

Q) Methods for handling deadlocks

1. Use a protocol (deadlock prevention or deadlock avoidance) so that the system never enters a deadlocked state.
2. Allow the system to enter a deadlocked state, detect it, and recover.
3. Ignore the problem and pretend that deadlocks never occur.

To ensure that the system never enters a deadlocked state, the system can use:

1. Deadlock prevention
2. Deadlock avoidance

                ==============================================================================

                Deadlock Prevention

1. Mutual Exclusion: we cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources are inherently non-sharable (e.g., a printer).

                2 Hold and Wait

                To make sure that the hold-and-wait condition never occurs in the system two protocols that can be used are

Protocol 1: all the resources a process will request must be allocated before the process begins execution.

Protocol 2: a process can request resources only when it has none. If a process requires additional resources, it must first release all the resources currently allocated to it.

Example: consider a process that copies data from a DVD drive to a file on disk, sorts the file, and then prints the results to a printer.

If Protocol 1 is used, the process must request the DVD drive, disk file, and printer at the beginning and must hold them until the end.

                Disadvantages

1. Starvation: a process may wait forever because at least one resource that it needs is always allocated to some other process. Hence starvation is possible.

2. Resource utilization is low: the process holds the printer from beginning to end even though it is used only at the end.

If Protocol 2 is used, the process initially requests the DVD drive and disk file. It copies from the DVD drive to the disk and then releases both the DVD drive and the disk file. It then requests the disk file and printer.

Disadvantage: there is a chance that the data may not remain on the disk file between releasing it and requesting it again.

                3 No pre-emption

                To make sure that this condition does not hold the following protocol is used

                Protocol If a process (say A) requests some resources

                Case 1 If resources are available then Allocate them

Case 2: if the resources are allocated to some other process (say B) that is itself waiting for additional resources,

then preempt the desired resources from the waiting process B and allocate them to the requesting process A.

Process B can be restarted only when it is reallocated the resources that were preempted from it, together with the additional resources it was requesting.

                Case 3 if resources are neither available nor held by a waiting process then Process A waits

This protocol is applied to resources whose state can easily be saved and restored, such as CPU registers and memory space.

                4 Circular Wait

                To make sure Circular Wait condition never occurs

                1 Each Resource is assigned a unique integer number

                2 Each Process must request resources in an increasing order of enumeration

We define a one-to-one function F : R → N, where R is the set of resource types and N is the set of natural numbers.

Suppose a process has initially requested instances of resource type Ri.

Protocol 1: after that, the process can request instances of resource type Rj if and only if F(Rj) > F(Ri).

Protocol 2: if a process requests an instance of resource type Rj, it must first release all resources Ri such that F(Ri) ≥ F(Rj).

Example: let F(tape drive) = 1, F(disk drive) = 5, and F(printer) = 12.

A process can request any number of tape drives, disk drives, and printers.

Protocol 1: if a process A has already requested a disk drive, A can now request only a printer; it cannot request a tape drive.

Protocol 2: to request a tape drive, process A must first release the disk drive, and only then request the tape drive.

If the above two protocols are used, the circular-wait condition never occurs. We can prove this by contradiction.

Proof: Assume a circular wait exists. Let the set of processes involved in the circular wait be {P0, P1, ..., Pn}, where P0 is waiting for resource R0, which is held by P1.

P1 holds R0 and is waiting for resource R1, which is held by P2; so F(R0) < F(R1).

P2 holds R1 and is waiting for resource R2, which is held by P3; so F(R1) < F(R2).

...

Pn holds Rn-1 and is waiting for resource Rn, which is held by P0; so F(Rn-1) < F(Rn) and F(Rn) < F(R0).

Hence, by transitivity, F(R0) < F(R0), which is impossible. Our assumption that a circular wait exists is therefore false.

                ===============================================================================
                Q) Resource Allocation Graph
                Deadlocks can be understood more clearly through the use of resource-allocation graphs, which have the following properties:

                1. Resource types are represented as square nodes on the graph. Dots inside a square node indicate the number of instances of that resource. (Ex: two dots might represent two laser printers.)
                2. Processes are represented as circles.
                3. Request edges - If P1 has requested R1, a directed edge from P1 to R1 is a request edge.
                4. Assignment edges - A directed edge from R2 to P1 indicates that resource R2 has been allocated to process P1 and that P1 is currently holding R2. Note that a request edge is converted into an assignment edge when the request is granted.

                If a resource-allocation graph contains a cycle and each resource type has only a single instance, then a deadlock exists. If a resource type contains more than one instance, then a cycle in the resource-allocation graph indicates the possibility of a deadlock but does not guarantee one.
                ==================================================================
                Q) Deadlock Avoidance
                For each resource request, the system can decide whether the request should be granted or not. To make this decision, the system must have information like:

                1. Resources currently available
                2. Resources currently allocated to each process
                3. Future requests and releases of each process
                4. Maximum number of resources each process may need

                Given this information, it is possible to construct an algorithm that ensures the system will never enter a deadlocked state. There are two deadlock-avoidance algorithms:

                1. Resource-Allocation Graph Algorithm
                2. Banker's Algorithm

                Safe state: A system is in a safe state if there exists a safe sequence of processes P0, P1, P2, ..., PN such that the resources each Pi may still request can be satisfied by the currently available resources plus the resources held by all processes Pj with j < i. All safe states are deadlock free.

                Unsafe state If a safe sequence does not exist then the system is in an unsafe state which MAY lead to deadlock

                1. Resource-Allocation Graph Algorithm: Resource-allocation graphs can be used for deadlock avoidance only if there is a single instance of each resource type. In this case, unsafe states can be recognized and avoided by adding claim edges, denoted by dashed lines, which point from a process to a resource that it may request in the future. All the claim edges are added at the beginning of the process. When the process makes a request, the claim edge Pi -> Rj is converted to a request edge; when the resource is released, the assignment edge changes back to a claim edge. This approach works by denying any request that would produce a cycle in the resource-allocation graph, taking claim edges into account.

                Consider, for example, the resource-allocation graph as shown.

                If P2 requests resource R2, then the claim edge P2 -> R2 will be converted to a request edge as follows:

                The resulting resource-allocation graph would have a cycle in it, and so the request cannot be granted.
                Q) Banker's Algorithm, or Deadlock avoidance algorithm with example
                There are 12 tape drives. Let the current state of the system be as shown in the table below:
                Process   Allocated   Max Need   Need = Max Need - Allocated
                P0        5           10         5
                P1        2           4          2
                P2        2           9          7

                Available = 12 - (5 + 2 + 2) = 3
                Resource-Request Algorithm: Now, when process P2 requests 1 more tape drive, we run the resource-request algorithm to check whether the request must be granted or not. The request is granted only if, after granting it, all the processes in the system can still complete. For that we check:
                1. Is the request of P2 <= need of P2?

                1 <= 7, therefore TRUE
                2. Is the request of P2 <= Available?

                1 <= 3, therefore TRUE
                3. Pretend the request is granted for P2.

                Now the current state is as shown below:
                Process   Allocated   Max Need   Need = Max Need - Allocated
                P0        5           10         5
                P1        2           4          2
                P2        2+1=3       9          7-1=6

                Available = 3 - 1 = 2. Now run the safety algorithm to check whether the system is in a safe state.

                Safety Algorithm:
                1. Let WORK = Available = 2
                2. Find an unfinished process whose Need <= WORK.

                Check P0: Need of P0 = 5, Work = 2

                Is 5 <= 2? FALSE

                Check P1: Need of P1 = 2, Work = 2

                Is 2 <= 2? TRUE. Therefore P1 can finish. If P1 finishes, Work = Work + Allocated to P1

                Work = 2 + 2 = 4. Now again check if P0 can complete: Need of P0 = 5, Work = 4. Is 5 <= 4? FALSE

                Check if P2 completes: Need of P2 = 6, Work = 4. Is 6 <= 4? FALSE. So neither P0 nor P2 can complete, and the system is in an unsafe state. The request for 1 tape drive by P2 is therefore not granted.
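The safety check traced above can be sketched for a single resource type (a minimal illustration under the notes' tape-drive example; the function name is ours):

```python
def is_safe(allocated, need, available):
    """Return True if some order lets every process finish,
    given per-process allocation and remaining need."""
    work = available
    finish = [False] * len(need)
    progress = True
    while progress:
        progress = False
        for i in range(len(need)):
            if not finish[i] and need[i] <= work:
                work += allocated[i]   # process i finishes, releases its drives
                finish[i] = True
                progress = True
    return all(finish)

# Pretended state after granting P2's request: unsafe, so deny it.
print(is_safe([5, 2, 3], [5, 2, 6], 2))  # False
# Original state before the request: safe.
print(is_safe([5, 2, 2], [5, 2, 7], 3))  # True
```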

                2. Banker's Algorithm: For resources that have more than one instance, the resource-allocation graph method does not work, so we use the Banker's algorithm. When a process starts up, it must state in advance the maximum allocation of resources it may request. When a request is made, the scheduler determines whether granting the request would leave the system in a safe state. If not, then the process must wait until the request can be granted safely.

                The Banker's algorithm relies on several key data structures (where n is the number of processes and m is the number of resource categories):

                o Available[m] indicates how many resources of each type are currently available.
                o Max[n][m] indicates the maximum demand of each process for each resource.
                o Allocation[n][m] indicates the number of each resource allocated to each process.
                o Need[n][m] indicates the remaining resources needed of each type for each process. (Note that Need[i][j] = Max[i][j] - Allocation[i][j] for all i, j.)

                For simplification of discussion, we make the following notations/observations:

                o One row of the Need matrix, Need[i], can be treated as a vector corresponding to the needs of process i, and similarly for Allocation and Max.

                Safety Algorithm: In order to apply the Banker's algorithm, we first need an algorithm for determining whether or not a particular state is safe. This algorithm determines if the current state of a system is safe according to the following steps:

                1. Let Work and Finish be vectors of length m and n, respectively. Work is a working copy of the available resources; Finish is a vector of booleans indicating whether a particular process can finish. Initialize Work = Available and Finish to false for all elements.

                2. Find an i such that both (A) Finish[i] == false and (B) Need[i] <= Work. This process has not finished, but could with the given available working set. If no such i exists, go to step 4.

                3. Set Work = Work + Allocation[i] and set Finish[i] to true. This corresponds to process i finishing up and releasing its resources back into the work pool. Then loop back to step 2.

                4. If Finish[i] == true for all i, then the state is a safe state, because a safe sequence has been found.

                Resource-Request Algorithm (The Banker's Algorithm): Now we have a tool for determining if a particular state is safe or not. This algorithm determines if a new request is safe, and grants it only if it is safe to do so. When a request is made (that does not exceed currently available resources), pretend it has been granted, and then see if the resulting state is a safe one. If so, grant the request; if not, deny it, as follows:

                1. Let Request[n][m] indicate the number of resources of each type currently requested by processes. If Request[i] > Need[i] for any process i, raise an error condition.

                2. If Request[i] > Available for any process i, then that process must wait for resources to become available; else the process can continue to step 3.

                3. Check whether the request can be granted safely by pretending it has been granted and then seeing if the resulting state is safe.
                If the resulting state is safe, grant the request;
                else the process must wait until its request can be granted safely.

                The procedure for granting a request (or pretending to, for testing purposes) is:

                Available = Available - Request

                Allocation = Allocation + Request Need = Need - Request
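The resource-request algorithm above, using the data structures described (Available, Allocation, Need), can be sketched as follows. This is an illustrative implementation, not code from the notes; the function names are ours:

```python
def is_safe(available, allocation, need):
    """Safety algorithm over m resource types and n processes."""
    work, finish = available[:], [False] * len(need)
    progress = True
    while progress:
        progress = False
        for i in range(len(need)):
            if not finish[i] and all(x <= w for x, w in zip(need[i], work)):
                # Process i can finish; it releases its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progress = True
    return all(finish)

def request_resources(pid, request, available, allocation, need):
    """Grant `request` for process `pid` only if the resulting state is safe."""
    if any(r > x for r, x in zip(request, need[pid])):
        raise ValueError("process exceeded its stated maximum claim")
    if any(r > a for r, a in zip(request, available)):
        return False  # must wait: not enough resources free
    # Pretend the request is granted, then test safety.
    new_avail = [a - r for a, r in zip(available, request)]
    new_alloc = [row[:] for row in allocation]
    new_need = [row[:] for row in need]
    new_alloc[pid] = [x + r for x, r in zip(new_alloc[pid], request)]
    new_need[pid] = [x - r for x, r in zip(new_need[pid], request)]
    return is_safe(new_avail, new_alloc, new_need)

# The tape-drive example as a single resource type: P2 asks for 1 drive.
print(request_resources(2, [1], [3], [[5], [2], [2]], [[5], [2], [7]]))  # False
```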

                Unit III Memory management

                Just as processes share the CPU, they also share physical memory. The memory-management unit of the OS takes care of memory allocation, deallocation, and related issues. A program must be brought into memory for it to run. Addresses are of two types:

                i) Relocatable (relative) addresses, with respect to the beginning of the program; ii) Absolute addresses.

                Q) Address Binding
                Ans: Binding means mapping the logical address space to the physical address space. Address binding can happen at three different stages:

                Compile time: If you know at compile time where in memory the program is going to be allocated, then the compiler generates absolute addresses.

                Otherwise, the compiler generates relocatable addresses.

                Load time: The loader binds the relocatable addresses generated by the compiler to absolute addresses; hence binding is done at load time. If binding is done at compile or load time, physical and logical addresses are the same.

                Execution time: If address binding is done at run time, the process can be moved during its execution from one memory segment to another. Here we call logical addresses virtual addresses. Run-time mapping of virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU).

                Q) Logical vs Physical Address Space
                Ans: Logical address - generated by the CPU; also referred to as a virtual address. Physical address - the address seen by the memory unit. The set of all logical addresses is called the logical address space; the set of all physical addresses is called the physical address space.

                Q) Memory-Management Unit (MMU)
                Ans: The MMU is a hardware device that maps virtual to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by the CPU before memory is accessed. The user program deals only with logical addresses; it never sees the real physical addresses.

                Q) Dynamic Loading
                Ans: Since physical memory is small, it may not be possible for the entire program to be in main memory, so we can use dynamic loading. It is the responsibility of users to design their programs to take advantage of dynamic loading. Only the main function is loaded into main memory; when main() calls another function, it checks whether that function is already in main memory. If not, the loader loads the desired function into main memory and updates the program's address table.

                A routine is not loaded until it is called. We achieve better memory-space utilization, as an unused routine is never loaded.

                Q) Dynamic Linking
                + Linking is postponed until execution time.
                + A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
                + The stub replaces itself with the address of the routine and executes the routine.
                + The operating system is needed to check whether the routine is in the process's memory address space.
                + Dynamic linking is particularly useful for libraries.

                Q) Overlays
                Ans: Overlays are needed when a process is larger than the amount of main memory allocated to it. Only the instructions and data that are needed at any given time are kept in main memory. Overlays can be implemented by the user, but the programming design of an overlay structure is complex. The overlays for a two-pass assembler are as shown in the figure.

                Q) Swapping
                Ans: A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. Backing store - a fast disk, large enough to store copies of all memory images of all users. Priority-based scheduling uses a variant of swapping called roll out, roll in: if a higher-priority process arrives, the memory manager swaps out a lower-priority process and then swaps in the higher-priority process. When the higher-priority process finishes, the lower-priority process can be swapped back into main memory. A process that is swapped out will be swapped back into the same memory space it occupied previously. The major part of the swap time is transfer time. We can swap only an idle process; we cannot swap a process that is waiting for IO.

                Q) Contiguous Memory Allocation
                Ans: Each process is contained in a single contiguous section of memory.
                1. Fixed-size partition (or) single-partition allocation

                Divide the main memory into fixed-sized partitions; each partition may contain exactly one process. The relocation register contains the value of the starting physical address; the limit register contains the range of logical addresses. Every address generated by the CPU is checked as follows:

                If logical address < limit register, then the logical address is added to the relocation register to get the corresponding memory address;

                else a trap to the OS is generated. Since every address is checked, we can protect the OS and other user programs from being modified by a running process.
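The limit-then-relocate check above can be sketched as follows (an illustrative helper, not from the notes; `MemoryError` stands in for the hardware trap):

```python
def translate(logical, limit, relocation):
    """Relocation-register scheme: check against limit, then relocate."""
    if logical >= limit:
        # In hardware this raises an addressing trap to the OS.
        raise MemoryError("trap to OS: address beyond partition limit")
    return logical + relocation

# Logical address 100 in a partition of 1024 bytes starting at 40000:
print(translate(100, 1024, 40000))  # 40100
```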

                2 Multiple-partition allocation

                1) Fixed Size Partitions Divide the main memory into Fixed sized blocks Here memory allocated to a process may be larger than required The difference between allocated memory and requested memory is called internal fragmentation Internal fragmentation is unused memory inside the partition

                0    P1    Block 0 (addresses 0 to 3)
                4    P1    Block 1 (addresses 4 to 7)
                8          Block 2 (addresses 8 to 11)

                12         Block 3 (addresses 12 to 15)

                If P1 requires just 5 addresses, it is still allocated 2 blocks, i.e., 8 addresses, so the remaining 3 addresses are left unused.

                2) Variable Size partitions Here main memory is divided into partitions of variable sizesOperating system maintains information about

                a) allocated partitions b) free partitions (hole)

                In the beginning all main memory is empty and is considered one large block of available memory a hole

                Exact memory required by process is only given When a process terminates it releases memory which can be allocated to another process

                Memory is allocated to processes until finally no available block of memory (hole) is large enough to hold the next process. External fragmentation exists when there is enough total memory to satisfy a request, but the available memory is not contiguous: it is wastage of memory outside the partitions. One solution to the problem of external fragmentation is:

                a Compaction shuffle the memory contents so as to place all free memory together into one large block

                ===============================================================================
                Q) Dynamic Storage-Allocation Problem (or) Most commonly used strategies to select a free hole from the set of available holes
                Ans: To satisfy a request of size n from a list of free holes, the following 3 policies can be used:

                1. First-fit: Allocate the first hole that is big enough.

                2. Best-fit: Allocate the smallest hole that is big enough; must search the entire list if it is not ordered by size. Produces the smallest leftover hole.

                3. Worst-fit: Allocate the largest hole; must also search the entire list. Produces the largest leftover hole. First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
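The three placement policies can be sketched over a free list of (start, size) holes (an illustrative sketch; the hole values below are made up):

```python
def first_fit(holes, n):
    # First hole in list order that is big enough.
    return next((h for h in holes if h[1] >= n), None)

def best_fit(holes, n):
    # Smallest hole that is big enough (smallest leftover).
    fits = [h for h in holes if h[1] >= n]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, n):
    # Largest hole (largest leftover).
    fits = [h for h in holes if h[1] >= n]
    return max(fits, key=lambda h: h[1], default=None)

holes = [(0, 100), (200, 500), (800, 212)]
print(first_fit(holes, 150))  # (200, 500) - first adequate hole
print(best_fit(holes, 150))   # (800, 212) - smallest adequate hole
print(worst_fit(holes, 150))  # (200, 500) - largest hole
```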

                ===============================================================================Q) Non-Contiguous Memory allocation

                1. Paging: Paging is a memory-management scheme that permits non-contiguous memory allocation.
                0    P1    Block 0 (addresses 0 to 3)
                4    P2    Block 1 (addresses 4 to 7)
                8          Block 2 (addresses 8 to 11)

                12   P1    Block 3 (addresses 12 to 15)

                Logical address space of a process can be noncontiguous Ex P1 is allocated Block0 and Block3

                1. Divide physical memory into fixed-sized blocks called frames (size is a power of 2).
                2. Divide logical memory into blocks of the same size, called pages.
                3. A logical address is divided into 2 parts:

                Page number (p) - the page table is indexed by page number.
                Page offset (d) - the displacement within the page.

                4. A page table is allocated to each process (a pointer to the page table is stored in the PCB of the process). The page table translates logical to physical addresses: page 0 is in frame 1, page 1 is in frame 4, etc.
                5. Internal fragmentation may occur due to paging.

                6. If the size of the logical address space is 2^m and the page size is 2^n, then the higher-order m-n bits of the logical address give the page number and the n lower-order bits give the displacement within the page.

                Ex: size of logical address space = 8 = 2^3, so m = 3;

                page size = 4 = 2^2, so n = 2.

                So m-n = 1 bit for the page number (to represent 0 and 1),

                and the remaining n = 2 bits for the displacement within the page.
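The split above is just a shift and a mask; a small sketch (illustrative helper name):

```python
def split_address(addr, n):
    """Split a logical address into (page number, offset)
    for a page size of 2**n."""
    return addr >> n, addr & ((1 << n) - 1)

# m = 3 address bits, n = 2 offset bits, as in the example:
print(split_address(0b110, 2))  # (1, 2): page 1, offset 2
```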

                Q) Implementation of Page Table

                Page table can be kept as

                1) Set of dedicated registers efficient if the page table is small

                2) The page table can be kept in memory. The page-table base register (PTBR) points to the page table; the page-table length register (PTLR) indicates the size of the page table. The advantage of keeping the page table in memory is less context-switch time, since changing page tables requires changing only the values in these registers. The disadvantage is that two memory accesses are required: one memory access to read the page-table entry and another to access the required memory address.

                This problem can be solved by using a special fast-lookup hardware cache called associative memory or translation look-aside buffer (TLB).

                i Each entry in TLB consists of 2 parts

                I) page number field

                II) Value field or frame number field

                ii. When a logical address is generated by the CPU, its page number is presented to the TLB. The page number is compared with all the entries of the TLB simultaneously.

                If the page number is found in the TLB, its frame number is immediately available -> TLB hit. If the page number is not found, check the page table -> TLB miss.

                By using a TLB, the search is fast; but TLB hardware is expensive, so the TLB size is kept small. Hence the TLB contains only a few frequently used page-table entries.
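The TLB-then-page-table sequence above can be sketched with dictionaries standing in for the hardware structures (an illustrative model, not from the notes):

```python
def lookup(page, tlb, page_table):
    """Return (frame, outcome): consult the TLB first, then the
    in-memory page table on a miss, caching the translation."""
    if page in tlb:
        return tlb[page], "TLB hit"
    frame = page_table[page]   # extra memory access on a miss
    tlb[page] = frame          # cache the translation for next time
    return frame, "TLB miss"

tlb, table = {0: 1}, {0: 1, 1: 4, 2: 3}
print(lookup(0, tlb, table))  # (1, 'TLB hit')
print(lookup(1, tlb, table))  # (4, 'TLB miss')
print(lookup(1, tlb, table))  # (4, 'TLB hit') - now cached
```

A real TLB also evicts entries when full; that bookkeeping is omitted here for brevity.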

                Q) Protection in Paging
                A valid/invalid bit is attached to each entry in the page table. The OS sets this bit for each page to allow or disallow access to the page. When the bit is set to invalid, the page is not in the process's logical address space; hence accessing it generates a trap to the OS.

                Q) Shared pages in Paging

                Reentrant code is the code that never changes during execution Reentrant code can be shared

                1 One copy of read-only (reentrant) code is shared among processes (ie text editors compilers window systems) Ex ed1ed2ed3 is shared among Process P1 and P2

                2 Shared code must appear in same location in the logical address space of all processes

                Each process keeps a separate copy of the private code and data EX data1 and data2

                The pages for the private code and data can appear anywhere in the logical address space

                ==============================================================================Q) Segmentation

                Segmentation is a memory management scheme that support userrsquos view of memory

                When the user program is compiled the compiler generates segments like

                1) The code segment

                2) Global variables segment

                3) Heap memory segment

                4) Stack segment etc

                Each entry in the segment table has a

                segment base (the starting physical address of the segment) and a segment limit (which specifies the length of the segment).

                Logical address is divided into 2 parts

                segment-number offset

                Segment table is indexed by segment number

                Segment-table base register (STBR) stores the location of segment table in main memory

                Segment-table length register (STLR) stores number of segments used by a program

                The segment number (say 's') is used to find the entry in the segment table: the required entry is 's' entries from the beginning of the table. Once the required entry is found, the offset ('d') is compared with the limit.

                If (offset < limit), then the offset is added to the base entry to generate the physical address; otherwise a trap to the OS is generated.
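The segment-table translation above can be sketched as follows (illustrative names and table values; `MemoryError` stands in for the trap):

```python
def seg_translate(s, d, segment_table):
    """Translate (segment number s, offset d) using a table of
    (base, limit) entries."""
    base, limit = segment_table[s]
    if d >= limit:
        raise MemoryError("trap: offset exceeds segment limit")
    return base + d

# Segment 1 starts at 6300 and is 400 bytes long:
table = [(1400, 1000), (6300, 400)]
print(seg_translate(1, 53, table))  # 6353
```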

                Q) Shared Segments
                Code sharing occurs at the segment level. Shared segments must have the same segment number.

                Allocation is a dynamic storage-allocation problem: use best fit/first fit; this may cause external fragmentation.

                Protection: protection bits (read/write/execute privileges) are associated with segments. Placing an array in a separate segment lets the hardware check for illegal array indexes.

                ===============================================================================Q) Virtual memory ndash

                Ans: In paging and segmentation, a program will execute only if the entire process is in main memory. With virtual memory, only part of the program needs to be in memory for execution.

                1. A program larger than the available main memory can still run, as only part of the program is loaded into main memory. The logical address space can therefore be much larger than the physical address space.

                2 Allows address spaces to be shared by several processes

                Virtual memory can be implemented via Demand paging Demand segmentation

                The large blank space between heap and stack is part of virtual address space Virtual memory allows files and memory to be shared by 2 or more processes through page sharing as shown in below figure (b) Figure (a) depicts that virtual memory that is larger than physical memory

                ==============================================================================

                Q) Demand Paging
                Demand paging is a technique of loading pages from disk to main memory only when a page is needed. Hence, using a smaller amount of physical memory, we get a faster response.

                Demand paging is similar to PAGING + SWAPPING

                Instead of swapping a whole process the pager brings only those pages needed into memory

                To distinguish between pages that are in memory and pages that are on the disk validinvalid bit is used

                Validinvalid bit is attached to each entry in page table When this bit is set to valid the page is legal and in memory When this bit is set to invalid either the page is illegal or the page is in disk

                When the CPU generates a logical address whose page-table entry is set to invalid, a page fault occurs, meaning the required page is on disk. A section of the hard disk called swap space is used to hold pages that are not present in memory.

                Q) Procedure for Handling a Page Fault

                1) CPU generates a logical address and if the validinvalid bit for the page that has this logical address is set to invalid page fault occurs

                2) Page fault causes a trap to operating system

                3) Check whether the logical address is within the logical address space of the process, i.e., PTBR and PTLR are checked.
                if (the required logical address is not within the logical address space of the process)
                        terminate the process;
                else
                        the page is legal but not in memory; the page is on disk.

                4) To bring the required page into memory, find a free main-memory frame, map the address to its disk block, fetch the disk block, and load the block into the free frame.
                5) When the required page is brought into memory, update the page table to indicate the page is in memory.
                6) Restart the instruction interrupted by the page-fault trap. The process continues as if the page had always been in memory.

                Q) What is pure demand pagingAns If no page belonging to the executing process is in main memory the process will fault for every page it needs Page faults will occur until every page that is needed is in memory This scheme is known as pure demand paging NEVER BRING A PAGE UNTIL IT IS REQUIRED

                Q) Page Replacement
                1. Find the location of the desired page on the disk.
                2. Find a free frame:

                i) If there is a free frame, use it.
                ii) If there is no free frame, use a page-replacement algorithm to find some page in memory that is not really in use (the victim) and swap it out.
                iii) If the modify bit of the victim frame is set to 1, the page is written to disk; else there is no need to write to disk. Change the valid/invalid bit of the victim page's page-table entry to invalid, to indicate the victim page is no longer in memory.

                3. Read the desired page into the freed frame and update the frame and page tables.
                4. Restart the user process.

                =================================================================================================
                Q) Page Replacement Algorithms
                Goal: produce a low page-fault rate. The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

                1 First in First out (FIFO) Page replacement algorithm Uses the time when a page was brought into memory The page that was brought first into memory is replaced ie oldest page is chosen for replacement We create a FIFO queue to hold all pages in memory

                When a page is brought into memory we insert it at the tail of the queue and we replace the page at the head of the queue

                FIFO: 15 page faults.
                A problem with FIFO is Belady's anomaly. For the reference string shown below:
                1 2 3 4 1 2 5 1 2 3 4 5
                When the number of frames allocated to the process is 1, the number of page faults = 12 under FIFO replacement.
                When the number of frames is 2, the number of page faults = 12.
                When the number of frames is 3, the number of page faults = 9.
                When the number of frames is 4, the number of page faults = 10.
                As the number of frames allocated to a process increases, page faults should decrease; but with FIFO replacement, page faults can increase.
                2. Optimal Page Replacement Algorithm
                Replace the page that will not be used for the longest period of time. It is used for measuring how well other algorithms perform. It is not practical to implement the OPT algorithm, as it is not possible to know what the future references will be. Ex:
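FIFO replacement, and Belady's anomaly on the reference string 1 2 3 4 1 2 5 1 2 3 4 5, can be sketched as follows (an illustrative simulation):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement with nframes frames."""
    frames, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.popleft())  # evict the oldest page
            frames.add(p)
            queue.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 - more frames, MORE faults (Belady's anomaly)
```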

                Optimal page replacement 9 page faults

                3 Least Recently used (LRU) Page replacement algorithm Replace the page that has not been used for longest period of time LRU makes use of time of last use of the page Ex

                Number of page faults = 12

                LRU requires hardware assistance to determine the time of last use of the page We can use 1) Stack or 2) Counter to implement LRU page replacement algorithm

                1) Stack implementation - keep a stack of page numbers in a doubly linked form. When a page is referenced, it is moved to the top of the stack. This implementation requires 6 pointers to be changed.

                2) Counter implementation - a time-of-use field is allocated to each page-table entry. The CPU maintains a counter that is incremented for every memory reference; the counter value is copied into the time-of-use field of the referenced page.
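The counter implementation above can be sketched in simulation: each resident page records the "time" of its last use, and the victim is the page with the smallest time-of-use (an illustrative sketch, not from the notes):

```python
def lru_faults(refs, nframes):
    """Count page faults under LRU replacement with nframes frames."""
    last_use, faults = {}, 0
    for t, p in enumerate(refs):
        if p not in last_use:
            faults += 1
            if len(last_use) == nframes:
                victim = min(last_use, key=last_use.get)
                del last_use[victim]     # evict the least recently used page
        last_use[p] = t                  # counter copied to time-of-use field
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # 12 page faults
```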

                Disadvantages of the LRU page replacement algorithm: updating the counter or stack must be done for every memory reference. HENCE LRU IS SLOW, and its implementation requires hardware assistance.
                ==============================================================================
                Q) LRU Approximation Page Replacement
                1) Using a reference bit
                2) Using the additional-reference-bits algorithm
                3) Second-chance algorithm
                4) Enhanced second-chance algorithm
                5) Counting-based page replacement

                It has two schemes: 1) Least Frequently Used (LFU) and 2) Most Frequently Used (MFU).

                6) Page-buffering algorithms
                Q) What is Thrashing?
                Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur when global (or) local page replacement is used, as described below. The solution to thrashing is the working-set strategy. If we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know how many frames a process needs, we use the locality model of process execution.

                Here, a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes locality. The working-set model uses a working set and a parameter Δ = the working-set window. The set of pages in the most recent Δ page references is the working set. Let WSSi be the working-set size for process Pi, i.e., Pi needs WSSi frames. Let D be the total demand for frames; then D = Σ WSSi,

                and let m be the total number of available frames.

                If D > m, thrashing occurs, because some processes will not have enough frames. So:
                IF (D > m)

                the OS selects a process to suspend; else the OS may initiate another process. The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.
                ========================================================================
                Q) How does thrashing occur in global and local page replacement?
                Ans: In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, causing a page fault for some other process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more and more processes queue up for the paging device,

                the READY queue empties and the CPU becomes idle. So the OS introduces new processes while the CPU is free, which further increases the number of page faults.

                In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs only the frames allocated to that process can be replaced. When a process starts thrashing, other processes are not directly affected. But since processes will be queued for the paging device most of the time, the average service time for a page fault increases; hence the effective access time increases even for a process that is not thrashing.
                Q) Page Table Structure, or Structure of the page table in memory

                Hierarchical Paging Hashed Page Tables Inverted Page Tables

                Hierarchical Paging: If the page table is large, we break it up into multiple page tables, i.e., the page table itself is paged. A simple technique is a two-level page table.
                Two-Level Paging Example: A logical address (on a 32-bit machine with 4K page size) is divided into:
                1. a page number consisting of 20 bits
                2. a page offset consisting of 12 bits
                Since the page table is paged, the page number is further divided into:
                1. a 10-bit page number
                2. a 10-bit page offset
                Thus a logical address is as follows:

                page number | page offset
                p1   | p2   | d
                10   | 10   | 12

                where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table.
                Two-Level Page-Table Scheme

                Address-Translation Scheme: Address-translation scheme for a two-level 32-bit paging architecture.

                Address-Translation Scheme Address-translation scheme for a two-level 32-bit paging architecture
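The 10/10/12 address split above can be sketched with a few bit operations (Python; the example address is illustrative):

```python
# Sketch: splitting a 32-bit logical address for two-level paging
# (4 KB pages: 10-bit outer index p1, 10-bit inner index p2, 12-bit offset d).

def split_address(addr):
    d = addr & 0xFFF            # low 12 bits: offset within the page
    p2 = (addr >> 12) & 0x3FF   # next 10 bits: index into the inner page table
    p1 = addr >> 22             # top 10 bits: index into the outer page table
    return p1, p2, d

# Example: 0x00403007 -> p1 = 1, p2 = 3, offset = 7
print(split_address(0x00403007))
```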

Hashed Page Tables (or hash table): Hashed page tables are common in address spaces larger than 32 bits. The hash table is indexed by hash value, and each entry contains a linked list of elements that hash to the same location. Each element consists of 3 fields:

1. Page number 2. Frame number 3. Pointer to the next element in the linked list

The algorithm works as follows: the page number is passed to the hash function to get a hash value. Locating that value in the hash table is easy since the table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address. If there is no match, subsequent entries in the linked list are searched.
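The lookup described above can be sketched with chained buckets (Python; the table size, pages, and frames are illustrative):

```python
# Sketch of a hashed page table: each bucket holds a chain of (page, frame).

TABLE_SIZE = 8

def hash_fn(page):
    return page % TABLE_SIZE          # simple hash for illustration

table = [[] for _ in range(TABLE_SIZE)]

def insert(page, frame):
    table[hash_fn(page)].append((page, frame))

def lookup(page):
    for p, f in table[hash_fn(page)]:  # walk the chain at the hashed slot
        if p == page:
            return f                   # match: frame number found
    return None                        # no match: page fault

insert(3, 17); insert(11, 42)          # pages 3 and 11 collide (both hash to 3)
print(lookup(11))                      # -> 42, found after one chain step
```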

Inverted Page Table

An inverted page table has one entry for each frame of physical memory. Each entry holds:

1. Process-id 2. Page number

The logical address also has 3 parts: 1. Process-id 2. Page number 3. Offset

The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, the physical address <i, offset> is generated. If there is no match, the address is an illegal reference.

Advantages and disadvantages: it decreases the memory needed to store page tables, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to limit the search to one, or at most a few, page-table entries (Inverted Page Table Architecture).
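The inverted-table search can be sketched as follows (Python; the table entries are illustrative, and entry i stands for frame i):

```python
# Sketch: inverted page table lookup. One entry per physical frame; the
# table is searched for (pid, page), and the matching index i is the frame.

inverted = [("P1", 0), ("P2", 5), ("P1", 3)]   # entry i describes frame i

def translate(pid, page, offset, frame_size=4096):
    for i, entry in enumerate(inverted):
        if entry == (pid, page):
            return i * frame_size + offset     # physical address <i, offset>
    raise ValueError("illegal address")        # no match

print(translate("P1", 3, 100))   # matches entry 2 -> 2*4096 + 100 = 8292
```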

                Unit-IV CPU Scheduling

Basic concepts: To maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns it to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle: Process execution begins with a CPU burst, followed by an I/O burst, followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

An I/O-bound program has many short CPU bursts. A CPU-bound program has a few long CPU bursts.

Schedulers: a module in the OS that makes scheduling decisions.
1. Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
2. Medium-term scheduler: selects which processes should be swapped in/out of memory.
3. Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (CPU) scheduler.

Dispatcher: gives control of the CPU to the process selected by the short-term scheduler. This function involves:

1. switching context
2. switching to user mode
3. jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency; it must be kept small.

Ready Queue: The ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of the processes.

                Pre-emptive Scheduling

CPU scheduling decisions take place under one of four conditions:

1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state, for example in response to an interrupt
3. When a process switches from the waiting state to the ready state, say at completion of I/O
4. When a process terminates

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one.

If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive or cooperative. In non-preemptive scheduling, a process runs until it itself gives up the CPU, i.e., when it waits for I/O or when it completes. Otherwise the system is said to be pre-emptive.

Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

                Note that pre-emptive scheduling can cause problems when two processes share data because one process may get interrupted in the middle of updating shared data structures

Preemption can also be a problem if the kernel is busy with a system call (e.g., updating critical kernel data structures) when the preemption occurs. Solution: wait until the system call has either completed or blocked before allowing the pre-emption. But this solution is problematic for real-time systems.

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting the section. But this should be done only for very short pieces of code that take little CPU time.

Scheduling criteria used to decide among CPU scheduling algorithms:
1. CPU utilization: in a real system, CPU utilization ranges from about 40% to 90%
2. Throughput: number of processes completed per unit time
3. Turnaround time: time when the process completed minus time when the process was submitted, i.e., it includes waiting time and burst time
4. Waiting time: amount of time a process spends waiting for the CPU
5. Response time: time when the first response came minus time when the process was submitted

Optimization criteria: maximize CPU utilization, maximize throughput, minimize turnaround time, minimize waiting time, minimize response time.

CPU Scheduling Algorithms
1) First-Come First-Served (FCFS) Scheduling: FCFS is non-pre-emptive. Processes are scheduled in the order they arrived. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages:
1. Average waiting time is quite long

2. Convoy effect (small processes wait in the queue for a big process to leave the CPU)

Example: Process / Burst Time: P1 24, P2 3, P3 3

Suppose that the processes arrive in the order P1, P2, P3.

The Gantt chart for the schedule is: P1 | P2 | P3
0 24 27 30
Waiting time for P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.

2) Shortest-Job-First (SJF) Scheduling: schedule the process with the shortest burst time. If the CPU burst times of two processes are the same, FCFS is used.
Advantages: average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either pre-emptive or non-pre-emptive:
1. Non-pre-emptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Pre-emptive SJF, also called Shortest-Remaining-Time-First (SRTF): if the newly arrived process is shorter than what remains of the currently executing process, pre-empt the currently executing process.

Example: Process / Arrival Time / Burst Time:
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4

SJF (non-preemptive) Gantt chart: P1 | P3 | P2 | P4
0 7 8 12 16
Average waiting time = [0 + (8-2) + (7-4) + (12-5)] / 4 = 4
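The non-preemptive SJF example above can be checked with a short simulation (a Python sketch; ties between equal bursts are broken in insertion order):

```python
# Sketch: non-preemptive SJF over the example's (arrival, burst) data;
# at each completion the shortest ready burst is dispatched next.

def sjf_nonpreemptive(procs):
    time, waits, pending = 0, {}, dict(procs)
    while pending:
        ready = {n: ab for n, ab in pending.items() if ab[0] <= time}
        if not ready:                                  # CPU idle: jump ahead
            time = min(ab[0] for ab in pending.values())
            continue
        name = min(ready, key=lambda n: ready[n][1])   # shortest burst first
        arrival, burst = pending.pop(name)
        waits[name] = time - arrival                   # time spent waiting
        time += burst
    return waits

waits = sjf_nonpreemptive({"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)})
print(waits)                              # P1: 0, P3: 3, P2: 6, P4: 7
print(sum(waits.values()) / len(waits))   # -> 4.0, matching the worked answer
```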

Example of Preemptive SJF:

Process / Arrival Time / Burst Time:
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4

SJF (preemptive) Gantt chart: P1 | P2 | P3 | P2 | P4 | P1
0 2 4 5 7 11 16
Average waiting time = (9 + 1 + 0 + 2)/4 = 3

3) Priority Scheduling: A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. preemptive 2. non-preemptive.

Problem: starvation, i.e., low-priority processes may never execute. Solution: aging, i.e., as time progresses, increase the priority of the waiting process.

4) Round Robin (RR): Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance: 1. If q is large, RR behaves like FIFO. 2. If q is small, q must still be large with respect to the context-switch time, otherwise the overhead is too high.

Example of RR with time quantum = 4: Process / Burst Time: P1 24, P2 3, P3 3

The Gantt chart is: P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1
0 4 7 10 14 18 22 26 30
Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 = 5.66

5) Multilevel Queue Scheduling: The ready queue is partitioned into separate queues. For example, the ready queue is divided into 2 queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, Round Robin can be used in the foreground queue and FCFS in the background queue.

Scheduling must also be done between the queues. This can be done in 2 ways:
1. Fixed-priority scheduling: the foreground queue has the highest priority. All processes in the foreground queue must complete and the queue must be empty before processes in the background queue are given the CPU, as shown in the figure below. Disadvantage: starvation.
2. Time slice: each queue gets a certain amount of CPU time which it can schedule among its processes, e.g., 80% to the foreground queue in RR and 20% to the background queue in FCFS.

6) Multilevel Feedback Queue Scheduling: A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes will be in the highest-priority queue. If a process takes more CPU time, it is moved to a lower-priority queue. If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues: Q0, Q1, Q2.

1. Q0: time quantum 8 milliseconds
2. Q1: time quantum 16 milliseconds
3. Q2: FCFS

Scheduling:

1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 ms. If it does not finish in 8 milliseconds, it is moved to queue Q1.
2. At Q1 the process receives an additional 16 milliseconds. If it still does not complete, it is moved to queue Q2.

Q) Synchronization
Ans Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of process synchronization is the Producer-Consumer problem (or Bounded-Buffer problem).
Producer-Consumer problem: make sure that the producer process does not add data when the buffer is full, and the consumer process does not take data when the buffer is empty.

The code for the producer process can be written as follows:

while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. Counter is incremented every time a new item is added to the buffer and decremented every time an item is removed. "in" is an index that always points to the next empty slot after the last filled slot; "out" is an index that points to the first filled slot, at the head of the circular queue.

The code for the consumer process is as shown below:

while (true) {
    while (counter == 0)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The above producer and consumer code produces a RACE CONDITION. When the outcome of execution of several co-operating processes depends on the order in which their accesses to shared data take place, it is called a race condition.

For example, let counter = 5. If the producer produces one item and the consumer consumes one item, the correct value of counter is still 5. But counter++ and counter-- are each implemented as several machine instructions (load, modify, store), and depending on how the two stores interleave (steps T4 and T5 in the interleaving), counter may end up 4. If T5 is executed before T4, counter ends up 6. Both results are inconsistent.

Solution to avoid the race condition: make sure that only one process at a time updates the shared variable (here, counter).
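The lost update above can be traced by simulating the load/modify/store steps one at a time (Python sketch; r1 and r2 stand for the two processes' registers):

```python
# Sketch: counter++ (producer) and counter-- (consumer) each expand to
# load / modify / store; one unlucky interleaving loses the increment.
counter = 5

r1 = counter       # T0: producer loads 5
r1 = r1 + 1        # T1: producer computes 6
r2 = counter       # T2: consumer loads 5 (the increment is not stored yet)
r2 = r2 - 1        # T3: consumer computes 4
counter = r1       # T4: producer stores 6
counter = r2       # T5: consumer stores 4, overwriting the increment

print(counter)     # -> 4 instead of the correct 5 (T5 before T4 gives 6)
```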

Q) Critical section problem
Ans Each process has a segment of code called its critical section. Critical-section code is code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute critical-section code at the same time. For this to happen:
1. Each process must request permission to enter its critical section: the entry section
2. The end of the critical section: the exit section
3. The rest of the code after the critical section: the remainder section

A solution to the critical-section problem must satisfy three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section
2. Progress: only processes that are not in their remainder section take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely
3. Bounded waiting: there is a bound on how long a process waits to enter its critical section after requesting entry; no process waits indefinitely

Q) Peterson's solution for the critical section problem (software-based solution)
Ans The two processes share two variables: flag[] (whether a process is ready to enter) and turn (whose turn it is). Process 1:

do {
    flag[1] = TRUE;
    turn = 2;
    while (flag[2] && turn == 2)
        ; /* busy wait */
    /* critical section */
    flag[1] = FALSE;
    /* remainder section */
} while (TRUE);

Process 2 is symmetric, with the indices exchanged:

do {
    flag[2] = TRUE;
    turn = 1;
    while (flag[1] && turn == 1)
        ; /* busy wait */
    /* critical section */
    flag[2] = FALSE;
    /* remainder section */
} while (TRUE);

A sample interleaving:
T0: Process 1 sets flag[1] = TRUE and turn = 2; Process 2 sets flag[2] = TRUE
T1: Process 2 sets turn = 1
T2: Process 1 tests (flag[2] && turn == 2); since turn = 1, its loop does not run. Process 2 tests (flag[1] && turn == 1) and loops continuously until flag[1] becomes false
T3: Process 1 enters its critical section
T4: Process 1 sets flag[1] = FALSE; Process 2 now enters its critical section
T5: Process 1 enters its remainder section; Process 2 sets flag[2] = FALSE and enters its remainder section
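As a hedged illustration, the protocol above can be exercised with two Python threads guarding a shared counter. This is a sketch: it relies on CPython executing these statements in program order, and the iteration count is illustrative.

```python
# Sketch: Peterson's algorithm protecting a shared counter with two threads.
import threading

flag = [False, False]   # flag[i]: process i wants to enter
turn = 0                # whose turn it is to wait
counter = 0             # shared data updated in the critical section
N = 20_000              # illustrative iteration count

def worker(i):
    global turn, counter
    other = 1 - i
    for _ in range(N):
        flag[i] = True                       # entry section: announce intent
        turn = other                         # yield priority to the other
        while flag[other] and turn == other:
            pass                             # busy wait
        counter += 1                         # critical section
        flag[i] = False                      # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # -> 40000: mutual exclusion preserved every update
```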

10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

                File Attributes

1. Name: name of the file, for user understanding
2. Identifier: used by the OS to identify a file (e.g., inode number)
3. Type: text files, executable files, binary files, etc.
4. Location: location of the file on the hard drive
5. Size: current size of the file
6. Protection: controls who can read, write, or execute the file
7. Time, date, and user identification: when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. The directory structure is kept on disk.

                File Operations

The file Abstract Data Type supports many common operations:
1. Creating a file: find free space on disk, add an entry to the directory
2. Writing a file: write data at the current file-position pointer location and update the pointer
3. Reading a file: read data at the current file-position pointer location and update the pointer
4. Repositioning within a file (file seek): change the pointer location to a given value
5. Deleting a file: free the space allocated to the file and remove its entry from the directory
6. Truncating a file: delete the data and update the file size

                Open File Table

Most OSes require that files be opened before access and closed after access. Information about all files currently open in the system is stored in a system-wide open file table maintained by the OS, and information about the files opened by a process is stored in the process's own open file table. The open file table holds the following information:

1. File pointer: records the current position in the file for the next read or write
2. File-open count: how many times the current file has been opened (simultaneously, by different processes) and not yet closed. When this counter reaches zero, the entry can be removed from the table
3. Disk location of the file
4. Access rights

Some systems provide support for file locking:
1. A shared lock is for reading only
2. An exclusive lock is for writing as well as reading
3. With an advisory lock, the software developers must make sure the locks are acquired and released appropriately
4. With a mandatory lock, the OS ensures that locking is enforced
5. UNIX uses advisory locks and Windows uses mandatory locks

                File Types

Implementing file types: a common technique is to include the type as part of the filename. The name is split into 2 parts:

1. File name 2. Extension

The file name and extension are separated by a period. The extension indicates the type of the file and the type of operations that can be done on it. For example, a user opens Microsoft Word, clicks the File menu, and chooses Open; the user specifies only the file name without the extension, and the application looks for a file with the given name and the extension it expects. Such extensions are merely hints to applications and are not enforced by the operating system. UNIX systems use a crude magic number stored at the beginning of the file to indicate its type. In the Mac OS X operating system, a file produced by a word processor records the word processor's name as its creator; when the user double-clicks the file, Word opens it. Some common file types are as shown below.

File Structure: Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so the OS can determine where in memory to load the file and the location of the first instruction.

If an OS supports many file formats, its size and complexity increase. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, including finding the first executable statement).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units such as a single byte, a data record, or a structure. The number of logical units that fit into one physical block determines the packing. For example, if a student file is organized internally as records, each record takes 64 bytes, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e., 8 records can be stored per block. Internal fragmentation may occur when the logical unit size does not divide the block size evenly.
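The packing arithmetic can be checked directly (Python sketch; the 100-byte case is an illustrative addition showing internal fragmentation):

```python
# Sketch: packing fixed-size logical records into 512-byte physical blocks.
BLOCK_SIZE = 512

def packing(record_size):
    per_block = BLOCK_SIZE // record_size   # records that fit in one block
    wasted = BLOCK_SIZE % record_size       # internal fragmentation per block
    return per_block, wasted

print(packing(64))    # -> (8, 0): the notes' 64-byte student records fit exactly
print(packing(100))   # -> (5, 12): a non-exact fit wastes 12 bytes per block
```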

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: records are read and written sequentially, one record after another; we cannot randomly access an arbitrary record. A sequential-access file emulates magnetic tape operation and generally supports the following operations:
- read next: read a record and advance to the next position
- write next: write a record and advance to the next position
- rewind
- skip n records: may or may not be supported; n may be limited to positive numbers or to +/- 1

2. Direct access (or relative access): the file is made up of fixed-length records, so we can randomly access any record. There is no restriction on the order of reading or writing. Direct-access files are useful when we need to access large amounts of data randomly, e.g., a database.

Operations supported include (n is the relative block number, i.e., relative to the beginning of the file):
1. read n: read record number n
2. write n: write record number n
3. jump to record n: n could be 0 or the end of the file
4. query current record: used to return to this record later
5. Sequential access can easily be emulated using direct access; the inverse is complicated and inefficient

3. Other access methods can be built on top of the direct-access method. We construct an index for the file containing pointers to the various blocks. To search the file, we first consult the index and then use the pointer to access the file directly.

For example, suppose a student file has 120,000 records sorted by student id. We can create an index holding the first student id in each block. If 64 records can be stored in each block, the number of blocks needed for 120,000 records is 120000/64 = 1875 blocks, so the index has 1875 entries. The index is kept in main memory so that searching becomes faster. If the index itself becomes large, we create an index for the index file (e.g., a B-tree).
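The index sizing in the example above works out as follows (Python sketch):

```python
# Sketch: sizing the index for 120,000 sorted records, 64 records per block.
import math

records, per_block = 120_000, 64
blocks = math.ceil(records / per_block)   # data blocks; one index entry each
print(blocks)                             # -> 1875 index entries
```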

                Q) Allocation Methods

Ans The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

Three major methods of allocating disk space are:
1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk. If the file is n blocks long and starts at location b, it occupies blocks b, b+1, b+2, ..., b+n-1. The directory entry for each file records the address of the starting block (b) and the length (n).

Advantages:

1. Access is easy: for direct access to block i we simply compute b+i, and since the file is stored sequentially, sequential access is also easy
2. Supports both direct and sequential access
3. The number of disk seeks required is minimal
4. Seek time is minimal

Disadvantages:

1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to it. Solution: copy all the files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire space may not be used at the beginning, leading to wasted space. If the size of the file is not known in advance: (i) if we allocate too little space, the file cannot be extended; the solution is to find a larger hole, copy the contents of the file to the new space, and release the old space, which is possible as long as space exists but is time consuming; (ii) if we overestimate the amount of space needed, much of the space goes unused, leading to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme: a contiguous chunk of blocks is allocated at first, and if the file needs more blocks, another chunk of contiguous space, known as an extent, is added. The directory entry then records the location, a block count, and a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable size, external fragmentation occurs.
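A first-fit search over a free-hole list, as mentioned above, can be sketched as follows (the hole-list values are illustrative):

```python
# Sketch of first-fit allocation over a list of free holes (start, length).

def first_fit(holes, n):
    """Allocate n contiguous blocks; return the start block, or None."""
    for idx, (start, length) in enumerate(holes):
        if length >= n:
            if length == n:
                holes.pop(idx)                      # hole fully consumed
            else:
                holes[idx] = (start + n, length - n)  # shrink the hole
            return start
    return None   # no hole big enough: external fragmentation in action

holes = [(0, 3), (10, 8), (25, 4)]
print(first_fit(holes, 5))   # -> 10: the first hole large enough
print(holes)                 # -> [(0, 3), (15, 3), (25, 4)]
```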

2. Linked allocation: each file is a linked list of disk blocks, and the blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, the user sees blocks of 512-4 = 508 bytes. To create a new file, we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0.

To add more data to an existing file, a free block is found, the data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. It is easy to find space for a new file or to add more data to an existing file
2. The size of the file need not be declared when the file is created
3. There is no external fragmentation

Disadvantages:
1. Seek time is higher
2. The number of disk seeks required is higher
3. Pointers use a noticeable percentage of the file's disk space
4. Inefficient for direct-access files: to access the i-th record, i disk reads are required
5. Not reliable if pointers are lost or damaged

The solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk access time, reduces the space wasted on pointers since fewer pointers are needed, and simplifies free-list management. Its disadvantage is internal fragmentation: more space is wasted when a cluster is partially full.

The solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                Advantages

1. It is easy to find space for a new file or to add more data to an existing file
2. Random access time is improved

Disadvantage: the number of disk seeks increases, because the disk head must move to read the FAT to find the location of a block, then move to the location of the block itself.
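The FAT chaining for the 5-block example file (blocks 9, 16, 1, 10, 25) can be sketched as follows (the table size and sentinel values are illustrative):

```python
# Sketch: a FAT as an array indexed by block number. EOF marks the last
# block of a file; 0 marks an unused block. The directory stores only the
# first block number.
EOF = -1

fat = [0] * 32
fat[9], fat[16], fat[1], fat[10], fat[25] = 16, 1, 10, 25, EOF  # 5-block file

def file_blocks(start):
    blocks, b = [], start
    while b != EOF:            # follow the chain through the FAT
        blocks.append(b)
        b = fat[b]
    return blocks

print(file_blocks(9))   # -> [9, 16, 1, 10, 25]
```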

                3 Indexed allocation

Each file has its own index block, which is an array of disk-block addresses: the i-th entry in the index block points to the i-th block of the file. The directory contains the address of the index block. When the file is created, all pointers in the index block are set to nil. To add more data to an existing file, a free block is found, the data is written to it, and the address of the block is put into the index block.

                Advantages

1. It is easy to find space for a new file or to add more data to an existing file
2. Supports direct (random) access efficiently
3. No external fragmentation

Disadvantages:

1. Seek time is higher
2. The number of disk seeks required is higher
3. Not reliable
4. The index block should be kept in memory. If this memory is not available, we have to read the index block and then the desired data block, i.e., 2 disk accesses are required. To access a block near the end of a large file, we may need to read several index blocks before the needed data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

Various mechanisms are used for the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header with the name of the file and a set of the first 100 disk-block addresses. For large files, the last address contains the address of another index block.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which in turn points to the file block. This approach can be continued to a third or fourth level.

3. Combined scheme: for example, say there are 15 index-block pointers in the file's inode. The first 12 of these pointers hold addresses of blocks containing file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, an index block containing addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block, as shown below.
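With illustrative parameters (4 KB blocks and 4-byte block addresses, values assumed here rather than taken from the notes), the maximum file size under the combined scheme works out as:

```python
# Sketch: maximum file size under the combined (inode) scheme with
# 12 direct pointers plus single, double, and triple indirect blocks.
BLOCK = 4096
PTRS = BLOCK // 4                 # 1024 block addresses fit in one index block

direct = 12 * BLOCK               # 12 direct pointers
single = PTRS * BLOCK             # single indirect: one index block of data
double = PTRS ** 2 * BLOCK        # double indirect: index of index blocks
triple = PTRS ** 3 * BLOCK        # triple indirect

total = direct + single + double + triple
print(total)                      # -> 4402345721856 bytes (about 4 TiB)
```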

                Q) Free space management

Ans To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

The free-space list can be implemented as:

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000...

Advantages: simple, and it is easy to find contiguous free blocks for a file.

Disadvantages:

1. The bit map requires extra space
2. The bit map is inefficient if kept on disk. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a large disk in main memory
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk
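A bit-vector allocator for the example above (blocks 2, 3, 4, 5, 8 free) can be sketched as:

```python
# Sketch: bit-vector free list; bit 1 = free, bit 0 = allocated.

bitmap = [0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0]

def first_free():
    for i, bit in enumerate(bitmap):
        if bit == 1:
            return i
    return None                 # disk full

def allocate():
    i = first_free()
    if i is not None:
        bitmap[i] = 0           # mark the block allocated
    return i

print(allocate())   # -> 2: first free block
print(allocate())   # -> 3: next free block
```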

2. Linked list: the free-space list is implemented by linking all the free disk blocks together, with a pointer to the first free block stored in a special location on disk. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: contiguous space cannot be found easily; traversing the free-space list requires reading each block, which takes substantial I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free; the last one contains the addresses of another n free blocks, and so on.
Advantage: a large number of free blocks can be found quickly.

4. Counting: the free-space list can be maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it. Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, but the overall list is shorter as long as the count is generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space from a metaslab, it loads the associated space map into memory and updates it. Finally, the free-space list is updated on disk.

Synchronization Hardware
The hardware solution to synchronization is to provide atomic operations: these operate as a single instruction, without interruption. Two such operations are:
1. The TestAndSet() hardware instruction
2. The Swap() instruction

1. The TestAndSet() instruction is as shown.

Solution using TestAndSet: the shared variable lock is initialized to false.

2. The Swap() instruction definition is as follows.

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately do not guarantee bounded waiting. If there are multiple processes trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn in the critical section. (Since there is no guarantee as to the relative rates of the processes, a very fast process could theoretically release the lock, whip through its remainder section, and re-lock the lock before a slower process got a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The figure below illustrates a solution using TestAndSet that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

                Bounded-waiting mutual exclusion with TestAndSet( )

On exit, a process first looks, in order (starting with the next process on the list), for a process that has been waiting; if it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby allowing a specific process into the critical section while continuing to block all the others.

Q) What is a semaphore?
Ans A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through two standard atomic operations: 1. wait() 2. signal()

Wait primitive:

wait(S) {
    while (S <= 0)
        ; // do nothing
    S--;
}

Signal primitive:

signal(S) {
    S++;
}

Each process that wishes to use a resource performs the wait() operation. When a process releases a resource, it performs the signal() operation.
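A direct, runnable rendering of the wait()/signal() definitions above, as a sketch (the class is invented for illustration; a `threading.Lock` simulates the atomicity that a real implementation gets from hardware):

```python
# Busy-waiting semaphore matching the wait()/signal() pseudocode above.
import threading

class BusyWaitSemaphore:
    def __init__(self, value):
        self._atomic = threading.Lock()   # simulates atomicity of wait/signal
        self.value = value

    def wait(self):
        while True:                       # while (S <= 0) do nothing
            with self._atomic:
                if self.value > 0:
                    self.value -= 1       # S--
                    return

    def signal(self):
        with self._atomic:
            self.value += 1               # S++

S = BusyWaitSemaphore(1)
S.wait()        # acquire the single resource
S.signal()      # release it
print(S.value)  # 1
```

The loop in wait() is exactly the busy waiting ("spin lock") behaviour the implementation section below discusses.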

Semaphore usage:
1. A binary semaphore (or mutex lock) can take the values 0 or 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the number of remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter a critical section and use one of the resources. When the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal() call. For example, suppose there are 3 resources and 4 processes, so the counting semaphore = 3. If process P1 requires a resource, it performs a wait() operation; now the counting semaphore = 2. If process P2 requires a resource, it performs wait(); now the counting semaphore = 1. If process P3 requires a resource, it performs wait(); now the counting semaphore = 0. If process P4 now requires a resource and performs wait(), P4 waits until a resource is available.

                3 Semaphores can also be used to synchronize certain operations

Semaphore implementation: When a process is in its critical section, any other process that tries to enter the critical section must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process spins while waiting.

As an example of using semaphores for synchronization, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First we create a semaphore named synch that is shared by the two processes and initialize it to zero. Then in process P1 we insert the code:

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal. Each semaphore has an integer value and maintains a queue of waiting processes.
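The S1-before-S2 ordering described above can be sketched with Python's `threading.Semaphore` standing in for the notes' semaphore (illustrative; the thread bodies are invented):

```python
# Enforce "S1 before S2" with a semaphore initialized to 0, as in the notes.
import threading

synch = threading.Semaphore(0)   # initialized to 0
order = []

def p1():
    order.append("S1")   # statement S1
    synch.release()      # signal(synch)

def p2():
    synch.acquire()      # wait(synch): blocks until P1 has executed S1
    order.append("S2")   # statement S2

t2 = threading.Thread(target=p2); t2.start()   # start P2 first on purpose
t1 = threading.Thread(target=p1); t1.start()
t1.join(); t2.join()
print(order)   # ['S1', 'S2'] -- S2 can never run before S1
```

Even though P2 is started first, it blocks on acquire() until P1 signals, so the ordering is guaranteed regardless of scheduling.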

Semaphore implementation with no busy waiting: To overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue to the ready state).

Problems due to semaphores:
1. Deadlocks: Deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes, as shown below.

2. Starvation: In starvation, one or more processes get blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithm for adding processes to the waiting queue, or for selecting the one to be removed from the queue in the signal() call. If a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: Let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M enters, it pre-empts process L, and M is running now. Here M runs before H, even though M has lower priority than H. To avoid this, L is temporarily given high priority so that M does not pre-empt L.
=====================================================================
Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem. Consider:

n buffers, each of which holds one item; a producer process, which creates the items (one at a time); and a consumer process, which consumes them (one at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:

do {
    ... produce an item ...
    wait(empty);
    wait(mutex);
    ... add the item to the buffer ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:

do {
    wait(full);
    wait(mutex);
    ... remove an item from the buffer ...
    signal(mutex);
    signal(empty);
    ... consume the item ...
} while (1);
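The bounded-buffer pseudocode above can be run directly in Python using `threading.Semaphore` (a sketch; the buffer size of 5 and the 10 items are arbitrary choices, not from the notes):

```python
# Producer-consumer with the three semaphores from the notes.
import threading
from collections import deque

N = 5                                    # number of buffers (assumed size)
buffer = deque()
empty = threading.Semaphore(N)           # counts empty buffers
full = threading.Semaphore(0)            # counts filled buffers
mutex = threading.Semaphore(1)           # mutual exclusion on the buffer

consumed = []

def producer():
    for item in range(10):
        empty.acquire()                  # wait(empty)
        mutex.acquire()                  # wait(mutex)
        buffer.append(item)              # add item to buffer
        mutex.release()                  # signal(mutex)
        full.release()                   # signal(full)

def consumer():
    for _ in range(10):
        full.acquire()                   # wait(full)
        mutex.acquire()                  # wait(mutex)
        item = buffer.popleft()          # remove an item from buffer
        mutex.release()                  # signal(mutex)
        empty.release()                  # signal(empty)
        consumed.append(item)            # consume the item

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)   # [0, 1, 2, ..., 9]; never more than 5 items buffered at once
```

The `empty` semaphore blocks the producer when all 5 buffers are full, and `full` blocks the consumer when no item is available, exactly as the pseudocode requires.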

2. The Readers-Writers Problem. A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (i.e., it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

Readers priority: no reader waits merely because a writer is waiting. Writers priority: if a writer is waiting to access the data, no new readers can start reading.

A solution to either version may cause starvation: in the readers-priority version, writers may starve; in the writers-priority version, readers may starve.

A semaphore solution to the readers-priority version uses an integer counter and two semaphores:
1. readcount = 0 (an integer that counts the number of processes currently reading the shared data)
2. mutex = 1 (the mutex semaphore provides mutual exclusion when readcount is updated)
3. wrt = 1 (the wrt semaphore is common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:

wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);      // first reader locks out writers
signal(mutex);
... reading is performed ...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);    // last reader lets writers in
signal(mutex);

Writer process:

do {
    wait(wrt);
    ... writing is performed ...
    signal(wrt);
} while (TRUE);
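The readers-priority solution above, as a runnable Python sketch (illustrative; the shared counter, the number of readers, and the logging are invented for the demo):

```python
# Readers-priority readers-writers with the semaphores from the notes.
import threading

readcount = 0                      # number of readers currently reading
mutex = threading.Semaphore(1)     # protects readcount
wrt = threading.Semaphore(1)       # exclusive access for writers
shared = {"value": 0}
reads = []

def reader(rid):
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()              # first reader locks out writers
    mutex.release()
    reads.append((rid, shared["value"]))   # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()              # last reader lets writers back in
    mutex.release()

def writer():
    wrt.acquire()
    shared["value"] += 1           # writing is performed, exclusively
    wrt.release()

threads = [threading.Thread(target=writer)] + \
          [threading.Thread(target=reader, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(shared["value"])   # 1 -- the single write happened exactly once
```

Multiple readers overlap freely (only readcount's updates are serialized), while the writer is excluded whenever readcount > 0.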

3. The Dining-Philosophers Problem. n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5];   // initially all values are 1

Philosopher i:
do {


    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    ... eat ...
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    ... think ...

} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (for example, if every philosopher picks up his left chopstick at the same moment).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems have processors dedicated to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, and so on. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint.

Two kinds of real-time systems have evolved:

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples: digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control; i.e., decision-making actions are spread across the different computers in the system, so that the failure of individual computers or of the network does not cripple operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

Distributed systems allow applications to run on several processors at the same time, thus requiring more complex processor-scheduling algorithms.

DESKTOP SYSTEMS: A desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. They have very little memory.


CLUSTERED SYSTEMS: A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                Q) Modern operating system

New developments in hardware, applications, and security threats lead to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client-server computing.

                Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks; for example, a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as there are now often multiple microprocessors in a single system, an SMP operating system provides greater efficiency when multiple processors are available. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

SMP schedules processes or threads across all of the processors.

Advantages:

1. Increased performance: if some portion of a program can run in parallel, the performance of an SMP OS increases.
2. Increased availability: since all processors can perform the same task, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can be made proportional to the number of processors used.

                4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5. Object-oriented design: adds modularity to the kernel. The OS can be customized without affecting system integrity.

Q) Linux Operating System
Ans Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization.

Components of the Linux System


The Linux operating system has primarily three components, as shown in the diagram below:

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system. It interacts directly with the hardware and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs used by application programs to access the Kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks such as updating log files and accepting login requests from terminals.

Kernel Modules - Sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module may typically implement a device driver, a file system, or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and allows them to talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode: Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require any context switch; hence it is very efficient and fast. The kernel runs processes, provides system services to them, and gives them protected access to hardware. Code that is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code; they use the system libraries to reach kernel functions for low-level tasks.

Basic Features
Following are some of the important features of the Linux operating system:

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on many kinds of hardware platforms.

Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system, meaning multiple users can access system resources like memory, RAM, and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system. It can be used for various types of operations, to call application programs, and so on.


Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

Architecture: The Linux system architecture consists of the following layers:

Hardware layer - consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - the core component of the operating system; interacts directly with the hardware and provides low-level services to the upper-layer components.

Shell - an interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System
1. Extensibility - layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. This modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.
4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
5. Performance - XP subsystems can communicate with one another via high-performance message passing.

Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

6. International support - supports different locales via the national language support (NLS) API.

XP Architecture:
1. Layered system of modules.
2. Protected mode - hardware abstraction layer (HAL), kernel, executive.
3. User mode - collection of subsystems:
a. Environmental subsystems emulate different operating systems.
b. Protection subsystems provide security functions.

                XP architecture


Q) Windows Network Operating System
Example: the Windows 2000 operating system.

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.


10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has structure, which depends on its type.

4. Circular Wait - A set of processes {P0, P1, P2, ..., PN} must exist such that every P[i] is waiting for P[(i + 1) % (N + 1)].

Q) Methods for handling deadlocks

1. By using deadlock-prevention and deadlock-avoidance protocols, ensure the system will never enter a deadlocked state.
2. Allow the system to enter a deadlocked state, detect it, and recover.
3. Ignore the problem and pretend that deadlocks never occur.

To make sure that the system never enters a deadlocked state, the system can use:

                  1 Deadlock prevention 2 Deadlock avoidance

                  ==============================================================================

                  Deadlock Prevention

1. Mutual Exclusion: We cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources are intrinsically non-sharable (e.g., a printer).

                  2 Hold and Wait

To make sure that the hold-and-wait condition never occurs in the system, two protocols can be used:

Protocol 1: All the resources a process requests must be allocated before the process begins execution.

Protocol 2: A process can request resources only when it has none. If a process requires additional resources, it must first release all the resources currently allocated to it.

Example: Consider a process that copies data from a DVD drive to a file on disk, sorts the file, and then prints the results to a printer.

If Protocol 1 is used, it must request the DVD drive, disk file, and printer at the beginning, and must hold them until the end.

Disadvantages:

1. Starvation: a process may wait forever, because at least one resource it needs is always allocated to some other process. Hence starvation is possible.

2. Low resource utilization: the process holds the printer from beginning to end, even though it is used only at the end.

If Protocol 2 is used, the process initially requests the DVD drive and disk file. It copies from the DVD drive to the disk, and then releases both the DVD drive and the disk file. It then requests the disk file and printer.

Disadvantage: there is a chance that our data may not remain on the disk file in the meantime.

                  3 No pre-emption

                  To make sure that this condition does not hold the following protocol is used

Protocol: If a process (say A) requests some resources:

Case 1: If the resources are available, then allocate them.

Case 2: If the resources are allocated to some other process (say B) that is itself waiting for additional resources, then pre-empt the desired resources from the waiting process (B) and allocate them to the requesting process (A).

Process B can be restarted only when it is allocated both the additional resources it is requesting and the resources that were taken away and given to process A.

Case 3: If the resources are neither available nor held by a waiting process, then process A waits.

This protocol is applied to resources such as CPU registers and memory space, whose state can easily be saved and restored.

                  4 Circular Wait

To make sure the circular-wait condition never occurs:

1. Each resource type is assigned a unique integer number.

2. Each process must request resources in an increasing order of enumeration.

We define a one-to-one function F: R → N, where R is the set of resource types and N is the set of natural numbers.

Suppose a process has been allocated a resource of type Ri.

Protocol 1: The process can next request a resource of type Rj if and only if F(Rj) > F(Ri).

Protocol 2: If a process requests a resource of type Rj, it must first release all resources Ri for which F(Ri) >= F(Rj).

Example: Let F(tape drive) = 1, F(disk drive) = 5, and F(printer) = 12.

A process can request any number of tape drives, disk drives, and printers.

Protocol 1: If a process A has already requested (and holds) the disk drive, A can now request only the printer; it cannot request the tape drive.

Protocol 2: In order to request the tape drive, process A must first release the disk drive; only then can it request the tape drive.
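As a toy illustration of Protocol 1 (a Python sketch; the class and its behaviour are invented for this example, not from the notes), a process can check each request against the ordering function F before issuing it:

```python
# Illustrative resource-ordering check: F maps each resource type to a
# unique number, and a request is legal only if it is strictly higher
# than every resource the process already holds (Protocol 1).
F = {"tape_drive": 1, "disk_drive": 5, "printer": 12}

class Process:
    def __init__(self):
        self.held = []                       # resources currently held

    def request(self, resource):
        if all(F[resource] > F[r] for r in self.held):
            self.held.append(resource)
            return True
        return False    # would violate the ordering: must release first

p = Process()
print(p.request("disk_drive"))   # True
print(p.request("printer"))      # True  (12 > 5)
print(p.request("tape_drive"))   # False (1 < 5: breaks the ordering)
```

Because every legal acquisition strictly increases the largest F-value a process holds, no cycle of waits can form, which is exactly what the proof below establishes.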

If the above two protocols are used, then the circular-wait condition can never occur. We can prove this by contradiction.

Proof: Assume a circular wait exists. Let the set of processes involved in the circular wait be {P0, P1, ..., Pn}, where P0 is waiting for resource R0, which is held by P1.

P1 (holding R0) is waiting for resource R1, which is held by P2; so F(R0) < F(R1).

P2 (holding R1) is waiting for resource R2, which is held by P3; so F(R1) < F(R2).

...

Pn (holding Rn-1) is waiting for resource Rn, which is held by P0; and P0, holding Rn, is waiting for R0, so F(Rn) < F(R0).

Hence, by transitivity, F(R0) < F(R0), a contradiction. So our assumption that a circular wait exists is FALSE.

===============================================================================
Q) Resource Allocation Graph
Deadlocks can be understood more clearly through the use of resource-allocation graphs, which have the following properties:

1. Resource types are represented as square nodes on the graph. Dots inside a square node indicate the number of instances of that resource (e.g., two dots might represent two laser printers).

2. Processes are represented as circles.

3. Request edges: if P1 has requested R1, a directed edge from P1 to R1 is a request edge.

4. Assignment edges: a directed edge from R2 to P1 indicates that resource R2 has been allocated to process P1, i.e., that P1 is currently holding R2. Note that a request edge can be converted into an assignment edge when the request is granted.
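Since request edges run process → resource and assignment edges run resource → process, detecting deadlock on a graph with single-instance resources reduces to finding an ordinary directed cycle. A small sketch (illustrative; the graph encoding and function are invented):

```python
# Detect a cycle in a resource-allocation graph given as adjacency lists.
# Request edges go process -> resource; assignment edges resource -> process,
# so with single-instance resources a deadlock is exactly a directed cycle.
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / on stack / done
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:
                return True             # back edge: cycle found
            if color.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1 -> deadlock.
g = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(g))   # True
```

With multi-instance resources this check only signals the *possibility* of deadlock, matching the caveat in the text.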

If a resource-allocation graph contains a cycle AND each resource has only a single instance, then a deadlock exists. If a resource category contains more than one instance, then a cycle in the resource-allocation graph indicates the possibility of a deadlock, but does not guarantee one.
==================================================================
Q) Deadlock Avoidance
For each resource request, the system can decide whether the request should be granted or not. To make this decision, the system must have information such as:

1. the resources currently available
2. the resources currently allocated to each process
3. the future requests and releases of each process
4. the maximum number of resources each process may need

Given this information, it is possible to construct an algorithm that ensures the system will never enter a deadlocked state. There are two deadlock-avoidance algorithms:

1. the Resource-Allocation-Graph algorithm
2. the Banker's algorithm

Safe state: A system is in a safe state if there exists a safe sequence of processes <P0, P1, P2, ..., PN> such that, for each Pi, the resources Pi may still request can be satisfied by the currently available resources plus the resources held by all processes Pj with j < i. All safe states are deadlock-free.

Unsafe state: If a safe sequence does not exist, then the system is in an unsafe state, which MAY lead to deadlock.

1. Resource-Allocation-Graph algorithm: Resource-allocation graphs can be used for avoidance only when there is a single instance of each resource type. In this case, unsafe states can be recognized and avoided by adding claim edges, denoted by dashed lines, which point from a process to a resource that it may request in the future. All claim edges for a process are added before the process starts. When the process actually makes a request, the claim edge Pi → Rj is converted to a request edge; when the resource is released, the assignment edge changes back to a claim edge. This approach works by denying any request that would produce a cycle in the resource-allocation graph, taking claim edges into account.

Consider, for example, the resource-allocation graph shown. If P2 requests resource R2, then the claim edge P2 → R2 becomes a request edge, as follows:

The resulting resource-allocation graph would have a cycle in it, and so the request cannot be granted.

Q) Banker's Algorithm (deadlock-avoidance algorithm) with example
There are 12 tape drives. Let the current state of the system be as shown below:

Process   Allocated   Max Need   Need = Max Need - Allocated
P0        5           10         5
P1        2           4          2
P2        2           9          7

Available = 12 - (5 + 2 + 2) = 3

Resource-Request Algorithm: Now, when process P2 requests 1 tape drive, we run the resource-request algorithm to check whether the request should be granted. The request is granted only if, after granting it, all processes in the system can still complete. For that we check:

1. Is the request of P2 <= need of P2? 1 <= 7, therefore TRUE.
2. Is the request of P2 <= Available? 1 <= 3, therefore TRUE.
3. Pretend the request is granted for P2. The current state is then:

Process  Allocated   Max Need  Need = Max Need - Allocated
P0       5           10        5
P1       2           4         2
P2       2+1 = 3     9         7-1 = 6

Available = 3 - 1 = 2. Now run the safety algorithm to check whether the system is in a safe state.

Safety Algorithm:
1. Let WORK = Available = 2.
2. Find an unfinished process whose Need <= WORK.

Check P0: Need of P0 = 5, Work = 2. Is 5 <= 2? FALSE.
Check P1: Need of P1 = 2, Work = 2. Is 2 <= 2? TRUE. Therefore P1 can finish. When P1 finishes, Work = Work + Allocated to P1 = 2 + 2 = 4.
Now check again whether P0 can complete: Need of P0 = 5, Work = 4. Is 5 <= 4? FALSE.
Check whether P2 can complete: Need of P2 = 6, Work = 4. Is 6 <= 4? FALSE.

So neither P0 nor P2 can complete, and the system would be in an unsafe state. Therefore the request for 1 tape drive by P2 is not granted.

2. Banker's Algorithm: For resources with more than one instance, the resource-allocation graph method does not work, so we use the Banker's algorithm. When a process starts up, it must state in advance the maximum allocation of resources it may request. When a request is made, the scheduler determines whether granting the request would leave the system in a safe state. If not, the process must wait until the request can be granted safely.

The Banker's algorithm relies on several key data structures (where n is the number of processes and m is the number of resource categories):

- Available[m] indicates how many resources of each category are currently available.
- Max[n][m] indicates the maximum demand of each process for each resource.
- Allocation[n][m] indicates the number of resources of each category allocated to each process.
- Need[n][m] indicates the remaining resources needed of each type for each process. (Note that Need[i][j] = Max[i][j] - Allocation[i][j] for all i, j.)

For simplicity of discussion, one row of the Need matrix, Need[i], can be treated as a vector corresponding to the needs of process i, and similarly for Allocation and Max.

Safety Algorithm: In order to apply the Banker's algorithm, we first need an algorithm for determining whether or not a particular state is safe. It works as follows:

1. Let Work and Finish be vectors of length m and n respectively. Work is a working copy of the available resources; Finish is a vector of booleans indicating whether a particular process can finish. Initialize Work = Available and Finish[i] = false for all i.
2. Find an i such that both (a) Finish[i] == false and (b) Need[i] <= Work. Such a process has not finished but could finish with the given available resources. If no such i exists, go to step 4.
3. Set Work = Work + Allocation[i] and set Finish[i] = true. This corresponds to process i finishing and releasing its resources back into the work pool. Then loop back to step 2.
4. If Finish[i] == true for all i, then the state is safe, because a safe sequence has been found.

Resource-Request Algorithm (the Banker's Algorithm): Now that we have a tool for determining whether a particular state is safe, we can decide whether a new request should be granted. When a request is made, pretend it has been granted, then see whether the resulting state is safe; if so, grant the request, and if not, deny it:

1. Let Request[n][m] indicate the number of resources of each type currently requested by each process. If Request[i] > Need[i] for any process i, raise an error condition (the process has exceeded its declared maximum).
2. If Request[i] > Available, then process i must wait for resources to become available; otherwise continue to step 3.
3. Check whether the request can be granted safely by pretending it has been granted and testing whether the resulting state is safe. If the resulting state is safe, grant the request; otherwise the process must wait until its request can be granted safely.

The procedure for granting a request (or pretending to, for testing purposes) is:

Available = Available - Request[i]
Allocation[i] = Allocation[i] + Request[i]
Need[i] = Need[i] - Request[i]
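The safety and resource-request steps above can be sketched in code. This is a minimal single-resource sketch using the tape-drive example from these notes; the function names (is_safe, request_resources) are illustrative, not from any library.

```python
def is_safe(available, allocation, need):
    """Safety algorithm: return True if a safe sequence exists."""
    work = available
    finish = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            # Step 2: find an unfinished process whose need fits in work
            if not finish[i] and need[i] <= work:
                work += allocation[i]  # Step 3: process i finishes, releases
                finish[i] = True
                progress = True
    return all(finish)                 # Step 4: safe iff everyone can finish

def request_resources(pid, request, available, allocation, need):
    """Resource-request algorithm: grant only if the result is safe."""
    if request > need[pid]:
        raise ValueError("process exceeded its declared maximum")
    if request > available:
        return False                   # must wait for resources
    # Pretend the request is granted, then test safety
    new_alloc = allocation[:]
    new_need = need[:]
    new_alloc[pid] += request
    new_need[pid] -= request
    return is_safe(available - request, new_alloc, new_need)

# 12 tape drives: P0/P1/P2 hold 5/2/2, maxima are 10/4/9, so 3 are free.
allocation = [5, 2, 2]
need = [5, 2, 7]
available = 12 - sum(allocation)       # = 3

print(request_resources(2, 1, available, allocation, need))  # False: unsafe
print(request_resources(0, 1, available, allocation, need))  # True: safe
```

Running this reproduces the worked example: granting P2 one drive leaves an unsafe state, so the request is denied, while the same request from P0 would be safe.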

                  Unit III Memory management

Just as processes share the CPU, they also share physical memory. The memory-management unit of the OS takes care of memory allocation, deallocation, and related issues. A program must be brought into memory for it to run. Addresses are of two types:

i) Relocatable (relative) addresses, given with respect to the beginning of the program.
ii) Absolute addresses.

Q) Address Binding
Ans: Binding means mapping the logical address space to the physical address space. Address binding can happen at three different stages:

Compile time: If it is known at compile time where in memory the program will be placed, the compiler generates absolute addresses. Otherwise the compiler generates relocatable addresses.

Load time: The loader binds the relocatable addresses generated by the compiler to absolute addresses; hence binding is done at load time. If binding is done at compile or load time, physical and logical addresses are the same.

Execution time: If address binding is done at run time, the process can be moved during its execution from one memory segment to another. Here logical addresses are called virtual addresses. Run-time mapping of virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU).

Q) Logical vs Physical Address Space
Ans: A logical address is generated by the CPU and is also referred to as a virtual address. A physical address is the address seen by the memory unit. The set of all logical addresses is the logical address space; the set of all physical addresses is the physical address space.

Q) Memory-Management Unit (MMU)
Ans: The MMU is a hardware device that maps virtual to physical addresses. In the simplest MMU scheme, the value in a relocation register is added to every address generated by the CPU before it is sent to memory. The user program deals only with logical addresses; it never sees the real physical addresses.

Q) Dynamic loading
Ans: Since physical memory is small, it may not be possible for an entire program to be in main memory, so dynamic loading can be used. It is the responsibility of users to design their programs to take advantage of dynamic loading. Only the main function is loaded into main memory; when main() calls another function, it checks whether that function is already in main memory. If not, the loader loads the desired function into main memory and updates the program's address table.

A routine is not loaded until it is called. This achieves better memory-space utilization, because an unused routine is never loaded.

Q) Dynamic Linking
+ Linking is postponed until execution time.
+ A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
+ The stub replaces itself with the address of the routine and executes the routine.
+ The operating system is needed to check whether the routine is in the process's memory address space.
+ Dynamic linking is particularly useful for libraries.

Q) Overlays
Ans: Overlays are needed when a process is larger than the amount of main memory allocated to it. Only the instructions and data that are needed at any given time are kept in main memory. Overlays can be implemented by the user, but programming the design of an overlay structure is complex. The overlays for a two-pass assembler are as shown in the figure.

Q) Swapping
Ans: A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. The backing store is a fast disk large enough to store copies of all memory images of all users. Priority-based scheduling uses a variant of this swapping policy called roll out, roll in: if a higher-priority process arrives, the memory manager swaps out a lower-priority process and swaps in the higher-priority one; when the higher-priority process finishes, the lower-priority process can be swapped back into main memory. A process that is swapped out will be swapped back into the same memory space it occupied previously. The major part of swap time is transfer time. We can swap only idle processes; a process that is waiting for I/O cannot be swapped.

Q) Contiguous memory allocation
Ans: Each process is contained in a single contiguous section of memory.

1. Fixed Size Partition (or) Single-partition allocation
Divide main memory into fixed-sized partitions; each partition may contain exactly one process. The relocation register contains the value of the starting physical address, and the limit register contains the range of legal logical addresses. Every address generated by the CPU is checked as follows:

If logical address < limit register, the logical address is added to the relocation register to get the corresponding memory address; else a trap to the OS is generated.

Since every address is checked, the OS and other user programs are protected from being modified by the running process.
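The limit/relocation check just described can be sketched as follows. The constants and the MemoryError used to stand in for the hardware trap are illustrative assumptions.

```python
LIMIT = 1000        # range of legal logical addresses for this process
RELOCATION = 14000  # starting physical address of the partition

def translate(logical):
    # Every CPU-generated address is checked against the limit register
    if logical < LIMIT:
        return logical + RELOCATION   # legal: relocate into the partition
    # Out-of-range access: the hardware would trap to the OS
    raise MemoryError("trap to OS: addressing error")

print(translate(350))   # 14350
```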

                  2 Multiple-partition allocation

1) Fixed Size Partitions: Divide main memory into fixed-sized blocks. Here the memory allocated to a process may be larger than required. The difference between the allocated memory and the requested memory is called internal fragmentation: unused memory inside the partition.

0   P1  Block0 (0th address to 3rd address)
4   P1  Block1 (4th address to 7th address)
8       Block2 (8th address to 11th address)
12      Block3 (12th address to 15th address)

If P1 requires just 5 addresses, it is still allocated 2 blocks, i.e. 8 addresses, so the remaining 3 addresses are left unused.

2) Variable Size Partitions: Here main memory is divided into partitions of variable sizes. The operating system maintains information about:

a) allocated partitions
b) free partitions (holes)

In the beginning, all of main memory is empty and is considered one large block of available memory, a hole. Each process is given exactly the memory it requires. When a process terminates, it releases its memory, which can then be allocated to another process.

Memory is allocated to processes until finally no available block of memory (hole) is large enough to hold the next process. External fragmentation exists when there is enough total memory to satisfy a request but the available memory is not contiguous: memory is wasted outside the partitions. One solution to the problem of external fragmentation is:

a. Compaction: shuffle the memory contents so as to place all free memory together in one large block.

===============================================================================
Q) Dynamic Storage-Allocation Problem (or) Most commonly used strategies to select a free hole from the set of available holes
Ans: To satisfy a request of size n from a list of free holes, one of the following three policies can be used:

1. First-fit: Allocate the first hole that is big enough.
2. Best-fit: Allocate the smallest hole that is big enough; the entire list must be searched unless it is ordered by size. Produces the smallest leftover hole.
3. Worst-fit: Allocate the largest hole; the entire list must also be searched. Produces the largest leftover hole.

First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
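The three placement policies can be sketched as below. This is a simplified model in which free holes are kept as a plain list of sizes (a real allocator would also track addresses); the function names are illustrative.

```python
def first_fit(holes, n):
    # Return the index of the first hole that is big enough
    for i, h in enumerate(holes):
        if h >= n:
            return i
    return None

def best_fit(holes, n):
    # Smallest hole that is big enough -> smallest leftover hole
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    # Largest hole -> largest leftover hole
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]   # free hole sizes, in allocation order
print(first_fit(holes, 212))  # 1 (the 500 hole is the first that fits)
print(best_fit(holes, 212))   # 3 (300 leaves the smallest leftover)
print(worst_fit(holes, 212))  # 4 (600 leaves the largest leftover)
```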

===============================================================================
Q) Non-Contiguous Memory allocation

1. Paging: Paging is a memory-management scheme that permits non-contiguous memory allocation.

0   P1  Block0 (0th address to 3rd address)
4   P2  Block1 (4th address to 7th address)
8       Block2 (8th address to 11th address)
12  P1  Block3 (12th address to 15th address)

The logical address space of a process can be non-contiguous; e.g. P1 is allocated Block0 and Block3.

1. Divide physical memory into fixed-sized blocks called frames (size is a power of 2).
2. Divide logical memory into blocks of the same size called pages.
3. A logical address is divided into 2 parts:
   Page number (p): the page table is indexed by page number.
   Page offset (d): the displacement within the page.
4. A page table is allocated to each process (a pointer to the page table is stored in the PCB of the process). The page table translates logical to physical addresses: page 0 is in frame 1, page 1 is in frame 4, etc.
5. Internal fragmentation may occur due to paging.
6. If the size of the logical address space is 2^m and the page size is 2^n, then the higher-order m-n bits of the logical address give the page number and the n lower-order bits give the displacement within the page.

Ex: size of logical address space = 8 = 2^3, so m = 3; page size = 4 = 2^2, so n = 2.
So m-n = 1 bit for the page number (to represent pages 0 and 1), and the remaining n = 2 bits give the displacement within the page.
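The m/n split above can be sketched in code: the top m-n bits select the page and the low n bits are the offset. The function name is illustrative.

```python
def split(addr, m=3, n=2):
    # Logical address space of 2^m addresses, page size 2^n
    page = addr >> n                 # high-order m-n bits: page number
    offset = addr & ((1 << n) - 1)   # low-order n bits: displacement
    return page, offset

print(split(5))   # address 5 = 0b101 -> page 1, offset 1
```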

Q) Implementation of Page Table

The page table can be kept as:

1) A set of dedicated registers: efficient if the page table is small.

2) In memory: the page-table base register (PTBR) points to the page table, and the page-table length register (PTLR) indicates its size. Switching page tables requires changing only the values in these registers.
Advantage of keeping the page table in memory:
a. Less context-switch time.
Disadvantage:
b. Two memory accesses are required: one access to read the page-table entry and another to access the required memory address.

This problem can be solved by using a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB).

i. Each entry in the TLB consists of 2 parts:

I) a page number field
II) a value field (frame number field)

ii. When a logical address is generated by the CPU, its page number is presented to the TLB, where it is compared with all the entries of the TLB simultaneously.

If the page number is found in the TLB, its frame number is immediately available (TLB hit). If the page number is not found, the page table is checked (TLB miss).

With a TLB the search is fast, but TLB hardware is expensive, so the TLB is kept small and contains only a few frequently used page-table entries.
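The hit/miss flow above can be sketched as follows. The dict-based TLB and page table are illustrative stand-ins; a real TLB compares all entries in parallel in hardware.

```python
PAGE_SIZE = 4096
page_table = {0: 1, 1: 4, 2: 3}   # page number -> frame number
tlb = {0: 1}                       # small cache of recent translations

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)
    if page in tlb:                # TLB hit: frame number is immediate
        frame = tlb[page]
    else:                          # TLB miss: consult the page table
        frame = page_table[page]
        tlb[page] = frame          # cache the translation for next time
    return frame * PAGE_SIZE + offset

print(translate(100))   # page 0 -> frame 1 -> 4096 + 100 = 4196
```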

Q) Protection in Paging: A valid/invalid bit is attached to each entry in the page table. The OS sets this bit for each page to allow or disallow access to the page. When the bit is set to invalid, the page is not in the process's logical address space, so any access to it generates a trap to the OS.

Q) Shared pages in Paging

Reentrant code is code that never changes during execution, so it can be shared.

1. One copy of read-only (reentrant) code is shared among processes (e.g. text editors, compilers, window systems). Ex: ed1, ed2, ed3 is shared among processes P1 and P2.
2. Shared code must appear in the same location in the logical address space of all processes.

Each process keeps a separate copy of its private code and data, e.g. data1 and data2. The pages for the private code and data can appear anywhere in the logical address space.

==============================================================================
Q) Segmentation

Segmentation is a memory-management scheme that supports the user's view of memory. When a user program is compiled, the compiler generates segments such as:

1) the code segment
2) the global variables segment
3) the heap memory segment
4) the stack segment, etc.

Each entry in the segment table has a segment base (the starting physical address of the segment) and a limit (the length of the segment).

A logical address is divided into 2 parts: segment number and offset. The segment table is indexed by segment number.

The segment-table base register (STBR) stores the location of the segment table in main memory, and the segment-table length register (STLR) stores the number of segments used by the program.

The segment number 's' is used to find the entry in the segment table; the required entry is 's' locations from the beginning of the table. Once the required entry is found, the offset 'd' is compared with the limit.

If (offset < limit), then the offset is added to the base entry to generate the physical address.
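The segment-table lookup just described can be sketched as follows. The example (base, limit) values are illustrative, and MemoryError stands in for the hardware trap.

```python
segment_table = [(1400, 1000), (6300, 400)]  # entry s: (base, limit)

def translate(s, d):
    base, limit = segment_table[s]   # entry s locations from the start
    if d < limit:
        return base + d              # offset is added to the base
    raise MemoryError("trap: offset beyond segment limit")

print(translate(1, 53))   # 6300 + 53 = 6353
```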

Q) Shared Segments: Code sharing occurs at the segment level. Shared segments must have the same segment number.

Allocation is a dynamic storage-allocation problem: use best fit or first fit; it may cause external fragmentation.

Protection: protection bits (read/write/execute privileges) are associated with segments. Keeping an array in a separate segment lets the hardware check for illegal array indexes.

===============================================================================
Q) Virtual memory
Ans: In paging and segmentation, a program will execute only if the entire process is in main memory. With virtual memory, only part of the program needs to be in memory for execution.

1. A program larger than the available main memory can still run, since only part of the program is loaded into main memory. The logical address space can therefore be much larger than the physical address space.
2. Virtual memory allows address spaces to be shared by several processes.

Virtual memory can be implemented via demand paging or demand segmentation.

The large blank space between the heap and the stack is part of the virtual address space. Virtual memory allows files and memory to be shared by 2 or more processes through page sharing, as shown in figure (b). Figure (a) depicts a virtual memory that is larger than physical memory.

                  ==============================================================================

Q) Demand paging
Demand paging is the technique of loading pages from disk to main memory only when a page is needed. Hence, using less physical memory, we get faster response. Demand paging is similar to PAGING + SWAPPING: instead of swapping in a whole process, the pager brings into memory only those pages that are needed.

To distinguish between pages that are in memory and pages that are on disk, a valid/invalid bit is used. It is attached to each entry in the page table: when the bit is set to valid, the page is legal and in memory; when it is set to invalid, either the page is illegal or it is on disk.

When the CPU generates a logical address whose page-table entry is set to invalid, a page fault occurs: the required page is on disk. A section of the hard disk called swap space is used to hold pages that are not present in memory.

Q) Procedure for Handling a Page Fault

1) The CPU generates a logical address; if the valid/invalid bit for the page containing this address is set to invalid, a page fault occurs.
2) The page fault causes a trap to the operating system.
3) Check whether the logical address is within the logical address space of the process (the PTBR and PTLR are checked). If the required logical address is not within the logical address space of the process, terminate the process; else the page is legal but not in memory, i.e. it is on disk.
4) To bring the required page into memory: find a free main-memory frame, map the address to a disk block, fetch the disk block, and load it into the free frame.
5) When the required page has been brought into memory, update the page table to indicate that the page is in memory.
6) Restart the instruction interrupted by the trap. The process continues as if the page had always been in memory.
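The steps above can be sketched at a high level as follows. Everything here (the dict page table, the disk dict standing in for swap space, free_frames) is an illustrative stand-in for real OS data structures.

```python
page_table = {0: ("valid", 1), 1: ("invalid", None)}  # page -> (bit, frame)
disk = {1: "page-1 contents"}                         # swap space
free_frames = [5, 6]

def access(page):
    bit, frame = page_table[page]
    if bit == "valid":
        return frame                     # page is in memory: no fault
    # Page fault: trap to the OS; the page is legal but on disk
    frame = free_frames.pop(0)           # 4) find a free frame...
    contents = disk[page]                #    ...fetch the disk block
    page_table[page] = ("valid", frame)  # 5) update the page table
    return frame                         # 6) restart the instruction

print(access(1))   # page fault serviced; page 1 now in frame 5
```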

Q) What is pure demand paging?
Ans: If no page belonging to the executing process is in main memory, the process will fault for every page it needs; page faults occur until every needed page is in memory. This scheme is known as pure demand paging: NEVER BRING A PAGE UNTIL IT IS REQUIRED.

Q) Page Replacement
1. Find the location of the desired page on the disk.
2. Find a free frame:
   i) If there is a free frame, use it.
   ii) If there is no free frame, use a page-replacement algorithm to select a victim page in memory that is not really in use.
   iii) If the modify bit associated with the victim frame is set to 1, the page is written to disk, and the valid/invalid bit of the victim page's page-table entry is changed to invalid, to indicate the victim page is no longer in memory; else there is no need to write it to disk.
3. Read the desired page into the freed frame and update the frame and page tables.
4. Restart the user process.

=================================================================================================
Q) Page Replacement Algorithms
Goal: produce a low page-fault rate. The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

1. First In First Out (FIFO) Page Replacement: Uses the time when a page was brought into memory: the page that was brought in first, i.e. the oldest page, is chosen for replacement. We create a FIFO queue to hold all pages in memory: when a page is brought into memory, we insert it at the tail of the queue, and we replace the page at the head of the queue.

FIFO: 15 page faults.

A problem with FIFO is Belady's anomaly. For the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5:
- With 1 frame allocated to the process, FIFO causes 12 page faults.
- With 2 frames, 12 page faults.
- With 3 frames, 9 page faults.
- With 4 frames, 10 page faults.

As the number of frames allocated to a process increases, page faults should decrease; but with FIFO replacement the page faults can increase. This is Belady's anomaly.

2. Optimal Page Replacement Algorithm: Replace the page that will not be used for the longest period of time. It is used for measuring how well other algorithms perform; it is not practical to implement OPT, as it is not possible to know what the future references will be. Ex:

Optimal page replacement: 9 page faults.

3. Least Recently Used (LRU) Page Replacement: Replace the page that has not been used for the longest period of time. LRU makes use of the time of last use of each page. Ex:

Number of page faults = 12.
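The FIFO fault counts quoted above, including Belady's anomaly, can be checked with a small simulator. This sketch uses an OrderedDict to model both policies: for FIFO the order is insertion order, and for LRU a hit refreshes the page's position.

```python
from collections import OrderedDict

def count_faults(refs, frames, policy="FIFO"):
    memory = OrderedDict()   # keys are the pages currently in frames
    faults = 0
    for page in refs:
        if page in memory:
            if policy == "LRU":
                memory.move_to_end(page)   # refresh recency on a hit
            continue
        faults += 1
        if len(memory) == frames:
            memory.popitem(last=False)     # evict head: oldest / least recent
        memory[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(count_faults(refs, 3, "FIFO"))  # 9
print(count_faults(refs, 4, "FIFO"))  # 10 -- more frames, MORE faults
print(count_faults(refs, 3, "LRU"))   # 10
```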

LRU requires hardware assistance to determine the time of last use of a page. We can use 1) a stack or 2) a counter to implement LRU page replacement.

1) Stack implementation: keep a stack of page numbers in a doubly linked form. When a page is referenced, it is moved to the top of the stack; this requires 6 pointers to be changed.

2) Counter implementation: a time-of-use field is added to each frame. The CPU maintains a counter that is incremented on every page reference, and the counter value is copied into the time-of-use field of the referenced page.

Disadvantages of LRU page replacement: updating the counter or stack must be done for every memory reference, HENCE LRU IS SLOW, and the implementation requires hardware assistance.
==============================================================================
Q) LRU approximation page replacement
1) Using a reference bit
2) Additional-reference-bits algorithm
3) Second-chance algorithm
4) Enhanced second-chance algorithm
5) Counting-based page replacement

This has two schemes: 1) Least Frequently Used (LFU) and 2) Most Frequently Used (MFU).

6) Page-buffering algorithms

Q) What is Thrashing?
Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur with either global or local page replacement, as explained below. The solution to thrashing is the working-set strategy: if we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know how many frames a process needs, we use the locality model of process execution.

Here a locality is a set of pages that are actively used together. The locality model states that, as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate its current locality, it will not fault again until it changes locality.

The working-set model uses a working set and a parameter ∆, the working-set window. The set of pages in the most recent ∆ page references is the working set. Let WSSi be the working-set size of process Pi, i.e. Pi needs WSSi frames. Let D be the total demand for frames, D = Σ WSSi, and let m be the total number of available frames.

If D > m, thrashing occurs because some processes will not have enough frames. So:
if (D > m), the OS selects a process to suspend; else the OS may initiate another process.

The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.
========================================================================
Q) How does thrashing occur with global and local page replacement?
Ans: With global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, which can cause a page fault in some other process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more processes queue up for the paging device, the ready queue empties and the CPU becomes idle. The OS then introduces new processes because the CPU is free, which further increases the number of page faults.

With local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs only the frames allocated to that process are replaced. When a process starts thrashing, other processes are not affected. But since the processes queue up for the paging device most of the time, the average service time for a page fault increases, so effective access time increases even for a process that is not thrashing.

Q) Page Table Structure (or) Structure of the page table in memory

Hierarchical paging, hashed page tables, inverted page tables.

Hierarchical Paging: If the page table is large, we break it up into multiple page tables, i.e. the page table itself is paged. A simple technique is a two-level page table.

Two-Level Paging Example: A logical address (on a 32-bit machine with a 4K page size) is divided into:
1. a page number consisting of 20 bits,
2. a page offset consisting of 12 bits.
Since the page table is paged, the page number is further divided into:
1. a 10-bit outer page number p1,
2. a 10-bit inner page number p2.
Thus a logical address is as follows:

page number | page offset
 p1  |  p2  |     d
 10  |  10  |    12

where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table.

Two-Level Page-Table Scheme
Address-Translation Scheme: address translation for a two-level 32-bit paging architecture.
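The 10/10/12 split can be sketched in code; the example address is illustrative.

```python
def split_two_level(addr):
    d = addr & 0xFFF             # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: inner page number
    p1 = addr >> 22              # top 10 bits: outer page-table index
    return p1, p2, d

# 0x00403005 -> p1 = 1, p2 = 3, d = 5
print(split_two_level(0x00403005))
```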

Hashed Page Tables: Hashed page tables are common in address spaces larger than 32 bits. The hash table is indexed by hash value, and each entry contains a linked list of elements that hash to the same location. Each element consists of 3 fields:

1. page number
2. frame number
3. pointer to the next element in the linked list

The algorithm works as follows: the page number is passed to a hash function to get the hash value. Locating that value in the hash table is easy, since the table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address; if there is no match, the next entries in the linked list are searched.
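This lookup can be sketched as follows, with Python lists standing in for the per-bucket linked lists; the modulo hash and bucket count are illustrative assumptions.

```python
NUM_BUCKETS = 8
buckets = [[] for _ in range(NUM_BUCKETS)]  # each element: (page, frame)

def insert(page, frame):
    buckets[page % NUM_BUCKETS].append((page, frame))

def lookup(page):
    for p, frame in buckets[page % NUM_BUCKETS]:  # walk the chain
        if p == page:
            return frame
    raise KeyError("page fault")

insert(3, 7)
insert(11, 2)        # 3 and 11 hash to the same bucket
print(lookup(11))    # 2, found after walking the chain
```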

Inverted Page Table

An inverted page table has one entry for each frame of physical memory. Each entry holds a process-id and a page number. A logical address also has 3 parts: process-id, page number, and offset.

The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, frame number i is combined with the offset to form the physical address. If there is no match, the address is illegal.

Advantages and disadvantages: An inverted page table decreases the memory needed to store page tables, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to limit the search to one, or at most a few, page-table entries.

Inverted Page Table Architecture
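The inverted-table lookup can be sketched as below: one entry per physical frame, searched linearly for a (pid, page) match, with the matching index i serving as the frame number. The example table is illustrative.

```python
inverted = [("P1", 0), ("P2", 0), ("P1", 3)]   # entry i describes frame i
PAGE_SIZE = 4096

def translate(pid, page, offset):
    for i, entry in enumerate(inverted):       # linear search of the table
        if entry == (pid, page):
            return i * PAGE_SIZE + offset      # frame i combined with offset
    raise MemoryError("illegal address")

print(translate("P1", 3, 10))   # match at entry 2 -> 2*4096 + 10 = 8202
```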

                  Unit-IV CPU Scheduling

Basic concepts: To maximize CPU utilization, some process must be running at all times. In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns it to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU–I/O burst cycle: Process execution begins with a CPU burst, followed by an I/O burst, followed by another CPU burst, then another I/O burst, and so on, as shown in the figure. An I/O-bound program has many short CPU bursts; a CPU-bound program has a few long CPU bursts.

Schedulers: a scheduler is a module in the OS that makes scheduling decisions.
- Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
- Medium-term scheduler: selects which processes should be swapped in/out of memory.
- Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (CPU) scheduler.

Dispatcher: gives control of the CPU to the process selected by the short-term scheduler. This function involves:

1. switching context
2. switching to user mode
3. jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency. This must be kept small.

Ready Queue: the ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of the processes.

                  Pre-emptive Scheduling

CPU scheduling decisions take place under one of four conditions:

1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state, for example in response to an interrupt
3. When a process switches from the waiting state to the ready state, say at completion of I/O
4. When a process terminates

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one.

If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive, or cooperative. In a non-preemptive system a process runs until it gives up the CPU itself, i.e. when it waits for I/O or when it completes. Otherwise the system is said to be preemptive.

Windows used non-preemptive scheduling up to Windows 3.x and started using preemptive scheduling with Windows 95. The Mac OS used non-preemptive scheduling prior to Mac OS X and preemptive scheduling since then. Note that preemptive scheduling is only possible on hardware that supports a timer interrupt.

                  Note that pre-emptive scheduling can cause problems when two processes share data because one process may get interrupted in the middle of updating shared data structures

Preemption can also be a problem if the kernel is busy executing a system call (e.g. updating critical kernel data structures) when the preemption occurs. Solution: wait until the system call has either completed or blocked before allowing the preemption. But this solution is problematic for real-time systems.

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling them on exiting it. But this should be done only for very short pieces of code that take little CPU time.

Scheduling criteria used to compare CPU scheduling algorithms:
1. CPU utilization: in a real system, CPU utilization ranges from 40% to 90%.
2. Throughput: the number of processes completed per unit time.
3. Turnaround time: time when the process completed minus time when the process was submitted, i.e. it includes waiting time and burst time.
4. Waiting time: the amount of time a process spends waiting for the CPU.
5. Response time: time when the first response came minus time when the process was submitted.

Optimization criteria: max CPU utilization, max throughput, min turnaround time, min waiting time, min response time.

CPU Scheduling Algorithms

1) First-Come, First-Served (FCFS) Scheduling
FCFS is non-preemptive. Processes are scheduled in the order they arrive. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages:
1. The average waiting time is often quite long.
2. Convoy effect (small processes wait in the queue for a big process to leave the CPU).

Example:
Process  Burst Time
P1       24
P2       3
P3       3

Suppose the processes arrive in the order P1, P2, P3.

The Gantt chart for the schedule is: P1 | P2 | P3

0    24    27    30
Waiting time for P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.

2) Shortest-Job-First (SJF) Scheduling
Schedule the process with the shortest burst time. If the CPU burst times of two processes are the same, FCFS is used.
Advantages: the average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either preemptive or non-preemptive:
1. Non-preemptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Preemptive SJF, or Shortest-Remaining-Time-First (SRTF): if a newly arrived process is shorter than what remains of the currently executing process, preempt the currently executing process.

Example:
Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4

SJF (non-preemptive) Gantt chart: P1 | P3 | P2 | P4

0    7    8    12    16
Average waiting time = [0 + (8-2) + (7-4) + (12-5)]/4 = 4
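Both worked examples (the FCFS average of 17 and the non-preemptive SJF average of 4) can be checked with a small simulation. The helper below and its `(name, arrival, burst)` tuple format are illustrative, not part of any real scheduler:

```python
# Minimal non-preemptive scheduling simulation: run each picked
# process to completion and record how long it waited after arrival.

def waiting_times(processes, policy):
    """processes: list of (name, arrival, burst); policy picks from ready list."""
    remaining = sorted(processes, key=lambda p: p[1])   # by arrival time
    time, waits = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time] or [remaining[0]]
        p = policy(ready)
        remaining.remove(p)
        time = max(time, p[1])        # idle until the process arrives
        waits[p[0]] = time - p[1]     # waited since arrival
        time += p[2]                  # run to completion
    return waits

fcfs = lambda ready: min(ready, key=lambda p: p[1])    # earliest arrival
sjf  = lambda ready: min(ready, key=lambda p: p[2])    # shortest burst

w = waiting_times([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)], fcfs)
print(sum(w.values()) / 3)    # FCFS example: (0 + 24 + 27)/3 = 17.0

w = waiting_times([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)], sjf)
print(sum(w.values()) / 4)    # SJF example: (0 + 6 + 3 + 7)/4 = 4.0
```

Swapping the `policy` function is all it takes to compare the two algorithms on the same workload.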

Example of preemptive SJF:
Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4

SJF (preemptive) Gantt chart: P1 | P2 | P3 | P2 | P4 | P1

0    2    4    5    7    11    16
Average waiting time = (9 + 1 + 0 + 2)/4 = 3

3) Priority Scheduling
A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. preemptive 2. non-preemptive.
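The preemptive SJF (SRTF) example above can likewise be verified with a tick-by-tick simulation; the helper name and tuple format are illustrative:

```python
# Tick-by-tick simulation of shortest-remaining-time-first: at each
# time unit, run the arrived process with the least remaining burst.

def srtf_waiting(processes):
    """processes: list of (name, arrival, burst); returns waiting times."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: a for name, a, _ in processes}
    finish, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                 # CPU idle until next arrival
            time += 1
            continue
        n = min(ready, key=lambda r: remaining[r])   # shortest remaining
        remaining[n] -= 1             # run for one tick
        time += 1
        if remaining[n] == 0:
            del remaining[n]
            finish[n] = time
    # waiting time = turnaround time - burst time
    return {name: finish[name] - a - b for name, a, b in processes}

w = srtf_waiting([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(sum(w.values()) / 4)    # example result: (9 + 1 + 0 + 2)/4 = 3.0
```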

Problem: starvation - low-priority processes may never execute. Solution: aging - as time progresses, increase the priority of the process.

4) Round Robin (RR)
Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once, and no process waits more than (n-1)q time units.
Performance: if q is very large, RR behaves like FCFS; if q is small, q must still be large with respect to the context-switch time, otherwise the overhead is too high.

Example of RR with time quantum = 4:
Process  Burst Time
P1       24
P2       3
P3       3

The Gantt chart is: P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1

0    4    7    10    14    18    22    26    30
Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 ≈ 5.66

5) Multilevel Queue Scheduling
The ready queue is partitioned into separate queues. For example, the ready queue can be divided into two queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, Round Robin can be used in the foreground queue and FCFS in the background queue.

Scheduling must also be done between the queues. This can be done in two ways:

1. Fixed-priority scheduling: the foreground queue has the highest priority. All the processes in the foreground queue must be completed and the queue must be empty before the processes in the background queue are given the CPU, as shown in the figure below. Disadvantage: starvation.

2. Time slice: each queue gets a certain amount of CPU time which it can schedule amongst its processes, e.g. 80% to the foreground queue in RR and 20% to the background queue in FCFS.

6) Multilevel Feedback Queue Scheduling
A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes will be in the highest-priority queue. If a process takes too much CPU time, it is moved to a lower-priority queue. If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation.

As shown in the figure below, let there be three queues Q0, Q1, Q2:

1. Q0 - time quantum 8 milliseconds
2. Q1 - time quantum 16 milliseconds
3. Q2 - FCFS

Scheduling:

1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 milliseconds. If it does not finish in 8 milliseconds, it is moved to queue Q1.
2. At Q1 the process receives 16 additional milliseconds. If it still does not complete, it is moved to queue Q2.
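The demotion rule above can be sketched as a small simulation. This is a simplified model (no arrivals or aging, and each level is drained in priority order), with invented helper names:

```python
# Sketch of the three-queue MLFQ: Q0 (quantum 8), Q1 (quantum 16),
# Q2 (FCFS). A job that exhausts its quantum is demoted one level.
from collections import deque

def mlfq(jobs):                      # jobs: {name: total CPU burst}
    queues = [deque(jobs.items()), deque(), deque()]
    quanta = [8, 16, None]           # None = run to completion (FCFS)
    order = []
    for level, (q, quantum) in enumerate(zip(queues, quanta)):
        while q:
            name, need = q.popleft()
            slice_ = need if quantum is None else min(need, quantum)
            order.append((name, level, slice_))
            if need > slice_:        # quantum expired: demote
                queues[level + 1].append((name, need - slice_))
    return order

print(mlfq({"A": 5, "B": 30}))
# A finishes within Q0's quantum; B uses 8 ms in Q0, 16 ms in Q1,
# and its final 6 ms in Q2.
```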

Q) Synchronization
Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of a process synchronization problem is the Producer-Consumer (or Bounded-Buffer) problem: make sure that the producer process does not add data when the buffer is full, and that the consumer process does not take data when the buffer is empty.

The code for the producer process can be written as follows:

while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ;  /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed from the buffer. in is an index that always points to the next empty slot after the last filled slot; out is an index that always points to the first filled slot, at the head of the circular queue.

The code for the consumer process is:

while (true) {
    while (counter == 0)
        ;  /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The producer and consumer code above produces a RACE CONDITION: a situation where the outcome of execution depends on the order in which several co-operating processes access shared data.

For example, let counter = 5. If the producer produces one item and the consumer consumes one item, the correct value of counter is still 5. But counter++ and counter-- are each implemented as a load, a modify and a store; if these steps are interleaved badly, the consumer's store can leave counter = 4, and if the consumer's store (T5) is executed before the producer's store (T4), counter = 6 - either way an inconsistent result.

Solution: to avoid a race condition, make sure that only one process at a time updates the shared variable (here, counter).
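The lost-update scenario can be replayed deterministically. The step names below are invented labels for the load and store halves of counter++ (producer) and counter-- (consumer):

```python
# Deterministic replay of the race on `counter` (initially 5):
# counter++ and counter-- each break into load / modify-store steps,
# and interleaving those steps can lose an update.

def interleave(order):
    counter = 5
    regs = {}
    def step(name):
        nonlocal counter
        if name == "p_load":  regs["p"] = counter            # producer loads
        if name == "p_store": counter = regs["p"] + 1        # counter++
        if name == "c_load":  regs["c"] = counter            # consumer loads
        if name == "c_store": counter = regs["c"] - 1        # counter--
    for name in order:
        step(name)
    return counter

# Serial execution: the correct result, counter stays 5
print(interleave(["p_load", "p_store", "c_load", "c_store"]))  # 5
# Both load 5 before either stores; the consumer's store wins -> 4
print(interleave(["p_load", "c_load", "p_store", "c_store"]))  # 4
# Same loads, but the producer's store comes last -> 6
print(interleave(["p_load", "c_load", "c_store", "p_store"]))  # 6
```

Only the ordering of the four steps changes between the three runs, which is exactly why the outcome is called a race.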

Q) Critical section problem
Ans: Each process has a segment of code called its critical section. The critical section is the code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical section code at the same time. For this to happen:
1. Each process must request permission to enter its critical section - the entry section.
2. The end of the critical section is marked by the exit section.
3. The rest of the code after the critical section is the remainder section.

A solution to the critical section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes that are not in their remainder sections may take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: there is a bound on the number of times other processes may enter their critical sections after a process has requested entry and before that request is granted, so no process waits indefinitely.

Q) Peterson's solution to the critical section problem (software-based solution)
Ans: Each process Pi (i = 1, 2) runs the following code, where j denotes the other process:

do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ;  /* busy wait */
    /* critical section */
    flag[i] = FALSE;
    /* remainder section */
} while (TRUE);

A sample interleaving:

Time  Process 1                              Process 2
T0    flag[1] = TRUE; turn = 2               flag[2] = TRUE
T1                                           turn = 1
T2    while (flag[2] && turn == 2):          while (flag[1] && turn == 1):
      turn is now 1, so the loop exits       loops until flag[1] becomes false
T3    enters critical section                (still waiting)
T4    flag[1] = FALSE                        enters critical section
T5    enters remainder section               flag[2] = FALSE, then enters
                                             remainder section
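Mutual exclusion for Peterson's algorithm can be checked exhaustively for a single pass through the protocol. The model below is a sketch: each process is reduced to five numbered steps, and every interleaving of those steps is explored:

```python
# Tiny model checker for Peterson's algorithm: enumerate every
# interleaving of the two processes' steps and verify that both are
# never in the critical section at once.

def check_peterson():
    # state = (flag1, flag2, turn, pc1, pc2); pc meanings:
    # 0: set flag  1: set turn  2: while test  3: in CS  4: done
    def step(st, i):
        f, turn, pc = {1: st[0], 2: st[1]}, st[2], {1: st[3], 2: st[4]}
        j = 2 if i == 1 else 1
        if pc[i] == 0:
            f[i] = True; pc[i] = 1
        elif pc[i] == 1:
            turn = j; pc[i] = 2
        elif pc[i] == 2:                       # while (flag[j] && turn == j)
            pc[i] = 2 if (f[j] and turn == j) else 3
        elif pc[i] == 3:                       # leave the critical section
            f[i] = False; pc[i] = 4
        return (f[1], f[2], turn, pc[1], pc[2])

    start = (False, False, 1, 0, 0)
    seen, stack = {start}, [start]
    while stack:
        st = stack.pop()
        if st[3] == 3 and st[4] == 3:          # both in CS: violation
            return False
        for i in (1, 2):
            if st[2 + i] != 4:                 # process i not finished
                nxt = step(st, i)
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return True

print(check_peterson())   # True: no interleaving violates mutual exclusion
```

Dropping the `turn = j` line from the model makes the check fail, which is a quick way to see why both the flag and the turn variable are needed.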

10.1 File Concept
Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines or records. A file has a structure, which depends on its type.

File Attributes

1. Name - the name of the file, for user understanding.
2. Identifier - used by the OS to identify a file (e.g. inode number).
3. Type - text file, executable file, binary file, etc.
4. Location - the location of the file on the drive.
5. Size - the current size of the file.
6. Protection - controls who can read, write or execute the file.
7. Time, date and user identification - gives information about when the file was created, last modified and last used. This data is used for protection, security and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. The directory structure is kept on disk.

                  File Operations

The file abstract data type supports many common operations:
1. Creating a file: find free space on disk, add an entry to the directory.
2. Writing a file: write data at the current file-position pointer location and update the pointer.
3. Reading a file: read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek): change the pointer location to a given value.
5. Deleting a file: free the space allocated to the file and remove its entry from the directory.
6. Truncating a file: delete the data and update the file size.

                  Open File Table

Most operating systems require that files be opened before access and closed after access. Information about all files that are currently open in the system is stored in an open file table. This system-wide open file table is maintained by the OS, and information about the files opened by a process is stored in that process's own open file table. The open file table has the following information:

1. File pointer - records the current position in the file for the next read or write.
2. File-open count - how many times the current file has been opened (simultaneously, by different processes) and not yet closed. When this counter reaches zero, the file's entry can be removed from the table.
3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With an advisory lock, the software developers must make sure locks are acquired and released appropriately.
4. With a mandatory lock, the OS ensures that locking is enforced.
5. UNIX uses advisory locks and Windows uses mandatory locks.

                  File Types

A common technique for implementing file types is to include the type as part of the filename. The name is split into two parts:

1. file name
2. extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on it. For example, when a user opens Microsoft Word and clicks File > Open, the user may specify the filename without the extension; the application looks for a file with the given name and the extension it expects. Such extensions are conventions followed by applications - hints that are not enforced by the operating system. UNIX systems use a crude magic number stored at the beginning of the file to indicate its type. In the Mac OS X operating system, a file produced by a word processor has the word processor's name recorded as its creator; when the user double-clicks on the file, Word opens the required file. Some common file types are shown below.

File Structure: some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and where the first instruction is.

If the OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, finding the first executable statement, etc.).

                  Macintosh files have two forks - a resource fork and a data fork The resource fork contains information relating to the UI such as icons and button images and can be modified independently Data fork contains the code or data

                  Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units such as a single byte, a data record, or a structure.

The number of logical units which fit into one physical block determines the packing. For example, if a student file is organized internally as records, each record takes 64 bytes, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e. 8 records can be stored per block.

Internal fragmentation may occur because of this.
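The packing arithmetic and the wasted space can be shown with integer division; the 96-byte record size in the second case is an invented variation to illustrate fragmentation:

```python
# Packing: how many logical records fit in one physical block, and
# how many bytes per block are lost to internal fragmentation.
BLOCK_SIZE, RECORD_SIZE = 512, 64

packing = BLOCK_SIZE // RECORD_SIZE        # records per block
wasted  = BLOCK_SIZE % RECORD_SIZE         # leftover bytes per block
print(packing, wasted)                     # 8 records, 0 wasted bytes

# With 96-byte records the fit is not exact and space is wasted:
print(512 // 96, 512 % 96)                 # 5 records, 32 bytes lost per block
```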

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: we cannot randomly access an arbitrary record; records are read and written sequentially, one record after another.

A sequential access file emulates magnetic tape operation and generally supports the following operations:
- read next: read a record and advance to the next position
- write next: write a record and advance to the next position
- rewind
- skip n records: may or may not be supported; n may be limited to positive numbers, or limited to +/- 1

2. Direct access (or relative access): the file is made up of fixed-length records, so we can randomly access any record. There is no restriction on the order of reading or writing. Direct access files are useful when we need to access large amounts of data randomly, e.g. a database.

Operations supported include (n is the relative block number, i.e. relative to the beginning of the file):
1. read n - read record number n
2. write n - write record number n
3. jump to record n - n could be 0 or the end of file
4. query current record - used to return to this record later

Sequential access can easily be emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct access method. We construct an index for the file which contains pointers to the various blocks. To search the file, we first search the index and then use the pointer to access the file directly.

For example, if a student file has 120,000 records sorted by student id, we can create an index consisting of the first student id in each block. If 64 records can be stored in each block, the number of blocks needed to store 120,000 records is 120000/64 = 1875, i.e. about 2000 blocks, so the index has about 2000 entries. The index is kept in main memory so that searching becomes faster.

If the index itself becomes large, we create an index for the index file (e.g. a B-tree).
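The in-memory index lookup described above amounts to a binary search over the first key of each block. A sketch with toy ids (the real file would have one index entry per disk block):

```python
# Index-based lookup: keep the first student id of each block in a
# sorted in-memory index, then binary-search to find the block to read.
from bisect import bisect_right

# first student id stored in each block of the (sorted) file
index = [1, 65, 129, 193]          # block 0 holds ids 1..64, etc.

def block_for(student_id):
    """Return the block number that should hold student_id."""
    return bisect_right(index, student_id) - 1

print(block_for(70))    # id 70 falls in block 1 (ids 65..128)
print(block_for(1))     # block 0
```

One index probe replaces a scan of the whole file: only the single block returned needs to be read from disk.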

                  Q) Allocation Methods

The problem is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                  Three major methods of allocating disk space are

                  1 Contiguous 2 Linked

                  3 Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk. If the file is n blocks long and starts at location b, it occupies blocks b, b+1, b+2, ..., b+n-1. The directory entry for each file indicates the address of the starting block (b) and the length (n).

Advantages:

1. Direct access to block i is easy: we find the block as b+i. Since the file is stored sequentially, sequential access is also easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

                  Disadvantages

1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape and then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even if the final size is known, the entire size may not be used by the file at the beginning, which leads to wasted space. If the size of the file is not known in advance, then: (i) if we allocate too little space, the file cannot be extended; the solution is to find a larger hole, copy the contents of the file to the new space and release the previous space, which works only as long as such a hole exists, and is time consuming; (ii) if we overestimate the amount of space needed, much of the space is unused, which leads to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme: a set of contiguous blocks is allocated at first, and if the file needs more blocks, another set of contiguous blocks, known as an extent, is added. The directory entry then records the location, a block count, and a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.
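The b+i address mapping that makes contiguous allocation fast for direct access is one line of arithmetic; the helper name below is illustrative:

```python
# Contiguous allocation address mapping: a file starting at block b
# with length n occupies blocks b .. b+n-1, so logical block i maps
# directly to physical block b + i.

def physical_block(b, n, i):
    if not 0 <= i < n:
        raise IndexError("logical block outside the file")
    return b + i

print(physical_block(14, 3, 0))   # file at block 14, length 3 -> block 14
print(physical_block(14, 3, 2))   # last logical block -> block 16
```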

2. Linked allocation: each file is a linked list of disk blocks, and the disk blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, the user sees blocks of 512-4 = 508 bytes. To create a new file, we simply create a new entry in the directory; the pointer is nil for an empty file, and the size field of an empty file is 0.

To add more data to an existing file, a free block is found, the data is written to it, and it is linked to the end of the file. To read a file, we read the blocks by following the pointers from block to block.

Advantages:
1. It is easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is greater.
2. The number of disk seeks required is greater.
3. The pointers use a noticeable percentage of the file's disk space.
4. It is inefficient for direct access files: to access the i-th block, i disk reads are required.
5. It is not reliable: pointers may be lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (e.g. 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk seeks, reduces the space wasted on pointers since fewer pointers are needed, and simplifies free-list management. Disadvantage: internal fragmentation - more space is wasted when a cluster is only partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by the table value 0.

                  Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Random access time is improved.

Disadvantage: the number of disk seeks is greater. The disk head must move to the start of the volume to read the FAT and find the location of the block, then move to the location of the block itself.
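A FAT chain walk can be sketched with a plain array; the table below models the 5-block example file (9 → 16 → 1 → 10 → 25), with -1 as an assumed end-of-file marker:

```python
# Walking a FAT chain: fat[k] holds the next block of the file that
# occupies block k, 0 marks a free block, and EOF marks end-of-file.
EOF = -1
fat = [0] * 32
fat[9], fat[16], fat[1], fat[10], fat[25] = 16, 1, 10, 25, EOF

def file_blocks(start):
    """Collect a file's blocks by following the FAT from its first block."""
    blocks = []
    while start != EOF:
        blocks.append(start)
        start = fat[start]
    return blocks

print(file_blocks(9))   # [9, 16, 1, 10, 25]
```

Because the whole table sits in one place, random access means indexing the (cached) FAT repeatedly rather than seeking to each data block in turn.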

                  3 Indexed allocation

Each file has its own index block, which is an array of disk block addresses. The i-th entry in the index block points to the i-th block of the file.

                  The directory contains the address of the index block

                  When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found and the data is written to it; the address of the block is then recorded in the index block.

                  Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

Disadvantages

1. Seek time is greater.
2. The number of disk seeks required is greater.
3. It is not reliable: if the index block is lost or damaged, the file is inaccessible.
4. The index block must be kept in memory. If this memory is not available, we have to read the index block and then the desired data block, i.e. 2 disk accesses are required. To access a block near the end of a large file, we may need to read several index blocks before reading the needed data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

Various mechanisms are used for the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header giving the name of the file and a set of the first 100 disk-block addresses; the last address holds the address of another index block, for large files.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

3. Combined scheme: for example, say there are 15 pointers of the index block in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block - an index block containing the addresses of the file's data blocks; the second points to a double indirect block, and the third points to a triple indirect block, as shown below.
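The maximum file size under the combined scheme follows from a short calculation. The 4 KB block size and 4-byte block addresses below are example parameters not given in the text:

```python
# Maximum file size with 12 direct pointers plus one single, one
# double and one triple indirect pointer (assumed 4 KB blocks,
# 4-byte block addresses).
BLOCK = 4096
ADDRS = BLOCK // 4              # addresses per index block = 1024

direct = 12 * BLOCK
single = ADDRS * BLOCK
double = ADDRS ** 2 * BLOCK
triple = ADDRS ** 3 * BLOCK

max_bytes = direct + single + double + triple
print(max_bytes)                # just over 4 TiB with these parameters
```

Almost all of that capacity comes from the triple indirect pointer; the 12 direct pointers alone cover only the first 48 KB, which is why small files are cheap to access under this scheme.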

                  Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

The free-space list can be implemented as:

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (bit map): each block is represented by 1 bit. If the block is free the bit is 1; if the block is allocated the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. The free-space bit map would be 001111001000...

Advantages: simple, and makes it easy to find contiguous space for files.

Disadvantages:

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a large disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1 TB disk could cause thousands of blocks of bit maps to be updated, because the data blocks may be scattered all over the disk.
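A first-free search over the bit vector is a simple scan; the bitmap below encodes the worked example (blocks 2, 3, 4, 5 and 8 free, 1 = free), and the helper names are illustrative:

```python
# Bit-vector free-space management: scan for the first set bit,
# allocate that block by clearing the bit.
bitmap = [0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0]

def first_free(bm):
    for block, bit in enumerate(bm):
        if bit:
            return block
    return None                 # disk full

def allocate(bm):
    block = first_free(bm)
    if block is not None:
        bm[block] = 0           # mark allocated
    return block

print(allocate(bitmap))   # 2
print(allocate(bitmap))   # 3
```

Real implementations scan a word (32 or 64 blocks) at a time, skipping words equal to zero, rather than bit by bit.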

2. Linked list: the free-space list is implemented by linking all the free disk blocks together, keeping a pointer to the first free block in a special location on the disk. For example, on a disk where blocks 2, 3, 4, 5 and 8 are free, we keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no wasted space.
Disadvantage: contiguous space cannot be found easily, i.e. to traverse the free-space list we must read each block, which requires substantial I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free; the last one contains the addresses of another n free blocks, and so on.
Advantage: a large number of free blocks can be found quickly.

4. Counting: the free-space list can be maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it. Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if the count is generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory, updates it, and finally updates the free-space list on disk.

Synchronization Hardware
A hardware solution to synchronization is to provide atomic operations: operations that execute as a single instruction, without interruption. Two such operations are:
1. the TestAndSet() hardware instruction
2. the Swap() instruction

1. The TestAndSet() instruction is as shown.

Solution using TestAndSet(): the shared Boolean variable lock is initialized to false.

2. The Swap() instruction definition is as follows.

Solution using Swap(): the shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

Both the TestAndSet() and Swap() instructions satisfy the mutual exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee as to the relative speeds of the processes, a very fast process could theoretically release the lock, whip through its remainder section, and re-lock the lock before a slower process got a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The figure below illustrates a solution using TestAndSet() that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

Bounded-waiting mutual exclusion with TestAndSet():

On exit, a process first looks, in order (starting with the next process on the list), for a process that has been waiting; if it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process into the critical section while continuing to block all the others.
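The essential behaviour of TestAndSet() can be modeled in a few lines. This is only a sketch of the semantics (a real processor performs the read-and-set in one uninterruptible instruction; the class name is invented):

```python
# Model of TestAndSet(): atomically return the old value of the lock
# and set it to True. The caller acquired the lock iff the old value
# was False; otherwise it must spin and retry.

class SpinLock:
    def __init__(self):
        self.value = False
    def test_and_set(self):
        old, self.value = self.value, True   # one atomic step on real hardware
        return old

lock = SpinLock()
print(lock.test_and_set())   # False: lock was free, caller acquired it
print(lock.test_and_set())   # True: already held, caller must spin
lock.value = False           # release
print(lock.test_and_set())   # False: acquired again
```

The entry protocol is then `while lock.test_and_set(): pass` — the spin continues for as long as the instruction keeps returning True.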

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through two standard atomic operations, wait() and signal():

wait(S) {
    while (S <= 0)
        ;  /* do nothing */
    S--;
}

signal(S) {
    S++;
}

Each process that wishes to use a resource performs a wait() operation. When a process releases a resource, it performs a signal() operation.

Semaphore Usage:
1. A binary semaphore (mutex lock) can take only the values 0 or 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore > 0, a process can enter the critical section and use one of the resources; when the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal() call.

For example, suppose there are 3 resources and 4 processes. The value of the counting semaphore = 3.
If process P1 requires a resource, it performs a wait() operation; now the counting semaphore = 2.
If process P2 requires a resource, it performs a wait() operation; now the counting semaphore = 1.
If process P3 requires a resource, it performs a wait() operation; now the counting semaphore = 0.
If process P4 requires a resource, it performs a wait() operation; P4 waits until a resource is available.

3. Semaphores can also be used to synchronize certain operations. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2. First we create a semaphore named synch that is shared by the two processes and initialize it to zero. Then in process P1 we insert the code:

    S1;
    signal(synch);

and in process P2 we insert the code:

    wait(synch);
    S2;

Because synch was initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal().

Semaphore Implementation: While a process is in its critical section, any other process that tries to enter its critical section must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spinlock, as the process spins while waiting. Each semaphore has an integer value and maintains a queue of waiting processes.

Semaphore implementation with no busy waiting: To avoid busy waiting, the waiting process blocks itself (it is placed on the semaphore's waiting queue); it is later restarted by a wakeup() operation, which moves it from the waiting queue to the ready state.
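The block()/wakeup() scheme can be sketched as follows. This is a teaching sketch only: a real kernel implements it with atomic operations and its scheduler, so the threading primitives here (an Event standing in for block()/wakeup()) are simulation details, and the class name is invented.

```python
# Sketch of a blocking semaphore: a negative value means |value|
# processes are waiting on the queue instead of spinning.
import collections
import threading

class BlockingSemaphore:
    def __init__(self, value):
        self.value = value                      # semaphore counter
        self.queue = collections.deque()        # waiting "processes"
        self._lock = threading.Lock()           # protects value/queue

    def wait(self):
        with self._lock:
            self.value -= 1
            if self.value < 0:
                ev = threading.Event()          # stand-in for block()
                self.queue.append(ev)
            else:
                ev = None
        if ev is not None:
            ev.wait()                           # blocks: no busy waiting

    def signal(self):
        with self._lock:
            self.value += 1
            if self.value <= 0:                 # someone is waiting
                self.queue.popleft().set()      # wakeup(): move to ready

s = BlockingSemaphore(2)
s.wait(); s.wait()                              # both resources taken
done = []
t = threading.Thread(target=lambda: (s.wait(), done.append("ran")))
t.start()
s.signal()                                      # wakes the blocked thread
t.join()
print(s.value, done)                            # 0 ['ran']
```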

Problems due to semaphores:
1. Deadlocks: Deadlocks occur when multiple processes are blocked, each waiting for a resource that can be freed only by one of the other (blocked) processes. For example, with semaphores S and Q both initialized to 1, if P0 executes wait(S) then wait(Q) while P1 executes wait(Q) then wait(S), each process can end up holding one semaphore while waiting forever for the other.

2. Starvation: In starvation, one or more processes get blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithms for adding processes to the waiting queue or for selecting the one to be removed by the signal() call: if a LIFO queue is chosen, then the first process that started waiting may never get a chance.

3. Priority inversion: Let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M enters, it pre-empts process L, and M runs. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority (priority inheritance) so that M does not pre-empt L.
=====================================================================
Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem: Consider n buffers, each holding one item; a producer process, which creates items (one at a time); and a consumer process, which consumes them (one at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (counts the number of empty buffers)
2. full = 0 (counts the number of filled buffers)
3. mutex = 1 (provides mutual exclusion for access to the buffer)

Producer process:

    do {
        ...
        produce an item
        ...
        wait(empty);
        wait(mutex);
        ...
        add the item to the buffer
        ...
        signal(mutex);
        signal(full);
    } while (1);

Consumer process:

    do {
        wait(full);
        wait(mutex);
        ...
        remove an item from the buffer
        ...
        signal(mutex);
        signal(empty);
        ...
        consume the item
        ...
    } while (1);
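The scheme above can be run directly using Python's threading.Semaphore in place of the pseudocode wait()/signal(); buffer size and item counts below are arbitrary choices for the sketch.

```python
# Runnable bounded-buffer sketch: one producer, one consumer,
# three semaphores exactly as in the pseudocode above.
import threading
from collections import deque

N = 3                                   # number of buffer slots
buffer = deque()
empty = threading.Semaphore(N)          # counts empty slots
full = threading.Semaphore(0)           # counts filled slots
mutex = threading.Semaphore(1)          # mutual exclusion on the buffer

consumed = []

def producer(items):
    for item in items:
        empty.acquire()                 # wait(empty)
        mutex.acquire()                 # wait(mutex)
        buffer.append(item)             # add item to buffer
        mutex.release()                 # signal(mutex)
        full.release()                  # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()                  # wait(full)
        mutex.acquire()                 # wait(mutex)
        item = buffer.popleft()         # remove item from buffer
        mutex.release()                 # signal(mutex)
        empty.release()                 # signal(empty)
        consumed.append(item)           # consume the item

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)                         # [0, 1, ..., 9]: FIFO order
```

Note the producer never overruns the three slots: after three un-consumed items, empty.acquire() blocks it until the consumer frees a slot.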

2. The Readers-Writers Problem: A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: a new reader need not wait just because a writer is waiting; writers priority: if a writer is waiting to access the data, no new readers can start reading.

Either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses a counter and two semaphores:
1. int readcount = 0 (counts the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore used for mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:

    do {
        wait(mutex);
        readcount++;
        if (readcount == 1)
            wait(wrt);
        signal(mutex);
        ...
        reading is performed
        ...
        wait(mutex);
        readcount--;
        if (readcount == 0)
            signal(wrt);
        signal(mutex);
    } while (TRUE);

Writer process:

    do {
        wait(wrt);
        ...
        writing is performed
        ...
        signal(wrt);
    } while (TRUE);
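The readers-priority solution runs directly with Python semaphores. This sketch uses a plain list as the shared data; the thread counts are arbitrary.

```python
# Runnable readers-priority sketch: the first reader locks out writers,
# the last reader lets them back in.
import threading

readcount = 0
mutex = threading.Semaphore(1)      # protects readcount
wrt = threading.Semaphore(1)        # exclusive access for writers
data = []

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()               # first reader locks out writers
    mutex.release()
    _snapshot = list(data)          # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()               # last reader lets writers back in
    mutex.release()

def writer(item):
    wrt.acquire()
    data.append(item)               # writing is performed
    wrt.release()

threads = [threading.Thread(target=writer, args=(i,)) for i in range(5)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(data))                 # [0, 1, 2, 3, 4]: every write applied
```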

3. The Dining Philosophers Problem: n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5];   // initially all values are 1

Philosopher i:

    do {
        wait(chopstick[i]);
        wait(chopstick[(i+1) % 5]);
        ...
        eat
        ...
        signal(chopstick[i]);
        signal(chopstick[(i+1) % 5]);
        ...
        think
        ...
    } while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it can create a deadlock: if all five philosophers pick up their left chopsticks at the same time, each waits forever for the right one.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems have processors dedicated to specific tasks: for example, one processor may be dedicated to disk operations, another to video operations, etc. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within a time constraint.

                  Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions. 2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples:

                  Digital audio or multimedia systems Digital telephones

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control: it spreads its decision-making actions across different computers in the system, so that failures of individual computers or of the network do not cripple its operation.

                  A distributed operating system appears as a uniprocessor system even though it has multiple processors The users may not know where their programs are being run or where their files are located that should all be handled automatically by the operating system

                  Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

                  DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location Modern desktop systems support multiprogramming Common examples are Linux FreeBSD Windows 8 and the Macintosh operating system

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. They have very little memory.


                  CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system The components of a cluster are usually connected to each other through fast local area networks each node (computer used as a server) running its own instance of an operating system

                  Q) Modern operating system

                  New developments in hardware applications and security threats lead to the development of modern operating system

                  New Hardware developments like increased machine speed high speed network increased size and variety of memory devices

                  New Applications developments like multimedia applications internet and web access and client server computing

                  Developments in modern operating system can be categorized as

1. Microkernel architecture assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and it is well suited to distributed environments.

                  2 Multithreading a process is divided into threads that can run concurrently Useful for applications that perform a number of independent tasks Example database server that listens and processes many client requests

                  3 Symmetric multiprocessing As now-a-days there are multiple microprocessors in a single system SMP operating system provides greater efficiency when there are multiple processors The processes share the same main memory and io facilities All processors can perform the same functions (hence term symmetric)

                  SMP schedules processes or threads across all of the processors

                  Advantages

1. Increased performance: if portions of a program can run in parallel, the performance of an SMP OS increases.
2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can depend on the number of processors used.

                  4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5. Object-oriented design adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux Operating System: It is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. Linux is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization. Components of a Linux system:


                  Linux Operating System has primarily three components as shown in the below diagram

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system, interacts directly with hardware, and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs used by application programs to access the kernel's features.

System Utility - System utility programs perform specialized, individual-level tasks like updating log files, accepting login requests from terminals, etc.

Kernel Modules: Sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module may typically implement a device driver, a file system, or a networking protocol.

There are three components of Linux module support: 1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel. 2. Driver registration: allows modules to inform the rest of the kernel that a new driver is available. 3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode: Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, does not require any context switch, and hence is very efficient and fast. The kernel runs each process and provides system services to processes, including protected access to hardware. Code which is not required to run in kernel mode is placed in system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code; they use system libraries to reach kernel functions for low-level tasks.

Basic Features: Following are some of the important features of the Linux operating system.

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on any kind of hardware platform.

Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system, meaning multiple users can access system resources like memory and application programs at the same time.

                  Multiprogramming - Linux is a multiprogramming system means multiple applications can run at same time

                  Hierarchical File System - Linux provides a standard file structure in which system files user files are arranged

                  Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system It can be used to do various types of operations call application programs etc


                  Security - Linux provides user security using authentication features like password protection controlled access to specific files encryption of data

Architecture: The Linux system architecture consists of the following layers:

                  Hardware layer - Hardware consists of all peripheral devices (RAM HDD CPU etc)

                  Kernel - Core component of Operating System interacts directly with hardware provides low level services to upper layer components

Shell - An interface to the kernel; it takes commands from the user and executes the kernel's functions.

                  Utilities - Utility programs giving user most of the functionalities of an operating systems

Q) Windows XP operating system
1. Extensibility: layered architecture.

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability: XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3. Reliability: XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.
4. Compatibility: applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
5. Performance: XP subsystems can communicate with one another via high-performance message passing.

Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

6. International support: supports different locales via the national language support (NLS) API.

XP Architecture: 1. A layered system of modules. 2. Protected mode: hardware abstraction layer (HAL), kernel, executive. 3. User mode: a collection of subsystems:

a. Environmental subsystems emulate different operating systems. b. Protection subsystems provide security functions.

                  XP architecture


Q) Windows network operating system
Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.



Case 1: If the resources are available, then allocate them.

Case 2: If the resources are allocated to some other process (say B) that is itself waiting for additional resources, then preempt the desired resources from the waiting process B and allocate them to the requesting process A.

Process B can be restarted only when it is allocated both the additional resources it is requesting and the resources that were preempted from it and given to process A.

Case 3: If the resources are neither available nor held by a waiting process, then process A waits.

This protocol is applied to resources like CPU registers and memory space, whose state can be saved and later restored.

                    4 Circular Wait

                    To make sure Circular Wait condition never occurs

                    1 Each Resource is assigned a unique integer number

                    2 Each Process must request resources in an increasing order of enumeration

We define a one-to-one function F: R -> N, where R is the set of resource types and N is the set of natural numbers.

Suppose a process initially requests a resource of type Ri.

Protocol 1: After that, the process can request a resource of type Rj if and only if F(Rj) > F(Ri).

Protocol 2: If a process requests a resource of type Rj, it must first release all resources Ri with F(Ri) >= F(Rj).

Example: Let F(tape drive) = 1, F(disk drive) = 5, and F(printer) = 12.

                    A process can request any number of tape drives disk drives and printers

Protocol 1: If a process A has already requested the disk drive, then A can now request only the printer; it cannot request the tape drive.

Protocol 2: In order to request the tape drive, process A must first release the disk drive, and only then can it request the tape drive.

If the above two protocols are used, then the circular-wait condition never occurs. We can prove this by contradiction.

Proof: Assume a circular wait exists. Let the set of processes involved in the circular wait be {P0, P1, ..., Pn}, where P0 is waiting for resource R0, which is held by P1.

P1 holds R0 and is waiting for resource R1, which is held by P2; so F(R0) < F(R1).

P2 holds R1 and is waiting for resource R2, which is held by P3; so F(R1) < F(R2).

...

Pn holds R(n-1) and is waiting for resource Rn, which is held by P0; so F(R(n-1)) < F(Rn). Finally, P0 holds Rn while waiting for R0, so F(Rn) < F(R0).

Hence, by transitivity, F(R0) < F(R0), which is impossible. Hence our assumption that a circular wait exists is FALSE.
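The ordering protocol is easy to enforce mechanically. Below is a small illustrative sketch (the class and its methods are invented for this example) that rejects any request violating the increasing-order rule, using the F numbering from the example above.

```python
# Resource-ordering protocol: a process may request a resource only if
# its F-value exceeds the F-values of everything the process holds.
F = {"tape drive": 1, "disk drive": 5, "printer": 12}  # numbering from the notes

class OrderedRequester:
    def __init__(self):
        self.held = []                     # F-values of resources currently held

    def request(self, resource):
        rank = F[resource]
        if self.held and rank <= max(self.held):
            return False                   # violates Protocol 1: deny
        self.held.append(rank)
        return True

p = OrderedRequester()
print(p.request("disk drive"))   # True  - first request
print(p.request("printer"))      # True  - F(printer)=12 > F(disk drive)=5
print(p.request("tape drive"))   # False - F(tape drive)=1 < F(disk drive)=5
```

Since every grant strictly increases the maximum F-value a process holds, no cycle of waits can form, which is exactly the contradiction argument in the proof.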

===============================================================================
Q) Resource Allocation Graph
Deadlocks can be understood more clearly through the use of resource-allocation graphs, which have the following properties:

1. Resource types are represented as square nodes on the graph. Dots inside a square node indicate the number of instances of that resource (e.g. two dots might represent two laser printers).
2. Processes are represented as circles.
3. Request edges: if P1 has requested R1, a directed edge from P1 to R1 is a request edge.
4. Assignment edges: a directed edge from R2 to P1 indicates that resource R2 has been allocated to process P1 and that P1 is currently holding R2. Note that a request edge is converted into an assignment edge when the request is granted.

If a resource-allocation graph contains a cycle AND each resource type has only a single instance, then a deadlock exists. If a resource type has more than one instance, then a cycle in the resource-allocation graph indicates the possibility of a deadlock but does not guarantee one.
==================================================================
Q) Deadlock Avoidance
For each resource request, the system can decide whether the request should be granted or not. To make this decision the system must have information like:

1. The resources currently available.
2. The resources currently allocated to each process.
3. The future requests and releases of each process.
4. The maximum number of resources each process may need.

Given this information, it is possible to construct an algorithm that ensures the system will never enter a deadlocked state. There are two deadlock-avoidance algorithms:

1. Resource-Allocation Graph Algorithm 2. Banker's Algorithm

Safe State: A system is in a safe state if there exists a safe sequence of processes P0, P1, P2, ..., PN such that the resources that each Pi can still request can be satisfied by the currently available resources plus the resources held by all processes Pj with j < i. All safe states are deadlock free.

Unsafe state: If a safe sequence does not exist, then the system is in an unsafe state, which MAY lead to deadlock.

1. Resource-Allocation Graph Algorithm: Resource-allocation graphs can detect deadlocks only if the number of resources of each type is one. In this case, unsafe states can be recognized and avoided by adding claim edges, denoted by dashed lines, which point from a process to a resource that it may request in the future. All claim edges are added at the beginning of the process. When a process makes a request, the claim edge Pi->Rj is converted to a request edge; when the resource is released, the assignment edge changes back to a claim edge. This approach works by denying requests that would produce cycles in the resource-allocation graph,

taking claim edges into account. Consider, for example, the resource-allocation graph as shown.

If P2 requests resource R2, then the claim edge P2->R2 is converted to a request edge, as follows:

The resulting resource-allocation graph would have a cycle in it, so the request cannot be granted.

Q) Banker's Algorithm (deadlock-avoidance algorithm) with example
There are 12 tape drives. Let the current state of the system be as shown below:

Process   Allocated   Max Need   Need = Max Need - Allocated
P0        5           10         5
P1        2           4          2
P2        2           9          7

Available = 12 - (5+2+2) = 3

Resource-Request Algorithm: Now, when process P2 requests 1 tape drive, we run the resource-request algorithm to check whether the request should be granted. The request is granted only if, after granting it, all the processes in the system can still complete. For that we check:

1. Is the request of P2 <= need of P2? 1 <= 7, therefore TRUE.
2. Is the request of P2 <= Available? 1 <= 3, therefore TRUE.
3. Pretend the request is granted for P2.

Now the current state is as shown below:

Process   Allocated   Max Need   Need = Max Need - Allocated
P0        5           10         5
P1        2           4          2
P2        2+1 = 3     9          7-1 = 6

Available = 3 - 1 = 2
Now run the safety algorithm to check whether the system is in a safe state.

Safety Algorithm:
1. Let WORK = Available = 2.
2. Find an unfinished process whose Need <= WORK.

Check P0: Need of P0 = 5, Work = 2. Is 5 <= 2? FALSE.

Check P1: Need of P1 = 2, Work = 2. Is 2 <= 2? TRUE. Therefore P1 can finish. When P1 finishes, Work = Work + Allocated to P1 = 2 + 2 = 4.

Now check again whether P0 can complete: Need of P0 = 5, Work = 4. Is 5 <= 4? FALSE.

Check whether P2 can complete: Need of P2 = 6, Work = 4. Is 6 <= 4? FALSE.

So neither P0 nor P2 can complete, and the system would be in an unsafe state. Therefore the request for 1 tape drive by P2 is not granted.

2. Banker's Algorithm: For resource types that have more than one instance, the resource-allocation graph method does not work, so we use the Banker's algorithm. When a process starts up, it must state in advance the maximum allocation of resources it may request. When a request is made, the scheduler determines whether granting the request would leave the system in a safe state; if not, the process must wait until the request can be granted safely.

The Banker's algorithm relies on several key data structures (where n is the number of processes and m is the number of resource categories):

o Available[m] indicates how many resources of each type are currently available.
o Max[n][m] indicates the maximum demand of each process for each resource.
o Allocation[n][m] indicates the number of resources of each type allocated to each process.
o Need[n][m] indicates the remaining resources needed of each type by each process. (Note that Need[i][j] = Max[i][j] - Allocation[i][j] for all i, j.)

For simplification of the discussion, note that one row of the Need matrix, Need[i], can be treated as a vector corresponding to the needs of process i, and similarly for Allocation and Max.

Safety Algorithm: In order to apply the Banker's algorithm, we first need an algorithm for determining whether or not a particular state is safe. This algorithm determines if the current state of a system is safe according to the following steps:

1. Let Work and Finish be vectors of length m and n respectively. Work is a working copy of the available resources; Finish is a vector of booleans indicating whether a particular process can finish. Initialize Work = Available and all elements of Finish to false.

2. Find an i such that both (A) Finish[i] == false and (B) Need[i] <= Work. This process has not finished, but could finish with the given available working set. If no such i exists, go to step 4.

3. Set Work = Work + Allocation[i] and set Finish[i] to true. This corresponds to process i finishing up and releasing its resources back into the work pool. Then loop back to step 2.

4. If Finish[i] == true for all i, then the state is a safe state, because a safe sequence has been found.

Resource-Request Algorithm (the Banker's Algorithm): Now we have a tool for determining whether a particular state is safe. This algorithm determines if a new request is safe, and grants it only if it is safe to do so. When a request is made (that does not exceed currently available resources), pretend it has been granted, and then see if the resulting state is a safe one. If so, grant the request; if not, deny it, as follows:

1. Let Request[n][m] indicate the number of resources of each type currently requested by processes. If Request[i] > Need[i] for any process i, raise an error condition.

2. If Request[i] > Available for any process i, then that process must wait for resources to become available; otherwise the process can continue to step 3.

3. Check whether the request can be granted safely by pretending it has been granted and then seeing if the resulting state is safe. If the resulting state is safe, grant the request; else the process must wait until its request can be granted safely.

The procedure for granting a request (or pretending to, for testing purposes) is:

                    Available = Available - Request

                    Allocation = Allocation + Request Need = Need - Request
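The safety check and the resource-request step above can be sketched in Python. This is a minimal illustrative implementation, not from the notes; the example state is the classic 5-process, 3-resource-type exercise.

```python
# Sketch of the Banker's algorithm: safety check plus resource-request step.
# All names and the example data below are illustrative.

def is_safe(available, max_need, allocation):
    n = len(max_need)                                  # number of processes
    need = [[m - a for m, a in zip(max_need[i], allocation[i])] for i in range(n)]
    work = list(available)                             # working copy of Available
    finish = [False] * n
    while True:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # pretend process i runs to completion and releases its resources
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progressed = True
        if not progressed:
            return all(finish)                         # safe iff everyone can finish

def request_resources(available, max_need, allocation, i, request):
    need_i = [m - a for m, a in zip(max_need[i], allocation[i])]
    if any(r > nd for r, nd in zip(request, need_i)):
        raise ValueError("process exceeded its maximum claim")
    if any(r > a for r, a in zip(request, available)):
        return False                                   # must wait: not available
    # pretend to grant the request, then test safety of the resulting state
    new_avail = [a - r for a, r in zip(available, request)]
    new_alloc = [row[:] for row in allocation]
    new_alloc[i] = [a + r for a, r in zip(allocation[i], request)]
    return is_safe(new_avail, max_need, new_alloc)

# Classic example state (illustrative): 5 processes, 3 resource types
AVAILABLE = [3, 3, 2]
MAX = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
ALLOCATION = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
```

For this state `is_safe` finds a safe sequence, and a request of (1, 0, 2) by process 1 can be granted safely.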

                    Unit III Memory management

Just as processes share the CPU, they also share physical memory. The memory management unit of the OS takes care of memory allocation, deallocation, and related issues. A program must be brought into memory for it to run. Addresses are of two types:

i) relocatable (relative) addresses, with respect to the beginning of the program; ii) absolute addresses

Q) Address Binding
Ans: Binding means mapping the logical address space to the physical address space. Address binding can happen at three different stages:

Compile time: If it is known at compile time where in memory the program will be placed, the compiler generates absolute addresses.

Otherwise the compiler generates relocatable addresses.

Load time: The loader binds the relocatable addresses generated by the compiler to absolute addresses; hence binding is done at load time. If binding is done at compile or load time, the physical and logical addresses are the same.

Execution time: If address binding is done at run time, the process can be moved during its execution from one memory segment to another. Here logical addresses are called virtual addresses. Run-time mapping of virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU).

Q) Logical vs Physical Address Space
Ans: A logical address is generated by the CPU (also referred to as a virtual address). A physical address is the address seen by the memory unit. The set of all logical addresses is the logical address space; the set of all physical addresses is the physical address space.

Q) Memory-Management Unit (MMU)
Ans: The MMU is the hardware device that maps virtual to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by the CPU before memory is accessed. The user program deals only with logical addresses; it never sees the real physical addresses.

Q) Dynamic loading
Ans: Since physical memory is small, it may not be possible to keep the entire program in main memory, so dynamic loading can be used. It is the responsibility of users to design their programs to take advantage of dynamic loading. Only the main function is loaded into main memory; when main() calls another function, it checks whether that function is already in main memory. If not, the loader loads the desired function into main memory and updates the program's address table.

                    Routine is not loaded until it is called We achieve better memory-space utilization as unused routine is never loaded

Q) Dynamic Linking
+ Linking is postponed until execution time
+ A small piece of code, the stub, is used to locate the appropriate memory-resident library routine
+ The stub replaces itself with the address of the routine and executes the routine
+ The operating system is needed to check whether the routine is in the process's memory address space
+ Dynamic linking is particularly useful for libraries

Q) Overlays
Ans: Overlays are needed when a process is larger than the amount of main memory allocated to it. Only the instructions and data that are needed at any given time are kept in main memory. Overlays can be implemented by the user; programming the design of an overlay structure is complex. Overlays for a two-pass assembler are shown in the figure.

Q) Swapping
Ans: A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. The backing store is a fast disk large enough to store copies of all memory images of all users. Priority-based scheduling uses a variant of this swapping policy called roll out, roll in: if a higher-priority process arrives, the memory manager swaps out a lower-priority process and swaps in the higher-priority process. When the higher-priority process finishes, the lower-priority process can be swapped back into main memory. A process that is swapped out will be swapped back into the same memory space it occupied previously. The major part of the swap time is transfer time. We can swap only an idle process; a process that is waiting for IO cannot be swapped.

Q) Contiguous memory allocation
Ans: Each process is contained in a single contiguous section of memory.
1. Fixed Size Partition (or) Single-partition allocation

Divide the main memory into fixed-sized partitions; each partition may contain exactly one process. The relocation register contains the value of the starting physical address; the limit register contains the range of logical addresses. Every address generated by the CPU is checked as follows:

If logical address < limit register, then the logical address is added to the relocation register to get the corresponding memory address;

else a trap to the OS is generated. Since every address is checked, the OS and other user programs are protected from being modified by the running process.
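The limit/relocation check above can be sketched in a few lines. This is an illustrative model, not hardware; the register values are made up for the example.

```python
# Sketch of the limit/relocation-register check described above: every
# CPU-generated logical address is compared against the limit register;
# valid addresses are relocated, invalid ones trap to the OS.

RELOCATION = 14000   # starting physical address of the partition (illustrative)
LIMIT = 300          # size of the logical address space (illustrative)

def translate(logical):
    if logical < LIMIT:
        return RELOCATION + logical          # relocate
    raise MemoryError("trap: address outside partition")  # trap to OS
```

For example, logical address 100 maps to physical address 14100, while address 300 traps.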

                    2 Multiple-partition allocation

1) Fixed Size Partitions: Divide main memory into fixed-sized blocks. Here the memory allocated to a process may be larger than required; the difference between the allocated memory and the requested memory is called internal fragmentation. Internal fragmentation is unused memory inside a partition.

0: P1 Block0 (addresses 0 to 3)
4: P1 Block1 (addresses 4 to 7)
8: Block2 (addresses 8 to 11)

12: Block3 (addresses 12 to 15)

If P1 requires just 5 addresses, it is still allocated 2 blocks, i.e. 8 addresses, so the remaining 3 addresses are left unused.

2) Variable Size Partitions: Here main memory is divided into partitions of variable sizes. The operating system maintains information about:

a) allocated partitions b) free partitions (holes)

In the beginning all main memory is empty and is considered one large block of available memory, a hole.

Exactly the memory required by a process is given. When a process terminates, it releases its memory, which can then be allocated to another process.

Memory is allocated to processes until finally no available block of memory (hole) is large enough to hold the next process. External fragmentation exists when there is enough total memory to satisfy a request but the available memory is not contiguous - wastage of memory outside the partitions. One solution to the problem of external fragmentation is:

a. Compaction: shuffle the memory contents so as to place all free memory together in one large block.

===============================================================================
Q) Dynamic Storage-Allocation Problem (or) Most commonly used strategies to select a free hole from the set of available holes
Ans: To satisfy a request of size n from a list of free holes, the 3 policies below can be used:

                    1 First-fit Allocate the first hole that is big enough

2. Best-fit: Allocate the smallest hole that is big enough; the entire list must be searched unless it is ordered by size. Produces the smallest leftover hole.

3. Worst-fit: Allocate the largest hole; the entire list must also be searched. Produces the largest leftover hole. First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
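The three policies can be sketched as hole-selection functions. This is an illustrative sketch; the hole sizes in the example are made up.

```python
# Illustrative first-fit / best-fit / worst-fit selection over a list of free
# hole sizes. Each function returns the index of the chosen hole, or None.

def first_fit(holes, n):
    # first hole that is big enough
    return next((i for i, h in enumerate(holes) if h >= n), None)

def best_fit(holes, n):
    # smallest hole that is big enough (full scan of the list)
    fits = [i for i, h in enumerate(holes) if h >= n]
    return min(fits, key=lambda i: holes[i]) if fits else None

def worst_fit(holes, n):
    # largest hole (full scan of the list)
    fits = [i for i, h in enumerate(holes) if h >= n]
    return max(fits, key=lambda i: holes[i]) if fits else None
```

For holes of sizes [100, 500, 200, 300, 600] and a request of 212: first-fit picks the 500 hole, best-fit the 300 hole, worst-fit the 600 hole.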

===============================================================================
Q) Non-Contiguous Memory Allocation

1. Paging: Paging is a memory management scheme that provides non-contiguous memory allocation.

0: P1 Block0 (addresses 0 to 3)
4: P2 Block1 (addresses 4 to 7)
8: Block2 (addresses 8 to 11)

12: P1 Block3 (addresses 12 to 15)

The logical address space of a process can be noncontiguous. Ex: P1 is allocated Block0 and Block3.

1. Divide physical memory into fixed-sized blocks called frames (size is a power of 2). 2. Divide logical memory into blocks of the same size called pages. 3. A logical address is divided into 2 parts:

Page number (p) - the page table is indexed by page number. Page offset (d) - the displacement within the page.

4. A page table is allocated to each process (a pointer to the page table is stored in the PCB of the process). The page table translates logical to physical addresses; e.g. page 0 is in frame 1, page 1 is in frame 4, etc. 5. Internal fragmentation may occur due to paging.

6. If the size of the logical address space is 2^m and the page size is 2^n, then the higher-order m-n bits of the logical address give the page number and the n lower-order bits give the displacement within the page.

Ex: size of logical address space = 8 = 2^3, so m = 3

page size = 4 = 2^2, so n = 2

So m-n = 1 bit for the page number (to represent pages 0 and 1),

and the remaining n = 2 bits for the displacement within the page.
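The page-number/offset split above is just bit shifting and masking. A minimal sketch for the m = 3, n = 2 example:

```python
# Splitting a logical address into page number and offset for the example
# above: m = 3 address bits, page size 4 = 2^n with n = 2 offset bits.

N = 2                                # offset bits (page size = 2^N)

def split(addr):
    page = addr >> N                 # high-order m-n bits: page number
    offset = addr & ((1 << N) - 1)   # low-order n bits: displacement
    return page, offset
```

For example, logical address 6 (binary 110) is page 1, offset 2.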

                    Q) Implementation of Page Table

                    Page table can be kept as

1) As a set of dedicated registers: efficient if the page table is small

2) The page table can be kept in memory. The page-table base register (PTBR) points to the page table; the page-table length register (PTLR) indicates the size of the page table. Switching page tables requires changing only the values in these registers. Advantage of keeping the page table in memory: a) less context-switch time.

Disadvantage: b) two memory accesses are required - one access to the page table and another to the required memory address.

This problem can be solved by using a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB).

                    i Each entry in TLB consists of 2 parts

                    I) page number field

                    II) Value field or frame number field

ii. When a logical address is generated by the CPU, its page number is presented to the TLB. The page number is compared with all the entries of the TLB simultaneously.

If the page number is found in the TLB, its frame number is immediately available -> TLB hit. If the page number is not found, check the page table -> TLB miss.

With a TLB the search is fast, but TLB hardware is expensive, so the TLB size is kept small; the TLB therefore holds only a few frequently used page-table entries.
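The hit/miss behaviour above can be modelled with a tiny cache in front of the page table. This is an illustrative toy; the small capacity and FIFO eviction are assumptions, not something the notes specify.

```python
# Toy TLB in front of a page table: a hit returns the cached frame number;
# a miss falls back to the page table (an extra memory access) and caches
# the entry, evicting the oldest one (FIFO) when the TLB is full.

from collections import OrderedDict

class TLB:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()           # page number -> frame number

    def lookup(self, page, page_table):
        if page in self.entries:
            return self.entries[page], True    # TLB hit
        frame = page_table[page]               # TLB miss: consult page table
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict the oldest entry
        self.entries[page] = frame
        return frame, False
```

The first lookup of a page is a miss; repeating it is a hit until the entry is evicted.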

Q) Protection in Paging: A valid/invalid bit is attached to each entry in the page table. The OS sets this valid/invalid bit for each page to allow or disallow access to the page. When the bit is set to invalid, the page is not in the process's logical address space, so accessing it generates a trap to the OS.

                    Q) Shared pages in Paging

Reentrant code is code that never changes during execution; hence reentrant code can be shared.

1. One copy of read-only (reentrant) code is shared among processes (e.g. text editors, compilers, window systems). Ex: ed1, ed2, ed3 is shared among processes P1 and P2.

2. Shared code must appear in the same location in the logical address space of all processes.

Each process keeps a separate copy of its private code and data. Ex: data1 and data2.

The pages for the private code and data can appear anywhere in the logical address space.

                    ==============================================================================Q) Segmentation

Segmentation is a memory management scheme that supports the user's view of memory.

When the user program is compiled, the compiler generates segments such as:

                    1) The code segment

                    2) Global variables segment

                    3) Heap memory segment

                    4) Stack segment etc

                    Each entry in segment table has

the segment base (starting physical address of the segment) and the segment limit (specifies the length of the segment)

                    Logical address is divided into 2 parts

                    segment-number offset

                    Segment table is indexed by segment number

                    Segment-table base register (STBR) stores the location of segment table in main memory

                    Segment-table length register (STLR) stores number of segments used by a program

The segment number (say 's') is used to find the entry in the segment table; the required entry is 's' locations from the beginning of the segment table. Once the required entry in the segment table is found, the offset ('d') is compared with the limit.

If (offset < limit) then the offset is added to the base entry to generate the physical address; else a trap to the OS is generated.
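The translation just described can be sketched directly. The segment-table contents here are illustrative values, not from the notes.

```python
# Sketch of segment-table translation: index by segment number, bounds-check
# the offset against the limit, then add the base.

segment_table = [
    (1400, 1000),   # segment 0: (base, limit) - illustrative values
    (6300, 400),    # segment 1
]

def translate(s, d):
    base, limit = segment_table[s]
    if d < limit:
        return base + d                      # physical address
    raise MemoryError("trap: offset beyond segment limit")
```

For example, (segment 1, offset 53) maps to physical address 6353, while (segment 1, offset 400) traps.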

Q) Shared Segments: Code sharing occurs at the segment level. Shared segments must have the same segment number.

Allocation: a dynamic storage-allocation problem;

using best fit/first fit may cause external fragmentation.

Protection: protection bits associated with segments

give read/write/execute privileges. Placing an array in a separate segment lets the hardware check for illegal array indexes.

===============================================================================
Q) Virtual memory

Ans: In paging and segmentation, a program will execute only if the entire process is in main memory. With virtual memory, only part of the program needs to be in memory for execution.

1. A program larger than the available main memory can still run, as only part of the program is loaded into main memory. The logical address space can therefore be much larger than the physical address space.

                    2 Allows address spaces to be shared by several processes

                    Virtual memory can be implemented via Demand paging Demand segmentation

The large blank space between the heap and the stack is part of the virtual address space. Virtual memory allows files and memory to be shared by 2 or more processes through page sharing, as shown in figure (b). Figure (a) depicts virtual memory that is larger than physical memory.

                    ==============================================================================

Q) Demand paging
Demand paging is the technique of loading pages from disk to main memory only when a page is needed. Hence, using a smaller amount of physical memory, we get faster response.

                    Demand paging is similar to PAGING + SWAPPING

                    Instead of swapping a whole process the pager brings only those pages needed into memory

To distinguish between pages that are in memory and pages that are on disk, the valid/invalid bit is used.

A valid/invalid bit is attached to each entry in the page table. When this bit is set to valid, the page is legal and in memory. When this bit is set to invalid, either the page is illegal or the page is on disk.

When the CPU generates a logical address whose page-table entry is set to invalid, a page fault occurs; the required page is on disk. A section of the hard disk called swap space is used to hold pages that are not present in memory.

                    Q) Procedure for Handling a Page Fault

1) The CPU generates a logical address, and if the valid/invalid bit for the page containing this address is set to invalid, a page fault occurs.

2) The page fault causes a trap to the operating system.

3) Check whether the logical address is within the logical address space of the process, i.e. the PTBR and PTLR are checked. If (the required logical address is not within the logical address space of the process)

terminate the process; else

the page is legal but not in memory - it is on disk.

4) To bring the required page into memory: find a free main-memory frame, map the main-memory address to the disk block, fetch the disk block, and load it into the free frame. 5) When the required page is brought into memory, update the page table to indicate that the page is in memory. 6) Restart the instruction interrupted by the illegal-address trap. The process continues as if the page had always been in memory.

Q) What is pure demand paging?
Ans: If no page belonging to the executing process is in main memory, the process will fault for every page it needs. Page faults will occur until every page that is needed is in memory. This scheme is known as pure demand paging: never bring a page in until it is required.

Q) Page Replacement
1. Find the location of the desired page on the disk. 2. Find a free frame:

i) If there is a free frame, use it. ii) If there is no free frame, use a page replacement algorithm to find some page in memory that is

not really in use - the victim - and swap it out. iii) If the modify bit associated with the victim frame is set to 1, the page is written to disk; set the

valid/invalid bit of the victim page's page-table entry to invalid, to indicate that the victim frame's page is no longer in memory;

else there is no need to write it to disk. 3. Read the desired page into the (newly) free frame and update the frame and page tables. 4. Restart the user process.

=================================================================================================
Q) Page Replacement Algorithms
Goal: produce a low page-fault rate. The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

1. First In, First Out (FIFO) page replacement algorithm: Uses the time when a page was brought into memory. The page that was brought into memory first is replaced, i.e. the oldest page is chosen for replacement. We create a FIFO queue to hold all pages in memory.

                    When a page is brought into memory we insert it at the tail of the queue and we replace the page at the head of the queue

FIFO: 15 page faults.
A problem with FIFO is Belady's anomaly. For the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: with 3 frames allocated to the process, FIFO page replacement causes 9 page faults; with 4 frames allocated, FIFO causes 10 page faults. As the number of frames allocated to a process increases, page faults should decrease, but with FIFO page replacement the page faults can increase.
2. Optimal Page Replacement Algorithm: Replace the page that will not be used for the longest period of time. It is used for measuring how well other algorithms perform. It is not practical to implement the OPT algorithm, as it is not possible to know what the future references will be. Ex:

Optimal page replacement: 9 page faults.
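The FIFO behaviour and Belady's anomaly described above can be reproduced with a short simulation (an illustrative sketch, not from the notes):

```python
# FIFO page replacement; running it on the Belady reference string shows the
# anomaly: going from 3 to 4 frames increases the number of page faults.

from collections import deque

def fifo_faults(refs, frames):
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:          # no free frame: evict oldest page
                memory.discard(queue.popleft())
            memory.add(page)
            queue.append(page)                 # newest page goes to the tail
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
```

With this reference string, `fifo_faults(refs, 3)` gives 9 faults and `fifo_faults(refs, 4)` gives 10.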

3. Least Recently Used (LRU) page replacement algorithm: Replace the page that has not been used for the longest period of time. LRU makes use of the time of last use of each page. Ex:

                    Number of page faults = 12

LRU requires hardware assistance to determine the time of last use of a page. We can use 1) a stack or 2) a counter to implement the LRU page replacement algorithm.

1) Stack implementation: keep a stack of page numbers in doubly linked form. When a page is referenced, it is moved to the top of the stack. This implementation requires up to 6 pointers to be changed per reference.

2) Counter implementation: a time-of-use field is attached to each page-table entry. The CPU maintains a counter that is incremented on every memory reference; the counter value is copied into the time-of-use field of the referenced page.
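The stack idea above can be sketched with an ordered dictionary standing in for the doubly linked stack (an illustrative model, not the hardware implementation):

```python
# LRU page replacement via the stack idea: an OrderedDict plays the role of
# the doubly linked stack, with the most recently used page at the end.

from collections import OrderedDict

def lru_faults(refs, frames):
    stack, faults = OrderedDict(), 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)            # referenced: move to top of stack
        else:
            faults += 1
            if len(stack) == frames:
                stack.popitem(last=False)      # evict the least recently used page
            stack[page] = True
    return faults
```

On the Belady reference string, LRU with 3 frames gives 10 faults, and (unlike FIFO) adding frames never increases the fault count.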

Disadvantages of the LRU page replacement algorithm: updating the counter or stack must be done for every memory reference, hence LRU is slow, and the implementation requires hardware assistance.
==============================================================================
Q) LRU approximation page replacement
1) Using a reference bit 2) Additional-reference-bits algorithm 3) Second-chance algorithm 4) Enhanced second-chance algorithm 5) Counting-based page replacement -

has two schemes: 1) Least Frequently Used (LFU) 2) Most Frequently Used (MFU)

6) Page-Buffering Algorithms

Q) What is Thrashing?
Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur when either global or local page replacement is used, as described below. The solution to thrashing is the working-set strategy: if we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know how many frames a process needs, we use the locality model of process execution.

Here a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes locality. The working-set model uses a working set and a parameter ∆ = the working-set window. The set of pages in the most recent ∆ page references is the working set. Let WSSi be the working-set size for process Pi, i.e. Pi needs WSSi frames. Let D = Σ WSSi be the total demand for frames,

and m be the total number of available frames.

If D > m, thrashing occurs because some processes will not have enough frames. So if (D > m)

the OS selects a process to suspend; else the OS may initiate another process. The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.
========================================================================
Q) How does thrashing occur in global and local page replacement?
Ans: In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, possibly causing a page fault for the process that needed the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more processes queue up for the paging device,

the ready queue empties and the CPU becomes idle. So the OS introduces new processes since the CPU is free, which further increases the number of page faults.

In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs, only the frames allocated to that process are replaced. When a process starts thrashing, other processes are not affected. But since processes spend most of their time queued for the paging device, the average service time for a page fault increases; hence the effective access time increases even for a process that is not thrashing.
Q) Page Table Structure (or) Structure of the page table in memory

Hierarchical paging, hashed page tables, inverted page tables

Hierarchical Paging: If the page table is large, we break it up into multiple page tables, i.e. the page table itself is paged. A simple technique is a two-level page table.
Two-Level Paging Example: A logical address (on a 32-bit machine with 4K page size) is divided into: 1. a page number consisting of 20 bits, and 2. a page offset consisting of 12 bits. Since the page table is paged, the page number is further divided into: 1. a 10-bit page number, and 2. a 10-bit page offset. Thus a logical address is as follows:

page number | page offset
p1 (10 bits) | p2 (10 bits) | d (12 bits)

where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table. Two-Level Page-Table Scheme:

                    Address-Translation Scheme Address-translation scheme for a two-level 32-bit paging architecture
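The 10/10/12 split above is plain bit manipulation; a minimal sketch:

```python
# Splitting a 32-bit logical address for the two-level scheme above:
# 10-bit outer index p1, 10-bit inner index p2, 12-bit offset d.

def split32(addr):
    p1 = addr >> 22                 # index into the outer page table
    p2 = (addr >> 12) & 0x3FF       # index within a page of the outer table
    d = addr & 0xFFF                # offset within the page
    return p1, p2, d
```

For example, address 0x00403004 splits into p1 = 1, p2 = 3, d = 4.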

Hashed Page Tables (or) Hash table: Hashed page tables are common in address spaces larger than 32 bits. Each entry in the hash table contains a linked list of elements that hash to the same location. The hash table is indexed by hash value. Each element consists of 3 fields:

1. page number 2. frame number 3. pointer to the next element in the linked list

The algorithm works as follows: the page number is passed to a hash function to get a hash value. Locating the hash value in the hash table is easy, as the hash table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address. If there is no match, subsequent entries in the linked list are searched.

                    Inverted Page Table

An inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table holds:

process-id, page number

The logical address also has 3 parts: process-id, page number,

and offset. The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the ith entry, then i combined with the offset gives the physical address. If there is no match, the address is illegal. Advantages and disadvantages:

Decreases the memory needed to store the page tables, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to

limit the search to one - or at most a few - page-table entries. Inverted Page Table Architecture:

                    Unit-IV CPU Scheduling

Basic concepts: To maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes are kept in main memory at a given time. If the running process has to wait for IO, the OS takes the CPU away from that process and assigns the CPU to another process in the ready queue. So the CPU must be scheduled; CPU scheduling is a fundamental OS function.

CPU-IO burst cycle: Process execution begins with a CPU burst, followed by an IO burst, which is followed by another CPU burst, then another IO burst, and so on, as shown in the figure.

                    An IO bound program has many short CPU bursts A CPU-bound program has few long CPU bursts

Schedulers: modules in the OS that make scheduling decisions.
- Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
- Medium-term scheduler: selects which processes should be swapped in/out of memory.
- Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (CPU) scheduler.

The dispatcher gives CPU control to the process selected by the short-term scheduler. This function involves the following:

1. switching context 2. switching to user mode 3. jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency; this must be kept small.
Ready Queue: The ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of processes.

                    Pre-emptive Scheduling

CPU scheduling decisions take place under one of four conditions:

1. When a process switches from the running state to the waiting state. 2. When a process switches from the running state to the ready state, for example in response to an

interrupt. 3. When a process switches from the waiting state to the ready state, say at completion of IO. 4. When a process terminates.

For conditions 1 and 4 there is no choice - a new process must be selected. For conditions 2 and 3 there is a choice - continue running the current process, or select a different

one. If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-

preemptive, or cooperative. In non-preemptive scheduling a process runs until it gives up the CPU itself, i.e. when it waits for IO or when it completes. Otherwise the system is said to be pre-emptive.

Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

                    Note that pre-emptive scheduling can cause problems when two processes share data because one process may get interrupted in the middle of updating shared data structures

                    Preemption can also be a problem if the kernel is busy implementing a system call ( eg updating critical kernel data structures ) when the preemption occurs Solution wait until the system call has either completed or blocked before allowing the pre-emption But this solution is problematic for real-time systems

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. But this should be done only on very short pieces of code that take little CPU time.

Scheduling criteria to decide among CPU scheduling algorithms:
1. CPU utilization: in a real system, CPU utilization ranges from 40% to 90%. 2. Throughput: number of processes completed per unit time. 3. Turnaround time: time when the process completed minus time when the process was submitted; i.e. it includes waiting time and burst time. 4. Waiting time: amount of time a process spends waiting for the CPU. 5. Response time: time when the first response came minus time when the process was submitted.

Optimization criteria: max CPU utilization, max throughput,

min turnaround time, min waiting time, min response time.
CPU Scheduling Algorithms
1) First-Come, First-Served (FCFS) Scheduling: FCFS CPU scheduling is non-pre-emptive; processes are scheduled in the order they arrive. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages: 1. Average waiting time is quite long.

2. Convoy effect (occurs as small processes wait in the queue for a big process to leave the CPU).
Example: Process / Burst Time: P1 24, P2 3, P3 3

                    Suppose that the processes arrive in the order P1 P2 P3

The Gantt chart for the schedule is: P1 (0-24), P2 (24-27), P3 (27-30).

Waiting times: P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.
2) Shortest-Job-First (SJF) Scheduling: Schedule the process with the shortest burst time; if the CPU burst times of two processes are the same, FCFS is used.
Advantages: average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either pre-emptive or non-pre-emptive:
1. Non-pre-emptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Pre-emptive SJF (or Shortest-Remaining-Time-First, SRTF): if the newly arrived process is shorter than what remains of the currently executing process, pre-empt the currently executing process.
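The FCFS numbers in the example above (bursts 24, 3, 3, all arriving at time 0) can be checked with a short sketch:

```python
# FCFS waiting times: with all processes arriving at time 0 in submission
# order, each process waits for the bursts of everything ahead of it.

def fcfs_waits(bursts):
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)    # process waits until the CPU frees up
        clock += b
    return waits

waits = fcfs_waits([24, 3, 3])   # the example above: P1, P2, P3
```

This yields waits [0, 24, 27], average 17, matching the Gantt chart.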

Example:
Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4

SJF (non-preemptive) Gantt chart:

P1 | P3 | P2 | P4
0        7    8    12   16

Average waiting time = [0 + (8-2) + (7-4) + (12-5)]/4 = 4

Example of preemptive SJF:

Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4

SJF (preemptive) Gantt chart:

P1 | P2 | P3 | P2 | P4 | P1
0    2    4    5    7    11   16

Average waiting time = (9 + 1 + 0 + 2)/4 = 3

3) Priority Scheduling
A priority number (an integer) is associated with each process, and the CPU is allocated to the process with the highest priority (here, smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. preemptive 2. non-preemptive.

Problem: starvation - low-priority processes may never execute.
Solution: aging - as time progresses, increase the priority of waiting processes.

4) Round Robin (RR)
Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n-1)q time units.
Performance:
1. If q is very large, RR behaves like FCFS.
2. If q is small, q must still be large relative to the context-switch time, otherwise the overhead is too high.

Example of RR with time quantum = 4:
Process  Burst Time
P1       24
P2       3
P3       3

The Gantt chart is:

P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1
0    4    7    10   14   18   22   26   30

Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 ≈ 5.67

5) Multilevel Queue Scheduling
The ready queue is partitioned into separate queues. For example, it can be divided into 2 queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, Round Robin can be used in the foreground queue and FCFS in the background queue. Scheduling must also be done between the queues, which can be done in 2 ways:

1. Fixed-priority scheduling: the foreground queue has the highest priority; all processes in the foreground queue must complete (the queue must be empty) before any process in the background queue gets the CPU. Disadvantage: starvation of background processes.

2. Time slice: each queue gets a certain share of CPU time, which it can schedule among its own processes, e.g. 80% to the foreground queue (RR) and 20% to the background queue (FCFS).

6) Multilevel Feedback Queue Scheduling
A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes stay in the highest-priority queue. If a process uses too much CPU time, it is moved to a lower-priority queue; if a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. For example, let there be 3 queues Q0, Q1, Q2:

1. Q0: RR with time quantum 8 milliseconds
2. Q1: RR with time quantum 16 milliseconds
3. Q2: FCFS

Scheduling:

1. A process entering the ready queue is put in Q0. When it gains the CPU, it receives 8 ms; if it does not finish within 8 ms, it is moved to Q1.
2. In Q1 the process receives an additional 16 ms; if it still does not complete, it is moved to Q2.

Q) Synchronization
Ans: Several processes run concurrently in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

A classic example of process synchronization is the Producer-Consumer (or Bounded-Buffer) problem: the producer process must not add data when the buffer is full, and the consumer process must not take data when the buffer is empty.

The code for the producer process can be written as follows:

while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ;  /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero: counter is incremented every time a new item is added to the buffer and decremented every time an item is removed. in is an index that always points to the next empty slot after the last filled slot; out is an index that points to the first filled slot, at the head of the circular queue.

The code for the consumer process is shown below:

while (true) {
    while (counter == 0)
        ;  /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The above producer and consumer code can produce a RACE CONDITION: a situation where the outcome of execution of several cooperating processes depends on the order in which their accesses to shared data take place. For example, take counter = 5.

Given counter = 5, if the producer produces one item and the consumer consumes one item, the correct final value of counter is 5. But counter++ and counter-- are each implemented as several machine instructions (load, modify, store), and depending on how these instructions interleave the final value can be counter = 4 or counter = 6, i.e. inconsistent results.

Solution to avoid the race condition: ensure that only one process at a time updates the shared variable (here, counter).

Q) Critical section problem
Ans: Each process has a segment of code called its critical section, in which it accesses variables shared with cooperating processes. To avoid data inconsistency, two processes must not execute their critical sections at the same time. For this to happen:
1. Each process must request permission to enter its critical section - the entry section.
2. The end of the critical section is the exit section.
3. The rest of the code after the critical section is the remainder section.

A solution to the critical-section problem must satisfy three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes not in their remainder sections participate in deciding which process enters its critical section next, and the decision cannot be postponed indefinitely.
3. Bounded waiting: there is a bound on the number of times other processes may enter their critical sections after a process has requested entry and before that request is granted; no process waits indefinitely.

Q) Peterson's solution to the critical-section problem (software-based solution)
Ans: Peterson's solution is for two processes. It uses two shared variables: flag[i], which is TRUE when process i wants to enter its critical section, and turn, which indicates whose turn it is. A sample trace for processes P1 and P2:

Time  Process 1                              Process 2
T0    do { flag[1] = TRUE; turn = 2;         do { flag[2] = TRUE;
T1                                           turn = 1;
T2    while (flag[2] && turn == 2);          while (flag[1] && turn == 1);
      (turn is now 1, so the loop exits)     (loops until flag[1] becomes FALSE)
T3    enters critical section
T4    flag[1] = FALSE;                       enters critical section
T5    enters remainder section               flag[2] = FALSE;
      } while (TRUE);                        enters remainder section
                                             } while (TRUE);

10.1 File Concept
Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure, which depends on its type.

                    File Attributes

1. Name: name of the file, for user understanding.
2. Identifier: used by the OS to identify a file (e.g. inode number).
3. Type: text files, executable files, binary files, etc.
4. Location: location of the file on the disk.
5. Size: current size of the file.
6. Protection: controls who can read, write, or execute the file.
7. Time, date, and user identification: when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier locates the other file attributes. The directory structure is kept on disk.

                    File Operations

The file abstract data type supports many common operations:
1. Creating a file: find free space on disk and add an entry to the directory.
2. Writing a file: write data at the current file-position pointer and update the pointer.
3. Reading a file: read data at the current file-position pointer and update the pointer.
4. Repositioning within a file (file seek): change the pointer to a given value.
5. Deleting a file: free the space allocated to the file and remove its directory entry.
6. Truncating a file: delete the data and update the file size.

                    Open File Table

Most operating systems require that files be opened before access and closed after access. Information about all files currently open in the system is stored in a system-wide open-file table maintained by the OS, and information about the files opened by a given process is stored in that process's own open-file table. The open-file table has the following information:

1. File pointer: records the current position in the file for the next read or write.
2. File-open count: how many times the file is currently open (possibly by different processes simultaneously) and not yet closed. When this counter reaches zero, the entry can be removed from the table.
3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With advisory locks, software developers must make sure locks are acquired and released appropriately.
4. With mandatory locks, the OS itself ensures that locking is respected.
5. UNIX uses advisory locks; Windows uses mandatory locks.

                    File Types

Implementing file types: a common technique is to include the type as part of the file name. The name is split into 2 parts:

1. file name
2. extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the operations that can be done on it. For example, when a user opens Microsoft Word and chooses File > Open, the user can specify the file name without the extension; the application looks for a file with the given name and the extension it expects. In this sense, extensions are conventions used by applications, not enforced by the operating system. UNIX uses a crude magic number stored at the beginning of some files to indicate the type of the file. In Mac OS X, a file produced by a word processor records that word processor's name as its creator; when the user double-clicks the file, the creator application opens it. Some common file types are listed below.

File Structure: Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so the OS can determine where in memory to load the file and the location of the first instruction.

If the OS supports many file formats, its size and complexity increase. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, finding the first executable statement, etc.).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information related to the UI, such as icons and button images, and can be modified independently. The data fork contains program code or data.

                    Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes each.

Internally, files are organized in logical units: a single byte, a data record, or a structure.

The number of logical units that fit into one physical block determines the packing. For example, if a student file is organized as records of 64 bytes each and the disk block size is 512 bytes, then packing = 512/64 = 8, so 8 records can be stored per block.

Internal fragmentation may occur when the last block of a file is only partially full.

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: records are read and written in order, one after another; we cannot jump to an arbitrary record.

A sequential-access file emulates magnetic-tape operation and generally supports the following operations:
- read next: read a record and advance to the next position
- write next: write a record and advance to the next position
- rewind
- skip n records: may or may not be supported; n may be limited to positive values or to +/- 1

2. Direct access (or relative access): a file is made up of fixed-length records, so any record can be accessed directly, with no restriction on the order of reading or writing. Direct-access files are useful when large amounts of data must be accessed randomly, e.g. a database.

Operations supported (n is a relative block number, i.e. relative to the beginning of the file):
1. read n: read record number n
2. write n: write record number n
3. jump to record n: n could be 0 or the end of the file
4. query current record: used to return to this record later
Sequential access can easily be emulated on top of direct access; the inverse is awkward and inefficient.

3. Other access methods can be built on top of the direct-access method. We construct an index for the file containing pointers to the various blocks; to find a record, we first search the index and then use the pointer to access the file directly.

Suppose a student file has 120,000 records sorted by student id and 64 records fit in each block. We can build an index holding the first student id of each block. The number of blocks needed to store 120,000 records is 120,000/64 = 1,875 (roughly 2,000), so the index has about 1,875 entries. The index is kept in main memory so that searching is fast.

If the index itself becomes too large, we create an index for the index file (e.g. a B-tree).

                    Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                    Three major methods of allocating disk space are

1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If a file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file records the address of the starting block (b) and the length (n).

                    Advantages

1. Direct access to block i is easy: it is simply block b + i. Sequential access is equally easy because the file is stored contiguously.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

                    Disadvantages

1. It is difficult to find space for a new file or to extend an existing file. First-fit and best-fit strategies are used to find a free hole (a run of free contiguous blocks) in the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though enough free blocks exist for a new file, they are not contiguous and hence cannot be allocated to it. Solution: copy all files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but it is time-consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the whole space may not be used at the beginning, which wastes space.

If the size of the file is not known in advance:
(i) If we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the old space. This works as long as a large enough hole exists, but is time-consuming.
(ii) If we overestimate the amount of space needed, much of it goes unused, leading to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous-allocation scheme: a contiguous chunk is allocated at first, and when the file needs more blocks, another chunk of contiguous space, known as an extent, is added. The file's location is then recorded as a start block and a block count, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable size, external fragmentation occurs.

2. Linked Allocation
Each file is a linked list of disk blocks, which may be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; for an empty file, the pointer is nil and the size field is 0.

To add more data to an existing file, a free block is found, the data is written to it, and the block is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. Easy to find space for a new file or to extend an existing file.
2. The size of a file need not be declared when it is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is high.
2. The number of disk seeks required is high.
3. Pointers consume a noticeable percentage of the file's disk space.
4. Inefficient for direct access: to access the i-th block, i disk reads are required.
5. Not reliable: if a pointer is lost or damaged, the rest of the file is unreachable.

A solution to disadvantage 3 is to group multiple blocks into clusters (e.g. 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simpler, improves disk throughput by reducing seeks, reduces the space wasted on pointers (fewer are needed), and simplifies free-list management. Disadvantage: internal fragmentation increases, since more space is wasted when a cluster is only partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the file-allocation table (FAT). A section of disk at the beginning of each volume contains the table, which has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by table value 0.

                    Advantages

1. Easy to find space for a new file or to extend an existing file.
2. Random-access time is improved, since the chain of blocks can be followed in the table.

Disadvantage: the number of disk seeks can increase; unless the FAT is cached, the disk head must move to the FAT to find the location of a block and then move to the block itself.

                    3 Indexed allocation

Each file has its own index block, an array of disk-block addresses: the i-th entry in the index block points to the i-th block of the file.

                    The directory contains the address of the index block

                    When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found, the data is written to it, and the block's address is put in the index block.

                    Advantages

1. Easy to find space for a new file or to extend an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

                    Disadvantage

1. Seek time is high.
2. The number of disk seeks required is high.
3. Not reliable: losing the index block makes the whole file unreachable.
4. The index block must be kept in memory. If that memory is not available, we must read the index block and then the desired data block, i.e. 2 disk accesses per read. With multilevel indexes, accessing a block near the end of the file may require reading several index blocks before the data block.
5. The pointer overhead of the index block is greater than that of linked allocation: for a file of only one or two blocks, an entire index block is allocated just to hold those one or two pointers.

Various mechanisms are used to decide the size of the index block:

1. Linked scheme: an index block is one disk block. It contains a small header with the name of the file and a set of (say) the first 100 disk-block addresses; the last entry holds the address of another index block, for large files.
2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

                    3 combined scheme

For example, say there are 15 pointers in the file's inode. The first 12 point directly to blocks that contain file data. The next 3 point to indirect blocks: the first points to a single indirect block, an index block containing the addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block.

                    Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                    The free space list can be implemented as

1. Bit vector (bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, on a disk where blocks 2, 3, 4, 5, and 8 are free, the free-space bit map would be 001111001000...

Advantages: simple, and it is easy to find runs of contiguous free blocks for contiguous files.

                    Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a large disk there.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1 TB disk could cause thousands of blocks of bit-map updates, because the freed data blocks may be scattered all over the disk.

2. Linked list: the free-space list is implemented by linking all free disk blocks together, with a pointer to the first free block stored in a special location on disk. For example, on a disk where blocks 2, 3, 4, 5, and 8 are free, we keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 to block 4, block 4 to block 5, and so on.
Advantage: no wasted space.
Disadvantage: contiguous space cannot be found easily; traversing the free-space list requires reading each block, which costs substantial I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free; the last contains the addresses of another n free blocks, and so on.
Advantage: the addresses of a large number of free blocks can be found quickly.

4. Counting: the free-space list keeps, for each entry, the address of a first free block and the count n of free contiguous blocks that follow it. Each entry thus consists of a disk address and a count. An entry requires more space, but the overall list is shorter as long as counts are greater than 1. The entries can be stored in a B-tree for efficient insertion, deletion, and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs; each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks, but the space map is a log of all block activity in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory, updates it, and then updates the free-space information on disk.

Q) Synchronization Hardware
The hardware approach to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. the TestAndSet() hardware instruction
2. the Swap() instruction

1. The TestAndSet instruction is defined as follows:

boolean TestAndSet(boolean *target) {
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

Solution using TestAndSet: the shared variable lock is initialized to FALSE.

do {
    while (TestAndSet(&lock))
        ;  /* do nothing */
    /* critical section */
    lock = FALSE;
    /* remainder section */
} while (TRUE);

2. The Swap() instruction is defined as follows:

void Swap(boolean *a, boolean *b) {
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

Solution using Swap: the shared boolean variable lock is initialized to FALSE, and each process has a local boolean variable key.

do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    /* critical section */
    lock = FALSE;
    /* remainder section */
} while (TRUE);

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee of the order in which they will succeed, and any one process could have the bad luck to wait forever for its turn. Since there is no guarantee about the relative speeds of processes, a very fast process could release the lock, race through its remainder section, and re-acquire the lock before a slower process gets a chance; as more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.

The figure below illustrates a solution using TestAndSet that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes contending for critical sections.

                    Bounded-waiting mutual exclusion with TestAndSet( )

On exit, a process looks in order (starting with the next process in the list) for a process that is waiting; if it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting one specific process while continuing to block all the others.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: wait() and signal().

Wait primitive:

wait(S) {
    while (S <= 0)
        ;  /* do nothing */
    S--;
}

Signal primitive:

signal(S) {
    S++;
}

Each process that wishes to use a resource performs a wait() operation; when a process releases a resource, it performs a signal() operation.

Semaphore usage:
1. A binary semaphore (mutex lock) can take only the values 0 and 1.

2. A counting semaphore can take any integer value and is used to count the remaining instances of a resource. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources; when the counter is 0 (or negative, in some implementations), a process blocks until another process frees a resource and increments the semaphore with a signal() call.
For example, suppose there are 3 resources and 4 processes; the counting semaphore starts at 3.
If process P1 requires a resource, it performs wait(); now the semaphore = 2.
If process P2 requires a resource, it performs wait(); now the semaphore = 1.
If process P3 requires a resource, it performs wait(); now the semaphore = 0.
If process P4 requires a resource, it performs wait(); P4 waits until a resource is available.

                    3 Semaphores can also be used to synchronize certain operations

Semaphore implementation: while a process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, because the process spins while waiting. Semaphores can also enforce ordering between processes: for example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First we create a semaphore named synch, shared by the two processes and initialized to zero. Then in process P1 we insert the code:

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal(). Each semaphore has an integer value and maintains a queue of waiting processes.

Semaphore implementation with no busy waiting: to overcome busy waiting, the waiting process blocks itself (it is placed on the semaphore's waiting queue) and is later restarted by a wakeup() operation, which moves it from the waiting queue to the ready state.

Problems due to semaphores:
1. Deadlock: multiple processes are blocked, each waiting for a resource that can be freed only by one of the other (blocked) processes.

2. Starvation: one or more processes are blocked forever and never get a chance to enter the critical section. For example, if the algorithm for choosing which process to remove from the semaphore's waiting queue in the signal() call is unspecified and a LIFO queue is used, the first process to start waiting may never get a turn.

3. Priority inversion: let processes L, M, H have priorities L < M < H. Process H waits for a resource held by process L, so L is running. When process M arrives, it preempts L, and M runs; thus M runs before H even though M has lower priority than H. To avoid this, L temporarily inherits H's priority (priority inheritance), so that M cannot preempt L.

Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem
Consider:
- n buffers, each of which holds one item
- a producer process, which creates items (one at a time)
- a consumer process, which consumes them (one at a time)

The producer cannot produce an item if all buffers are full, and the consumer cannot consume if all buffers are empty. Since both processes modify the buffer, synchronization is required. We use 3 semaphores:
1. empty = n (counts the number of empty buffers)
2. full = 0 (counts the number of filled buffers)
3. mutex = 1 (provides mutual exclusion for access to the buffer)

Producer process:
    do {
        ...
        produce an item
        ...
        wait(empty);
        wait(mutex);
        ...
        add the item to the buffer
        ...
        signal(mutex);
        signal(full);
    } while (1);

Consumer process:
    do {
        wait(full);
        wait(mutex);
        ...
        remove an item from the buffer
        ...
        signal(mutex);
        signal(empty);
        ...
        consume the item
        ...
    } while (1);
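The three-semaphore scheme (empty, full, mutex) can be exercised with Python's threading.Semaphore. This is an illustrative sketch; the buffer size and item counts are arbitrary choices:

```python
import threading
from collections import deque

N = 4                                # number of buffer slots (illustrative)
buf = deque()
empty = threading.Semaphore(N)       # counts empty slots
full = threading.Semaphore(0)        # counts filled slots
mutex = threading.Semaphore(1)       # mutual exclusion on the buffer

def producer(items):
    for item in items:
        empty.acquire()              # wait(empty)
        mutex.acquire()              # wait(mutex)
        buf.append(item)             # add the item to the buffer
        mutex.release()              # signal(mutex)
        full.release()               # signal(full)

def consumer(n, out):
    for _ in range(n):
        full.acquire()               # wait(full)
        mutex.acquire()              # wait(mutex)
        out.append(buf.popleft())    # remove an item from the buffer
        mutex.release()              # signal(mutex)
        empty.release()              # signal(empty)
```

With one producer and one consumer on a FIFO buffer, the consumer receives the items in production order.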

2. The Readers-Writers Problem: a data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: a new reader never waits merely because a writer is waiting; writers priority: if a writer is waiting to access the data, no new reader may start reading.

Either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses a shared counter and two semaphores:
1. readcount = 0 (an integer, not a semaphore, counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    ...
    reading is performed
    ...
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);

Writer process:
    do {
        wait(wrt);
        ...
        writing is performed
        ...
        signal(wrt);
    } while (TRUE);
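The readers-priority protocol translates directly to Python semaphores. A minimal sketch (the shared data and thread counts are illustrative):

```python
import threading

data = {"value": 0}                  # the shared data item (illustrative)
readcount = 0                        # plain integer, guarded by mutex
mutex = threading.Semaphore(1)       # protects readcount
wrt = threading.Semaphore(1)         # exclusive access for writers

def reader(results):
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:               # first reader locks out writers
        wrt.acquire()
    mutex.release()
    results.append(data["value"])    # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:               # last reader lets writers back in
        wrt.release()
    mutex.release()

def writer(v):
    wrt.acquire()
    data["value"] = v                # writing is performed
    wrt.release()
```

Because wrt is held for the whole span from the first reader to the last, a writer never runs in the middle of a group of readers.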

3. The Dining Philosophers Problem: n philosophers sit around a table, alternately thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5];  /* initially all values are 1 */

Philosopher i:
    do {
        wait(chopstick[i]);
        wait(chopstick[(i+1) % 5]);
        ...
        eat
        ...
        signal(chopstick[i]);
        signal(chopstick[(i+1) % 5]);
        ...
        think
        ...
    } while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it can create a deadlock: if all philosophers pick up their left chopstick at the same time, each waits forever for the right one.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: in asymmetric multiprocessing (ASMP), not all CPUs are equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations. Such systems lack the flexibility of an SMP system to assign processes to the least-loaded CPU.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within a time constraint.

                    Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions. 2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples:

digital audio and multimedia systems; digital telephones

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e. they spread decision-making across the different computers in the system, so that the failure of an individual computer or of the network does not cripple operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; all of that is handled automatically by the operating system.

                    Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

                    DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location Modern desktop systems support multiprogramming Common examples are Linux FreeBSD Windows 8 and the Macintosh operating system

HANDHELD SYSTEMS (palmtop computers): two of the most popular operating systems for handhelds are Symbian OS and Android. They have very little memory.


                    CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system The components of a cluster are usually connected to each other through fast local area networks each node (computer used as a server) running its own instance of an operating system

                    Q) Modern operating system

New developments in hardware, applications, and security threats led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, Internet and Web access, and client-server computing.

Developments in modern operating systems can be categorized as:

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: since modern systems often contain multiple processors, an SMP operating system provides greater efficiency when several are present. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                    SMP schedules processes or threads across all of the processors

                    Advantages

1. Increased performance: if some portion of the program can run in parallel, the performance of an SMP OS increases.

2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can scale with the number of processors used.

4. Distributed operating systems: distributed systems allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux Operating System: Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization. Components of a Linux system:


                    Linux Operating System has primarily three components as shown in the below diagram

Kernel - the core part of Linux, responsible for all major activities of the operating system. It interacts directly with hardware and hides low-level hardware details from system and application programs.

System Library - system libraries are special functions or programs used by application programs to access the kernel's features.

System Utility - system utility programs are responsible for specialized, individual-level tasks such as updating log files and accepting login requests from terminals.

                    Kernel Modules Sections of kernel code that can be compiled loaded and unloaded independent of the rest of the kernel A kernel module may typically implement a device driver a file system or a networking protocol

There are three components to Linux module support:
1. Module management: supports loading modules into memory and allows them to talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver is available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode
Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code executes as a single process in a single address space, requires no context switch, and hence is very efficient and fast. The kernel runs processes, provides system services to them, and gives them protected access to hardware. Code that is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code; they use the system libraries to ask the kernel to perform low-level tasks.

Basic Features
The following are some of the important features of the Linux operating system:

Portable - portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on almost any kind of hardware platform.

Open Source - the Linux source code is freely available and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system, meaning multiple users can access system resources (memory, RAM, application programs) at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system, perform various types of operations, call application programs, and so on.


Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

Architecture: the Linux system architecture consists of the following layers:

Hardware layer - hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - the core component of the operating system; interacts directly with hardware and provides low-level services to the upper-layer components.

Shell - an interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - utility programs that give the user most of the functionality of an operating system.

Q) Windows XP operating system
1. Extensibility - layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic-link library (DLL) called the "hardware abstraction layer" (HAL).
3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating-system resources.
4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
5. Performance - XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. Designed for symmetric multiprocessing.
6. International support - supports different locales via the national language support (NLS) API.

XP Architecture: 1. a layered system of modules; 2. protected mode - hardware abstraction layer (HAL), kernel, executive; 3. user mode - a collection of subsystems: a. environmental subsystems emulate different operating systems; b. protection subsystems provide security functions.

                    XP architecture


Q) Windows network operating system: an example is the Windows 2000 operating system.

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne, Operating System Concepts, Addison-Wesley.





[Figure: resource-allocation graph with processes P1, P2 and resources R1, R2]

Pn is allocated Rn-1 and Pn is waiting for resource Rn, which is held by P0, so F(Rn) < F(R0).

Hence, by transitivity, F(R0) < F(R0), which is impossible. Hence our assumption that a circular wait exists is FALSE.

===============================================================================
Q) Resource-Allocation Graph
Deadlocks can be understood more clearly through the use of resource-allocation graphs, which have the following properties:

1. Resource types are represented as square nodes on the graph. Dots inside a square node indicate the number of instances of that resource (e.g. two dots might represent two laser printers).
2. Processes are represented as circles.
3. Request edges: if P1 has requested R1, a directed edge from P1 to R1 is a request edge.
4. Assignment edges: a directed edge from R2 to P1 indicates that resource R2 has been allocated to process P1 and that P1 is currently holding R2. Note that a request edge is converted into an assignment edge when the request is granted.
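When every resource type has a single instance, checking such a graph for deadlock amounts to detecting a cycle in the directed graph of request and assignment edges. A minimal DFS sketch (the node names and graph representation are illustrative):

```python
def has_cycle(edges):
    """DFS cycle detection over a directed graph given as
    {node: [successor, ...]}; nodes are process/resource names."""
    WHITE, GRAY, BLACK = 0, 1, 2     # unvisited / on current path / done
    color = {}

    def visit(u):
        color[u] = GRAY
        for v in edges.get(u, []):
            c = color.get(v, WHITE)
            if c == GRAY:            # back edge: a cycle exists
                return True
            if c == WHITE and visit(v):
                return True
        color[u] = BLACK
        return False

    return any(color.get(u, WHITE) == WHITE and visit(u) for u in edges)
```

A request edge P -> R plus an assignment edge R -> P' are just ordinary directed edges here, so a hold-and-wait loop shows up as a graph cycle.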

If a resource-allocation graph contains a cycle and each resource has only a single instance, then a deadlock exists. If a resource category contains more than one instance, a cycle in the resource-allocation graph indicates the possibility of a deadlock but does not guarantee one.
==================================================================
Q) Deadlock Avoidance
For each resource request, the system can decide whether the request should be granted or not. To make this decision, the system must have information such as:

1. the resources currently available
2. the resources currently allocated to each process
3. the future requests and releases of each process
4. the maximum number of resources each process may need

Given this information, it is possible to construct an algorithm that ensures the system never enters a deadlocked state. There are two deadlock-avoidance algorithms:

1. Resource-Allocation Graph Algorithm
2. Banker's Algorithm

Safe state: a system is in a safe state if there exists a safe sequence of processes <P0, P1, P2, ..., PN> such that, for each Pi, the resources Pi may still request can be satisfied by the currently available resources plus the resources held by all processes Pj with j < i. All safe states are deadlock-free.

                      Unsafe state If a safe sequence does not exist then the system is in an unsafe state which MAY lead to deadlock

1. Resource-Allocation Graph Algorithm: resource-allocation graphs can be used to avoid deadlocks only if there is a single instance of each resource type. In this case, unsafe states can be recognized and avoided by adding claim edges, denoted by dashed lines, which point from a process to a resource that it may request in the future. All claim edges are added before the process starts. When a process makes a request, the claim edge Pi -> Rj is converted to a request edge; when the resource is released, the assignment edge changes back to a claim edge. This approach works by denying any request that would produce a cycle in the resource-allocation graph, taking claim edges into account. Consider, for example, the resource-allocation graph as shown.

If P2 requests resource R2, then the claim edge P2 -> R2 is converted to a request edge, as follows:

The resulting resource-allocation graph would have a cycle in it, so the request cannot be granted.

Q) Banker's Algorithm (deadlock-avoidance algorithm) with example
There are 12 tape drives. Let the current state of the system be as shown below:

Process   Allocated   Max Need   Need = Max Need - Allocated
P0        5           10         5
P1        2           4          2
P2        2           9          7

Available = 12 - (5 + 2 + 2) = 3

Resource-Request Algorithm: when process P2 requests 1 tape drive, we run the resource-request algorithm to check whether the request should be granted. The request is granted only if, after granting it, all processes in the system can still complete. We check:
1. Is the request of P2 <= the need of P2? 1 <= 7, so TRUE.
2. Is the request of P2 <= Available? 1 <= 3, so TRUE.
3. Pretend the request is granted for P2. The current state becomes:

Process   Allocated   Max Need   Need
P0        5           10         5
P1        2           4          2
P2        2+1=3       9          7-1=6

Available = 3 - 1 = 2. Now run the safety algorithm to check whether the system is in a safe state.

Safety Algorithm:
1. Let WORK = Available = 2.
2. Find an unfinished process whose need <= WORK.

Check P0: need of P0 = 5, Work = 2. Is 5 <= 2? FALSE.
Check P1: need of P1 = 2, Work = 2. Is 2 <= 2? TRUE, so P1 can finish. When P1 finishes, Work = Work + Allocated to P1 = 2 + 2 = 4.
Check P0 again: need of P0 = 5, Work = 4. Is 5 <= 4? FALSE.
Check P2: need of P2 = 6, Work = 4. Is 6 <= 4? FALSE.

Neither P0 nor P2 can complete, so the system is in an unsafe state, and the request for 1 tape drive by P2 is not granted.
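The tape-drive example can be checked mechanically. The sketch below implements the safety loop for a single resource type and reproduces the numbers from the tables above:

```python
def is_safe(available, allocated, need):
    """Safety check for a single resource type: repeatedly finish any
    process whose remaining need fits in work, reclaiming its allocation."""
    work = available
    finish = [False] * len(need)
    progress = True
    while progress:
        progress = False
        for i in range(len(need)):
            if not finish[i] and need[i] <= work:
                work += allocated[i]     # process i finishes, releases drives
                finish[i] = True
                progress = True
    return all(finish)
```

Running it on the example state shows the original state is safe (sequence P1, P0, P2), while the state after pretending to grant P2 one more drive is not.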

2. Banker's Algorithm: for resources with more than one instance, the resource-allocation-graph method does not work, so we use the Banker's algorithm. When a process starts up, it must state in advance the maximum allocation of resources it may request. When a request is made, the scheduler determines whether granting the request would leave the system in a safe state; if not, the process must wait until the request can be granted safely. The Banker's algorithm relies on several key data structures (where n is the number of processes and m is the number of resource categories):

o Available[m] indicates how many instances of each resource are currently available.
o Max[n][m] indicates the maximum demand of each process for each resource.
o Allocation[n][m] indicates the number of instances of each resource allocated to each process.
o Need[n][m] indicates the remaining resources needed of each type by each process (note that Need[i][j] = Max[i][j] - Allocation[i][j] for all i, j).
For simplification, one row of the Need matrix, Need[i], can be treated as a vector corresponding to the needs of process i, and similarly for Allocation and Max.

Safety Algorithm: in order to apply the Banker's algorithm, we first need an algorithm for determining whether or not a particular state is safe. This algorithm determines whether the current state of a system is safe using the following steps:

1. Let Work and Finish be vectors of length m and n respectively. Work is a working copy of the available resources; Finish is a vector of booleans indicating whether a particular process can finish. Initialize Work = Available and every element of Finish to false.

2. Find an i such that both (A) Finish[i] == false and (B) Need[i] <= Work. Such a process has not finished but could finish with the given available working set. If no such i exists, go to step 4.

3. Set Work = Work + Allocation[i] and set Finish[i] to true. This corresponds to process i finishing and releasing its resources back into the work pool. Then loop back to step 2.

4. If Finish[i] == true for all i, then the state is safe, because a safe sequence has been found.

Resource-Request Algorithm (the Banker's algorithm): now that we have a tool for determining whether a particular state is safe, this algorithm determines whether a new request is safe, and grants it only if it is safe to do so. When a request is made (that does not exceed the currently available resources), pretend it has been granted and then see whether the resulting state is safe. If so, grant the request; if not, deny it, as follows:

1. Let Request[n][m] indicate the number of resources of each type currently requested by each process. If Request[i] > Need[i] for any process i, raise an error condition.

2. If Request[i] > Available, then process i must wait for resources to become available; otherwise the process can continue to step 3.

3. Check whether the request can be granted safely by pretending it has been granted and then seeing whether the resulting state is safe. If the resulting state is safe, grant the request; otherwise the process must wait until its request can be granted safely. The procedure for granting a request (or pretending to, for testing purposes) is:

Available = Available - Request
Allocation = Allocation + Request
Need = Need - Request
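The full vector form (n processes, m resource types) of the safety and resource-request algorithms can be sketched as below. The matrices used in the usage example are a commonly used illustrative state, not taken from these notes:

```python
def safe_state(available, allocation, need):
    """Vector Banker's safety check: m resource types, n processes."""
    work = list(available)
    finish = [False] * len(need)
    progress = True
    while progress:
        progress = False
        for i, row in enumerate(need):
            if not finish[i] and all(r <= w for r, w in zip(row, work)):
                # Process i can finish; reclaim its allocation into work
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progress = True
    return all(finish)

def try_request(i, request, available, allocation, need):
    """Resource-request algorithm: pretend to grant, keep only if safe.
    Returns True if the request may be granted, False if i must wait."""
    if any(r > n for r, n in zip(request, need[i])):
        raise ValueError("process exceeded its declared maximum")
    if any(r > a for r, a in zip(request, available)):
        return False                     # must wait: not enough available
    # Pretend the request is granted (work on copies, do not mutate state)
    avail2 = [a - r for a, r in zip(available, request)]
    alloc2 = [row[:] for row in allocation]
    need2 = [row[:] for row in need]
    alloc2[i] = [a + r for a, r in zip(alloc2[i], request)]
    need2[i] = [n - r for n, r in zip(need2[i], request)]
    return safe_state(avail2, alloc2, need2)
```

A denied request leaves the stored state untouched, since the pretend-grant is done on copies.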

                      Unit III Memory management

Just as processes share the CPU, they also share physical memory. The memory-management unit of the OS takes care of memory allocation, deallocation, and related issues. A program must be brought into memory for it to run. Addresses are of two types:

i) relocatable (relative) addresses, expressed with respect to the beginning of the program; ii) absolute addresses.

Q) Address Binding
Ans: Binding means mapping the logical address space to the physical address space. Address binding can happen at three different stages:

Compile time: if it is known at compile time where in memory the program will be placed, the compiler generates absolute addresses; otherwise the compiler generates relocatable addresses.

Load time: the loader binds the relocatable addresses generated by the compiler to absolute addresses, so binding is done at load time. If binding is done at compile or load time, the physical and logical addresses are the same.

Execution time: if address binding is done at run time, the process can be moved during its execution from one memory segment to another. Here logical addresses are called virtual addresses. Run-time mapping of virtual to physical addresses is done by a hardware device called the memory-management unit (MMU).

Q) Logical vs Physical Address Space
Ans: A logical address is generated by the CPU and is also referred to as a virtual address. A physical address is the address seen by the memory unit. The set of all logical addresses is the logical address space; the set of all physical addresses is the physical address space.

Q) Memory-Management Unit (MMU)
Ans: The MMU is a hardware device that maps virtual to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by the CPU before the contents of memory are accessed. The user program deals only with logical addresses; it never sees the real physical addresses.

Q) Dynamic Loading
Ans: Since physical memory is small, it may not be possible for the entire program to be in main memory, so we can use dynamic loading. It is the responsibility of users to design their programs to take advantage of dynamic loading. Only the main function is loaded into main memory; when main() calls another function, it checks whether that function is in main memory. If not, the loader loads the desired function into main memory and updates the program's address table. A routine is not loaded until it is called, so we achieve better memory-space utilization: an unused routine is never loaded.

Q) Dynamic Linking
+ Linking is postponed until execution time.
+ A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
+ The stub replaces itself with the address of the routine and executes the routine.
+ The operating system checks whether the routine is in the process's memory address space.
+ Dynamic linking is particularly useful for libraries.

Q) Overlays
Ans: Overlays are needed when a process is larger than the amount of main memory allocated to it. Only the instructions and data that are needed at any given time are kept in main memory. Overlays can be implemented by the user; the programming design of an overlay structure is complex. Overlays for a two-pass assembler are shown in the figure.

Q) Swapping
Ans: A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. The backing store is a fast disk large enough to store copies of all memory images of all users. Priority-based scheduling uses a variant of swapping called roll out, roll in: if a higher-priority process arrives, the memory manager swaps out a lower-priority process and swaps in the higher-priority one; when the higher-priority process finishes, the lower-priority process can be swapped back into main memory. A process that is swapped out is swapped back into the same memory space it occupied previously. The major part of the swap time is transfer time. We can swap only idle processes; we cannot swap a process that is waiting for I/O.

Q) Contiguous memory allocation
Ans: Each process is contained in a single contiguous section of memory.

1. Fixed-size partitions (single-partition allocation): divide main memory into fixed-size partitions, where each partition may contain exactly one process. The relocation register contains the starting physical address of the partition; the limit register contains the range of legal logical addresses. Every address generated by the CPU is checked as follows:

if the logical address < limit register, the logical address is added to the relocation register to obtain the corresponding memory address; otherwise a trap to the OS is generated. Since every address is checked, the OS and other user programs are protected from being modified by the running process.
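The limit/relocation check can be sketched in a few lines; the register values in the test are illustrative:

```python
def relocate(logical, relocation, limit):
    """Sketch of the hardware check: every CPU-generated logical address
    is compared with the limit register, then offset by the relocation
    register; an out-of-range address traps to the OS."""
    if logical >= limit:
        raise MemoryError("trap: address out of range")   # trap to the OS
    return logical + relocation
```

Every memory reference goes through this check, which is what protects the OS and other processes from a stray address.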

                      2 Multiple-partition allocation

1) Fixed-size partitions: divide main memory into fixed-size blocks. Here the memory allocated to a process may be larger than required; the difference between the allocated memory and the requested memory is called internal fragmentation, i.e. unused memory inside the partition.

0    P1    Block 0 (addresses 0 to 3)
4    P1    Block 1 (addresses 4 to 7)
8          Block 2 (addresses 8 to 11)
12         Block 3 (addresses 12 to 15)

If P1 requires just 5 addresses, it is still allocated 2 blocks, i.e. 8 addresses, so the remaining 3 addresses are left unused.
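The arithmetic of this example generalizes directly; a small sketch:

```python
import math

def internal_fragmentation(request, block_size):
    """Memory wasted inside the last allocated block when a request
    is rounded up to whole fixed-size blocks."""
    blocks = math.ceil(request / block_size)   # whole blocks allocated
    return blocks * block_size - request       # unused space inside them
```

For the example above, a request of 5 addresses with 4-address blocks takes 2 blocks and wastes 3 addresses.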

2) Variable-size partitions: here main memory is divided into partitions of variable sizes. The operating system maintains information about:

a) allocated partitions b) free partitions (holes)

In the beginning, all main memory is empty and is considered one large block of available memory, a hole.

A process is given only exactly as much memory as it requires. When a process terminates, it releases its memory, which can then be allocated to another process.

                      Memory is allocated to processes until finally no available block of memory (or hole) is large enough to hold the next processExternal fragmentation exists when there is enough memory to satisfy a request but the available memory are not contiguous Wastage of memory outside the partitionOne Solution to problem of external fragmentation is

                      a Compaction shuffle the memory contents so as to place all free memory together into one large block

===============================================================================
Q) Dynamic Storage-Allocation Problem (or) Most commonly used strategies to select a free hole from the set of available holes
Ans To satisfy a request of size n from a list of free holes, one of the 3 policies below can be used:

1. First-fit: Allocate the first hole that is big enough.

2. Best-fit: Allocate the smallest hole that is big enough; the entire list must be searched unless it is ordered by size. Produces the smallest leftover hole.

3. Worst-fit: Allocate the largest hole; the entire list must also be searched. Produces the largest leftover hole.
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
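The three policies can be sketched over a list of hole sizes (the hole list and the 212 KB request below are illustrative, not from the text):

```python
def first_fit(holes, n):
    """Index of the first hole large enough for a request of size n."""
    for i, size in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    """Index of the smallest hole that is still large enough."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, n):
    """Index of the largest hole that is large enough."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return max(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]      # hole sizes in KB
print(first_fit(holes, 212))  # 1: the 500 KB hole is the first that fits
print(best_fit(holes, 212))   # 3: the 300 KB hole is the smallest that fits
print(worst_fit(holes, 212))  # 4: the 600 KB hole is the largest
```

Note how best-fit leaves an 88 KB hole while worst-fit leaves a 388 KB hole that is more likely to be reusable, which is why the trade-off is debated at all.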

                      ===============================================================================Q) Non-Contiguous Memory allocation

1. Paging: Paging is a memory management scheme that provides non-contiguous memory allocation.

0    P1    Block 0 (addresses 0 to 3)
4    P2    Block 1 (addresses 4 to 7)
8          Block 2 (addresses 8 to 11)
12   P1    Block 3 (addresses 12 to 15)

The logical address space of a process can be noncontiguous. Ex: P1 is allocated Block 0 and Block 3.

1. Divide physical memory into fixed-sized blocks called frames (size is a power of 2).
2. Divide logical memory into blocks of the same size called pages.
3. A logical address is divided into 2 parts:

Page number (p): the page table is indexed by page number.
Page offset (d): displacement within the page.

4. A page table is allocated to each process (a pointer to the page table is stored in the PCB of the process). The page table translates logical to physical addresses: page 0 is in frame 1, page 1 is in frame 4, etc.
5. Internal fragmentation may occur due to paging.

6. If the size of the logical address space = 2^m and the page size = 2^n, then the higher-order m-n bits of the logical address give the page number and the 'n' lower-order bits give the displacement within the page.

Ex: size of logical address space = 8 = 2^3, so m = 3;

page size = 4 = 2^2, so n = 2.

So m-n = 1 bit for the page number, to represent pages 0 and 1;

the remaining n = 2 bits give the displacement within the page.
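For these values of m and n, extracting the page number and offset is one shift and one mask (a small sketch of the bit arithmetic above):

```python
m, n = 3, 2   # logical address space 2^3 = 8 addresses, page size 2^2 = 4

def split(addr):
    page = addr >> n                 # high-order m-n bits: page number
    offset = addr & ((1 << n) - 1)   # low-order n bits: offset within the page
    return page, offset

print(split(5))  # (1, 1): address 5 (binary 101) is offset 1 within page 1
```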

                      Q) Implementation of Page Table

                      Page table can be kept as

1) A set of dedicated registers: efficient if the page table is small.

2) The page table can be kept in memory. A page-table base register (PTBR) points to the page table and a page-table length register (PTLR) indicates its size. Changing the page table requires changing only the values in these registers.
Advantage of keeping the page table in memory: a) less context-switch time.

Disadvantage: b) two memory accesses are required: one memory access to read the page table entry and another to access the required memory address.

This problem can be solved by using a special fast-lookup hardware cache called associative memory or translation look-aside buffer (TLB).

                      i Each entry in TLB consists of 2 parts

                      I) page number field

                      II) Value field or frame number field

ii. When a logical address is generated by the CPU, its page number is presented to the TLB. The page number is compared with all the entries of the TLB simultaneously.

If the page number is found in the TLB, its frame number is immediately available: a TLB hit. If the page number is not found, the page table is checked: a TLB miss.

By using a TLB, the search is fast, but TLB hardware is expensive, so the TLB size is kept small. Hence the TLB contains only a few frequently used page table entries.
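The hit/miss flow can be sketched as a small cache in front of the page table (the page-table contents and the simple oldest-entry eviction are illustrative assumptions, not from the text):

```python
page_table = {0: 1, 1: 4, 2: 3, 3: 7}   # page -> frame (assumed mapping)

class TLB:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}               # page number field -> frame number field

    def lookup(self, page):
        return self.entries.get(page)   # frame number on a hit, None on a miss

    def insert(self, page, frame):
        if len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))  # evict the oldest entry
        self.entries[page] = frame

tlb = TLB(capacity=2)                   # small, like a real TLB

def translate(page):
    frame = tlb.lookup(page)
    if frame is None:                   # TLB miss: fall back to the page table
        frame = page_table[page]
        tlb.insert(page, frame)         # cache it for next time
    return frame                        # subsequent references hit in the TLB

print(translate(0), translate(0))  # 1 1  (first is a miss, second is a hit)
```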

Q) Protection in Paging
A valid/invalid bit is attached to each entry in the page table. The OS sets this bit for each page to allow or disallow access to the page. When the bit is set to invalid, the page is not in the process's logical address space, so an access generates a trap to the OS.

                      Q) Shared pages in Paging

                      Reentrant code is the code that never changes during execution Reentrant code can be shared

1. One copy of read-only (reentrant) code is shared among processes (i.e. text editors, compilers, window systems). Ex: ed1, ed2, ed3 is shared among processes P1 and P2.

                      2 Shared code must appear in same location in the logical address space of all processes

                      Each process keeps a separate copy of the private code and data EX data1 and data2

                      The pages for the private code and data can appear anywhere in the logical address space

                      ==============================================================================Q) Segmentation

Segmentation is a memory management scheme that supports the user's view of memory.

                      When the user program is compiled the compiler generates segments like

                      1) The code segment

                      2) Global variables segment

                      3) Heap memory segment

                      4) Stack segment etc

Each entry in the segment table has a

segment base (starting physical address of the segment) and a segment limit (specifies the length of the segment).

                      Logical address is divided into 2 parts

                      segment-number offset

                      Segment table is indexed by segment number

                      Segment-table base register (STBR) stores the location of segment table in main memory

                      Segment-table length register (STLR) stores number of segments used by a program

The segment number (say 's') is used to find the entry in the segment table. The required entry is 's' locations away from the beginning of the segment table. Once the required entry in the segment table is found, the offset ('d') is compared with the limit:

If (offset < limit) then the offset is added to the base entry to generate the physical address; else a trap to the OS is generated.
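The limit check and base addition can be sketched directly (the segment bases and limits below are illustrative values, not from the text):

```python
# segment table: entry s holds (base, limit)
segment_table = [
    (1400, 1000),  # segment 0 lives at physical addresses 1400..2399
    (6300, 400),   # segment 1
    (4300, 400),   # segment 2
]

def translate(s, d):
    base, limit = segment_table[s]
    if d >= limit:                 # offset beyond the end of the segment
        raise MemoryError("trap: addressing error")
    return base + d                # physical address

print(translate(2, 53))  # 4353: byte 53 of segment 2
```

A reference such as offset 1222 into segment 0 (limit 1000) would raise the trap instead of returning an address.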

Q) Shared Segments
Code sharing occurs at the segment level. Shared segments must have the same segment number.

Allocation: a dynamic storage-allocation problem; use best fit/first fit, which may cause external fragmentation.

Protection: protection bits (read/write/execute privileges) are associated with segments. Keeping an array in a separate segment lets the hardware check for illegal array indexes.

===============================================================================
Q) Virtual memory

Ans In paging and segmentation a program will execute only if the entire process is in main memory. But here, with virtual memory, only part of the program needs to be in memory for execution.

1 A program larger than the available main memory can still run, as only part of the program is loaded into main memory. The logical address space can therefore be much larger than the physical address space.

                      2 Allows address spaces to be shared by several processes

                      Virtual memory can be implemented via Demand paging Demand segmentation

The large blank space between heap and stack is part of the virtual address space. Virtual memory allows files and memory to be shared by 2 or more processes through page sharing, as shown in figure (b). Figure (a) depicts virtual memory that is larger than physical memory.

                      ==============================================================================

Q) Demand paging
Demand paging is the technique of loading pages from disk to main memory only when a page is needed. Hence, using a smaller amount of physical memory, we get a faster response.

                      Demand paging is similar to PAGING + SWAPPING

                      Instead of swapping a whole process the pager brings only those pages needed into memory

                      To distinguish between pages that are in memory and pages that are on the disk validinvalid bit is used

                      Validinvalid bit is attached to each entry in page table When this bit is set to valid the page is legal and in memory When this bit is set to invalid either the page is illegal or the page is in disk

When the CPU generates a logical address whose page table entry is set to invalid, a page fault occurs; if the page is legal, the required page is on disk. A section of the hard disk called swap space is used to hold pages that are not present in memory.

                      Q) Procedure for Handling a Page Fault

                      1) CPU generates a logical address and if the validinvalid bit for the page that has this logical address is set to invalid page fault occurs

                      2) Page fault causes a trap to operating system

3) Check whether the logical address is within the logical address space of the process, i.e. PTBR and PTLR are checked.
if (the required logical address is not within the logical address space of the process)

    terminate the process
else

    the page is legal but not in memory; the page is on disk

4) To bring the required page into memory: find a free main memory frame, map the memory address to a disk block, then fetch the disk block and load it into the free frame.
5) When the required page is brought into memory, update the page table to indicate the page is in memory.
6) Restart the instruction interrupted by the illegal-address trap. The process continues as if the page had always been in memory.
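The handling steps can be sketched as a toy fault handler (all names here, such as page_table, free_frames and disk, are illustrative; the "disk" dict merely stands in for swap space):

```python
from collections import deque

page_table = {}                 # page -> frame; an entry exists only for valid pages
free_frames = deque([0, 1, 2])  # frames not yet allocated
memory = {}                     # frame -> page contents
disk = {p: "contents of page %d" % p for p in range(8)}  # stand-in for swap space

def access(page):
    if page in page_table:             # valid bit set: no fault
        return page_table[page]
    # page fault: trap to the OS (steps 2-3)
    if page not in disk:               # outside the logical address space
        raise MemoryError("trap: illegal address, terminate process")
    frame = free_frames.popleft()      # step 4: find a free frame...
    memory[frame] = disk[page]         # ...and load the disk block into it
    page_table[page] = frame           # step 5: mark the page as in memory
    return page_table[page]            # step 6: restart the instruction

print(access(0), access(1), access(0))  # 0 1 0  (second access to page 0 is a hit)
```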

Q) What is pure demand paging?
Ans If no page belonging to the executing process is in main memory, the process faults for every page it needs. Page faults occur until every page that is needed is in memory. This scheme is known as pure demand paging: NEVER BRING A PAGE IN UNTIL IT IS REQUIRED.

Q) Page Replacement
1. Find the location of the desired page on the disk.
2. Find a free frame:
   i) If there is a free frame, use it.
   ii) If there is no free frame, use a page replacement algorithm to select a victim page in memory that is not really in use.
   iii) If the modify bit associated with the victim frame is set to 1, the page is written to disk. Change the valid/invalid bit of the victim page's page table entry to invalid, to indicate the victim page is no longer in memory; else (modify bit 0) there is no need to write it to disk.
3. Read the desired page into the victim frame and change the frame and page tables.
4. Restart the user process.

=================================================================================================
Q) Page Replacement Algorithms
Goal: produce a low page-fault rate. The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

                      1 First in First out (FIFO) Page replacement algorithm Uses the time when a page was brought into memory The page that was brought first into memory is replaced ie oldest page is chosen for replacement We create a FIFO queue to hold all pages in memory

                      When a page is brought into memory we insert it at the tail of the queue and we replace the page at the head of the queue

FIFO: 15 page faults.
A problem with FIFO is Belady's anomaly. For the reference string

1 2 3 4 1 2 5 1 2 3 4 5

when the number of frames allocated to the process is 3, the number of page faults = 9 under FIFO replacement; when the number of frames is increased to 4, the number of page faults = 10. As the number of frames allocated to a process increases, page faults should decrease, but with FIFO replacement page faults can increase.

2. Optimal Page Replacement Algorithm: Replace the page that will not be used for the longest period of time. It is used for measuring how well other algorithms perform. It is not practical to implement the OPT algorithm, as it is not possible to know what the future references will be.
ex:

                      Optimal page replacement 9 page faults
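The FIFO behaviour, including Belady's anomaly, can be reproduced with a short simulation; the reference string 1,2,3,4,1,2,5,1,2,3,4,5 is the classic example:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for a reference string under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.remove(queue.popleft())  # evict the oldest resident page
            frames.add(page)
            queue.append(page)                  # newest page goes to the tail
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10: more frames, MORE faults (Belady's anomaly)
```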

                      3 Least Recently used (LRU) Page replacement algorithm Replace the page that has not been used for longest period of time LRU makes use of time of last use of the page Ex

                      Number of page faults = 12

                      LRU requires hardware assistance to determine the time of last use of the page We can use 1) Stack or 2) Counter to implement LRU page replacement algorithm

1) Stack implementation: keep a stack of page numbers in a doubly linked list form. When a page is referenced, it is moved to the top of the stack. This implementation requires 6 pointers to be changed.

2) Counter implementation: a time-of-use field is added to each page table entry. The CPU maintains a counter that is incremented on every memory reference, and the counter value is copied into the time-of-use field of the referenced page.
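Both implementations boil down to evicting the least recently used page. A compact sketch uses an ordered dictionary as the recency "stack" (the reference string below is the common textbook example, which also yields 12 faults with 3 frames):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults under LRU; dict order tracks recency of use."""
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # referenced: now most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # 12
```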

Disadvantages of the LRU page replacement algorithm: updating the counter or stack must be done for every memory reference, HENCE LRU IS SLOW, and the implementation requires hardware assistance.
==============================================================================
Q) LRU approximation page replacement
1) Using a reference bit
2) Additional-reference-bits algorithm
3) Second-chance algorithm
4) Enhanced second-chance algorithm
5) Counting-based page replacement

This has two schemes: 1) Least Frequently Used (LFU)  2) Most Frequently Used (MFU)

6) Page-buffering algorithms

Q) What is Thrashing?
Ans A process is thrashing if it is spending more time paging than executing. Thrashing may occur when global (or) local page replacement is used, as described below. The solution to thrashing is the Working Set strategy: if we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know how many frames a process needs, we use the locality model of process execution.

Here a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes locality.
The Working Set model uses a working set and a parameter ∆ = the working-set window. The set of pages in the most recent ∆ page references is the working set.
Let WSSi be the working-set size for process Pi, i.e. Pi needs WSSi frames.
Let D be the total demand for frames, so D = Σ WSSi,

and let m be the total number of available frames.

If D > m, thrashing occurs because some processes will not have enough frames. So:
if (D > m)
    the OS selects a process to suspend
else
    the OS may initiate another process.
The working set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.
========================================================================
Q) How does thrashing occur in global and local page replacement?
Ans In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, causing a page fault for some other process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more processes queue up for the paging device,

the READY queue empties and the CPU becomes idle. So the OS introduces new processes, since the CPU is free. This further increases the number of page faults.

In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs only the frames allocated to that process are replaced. When a process starts thrashing, other processes are not affected. But since processes will be in the queue for the paging device most of the time, the average service time for a page fault increases. Hence the effective access time increases even for a process that is not thrashing.

Q) Page Table Structure (or) Structure of the page table in memory

                      Hierarchical Paging Hashed Page Tables Inverted Page Tables

Hierarchical Paging: If the page table is large, we break it up into multiple page tables, i.e. the page table itself is paged. A simple technique is a two-level page table.
Two-Level Paging Example: A logical address (on a 32-bit machine with 4K page size) is divided into:
1. a page number consisting of 20 bits
2. a page offset consisting of 12 bits
Since the page table is paged, the page number is further divided into:
1. a 10-bit page number
2. a 10-bit page offset
Thus a logical address is as follows:

  page number   | page offset
   p1  |  p2    |     d
   10  |  10    |    12

where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table. Two-Level Page-Table Scheme:

                      Address-Translation Scheme Address-translation scheme for a two-level 32-bit paging architecture
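Extracting p1, p2 and d from a 32-bit address is just two shifts and two masks (the sample address is illustrative):

```python
def split_two_level(addr):
    p1 = addr >> 22              # top 10 bits: index into the outer page table
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: index within the inner page table
    d  = addr & 0xFFF            # low 12 bits: offset within the 4K page
    return p1, p2, d

print(split_two_level(0x00403004))  # (1, 3, 4)
```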

Hashed Page Tables (or) Hash table: Hashed page tables are common in address spaces > 32 bits. Each entry in the hash table contains a linked list of elements that hash to the same location. The hash table is indexed by hash value. Each element consists of 3 fields:

1. Page number  2. Frame number  3. Pointer to the next element in the linked list

The algorithm works as follows: the page number is passed to the hash function to get a hash value. Locating the hash value in the hash table is easy, as the hash table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to get the physical address. If there is no match, the next entries in the linked list are searched.
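A minimal sketch of the chained table (the bucket count, pages and frames are illustrative, and a real implementation would hash more carefully than a simple modulus):

```python
NBUCKETS = 16

hash_table = [[] for _ in range(NBUCKETS)]   # each bucket: chained list of (page, frame)

def insert(page, frame):
    hash_table[page % NBUCKETS].append((page, frame))

def lookup(page):
    for p, frame in hash_table[page % NBUCKETS]:  # walk the chain on collision
        if p == page:
            return frame
    return None   # not mapped: page fault

insert(3, 11)
insert(19, 7)      # 19 % 16 == 3: collides with page 3, chained in the same bucket
print(lookup(19))  # 7
```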

                      Inverted Page Table

An inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table has:

Process-id, Page number

The logical address also has 3 parts: Process-id, Page number,

Offset. The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, frame number i is combined with the offset to get the physical address. If there is no match, the address is illegal.
Advantages and disadvantages:

It decreases the memory needed to store the page tables but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to

limit the search to one, or at most a few, page-table entries. Inverted Page Table Architecture:
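The frame-indexed lookup can be sketched as a linear search over the table (the process ids, pages and 4K page size below are illustrative values):

```python
# one entry per physical frame: (process id, page number)
inverted = [("P1", 0), ("P2", 3), ("P1", 2), (None, None)]  # frame 3 is free

def translate(pid, page, offset, page_size=4096):
    for frame, entry in enumerate(inverted):     # search over all frames
        if entry == (pid, page):
            return frame * page_size + offset    # physical address
    raise MemoryError("trap: illegal address")

print(translate("P1", 2, 100))  # 2*4096 + 100 = 8292
```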

                      Unit-IV CPU Scheduling

Basic concepts: To maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes will be in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns the CPU to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle: Process execution begins with a CPU burst that is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

An I/O-bound program has many short CPU bursts. A CPU-bound program has a few long CPU bursts.

Schedulers: modules in the OS that make scheduling decisions.
1. Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
2. Medium-term scheduler: selects which processes should be swapped in/out of memory.
3. Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (or CPU) scheduler.

                      Dispatcher gives CPU control to the process selected by short term scheduler This function involves following

                      1 switching context2 switching to user mode3 jumping to proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency. This must be kept small.
Ready Queue: The ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of processes.

                      Pre-emptive Scheduling

                      CPU scheduling decisions take place under one of four conditions

1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state, for example in response to an interrupt
3. When a process switches from the waiting state to the ready state, say at completion of I/O
4. When a process terminates

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one.

If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive or cooperative. In a non-preemptive system, a process runs until it itself gives up the CPU, i.e. when it is waiting for I/O or when the process completes. Otherwise the system is said to be pre-emptive.

Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

                      Note that pre-emptive scheduling can cause problems when two processes share data because one process may get interrupted in the middle of updating shared data structures

                      Preemption can also be a problem if the kernel is busy implementing a system call ( eg updating critical kernel data structures ) when the preemption occurs Solution wait until the system call has either completed or blocked before allowing the pre-emption But this solution is problematic for real-time systems

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. But this should be done only for very short pieces of code that take little CPU time.

Scheduling criteria used to decide among CPU scheduling algorithms:
1. CPU utilization: in a real system, CPU utilization ranges from about 40% to 90%.
2. Throughput: number of processes that are completed per unit time.
3. Turnaround time: time when the process completed minus time when the process was submitted; i.e. it includes waiting time and burst time.
4. Waiting time: amount of time that a process spends waiting for the CPU.
5. Response time: time when the first response came minus time when the process was submitted.

Optimization criteria: max CPU utilization, max throughput,

min turnaround time, min waiting time, min response time.

CPU Scheduling Algorithms
1) First-Come, First-Served (FCFS) Scheduling: The FCFS CPU scheduling algorithm is non-pre-emptive. Processes are scheduled in the order they arrived. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages:
1. The average waiting time is quite long.
2. Convoy effect (it occurs as small processes wait in the queue for a big process to leave the CPU).

Example:
Process   Burst Time
P1        24
P2        3
P3        3

Suppose that the processes arrive in the order P1, P2, P3.
The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

Waiting time for P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.

2) Shortest-Job-First (SJF) Scheduling: Schedule the process with the shortest burst time. If the CPU burst times of two processes are the same, then FCFS is used.
Advantages: the average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either pre-emptive or non-pre-emptive:
1. Non-pre-emptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Pre-emptive SJF (or) Shortest-Remaining-Time-First (SRTF): if the newly arrived process is shorter than what remains of the currently executing process, pre-empt the currently executing process.

Example:
Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

SJF (non-preemptive) Gantt chart:

| P1 | P3 | P2 | P4 |
0    7    8    12   16

Average waiting time = [0 + (8-2) + (7-4) + (12-5)] / 4 = 4
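Both averages can be checked with a small simulation (a sketch; process tuples are (name, arrival, burst), matching the examples in the text):

```python
def fcfs_waiting_times(bursts):
    """Waiting times when processes run in arrival order (all arrive at time 0)."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # each process waits for everything ahead of it
        clock += burst
    return waits

def sjf_nonpreemptive(procs):
    """procs: list of (name, arrival, burst). Returns (run order, average wait)."""
    pending, clock, waits, order = list(procs), 0, {}, []
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                          # CPU idle until the next arrival
            clock = min(p[1] for p in pending)
            continue
        name, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))  # shortest burst; FCFS on ties
        waits[name] = clock - arrival
        clock += burst
        pending.remove((name, arrival, burst))
        order.append(name)
    return order, sum(waits.values()) / len(waits)

print(fcfs_waiting_times([24, 3, 3]))   # [0, 24, 27], average 17
print(sjf_nonpreemptive([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
# (['P1', 'P3', 'P2', 'P4'], 4.0)
```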

                      Example of Preemptive SJF

Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

SJF (preemptive):

| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16

Average waiting time = (9 + 1 + 0 + 2) / 4 = 3

3) Priority Scheduling: A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority). If two processes have equal priority, then FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. preemptive 2. non-preemptive.

Problem ≡ Starvation: low-priority processes may never execute.
Solution ≡ Aging: as time progresses, increase the priority of the process.

4) Round Robin (RR): Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance: 1. If q is large, RR behaves like FIFO. 2. If q is small, q must still be large with respect to the context-switch time, otherwise the overhead is too high.

Example of RR with time quantum = 4:
Process   Burst Time
P1        24
P2        3
P3        3

The Gantt chart is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Average waiting time = [(30-24) + 4 + 7] / 3 = 17/3 = 5.66

5) Multilevel Queue Scheduling: The ready queue is partitioned into separate queues. For example, the ready queue is divided into 2 queues:

                      1 foreground (interactive) queue2 background (batch) queue

Each queue has its own scheduling algorithm. For example, the Round Robin scheduling algorithm can be used in the foreground queue and

the FCFS scheduling algorithm in the background queue. Scheduling must also be done between the queues. This can be done in 2 ways:
1. Fixed priority scheduling:

The foreground queue has the highest priority. All the processes in the foreground queue must be completed and the queue must be empty; only then are the processes in the background queue given the CPU, as shown in the figure below. Disadvantage: starvation.

2. Time slice: each queue gets a certain amount of CPU time which it can schedule amongst its processes, e.g. 80% to the foreground queue in RR and 20% to the background queue in FCFS.

6) Multilevel Feedback Queue Scheduling: A process can move between queues. The idea is to separate processes according to their CPU bursts. I/O-bound and interactive processes will be in the highest-priority queue. If a process takes more CPU time, it is moved to a lower-priority queue. If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues Q0, Q1, Q2:

1. Q0: time quantum 8 milliseconds  2. Q1: time quantum 16 milliseconds  3. Q2: FCFS

Scheduling:

1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 milliseconds. If it does not finish in 8 milliseconds, the process is moved to queue Q1.
2. At Q1 the process receives 16 additional milliseconds. If it still does not complete, it is moved to queue Q2.

Q) Synchronization
Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of process synchronization is the Producer-Consumer problem (or) Bounded-Buffer problem.
Producer-Consumer problem: make sure that the producer process does not add data when the buffer is full, and that the consumer process does not take data when the buffer is empty.

The code for the producer process can be written as follows:

while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ;  /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed from the buffer. in is an index that always points to the first empty slot after the last filled slot; out is an index that always points to the first filled slot, at the head of the circular queue.

The code for the consumer process is as shown below:

while (true) {
    while (counter == 0)
        ;  /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The above producer and consumer code produces a RACE CONDITION. A race condition occurs when the outcome of execution of several co-operating processes depends on the order in which their accesses to shared data take place. For example, if counter = 5:

Given counter = 5, if the producer produces one item and the consumer consumes one item, then the correct value of counter is 5. But here counter = 4, i.e. an inconsistent result. If T5 is executed before T4, then counter = 6, again an inconsistent result.

Solution to avoid the race condition: make sure only one process at a time updates the shared variable (here, counter).
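The inconsistent result can be reproduced by writing out the three machine-level steps of counter++ and counter-- and interleaving them by hand (a sketch mirroring the T0-T5 ordering discussed above):

```python
counter = 5  # shared variable; after one produce and one consume it should still be 5

# counter++ and counter-- each compile to three steps: load, modify, store.
register1 = counter        # T0 producer: register1 = counter       (5)
register2 = counter        # T1 consumer: register2 = counter       (5)
register1 = register1 + 1  # T2 producer: register1 = register1 + 1 (6)
register2 = register2 - 1  # T3 consumer: register2 = register2 - 1 (4)
counter = register1        # T4 producer: counter = register1       (6)
counter = register2        # T5 consumer: counter = register2       (4)

print(counter)  # 4, an inconsistent result (running T5 before T4 would give 6)
```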

                      Q) Critical section problemAns Each process has a segment of code called critical section Critical section code is the code that contains common variables of co-operating processesTo avoid data inconsistencyTwo processes must not execute critical section code at same timeFor this to happen1 Each process must request permission to enter its critical sectionmdashEntry section2 End of critical section ndashExit section3 Rest of the code after critical section ndash Remainder section

A solution to the critical section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes not in their remainder sections take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: every process must eventually be allowed to enter its critical section; no process may wait indefinitely.

Q) Peterson's solution for the critical section problem (software-based solution)
Ans: The two processes share an integer turn (whose turn it is to defer) and a boolean array flag (whether a process is ready to enter its critical section). Process 1 runs:

    do {
        flag[1] = TRUE; turn = 2;
        while (flag[2] && turn == 2)
            ;  /* wait */
        /* critical section */
        flag[1] = FALSE;
        /* remainder section */
    } while (TRUE);

and Process 2 runs the symmetric code with flag[2], turn = 1, and the test (flag[1] && turn == 1). A sample interleaving:

    Time  Process 1                         Process 2
    T0    flag[1] = TRUE; turn = 2;         flag[2] = TRUE;
    T1                                      turn = 1;
    T2    while (flag[2] && turn == 2);     while (flag[1] && turn == 1);
          turn is now 1, so the loop        loops continuously until flag[1]
          exits and P1 proceeds             becomes FALSE
    T3    enters critical section           (still waiting)
    T4    flag[1] = FALSE;                  enters critical section
    T5    enters remainder section          flag[2] = FALSE; enters remainder section
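The trace above can be exercised in code. A minimal simulation of Peterson's algorithm, with Python threads standing in for the two processes (relying on the interpreter's per-statement atomicity and sequentially consistent memory is an assumption; real hardware may reorder these accesses, which is why hardware support is discussed later):

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so busy-waits stay short

flag = [False, False]   # flag[i]: process i is ready to enter
turn = 0                # which process should defer
count = 0               # shared variable protected by Peterson's algorithm
ITERS = 500

def process(i):
    global turn, count
    j = 1 - i
    for _ in range(ITERS):
        flag[i] = True              # entry section: announce intent
        turn = j                    # give priority to the other process
        while flag[j] and turn == j:
            pass                    # busy wait
        count += 1                  # critical section (not atomic by itself)
        flag[i] = False             # exit section

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(count)   # 1000: mutual exclusion kept every increment intact
```

Without the entry/exit sections, the unsynchronized `count += 1` could lose updates; with them, the final value is always 2 * ITERS.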

10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

                      File Attributes

1. Name - name of the file, for user understanding.
2. Identifier - used by the OS to identify a file (e.g., inode number).
3. Type - text files, executable files, binary files, etc.
4. Location - location of the file on the hard drive.
5. Size - current size of the file.
6. Protection - controls who can read, write, or execute the file.
7. Time, date, and user identification - record when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. The directory structure is kept on disk.

                      File Operations

The file Abstract Data Type supports many common operations:
1. Creating a file - find free space on disk, add an entry to the directory.
2. Writing a file - write data at the current file-position pointer location and update the pointer.
3. Reading a file - read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek) - change the pointer location to a given value.
5. Deleting a file - free the space allocated to the file and remove its entry from the directory.
6. Truncating a file - delete the data and update the file size.
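These operations map directly onto OS-level system calls. A short sketch using Python's low-level os interface (the file name and directory are arbitrary):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# Creating a file: the OS finds free space and adds a directory entry
fd = os.open(path, os.O_CREAT | os.O_RDWR)

# Writing: data goes at the file-position pointer, which then advances
os.write(fd, b"hello world")

# Repositioning (file seek): move the pointer to a given value
os.lseek(fd, 6, os.SEEK_SET)

# Reading: bytes come from the current pointer position
data = os.read(fd, 5)
print(data)                       # b'world'

# Truncating: keep the attributes, cut the data, update the size
os.ftruncate(fd, 5)
os.close(fd)
print(os.path.getsize(path))      # 5

# Deleting: free the space and remove the directory entry
os.remove(path)
```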

                      Open File Table

Most OSes require that files be opened before access and closed after access. Information about all files currently open in the system is stored in a system-wide open file table maintained by the OS; information about the files opened by a particular process is stored in that process's per-process open file table. The open file table has the following information:

1. File pointer - records the current position in the file for the next read or write.
2. File-open count - how many times the file is currently open (possibly simultaneously by different processes) and not yet closed. When this counter reaches zero, the file's entry can be removed from the table.
3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With a mandatory lock, the OS ensures that locks are acquired and released appropriately.
4. With an advisory lock, software developers must make sure that locks are acquired and released appropriately.
5. UNIX uses advisory locks; Windows uses mandatory locks.

                      File Types

Implementing file types: a common technique is to include the type as part of the file name. The name is split into 2 parts:

1. File name 2. Extension

The file name and extension are separated by a period. The extension indicates the type of the file and the type of operations that can be done on it. For example, when a user opens Microsoft Word and chooses File > Open, the user may specify the file name without the extension; the application looks for a file with the given name and the extension it expects. Hence extensions are hints used by applications and are not enforced by the operating system itself. A UNIX system uses a crude magic number stored at the beginning of the file to indicate the file's type. In the Mac OS X operating system, each file carries a creator attribute: a file produced by a word processor has the word processor's name as its creator, so when the user double-clicks the file, that word processor opens it. Some common file types are as shown below.

File Structure: Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and where the first instruction is located.

If the OS supports many file formats, its size and complexity increase. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, including finding the first executable statement).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                      Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units such as a single byte, a data-record size, or a structure size. The number of logical units which fit into one physical block determines the packing. For example, if a student file is organized internally as records of 64 bytes each, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e., 8 records can be stored per block.

Internal fragmentation may occur due to this, when the last block of a file is only partially filled.
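The packing arithmetic above is easy to check (the record and block sizes are the example's, not fixed by any OS):

```python
BLOCK_SIZE = 512    # physical block size in bytes
RECORD_SIZE = 64    # logical record size in bytes

packing = BLOCK_SIZE // RECORD_SIZE           # records that fit in one block
wasted = BLOCK_SIZE - packing * RECORD_SIZE   # internal fragmentation per block
print(packing, wasted)                        # 8 0

# With a record size that does not divide the block evenly,
# part of every full block is wasted:
packing2 = BLOCK_SIZE // 100
print(packing2, BLOCK_SIZE - packing2 * 100)  # 5 12
```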

                      102 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: records are read and written in order, one record after another; we cannot randomly access an arbitrary record. A sequential access file emulates magnetic tape operation and generally supports the following operations:
- read next: read a record and advance to the next position
- write next: write a record and advance to the next position
- rewind
- skip n records: may or may not be supported; n may be limited to positive numbers, or to +/- 1

2. Direct access (or relative access): a file is made up of fixed-length records, so that we can randomly access any record.

                      No restriction on order of reading or writing

                      Direct access files are useful when we need to access large amounts of data randomly

                      Ex database

Operations supported (n is a relative block number, i.e., relative to the beginning of the file):

1. read n - read record number n
2. write n - write record number n
3. jump to record n - n could be 0 or the end of the file
4. query current record - used to return back to this record later
5. Sequential access can be easily emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct access method. We construct an index for the file containing pointers to the various blocks; to search the file, we first search the index and then use the pointer to access the file directly.

If a student file has 120,000 records sorted by student id, we can create an index consisting of the first student id in each block. If 64 records can be stored per block, then the number of blocks needed to store 120,000 records = 120000/64 = 1875, or roughly 2000 blocks, so the index has about 2000 entries. The index is kept in main memory so that searching becomes faster.

                      If the index itself becomes large we create an index for index file (Ex B-tree)
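The index lookup described above can be sketched as follows (the student ids and the tiny block layout are invented for illustration):

```python
from bisect import bisect_right

# blocks[i] holds the sorted student ids stored in disk block i
# (toy data; a real block would hold 64 records).
blocks = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]

# The index holds the first id of each block and lives in main memory.
index = [b[0] for b in blocks]   # [10, 40, 70]

def find(student_id):
    # Choose the last block whose first id <= student_id ...
    b = bisect_right(index, student_id) - 1
    if b < 0:
        return None
    # ... then search inside that block (one disk read).
    return student_id if student_id in blocks[b] else None

print(find(50))   # 50
print(find(55))   # None
```

The in-memory binary search replaces a scan of all data blocks; only one block is read from disk per lookup.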

                      Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                      Three major methods of allocating disk space are

1. Contiguous
2. Linked
3. Indexed

                      1 Contiguous allocation Each file occupies set of contiguous blocks on disk

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1. The directory entry for each file indicates the address of the starting block (b) and the length (n).

                      Advantages

1. Direct access to block i is easy: its disk address is simply b + i. Since the file is stored sequentially, sequential access is also easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

                      Disadvantages

1. It is difficult to find space for a new file (or to add more data to an existing file). First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.
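The two hole-selection strategies named above can be sketched as follows (the hole list is invented for illustration):

```python
# Free holes on disk, as (start_block, length) pairs.
holes = [(0, 5), (20, 12), (50, 7), (80, 30)]

def first_fit(holes, need):
    # Take the first hole that is large enough.
    for start, length in holes:
        if length >= need:
            return start
    return None

def best_fit(holes, need):
    # Take the smallest hole that is still large enough.
    fits = [(length, start) for start, length in holes if length >= need]
    return min(fits)[1] if fits else None

print(first_fit(holes, 6))   # 20 (the 12-block hole, found first)
print(best_fit(holes, 6))    # 50 (the 7-block hole wastes the least)
```

First fit is faster (it stops at the first match); best fit scans every hole but leaves smaller leftover fragments.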

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to it. Solution: copy all the files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire space may not be used by the file at the beginning, which leads to wastage of space.

If the size of the file is not known in advance, then: (i) if we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This is possible as long as such a hole exists, but it is time consuming. (ii) If we overestimate the amount of space needed, much of the space goes unused, which leads to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme. A chunk of contiguous blocks is allocated at first; if the file needs more blocks, another chunk of contiguous space, known as an extent, is added. The directory entry then records the location and block count of the first chunk plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2. Linked Allocation: Each file is a linked list of disk blocks, and the blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0.

To add more data to an existing file, a free block is found, the data is written to it, and the block is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. It is easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is more.
2. The number of disk seeks required is more.
3. Pointers consume a noticeable percentage of the file's disk space.
4. Inefficient for direct access: to access the i-th block, i disk reads are required.
5. Not reliable: pointers may be lost or damaged.

The solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes the logical-to-physical block mapping simple, improves disk throughput by reducing disk accesses, reduces the space wasted on pointers (fewer pointers are needed), and simplifies free-list management. Disadvantage: internal fragmentation - more space is wasted when a cluster is only partially full.

The solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file allocation table (FAT). A section of disk at the beginning of each volume contains the FAT, which has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                      Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Random access time is improved.

Disadvantage: the number of disk seeks is more. The disk head must move to the start of the volume to read the FAT and find the location of a block, and then move to the location of the block itself.
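Reading a file under FAT amounts to chasing table entries. A sketch, reusing the block numbers from the linked-allocation example (the EOF marker value is an assumption; real FAT variants use system-specific sentinel values):

```python
EOF = -1   # stand-in for the special end-of-file table value

# fat[b] holds the number of the block that follows block b in its file;
# 0 marks an unused block.
fat = [0] * 32
fat[9], fat[16], fat[1], fat[10], fat[25] = 16, 1, 10, 25, EOF

def file_blocks(start):
    # Follow the chain from the file's starting block (kept in the directory).
    chain, b = [], start
    while b != EOF:
        chain.append(b)
        b = fat[b]
    return chain

print(file_blocks(9))   # [9, 16, 1, 10, 25]
```

Because the whole table sits in one place (and is usually cached in memory), random access only needs table lookups rather than reading every intervening data block.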

                      3 Indexed allocation

Each file has its own index block, which is an array of disk-block addresses. The i-th entry in the index block points to the i-th block of the file.

                      The directory contains the address of the index block

                      When the file is created all pointers in the index block are set to nil

                      To add more data to existing file free block is found and data is written to it The address of block is put in the index block

                      Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

                      Disadvantage

1. Seek time is more.
2. The number of disk seeks required is more.
3. Not reliable.
4. The index block should be kept in memory. If this memory is not available, we must read the index block and then the desired data block, i.e., 2 disk accesses are required. With chained index blocks, accessing a block near the end of the file may require reading all the preceding index blocks first.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block must be allocated just to store those one or two pointers.

                      Various mechanisms followed for size of index block

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header with the name of the file and a set of the first 100 disk-block addresses; for large files, the last address points to another index block.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

                      3 combined scheme

3. Combined scheme: for example, say there are 15 pointers of the index block in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing the addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block, as shown below.
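The reach of this combined scheme is easy to compute. Assuming 4 KB blocks and 4-byte block addresses (illustrative sizes, not mandated by the scheme):

```python
BLOCK = 4096    # bytes per block (assumed)
PTR = 4         # bytes per block address (assumed)
per_block = BLOCK // PTR     # pointers per index block: 1024

direct = 12                  # 12 direct pointers in the inode
single = per_block           # blocks reachable via the single indirect block
double = per_block ** 2      # via the double indirect block
triple = per_block ** 3      # via the triple indirect block

max_blocks = direct + single + double + triple
print(max_blocks)            # 1074791436 addressable blocks
print(max_blocks * BLOCK // 2**30)   # maximum file size in GiB (about 4 TiB)
```

Almost all of the capacity comes from the triple indirect level, yet small files (12 blocks or fewer) need no indirect blocks at all - the scheme favors the common case of small files.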

                      Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                      The free space list can be implemented as

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (bit map): each block is represented by one bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free: the free-space bit map would be 001111001000...
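Scanning that bit map can be sketched as follows (the bitmap string is the example above; a real implementation would work on machine words, not characters):

```python
bitmap = "001111001000"   # bit i is 1 when block i is free

# Enumerate the free blocks.
free = [i for i, bit in enumerate(bitmap) if bit == "1"]
print(free)               # [2, 3, 4, 5, 8]

# Allocating a block: find the first 1 bit and clear it.
first_free = bitmap.index("1")
bitmap = bitmap[:first_free] + "0" + bitmap[first_free + 1:]
print(first_free, bitmap)  # 2 000111001000
```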

Advantages: simple, and it is easy to find contiguous free blocks.

                      Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a large disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.

2. Linked list: the free-space list is implemented by linking all the free disk blocks together. A pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free: we keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: contiguous space cannot be found easily; traversing the free-space list requires reading each block, which costs substantial I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last one contains the addresses of another n free blocks, and so on.
Advantage: the addresses of a large number of free blocks can be found quickly.
4. Counting: the free-space list keeps, for each run of free blocks, the address of the first free block and the number n of contiguous free blocks that follow it. Each entry in the free-space list thus consists of a disk address and a count. Each entry requires more space, and the list is shorter only when counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

5. Space maps: ZFS divides the disk space into chunks of manageable sizes called metaslabs, and each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory, updates it, and finally updates the free-space list on disk.

Synchronization Hardware: the hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. the TestAndSet() hardware instruction
2. the Swap() instruction

1. The TestAndSet() instruction atomically reads a boolean variable, sets it to TRUE, and returns the old value:

    boolean TestAndSet(boolean *target) {
        boolean rv = *target;
        *target = TRUE;
        return rv;
    }

Solution using TestAndSet: the shared variable lock is initialized to FALSE, and each process executes:

    do {
        while (TestAndSet(&lock))
            ;  /* do nothing */
        /* critical section */
        lock = FALSE;
        /* remainder section */
    } while (TRUE);
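This spin lock can be simulated in Python. A threading.Lock inside test_and_set stands in for the hardware's atomicity - an assumption, since the real TestAndSet is a single uninterruptible instruction:

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so spinning stays short

lock = False                  # the shared lock variable, initially FALSE
_atomic = threading.Lock()    # emulates the instruction's atomicity

def test_and_set():
    # Atomically: read lock, set it to True, return the old value.
    global lock
    with _atomic:
        old, lock = lock, True
        return old

count = 0
def worker():
    global count, lock
    for _ in range(1000):
        while test_and_set():     # spin until we observe FALSE
            pass
        count += 1                # critical section
        lock = False              # release the lock

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(count)   # 2000: no increment was lost
```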

2. The Swap() instruction atomically exchanges the contents of two boolean variables:

    void Swap(boolean *a, boolean *b) {
        boolean temp = *a;
        *a = *b;
        *b = temp;
    }

Solution using Swap: a shared boolean variable lock is initialized to FALSE, and each process has a local boolean variable key:

    do {
        key = TRUE;
        while (key == TRUE)
            Swap(&lock, &key);
        /* critical section */
        lock = FALSE;
        /* remainder section */
    } while (TRUE);

Both the TestAndSet and Swap instructions satisfy the mutual exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee about the order in which they will succeed, and an unlucky process could wait forever. Since there is no guarantee of the relative speeds of the processes, a very fast process could release the lock, run through its remainder section, and re-acquire the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.

The figure below illustrates a solution using TestAndSet that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

                      Bounded-waiting mutual exclusion with TestAndSet( )

On exit, a process first looks, in order (starting with the next process on the list), for a process that has been waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting a specific process into the critical section while continuing to block all the others.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable that is accessed only through 2 standard atomic operations: wait() and signal().

    wait(S) {
        while (S <= 0)
            ;  /* do nothing */
        S--;
    }

    signal(S) {
        S++;
    }

Each process that wishes to use a resource performs the wait() operation; when a process releases a resource, it performs the signal() operation.

Semaphore usage:
1. Binary semaphore (or mutex lock): can take only the values 0 and 1.

2. Counting semaphore: can take any integer value. A counting semaphore counts the remaining instances of a resource; the counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources. When the counter = 0 (or negative, in some implementations), a process blocks until another process frees a resource and increments the semaphore with a signal() call. For example, suppose there are 3 resources and 4 processes, so the counting semaphore = 3. If process P1 requires a resource, it performs wait(); the semaphore becomes 2. If P2 requires a resource, it performs wait(); the semaphore becomes 1. If P3 requires a resource, it performs wait(); the semaphore becomes 0. If P4 now requires a resource and performs wait(), P4 blocks until a resource becomes available.
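Python's threading.Semaphore implements exactly this counting behaviour. A sketch of 4 workers sharing 3 resources (the sleep simply makes the workers overlap):

```python
import threading
import time

resources = threading.Semaphore(3)   # counting semaphore initialized to 3
in_use, peak = 0, 0
guard = threading.Lock()             # protects the two counters above

def worker():
    global in_use, peak
    resources.acquire()              # wait(): blocks while the count is 0
    with guard:
        in_use += 1
        peak = max(peak, in_use)     # record the concurrency high-water mark
    time.sleep(0.05)                 # hold the resource briefly
    with guard:
        in_use -= 1
    resources.release()              # signal(): wakes one blocked waiter

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)   # never exceeds 3, matching the 3 available resources
```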

                      3 Semaphores can also be used to synchronize certain operations

Semaphore Implementation: while one process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). A semaphore of this type is called a spin lock, because the process "spins" while waiting. As an example of synchronizing operations (usage 3 above), suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First we create a semaphore named synch, shared by the two processes, and initialize it to zero. In process P1 we insert the code:

    S1;
    signal(synch);

and in process P2 we insert the code:

    wait(synch);
    S2;

Because synch is initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal(). Each semaphore has an integer value and maintains a queue of waiting processes.

Semaphore implementation with no busy waiting: to avoid busy waiting, a process that must wait blocks itself (it is placed on the semaphore's waiting queue), and a waiting process is later restarted by a wakeup() operation, i.e., the process is moved from the waiting queue to the ready state.

Problems due to semaphores:
1. Deadlock: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes.

2. Starvation: one or more processes remain blocked forever and never get a chance to enter the critical section. For example, this can occur if we do not specify the algorithm for adding processes to the waiting queue, or for selecting the process to be removed in the signal() call: if a LIFO queue is used, the first process to start waiting may never get a chance.

3. Priority inversion: let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource held by process L, so L is running. When process M becomes ready, it pre-empts L, and M runs. Effectively, M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority (priority inheritance), so that M cannot pre-empt it.
=====================================================================
Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem. Consider:
- n buffers, each of which holds one item
- a producer process which creates the items (one at a time)
- a consumer process which processes them (one at a time)

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. Since both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (counts the number of empty buffers)
2. full = 0 (counts the number of filled buffers)
3. mutex = 1 (provides mutual exclusion for access to the buffer)

Producer process:

    do {
        /* produce an item */
        wait(empty);
        wait(mutex);
        /* add the item to the buffer */
        signal(mutex);
        signal(full);
    } while (1);

Consumer process:

    do {
        wait(full);
        wait(mutex);
        /* remove an item from the buffer */
        signal(mutex);
        signal(empty);
        /* consume the item */
    } while (1);
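The semaphore scheme above can be run directly with Python's threading.Semaphore (the buffer size and item count are arbitrary):

```python
import threading

BUFFER_SIZE, N_ITEMS = 5, 20
buffer = [None] * BUFFER_SIZE
in_, out = 0, 0

empty = threading.Semaphore(BUFFER_SIZE)   # counts empty slots
full = threading.Semaphore(0)              # counts filled slots
mutex = threading.Semaphore(1)             # guards the buffer itself
consumed = []

def producer():
    global in_
    for item in range(N_ITEMS):
        empty.acquire()                    # wait(empty)
        mutex.acquire()                    # wait(mutex)
        buffer[in_] = item                 # add the item to the buffer
        in_ = (in_ + 1) % BUFFER_SIZE
        mutex.release()                    # signal(mutex)
        full.release()                     # signal(full)

def consumer():
    global out
    for _ in range(N_ITEMS):
        full.acquire()                     # wait(full)
        mutex.acquire()                    # wait(mutex)
        consumed.append(buffer[out])       # remove the item from the buffer
        out = (out + 1) % BUFFER_SIZE
        mutex.release()                    # signal(mutex)
        empty.release()                    # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(N_ITEMS)))   # True: all items arrive, in order
```

Note the wait() ordering: acquiring empty/full before mutex is essential; reversing it lets a process block while holding mutex, which deadlocks the pair.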

2. The Readers-Writers Problem: a data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

- readers priority: no reader waits merely because a writer is waiting
- writers priority: if a writer is waiting to access the data, no new readers may start reading

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A solution to the readers-priority version uses an integer and two semaphores:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:

    do {
        wait(mutex);
        readcount++;
        if (readcount == 1)
            wait(wrt);
        signal(mutex);
        /* reading is performed */
        wait(mutex);
        readcount--;
        if (readcount == 0)
            signal(wrt);
        signal(mutex);
    } while (TRUE);

Writer process:

    do {
        wait(wrt);
        /* writing is performed */
        signal(wrt);
    } while (TRUE);
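The same protocol can be written with Python semaphores (the reader and writer workloads are invented for illustration; a real reader would do more than sample the value):

```python
import threading

mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # grants writers exclusive access
readcount = 0
data = 0

def reader():
    global readcount
    for _ in range(100):
        mutex.acquire()
        readcount += 1
        if readcount == 1:       # first reader locks out writers
            wrt.acquire()
        mutex.release()
        _ = data                 # reading is performed
        mutex.acquire()
        readcount -= 1
        if readcount == 0:       # last reader readmits writers
            wrt.release()
        mutex.release()

def writer():
    global data
    for _ in range(100):
        wrt.acquire()
        data += 1                # writing is performed, exclusively
        wrt.release()

threads = [threading.Thread(target=f) for f in (reader, reader, writer)]
for t in threads: t.start()
for t in threads: t.join()
print(data)   # 100: every write happened under the wrt semaphore
```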

3. The Dining-Philosophers Problem: n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already held by a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

    semaphore chopstick[5];   /* initially all values are 1 */

Philosopher i:

    do {
        wait(chopstick[i]);
        wait(chopstick[(i+1) % 5]);
        /* eat */
        signal(chopstick[i]);
        signal(chopstick[(i+1) % 5]);
        /* think */
    } while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it can create a deadlock: if every philosopher simultaneously picks up his left chopstick, each waits forever for his right one.

Q) Types of operating systems

                      MULTIPROCESSING SYSTEMS has multiple hardware CPUs

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), the CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks: for example, one processor may be dedicated to disk operations, another to video operations, etc. These systems lack the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

                      REAL TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint

                      Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement is met under all conditions. Examples: digital audio and multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e., decision-making is spread across the different computers in the system, so that failures of individual computers or of the network do not cripple operation.

                      A distributed operating system appears as a uniprocessor system even though it has multiple processors The users may not know where their programs are being run or where their files are located that should all be handled automatically by the operating system

                      Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

                      DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location Modern desktop systems support multiprogramming Common examples are Linux FreeBSD Windows 8 and the Macintosh operating system

HANDHELD SYSTEMS (palmtop computers): two of the most popular operating systems for handhelds are Symbian OS and Android OS. Such devices have very little memory.


                      CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system The components of a cluster are usually connected to each other through fast local area networks each node (computer used as a server) running its own instance of an operating system

                      Q) Modern operating system

New developments in hardware, applications and security threats have led to the development of the modern operating system.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, Internet and Web access, and client-server computing.

Developments in modern operating systems can be categorized as follows:

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development and is well suited to a distributed environment.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing (SMP): as systems nowadays often contain multiple processors, an SMP operating system provides greater efficiency when several processors are available. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

An SMP operating system schedules processes or threads across all of the processors.

Advantages:

1. Increased performance: if some portion of a program can be run in parallel, performance increases.
2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: a new processor can be added to increase performance.
4. Scaling: cost can be scaled with the number of processors used.

4. Distributed operating systems: allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux Operating System
Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency and standardization.

Components of Linux System


The Linux operating system has primarily three components, as shown in the diagram below:

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system, interacts directly with the hardware, and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs used by application programs to access the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks such as updating log files and accepting login requests from terminals.

Kernel Modules: sections of kernel code that can be compiled, loaded and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects the resources reserved by one driver from accidental use by another.

Kernel Mode vs User Mode
Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code executes as a single process in a single address space, does not require context switches, and is therefore very efficient and fast. The kernel runs each process and provides system services to processes, including protected access to hardware. Code that is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or to kernel code; they use the system libraries to invoke kernel functions for low-level tasks.

Basic Features
The following are some of the important features of the Linux operating system:

Portable - Portability means that software works in the same way on different types of hardware. The Linux kernel and application programs support installation on almost any kind of hardware platform.

Open Source - Linux source code is freely available and development is community based. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system, meaning multiple users can access system resources such as memory, RAM and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used for various types of operations, such as calling application programs.


Security - Linux provides user security using authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture The Linux system architecture consists of the following layers:

Hardware layer - consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - the core component of the operating system; interacts directly with the hardware and provides low-level services to the upper-layer components.

Shell - an interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System

1. Extensibility - layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.

2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.

4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.

5. Performance - XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. Designed for symmetric multiprocessing.

6. International support - supports different locales via the national language support (NLS) API.

XP Architecture
1. Layered system of modules.
2. Protected mode - hardware abstraction layer (HAL), kernel, executive.
3. User mode - collection of subsystems:
   a. Environmental subsystems emulate different operating systems.
   b. Protection subsystems provide security functions.

                      XP architecture


Q) Windows Network Operating System Example: the Windows 2000 operating system.

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin and Greg Gagne, Operating System Concepts, Reading: Addison-Wesley.

Visit my blogs: enthusiaststudent.blogspot.in, mtechmessenger.blogspot.in

Keywords: OS exam notes, OS notes, Operating system notes, Operating system Galvin notes, Simple batch system, Multi-programmed batch system, Time-sharing system, Difference between batch, multiprogramming and time sharing, What is a process, structure of process in memory, Process states, PCB, Operations on processes, Process creation, Process termination, cascading termination, context switch, Basic concepts of threads, two modes of CPU execution, Inter-process communication, Shared-memory systems, Message-passing systems, what is deadlock, four conditions that are necessary for deadlock to occur, Methods for handling deadlocks, Deadlock prevention, Deadlock avoidance, Resource-allocation graph, Banker's algorithm, Safe state, Unsafe state, Safety algorithm, Resource-request algorithm, Memory management, Address binding, Logical vs physical address space, MMU (Memory-Management Unit), Dynamic loading, Dynamic linking, Overlays, Swapping, Contiguous memory allocation, Single-partition allocation, Multiple-partition allocation, Internal fragmentation, Variable-size partitions, External fragmentation, First fit, Best fit, Worst fit, Non-contiguous memory allocation, Paging, Implementation of page table, PTLR, PTBR (page-table base register), Page-table length register, translation look-aside buffers (TLB), Protection in paging, Shared pages in paging, Segmentation, Shared segments, Virtual memory, Demand paging, Procedure for handling a page fault, pure demand paging, Page replacement, Page replacement algorithms, FIFO, Optimal page replacement algorithm, Least Recently Used (LRU) page replacement algorithm, Thrashing, Working-set model, global and local page replacement


10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines or records. A file has a structure which depends on its type.

1. Resource-Allocation Graph Algorithm: resource-allocation graphs can be used for deadlock avoidance only if there is a single instance of each resource type. In this case unsafe states can be recognized and avoided by adding claim edges, denoted by dashed lines, which point from a process to a resource that it may request in the future. All claim edges for a process are added before the process begins. When the process actually makes a request, the claim edge Pi -> Rj is converted to a request edge; when the resource is released, the assignment edge changes back to a claim edge. The algorithm works by denying any request that would produce a cycle in the resource-allocation graph, taking claim edges into account.

Consider, for example, the resource-allocation graph as shown.

If P2 requests resource R2, the claim edge P2 -> R2 is converted to a request edge, as follows:

The resulting resource-allocation graph would have a cycle in it, and so the request cannot be granted.

Q) Banker's Algorithm, or deadlock-avoidance algorithm, with example
There are 12 tape drives. Let the current state of the system be as shown below (Need = Max Need - Allocated):

Process   Allocated   Max Need   Need
P0        5           10         5
P1        2           4          2
P2        2           9          7

Available = 12 - (5 + 2 + 2) = 3

Resource-Request Algorithm
Now, when process P2 requests 1 tape drive, we run the resource-request algorithm to check whether the request should be granted. The request is granted only if, after granting it, all the processes in the system can still complete. We check:

1. Is the request of P2 <= the need of P2? 1 <= 7, therefore TRUE.
2. Is the request of P2 <= Available? 1 <= 3, therefore TRUE.
3. Pretend the request is granted to P2. The current state is then as shown below:

Process   Allocated   Max Need   Need
P0        5           10         5
P1        2           4          2
P2        2+1=3       9          7-1=6

Available = 3 - 1 = 2. Now run the safety algorithm to check whether the system is in a safe state.

Safety Algorithm
1. Let WORK = Available = 2.
2. Find an unfinished process whose Need <= WORK.

Check P0: Need of P0 = 5, Work = 2. Is 5 <= 2? FALSE.
Check P1: Need of P1 = 2, Work = 2. Is 2 <= 2? TRUE, therefore P1 can finish. When P1 finishes, Work = Work + Allocated to P1 = 2 + 2 = 4.
Now check again whether P0 can complete: Need of P0 = 5, Work = 4. Is 5 <= 4? FALSE.
Check whether P2 can complete: Need of P2 = 6, Work = 4. Is 6 <= 4? FALSE.

So neither P0 nor P2 can complete, and the system would be in an unsafe state. Therefore the request for 1 tape drive by P2 is not granted.

2. Banker's Algorithm: for resources that have more than one instance, the resource-allocation graph method does not work, so we use the Banker's algorithm. When a process starts up, it must state in advance the maximum allocation of resources it may request. When a request is made, the scheduler determines whether granting the request would leave the system in a safe state; if not, the process must wait until the request can be granted safely.

The Banker's algorithm relies on several key data structures (where n is the number of processes and m is the number of resource categories):

- Available[m] indicates how many resources of each type are currently available.
- Max[n][m] indicates the maximum demand of each process for each resource.
- Allocation[n][m] indicates the number of each resource allocated to each process.
- Need[n][m] indicates the remaining resources needed of each type by each process. (Note that Need[i][j] = Max[i][j] - Allocation[i][j] for all i, j.)

To simplify the discussion, one row of the Need matrix, Need[i], can be treated as a vector corresponding to the needs of process i, and similarly for Allocation and Max.

Safety Algorithm
In order to apply the Banker's algorithm, we first need an algorithm for determining whether or not a particular state is safe. It proceeds as follows:

1. Let Work and Finish be vectors of length m and n respectively. Work is a working copy of the available resources, and Finish is a vector of booleans indicating whether a particular process can finish. Initialize Work = Available and set every element of Finish to false.
2. Find an i such that both (a) Finish[i] == false and (b) Need[i] <= Work. This process has not finished, but could finish with the currently available working set. If no such i exists, go to step 4.
3. Set Work = Work + Allocation[i] and set Finish[i] to true. This corresponds to process i finishing and releasing its resources back into the work pool. Then loop back to step 2.
4. If Finish[i] == true for all i, then the state is safe, because a safe sequence has been found.

Resource-Request Algorithm (the Banker's Algorithm)
Now we have a tool for determining whether a particular state is safe. This algorithm determines whether a new request is safe, and grants it only if so. When a request is made (that does not exceed the currently available resources), pretend it has been granted, and then see whether the resulting state is safe. If so, grant the request; if not, deny it:

1. Let Request[n][m] indicate the number of resources of each type currently requested by processes. If Request[i] > Need[i] for any process i, raise an error condition.
2. If Request[i] > Available, then process i must wait for resources to become available; otherwise continue to step 3.
3. Check whether the request can be granted safely, by pretending it has been granted and then seeing whether the resulting state is safe. If the resulting state is safe, grant the request; otherwise the process must wait until its request can be granted safely.

The procedure for granting a request (or pretending to, for testing purposes) is:

Available = Available - Request[i]
Allocation[i] = Allocation[i] + Request[i]
Need[i] = Need[i] - Request[i]
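The safety and resource-request procedures above can be sketched in Python for the single-resource (12 tape drives) example. This is a minimal illustration, not production code; the function names and state layout are assumptions of this sketch.

```python
# Minimal sketch of the Banker's safety and resource-request algorithms
# for a single resource type, using the 12-tape-drive example.

def is_safe(available, allocation, need):
    """Safety algorithm: True if some order lets every process finish."""
    work = available
    finish = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i in range(len(allocation)):
            # Step 2: find an unfinished process whose need fits in work
            if not finish[i] and need[i] <= work:
                # Step 3: pretend it finishes, releasing its allocation
                work += allocation[i]
                finish[i] = True
                progressed = True
    return all(finish)   # Step 4: safe iff every process can finish

def request(pid, amount, available, allocation, need):
    """Resource-request algorithm: grant only if the result is safe."""
    if amount > need[pid] or amount > available:
        return False                       # exceeds claim or availability
    # Pretend to grant, then run the safety algorithm on the new state
    alloc2 = allocation[:]; need2 = need[:]
    alloc2[pid] += amount; need2[pid] -= amount
    return is_safe(available - amount, alloc2, need2)

# Example state: Allocated = [5, 2, 2], Max = [10, 4, 9] -> Need = [5, 2, 7]
allocation = [5, 2, 2]
need = [5, 2, 7]
available = 12 - sum(allocation)          # = 3

print(is_safe(available, allocation, need))        # True: current state safe
print(request(2, 1, available, allocation, need))  # False: P2's request denied
```

Running it reproduces the worked example: the current state is safe, but granting P2 one more drive would leave the system unsafe, so the request is denied.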

Unit III: Memory Management

Just as processes share the CPU, they also share physical memory. The memory-management unit of the OS takes care of memory allocation, deallocation and related issues. A program must be brought into memory for it to run. Addresses are of two types:

i) relocatable (relative) addresses, expressed with respect to the beginning of the program; ii) absolute addresses.

Q) Address Binding
Ans: Binding means mapping the logical address space to the physical address space. Address binding can happen at three different stages.

Compile time: if it is known at compile time where in memory the program will be placed, the compiler generates absolute addresses; otherwise it generates relocatable addresses.

Load time: the loader binds the relocatable addresses generated by the compiler to absolute addresses, so binding is done at load time. If binding is done at compile or load time, the physical and logical addresses are the same.

Execution time: if address binding is done at run time, the process can be moved during its execution from one memory segment to another. Here the logical addresses are called virtual addresses. Run-time mapping of virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU).

Q) Logical vs Physical Address Space
Ans: A logical address is generated by the CPU and is also referred to as a virtual address. A physical address is the address seen by the memory unit. The set of all logical addresses is called the logical address space; the set of all physical addresses is called the physical address space.

Q) Memory-Management Unit (MMU)
Ans: The MMU is the hardware device that maps virtual to physical addresses. In the simplest MMU scheme, the value in the relocation register is added to every address generated by the CPU before the contents of memory are read. The user program deals only with logical addresses; it never sees the real physical addresses.

Q) Dynamic Loading
Ans: Since physical memory is small, it may not be possible for an entire program to reside in main memory, so we can use dynamic loading. It is the responsibility of users to design their programs to take advantage of it. Only the main function is loaded into main memory; when main() calls another function, a check is made whether that function is already in main memory. If not, the loader loads the desired function into main memory and updates the program's address table.

A routine is not loaded until it is called. This gives better memory-space utilization, since an unused routine is never loaded.

Q) Dynamic Linking
+ Linking is postponed until execution time.
+ A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
+ The stub replaces itself with the address of the routine and executes the routine.
+ The operating system is needed to check whether the routine is in the process's memory address space.
+ Dynamic linking is particularly useful for libraries.

Q) Overlays
Ans: Overlays are needed when a process is larger than the amount of main memory allocated to it. Only the instructions and data that are needed at any given time are kept in main memory. Overlays can be implemented by the user; programming the design of an overlay structure is complex. Overlays for a two-pass assembler are shown in the figure.

Q) Swapping
Ans: A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. The backing store is a fast disk, large enough to store copies of all memory images of all users. Priority-based scheduling uses a variant of this swapping policy called roll out, roll in: if a higher-priority process arrives, the memory manager swaps out a lower-priority process and swaps in the higher-priority one. When the higher-priority process finishes, the lower-priority process can be swapped back into main memory. A process that is swapped out will be swapped back into the same memory space it occupied previously. The major part of the swap time is transfer time. We can swap only an idle process; we cannot swap a process that is waiting for I/O.

Q) Contiguous Memory Allocation
Ans: Each process is contained in a single contiguous section of memory.

1. Fixed-Size Partition (or Single-Partition) Allocation
Divide the main memory into fixed-size partitions; each partition may contain exactly one process. The relocation register contains the value of the starting physical address, and the limit register contains the range of legal logical addresses. Every address generated by the CPU is checked as follows:

If the logical address < limit register, the logical address is added to the relocation register to get the corresponding memory address; otherwise a trap to the OS is generated.

Since every address is checked, the OS and other user programs are protected from being modified by the running process.
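The limit-then-relocate check can be sketched as follows. This is an illustration of the hardware behaviour, not real OS code; the register values are made-up examples.

```python
# Sketch of the per-address hardware check in contiguous allocation:
# compare the logical address against the limit register, then add the
# relocation register to form the physical address.

RELOCATION = 14000   # start of the partition in physical memory (example)
LIMIT = 300          # size of the process's logical address space (example)

def translate(logical_address):
    if logical_address < LIMIT:
        return RELOCATION + logical_address   # legal: relocate
    raise MemoryError("trap to OS: address out of range")

print(translate(100))   # 14000 + 100 = 14100
```

An address of 400 would exceed the limit and raise the trap instead of being relocated.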

2. Multiple-Partition Allocation

1) Fixed-size partitions: divide the main memory into fixed-size blocks. Here the memory allocated to a process may be larger than required; the difference between the allocated memory and the requested memory is called internal fragmentation, i.e., unused memory inside the partition.

0    P1   Block 0 (addresses 0 to 3)
4    P1   Block 1 (addresses 4 to 7)
8         Block 2 (addresses 8 to 11)
12        Block 3 (addresses 12 to 15)

If P1 requires just 5 addresses, it is still allocated 2 blocks, i.e., 8 addresses, so the remaining 3 addresses are left unused.

2) Variable-size partitions: here main memory is divided into partitions of variable sizes. The operating system maintains information about a) allocated partitions and b) free partitions (holes).

In the beginning all of main memory is empty and is considered one large block of available memory, a hole.

Exactly the memory required by a process is given to it. When a process terminates, it releases its memory, which can then be allocated to another process.

Memory is allocated to processes until finally no available block of memory (hole) is large enough to hold the next process. External fragmentation exists when there is enough total memory to satisfy a request but the available memory is not contiguous: memory is wasted outside the partitions. One solution to the problem of external fragmentation is:

a. Compaction: shuffle the memory contents so as to place all free memory together in one large block.

===============================================================================
Q) Dynamic Storage-Allocation Problem (or: most commonly used strategies to select a free hole from the set of available holes)
Ans: To satisfy a request of size n from a list of free holes, the three policies below can be used:

1. First fit: allocate the first hole that is big enough.
2. Best fit: allocate the smallest hole that is big enough; the entire list must be searched unless it is ordered by size. Produces the smallest leftover hole.
3. Worst fit: allocate the largest hole; the entire list must also be searched. Produces the largest leftover hole.

First fit and best fit are better than worst fit in terms of speed and storage utilization.
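The three placement policies can be compared in a small sketch. The hole sizes are illustrative; each function returns the index of the chosen hole, or None if no hole is big enough.

```python
# Sketch of first-fit, best-fit and worst-fit hole selection for a
# request of size n against a list of free-hole sizes.

def first_fit(holes, n):
    for i, h in enumerate(holes):
        if h >= n:
            return i                       # first hole that is big enough
    return None

def best_fit(holes, n):
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None  # smallest adequate hole

def worst_fit(holes, n):
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return max(fits)[1] if fits else None  # largest hole

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))   # index 1 (500: first big enough)
print(best_fit(holes, 212))    # index 3 (300: smallest leftover)
print(worst_fit(holes, 212))   # index 4 (600: largest leftover)
```

Note how the same request lands in a different hole under each policy, which is exactly what drives the differences in leftover-hole size.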

===============================================================================
Q) Non-Contiguous Memory Allocation

1. Paging: paging is a memory-management scheme that provides non-contiguous memory allocation.

0    P1   Block 0 (addresses 0 to 3)
4    P2   Block 1 (addresses 4 to 7)
8         Block 2 (addresses 8 to 11)
12   P1   Block 3 (addresses 12 to 15)

The logical address space of a process can be non-contiguous; for example, P1 is allocated Block 0 and Block 3.

1. Divide physical memory into fixed-size blocks called frames (the size is a power of 2).
2. Divide logical memory into blocks of the same size called pages.
3. A logical address is divided into 2 parts: the page number (p), which indexes the page table, and the page offset (d).
4. A page table is allocated to each process (a pointer to the page table is stored in the PCB of the process). The page table translates logical to physical addresses: page 0 is in frame 1, page 1 is in frame 4, etc.
5. Internal fragmentation may occur due to paging.
6. If the size of the logical address space is 2^m and the page size is 2^n, then the higher-order m-n bits of the logical address give the page number and the n lower-order bits give the displacement within the page.

Example: size of logical address space = 8 = 2^3, so m = 3; page size = 4 = 2^2, so n = 2. Then m-n = 1 bit is used for the page number (to represent pages 0 and 1), and the remaining n = 2 bits for the displacement within the page.
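The bit split above can be sketched directly: with m = 3 address bits and n = 2 offset bits, shifting extracts the page number and masking extracts the offset. The constants come from the example; the function name is illustrative.

```python
# Sketch of splitting a logical address into (page number, offset)
# for m = 3 address bits and page size 4 bytes, so n = 2 offset bits.

N = 2                             # offset bits (page size = 2^N = 4)

def split(addr):
    page = addr >> N              # high-order m-n bits
    offset = addr & ((1 << N) - 1)  # low-order n bits
    return page, offset

print(split(6))   # address 0b110 -> page 1, offset 2
```

So logical address 6 lies at offset 2 within page 1, which matches reading the binary address 110 as page bit 1 followed by offset bits 10.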

Q) Implementation of the Page Table

The page table can be kept as:

1) A set of dedicated registers: efficient if the page table is small.

2) In memory: the page-table base register (PTBR) points to the page table, and the page-table length register (PTLR) indicates its size. Changing the page table then requires changing only the values in these registers.
Advantage of keeping the page table in memory: a. less context-switch time.
Disadvantage: b. two memory accesses are required, one to access the page table and another to access the required memory address.

This problem can be solved by using a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB).

i. Each entry in the TLB consists of 2 parts: I) a page-number field, and II) a value field (frame-number field).

ii. When a logical address is generated by the CPU, its page number is presented to the TLB and compared with all TLB entries simultaneously.

If the page number is found in the TLB, its frame number is immediately available: a TLB hit. If the page number is not found, the page table must be checked: a TLB miss.

With a TLB the search is fast, but TLB hardware is expensive, so the TLB is kept small and contains only a few frequently used page-table entries.

Q) Protection in Paging: a valid/invalid bit is attached to each entry in the page table. The OS sets this bit for each page to allow or disallow access to the page. When the bit is set to invalid, the page is not in the process's logical address space, so accessing it generates a trap to the OS.

Q) Shared Pages in Paging

Reentrant code is code that never changes during execution, so it can be shared:

1. One copy of read-only (reentrant) code is shared among processes (e.g., text editors, compilers, window systems). For example, pages ed1, ed2 and ed3 are shared between processes P1 and P2.
2. Shared code must appear in the same location in the logical address space of all processes.

Each process keeps a separate copy of its private code and data, e.g., data1 and data2. The pages for the private code and data can appear anywhere in the logical address space.

===============================================================================
Q) Segmentation

Segmentation is a memory-management scheme that supports the user's view of memory.

When the user program is compiled, the compiler generates segments such as:

1) the code segment
2) the global-variables segment
3) the heap-memory segment
4) the stack segment, etc.

Each entry in the segment table has a segment base (the starting physical address of the segment) and a limit (the length of the segment).

A logical address is divided into 2 parts: the segment number and the offset. The segment table is indexed by the segment number.

The segment-table base register (STBR) stores the location of the segment table in main memory, and the segment-table length register (STLR) stores the number of segments used by the program.

The segment number s is used to find the entry in the segment table; the required entry is s locations from the beginning of the table. Once the entry is found, the offset d is compared with the limit. If offset < limit, the offset is added to the base entry to generate the physical address.
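The translation just described can be sketched with a small table. The base and limit values are example numbers chosen for illustration.

```python
# Sketch of segmentation address translation: index the segment table by
# segment number s, check offset d against the limit, then add the base.

segment_table = [            # (base, limit) per segment (example values)
    (1400, 1000),            # segment 0
    (6300, 400),             # segment 1
    (4300, 1100),            # segment 2
]

def translate(s, d):
    base, limit = segment_table[s]
    if d < limit:
        return base + d      # legal offset: relocate within the segment
    raise MemoryError("trap: offset beyond segment limit")

print(translate(2, 53))   # 4300 + 53 = 4353
```

A reference such as (1, 500) would exceed segment 1's limit of 400 and trap to the OS instead of producing an address.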

Q) Shared Segments
Code sharing occurs at the segment level. Shared segments must have the same segment number.

Allocation is a dynamic storage-allocation problem: best fit or first fit is used, which may cause external fragmentation.

Protection: protection bits (read/write/execute privileges) are associated with segments. Placing an array in a separate segment lets the hardware check for illegal array indexes.

===============================================================================
Q) Virtual Memory
Ans: In paging and segmentation a program will execute only if the entire process is in main memory, but with virtual memory only part of the program needs to be in memory for execution.

1. A program larger than the available main memory can still run, since only part of the program is loaded into main memory. The logical address space can therefore be much larger than the physical address space.
2. Virtual memory allows address spaces to be shared by several processes.

Virtual memory can be implemented via demand paging or demand segmentation.

The large blank space between the heap and the stack is part of the virtual address space. Virtual memory allows files and memory to be shared by two or more processes through page sharing, as shown in figure (b); figure (a) depicts a virtual memory that is larger than physical memory.

                        ==============================================================================

Q) Demand paging
Ans: Demand paging is the technique of loading pages from disk into main memory only when a page is needed. Hence, using less physical memory, we get faster response.

                        Demand paging is similar to PAGING + SWAPPING

                        Instead of swapping a whole process the pager brings only those pages needed into memory

To distinguish between pages that are in memory and pages that are on disk, a valid/invalid bit is used.

A valid/invalid bit is attached to each page-table entry. When the bit is set to valid, the page is legal and in memory. When the bit is set to invalid, either the page is illegal or the page is on disk.

When the CPU generates a logical address whose page-table entry is marked invalid, a page fault occurs; if the address is legal, the required page is on disk. A section of the hard disk called swap space holds the pages that are not present in memory.

                        Q) Procedure for Handling a Page Fault

                        1) CPU generates a logical address and if the validinvalid bit for the page that has this logical address is set to invalid page fault occurs

                        2) Page fault causes a trap to operating system

3) Check whether the logical address is within the logical address space of the process, i.e. PTBR and PTLR are checked:
if (the required logical address is not within the logical address space of the process)
    terminate the process;
else
    the page is legal but not in memory; it is on disk.

4) To bring the required page into memory: find a free main-memory frame, locate the page's block on disk, and read that block into the free frame.
5) When the required page has been brought into memory, update the page table to indicate that the page is now in memory.
6) Restart the instruction that was interrupted by the trap. The process continues as if the page had always been in memory.

Q) What is pure demand paging?
Ans: If no page belonging to the executing process is in main memory, the process faults for every page it needs; page faults occur until every needed page is in memory. This scheme is known as pure demand paging: NEVER BRING A PAGE IN UNTIL IT IS REQUIRED.

Q) Page Replacement
1. Find the location of the desired page on the disk.
2. Find a free frame:
   i) If there is a free frame, use it.
   ii) If there is no free frame, use a page-replacement algorithm to select a victim frame holding a page that is not really in use.
   iii) If the modify (dirty) bit of the victim page is set to 1, write the page to disk; otherwise there is no need to write it. Set the valid/invalid bit of the victim page's page-table entry to invalid, to indicate that the page is no longer in memory.
3. Read the desired page into the newly freed frame; update the page and frame tables.
4. Restart the user process.

=================================================================================================
Q) Page Replacement Algorithms
Goal: produce a low page-fault rate. The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

1. First-In First-Out (FIFO) page replacement: uses the time when a page was brought into memory. The page that was brought in first, i.e. the oldest page, is chosen for replacement. We create a FIFO queue to hold all pages in memory.

                        When a page is brought into memory we insert it at the tail of the queue and we replace the page at the head of the queue

FIFO: 15 page faults. A problem with FIFO is Belady's anomaly. For the reference string

1 2 3 4 1 2 5 1 2 3 4 5

FIFO replacement causes 12 page faults with 1 frame, 12 with 2 frames, 9 with 3 frames, and 10 with 4 frames. As the number of frames allocated to a process increases, page faults should decrease; with FIFO, however, increasing the allocation from 3 to 4 frames increases the faults from 9 to 10.

2. Optimal (OPT) page replacement: replace the page that will not be used for the longest period of time. OPT is used to measure how well other algorithms perform; it is not practical to implement, as it is not possible to know what the future references will be. Ex:

                        Optimal page replacement 9 page faults
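The FIFO behaviour for the reference string above, including Belady's anomaly, can be checked with a small simulation (my sketch, not part of the original notes):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement with nframes frames."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:          # no free frame: evict the oldest page
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)                  # newly loaded page goes to the tail
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9
print(fifo_faults(refs, 4))   # 10 -- more frames, more faults: Belady's anomaly
```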

3. Least Recently Used (LRU) page replacement: replace the page that has not been used for the longest period of time. LRU uses the time of last use of each page. Ex:

                        Number of page faults = 12

LRU requires hardware assistance to determine the time of last use of a page. Either 1) a stack or 2) a counter can be used to implement LRU page replacement.

1) Stack implementation: keep a stack of page numbers in doubly linked form. When a page is referenced, it is moved to the top of the stack. Moving an entry requires changing up to 6 pointers.

2) Counter implementation: a time-of-use field is added to each page-table entry, and the CPU maintains a counter that is incremented on every memory reference. On a reference to a page, the counter value is copied into that page's time-of-use field.
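A minimal LRU sketch (my illustration, not the hardware scheme described above) using an ordered dictionary as the "stack" of page numbers:

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults under LRU; the most recently used page sits at the end."""
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # referenced page moves to the 'top of the stack'
        else:
            faults += 1
            if len(frames) == nframes:      # evict the least recently used page
                frames.popitem(last=False)
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))   # 10
```

Unlike FIFO, LRU never exhibits Belady's anomaly: with more frames, its fault count never increases.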

Disadvantages of LRU page replacement: the counter or stack must be updated on every memory reference, so LRU is slow, and the implementation requires hardware assistance.
==============================================================================
Q) LRU approximation page replacement
1) Using a reference bit
2) Additional-reference-bits algorithm
3) Second-chance algorithm
4) Enhanced second-chance algorithm
5) Counting-based page replacement

   Counting-based replacement has two schemes: 1) Least Frequently Used (LFU) and 2) Most Frequently Used (MFU).

6) Page-buffering algorithms

Q) What is Thrashing?
Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur with both global and local page replacement, as described below. The solution to thrashing is the working-set strategy: if we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know how many frames a process needs, we use the locality model of process execution.

Here, a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes locality.

The working-set model uses a parameter Δ, the working-set window. The working set is the set of pages in the most recent Δ page references. Let WSSi be the working-set size of process Pi, i.e. Pi needs WSSi frames. Let D be the total demand for frames, D = Σ WSSi, and let m be the total number of available frames.

If D > m, thrashing occurs because some processes will not have enough frames. So:
if (D > m) the OS selects a process to suspend;
else the OS may initiate another process.

The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.
========================================================================
Q) How does thrashing occur in global and local page replacement?
Ans: In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, possibly causing a page fault in another process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more processes queue up for the paging device,

the ready queue empties and the CPU becomes idle. Seeing a free CPU, the OS introduces new processes, which further increases the number of page faults.

In local page replacement, each process is allocated a certain number of frames in main memory, and when a page fault occurs only its own frames are replaced. When a process starts thrashing, other processes' frames are not affected. But since processes still queue for the paging device, the average service time for a page fault increases, so the effective access time increases even for a process that is not thrashing.

Q) Page Table Structure (structure of the page table in memory)

                        Hierarchical Paging Hashed Page Tables Inverted Page Tables

Hierarchical paging: if the page table is large, we break it into multiple page tables, i.e. the page table itself is paged. A simple technique is a two-level page table.

Two-level paging example: a logical address (on a 32-bit machine with a 4K page size) is divided into:
1. a page number consisting of 20 bits
2. a page offset consisting of 12 bits
Since the page table is paged, the page number is further divided into:
1. a 10-bit outer page number (p1)
2. a 10-bit inner page number (p2)
Thus a logical address is as follows:

page number | page offset
  p1 | p2   |     d
  10 | 10   |    12

where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table. (Figure: Two-Level Page-Table Scheme)

(Figure: address-translation scheme for a two-level 32-bit paging architecture)
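The 10 | 10 | 12 split of a 32-bit logical address can be sketched with bit operations (an illustration with a made-up address):

```python
def split_address(addr):
    """Split a 32-bit logical address into (p1, p2, d) for 10/10/12 two-level paging."""
    d  = addr & 0xFFF          # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF  # next 10 bits: index into a page of the page table
    p1 = (addr >> 22) & 0x3FF  # top 10 bits: index into the outer page table
    return p1, p2, d

print(split_address(0x00403007))  # (1, 3, 7)
```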

Hashed page tables (hash tables): hashed page tables are common in address spaces larger than 32 bits. The hash table is indexed by hash value, and each entry contains a linked list of elements that hash to the same location. Each element consists of 3 fields:

1. page number
2. frame number
3. pointer to the next element in the linked list

The algorithm works as follows: the page number is passed to a hash function to get a hash value. Locating the hash value in the hash table is easy, since the table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address; if not, the subsequent entries in the linked list are searched.
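A sketch of this lookup, with chaining via Python lists; the hash function and table contents are made up:

```python
TABLE_SIZE = 8

# Each bucket holds a chain of (page_number, frame_number) elements.
hash_table = [[] for _ in range(TABLE_SIZE)]

def insert(page, frame):
    hash_table[page % TABLE_SIZE].append((page, frame))

def lookup(page):
    """Return the frame number for page, searching the chain at its hash value."""
    for p, f in hash_table[page % TABLE_SIZE]:
        if p == page:          # field 1 matches: use this frame number
            return f
    return None                # not in the table: page fault

insert(3, 17); insert(11, 42)  # 3 and 11 hash to the same bucket
print(lookup(11))              # 42
print(lookup(19))              # None
```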

                        Inverted Page Table

An inverted page table has one entry for each frame of physical memory. Each entry holds:

process-id | page number

A logical address has 3 parts: process-id, page number, and offset. The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, then frame number i combined with the offset gives the physical address. If there is no match, the address is illegal.

Advantages and disadvantages: an inverted page table decreases the memory needed to store page tables, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to limit the search to one (or at most a few) page-table entries. (Figure: Inverted Page Table Architecture)

                        Unit-IV CPU Scheduling

Basic concepts: to maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from it and assigns the CPU to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle: process execution begins with a CPU burst, followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

An I/O-bound program has many short CPU bursts; a CPU-bound program has a few long CPU bursts.

Schedulers: OS modules that make scheduling decisions.
1. Long-term scheduler (job scheduler): selects which processes should be brought into the ready queue.
2. Medium-term scheduler: selects which processes should be swapped in/out of memory.
3. Short-term scheduler (CPU scheduler): whenever the CPU becomes idle, the OS must select one of the processes in the ready queue for execution. This selection is done by the short-term (CPU) scheduler.

The dispatcher gives CPU control to the process selected by the short-term scheduler. This function involves:

1. switching context
2. switching to user mode
3. jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency; it must be kept small.

Ready queue: the ready queue may be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of the processes.

                        Pre-emptive Scheduling

                        CPU scheduling decisions take place under one of four conditions

1. When a process switches from the running state to the waiting state.
2. When a process switches from the running state to the ready state, for example in response to an interrupt.
3. When a process switches from the waiting state to the ready state, say at completion of I/O.
4. When a process terminates.

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one.

If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive (or cooperative). In non-preemptive scheduling a process runs until it itself gives up the CPU, i.e. when it waits for I/O or when it completes. Otherwise the system is said to be preemptive.

Windows used non-preemptive scheduling up to Windows 3.x and switched to preemptive scheduling with Windows 95. The Mac used non-preemptive scheduling prior to OS X and preemptive since then. Note that preemptive scheduling is only possible on hardware that supports a timer interrupt.

                        Note that pre-emptive scheduling can cause problems when two processes share data because one process may get interrupted in the middle of updating shared data structures

                        Preemption can also be a problem if the kernel is busy implementing a system call ( eg updating critical kernel data structures ) when the preemption occurs Solution wait until the system call has either completed or blocked before allowing the pre-emption But this solution is problematic for real-time systems

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling them on exit. This should be done only for very short pieces of code that take little CPU time.

Scheduling criteria (used to compare CPU scheduling algorithms):
1. CPU utilization: in a real system, CPU utilization ranges from about 40% to 90%.
2. Throughput: the number of processes completed per unit time.
3. Turnaround time: the time at which the process completes minus the time at which it was submitted; it includes waiting time and burst time.
4. Waiting time: the amount of time a process spends waiting for the CPU.
5. Response time: the time of the first response minus the time at which the process was submitted.

Optimization criteria: max CPU utilization, max throughput, min turnaround time, min waiting time, min response time.

CPU Scheduling Algorithms
1) First-Come, First-Served (FCFS) scheduling: FCFS is non-preemptive; processes are scheduled in the order they arrive. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages:
1. The average waiting time can be quite long.
2. Convoy effect (small processes wait in the queue while a big process holds the CPU).
Example:
Process  Burst Time
P1       24
P2       3
P3       3

Suppose the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

Waiting times: P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.

2) Shortest-Job-First (SJF) scheduling: schedule the process with the shortest CPU burst time; if the burst times of two processes are the same, FCFS is used.
Advantages: the average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either preemptive or non-preemptive:
1. Non-preemptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Preemptive SJF, or Shortest-Remaining-Time-First (SRTF): if the newly arrived process is shorter than the remaining time of the currently executing process, preempt the currently executing process.

Example:
Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

SJF (non-preemptive) Gantt chart:

| P1 | P3 | P2 | P4 |
0    7    8    12   16

Average waiting time = [0 + (8-2) + (7-4) + (12-5)]/4 = 4
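The waiting times in the FCFS and non-preemptive SJF examples can be checked with a short sketch (my code, using the arrival and burst values from the examples); in a non-preemptive schedule, a process's waiting time is simply the start of its slot minus its arrival time:

```python
def avg_waiting(gantt, arrival):
    """Average waiting time of a non-preemptive schedule.
    gantt is a list of (process, start_time) pairs; arrival maps process -> arrival time."""
    waits = [start - arrival[p] for p, start in gantt]
    return sum(waits) / len(waits)

# FCFS example: P1, P2, P3 all arrive at 0; bursts 24, 3, 3.
print(avg_waiting([("P1", 0), ("P2", 24), ("P3", 27)],
                  {"P1": 0, "P2": 0, "P3": 0}))          # 17.0

# Non-preemptive SJF example with the arrival times above.
print(avg_waiting([("P1", 0), ("P3", 7), ("P2", 8), ("P4", 12)],
                  {"P1": 0, "P2": 2, "P3": 4, "P4": 5}))  # 4.0
```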

Example of preemptive SJF:

Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

SJF (preemptive):

| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16

Average waiting time = (9 + 1 + 0 + 2)/4 = 3

3) Priority scheduling: a priority number (an integer) is associated with each process, and the CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. preemptive 2. non-preemptive.

Problem: starvation; low-priority processes may never execute. Solution: aging; as time progresses, increase the priority of the process.

4) Round Robin (RR): each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n-1)q time units.
Performance:
1. If q is large, RR behaves like FIFO.
2. If q is small, q must still be large relative to the context-switch time, otherwise the overhead is too high.
Example of RR with time quantum = 4:
Process  Burst Time
P1       24
P2       3
P3       3
The Gantt chart is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 ≈ 5.66

5) Multilevel queue scheduling: the ready queue is partitioned into separate queues. For example, the ready queue may be divided into 2 queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, the Round Robin algorithm can be used in the foreground queue and FCFS in the background queue. In addition, scheduling must be done between the queues, in one of 2 ways:

1. Fixed-priority scheduling: the foreground queue has the highest priority. All processes in the foreground queue must complete (the queue must be empty) before any process in the background queue is given the CPU, as shown in the figure below. Disadvantage: starvation.

2. Time slice: each queue gets a certain share of the CPU time, which it schedules amongst its own processes, e.g. 80% to the foreground queue (RR) and 20% to the background queue (FCFS).

6) Multilevel feedback queue scheduling: a process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes stay in the highest-priority queue. If a process uses too much CPU time, it is moved to a lower-priority queue; if a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues Q0, Q1, Q2:

1. Q0: time quantum 8 milliseconds
2. Q1: time quantum 16 milliseconds
3. Q2: FCFS

Scheduling:

1. A process entering the ready queue is put in Q0. When it gains the CPU, it receives 8 ms; if it does not finish within 8 ms, it is moved to Q1.
2. At Q1 the process receives an additional 16 ms; if it still does not complete, it is moved to Q2.
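The round-robin example earlier (quantum = 4, bursts 24, 3, 3) can be reproduced with a short simulation (a sketch assuming all processes arrive at time 0 and ignoring context-switch overhead):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR (all processes arrive at t = 0); return waiting time per
    process, computed as completion time minus burst time."""
    queue = deque(bursts)              # ready queue, in arrival order
    remaining = dict(bursts)
    t, completion = 0, {}
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p])
        t += run
        remaining[p] -= run
        if remaining[p] == 0:
            completion[p] = t
        else:
            queue.append(p)            # preempted: back to the tail of the queue
    return {p: completion[p] - bursts[p] for p in bursts}

waits = round_robin({"P1": 24, "P2": 3, "P3": 3}, 4)
print(waits)                           # {'P1': 6, 'P2': 4, 'P3': 7}
print(sum(waits.values()) / 3)         # 5.666...
```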

Q) Synchronization
Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

A classic example of process synchronization is the Producer-Consumer (or Bounded-Buffer) problem: ensure that the producer does not add data when the buffer is full, and that the consumer does not take data when the buffer is empty.

The code for the producer process can be written as follows:

while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ;  /* do nothing: buffer is full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed. in is an index that always points to the next empty slot after the last filled slot; out is an index that points to the first filled slot, at the head of the circular queue.

The code for the consumer process is shown below:

while (true) {
    while (counter == 0)
        ;  /* do nothing: buffer is empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The producer and consumer code above produces a RACE CONDITION. When the outcome of the execution of several co-operating processes depends on the order in which their accesses to shared data take place, we have a race condition. For example, suppose counter = 5, and the producer produces one item while the consumer consumes one item; the correct value is counter = 5. But because counter++ and counter-- each compile to several machine instructions, their interleaving can leave counter = 4 or counter = 6, i.e. an inconsistent result.

Solution: to avoid the race condition, make sure only one process at a time updates the shared variable (here, counter).
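To illustrate the fix (one process at a time updating counter), here is a small sketch using Python threads and a lock; without the lock, the final count could come out wrong:

```python
import threading

counter = 0
lock = threading.Lock()

def producer_like(n):
    """Increment the shared counter n times, one update at a time."""
    global counter
    for _ in range(n):
        with lock:              # only one thread may update counter at a time
            counter += 1

threads = [threading.Thread(target=producer_like, args=(100_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                  # 200000 -- always correct with the lock
```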

Q) Critical-section problem
Ans: Each process has a segment of code called its critical section; critical-section code accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical sections at the same time. To arrange this:
1. Each process must request permission to enter its critical section: the entry section.
2. The code that ends the critical section: the exit section.
3. The rest of the code after the critical section: the remainder section.

A solution to the critical-section problem must satisfy three requirements:
1. Mutual exclusion: if one process is executing in its critical section, no other process can be executing in its critical section.
2. Progress: only processes that are not in their remainder sections take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: there is a bound on how many times other processes may enter their critical sections after a process has requested entry; no process waits indefinitely.

Q) Peterson's solution to the critical-section problem (software-based)
Ans: The two processes share int turn and boolean flag[2]; flag[i] set means process i wants to enter its critical section, and turn says whose turn it is.

Time  Process 1                                Process 2
T0    do { flag[1] = TRUE; turn = 2;           do { flag[2] = TRUE;
T1                                             turn = 1;
T2    while (flag[2] && turn == 2);            while (flag[1] && turn == 1);
      (turn is now 1, so the loop exits)       (loops until flag[1] becomes FALSE)
T3    enters critical section
T4    flag[1] = FALSE;                         enters critical section
T5    enters remainder section                 flag[2] = FALSE;
      } while (TRUE);                          enters remainder section
                                               } while (TRUE);
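Peterson's algorithm can be sketched for two threads (indices 0 and 1) as follows. This is only an illustration in Python, where the interpreter's global lock masks the memory-ordering issues a real C implementation would have to handle with atomic operations or barriers:

```python
import threading

flag = [False, False]
turn = 0
count = 0   # shared variable protected by Peterson's algorithm

def worker(i):
    global turn, count
    other = 1 - i
    for _ in range(10_000):
        flag[i] = True          # entry section: declare interest
        turn = other            # give the other thread priority
        while flag[other] and turn == other:
            pass                # busy-wait until it is safe to enter
        count += 1              # critical section
        flag[i] = False         # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start(); t0.join(); t1.join()
print(count)                    # 20000 -- mutual exclusion preserved
```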

10.1 File Concept
Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has structure, which depends on its type.

                        File Attributes

1. Name: the name of the file, for user understanding.
2. Identifier: used by the OS to identify a file (e.g. inode number).
3. Type: text file, executable file, binary file, etc.
4. Location: the location of the file on the hard drive.
5. Size: the current size of the file.
6. Protection: controls who can read, write, or execute the file.
7. Time, date, and user identification: when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier locates the other file attributes. The directory structure is kept on disk.

                        File Operations

The file abstract data type supports these common operations:
1. Creating a file: find free space on disk, add an entry to the directory.
2. Writing a file: write data at the current file-position pointer location and update the pointer.
3. Reading a file: read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek): change the pointer to a given value.
5. Deleting a file: free the space allocated to the file and remove its directory entry.
6. Truncating a file: delete the data and update the file size.

                        Open File Table

Most OSes require that files be opened before access and closed after access. Information about all files currently open in the system is stored in a system-wide open-file table maintained by the OS; information about the files opened by a process is stored in that process's own open-file table. An open-file table entry holds the following information:

1. File pointer: records the current position in the file for the next read or write.
2. File-open count: how many times the file has been opened (possibly simultaneously by different processes) and not yet closed. When this counter reaches zero, the entry can be removed from the table.
3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With advisory locking, the software developers must make sure locks are acquired and released appropriately.
4. With mandatory locking, the OS makes sure locks are acquired and released appropriately.
5. UNIX uses advisory locks; Windows uses mandatory locks.

                        File Types

Implementing file types: a common technique is to include the type as part of the filename. The name is split into 2 parts:

1. file name
2. extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on it. For example, when a user opens Microsoft Word and clicks File > Open, the user may specify a filename without the extension; the application looks for a file with the given name and the extension it expects. Such extensions are conventions followed by applications and are not enforced by the operating system.

A UNIX system uses a crude magic number stored at the beginning of the file to indicate the type of the file. In the Mac OS X operating system, a file produced by a word processor has the word processor's name recorded as its creator; when the user double-clicks the file, Word opens it. Some common file types are shown below.

File structure: some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and the location of the first instruction.

If an OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, finding the first executable statement, etc.).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                        Internal File structure

                        Disk files are accessed in units of physical blocks typically 512 bytes

Internally, files are organized in logical units such as a single byte, a data-record size, or a structure size.

The number of logical units that fit into one physical block determines the packing. For example, if a student file is organized internally as records of 64 bytes each, and disk blocks are 512 bytes, then packing = 512/64 = 8, i.e. 8 records can be stored per block.

                        Internal fragmentation may occur due to this

                        102 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (relative access)

1. Sequential access: records are read and written sequentially, one record after another; we cannot randomly access an arbitrary record.

A sequential-access file emulates magnetic tape operation and generally supports the following operations:
- read next: read a record and advance to the next position.
- write next: write a record and advance to the next position.
- rewind.
- skip n records: may or may not be supported; n may be limited to positive numbers or to +/- 1.

                        2 Direct access (or) Relative accessA file is made up of fixed length records so that we can randomly access any record

                        No restriction on order of reading or writing

                        Direct access files are useful when we need to access large amounts of data randomly

                        Ex database

Operations supported include (n is the relative block number, i.e. relative to the beginning of the file):
1. read n - read record number n
2. write n - write record number n
3. jump to record n - n could be 0 or the end of file
4. query current record - used to return back to this record later
5. Sequential access can be easily emulated using direct access. The inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct access method. We construct an index for the file, which contains pointers to the various blocks. To search a file, we first access the index and then use the pointer to access the file.

If a student file has 120000 records of students sorted by student id, we can create an index consisting of the 1st student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120000 records = 120000/64 ≈ 2000 blocks. So the index has 2000 entries. The index is kept in main memory so that searching becomes faster.

                        If the index itself becomes large we create an index for index file (Ex B-tree)

                        Q) Allocation Methods

To allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                        Three major methods of allocating disk space are

1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file indicates the address of the starting block (i.e. b) and the length (n).

                        Advantages

1. Direct access to block i is easy: its address is simply b+i. Since the file is stored sequentially, sequential access is also easy. Hence access is easy.
2. Supports both direct and sequential access.
3. Number of disk seeks required is minimum.
4. Seek time is minimum.

                        Disadvantages

1. Difficult to find space for a new file (or) to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient w.r.t. storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even if the final size is known, the entire space may not be used by the file at the beginning, which leads to wastage of space.

If the size of the file is not known in advance, then:
(i) If we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space and release the previous space. This is possible as long as such space exists, but it is time consuming.
(ii) If we overestimate the amount of space needed, much of the space is unused. This leads to internal fragmentation.

To minimize these drawbacks, some OSs use a modified contiguous allocation scheme. A set of contiguous blocks is allocated at first; if the file needs more blocks, another set of contiguous blocks, known as an extent, is added. The directory entry records the location and block count of the initial portion plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2. Linked Allocation
Each file is a linked list of disk blocks; the disk blocks can be scattered anywhere on the disk. The directory contains a pointer to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10 and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512-4 = 508 bytes. To create a new file, we simply create a new entry in the directory. The pointer will be nil for an empty file, and the size field of an empty file is 0.

To add more data to an existing file, a free block is found, data is written to it and it is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages
1. Easy to find space for a new file (or) to add more data to an existing file.
2. Size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages
1. Seek time is more.
2. Number of disk seeks required is more.
3. Pointers use a larger percentage of the file's disk space.
4. Inefficient for direct access files: to access the ith record, i disk reads are required.
5. Not reliable if pointers are lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks to a file. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk access time, reduces the space wasted on pointers (fewer pointers are needed) and simplifies free-list management. Disadvantage: internal fragmentation; more space is wasted when a cluster is partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. The entry for a block contains the block number of the next block of the file. The last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                        Advantages

1. Easy to find space for a new file (or) to add more data to an existing file.
2. Random access time is improved.

Disadvantage: the number of disk seeks is more. The disk head must move to read the FAT and find the location of the block, then move to the location of the block itself.

                        3 Indexed allocation

                        Each file has its own index block Index block is an array of disk block addresses The ith entry in index block points to ith block of the file

                        The directory contains the address of the index block

                        When the file is created all pointers in the index block are set to nil

                        To add more data to existing file free block is found and data is written to it The address of block is put in the index block

                        Advantages

1. Easy to find space for a new file (or) to add more data to an existing file.
2. Supports direct (or) random access efficiently.
3. No external fragmentation.

                        Disadvantage

1. Seek time is more.
2. Number of disk seeks required is more.
3. Not reliable.
4. The index block must be kept in memory. If this memory is not available, we have to read the index block and then the desired data block, i.e. 2 disk accesses are required. To access a block near the end of the file, we may need to read all the index blocks before reading the needed data block.

5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is still allocated just to store those one or two pointers.

Various mechanisms are used to handle the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header with the name of the file and a set of the first 100 disk block addresses. For large files, the last entry contains the address of another index block.

2. Multilevel index: to access a block, the OS uses a first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

                        3 combined scheme

For example, say there are 15 pointers in the file's inode. The first 12 of these pointers contain the addresses of blocks that hold file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing the addresses of the file's data blocks.

The second points to a double indirect block and the third points to a triple indirect block.

                        Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                        The free space list can be implemented as

1. Bit vector (or) Bit map
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or) Bit map: each block is represented by 1 bit. If the block is free the bit is 1, and if the block is allocated the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. The free-space bit map would be 001111001000...

Advantages: simple, and it is easy to find contiguous free blocks.

                        Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. Bit maps of smaller disks can be kept in main memory, but it may not be possible to keep the bit map of a larger disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.

2. Linked List: the free-space list is implemented by linking all the free disk blocks together. A pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: cannot get contiguous space easily; to traverse the free-space list we must read each block, which requires more I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these are actually free; the last one contains the addresses of another n free blocks, and so on.
Advantage: a large number of free blocks can be found quickly.
4. Counting: the free-space list is maintained by keeping the address of the first free block and the number (n) of free contiguous blocks that follow it.

Each entry in the free-space list thus consists of a disk address and a count. Each entry requires more space, and the list is shorter only if the count is > 1. These entries can be stored in a B-tree for efficient insertion, deletion and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Q) Synchronization Hardware
The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. TestAndSet() hardware instruction
2. Swap() instruction

                        1 Test and Set instruction is as shown

Solution using TestAndSet: the shared variable lock is initialized to false.

                        2 Swap() instruction definition is as follows

Solution using Swap: the shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

                        Both Test and Set Swap instructions satisfy the mutual exclusion requirement but unfortunately do not guarantee bounded waiting If there are multiple processes trying to get into their critical sections there is no guarantee of what order they will enter and any one process could have the bad luck to wait forever until they got their turn in the critical section (Since there is no guarantee as to the relative rates of the processes a very fast process could theoretically release the lock whip through their remainder section and re-lock the lock before a slower process got a chance As more and more processes are involved vying for the same resource the odds of a slow process getting locked out completely increase )

                        Below figure illustrates a solution using test-and-set that satisfies bounded waiting using two shared data structures boolean lock and boolean waiting[ N ] where N is the number of processes in contention for critical sections

                        Bounded-waiting mutual exclusion with TestAndSet( )

                        It first looks in an order ( starting with the next process on the list ) for a process that has been waiting and if it finds one then it releases that particular process from its waiting state without unlocking the critical section thereby allowing a specific process into the critical section while continuing to block all the others

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1. wait()  2. signal()

Wait primitive:
wait(S) {
    while (S <= 0)
        ;  // do nothing
    S--;
}

Signal primitive:
signal(S) {
    S++;
}

                        Each process that wishes to use a resource performs wait() operation When a process releases a resource it performs signal() operation

Semaphores Usage
1. Binary semaphore (or mutex lock): can take values 0 (or) 1.

2. Counting semaphore: can take any integer value. A counting semaphore is used to count the remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore > 0, a process can enter the critical section and use one of the resources. When the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal call.
For example, suppose there are 3 resources and 4 processes; the value of the counting semaphore = 3.
If process P1 requires a resource, it performs a wait() operation; now the counting semaphore = 2.
If process P2 requires a resource, it performs a wait() operation; now the counting semaphore = 1.
If process P3 requires a resource, it performs a wait() operation; now the counting semaphore = 0.
If process P4 requires a resource, it performs a wait() operation; P4 waits until a resource is available.

                        3 Semaphores can also be used to synchronize certain operations

Semaphore Implementation
While one process is in its critical section, any other process that tries to enter the critical section must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process spins while waiting.
Semaphores can also be used for ordering: for example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First we create a semaphore named synch that is shared by the two processes and initialize it to zero. Then in process P1 we insert the code:

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal. Each semaphore has an integer value and maintains a queue of waiting processes.

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e. the process is moved from the waiting queue to the ready state).

Problems due to semaphores
1. Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes.

2. Starvation: one or more processes get blocked forever and never get a chance to enter the critical section. For example, this can happen if we do not specify the algorithms for adding processes to the waiting queue or for selecting one to be removed by the signal() call: if a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M, H have priorities L < M < H. Process H is waiting for a resource that is held by process L, so process L is running. When process M enters, it pre-empts process L, and M is running now. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority so that M does not pre-empt L.
=====================================================================
Q) Classical problems on synchronization

1. The Bounded Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining Philosophers Problem

                        These problems are used to test newly proposed synchronization scheme

1. The Bounded Buffer Problem
Consider n buffers, each holding one item; a producer process which creates items (1 at a time); and a consumer process which processes them (1 at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is needed. For that we use 3 semaphores:
1. empty = n (counts the number of empty buffers)
2. full = 0 (counts the number of filled buffers)
3. mutex = 1 (provides mutual exclusion for access to the buffer)

Producer process:

do {
    // produce an item
    wait(empty);
    wait(mutex);
    // add item to buffer
    signal(mutex);
    signal(full);
} while (1);

Consumer process:

do {
    wait(full);
    wait(mutex);
    // remove an item from buffer
    signal(mutex);
    signal(empty);
    // consume the item
} while (1);

2. The Readers-Writers Problem
A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time; a writer must have exclusive access (i.e. it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: a new reader need not wait just because a writer is waiting
writers priority: if a writer is waiting to access the data, no new readers can start reading

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses:
1. int readcount = 0 (counts the number of processes currently reading the shared data)
2. semaphore mutex = 1 (used for mutual exclusion when readcount is updated)
3. semaphore wrt = 1 (common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:

wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
// reading is performed
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

Writer process:

do {
    wait(wrt);
    // writing is performed
    signal(wrt);
} while (TRUE);

3. The Dining Philosophers Problem
n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining philosophers problem is an example of a large class of concurrency-control problems; it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5];  // initially all values are 1

Philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    // think
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

                        REAL TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint

                        Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that response requirements will be met under all conditions.
2. A soft real-time system cannot guarantee that response requirements are met under all conditions. Ex: digital audio or multimedia systems, digital telephones.

                        DISTRIBUTED SYSTEMS execute parts of a computation in different systems at the same time It uses distributed control ie it spreads its decision-making actions across different computers in the system so that failures of individual computers or the network does not cripple its operation

                        A distributed operating system appears as a uniprocessor system even though it has multiple processors The users may not know where their programs are being run or where their files are located that should all be handled automatically by the operating system

                        Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

                        DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location Modern desktop systems support multiprogramming Common examples are Linux FreeBSD Windows 8 and the Macintosh operating system

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android OS. They have very little memory.


                        CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system The components of a cluster are usually connected to each other through fast local area networks each node (computer used as a server) running its own instance of an operating system

                        Q) Modern operating system

                        New developments in hardware applications and security threats lead to the development of modern operating system

                        New Hardware developments like increased machine speed high speed network increased size and variety of memory devices

                        New Applications developments like multimedia applications internet and web access and client server computing

                        Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development and is well suited to distributed environments.

                        2 Multithreading a process is divided into threads that can run concurrently Useful for applications that perform a number of independent tasks Example database server that listens and processes many client requests

3. Symmetric multiprocessing: as there are now multiple microprocessors in a single system, an SMP operating system provides greater efficiency when there are multiple processors. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                        SMP schedules processes or threads across all of the processors

                        Advantages

                        1 Increased performance if some portion of the program can be run in parallel the performance of SMP OS increases

2. Increased availability: since all processors can perform the same task, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can be scaled with the number of processors used.

                        4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5. Object-oriented design: adds modularity to the kernel. The OS can be customized without affecting system integrity.

Q) Linux Operating System
Linux is open source: its source code is freely available and it is free to use. Linux was designed with UNIX compatibility in mind. Linux is a multi-user, multitasking system. Its main design goals are speed, efficiency and standardization.
Components of Linux System


                        Linux Operating System has primarily three components as shown in the below diagram

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of this operating system and interacts directly with the hardware, hiding low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs. These are used by application programs to access the Kernel's features.

                        System Utility - System Utility programs are responsible to do specialized individual level tasks like updating log file accepting login requests from terminals etc

                        Kernel Modules Sections of kernel code that can be compiled loaded and unloaded independent of the rest of the kernel A kernel module may typically implement a device driver a file system or a networking protocol

Three components of Linux module support:
1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode
Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. It executes as a single process in a single address space, requires no context switch, and hence is very efficient and fast. The kernel runs processes and provides system services to them, including protected access to hardware.
Code which is not required to run in kernel mode lives in the system libraries. User programs and other system programs work in user mode, with no direct access to the hardware or kernel code; they use the system libraries to reach kernel functions for low-level tasks.
Basic Features
Following are some of the important features of the Linux operating system:

Portable - Portability means the software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on any kind of hardware platform.

Open Source - Linux source code is freely available and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multiuser system: multiple users can access system resources like memory, RAM and application programs at the same time.

                        Multiprogramming - Linux is a multiprogramming system means multiple applications can run at same time

                        Hierarchical File System - Linux provides a standard file structure in which system files user files are arranged

                        Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system It can be used to do various types of operations call application programs etc


                        Security - Linux provides user security using authentication features like password protection controlled access to specific files encryption of data

Architecture: the Linux system architecture consists of the following layers:

                        Hardware layer - Hardware consists of all peripheral devices (RAM HDD CPU etc)

                        Kernel - Core component of Operating System interacts directly with hardware provides low level services to upper layer components

                        Shell - An interface to kernel Takes commands from user and executes kernels functions

                        Utilities - Utility programs giving user most of the functionalities of an operating systems

Q) Windows XP Operating System
1. Extensibility - layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++; processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.
4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on Windows 2000 without changing the source code.
5. Performance - XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. Designed for symmetric multiprocessing.
6. International support - supports different locales via the national language support (NLS) API.

XP Architecture
1. Layered system of modules.
2. Protected mode - hardware abstraction layer (HAL), kernel, executive.
3. User mode - collection of subsystems:
a. Environmental subsystems emulate different operating systems.
b. Protection subsystems provide security functions.

                        XP architecture


Q) Windows Network Operating System
Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.



• 10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines or records. A file has a structure which depends on its type.

Now the current state is as shown below:

Process   Allocated   Max Need   Need = Max Need - Allocated
P0        5           10         5
P1        2           4          2
P2        2+1=3       9          7-1=6

Available = 3-1 = 2

Now run the safety algorithm to check whether the system is in a safe state.

Safety Algorithm:
1. Let WORK = Available = 2
2. Find an unfinished process such that Need of unfinished process <= WORK

Check P0: Need of P0 = 5, Work = 2. Is 5 <= 2? FALSE.

Check P1: Need of P1 = 2, Work = 2. Is 2 <= 2? TRUE. Therefore P1 can finish. If P1 finishes, Work = Work + Allocated to P1 = 2 + 2 = 4.

Now again check if P0 can complete: Need of P0 = 5, Work = 4. Is 5 <= 4? FALSE.

Check if P2 completes: Need of P2 = 6, Work = 4. Is 6 <= 4? FALSE.

So neither P0 nor P2 can complete, and the system is in an unsafe state. The request for 1 tape drive by P2 is not granted.

2. Banker's Algorithm
For resources that have more than one instance, the resource-allocation graph method does not work, so we use the Banker's algorithm. When a process starts up, it must state in advance the maximum allocation of resources it may request. When a request is made, the scheduler determines whether granting the request would leave the system in a safe state. If not, the process must wait until the request can be granted safely.

The Banker's algorithm relies on several key data structures (where n is the number of processes and m is the number of resource categories):

o Available[m] indicates how many resources are currently available.
o Max[n][m] indicates the maximum demand of each process for each resource.
o Allocation[n][m] indicates the number of each resource allocated to each process.
o Need[n][m] indicates the remaining resources needed of each type for each process. (Note that Need[i][j] = Max[i][j] - Allocation[i][j] for all i, j.)

For simplification of discussion we make the following observation:

o One row of the Need matrix, Need[i], can be treated as a vector corresponding to the needs of process i, and similarly for Allocation and Max.

Safety Algorithm
In order to apply the Banker's algorithm we first need an algorithm for determining whether or not a particular state is safe. This algorithm determines if the current state of a system is safe according to the following steps:

1. Let Work and Finish be vectors of length m and n respectively. Work is a working copy of the available resources. Finish is a vector of booleans indicating whether a particular process can finish. Initialize Work = Available and Finish to false for all elements.

2. Find an i such that both (A) Finish[i] == false and (B) Need[i] <= Work. This process has not finished, but could with the given available working set. If no such i exists, go to step 4.

3. Set Work = Work + Allocation[i] and set Finish[i] to true. This corresponds to process i finishing up and releasing its resources back into the work pool. Then loop back to step 2.

4. If Finish[i] == true for all i, then the state is a safe state, because a safe sequence has been found.

Resource-Request Algorithm (The Banker's Algorithm)
Now we have a tool for determining if a particular state is safe or not. This algorithm determines if a new request is safe, and grants it only if it is safe to do so. When a request is made (that does not exceed currently available resources), pretend it has been granted, and then see if the resulting state is a safe one. If so, grant the request; if not, deny the request, as follows:

1. Let Request[n][m] indicate the number of resources of each type currently requested by processes. If Request[i] > Need[i] for any process i, raise an error condition.

2. If Request[i] > Available for any process i, then that process must wait for resources to become available; otherwise the process can continue to step 3.

3. Check to see if the request can be granted safely, by pretending it has been granted and then seeing if the resulting state is safe.
If the resulting state is safe, grant the request;
else the process must wait until its request can be granted safely.

The procedure for granting a request (or pretending to, for testing purposes) is:

Available = Available - Request
Allocation = Allocation + Request
Need = Need - Request
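The safety and resource-request procedures above can be sketched in Python. This is a minimal sketch using the data structures described in the text; the example values in the comments and tests are illustrative, not taken from these notes.

```python
# A minimal sketch of the Banker's algorithm using the structures above:
# Available[m], Allocation[n][m], Need[n][m].

def is_safe(available, allocation, need):
    """Safety algorithm: return (True, safe_sequence) if the state is safe."""
    n, m = len(allocation), len(available)
    work = available[:]                     # Step 1: Work = Available
    finish = [False] * n
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            # Step 2: unfinished process whose Need fits within Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):               # Step 3: process i finishes
                    work[j] += allocation[i][j]  # and releases its resources
                finish[i] = True
                sequence.append(i)
                progress = True
    return all(finish), sequence            # Step 4: safe iff all can finish

def request_resources(pid, request, available, allocation, need):
    """Resource-request algorithm: grant the request only if it stays safe."""
    m = len(request)
    if any(request[j] > need[pid][j] for j in range(m)):
        raise ValueError("process exceeded its declared maximum claim")
    if any(request[j] > available[j] for j in range(m)):
        return False                        # must wait for resources
    for j in range(m):                      # pretend the request is granted
        available[j] -= request[j]
        allocation[pid][j] += request[j]
        need[pid][j] -= request[j]
    safe, _ = is_safe(available, allocation, need)
    if not safe:                            # unsafe: roll back, process waits
        for j in range(m):
            available[j] += request[j]
            allocation[pid][j] -= request[j]
            need[pid][j] += request[j]
    return safe
```

With a single resource type, a state like the tape-drive example (needs 5, 2, 6 with 2 available) comes out unsafe, while the classic multi-resource textbook state comes out safe.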

Unit III: Memory Management

Just as processes share the CPU, they also share physical memory. The memory management unit of the OS takes care of memory allocation, deallocation and related issues. A program must be brought into memory for it to run. Addresses are of two types:

i) Relocatable (relative) addresses, given with respect to the beginning of the program.
ii) Absolute addresses.

Q) Address Binding
Ans: Binding means mapping the logical address space to the physical address space. Address binding can happen at three different stages:

Compile time: If it is known at compile time where in memory the program is going to be allocated, the compiler generates absolute addresses. Otherwise the compiler generates relocatable addresses.

Load time: The loader binds the relocatable addresses generated by the compiler to absolute addresses; hence binding is done at load time. If binding is done at compile or load time, physical and logical addresses are the same.

Execution time: If address binding is done at run time, the process can be moved during its execution from one memory segment to another. Here we call the logical addresses virtual addresses. Run-time mapping of virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU).

Q) Logical vs Physical Address Space
Ans: Logical address - generated by the CPU; also referred to as a virtual address. Physical address - the address seen by the memory unit. The set of all logical addresses is called the logical address space; the set of all physical addresses is called the physical address space.

Q) Memory-Management Unit (MMU)
Ans: The MMU is a hardware device that maps virtual to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by the CPU before memory is accessed. The user program deals only with logical addresses; it never sees the real physical addresses.

Q) Dynamic Loading
Ans: Since physical memory is small, it may not be possible for the entire program to be in main memory, so we can use dynamic loading. It is the responsibility of users to design their programs to take advantage of dynamic loading. Only the main function is loaded into main memory; when main() calls another function, it checks whether that function is in main memory. If not, the loader loads the desired function into main memory and updates the program's address table.

A routine is not loaded until it is called. We achieve better memory-space utilization, as an unused routine is never loaded.

Q) Dynamic Linking
+ Linking is postponed until execution time.
+ A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
+ The stub replaces itself with the address of the routine, and executes the routine.
+ The operating system is needed to check whether the routine is in the process's memory address space.
+ Dynamic linking is particularly useful for libraries.

Q) Overlays
Ans: Overlays are needed when a process is larger than the amount of main memory allocated to it. Only the instructions and data that are needed at any given time are kept in main memory. Overlays can be implemented by the user; programming the design of an overlay structure is complex. Overlays for a two-pass assembler are as shown in the figure.

Q) Swapping
Ans: A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution. Backing store - a fast disk, large enough to store copies of all memory images of all users. Priority-based scheduling uses a variant of this swapping policy called roll out, roll in: if a higher-priority process arrives, the memory manager swaps out a lower-priority process and then swaps in the higher-priority process. When the higher-priority process finishes, the lower-priority process can be swapped back into main memory. A process that is swapped out will be swapped back into the same memory space it occupied previously. The major part of the swap time is transfer time. We can swap only an idle process; we cannot swap a process that is waiting for IO.

Q) Contiguous Memory Allocation
Ans: Each process is contained in a single contiguous section of memory.

1. Fixed Size Partition (or) Single-partition allocation
Divide the main memory into fixed-sized partitions; each partition may contain exactly one process. The relocation register contains the value of the starting physical address; the limit register contains the range of legal logical addresses. Every address generated by the CPU is checked as follows:

If logical address < limit register, the logical address is added to the relocation register to get the corresponding memory address; else a trap to the OS is generated.

Since every address is checked, the OS and other user programs are protected from being modified by the running process.
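The limit/relocation check just described can be sketched as follows. The register values here are illustrative assumptions, not real hardware values.

```python
# Sketch of the limit/relocation register check described above.

LIMIT = 1000          # limit register: range of legal logical addresses
RELOCATION = 14000    # relocation register: partition's starting physical address

def translate(logical_address):
    """Check a CPU-generated logical address, then relocate it."""
    if logical_address < LIMIT:
        return RELOCATION + logical_address   # legal: add relocation base
    raise MemoryError("trap to OS: addressing error")
```

For example, logical address 346 maps to physical address 14346, while logical address 1200 exceeds the limit and traps.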

2. Multiple-partition allocation

1) Fixed Size Partitions: Divide the main memory into fixed-sized blocks. Here the memory allocated to a process may be larger than required. The difference between the allocated memory and the requested memory is called internal fragmentation - unused memory inside the partition.

0    P1    Block 0 (0th address to 3rd address)
4    P1    Block 1 (4th address to 7th address)
8          Block 2 (8th address to 11th address)
12         Block 3 (12th address to 15th address)

If P1 requires just 5 addresses, it is still allocated 2 blocks, i.e. 8 addresses. So the remaining 3 addresses are left unused.

2) Variable Size Partitions: Here main memory is divided into partitions of variable sizes. The operating system maintains information about:

a) allocated partitions
b) free partitions (holes)

In the beginning, all main memory is empty and is considered one large block of available memory, a hole.

Exactly the memory required by a process is given to it. When a process terminates, it releases its memory, which can then be allocated to another process.

Memory is allocated to processes until finally no available block of memory (hole) is large enough to hold the next process. External fragmentation exists when there is enough total memory to satisfy a request, but the available memory is not contiguous - wastage of memory outside the partitions. One solution to the problem of external fragmentation is:

a. Compaction: shuffle the memory contents so as to place all free memory together in one large block.

===============================================================================
Q) Dynamic Storage-Allocation Problem (or) Most commonly used strategies to select a free hole from the set of available holes
Ans: To satisfy a request of size n from a list of free holes, the 3 policies below can be used:

1. First-fit: Allocate the first hole that is big enough.

2. Best-fit: Allocate the smallest hole that is big enough; the entire list must be searched, unless it is ordered by size. Produces the smallest leftover hole.

3. Worst-fit: Allocate the largest hole; the entire list must also be searched. Produces the largest leftover hole.

First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
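The three hole-selection policies can be sketched as below. The free-hole list is a hypothetical example; each function returns the index of the chosen hole, or None if no hole is big enough.

```python
# Sketch of first-fit, best-fit and worst-fit hole selection.
# Holes are (start, size) pairs.

def first_fit(holes, n):
    """First hole that is big enough."""
    for i, (start, size) in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    """Smallest hole that is big enough (smallest leftover hole)."""
    fits = [(size, i) for i, (start, size) in enumerate(holes) if size >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Largest hole (largest leftover hole)."""
    fits = [(size, i) for i, (start, size) in enumerate(holes) if size >= n]
    return max(fits)[1] if fits else None
```

For holes of sizes 100, 500 and 200 and a request of 150, first-fit picks the 500-hole (first big enough), best-fit the 200-hole, worst-fit the 500-hole.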

===============================================================================
Q) Non-Contiguous Memory Allocation

1. Paging: Paging is a memory management scheme that provides non-contiguous memory allocation.

0    P1    Block 0 (0th address to 3rd address)
4    P2    Block 1 (4th address to 7th address)
8          Block 2 (8th address to 11th address)
12   P1    Block 3 (12th address to 15th address)

The logical address space of a process can be noncontiguous. Ex: P1 is allocated Block 0 and Block 3.

1. Divide physical memory into fixed-sized blocks called frames (size is a power of 2).
2. Divide logical memory into blocks of the same size, called pages.
3. A logical address is divided into 2 parts:
   Page number (p) - the page table is indexed by page number.
   Page offset (d) - the displacement within the page.
4. A page table is allocated to each process (a pointer to the page table is stored in the PCB of the process). The page table translates logical to physical addresses: page 0 is in frame 1, page 1 is in frame 4, etc.
5. Internal fragmentation may occur due to paging.
6. If the size of the logical address space is 2^m and the page size is 2^n, then the higher-order m-n bits of the logical address give the page number, and the n lower-order bits give the displacement within the page.

Ex: size of logical address space = 8 = 2^3, so m = 3; page size = 4 = 2^2, so n = 2.

So m-n = 1 bit for the page number (to represent pages 0 and 1), and the remaining n = 2 bits for the displacement within the page.
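The m = 3, n = 2 example above can be sketched with bit operations (the page-table contents follow the "page 0 in frame 1, page 1 in frame 4" example in the text):

```python
# Sketch of the page-number/offset split for m = 3, n = 2.

N = 2                        # page size = 2^n = 4 addresses
page_table = {0: 1, 1: 4}    # page number -> frame number (as in the text)

def split(logical_address):
    page_number = logical_address >> N          # high-order m-n bits
    offset = logical_address & ((1 << N) - 1)   # low-order n bits
    return page_number, offset

def translate(logical_address):
    p, d = split(logical_address)
    return page_table[p] * (1 << N) + d         # frame base + offset
```

For example, logical address 6 (binary 110) splits into page 1, offset 2, and translates to physical address 4*4 + 2 = 18.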

Q) Implementation of Page Table

The page table can be kept as:

1) A set of dedicated registers - efficient if the page table is small.

2) In memory - the page-table base register (PTBR) points to the page table, and the page-table length register (PTLR) indicates the size of the page table. The advantage of keeping the page table in memory is less context-switch time, since changing page tables requires changing only the values in these registers. The disadvantage is that two memory accesses are required: one access to read the page table, and another to access the required memory address.

This problem can be solved by using a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB).

i. Each entry in the TLB consists of 2 parts:

   I) a page number field

   II) a value field (frame number field)

ii. When a logical address is generated by the CPU, its page number is presented to the TLB. The page number is compared with all the entries of the TLB simultaneously.

If the page number is found in the TLB, its frame number is immediately available -> TLB hit. If the page number is not found, check in the page table -> TLB miss.

By using a TLB the search is fast, but TLB hardware is expensive, so the TLB size is kept small. Hence the TLB contains only a few frequently used page-table entries.
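The TLB-then-page-table lookup can be sketched as below; the table contents are illustrative assumptions.

```python
# Sketch of the TLB hit/miss lookup described above.

tlb = {2: 7}                            # small, fast cache: page -> frame
page_table = {0: 1, 1: 4, 2: 7, 3: 9}  # full table, kept in memory

def lookup(page_number):
    """Return (frame_number, 'hit' or 'miss')."""
    if page_number in tlb:
        return tlb[page_number], "hit"  # frame immediately available
    frame = page_table[page_number]     # miss: extra memory access
    tlb[page_number] = frame            # cache the entry for next time
    return frame, "miss"
```

A first reference to a page not in the TLB is a miss; a repeated reference then hits.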

Q) Protection in Paging
A valid/invalid bit is attached to each entry in the page table. The OS sets this valid/invalid bit for each page to allow or disallow access to the page. When the bit is set to invalid, the page is not in the process's logical address space; accessing it generates a trap to the OS.

Q) Shared Pages in Paging

Reentrant code is code that never changes during execution; reentrant code can be shared.

1. One copy of read-only (reentrant) code is shared among processes (e.g. text editors, compilers, window systems). Ex: pages ed1, ed2, ed3 are shared between processes P1 and P2.

2. Shared code must appear in the same location in the logical address space of all processes.

Each process keeps a separate copy of its private code and data, e.g. data1 and data2.

The pages for the private code and data can appear anywhere in the logical address space.

==============================================================================
Q) Segmentation

Segmentation is a memory management scheme that supports the user's view of memory.

When the user program is compiled, the compiler generates segments like:

1) The code segment

2) The global variables segment

3) The heap memory segment

4) The stack segment, etc.

Each entry in the segment table has a segment base (the starting physical address of the segment) and a limit (the length of the segment).

A logical address is divided into 2 parts:

segment number, offset

The segment table is indexed by segment number.

The segment-table base register (STBR) stores the location of the segment table in main memory.

The segment-table length register (STLR) stores the number of segments used by the program.

The segment number (say 's') is used to find the entry in the segment table: the required entry is 's' locations from the beginning of the segment table. Once the required entry in the segment table is found, the offset ('d') is compared with the limit.

If (offset < limit), the offset is added to the base entry to generate the physical address.
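The segment-table translation just described can be sketched as below; the (base, limit) entries are illustrative values, not from these notes.

```python
# Sketch of segment-table address translation.

segment_table = [
    (1400, 1000),   # segment 0: code       (base, limit)
    (6300, 400),    # segment 1: global variables
    (4300, 1100),   # segment 2: heap
]

def translate(s, d):
    """Translate a (segment number, offset) logical address."""
    base, limit = segment_table[s]      # entry s locations from table start
    if d < limit:
        return base + d                 # legal: offset added to segment base
    raise MemoryError("trap to OS: offset beyond segment limit")
```

For example, (segment 2, offset 53) maps to 4300 + 53 = 4353, while (segment 1, offset 500) exceeds that segment's limit of 400 and traps.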

Q) Shared Segments
Code sharing occurs at the segment level. Shared segments must have the same segment number.

Allocation is a dynamic storage-allocation problem: use best fit or first fit; this may cause external fragmentation.

Protection: protection bits are associated with segments - read/write/execute privileges. An array can be placed in a separate segment, so the hardware can check for illegal array indexes.

===============================================================================
Q) Virtual Memory

Ans: In paging and segmentation, a program will execute only if the entire process is in main memory. With virtual memory, only part of the program needs to be in memory for execution.

1. A program larger than the available main memory can still run, as only part of the program is loaded into main memory. The logical address space can therefore be much larger than the physical address space.

2. Virtual memory allows address spaces to be shared by several processes.

Virtual memory can be implemented via demand paging or demand segmentation.

The large blank space between heap and stack is part of the virtual address space. Virtual memory allows files and memory to be shared by 2 or more processes through page sharing, as shown in figure (b); figure (a) depicts virtual memory that is larger than physical memory.

                          ==============================================================================

Q) Demand Paging
Demand paging is the technique of loading pages from disk to main memory only when a page is needed. Hence, using a smaller amount of physical memory, we get a faster response.

Demand paging is similar to PAGING + SWAPPING.

Instead of swapping in a whole process, the pager brings only the needed pages into memory.

To distinguish between pages that are in memory and pages that are on disk, a valid/invalid bit is used.

The valid/invalid bit is attached to each entry in the page table. When this bit is set to valid, the page is legal and in memory. When this bit is set to invalid, either the page is illegal, or the page is on disk.

When the CPU generates a logical address whose page-table entry is set to invalid, a page fault occurs, meaning the required page is on disk. A section of the hard disk called swap space is used to hold pages that are not present in memory.

                          Q) Procedure for Handling a Page Fault

1) The CPU generates a logical address; if the valid/invalid bit for the page containing this logical address is set to invalid, a page fault occurs.

2) The page fault causes a trap to the operating system.

3) Check whether the logical address is within the logical address space of the process, i.e. the PTBR and PTLR are checked.
if (the required logical address is not within the logical address space of the process)
    terminate the process
else
    the page is legal but not in memory; the page is on disk

4) To bring the required page into memory: find a free main memory frame, map the address to a disk block, fetch the disk block, and load it into the free frame.
5) When the required page is brought into memory, update the page table to indicate the page is in memory.
6) Restart the instruction interrupted by the illegal-address trap. The process continues as if the page had always been in memory.
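The steps above can be sketched as a tiny simulator. All names here (page_table, free_frames, disk_pages) are hypothetical, for illustration only.

```python
# Sketch of page-fault handling steps 1-6 above as a small simulator.

def access(page, page_table, free_frames, disk_pages):
    """Handle one memory access under demand paging; return the frame used."""
    if page in page_table:          # valid bit set: page already in memory
        return page_table[page]
    if page not in disk_pages:      # step 3: outside the logical address space
        raise MemoryError("terminate process: illegal address")
    frame = free_frames.pop()       # step 4: find a free frame, load the page
    page_table[page] = frame        # step 5: update the page table
    return frame                    # step 6: restart the instruction
```

A first access to a legal page faults and loads it; a repeated access finds it in memory; an access outside the address space terminates the process.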

Q) What is pure demand paging?
Ans: If no page belonging to the executing process is in main memory, the process will fault for every page it needs; page faults occur until every page that it needs is in memory. This scheme is known as pure demand paging: NEVER BRING A PAGE IN UNTIL IT IS REQUIRED.

Q) Page Replacement
1. Find the location of the desired page on the disk.
2. Find a free frame:
   i) If there is a free frame, use it.
   ii) If there is no free frame, use a page replacement algorithm to find some page in memory that is not really in use, and swap it out.
   iii) If the modify bit associated with the victim frame is set to 1, the page is written to disk; change the valid/invalid bit of the victim page's page-table entry to invalid, to indicate the victim page is no longer in memory. Else, there is no need to write to disk.
3. Read the desired page into the freed frame, and update the page and frame tables.
4. Restart the user process.

=================================================================================================
Q) Page Replacement Algorithms
Goal: produce a low page-fault rate. The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

1. First In First Out (FIFO) Page Replacement Algorithm
Uses the time when a page was brought into memory: the page that was brought into memory first is replaced, i.e. the oldest page is chosen for replacement. We create a FIFO queue to hold all pages in memory.

When a page is brought into memory, we insert it at the tail of the queue, and we replace the page at the head of the queue.
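The FIFO scheme can be sketched as below; running it on the reference string 1 2 3 4 1 2 5 1 2 3 4 5 shows that adding a frame can increase the fault count (Belady's anomaly).

```python
# Sketch of FIFO page replacement on a reference string.
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement with num_frames frames."""
    frames = deque()            # head = oldest resident page, tail = newest
    faults = 0
    for page in reference_string:
        if page not in frames:  # page fault: page not resident
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()    # replace the page at the head (oldest)
            frames.append(page)     # insert the new page at the tail
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# 3 frames give 9 faults, but 4 frames give 10: Belady's anomaly
```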

FIFO: 15 page faults.

A problem with FIFO is Belady's anomaly. For the reference string 1 2 3 4 1 2 5 1 2 3 4 5:
When the number of frames allocated to the process is 1, the number of page faults under FIFO is 12.
When the number of frames is 2, the number of page faults is 12.
When the number of frames is 3, the number of page faults is 9.
When the number of frames is 4, the number of page faults is 10.
As the number of frames allocated to a process increases, page faults should decrease; but with FIFO page replacement, page faults can increase.

2. Optimal Page Replacement Algorithm
Replace the page that will not be used for the longest period of time. Used for measuring how well other algorithms perform. It is not practical to implement the OPT algorithm, as it is not possible to know what the future references will be. Ex:

Optimal page replacement: 9 page faults.

3. Least Recently Used (LRU) Page Replacement Algorithm
Replace the page that has not been used for the longest period of time. LRU makes use of the time of last use of each page. Ex:

                          Number of page faults = 12

LRU requires hardware assistance to determine the time of last use of a page. We can use 1) a stack or 2) a counter to implement the LRU page replacement algorithm.

1) Stack implementation - keep a stack of page numbers in doubly-linked form. When a page is referenced, it is moved to the top of the stack. This implementation requires up to 6 pointers to be changed per reference.

2) Counter implementation - a time-of-use field is added to each page-table entry. The CPU maintains a counter that is incremented on every memory reference; the counter value is copied into the time-of-use field of the referenced page. The victim is the page with the smallest time-of-use value.
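The counter scheme can be sketched as below. On the standard textbook reference string it gives 12 faults with 3 frames, matching the count quoted above (assuming that is the string these notes use).

```python
# Sketch of LRU via the counter scheme: each resident page keeps a
# time-of-use field; the victim is the page with the smallest value.

def lru_faults(reference_string, num_frames):
    """Count page faults under LRU replacement with num_frames frames."""
    last_used = {}                      # resident page -> time-of-use field
    faults = 0
    for time, page in enumerate(reference_string):
        if page not in last_used:       # page fault
            faults += 1
            if len(last_used) == num_frames:
                # victim = page not used for the longest period of time
                victim = min(last_used, key=last_used.get)
                del last_used[victim]
        last_used[page] = time          # counter copied on every reference
    return faults
```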

Disadvantages of the LRU page replacement algorithm: updating the counter or stack must be done for every memory reference, HENCE LRU IS SLOW, and the implementation requires hardware assistance.
==============================================================================
Q) LRU Approximation Page Replacement
1) Using a reference bit
2) Additional-reference-bits algorithm
3) Second-chance algorithm
4) Enhanced second-chance algorithm
5) Counting-based page replacement

   Has two schemes: 1) Least Frequently Used (LFU)  2) Most Frequently Used (MFU)

6) Page-buffering algorithms

Q) What is Thrashing?
Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur when global (or) local page replacement is used, as described below. A solution to thrashing is the working-set strategy. If we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know how many frames a process needs, we use the locality model of process execution.

Here, a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes locality.

The Working Set model uses a working set and a parameter Δ = the working-set window. The set of pages in the most recent Δ page references is the working set. Let WSSi be the working-set size of process Pi, i.e. Pi needs WSSi frames. Let D be the total demand for frames; then D = sum of all WSSi. Let m be the total number of available frames.

If D > m, thrashing occurs, because some processes will not have enough frames. So:
IF (D > m)

                          OS selects a process to suspendelse OS may initiate another processThe working set strategy prevents thrashing while keeping the degree of multi programming as high as possibleThe main difficulty is to keep track of moving working set window ========================================================================Q) How does thrashing occur in global and local page replacementAns In Global page replacement when a page fault occurs for a process any frame in main memory can be replaced causing page fault to some other process that needs the replaced frame Page fault processes must use paging device to swap pages in and out As more number of processes queue up for the paging device

the READY queue empties and the CPU becomes idle. So the OS introduces new processes since the CPU is free. This further increases the number of page faults.

In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs, only the frames allocated to that process can be replaced. When a process starts thrashing, other processes are not affected directly. But since processes will be in the queue for the paging device most of the time, the average service time for a page fault increases. Hence, effective access time increases even for a process that is not thrashing.

Q) Page Table Structure (Or) Structure of page table in memory

1. Hierarchical Paging 2. Hashed Page Tables 3. Inverted Page Tables

Hierarchical Paging: If the page table is large, we break it up into multiple page tables, i.e., the page table itself is paged. A simple technique is a two-level page table.
Two-Level Paging Example: A logical address (on a 32-bit machine with 4K page size) is divided into:
1. a page number consisting of 20 bits
2. a page offset consisting of 12 bits
Since the page table is paged, the page number is further divided into:
1. a 10-bit page number
2. a 10-bit page offset
Thus a logical address is as follows: page number | page offset

p1 | p2 | d

10 | 10 | 12, where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table.
Two-Level Page-Table Scheme:

Address-Translation Scheme: the address-translation scheme for a two-level 32-bit paging architecture is shown below.
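The 10/10/12 split described above can be sketched with plain bit operations. A minimal Python sketch (the sample address below is made up for illustration):

```python
PAGE_OFFSET_BITS = 12   # 4K pages -> 12-bit offset (d)
INNER_BITS = 10         # p2: index within one page of the outer page table
# the remaining high 10 bits are p1, the index into the outer page table

def split_address(addr):
    """Split a 32-bit logical address into (p1, p2, d)."""
    d = addr & ((1 << PAGE_OFFSET_BITS) - 1)
    p2 = (addr >> PAGE_OFFSET_BITS) & ((1 << INNER_BITS) - 1)
    p1 = addr >> (PAGE_OFFSET_BITS + INNER_BITS)
    return p1, p2, d

# 0x00403004: outer entry 1, inner entry 3, offset 4
print(split_address(0x00403004))  # (1, 3, 4)
```

Hardware performs the same extraction with shift/mask logic before walking the two levels.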

Hashed Page Tables (or Hash Table): Hashed page tables are common in address spaces larger than 32 bits. Each entry in the hash table contains a linked list of elements that hash to the same location. The hash table is indexed by hash value. Each element consists of 3 fields:

1. Page number
2. Frame number
3. Pointer to the next element in the linked list

The algorithm works as follows: the page number is passed to the hash function to get a hash value. Locating that entry in the hash table is easy, as the hash table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address. If there is no match, subsequent entries in the linked list are searched.
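The lookup just described can be sketched in a few lines of Python (the bucket count, hash function, and page/frame values are illustrative, not from the notes):

```python
NUM_BUCKETS = 16

def hash_fn(page):
    # a trivial hash function for illustration
    return page % NUM_BUCKETS

# each bucket holds a chain of (page_number, frame_number) pairs
table = [[] for _ in range(NUM_BUCKETS)]

def insert(page, frame):
    table[hash_fn(page)].append((page, frame))

def lookup(page):
    """Walk the chain that the page number hashes to."""
    for p, f in table[hash_fn(page)]:
        if p == page:
            return f
    return None  # no match: page fault

insert(5, 42)
insert(21, 7)       # 21 % 16 == 5, so it collides with page 5's bucket
print(lookup(21))   # 7 (found after following the chain past page 5)
```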

                          Inverted Page Table

An inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table has:

                          Process-id Page number

A logical address also has 3 parts: Process-id, Page number, Offset.
The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the ith entry, then i (the frame number) combined with the offset gives the physical address. If there is no match, the address is illegal.
Advantages and disadvantages:

Decreases the memory needed to store the page tables, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to limit the search to one, or at most a few, page-table entries.
Inverted Page Table Architecture:
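The inverted-table search can be sketched as a linear scan over the per-frame entries; the frame list and page size below are made up for illustration:

```python
PAGE_SIZE = 4096

# one entry per physical frame: (process_id, page_number); None = free frame
frames = [("P1", 0), ("P2", 0), ("P1", 1), None]

def translate(pid, page, offset):
    """Search the inverted table; the matching index i is the frame number."""
    for i, entry in enumerate(frames):
        if entry == (pid, page):
            return i * PAGE_SIZE + offset
    raise ValueError("illegal address")  # no match anywhere in the table

print(translate("P1", 1, 100))  # frame 2 -> 2*4096 + 100 = 8292
```

The scan over every frame is exactly the search-time cost the notes mention, which hashing then bounds.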

                          Unit-IV CPU Scheduling

Basic concepts: To maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns the CPU to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle: Process execution begins with a CPU burst, followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

An I/O-bound program has many short CPU bursts. A CPU-bound program has a few long CPU bursts.

Schedulers: a module in the OS for scheduling decisions.
1. Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
2. Medium-term scheduler: selects which processes should be swapped in/out of memory.
3. Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (CPU) scheduler.

Dispatcher: gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

1. switching context
2. switching to user mode
3. jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency. This must be kept small.
Ready Queue: The ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of processes.

                          Pre-emptive Scheduling

CPU scheduling decisions take place under one of four conditions:

1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state, for example in response to an interrupt
3. When a process switches from the waiting state to the ready state, say at completion of I/O
4. When a process terminates

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one. If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive or cooperative. In non-preemptive scheduling, a process runs until it gives up the CPU itself, i.e., when it waits for I/O or when it completes. Otherwise, the system is said to be pre-emptive.

Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

                          Note that pre-emptive scheduling can cause problems when two processes share data because one process may get interrupted in the middle of updating shared data structures

                          Preemption can also be a problem if the kernel is busy implementing a system call ( eg updating critical kernel data structures ) when the preemption occurs Solution wait until the system call has either completed or blocked before allowing the pre-emption But this solution is problematic for real-time systems

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. But this should be done only on very short pieces of code that take little CPU time.

Scheduling Criteria (used to compare CPU scheduling algorithms):
1. CPU utilization: in a real system, CPU utilization ranges from about 40% to 90%.
2. Throughput: the number of processes completed per unit time.
3. Turnaround time: time when the process completed minus time when the process was submitted; i.e., it includes waiting time and burst time.
4. Waiting time: the amount of time a process spends waiting for the CPU.
5. Response time: time when the first response came minus time when the process was submitted.

Optimization Criteria: Max CPU utilization, Max throughput,

Min turnaround time, Min waiting time, Min response time.

CPU Scheduling Algorithms
1) First-Come, First-Served (FCFS) Scheduling: FCFS is non-pre-emptive. Processes are scheduled in the order they arrived. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages: 1. Average waiting time is quite long.

2. Convoy effect (small processes wait in the queue for a big process to leave the CPU).
Example:
Process  Burst Time
P1       24
P2       3
P3       3

Suppose that the processes arrive in the order P1, P2, P3.
The Gantt chart for the schedule is:
| P1 | P2 | P3 |
0    24   27   30
Waiting time: P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.

2) Shortest-Job-First (SJF) Scheduling: schedule the process with the shortest burst time. If the CPU burst times of two processes are the same, FCFS is used.
Advantages: average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either pre-emptive or non-pre-emptive:
1. Non-pre-emptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Pre-emptive SJF (or Shortest-Remaining-Time-First, SRTF): if a newly arrived process is shorter than what remains of the currently executing process, pre-empt the currently executing process.

Example:
Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

SJF (non-preemptive) Gantt chart:
| P1 | P3 | P2 | P4 |
0    7    8    12   16
Average waiting time = [0 + (8-2) + (7-4) + (12-5)] / 4 = 4
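The non-preemptive SJF schedule above can be checked with a short simulation. A sketch in Python (the ready-queue handling is a simple list scan, not an efficient implementation):

```python
def sjf_nonpreemptive(procs):
    """procs: list of (name, arrival, burst). Returns waiting time per process."""
    remaining = sorted(procs, key=lambda p: p[1])  # order by arrival time
    time, waiting = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                      # CPU idle: jump to next arrival
            time = min(p[1] for p in remaining)
            continue
        # pick the ready process with the shortest burst (FCFS breaks ties)
        name, arrival, burst = min(ready, key=lambda p: p[2])
        waiting[name] = time - arrival     # time spent in the ready queue
        time += burst                      # run to completion (no preemption)
        remaining.remove((name, arrival, burst))
    return waiting

w = sjf_nonpreemptive([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(w, sum(w.values()) / 4)  # waits {P1:0, P3:3, P2:6, P4:7}, average 4.0
```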

                          Example of Preemptive SJF

Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

SJF (preemptive) Gantt chart:
| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16
Average waiting time = (9 + 1 + 0 + 2)/4 = 3

3) Priority Scheduling: A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. Preemptive 2. Non-preemptive.

Problem: Starvation; low-priority processes may never execute.
Solution: Aging; as time progresses, increase the priority of the process.

4) Round Robin (RR): Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance:
1. If q is large, RR behaves like FIFO.
2. If q is small, q must still be large with respect to the context-switch time; otherwise the overhead is too high.
Example of RR with Time Quantum = 4:
Process  Burst Time
P1       24
P2       3
P3       3
The Gantt chart is:
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30
Average waiting time = [(30-24) + 4 + 7] / 3 = 17/3 ≈ 5.66

5) Multilevel Queue Scheduling: The ready queue is partitioned into separate queues. For example, the ready queue is divided into 2 queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, the Round Robin algorithm can be used in the foreground queue and the FCFS algorithm in the background queue. Scheduling must also be done between the queues. This can be done in 2 ways:
1. Fixed-priority scheduling

The foreground queue has the highest priority. All the processes in the foreground queue must be completed and the queue must be empty; only then are the processes in the background queue given the CPU, as shown in the figure below. Disadvantage: starvation.

2. Time slice: each queue gets a certain amount of CPU time, which it can schedule amongst its processes, e.g., 80% to the foreground queue (RR) and 20% to the background queue (FCFS).

6) Multilevel Feedback Queue Scheduling: A process can move between queues. The idea is to separate processes according to their CPU bursts. I/O-bound and interactive processes will be in the highest-priority queue. If a process takes more CPU time, it is moved to a lower-priority queue. If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues Q0, Q1, Q2:

                          1 Q0 ndash time quantum 8 milliseconds 2 Q1 ndash time quantum 16 milliseconds 3 Q2 ndash FCFS

                          Scheduling

1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 milliseconds. If it does not finish in 8 milliseconds, the process is moved to queue Q1.
2. At Q1 the process receives 16 additional milliseconds. If it still does not complete, it is moved to queue Q2.
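The Round Robin example worked through earlier (quantum = 4; bursts 24, 3, 3, all arriving at time 0) can be replayed with a queue-based simulation. A minimal sketch in Python:

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, burst), all arriving at time 0.
    Returns waiting time per process (completion time minus burst)."""
    burst = dict(procs)
    remaining = dict(procs)
    queue = deque(name for name, _ in procs)   # FIFO ready queue
    time, finish = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])    # run at most one quantum
        time += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = time
        else:
            queue.append(name)                 # preempted: back of the queue
    return {n: finish[n] - burst[n] for n in burst}

wait_rr = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
print(wait_rr)  # {'P1': 6, 'P2': 4, 'P3': 7}, average 17/3 ≈ 5.66
```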

Q) Synchronization
Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of process synchronization is the Producer-Consumer problem (or Bounded-Buffer problem).
Producer-Consumer problem: make sure that the producer process does not add data when the buffer is full, and that the consumer process does not take data when the buffer is empty.

The code for the producer process can be written as follows:
while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed from the buffer. in is an index that always points to the next empty slot after the last filled slot. out is an index that always points to the first filled slot, at the head of the circular queue.

The code for the consumer process is as shown below:
while (true) {
    while (counter == 0)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The above producer and consumer code produces a RACE CONDITION. A race condition occurs when the outcome of execution of several co-operating processes depends on the order in which their accesses to shared data take place.
For example, let counter = 5. If the producer produces one item and the consumer consumes one item, the correct value of counter is 5. But counter++ and counter-- are each implemented as several machine instructions (load counter into a register, adjust the register, store the register back), and these instructions can interleave. If the statements interleave so that the consumer's store lands last, counter = 4, an inconsistent result. If the consumer's store (call it T5) is executed before the producer's store (T4), then counter = 6, again inconsistent.

Solution to avoid the race condition: make sure only one process at a time updates the shared variable (here, counter).
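The bad interleavings described above can be replayed explicitly. A sketch in Python that simulates the load/adjust/store steps of counter++ and counter--, each process using its own register (the step names are illustrative):

```python
def interleave(schedule, counter=5):
    """Execute the machine-level steps of counter++ (producer) and
    counter-- (consumer) in the order given by `schedule`."""
    reg = {"prod": None, "cons": None}   # one private register per process
    for proc, op in schedule:
        if op == "load":
            reg[proc] = counter          # read the shared variable
        elif op == "adjust":
            reg[proc] += 1 if proc == "prod" else -1
        elif op == "store":
            counter = reg[proc]          # write a possibly stale value back
    return counter

# consumer's store lands last: its stale value 4 overwrites the producer's 6
bad1 = [("prod", "load"), ("prod", "adjust"), ("cons", "load"),
        ("cons", "adjust"), ("prod", "store"), ("cons", "store")]
# producer's store lands last: its stale value 6 wins
bad2 = [("prod", "load"), ("prod", "adjust"), ("cons", "load"),
        ("cons", "adjust"), ("cons", "store"), ("prod", "store")]
print(interleave(bad1), interleave(bad2))  # 4 6 (correct answer is 5)
```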

Q) Critical section problem
Ans: Each process has a segment of code called its critical section. Critical-section code is code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical-section code at the same time. For this to happen:
1. Each process must request permission to enter its critical section: the entry section.
2. The end of the critical section: the exit section.
3. The rest of the code after the critical section: the remainder section.

A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes not in their remainder section take part in deciding which process enters its critical section next.
3. Bounded waiting: every process must eventually be allowed to enter its critical section; no process waits indefinitely.

Q) Peterson's solution for the critical section problem (software-based solution)
Ans: The two processes share flag[1..2] (each process's intention to enter) and turn (whose turn it is to enter). A sample interleaving:

Time  Process 1                                 Process 2
T0    do { flag[1] = TRUE; turn = 2;            do { flag[2] = TRUE;
T1                                              turn = 1;
T2    while (flag[2] && turn == 2);             while (flag[1] && turn == 1);
      Since turn == 1, the loop does not run    Loops until flag[1] becomes FALSE
T3    Enters critical section
T4    flag[1] = FALSE;                          Enters critical section
T5    Enters remainder section                  flag[2] = FALSE;
T6    } while (TRUE);                           Enters remainder section
T7                                              } while (TRUE);
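The key property of Peterson's entry condition can be checked directly: when both processes have raised their flags, the single turn variable lets exactly one of them through. A small Python sketch of that check (not a concurrent implementation, just an enumeration of the contended states):

```python
def can_enter(i, flag, turn):
    """Process i may leave its entry loop: while (flag[j] && turn == j)."""
    j = 2 if i == 1 else 1
    return not (flag[j] and turn == j)

# Both processes want in (both flags TRUE); turn breaks the tie:
for turn in (1, 2):
    flag = {1: True, 2: True}
    entrants = [i for i in (1, 2) if can_enter(i, flag, turn)]
    print(turn, entrants)  # exactly one entrant for each value of turn
```

Whichever value turn holds, one while-condition is true (that process spins) and the other is false (that process enters), so mutual exclusion holds under contention.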

10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure, which depends on its type.

                          File Attributes

1. Name: the name of the file, for user understanding.
2. Identifier: used by the OS to identify a file (e.g., inode number).
3. Type: text files, executable files, binary files, etc.
4. Location: the location of the file on the hard drive.
5. Size: the current size of the file.
6. Protection: controls who can read, write, or execute the file.
7. Time, date, and user identification: gives information about when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier locates the other file attributes. The directory structure is kept on disk.

                          File Operations

The file Abstract Data Type supports many common operations:
1. Creating a file: find free space on disk, add an entry to the directory.
2. Writing a file: write data at the current file-position pointer location and update the pointer.
3. Reading a file: read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek): change the pointer location to a given value.
5. Deleting a file: free the space allocated to the file and remove its entry from the directory.
6. Truncating a file: delete the data and update the file size.

                          Open File Table

Most OSes require that files be opened before access and closed after access. Information about all files currently open in the system is stored in an open-file table. This system-wide open-file table is maintained by the OS, and information about the files opened by a process is stored in the process's own open-file table. The open-file table has the following information:

1. File pointer: records the current position in the file, for the next read or write.
2. File-open count: how many times the current file has been opened (simultaneously, by different processes) and not yet closed. When this counter reaches zero, the file can be removed from the table.

                          3 Disk location of the file4 Access rights

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With an advisory lock, software developers make sure the locks are acquired and released appropriately.
4. With a mandatory lock, the OS makes sure the locks are acquired and released appropriately.
5. UNIX uses advisory locks and Windows uses mandatory locks.

                          File Types

Implementing file types: a common technique is to include the type as part of the filename. The name is split into 2 parts:

                          1 File Name 2 Extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on it. For example, when a user opens Microsoft Word, clicks the File menu, and chooses Open, the user can specify only the file name without the extension; the application looks for a file with the given name and the extension it expects. Such extensions are merely conventions for applications and are not enforced by the operating system. A UNIX system uses a crude magic number stored at the beginning of the file to indicate the type of file. In the Mac OS X operating system, a file produced by a word processor has the word processor's name recorded as its creator; when the user double-clicks on the file, Word opens the required file. Some common file types are as shown below:

File Structure: Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and the location of the first instruction.
If the OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, finding the first executable statement, etc.).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                          Internal File structure

                          Disk files are accessed in units of physical blocks typically 512 bytes

Internally, files are organized in logical units such as a single byte, a data record, or a structure.

The number of logical units which fit into one physical block determines the packing. If a student file is organized internally as records, each record taking 64 bytes, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e., 8 records can be stored per block.

                          Internal fragmentation may occur due to this

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: records are read and written sequentially, one record after another; we cannot randomly access an arbitrary record.

A sequential-access file emulates magnetic tape operation and generally supports the following operations:
read next: read a record and advance to the next position.
write next: write a record and advance to the next position.
rewind.
skip n records: may or may not be supported; n may be limited to positive numbers, or to +/- 1.

2. Direct access (or relative access): a file is made up of fixed-length records, so that we can randomly access any record.

                          No restriction on order of reading or writing

                          Direct access files are useful when we need to access large amounts of data randomly

                          Ex database

Operations supported include (n is the relative block number, i.e., relative to the beginning of the file):
1. read n: read record number n.
2. write n: write record number n.
3. jump to record n: n could be 0 or the end of the file.
4. query current record: used to return to this record later.
5. Sequential access can be easily emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct-access method. We construct an index for the file, which contains pointers to the various blocks. To search the file, we first access the index and then use the pointer to access the file directly.

Suppose a student file has 120,000 records, sorted by student id. We can create an index consisting of the first student id in each block. If 64 records can be stored in each block, the number of blocks needed to store 120,000 records = 120,000/64 ≈ 2,000 blocks, so the index has 2,000 entries. The index is kept in main memory so that searching becomes faster.

                          If the index itself becomes large we create an index for index file (Ex B-tree)
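The index lookup described above amounts to a binary search over "first id in each block" entries. A Python sketch (the id spacing and block count are made up for illustration):

```python
import bisect

RECORDS_PER_BLOCK = 64

# index[k] = first student id stored in block k (ids sorted across blocks);
# here we fabricate ids spaced 10 apart purely for the example
index = [k * RECORDS_PER_BLOCK * 10 for k in range(2000)]

def find_block(student_id):
    """Binary-search the in-memory index for the block holding the id."""
    return bisect.bisect_right(index, student_id) - 1

print(find_block(1285))  # falls in block 2, whose first id is 1280
```

Only one disk read (the located block) is then needed, instead of scanning the whole file.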

                          Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                          Three major methods of allocating disk space are

                          1 Contiguous 2 Linked

                          3 Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file indicates the address of the starting block (b) and the length (n).

                          Advantages

1. Direct access to block i is easy: its disk address is simply b + i. Since the file is stored sequentially, sequential access is also easy. Hence access is easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

                          Disadvantages

1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) in the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time-consuming.

3. The size of the file must be known when the file is created. Even if the final size is known, the entire size may not be used by the file at the beginning; hence this leads to wasted space.

If the size of the file is not known in advance, then: (i) if we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This is possible only as long as such a hole exists, and it is time-consuming. (ii) If we overestimate the amount of space needed, much of the space goes unused, leading to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous-allocation scheme. Here a chunk of contiguous blocks is allocated at first; if the file needs more blocks, another chunk of contiguous space, known as an extent, is added. The directory entry then records the location and block count, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.
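The address arithmetic behind contiguous allocation (logical block i lives at disk block b + i) can be sketched in a couple of lines; the start block and length below are made up:

```python
def contiguous_block(start, length, i):
    """Disk address of logical block i for a file stored at (start, length)."""
    if not 0 <= i < length:
        raise IndexError("block outside file")  # directory entry bounds-check
    return start + i

# directory entry: file starts at block b = 14 and is n = 3 blocks long
print([contiguous_block(14, 3, i) for i in range(3)])  # [14, 15, 16]
```

This single addition is why both direct and sequential access are cheap under contiguous allocation.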

2. Linked Allocation: Each file is a linked list of disk blocks; the disk blocks may be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; the pointer is nil for an empty file, and the size field of an empty file is 0.
To add more data to an existing file, a free block is found, the data is written to it, and it is linked to the end of the file. To read a file, we read block by block, following the pointers.

Advantages:
1. Easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is high.
2. The number of disk seeks required is high.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct-access files: to access the ith block, i disk reads are required.
5. Not reliable: pointers may be lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk-access time, reduces the space wasted on pointers (only a few pointers are needed), and also simplifies free-list management. Disadvantage: internal fragmentation; more space is wasted when a cluster is only partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file-allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                          Advantages

1. Easy to find space for a new file or to add more data to an existing file.
2. Random access time is improved.

Disadvantage: the number of disk seeks is high. The disk head must move to read the FAT and find the location of a block, then move to the location of the block itself.
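The FAT chain-following described above can be sketched with a plain array; the table size and the example file (blocks 9 → 16 → 1 → 10 → 25, from the linked-allocation example) are illustrative:

```python
EOF, FREE = -1, 0   # sentinel values: end-of-file marker, unused block

fat = [FREE] * 32   # one entry per disk block, indexed by block number
# the 5-block example file: 9 -> 16 -> 1 -> 10 -> 25
fat[9], fat[16], fat[1], fat[10], fat[25] = 16, 1, 10, 25, EOF

def file_blocks(start):
    """Follow FAT entries from the start block to end-of-file."""
    blocks, b = [], start
    while b != EOF:
        blocks.append(b)
        b = fat[b]   # each entry names the next block of the file
    return blocks

print(file_blocks(9))  # [9, 16, 1, 10, 25]
```

Because the whole chain lives in the FAT, block n of a file can be found without reading the n-1 data blocks before it, which is why random access improves.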

                          3 Indexed allocation

Each file has its own index block, which is an array of disk-block addresses. The ith entry in the index block points to the ith block of the file.

                          The directory contains the address of the index block

                          When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found and the data is written to it; the address of the block is then put in the index block.

                          Advantages

1. Easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

                          Disadvantage

1. Seek time is high.
2. The number of disk seeks required is high.
3. Not reliable.
4. The index block must be kept in memory. If this memory is not available, we have to read the index block and then the desired data block, i.e., 2 disk accesses are required. To access a block near the end of a large file, we may need to read several index blocks before reading the needed data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store these one or two pointers.

                          Various mechanisms followed for size of index block

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header holding the name of the file and a set of the first 100 disk-block addresses. For large files, the last entry contains the address of another index block.

2. Multilevel index: to access a block, the OS uses a first-level index block to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

                          3 combined scheme

For example, say there are 15 pointers of the index block in the file's inode. The first 12 of these pointers hold addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing addresses of the file's data blocks.

The second points to a double indirect block, and the third points to a triple indirect block.
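The pointer arithmetic of the combined scheme can be sketched as follows. The 12 direct pointers come from the inode example above; the number of addresses per index block (PTRS_PER_BLOCK = 4) is an arbitrarily small value chosen here only to keep the example easy to follow:

```python
# Sketch of the combined (UNIX-style inode) scheme: classify which pointer
# resolves a given logical block number of a file. DIRECT matches the 12
# direct pointers in the text; PTRS_PER_BLOCK is an assumed toy value.

DIRECT = 12
PTRS_PER_BLOCK = 4

def locate(block_no):
    if block_no < DIRECT:
        return ("direct", block_no)
    block_no -= DIRECT
    if block_no < PTRS_PER_BLOCK:
        return ("single indirect", block_no)
    block_no -= PTRS_PER_BLOCK
    if block_no < PTRS_PER_BLOCK ** 2:
        # (entry in the first-level block, entry in the second-level block)
        return ("double indirect", divmod(block_no, PTRS_PER_BLOCK))
    block_no -= PTRS_PER_BLOCK ** 2
    return ("triple indirect", block_no)

print(locate(3))    # ('direct', 3)
print(locate(13))   # ('single indirect', 1)
print(locate(20))   # ('double indirect', (1, 0))
```

Small files are thus served entirely by the direct pointers, and the indirect levels are only touched as the file grows, which keeps the pointer overhead low for the common case of small files.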

                          Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                          The free space list can be implemented as

1. Bit vector (bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000.

Advantages: simple, and it is easy to find contiguous blocks for files.

                          Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. Bit maps of smaller disks can be kept in main memory, but it may not be possible to keep the bit map of a larger disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.
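The bit-map scheme above can be sketched directly, using the same free-block example (blocks 2, 3, 4, 5, and 8 free, with 1 meaning free, as in these notes):

```python
# Free-space bit map sketch: bit i is 1 when block i is free.

def first_free(bitmap):
    """Return the index of the first free block, or -1 if none is free."""
    for i, bit in enumerate(bitmap):
        if bit == 1:
            return i
    return -1

def allocate(bitmap, i):
    bitmap[i] = 0            # mark block i as in use

# Blocks 2, 3, 4, 5 and 8 free, as in the example above:
bitmap = [0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0]
b = first_free(bitmap)        # 2
allocate(bitmap, b)
print(b, first_free(bitmap))  # 2 3
```

Finding a run of contiguous free blocks is just a scan for consecutive 1 bits, which is why contiguous allocation is easy with this scheme.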

2. Linked list: the free-space list is implemented by linking all the free disk blocks together. A pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: cannot get contiguous space easily; to traverse the free-space list we must read each block, which requires substantial I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last block contains the addresses of another n free blocks, and so on.
Advantage: the addresses of a large number of free blocks can be found quickly.

4. Counting: the free-space list can be maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it.
Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if the count is generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks; the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it. Finally, the free-space list is updated on disk.

Q) Synchronization Hardware
The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction, without interruption. Two such operations are:
1. The TestAndSet() hardware instruction
2. The Swap() instruction

1. The TestAndSet() instruction atomically reads the old value of a Boolean variable, sets the variable to true, and returns the old value.

Solution using TestAndSet: the shared variable lock is initialized to false. Each process calls TestAndSet(lock) in a loop until it returns false, enters its critical section, and sets lock back to false on exit.

2. The Swap() instruction atomically exchanges the contents of its two Boolean arguments.

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key. A process sets key to true and repeatedly swaps key with lock until key becomes false, then enters its critical section; on exit it sets lock back to false.

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement but, unfortunately, do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn in the critical section. (Since there is no guarantee of the relative rates of the processes, a very fast process could release the lock, whip through its remainder section, and re-acquire the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

A solution using TestAndSet that satisfies bounded waiting uses two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

                          Bounded-waiting mutual exclusion with TestAndSet( )

On exit, a process first looks, in order (starting with the next process on the list), for a process that has been waiting; if it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby allowing a specific process into the critical section while continuing to block all the others.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1. wait()  2. signal()

Wait primitive:
wait(S) {
    while (S <= 0)
        ; // do nothing
    S--;
}

Signal primitive:
signal(S) {
    S++;
}

Each process that wishes to use a resource performs the wait() operation; when a process releases a resource, it performs the signal() operation.

Semaphore usage:
1. A binary semaphore (mutex lock) can take the values 0 or 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources; when the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal call.
For example, suppose there are 3 resources and 4 processes; the value of the counting semaphore = 3.
If process P1 requires a resource, it performs the wait() operation; now the counting semaphore = 2.
If process P2 requires a resource, it performs the wait() operation; now the counting semaphore = 1.
If process P3 requires a resource, it performs the wait() operation; now the counting semaphore = 0.
If process P4 requires a resource, it performs the wait() operation; P4 waits until a resource is available.
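The 3-resources / 4-processes example above can be run with Python's threading.Semaphore, whose acquire/release correspond to the wait/signal operations of these notes (a sketch; the non-blocking acquire is used so the "P4 must wait" step shows up as False instead of actually blocking):

```python
import threading

# Counting-semaphore sketch for the 3 resources / 4 processes example.

resources = threading.Semaphore(3)        # 3 identical resources

print(resources.acquire(blocking=False))  # P1: True  (count 3 -> 2)
print(resources.acquire(blocking=False))  # P2: True  (count 2 -> 1)
print(resources.acquire(blocking=False))  # P3: True  (count 1 -> 0)
print(resources.acquire(blocking=False))  # P4: False (would block, must wait)

resources.release()                       # some process signals (count 0 -> 1)
print(resources.acquire(blocking=False))  # P4 can now proceed: True
```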

3. Semaphores can also be used to order (synchronize) certain operations.

Semaphore Implementation: when a process is in its critical section, any other process that tries to enter its critical section must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process spins while waiting.

As an example of point 3 above, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First, we create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code:

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal. Each semaphore has an integer value and maintains a queue of waiting processes.
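The S1-before-S2 ordering example runs directly with a Python semaphore initialized to 0 (a sketch; the order list is only there to make the guaranteed ordering observable):

```python
import threading

# Ordering sketch: semaphore synch starts at 0, so P2 blocks in wait(synch)
# until P1 has executed S1 and called signal(synch).

synch = threading.Semaphore(0)
order = []                      # records execution order for demonstration

def p1():
    order.append("S1")          # statement S1
    synch.release()             # signal(synch)

def p2():
    synch.acquire()             # wait(synch): blocks until P1 signals
    order.append("S2")          # statement S2

t2 = threading.Thread(target=p2); t2.start()
t1 = threading.Thread(target=p1); t1.start()
t1.join(); t2.join()
print(order)                    # ['S1', 'S2'], regardless of scheduling
```

Even though P2 is started first, it cannot pass wait(synch) until P1 signals, so S1 always precedes S2.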

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue to the ready state).

Problems due to semaphores:
1. Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes. For example, if P1 performs wait(S) then wait(Q) while P2 performs wait(Q) then wait(S), each process may end up holding one semaphore while blocking forever on the other.

2. Starvation: one or more processes get blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithms for adding processes to the waiting queue or for selecting the one to be removed by the signal() call: if a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it preempts L, and M runs; here M runs before H even though M has lower priority than H. To avoid this, L temporarily inherits the high priority of H (priority inheritance), so that M cannot preempt L.

Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem. Consider:

n buffers, each holding one item; a producer process, which creates the items (one at a time); and a consumer process, which consumes them (one at a time).

The producer process cannot produce an item if all the buffers are full, and the consumer process cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is needed. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:
do {
    ...
    // produce an item
    ...
    wait(empty);
    wait(mutex);
    ...
    // add item to buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:
do {
    wait(full);
    wait(mutex);
    ...
    // remove an item from buffer
    ...
    signal(mutex);
    signal(empty);
    ...
    // consume the item
    ...
} while (1);
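The two loops above can be run as real threads with Python's threading.Semaphore (a sketch; the buffer size of 2 and the 5 items moved through it are arbitrary choices for the demonstration):

```python
import threading

# Bounded-buffer sketch with the three semaphores described above:
# empty = n, full = 0, mutex = 1.

N = 2
buffer, out = [], []
empty = threading.Semaphore(N)   # counts empty slots
full = threading.Semaphore(0)    # counts filled slots
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer

def producer():
    for item in range(5):
        empty.acquire()            # wait(empty)
        mutex.acquire()            # wait(mutex)
        buffer.append(item)        # add item to buffer
        mutex.release()            # signal(mutex)
        full.release()             # signal(full)

def consumer():
    for _ in range(5):
        full.acquire()             # wait(full)
        mutex.acquire()            # wait(mutex)
        out.append(buffer.pop(0))  # remove an item from buffer
        mutex.release()            # signal(mutex)
        empty.release()            # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(out)                         # [0, 1, 2, 3, 4]
```

Note the ordering: the producer waits on empty before mutex, and the consumer waits on full before mutex. Reversing either pair could leave a process holding mutex while blocked, deadlocking the other.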

2. The Readers-Writers Problem: a data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time; a writer must have exclusive access (i.e., it cannot share with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: a new reader need not wait just because a writer is waiting
writers priority: if a writer is waiting to access the data, no new readers can start reading

Solutions to both versions may cause starvation: in the readers-priority version, writers may starve; in the writers-priority version, readers may starve.

A semaphore solution to the readers-priority version uses a shared integer and 2 semaphores:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore used for mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
...
// reading is performed
...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

Writer process:
do {
    wait(wrt);
    ...
    // writing is performed
    ...
    signal(wrt);
} while (TRUE);
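The readers-priority solution above runs directly with Python semaphores (a sketch; the shared list, the log of what each reader saw, and the thread names are illustrative scaffolding, not part of the classic solution):

```python
import threading

# Readers-priority sketch: an integer read_count guarded by mutex, and wrt
# giving writers exclusive access to the shared data.

mutex = threading.Semaphore(1)
wrt = threading.Semaphore(1)
read_count = 0
shared, log = [], []

def reader(name):
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()
    log.append((name, list(shared)))   # reading is performed
    mutex.acquire()
    read_count -= 1
    if read_count == 0:
        wrt.release()            # last reader lets writers back in
    mutex.release()

def writer(value):
    wrt.acquire()                # exclusive access
    shared.append(value)         # writing is performed
    wrt.release()

writer("v1")
readers = [threading.Thread(target=reader, args=(f"r{i}",)) for i in range(3)]
for t in readers: t.start()
for t in readers: t.join()
writer("v2")
print(shared)                    # ['v1', 'v2']
```

All three readers may hold wrt jointly (only the first acquires it, only the last releases it), while each writer must wait for the whole group to finish.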

3. The Dining-Philosophers Problem: n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems; it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5]; // initially all values are 1

Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    ...
    // eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    ...
    // think
    ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock.

Q) Types of operating systems
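The chopstick-semaphore solution can be run with threads; to sidestep the deadlock noted above, the sketch below adds a standard asymmetric fix (not part of the notes' own solution): the last philosopher picks up the right chopstick first, which breaks the circular wait:

```python
import threading

# Dining philosophers with one semaphore per chopstick, made deadlock-free
# by having the last philosopher acquire chopsticks in the opposite order.

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = []

def philosopher(i):
    first, second = i, (i + 1) % N
    if i == N - 1:
        first, second = second, first   # break the circular wait
    chopstick[first].acquire()          # wait(chopstick)
    chopstick[second].acquire()
    meals.append(i)                     # eat
    chopstick[second].release()         # signal(chopstick)
    chopstick[first].release()          # then think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(meals))                    # [0, 1, 2, 3, 4]: everyone ate
```

With the symmetric version, all five philosophers could each grab their left chopstick simultaneously and wait forever for the right one; ordering the acquisitions asymmetrically makes that cycle impossible.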

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: in asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. These systems do not have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within a time constraint.

                          Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that response requirements will be met under all conditions.
2. A soft real-time system cannot guarantee that response requirements are met under all conditions. Examples: digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e., decision-making actions are spread across the different computers in the system, so that failures of individual computers or of the network do not cripple operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. Users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

                          Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

DESKTOP SYSTEMS: a desktop system is a personal computer (PC) system used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): two of the most popular operating systems for handhelds are Symbian OS and Android. They have very little memory.


CLUSTERED SYSTEMS: a computer cluster consists of a set of loosely connected computers that work together so that, in many respects, they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                          Q) Modern operating system

New developments in hardware, in applications, and in security threats have led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, Internet and Web access, and client/server computing.

                          Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as there are nowadays often multiple microprocessors in a single system, an SMP operating system provides greater efficiency when multiple processors are present. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                          SMP schedules processes or threads across all of the processors

                          Advantages

1. Increased performance: if some portion of a program can run in parallel, the performance of an SMP OS increases.
2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can be made dependent on the number of processors used.

                          4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux operating system
Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. Linux is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization.

Components of the Linux system:


The Linux operating system primarily has three components:

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system, interacts directly with hardware, and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs used by application programs to access the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks, such as updating a log file or accepting login requests from terminals.

Kernel Modules - Sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

Three components of Linux module support:
1. Module management: supports loading modules into memory and letting them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode
Kernel code executes in a special privileged mode, called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require context switches, so it is very efficient and fast. The kernel runs every process and provides system services and protected access to hardware for processes. Code that is not required to run in kernel mode lives in the system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code; they use the system libraries to access kernel functions for low-level tasks.

Basic Features
The following are some of the important features of the Linux operating system:

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on many kinds of hardware platforms.

Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system, meaning multiple users can access system resources (memory, application programs, etc.) at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used for various kinds of operations, to call application programs, etc.


Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

Architecture: the Linux system architecture consists of the following layers:

Hardware layer - consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - the core component of the operating system; interacts directly with hardware and provides low-level services to upper-layer components.

Shell - an interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - utility programs that give the user most of the functionality of an operating system.

Q) Windows XP operating system
1. Extensibility - layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating-system resources.
4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run without changing the source code.
5. Performance - XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. Designed for symmetric multiprocessing.
6. International support - supports different locales via the national language support (NLS) API.

XP Architecture:
1. Layered system of modules.
2. Protected mode - hardware abstraction layer (HAL), kernel, executive.
3. User mode - collection of subsystems:
   a. Environmental subsystems emulate different operating systems.
   b. Protection subsystems provide security functions.

                          XP architecture


Q) Windows network operating system
Example: the Windows 2000 operating system.

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.





o Allocation[n][m] indicates the number of resources of each type currently allocated to each process.
o Need[n][m] indicates the remaining resources needed of each type for each process. (Note that Need[i][j] = Max[i][j] - Allocation[i][j] for all i, j.) For simplification of discussion, we make the following observations:
o One row of the Need matrix, Need[i], can be treated as a vector corresponding to the needs of process i, and similarly for Allocation and Max.

Safety Algorithm: in order to apply the Banker's algorithm, we first need an algorithm for determining whether or not a particular state is safe. This algorithm determines if the current state of a system is safe, according to the following steps:

1. Let Work and Finish be vectors of length m and n, respectively. Work is a working copy of the available resources; Finish is a vector of booleans indicating whether a particular process can finish. Initialize Work = Available, and Finish to false for all elements.

2. Find an i such that both (A) Finish[i] == false and (B) Need[i] <= Work. This process has not finished, but could finish with the given available working set. If no such i exists, go to step 4.

3. Set Work = Work + Allocation[i] and set Finish[i] to true. This corresponds to process i finishing up and releasing its resources back into the work pool. Then loop back to step 2.

4. If Finish[i] == true for all i, then the state is a safe state, because a safe sequence has been found.

Resource-Request Algorithm (the Banker's Algorithm): now we have a tool for determining whether a particular state is safe. This algorithm determines if a new request is safe, and grants it only if it is safe to do so. When a request is made (one that does not exceed currently available resources), pretend it has been granted, and then see if the resulting state is a safe one. If so, grant the request; if not, deny the request, as follows:

1. Let Request[n][m] indicate the number of resources of each type currently requested by processes. If Request[i] > Need[i] for any process i, raise an error condition.

2. If Request[i] > Available for any process i, then that process must wait for resources to become available; else the process can continue to step 3.

3. Check whether the request can be granted safely, by pretending it has been granted and then seeing if the resulting state is safe. If the resulting state is safe, grant the request; else the process must wait until its request can be granted safely.

The procedure for granting a request (or pretending to, for testing purposes) is:

Available = Available - Request[i]
Allocation[i] = Allocation[i] + Request[i]
Need[i] = Need[i] - Request[i]
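The safety algorithm and the resource-request check above can be sketched together in Python. The matrices below (one resource type, two processes) are made-up values chosen to show both a grantable and a deniable request:

```python
# Sketch of the safety algorithm and the Banker's resource-request check.

def is_safe(available, allocation, need):
    work = available[:]                      # step 1: Work = Available
    finish = [False] * len(allocation)
    while True:
        for i, done in enumerate(finish):    # step 2: find a runnable process
            if not done and all(need[i][j] <= work[j] for j in range(len(work))):
                # step 3: process i finishes and releases its allocation
                work = [work[j] + allocation[i][j] for j in range(len(work))]
                finish[i] = True
                break
        else:
            return all(finish)               # step 4: safe iff all finished

def request(i, req, available, allocation, need):
    """Grant req for process i only if the resulting state stays safe."""
    if any(req[j] > need[i][j] for j in range(len(req))):
        raise ValueError("process exceeded its declared maximum")
    if any(req[j] > available[j] for j in range(len(req))):
        return False                         # must wait for resources
    # Pretend to grant the request on copies, then test safety.
    avail2 = [available[j] - req[j] for j in range(len(req))]
    alloc2 = [row[:] for row in allocation]
    need2 = [row[:] for row in need]
    alloc2[i] = [alloc2[i][j] + req[j] for j in range(len(req))]
    need2[i] = [need2[i][j] - req[j] for j in range(len(req))]
    return is_safe(avail2, alloc2, need2)

available = [1]
allocation = [[2], [1]]
need = [[2], [1]]
print(is_safe(available, allocation, need))          # True
print(request(0, [1], available, allocation, need))  # False: would be unsafe
print(request(1, [1], available, allocation, need))  # True: safe to grant
```

Granting process 0 its extra unit would leave Available = [0] with both processes still needing at least one unit, so no process could finish; granting process 1 instead lets it finish and release enough for process 0.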

                            Unit III Memory management

Just as processes share the CPU, they also share physical memory. The memory-management unit of the OS takes care of memory allocation, deallocation, and related issues. A program must be brought into memory for it to run. Addresses are of two types:

i) relocatable (relative) addresses, relative to the beginning of the program; ii) absolute addresses.

Q) Address Binding
Ans: Binding means mapping the logical address space to the physical address space. Address binding can happen at three different stages:

Compile time: if it is known at compile time where in memory the program is going to be placed, the compiler generates absolute addresses. Otherwise, the compiler generates relocatable addresses.

Load time: the loader binds the relocatable addresses generated by the compiler to absolute addresses; hence binding is done at load time. If binding is done at compile or load time, physical and logical addresses are the same.

                            Execution time If address binding is done at run time the process can be moved during its execution from one memory segment to another Here we call logical addresses as virtual addressRun time mapping of virtual to physical address is done by a hardware device called Memory Management Unit (MMU)

Q) Logical vs Physical Address Space
Ans: A logical address is generated by the CPU and is also referred to as a virtual address. A physical address is the address seen by the memory unit. The set of all logical addresses is the logical address space; the set of all physical addresses is the physical address space.

Q) Memory-Management Unit (MMU)
Ans: The MMU is a hardware device that maps virtual to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by the CPU before memory is accessed. The user program deals only with logical addresses; it never sees the real physical addresses.

Q) Dynamic Loading
Ans: Since physical memory is small, it may not be possible for the entire program to be in main memory, so we can use dynamic loading. It is the responsibility of users to design their programs to take advantage of dynamic loading. Only the main function is loaded into main memory; when main() calls another function, it checks whether that function is in main memory. If not, the loader loads the desired function into main memory and updates the program's address table.

A routine is not loaded until it is called. We achieve better memory-space utilization, since an unused routine is never loaded.

Q) Dynamic Linking
- Linking is postponed until execution time.
- A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
- The stub replaces itself with the address of the routine and executes the routine.
- The operating system checks whether the routine is in the process's memory address space.
- Dynamic linking is particularly useful for libraries.

Q) Overlays
Ans: Overlays are needed when a process is larger than the amount of main memory allocated to it. Only the instructions and data needed at any given time are kept in main memory. Overlays can be implemented by the user; programming the design of an overlay structure is complex. Overlays for a two-pass assembler are shown in the figure.

Q) Swapping
Ans: A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. The backing store is a fast disk, large enough to store copies of all memory images of all users. Priority-based scheduling uses a variant of this swapping policy called roll out, roll in: if a higher-priority process arrives, the memory manager swaps out a lower-priority process and swaps in the higher-priority process. When the higher-priority process finishes, the lower-priority process can be swapped back into main memory. A process that is swapped out will be swapped back into the same memory space it occupied previously. The major part of the swap time is transfer time. We can swap only idle processes; we cannot swap a process that is waiting for I/O.

Q) Contiguous Memory Allocation
Ans: Each process is contained in a single contiguous section of memory.

1. Fixed-size partition (or) single-partition allocation

Divide main memory into fixed-sized partitions; each partition may contain exactly one process. The relocation register contains the value of the starting physical address; the limit register contains the range of logical addresses. Every address generated by the CPU is checked as follows:

If logical address < limit register, then the logical address is added to the relocation register to get the corresponding memory address;

else a trap to the OS is generated. Since every address is checked, we can protect the OS and other user programs from being modified by the running process.
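The base/limit check above can be sketched as follows; the relocation value 14000 and limit 3000 are hypothetical example values, not from these notes:

```python
def translate(logical, relocation, limit):
    """Hardware check: every CPU-generated address is compared with the
    limit register; only then is the relocation register added."""
    if logical >= limit:
        raise MemoryError("trap to OS: address outside this partition")
    return logical + relocation

# hypothetical partition loaded at physical address 14000, 3000 addresses long
assert translate(100, relocation=14000, limit=3000) == 14100
```

An out-of-range address (e.g., 3000 here) raises the trap instead of silently reading another process's memory.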

                            2 Multiple-partition allocation

1) Fixed-size partitions: Divide main memory into fixed-sized blocks. Here the memory allocated to a process may be larger than required. The difference between the allocated memory and the requested memory is called internal fragmentation: unused memory inside the partition.

0   P1  Block0 (addresses 0 to 3)
4   P1  Block1 (addresses 4 to 7)
8       Block2 (addresses 8 to 11)
12      Block3 (addresses 12 to 15)

If P1 requires just 5 addresses, it is still allocated 2 blocks, i.e., 8 addresses, so the remaining 3 addresses are left unused.

2) Variable-size partitions: Here main memory is divided into partitions of variable sizes. The operating system maintains information about:

a) allocated partitions  b) free partitions (holes)

In the beginning, all of main memory is empty and is considered one large block of available memory: a hole.

Exactly the memory required by a process is given to it. When a process terminates, it releases its memory, which can then be allocated to another process.

Memory is allocated to processes until finally no available block of memory (hole) is large enough to hold the next process. External fragmentation exists when there is enough total memory to satisfy a request but the available memory is not contiguous: memory is wasted outside the partitions. One solution to the problem of external fragmentation is:

a) Compaction: shuffle the memory contents so as to place all free memory together in one large block.

===============================================================================
Q) Dynamic Storage-Allocation Problem (or) Most commonly used strategies to select a free hole from the set of available holes
Ans: To satisfy a request of size n from a list of free holes, the following 3 policies can be used:

1. First fit: Allocate the first hole that is big enough.

2. Best fit: Allocate the smallest hole that is big enough; the entire list must be searched unless it is ordered by size. Produces the smallest leftover hole.

3. Worst fit: Allocate the largest hole; the entire list must also be searched. Produces the largest leftover hole. First fit and best fit are better than worst fit in terms of speed and storage utilization.
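The three policies can be sketched as functions that pick a hole index; the hole sizes below (in KB) are example values:

```python
def first_fit(holes, n):
    """Return the index of the first hole that is big enough, else None."""
    for i, size in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    """Smallest hole that is big enough; searches the whole list."""
    fits = [(size, i) for i, size in enumerate(holes) if size >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Largest hole; also searches the whole list."""
    fits = [(size, i) for i, size in enumerate(holes) if size >= n]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]   # free hole sizes in KB (example values)
# a 212 KB request lands in a different hole under each policy
assert first_fit(holes, 212) == 1   # the 500 KB hole
assert best_fit(holes, 212) == 3    # the 300 KB hole: smallest leftover
assert worst_fit(holes, 212) == 4   # the 600 KB hole: largest leftover
```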

===============================================================================
Q) Non-Contiguous Memory Allocation

1. Paging: Paging is a memory management scheme that provides non-contiguous memory allocation.

0   P1  Block0 (addresses 0 to 3)
4   P2  Block1 (addresses 4 to 7)
8       Block2 (addresses 8 to 11)
12  P1  Block3 (addresses 12 to 15)

The logical address space of a process can be non-contiguous. Ex: P1 is allocated Block0 and Block3.

1. Divide physical memory into fixed-sized blocks called frames (the size is a power of 2).
2. Divide logical memory into blocks of the same size called pages.
3. The logical address is divided into 2 parts:
   Page number (p): the page table is indexed by page number.
   Page offset (d): the displacement within the page.
4. A page table is allocated to each process (a pointer to the page table is stored in the PCB of the process). The page table translates logical to physical addresses: page 0 is in frame 1, page 1 is in frame 4, etc.
5. Internal fragmentation may occur due to paging.
6. If the size of the logical address space is 2^m and the page size is 2^n, then the high-order m-n bits of the logical address give the page number and the n low-order bits give the displacement within the page.

Ex: size of logical address space = 8 = 2^3, so m = 3;

page size = 4 = 2^2, so n = 2.

So m - n = 1 bit for the page number (to represent pages 0 and 1),

and the remaining n = 2 bits for the displacement within the page.
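The bit split in the example above can be sketched with shifts and masks:

```python
def split_address(logical, m, n):
    """Split an m-bit logical address into page number (high m-n bits)
    and offset (low n bits). Page size = 2**n."""
    assert 0 <= logical < 2 ** m, "address outside the logical address space"
    page = logical >> n                 # high-order m-n bits
    offset = logical & ((1 << n) - 1)   # low-order n bits
    return page, offset

# the example above: m = 3 (address space of 8), n = 2 (page size 4)
assert split_address(6, m=3, n=2) == (1, 2)   # address 6 = page 1, offset 2
assert split_address(3, m=3, n=2) == (0, 3)   # address 3 = page 0, offset 3
```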

                            Q) Implementation of Page Table

                            Page table can be kept as

1) A set of dedicated registers: efficient if the page table is small.

2) The page table can be kept in memory. A page-table base register (PTBR) points to the page table, and a page-table length register (PTLR) indicates the size of the page table. Switching page tables then requires changing only the values in these registers.
a. Advantage of keeping the page table in memory: less context-switch time.

b. Drawback: two memory accesses are required, one to access the page table and another to access the required memory address.

This problem can be solved by using a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB).

i. Each entry in the TLB consists of 2 parts:

I) page number field

II) value field (frame number field)

ii. When a logical address is generated by the CPU, its page number is presented to the TLB. The page number is compared with all the entries of the TLB simultaneously.

If the page number is found in the TLB, its frame number is immediately available -> TLB hit. If the page number is not found, check in the page table -> TLB miss.

With a TLB the search is fast, but TLB hardware is expensive, so the TLB is kept small. Hence the TLB contains only a few frequently used page-table entries.
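The hit/miss behavior can be sketched with two dictionaries standing in for the TLB and the in-memory page table (the mappings below are example values):

```python
def lookup(page, tlb, page_table):
    """Consult the TLB first; fall back to the in-memory page table on a
    miss and cache the translation (TLB eviction is omitted for brevity)."""
    if page in tlb:                      # TLB hit: no extra memory access
        return tlb[page], "hit"
    frame = page_table[page]             # TLB miss: extra memory access
    tlb[page] = frame                    # cache the entry for next time
    return frame, "miss"

page_table = {0: 1, 1: 4, 2: 3}          # page -> frame (example values)
tlb = {}
assert lookup(1, tlb, page_table) == (4, "miss")
assert lookup(1, tlb, page_table) == (4, "hit")   # second access hits the TLB
```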

Q) Protection in Paging
A valid/invalid bit is attached to each entry in the page table. The OS sets this bit for each page to allow or disallow access to the page. When the bit is set to invalid, the page is not in the process's logical address space, and a reference to it generates a trap to the OS.

                            Q) Shared pages in Paging

Reentrant code is code that never changes during execution; reentrant code can be shared.

1. One copy of read-only (reentrant) code is shared among processes (e.g., text editors, compilers, window systems). Ex: pages ed1, ed2, ed3 are shared between processes P1 and P2.

2. Shared code must appear in the same location in the logical address space of all processes.

Each process keeps a separate copy of its private code and data, e.g., data1 and data2.

The pages for the private code and data can appear anywhere in the logical address space.

==============================================================================
Q) Segmentation

Segmentation is a memory management scheme that supports the user's view of memory.

                            When the user program is compiled the compiler generates segments like

                            1) The code segment

                            2) Global variables segment

                            3) Heap memory segment

                            4) Stack segment etc

Each entry in the segment table has:

a segment base (the starting physical address of the segment) and a segment limit (the length of the segment).

                            Logical address is divided into 2 parts

                            segment-number offset

                            Segment table is indexed by segment number

Segment-table base register (STBR): stores the location of the segment table in main memory.

Segment-table length register (STLR): stores the number of segments used by a program.

The segment number 's' is used to find the entry in the segment table: the required entry is 's' locations from the beginning of the table. Once the required entry is found, the offset 'd' is compared with the limit.

If (offset < limit), the offset is added to the base entry to generate the physical address; otherwise a trap to the OS is generated.
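The translation step can be sketched as follows; the segment table contents (segment 2 at base 4300 with limit 400, etc.) are hypothetical example values:

```python
def translate(segment, offset, segment_table):
    """Segment table maps segment number -> (base, limit). The offset is
    checked against the limit before being added to the base."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap to OS: offset beyond segment limit")
    return base + offset

# hypothetical segment table: segment 0 and segment 2
segment_table = {0: (1400, 1000), 2: (4300, 400)}
assert translate(2, 53, segment_table) == 4353   # byte 53 of segment 2
```

A reference past the limit (say offset 400 in segment 2) traps instead of spilling into a neighboring segment.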

Q) Shared segments, allocation, and protection in segmentation

Code sharing occurs at the segment level. Shared segments must have the same segment number.

Allocation is a dynamic storage-allocation problem: best fit or first fit is used, and external fragmentation may occur.

Protection: protection bits (read/write/execute privileges) are associated with segments. Placing an array in a separate segment lets the hardware check for illegal array indexes.

===============================================================================
Q) Virtual Memory

Ans: In paging and segmentation, a program will execute only if the entire process is in main memory. With virtual memory, only part of the program needs to be in memory for execution.

1. A program larger than the available main memory can still run, since only part of the program is loaded into main memory. The logical address space can therefore be much larger than the physical address space.

                            2 Allows address spaces to be shared by several processes

                            Virtual memory can be implemented via Demand paging Demand segmentation

The large blank space between the heap and the stack is part of the virtual address space. Virtual memory allows files and memory to be shared by 2 or more processes through page sharing, as shown in figure (b). Figure (a) depicts virtual memory that is larger than physical memory.

                            ==============================================================================

Q) Demand Paging
Demand paging is the technique of loading a page from disk into main memory only when the page is needed. Hence, while using less physical memory, we still get fast response.

Demand paging is similar to PAGING + SWAPPING.

Instead of swapping in a whole process, the pager brings into memory only those pages that are needed.

To distinguish between pages that are in memory and pages that are on disk, a valid/invalid bit is used.

A valid/invalid bit is attached to each entry in the page table. When this bit is set to valid, the page is legal and in memory. When it is set to invalid, either the page is illegal or the page is on disk.

When the CPU generates a logical address whose page-table entry is set to invalid, a page fault occurs, meaning the required page is on disk. A section of the hard disk called swap space is used to hold pages that are not present in memory.

                            Q) Procedure for Handling a Page Fault

1) The CPU generates a logical address; if the valid/invalid bit for the page containing this address is set to invalid, a page fault occurs.

2) The page fault causes a trap to the operating system.

3) Check whether the logical address is within the logical address space of the process, i.e., PTBR and PTLR are checked. If the required logical address is not within the logical address space of the process,

terminate the process; else

the page is legal but not in memory: it is on disk.

4) To bring the required page into memory: find a free main-memory frame, map the address to a disk block, fetch the disk block, and load it into the free frame. 5) When the required page has been brought into memory, update the page table to indicate that the page is in memory. 6) Restart the instruction that was interrupted by the illegal-address trap. The process continues as if the page had always been in memory.
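The six steps above can be sketched as a toy page-fault handler. The dictionary-based page table, `disk` map, and free-frame list are simplifications for illustration (no replacement is needed while free frames remain):

```python
def access(page, page_table, memory, disk, free_frames):
    """Return the frame holding `page`, faulting it in from disk if needed."""
    if page in page_table:                  # valid bit set: page is in memory
        return page_table[page]
    # page fault: trap to the OS
    if page not in disk:                    # outside the logical address space
        raise RuntimeError("terminate process: illegal address")
    frame = free_frames.pop()               # 4) find a free frame
    memory[frame] = disk[page]              # 4) load the block from swap space
    page_table[page] = frame                # 5) mark the page valid / in memory
    return frame                            # 6) restart the instruction

disk = {7: "contents of page 7"}            # swap space (example values)
page_table, memory, free_frames = {}, {}, [3]
assert access(7, page_table, memory, disk, free_frames) == 3   # fault, then load
assert access(7, page_table, memory, disk, free_frames) == 3   # now a plain hit
```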

Q) What is pure demand paging?
Ans: If no page belonging to the executing process is in main memory, the process will fault for every page it needs; page faults occur until every needed page is in memory. This scheme is known as pure demand paging: NEVER BRING A PAGE IN UNTIL IT IS REQUIRED.

Q) Page Replacement
1. Find the location of the desired page on the disk.
2. Find a free frame:
   i) If there is a free frame, use it.
   ii) If there is no free frame, use a page-replacement algorithm to select a victim page in memory that is not really in use, and swap it out.
   iii) If the modify bit associated with the victim frame is set to 1, the page is written to disk; else there is no need to write it to disk. Set the valid/invalid bit of the victim page's page-table entry to invalid, to indicate that the page is no longer in memory.
3. Read the desired page into the freed victim frame and update the frame and page tables.
4. Restart the user process.

=================================================================================================
Q) Page Replacement Algorithms
Goal: produce a low page-fault rate. The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

1. First In First Out (FIFO) page replacement: uses the time when a page was brought into memory. The page that was brought into memory first is replaced, i.e., the oldest page is chosen for replacement. We keep a FIFO queue of all pages in memory.

When a page is brought into memory, we insert it at the tail of the queue, and we replace the page at the head of the queue.

FIFO: 15 page faults (on the standard example reference string).

The problem with FIFO is Belady's anomaly. For the reference string 1 2 3 4 1 2 5 1 2 3 4 5, FIFO page replacement gives:
- 1 frame allocated to the process: 12 page faults
- 2 frames: 12 page faults
- 3 frames: 9 page faults
- 4 frames: 10 page faults
As the number of frames allocated to a process increases, page faults should decrease; but under FIFO replacement, page faults can increase.

2. Optimal (OPT) page replacement: replace the page that will not be used for the longest period of time. It is used for measuring how well other algorithms perform. It is not practical to implement OPT, as it is not possible to know what the future references will be. Ex:

                            Optimal page replacement 9 page faults

3. Least Recently Used (LRU) page replacement: replace the page that has not been used for the longest period of time. LRU makes use of the time of last use of each page. Ex:

Number of page faults = 12

LRU requires hardware assistance to determine the time of last use of a page. We can use 1) a stack or 2) a counter to implement LRU page replacement.
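Fault counts for FIFO and LRU can be checked with short simulations; the Belady's anomaly numbers from the reference string above fall out directly:

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    queue, faults = [], 0
    for p in refs:
        if p not in queue:
            faults += 1
            if len(queue) == frames:
                queue.pop(0)                # evict the oldest page
            queue.append(p)
    return faults

def lru_faults(refs, frames):
    """Count page faults under LRU replacement (OrderedDict as the stack)."""
    stack, faults = OrderedDict(), 0
    for p in refs:
        if p in stack:
            stack.move_to_end(p)            # referenced: move to top of stack
        else:
            faults += 1
            if len(stack) == frames:
                stack.popitem(last=False)   # evict the least recently used
            stack[p] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# Belady's anomaly: FIFO faults rise from 9 to 10 as frames go from 3 to 4
assert fifo_faults(refs, 3) == 9
assert fifo_faults(refs, 4) == 10
```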

1) Stack implementation: keep a stack of page numbers in a doubly linked form. When a page is referenced, it is moved to the top of the stack. Each update requires changing at most 6 pointers.

2) Counter implementation: a time-of-use field is allocated to each page-table entry. The CPU maintains a counter that is incremented on every memory reference; on each page reference, the counter value is copied into the time-of-use field of the referenced page.

Disadvantages of LRU page replacement: the counter or stack must be updated on every memory reference, HENCE LRU IS SLOW, and the implementation requires hardware assistance.
==============================================================================
Q) LRU approximation page replacement
1) Reference bit
2) Additional-reference-bits algorithm
3) Second-chance algorithm
4) Enhanced second-chance algorithm
5) Counting-based page replacement, which has two schemes: Least Frequently Used (LFU) and Most Frequently Used (MFU)
6) Page-buffering algorithms

Q) What is Thrashing?
Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur under both global and local page replacement, as described below. The solution to thrashing is the working-set strategy: if we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know how many frames a process needs, we use the locality model of process execution.

Here, a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes locality. The working-set model uses a working set and a parameter Δ, the working-set window: the set of pages in the most recent Δ page references is the working set. Let WSSi be the working-set size of process Pi, i.e., Pi needs WSSi frames. Let D be the total demand for frames: D = Σ WSSi,

and let m be the total number of available frames.

If D > m, thrashing occurs because some processes will not have enough frames. So, if (D > m):
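The working-set window and the D vs. m test can be sketched directly; the reference string and frame counts below are example values:

```python
def working_set(refs, t, delta):
    """Pages referenced in the most recent `delta` references ending at time t."""
    window = refs[max(0, t - delta + 1): t + 1]
    return set(window)

def thrashing_imminent(wss_list, m):
    """D = sum of working-set sizes; thrashing looms when D exceeds m frames."""
    return sum(wss_list) > m

refs = [1, 2, 1, 5, 7, 7, 7, 5, 1]          # example reference string
ws = working_set(refs, t=6, delta=4)        # window covers refs 5, 7, 7, 7
assert ws == {5, 7}                         # WSS at t=6 is 2 frames
assert thrashing_imminent([4, 3, 5], m=10)  # D = 12 > m = 10: suspend someone
```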

the OS selects a process to suspend; else the OS may initiate another process. The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.
========================================================================
Q) How does thrashing occur in global and local page replacement?
Ans: In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, possibly causing a page fault for some other process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more processes queue up for the paging device,

the ready queue empties and the CPU becomes idle. The OS therefore introduces new processes since the CPU is free, which further increases the number of page faults.

In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs, only frames allocated to that process are replaced. When a process starts thrashing, other processes are not directly affected. But since processes spend most of their time queued for the paging device, the average service time for a page fault increases; hence the effective access time increases even for a process that is not thrashing.

Q) Page Table Structure (or) Structure of the page table in memory

                            Hierarchical Paging Hashed Page Tables Inverted Page Tables

Hierarchical paging: if the page table is large, we break it up into multiple page tables, i.e., the page table itself is paged. A simple technique is a two-level page table.

Two-level paging example: a logical address (on a 32-bit machine with 4 KB page size) is divided into (1) a page number of 20 bits and (2) a page offset of 12 bits. Since the page table is paged, the page number is further divided into (1) a 10-bit outer page number and (2) a 10-bit inner page offset. Thus a logical address is as follows:

| p1 (10 bits) | p2 (10 bits) | d (12 bits) |

where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table. (Two-level page-table scheme.)

Address-translation scheme: address translation for a two-level, 32-bit paging architecture.

Hashed page tables (or hash tables): hashed page tables are common in address spaces larger than 32 bits. Each entry in the hash table contains a linked list of elements that hash to the same location. The hash table is indexed by hash value. Each element consists of 3 fields:

1. page number  2. frame number  3. pointer to the next element in the linked list

The algorithm works as follows: the page number is passed to the hash function to get a hash value. Locating the hash value in the hash table is easy, since the table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address; if there is no match, subsequent entries in the linked list are searched.
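The chained lookup can be sketched as follows; the simple modulus hash and the page/frame values are assumptions for illustration:

```python
class HashedPageTable:
    """Buckets of (page, frame) pairs; pages that hash alike share a chain."""
    def __init__(self, buckets=16):
        self.table = [[] for _ in range(buckets)]

    def insert(self, page, frame):
        self.table[page % len(self.table)].append((page, frame))

    def lookup(self, page):
        for p, frame in self.table[page % len(self.table)]:
            if p == page:               # compare the page number field
                return frame
        return None                     # not mapped: page fault

hpt = HashedPageTable(buckets=4)
hpt.insert(3, 9)
hpt.insert(7, 2)        # 3 and 7 collide in bucket 3; the chain is searched
assert hpt.lookup(7) == 2
assert hpt.lookup(11) is None
```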

Inverted Page Table

An inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table holds:

process-id, page number

The logical address also has 3 parts: process-id, page number,

and offset. The process-id and page number of the logical address are compared with each entry of the inverted page table. If a match is found at the i-th entry, then frame number i combined with the offset gives the physical address. If there is no match, the address is illegal.

Advantages and disadvantages:

it decreases the memory needed to store page tables, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to

limit the search to one, or at most a few, page-table entries. (Inverted page-table architecture.)

                            Unit-IV CPU Scheduling

Basic concepts: to maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns it to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle: process execution begins with a CPU burst, followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

An I/O-bound program has many short CPU bursts. A CPU-bound program has few, long CPU bursts.

Schedulers (modules in the OS that make scheduling decisions):
- Long-term scheduler (job scheduler): selects which processes should be brought into the ready queue.
- Medium-term scheduler: selects which processes should be swapped in/out of memory.
- Short-term scheduler (CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to execute; this selection is done by the short-term (CPU) scheduler.

Dispatcher: gives control of the CPU to the process selected by the short-term scheduler. This function involves:

1. switching context
2. switching to user mode
3. jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency; it must be kept small.

Ready queue: the ready queue may be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of processes.

                            Pre-emptive Scheduling

CPU scheduling decisions take place under one of four conditions:

1. When a process switches from the running state to the waiting state.
2. When a process switches from the running state to the ready state, for example in response to an interrupt.
3. When a process switches from the waiting state to the ready state, say at completion of I/O.
4. When a process terminates.

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one.

If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive, or cooperative. In a non-preemptive system, a process runs until it gives up the CPU itself, i.e., when it waits for I/O or when it completes. Otherwise the system is said to be preemptive.

Windows used non-preemptive scheduling up to Windows 3.x and began using preemptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to Mac OS X and preemptive scheduling since then. Note that preemptive scheduling is only possible on hardware that supports a timer interrupt.

Note that preemptive scheduling can cause problems when two processes share data, because one process may be interrupted in the middle of updating a shared data structure.

Preemption can also be a problem if the kernel is busy executing a system call (e.g., updating critical kernel data structures) when the preemption occurs. Solution: wait until the system call has either completed or blocked before allowing the preemption. But this solution is problematic for real-time systems.

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. This should be done only for very short pieces of code that take little CPU time.

Scheduling criteria used to compare CPU scheduling algorithms:
1. CPU utilization: in a real system, CPU utilization ranges from about 40% to 90%.
2. Throughput: number of processes completed per unit time.
3. Turnaround time: time when the process completed minus time when the process was submitted; it includes waiting time and burst time.
4. Waiting time: amount of time a process spends waiting for the CPU.
5. Response time: time of the first response minus time when the process was submitted.

Optimization criteria: maximize CPU utilization, maximize throughput,

minimize turnaround time, minimize waiting time, minimize response time.

CPU Scheduling Algorithms
1) First-Come, First-Served (FCFS) scheduling: FCFS is non-preemptive; processes are scheduled in the order they arrive. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages: 1. the average waiting time is often quite long;

2. the convoy effect (small processes wait in the queue for a big process to leave the CPU).
Example:
Process  Burst Time
P1       24
P2       3
P3       3

Suppose that the processes arrive in the order P1, P2, P3.

The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

Waiting times: P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.

2) Shortest-Job-First (SJF) scheduling: schedule the process with the shortest CPU burst time; if the burst times of two processes are the same, FCFS is used.
Advantages: the average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either preemptive or non-preemptive:
1. Non-preemptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Preemptive SJF, or Shortest-Remaining-Time-First (SRTF): if a newly arrived process is shorter than the remaining time of the currently executing process, preempt the currently executing process.
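The FCFS waiting times in the example above (bursts 24, 3, 3, all arriving at time 0) can be checked with a short function:

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process when served in arrival order,
    assuming all processes arrive at time 0."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)     # a process waits until the CPU becomes free
        clock += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])      # P1, P2, P3 from the example
assert waits == [0, 24, 27]
assert sum(waits) / len(waits) == 17        # the average from the notes
```

Reordering the arrivals as P2, P3, P1 drops the average to (0 + 3 + 6)/3 = 3, which is the convoy effect in miniature.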

Example:
Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

SJF (non-preemptive) Gantt chart:

| P1 | P3 | P2 | P4 |
0    7    8    12   16

Average waiting time = [0 + (8-2) + (7-4) + (12-5)] / 4 = 4

Example of preemptive SJF (SRTF):

Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

SJF (preemptive) Gantt chart:

| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16

Average waiting time = (9 + 1 + 0 + 2)/4 = 3

3) Priority Scheduling
A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (by convention, smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. preemptive 2. non-preemptive.

Problem: starvation - low-priority processes may never execute.
Solution: aging - as time progresses, increase the priority of waiting processes.

4) Round Robin (RR)
Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n-1)q time units.
Performance:
1. If q is very large, RR behaves like FCFS.
2. If q is very small, context-switch overhead dominates; q must be large with respect to the context-switch time.

Example of RR with time quantum = 4:
Process  Burst Time
P1       24
P2       3
P3       3

The Gantt chart is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 = 5.66

5) Multilevel Queue Scheduling
The ready queue is partitioned into separate queues. For example, the ready queue may be divided into two queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, Round Robin can be used in the foreground queue and FCFS in the background queue.

Scheduling must also be done between the queues. This can be done in two ways:
1. Fixed-priority scheduling: the foreground queue has the highest priority. All processes in the foreground queue must complete, leaving it empty, before processes in the background queue are given the CPU, as shown in the figure below. Disadvantage: starvation of background processes.
2. Time slicing: each queue gets a certain share of CPU time, which it schedules among its own processes, e.g., 80% to the foreground queue (RR) and 20% to the background queue (FCFS).

6) Multilevel Feedback Queue Scheduling
A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes stay in the highest-priority queue. If a process uses too much CPU time, it is moved to a lower-priority queue; if a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation.

As shown in the figure below, let there be three queues Q0, Q1, Q2:
1. Q0 - time quantum 8 milliseconds
2. Q1 - time quantum 16 milliseconds
3. Q2 - FCFS

Scheduling:
1. A process entering the ready queue is put in Q0. When it gains the CPU, it receives 8 ms; if it does not finish within 8 ms, it is moved to Q1.
2. In Q1 the process receives an additional 16 ms; if it still does not complete, it is moved to Q2.

Q) Synchronization
Several processes run concurrently in an operating system. Some processes share resources, which can lead to problems such as data inconsistency.

A classic example of process synchronization is the Producer-Consumer problem, also called the Bounded-Buffer problem: the producer process must not add data when the buffer is full, and the consumer process must not take data when the buffer is empty.

The code for the producer process can be written as follows:

while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing: buffer full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed. in is an index that always points to the next empty slot after the last filled slot; out is an index that always points to the first filled slot at the head of the circular queue.

The code for the consumer process is shown below:

while (true) {
    while (counter == 0)
        ; /* do nothing: buffer empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The producer and consumer code above produces a RACE CONDITION: a situation where the outcome of the execution of several co-operating processes depends on the particular order in which their accesses to shared data take place.

For example, suppose counter = 5, and counter++ and counter-- are each compiled into three machine instructions (load, modify, store). If the producer produces one item and the consumer consumes one item, the correct final value is counter = 5. But if the instructions of the two processes interleave, one store can overwrite the other: one interleaving leaves counter = 4 and another leaves counter = 6, both inconsistent results.

Solution to avoid the race condition: make sure only one process at a time updates the shared variable (here, counter).

Q) Critical section problem
Ans: Each process has a segment of code called its critical section, which contains accesses to variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical sections at the same time. For this to happen:
1. Each process must request permission to enter its critical section - the entry section.
2. The end of the critical section - the exit section.
3. The rest of the code after the critical section - the remainder section.

A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes that are not in their remainder sections take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: there is a bound on how long a process waits to enter its critical section after requesting; no process waits indefinitely.

Q) Peterson's solution for the critical-section problem (software-based solution)
Ans: Two processes share the variables flag[1..2] (both initially FALSE) and turn. Each process sets its own flag, gives turn to the other process, and then waits while the other's flag is set and it is the other's turn. A trace of the two processes over time:

Time  Process 1                                Process 2
T0    do { flag[1] = TRUE; turn = 2;           do { flag[2] = TRUE;
T1                                             turn = 1;
T2    while (flag[2] && turn == 2);            while (flag[1] && turn == 1);
      since turn is now 1, the loop            loops continuously until flag[1]
      exits immediately                        becomes FALSE
T3    enters critical section
T4    flag[1] = FALSE;                         enters critical section
T5    enters remainder section                 flag[2] = FALSE;
      } while (TRUE);                          enters remainder section
                                               } while (TRUE);

10.1 File Concept Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure, which depends on its type.

                            File Attributes

1. Name - name of the file, for user understanding.
2. Identifier - a unique tag used by the OS to identify the file (e.g., inode number).
3. Type - text file, executable file, binary file, etc.
4. Location - location of the file on the hard drive.
5. Size - current size of the file.
6. Protection - controls who can read, write, or execute the file.
7. Time, date and user identification - when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. The directory structure is kept on disk.

                            File Operations

The file abstract data type supports many common operations:
1. Creating a file: find free space on disk and add an entry to the directory.
2. Writing a file: write data at the current file-position pointer location and update the pointer.
3. Reading a file: read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek): change the pointer to a given value.
5. Deleting a file: free the space allocated to the file and remove its entry from the directory.
6. Truncating a file: delete the data and update the file size.

                            Open File Table

Most OSes require that files be opened before access and closed after access. Information about all files currently open in the system is stored in an open-file table. This system-wide open-file table is maintained by the OS; in addition, information about the files opened by a particular process is stored in that process's own open-file table. An open-file table entry holds the following information:
1. File pointer - records the current position in the file for the next read or write.
2. File-open count - how many times the file is currently open (possibly simultaneously by different processes) and not yet closed. When this counter reaches zero, the entry can be removed from the table.
3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With advisory locks, software developers make sure the locks are acquired and released appropriately; the OS merely keeps track of them.
4. With mandatory locks, the OS itself ensures that the locks are acquired and released appropriately.
5. UNIX uses advisory locks; Windows uses mandatory locks.

                            File Types

A common technique for implementing file types is to include the type as part of the file name. The name is split into two parts, separated by a period:
1. file name
2. extension

The system uses the extension to indicate the type of the file and the kind of operations that can be done on it. For example, when a user opens Microsoft Word and chooses the Open option from the File menu, the user may specify only the file name; the application looks for a file with the given name and the extension it expects. In this sense extensions are hints to applications and are not enforced by the operating system. UNIX systems use a crude magic number stored at the beginning of some files to indicate the type of the file. In the Mac OS X operating system, a file produced by a word processor has the word processor's name recorded as its creator; when the user double-clicks the file, that word processor opens it. Some common file types are as shown below.

File Structure
Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and where the first instruction is located.

If an OS supports many file formats, its size and complexity increase. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, find the first executable statement in, etc.).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                            Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units such as a single byte, a data-record size, or a structure size. The number of logical units that fit into one physical block determines the file's packing. For example, if a student file is organized internally as records of 64 bytes each, and the disk is divided into blocks of 512 bytes, then the packing is 512/64 = 8 records per block. Internal fragmentation may occur because of this, since the last block of a file is rarely full.

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: records are read and written in order, one record after another; we cannot randomly access an arbitrary record. A sequential-access file emulates magnetic-tape operation and generally supports the following operations:
- read next: read a record and advance the file position to the next record
- write next: write a record and advance the file position to the next record
- rewind
- skip n records: may or may not be supported; n may be limited to positive numbers or to +/- 1

2. Direct access (or relative access): the file is made up of fixed-length records, so we can randomly access any record. There is no restriction on the order of reading or writing. Direct-access files are useful when we need to access large amounts of data randomly, e.g., a database.

Operations supported (n is a relative block number, i.e., relative to the beginning of the file):
1. read n - read record number n
2. write n - write record number n
3. jump to record n - n could be 0 or the end of the file
4. query current record - used to return to this record later

Sequential access can easily be emulated on top of direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct-access method. We construct an index for the file, containing pointers to the various blocks. To find a record, we first search the index and then use the pointer to access the file directly.

For example, suppose a student file has 120,000 records sorted by student id. We can create an index consisting of the first student id in each block. If 64 records fit in each block, the number of blocks needed to store 120,000 records is 120000/64, i.e., about 2000 blocks, so the index has about 2000 entries. The index is kept in main memory so that searching is fast. If the index itself becomes too large, we create an index for the index file (e.g., a B-tree).

                            Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

Three major methods of allocating disk space are:
1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk. If a file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1. The directory entry for each file indicates the address of the starting block (b) and the length (n).

Advantages

1. Direct access to block i is easy: it is simply block b+i. Since the file is stored sequentially, sequential access is easy too.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

                            Disadvantages

1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to it. Solution: copy all files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time-consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire space may not be used at the beginning, which wastes space. If the size of the file is not known in advance:
(i) If we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This works as long as such a hole exists, but it is time-consuming.
(ii) If we overestimate the amount of space needed, much of the space goes unused, leading to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme: a chunk of contiguous blocks is allocated at first, and when the file needs more blocks, another chunk of contiguous space, known as an extent, is added. The directory entry then holds a location and block count, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2. Linked allocation: each file is a linked list of disk blocks, and the blocks may be scattered anywhere on the disk. The directory contains a pointer to the first and last blocks of the file. For example, a file of five blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes.

To create a new file, we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0. To add more data to an existing file, a free block is found, the data is written to it, and the block is linked to the end of the file. To read a file, we read the blocks by following the pointers from block to block.

Advantages
1. It is easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages
1. Seek time is higher.
2. The number of disk seeks required is higher.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct-access files: to access the i-th block, i disk reads are required.
5. Not reliable: if a pointer is lost or damaged, the rest of the file is unreachable.

A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simpler, improves disk throughput by reducing disk accesses, reduces the space wasted on pointers (fewer pointers are needed), and also simplifies free-list management. Its disadvantage is internal fragmentation: more space is wasted when a cluster is only partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file-allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. The entry for a block contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                            Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Random access time is improved.

Disadvantage: the number of disk seeks increases. The disk head must first move to the start of the volume to read the FAT and find the location of the desired block, then move to the location of the block itself.

                            3 Indexed allocation

Each file has its own index block, which is an array of disk-block addresses: the i-th entry in the index block points to the i-th block of the file. The directory contains the address of the index block. When the file is created, all pointers in the index block are set to nil. To add more data to an existing file, a free block is found, the data is written to it, and the address of the block is put into the index block.

                            Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

Disadvantages

1. Seek time is higher.
2. The number of disk seeks required is higher.
3. Not reliable: losing the index block loses the whole file.
4. The index block should be kept in memory. If this memory is not available, we must read the index block first and then the desired data block, i.e., two disk accesses per block. To access a block near the end of a large file, we may need to read several index blocks before the needed data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

Several mechanisms are used to deal with the size of the index block:

1. Linked scheme: an index block is one disk block. The index block contains a small header giving the name of the file and a set of, say, the first 100 disk-block addresses; the last entry contains the address of another index block, for large files.

2. Multilevel index: to access a block, the OS uses a first-level index to find a second-level index block, which in turn points to the file block. This approach can be continued to a third or fourth level.

3. Combined scheme: for example, say there are 15 pointers of the index block in the file's inode. The first 12 of these pointers point to direct blocks that contain file data. The next three pointers point to indirect blocks: the first points to a single indirect block, an index block that contains the addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block, as shown below.

                            Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                            The free space list can be implemented as

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or bit map): each block is represented by one bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. The free-space bit map would be 001111001000...

Advantages: simple, and it is easy to find contiguous free blocks for files.

                            Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a large disk there.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1 TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.

2. Linked list: the free-space list is implemented by linking all the free disk blocks together, keeping a pointer to the first free block in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: we cannot get contiguous space easily, and traversing the free-space list requires reading each block, which takes substantial I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free; the last one contains the addresses of another n free blocks, and so on.
Advantage: the addresses of a large number of free blocks can be found quickly.

4. Counting: the free-space list can be maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it. Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs, and each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Synchronization Hardware
A hardware solution to synchronization is to provide atomic operations: operations that execute as a single instruction, without interruption. Two such operations are:
1. the TestAndSet() hardware instruction
2. the Swap() instruction

1. The TestAndSet instruction is defined as follows:

boolean TestAndSet(boolean *target) {
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

Solution using TestAndSet: the shared variable lock is initialized to FALSE. Each process executes while (TestAndSet(&lock)); before its critical section and sets lock = FALSE; after it.

2. The Swap() instruction is defined as follows:

void Swap(boolean *a, boolean *b) {
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

Solution using Swap: the shared Boolean variable lock is initialized to FALSE, and each process has a local Boolean variable key. A process sets key = TRUE and repeatedly calls Swap(&lock, &key) until key becomes FALSE, enters its critical section, and sets lock = FALSE on exit.

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee about the relative speeds of the processes, a very fast process could release the lock, run through its remainder section, and re-acquire the lock before a slower process got a chance. As more and more processes contend for the same resource, the odds of a slow process being locked out completely increase.)

The figure below illustrates a solution using test-and-set that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

Bounded-waiting mutual exclusion with TestAndSet()

On exit from its critical section, a process scans the waiting array in order (starting with the next process after itself) for a process that is waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby allowing a specific process into the critical section while continuing to block all the others.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through two standard atomic operations: wait() and signal().

Wait primitive:
wait(S) {
    while (S <= 0)
        ; /* do nothing */
    S--;
}

Signal primitive:
signal(S) {
    S++;
}

Each process that wishes to use a resource performs the wait() operation; when a process releases a resource, it performs the signal() operation.

Semaphore Usage
1. A binary semaphore (mutex lock) can take only the values 0 and 1.
2. A counting semaphore can take any integer value; it is used to count the remaining instances of a resource. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources. When the counter = 0 (or negative, in some implementations), a process blocks until another process frees a resource and increments the counting semaphore with a signal() call.
For example, suppose there are 3 resources and 4 processes, so the counting semaphore starts at 3:
- if process P1 requires a resource, it performs wait(); the semaphore becomes 2
- if process P2 requires a resource, it performs wait(); the semaphore becomes 1
- if process P3 requires a resource, it performs wait(); the semaphore becomes 0
- if process P4 requires a resource, it performs wait(); P4 waits until a resource is available

3. Semaphores can also be used to synchronize the order of certain operations.

Semaphore Implementation
While one process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). A semaphore implemented this way is called a spinlock, because the process spins while waiting.

As an example of using a semaphore for ordering: suppose it is important that process P1 execute statement S1 before process P2 executes statement S2. First we create a semaphore named synch, shared by the two processes and initialized to zero. Then in process P1 we insert the code:

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal(). Each semaphore has an integer value and maintains a queue of waiting processes.

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process (placing it on the semaphore's waiting queue); the waiting process is later restarted by a wakeup() operation, which moves it from the waiting queue back to the ready state.

                            Problems due to semaphore1 Deadlocks Deadlocks occur when multiple processes are blocked each waiting for a resource that can only be freed by one of the other (blocked) processes shown below

                            2 Starvation In Starvation one or more processes gets blocked forever and never get a chance to enter critical section For example if we do not specify the algorithms for adding processes to the waiting queue or selecting one to be removed from the queue in the signal( ) call If a LIFO queue is chosen then the first process that starts waiting will never get a chance

3 Priority inversion: Let processes L, M, H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it pre-empts process L, and M runs. Here M runs before H even though M has lower priority than H. To avoid this, L temporarily inherits H's high priority (priority inheritance), so that M does not pre-empt L.
=====================================================================
Q) Classical problems on synchronization

1 The Bounded Buffer Problem (also called the Producer-Consumer Problem)
2 The Readers-Writers Problem
3 The Dining Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1 The Bounded Buffer Problem: Consider

n buffers, each holding one item; a producer process which creates items (one at a time); and a consumer process which consumes them (one at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1 empty = n (empty counts the number of empty buffers)
2 full = 0 (full counts the number of filled buffers)
3 mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:
do {
    ... produce an item ...
    wait(empty);
    wait(mutex);
    ... add item to buffer ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:
do {
    wait(full);
    wait(mutex);
    ... remove an item from buffer ...
    signal(mutex);
    signal(empty);
    ... consume the item ...
} while (1);
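The two loops above can be exercised with a runnable Python sketch; the buffer size of 4 and the 20 items are arbitrary choices:

```python
import threading
from collections import deque

N = 4                                  # number of buffers
buffer = deque()
empty = threading.Semaphore(N)         # counts empty buffers
full = threading.Semaphore(0)          # counts filled buffers
mutex = threading.Semaphore(1)         # mutual exclusion on the buffer
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                # wait(empty)
        mutex.acquire()                # wait(mutex)
        buffer.append(item)            # add item to buffer
        mutex.release()                # signal(mutex)
        full.release()                 # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()                 # wait(full)
        mutex.acquire()                # wait(mutex)
        item = buffer.popleft()        # remove an item from buffer
        mutex.release()                # signal(mutex)
        empty.release()                # signal(empty)
        consumed.append(item)          # consume the item

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
c.start(); p.start(); p.join(); c.join()
print(consumed == list(range(20)))     # True: every item arrives, in order
```

Note the wait(empty)/wait(mutex) order in the producer: taking mutex first and then blocking on empty would deadlock against a consumer that needs mutex to free a slot.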

2 The Readers-Writers Problem: A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time; a writer must have exclusive access (i.e., it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: a new reader need not wait just because a writer is waiting; writers priority: if a writer is waiting to access the database, no new readers can start reading.

Either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses a shared counter and 2 semaphores:
1 readcount = 0 (an integer that counts the number of processes currently reading the shared data)
2 mutex = 1 (a semaphore used for mutual exclusion when readcount is updated)
3 wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:
do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);       // first reader locks out writers
    signal(mutex);
    ... reading is performed ...
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);     // last reader lets writers back in
    signal(mutex);
} while (TRUE);

Writer process:
do {
    wait(wrt);
    ... writing is performed ...
    signal(wrt);
} while (TRUE);
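A runnable sketch of the readers-priority solution above; the shared data item and the thread counts are illustrative:

```python
import threading

data = {"value": 0}
readcount = 0
mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # exclusive access for writers
seen = []

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:           # first reader locks out writers
        wrt.acquire()
    mutex.release()
    seen.append(data["value"])   # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:           # last reader lets writers back in
        wrt.release()
    mutex.release()

def writer():
    wrt.acquire()                # exclusive access
    data["value"] += 1           # writing is performed
    wrt.release()

threads = [threading.Thread(target=writer) for _ in range(5)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(data["value"])   # 5: all writes happened under mutual exclusion
```

Each reader observes some value between 0 and 5, but never a half-finished write, because writers only run while no reader holds wrt.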

3 The Dining Philosophers Problem: n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5]; (initially all values are 1)

Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    ... eat ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    ... think ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it can create a deadlock: if all five philosophers pick up their left chopstick at the same moment, each waits forever for the right one.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint.

                            Two kinds of real-time systems have evolved

1 A hard real-time system can guarantee that the response requirement will be met under all conditions.
2 A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples: digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e., decision-making is spread across the different computers in the system, so that failures of individual computers or of the network do not cripple operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; all of that is handled automatically by the operating system.

                            Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

                            DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location Modern desktop systems support multiprogramming Common examples are Linux FreeBSD Windows 8 and the Macintosh operating system

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android OS. Handheld systems have very little memory.


CLUSTERED SYSTEMS: A computer cluster consists of a set of loosely connected computers that work together so that, in many respects, they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                            Q) Modern operating system

New developments in hardware, applications, and security threats led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client-server computing.

                            Developments in modern operating system can be categorized as

1 Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development and is well suited to distributed environments.

2 Multithreading: a process is divided into threads that can run concurrently. Useful for applications that perform a number of independent tasks; for example, a database server that listens for and processes many client requests.

3 Symmetric multiprocessing: As there are now often multiple processors in a single system, an SMP operating system provides greater efficiency when several processors are available. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                            SMP schedules processes or threads across all of the processors

                            Advantages

1 Increased performance: if some portion of a program can run in parallel, the performance of an SMP OS increases.
2 Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3 Incremental growth: one can add a new processor to increase performance.
4 Scaling: cost can be scaled with the number of processors used.

                            4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5 Object-oriented design: adds modularity to the kernel. The OS can be customized without affecting system integrity.

Q) Linux Operating System: Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization.

Components of Linux System


The Linux operating system has primarily three components:

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system and interacts directly with the hardware, hiding low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs that application programs use to access the kernel's features.

System Utility - System utility programs carry out specialized, individual-level tasks such as updating log files and accepting login requests from terminals.

Kernel Modules: sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support:
1 Module management: supports loading modules into memory and lets them communicate with the rest of the kernel.
2 Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3 Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode: Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require any context switch; hence it is very efficient and fast. The kernel runs each process and provides system services and protected access to hardware for processes. Code that is not required to run in kernel mode lives in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code; they use the system libraries to invoke kernel functions for low-level tasks.

Basic Features
Following are some of the important features of the Linux operating system:

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on any kind of hardware platform.

Open Source - Linux source code is freely available and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multiuser system, meaning multiple users can access system resources like memory, RAM, and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used for various types of operations, to call application programs, etc.


Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

Architecture: The Linux system architecture consists of the following layers:

Hardware layer - Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - The core component of the operating system; interacts directly with hardware and provides low-level services to upper-layer components.

Shell - An interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System
1 Extensibility: layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.
2 Portability: XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3 Reliability: XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.
4 Compatibility: applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
5 Performance: XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. Designed for symmetric multiprocessing.
6 International support: supports different locales via the national language support (NLS) API.

XP Architecture:
1 Layered system of modules
2 Protected mode: hardware abstraction layer (HAL), kernel, executive
3 User mode: collection of subsystems
a Environmental subsystems emulate different operating systems
b Protection subsystems provide security functions

XP architecture


Q) Windows Network Operating System. Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.

Visit my blogs: enthusiaststudent.blogspot.in, mtechmessenger.blogspot.in

                            Keywords OS exam notes OS notesOperating system notes Opearing system galvin notesSimple batch system Multi- programmed Batch System Time sharing System Difference between Batch Multi-programming and Time sharing What is process structure of process in memory Process states PCB Operations on Processes Process Creation Process Termination cascading terminationcontext switch Basic Concepts of threads two modes of CPU execution Inter process communication Shared ndash Memory Systems Message passing systemswhat is deadlock four conditions that are necessary for deadlock to occur Methods for handling deadlocks Deadlock prevention Deadlock avoidanceResource Allocation graphBankers AlgorithmSafe State Unsafe state Safety Algorithm Resource-Request Algorithm Memory management Address Binding Logical vs Physical Address Space MMU Memory-Management Unit Dynamic loading Dynamic Linking Overlays Swapping Contiguous memory allocation Single-partition allocation Multiple-partition allocation Internal fragmentation Variable Size partitions External fragmentation First-fit Best-fit Worst-fit Non-Contiguous Memory allocation Paging Implementation of Page Table PTLR PTBR page table base register Page-table length register translation look-aside buffers TLB Protection in Paging Shared pages in Paging Segmentation Shared Segments Virtual memory Demand paging Procedure for Handling a Page Fault pure demand paging Page Replacement Page Replacement Algorithms FIFO Optimal Page replacement AlgorithmLeast Recently used (LRU) Page replacement algorithm ThrashingWorking Set model global and local page replacement


• 10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

Allocation = Allocation + Request
Need = Need - Request

                              Unit III Memory management

Just as processes share the CPU, they also share physical memory. The memory-management part of the OS takes care of memory allocation, deallocation, and related issues. A program must be brought into memory for it to run. Addresses are of two types:

i) Relocatable (relative) addresses, given with respect to the beginning of the program
ii) Absolute addresses

Q) Address Binding
Ans: Binding means mapping logical address space to physical address space. Address binding can happen at three different stages:

Compile time: If you know at compile time where in memory the program will be placed, the compiler generates absolute addresses. Otherwise, the compiler generates relocatable addresses.

Load time: The loader binds the relocatable addresses generated by the compiler to absolute addresses; hence binding is done at load time. If binding is done at compile or load time, physical and logical addresses are the same.

Execution time: If address binding is done at run time, the process can be moved during its execution from one memory segment to another. Here logical addresses are called virtual addresses. Run-time mapping of virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU).

Q) Logical vs Physical Address Space
Ans: A logical address is generated by the CPU; it is also referred to as a virtual address. A physical address is the address seen by the memory unit. The set of all logical addresses is called the logical address space; the set of all physical addresses is called the physical address space.

Q) Memory-Management Unit (MMU)
Ans: The MMU is the hardware device that maps virtual to physical addresses. In the simplest MMU scheme, the value in the relocation register is added to every address generated by the CPU before memory is accessed. The user program only knows logical addresses; it never sees the real physical addresses.

Q) Dynamic Loading
Ans: Since physical memory is small, it may not be possible for the entire program to be in main memory, so we can use dynamic loading. It is the responsibility of users to design their programs to take advantage of dynamic loading. Only the main function is loaded into main memory; when main() calls another function, it checks whether that function is in main memory. If not, the loader loads the desired function into main memory and updates the program's address table.

A routine is not loaded until it is called. We achieve better memory-space utilization, as an unused routine is never loaded.

Q) Dynamic Linking
+ Linking is postponed until execution time.
+ A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
+ The stub replaces itself with the address of the routine and executes the routine.
+ The operating system checks whether the routine is in the process's memory address space.
+ Dynamic linking is particularly useful for libraries.

Q) Overlays
Ans: Overlays are needed when a process is larger than the amount of main memory allocated to it. Only the instructions and data that are needed at any given time are kept in main memory. Overlays can be implemented by the user; programming the design of an overlay structure is complex. Overlays for a two-pass assembler are as shown in the figure.

Q) Swapping
Ans: A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution. The backing store is a fast disk, large enough to store copies of all memory images of all users. Priority-based scheduling uses a variant of swapping called roll out, roll in: if a higher-priority process arrives, the memory manager swaps out a lower-priority process and swaps in the higher-priority process; when the higher-priority process finishes, the lower-priority process can be swapped back into main memory. A process that is swapped out will be swapped back into the same memory space it occupied previously. The major part of swap time is transfer time. We can swap only idle processes; we cannot swap a process that is waiting for I/O.

Q) Contiguous Memory Allocation
Ans: Each process is contained in a single contiguous section of memory.

1 Fixed Size Partition (or) Single-partition allocation

Divide main memory into fixed-sized partitions; each partition may contain exactly one process. The relocation register contains the value of the starting physical address; the limit register contains the range of valid logical addresses. Every address generated by the CPU is checked as follows:

If logical address < limit register, then the logical address is added to the relocation register to get the corresponding memory address; else a trap to the OS is generated.

Since every address is checked, we protect the OS and other user programs from being modified by the running process.
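The relocation/limit check just described can be sketched in a few lines; the register values are hypothetical:

```python
# Hypothetical register values for illustration.
RELOCATION = 14000   # starting physical address of the partition
LIMIT = 3000         # range of valid logical addresses

def translate(logical):
    """Mimic the hardware check: if logical address < limit register,
    add the relocation register; otherwise trap to the OS."""
    if logical < LIMIT:
        return RELOCATION + logical
    raise MemoryError("trap: addressing error, OS notified")

print(translate(100))    # 14100
print(translate(2999))   # 16999
# translate(3000) would raise: the process tried to leave its partition
```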

                              2 Multiple-partition allocation

1) Fixed Size Partitions: Divide main memory into fixed-sized blocks. Here the memory allocated to a process may be larger than required; the difference between allocated memory and requested memory is called internal fragmentation, i.e., unused memory inside the partition.

0  P1 Block0 (0th address to 3rd address)
4  P1 Block1 (4th address to 7th address)
8     Block2 (8th address to 11th address)
12    Block3 (12th address to 15th address)

If P1 requires just 5 addresses, it is still allocated 2 blocks, i.e., 8 addresses, so the remaining 3 addresses are left unused.

2) Variable Size Partitions: Here main memory is divided into partitions of variable sizes. The operating system maintains information about:

a) allocated partitions b) free partitions (holes)

In the beginning all main memory is empty and is considered one large block of available memory, a hole.

Exactly the memory required by a process is given to it. When a process terminates, it releases its memory, which can then be allocated to another process.

Memory is allocated to processes until finally no available block of memory (hole) is large enough to hold the next process. External fragmentation exists when there is enough total memory to satisfy a request but the available memory is not contiguous: memory is wasted outside the partitions. One solution to the problem of external fragmentation is:

a) Compaction: shuffle the memory contents so as to place all free memory together in one large block.

===============================================================================
Q) Dynamic Storage-Allocation Problem (or) Most commonly used strategies to select a free hole from the set of available holes
Ans: To satisfy a request of size n from a list of free holes, the 3 policies below can be used:

                              1 First-fit Allocate the first hole that is big enough

2 Best-fit: Allocate the smallest hole that is big enough; must search the entire list unless it is ordered by size. Produces the smallest leftover hole.

3 Worst-fit: Allocate the largest hole; must also search the entire list. Produces the largest leftover hole.

First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
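The three policies can be sketched as follows; the hole sizes and the request of 212 are illustrative:

```python
def first_fit(holes, n):
    """Return the index of the first hole big enough, else None."""
    for i, size in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    """Smallest hole that is big enough (smallest leftover)."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, n):
    """Largest hole (largest leftover)."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return max(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]   # free-hole sizes, in memory order
print(first_fit(holes, 212))  # 1  (500 is the first hole that fits)
print(best_fit(holes, 212))   # 3  (300 leaves the smallest leftover)
print(worst_fit(holes, 212))  # 4  (600 leaves the largest leftover)
```

Note that first-fit stops scanning at the first match, which is why it is fast; best-fit and worst-fit must examine every hole.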

===============================================================================
Q) Non-Contiguous Memory Allocation

1 Paging: Paging is a memory-management scheme that provides non-contiguous memory allocation.

0  P1 Block0 (0th address to 3rd address)
4  P2 Block1 (4th address to 7th address)
8     Block2 (8th address to 11th address)
12 P1 Block3 (12th address to 15th address)

The logical address space of a process can be non-contiguous. Ex: P1 is allocated Block0 and Block3.

1 Divide physical memory into fixed-sized blocks called frames (the size is a power of 2).
2 Divide logical memory into blocks of the same size, called pages.
3 A logical address is divided into 2 parts:
   Page number (p): the page table is indexed by page number.
   Page offset (d): the displacement within the page.
4 A page table is allocated to each process (a pointer to the page table is stored in the PCB of the process). The page table translates logical to physical addresses, e.g., page 0 is in frame 1, page 1 is in frame 4, etc.
5 Internal fragmentation may occur due to paging.
6 If the size of the logical address space is 2^m and the page size is 2^n, then the higher-order m-n bits of the logical address give the page number, and the n lower-order bits give the displacement within the page.

Ex: size of logical address space = 8 = 2^3, so m = 3;
page size = 4 = 2^2, so n = 2.
So m-n = 1 bit for the page number (to represent pages 0 and 1),
and the remaining n = 2 bits for the displacement within the page.
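The same m = 3, n = 2 example can be computed with bit operations; the page-table contents below are hypothetical:

```python
m, n = 3, 2                # logical address space = 2^m, page size = 2^n
page_size = 1 << n         # 4

def split(logical):
    """Upper m-n bits = page number, lower n bits = offset."""
    return logical >> n, logical & (page_size - 1)

# Hypothetical page table: page 0 -> frame 5, page 1 -> frame 2.
page_table = {0: 5, 1: 2}

def to_physical(logical):
    p, d = split(logical)
    return page_table[p] * page_size + d

print(split(6))          # (1, 2): page 1, offset 2
print(to_physical(6))    # frame 2 * 4 + 2 = 10
```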

                              Q) Implementation of Page Table

                              Page table can be kept as

1) A set of dedicated registers: efficient if the page table is small.

2) The page table can be kept in memory: the page-table base register (PTBR) points to the page table, and the page-table length register (PTLR) indicates its size. Changing the page table then requires changing only the values in these registers.
Advantage of keeping the page table in memory: a) less context-switch time.
Disadvantage: b) two memory accesses are required — one to access the page table and another to access the required memory address.

This problem can be solved by using a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB).

i Each entry in the TLB consists of 2 parts: I) a page-number field, and II) a value field (frame-number field).

ii When a logical address is generated by the CPU, its page number is presented to the TLB, and the page number is compared with all the entries of the TLB simultaneously.

If the page number is found in the TLB, its frame number is immediately available (TLB hit). If the page number is not found, the page table is checked (TLB miss).

By using a TLB, the search is fast. But TLB hardware is expensive, so the TLB size is kept small; the TLB therefore contains only a few frequently used page-table entries.
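A toy model of a TLB sitting in front of a page table; the sizes, contents, and FIFO-style eviction are all illustrative simplifications of real hardware:

```python
# Hypothetical page table and a 2-entry TLB cache in front of it.
page_table = {0: 7, 1: 4, 2: 9, 3: 1}
tlb = {}
TLB_SIZE = 2
hits = misses = 0

def lookup(page):
    global hits, misses
    if page in tlb:                 # TLB hit: frame number immediately
        hits += 1
    else:                           # TLB miss: go to the page table
        misses += 1
        if len(tlb) >= TLB_SIZE:    # TLB is small, so evict the oldest
            tlb.pop(next(iter(tlb)))
        tlb[page] = page_table[page]
    return tlb[page]

for page in [0, 0, 1, 0, 2, 2]:
    lookup(page)
print(hits, misses)   # 3 3: repeated pages hit, new pages miss
```

The hit ratio improves when a process touches the same few pages repeatedly, which is exactly why a small TLB works well in practice (locality of reference).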

Q) Protection in Paging: A valid/invalid bit is attached to each entry in the page table. The OS sets this bit for each page to allow or disallow access to the page. When the bit is set to invalid, the page is not in the process's logical address space, so accessing it generates a trap to the OS.

                              Q) Shared pages in Paging

Reentrant code is code that never changes during execution, so it can be shared.

1 One copy of read-only (reentrant) code is shared among processes (e.g., text editors, compilers, window systems). Ex: ed1, ed2, ed3 is shared among processes P1 and P2.

                              2 Shared code must appear in same location in the logical address space of all processes

Each process keeps a separate copy of the private code and data, e.g., data1 and data2.

                              The pages for the private code and data can appear anywhere in the logical address space

==============================================================================
Q) Segmentation

Segmentation is a memory-management scheme that supports the user's view of memory.

When the user program is compiled, the compiler generates segments such as:

                              1) The code segment

                              2) Global variables segment

                              3) Heap memory segment

                              4) Stack segment etc

                              Each entry in segment table has

the segment base (the starting physical address of the segment) and the segment limit (the length of the segment).

                              Logical address is divided into 2 parts

                              segment-number offset

                              Segment table is indexed by segment number

The segment-table base register (STBR) stores the location of the segment table in main memory.

The segment-table length register (STLR) stores the number of segments used by a program.

The segment number (say 's') is used to find the entry in the segment table; the required entry is 's' entries away from the beginning of the table. Once the required entry in the segment table is found, the offset ('d') is compared with the limit.

If (offset < limit), the offset is added to the base to generate the physical address; otherwise the hardware traps with an addressing error.
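As an illustrative sketch (not part of the notes; function and variable names are my own), the check above can be expressed in a few lines of Python:

```python
# Sketch of segmentation address translation. Each segment-table entry
# holds (base, limit); a legal offset must satisfy offset < limit.

def translate(segment_table, s, d):
    """Map logical address (segment s, offset d) to a physical address."""
    if s >= len(segment_table):       # STLR-style check: invalid segment number
        raise MemoryError("trap: invalid segment number")
    base, limit = segment_table[s]
    if d >= limit:                    # offset falls outside the segment
        raise MemoryError("trap: offset beyond segment limit")
    return base + d                   # physical address

# Segment 0: base 1400, length 1000; segment 1: base 6300, length 400
table = [(1400, 1000), (6300, 400)]
print(translate(table, 0, 53))    # 1453
print(translate(table, 1, 399))   # 6699
```

An offset of 400 in segment 1 would trap, since 400 is not less than the limit 400.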

Q) Shared Segments: Code sharing occurs at the segment level. Shared segments must have the same segment number.

Allocation: segment allocation is a dynamic storage-allocation problem;

best fit or first fit is used, which may cause external fragmentation.

Protection: protection bits (read/write/execute privileges) are associated with segments.

Placing an array in a separate segment lets the hardware check for illegal array indexes.

===============================================================================Q) Virtual memory

Ans: In paging and segmentation, a program executes only if the entire process is in main memory. With virtual memory, only part of the program needs to be in memory for execution.

1. A program larger than the available main memory can still run, since only part of the program is loaded into main memory. The logical address space can therefore be much larger than the physical address space.

                              2 Allows address spaces to be shared by several processes

Virtual memory can be implemented via demand paging or demand segmentation.

The large blank space between the heap and the stack is part of the virtual address space. Virtual memory allows files and memory to be shared by two or more processes through page sharing, as shown in figure (b). Figure (a) depicts virtual memory that is larger than physical memory.

                              ==============================================================================

Q) Demand paging: Demand paging is the technique of loading pages from disk into main memory only when a page is needed. Hence, using a smaller amount of physical memory, we get faster response.

                              Demand paging is similar to PAGING + SWAPPING

                              Instead of swapping a whole process the pager brings only those pages needed into memory

To distinguish between pages that are in memory and pages that are on disk, a valid/invalid bit is used.

A valid/invalid bit is attached to each entry in the page table. When this bit is set to valid, the page is legal and in memory. When this bit is set to invalid, either the page is illegal or the page is on disk.

When the CPU generates a logical address whose page-table entry is set to invalid, a page fault occurs: the required page is on disk. A section of the hard disk called swap space holds the pages that are not present in memory.

                              Q) Procedure for Handling a Page Fault

1) The CPU generates a logical address; if the valid/invalid bit of the page containing this address is set to invalid, a page fault occurs.

                              2) Page fault causes a trap to operating system

3) Check whether the logical address is within the logical address space of the process (the PTBR and PTLR are checked). If the required logical address is not within the logical address space of the process,

terminate the process; else

the page is legal but not in memory; it is on disk.

4) To bring the required page into memory: find a free main-memory frame, map the address to its disk block, fetch the disk block, and load the block into the free frame. 5) When the required page is brought into memory, update the page table to indicate that the page is in memory. 6) Restart the instruction interrupted by the trap. The process continues as if the page had always been in memory.
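Steps 3-6 can be sketched as follows (a hedged illustration, not the notes' own code: the page table is simplified to a list of (frame, valid) pairs, and all names are my own):

```python
# Illustrative page-fault handler: check legality, claim a free frame,
# load the page from "disk", and mark the page-table entry valid.

def handle_page_fault(page, page_table, free_frames, memory, disk):
    if page >= len(page_table):              # step 3: outside the address space
        raise MemoryError("terminate process: illegal address")
    frame = free_frames.pop()                # step 4: find a free frame
    memory[frame] = disk[page]               # fetch the disk block into the frame
    page_table[page] = (frame, True)         # step 5: entry now valid, in memory
    return frame                             # step 6: instruction can be restarted

page_table = [(None, False)] * 4             # 4-page process, nothing resident yet
disk = {0: "page0-data", 1: "page1-data", 2: "page2-data", 3: "page3-data"}
memory = {}
free_frames = [7, 3]
f = handle_page_fault(1, page_table, free_frames, memory, disk)
print(f, page_table[1])
```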

Q) What is pure demand paging? Ans: If no page belonging to the executing process is in main memory, the process faults for every page it needs; page faults occur until every page that is needed is in memory. This scheme is known as pure demand paging: never bring a page in until it is required.

Q) Page Replacement
1. Find the location of the desired page on the disk.
2. Find a free frame:

i) If there is a free frame, use it.
ii) If there is no free frame, use a page-replacement algorithm to select a victim frame holding a page that is

not really in use.
iii) If the modify (dirty) bit of the victim frame is set to 1, write the victim page to disk; otherwise there is no need to write it. Change the

valid/invalid bit of the victim page's page-table entry to invalid, to indicate that the page is no longer in memory.

3. Read the desired page into the freed frame and update the page and frame tables.
4. Restart the user process.

=================================================================================================Q) Page Replacement Algorithms. Goal: produce a low page-fault rate. The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

1. First-In, First-Out (FIFO) page replacement: uses the time when a page was brought into memory. The page that was brought into memory first, i.e. the oldest page, is chosen for replacement. We create a FIFO queue to hold all pages in memory.

                              When a page is brought into memory we insert it at the tail of the queue and we replace the page at the head of the queue
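A short illustrative sketch (not part of the notes; names are my own) counts FIFO page faults and reproduces Belady's anomaly on the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 used in the example below:

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement."""
    frames = deque()                  # head of the queue = oldest page
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()      # evict the page at the head of the queue
            frames.append(page)       # insert new page at the tail
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
for n in (1, 2, 3, 4):
    print(n, "frames:", fifo_faults(ref, n))
```

With 3 frames this gives 9 faults but with 4 frames 10 faults: more frames, more faults.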

FIFO: 15 page faults. A problem with FIFO is Belady's anomaly. For the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5:
- with 1 frame allocated to the process, FIFO causes 12 page faults;
- with 2 frames, 12 page faults;
- with 3 frames, 9 page faults;
- with 4 frames, 10 page faults.
As the number of frames allocated to a process increases, page faults should decrease; but with FIFO replacement, page faults can increase.
2. Optimal (OPT) page replacement: replace the page that will not be used for the longest period of time. It is used as a benchmark to measure how well other algorithms perform. It is not practical to implement OPT, since it is not possible to know what the future references will be. Ex:

                              Optimal page replacement 9 page faults

3. Least Recently Used (LRU) page replacement: replace the page that has not been used for the longest period of time. LRU makes use of the time of last use of each page. Ex:

                              Number of page faults = 12
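As an illustrative sketch (assuming the example's reference string is the common textbook string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 with 3 frames, which does give 12 faults), LRU can be simulated with an ordered dictionary playing the role of the stack:

```python
from collections import OrderedDict

def lru_faults(reference_string, num_frames):
    """Count page faults under LRU; the OrderedDict acts as the LRU stack."""
    frames = OrderedDict()            # most recently used entry at the end
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)          # referenced page moves to the top
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)    # evict the least recently used page
            frames[page] = True
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(ref, 3))   # 12
```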

                              LRU requires hardware assistance to determine the time of last use of the page We can use 1) Stack or 2) Counter to implement LRU page replacement algorithm

1) Stack implementation: keep a stack of page numbers in doubly linked form. When a page is referenced, it is moved to the top of the stack; moving an entry requires changing 6 pointers.

2) Counter implementation: a time-of-use field is added to each page-table entry. The CPU maintains a counter that is incremented on every page reference, and the counter value is copied to the time-of-use field of the referenced page.

Disadvantages of LRU page replacement: the counter or stack must be updated on every memory reference, so LRU is slow, and the implementation requires hardware assistance.
==============================================================================Q) LRU approximation page replacement
1) Using a reference bit
2) Additional-reference-bits algorithm
3) Second-chance algorithm
4) Enhanced second-chance algorithm
5) Counting-based page replacement

This has two schemes: 1) Least Frequently Used (LFU), 2) Most Frequently Used (MFU).

6) Page-buffering algorithms
Q) What is Thrashing? Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur when either global or local page replacement is used, as explained below. The solution to thrashing is the working-set strategy: if we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know how many frames a process needs, we use the locality model of process execution.

Here a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes locality. The working-set model uses a working set and a parameter Δ, the working-set window: the set of pages in the most recent Δ page references is the working set. Let WSSi be the working-set size of process Pi, i.e. Pi needs WSSi frames. Let D be the total demand for frames; then D = Σ WSSi,

and let m be the total number of available frames.

If D > m, thrashing occurs, because some processes will not have enough frames. So, if (D > m),

the OS selects a process to suspend; else the OS may initiate another process. The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.
========================================================================Q) How does thrashing occur in global and local page replacement? Ans: In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, causing a page fault for some other process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more processes queue up for the paging device,

the READY queue empties and the CPU becomes idle. So the OS introduces new processes, since the CPU is free. This further increases the number of page faults.

In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs, only the frames allocated to that process are replaced. When a process starts thrashing, other processes are not affected. But since processes will still be queued for the paging device most of the time, the average service time for a page fault increases; hence the effective access time increases even for a process that is not thrashing.
Q) Page Table Structure (structure of the page table in memory)

Hierarchical paging, hashed page tables, inverted page tables.

Hierarchical Paging: if the page table is large, we break it into multiple page tables, i.e. the page table itself is paged. A simple technique is a two-level page table. Two-level paging example: a logical address (on a 32-bit machine with a 4K page size) is divided into 1. a page number consisting of 20 bits and 2. a page offset consisting of 12 bits. Since the page table is paged, the page number is further divided into 1. a 10-bit outer page number and 2. a 10-bit inner page number. Thus a logical address is as follows: page number | page offset

p1 | p2 | d

10 | 10 | 12, where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table. (Two-level page-table scheme.)

                              Address-Translation Scheme Address-translation scheme for a two-level 32-bit paging architecture
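The 10/10/12 split above is just bit slicing; a small sketch (illustrative, with an arbitrary example address):

```python
# Split a 32-bit logical address (4K pages) into p1, p2, d.
# Field widths follow the example: 10 + 10 + 12 bits.

def split_address(addr):
    d  = addr & 0xFFF             # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF     # next 10 bits: index into the inner page table
    p1 = (addr >> 22) & 0x3FF     # top 10 bits: index into the outer page table
    return p1, p2, d

addr = 0x00403005                 # arbitrary example address
print(split_address(addr))        # (1, 3, 5)
```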

Hashed Page Tables (hash tables): hashed page tables are common for address spaces larger than 32 bits. Each entry in the hash table contains a linked list of elements that hash to the same location. The hash table is indexed by hash value. Each element consists of 3 fields:

1. page number, 2. frame number, 3. pointer to the next element in the linked list.

The algorithm works as follows: the page number is passed to a hash function to get a hash value. Locating the hash value in the hash table is easy, since the table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address; if there is no match, the next entries in the linked list are searched.
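A minimal sketch of this lookup (illustrative only; the chained lists are plain Python lists, and the hash function and names are assumptions of mine):

```python
# Hashed page table: each bucket holds a chain of (page_number, frame_number)
# pairs, standing in for the linked list of elements described above.

NUM_BUCKETS = 8

def hash_fn(page):
    return page % NUM_BUCKETS

def insert(table, page, frame):
    table[hash_fn(page)].append((page, frame))

def lookup(table, page):
    for p, frame in table[hash_fn(page)]:   # walk the chain at this hash value
        if p == page:                       # compare field 1 with the page number
            return frame
    raise MemoryError("page fault: no mapping for this page")

table = [[] for _ in range(NUM_BUCKETS)]
insert(table, 3, 42)
insert(table, 11, 7)       # 11 % 8 == 3, so it chains in the same bucket as page 3
print(lookup(table, 11))   # 7
```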

                              Inverted Page Table

An inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table holds:

process-id, page number.

A logical address also has 3 parts: process-id, page number,

offset. The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, frame number i combined with the offset gives the physical address. If there is no match, the address is illegal. Advantages and disadvantages:

it decreases the memory needed to store page tables, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to

limit the search to one (or at most a few) page-table entries. (Inverted page-table architecture.)
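The linear search over (process-id, page) entries can be sketched as follows (illustrative; the 4K page size and all names are my own assumptions):

```python
# Inverted page table sketch: one entry per physical frame, holding (pid, page).
# The index i of the matching entry is the frame number.

def translate(ipt, pid, page, offset, page_size=4096):
    for i, entry in enumerate(ipt):
        if entry == (pid, page):
            return i * page_size + offset   # frame number i plus the offset
    raise MemoryError("illegal address (or page fault)")

ipt = [(1, 0), (2, 5), (1, 3)]     # frame 0 -> P1 page 0, frame 1 -> P2 page 5, ...
print(translate(ipt, 1, 3, 100))   # frame 2 -> 2*4096 + 100 = 8292
```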

                              Unit-IV CPU Scheduling

Basic concepts: To maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns the CPU to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle: process execution begins with a CPU burst, followed by an I/O burst, followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

An I/O-bound program has many short CPU bursts; a CPU-bound program has a few long CPU bursts.

Schedulers: modules in the OS that make scheduling decisions.
- Long-term scheduler (job scheduler): selects which processes should be brought into the ready queue.
- Medium-term scheduler: selects which processes should be swapped in/out of memory.
- Short-term scheduler (CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term scheduler, or CPU scheduler.

Dispatcher: gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

1. switching context, 2. switching to user mode, 3. jumping to the proper location in the user program to restart that program.

The time taken to stop one process and start another is known as dispatch latency; it must be kept small. Ready queue: the ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of processes.

                              Pre-emptive Scheduling

                              CPU scheduling decisions take place under one of four conditions

1. When a process switches from the running state to the waiting state. 2. When a process switches from the running state to the ready state, for example in response to an

interrupt. 3. When a process switches from the waiting state to the ready state, say at completion of I/O. 4. When a process terminates.

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different

one. If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-

preemptive, or cooperative. In non-preemptive scheduling, a process runs until it gives up the CPU itself, i.e. when it waits for I/O or when it completes. Otherwise the system is said to be preemptive.

Windows used non-preemptive scheduling up to Windows 3.x and began using preemptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X and preemptive scheduling since then. Note that preemptive scheduling is only possible on hardware that supports a timer interrupt.

                              Note that pre-emptive scheduling can cause problems when two processes share data because one process may get interrupted in the middle of updating shared data structures

Preemption can also be a problem if the kernel is busy with a system call (e.g. updating critical kernel data structures) when the preemption occurs. Solution: wait until the system call has either completed or blocked before allowing the preemption. But this solution is problematic for real-time systems.

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. This should be done only for very short pieces of code that take little CPU time.

Scheduling criteria used to compare CPU scheduling algorithms:
1. CPU utilization: in a real system, CPU utilization ranges from about 40% to 90%.
2. Throughput: the number of processes completed per unit time.
3. Turnaround time: the time when the process completed minus the time when it was submitted; it includes waiting time and burst time.
4. Waiting time: the amount of time a process spends waiting for the CPU.
5. Response time: the time when the first response came minus the time when the process was submitted.

Optimization criteria: max CPU utilization, max throughput,

min turnaround time, min waiting time, min response time.
CPU Scheduling Algorithms
1) First-Come, First-Served (FCFS) scheduling: FCFS is non-preemptive. Processes are scheduled in the order in which they arrive. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue. Advantages: easy to implement. Disadvantages: 1. the average waiting time is often quite long;

2. the convoy effect (small processes wait in the queue for a big process to leave the CPU). Example: Process, Burst Time: P1 24, P2 3, P3 3

                              Suppose that the processes arrive in the order P1 P2 P3

The Gantt chart for the schedule is: P1 | P2 | P3

0 24 27 30. Waiting time: P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.
2) Shortest-Job-First (SJF) scheduling: schedule the process with the shortest burst time; if the CPU burst times of two processes are equal, FCFS is used. Advantage: the average waiting time decreases. Disadvantage: it is difficult to know the length of the next CPU burst. SJF can be either preemptive or non-preemptive:
1. Non-preemptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Preemptive SJF, or Shortest-Remaining-Time-First (SRTF): if a newly arrived process is shorter than the remaining time of the currently executing process, preempt the currently executing process.
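The FCFS waiting-time calculation above can be checked with a small sketch (illustrative names; all processes assumed to arrive at time 0 in queue order):

```python
# FCFS: each process waits until all processes ahead of it in the queue finish.

def fcfs_waiting_times(burst_times):
    waits, clock = [], 0
    for burst in burst_times:
        waits.append(clock)       # waiting time = finish time of predecessors
        clock += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])
print(waits, sum(waits) / len(waits))   # [0, 24, 27] 17.0
```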

Example: Process, Arrival Time, Burst Time: P1 0.0 7,

P2 2.0 4, P3 4.0 1,

P4 5.0 4

SJF (non-preemptive) Gantt chart: P1 | P3 | P2 | P4

0 7 8 12 16. Average waiting time = [0 + (8-2) + (7-4) + (12-5)]/4 = 4.

                              Example of Preemptive SJF

Process, Arrival Time, Burst Time: P1 0.0 7; P2 2.0 4; P3 4.0 1; P4 5.0 4

SJF (preemptive) Gantt chart: P1 | P2 | P3 | P2 | P4 | P1

0 2 4 5 7 11 16. Average waiting time = (9 + 1 + 0 + 2)/4 = 3.
3) Priority scheduling: a priority number (an integer) is associated with each process, and the CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the longer the CPU burst, the lower the priority. There are two schemes: 1. preemptive, 2. non-preemptive.

Problem: starvation; low-priority processes may never execute. Solution: aging; as time progresses, increase the priority of the process.

4) Round Robin (RR): each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n-1)q time units. Performance: 1. if q is large, RR behaves like FIFO; 2. if q is small, q must still be large with respect to the context-switch time, otherwise the overhead is too high. Example of RR with time quantum = 4: Process, Burst Time: P1 24, P2 3,

P3 3. The Gantt chart is: P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1

0 4 7 10 14 18 22 26 30. Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 ≈ 5.66.
5) Multilevel queue scheduling: the ready queue is partitioned into separate queues. For example, the ready queue is divided into 2 queues:

1. foreground (interactive) queue, 2. background (batch) queue.

Each queue has its own scheduling algorithm. For example, round robin can be used in the foreground queue and

FCFS in the background queue. Scheduling must also be done between the queues; this can be done in 2 ways:
1. Fixed-priority scheduling:

the foreground queue has the highest priority. All processes in the foreground queue must be completed and the queue must be empty before the processes in the background queue are given the CPU (as shown in the figure). Disadvantage: starvation.

2. Time slice: each queue gets a certain amount of CPU time, which it schedules among its own processes, e.g. 80% to the foreground queue (RR) and 20% to the background queue (FCFS).
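Returning to the round-robin example above (P1 = 24, P2 = 3, P3 = 3, quantum 4), the schedule can be simulated to check the waiting times (a sketch; names are my own, all arrivals assumed at time 0):

```python
from collections import deque

def rr_waiting_times(burst_times, quantum):
    """Round-robin waiting times for processes that all arrive at time 0."""
    remaining = list(burst_times)
    ready = deque(range(len(burst_times)))
    finish = [0] * len(burst_times)
    clock = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)           # preempted: back to the tail of the queue
        else:
            finish[i] = clock
    # waiting time = turnaround time - burst time (arrival time is 0)
    return [finish[i] - burst_times[i] for i in range(len(burst_times))]

waits = rr_waiting_times([24, 3, 3], 4)
print(waits, sum(waits) / 3)   # [6, 4, 7] -> average 17/3
```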

6) Multilevel feedback queue scheduling: a process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes stay in the highest-priority queue. If a process uses too much CPU time, it is moved to a lower-priority queue; if a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure, let there be 3 queues: Q0, Q1, Q2.

1. Q0: time quantum 8 milliseconds. 2. Q1: time quantum 16 milliseconds. 3. Q2: FCFS.

                              Scheduling

1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 milliseconds; if it does not finish in 8 milliseconds, it is moved to queue Q1. 2. At Q1 the process receives an additional 16 milliseconds; if it still does not complete, it is moved to queue Q2.

Q) Synchronization: Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of process synchronization is the Producer-Consumer (or Bounded-Buffer) problem: make sure that the producer process does not add data when the buffer is full, and that the consumer process does not take data when the buffer is empty.

The code for the producer process can be written as follows:

while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed. in is an index that always points to the next free slot after the last filled slot; out is an index that points to the first filled slot at the head of the circular queue.

The code for the consumer process is shown below:

while (true) {
    while (counter == 0)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The above producer and consumer code produces a RACE CONDITION. When the outcome of the execution of several co-operating processes depends on the order in which their accesses to shared data take place, it is called a race condition. For example, take counter = 5:

given counter = 5, if the producer produces one item and the consumer consumes one item, the correct value is counter = 5. But counter++ and counter-- each compile into several machine instructions (load, increment/decrement, store), labelled T1...T6, and these can interleave. In one interleaving the result is counter = 4, an inconsistent result; if T5 is executed before T4, the result is counter = 6, also inconsistent.
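The interleaving can be simulated deterministically (an illustrative sketch, not the notes' own example; the schedules below model counter++ and counter-- as load/modify/store triples):

```python
# Deterministic simulation of the counter++ / counter-- race.
# "p" is the producer (increment), "c" is the consumer (decrement).

def run(schedule, counter=5):
    regs = {"p": 0, "c": 0}
    for who, step in schedule:
        if step == "load":
            regs[who] = counter               # read shared variable into register
        elif step == "modify":
            regs[who] += 1 if who == "p" else -1
        elif step == "store":
            counter = regs[who]               # write register back
    return counter

# Serial execution: producer's ++ completes before consumer's -- starts -> 5
serial = [("p", "load"), ("p", "modify"), ("p", "store"),
          ("c", "load"), ("c", "modify"), ("c", "store")]
# Bad interleaving: both load 5 before either stores; consumer stores last -> 4
bad = [("p", "load"), ("p", "modify"), ("c", "load"), ("c", "modify"),
       ("p", "store"), ("c", "store")]
print(run(serial), run(bad))   # 5 4
```

Swapping the last two stores in the bad schedule yields 6 instead, matching the two inconsistent outcomes described above.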

Solution to avoid the race condition: make sure that only one process at a time updates the shared variable (here, counter).

Q) Critical-section problem. Ans: Each process has a segment of code called its critical section; critical-section code is the code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical-section code at the same time. For this to happen: 1. each process must request permission to enter its critical section (entry section); 2. the end of the critical section is the exit section; 3. the rest of the code after the critical section is the remainder section.

A solution to the critical-section problem must satisfy three requirements: 1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section. 2. Progress: only processes that are not in their remainder sections take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely. 3. Bounded waiting: every process requesting entry must eventually be allowed to enter its critical section; no process may wait indefinitely.

Q) Peterson's solution to the critical-section problem (software-based solution)
Ans:

Time | Process 1                            | Process 2
T0   | do { flag[1] = TRUE; turn = 2;       | do { flag[2] = TRUE;
T1   |                                      | turn = 1;
T2   | while (flag[2] && turn == 2);        | while (flag[1] && turn == 1);
     | since turn == 1, the loop exits      | loops until flag[1] becomes FALSE
T3   | enters critical section              |
T4   | flag[1] = FALSE;                     | enters critical section
T5   | enters remainder section             | flag[2] = FALSE;
     | } while (TRUE);                      | enters remainder section
     |                                      | } while (TRUE);

10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure, which depends on its type.

                              File Attributes

1. Name: the name of the file, for user understanding. 2. Identifier: used by the OS to identify a file (e.g. inode number). 3. Type: text files, executable files, binary files, etc. 4. Location: the location of the file on the hard drive. 5. Size: the current size of the file. 6. Protection: controls who can read, write, or execute the file. 7. Time, date, and user identification: gives information about when the file was created, last modified,

and last used, etc. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier locates the other file attributes. The directory structure is kept on disk.

                              File Operations

The file abstract data type supports many common operations: 1. Creating a file: find free space on disk, add an entry to the directory. 2. Writing a file: write data at the current file-position pointer location and update the pointer. 3. Reading a file: read data at the current file-position pointer location and update the pointer. 4. Repositioning within a file (file seek): change the pointer location to a given value. 5. Deleting a file: free the space allocated to the file and remove its entry from the directory. 6. Truncating a file: delete the data and update the file size.

                              Open File Table

Most OSes require that files be opened before access and closed after access. Information about all files that are currently open in the system is stored in an open-file table. This

system-wide open-file table is maintained by the OS, and information about the files opened by a process is stored in that process's own open-file table. An open-file table entry has the following information:

1. File pointer: records the current position in the file for the next read or write. 2. File-open count: how many times the current file has been opened (possibly simultaneously by

different processes) and not yet closed. When this counter reaches zero, the file's entry can be removed from the table.

3. Disk location of the file. 4. Access rights.

Some systems provide support for file locking: 1. A shared lock is for reading only. 2. An exclusive lock is for writing as well as reading. 3. With an advisory lock, software developers must make sure locks are acquired and released

appropriately. 4. With a mandatory lock, the OS makes sure locks are acquired and released appropriately. 5. UNIX uses advisory locks and Windows uses mandatory locks.

                              File Types

Implementing file types: a common technique is to include the type as part of the file name. The name is split into 2 parts,

1. file name and 2. extension,

separated by a period. The system uses the extension as an indication of the type of the file and of the operations that can be done on it. When a user opens Microsoft Word, clicks the File menu and the Open option, and specifies only a file name without an extension, the application looks for a file with the given name and the extension it expects; such extensions are hints used by applications rather than something enforced by the operating system. UNIX uses a crude magic number stored at the beginning of a file to indicate its type. In the Mac OS X operating system, a file produced by a word processor has the word processor's name recorded as its creator; when the user double-clicks the file, that word processor opens it. Some common file types are shown below.

File structure: some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine

where in memory to load the file and what the location of the first instruction is.

If the OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of

executable binary programs, which it must know how to load, finding the first executable statement, etc.).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently; the data fork contains the code or data.

                              Internal File structure

                              Disk files are accessed in units of physical blocks, typically 512 bytes each.

                              Internally, files are organized in logical units such as a single byte, a data record, or a structure.

                              The number of logical units that fit into one physical block determines the packing. For example, if a student file is organized internally as records, each record takes 64 bytes, and the disk is divided into 512-byte blocks, then packing = 512/64 = 8, so 8 records can be stored per block.

                              Internal fragmentation may occur due to this
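The packing arithmetic above can be sketched in Python. This is an illustrative helper, not from the source; the function names are our own.

```python
def packing(block_size, record_size):
    """Number of whole logical records that fit in one physical block."""
    return block_size // record_size

def internal_fragmentation(block_size, record_size):
    """Bytes wasted per block when records may not span block boundaries."""
    return block_size % record_size

# The student-file example from the notes: 512-byte blocks, 64-byte records.
print(packing(512, 64))                  # 8 records per block, no waste
print(internal_fragmentation(512, 64))   # 0 bytes wasted
# With 100-byte records the fit is not exact, so some bytes are lost:
print(internal_fragmentation(512, 100))  # 12 bytes wasted per block
```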

                              10.2 Access Methods

                              The information in a file can be accessed in the following ways:
                              1. Sequential access
                              2. Direct access (or relative access)

                              1. Sequential access: Records are read and written sequentially, one record after another; we cannot randomly access an arbitrary record.

                              A sequential access file emulates magnetic tape operation and generally supports the following operations:
                              - read next - read a record and advance to the next position
                              - write next - write a record and advance to the next position
                              - rewind
                              - skip n records - may or may not be supported; n may be limited to positive values or to +/- 1

                              2. Direct access (or relative access): A file is made up of fixed-length records so that we can randomly access any record.

                              No restriction on order of reading or writing

                              Direct access files are useful when we need to access large amounts of data randomly

                              Ex database

                              Operations supported (n is the relative block number, i.e., relative to the beginning of the file):

                              1. read n - read record number n
                              2. write n - write record number n
                              3. jump to record n - n could be 0 or the end of the file
                              4. query current record - used to return to this record later
                              5. Sequential access can easily be emulated using direct access; the inverse is complicated and inefficient.

                              3. Other access methods can be built on top of the direct access method. We construct an index for the file which contains pointers to the various blocks. To search for a record, we first search the index and then use the pointer to access the file directly.

                              If a student file has 120,000 records sorted by student id, we can create an index holding the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120,000 records is 120000/64 = 1875, roughly 2000 blocks, so the index has about 2000 entries. The index is kept in main memory so that searching is fast.

                              If the index itself becomes too large to keep in memory, we create an index for the index file (e.g., a B-tree).
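The index lookup just described can be sketched as a binary search over the per-block first ids. This is an illustrative sketch, not from the source; `find_block` and the toy index are our own.

```python
import bisect

def find_block(index, student_id):
    """index[i] is the first (smallest) student id stored in block i.
    Returns the block number that must contain student_id, if it exists."""
    # bisect_right finds the first index entry greater than student_id;
    # the record, if present, lives in the block just before that point.
    return bisect.bisect_right(index, student_id) - 1

# Toy index: 4 blocks of 64 sorted records each; ids 0, 64, 128, 192
# are the first ids in blocks 0..3.
index = [0, 64, 128, 192]
print(find_block(index, 70))    # 1 (ids 64..127 live in block 1)
print(find_block(index, 192))   # 3
```

Because the search touches only the in-memory index, a record is found with a single disk read of the block returned by `find_block`.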

                              Q) Allocation Methods

                              Disk space must be allocated to files in such a way that disk space is utilized effectively and files can be accessed quickly.

                              Three major methods of allocating disk space are

                              1 Contiguous 2 Linked

                              3 Indexed

                              1 Contiguous allocation Each file occupies set of contiguous blocks on disk

                              If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

                              The directory entry for each file indicates the address of starting block(ie b) and length (n)
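The directory entry (b, n) gives a direct mapping from logical to physical blocks, which can be sketched as follows. Illustrative Python, not from the source; the function name is our own.

```python
def physical_block(start, length, i):
    """Map logical block i of a contiguously allocated file to its
    physical block. start (b) and length (n) come from the directory entry."""
    if not 0 <= i < length:
        raise IndexError("logical block outside the file")
    return start + i

# A file starting at block b = 14 that is n = 3 blocks long
# occupies physical blocks 14, 15, 16.
print([physical_block(14, 3, i) for i in range(3)])  # [14, 15, 16]
```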

                              Advantages

                              1. Direct access to block i is easy: the required physical block is simply b + i. Sequential access is also easy, since the file is stored contiguously.
                              2. Supports both direct and sequential access.
                              3. The number of disk seeks required is minimal.
                              4. Seek time is minimal.

                              Disadvantages

                              1. It is difficult to find space for a new file (or to add more data to an existing file). First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

                              2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time-consuming.

                              3. The size of the file must be known when the file is created. Even when the final size is known, the entire space may not be used by the file at the beginning, which leads to wasted space.

                              If the size of the file is not known in advance, then: (i) if we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the old space. This works as long as such a hole exists, but it is time-consuming. (ii) If we overestimate the amount of space needed, much of the space goes unused, leading to internal fragmentation.

                              To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme. A chunk of contiguous blocks is allocated initially; if the file needs more blocks, another chunk of contiguous space, known as an extent, is added. The directory entry then records the location and block count of the first chunk plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable size, external fragmentation occurs.

                              2. Linked allocation: Each file is a linked list of disk blocks, and the blocks may be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0.

                              To add more data to an existing file, a free block is found, the data is written to it, and the block is linked to the end of the file. To read a file, we follow the pointers from block to block.

                              Advantages:
                              1. It is easy to find space for a new file (or to add more data to an existing file).
                              2. The size of the file need not be declared when the file is created.
                              3. There is no external fragmentation.

                              Disadvantages:
                              1. Seek time is high.
                              2. The number of disk seeks required is high.
                              3. Pointers consume a noticeable percentage of the file's disk space.
                              4. Inefficient for direct access: to access the i-th record, i disk reads are required.
                              5. Not reliable: a lost or damaged pointer breaks the rest of the file.

                              A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk access time, reduces the space wasted on pointers (fewer pointers are needed), and simplifies free-list management. Disadvantage: internal fragmentation - more space is wasted when a cluster is only partially full.

                              A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

                              A variation of linked allocation is the file allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                              Advantages

                              1. It is easy to find space for a new file (or to add more data to an existing file).
                              2. Random access time is improved.

                              Disadvantage: the number of disk seeks increases, because the disk head must move to the FAT to find the location of a block and then move to the location of that block.
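Following a file's chain through the FAT can be sketched like this. An illustrative model, not from the source; `fat_chain` and the in-memory dict standing in for the on-disk table are our own.

```python
def fat_chain(fat, start, eof=-1):
    """Follow a file's block chain through the FAT. fat[b] holds the
    number of the block that comes after b; eof marks the last block."""
    chain = []
    block = start
    while block != eof:
        chain.append(block)
        block = fat[block]
    return chain

# FAT entries for the 5-block example file from the notes:
# the file starts at block 9 and runs 9 -> 16 -> 1 -> 10 -> 25.
fat = {9: 16, 16: 1, 1: 10, 10: 25, 25: -1}
print(fat_chain(fat, 9))  # [9, 16, 1, 10, 25]
```

Because every hop is a table lookup rather than a read of the data block itself, caching the FAT in memory makes random access much cheaper than plain linked allocation.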

                              3 Indexed allocation

                              Each file has its own index block, which is an array of disk-block addresses; the ith entry in the index block points to the ith block of the file.

                              The directory contains the address of the index block

                              When the file is created all pointers in the index block are set to nil

                              To add more data to existing file free block is found and data is written to it The address of block is put in the index block

                              Advantages

                              1. It is easy to find space for a new file (or to add more data to an existing file).
                              2. Supports direct (random) access efficiently.
                              3. No external fragmentation.

                              Disadvantage

                              1. Seek time is high.
                              2. The number of disk seeks required is high.
                              3. Not reliable: losing the index block loses the whole file.
                              4. The index block must be kept in memory. If this memory is not available, we must read the index block and then the desired data block, i.e., 2 disk accesses are required; with multilevel indexes, accessing a block near the end of the file may require reading several index blocks before the needed data block.
                              5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

                              Various mechanisms followed for size of index block

                              1. Linked scheme: the size of an index block is one disk block. An index block contains a small header (giving the name of the file) and a set of the first 100 disk-block addresses; for large files, the last address points to another index block.

                              2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

                              3. Combined scheme: for example, say there are 15 pointers in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, an index block containing addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block, as shown below.

                              Q) Free space management

                              To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                              The free space list can be implemented as

                              1 Bit vector (or) Bit map2 Linked list3 Grouping4 Counting 5 Space maps

                              1. Bit vector (or bit map): Each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000...
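The bit-map example above can be checked with a small sketch. Illustrative Python only; `first_free` is our own name, and the list models the bit string from the notes.

```python
def first_free(bitmap):
    """Return the number of the first free block (bit = 1), or -1."""
    for block, bit in enumerate(bitmap):
        if bit == 1:
            return block
    return -1

# The bit map from the notes: blocks 2, 3, 4, 5 and 8 are free.
bitmap = [0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0]
print(first_free(bitmap))                          # 2
print([b for b, bit in enumerate(bitmap) if bit])  # [2, 3, 4, 5, 8]
```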

                              Advantages: simple, and it is easy to find contiguous free blocks for a file.

                              Disadvantages

                              1. The bit map requires extra space.
                              2. The bit map is inefficient unless kept in main memory. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a large disk there.
                              3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit map to be updated, because the freed data blocks may be scattered all over the disk.

                              2. Linked list: The free-space list is implemented by linking all the free disk blocks together, with a pointer to the first free block stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, block 5 points to block 8, and so on.

                              Advantage: no wasted space.

                              Disadvantage: contiguous space cannot be found easily; traversing the free-space list requires reading each block, which costs substantial I/O time.

                              3. Grouping: The first free block stores the addresses of n free blocks. The first n-1 of these are actually free, and the last one contains the addresses of another n free blocks, and so on. Advantage: the addresses of a large number of free blocks can be found quickly.

                              4. Counting: The free-space list keeps, for each run of free blocks, the address of the first free block and the number n of free contiguous blocks that follow it. Each entry in the free-space list thus consists of a disk address and a count. Each entry requires more space, but the overall list is shorter as long as counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

                              5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity (allocations and frees), in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

                              Synchronization Hardware

                              The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
                              1. the TestAndSet() hardware instruction
                              2. the Swap() instruction

                              1 Test and Set instruction is as shown

                              Solution using TestAndSet: the shared variable lock is initialized to false.

                              2 Swap() instruction definition is as follows

                              Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.
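The Swap-based entry protocol can likewise be sketched as a software model (illustrative only; real hardware performs the exchange atomically, which this single-threaded sketch assumes):

```python
def swap(a, b):
    """Software model of the atomic Swap instruction: exchange the
    contents of two one-element cells in one indivisible step."""
    a[0], b[0] = b[0], a[0]

lock = [False]            # shared, initialised to false
key = [True]              # local to the process trying to enter
# Entry section: while key is true, keep swapping it with lock.
swap(lock, key)           # lock was free, so key becomes False...
print(key[0], lock[0])    # False True -> this process enters its CS
lock[0] = False           # exit section: release the lock
```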

                              Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee of the relative speeds of the processes, a very fast process could release the lock, run through its remainder section, and re-acquire the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

                              Below figure illustrates a solution using test-and-set that satisfies bounded waiting using two shared data structures boolean lock and boolean waiting[ N ] where N is the number of processes in contention for critical sections

                              Bounded-waiting mutual exclusion with TestAndSet( )

                              On exit, a process scans the waiting array in order (starting with the next process after itself) for a process that is waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process while continuing to block all the others; if no process is waiting, it releases the lock.

                              Q) What is a semaphore?
                              Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1. wait() 2. signal()

                              Wait primitive:

                                  wait(S) {
                                      while (S <= 0)
                                          ; // do nothing
                                      S--;
                                  }

                              Signal primitive:

                                  signal(S) {
                                      S++;
                                  }

                              Each process that wishes to use a resource performs a wait() operation; when a process releases a resource, it performs a signal() operation.

                              Semaphore Usage:
                              1. A binary semaphore (or mutex lock) can take only the values 0 and 1.

                              2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining instances of a resource, with the counter initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources. When the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal() call.

                              For example, suppose there are 3 resources and 4 processes; the counting semaphore starts at 3.
                              If process P1 requires a resource, it performs a wait() operation; now the counting semaphore = 2.
                              If process P2 requires a resource, it performs a wait() operation; now the counting semaphore = 1.
                              If process P3 requires a resource, it performs a wait() operation; now the counting semaphore = 0.
                              If process P4 requires a resource, it performs a wait() operation and must wait until a resource is available.
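The 3-resource example above maps directly onto Python's `threading.Semaphore` (a counting semaphore where `acquire()` is wait() and `release()` is signal()). An illustrative sketch, not from the source:

```python
import threading

# 3 identical resources guarded by a counting semaphore, as in the notes.
resources = threading.Semaphore(3)

acquired = []
for name in ("P1", "P2", "P3"):
    resources.acquire()          # wait(): counter goes 3 -> 2 -> 1 -> 0
    acquired.append(name)

# P4 would block; a non-blocking probe shows the counter is exhausted.
print(resources.acquire(blocking=False))   # False: nothing left for P4

resources.release()                        # some process signals()
print(resources.acquire(blocking=False))   # True: P4 gets the freed resource
```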

                              3 Semaphores can also be used to synchronize certain operations

                              Semaphore Implementation: While one process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). A semaphore implemented this way is called a spinlock, because the process spins while waiting.

                              As an example of using a semaphore for ordering, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

                              First we create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code:

                                  S1;
                                  signal(synch);

                              and in process P2 we insert the code:

                                  wait(synch);
                                  S2;

                              Because synch is initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal(). Each semaphore has an integer value and maintains a queue of waiting processes.
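The ordering scheme above can be demonstrated with Python threads. This is an illustrative sketch, not from the source; `threading.Semaphore(0)` plays the role of synch.

```python
import threading

synch = threading.Semaphore(0)   # initialised to 0, as in the notes
trace = []

def p1():
    trace.append("S1")           # statement S1
    synch.release()              # signal(synch)

def p2():
    synch.acquire()              # wait(synch): blocks until P1 signals
    trace.append("S2")           # statement S2

t2 = threading.Thread(target=p2); t2.start()   # start P2 first on purpose
t1 = threading.Thread(target=p1); t1.start()
t1.join(); t2.join()
print(trace)                     # ['S1', 'S2'] in every interleaving
```

Even though P2 is started first, it cannot reach S2 before P1 has executed S1, which is exactly the ordering guarantee.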

                              Semaphore implementation with no busy waiting: To overcome busy waiting, we block the waiting process; it is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue to the ready state).

                              Problems due to semaphores:

                              1. Deadlock: deadlock occurs when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes, as shown below.

                              2. Starvation: one or more processes is blocked forever and never gets a chance to enter the critical section. This can happen, for example, if we do not specify how processes are added to the waiting queue or selected from it in the signal() call: if a LIFO queue is used, the first process to start waiting may never get a turn.

                              3. Priority inversion: Let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it preempts L, and M runs; thus M runs before H even though M has lower priority than H. To avoid this, L temporarily inherits H's priority (priority inheritance) so that M cannot preempt L.

                              Q) Classical problems of synchronization

                              1 The Bounded Buffer Problem (also called the The Producer-Consumer Problem)2 The Readers-Writers Problem3 The Dining Philosophers Problem

                              These problems are used to test newly proposed synchronization schemes.

                              1. The Bounded Buffer Problem: Consider:
                              - n buffers, each able to hold one item
                              - a producer process which creates items (one at a time)
                              - a consumer process which consumes them (one at a time)

                              The producer cannot produce an item when all the buffers are full, and the consumer cannot consume when all the buffers are empty. Because both processes modify the buffer, synchronization is required. We use 3 semaphores:

                              1. empty = n (counts the number of empty buffers)
                              2. full = 0 (counts the number of filled buffers)
                              3. mutex = 1 (provides mutual exclusion for access to the buffer)

                              Producer process:

                                  do {
                                      ...
                                      produce an item
                                      ...
                                      wait(empty);
                                      wait(mutex);
                                      ...
                                      add the item to the buffer
                                      ...
                                      signal(mutex);
                                      signal(full);
                                  } while (1);

                              Consumer process:

                                  do {
                                      wait(full);
                                      wait(mutex);
                                      ...
                                      remove an item from the buffer
                                      ...
                                      signal(mutex);
                                      signal(empty);
                                      ...
                                      consume the item
                                      ...
                                  } while (1);
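The bounded-buffer protocol can be run directly with Python threads and semaphores. This is an illustrative sketch, not from the source; the buffer is modeled with a deque and the loops are made finite so the program terminates.

```python
import threading
from collections import deque

N = 3                                  # number of buffer slots
buffer = deque()
empty = threading.Semaphore(N)         # counts empty slots
full = threading.Semaphore(0)          # counts filled slots
mutex = threading.Semaphore(1)         # mutual exclusion on the buffer

consumed = []

def producer(items):
    for item in items:
        empty.acquire()                # wait(empty)
        mutex.acquire()                # wait(mutex)
        buffer.append(item)            # add the item to the buffer
        mutex.release()                # signal(mutex)
        full.release()                 # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()                 # wait(full)
        mutex.acquire()                # wait(mutex)
        consumed.append(buffer.popleft())
        mutex.release()                # signal(mutex)
        empty.release()                # signal(empty)

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)    # items arrive in production order: 0..9
```

Note that the producer never overruns the 3-slot buffer: `empty` forces it to wait whenever it is 3 items ahead of the consumer.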

                              2. The Readers-Writers Problem: A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

                              - readers priority: no reader waits merely because a writer is waiting
                              - writers priority: if a writer is waiting to access the data, no new readers may start reading

                              A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

                              A semaphore solution to the readers-priority version uses the following shared data:
                              1. readcount = 0 (an integer counting how many processes are currently reading the shared data)
                              2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
                              3. wrt = 1 (a semaphore shared by readers and writers; it provides mutually exclusive access to the shared data)

                              Reader process:

                                  wait(mutex);
                                  readcount++;
                                  if (readcount == 1)
                                      wait(wrt);        // first reader locks out writers
                                  signal(mutex);
                                  ...
                                  reading is performed
                                  ...
                                  wait(mutex);
                                  readcount--;
                                  if (readcount == 0)
                                      signal(wrt);      // last reader lets writers in
                                  signal(mutex);

                              Writer process:

                                  do {
                                      wait(wrt);
                                      ...
                                      writing is performed
                                      ...
                                      signal(wrt);
                                  } while (TRUE);
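The readers-priority solution above can be exercised with Python threads. An illustrative sketch, not from the source; the shared data access is modeled by appending to a log, and each process runs one round.

```python
import threading

read_count = 0                    # a plain integer, not a semaphore
mutex = threading.Semaphore(1)    # protects read_count
wrt = threading.Semaphore(1)      # exclusive access for writers

log = []

def reader(name):
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:
        wrt.acquire()             # first reader locks out writers
    mutex.release()
    log.append((name, "reading")) # the shared read happens here
    mutex.acquire()
    read_count -= 1
    if read_count == 0:
        wrt.release()             # last reader lets writers in
    mutex.release()

def writer(name):
    wrt.acquire()
    log.append((name, "writing")) # exclusive write happens here
    wrt.release()

threads = [threading.Thread(target=reader, args=(f"R{i}",)) for i in range(3)]
threads.append(threading.Thread(target=writer, args=("W1",)))
for t in threads: t.start()
for t in threads: t.join()
print(sorted(log))   # all three readers read and the writer wrote once
```

Whichever interleaving occurs, the writer's entry can never fall between a first reader's `wrt.acquire()` and the last reader's `wrt.release()`, so writes are exclusive.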

                              3. The Dining Philosophers Problem: n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already held by a neighbouring philosopher. The dining philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner.

                              A semaphore solution represents each chopstick with a semaphore:

                                  semaphore chopstick[5];   // initially all values are 1

                              Philosopher i:

                                  do {
                                      wait(chopstick[i]);
                                      wait(chopstick[(i+1) % 5]);
                                      ...
                                      eat
                                      ...
                                      signal(chopstick[i]);
                                      signal(chopstick[(i+1) % 5]);
                                      ...
                                      think
                                      ...
                                  } while (1);

                              This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (for example, if every philosopher picks up his left chopstick at the same time).

                              Q) Types of operating systems

                              MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

                              In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


                              ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), the CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. These systems do not have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

                              REAL TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint

                              Two kinds of real-time systems have evolved

                              1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
                              2. A soft real-time system cannot guarantee that the response requirement is met under all conditions. Ex: digital audio or multimedia systems, digital telephones.

                              DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e., decision-making is spread across the computers in the system so that the failure of individual computers or network links does not cripple operation.

                              A distributed operating system appears as a uniprocessor system even though it has multiple processors The users may not know where their programs are being run or where their files are located that should all be handled automatically by the operating system

                              Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

                              DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location Modern desktop systems support multiprogramming Common examples are Linux FreeBSD Windows 8 and the Macintosh operating system

                              HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. Handheld systems have very limited memory.


                              CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system The components of a cluster are usually connected to each other through fast local area networks each node (computer used as a server) running its own instance of an operating system

                              Q) Modern operating system

                              New developments in hardware applications and security threats lead to the development of modern operating system

                              New Hardware developments like increased machine speed high speed network increased size and variety of memory devices

                              New Applications developments like multimedia applications internet and web access and client server computing

                              Developments in modern operating system can be categorized as

                              1. Microkernel architecture assigns only a few functions to the kernel (IPC, basic CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server development from kernel development and is well suited to a distributed environment.

                              2 Multithreading a process is divided into threads that can run concurrently Useful for applications that perform a number of independent tasks Example database server that listens and processes many client requests

                              3. Symmetric multiprocessing: Since systems now commonly contain multiple processors, an SMP operating system provides greater efficiency by using all of them. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                              SMP schedules processes or threads across all of the processors

                              Advantages

                              1. Increased performance: if some portion of a program can run in parallel, performance improves.
                              2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
                              3. Incremental growth: a new processor can be added to increase performance.
                              4. Scaling: cost can scale with the number of processors used.

                              4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

                              5. Object-oriented design adds modularity to the kernel; the OS can be customized without affecting system integrity.

                              Q) Linux Operating System: Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization.

                              Components of Linux System


                              Linux Operating System has primarily three components as shown in the below diagram

                              Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system, interacts directly with the hardware, and hides low-level hardware details from system and application programs.

                              System Library - System libraries are special functions or programs used by application programs to access the kernel's features.

                              System Utility - System utility programs are responsible for specialized, individual-level tasks such as updating log files or accepting login requests from terminals.

                              Kernel Modules: sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support:
1 Module management - supports loading modules into memory and lets them talk to the rest of the kernel
2 Driver registration - allows modules to inform the rest of the kernel that a new driver has become available
3 Conflict resolution - protects the reserved resources of one driver from accidental use by another driver

Kernel Mode vs User Mode
Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, requires no context switch, and is therefore very efficient and fast. The kernel runs each process, provides system services to processes, and gives them protected access to hardware. Code which is not required to run in kernel mode is placed in system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code. User programs and utilities use the system libraries to access kernel functions for low-level system tasks.
Basic Features
Following are some of the important features of the Linux operating system:

Portable - Portability means the software works the same way on different types of hardware. The Linux kernel and application programs support installation on many kinds of hardware platforms.

Open Source - Linux source code is freely available and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multiuser system, meaning multiple users can access system resources like memory (RAM) and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system. It can be used for various kinds of operations, to call application programs, etc.


Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

Architecture: the Linux system architecture consists of the following layers

                              Hardware layer - Hardware consists of all peripheral devices (RAM HDD CPU etc)

                              Kernel - Core component of Operating System interacts directly with hardware provides low level services to upper layer components

Shell - An interface to the kernel. Takes commands from the user and executes the kernel's functions.

Utilities - Utility programs giving the user most of the functionality of an operating system.

Q) Windows XP operating system
1 Extensibility - layered architecture

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.
2 Portability - XP can be moved from one hardware architecture to another with relatively few changes.

Written in C and C++. Processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3 Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.
4 Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
5 Performance - XP subsystems can communicate with one another via high-performance message passing.

Preemption of low-priority threads enables the system to respond quickly to external events. Designed for symmetric multiprocessing.

6 International support - supports different locales via the national language support (NLS) API.
XP Architecture
1 Layered system of modules
2 Protected mode - hardware abstraction layer (HAL), kernel, executive
3 User mode - collection of subsystems

a Environmental subsystems emulate different operating systems
b Protection subsystems provide security functions

                              XP architecture


Q) Windows network operating system
Example: Windows 2000 operating system

                              All the pictures are taken from Silberschatz Abraham et al Peter Baer Galvin and Greg Gagne Operating system concepts Reading Addison-Wesley

Visit my blog: enthusiaststudent.blogspot.in
mtechmessenger.blogspot.in



• 10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines or records. A file has a structure which depends on its type.

A routine is not loaded until it is called. We achieve better memory-space utilization, as an unused routine is never loaded.

Q) Dynamic Linking
+ Linking is postponed until execution time
+ A small piece of code, the stub, is used to locate the appropriate memory-resident library routine
+ The stub replaces itself with the address of the routine and executes the routine
+ The operating system is needed to check whether the routine is in the process's memory address space
+ Dynamic linking is particularly useful for libraries

Q) Overlays
Ans Overlays are needed when a process is larger than the amount of main memory allocated to it. Only the instructions and data that are needed at any given time are kept in main memory. Overlays can be implemented by the user; programming the design of an overlay structure is complex. Overlays for a two-pass assembler are shown in the figure.

Q) Swapping
Ans A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. Backing store - a fast disk large enough to store copies of all memory images of all users. Priority-based scheduling uses a variant of this swapping policy called roll out, roll in: if a higher-priority process arrives, the memory manager swaps out a lower-priority process and then swaps in the higher-priority process. When the higher-priority process finishes, the lower-priority process can be swapped back into main memory. A process that is swapped out will be swapped back into the same memory space it occupied previously. The major part of the swap time is transfer time. We can swap an idle process only; we cannot swap a process that is waiting for IO.

Q) Contiguous memory allocation
Ans Each process is contained in a single contiguous section of memory.
1 Fixed Size Partition (or) Single-partition allocation

Divide the main memory into fixed-sized partitions; each partition may contain exactly one process. The relocation register contains the value of the starting physical address; the limit register contains the range of logical addresses. Every address generated by the CPU is checked as follows:

If logical address < limit register, then the logical address is added to the relocation register to get the corresponding physical memory address

else a trap to the OS is generated. Since every address is checked, we can protect the OS and other user programs from being modified by a running process.
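The relocation/limit check above can be sketched in Python (hypothetical names; the trap is modelled as an exception):

```python
def translate(logical_addr, relocation, limit):
    """Relocation/limit register check for fixed-partition allocation.

    If the logical address is within the limit, add the relocation
    register to form the physical address; otherwise trap to the OS.
    """
    if logical_addr < limit:
        return logical_addr + relocation  # physical address
    raise MemoryError("trap to OS: address out of range")
```

For example, with relocation = 14000 and limit = 300, logical address 100 maps to physical address 14100, while logical address 400 traps.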

                                2 Multiple-partition allocation

1) Fixed Size Partitions: divide the main memory into fixed-sized blocks. Here the memory allocated to a process may be larger than required. The difference between the allocated memory and the requested memory is called internal fragmentation - unused memory inside the partition.

0  P1 Block0 (0th address to 3rd address)
4  P1 Block1 (4th address to 7th address)
8     Block2 (8th address to 11th address)

12    Block3 (12th address to 15th address)

If P1 requires just 5 addresses, it is still allocated 2 blocks, i.e. 8 addresses. So the remaining 3 addresses are left unused.

2) Variable Size partitions: here main memory is divided into partitions of variable sizes. The operating system maintains information about:

                                a) allocated partitions b) free partitions (hole)

                                In the beginning all main memory is empty and is considered one large block of available memory a hole

Only the exact memory required by a process is allocated to it. When a process terminates, it releases its memory, which can then be allocated to another process.

Memory is allocated to processes until finally no available block of memory (hole) is large enough to hold the next process. External fragmentation exists when there is enough total memory to satisfy a request but the available memory is not contiguous - memory is wasted outside the partitions. One solution to the problem of external fragmentation is:

                                a Compaction shuffle the memory contents so as to place all free memory together into one large block

===============================================================================
Q) Dynamic Storage-Allocation Problem (or) most commonly used strategies to select a free hole from the set of available holes
Ans To satisfy a request of size n from a list of free holes, one of the 3 policies below can be used:

                                1 First-fit Allocate the first hole that is big enough

                                2 Best-fit Allocate the smallest hole that is big enough must search entire list if it is not ordered by size Produces the smallest leftover hole

3 Worst-fit: allocate the largest hole; must also search the entire list. Produces the largest leftover hole.
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
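As a sketch, the three policies can be expressed over a plain list of hole sizes (the function name and tie-breaking details are illustrative assumptions):

```python
def pick_hole(holes, n, strategy="first"):
    """Select a free hole of size >= n from a list of hole sizes.

    Returns the index of the chosen hole, or None if no hole fits.
    """
    # All holes big enough, as (size, position) pairs
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    if not candidates:
        return None
    if strategy == "first":
        return min(candidates, key=lambda t: t[1])[1]  # earliest fitting hole
    if strategy == "best":
        return min(candidates)[1]                      # smallest fitting hole
    if strategy == "worst":
        return max(candidates)[1]                      # largest hole
    raise ValueError("unknown strategy: " + strategy)
```

For example, with holes of sizes 100, 500, 200, 300 and 600 and a request of size 212, first-fit picks the 500 hole, best-fit the 300 hole (smallest leftover) and worst-fit the 600 hole (largest leftover).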

                                ===============================================================================Q) Non-Contiguous Memory allocation

1 Paging: paging is a memory management scheme that permits non-contiguous memory allocation.
0  P1 Block0 (0th address to 3rd address)
4  P2 Block1 (4th address to 7th address)
8     Block2 (8th address to 11th address)

12 P1 Block3 (12th address to 15th address)

The logical address space of a process can be noncontiguous. Ex: P1 is allocated Block0 and Block3.

1 Divide physical memory into fixed-sized blocks called frames (size is a power of 2).
2 Divide logical memory into blocks of the same size, called pages.
3 The logical address is divided into 2 parts:

Page number (p) - the page table is indexed by the page number
Page offset (d) - the displacement within the page

4 A page table is allocated to each process (a pointer to the page table is stored in the PCB of the process). The page table translates logical to physical addresses: page 0 is in frame 1, page 1 is in frame 4, etc.
5 Internal fragmentation may occur due to paging.

6 If the size of the logical address space is 2^m and the page size is 2^n, then the higher-order m−n bits of the logical address give the page number and the n lower-order bits give the displacement within the page.

Ex: size of logical address space = 8 = 2^3, so m = 3

page size = 4 = 2^2, so n = 2

So m−n = 1 bit for the page number, to represent 0 and 1

remaining n = 2 bits for displacement within the page
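The page-number/offset split for a page size of 2^n can be sketched with bit operations (hypothetical helper, mirroring the m = 3, n = 2 example above):

```python
def split_address(addr, n):
    """Split a logical address into (page number, offset) for page size 2^n.

    The high m-n bits form the page number; the low n bits the offset.
    """
    return addr >> n, addr & ((1 << n) - 1)
```

With n = 2, logical address 6 (binary 110) splits into page 1, offset 2.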

                                Q) Implementation of Page Table

                                Page table can be kept as

                                1) Set of dedicated registers efficient if the page table is small

2) The page table can be kept in memory. The page-table base register (PTBR) points to the page table, and the page-table length register (PTLR) indicates the size of the page table. Changing page tables then requires changing only the values in these registers.
Advantage of keeping the page table in memory:
a Less context-switch time
Disadvantage:
b Two memory accesses are required: one memory access to read the page table entry and another to access the required memory address.

                                This problem can be solved by using a special fast-lookup hardware cache called associative memory or translation look-aside buffers(TLBs)

                                i Each entry in TLB consists of 2 parts

                                I) page number field

                                II) Value field or frame number field

ii When a logical address is generated by the CPU, its page number is presented to the TLB. The page number is compared with all the entries of the TLB simultaneously.

If the page number is found in the TLB, its frame number is immediately available → TLB hit. If the page number is not found, check the page table → TLB miss.

By using a TLB, the search is fast. But TLB hardware is expensive, so the TLB size is kept small; hence the TLB contains only a few frequently used page table entries.
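The hit/miss logic above can be sketched with dictionaries standing in for the TLB and the in-memory page table (illustrative names; no TLB eviction policy is shown):

```python
def lookup(page, tlb, page_table):
    """Consult the TLB first; on a miss fall back to the in-memory
    page table and cache the translation in the TLB."""
    if page in tlb:
        return tlb[page], "hit"      # frame number immediately available
    frame = page_table[page]         # extra memory access on a miss
    tlb[page] = frame                # cache for subsequent references
    return frame, "miss"
```

The first reference to a page is a miss; repeating it hits in the TLB.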

Q) Protection in Paging
A valid/invalid bit is attached to each entry in the page table. The OS sets this valid/invalid bit for each page to allow or disallow access to the page. When the bit is set to invalid, the page is not in the process's logical address space, and an access generates a trap to the OS.

                                Q) Shared pages in Paging

                                Reentrant code is the code that never changes during execution Reentrant code can be shared

1 One copy of read-only (reentrant) code is shared among processes (e.g. text editors, compilers, window systems). Ex: ed1, ed2, ed3 is shared between processes P1 and P2.

                                2 Shared code must appear in same location in the logical address space of all processes

                                Each process keeps a separate copy of the private code and data EX data1 and data2

                                The pages for the private code and data can appear anywhere in the logical address space

                                ==============================================================================Q) Segmentation

Segmentation is a memory management scheme that supports the user's view of memory.

                                When the user program is compiled the compiler generates segments like

                                1) The code segment

                                2) Global variables segment

                                3) Heap memory segment

                                4) Stack segment etc

                                Each entry in segment table has

segment base (the starting physical address of the segment) and segment limit (specifies the length of the segment)

                                Logical address is divided into 2 parts

                                segment-number offset

                                Segment table is indexed by segment number

                                Segment-table base register (STBR) stores the location of segment table in main memory

                                Segment-table length register (STLR) stores number of segments used by a program

The segment number (say 's') is used to find the entry in the segment table. The required entry is 's' locations away from the beginning of the segment table. Once the required entry in the segment table is found, the offset ('d') is compared with the limit.

If (offset < limit), then the offset is added to the base entry to generate the physical address.
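The segment-table translation can be sketched as follows (hypothetical names; the limit check mirrors the rule above):

```python
def seg_translate(s, d, seg_table):
    """Translate (segment number s, offset d) to a physical address.

    seg_table[s] = (base, limit); traps if the offset is out of range.
    """
    base, limit = seg_table[s]
    if d < limit:
        return base + d
    raise MemoryError("trap to OS: offset exceeds segment limit")
```

For example, if segment 2 has base 4300 and limit 400, offset 53 maps to physical address 4353, while offset 400 traps.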

Q) Shared Segments
Code sharing occurs at the segment level. Shared segments must have the same segment number.

Allocation - a dynamic storage-allocation problem; use best fit/first fit; may cause external fragmentation.

Protection - protection bits are associated with segments: read/write/execute privileges. Keeping an array in a separate segment lets the hardware check for illegal array indexes.

===============================================================================
Q) Virtual memory

Ans In paging and segmentation, a program will execute only if the entire process is in main memory. With virtual memory, only part of the program needs to be in memory for execution.

1 A program larger than the available main memory can still run, as only part of the program is loaded into main memory. The logical address space can therefore be much larger than the physical address space.

                                2 Allows address spaces to be shared by several processes

                                Virtual memory can be implemented via Demand paging Demand segmentation

The large blank space between heap and stack is part of the virtual address space. Virtual memory allows files and memory to be shared by 2 or more processes through page sharing, as shown in figure (b). Figure (a) depicts virtual memory that is larger than physical memory.

                                ==============================================================================

Q) Demand paging
Demand paging is the technique of loading pages from disk to main memory only when a page is needed. Hence, using a smaller amount of physical memory, we get faster response.

                                Demand paging is similar to PAGING + SWAPPING

                                Instead of swapping a whole process the pager brings only those pages needed into memory

                                To distinguish between pages that are in memory and pages that are on the disk validinvalid bit is used

A valid/invalid bit is attached to each entry in the page table. When this bit is set to valid, the page is legal and in memory. When this bit is set to invalid, either the page is illegal or the page is on disk.

When the CPU generates a logical address whose page table entry is set to invalid, a page fault occurs and the required page is on disk. A section of the hard disk called swap space is used to hold pages that are not present in memory.

                                Q) Procedure for Handling a Page Fault

1) The CPU generates a logical address, and if the valid/invalid bit for the page containing this logical address is set to invalid, a page fault occurs.

                                2) Page fault causes a trap to operating system

3) Check if the logical address is within the logical address space of the process, i.e. the PTBR and PTLR are checked.
if (the required logical address is not within the logical address space of the process)
terminate the process
else
the page is legal but not in memory; the page is on disk

4) To bring the required page into memory: find a free main-memory frame, map the address to a disk block, then fetch the disk block and load it into the free frame.
5) When the required page is brought into memory, update the page table to indicate that the page is in memory.
6) Restart the instruction that was interrupted by the illegal-address trap. The process will continue as if the page had always been in memory.

Q) What is pure demand paging
Ans If no page belonging to the executing process is in main memory, the process will fault for every page it needs. Page faults occur until every page that is needed is in memory. This scheme is known as pure demand paging: never bring a page into memory until it is required.

Q) Page Replacement
1 Find the location of the desired page on the disk.
2 Find a free frame:

i) If there is a free frame, use it.
ii) If there is no free frame, use a page replacement algorithm to select some page in memory that is not really in use (the victim) and swap it out.
iii) If the modify bit associated with the victim frame is set to 1, the page is written to disk; otherwise there is no need to write it to disk. Change the valid/invalid bit in the victim page's page table entry to invalid, to indicate that the page is no longer in memory.

3 Read the desired page into the freed frame and update the frame and page tables.
4 Restart the user process.

=================================================================================================
Q) Page Replacement Algorithms
Goal: produce a low page-fault rate.
The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

1 First In First Out (FIFO) page replacement algorithm: uses the time when a page was brought into memory. The page that was brought into memory first is replaced, i.e. the oldest page is chosen for replacement. We create a FIFO queue to hold all pages in memory.

                                When a page is brought into memory we insert it at the tail of the queue and we replace the page at the head of the queue

FIFO: 15 page faults.
A problem with FIFO is Belady's anomaly. For the reference string shown below:
1 2 3 4 1 2 5 1 2 3 4 5
When the number of frames allocated to a process is 1, the number of page faults = 12 under FIFO replacement.
When the number of frames is 2, the number of page faults = 12.
When the number of frames is 3, the number of page faults = 9.
When the number of frames is 4, the number of page faults = 10.
As the number of frames allocated to a process increases, page faults should decrease. But under FIFO replacement, page faults can increase when frames are added.
2 Optimal Page replacement Algorithm: replace the page that will not be used for the longest period of time. It is used for measuring how well other algorithms perform. It is not practical to implement the OPT algorithm, as it is not possible to know what the future references will be.
ex

                                Optimal page replacement 9 page faults
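Belady's anomaly under FIFO can be reproduced with a short simulation (a sketch; the reference string is the one discussed above):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with `frames` frames."""
    q = deque()          # pages in the order they were brought in
    resident = set()     # pages currently in memory
    faults = 0
    for p in refs:
        if p not in resident:
            faults += 1
            if len(resident) == frames:          # no free frame:
                resident.discard(q.popleft())    # evict the oldest page
            q.append(p)
            resident.add(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
```

On this string, 3 frames give 9 faults but 4 frames give 10: adding a frame increased the fault count.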

3 Least Recently Used (LRU) page replacement algorithm: replace the page that has not been used for the longest period of time. LRU makes use of the time of last use of each page. Ex:

                                Number of page faults = 12

                                LRU requires hardware assistance to determine the time of last use of the page We can use 1) Stack or 2) Counter to implement LRU page replacement algorithm

1) Stack implementation - keep a stack of page numbers in a doubly linked list form. When a page is referenced, it is moved to the top of the stack. This implementation requires 6 pointers to be changed.

2) Counter implementation - a time-of-use field is allocated to each page table entry. The CPU maintains a counter which is incremented for every memory reference. On each page reference, the counter value is copied into the time-of-use field of the referenced page.
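A sketch of LRU fault counting, using Python's OrderedDict to play the role of the stack (a re-referenced page is moved to the top; the bottom holds the least recently used page):

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults under LRU replacement with `frames` frames."""
    stack = OrderedDict()    # keys in least- to most-recently-used order
    faults = 0
    for p in refs:
        if p in stack:
            stack.move_to_end(p)             # re-referenced: move to top
        else:
            faults += 1
            if len(stack) == frames:
                stack.popitem(last=False)    # evict least recently used
            stack[p] = True
    return faults
```

On the reference string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 with 3 frames, this gives the 12 faults quoted above.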

Disadvantages of the LRU page replacement algorithm: updating the counter or stack must be done for every memory reference, hence LRU is slow, and the implementation requires hardware assistance.
==============================================================================
Q) LRU approximation page replacement
1) Using a reference bit
2) Using the additional-reference-bits algorithm
3) Second-chance algorithm
4) Enhanced second-chance algorithm
5) Counting-based page replacement

                                Has two schemes1) Least Frequently Used (LFU)2) Most Frequently Used (MFU)

6) Page-buffering algorithms
Q) What is Thrashing
Ans A process is thrashing if it is spending more time paging than executing. Thrashing may occur when global (or) local page replacement is used, as explained below. The solution to thrashing is the working-set strategy: if we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know how many frames a process needs, we use the locality model of process execution.

Here a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes its current locality.
The working-set model uses a working set and a parameter Δ = the working-set window. The set of pages in the most recent Δ page references is the working set.
Let WSSi be the working-set size for process Pi, i.e. Pi needs WSSi frames.
Let D be the total demand for frames; then D = Σ WSSi

and let m be the total number of available frames.
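The working-set sizes and the total demand D defined above can be sketched as (hypothetical helpers; refs[t] is the page referenced at time t):

```python
def working_set(refs, t, delta):
    """WSS for one process: the distinct pages referenced in the last
    delta references up to and including time t."""
    return set(refs[max(0, t - delta + 1): t + 1])

def total_demand(streams, t, delta):
    """D = sum of the working-set sizes of all processes at time t."""
    return sum(len(working_set(r, t, delta)) for r in streams)
```

For instance, with references 1,2,3,2,1,4 and Δ = 3, the working set at time 5 is {2, 1, 4}, so that process demands 3 frames.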

If D > m, thrashing occurs because some processes will not have enough frames. So if (D > m)

the OS selects a process to suspend; else the OS may initiate another process. The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.
========================================================================
Q) How does thrashing occur in global and local page replacement
Ans In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, causing a page fault for some other process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more and more processes queue up for the paging device,

the READY queue empties and the CPU becomes idle. So the OS introduces new processes, since the CPU is free. This further increases the number of page faults.

In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs only the frames allocated to that process can be replaced. When a process starts thrashing, other processes are not affected. But since processes will be queued for the paging device most of the time, the average service time for a page fault increases; hence the effective access time increases even for a process that is not thrashing.
Q) Page Table Structure (or) Structure of the page table in memory

                                Hierarchical Paging Hashed Page Tables Inverted Page Tables

Hierarchical Paging: if the page table is large, we break it up into multiple page tables, i.e. the page table itself is paged. A simple technique is a two-level page table.
Two-Level Paging Example: a logical address (on a 32-bit machine with 4K page size) is divided into:
1 a page number consisting of 20 bits
2 a page offset consisting of 12 bits
Since the page table is paged, the page number is further divided into:
1 a 10-bit page number
2 a 10-bit page offset
Thus a logical address is as follows:
page number | page offset
p1 | p2 | d
10 | 10 | 12
where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table.
Two-Level Page-Table Scheme
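The 10 | 10 | 12 split can be sketched with shifts and masks (hypothetical helper):

```python
def split_two_level(addr):
    """Split a 32-bit logical address into (p1, p2, d)."""
    d  = addr & 0xFFF            # low 12 bits: offset within the page
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: index into inner page table
    p1 = (addr >> 22) & 0x3FF    # high 10 bits: index into outer page table
    return p1, p2, d
```

For example, an address built as (3 << 22) | (5 << 12) | 7 splits into p1 = 3, p2 = 5, d = 7.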

                                Address-Translation Scheme Address-translation scheme for a two-level 32-bit paging architecture

Hashed Page Tables (or) hash table: hashed page tables are common in address spaces > 32 bits. Each entry in the hash table contains a linked list of elements that hash to the same location. The hash table is indexed by hash value. Each element consists of 3 fields:

                                1 Page number2 Frame number3 Pointer to next element in linked list

The algorithm works as follows: the page number is passed to the hash function to get a hash value. Locating the hash value in the hash table is easy, as the hash table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address. If there is no match, the next entries in the linked list are searched.
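A minimal sketch of a hashed page table with chained buckets (illustrative names; Python lists stand in for the linked lists):

```python
NBUCKETS = 4
table = [[] for _ in range(NBUCKETS)]   # each bucket: list of (page, frame)

def hpt_insert(page, frame):
    table[hash(page) % NBUCKETS].append((page, frame))

def hpt_lookup(page):
    """Walk the chain in the page's bucket, comparing page numbers."""
    for p, frame in table[hash(page) % NBUCKETS]:
        if p == page:
            return frame
    return None  # not found: the page is not resident

hpt_insert(5, 9)
hpt_insert(1, 2)   # in CPython, pages 1 and 5 land in the same bucket,
                   # demonstrating the chained search
```

Lookups follow the chain until the page number matches, mirroring the algorithm above.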

                                Inverted Page Table

An inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table has:

                                Process-id Page number

The logical address also has 3 parts: process-id, page number, offset.
The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the ith entry, frame number i is combined with the offset to get the physical address. If there is no match, the address is illegal.
Advantages and disadvantages

It decreases the memory needed to store the page tables, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to limit the search to one, or at most a few, page-table entries. Inverted Page Table Architecture
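A sketch of inverted-page-table translation, with the frame number recovered as the index of the matching entry (the table contents are illustrative, and 4K pages are assumed):

```python
# inverted page table: one entry per physical frame, each holding
# (process_id, page_number); the index in the list IS the frame number
ipt = [
    ("P1", 0),
    ("P2", 3),
    ("P1", 7),
    ("P2", 1),
]

OFFSET_BITS = 12   # assumed 4K pages

def translate(pid, page, offset):
    # linear search over frames for a matching (pid, page) entry
    for frame, entry in enumerate(ipt):
        if entry == (pid, page):
            return (frame << OFFSET_BITS) | offset
    raise MemoryError("illegal address")

print(hex(translate("P1", 7, 0x2A)))  # 0x202a  (frame 2, offset 0x2A)
```

The linear search is exactly the cost the notes mention; a hash table over (process-id, page) shortens it.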

                                Unit-IV CPU Scheduling

Basic concepts: To maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns the CPU to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle: Process execution begins with a CPU burst, followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

An I/O-bound program has many short CPU bursts; a CPU-bound program has a few long CPU bursts.

Schedulers: a module in the OS that makes scheduling decisions.
1. Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
2. Medium-term scheduler: selects which processes should be swapped in/out of memory.
3. Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (CPU) scheduler.

Dispatcher: gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

1. switching context
2. switching to user mode
3. jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency; this must be kept small.

Ready Queue: The ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of the processes.

                                Pre-emptive Scheduling

                                CPU scheduling decisions take place under one of four conditions

1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state, for example in response to an interrupt
3. When a process switches from the waiting state to the ready state, say at completion of I/O
4. When a process terminates

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one.

If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive or cooperative. In a non-preemptive system, a process runs until it gives up the CPU itself, i.e. when it waits for I/O or when it completes. Otherwise the system is said to be pre-emptive.

Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

                                Note that pre-emptive scheduling can cause problems when two processes share data because one process may get interrupted in the middle of updating shared data structures

Preemption can also be a problem if the kernel is busy servicing a system call (e.g. updating critical kernel data structures) when the preemption occurs. Solution: wait until the system call has either completed or blocked before allowing the pre-emption. But this solution is problematic for real-time systems.

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. But this should be done only for very short pieces of code that take little CPU time.

Scheduling Criteria (used to compare CPU scheduling algorithms):
1. CPU utilization: in a real system, CPU utilization ranges from about 40% to 90%.
2. Throughput: number of processes that are completed per unit time.
3. Turnaround time: time when the process completed minus time when the process was submitted, i.e. it includes waiting time and burst time.
4. Waiting time: amount of time that a process spends waiting for the CPU.
5. Response time: time when the first response came minus time when the process was submitted.

Optimization Criteria: max CPU utilization, max throughput, min turnaround time, min waiting time, min response time.

CPU Scheduling Algorithms

1) First-Come, First-Served (FCFS) Scheduling
FCFS is non-pre-emptive. Processes are scheduled in the order they arrive. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages:
1. Average waiting time is quite long.

2. Convoy effect (it occurs as small processes wait in the queue for a big process to leave the CPU).

Example:
Process   Burst Time
P1        24
P2        3
P3        3

Suppose that the processes arrive in the order P1, P2, P3.

The Gantt chart for the schedule is:

    | P1 | P2 | P3 |
    0    24   27   30

Waiting time for P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.

2) Shortest-Job-First (SJF) Scheduling
Schedule the process with the shortest CPU burst time. If the CPU burst times of two processes are the same, FCFS is used.
Advantages: average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either pre-emptive or non-pre-emptive:
1. Non-pre-emptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Pre-emptive SJF, also called Shortest-Remaining-Time-First (SRTF): if a newly arrived process is shorter than what remains of the currently executing process, pre-empt the currently executing process.

Example:
Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

SJF (non-preemptive) Gantt chart:

    | P1 | P3 | P2 | P4 |
    0    7    8    12   16

Average waiting time = [0 + (8-2) + (7-4) + (12-5)] / 4 = 4
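The non-preemptive SJF example above can be verified with a short simulation (an illustrative sketch, not part of the notes):

```python
def sjf_nonpreemptive(procs):
    """procs: list of (name, arrival, burst). Returns {name: waiting_time}."""
    procs = sorted(procs, key=lambda p: p[1])
    time, waiting, remaining = 0, {}, list(procs)
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                        # CPU idle until next arrival
            time = min(p[1] for p in remaining)
            continue
        # pick the shortest burst among arrived processes; FCFS breaks ties
        name, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))
        waiting[name] = time - arrival
        time += burst
        remaining.remove((name, arrival, burst))
    return waiting

w = sjf_nonpreemptive([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(w, sum(w.values()) / 4)  # {'P1': 0, 'P3': 3, 'P2': 6, 'P4': 7} 4.0
```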

                                Example of Preemptive SJF

Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

SJF (preemptive) Gantt chart:

    | P1 | P2 | P3 | P2 | P4 | P1 |
    0    2    4    5    7    11   16

Average waiting time = (9 + 1 + 0 + 2)/4 = 3

3) Priority Scheduling
A priority number (an integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. preemptive 2. non-preemptive.

Problem: starvation, i.e. low-priority processes may never execute. Solution: aging, i.e. as time progresses, increase the priority of the process.
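A minimal sketch of non-preemptive priority scheduling with FCFS tie-breaking (the process set is illustrative, not from the notes):

```python
def priority_schedule(procs):
    """procs: list of (name, priority, burst), all arriving at time 0.
    Smallest priority number = highest priority; FCFS order breaks ties."""
    order = sorted(enumerate(procs), key=lambda x: (x[1][1], x[0]))
    time, waiting = 0, {}
    for _, (name, prio, burst) in order:
        waiting[name] = time   # all arrive at 0, so waiting = start time
        time += burst
    return waiting

w = priority_schedule([("P1", 3, 10), ("P2", 1, 1), ("P3", 3, 2)])
print(w)  # {'P2': 0, 'P1': 1, 'P3': 11}
```

With aging, one would periodically decrease each waiting process's priority number so P3-like processes cannot starve behind a stream of higher-priority arrivals.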

4) Round Robin (RR)
Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance:
1. If q is large, RR behaves like FIFO.
2. If q is small, q must still be large with respect to the context-switch time, otherwise the overhead is too high.

Example of RR with time quantum = 4:
Process   Burst Time
P1        24
P2        3
P3        3

The Gantt chart is:

    | P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
    0    4    7    10   14   18   22   26   30

Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 = 5.66

5) Multilevel Queue Scheduling
The ready queue is partitioned into separate queues. For example, the ready queue can be divided into 2 queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, the Round Robin algorithm can be used in the foreground queue and the FCFS algorithm in the background queue. Scheduling must also be done between the queues. This can be done in 2 ways:
1. Fixed-priority scheduling

The foreground queue has the highest priority. All processes in the foreground queue must be completed, and the queue must be empty, before the processes in the background queue are given the CPU, as shown in the figure below. Disadvantage: starvation.

2. Time slice: each queue gets a certain amount of CPU time, which it can schedule amongst its processes, e.g. 80% to the foreground queue (RR) and 20% to the background queue (FCFS).
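The Round Robin example above (quantum = 4) can be checked with a small simulation; this sketch (not from the notes) assumes all processes arrive at time 0:

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, burst), all arriving at time 0, in queue order."""
    queue = deque(procs)
    bursts = dict(procs)
    time, finish = 0, {}
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)
        time += run
        if left > run:
            queue.append((name, left - run))   # preempted: back of the queue
        else:
            finish[name] = time
    # waiting time = completion time - burst time (arrival is 0 for all)
    return {n: finish[n] - bursts[n] for n in finish}

w = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], 4)
print(w, sum(w.values()) / 3)  # P1 waits 6, P2 waits 4, P3 waits 7; avg 17/3
```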

6) Multilevel Feedback Queue Scheduling
A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes will be in the highest-priority queue. If a process uses too much CPU time, it is moved to a lower-priority queue. If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues Q0, Q1, Q2:

1. Q0: time quantum 8 milliseconds
2. Q1: time quantum 16 milliseconds
3. Q2: FCFS

                                Scheduling

1. A process entering the ready queue is put in Q0. When it gains the CPU, the process receives 8 milliseconds. If it does not finish in 8 milliseconds, it is moved to Q1.
2. At Q1 the process receives an additional 16 milliseconds. If it still does not complete, it is moved to Q2.

Q) Synchronization
Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of process synchronization is the Producer-Consumer (or Bounded-Buffer) problem: make sure that the producer process does not add data when the buffer is full, and that the consumer process does not take data when the buffer is empty.

The code for the producer process can be written as follows:

    while (true) {
        /* produce an item in nextProduced */
        while (counter == BUFFER_SIZE)
            ;  /* do nothing */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

Here an integer variable counter is initialized to zero. Counter is incremented every time a new item is added to the buffer, and decremented every time an item is removed from the buffer. The index in always points to the next empty slot after the last filled slot; the index out always points to the first filled slot, at the head of the circular queue.

The code for the consumer process is shown below:

    while (true) {
        while (counter == 0)
            ;  /* do nothing */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
    }

The above producer and consumer code produces a RACE CONDITION. A race condition occurs when the outcome of the execution of several co-operating processes depends on the order in which accesses to shared data take place.

For example, let counter = 5. If the producer produces one item and the consumer consumes one item, the correct value of counter is still 5. But counter++ and counter-- are each implemented as several machine instructions (load, add or subtract, store). If these instruction sequences are interleaved, the final stored value can be 4 (when the consumer's store happens last) or 6 (when the producer's store happens last), i.e. an inconsistent result.

Solution to avoid the race condition: make sure that only one process at a time updates the shared variable (here, counter).
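One common way to enforce this is a mutual-exclusion lock. This sketch (not the bounded-buffer code above; it uses a lock around a bare counter, purely to show the effect) demonstrates that protected updates stay consistent:

```python
import threading

counter = 0
lock = threading.Lock()

def producer(n):
    global counter
    for _ in range(n):
        with lock:          # only one thread updates counter at a time
            counter += 1

def consumer(n):
    global counter
    for _ in range(n):
        with lock:
            counter -= 1

t1 = threading.Thread(target=producer, args=(100_000,))
t2 = threading.Thread(target=consumer, args=(100_000,))
t1.start(); t2.start(); t1.join(); t2.join()
print(counter)  # 0 — consistent, because the updates are mutually exclusive
```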

Q) Critical section problem
Ans: Each process has a segment of code called its critical section. The critical-section code is the code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical-section code at the same time. For this to happen:
1. Each process must request permission to enter its critical section (the entry section).
2. The code at the end of the critical section is the exit section.
3. The rest of the code after the critical section is the remainder section.

A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes that are not in their remainder sections take part in deciding which process will enter its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: every process must eventually be allowed to enter its critical section; no process may wait indefinitely.

Q) Peterson's solution for the critical-section problem (software-based solution)
Ans: For two processes, Peterson's solution uses two shared variables: an array flag[] (flag[i] = TRUE means process i wants to enter its critical section) and a variable turn. Process 1 executes:

    do {
        flag[1] = TRUE;
        turn = 2;
        while (flag[2] && turn == 2)
            ;  /* busy wait */
        /* critical section */
        flag[1] = FALSE;
        /* remainder section */
    } while (TRUE);

Process 2 executes the symmetric code with flag[2] and turn = 1. A sample interleaving:

    Time  Process 1                            Process 2
    T0    flag[1] = TRUE; turn = 2             flag[2] = TRUE
    T1                                         turn = 1
    T2    while (flag[2] && turn == 2):        while (flag[1] && turn == 1):
          since turn is now 1, the loop        loops continuously until flag[1]
          does not run                         becomes FALSE
    T3    enters critical section
    T4    flag[1] = FALSE                      enters critical section
    T5    enters remainder section             flag[2] = FALSE, then enters
                                               the remainder section
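As an illustration only (not part of the original notes), Peterson's algorithm can be sketched in Python with two threads. This relies on CPython's global interpreter lock making the reads and writes of flag and turn appear sequentially consistent; real hardware does not guarantee this without memory barriers.

```python
import threading

flag = [False, False]
turn = 0
count = 0
N = 10_000

def worker(me):
    global turn, count
    other = 1 - me
    for _ in range(N):
        flag[me] = True
        turn = other                          # give the other thread priority
        while flag[other] and turn == other:
            pass                              # busy wait (entry section)
        count += 1                            # critical section
        flag[me] = False                      # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(count)  # 20000: no increment is lost, so mutual exclusion held
```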

10.1 File Concept
Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

                                File Attributes

1. Name: the name of the file, for user understanding.
2. Identifier: used by the OS to identify a file (e.g. inode number).
3. Type: text file, executable file, binary file, etc.
4. Location: the location of the file on the hard drive.
5. Size: the current size of the file.
6. Protection: controls who can read, write, or execute the file.
7. Time, date and user identification: gives the information of when the file was created, last modified and last used, etc. This data is used for protection, security and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier locates the other file attributes. The directory structure is kept on disk.

                                File Operations

The file Abstract Data Type supports many common operations:
1. Creating a file: find free space on disk and add an entry to the directory.
2. Writing a file: write data at the current file-position pointer location and update the pointer.
3. Reading a file: read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek): change the pointer location to a given value.
5. Deleting a file: free the space allocated to the file and remove its entry from the directory.
6. Truncating a file: delete the data and update the file size.

                                Open File Table

Most OSes require that files be opened before access and closed after access. Information about all files that are currently open in the system is stored in an open-file table. This system-wide open-file table is maintained by the OS, and information about the files opened by a process is stored in the process's own open-file table. The open-file table has the following information:

1. File pointer: records the current position in the file for the next read or write.
2. File-open count: how many times the current file has been opened (simultaneously by different processes) and not yet closed. When this counter reaches zero, the file's entry can be removed from the table.

3. Disk location of the file
4. Access rights

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With advisory locks, the software developers make sure that locks are acquired and released appropriately.
4. With mandatory locks, the OS makes sure the locking is enforced.
5. UNIX uses advisory locks and Windows uses mandatory locks.

                                File Types

Implementing file types: a common technique is to include the type as part of the filename. The name is split into 2 parts:
1. File name
2. Extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on that file. For example, when a user opens Microsoft Word and chooses File > Open, the user can specify the filename without its extension; the application looks for a file with the given name and the extension it expects. In such systems, extensions are conventions and are not enforced by the operating system. UNIX systems instead use a crude magic number stored at the beginning of the file to indicate the type of file. In the Mac OS X operating system, a file produced by a word processor records the word processor's name as its creator; when the user double-clicks on the file, that word processor opens it. Some common file types are shown below.
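For illustration, splitting a filename into name and extension at the period can be done with Python's standard library:

```python
import os.path

name, ext = os.path.splitext("report.docx")
print(name, ext)   # report .docx

# a file with no extension keeps an empty extension string
print(os.path.splitext("README"))   # ('README', '')
```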

File Structure: Some files contain an internal structure which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and where the first instruction is located.

If an OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, where to find the first executable statement, etc.).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently; the data fork contains the code or data.

                                Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units such as a single byte, a data record, or a structure. The number of logical units which fit into one physical block determines the packing. For example, if a student file is organized internally as records, each record takes 64 bytes, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e. 8 records can be stored per block.

                                Internal fragmentation may occur due to this
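The packing arithmetic can be illustrated as follows (the 1000- and 1001-record file sizes are hypothetical examples, not from the notes):

```python
import math

BLOCK_SIZE = 512           # physical block size in bytes
RECORD_SIZE = 64           # one logical record

packing = BLOCK_SIZE // RECORD_SIZE           # records per block
print(packing)                                # 8

# internal fragmentation for a file of 1000 records (a multiple of 8):
blocks = math.ceil(1000 / packing)
print(blocks, blocks * BLOCK_SIZE - 1000 * RECORD_SIZE)   # 125 0

# with 1001 records the last block is only partially full:
blocks = math.ceil(1001 / packing)
print(blocks * BLOCK_SIZE - 1001 * RECORD_SIZE)           # 448 bytes wasted
```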

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: records are read and written sequentially, one record after another; we cannot randomly access an arbitrary record. A sequential-access file emulates magnetic tape operation and generally supports the following operations:
1. read next: read a record and advance to the next position
2. write next: write a record and advance to the next position
3. rewind
4. skip n records: may or may not be supported; n may be limited to positive numbers, or may be limited to +/- 1

2. Direct access (or relative access): a file is made up of fixed-length records, so that we can randomly access any record. There is no restriction on the order of reading or writing. Direct-access files are useful when we need to access large amounts of data randomly, e.g. a database.

Operations supported include the following (n is the relative block number, i.e. relative to the beginning of the file):
1. read n: read record number n
2. write n: write record number n
3. jump to record n: n could be 0 or the end of the file
4. query current record: used to return to this record later
Sequential access can easily be emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct-access method. We construct an index for the file, which contains pointers to the various blocks. To search the file, we first search the index and then use the pointer to access the file directly.

If a student file has 120,000 records sorted by student id, we can create an index consisting of the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120,000 records = 120000/64 = 1875, so the index has 1875 entries. The index is kept in main memory so that searching becomes faster.

If the index itself becomes too large, we create an index for the index file (e.g. a B-tree).
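A sketch of searching such an in-memory index; for illustration it assumes student ids 0..119999 packed 64 per block (an assumption, not from the notes):

```python
from bisect import bisect_right

RECORDS_PER_BLOCK = 64

# hypothetical index: the first student id stored in each disk block
index = [b * RECORDS_PER_BLOCK for b in range(1875)]

def block_for(student_id):
    """Find which block holds student_id by binary-searching the index."""
    return bisect_right(index, student_id) - 1

print(block_for(0))        # 0
print(block_for(63))       # 0  (last record of the first block)
print(block_for(64))       # 1
print(block_for(119999))   # 1874 (the last block)
```

Only the one data block found this way needs to be read from disk.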

                                Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                                Three major methods of allocating disk space are

                                1 Contiguous 2 Linked

                                3 Indexed

                                1 Contiguous allocation Each file occupies set of contiguous blocks on disk

                                If the file is n blocks long and starts at location b then it occupies blocks b b+1 b+2hellipb+n-1

                                The directory entry for each file indicates the address of starting block(ie b) and length (n)

                                Advantages

1. Direct access to block i is easy: the block is at disk address b + i. Since the file is stored sequentially, sequential access is also easy. Hence access is easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

                                Disadvantages

1. It is difficult to find space for a new file, or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape, then copy the files back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire size may not be used by the file at the beginning, which leads to wastage of space.

If the size of the file is not known in advance, then: (i) if we allocate too little space to the file, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This is possible as long as such space exists, but it is time consuming. (ii) If we overestimate the amount of space needed, much of the space goes unused; this leads to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous-allocation scheme: a chunk of contiguous blocks is allocated at first, and if the file needs more space, another chunk of contiguous blocks, known as an extent, is added. The directory entry then records the location and block count of the first chunk, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.
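The first-fit and best-fit hole selection mentioned above can be sketched as follows (the hole list is illustrative):

```python
# holes: list of (start_block, length) of free contiguous regions
holes = [(0, 5), (12, 20), (40, 8), (60, 10)]

def first_fit(holes, need):
    """Return the start of the first hole large enough, or None."""
    for start, length in holes:
        if length >= need:
            return start
    return None

def best_fit(holes, need):
    """Return the start of the smallest hole large enough, or None."""
    fits = [(length, start) for start, length in holes if length >= need]
    return min(fits)[1] if fits else None

print(first_fit(holes, 8))  # 12 (first hole big enough)
print(best_fit(holes, 8))   # 40 (smallest hole big enough: exactly 8 blocks)
```

Best fit leaves the 20-block hole intact for larger future requests, at the cost of scanning the whole list.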

Linked Allocation
Each file is a linked list of disk blocks; the disk blocks can be scattered anywhere on the disk. The directory contains a pointer to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9, continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; the pointer is nil for an empty file, and the size field of an empty file is 0.

To add more data to an existing file, a free block is found, the data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. Easy to find space for a new file, or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is high.
2. The number of disk seeks required is high.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct-access files: to access the i-th block, i disk reads are required.
5. Not reliable: data is lost if a pointer is lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks to files. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk access time, and reduces the space wasted on pointers, since fewer pointers are needed. Free-list management also becomes simpler. Disadvantage: internal fragmentation, as more space is wasted when a cluster is partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file-allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                                Advantages

1. Easy to find space for a new file, or to add more data to an existing file.
2. Random access time is improved.

Disadvantage: the number of disk seeks increases. The disk head must move to the FAT to find the location of a block, then move to the location of the block itself.
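Following a FAT chain can be sketched with the 5-block file from the linked-allocation example (blocks 9, 16, 1, 10, 25); the table representation here is illustrative:

```python
EOF = -1   # stands in for the special end-of-file table value

# FAT indexed by block number: each entry names the next block of the file
fat = {9: 16, 16: 1, 1: 10, 10: 25, 25: EOF}

def file_blocks(fat, start):
    """Follow the FAT chain from the file's starting block."""
    blocks = []
    b = start
    while b != EOF:
        blocks.append(b)
        b = fat[b]
    return blocks

print(file_blocks(fat, 9))  # [9, 16, 1, 10, 25]
```

Because the chain lives in the FAT rather than in the data blocks, random access only walks the table, not the disk blocks themselves.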

                                3 Indexed allocation

Each file has its own index block. The index block is an array of disk-block addresses; the i-th entry in the index block points to the i-th block of the file.

                                The directory contains the address of the index block

                                When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found, the data is written to it, and the address of the block is put into the index block.

                                Advantages

1. Easy to find space for a new file, or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

                                Disadvantage

1. Seek time is high.
2. The number of disk seeks required is high.
3. Not reliable: losing the index block makes the file inaccessible.
4. The index block must be kept in memory. If this memory is not available, then we have to read the index block and then the desired data block, i.e. 2 disk accesses are required. In the linked scheme below, to access a block near the end of the file we may need to read all the index blocks before reaching the needed data block.
5. The pointer overhead of an index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store these one or two pointers.

                                Various mechanisms followed for size of index block

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header with the name of the file and a set of the first 100 disk-block addresses; for large files, the last address contains the address of another index block.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

                                3 combined scheme

For example, say there are 15 pointers of the index block in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing the addresses of the file's data blocks; the second points to a double indirect block, and the third points to a triple indirect block, as shown below.
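Under the assumed parameters of 4 KB blocks and 4-byte block addresses (these numbers are not stated in the notes), the maximum file size of this combined scheme works out as follows:

```python
BLOCK = 4096                 # assumed block size in bytes
PTRS = BLOCK // 4            # 4-byte disk addresses: 1024 pointers per index block

direct = 12                  # data blocks addressed directly from the inode
single = PTRS                # via the single indirect block
double = PTRS ** 2           # via the double indirect block
triple = PTRS ** 3           # via the triple indirect block

max_blocks = direct + single + double + triple
print(max_blocks)                     # 1074791436 data blocks
print(max_blocks * BLOCK / 2**40)     # just over 4 TiB maximum file size
```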

                                Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                The free space list can be implemented as

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. The free-space bit map would be 001111001000...

Advantages: simple, and makes it easy to find contiguous free blocks for files.

Disadvantages:

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. Bit maps of smaller disks can be kept in main memory, but it may not be possible to keep the bit map of a larger disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.
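As a concrete sketch (not part of the original notes), the bit map from the example above can be built and scanned in Python; the 12-block disk and the free set {2, 3, 4, 5, 8} follow the example:

```python
# Free-space bit map for the example above: blocks 2, 3, 4, 5 and 8 are
# free on a 12-block disk (bit 1 = free, bit 0 = allocated).
free_blocks = {2, 3, 4, 5, 8}
bitmap = [1 if b in free_blocks else 0 for b in range(12)]
# bitmap now reads 001111001000

def first_free(bitmap):
    """Scan the bit map for the first free block, as an allocator would."""
    for i, bit in enumerate(bitmap):
        if bit == 1:
            return i
    return None  # no free block on the disk
```

A linear scan like `first_free` is why finding contiguous runs is easy with a bit map, and also why a large on-disk bit map is expensive to search.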

2. Linked list: the free-space list is implemented by linking all the free disk blocks together. A pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.

Advantage: no waste of space.

Disadvantage: contiguous space cannot be found easily; to traverse the free-space list we must read each block, which requires more I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last block contains the addresses of another n free blocks, and so on.

Advantage: the addresses of a large number of free blocks can be found quickly.

4. Counting: the free-space list can be maintained by keeping the address of the first free block and the number (n) of free contiguous blocks that follow it. Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if the count is generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it. Finally, the free-space list is updated on disk.

Q) Synchronization Hardware

The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:

1. TestAndSet() hardware instruction
2. Swap() instruction

1. The TestAndSet() instruction atomically returns the old value of a boolean variable and sets the variable to true.

Solution using TestAndSet: the shared variable lock is initialized to false. Each process repeatedly calls TestAndSet(lock) until it returns false, enters its critical section, and then sets lock back to false.

2. The Swap() instruction atomically exchanges the contents of two boolean variables.

Solution using Swap: a shared boolean variable lock is initialized to false, and each process has a local boolean variable key. A process sets key to true and repeatedly swaps lock and key until key becomes false, then enters its critical section and sets lock back to false on exit.

Both the TestAndSet and Swap instructions satisfy the mutual exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee as to the relative speeds of the processes, a very fast process could theoretically release the lock, whip through its remainder section, and re-lock the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)
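Since the instruction definitions are given only as figures in the notes, here is a software simulation (an illustrative sketch, not real hardware): an internal lock stands in for the atomicity a real CPU provides, and a simple spin lock is built on top.

```python
import threading

class AtomicBool:
    """Simulates the atomic TestAndSet() and Swap() hardware instructions.
    A real CPU executes each without interruption; here an internal lock
    provides that atomicity."""
    def __init__(self):
        self._guard = threading.Lock()
        self.value = False              # False means the lock is free

    def test_and_set(self):
        with self._guard:               # one indivisible step
            old = self.value
            self.value = True
            return old

    def swap(self, key):
        with self._guard:               # atomically exchange value and key
            self.value, key = key, self.value
            return key

lock = AtomicBool()
log = []

def worker(i):
    while lock.test_and_set():          # spin while the old value was True
        pass
    log.append(i)                       # critical section
    lock.value = False                  # release: lock = false
```

Threads calling `worker()` never overlap in the critical section, but, as noted above, nothing bounds how long an unlucky thread may spin.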

The following solution, using TestAndSet(), satisfies bounded waiting. It uses two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

                                Bounded-waiting mutual exclusion with TestAndSet( )

On exit, a process first looks, in order (starting with the next process on the list), for a process that has been waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby allowing that specific process into the critical section while continuing to block all the others.

Q) What is a semaphore?

Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1. wait() 2. signal()

Wait primitive:

wait(S) {
    while (S <= 0)
        ; // do nothing
    S--;
}

Signal primitive:

signal(S) {
    S++;
}

Each process that wishes to use a resource performs the wait() operation. When a process releases a resource, it performs the signal() operation.

Semaphore Usage:

1. A binary semaphore (or mutex lock) can take the values 0 or 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is greater than 0, a process can enter the critical section and use one of the resources. When the counter is 0 (or negative, in some implementations), a process blocks until another process frees a resource and increments the counting semaphore with a signal() call.

For example, suppose there are 3 resources and 4 processes; the value of the counting semaphore is 3.
If process P1 requires a resource, it performs a wait() operation; now the counting semaphore = 2.
If process P2 requires a resource, it performs a wait() operation; now the counting semaphore = 1.
If process P3 requires a resource, it performs a wait() operation; now the counting semaphore = 0.
If process P4 requires a resource, it performs a wait() operation and waits until a resource is available.
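The P1-P4 walk-through maps directly onto Python's `threading.Semaphore` (a sketch; the non-blocking acquire calls below stand in for the four processes so the trace can be observed without actually blocking):

```python
import threading

sem = threading.Semaphore(3)          # 3 resources available

got_p1 = sem.acquire(blocking=False)  # P1: wait(), count 3 -> 2
got_p2 = sem.acquire(blocking=False)  # P2: wait(), count 2 -> 1
got_p3 = sem.acquire(blocking=False)  # P3: wait(), count 1 -> 0
got_p4 = sem.acquire(blocking=False)  # P4: count is 0, so this fails;
                                      # a blocking wait() would block here

sem.release()                         # some process signals: count 0 -> 1
got_p4_retry = sem.acquire(blocking=False)  # now P4 can proceed
```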

                                3 Semaphores can also be used to synchronize certain operations

Semaphore Implementation: when a process is in its critical section, any other process that tries to enter the critical section must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process "spins" while waiting.

As an example of using semaphores for synchronization, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2. First we create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code:

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal(). Each semaphore has an integer value and maintains a queue of waiting processes.
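The S1-before-S2 ordering pattern can be tried with Python threads (a sketch; the statement bodies are placeholders):

```python
import threading

synch = threading.Semaphore(0)   # initialized to 0, as in the notes
order = []

def p1():
    order.append("S1")           # statement S1
    synch.release()              # signal(synch)

def p2():
    synch.acquire()              # wait(synch): blocks until P1 signals
    order.append("S2")           # statement S2

t2 = threading.Thread(target=p2)
t2.start()                       # start P2 first: it blocks on wait()
p1()                             # P1 runs S1, then signals
t2.join()
```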

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e. the process is moved from the waiting queue to the ready state).

Problems due to semaphores:

1. Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes.

2. Starvation: in starvation, one or more processes are blocked forever and never get a chance to enter the critical section. For example, this can happen if we do not specify the algorithms for adding processes to the waiting queue, or for selecting the one to be removed from the queue in the signal() call: if a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M, H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M enters, it pre-empts process L, and M is running now. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority (priority inheritance) so that M does not pre-empt L.
=====================================================================
Q) Classical problems on synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem: consider n buffers, each holding one item; a producer process, which creates items (one at a time); and a consumer process, which consumes them (one at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (counts the number of empty buffers)
2. full = 0 (counts the number of filled buffers)
3. mutex = 1 (provides mutual exclusion for access to the buffer)

Producer process:

do {
    ...
    // produce an item
    wait(empty);
    wait(mutex);
    ...
    // add item to buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:

do {
    wait(full);
    wait(mutex);
    ...
    // remove an item from buffer
    ...
    signal(mutex);
    signal(empty);
    ...
    // consume the item
    ...
} while (1);
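A runnable Python sketch of the same scheme (the buffer size and item values are illustrative assumptions):

```python
import threading
from collections import deque

N = 3                              # number of buffer slots
buffer = deque()
empty = threading.Semaphore(N)     # counts empty slots, initialized to n
full = threading.Semaphore(0)      # counts filled slots
mutex = threading.Semaphore(1)     # mutual exclusion on the buffer

def producer(items):
    for item in items:             # "produce an item"
        empty.acquire()            # wait(empty)
        mutex.acquire()            # wait(mutex)
        buffer.append(item)        # add item to buffer
        mutex.release()            # signal(mutex)
        full.release()             # signal(full)

def consumer(count, out):
    for _ in range(count):
        full.acquire()             # wait(full)
        mutex.acquire()            # wait(mutex)
        out.append(buffer.popleft())  # remove an item from buffer
        mutex.release()            # signal(mutex)
        empty.release()            # signal(empty)

out = []
c = threading.Thread(target=consumer, args=(5, out))
c.start()
producer([10, 20, 30, 40, 50])     # blocks whenever all 3 slots are full
c.join()
```

Because the buffer holds only 3 slots, the producer is forced to wait on `empty` partway through, yet all 5 items still arrive in order.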

2. The Readers-Writers Problem: a data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (i.e. it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

Readers priority: a new reader need not wait just because a writer is waiting.
Writers priority: if a writer is waiting to access the data, no new readers can start reading.

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses:
1. readcount = 0 (an integer that counts the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both readers and writers)

wrt provides mutually exclusive access to the shared data.

Reader process:

wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
...
// reading is performed
...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

Writer process:

do {
    wait(wrt);
    ...
    // writing is performed
    ...
    signal(wrt);
} while (TRUE);
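A Python sketch of the readers-priority pseudocode above (the reader/writer bodies are placeholders, and the calls are run sequentially here just to trace the protocol):

```python
import threading

readcount = 0
mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # exclusive access for writers

events = []

def reader(name):
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:           # first reader locks writers out
        wrt.acquire()
    mutex.release()
    events.append((name, "read"))   # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:           # last reader lets writers back in
        wrt.release()
    mutex.release()

def writer(name):
    wrt.acquire()                # wait(wrt)
    events.append((name, "write"))  # writing is performed
    wrt.release()                # signal(wrt)

reader("R1")
writer("W1")
reader("R2")
```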

3. The Dining Philosophers Problem: n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5]; // initially all values are 1
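A Python sketch of this chopstick-per-semaphore setup; for illustration the philosophers below run one at a time, so the deadlock risk discussed later does not arise in this trace:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]  # all initially 1
meals = []

def philosopher(i):
    chopstick[i].acquire()             # wait(chopstick[i])
    chopstick[(i + 1) % N].acquire()   # wait(chopstick[(i+1) % 5])
    meals.append(i)                    # eat
    chopstick[i].release()             # signal(chopstick[i])
    chopstick[(i + 1) % N].release()   # signal(chopstick[(i+1) % 5])
    # ... think ...

for i in range(N):                     # sequential run: no contention
    philosopher(i)
```

If all five ran concurrently and each grabbed its left chopstick first, every philosopher could end up holding one chopstick while waiting forever for the second: exactly the deadlock this solution permits.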

Philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    // ... eat ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    // ... think ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: in asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems have processors dedicated to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. Unlike an SMP system, these systems do not have the flexibility to assign processes to the least-loaded CPU.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraints.

                                Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples: digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control, i.e. it spreads its decision-making actions across the different computers in the system, so that the failure of individual computers or of the network does not cripple its operation.

A distributed operating system appears to users as a uniprocessor system, even though it has multiple processors. Users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

                                Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

DESKTOP SYSTEMS: a desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8 and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): two of the most popular operating systems for handhelds are Symbian OS and Android. Handheld systems have very little memory.


CLUSTERED SYSTEMS: a computer cluster consists of a set of loosely connected computers that work together so that, in many respects, they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                                Q) Modern operating system

New developments in hardware, applications and security threats have led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client-server computing.

                                Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as systems nowadays often contain multiple processors, an SMP operating system provides greater efficiency when several processors are present. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                SMP schedules processes or threads across all of the processors

                                Advantages

1. Increased performance: if some portion of a program can run in parallel, performance increases.
2. Increased availability: as all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can be scaled with the number of processors used.

                                4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux operating system: Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency and standardization.

Components of a Linux System


The Linux operating system has primarily three components:

Kernel - the kernel is the core part of Linux. It is responsible for all major activities of the operating system and interacts directly with the hardware. The kernel hides low-level hardware details from system and application programs.

System Library - system libraries are special functions or programs used by application programs to access the kernel's features.

System Utility - system utility programs are responsible for specialized, individual-level tasks such as updating log files or accepting login requests from terminals.

Kernel Modules - sections of kernel code that can be compiled, loaded and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver is available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode: kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code executes as a single process in a single address space, does not require context switches, and is therefore very efficient and fast. The kernel runs processes, provides system services to them, and provides protected access to hardware. Code which does not need to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code. User programs and utilities use the system libraries to access kernel functions for low-level tasks.

Basic Features

The following are some of the important features of the Linux operating system:

Portable - portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on almost any kind of hardware platform.

Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system, meaning multiple users can access system resources (memory, application programs, etc.) at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used to perform various operations, call application programs, etc.


Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

Architecture: the Linux system architecture consists of the following layers:

Hardware layer - consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - the core component of the operating system; interacts directly with the hardware and provides low-level services to upper-layer components.

Shell - an interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - utility programs that give the user most of the functionality of an operating system.

Q) Windows XP operating system

1. Extensibility: layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.

2. Portability: XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3. Reliability: XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.

4. Compatibility: applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.

5. Performance: XP subsystems can communicate with one another via high-performance message passing.

Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

6. International support: supports different locales via the national language support (NLS) API.

XP Architecture:
1. A layered system of modules.
2. Protected mode: hardware abstraction layer (HAL), kernel, executive.
3. User mode: a collection of subsystems.
   a. Environmental subsystems emulate different operating systems.
   b. Protection subsystems provide security functions.

                                XP architecture


Q) Windows network operating system. Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.

Visit my blogs: enthusiaststudent.blogspot.in, mtechmessenger.blogspot.in

                                Keywords OS exam notes OS notesOperating system notes Opearing system galvin notesSimple batch system Multi- programmed Batch System Time sharing System Difference between Batch Multi-programming and Time sharing What is process structure of process in memory Process states PCB Operations on Processes Process Creation Process Termination cascading terminationcontext switch Basic Concepts of threads two modes of CPU execution Inter process communication Shared ndash Memory Systems Message passing systemswhat is deadlock four conditions that are necessary for deadlock to occur Methods for handling deadlocks Deadlock prevention Deadlock avoidanceResource Allocation graphBankers AlgorithmSafe State Unsafe state Safety Algorithm Resource-Request Algorithm Memory management Address Binding Logical vs Physical Address Space MMU Memory-Management Unit Dynamic loading Dynamic Linking Overlays Swapping Contiguous memory allocation Single-partition allocation Multiple-partition allocation Internal fragmentation Variable Size partitions External fragmentation First-fit Best-fit Worst-fit Non-Contiguous Memory allocation Paging Implementation of Page Table PTLR PTBR page table base register Page-table length register translation look-aside buffers TLB Protection in Paging Shared pages in Paging Segmentation Shared Segments Virtual memory Demand paging Procedure for Handling a Page Fault pure demand paging Page Replacement Page Replacement Algorithms FIFO Optimal Page replacement AlgorithmLeast Recently used (LRU) Page replacement algorithm ThrashingWorking Set model global and local page replacement



1) Fixed-size partitions: divide the main memory into fixed-sized blocks. Here, the memory allocated to a process may be larger than required. The difference between the allocated memory and the requested memory is called internal fragmentation: unused memory inside the partition.

Block 0: addresses 0-3 (P1)
Block 1: addresses 4-7 (P1)
Block 2: addresses 8-11
Block 3: addresses 12-15

If P1 requires just 5 addresses, it is still allocated 2 blocks, i.e. 8 addresses, so the remaining 3 addresses are left unused.

2) Variable-size partitions: here main memory is divided into partitions of variable sizes. The operating system maintains information about:

                                  a) allocated partitions b) free partitions (hole)

In the beginning, all main memory is empty and is considered one large block of available memory: a hole.

Only the exact memory required by a process is given. When a process terminates, it releases its memory, which can then be allocated to another process.

Memory is allocated to processes until, finally, no available block of memory (hole) is large enough to hold the next process. External fragmentation exists when there is enough total memory to satisfy a request, but the available memory is not contiguous: memory is wasted outside the partitions. One solution to the problem of external fragmentation is:

a. Compaction: shuffle the memory contents so as to place all free memory together in one large block.

===============================================================================
Q) Dynamic Storage-Allocation Problem (or) most commonly used strategies to select a free hole from the set of available holes

Ans: To satisfy a request of size n from a list of free holes, the 3 policies below can be used:

1. First-fit: allocate the first hole that is big enough.

2. Best-fit: allocate the smallest hole that is big enough. We must search the entire list, unless it is ordered by size. This strategy produces the smallest leftover hole.

3. Worst-fit: allocate the largest hole. We must also search the entire list. This strategy produces the largest leftover hole. First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
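The three policies can be sketched in Python; the hole sizes and request size below are an assumed example, not from the notes:

```python
def first_fit(holes, n):
    """Index of the first hole big enough for a request of size n."""
    for i, h in enumerate(holes):
        if h >= n:
            return i
    return None

def best_fit(holes, n):
    """Index of the smallest hole that is big enough."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Index of the largest hole."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]   # free-hole sizes (assumed example)
# For a request of size 212: first-fit picks the 500 hole,
# best-fit the 300 hole, worst-fit the 600 hole.
```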

                                  ===============================================================================Q) Non-Contiguous Memory allocation

1. Paging: paging is a memory-management scheme that provides non-contiguous memory allocation.

Block 0: addresses 0-3 (P1)
Block 1: addresses 4-7 (P2)
Block 2: addresses 8-11
Block 3: addresses 12-15 (P1)

The logical address space of a process can be non-contiguous; for example, P1 is allocated Block 0 and Block 3.

1. Divide physical memory into fixed-sized blocks called frames (the size is a power of 2).
2. Divide logical memory into blocks of the same size, called pages.
3. The logical address is divided into 2 parts:

Page number (p) - used as an index into the page table.
Page offset (d) - the displacement within the page.

4. A page table is allocated to each process (a pointer to the page table is stored in the PCB of the process). The page table translates logical addresses to physical addresses; e.g. page 0 is in frame 1, page 1 is in frame 4, etc.
5. Internal fragmentation may occur due to paging.

6. If the size of the logical address space = 2^m and the page size = 2^n, then the higher-order m-n bits of the logical address correspond to the page number, and the n lower-order bits give the displacement within the page.

Example: size of logical address space = 8 = 2^3, so m = 3; page size = 4 = 2^2, so n = 2.
So m-n = 1 bit for the page number (to represent pages 0 and 1), and the remaining n = 2 bits for the displacement within the page.
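The bit arithmetic from this example (m = 3, n = 2) can be checked directly:

```python
m, n = 3, 2                  # 2^3 = 8 logical addresses, page size 2^2 = 4
page_size = 1 << n

def split(logical_address):
    """Split a logical address into (page number, offset)."""
    page = logical_address >> n                 # high m-n bits
    offset = logical_address & (page_size - 1)  # low n bits
    return page, offset

# logical address 6 = 0b110: page number 1, offset 2
```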

                                  Q) Implementation of Page Table

                                  Page table can be kept as

1) As a set of dedicated registers: efficient if the page table is small.

2) The page table can be kept in memory. The page-table base register (PTBR) points to the page table, and the page-table length register (PTLR) indicates the size of the page table. Changing the page table then requires changing only the values in these registers.

Advantage of keeping the page table in memory:
a. Less context-switch time.

Disadvantage:
b. Two memory accesses are required: one memory access to read the page table, and another to access the required memory address.

This problem can be solved by using a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB).

                                  i Each entry in TLB consists of 2 parts

                                  I) page number field

                                  II) Value field or frame number field

                                  ii When a logical address is generated by CPU its page number is presented to TLB The page number is compared with all the entries of TLB simultaneously

If the page number is found in the TLB, its frame number is immediately available -> TLB hit. If the page number is not found, check the page table -> TLB miss.

By using a TLB, the search is fast. But TLB hardware is expensive, so the TLB size is kept small; hence the TLB contains only a few frequently used page-table entries.
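The hit/miss logic can be sketched as follows; the dictionaries standing in for the TLB and the in-memory page table are hypothetical structures, not real hardware:

```python
# Sketch (hypothetical structures): consult the TLB before falling back to
# the in-memory page table; cache the translation on a miss.
def lookup(page, tlb, page_table):
    if page in tlb:                 # TLB hit: frame number available at once
        return tlb[page], "hit"
    frame = page_table[page]        # TLB miss: one extra memory access
    tlb[page] = frame               # keep the entry for future references
    return frame, "miss"

tlb = {}
page_table = {0: 1, 1: 4, 2: 3}
print(lookup(1, tlb, page_table))  # (4, 'miss') on the first access
print(lookup(1, tlb, page_table))  # (4, 'hit') on a repeat access
```

A real TLB compares the page number against all entries simultaneously in hardware; the dictionary lookup only models the outcome.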

Q) Protection in Paging: a valid/invalid bit is attached to each entry in the page table. The OS sets this bit for each page to allow or disallow access to the page. When the bit is set to invalid, the page is not in the process's logical address space, so referencing it generates a trap to the OS.

                                  Q) Shared pages in Paging

                                  Reentrant code is the code that never changes during execution Reentrant code can be shared

1. One copy of read-only (reentrant) code is shared among processes (e.g. text editors, compilers, window systems). Ex: pages ed1, ed2, ed3 are shared between processes P1 and P2.

                                  2 Shared code must appear in same location in the logical address space of all processes

                                  Each process keeps a separate copy of the private code and data EX data1 and data2

                                  The pages for the private code and data can appear anywhere in the logical address space

==============================================================================
Q) Segmentation

Segmentation is a memory management scheme that supports the user's view of memory.

                                  When the user program is compiled the compiler generates segments like

                                  1) The code segment

                                  2) Global variables segment

                                  3) Heap memory segment

                                  4) Stack segment etc

                                  Each entry in segment table has

segment base (the starting physical address of the segment) and segment limit (the length of the segment).

                                  Logical address is divided into 2 parts

                                  segment-number offset

                                  Segment table is indexed by segment number

                                  Segment-table base register (STBR) stores the location of segment table in main memory

                                  Segment-table length register (STLR) stores number of segments used by a program

The segment number (say 's') is used to find the entry in the segment table: the required entry is 's' entries from the beginning of the segment table. Once the required entry is found, the offset ('d') is compared with the limit.

If (offset < limit), the offset is added to the base entry to generate the physical address; otherwise a trap (addressing error) is generated.
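The translation with its limit check can be sketched as below; the segment table contents are hypothetical example values:

```python
# Sketch (hypothetical segment table): segmentation address translation
# with the limit check described above; an out-of-range offset traps.
segment_table = [
    (1400, 1000),   # segment 0: base 1400, limit 1000
    (6300, 400),    # segment 1: base 6300, limit 400
]

def translate(s, d):
    base, limit = segment_table[s]
    if d >= limit:                  # offset outside the segment: trap
        raise MemoryError("trap: offset %d >= limit %d" % (d, limit))
    return base + d                 # physical address

print(translate(1, 53))  # 6300 + 53 = 6353
```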

Q) Shared Segments: code sharing occurs at the segment level. Shared segments must have the same segment number.

Allocation: a dynamic storage-allocation problem; use best fit / first fit, which may cause external fragmentation.

Protection: protection bits (read/write/execute privileges) are associated with segments. By placing an array in a separate segment, the hardware can check for illegal array indexes.

===============================================================================
Q) Virtual memory

Ans: In paging and segmentation, a program will execute only if the entire process is in main memory. With virtual memory, only part of the program needs to be in memory for execution.

1. A program that is larger than the available main memory can still run, as only part of the program is loaded into main memory at a time. The logical address space can therefore be much larger than the physical address space.

                                  2 Allows address spaces to be shared by several processes

Virtual memory can be implemented via demand paging or demand segmentation.

The large blank space between the heap and the stack is part of the virtual address space. Virtual memory allows files and memory to be shared by two or more processes through page sharing, as shown in figure (b); figure (a) depicts virtual memory that is larger than physical memory.

                                  ==============================================================================

Q) Demand paging: demand paging is the technique of loading pages from disk into main memory only when the page is needed. Hence, using a smaller amount of physical memory, we get a faster response.

                                  Demand paging is similar to PAGING + SWAPPING

                                  Instead of swapping a whole process the pager brings only those pages needed into memory

To distinguish between pages that are in memory and pages that are on disk, a valid/invalid bit is used.

The valid/invalid bit is attached to each entry in the page table. When this bit is set to valid, the page is legal and in memory; when it is set to invalid, either the page is illegal or the page is on disk.

When the CPU generates a logical address whose page-table entry is set to invalid, a page fault occurs, meaning the required page is on disk. A section of the hard disk called swap space is used to hold pages that are not present in memory.

                                  Q) Procedure for Handling a Page Fault

1) The CPU generates a logical address, and if the valid/invalid bit for the page containing that address is set to invalid, a page fault occurs.

                                  2) Page fault causes a trap to operating system

3) Check if the logical address is within the logical address space of the process, i.e. the PTBR and PTLR are checked. If the required logical address is not within the logical address space of the process, terminate the process; else the page is legal but not in memory, i.e. the page is on disk.

4) To bring the required page into memory: find a free main-memory frame, map the logical address to its disk block, fetch the disk block, and load the block into the free frame.
5) When the required page has been brought into memory, update the page table to indicate that the page is in memory.
6) Restart the instruction interrupted by the illegal-address trap. The process continues as if the page had always been in memory.
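The handler steps above can be sketched as follows. The data structures (a dictionary page table, a list as physical memory, a dictionary standing in for swap space) are hypothetical, chosen to keep the sketch self-contained:

```python
# Sketch (hypothetical structures) of the page-fault handling steps above.
# page_table maps page -> (frame, valid_bit); disk stands in for swap space.
def access(page, page_table, memory, disk, num_pages):
    if page >= num_pages:                 # step 3: outside the address space
        raise RuntimeError("terminate process: illegal address")
    frame, valid = page_table.get(page, (None, False))
    if not valid:                         # page fault: the page is on disk
        frame = len(memory)               # step 4: take a free frame
        memory.append(disk[page])         # load the disk block into the frame
        page_table[page] = (frame, True)  # step 5: update the page table
    return memory[frame]                  # step 6: restart the access

disk = {0: "block0", 1: "block1"}
memory, page_table = [], {}
print(access(1, page_table, memory, disk, num_pages=2))  # fault, loads "block1"
print(access(1, page_table, memory, disk, num_pages=2))  # no fault this time
```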

Q) What is pure demand paging?
Ans: If no page belonging to the executing process is in main memory, the process will fault for every page it needs; page faults occur until every page that it needs is in memory. This scheme is known as pure demand paging: NEVER BRING A PAGE IN UNTIL IT IS REQUIRED.

Q) Page Replacement
1. Find the location of the desired page on the disk.
2. Find a free frame:
i) If there is a free frame, use it.
ii) If there is no free frame, use a page-replacement algorithm to select a victim frame holding a page that is not really in use.
iii) If the modify bit associated with the victim frame is set to 1, the victim page is written to disk; otherwise there is no need to write it to disk. Change the valid/invalid bit in the victim page's page-table entry to invalid, to indicate that the page is no longer in memory.
3. Read the desired page into the freed frame and update the frame and page tables.
4. Restart the user process.

=================================================================================================
Q) Page Replacement Algorithms
Goal: produce a low page-fault rate. The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

1. First In First Out (FIFO) page replacement algorithm: uses the time when a page was brought into memory. The page that was brought into memory first is replaced, i.e. the oldest page is chosen for replacement. We create a FIFO queue to hold all pages in memory.

                                  When a page is brought into memory we insert it at the tail of the queue and we replace the page at the head of the queue

FIFO: 15 page faults.
A problem with FIFO is Belady's anomaly. For the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5:
When the number of frames allocated to the process is 1, the number of page faults = 12 under FIFO replacement.
When the number of frames is 2, the number of page faults = 12.
When the number of frames is 3, the number of page faults = 9.
When the number of frames is 4, the number of page faults = 10.
As the number of frames allocated to a process increases, page faults should decrease; but under FIFO replacement page faults can increase. This is Belady's anomaly.
2. Optimal (OPT) page replacement algorithm: replace the page that will not be used for the longest period of time. OPT is used for measuring how well other algorithms perform; it is not practical to implement, as it is not possible to know what the future references will be.
Ex:

                                  Optimal page replacement 9 page faults
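FIFO replacement and Belady's anomaly can be checked with a short simulation (a sketch, using the reference string from the FIFO example):

```python
# Sketch: FIFO replacement run on the reference string 1,2,3,4,1,2,5,1,2,3,4,5,
# showing Belady's anomaly (more frames can mean more faults).
from collections import deque

def fifo_faults(refs, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:      # no free frame: evict oldest
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 (Belady's anomaly)
```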

3. Least Recently Used (LRU) page replacement algorithm: replace the page that has not been used for the longest period of time. LRU makes use of the time of last use of each page. Ex:

                                  Number of page faults = 12

                                  LRU requires hardware assistance to determine the time of last use of the page We can use 1) Stack or 2) Counter to implement LRU page replacement algorithm

1) Stack implementation: keep a stack of page numbers as a doubly linked list. When a page is referenced, it is moved to the top of the stack; moving an entry requires changing at most 6 pointers.

2) Counter implementation: a time-of-use field is added to each page-table entry, and the CPU maintains a counter that is incremented on every page reference. On each reference, the counter value is copied into the time-of-use field of the referenced page; the page with the smallest time-of-use value is the least recently used.
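The stack form can be sketched compactly using Python's OrderedDict as an assumed stand-in for the doubly linked list of page numbers (the most recently used page sits at the end):

```python
# Sketch of the stack form of LRU: OrderedDict plays the doubly linked
# "stack"; a referenced page moves to the top, the bottom page is evicted.
from collections import OrderedDict

def lru_faults(refs, num_frames):
    stack, faults = OrderedDict(), 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)        # referenced: move to top of stack
        else:
            faults += 1
            if len(stack) == num_frames:
                stack.popitem(last=False)  # evict the least recently used
            stack[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10
```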

Disadvantages of the LRU page replacement algorithm: the counter or stack must be updated on every memory reference, HENCE LRU IS SLOW, and the implementation requires hardware assistance.
==============================================================================
Q) LRU approximation page replacement
1) Using a reference bit
2) Additional-reference-bits algorithm
3) Second-chance algorithm
4) Enhanced second-chance algorithm
5) Counting-based page replacement

This has two schemes: 1) Least Frequently Used (LFU) 2) Most Frequently Used (MFU)

6) Page-buffering algorithms
Q) What is Thrashing?
Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur when global (or local) page replacement is used, as described below. The solution to thrashing is the working-set strategy: if we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know how many frames a process needs, we use the locality model of process execution.

Here, a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate its current locality, it will not fault again until it changes locality. The working-set model uses a parameter Δ, the working-set window: the set of pages in the most recent Δ page references is the working set. Let WSSi be the working-set size of process Pi, i.e. Pi needs WSSi frames. Let D be the total demand for frames; then D = Σ WSSi,

and let m be the total number of available frames.

If D > m, thrashing occurs because some processes will not have enough frames. So, if D > m,

the OS selects a process to suspend; else the OS may initiate another process. The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.
========================================================================
Q) How does thrashing occur in global and local page replacement?
Ans: In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, causing a page fault for some other process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more and more processes queue up for the paging device,

the READY queue empties and the CPU becomes idle. So the OS introduces new processes because the CPU is free, which further increases the number of page faults.

In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs only the frames allocated to that process are replaced. When a process starts thrashing, other processes are not directly affected. But since processes will be queued for the paging device most of the time, the average service time for a page fault increases; hence the effective access time increases even for a process that is not thrashing.
Q) Page Table Structure (or: structure of the page table in memory)

                                  Hierarchical Paging Hashed Page Tables Inverted Page Tables

Hierarchical Paging: if the page table is large, we break it up into multiple page tables, i.e. the page table itself is paged. A simple technique is a two-level page table.
Two-level paging example: a logical address (on a 32-bit machine with a 4K page size) is divided into:
1. a page number consisting of 20 bits
2. a page offset consisting of 12 bits
Since the page table is paged, the page number is further divided into:
1. a 10-bit outer page number
2. a 10-bit displacement within a page of the page table
Thus a logical address is as follows: page number | page offset

p1 | p2 | d

10 | 10 | 12, where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table. (Two-Level Page-Table Scheme)

                                  Address-Translation Scheme Address-translation scheme for a two-level 32-bit paging architecture
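The bit fields of the two-level scheme can be extracted as below (a sketch; the example address is arbitrary):

```python
# Sketch: split a 32-bit logical address into p1 (10 bits), p2 (10 bits)
# and d (12 bits) for the two-level paging scheme described above.
def split(addr):
    p1 = (addr >> 22) & 0x3FF   # top 10 bits: index into the outer page table
    p2 = (addr >> 12) & 0x3FF   # next 10 bits: index within that inner table
    d = addr & 0xFFF            # low 12 bits: offset within the page
    return p1, p2, d

print(split(0x00403005))  # (1, 3, 5)
```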

Hashed Page Tables (or hash table): hashed page tables are common for address spaces larger than 32 bits. Each entry in the hash table contains a linked list of elements that hash to the same location, and the hash table is indexed by hash value. Each element consists of 3 fields:

                                  1 Page number2 Frame number3 Pointer to next element in linked list

The algorithm works as follows: the page number is passed to a hash function to get a hash value. Locating that value in the hash table is easy, as the hash table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address; if there is no match, the next entries in the linked list are searched.
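A minimal sketch of this structure, with a hypothetical bucket count and mappings:

```python
# Sketch (hypothetical sizes): a hashed page table as a fixed number of
# buckets, each holding a chain of (page_number, frame_number) pairs.
NUM_BUCKETS = 8

def insert(table, page, frame):
    table[hash(page) % NUM_BUCKETS].append((page, frame))

def lookup(table, page):
    for p, frame in table[hash(page) % NUM_BUCKETS]:  # walk the chain
        if p == page:
            return frame
    raise KeyError("page %d not mapped" % page)

table = [[] for _ in range(NUM_BUCKETS)]
insert(table, 1, 4)
insert(table, 9, 7)      # 1 and 9 land in the same bucket (collision)
print(lookup(table, 9))  # 7
```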

                                  Inverted Page Table

The inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table has:

                                  Process-id Page number

                                  Logical address also has 3 parts Process-id Page number

Offset. The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, then frame number i combined with the offset gives the physical address; if there is no match, the address is illegal.
Advantages and disadvantages:

The inverted page table decreases the memory needed to store page tables, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to

limit the search to one, or at most a few, page-table entries. (Inverted Page Table Architecture)
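The search-based translation can be sketched as follows; the table contents and page size are hypothetical:

```python
# Sketch (hypothetical table): an inverted page table indexed by frame
# number, each entry holding (process_id, page_number).
PAGE_SIZE = 4096
inverted = [("P1", 0), ("P2", 0), ("P1", 1)]       # entries for frames 0..2

def translate(pid, page, offset):
    for frame, entry in enumerate(inverted):
        if entry == (pid, page):                    # match at the i-th entry
            return frame * PAGE_SIZE + offset       # frame i + offset
    raise MemoryError("illegal address")

print(translate("P1", 1, 100))  # frame 2 -> 2 * 4096 + 100 = 8292
```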

                                  Unit-IV CPU Scheduling

Basic concepts: to maximize CPU utilization, some process must be running at all times.

                                  In multi programming several processes will be in main memory at a given time If running process has to wait for IO the OS takes the CPU away from that process and assigns CPU to other process in ready queue So CPU must be scheduled and CPU scheduling is fundamental OS function

CPU-I/O burst cycle: process execution begins with a CPU burst, which is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

                                  An IO bound program has many short CPU bursts A CPU-bound program has few long CPU bursts

Schedulers: a module in the OS for scheduling decisions.
1) Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
2) Medium-term scheduler: selects which processes should be swapped in/out of memory.
3) Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (CPU) scheduler.

                                  Dispatcher gives CPU control to the process selected by short term scheduler This function involves following

                                  1 switching context2 switching to user mode3 jumping to proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency; it must be kept small.
Ready Queue: the ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of processes.

                                  Pre-emptive Scheduling

                                  CPU scheduling decisions take place under one of four conditions

1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state, for example in response to an interrupt
3. When a process switches from the waiting state to the ready state, say at completion of I/O
4. When a process terminates

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one.
If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive or cooperative. In non-preemptive scheduling, a process runs until it itself gives up the CPU, i.e. when it waits for I/O or when it completes. Otherwise the system is said to be pre-emptive.

Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

                                  Note that pre-emptive scheduling can cause problems when two processes share data because one process may get interrupted in the middle of updating shared data structures

                                  Preemption can also be a problem if the kernel is busy implementing a system call ( eg updating critical kernel data structures ) when the preemption occurs Solution wait until the system call has either completed or blocked before allowing the pre-emption But this solution is problematic for real-time systems

                                  Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting the section But this should only be done only on very short pieces of code that will take less CPU time

Scheduling criteria used to decide on a CPU scheduling algorithm:
1. CPU utilization: in a real system, CPU utilization ranges from 40% to 90%.
2. Throughput: the number of processes completed per unit time.
3. Turnaround time: (time when the process completed) - (time when the process was submitted); it includes waiting time and burst time.
4. Waiting time: the amount of time a process spends waiting for the CPU.
5. Response time: (time when the first response came) - (time when the process was submitted).

Optimization criteria: max CPU utilization, max throughput, min turnaround time, min waiting time, min response time.
CPU Scheduling Algorithms
1) First-Come First-Served (FCFS) Scheduling: FCFS is non-pre-emptive; processes are scheduled in the order they arrive. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages: 1. Average waiting time is quite long.

2. Convoy effect (it occurs as small processes wait in the queue for a big process to leave the CPU).
Example: Process / Burst Time: P1 24, P2 3, P3 3

Suppose that the processes arrive in the order P1, P2, P3.

The Gantt chart for the schedule is: P1 | P2 | P3

0 24 27 30. Waiting time: P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.
2) Shortest-Job-First (SJF) Scheduling: schedule the process with the shortest burst time. If the CPU burst times of two processes are the same, FCFS is used.
Advantages: average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either pre-emptive or non-pre-emptive:
1. Non-pre-emptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Pre-emptive SJF, or Shortest-Remaining-Time-First (SRTF): if a newly arrived process is shorter than what remains of the currently executing process, pre-empt the currently executing process.

                                  Example Process Arrival Time Burst Time P1 00 7

                                  P2 20 4 P3 40 1

                                  P4 50 4

                                  SJF (non-preemptive) Gantt Chart P1 P3 P2 P4

0 7 8 12 16. Average waiting time = [0 + (8-2) + (7-4) + (12-5)]/4 = 4
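The non-preemptive SJF schedule above can be reproduced with a short simulation (a sketch; ties fall back to FCFS because candidates are kept in arrival order):

```python
# Sketch: non-preemptive SJF simulation for the example above.
# Each process is (name, arrival_time, burst_time).
def sjf(processes):
    remaining = sorted(processes, key=lambda p: p[1])   # order by arrival
    time, waiting = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:
            time = remaining[0][1]           # CPU idle: jump to next arrival
            continue
        job = min(ready, key=lambda p: p[2]) # shortest burst among the ready
        name, arrival, burst = job
        waiting[name] = time - arrival       # time spent in the ready queue
        time += burst
        remaining.remove(job)
    return waiting

w = sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(w, sum(w.values()) / len(w))  # {'P1': 0, 'P3': 3, 'P2': 6, 'P4': 7} 4.0
```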

                                  Example of Preemptive SJF

                                  Process Arrival Time Burst TimeP1 00 7P2 20 4P3 40 1P4 50 4

                                  SJF (preemptive) P1 P2 P3 P2 P4 P1

0 2 4 5 7 11 16. Average waiting time = (9 + 1 + 0 + 2)/4 = 3.
3) Priority Scheduling: a priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. preemptive 2. non-preemptive.

Problem: starvation, i.e. low-priority processes may never execute. Solution: aging, i.e. as time progresses, increase the priority of the process.

4) Round Robin (RR): each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the tail of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n-1)q time units.
Performance: 1. if q is large, RR behaves like FCFS; 2. if q is small, q must still be large with respect to the context-switch time, otherwise the overhead is too high.
Example of RR with time quantum = 4: Process / Burst Time: P1 24, P2 3,

P3 3. The Gantt chart is: P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1

0 4 7 10 14 18 22 26 30. Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 ≈ 5.66.
5) Multilevel Queue Scheduling: the ready queue is partitioned into separate queues. For example, the ready queue is divided into 2 queues:

                                  1 foreground (interactive) queue2 background (batch) queue

                                  Each queue has its own scheduling algorithmFor ex Round Robin scheduling algorithm can be used in foreground queue

FCFS scheduling algorithm can be used in the background queue. Scheduling must also be done between the queues. This can be done in 2 ways:
1. Fixed-priority scheduling

The foreground queue has the highest priority: all processes in the foreground queue must be completed and the queue must be empty before any process in the background queue is given the CPU, as shown in the figure below. Disadvantage: starvation.

2. Time slice: each queue gets a certain amount of CPU time, which it can schedule among its own processes, e.g. 80% to the foreground queue (RR) and 20% to the background queue (FCFS).

6) Multilevel Feedback Queue Scheduling: a process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes will be in the highest-priority queue. If a process takes more CPU time, it is moved to a lower-priority queue; if a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues: Q0, Q1, Q2.

                                  1 Q0 ndash time quantum 8 milliseconds 2 Q1 ndash time quantum 16 milliseconds 3 Q2 ndash FCFS

                                  Scheduling

1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 milliseconds; if it does not finish in 8 milliseconds, the process is moved to queue Q1.
2. At Q1 the process receives an additional 16 milliseconds; if it still does not complete, it is moved to queue Q2.

Q) Synchronization: several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of a process-synchronization problem is the Producer-Consumer (or Bounded-Buffer) problem: make sure that the producer process does not add data when the buffer is full, and that the consumer process does not take data when the buffer is empty.

The code for the producer process can be written as follows:
while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed. in is an index that always points to the next empty slot after the last filled slot; out is an index that points to the first filled slot, at the head of the circular queue.

The code for the consumer process is shown below:
while (true) {
    while (counter == 0)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The above producer and consumer code produces a RACE CONDITION. A race condition exists when the outcome of the execution of several co-operating processes depends on the order in which accesses to shared data take place. For example, if counter = 5:

Given counter = 5, if the producer produces one item and the consumer consumes one item, the correct value of counter is 5. But if the increment and decrement interleave, counter may end up as 4, an inconsistent result; and if statement T5 is executed before T4, counter ends up as 6, also inconsistent.

Solution to avoid the race condition: make sure that only one process at a time updates the shared variable (here, counter).
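The lost update can be replayed deterministically by expanding counter++ and counter-- into their load/modify/store steps (a sketch; the step ordering below is one illustrative interleaving, not the exact T1-T6 numbering of the original figure):

```python
# Deterministic replay (a sketch) of the lost-update race described above.
# counter++ and counter-- each expand to load / modify / store on a register;
# the interleaving below loses the producer's update.
counter = 5

producer_reg = counter   # producer loads counter (5)
consumer_reg = counter   # consumer loads counter (5)
producer_reg += 1        # producer computes 6
consumer_reg -= 1        # consumer computes 4
counter = producer_reg   # producer stores 6
counter = consumer_reg   # consumer stores 4, overwriting the producer's store

print(counter)  # 4, although the correct value is 5
```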

Q) Critical section problem
Ans: Each process has a segment of code called its critical section. Critical-section code is the code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical-section code at the same time. For this to happen:
1. Each process must request permission to enter its critical section (entry section).
2. The end of the critical section is the exit section.
3. The rest of the code after the critical section is the remainder section.

A solution to the critical-section problem must satisfy three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes that are not in their remainder section take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: every process must eventually be allowed to enter its critical section; no process waits indefinitely.

Q) Peterson's solution to the critical-section problem (software-based solution)
Ans: A possible interleaving of the two processes:

Time  Process 1                             Process 2
T0    do { flag[1] = TRUE; turn = 2;        do { flag[2] = TRUE;
T1                                          turn = 1;
T2    while (flag[2] && turn == 2);         while (flag[1] && turn == 1);
      (turn is now 1, so the loop exits)    (loops until flag[1] becomes FALSE)
T3    enters critical section
T4    flag[1] = FALSE;                      enters critical section
T5    enters remainder section              flag[2] = FALSE;
      } while (TRUE);                       enters remainder section
                                            } while (TRUE);
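Peterson's algorithm can be sketched with Python threads (an assumed translation: flag and turn are plain shared variables, and CPython's atomic bytecode execution stands in for the sequentially consistent memory accesses the algorithm assumes on real hardware):

```python
# Sketch: Peterson's algorithm for two threads protecting a shared counter.
import threading

flag = [False, False]
turn = 0
count = 0          # shared variable updated inside the critical section
N = 5000

def worker(i):
    global turn, count
    other = 1 - i
    for _ in range(N):
        flag[i] = True                       # entry section: announce intent
        turn = other                         # give way to the other thread
        while flag[other] and turn == other:
            pass                             # busy wait
        count += 1                           # critical section
        flag[i] = False                      # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)  # 2 * N = 10000: no update was lost
```

Without the entry/exit sections, the read-modify-write on count could interleave and lose updates; with them, mutual exclusion holds and the final count is exact.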

10.1 File Concept: data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

                                  File Attributes

1. Name - name of the file, for user understanding.
2. Identifier - used by the OS to identify a file (e.g. inode number).
3. Type - text files, executable files, binary files, etc.
4. Location - location of the file on the hard drive.
5. Size - current size of the file.
6. Protection - controls who can read, write, or execute the file.
7. Time, date, and user identification - when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. The directory entry consists of the file's name and its unique identifier; the identifier locates the other file attributes. The directory structure is kept on disk.

                                  File Operations

The file Abstract Data Type supports many common operations:
1. Creating a file - find free space on disk and add an entry to the directory.
2. Writing a file - write data at the current file-position pointer location and update the pointer.
3. Reading a file - read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek) - change the pointer location to a given value.
5. Deleting a file - free the space allocated to the file and remove its entry from the directory.
6. Truncating a file - delete the data and update the file size.

                                  Open File Table

Most OSes require that files be opened before access and closed after access. Information about all files that are currently open in the system is stored in an open file table. This system-wide open file table is maintained by the OS, and information about files opened by a process is stored in the process's open file table. The open file table has the following information:

1. File pointer - records the current position in the file for the next read or write.
2. File-open count - how many times the current file has been opened (simultaneously, by different processes) and not yet closed. When this counter reaches zero, the file can be removed from the table.
3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With a mandatory lock, the OS makes sure the locks are acquired and released appropriately.
4. With an advisory lock, software developers make sure the locks are acquired and released appropriately.
5. UNIX uses advisory locks and Windows uses mandatory locks.

                                  File Types

Implementing file types: a common technique is to include the type as part of the filename. The name is split into 2 parts:

                                  1 File Name 2 Extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of file and the type of operations that can be done on that file. For example, when a user opens Microsoft Word, clicks the File menu, and chooses Open, the user may specify the filename without an extension; the application looks for a file with the given name and the extension it expects. Extensions are thus hints used by applications rather than something the operating system enforces. A UNIX system uses a crude magic number stored at the beginning of the file to indicate the type of file. In the Mac OS X operating system, a file produced by a word processor has the word processor's name recorded as its creator, so when the user double-clicks the file, Word opens it. Some common file types are as shown below.

File Structure Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and the location of the first instruction.

If an OS supports many file formats, its size and complexity increase. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, including how to find the first executable statement).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                                  Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes.

Internally, files are organized in logical units such as a single byte, a data record, or a structure.

The number of logical units which fit into one physical block determines its packing. For example, suppose a student file is organized internally as records, each record takes 64 bytes, and the disk is divided into blocks of 512 bytes. Then the packing is 512/64 = 8, so 8 records can be stored per block.

                                  Internal fragmentation may occur due to this
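The packing arithmetic can be written out directly; the 100-byte record case is an extra illustration of how internal fragmentation arises when the record size does not divide the block size evenly:

```python
block_size = 512     # bytes per physical block
record_size = 64     # bytes per logical record

packing = block_size // record_size          # records that fit in one block
waste = block_size - packing * record_size   # internal fragmentation per block
print(packing, waste)                        # 8 records per block, 0 bytes wasted

# With 100-byte records, only 5 fit and 12 bytes per block are wasted.
print(block_size // 100, block_size % 100)   # 5 12
```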

                                  102 Access Methods

                                  The information in file can be accessed in the following ways1 Sequential access2 Direct access (or) Relative access

1. Sequential access: we cannot randomly access any record. Records are read and written sequentially, one record after another.

A sequential access file emulates magnetic tape operation and generally supports the following operations:
- read next - read a record and advance the tape to the next position.
- write next - write a record and advance the tape to the next position.
- rewind.
- skip n records - may or may not be supported; n may be limited to positive numbers or may be limited to +/- 1.

2. Direct access (or relative access): a file is made up of fixed-length records, so we can randomly access any record.

There is no restriction on the order of reading or writing.

Direct access files are useful when we need to access large amounts of data randomly, e.g. a database.

Operations supported include (n is the relative block number, i.e. relative to the beginning of the file):
1. read n - read record number n.
2. write n - write record number n.
3. jump to record n - n could be 0 or the end of the file.
4. query current record - used to return back to this record later.

Sequential access can be easily emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct access method. We construct an index for the file, which contains pointers to the various blocks. To search for a record, we first search the index and then use the pointer to access the file directly.

For example, suppose a student file has 120000 records sorted by student id, and we create an index containing the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120000 records is 120000/64 = 1875, so the index has 1875 entries. The index is kept in main memory so that searching becomes faster.

If the index itself becomes large, we create an index for the index file (e.g. a B-tree).
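The index lookup can be sketched as a binary search over the in-memory index, followed by a single disk read of the selected block. The student-id layout below (ids spaced 10 apart) is invented purely for illustration:

```python
import bisect

RECORDS_PER_BLOCK = 64
NUM_BLOCKS = 120000 // RECORDS_PER_BLOCK   # 1875 blocks

# index[b] = first student id stored in block b (ids spaced 10 apart here)
index = [b * RECORDS_PER_BLOCK * 10 for b in range(NUM_BLOCKS)]

def block_for(student_id):
    """Return the block that would hold student_id: the last index entry <= id."""
    return bisect.bisect_right(index, student_id) - 1

print(len(index))        # 1875 index entries, searched in memory
print(block_for(6400))   # 10: one in-memory search, then one disk read
```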

                                  Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                                  Three major methods of allocating disk space are

                                  1 Contiguous 2 Linked

                                  3 Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file indicates the address of the starting block (i.e. b) and the length (n).

                                  Advantages

1. Access is easy: for direct access to block i, we can compute its address as b + i, and since the file is stored sequentially, sequential access is also easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

                                  Disadvantages

1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even if the final size is known, the entire size may not be used by the file at the beginning, which leads to wastage of space.

If the size of the file is not known in advance, then (i) if we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This is possible as long as such a hole exists, but it is time consuming. (ii) If we overestimate the amount of space needed, much of the space stays unused; this leads to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme. A set of contiguous blocks is allocated at first; if the file needs more blocks, another set of contiguous blocks, known as an extent, is added. The directory entry then records the location and block count of the initial area plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable size, external fragmentation occurs.
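The first-fit search mentioned above can be sketched as a scan over a list of holes; the hole values here are made up for illustration:

```python
def first_fit(holes, n):
    """Allocate n contiguous blocks from the first hole that fits.

    holes is a list of (start_block, length) pairs. Returns the start block
    of the allocation, shrinking the chosen hole in place, or None if no
    single hole is large enough (external fragmentation)."""
    for i, (start, length) in enumerate(holes):
        if length >= n:
            if length == n:
                holes.pop(i)                       # the hole is used up completely
            else:
                holes[i] = (start + n, length - n) # shrink the hole from the front
            return start
    return None

holes = [(0, 3), (10, 8), (25, 5)]   # free extents: (first block, count)
print(first_fit(holes, 5))   # 10: the first hole with >= 5 blocks
print(holes)                 # [(0, 3), (15, 3), (25, 5)]
print(first_fit(holes, 6))   # None: 11 blocks are free, but none contiguous
```

The final call shows external fragmentation directly: 11 blocks are free in total, yet a 6-block request fails.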

2. Linked allocation: each file is a linked list of disk blocks, and the blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0.

To add more data to an existing file, a free block is found, data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. It is easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is more.
2. The number of disk seeks required is more.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct access files: to access the ith block, i disk reads are required.
5. Not reliable: blocks are lost if pointers are lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk access time, reduces the space wasted on pointers (only a few pointers are needed), and simplifies free-list management. The disadvantage is internal fragmentation: more space is wasted when a cluster is only partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                                  Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Random access time is improved.

Disadvantage: the number of disk seeks is more. The disk head must move to read the FAT and find the location of a block, then move to the location of the block itself.
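The FAT described above can be modeled as a plain array indexed by block number. The chain below reuses the 9 → 16 → 1 → 10 → 25 file from the linked-allocation example:

```python
EOF = -1        # special end-of-file table value
fat = [0] * 32  # one entry per disk block; 0 means the block is unused

# Lay out the example file: block 9 -> 16 -> 1 -> 10 -> 25.
fat[9], fat[16], fat[1], fat[10], fat[25] = 16, 1, 10, 25, EOF

def file_blocks(start):
    """Follow the FAT chain from the file's first block (from the directory)."""
    blocks = []
    b = start
    while b != EOF:
        blocks.append(b)
        b = fat[b]      # each entry holds the number of the next block
    return blocks

print(file_blocks(9))   # [9, 16, 1, 10, 25]
```

Because the chain lives in the table rather than in the data blocks themselves, random access only walks the (cacheable) FAT, not the data blocks.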

                                  3 Indexed allocation

Each file has its own index block. The index block is an array of disk-block addresses; the ith entry in the index block points to the ith block of the file.

                                  The directory contains the address of the index block

                                  When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found and data is written to it; the address of the block is put in the index block.

                                  Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

                                  Disadvantage

1. Seek time is more.
2. The number of disk seeks required is more.
3. Not reliable: losing an index block loses access to the whole file.
4. The index block should be kept in memory. If this memory is not available, we have to read the index block and then the desired data block, i.e. 2 disk accesses are required. To access a block near the end of a large file, we may need to read several index blocks before the needed data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

Various mechanisms are used to decide the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header giving the name of the file and a set of the first 100 disk-block addresses. For large files, the last address contains the address of another index block.

2. Multilevel index: to access a block, the OS uses the first-level index to find the second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

3. Combined scheme: for example, say there are 15 pointers in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing addresses of the file's data blocks; the second points to a double indirect block, and the third to a triple indirect block, as shown below.

                                  Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                  The free space list can be implemented as

                                  1 Bit vector (or) Bit map2 Linked list3 Grouping4 Counting 5 Space maps

1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free the bit is 1, and if the block is allocated the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000.

Advantages: simple, and it is easy to find contiguous free blocks.

                                  Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a large disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1 TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.
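A bit-map search can be sketched as follows, using the same free blocks (2, 3, 4, 5, 8) and the 1-means-free convention from the text:

```python
bitmap = [0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0]   # bit i is 1 if block i is free

def first_free(bits):
    """Return the number of the first free block, or None if none is free."""
    for block, bit in enumerate(bits):
        if bit:
            return block
    return None

b = first_free(bitmap)
print(b)                   # 2
bitmap[b] = 0              # allocate the block: clear its bit
print(first_free(bitmap))  # 3
```

Real implementations speed this up by scanning a word at a time and skipping all-zero words rather than testing individual bits.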

2. Linked list: the free-space list is implemented by linking all the free disk blocks together, with a pointer to the first free block stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: we cannot find contiguous space easily; to traverse the free-space list we must read each block, which requires substantial I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last block contains the addresses of another n free blocks, and so on.
Advantage: a large number of free blocks can be found quickly.

4. Counting: the free-space list is maintained by keeping the address of the first free block and the number (n) of free contiguous blocks that follow it. Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs, and each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Synchronization Hardware
The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. The TestAndSet() hardware instruction
2. The Swap() instruction

1. The TestAndSet() instruction is defined as shown.

Solution using TestAndSet: the shared variable lock is initialized to false.

2. The Swap() instruction definition is as follows.

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

Both the TestAndSet and Swap instructions satisfy the mutual exclusion requirement, but unfortunately do not guarantee bounded waiting. If multiple processes are trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee of the relative rates of the processes, a very fast process could theoretically release the lock, whip through its remainder section, and re-lock the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The figure below illustrates a solution using test-and-set that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

                                  Bounded-waiting mutual exclusion with TestAndSet( )

On exit, a process first looks, in order (starting with the next process on the list), for a process that has been waiting; if it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting a specific process while continuing to block all the others.

Q) What is a semaphore
Ans A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1. wait() 2. signal()

Wait primitive:

    wait(S) {
        while (S <= 0)
            ; // do nothing
        S--;
    }

Signal primitive:

    signal(S) {
        S++;
    }

                                  Each process that wishes to use a resource performs wait() operation When a process releases a resource it performs signal() operation

Semaphore usage:
1. A binary semaphore (or mutex lock) can take only the values 0 and 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining resources. The counter is initialized to the number of resources available; whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources. When the counter = 0 (or negative, in some implementations), a process blocks until another process frees a resource and increments the counting semaphore with a signal call.
For example, suppose there are 3 resources and 4 processes, so the counting semaphore = 3.
If process P1 requires a resource, it performs a wait() operation; now the counting semaphore = 2.
If process P2 requires a resource, it performs a wait() operation; now the counting semaphore = 1.
If process P3 requires a resource, it performs a wait() operation; now the counting semaphore = 0.
If process P4 requires a resource, it performs a wait() operation; P4 waits until a resource is available.
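The 3-resources/4-processes example maps directly onto a counting semaphore. This sketch uses Python's threading.Semaphore; the peak/in_use bookkeeping is added only so the bound can be observed:

```python
import threading

resources = threading.Semaphore(3)   # 3 identical resources available
guard = threading.Lock()
in_use = []
peak = 0

def process(name):
    global peak
    resources.acquire()              # wait(): blocks while the count is 0
    with guard:
        in_use.append(name)
        peak = max(peak, len(in_use))
    # ... use the resource ...
    with guard:
        in_use.remove(name)
    resources.release()              # signal(): returns the resource

threads = [threading.Thread(target=process, args=(f"P{i}",)) for i in range(1, 5)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)   # never exceeds 3: at most 3 processes held a resource at once
```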

                                  3 Semaphores can also be used to synchronize certain operations

Semaphore implementation: with the wait() definition above, while one process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process spins while waiting.

As an example of using a semaphore for ordering (usage 3 above): suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First we create a semaphore named synch that is shared by the two processes and initialize it to zero. Then in process P1 we insert the code:

    S1;
    signal(synch);

and in process P2 we insert the code:

    wait(synch);
    S2;

Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal. Each semaphore has an integer value and maintains a queue of waiting processes.
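The S1-before-S2 example runs as written once threads stand in for P1 and P2 (Python's Semaphore provides wait() as acquire() and signal() as release()):

```python
import threading

synch = threading.Semaphore(0)   # initialized to 0, as in the text
order = []

def p1():
    order.append("S1")           # statement S1
    synch.release()              # signal(synch)

def p2():
    synch.acquire()              # wait(synch): blocks until P1 signals
    order.append("S2")           # statement S2

t2 = threading.Thread(target=p2)
t2.start()                       # start P2 first: it must still wait for P1
t1 = threading.Thread(target=p1)
t1.start()
t1.join(); t2.join()
print(order)   # ['S1', 'S2'] on every run, regardless of scheduling
```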

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e. the process is moved from the waiting queue to the ready state).

Problems due to semaphores:
1. Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes, as shown below.

2. Starvation: one or more processes get blocked forever and never get a chance to enter the critical section. For example, this can happen if we do not specify the algorithms for adding processes to the waiting queue, or for selecting the one to be removed by the signal() call. If a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M enters, it pre-empts process L, and M is running now. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority so that M does not pre-empt it (priority inheritance).
=====================================================================
Q) Classical problems on synchronization

1. The Bounded Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded Buffer Problem. Consider:

- n buffers, each of which holds one item;
- a producer process which creates the items (1 at a time);
- a consumer process which processes them (1 at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is needed. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:

    do {
        // produce an item
        wait(empty);
        wait(mutex);
        // add the item to the buffer
        signal(mutex);
        signal(full);
    } while (1);

Consumer process:

    do {
        wait(full);
        wait(mutex);
        // remove an item from the buffer
        signal(mutex);
        signal(empty);
        // consume the item
    } while (1);
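The pseudocode above translates almost line for line into a runnable sketch; the buffer size and item count are arbitrary:

```python
import threading
from collections import deque

N = 4                             # number of buffer slots
buffer = deque()
empty = threading.Semaphore(N)    # counts empty slots, initially n
full = threading.Semaphore(0)     # counts filled slots, initially 0
mutex = threading.Lock()          # mutual exclusion on the buffer itself
consumed = []

def producer():
    for item in range(20):        # produce items 0..19
        empty.acquire()           # wait(empty)
        with mutex:               # wait(mutex) ... signal(mutex)
            buffer.append(item)
        full.release()            # signal(full)

def consumer():
    for _ in range(20):
        full.acquire()            # wait(full)
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()           # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(20)))   # True: every item arrives, in order
```

Note the ordering: the producer waits on empty before taking mutex. Reversing those two waits can deadlock, with the producer holding the buffer lock while the consumer needs it to drain a full buffer.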

2. The Readers-Writers Problem. A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (i.e. it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

- readers priority: no new reader need wait just because a writer is waiting;
- writers priority: if a writer is waiting to access the database, no new readers may start reading.

A solution to either version may cause starvation: in the readers-priority version writers may starve, and in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses:
1. readcount = 0 (an integer counting how many processes are currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers; it provides mutually exclusive access to the shared data)

Reader process:

    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);       // the first reader locks out writers
    signal(mutex);
    // reading is performed
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);     // the last reader lets writers back in
    signal(mutex);

Writer process:

    do {
        wait(wrt);
        // writing is performed
        signal(wrt);
    } while (TRUE);
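The readers-priority pseudocode above can likewise be run directly; plain Python locks stand in for the binary semaphores mutex and wrt:

```python
import threading

readcount = 0
mutex = threading.Lock()   # protects readcount
wrt = threading.Lock()     # gives writers exclusive access
log = []

def reader(i):
    global readcount
    with mutex:
        readcount += 1
        if readcount == 1:
            wrt.acquire()          # the first reader locks out writers
    log.append(("read", i))        # reading is performed
    with mutex:
        readcount -= 1
        if readcount == 0:
            wrt.release()          # the last reader lets writers back in

def writer(i):
    with wrt:                      # wait(wrt) ... signal(wrt)
        log.append(("write", i))   # writing is performed

threads = [threading.Thread(target=reader, args=(i,)) for i in range(3)]
threads += [threading.Thread(target=writer, args=(i,)) for i in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(len(log))   # 5: all reads and writes completed without deadlock
```

One detail worth noticing: wrt may be acquired by one reader thread and released by a different one, which is exactly why a semaphore (rather than an owner-tracking mutex) is the right primitive here.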

3. The Dining Philosophers Problem. n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining philosophers problem is an example of a large class of concurrency-control problems; it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

    semaphore chopstick[5]; // initially all values are 1

Philosopher i:

    do {
        wait(chopstick[i]);
        wait(chopstick[(i + 1) % 5]);
        // eat
        signal(chopstick[i]);
        signal(chopstick[(i + 1) % 5]);
        // think
    } while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock: if all five philosophers pick up their left chopsticks at the same time, each waits forever for the right one.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: in asymmetric multiprocessing (ASMP), the CPUs are not all equal (there are master and slave CPUs), and processors are dedicated to specific tasks. For example, one processor may be dedicated to disk operations, another to video operations, etc. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint.

                                  Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples: digital audio and multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control, i.e. it spreads its decision-making actions across the different computers in the system, so that failure of individual computers or of the network does not cripple its operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

                                  Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

DESKTOP SYSTEMS A desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers) Two of the most popular operating systems for handhelds are Symbian OS and Android OS. Handhelds have very little memory.


CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                                  Q) Modern operating system

New developments in hardware, applications, and security threats led to the development of modern operating systems.

New hardware developments: increased machine speed, high-speed networks, increased size and variety of memory devices.

New application developments: multimedia applications, internet and web access, and client-server computing.

Developments in modern operating systems can be categorized as:

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development and is well suited to a distributed environment.

2. Multithreading: a process is divided into threads that can run concurrently. Useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as there are now often multiple microprocessors in a single system, an SMP operating system provides greater efficiency when there are multiple processors. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

SMP schedules processes or threads across all of the processors.

Advantages:

1. Increased performance: if some portion of a program can be run in parallel, the performance of an SMP OS increases.
2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can be made proportional to the number of processors used.

4. Distributed operating systems: distributed systems allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel. The OS can be customized without affecting system integrity.

Q) Linux Operating System It is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. Linux is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization. Components of the Linux system:


The Linux operating system has primarily three components, as shown in the below diagram:

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system. It interacts directly with hardware and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs. These are used by application programs to access the Kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks like updating log files, accepting login requests from terminals, etc.

Kernel Modules Sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module may typically implement a device driver, a file system, or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and letting them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require any context switch; hence it is very efficient and fast. The kernel runs each process and provides system services to processes, including protected access to hardware. Code which is not required to run in kernel mode is placed in system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code. User programs and utilities use system libraries to access kernel functions for low-level system tasks.

Basic Features Following are some of the important features of the Linux operating system:

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on any kind of hardware platform.

Open Source - Linux source code is freely available and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system, meaning multiple users can access system resources like memory/RAM and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system. It can be used to do various types of operations, call application programs, etc.


Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

Architecture The Linux system architecture consists of the following layers:

Hardware layer - Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - The core component of the operating system; interacts directly with hardware and provides low-level services to upper-layer components.

Shell - An interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System

1. Extensibility - layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.

2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++; processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.

4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run without changing the source code.

5. Performance - XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. Designed for symmetric multiprocessing.

6. International support - supports different locales via the national language support (NLS) API.

XP Architecture 1. Layered system of modules. 2. Protected mode - hardware abstraction layer (HAL), kernel, executive. 3. User mode - collection of subsystems:

a. Environmental subsystems emulate different operating systems. b. Protection subsystems provide security functions.

                                  XP architecture


Q) Windows Network Operating System Example: the Windows 2000 operating system.

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.




10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

2. Best-fit: Allocate the smallest hole that is big enough; must search the entire list if it is not ordered by size. Produces the smallest leftover hole.

3. Worst-fit: Allocate the largest hole; must also search the entire list. Produces the largest leftover hole.

First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
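The three allocation strategies above can be sketched as a small function; the hole sizes and the 212 KB request here are illustrative, not from the notes:

```python
def allocate(holes, request, strategy):
    """Pick a hole index for a `request`-sized allocation.

    holes: list of free-hole sizes (in KB). Returns the index of the
    chosen hole, or None if no hole is large enough.
    """
    # Collect every hole big enough to satisfy the request.
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return min(candidates, key=lambda c: c[1])[1]  # lowest index wins
    if strategy == "best":
        return min(candidates)[1]                      # smallest adequate hole
    if strategy == "worst":
        return max(candidates)[1]                      # largest hole
    raise ValueError("unknown strategy: " + strategy)

holes = [100, 500, 200, 300, 600]
# A 212 KB request: first fit picks the 500 KB hole (index 1),
# best fit the 300 KB hole (index 3), worst fit the 600 KB hole (index 4).
```

Note how best-fit leaves the smallest leftover hole (300 − 212 = 88 KB) and worst-fit the largest (600 − 212 = 388 KB), matching the descriptions above.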

===============================================================================

Q) Non-Contiguous Memory allocation

1. Paging: Paging is a memory management scheme that provides non-contiguous memory allocation.

0   P1  Block0 (addresses 0 to 3)
4   P2  Block1 (addresses 4 to 7)
8       Block2 (addresses 8 to 11)
12  P1  Block3 (addresses 12 to 15)

The logical address space of a process can be noncontiguous. Ex: P1 is allocated Block0 and Block3.

1. Divide physical memory into fixed-sized blocks called frames (the size is a power of 2).
2. Divide logical memory into blocks of the same size called pages.
3. A logical address is divided into 2 parts:

Page number (p) - the page table is indexed by page number.
Page offset (d) - the displacement within the page.

4. A page table is allocated to each process (a pointer to the page table is stored in the PCB of the process). The page table translates logical to physical addresses: page 0 is in frame 1, page 1 is in frame 4, etc.
5. Internal fragmentation may occur due to paging.

6. If the size of the logical address space is 2^m and the page size is 2^n, then the higher-order m−n bits of the logical address give the page number and the n lower-order bits give the displacement within the page.

Ex: size of logical address space = 8 = 2^3, so m = 3.

Page size = 4 = 2^2, so n = 2.

So m−n = 1 bit for the page number, to represent 0 or 1.

The remaining n = 2 bits give the displacement within the page.
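The bit-splitting described in point 6 can be sketched directly; the example uses the same m = 3, n = 2 values as above:

```python
def split_logical_address(addr, m, n):
    """Split an m-bit logical address into (page number, offset)
    for a page size of 2**n."""
    assert 0 <= addr < 2 ** m, "address outside the logical address space"
    page = addr >> n                 # high-order m-n bits: page number
    offset = addr & ((1 << n) - 1)   # low-order n bits: offset in page
    return page, offset

# m = 3, n = 2 as in the example: address 6 = 0b110 -> page 1, offset 2
print(split_logical_address(6, 3, 2))
```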

                                    Q) Implementation of Page Table

                                    Page table can be kept as

1) As a set of dedicated registers: efficient if the page table is small.

2) The page table can be kept in memory: a page-table base register (PTBR) points to the page table, and a page-table length register (PTLR) indicates its size. Changing the page table then requires changing only the values in these registers.

Advantage of keeping the page table in memory: less context-switch time.

Disadvantage: two memory accesses are required - one memory access to read the page-table entry and another to access the required memory address.

This problem can be solved by using a special fast-lookup hardware cache called associative memory or a translation look-aside buffer (TLB).

i. Each entry in the TLB consists of 2 parts:

I) a page-number field

II) a value field (frame-number field)

ii. When a logical address is generated by the CPU, its page number is presented to the TLB. The page number is compared with all the entries of the TLB simultaneously.

If the page number is found in the TLB, its frame number is immediately available -> TLB hit. If the page number is not found, the page table is checked -> TLB miss.

By using a TLB the search is fast, but TLB hardware is expensive, so the TLB size is kept small. Hence the TLB contains only a few frequently used page-table entries.
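The hit/miss behaviour can be sketched in software; the class name, capacity of 2, and the LRU-style eviction of old TLB entries are illustrative assumptions, not details from the notes:

```python
from collections import OrderedDict

class TLB:
    """Toy TLB: a small cache of page -> frame entries consulted
    before the (slower, in-memory) page table."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()   # page number -> frame number

    def translate(self, page, page_table):
        if page in self.entries:            # TLB hit: frame known at once
            self.entries.move_to_end(page)
            return self.entries[page], "hit"
        frame = page_table[page]            # TLB miss: extra memory access
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the oldest entry
        self.entries[page] = frame
        return frame, "miss"

page_table = {0: 1, 1: 4, 2: 3, 3: 7}      # page -> frame (illustrative)
tlb = TLB(capacity=2)
```

A first reference to a page misses; a repeated reference hits; once the small TLB is full, translating a new page evicts the oldest cached entry.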

Q) Protection in Paging A valid/invalid bit is attached to each entry in the page table. The OS sets this bit for each page to allow or disallow access to the page. When the bit is set to invalid, the page is not in the process's logical address space; hence a reference to it generates a trap to the OS.

                                    Q) Shared pages in Paging

Reentrant code is code that never changes during execution. Reentrant code can be shared.

1. One copy of read-only (reentrant) code is shared among processes (e.g., text editors, compilers, window systems). Ex: ed1, ed2, ed3 are shared among processes P1 and P2.

2. Shared code must appear in the same location in the logical address space of all processes.

Each process keeps a separate copy of its private code and data. Ex: data1 and data2.

The pages for the private code and data can appear anywhere in the logical address space.

==============================================================================

Q) Segmentation

Segmentation is a memory management scheme that supports the user's view of memory.

                                    When the user program is compiled the compiler generates segments like

                                    1) The code segment

                                    2) Global variables segment

                                    3) Heap memory segment

                                    4) Stack segment etc

Each entry in the segment table has a segment base (the starting physical address of the segment) and a limit (the length of the segment).

                                    Logical address is divided into 2 parts

                                    segment-number offset

                                    Segment table is indexed by segment number

                                    Segment-table base register (STBR) stores the location of segment table in main memory

                                    Segment-table length register (STLR) stores number of segments used by a program

The segment number ('s') is used to find the entry in the segment table: the required entry is 's' locations from the beginning of the table. Once the required entry is found, the offset ('d') is compared with the limit.

If (offset < limit), the offset is added to the base entry to generate the physical address; otherwise a trap is generated.
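The translation just described can be sketched as follows; the (base, limit) values in the example segment table are illustrative:

```python
def segment_to_physical(logical, segment_table):
    """Translate (segment number, offset) to a physical address.

    segment_table: list of (base, limit) pairs indexed by segment number.
    Raises MemoryError (standing in for a hardware trap) when the offset
    is not below the segment limit.
    """
    s, d = logical
    base, limit = segment_table[s]
    if d >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + d        # physical address = base + offset

# Illustrative table: segment 0 at 1400 (len 1000), 1 at 6300 (len 400),
# 2 at 4300 (len 400).
segment_table = [(1400, 1000), (6300, 400), (4300, 400)]
# (segment 2, offset 53) -> 4300 + 53 = 4353
```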

Q) Shared Segments Code sharing occurs at the segment level. Shared segments must have the same segment number.

Allocation is a dynamic storage-allocation problem: use best fit/first fit; segmentation may cause external fragmentation.

Protection: protection bits (read/write/execute privileges) are associated with segments. Placing an array in a separate segment lets the hardware check for illegal array indexes.

===============================================================================

Q) Virtual memory

Ans In paging and segmentation, a program will execute only if the entire process is in main memory. With virtual memory, only part of the program needs to be in memory for execution.

1. A program larger than the available main memory can still run, as only part of the program is loaded into main memory. The logical address space can therefore be much larger than the physical address space.

2. Virtual memory allows address spaces to be shared by several processes.

Virtual memory can be implemented via demand paging or demand segmentation.

The large blank space between heap and stack is part of the virtual address space. Virtual memory allows files and memory to be shared by 2 or more processes through page sharing, as shown in figure (b). Figure (a) depicts virtual memory that is larger than physical memory.

                                    ==============================================================================

Q) Demand paging Demand paging is the technique of loading pages from disk to main memory only when a page is needed. Hence, using a smaller amount of physical memory, we get faster response.

                                    Demand paging is similar to PAGING + SWAPPING

Instead of swapping in a whole process, the pager brings into memory only those pages that are needed.

To distinguish between pages that are in memory and pages that are on disk, a valid/invalid bit is used.

A valid/invalid bit is attached to each entry in the page table. When this bit is set to valid, the page is legal and in memory. When this bit is set to invalid, either the page is illegal or the page is on disk.

When the CPU generates a logical address whose page-table entry is set to invalid, a page fault occurs and the required page must be fetched from disk. A section of the hard disk called swap space is used to hold pages that are not present in memory.

                                    Q) Procedure for Handling a Page Fault

1) The CPU generates a logical address; if the valid/invalid bit for the page containing this address is set to invalid, a page fault occurs.

2) The page fault causes a trap to the operating system.

3) Check whether the logical address is within the logical address space of the process (the PTBR and PTLR are checked).

If (the logical address is not within the logical address space of the process)
    terminate the process;
else
    the page is legal but not in memory - it is on disk.

4) To bring the required page into memory: find a free main-memory frame, map the address to a disk block, fetch the disk block, and load it into the free frame.
5) When the required page is brought into memory, update the page table to indicate that the page is in memory.
6) Restart the instruction interrupted by the trap. The process continues as if the page had always been in memory.

Q) What is pure demand paging? Ans: If no page belonging to the executing process is in main memory, the process will fault for every page it needs. Page faults occur until every page that is needed is in memory. This scheme is known as pure demand paging: NEVER BRING IN A PAGE UNTIL IT IS REQUIRED.

Q) Page Replacement
1. Find the location of the desired page on the disk.
2. Find a free frame:

i) If there is a free frame, use it.
ii) If there is no free frame, use a page-replacement algorithm to find some page in memory that is not really in use (the victim) and swap it out.
iii) If the modify bit associated with the victim frame is set to 1, the page is written to disk; otherwise there is no need to write it to disk. Change the valid/invalid bit of the victim page's page-table entry to invalid to indicate that it is no longer in memory.

3. Read the desired page into the newly freed frame and update the frame and page tables.
4. Restart the user process.

=================================================================================================

Q) Page Replacement Algorithms Goal: produce a low page-fault rate. The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

1. First-In First-Out (FIFO) page replacement: uses the time when a page was brought into memory. The page that was brought into memory first is replaced, i.e., the oldest page is chosen for replacement. We create a FIFO queue to hold all pages in memory.

When a page is brought into memory, we insert it at the tail of the queue, and we replace the page at the head of the queue.

FIFO: 15 page faults. A problem with FIFO is Belady's anomaly. For the reference string 1 2 3 4 1 2 5 1 2 3 4 5: with 3 frames allocated to the process, FIFO causes 9 page faults, but with 4 frames it causes 10 page faults. As the number of frames allocated to a process increases, page faults should decrease; with FIFO page replacement, however, they can increase.

2. Optimal page replacement algorithm: replace the page that will not be used for the longest period of time. Used for measuring how well other algorithms perform. It is not practical to implement the OPT algorithm, as it is not possible to know what the future references will be. Ex:
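A minimal FIFO simulator makes Belady's anomaly easy to check; the second reference string below is assumed to be the 20-reference example that yields the 15 faults mentioned above:

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO page replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page in frames:
            continue                         # page already in memory: no fault
        faults += 1
        if len(frames) == num_frames:
            frames.remove(queue.popleft())   # replace the oldest page
        frames.add(page)
        queue.append(page)
    return faults

belady = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(belady, 3))  # 9 faults
print(fifo_faults(belady, 4))  # 10 faults: more frames, more faults
```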

                                    Optimal page replacement 9 page faults

3. Least Recently Used (LRU) page replacement: replace the page that has not been used for the longest period of time. LRU makes use of the time of last use of each page. Ex:

Number of page faults = 12

LRU requires hardware assistance to determine the time of last use of a page. We can use 1) a stack or 2) a counter to implement the LRU page replacement algorithm.

1) Stack implementation - keep a stack of page numbers in a doubly linked form. When a page is referenced, it is moved to the top of the stack. This implementation requires 6 pointers to be changed per reference.

2) Counter implementation - a time-of-use field is allocated to each page-table entry. The CPU maintains a counter that is incremented for every memory reference, and the counter value is copied into the time-of-use field of the referenced page.
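The stack scheme above can be sketched with an ordinary list standing in for the doubly linked stack (most recently used page at the end); the reference string in the test is assumed to be the 20-reference example that gives the 12 faults quoted above:

```python
def lru_faults(reference_string, num_frames):
    """Count page faults under LRU page replacement.

    The list acts like the stack described above: a referenced page is
    moved to the top (end), so the bottom (front) is always the least
    recently used page and is the one replaced.
    """
    stack, faults = [], 0
    for page in reference_string:
        if page in stack:
            stack.remove(page)   # referenced page moves to the top
        else:
            faults += 1
            if len(stack) == num_frames:
                stack.pop(0)     # evict the least recently used page
        stack.append(page)
    return faults
```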

Disadvantages of the LRU page replacement algorithm: the counter or stack must be updated for every memory reference. HENCE LRU IS SLOW, and the implementation requires hardware assistance.

==============================================================================

Q) LRU approximation page replacement
1) Using a reference bit
2) Using the additional-reference-bits algorithm
3) Second-chance algorithm
4) Enhanced second-chance algorithm
5) Counting-based page replacement

This has two schemes: 1) Least Frequently Used (LFU) and 2) Most Frequently Used (MFU).

6) Page-buffering algorithms

Q) What is Thrashing? Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur when global (or) local page replacement is used, as described below. The solution to thrashing is the working-set strategy: if we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know in advance how many frames a process needs, we use the locality model of process execution.

Here a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes locality.

The working-set model uses a working set and a parameter ∆ = the working-set window. The set of pages in the most recent ∆ page references is the working set. Let WSSi be the working-set size for process Pi, i.e., Pi needs WSSi frames. Let D be the total demand for frames; then D = Σ WSSi.

Let m be the total number of available frames.

If D > m, thrashing occurs, because some processes will not have enough frames. So:

if (D > m)
    the OS selects a process to suspend;
else
    the OS may initiate another process.

The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.

========================================================================

Q) How does thrashing occur in global and local page replacement? Ans: With global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, possibly causing a page fault for another process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more processes queue up for the paging device,

the ready queue empties and the CPU becomes idle. So the OS introduces new processes since the CPU is free, and this further increases the number of page faults.

With local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs only the frames allocated to that process can be replaced. When a process starts thrashing, other processes are not affected. But since processes will still queue for the paging device, the average service time for a page fault increases; hence the effective access time increases even for a process that is not thrashing.

Q) Page Table Structure (or) Structure of the page table in memory

                                    Hierarchical Paging Hashed Page Tables Inverted Page Tables

Hierarchical Paging If the page table is large, we break it up into multiple page tables, i.e., the page table itself is paged. A simple technique is a two-level page table.

Two-level paging example: a logical address (on a 32-bit machine with 4 KB page size) is divided into 1. a page number consisting of 20 bits and 2. a page offset consisting of 12 bits. Since the page table is paged, the page number is further divided into 1. a 10-bit page number and 2. a 10-bit page offset. Thus a logical address is as follows:

page number | page offset
  p1  |  p2  |  d
  10  |  10  |  12

where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table.

Two-Level Page-Table Scheme

                                    Address-Translation Scheme Address-translation scheme for a two-level 32-bit paging architecture
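The 10 + 10 + 12 split for this two-level scheme can be sketched with plain bit operations (the sample address is illustrative):

```python
def split_two_level(addr):
    """Split a 32-bit logical address (4 KB pages) into
    (p1, p2, d): 10-bit outer index, 10-bit inner index, 12-bit offset."""
    d = addr & 0xFFF             # low 12 bits: offset within the page
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: entry in the inner page table
    p1 = addr >> 22              # high 10 bits: entry in the outer page table
    return p1, p2, d

# Illustrative address 0x00403004 -> p1 = 1, p2 = 3, d = 4
print(split_two_level(0x00403004))
```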

Hashed Page Tables (or) Hash table Hashed page tables are common in address spaces > 32 bits. Each entry in the hash table contains a linked list of elements that hash to the same location. The hash table is indexed by hash value. Each element consists of 3 fields:

1. Page number 2. Frame number 3. Pointer to the next element in the linked list

The algorithm works as follows: the page number is passed to a hash function to get a hash value. Locating the hash value in the hash table is easy, as the hash table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address. If there is no match, subsequent entries in the linked list are searched.
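The lookup just described can be sketched with Python lists standing in for the chains; the bucket count and modulo hash are illustrative assumptions:

```python
class HashedPageTable:
    """Toy hashed page table: each bucket holds a chain of
    (page number, frame number) pairs that hash to the same slot."""

    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _hash(self, page):
        return page % len(self.buckets)   # simple illustrative hash function

    def insert(self, page, frame):
        self.buckets[self._hash(page)].append((page, frame))

    def lookup(self, page):
        # Walk the chain at the hashed slot, comparing page numbers.
        for p, frame in self.buckets[self._hash(page)]:
            if p == page:
                return frame
        return None                       # no match: not mapped (page fault)

hpt = HashedPageTable(num_buckets=8)
hpt.insert(3, 10)
hpt.insert(11, 20)   # 11 % 8 == 3: collides with page 3, same chain
```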

                                    Inverted Page Table

The inverted page table has one entry for each frame of physical memory. Each entry holds a process-id and a page number.

A logical address has 3 parts: process-id, page number, and offset. The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, then frame number i combined with the offset gives the physical address. If there is no match, the reference is an illegal address.

Advantages and disadvantages: the inverted page table decreases the memory needed to store page tables (one table serves the whole system), but it increases the time needed to search the table when a page reference occurs. Solution: use a hash table to limit the search to one, or at most a few, page-table entries. (Inverted Page Table Architecture)
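A minimal sketch of the search described above, with a toy table of three frames (the entries and page size are made up for illustration):

```python
# Sketch of an inverted page table: a list with one entry per physical
# frame; entry i describes the (pid, page) held in frame i.
PAGE_SIZE = 4096

def translate(ipt, pid, page_number, offset):
    # Linear search for the frame whose entry matches (pid, page_number).
    for frame, entry in enumerate(ipt):
        if entry == (pid, page_number):
            return frame * PAGE_SIZE + offset
    raise LookupError("illegal address")

ipt = [(1, 0), (2, 7), (1, 3)]   # toy contents of frames 0..2
```

The linear scan is exactly the cost the notes warn about; a hash on (pid, page) would shortcut it.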

                                    Unit-IV CPU Scheduling

Basic concepts: To maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns it to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle: Process execution begins with a CPU burst, followed by an I/O burst, followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

An I/O-bound program has many short CPU bursts; a CPU-bound program has a few long CPU bursts.

Schedulers: modules in the OS that make scheduling decisions.
1. Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
2. Medium-term scheduler: selects which processes should be swapped in/out of memory.
3. Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (CPU) scheduler.

Dispatcher: gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

1. switching context
2. switching to user mode
3. jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency; it must be kept small.

Ready queue: the ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of the processes.

                                    Pre-emptive Scheduling

CPU scheduling decisions take place under one of four conditions:

1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state, for example in response to an interrupt
3. When a process switches from the waiting state to the ready state, say at completion of I/O
4. When a process terminates

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one. If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive, or cooperative. In a non-preemptive system a process runs until it gives up the CPU itself, i.e., when it waits for I/O or when it completes. Otherwise the system is said to be pre-emptive.

Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to Mac OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

                                    Note that pre-emptive scheduling can cause problems when two processes share data because one process may get interrupted in the middle of updating shared data structures

                                    Preemption can also be a problem if the kernel is busy implementing a system call ( eg updating critical kernel data structures ) when the preemption occurs Solution wait until the system call has either completed or blocked before allowing the pre-emption But this solution is problematic for real-time systems

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. But this should be done only on very short pieces of code that take little CPU time.

Scheduling criteria used to compare CPU scheduling algorithms:
1. CPU utilization: in a real system, CPU utilization ranges from about 40% to 90%.
2. Throughput: number of processes that are completed per unit time.
3. Turnaround time: time at which the process completes minus the time at which it was submitted; it includes waiting time and burst time.
4. Waiting time: amount of time a process spends waiting in the ready queue for the CPU.
5. Response time: time at which the first response is produced minus the time at which the process was submitted.

Optimization criteria: maximize CPU utilization and throughput; minimize turnaround time, waiting time, and response time.

CPU Scheduling Algorithms

1) First-Come, First-Served (FCFS) Scheduling
FCFS is non-pre-emptive: processes are scheduled in the order they arrive. FCFS is implemented with a queue; when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages:
1. Average waiting time is often quite long.
2. Convoy effect (small processes wait in the queue for a big process to leave the CPU).

Example:
Process   Burst Time
P1        24
P2        3
P3        3

Suppose the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

Waiting time for P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.

2) Shortest-Job-First (SJF) Scheduling
Schedule the process with the shortest CPU burst time. If the burst times of two processes are equal, FCFS is used.
Advantages: average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either pre-emptive or non-pre-emptive:
1. Non-pre-emptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Pre-emptive SJF, or Shortest-Remaining-Time-First (SRTF): if a newly arrived process is shorter than what remains of the currently executing process, pre-empt the currently executing process.

Example:
Process   Arrival Time   Burst Time
P1        0              7
P2        2              4
P3        4              1
P4        5              4

SJF (non-pre-emptive) Gantt chart:

| P1 | P3 | P2 | P4 |
0    7    8    12   16

Average waiting time = [0 + (8-2) + (7-4) + (12-5)]/4 = 4
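The non-pre-emptive SJF schedule above can be checked with a small simulation (a sketch, assuming burst lengths are known in advance):

```python
# Non-pre-emptive SJF sketch: whenever the CPU frees up, run the arrived
# process with the shortest burst to completion.
def sjf_waiting_times(procs):
    # procs: list of (arrival, burst); returns waiting time per process
    n = len(procs)
    done, waits = [False] * n, [0] * n
    clock = 0
    for _ in range(n):
        ready = [i for i in range(n) if not done[i] and procs[i][0] <= clock]
        if not ready:  # CPU idle: jump to the next arrival
            clock = min(procs[i][0] for i in range(n) if not done[i])
            ready = [i for i in range(n) if not done[i] and procs[i][0] <= clock]
        i = min(ready, key=lambda j: procs[j][1])   # shortest burst wins
        waits[i] = clock - procs[i][0]
        clock += procs[i][1]
        done[i] = True
    return waits

waits = sjf_waiting_times([(0, 7), (2, 4), (4, 1), (5, 4)])   # P1..P4
```

The resulting per-process waits are 0, 6, 3, and 7, matching the terms in the average above.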

Example of Pre-emptive SJF:
Process   Arrival Time   Burst Time
P1        0              7
P2        2              4
P3        4              1
P4        5              4

SJF (pre-emptive) Gantt chart:

| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16

Average waiting time = (9 + 1 + 0 + 2)/4 = 3

3) Priority Scheduling
A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer means highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. pre-emptive 2. non-pre-emptive.

A problem with priority scheduling is starvation: low-priority processes may never execute. The solution is aging: as time progresses, increase the priority of long-waiting processes.
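The pre-emptive SJF (SRTF) example shown earlier can likewise be reproduced with a one-time-unit-per-step simulation (again assuming known burst lengths):

```python
# SRTF (pre-emptive SJF) sketch: at every time unit, run the arrived
# process with the shortest remaining time.
def srtf_waiting_times(procs):
    # procs: list of (arrival, burst); returns waiting time per process
    remaining = [b for _, b in procs]
    finish = [0] * len(procs)
    clock = 0
    while any(r > 0 for r in remaining):
        ready = [i for i, (a, _) in enumerate(procs)
                 if a <= clock and remaining[i] > 0]
        if not ready:          # nothing has arrived yet
            clock += 1
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1      # run process i for one time unit
        clock += 1
        if remaining[i] == 0:
            finish[i] = clock
    # waiting time = finish - arrival - burst
    return [finish[i] - a - b for i, (a, b) in enumerate(procs)]

waits = srtf_waiting_times([(0, 7), (2, 4), (4, 1), (5, 4)])   # P1..P4
```

The per-process waits come out as 9, 1, 0, and 2, matching the average of 3 computed above.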

4) Round Robin (RR)
Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance:
1. q large: RR behaves like FCFS.
2. q small: q must still be large with respect to the context-switch time, otherwise the overhead is too high.

Example of RR with time quantum = 4:
Process   Burst Time
P1        24
P2        3
P3        3

The Gantt chart is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 = 5.66

5) Multilevel Queue Scheduling
The ready queue is partitioned into separate queues.

For example, the ready queue may be divided into 2 queues:
1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, round robin can be used in the foreground queue and FCFS in the background queue. Scheduling must also be done between the queues, and this can be done in 2 ways:

1. Fixed-priority scheduling: the foreground queue has the highest priority. All processes in the foreground queue must be completed and the queue must be empty before the processes in the background queue are given the CPU, as shown in the figure below. Disadvantage: starvation.
2. Time slice: each queue gets a certain share of CPU time, which it schedules among its own processes, e.g., 80% to the foreground queue in RR and 20% to the background queue in FCFS.
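Returning to round robin: the quantum-4 example above (bursts 24, 3, 3, all arriving at time 0) can be reproduced with a short sketch:

```python
# Round-robin sketch: quantum q, all processes arriving at t = 0.
from collections import deque

def rr_waiting_times(bursts, q):
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queue = deque(range(len(bursts)))
    clock = 0
    while queue:
        i = queue.popleft()
        run = min(q, remaining[i])   # run for a quantum, or less if done
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)          # preempted: back to the tail
        else:
            finish[i] = clock
    # with arrival time 0, waiting time = finish - burst
    return [finish[i] - bursts[i] for i in range(len(bursts))]

waits = rr_waiting_times([24, 3, 3], q=4)   # P1, P2, P3
```

The per-process waits are 6, 4, and 7, matching the 17/3 average computed above.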

6) Multilevel Feedback Queue Scheduling
A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes stay in the highest-priority queue. If a process uses too much CPU time, it is moved to a lower-priority queue; if a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues: Q0, Q1, Q2.

1. Q0: time quantum 8 milliseconds
2. Q1: time quantum 16 milliseconds
3. Q2: FCFS

Scheduling:

1. A process entering the ready queue is put in Q0. When it gains the CPU, it receives 8 ms; if it does not finish within 8 ms, it is moved to Q1.
2. In Q1 the process receives an additional 16 ms; if it still does not complete, it is moved to Q2.

Q) Synchronization
Several processes run in an operating system, and some of them share resources, due to which problems like data inconsistency may arise.

A classic example of process synchronization is the Producer-Consumer problem, also called the Bounded-Buffer problem: the producer process must not add data when the buffer is full, and the consumer process must not take data when the buffer is empty.

The code for the producer process can be written as follows:

    while (true) {
        /* produce an item in nextProduced */
        while (counter == BUFFER_SIZE)
            ;                        /* do nothing: buffer full */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed. in is an index that always points to the next empty slot after the last filled slot; out is an index that always points to the first filled slot, at the head of the circular queue.

The code for the consumer process is as shown below:

    while (true) {
        while (counter == 0)
            ;                        /* do nothing: buffer empty */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
    }

The above producer and consumer code produces a RACE CONDITION. A race condition occurs when the outcome of execution of several co-operating processes depends on the particular order in which their accesses to shared data take place.

For example, suppose counter = 5. If the producer produces one item and the consumer consumes one item, the correct final value is counter = 5. But if the machine-level increment and decrement interleave badly, the result may be counter = 4, and if statement T5 is executed before T4 in the instruction-level trace, the result may be counter = 6; both are inconsistent results.

Solution to avoid the race condition: make sure that only one process at a time updates the shared variable (here, counter).
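As an illustration (using Python threads rather than separate processes, and showing only the shared-counter update, not the full bounded buffer), a mutex lock enforces one-at-a-time updates:

```python
# Guarding the shared counter with a mutex so that increments and
# decrements cannot interleave mid-update.
import threading

counter = 0
lock = threading.Lock()

def producer(n):
    global counter
    for _ in range(n):
        with lock:           # only one thread updates counter at a time
            counter += 1

def consumer(n):
    global counter
    for _ in range(n):
        with lock:
            counter -= 1

t1 = threading.Thread(target=producer, args=(100000,))
t2 = threading.Thread(target=consumer, args=(100000,))
t1.start(); t2.start()
t1.join(); t2.join()
```

With the lock, equal numbers of increments and decrements always leave counter back at 0; without it, the non-atomic updates could interleave and lose counts.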

Q) Critical section problem
Ans: Each process has a segment of code called its critical section: the code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical-section code at the same time. For this to happen, each process's code is structured as:
1. Entry section: the process requests permission to enter its critical section.
2. Exit section: marks the end of the critical section.
3. Remainder section: the rest of the code after the critical section.

A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: if no process is in its critical section, only processes that are not in their remainder sections take part in deciding which process enters next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: there is a bound on the number of times other processes may enter their critical sections after a process has requested entry and before that request is granted, so no process waits indefinitely.

Q) Peterson's solution for the critical section problem (software-based solution)
Ans: The two processes share an array flag[1..2] (flag[i] is TRUE when process i wants to enter) and a variable turn (which process must yield). Process 1 runs:

    do {
        flag[1] = TRUE;
        turn = 2;
        while (flag[2] && turn == 2)
            ;                      /* busy wait */
        /* critical section */
        flag[1] = FALSE;
        /* remainder section */
    } while (TRUE);

Process 2 is symmetric, with flag[2], turn = 1, and the test while (flag[1] && turn == 1).

A sample interleaving:
T0: Process 1 sets flag[1] = TRUE and turn = 2; Process 2 sets flag[2] = TRUE.
T1: Process 2 sets turn = 1.
T2: Process 1 tests while (flag[2] && turn == 2); since turn is now 1, the loop does not run. Process 2 tests while (flag[1] && turn == 1) and loops continuously until flag[1] becomes false.
T3: Process 1 enters its critical section.
T4: Process 1 sets flag[1] = FALSE; Process 2 then exits its loop and enters its critical section.
T5: Process 1 enters its remainder section; Process 2 sets flag[2] = FALSE and enters its remainder section.
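The same protocol can be sketched with two Python threads. This is only an illustration: it leans on CPython's interpreter lock for sequentially consistent loads and stores, whereas on real multiprocessor hardware plain reads and writes would also need memory barriers:

```python
# Peterson's algorithm for two threads (0 and 1), as a hedged sketch.
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so busy-waits stay short

flag = [False, False]   # flag[i]: thread i wants to enter
turn = 0                # which thread must yield
count = 0               # shared data updated inside the critical section

def worker(me):
    global turn, count
    other = 1 - me
    for _ in range(2000):
        flag[me] = True                        # entry section
        turn = other                           # politely let the other go first
        while flag[other] and turn == other:
            pass                               # busy wait
        count += 1                             # critical section (non-atomic)
        flag[me] = False                       # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
```

If mutual exclusion holds, the 4000 non-atomic increments never interleave and no update is lost.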

10.1 File Concept
Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure, which depends on its type.

                                    File Attributes

1. Name: name of the file, for user understanding.
2. Identifier: used by the OS to identify a file (e.g., inode number).
3. Type: text files, executable files, binary files, etc.
4. Location: location of the file on the hard drive.
5. Size: current size of the file.
6. Protection: controls who can read, write, or execute the file.
7. Time, date, and user identification: records when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier locates the other file attributes. The directory structure is kept on disk.

                                    File Operations

The file abstract data type supports many common operations:
1. Creating a file: find free space on disk and add an entry to the directory.
2. Writing a file: write data at the current file-position pointer and update the pointer.
3. Reading a file: read data at the current file-position pointer and update the pointer.
4. Repositioning within a file (file seek): change the pointer to a given value.
5. Deleting a file: free the space allocated to the file and remove its directory entry.
6. Truncating a file: delete the data and update the file size.

                                    Open File Table

Most operating systems require that files be opened before access and closed after access. Information about all currently open files in the system is stored in a system-wide open-file table maintained by the OS, and information about the files opened by a process is stored in that process's own open-file table. An open-file table entry holds the following:

1. File pointer: records the current position in the file for the next read or write.
2. File-open count: how many times the file has been opened (simultaneously, by different processes) and not yet closed. When this counter reaches zero, the entry can be removed from the table.
3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With a mandatory lock, the OS makes sure the locks are acquired and released appropriately.
4. With an advisory lock, software developers must make sure the locks are acquired and released appropriately.
5. UNIX uses advisory locks; Windows uses mandatory locks.

                                    File Types

Implementing file types: a common technique is to include the type as part of the filename. The name is split into 2 parts:

1. File name
2. Extension

The file name and extension are separated by a period. The system uses the extension as a hint to the type of the file and the type of operations that can be done on that file. For example, when the user opens Microsoft Word and clicks File and then Open, the user specifies only the file name without the extension; the application looks for a file with the given name and the extension it expects. Such extensions are hints used by applications and are not enforced by the operating system. UNIX systems use a crude magic number stored at the beginning of some files to indicate the type of the file. In the Mac OS X operating system, a file produced by a word processor has the word processor's name as its creator; when the user double-clicks the file, that word processor opens it. Some common file types are as shown below.

File Structure: Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and the location of the first instruction.

If the OS supports many file formats, its size and complexity increase. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, find the first executable statement in, and so on).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently; the data fork contains the code or data.

Internal File Structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units such as single bytes, data records, or structures. The number of logical units that fit into one physical block determines the packing. For example, if a student file is organized internally as records of 64 bytes each, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e., 8 records can be stored per block. Internal fragmentation may occur because of this, since the last block of a file is rarely exactly full.

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or) relative access

1. Sequential access: records are read and written sequentially, one record after another; we cannot randomly access an arbitrary record. A sequential-access file emulates magnetic tape operation and generally supports the following operations:
- read next: read a record and advance to the next position
- write next: write a record and advance to the next position
- rewind
- skip n records: may or may not be supported; n may be limited to positive numbers, or limited to +/- 1

2. Direct access (or) relative access: the file is made up of fixed-length records, so we can randomly access any record. There is no restriction on the order of reading or writing. Direct-access files are useful when we need to access large amounts of data randomly, e.g., databases.

Operations supported include the following, where n is a relative block number, i.e., relative to the beginning of the file:

1. read n: read record number n
2. write n: write record number n
3. jump to record n: n could be 0 or the end of the file
4. query current record: used to return to this record later
5. Sequential access can easily be emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct-access method. We construct an index for the file containing pointers to the various blocks; to search the file, we first search the index and then use the pointer to access the file directly.

For example, suppose a student file has 120000 records sorted by student id, and we create an index holding the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120000 records is 120000/64 = 1875, so the index has 1875 entries. The index is kept in main memory so that searching becomes faster. If the index itself becomes large, we create an index for the index file (e.g., a B-tree).
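The index lookup described above can be sketched as follows; the ids in the toy index are made up, and the standard bisect module does the binary search:

```python
# In-memory index sketch: index_keys[i] is the first (smallest) key stored
# in data block i; one binary search picks the block to read from disk.
from bisect import bisect_right

def find_block(index_keys, key):
    # The record with `key` can only live in the last block whose
    # first key is <= key.
    i = bisect_right(index_keys, key) - 1
    return i if i >= 0 else None     # None: key precedes all blocks

index_keys = [100, 164, 228, 292]    # toy index, 64 ids per block
```

One index probe in memory replaces a scan of the file; only the chosen block is then fetched with a direct access.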

                                    Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

Three major methods of allocating disk space are:

1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file indicates the address of the starting block (b) and the length (n).

Advantages:

1. For direct access to block i, we can find it directly as block b+i; and since the file is stored sequentially, sequential access is easy. Hence access is easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

Disadvantages:

1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a run of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even when there are enough free blocks for a new file, they may not be contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire space may not be used at the beginning, which leads to wasted space. If the size of the file is not known in advance, then (i) if we allocate too little space, the file cannot be extended; the solution is to find a larger hole, copy the contents of the file to the new space, and release the previous space, which is possible as long as such space exists but is time consuming; (ii) if we overestimate the amount of space needed, much of the space goes unused, which leads to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme. A run of contiguous blocks is allocated at first; if the file needs more blocks, another run of contiguous space, known as an extent, is added. The directory entry then records the location and block count of the first run plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2. Linked allocation: each file is a linked list of disk blocks, and the blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0.

To add more data to an existing file, a free block is found, the data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. Easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is high.
2. The number of disk seeks required is high.
3. Pointers consume a noticeable percentage of the file's disk space.
4. Inefficient for direct-access files: to access the i-th block, i disk reads are required.
5. Not reliable: the file is damaged if pointers are lost or corrupted.

A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters to files instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk access time, reduces the space wasted on pointers (fewer pointers are needed), and simplifies free-list management. Disadvantage: internal fragmentation; more space is wasted when a cluster is partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number; each entry contains the block number of the next block of the file. The last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

Advantages:

1. Easy to find space for a new file or to add more data to an existing file.
2. Random access time is improved, since a chain can be followed by reading FAT entries rather than every data block.

Disadvantage: the number of disk seeks is high; the disk head must move to read the FAT and find the location of a block, then move to the location of the block itself.
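A sketch of following a FAT chain, using the 5-block example file from linked allocation (blocks 9, 16, 1, 10, 25); the dict stands in for the on-disk table:

```python
# FAT sketch: fat[b] holds the number of the block that follows block b
# in its file; EOF marks the last block of a file, 0 marks a free block.
EOF = -1

def file_blocks(fat, start):
    # Follow the chain from the file's starting block to end-of-file.
    blocks, b = [], start
    while b != EOF:
        blocks.append(b)
        b = fat[b]
    return blocks

fat = {9: 16, 16: 1, 1: 10, 10: 25, 25: EOF}   # the 5-block example file
```

Note that the chain is walked entirely inside the table; the data blocks themselves are only read once their numbers are known.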

3. Indexed allocation

Each file has its own index block, an array of disk-block addresses: the i-th entry in the index block points to the i-th block of the file. The directory contains the address of the index block.

When the file is created, all pointers in the index block are set to nil.

To add more data to an existing file, a free block is found, the data is written to it, and the address of the block is put in the index block.

Advantages:

1. Easy to find space for a new file or to add more data to an existing file.
2. Supports direct (or random) access efficiently.
3. No external fragmentation.

Disadvantages:

1. Seek time is high.
2. The number of disk seeks required is high.
3. Not reliable: losing the index block loses the file.
4. The index block must be kept in memory. If this memory is not available, we have to read the index block and then the desired data block, i.e., 2 disk accesses are required. With chained or multilevel index blocks, accessing a block near the end of a large file may require reading several index blocks before the data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

                                    Various mechanisms followed for size of index block

                                    1 linked scheme size of index block is one disk block Index block contains a small header containing name of the file and set of the first 100 disk block addresses The next address contains the address of another index block for large files

                                    2 multilevel index to access a block the OS uses first-level index to find second-level index block which points to file block This approach can be continued to third or fourth level

                                    3 combined scheme

For example, say there are 15 pointers in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing the addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block.
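The maximum file size reachable under this combined scheme is simple arithmetic. A sketch, assuming for illustration a 4 KB block size and 4-byte block addresses (these figures are not from the notes; with them, one index block holds 1024 pointers):

```python
BLOCK = 4096                      # assumed block size in bytes
PTR = 4                           # assumed pointer size in bytes
PTRS_PER_BLOCK = BLOCK // PTR     # 1024 pointers fit in one index block

direct = 12 * BLOCK                       # 12 direct pointers
single = PTRS_PER_BLOCK * BLOCK           # one single indirect block
double = PTRS_PER_BLOCK ** 2 * BLOCK      # one double indirect block
triple = PTRS_PER_BLOCK ** 3 * BLOCK      # one triple indirect block

max_file_size = direct + single + double + triple
print(max_file_size)   # a little over 4 TB with these assumed sizes
```

With larger blocks or smaller pointers the limit grows accordingly; the point is that three levels of indirection extend a 48 KB direct reach into the terabyte range.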

                                    Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                    The free space list can be implemented as

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or bit map): Each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. The free-space bit map would be 001111001000...

Advantages: simple, and it is easy to find contiguous free space for files.

                                    Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a smaller disk can be kept in main memory, but it may not be possible to keep the bit map of a larger disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1 TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.
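The 1-means-free convention used in the example above can be sketched in a few lines (the helper names are illustrative, not from any real file system):

```python
def make_bitmap(n_blocks, free_blocks):
    """Build a bit vector: bit i is '1' if block i is free, '0' if allocated."""
    return "".join("1" if i in free_blocks else "0" for i in range(n_blocks))

def first_free(bitmap):
    """Return the index of the first free block, or -1 if none is free."""
    return bitmap.find("1")

bm = make_bitmap(12, {2, 3, 4, 5, 8})
print(bm)              # 001111001000, matching the example above
print(first_free(bm))  # 2
```

Real implementations scan a word at a time and use the hardware's find-first-set instruction instead of a character search, but the idea is the same.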

2. Linked list: The free-space list is implemented by linking all the free disk blocks together. A pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.

Advantage: no waste of space.

Disadvantage: we cannot get contiguous space easily, and traversing the free-space list requires reading each block, which costs substantial I/O time.

3. Grouping: The first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free; the last one contains the addresses of another n free blocks, and so on.

Advantage: a large number of free blocks can be found quickly.

4. Counting: The free-space list is maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it. Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion and search.
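The counting representation can be illustrated by collapsing a sorted list of free block numbers into (first-block, count) pairs. A sketch (real implementations keep these entries on disk, often in a B-tree as noted above):

```python
def to_runs(free_blocks):
    """Collapse free block numbers into (start, count) counting entries."""
    runs = []
    for b in sorted(free_blocks):
        if runs and b == runs[-1][0] + runs[-1][1]:
            # block continues the current contiguous run
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)
        else:
            # gap found: start a new run
            runs.append((b, 1))
    return runs

print(to_runs([2, 3, 4, 5, 8]))   # [(2, 4), (8, 1)]
```

Five free blocks compress into two entries; the saving grows with the length of the contiguous runs.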

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: a space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Q) Synchronization Hardware

The hardware solution to synchronization is to provide atomic operations: operations that execute as a single instruction, without interruption. Two such operations are:

1. the TestAndSet() hardware instruction
2. the Swap() instruction

1. The TestAndSet() instruction is as shown.

Solution using TestAndSet(): the shared variable lock is initialized to false.

2. The Swap() instruction definition is as follows.

Solution using Swap(): the shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

Both the TestAndSet() and Swap() instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee as to the relative speeds of the processes, a very fast process could theoretically release the lock, whip through its remainder section, and re-lock the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The figure below illustrates a solution using TestAndSet() that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

                                    Bounded-waiting mutual exclusion with TestAndSet( )

On exit, a process first looks, in order (starting with the next process in the list), for a process that has been waiting; if it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process into the critical section while continuing to block all the others.

Q) What is a semaphore?

Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1. wait()  2. signal()

Wait primitive:

wait(S) {
    while (S <= 0)
        ;   // do nothing
    S--;
}

Signal primitive:

signal(S) {
    S++;
}

                                    Each process that wishes to use a resource performs wait() operation When a process releases a resource it performs signal() operation

Semaphore usage:

1. A binary semaphore (or mutex lock) can take the values 0 or 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources. When the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal() call.

For example, suppose there are 3 resources and 4 processes, so the counting semaphore starts at 3.
If process P1 requires a resource, it performs wait(); now the counting semaphore = 2.
If process P2 requires a resource, it performs wait(); now the counting semaphore = 1.
If process P3 requires a resource, it performs wait(); now the counting semaphore = 0.
If process P4 requires a resource, it performs wait() and must wait until a resource becomes available.
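The 3-resources/4-processes example above maps directly onto Python's threading.Semaphore; a small sketch, in which acquire() plays the role of wait() and release() the role of signal():

```python
import threading

# a counting semaphore initialized to the number of resources (3)
resources = threading.Semaphore(3)

# P1, P2 and P3 each grab a resource; P4 finds none left
grants = [resources.acquire(blocking=False) for _ in range(4)]
print(grants)        # [True, True, True, False]

resources.release()  # some process frees its resource (signal)...
print(resources.acquire(blocking=False))  # ...and the waiter now succeeds: True
```

With blocking=True (the default) the fourth acquire() would simply block, matching the behaviour of P4 in the example.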

                                    3 Semaphores can also be used to synchronize certain operations

Semaphore implementation: When a process is in its critical section, any other process that tries to enter the critical section must loop continuously, wasting CPU time (busy waiting). Such a semaphore is called a spin lock, as the process spins while waiting.

As an example of using a semaphore for ordering, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

                                    First we create a semaphore named synch that is shared by the two processes and initialize it to zero Then in process P1 we insert the code

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal(). Each semaphore has an integer value and maintains a queue of waiting processes.
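The S1-before-S2 ordering above can be demonstrated with a Python semaphore initialized to 0; even if P2 is scheduled first, it blocks until P1 signals:

```python
import threading

synch = threading.Semaphore(0)   # initialized to 0, as in the text
trace = []

def p1():
    trace.append("S1")   # statement S1
    synch.release()      # signal(synch)

def p2():
    synch.acquire()      # wait(synch): blocks until P1 signals
    trace.append("S2")   # statement S2

t2 = threading.Thread(target=p2)
t2.start()               # start P2 first to show that it really waits
t1 = threading.Thread(target=p1)
t1.start()
t1.join(); t2.join()
print(trace)             # ['S1', 'S2'] regardless of scheduling order
```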

Semaphore implementation with no busy waiting: To overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e. the process is moved from the waiting queue to the ready state).

Problems due to semaphores:

1. Deadlocks: A deadlock occurs when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes.

2. Starvation: In starvation, one or more processes are blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithm for adding processes to the waiting queue, or for selecting the one to be removed by the signal() call: if a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: Let processes L, M and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M enters, it pre-empts process L, and M is running now. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority so that M cannot pre-empt it.

Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem. Consider:

n buffers, each of which holds one item; a producer process, which creates the items (1 at a time); and a consumer process, which consumes them (1 at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:

do {
    ... produce an item ...
    wait(empty);
    wait(mutex);
    ... add the item to the buffer ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:

do {
    wait(full);
    wait(mutex);
    ... remove an item from the buffer ...
    signal(mutex);
    signal(empty);
    ... consume the item ...
} while (1);
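The bounded-buffer scheme can be run directly with Python threads; this sketch uses the same three semaphores (empty, full, mutex), with an assumed buffer of 4 slots and 10 items:

```python
import threading
from collections import deque

N = 4                                # number of buffer slots (assumed)
buffer = deque()
empty = threading.Semaphore(N)       # counts empty slots
full = threading.Semaphore(0)        # counts filled slots
mutex = threading.Lock()             # mutual exclusion on the buffer

consumed = []

def producer():
    for item in range(10):
        empty.acquire()              # wait(empty)
        with mutex:                  # wait(mutex) ... signal(mutex)
            buffer.append(item)
        full.release()               # signal(full)

def consumer():
    for _ in range(10):
        full.acquire()               # wait(full)
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()              # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)   # [0, 1, ..., 9]: all items arrive, in FIFO order
```

Because the producer blocks on empty once 4 items are outstanding, the buffer can never overflow, and the consumer blocks on full rather than reading an empty buffer.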

2. The Readers-Writers Problem. A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (i.e. it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: a new reader need not wait just because a writer is waiting; writers priority: if a writer is waiting to access the data, no new readers may start reading.

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses an integer and 2 semaphores:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers)

                                    wrt provides mutually exclusive access to shared data Reader process Writer process

                                    wait(mutex) readcount++ if (readcount == 1) wait(rt) signal(mutex) hellip reading is performed hellip wait(mutex) readcount-- if (readcount == 0) signal(wrt) signal(mutex)

                                    do wait(wrt) hellip writing is performed hellip signal(wrt)

                                    while (TRUE)
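The readers-priority protocol translates almost line for line into Python. The sketch below wraps it in a class (a naming convenience, not part of the classic solution) and then probes the wrt semaphore directly to show writers are excluded while readers hold the data:

```python
import threading

class ReadersPriorityLock:
    """Readers-priority readers-writers lock using readcount, mutex and wrt."""
    def __init__(self):
        self.readcount = 0
        self.mutex = threading.Semaphore(1)   # protects readcount
        self.wrt = threading.Semaphore(1)     # exclusive access for writers

    def start_read(self):
        self.mutex.acquire()
        self.readcount += 1
        if self.readcount == 1:
            self.wrt.acquire()     # first reader locks out writers
        self.mutex.release()

    def end_read(self):
        self.mutex.acquire()
        self.readcount -= 1
        if self.readcount == 0:
            self.wrt.release()     # last reader lets writers in again
        self.mutex.release()

    def start_write(self):
        self.wrt.acquire()

    def end_write(self):
        self.wrt.release()

rw = ReadersPriorityLock()
rw.start_read(); rw.start_read()                      # two concurrent readers
writer_blocked = not rw.wrt.acquire(blocking=False)   # probe: writer shut out?
rw.end_read(); rw.end_read()
writer_ok = rw.wrt.acquire(blocking=False)            # probe: writer admitted?
rw.wrt.release()
print(writer_blocked, writer_ok)   # True True
```

Only the first reader takes wrt and only the last reader releases it, which is why readers in between proceed without blocking each other.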

3. The Dining-Philosophers Problem. n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5];   // initially all values are 1

Philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    ... eat ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    ... think ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (for example, if every philosopher picks up his left chopstick at the same moment).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), not all CPUs are equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. Unlike an SMP system, these systems do not have the flexibility to assign processes to the least-loaded CPU.

                                    REAL TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint

                                    Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples: digital audio and multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control, i.e. it spreads its decision-making across the different computers in the system, so that failure of individual computers or of the network does not cripple its operation.

A distributed operating system appears to its users as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

Distributed systems allow applications to run on several processors at the same time, which requires more complex processor-scheduling algorithms.

DESKTOP SYSTEMS: A desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8 and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. Handheld systems have very little memory.


CLUSTERED SYSTEMS: A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system, with each node (a computer used as a server) running its own instance of an operating system. The nodes are usually connected to each other through fast local area networks.

                                    Q) Modern operating system

New developments in hardware, applications and security threats have led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, Internet and Web access, and client/server computing.

                                    Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and is well suited to a distributed environment.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as systems nowadays often contain multiple processors, an SMP operating system provides greater efficiency. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                    SMP schedules processes or threads across all of the processors

                                    Advantages

1. Increased performance: if some portion of a program can run in parallel, performance increases.
2. Increased availability: as all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can be made proportional to the number of processors used.

4. Distributed operating systems: allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel. The OS can be customized without affecting system integrity.

Q) Linux operating system: Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency and standardization.

Components of the Linux system


                                    Linux Operating System has primarily three components as shown in the below diagram

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system and interacts directly with the hardware, hiding low-level hardware details from system and application programs.

System libraries - System libraries are special functions or programs used by application programs to access the kernel's features.

System utilities - System utility programs are responsible for specialized, individual-level tasks such as updating log files or accepting login requests from terminals.

Kernel modules: sections of kernel code that can be compiled, loaded and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and letting them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver is available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another.

Kernel mode vs user mode: Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. The kernel executes as a single process in a single address space, requires no context switches, and is therefore very efficient and fast. The kernel runs processes and provides them with system services and protected access to hardware. Code that is not required to run in kernel mode lives in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code; they use the system libraries to invoke kernel functions for low-level tasks.

Basic features: the following are some of the important features of the Linux operating system.

Portable - Portability means that software works on different types of hardware in the same way. The Linux kernel and application programs support installation on almost any kind of hardware platform.

Open source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-user - Linux is a multi-user system, meaning multiple users can access system resources (memory, application programs, etc.) at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical file system - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used for various types of operations, to call application programs, and so on.


Security - Linux provides user security through authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture: The Linux system architecture consists of the following layers:

Hardware layer - consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - the core component of the operating system; interacts directly with the hardware and provides low-level services to the upper-layer components.

Shell - an interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - utility programs that give the user most of the functionality of an operating system.

Q) Windows XP operating system

1. Extensibility - layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. This modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.
4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
5. Performance - XP subsystems can communicate with one another via high-performance message passing.

Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

6. International support - supports different locales via the national language support (NLS) API.

XP architecture: 1. a layered system of modules; 2. protected mode - the hardware abstraction layer (HAL), kernel and executive; 3. user mode - a collection of subsystems:
a. Environmental subsystems emulate different operating systems.
b. Protection subsystems provide security functions.

                                    XP architecture


Q) Windows network operating system. Example: the Windows 2000 operating system.

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.



6. If the size of the logical address space is 2^m and the page size is 2^n, then the high-order m-n bits of a logical address give the page number and the n low-order bits give the displacement within the page.

Example: size of logical address space = 8 = 2^3, so m = 3; page size = 4 = 2^2, so n = 2.

Then m-n = 1 bit represents the page number (0 or 1), and the remaining n = 2 bits give the displacement within the page.
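The page-number/offset split above is just a shift and a mask; a small sketch using the m = 3, n = 2 example:

```python
def split_logical_address(addr, n):
    """High-order bits give the page number; the low n bits give the offset."""
    page = addr >> n                 # drop the n offset bits
    offset = addr & ((1 << n) - 1)   # keep only the low n bits
    return page, offset

# m = 3 (8-byte address space), n = 2 (4-byte pages): 1 bit of page number
print(split_logical_address(6, 2))   # (1, 2): address 6 lies in page 1, offset 2
```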

                                      Q) Implementation of Page Table

                                      Page table can be kept as

1) As a set of dedicated registers: efficient if the page table is small.

2) In memory: the page-table base register (PTBR) points to the page table, and the page-table length register (PTLR) indicates its size. Changing the page table then requires changing only the values in these registers, giving less context-switch time.

The disadvantage is that two memory accesses are required: one to read the page-table entry and another to access the required memory address.

This problem can be solved by using a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB).

i. Each entry in the TLB consists of 2 parts:

I) a page-number field

II) a value field (frame-number field)

ii. When a logical address is generated by the CPU, its page number is presented to the TLB and compared with all entries of the TLB simultaneously.

If the page number is found in the TLB, its frame number is immediately available (a TLB hit). If the page number is not found, the page table must be checked (a TLB miss).

Using the TLB, the search is fast, but TLB hardware is expensive, so the TLB is kept small and contains only a few frequently used page-table entries.
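The benefit of the TLB is usually quantified as an effective access time: a hit costs one TLB lookup plus one memory access, while a miss costs the lookup plus two memory accesses (page table, then data). A sketch with assumed timings (20 ns TLB lookup, 100 ns memory access, 80% hit ratio; these figures are illustrative, not from the notes):

```python
def effective_access_time(hit_ratio, tlb_time, mem_time):
    """TLB hit: one memory access; TLB miss: page-table access + data access."""
    hit_cost = tlb_time + mem_time           # 120 ns with the assumed figures
    miss_cost = tlb_time + 2 * mem_time      # 220 ns with the assumed figures
    return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost

print(effective_access_time(0.8, 20, 100))   # about 140 ns
```

So even a small TLB with a decent hit ratio brings the average close to the cost of a single memory access.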

Q) Protection in paging: A valid/invalid bit is attached to each entry in the page table. The OS sets this bit for each page to allow or disallow access to the page. When the bit is set to invalid, the page is not in the process's logical address space, and any reference to it generates a trap to the OS.

                                      Q) Shared pages in Paging

Reentrant code is code that never changes during execution; reentrant code can be shared.

1. One copy of read-only (reentrant) code is shared among processes (e.g. text editors, compilers, window systems). For example, editor pages ed1, ed2 and ed3 are shared between processes P1 and P2.

                                      2 Shared code must appear in same location in the logical address space of all processes

Each process keeps a separate copy of its private code and data, e.g. data1 and data2.

                                      The pages for the private code and data can appear anywhere in the logical address space

Q) Segmentation

Segmentation is a memory-management scheme that supports the user's view of memory.

                                      When the user program is compiled the compiler generates segments like

                                      1) The code segment

                                      2) Global variables segment

                                      3) Heap memory segment

                                      4) Stack segment etc

Each entry in the segment table has:

a segment base (the starting physical address of the segment) and a segment limit (which specifies the length of the segment)

                                      Logical address is divided into 2 parts

                                      segment-number offset

                                      Segment table is indexed by segment number

                                      Segment-table base register (STBR) stores the location of segment table in main memory

Segment-table length register (STLR) stores the number of segments used by a program

The segment number (say 's') is used to find the entry in the segment table. The required entry is 's' locations away from the beginning of the segment table. Once the required entry in the segment table is found, the offset ('d') is compared with the limit.

If (offset < limit), the offset is added to the base entry to generate the physical address; otherwise a trap (addressing error) is raised.
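The translation above can be written as a small C function. A minimal sketch: the table values in the test and the use of -1 to stand in for the addressing-error trap are illustrative assumptions.

```c
#include <assert.h>

/* One segment-table entry: base = starting physical address,
   limit = length of the segment. */
typedef struct { int base; int limit; } SegEntry;

/* Translate (segment number s, offset d) to a physical address.
   Returns -1 to stand in for the addressing-error trap. */
int seg_translate(const SegEntry *seg_table, int stlr, int s, int d)
{
    if (s >= stlr)                 /* segment number beyond STLR: trap */
        return -1;
    if (d >= seg_table[s].limit)   /* offset outside the segment: trap */
        return -1;
    return seg_table[s].base + d;  /* base + offset = physical address */
}
```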

Q) Shared Segments: Code sharing occurs at the segment level. Shared segments must have the same segment number.

Allocation: a dynamic storage allocation problem; use best fit/first fit. May cause external fragmentation.

Protection: protection bits (read/write/execute privileges) are associated with segments. Placing an array in a separate segment lets the hardware check for illegal array indexes.

===============================================================================
Q) Virtual memory

Ans: In paging and segmentation, a program will execute only if the entire process is in main memory. With virtual memory, only part of the program needs to be in memory for execution.

1. A program that is larger than the available main memory can still run, as only part of the program is loaded into main memory. The logical address space can therefore be much larger than the physical address space.

                                      2 Allows address spaces to be shared by several processes

Virtual memory can be implemented via demand paging or demand segmentation.

The large blank space between heap and stack is part of the virtual address space. Virtual memory allows files and memory to be shared by 2 or more processes through page sharing, as shown in figure (b) below. Figure (a) depicts virtual memory that is larger than physical memory.

                                      ==============================================================================

Q) Demand paging: Demand paging is a technique of loading pages from disk to main memory only when the page is needed. Hence, using less physical memory, we get a faster response.

                                      Demand paging is similar to PAGING + SWAPPING

                                      Instead of swapping a whole process the pager brings only those pages needed into memory

To distinguish between pages that are in memory and pages that are on disk, a valid/invalid bit is used.

The valid/invalid bit is attached to each entry in the page table. When this bit is set to valid, the page is legal and in memory. When this bit is set to invalid, either the page is illegal or the page is on disk.

When the CPU generates a logical address whose page table entry is set to invalid, a page fault occurs; the required page is on disk. A section of the hard disk called swap space is used to hold pages that are not present in memory.

                                      Q) Procedure for Handling a Page Fault

1) The CPU generates a logical address, and if the valid/invalid bit for the page containing this address is set to invalid, a page fault occurs.

                                      2) Page fault causes a trap to operating system

3) Check whether the logical address is within the logical address space of the process, i.e. PTBR and PTLR are checked.
If the required logical address is not within the logical address space of the process, terminate the process;
else the page is legal but not in memory: the page is on disk.

4) To bring the required page into memory: find a free main memory frame, map the main memory address to a disk block, fetch the disk block, and load the block into the free frame.
5) When the required page has been brought into memory, update the page table to indicate the page is in memory.
6) Restart the instruction interrupted by the trap. The process continues as if the page had always been in memory.
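The steps above can be sketched as a toy simulation in C (sizes and names such as `handle_fault` and `access_page` are illustrative assumptions; replacement when no frame is free is left to the page replacement question below):

```c
#include <assert.h>

#define NUM_PAGES  8
#define NUM_FRAMES 4

typedef struct { int valid; int frame; } PTE;

static PTE pt[NUM_PAGES];            /* all invalid initially */
static int frame_used[NUM_FRAMES];
static int fault_count = 0;

/* Steps 4-6: find a free frame, "fetch" the page from disk,
   update the page table, and let the access be retried. */
static void handle_fault(int page)
{
    fault_count++;
    for (int f = 0; f < NUM_FRAMES; f++) {
        if (!frame_used[f]) {
            frame_used[f] = 1;   /* 4) free frame found; the disk block
                                       would be read into it here */
            pt[page].frame = f;  /* 5) update the page table */
            pt[page].valid = 1;
            return;              /* 6) the instruction is restarted */
        }
    }
    /* No free frame: page replacement would run here. */
}

/* Access a page, faulting it in on demand. Returns the frame used. */
int access_page(int page)
{
    if (!pt[page].valid)         /* invalid bit set: page fault trap */
        handle_fault(page);
    return pt[page].frame;
}
```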

Q) What is pure demand paging?
Ans: If no page belonging to the executing process is in main memory, the process will fault for every page it needs. Page faults occur until every page that is needed is in memory. This scheme is known as pure demand paging: NEVER BRING A PAGE IN UNTIL IT IS REQUIRED.

Q) Page Replacement
1. Find the location of the desired page on the disk.
2. Find a free frame:
   i) If there is a free frame, use it.
   ii) If there is no free frame, use a page replacement algorithm to find some page in memory that is not really in use and swap it out.
   iii) If the modify bit associated with the victim frame is set to 1, the page is written to disk; else there is no need to write it to disk. Change the valid/invalid bit of the victim page's page table entry to invalid, to indicate the victim page is no longer in memory.
3. Read the desired page into the victim frame and change the frame and page tables.
4. Restart the user process.

=================================================================================================
Q) Page Replacement Algorithms
Goal: produce a low page-fault rate.
The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

1. First In First Out (FIFO) Page Replacement Algorithm: Uses the time when a page was brought into memory; the page that was brought into memory first is replaced, i.e. the oldest page is chosen for replacement. We create a FIFO queue to hold all pages in memory.

When a page is brought into memory, we insert it at the tail of the queue, and we replace the page at the head of the queue.
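This policy is short enough to simulate directly. A sketch (`fifo_faults` is an illustrative helper, not from the notes); running it with 3 and then 4 frames on the reference string 1 2 3 4 1 2 5 1 2 3 4 5 reproduces Belady's anomaly, discussed next:

```c
#include <assert.h>

/* Count page faults for FIFO replacement with a given number of frames
   (up to 16). The oldest resident page sits at index `head`. */
int fifo_faults(const int *refs, int n, int nframes)
{
    int frames[16];
    int head = 0, used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int present = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) present = 1;
        if (present) continue;                /* hit: no fault */

        faults++;
        if (used < nframes) {
            frames[used++] = refs[i];         /* free frame available */
        } else {
            frames[head] = refs[i];           /* evict the oldest page */
            head = (head + 1) % nframes;
        }
    }
    return faults;
}
```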

FIFO: 15 page faults.
A problem with FIFO is Belady's anomaly. For the reference string shown below:
1 2 3 4 1 2 5 1 2 3 4 5
when 3 frames are allocated to the process, FIFO page replacement causes 9 page faults; when 4 frames are allocated, FIFO causes 10 page faults.
As the number of frames allocated to a process increases, page faults should decrease. But with FIFO page replacement, page faults can increase.
2. Optimal Page Replacement Algorithm: Replace the page that will not be used for the longest period of time. It is used for measuring how well other algorithms perform. It is not practical to implement the OPT algorithm, as it is not possible to know what the future references will be.
Ex:

Optimal page replacement: 9 page faults

3. Least Recently Used (LRU) Page Replacement Algorithm: Replace the page that has not been used for the longest period of time. LRU makes use of the time of last use of each page. Ex:

                                      Number of page faults = 12

                                      LRU requires hardware assistance to determine the time of last use of the page We can use 1) Stack or 2) Counter to implement LRU page replacement algorithm

1) Stack implementation: keep a stack of page numbers in a doubly linked form. When a page is referenced, it is moved to the top of the stack. This implementation requires 6 pointers to be changed.

2) Counter implementation: a time-of-use field is allocated to each page-table entry. The CPU maintains a counter that is incremented on every page reference. On each reference, the counter value is copied to the time-of-use field of the referenced page.
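The counter implementation can be sketched in C (an illustrative simulation; in a real system the hardware would update the time-of-use field on every reference):

```c
#include <assert.h>

#define NUM_PAGES 8

static long clock_counter = 0;        /* incremented on every reference */
static long time_of_use[NUM_PAGES];   /* 0 means "never referenced" */

/* Record a reference: copy the counter into the page's time-of-use field. */
void reference(int page)
{
    clock_counter++;
    time_of_use[page] = clock_counter;
}

/* LRU victim: the resident page with the smallest time-of-use. */
int lru_victim(const int *resident, int n)
{
    int victim = resident[0];
    for (int i = 1; i < n; i++)
        if (time_of_use[resident[i]] < time_of_use[victim])
            victim = resident[i];
    return victim;
}
```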

Disadvantages of the LRU page replacement algorithm: updating the counter or stack must be done on every memory reference. HENCE LRU IS SLOW, and the implementation requires hardware assistance.
==============================================================================
Q) LRU approximation page replacement
1) Using a reference bit
2) Additional-reference-bits algorithm
3) Second-chance algorithm
4) Enhanced second-chance algorithm
5) Counting-based page replacement

It has two schemes: 1) Least Frequently Used (LFU) 2) Most Frequently Used (MFU)

6) Page-buffering algorithms
Q) What is Thrashing?
Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur when global (or) local page replacement is used, as described below. The solution to thrashing is the working-set strategy: if we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know how many frames a process needs, we use the locality model of process execution.

Here a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes locality.
The working-set model uses a working set and a parameter ∆ = the working-set window. The set of pages in the most recent ∆ page references is the working set.
Let WSSi be the working-set size for process Pi, i.e. Pi needs WSSi frames. Let D be the total demand for frames; then D = Σ WSSi

and let m be the total number of available frames.
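The working-set bookkeeping can be sketched in C (an illustrative helper under the assumption that page numbers are small; `wss` is not a name from the notes):

```c
#include <assert.h>

/* Working-set size: the number of distinct pages among the most recent
   `delta` references of one process's reference string. */
int wss(const int *refs, int n, int delta)
{
    int start = n > delta ? n - delta : 0;
    int seen[64] = {0};          /* page numbers assumed < 64 */
    int count = 0;

    for (int i = start; i < n; i++) {
        if (!seen[refs[i]]) {    /* first time in the window */
            seen[refs[i]] = 1;
            count++;
        }
    }
    return count;
}
```

Summing `wss` over all processes gives D, which is then compared with the available frame count m.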

If D > m, thrashing occurs because some processes will not have enough frames. So:
if (D > m) the OS selects a process to suspend; else the OS may initiate another process.
The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.
========================================================================
Q) How does thrashing occur in global and local page replacement?
Ans: In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, causing a page fault for some other process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more processes queue up for the paging device, the READY queue empties and the CPU becomes idle. So the OS introduces new processes, since the CPU is free. This further increases the number of page faults.

In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs only the frames allocated to that process are replaced. When a process starts thrashing, other processes are not affected. But since processes will still be queued for the paging device most of the time, the average service time for a page fault increases. Hence the effective access time increases even for a process that is not thrashing.
Q) Page Table Structure (Or) Structure of the page table in memory

Hierarchical Paging, Hashed Page Tables, Inverted Page Tables

Hierarchical Paging: If the page table is large, we break it up into multiple page tables, i.e. the page table itself is paged. A simple technique is a two-level page table.
Two-Level Paging Example: A logical address (on a 32-bit machine with 4K page size) is divided into:
1. a page number consisting of 20 bits
2. a page offset consisting of 12 bits
Since the page table is paged, the page number is further divided into:
1. a 10-bit outer page number
2. a 10-bit displacement within a page of the page table
Thus a logical address is as follows:

| p1 | p2 | d  |
| 10 | 10 | 12 |

where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table. Two-Level Page-Table Scheme:

Address-Translation Scheme: address translation for a two-level 32-bit paging architecture

Hashed Page Tables (or) Hash table: Hashed page tables are common in address spaces larger than 32 bits. Each entry in the hash table contains a linked list of elements that hash to the same location. The hash table is indexed by hash value. Each element consists of 3 fields:

1. Page number 2. Frame number 3. Pointer to the next element in the linked list

The algorithm works as follows: the page number is passed to a hash function to get a hash value. Locating the hash value in the hash table is easy, as the hash table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address. If there is no match, subsequent entries in the linked list are searched.
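The lookup algorithm can be sketched in C (an illustrative in-memory simulation; `insert_mapping`, `lookup_frame`, and the table size are assumptions, and a trivial modulo hash stands in for a real hash function):

```c
#include <stdlib.h>
#include <assert.h>

#define TABLE_SIZE 16

/* One element of a bucket's linked list: the 3 fields listed above. */
typedef struct Elem {
    int page;            /* 1. page number  */
    int frame;           /* 2. frame number */
    struct Elem *next;   /* 3. next element */
} Elem;

static Elem *hash_table[TABLE_SIZE];

static int hash(int page) { return page % TABLE_SIZE; }

void insert_mapping(int page, int frame)
{
    Elem *e = malloc(sizeof *e);
    e->page = page;
    e->frame = frame;
    e->next = hash_table[hash(page)];   /* push onto the bucket's list */
    hash_table[hash(page)] = e;
}

/* Walk the chain comparing page numbers; -1 means "not mapped". */
int lookup_frame(int page)
{
    for (Elem *e = hash_table[hash(page)]; e != NULL; e = e->next)
        if (e->page == page)
            return e->frame;
    return -1;
}
```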

                                      Inverted Page Table

The inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table has:

Process-id | Page number

The logical address also has 3 parts: Process-id | Page number | Offset

The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, then i combined with the offset gives the physical address. If there is no match, the address is illegal.
Advantages and disadvantages:

It decreases the memory needed to store page tables, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to limit the search to one (or at most a few) page-table entries. Inverted Page Table Architecture:

                                      Unit-IV CPU Scheduling

Basic concepts: To maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes will be in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns the CPU to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle: Process execution begins with a CPU burst, which is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

An I/O-bound program has many short CPU bursts. A CPU-bound program has few long CPU bursts.

Schedulers: a module in the OS for scheduling decisions.
- Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
- Medium-term scheduler: selects which processes should be swapped in/out of memory.
- Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (CPU) scheduler.

Dispatcher: gives CPU control to the process selected by the short-term scheduler. This function involves the following:

1. switching context 2. switching to user mode 3. jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency; it must be kept small.
Ready Queue: The ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of processes.

                                      Pre-emptive Scheduling

CPU scheduling decisions take place under one of four conditions:

1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state, for example in response to an interrupt
3. When a process switches from the waiting state to the ready state, say at completion of I/O
4. When a process terminates

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one.
If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive or cooperative. In non-preemptive scheduling, a process runs until it itself gives up the CPU, i.e. when it is waiting for I/O or when the process completes. Otherwise the system is said to be pre-emptive.

Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X and pre-emptive since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

Note that pre-emptive scheduling can cause problems when two processes share data, because one process may be interrupted in the middle of updating a shared data structure.

                                      Preemption can also be a problem if the kernel is busy implementing a system call ( eg updating critical kernel data structures ) when the preemption occurs Solution wait until the system call has either completed or blocked before allowing the pre-emption But this solution is problematic for real-time systems

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. But this should be done only for very short pieces of code that take little CPU time.

Scheduling criteria used to decide among CPU scheduling algorithms:
1. CPU utilization: in a real system, CPU utilization ranges from 40% to 90%.
2. Throughput: the number of processes completed per unit time.
3. Turnaround time: time when the process completed minus time when the process was submitted; it includes waiting time and burst time.
4. Waiting time: the amount of time a process spends waiting for the CPU.
5. Response time: time when the first response came minus time when the process was submitted.

Optimization criteria: max CPU utilization, max throughput, min turnaround time, min waiting time, min response time.
CPU Scheduling Algorithms
1) First-Come First-Served (FCFS) Scheduling: FCFS is non-preemptive; processes are scheduled in the order they arrive. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages: 1. Average waiting time is quite long.

2. Convoy effect (occurs when small processes wait in the queue for a big process to leave the CPU).
Example: Process / Burst Time: P1 24, P2 3, P3 3

Suppose that the processes arrive in the order P1, P2, P3.

The Gantt chart for the schedule is: | P1 | P2 | P3 |
0    24   27   30
Waiting time: P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.
2) Shortest-Job-First (SJF) Scheduling: Schedule the process with the shortest burst time. If the CPU burst times of two processes are the same, FCFS is used.
Advantages: average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either preemptive or non-preemptive:
1. Non-preemptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Preemptive SJF, or Shortest-Remaining-Time-First (SRTF): if a newly arrived process is shorter than what remains of the currently executing process, preempt the currently executing process.
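The FCFS waiting-time arithmetic can be checked with a few lines of C (a sketch assuming all processes arrive at time 0); running the same bursts shortest-first also shows why SJF lowers the average:

```c
#include <assert.h>

/* Average waiting time under FCFS, all processes arriving at time 0.
   Each process waits for the total burst time of its predecessors. */
double fcfs_avg_wait(const int *burst, int n)
{
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;   /* process i waited this long */
        elapsed += burst[i];     /* then ran for its burst */
    }
    return (double)total_wait / n;
}
```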

Example: Process  Arrival Time  Burst Time
P1  0.0  7
P2  2.0  4
P3  4.0  1
P4  5.0  4

SJF (non-preemptive) Gantt chart: | P1 | P3 | P2 | P4 |
0    7    8    12    16
Average waiting time = [0 + (8-2) + (7-4) + (12-5)]/4 = 4

                                      Example of Preemptive SJF

Process  Arrival Time  Burst Time
P1  0.0  7
P2  2.0  4
P3  4.0  1
P4  5.0  4

SJF (preemptive): | P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11    16
Average waiting time = (9 + 1 + 0 + 2)/4 = 3
3) Priority Scheduling: A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. preemptive 2. non-preemptive.

Problem: starvation, i.e. low-priority processes may never execute. Solution: aging, i.e. as time progresses, increase the priority of the process.
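Aging can be sketched in C (an illustrative simulation; the function names, the 0-127 priority range, and one-step-per-tick aging are assumptions, not from the notes):

```c
#include <assert.h>

/* Aging: every clock tick, raise (numerically lower) the priority of
   each waiting process so low-priority processes eventually run. */
void age_priorities(int *priority, const int *waiting, int n)
{
    for (int i = 0; i < n; i++)
        if (waiting[i] && priority[i] > 0)
            priority[i]--;    /* smaller integer = higher priority */
}

/* Index of the highest-priority (smallest number) process. */
int pick_next(const int *priority, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (priority[i] < priority[best])
            best = i;
    return best;
}
```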

4) Round Robin (RR): Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance: 1. If q is large, RR behaves like FIFO. 2. If q is small, q must still be large with respect to the context-switch time, otherwise the overhead is too high.
Example of RR with time quantum = 4: Process / Burst Time: P1 24, P2 3,
P3 3. The Gantt chart is: | P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10    14    18    22    26    30
Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 = 5.66
5) Multilevel Queue Scheduling: The ready queue is partitioned into separate queues. For example, the ready queue is divided into 2 queues:

                                      1 foreground (interactive) queue2 background (batch) queue

Each queue has its own scheduling algorithm. For example, the Round Robin scheduling algorithm can be used in the foreground queue and
the FCFS scheduling algorithm can be used in the background queue. Scheduling must also be done between the queues. This can be done in 2 ways:
1. Fixed priority scheduling

The foreground queue has the highest priority: all processes in the foreground queue must be completed and the queue must be empty before the processes in the background queue are given the CPU, as shown in the figure below. Disadvantage: starvation.

2. Time slice: each queue gets a certain amount of CPU time, which it can schedule amongst its processes, e.g. 80% to the foreground queue in RR and 20% to the background queue in FCFS.

6) Multilevel Feedback Queue Scheduling: A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes will be in the highest-priority queue. If a process takes more CPU time, it is moved to a lower-priority queue. If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues Q0, Q1, Q2:

                                      1 Q0 ndash time quantum 8 milliseconds 2 Q1 ndash time quantum 16 milliseconds 3 Q2 ndash FCFS

                                      Scheduling

1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 milliseconds; if it does not finish in 8 milliseconds, the process is moved to queue Q1. 2. At Q1 the process receives 16 additional milliseconds; if it still does not complete, it is moved to queue Q2.

Q) Synchronization: Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of a process synchronization problem is the Producer-Consumer (or Bounded-Buffer) problem: make sure that the producer process does not add data when the buffer is full, and that the consumer process does not take data when the buffer is empty.

The code for the producer process can be written as follows:

while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ;  /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed from the buffer. in is an index that always points to the first empty slot after the last filled slot; out is an index that points to the first filled slot, at the head of the circular queue.

The code for the consumer process is as shown below:

while (true) {
    while (counter == 0)
        ;  /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The above producer and consumer code produces a RACE CONDITION. A race condition exists when the outcome of execution of several cooperating processes depends on the order in which accesses to shared data take place. For example, take counter = 5.

Given counter = 5, if the producer produces one item and the consumer consumes one item, the correct value of counter is 5. But here counter = 4, an inconsistent result. If T5 is executed before T4, then counter = 6, also an inconsistent result.

Solution to avoid the race condition: make sure that only one process at a time updates the shared variable (here, counter).
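The lost-update interleaving can be made concrete by spelling out the three machine steps behind `counter++` and `counter--`. This is a deterministic single-threaded illustration of the two bad schedules, not actual concurrent code (`interleaved` is a made-up helper):

```c
#include <assert.h>

/* counter++ and counter-- each compile to three steps:
     register = counter; register +/- 1; counter = register;
   Interleaving these steps between the producer and the consumer
   makes one of the two updates get lost. */
int interleaved(int counter, int consumer_writes_last)
{
    int r1 = counter;      /* producer: r1 = counter  */
    int r2 = counter;      /* consumer: r2 = counter  */
    r1 = r1 + 1;           /* producer: r1 = r1 + 1   */
    r2 = r2 - 1;           /* consumer: r2 = r2 - 1   */
    if (consumer_writes_last) {
        counter = r1;      /* producer writes 6 ...          */
        counter = r2;      /* ... consumer overwrites with 4 */
    } else {
        counter = r2;      /* consumer writes 4 ...          */
        counter = r1;      /* ... producer overwrites with 6 */
    }
    return counter;
}
```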

Q) Critical section problem
Ans: Each process has a segment of code called its critical section. Critical section code is code that accesses variables shared with cooperating processes. To avoid data inconsistency, two processes must not execute their critical section code at the same time. For this to happen:
1. Each process must request permission to enter its critical section: the entry section.
2. The end of the critical section: the exit section.
3. The rest of the code after the critical section: the remainder section.

A solution to the critical section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes not in their remainder sections take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: every process that requests entry must eventually be allowed into its critical section; no process waits indefinitely.

Q) Peterson's solution to the critical section problem (software-based solution)
Ans: A sample trace of the two processes over time:

Time | Process 1                         | Process 2
T0   | do { flag[1] = TRUE; turn = 2;    | do { flag[2] = TRUE;
T1   |                                   | turn = 1;
T2   | while (flag[2] && turn == 2);     | while (flag[1] && turn == 1);
     | Since turn == 1, the condition    | Loops continuously until flag[1]
     | is false and the loop exits       | becomes FALSE
T3   | Enters critical section           |
T4   | flag[1] = FALSE;                  | Enters critical section
T5   | Enters remainder section          | flag[2] = FALSE;
     | } while (TRUE);                   | Enters remainder section
     |                                   | } while (TRUE);
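Peterson's algorithm is usually written for processes i = 0 and 1; a C sketch of the classic textbook form follows (note that on modern out-of-order processors it additionally needs memory fences to be correct; `enter_section` and `exit_section` are illustrative names):

```c
#include <assert.h>

/* Peterson's solution for two processes i = 0 and 1 (j is the other). */
static volatile int flag[2];   /* flag[i]: process i wants to enter */
static volatile int turn;      /* whose turn it is to defer */

void enter_section(int i)
{
    int j = 1 - i;
    flag[i] = 1;               /* I want to enter              */
    turn = j;                  /* but you go first if you care */
    while (flag[j] && turn == j)
        ;                      /* busy-wait (entry section)    */
}

void exit_section(int i)
{
    flag[i] = 0;               /* exit section */
}
```

With no contention the entry loop falls through immediately, which is what the single-process walkthrough below checks.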

10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

                                      File Attributes

1. Name: the name of the file, for user understanding.
2. Identifier: used by the OS to identify a file (e.g. inode number).
3. Type: text files, executable files, binary files, etc.
4. Location: the location of the file on the hard drive.
5. Size: the current size of the file.
6. Protection: controls who can read, write, or execute the file.
7. Time, date, and user identification: when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier locates the other file attributes. The directory structure is kept on disk.

                                      File Operations

The file Abstract Data Type supports many common operations:
1. Creating a file: find free space on disk and add an entry to the directory.
2. Writing a file: write data at the current file-position pointer location and update the pointer.
3. Reading a file: read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek): change the pointer location to a given value.
5. Deleting a file: free the space allocated to the file and remove its entry from the directory.
6. Truncating a file: delete the data and update the file size.

                                      Open File Table

Most OSes require that files be opened before access and closed after access. Information about all files that are currently open in the system is stored in an open file table. This system-wide open file table is maintained by the OS, and information about the files opened by a process is stored in that process's own open file table. The open file table has the following information:

1. File pointer: records the current position in the file, for the next read or write.
2. File-open count: how many times the current file has been opened (simultaneously, by different processes) and not yet closed. When this counter reaches zero, the file can be removed from the table.
3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. An advisory lock: software developers are responsible for acquiring and releasing locks appropriately; the OS does not enforce them.
4. A mandatory lock: the OS itself makes sure locks are acquired and released appropriately, denying access to a locked file.
5. UNIX uses advisory locks and Windows uses mandatory locks.

                                      File Types

Implementing file types: a common technique is to include the type as part of the filename. The name is split into 2 parts:

1. File name
2. Extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on that file. For example, when a user opens Microsoft Word and clicks the File menu and the Open option, the user specifies only the file name without the extension; the application looks for a file with the given name and the extension it expects. Hence extensions are only hints to applications and are not enforced by the operating system. UNIX systems use a crude magic number stored at the beginning of the file to indicate the type of file. In the Mac OS X operating system, a file produced by a word processor records the word processor's name as its creator; when the user double clicks on the file, Word will open it. Some common file types are as shown below.

File Structure: Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and what the location of the first instruction is.

If the OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, for which it must know how to load the file, find the first executable statement, etc.).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                                      Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes.

Internally, files are organized in logical units such as a single byte, a data record, or a structure.

The number of logical units which fit into one physical block determines the packing. For example, if a student file is organized internally as records, each record takes 64 bytes, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, so 8 records can be stored per block.

                                      Internal fragmentation may occur due to this
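The packing arithmetic above can be sketched as follows (an illustrative calculation, not OS code; the 60-byte case is a hypothetical variant to show internal fragmentation):

```python
# Sketch of the packing calculation described above.
BLOCK_SIZE = 512   # physical block size in bytes
RECORD_SIZE = 64   # size of one student record in bytes

packing = BLOCK_SIZE // RECORD_SIZE          # records per physical block
wasted = BLOCK_SIZE - packing * RECORD_SIZE  # internal fragmentation per block

print(packing)  # 8 records fit in one 512-byte block
print(wasted)   # 0 bytes wasted here

# A record size that does not divide the block evenly wastes space:
# 512 // 60 = 8 records, leaving 512 - 8*60 = 32 bytes unused per block.
wasted_60 = BLOCK_SIZE - (BLOCK_SIZE // 60) * 60
print(wasted_60)  # 32
```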

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or) relative access

1. Sequential access: we cannot randomly access any record.

Records are read and written sequentially, one record after another.

A sequential access file emulates magnetic tape operation and generally supports the following operations:
read next: read a record and advance to the next position.
write next: write a record and advance to the next position.
rewind: reset to the beginning of the file.
skip n records: may or may not be supported; n may be limited to positive numbers or to +/- 1.

2. Direct access (or) relative access: a file is made up of fixed-length records, so that we can randomly access any record.

There is no restriction on the order of reading or writing.

                                      Direct access files are useful when we need to access large amounts of data randomly

                                      Ex database

The operations supported include (n is a relative block number, i.e. relative to the beginning of the file):

1. read n: read record number n.
2. write n: write record number n.
3. jump to record n: n could be 0 or the end of file.
4. Query current record: used to return back to this record later.
5. Sequential access can be easily emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct access method. We construct an index for the file, which contains pointers to the various blocks. To search the file, we first access the index and then use the pointer to access the file.

If a student file has 120000 records sorted by student id, we can create an index consisting of the 1st student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120000 records = 120000/64 = 1875 blocks. So the index has 1875 entries. The index is kept in main memory so that searching becomes faster.

                                      If the index itself becomes large we create an index for index file (Ex B-tree)
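The index sizing and lookup described above can be sketched as follows (illustrative; the `find_block` helper and the synthetic student ids are assumptions for the example):

```python
# Index sizing and lookup sketch for the student-file example above.
import math
import bisect

records = 120_000
records_per_block = 64

blocks = math.ceil(records / records_per_block)  # data blocks needed
print(blocks)  # 1875, so the in-memory index has 1875 entries

def find_block(index, student_id):
    """index[i] is the first (smallest) student id stored in block i.
    Return the block that would hold student_id."""
    # rightmost block whose first id is <= student_id
    return bisect.bisect_right(index, student_id) - 1

# Synthetic index: ids 0, 64, 128, ... as the first id of each block.
index = [i * records_per_block for i in range(blocks)]
assert find_block(index, 70) == 1   # id 70 lives in block 1 (ids 64..127)
```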

                                      Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                                      Three major methods of allocating disk space are

1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file indicates the address of the starting block (i.e. b) and the length (n).

                                      Advantages

1. For direct access to block i, we can directly compute the physical block as b+i. Since the file is stored sequentially, sequential access is also easy. Hence access is easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.
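The logical-to-physical mapping that makes direct access cheap under contiguous allocation can be sketched as follows (an illustrative helper; the function name is an assumption):

```python
# Sketch: logical-to-physical block mapping under contiguous allocation.
def physical_block(start, length, i):
    """File starts at block `start` and is `length` blocks long;
    logical block i maps directly to start + i."""
    if not 0 <= i < length:
        raise IndexError("logical block outside the file")
    return start + i

# A file of n = 4 blocks starting at b = 9 occupies blocks 9, 10, 11, 12.
assert [physical_block(9, 4, i) for i in range(4)] == [9, 10, 11, 12]
```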

                                      Disadvantages

1. It is difficult to find space for a new file (or to add more data to an existing file). First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape, then copy the files back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire size may not be used by the file at the beginning, which leads to wastage of space.

If the size of the file is not known in advance, then (i) if we allocate too little space, the file cannot be extended; the solution is to find a larger hole, copy the contents of the file to the new space and release the previous space, which is possible only as long as such a hole exists, and is time consuming; (ii) if we overestimate the amount of space needed, much of the space is unused, which leads to internal fragmentation.

To minimize these drawbacks, some OSes use a modified contiguous allocation scheme. Here a set of contiguous blocks is allocated at first; if the file needs more blocks, another set of contiguous blocks, known as an extent, is added. The directory entry then records the location and block count of the first extent, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2. Linked allocation: each file is a linked list of disk blocks. The disk blocks can be scattered anywhere on the disk. The directory contains a pointer to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10 and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512-4 = 508 bytes. To create a new file, we simply create a new entry in the directory; the pointer is nil for an empty file, and the size field of an empty file is 0.

To add more data to an existing file, a free block is found, data is written to it and it is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. It is easy to find space for a new file (or to add more data to an existing file).
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is more.
2. The number of disk seeks required is more.
3. Pointers use a larger percentage of the file's disk space.
4. Inefficient for direct access files: to access the i-th record, i disk reads are required.
5. Not reliable: pointers may be lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks to the file. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk access time, and reduces the space wasted on pointers, as only a few pointers are needed. Free-list management also becomes simpler. The disadvantage is internal fragmentation: more space is wasted when a cluster is partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file. The last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                                      Advantages

1. It is easy to find space for a new file (or to add more data to an existing file).
2. Random access time is improved.

Disadvantage: the number of disk seeks is more. The disk head must move to read the FAT and find the location of the block, then move to the location of the block itself.
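Following a FAT chain can be sketched as follows (illustrative; the table layout, EOF marker value and function name are assumptions for the example, reusing the 9 -> 16 -> 1 -> 10 -> 25 file from the linked-allocation discussion):

```python
# Sketch of following a FAT chain.
EOF = -1  # stands in for the special end-of-file marker; 0 marks an unused block

def fat_chain(fat, start):
    """Yield the block numbers of a file whose first block is `start`."""
    block = start
    while block != EOF:
        yield block
        block = fat[block]

# File occupying blocks 9 -> 16 -> 1 -> 10 -> 25, as in the example above.
fat = [0] * 32
fat[9], fat[16], fat[1], fat[10], fat[25] = 16, 1, 10, 25, EOF
assert list(fat_chain(fat, 9)) == [9, 16, 1, 10, 25]
```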

                                      3 Indexed allocation

Each file has its own index block. The index block is an array of disk block addresses; the i-th entry in the index block points to the i-th block of the file.

                                      The directory contains the address of the index block

                                      When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found and data is written to it. The address of the block is put in the index block.

                                      Advantages

1. It is easy to find space for a new file (or to add more data to an existing file).
2. Supports direct (random) access efficiently.
3. No external fragmentation.

                                      Disadvantage

1. Seek time is more.
2. The number of disk seeks required is more.
3. Not reliable.
4. The index block must be kept in memory. If this memory is not available, we have to read the index block and then the desired data block, i.e. 2 disk accesses are required. With a linked chain of index blocks, to access a block near the end of the file we need to read all the preceding index blocks before reading the needed data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store these one or two pointers.

Various mechanisms are used for managing the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header with the name of the file and a set of the first 100 disk block addresses. For large files, the last address in the index block contains the address of another index block.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

                                      3 combined scheme

For example, say there are 15 pointers in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing the addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block, as shown below.
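The maximum file size under this combined scheme can be worked out as follows (a sketch; the 4 KB block size and 4-byte address size are assumed parameters, not values from the notes):

```python
# Sketch: maximum file size under the combined (UNIX-style inode) scheme,
# assuming 4 KB blocks and 4-byte block addresses.
block_size = 4096
addr_size = 4
ptrs = block_size // addr_size          # addresses per index block = 1024

direct = 12                             # 12 direct data blocks
single = ptrs                           # one single indirect: 1024 blocks
double = ptrs ** 2                      # one double indirect: 1024^2 blocks
triple = ptrs ** 3                      # one triple indirect: 1024^3 blocks

max_blocks = direct + single + double + triple
max_bytes = max_blocks * block_size
print(max_bytes)  # a little over 4 TB with these parameters
```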

                                      Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                      The free space list can be implemented as

1. Bit vector (or) bit map
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or) bit map: each block is represented by 1 bit. If the block is free the bit is 1, and if the block is allocated the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. The free-space bit map would be 001111001000.

Advantages: simple, and it is easy to find contiguous free blocks for a file.

                                      Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a smaller disk can be kept in main memory, but it may not be possible to keep the bit map of a larger disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.
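The bit-vector bookkeeping can be sketched as follows (an illustrative class; the method names are assumptions, and it reuses the "blocks 2, 3, 4, 5 and 8 free" example from above):

```python
# Sketch of bit-vector free-space bookkeeping (1 = free, 0 = allocated).
class BitMap:
    def __init__(self, nblocks):
        self.bits = [0] * nblocks        # start with everything allocated

    def free(self, b):
        self.bits[b] = 1

    def alloc(self, b):
        self.bits[b] = 0

    def first_free(self):
        return self.bits.index(1)        # raises ValueError if nothing is free

    def __str__(self):
        return "".join(map(str, self.bits))

bm = BitMap(12)
for b in (2, 3, 4, 5, 8):
    bm.free(b)
assert str(bm) == "001111001000"         # matches the example above
assert bm.first_free() == 2
```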

2. Linked list: the free-space list is implemented by linking all the free disk blocks together. A pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on. Advantage: no waste of space. Disadvantage: we cannot get contiguous space easily, and to traverse the free-space list we must read each block, which requires more I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last one contains the addresses of another n free blocks, and so on. Advantage: a large number of free blocks can be found quickly.

4. Counting: the free-space list can be maintained by keeping the address of the first free block and the number (n) of free contiguous blocks that follow it. Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks; the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it. Finally, the free-space list is updated on disk.

Q) Synchronization Hardware

The hardware solution to synchronization is to provide atomic operations. These operations execute as a single instruction, without interruption. Two such operations are:
1. The TestAndSet() hardware instruction
2. The Swap() instruction

1. The TestAndSet instruction is as shown.

Solution using TestAndSet: the shared variable lock is initialized to false.

                                      2 Swap() instruction definition is as follows

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

Both the TestAndSet and Swap instructions satisfy the mutual exclusion requirement, but unfortunately do not guarantee bounded waiting. If multiple processes are trying to get into their critical sections, there is no guarantee of what order they will enter, and any one process could have the bad luck to wait forever. (Since there is no guarantee as to the relative rates of the processes, a very fast process could theoretically release the lock, whip through its remainder section and re-lock the lock before a slower process got a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The figure below illustrates a solution using TestAndSet that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

                                      Bounded-waiting mutual exclusion with TestAndSet( )

On exit, a process first looks, in order (starting with the next process on the list), for a process that has been waiting; if it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby allowing that specific process into the critical section while continuing to block all the others.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1. wait() 2. signal()

Wait primitive:

wait(S) {
    while (S <= 0)
        ; // do nothing
    S--;
}

Signal primitive:

signal(S) {
    S++;
}

Each process that wishes to use a resource performs the wait() operation. When a process releases a resource, it performs the signal() operation.
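The wait()/signal() bookkeeping above can be sketched with a plain counter (a hedged, non-blocking model: a real implementation would busy-wait or block when the value is not positive; the class and names are assumptions):

```python
# Hedged sketch of wait()/signal() semantics, modelling only the bookkeeping.
class Semaphore:
    def __init__(self, value):
        self.value = value

    def wait(self):
        # A real kernel would busy-wait or block here while value <= 0;
        # this sketch only models the counter update.
        assert self.value > 0, "caller would block here"
        self.value -= 1

    def signal(self):
        self.value += 1

s = Semaphore(1)
s.wait()             # acquire the resource: value 1 -> 0
assert s.value == 0
s.signal()           # release it: value 0 -> 1
assert s.value == 1
```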

Semaphore usage:

1. A binary semaphore (or mutex lock) can take the values 0 or 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources; when the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal call. For example, suppose there are 3 resources and 4 processes, so the value of the counting semaphore is 3. If process P1 requires a resource, it performs a wait() operation; now the counting semaphore = 2. If process P2 requires a resource, it performs wait(); now the counting semaphore = 1. If process P3 requires a resource, it performs wait(); now the counting semaphore = 0. If process P4 requires a resource, it performs wait() and waits until a resource is available.

                                      3 Semaphores can also be used to synchronize certain operations
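The 3-resources/4-processes example can be made observable with Python's counting semaphore, threading.Semaphore (a sketch; the non-blocking probe stands in for the fourth process blocking):

```python
# Sketch of the 3-resources / 4-processes counting-semaphore example.
import threading

resources = threading.Semaphore(3)

# The first three "processes" acquire a resource immediately...
for _ in range(3):
    assert resources.acquire(blocking=False) is True

# ...the fourth would block (here: a non-blocking probe fails instead).
assert resources.acquire(blocking=False) is False

resources.release()                                 # one process signals
assert resources.acquire(blocking=False) is True    # the waiter can proceed
```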

Semaphore Implementation: When a process is in its critical section, any other process that tries to enter the critical section must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process spins while waiting. As an example of using a semaphore for ordering, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First we create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code:

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal. Each semaphore has an integer value and maintains a queue of waiting processes.
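This ordering pattern can be run with Python threads (a sketch; release/acquire play the roles of signal/wait, and the list of appended labels stands in for statements S1 and S2):

```python
# Sketch of enforcing "S1 before S2" with a semaphore initialised to 0.
import threading

synch = threading.Semaphore(0)
order = []

def p1():
    order.append("S1")     # statement S1
    synch.release()        # signal(synch)

def p2():
    synch.acquire()        # wait(synch): blocks until P1 has signalled
    order.append("S2")     # statement S2

t2 = threading.Thread(target=p2); t2.start()
t1 = threading.Thread(target=p1); t1.start()
t1.join(); t2.join()
assert order == ["S1", "S2"]   # S2 can never run before S1
```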

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e. the process is moved from the waiting queue to the ready state).

Problems due to semaphores:
1. Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes, as shown below.

2. Starvation: in starvation, one or more processes get blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithms for adding processes to the waiting queue, or for selecting the one to be removed from the queue in the signal() call. If a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M enters, it pre-empts process L, and M is running now. Here M runs before H even though M has lower priority than H. To avoid this, L temporarily inherits the priority of H (priority inheritance), so that M cannot pre-empt L.

Q) Classical problems of synchronization

1. The Bounded Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining Philosophers Problem

                                      These problems are used to test newly proposed synchronization scheme

1. The Bounded Buffer Problem. Consider:

n buffers, each of which holds one item;
a producer process, which creates the items (1 at a time);
a consumer process, which processes them (1 at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is needed. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:

do {
    ...
    produce an item
    ...
    wait(empty);
    wait(mutex);
    ...
    add item to buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:

do {
    wait(full);
    wait(mutex);
    ...
    remove an item from buffer
    ...
    signal(mutex);
    signal(empty);
    ...
    consume the item
    ...
} while (1);
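The bounded-buffer scheme can be run with Python semaphores (a sketch with assumed parameters: buffer size n = 3 and 10 items; acquire/release play the roles of wait/signal):

```python
# Runnable sketch of the bounded-buffer (producer-consumer) scheme.
import threading
from collections import deque

n = 3
buf = deque()
empty = threading.Semaphore(n)   # counts empty slots
full = threading.Semaphore(0)    # counts filled slots
mutex = threading.Lock()         # mutual exclusion on the buffer
consumed = []

def producer():
    for item in range(10):
        empty.acquire()          # wait(empty)
        with mutex:              # wait(mutex) ... signal(mutex)
            buf.append(item)
        full.release()           # signal(full)

def consumer():
    for _ in range(10):
        full.acquire()           # wait(full)
        with mutex:
            consumed.append(buf.popleft())
        empty.release()          # signal(empty)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
assert consumed == list(range(10))   # every item arrives, in order
assert len(buf) == 0
```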

2. The Readers-Writers Problem. A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time; a writer must have exclusive access (i.e. it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: a new reader need not wait just because a writer is waiting;
writers priority: if a writer is waiting to access the data, no new readers can start reading.

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses the following shared data:
1. readcount = 0 (an integer that counts the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore used for mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:

do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    ...
    reading is performed
    ...
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);
} while (TRUE);

Writer process:

do {
    wait(wrt);
    ...
    writing is performed
    ...
    signal(wrt);
} while (TRUE);
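The readers-priority protocol can be exercised with Python primitives (a sketch; start_read/end_read are assumed helper names, and non-blocking probes stand in for a writer blocking):

```python
# Runnable sketch of the readers-priority entry/exit protocol: several
# readers may read together, while a writer needs `wrt` exclusively.
import threading

mutex = threading.Lock()       # protects readcount
wrt = threading.Semaphore(1)   # exclusive access for writers
readcount = 0

def start_read():
    global readcount
    with mutex:
        readcount += 1
        if readcount == 1:
            wrt.acquire()      # first reader locks out writers

def end_read():
    global readcount
    with mutex:
        readcount -= 1
        if readcount == 0:
            wrt.release()      # last reader lets writers in

start_read()
start_read()                                   # a second concurrent reader is fine
assert wrt.acquire(blocking=False) is False    # a writer would block now
end_read()
end_read()
assert wrt.acquire(blocking=False) is True     # readers gone: writer proceeds
wrt.release()
```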

3. The Dining Philosophers Problem. n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5]; // initially all values are 1

Philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    ...
    eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    ...
    think
    ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems have processors dedicated to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL TIME SYSTEM is used to implement a computer application that must complete its execution within a time constraint.

Two kinds of real-time systems have evolved:

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement is met under all conditions. Examples: digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e. decision-making actions are spread across the different computers in the system, so that the failure of individual computers or of the network does not cripple operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

                                      Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

DESKTOP SYSTEMS: A desktop system is a personal computer (PC) system used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8 and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. These systems have very little memory.


                                      CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system The components of a cluster are usually connected to each other through fast local area networks each node (computer used as a server) running its own instance of an operating system

                                      Q) Modern operating system

New developments in hardware, applications and security threats led to the development of modern operating systems.

New hardware developments: increased machine speed, high-speed networks, and increased size and variety of memory devices.

New application developments: multimedia applications, internet and web access, and client-server computing.

                                      Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. Useful for applications that perform a number of independent tasks. Example: a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as nowadays there are multiple processors in a single system, an SMP operating system provides greater efficiency when multiple processors are present. The processes share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                      SMP schedules processes or threads across all of the processors

                                      Advantages

1. Increased performance: if some portion of the program can be run in parallel, the performance of an SMP OS increases.

2. Increased availability: as all processors can perform the same task, if one processor fails the others keep working. 3. Incremental growth: one can add a new processor to increase performance. 4. Scaling: cost can depend on the number of processors used.

                                      4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5. Object-oriented design: adds modularity to the kernel. The OS can be customized without affecting system integrity.

Q) Linux Operating system
Ans: It is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. Linux is a multi-user, multitasking system. Its main design goals are speed, efficiency and standardization. Components of the Linux system:


                                      Linux Operating System has primarily three components as shown in the below diagram

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system and interacts directly with the hardware. The kernel hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs. These are used by application programs to use the Kernel's features.

System Utility - System utility programs are responsible for specialized tasks such as updating log files and accepting login requests from terminals.

Kernel Modules: sections of kernel code that can be compiled, loaded and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system or a networking protocol.

Three components of Linux module support: 1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel. 2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available. 3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode: Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require any context switch; hence it is very efficient and fast. The kernel runs each process, provides system services to processes, and gives processes protected access to hardware. Code that is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code. User programs and utilities use the system libraries to invoke kernel functions for low-level tasks.

Basic Features: Following are some of the important features of the Linux operating system.

Portable - Portability means software can work in the same way on different types of hardware. The Linux kernel and application programs support installation on any kind of hardware platform.

Open Source - Linux source code is freely available and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system, meaning multiple users can access system resources like memory, RAM and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system. It can be used to do various types of operations, call application programs, etc.


Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

Architecture: The Linux system architecture consists of the following layers:

                                      Hardware layer - Hardware consists of all peripheral devices (RAM HDD CPU etc)

Kernel - Core component of the operating system; interacts directly with hardware and provides low-level services to upper-layer components.

Shell - An interface to the kernel. Takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionalities of an operating system.

Q) Windows XP operating system

1. Extensibility - layered architecture:

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.

2. Portability - XP can be moved from one hardware architecture to another with relatively few changes.

Written in C and C++. Processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources. 4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code. 5. Performance - XP subsystems can communicate with one another via high-performance message passing.

Preemption of low-priority threads enables the system to respond quickly to external events. Designed for symmetric multiprocessing.

6. International support - supports different locales via the national language support (NLS) API.

XP Architecture: 1. Layered system of modules. 2. Protected mode - hardware abstraction layer (HAL), kernel, executive. 3. User mode - collection of subsystems:

a. Environmental subsystems emulate different operating systems. b. Protection subsystems provide security functions.

                                      XP architecture


Q) Windows network operating system
Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.





Each TLB entry has two fields:

I) a page number field

II) a value field (frame number field)

When a logical address is generated by the CPU, its page number is presented to the TLB. The page number is compared with all the entries of the TLB simultaneously.

If the page number is found in the TLB, its frame number is immediately available -> TLB hit. If the page number is not found, check the page table -> TLB miss.

By using the TLB, the search is fast, but TLB hardware is expensive, so the TLB size is kept small. Hence the TLB contains only a few frequently used page table entries.
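The lookup sequence above can be sketched in software (illustrative only; the page size, TLB capacity and page-table contents are invented example values, and a real TLB is hardware that compares all entries in parallel):

```python
# Sketch of TLB lookup during address translation (illustrative, not real MMU code).
PAGE_SIZE = 4096
TLB_CAPACITY = 4

page_table = {0: 5, 1: 9, 2: 1, 3: 7, 4: 2}   # page number -> frame number
tlb = {}                                       # small cache of page-table entries

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)
    if page in tlb:                            # TLB hit: frame number available at once
        frame, hit = tlb[page], True
    else:                                      # TLB miss: fall back to the page table
        frame, hit = page_table[page], False
        if len(tlb) >= TLB_CAPACITY:           # keep the TLB small: evict an entry
            tlb.pop(next(iter(tlb)))
        tlb[page] = frame
    return frame * PAGE_SIZE + offset, hit

addr, hit = translate(2 * PAGE_SIZE + 100)     # first access to page 2: TLB miss
addr2, hit2 = translate(2 * PAGE_SIZE + 200)   # same page again: TLB hit
```

The second access to the same page hits in the TLB, which is exactly why a small TLB of frequently used entries pays off.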

Q) Protection in Paging: A valid/invalid bit is attached to each entry in the page table. The OS sets this valid/invalid bit for each page to allow or disallow access to the page. When the bit is set to invalid, the page is not in the process's logical address space; hence an access generates a trap to the OS.

                                        Q) Shared pages in Paging

                                        Reentrant code is the code that never changes during execution Reentrant code can be shared

1. One copy of read-only (reentrant) code is shared among processes (e.g., text editors, compilers, window systems). Ex: ed1, ed2, ed3 shared among processes P1 and P2.

                                        2 Shared code must appear in same location in the logical address space of all processes

                                        Each process keeps a separate copy of the private code and data EX data1 and data2

                                        The pages for the private code and data can appear anywhere in the logical address space

==============================================================================
Q) Segmentation

Segmentation is a memory management scheme that supports the user's view of memory.

                                        When the user program is compiled the compiler generates segments like

                                        1) The code segment

                                        2) Global variables segment

                                        3) Heap memory segment

                                        4) Stack segment etc

                                        Each entry in segment table has

a segment base (the starting physical address of the segment) and a limit (which specifies the length of the segment)

                                        Logical address is divided into 2 parts

                                        segment-number offset

                                        Segment table is indexed by segment number

                                        Segment-table base register (STBR) stores the location of segment table in main memory

                                        Segment-table length register (STLR) stores number of segments used by a program

The segment number (say 's') is used to find the entry in the segment table. The required entry is 's' locations from the beginning of the segment table. Once the required entry in the segment table is found, the offset ('d') is compared with the limit.

If (offset < limit), then the offset is added to the base entry to generate the physical address; otherwise the hardware traps to the OS (addressing error).
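The translation check above can be sketched as follows (a toy example; the base/limit values in the segment table are invented):

```python
# Segmentation address translation sketch; segment table values are invented.
# Each entry holds (base, limit); an offset >= limit is an addressing error.
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]  # indexed by segment number

def translate(segment_number, offset):
    base, limit = segment_table[segment_number]
    if offset >= limit:                  # offset must be less than the limit
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset                 # physical address = base + offset

print(translate(2, 53))   # segment 2: 4300 + 53 = 4353
```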

Q) Shared Segments: Code sharing occurs at the segment level. Shared segments must have the same segment number.

Allocation - a dynamic storage-allocation problem:

use best fit/first fit; may cause external fragmentation.

Protection - protection bits associated with segments:

read/write/execute privileges. Placing an array in a separate segment lets the hardware check for illegal array indexes.

===============================================================================
Q) Virtual memory

Ans: In paging and segmentation, a program will execute only if the entire process is in main memory. With virtual memory, only part of the program needs to be in memory for execution.

1. A program that is larger than the available main memory can still run, as only part of the program is loaded into main memory. The logical address space can therefore be much larger than the physical address space.

                                        2 Allows address spaces to be shared by several processes

Virtual memory can be implemented via demand paging or demand segmentation.

                                        The large blank space between heap and stack is part of virtual address space Virtual memory allows files and memory to be shared by 2 or more processes through page sharing as shown in below figure (b) Figure (a) depicts that virtual memory that is larger than physical memory

                                        ==============================================================================

Q) Demand paging
Demand paging is a technique of loading pages from disk into main memory only when the page is needed. Hence, using a smaller amount of physical memory, we get faster response.

                                        Demand paging is similar to PAGING + SWAPPING

                                        Instead of swapping a whole process the pager brings only those pages needed into memory

                                        To distinguish between pages that are in memory and pages that are on the disk validinvalid bit is used

A valid/invalid bit is attached to each entry in the page table. When this bit is set to valid, the page is legal and in memory. When this bit is set to invalid, either the page is illegal or the page is on disk.

When the CPU generates a logical address whose page table entry is set to invalid, a page fault occurs, and the required page is on disk. A section of the hard disk called swap space is used to hold pages that are not present in memory.

                                        Q) Procedure for Handling a Page Fault

1) The CPU generates a logical address, and if the valid/invalid bit for the page containing this address is set to invalid, a page fault occurs.

                                        2) Page fault causes a trap to operating system

3) Check whether the logical address is within the logical address space of the process, i.e., the PTBR and PTLR are checked. If the required logical address is not within the logical address space of the process, terminate the process; else the page is legal but not in memory - it is on disk.

4) To bring the required page into memory, find a free main memory frame, map the memory address to the disk block, fetch the disk block, and load it into the free frame. 5) When the required page is brought into memory, update the page table to indicate that the page is in memory. 6) Restart the instruction interrupted by the illegal-address trap. The process continues as if the page had always been in memory.
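The steps above can be summarised in a toy handler (a sketch, not kernel code; the swap-space contents and free-frame list are invented example values):

```python
# A simplified page-fault handler following the steps above.
# All structures (page table, swap space, free frame list) are toy stand-ins.
page_table = {}                          # page -> (frame, valid_bit)
swap_space = {0: "A", 1: "B", 2: "C"}    # pages resident on disk
free_frames = [3, 8]
memory = {}                              # frame -> page contents

def access(page):
    if page in page_table and page_table[page][1]:   # valid: page already in memory
        return page_table[page][0]
    if page not in swap_space:                       # outside the address space
        raise MemoryError("trap: illegal address, terminate process")
    frame = free_frames.pop()                        # find a free frame
    memory[frame] = swap_space[page]                 # read the page in from disk
    page_table[page] = (frame, True)                 # update the page table
    return frame                                     # restart the instruction

frame = access(1)   # page fault serviced; page 1 is now resident
```

A second access to the same page finds the valid bit set and returns immediately, just as the restarted instruction would.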

Q) What is pure demand paging?
Ans: If no page belonging to the executing process is in main memory, the process will fault for every page it needs. Page faults occur until every page that is needed is in memory. This scheme is known as pure demand paging: NEVER BRING A PAGE IN UNTIL IT IS REQUIRED.

Q) Page Replacement
1. Find the location of the desired page on the disk.
2. Find a free frame:
i) If there is a free frame, use it.
ii) If there is no free frame, use a page replacement algorithm to find some page in memory that is not really in use, and swap it out.
iii) If the modify bit associated with the victim frame is set to 1, the page is written to disk; set the valid/invalid bit of the victim page's page-table entry to invalid to indicate that it is no longer in memory. Else there is no need to write to disk.
3. Read the desired page into the victim frame and update the frame and page tables.
4. Restart the user process.

=================================================================================================
Q) Page Replacement Algorithms
Goal: produce a low page-fault rate. The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

1. First In, First Out (FIFO) page replacement algorithm: uses the time when a page was brought into memory. The page that was brought into memory first is replaced, i.e., the oldest page is chosen for replacement. We create a FIFO queue to hold all pages in memory.

When a page is brought into memory, we insert it at the tail of the queue, and we replace the page at the head of the queue.

FIFO: 15 page faults. A problem with FIFO is Belady's anomaly. For the reference string
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
with 3 frames allocated to the process, FIFO page replacement causes 9 page faults; with 4 frames, it causes 10 page faults. As the number of frames allocated to a process increases, page faults should decrease, but with FIFO page replacement page faults can increase.
2. Optimal Page Replacement Algorithm: replace the page that will not be used for the longest period of time. Used for measuring how well other algorithms perform. It is not practical to implement the OPT algorithm, as it is not possible to know what the future references will be. Ex:

                                        Optimal page replacement 9 page faults
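Belady's anomaly on the reference string above can be checked with a short FIFO simulation (a sketch):

```python
from collections import deque

# FIFO page replacement: counting faults on the reference string above
# reproduces Belady's anomaly (more frames, yet more faults).
def fifo_faults(reference_string, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:       # no free frame: evict the oldest page
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(ref, 3))   # 9 faults
print(fifo_faults(ref, 4))   # 10 faults -- Belady's anomaly
```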

3. Least Recently Used (LRU) page replacement algorithm: replace the page that has not been used for the longest period of time. LRU makes use of the time of last use of the page. Ex:

                                        Number of page faults = 12

LRU requires hardware assistance to determine the time of last use of a page. We can use 1) a stack or 2) a counter to implement the LRU page replacement algorithm.

1) Stack implementation - keep a stack of page numbers in doubly linked form. When a page is referenced, it is moved to the top of the stack. This implementation requires 6 pointers to be changed.

2) Counter implementation - a time-of-use field is allocated to each page-table entry. The CPU maintains a counter that is incremented on every memory reference; the counter value is copied to the time-of-use field of the referenced page.
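The stack idea can be mimicked in software (illustrative only - real LRU needs the hardware assistance described above; here an OrderedDict stands in for the doubly linked stack, with the most recently used page kept at the end):

```python
from collections import OrderedDict

# LRU page replacement sketch using an OrderedDict as the recency stack.
def lru_faults(reference_string, num_frames):
    frames, faults = OrderedDict(), 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)            # referenced: move to the stack top
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)      # evict the least recently used page
            frames[page] = True
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(ref, 3))   # 10 faults
```

Unlike FIFO, LRU never exhibits Belady's anomaly: with 4 frames the same string causes fewer faults (8), not more.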

Disadvantages of the LRU page replacement algorithm: updating the counter or stack must be done for every memory reference; hence LRU is slow, and its implementation requires hardware assistance.
==============================================================================
Q) LRU approximation page replacement
1) Using a reference bit 2) Additional-reference-bits algorithm 3) Second-chance algorithm 4) Enhanced second-chance algorithm 5) Counting-based page replacement

Counting-based replacement has two schemes: 1) Least Frequently Used (LFU) 2) Most Frequently Used (MFU)

6) Page-buffering algorithms

Q) What is Thrashing?
Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur when global (or local) page replacement is used, as described below. The solution to thrashing is the working-set strategy. If we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know how many frames a process needs, we use the locality model of process execution.

Here, locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes its current locality. The working-set model uses a working set and a parameter Δ = the working-set window. The set of pages in the most recent Δ page references is the working set. Let WSSi be the working-set size for process Pi, i.e., Pi needs WSSi frames. Let D be the total demand for frames; then D = Σ WSSi

and let m be the total number of available frames.

If D > m, thrashing occurs because some processes will not have enough frames. So:
If (D > m), the OS selects a process to suspend; else the OS may initiate another process. The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.
========================================================================
Q) How does thrashing occur in global and local page replacement?
Ans: In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, causing a page fault for some other process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more and more processes queue up for the paging device,

the READY queue empties and the CPU becomes idle. So the OS introduces new processes since the CPU is free. This further increases the number of page faults.

In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs only the frames allocated to that process can be replaced. When a process starts thrashing, other processes are not affected. But since the processes will be queued for the paging device most of the time, the average service time for a page fault increases. Hence the effective access time increases even for a process that is not thrashing.

Q) Page Table Structure (or) Structure of the page table in memory

                                        Hierarchical Paging Hashed Page Tables Inverted Page Tables

Hierarchical Paging: if the page table is large, we break it up into multiple page tables, i.e., the page table itself is paged. A simple technique is a two-level page table.
Two-Level Paging Example: a logical address (on a 32-bit machine with 4K page size) is divided into: 1. a page number consisting of 20 bits; 2. a page offset consisting of 12 bits. Since the page table is paged, the page number is further divided into: 1. a 10-bit page number; 2. a 10-bit page offset. Thus a logical address is as follows: page number | page offset

p1 | p2 | d

10 | 10 | 12, where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table. Two-Level Page-Table Scheme:
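The 10/10/12 split can be computed with simple bit operations (a sketch; the example address is an invented value):

```python
# Splitting a 32-bit logical address (4K pages) into p1, p2 and d,
# matching the 10 / 10 / 12 bit layout described above.
def split_address(addr):
    d = addr & 0xFFF             # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: index into the inner page table
    p1 = (addr >> 22) & 0x3FF    # top 10 bits: index into the outer page table
    return p1, p2, d

print(split_address(0x00403005))   # -> (1, 3, 5)
```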

                                        Address-Translation Scheme Address-translation scheme for a two-level 32-bit paging architecture

Hashed Page Tables (or hash tables): hashed page tables are common in address spaces larger than 32 bits. Each entry in the hash table contains a linked list of elements that hash to the same location. The hash table is indexed by hash value. Each element consists of 3 fields:

                                        1 Page number2 Frame number3 Pointer to next element in linked list

The algorithm works as follows: the page number is passed to the hash function to get a hash value. Locating the hash value in the hash table is easy, as the hash table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address. If there is no match, the next entries in the linked list are searched.
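A minimal sketch of this scheme, with an invented table size and a simple invented hash function (page number mod table size):

```python
# Sketch of a hashed page table with chaining; table size is an invented value.
TABLE_SIZE = 8

class Entry:
    """One element of a chain: page number, frame number, next pointer."""
    def __init__(self, page, frame, nxt=None):
        self.page, self.frame, self.next = page, frame, nxt

table = [None] * TABLE_SIZE

def insert(page, frame):
    h = page % TABLE_SIZE                     # hash function: page number mod size
    table[h] = Entry(page, frame, table[h])   # prepend to the chain at that slot

def lookup(page):
    node = table[page % TABLE_SIZE]
    while node is not None:                   # walk the linked list of collisions
        if node.page == page:
            return node.frame                 # match: frame number found
        node = node.next
    return None                               # page not present

insert(3, 42)
insert(11, 7)        # 11 % 8 == 3: collides with page 3, chained in the same slot
print(lookup(11))    # -> 7
```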

                                        Inverted Page Table

An inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table has:

Process-id, Page number

A logical address also has 3 parts: Process-id, Page number and Offset. The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, frame number i is combined with the offset to get the physical address. If there is no match, the address is illegal.
Advantages and disadvantages:

Decreases the memory needed to store each page table, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to limit the search to one, or at most a few, page-table entries. Inverted Page Table Architecture:
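The lookup described above can be sketched as follows (the (process-id, page) entries and page size are invented example values; the linear search shown is exactly what the hash-table solution avoids):

```python
# Inverted page table sketch: one entry per physical frame, searched linearly.
inverted_table = [("P1", 0), ("P2", 0), ("P1", 1), ("P2", 3)]  # index = frame number
PAGE_SIZE = 4096

def translate(pid, page, offset):
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, page):                  # match found at the i-th entry
            return frame * PAGE_SIZE + offset     # frame number i combined with offset
    raise MemoryError("trap: illegal address")

print(translate("P1", 1, 20))   # frame 2 -> 2*4096 + 20 = 8212
```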

                                        Unit-IV CPU Scheduling

Basic concepts: to maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns the CPU to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle: process execution begins with a CPU burst, followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

An I/O-bound program has many short CPU bursts. A CPU-bound program has few long CPU bursts.

Schedulers: a module in the OS that makes scheduling decisions.
Long-term scheduler (or job scheduler) - selects which processes should be brought into the ready queue.
Medium-term scheduler - selects which processes should be swapped in/out of memory.
Short-term scheduler (or CPU scheduler) - whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term scheduler (or CPU scheduler).

Dispatcher: gives CPU control to the process selected by the short-term scheduler. This function involves the following:

1. switching context; 2. switching to user mode; 3. jumping to the proper location in the user program to restart that program.

The time taken to stop one process and start another is known as dispatch latency. This must be kept small.
Ready Queue: the ready queue may be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of processes.

                                        Pre-emptive Scheduling

CPU scheduling decisions take place under one of four conditions:

1. When a process switches from the running state to the waiting state. 2. When a process switches from the running state to the ready state, for example in response to an interrupt. 3. When a process switches from the waiting state to the ready state, say at completion of I/O. 4. When a process terminates.

For conditions 1 and 4 there is no choice - a new process must be selected. For conditions 2 and 3 there is a choice - continue running the current process, or select a different one. If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive or cooperative. In a non-preemptive system, a process runs until it gives up the CPU itself, i.e. when it waits for I/O or when it completes. Otherwise the system is said to be pre-emptive.

Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to Mac OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

Note that pre-emptive scheduling can cause problems when two processes share data, because one process may be interrupted in the middle of updating a shared data structure.

Preemption can also be a problem if the kernel is busy executing a system call (e.g. updating critical kernel data structures) when the preemption occurs. Solution: wait until the system call has either completed or blocked before allowing the pre-emption. But this solution is problematic for real-time systems.

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling them on exiting. But this should be done only on very short pieces of code that take little CPU time.

Scheduling criteria used to compare CPU scheduling algorithms:
1. CPU utilization: in a real system, CPU utilization ranges from 40% to 90%
2. Throughput: number of processes that are completed per unit time
3. Turnaround time: time when the process completed minus time when the process was submitted, i.e. it includes waiting time and burst time
4. Waiting time: amount of time that a process spends waiting for the CPU
5. Response time: time when the first response came minus time when the process was submitted

Optimization criteria: maximize CPU utilization, maximize throughput, minimize turnaround time, minimize waiting time, minimize response time.

CPU Scheduling Algorithms

1) First-Come First-Served (FCFS) Scheduling
First-come first-served CPU scheduling is non-pre-emptive. Processes are scheduled in the order they arrived. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages:
1. Average waiting time is quite long

2. Convoy effect (small processes wait in the queue for a big process to leave the CPU)

Example:
Process  Burst Time
P1       24
P2       3
P3       3

Suppose that the processes arrive in the order P1, P2, P3.

The Gantt chart for the schedule is:

| P1 (0-24) | P2 (24-27) | P3 (27-30) |

Waiting time for P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.

2) Shortest-Job-First (SJF) Scheduling
Schedule the process with the shortest burst time. If the CPU burst times of two processes are the same, FCFS is used.
Advantages: average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either pre-emptive or non-pre-emptive:
1. Non-pre-emptive SJF - once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Pre-emptive SJF, or Shortest-Remaining-Time-First (SRTF) - if a newly arrived process is shorter than what remains of the currently executing process, pre-empt the currently executing process.
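The FCFS waiting times above can be checked with a short sketch (the function name and the use of integer averaging are my own choices, not from the notes):

```c
/* FCFS: each process waits for the sum of the bursts of every
   process ahead of it in the queue. Fills wait[] per process and
   returns the integer average waiting time. */
int fcfs_avg_wait(const int burst[], int n, int wait[]) {
    int elapsed = 0, total = 0;
    for (int i = 0; i < n; i++) {
        wait[i] = elapsed;        /* waits until earlier bursts finish */
        total += elapsed;
        elapsed += burst[i];
    }
    return total / n;
}
```

For bursts {24, 3, 3} this gives waits 0, 24, 27 and average 17, matching the Gantt chart.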

Example:
Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4

SJF (non-preemptive) Gantt chart:

| P1 (0-7) | P3 (7-8) | P2 (8-12) | P4 (12-16) |

Average waiting time = [0 + (8-2) + (7-4) + (12-5)]/4 = 4.

Example of pre-emptive SJF:

Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4

SJF (preemptive) Gantt chart:

| P1 (0-2) | P2 (2-4) | P3 (4-5) | P2 (5-7) | P4 (7-11) | P1 (11-16) |

Average waiting time = (9 + 1 + 0 + 2)/4 = 3.

3) Priority Scheduling
A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. preemptive 2. non-preemptive.

Problem: starvation - low-priority processes may never execute. Solution: aging - as time progresses, increase the priority of the process.
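The pre-emptive SJF (SRTF) trace above can be reproduced by simulating one time unit at a time (a sketch; the fixed-size array and tie-breaking by lowest index are my own simplifications):

```c
/* Shortest-Remaining-Time-First simulated over unit time steps.
   Assumes n <= 16 processes. Returns the integer average waiting
   time, where waiting = turnaround - burst. */
int srtf_avg_wait(const int arrive[], const int burst[], int n) {
    int rem[16], done = 0, t = 0, total_wait = 0;
    for (int i = 0; i < n; i++) rem[i] = burst[i];
    while (done < n) {
        int best = -1;
        for (int i = 0; i < n; i++)     /* shortest remaining, arrived */
            if (arrive[i] <= t && rem[i] > 0 &&
                (best < 0 || rem[i] < rem[best]))
                best = i;
        if (best < 0) { t++; continue; } /* CPU idle */
        rem[best]--; t++;
        if (rem[best] == 0) {
            done++;
            total_wait += (t - arrive[best]) - burst[best];
        }
    }
    return total_wait / n;
}
```

For arrivals {0, 2, 4, 5} and bursts {7, 4, 1, 4} the per-process waits come out 9, 1, 0, 2, so the average is 3, as in the worked example.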

4) Round Robin (RR)
Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance:
1. If q is large, RR behaves like FIFO.
2. If q is small, q must still be large relative to the context-switch time, otherwise the overhead is too high.

Example of RR with time quantum = 4:
Process  Burst Time
P1       24
P2       3
P3       3

The Gantt chart is:

| P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30) |

Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 = 5.66.

5) Multilevel Queue Scheduling
The ready queue is partitioned into separate queues. For example, the ready queue can be divided into 2 queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, Round Robin can be used in the foreground queue and FCFS in the background queue.

Scheduling must also be done between the queues. This can be done in 2 ways:
1. Fixed-priority scheduling

The foreground queue has the highest priority: all processes in the foreground queue must complete and the queue must be empty before processes in the background queue are given the CPU, as shown in the figure below. Disadvantage: starvation.

2. Time slice - each queue gets a certain amount of CPU time, which it can schedule amongst its processes, e.g. 80% to the foreground queue (RR) and 20% to the background queue (FCFS).

6) Multilevel Feedback Queue Scheduling
A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes will be in the highest-priority queue. If a process takes more CPU time, it is moved to a lower-priority queue. If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues: Q0, Q1, Q2.

1. Q0 - time quantum 8 milliseconds
2. Q1 - time quantum 16 milliseconds
3. Q2 - FCFS

                                        Scheduling

1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 milliseconds. If it does not finish in 8 milliseconds, it is moved to queue Q1.
2. At Q1 the process receives 16 additional milliseconds. If it still does not complete, it is moved to queue Q2.

Q) Synchronization
Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of process synchronization is the Producer-Consumer problem (or Bounded-Buffer problem).
Producer-Consumer problem: make sure that the producer process does not add data when the buffer is full, and that the consumer process does not take data when the buffer is empty.

The code for the producer process can be written as follows:

    while (true) {
        /* produce an item in nextProduced */
        while (counter == BUFFER_SIZE)
            ;   /* do nothing */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed from the buffer. in is an index that always points to the first empty slot after the last filled slot; out is an index that always points to the first filled slot, at the head of the circular queue.

The code for the consumer process is shown below:

    while (true) {
        while (counter == 0)
            ;   /* do nothing */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
    }

The above producer and consumer code produces a RACE CONDITION. When the outcome of the execution of several co-operating processes depends on the order in which their accesses to shared data take place, it is called a race condition. For example, let counter = 5.

Given counter = 5, if the producer produces one item and the consumer consumes one item, the correct value of counter is still 5. But if the two updates interleave so that the consumer's store happens last, counter = 4, an inconsistent result; if the producer's store happens last instead, counter = 6, also inconsistent.
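The inconsistency arises because counter++ and counter-- are not atomic: each compiles to a load, a modify, and a store. One possible interleaving of those machine-level steps (a sketch; the step ordering shown is just one unlucky schedule) loses an update:

```c
/* Replaying one bad interleaving of counter++ (producer) and
   counter-- (consumer), each expanded to load/modify/store. */
int lost_update(void) {
    int counter = 5;
    int r1, r2;
    r1 = counter;       /* step 1: producer loads counter (5)      */
    r1 = r1 + 1;        /* step 2: producer increments (6)         */
    r2 = counter;       /* step 3: consumer loads counter (5)      */
    r2 = r2 - 1;        /* step 4: consumer decrements (4)         */
    counter = r1;       /* step 5: producer stores 6               */
    counter = r2;       /* step 6: consumer stores 4 - final value */
    return counter;     /* 4, not the correct 5                    */
}
```

Swapping steps 5 and 6 would leave counter = 6 instead; either way one update is lost.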

Solution to avoid the race condition: make sure that only one process at a time updates the shared variable (here, counter).

Q) Critical section problem
Ans: Each process has a segment of code called its critical section. Critical-section code is code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical-section code at the same time. For this to happen:
1. Each process must request permission to enter its critical section - entry section
2. The end of the critical section - exit section
3. The rest of the code after the critical section - remainder section

A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes that are not in their remainder sections take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: there is a bound on how many times other processes may enter their critical sections after a process has requested entry and before that request is granted, so no process waits indefinitely.

Q) Peterson's solution for the critical section problem (software-based solution)
Ans: A sample interleaving of the two processes (process 1 uses flag[1] and sets turn = 2; process 2 uses flag[2] and sets turn = 1):

Time  Process 1                                Process 2
T0    do { flag[1] = TRUE; turn = 2;           do { flag[2] = TRUE;
T1                                             turn = 1;
T2    while (flag[2] && turn == 2);            while (flag[1] && turn == 1);
      since turn is now 1, the loop exits      loops until flag[1] becomes FALSE
T3    enters critical section
T4    flag[1] = FALSE;                         enters critical section
T5    enters remainder section                 flag[2] = FALSE;
      } while (TRUE);                          enters remainder section
                                               } while (TRUE);
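Peterson's algorithm only works if the flag and turn accesses are not reordered by the compiler or CPU, so a runnable version needs atomic operations. A sketch using C11 atomics and POSIX threads (the iteration count and function names are my own choices, not from the notes):

```c
#include <pthread.h>
#include <stdatomic.h>

static atomic_int flag[2];           /* flag[i]: process i wants in */
static atomic_int turn;              /* whose turn it is to wait    */
static int shared_counter;           /* protected by the lock       */

static void peterson_lock(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], 1);       /* entry section */
    atomic_store(&turn, j);
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                            /* busy wait */
}

static void peterson_unlock(int i) {
    atomic_store(&flag[i], 0);       /* exit section */
}

static void *worker(void *arg) {
    int id = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        peterson_lock(id);
        shared_counter++;            /* critical section */
        peterson_unlock(id);
    }
    return NULL;
}

int run_two_workers(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return shared_counter;           /* 200000 iff no update was lost */
}
```

If mutual exclusion held, no increment is lost and the counter equals the total number of iterations.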

10.1 File Concept
Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

                                        File Attributes

1. Name - name of the file, for user understanding
2. Identifier - used by the OS to identify a file (e.g. inode number)
3. Type - text files, executable files, binary files, etc.
4. Location - location of the file on the hard drive
5. Size - current size of the file
6. Protection - controls who can read, write, or execute the file
7. Time, date, and user identification - when the file was created, last modified, and last used; this data is used for protection, security, and usage monitoring

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. The directory structure is kept on disk.

                                        File Operations

The file abstract data type supports these common operations:
1. Creating a file: find free space on disk, add an entry to the directory
2. Writing a file: write data at the current file-position pointer location and update the pointer
3. Reading a file: read data at the current file-position pointer location and update the pointer
4. Repositioning within a file (file seek): change the pointer to a given value
5. Deleting a file: free the space allocated to the file and remove its directory entry
6. Truncating a file: delete the data and update the file size

                                        Open File Table

Most OSes require that files be opened before access and closed after access. Information about all files currently open in the system is stored in a system-wide open-file table maintained by the OS, and information about the files opened by a process is stored in that process's open-file table. An open-file table entry holds:

1. File pointer - the current position in the file for the next read or write
2. File-open count - how many times the file has been opened (possibly simultaneously by different processes) and not yet closed; when this counter reaches zero, the entry can be removed from the table
3. Disk location of the file
4. Access rights

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With advisory locks, software developers make sure the locks are acquired and released appropriately.
4. With mandatory locks, the OS makes sure the locks are acquired and released appropriately.
5. UNIX uses advisory locks and Windows uses mandatory locks.

                                        File Types

Implementing file types: a common technique is to include the type as part of the filename. The name is split into 2 parts:

1. File name
2. Extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on it. For example, when a user opens Microsoft Word and clicks Open in the File menu, the user specifies only the file name without the extension; the application looks for a file with the given name and the extension it expects. Such extensions are hints for applications and are not enforced by the operating system. UNIX uses a crude magic number stored at the beginning of the file to indicate its type. In the Mac OS X operating system, a file produced by a word processor has the word processor's name recorded as its creator; when the user double-clicks the file, that word processor opens it. Some common file types are shown below.

File Structure: Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and the location of the first instruction. If an OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, including finding the first executable instruction).

Macintosh files have two forks - a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                                        Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units such as a single byte, a data record, or a structure. The number of logical units that fit into one physical block determines the packing. For example, if a student file is organized internally as records of 64 bytes each, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e. 8 records can be stored per block.

Internal fragmentation may occur because of this.
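The packing arithmetic and the resulting internal fragmentation can be sketched as follows (the 70-byte record in the second check is my own illustrative value, not from the notes):

```c
/* Records per physical block, plus the leftover bytes at the end
   of each block that become internal fragmentation. */
int packing(int block_bytes, int record_bytes, int *wasted) {
    int per_block = block_bytes / record_bytes;
    *wasted = block_bytes - per_block * record_bytes;
    return per_block;
}
```

With 512-byte blocks and 64-byte records, packing is 8 with no waste; a 70-byte record would pack 7 per block and waste 22 bytes per block.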

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: records are read and written sequentially, one record after another; we cannot randomly access an arbitrary record. A sequential-access file emulates magnetic tape operation and generally supports the following operations:
- read next: read a record and advance to the next position
- write next: write a record and advance to the next position
- rewind
- skip n records: may or may not be supported; n may be limited to positive numbers or to +/- 1

2. Direct access (or relative access): the file is made up of fixed-length records, so we can randomly access any record. There is no restriction on the order of reading or writing. Direct-access files are useful when large amounts of data must be accessed randomly, e.g. a database.

Operations supported include (n is a relative block number, i.e. relative to the beginning of the file):
1. read n - read record number n
2. write n - write record number n
3. jump to record n - n could be 0 or the end of the file
4. query current record - used to return to this record later
5. Sequential access can easily be emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct-access method. We construct an index for the file which contains pointers to the various blocks; to find a record we first search the index and then use the pointer to access the file directly. For example, if a student file has 120,000 records sorted by student id, we can build an index containing the first student id of each block. If 64 records fit in each block, the number of blocks needed for 120,000 records is 120000/64 = 1875, so the index has 1875 entries. The index is kept in main memory so that searching is fast.

If the index itself becomes too large, we create an index for the index file (e.g. a B-tree).
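Searching such an index can be sketched as a binary search over the first key stored for each block (function and variable names are mine, for illustration):

```c
/* Given a sorted index of the first key in each data block, find
   the block that would contain `key` (binary search: the last
   block whose first key is <= key). */
int find_block(const int first_key[], int nblocks, int key) {
    int lo = 0, hi = nblocks - 1, ans = 0;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (first_key[mid] <= key) {
            ans = mid;           /* candidate block; look right */
            lo = mid + 1;
        } else {
            hi = mid - 1;        /* key is in an earlier block  */
        }
    }
    return ans;
}
```

Only the chosen data block then needs to be read from disk.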

                                        Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                                        Three major methods of allocating disk space are

1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If a file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file indicates the address of the starting block (b) and the length (n).

                                        Advantages

1. Direct access to block i is easy: it is simply block b+i. Since the file is stored sequentially, sequential access is also easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

                                        Disadvantages

1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) in the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to it. Solution: copy all files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time-consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire space may not be used by the file at the beginning, which leads to wasted space.

If the size of the file is not known in advance, then (i) if we allocate too little space, the file cannot be extended; the workaround - find a larger hole, copy the contents of the file to the new space, and release the old space - works only as long as such a hole exists, and is time-consuming; (ii) if we overestimate the amount of space needed, much of it goes unused, leading to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme. A contiguous chunk of space is allocated at first; if the file needs more, another chunk of contiguous space, known as an extent, is added. The location of a file is then recorded as a location and a block count, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable size, external fragmentation occurs.

Linked Allocation
Each file is a linked list of disk blocks, and the blocks may be scattered anywhere on the disk. The directory contains a pointer to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0.

To add more data to an existing file, a free block is found, the data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. Easy to find space for a new file or to add more data to an existing file
2. The size of the file need not be declared when the file is created
3. There is no external fragmentation

Disadvantages:
1. Seek time is greater
2. The number of disk seeks required is greater
3. Pointers use a noticeable percentage of the file's disk space
4. Inefficient for direct access: to access the ith block, i disk reads are required
5. Not reliable: the file is lost if a pointer is lost or damaged

A solution to disadvantage 3 is to group multiple blocks into clusters (e.g. 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk accesses, reduces the space wasted on pointers (fewer pointers are needed), and simplifies free-list management. Disadvantage: internal fragmentation - more space is wasted when a cluster is partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the file-allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                                        Advantages

1. Easy to find space for a new file or to add more data to an existing file
2. Random access time is improved

Disadvantage: the number of disk seeks is greater; the disk head must move to the FAT to find the location of a block, and then move to the location of the block itself.
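A FAT lookup can be sketched as chasing next-block entries until the end-of-file mark. In the sketch below, the marker value -1 is my own choice, and the block numbers reuse the 9 → 16 → 1 → 10 → 25 file from the linked-allocation example:

```c
#define FAT_EOF (-1)   /* special end-of-file table value (illustrative) */

/* Count the blocks of a file by following its FAT chain, starting
   from the first block recorded in the directory entry. Entries
   with value 0 denote unused blocks and never appear in a chain. */
int fat_chain_length(const int fat[], int first_block) {
    int n = 0;
    for (int b = first_block; b != FAT_EOF; b = fat[b])
        n++;
    return n;
}
```

The same traversal, with a read of each visited block, implements sequential access; direct access only needs the in-memory FAT walk before the single data-block read.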

                                        3 Indexed allocation

Each file has its own index block, which is an array of disk-block addresses: the ith entry in the index block points to the ith block of the file.

                                        The directory contains the address of the index block

                                        When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found, the data is written to it, and the address of the block is put in the index block.

                                        Advantages

1. Easy to find space for a new file or to add more data to an existing file
2. Supports direct (random) access efficiently
3. No external fragmentation

Disadvantages

1. Seek time is greater
2. The number of disk seeks required is greater
3. Not reliable: losing the index block loses access to the whole file
4. The index block must be kept in memory. If this memory is not available, we have to read the index block and then the desired data block, i.e. 2 disk accesses are required. With multilevel indexes, accessing a block near the end of the file may require reading several index blocks before the needed data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

Various mechanisms are used to handle the size of the index block:

1. Linked scheme: an index block is one disk block. The index block contains a small header with the name of the file and a set of the first 100 disk-block addresses; the last address is the address of another index block, used for large files.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

3. Combined scheme:

For example, say there are 15 pointers in the index block in the file's inode. The first 12 of these point to direct blocks, i.e. they contain the addresses of blocks with file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, an index block containing the addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block, as shown below.
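With assumed sizes - 4 KB blocks and 4-byte block addresses, numbers that are not in the notes - the largest file this 12 direct + single/double/triple indirect layout can address works out as:

```c
/* Maximum file size under the 12-direct + single/double/triple
   indirect scheme. Assumes block_bytes is a multiple of ptr_bytes. */
long long max_file_bytes(long long block_bytes, long long ptr_bytes) {
    long long p = block_bytes / ptr_bytes;   /* pointers per index block */
    long long blocks = 12                    /* direct blocks            */
                     + p                     /* single indirect          */
                     + p * p                 /* double indirect          */
                     + p * p * p;            /* triple indirect          */
    return blocks * block_bytes;
}
```

With 4 KB blocks and 4-byte pointers there are 1024 pointers per index block, so the triple indirect level alone addresses 1024^3 blocks and the total exceeds 4 TB.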

                                        Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                        The free space list can be implemented as

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, on a disk where blocks 2, 3, 4, 5, and 8 are free, the free-space bit map would be 001111001000...

Advantages: simple, and makes it easy to find contiguous free blocks for a file.

                                        Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a large disk there.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1 TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks may be scattered all over the disk.
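Finding a free block in a bit map is usually done a word at a time: skip all-zero words, then scan the bits of the first nonzero word. A sketch using the notes' convention of 1 = free (the 32-blocks-per-word packing, with block 0 in bit 0 of word 0, is my own layout choice):

```c
/* Return the number of the first free block (bit set to 1), or -1
   if there is none. words[] packs 32 blocks per unsigned word. */
int first_free_block(const unsigned words[], int nwords) {
    for (int w = 0; w < nwords; w++) {
        if (words[w] == 0)
            continue;                    /* 32 allocated blocks - skip */
        for (int b = 0; b < 32; b++)
            if (words[w] & (1u << b))
                return w * 32 + b;       /* set bit => free block */
    }
    return -1;
}
```

The word-level skip is what makes the scan cheap in practice: most of the map is examined 32 blocks at a time.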

2. Linked list: the free-space list is implemented by linking all free disk blocks together, keeping a pointer to the first free block in a special location on the disk. For example, on a disk where blocks 2, 3, 4, 5, and 8 are free, we keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no wasted space.
Disadvantage: contiguous space cannot be found easily, i.e. to traverse the free-space list we must read each block, which requires substantial I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last block contains the addresses of another n free blocks, and so on.
Advantage: the addresses of a large number of free blocks can be found quickly.
4. Counting: the free-space list can be maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it.

Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only when counts are greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

5. Space maps: ZFS divides disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list on disk is updated.

Synchronization hardware: The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. The TestAndSet() instruction
2. The Swap() instruction

1. The TestAndSet() instruction atomically returns the old value of its Boolean argument and sets the argument to true.

Solution using TestAndSet(): the shared Boolean variable lock is initialized to false. Before its critical section, a process executes while (TestAndSet(&lock)); and after the critical section it sets lock = false.

2. The Swap() instruction atomically exchanges the contents of its two Boolean arguments.

Solution using Swap(): a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key. A process sets key = true and repeatedly executes Swap(&lock, &key) until key becomes false, then enters its critical section; on exit it sets lock = false.

Both the TestAndSet() and Swap() instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee about the order in which they will enter, and any one process could have the bad luck of waiting forever. (Since there is no guarantee about the relative speeds of the processes, a very fast process could release the lock, whip through its remainder section, and re-acquire the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The following solution uses TestAndSet() and satisfies bounded waiting. It uses two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes contending for the critical section.

                                        Bounded-waiting mutual exclusion with TestAndSet( )

On exit, a process first looks, in order (starting with the next process in the array), for a process that is waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process while continuing to block all the others; if no process is waiting, it releases the lock.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through two standard atomic operations: wait() and signal().

wait(S) {
    while (S <= 0)
        ; // do nothing
    S--;
}

signal(S) {
    S++;
}

Each process that wishes to use a resource performs a wait() operation; when a process releases a resource, it performs a signal() operation.

Semaphore usage:
1. A binary semaphore (or mutex lock) can take only the values 0 and 1.

2. A counting semaphore can take any integer value. It is used to count the remaining instances of a resource. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources, decrementing the counter. When the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal() call.
For example, suppose there are 3 resources and 4 processes, so the counting semaphore starts at 3:
If process P1 requires a resource, it performs wait(); the semaphore becomes 2.
If process P2 requires a resource, it performs wait(); the semaphore becomes 1.
If process P3 requires a resource, it performs wait(); the semaphore becomes 0.
If process P4 requires a resource, it performs wait(), and P4 must wait until a resource is available.
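The 3-resources / 4-processes example can be mirrored with Python's threading.Semaphore (acquire corresponds to wait, release to signal); the helper name and the non-blocking acquire are choices made for this sketch, not part of the notes:

```python
import threading

resources = threading.Semaphore(3)   # counting semaphore, initialized to 3
held = []

def use_resource(pid):
    # acquire(blocking=False) is a wait() that reports failure instead of
    # blocking, so the "P4 must wait" case can be observed directly.
    acquired = resources.acquire(blocking=False)
    if acquired:
        held.append(pid)
    return acquired
```

P1, P2 and P3 each succeed and drive the count to 0; P4's attempt fails until some process performs `resources.release()`.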

3. Semaphores can also be used to synchronize certain operations between processes.

For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2. First we create a semaphore named synch, shared by the two processes and initialized to zero. In process P1 we insert the code:

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch is initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal().

Semaphore implementation: When a process is in its critical section, any other process that tries to enter its critical section must loop continuously, wasting CPU time (busy waiting). A semaphore implemented this way is called a spinlock, because the process "spins" while waiting. Each semaphore has an integer value and maintains a queue of waiting processes.

Semaphore implementation with no busy waiting: To overcome busy waiting, we block the waiting process (placing it on the semaphore's waiting queue); the waiting process is later restarted by a wakeup() operation, i.e. it is moved from the waiting queue to the ready state.

Problems with semaphores:
1. Deadlock: A deadlock occurs when multiple processes are blocked, each waiting for a resource that can be freed only by one of the other (blocked) processes.

2. Starvation: one or more processes are blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithm for adding processes to the waiting queue or for selecting one to be removed by the signal() call: if a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: Let processes L, M and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it pre-empts L, and M runs. Thus M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority (priority inheritance) so that M cannot pre-empt it.
=====================================================================
Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem. Consider:

- n buffers, each of which holds one item
- a producer process, which creates items (one at a time)
- a consumer process, which consumes them (one at a time)

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. Since both processes modify the contents of the buffer, synchronization is required. For that we use three semaphores:

1. empty = n (counts the number of empty buffers)
2. full = 0 (counts the number of filled buffers)
3. mutex = 1 (provides mutual exclusion for access to the buffer)

Producer process:
do {
    ...
    produce an item
    ...
    wait(empty);
    wait(mutex);
    ...
    add the item to the buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:
do {
    wait(full);
    wait(mutex);
    ...
    remove an item from the buffer
    ...
    signal(mutex);
    signal(empty);
    ...
    consume the item
    ...
} while (1);
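The producer-consumer pseudocode above can be run directly with Python semaphores (acquire = wait, release = signal); the buffer size, the number of items, and all names here are assumptions made for this sketch:

```python
import threading
from collections import deque

N = 5                                  # number of buffer slots
buf = deque()
empty = threading.Semaphore(N)         # counts empty slots
full = threading.Semaphore(0)          # counts filled slots
mutex = threading.Lock()               # mutual exclusion on the buffer
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                # wait(empty)
        with mutex:                    # wait(mutex) ... signal(mutex)
            buf.append(item)
        full.release()                 # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()                 # wait(full)
        with mutex:
            consumed.append(buf.popleft())
        empty.release()                # signal(empty)

items = list(range(20))
p = threading.Thread(target=producer, args=(items,))
c = threading.Thread(target=consumer, args=(len(items),))
p.start(); c.start()
p.join(); c.join()
```

With a single producer and a single consumer, every item crosses the 5-slot buffer exactly once and in order, and the buffer is empty when both threads finish.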

2. The Readers-Writers Problem. A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers may read the file at the same time, but a writer must have exclusive access (it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

- readers priority: no reader waits merely because a writer is waiting
- writers priority: if a writer is waiting to access the data, no new reader may start reading

A solution to either version may cause starvation: in the readers-priority version, writers may starve; in the writers-priority version, readers may starve.

A semaphore solution to the readers-priority version uses an integer counter and two semaphores:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both readers and writers; it provides mutually exclusive access to the shared data)

Reader process:
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
...
reading is performed
...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

Writer process:
do {
    wait(wrt);
    ...
    writing is performed
    ...
    signal(wrt);
} while (TRUE);

3. The Dining-Philosophers Problem. n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopstick on his left and the one on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner.

A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5]; // initially all values are 1

Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    ...
    eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    ...
    think
    ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (for example, if every philosopher picks up his left chopstick at the same time).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), not all CPUs are equal (there are master and slave CPUs). ASMP systems have processors dedicated to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. Unlike an SMP system, these systems lack the flexibility to assign processes to the least-loaded CPU.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraints.

                                        Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples: digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control; that is, decision-making is spread across the different computers in the system, so that failures of individual computers or of the network do not cripple operation.

A distributed operating system appears to its users as a uniprocessor system, even though it has multiple processors. Users need not know where their programs are run or where their files are located; that is all handled automatically by the operating system.

                                        Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

DESKTOP SYSTEMS: A desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. Handheld systems have very limited memory.


CLUSTERED SYSTEMS: A computer cluster consists of a set of loosely connected computers that work together so that, in many respects, they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                                        Q) Modern operating system

New developments in hardware, applications and security threats have led to the development of the modern operating system.

New hardware developments include increased machine speed, high-speed networks, and increased size and variety of memory devices.

New application developments include multimedia applications, Internet and Web access, and client/server computing.

                                        Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server development from kernel development, and it is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as systems nowadays often contain multiple microprocessors, an SMP operating system provides greater efficiency when there are multiple processors. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

The SMP OS schedules processes or threads across all of the processors.

                                        Advantages

1. Increased performance: if some portion of a program can be run in parallel, performance increases.
2. Increased availability: since all processors can perform the same functions, if one processor fails the others keep working.
3. Incremental growth: a new processor can be added to increase performance.
4. Scaling: cost can scale with the number of processors used.

                                        4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux operating system. Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency and standardization. Components of a Linux system:


                                        Linux Operating System has primarily three components as shown in the below diagram

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system, interacts directly with the hardware, and hides low-level hardware details from system and application programs.

System library - System libraries are special functions or programs through which application programs use the kernel's features.

System utility - System utility programs are responsible for specialized, individual-level tasks such as updating log files or accepting login requests from terminals.

Kernel modules: sections of kernel code that can be compiled, loaded and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and letting them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects one driver's reserved resources from accidental use by another driver.

Kernel mode vs user mode: Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, does not require context switches, and is therefore very efficient and fast. The kernel runs processes, provides system services to them, and gives them protected access to the hardware. Code that is not required to run in kernel mode lives in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or to kernel code; they use the system libraries to invoke kernel functions for low-level tasks.

Basic features: The following are some of the important features of the Linux operating system.

Portable - Portability means that software works in the same way on different types of hardware. The Linux kernel and application programs support installation on almost any kind of hardware platform.

Open source - Linux source code is freely available, and Linux is a community-based development project. Multiple teams work in collaboration to enhance its capabilities, and it is continuously evolving.

Multi-user - Linux is a multiuser system, meaning multiple users can access system resources (memory, RAM, application programs) at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical file system - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system, perform various types of operations, call application programs, etc.


Security - Linux provides user security through authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture: The Linux system architecture consists of the following layers:

Hardware layer - consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - the core component of the operating system; interacts directly with the hardware and provides low-level services to the upper-layer components.

Shell - an interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - utility programs that give the user most of the functionality of an operating system.

Q) Windows XP operating system
1. Extensibility - layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.
4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
5. Performance - XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.
6. International support - different locales are supported via the national language support (NLS) API.

XP architecture:
1. Layered system of modules.
2. Protected mode - hardware abstraction layer (HAL), kernel, executive.
3. User mode - a collection of subsystems:

a. Environmental subsystems emulate different operating systems.
b. Protection subsystems provide security functions.

                                        XP architecture


Q) Windows network operating system. Example: the Windows 2000 operating system.

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne, Operating System Concepts, Reading: Addison-Wesley.





                                          The pages for the private code and data can appear anywhere in the logical address space

==============================================================================
Q) Segmentation

Segmentation is a memory-management scheme that supports the user's view of memory.

When the user program is compiled, the compiler generates segments such as:

                                          1) The code segment

                                          2) Global variables segment

                                          3) Heap memory segment

                                          4) Stack segment etc

Each entry in the segment table has a segment base (the starting physical address of the segment) and a limit (the length of the segment).

                                          Logical address is divided into 2 parts

                                          segment-number offset

                                          Segment table is indexed by segment number

                                          Segment-table base register (STBR) stores the location of segment table in main memory

                                          Segment-table length register (STLR) stores number of segments used by a program

The segment number (say 's') is used to index the segment table: the required entry is 's' entries away from the beginning of the table. Once the required entry is found, the offset ('d') is compared with the limit.

If (offset < limit), the offset is added to the base entry to generate the physical address; otherwise the hardware traps to the operating system (addressing error).
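The lookup described above can be sketched as a small function; the (base, limit) values in the example table are illustrative, not taken from the text:

```python
def translate(segment_table, s, d):
    """Map a logical address (segment s, offset d) to a physical address."""
    if s >= len(segment_table):            # checked against the STLR
        raise IndexError("segment number out of range: trap to OS")
    base, limit = segment_table[s]
    if d >= limit:                         # offset must be < limit
        raise MemoryError("offset beyond segment limit: trap to OS")
    return base + d                        # physical address

segment_table = [(1400, 1000),   # segment 0 (assumed values)
                 (6300, 400)]    # segment 1 (assumed values)
```

For example, logical address (0, 53) maps to physical address 1400 + 53 = 1453, while (1, 400) traps because the offset equals the segment's limit.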

Q) Shared segments
Code sharing occurs at the segment level, and shared segments must have the same segment number.
Allocation is a dynamic storage-allocation problem: using best fit or first fit may cause external fragmentation.
Protection: protection bits (read/write/execute privileges) are associated with segments. Placing an array in a separate segment lets the hardware check for illegal array indexes.

===============================================================================
Q) Virtual memory

Ans: In paging and segmentation, a program can execute only if the entire process is in main memory. With virtual memory, only part of the program needs to be in memory for execution.

1. A program larger than the available main memory can still run, because only part of the program is loaded into main memory. The logical address space can therefore be much larger than the physical address space.

                                          2 Allows address spaces to be shared by several processes

Virtual memory can be implemented via demand paging or demand segmentation.

The large blank space between the heap and the stack is part of the virtual address space. Virtual memory allows files and memory to be shared by two or more processes through page sharing, as shown in figure (b); figure (a) depicts a virtual memory that is larger than physical memory.

                                          ==============================================================================

Q) Demand paging
Demand paging is the technique of loading pages from disk into main memory only when a page is needed. Hence, using a smaller amount of physical memory, we get a faster response.

                                          Demand paging is similar to PAGING + SWAPPING

                                          Instead of swapping a whole process the pager brings only those pages needed into memory

To distinguish between pages that are in memory and pages that are on disk, a valid/invalid bit is used. This bit is attached to each entry in the page table: when it is set to valid, the page is legal and in memory; when it is set to invalid, the page is either illegal or currently on disk.

When the CPU generates a logical address whose page-table entry is set to invalid, a page fault occurs, meaning the required page is on disk. A section of the hard disk called swap space is used to hold pages that are not present in memory.

                                          Q) Procedure for Handling a Page Fault

1) The CPU generates a logical address; if the valid/invalid bit for the page containing that address is set to invalid, a page fault occurs.

2) The page fault causes a trap to the operating system.

3) The OS checks whether the logical address is within the logical address space of the process (using the PTBR and PTLR):
if the address is not within the process's logical address space, the process is terminated;
else the page is legal but not in memory - it is on disk.

4) To bring the required page into memory, the OS finds a free main-memory frame, maps the address to a disk block, fetches the disk block, and loads it into the free frame.

5) When the required page has been brought into memory, the page table is updated to indicate that the page is in memory.

6) The instruction interrupted by the illegal-address trap is restarted. The process continues as if the page had always been in memory.
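The six steps above can be condensed into a single lookup routine; this is a hedged sketch (the page-table layout, dict representation, and names are assumptions made for illustration):

```python
def access_page(page_table, free_frames, page, faults):
    """Return the frame for 'page', servicing a page fault if needed."""
    entry = page_table.get(page)
    if entry is None:                     # step 3: outside the address space
        raise MemoryError("illegal address: terminate process")
    if entry["valid"]:
        return entry["frame"]             # no fault: page is in memory
    faults.append(page)                   # steps 1-2: page fault, trap to OS
    frame = free_frames.pop(0)            # step 4: find a free frame, load page
    entry["frame"] = frame                # step 5: update the page table
    entry["valid"] = True
    return frame                          # step 6: restart the instruction

page_table = {0: {"valid": True, "frame": 3},
              1: {"valid": False, "frame": None}}
faults = []
```

Accessing page 0 hits immediately; the first access to page 1 faults, is loaded into a free frame, and every later access to it hits without another fault.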

Q) What is pure demand paging?
Ans: If no page belonging to the executing process is in main memory, the process will fault for every page it needs. Page faults keep occurring until every page that is needed is in memory. This scheme is known as pure demand paging: NEVER BRING A PAGE IN UNTIL IT IS REQUIRED.

Q) Page Replacement
1. Find the location of the desired page on the disk.
2. Find a free frame:
   i) If there is a free frame, use it.
   ii) If there is no free frame, use a page-replacement algorithm to select a victim page in memory that is not really in use.
   iii) If the modify bit associated with the victim frame is set to 1, the page is written to disk; otherwise there is no need to write it to disk. Change the valid/invalid bit of the victim page's page-table entry to invalid, to indicate that the page is no longer in memory.
3. Read the desired page into the (newly) free frame and update the frame and page tables.
4. Restart the user process.

=================================================================================================
Q) Page Replacement Algorithms
Goal: produce a low page-fault rate. The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

1. First-In First-Out (FIFO) page replacement: uses the time when a page was brought into memory. The page that was brought in first is replaced, i.e., the oldest page is chosen for replacement. We maintain a FIFO queue of all pages in memory.

When a page is brought into memory, we insert it at the tail of the queue, and we replace the page at the head of the queue.

FIFO: 15 page faults (for the example reference string).
Problem with FIFO: Belady's anomaly. For the reference string 1,2,3,4,1,2,5,1,2,3,4,5:
- with 1 frame allocated to the process, FIFO gives 12 page faults;
- with 2 frames, 12 page faults;
- with 3 frames, 9 page faults;
- with 4 frames, 10 page faults.
As the number of frames allocated to a process increases, the number of page faults should decrease; but with FIFO replacement, page faults can increase. This is Belady's anomaly.

2. Optimal (OPT) page replacement: replace the page that will not be used for the longest period of time. OPT is used as a benchmark for measuring how well other algorithms perform. It is not practical to implement, since it is impossible to know what the future references will be. Example:
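The fault counts above can be reproduced with a short simulation. This is a sketch; the 15-fault figure assumes the commonly used example reference string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 with 3 frames, which matches the counts quoted in these notes:

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement."""
    frames = set()
    queue = deque()            # arrival order of the pages in memory
    faults = 0
    for page in reference_string:
        if page in frames:
            continue           # hit: nothing changes under FIFO
        faults += 1
        if len(frames) == num_frames:
            frames.remove(queue.popleft())  # evict the oldest page
        frames.add(page)
        queue.append(page)
    return faults

belady = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(belady, 3))  # 9
print(fifo_faults(belady, 4))  # 10: more frames, MORE faults (Belady's anomaly)
```

Running the same function with 1, 2, 3, and 4 frames yields 12, 12, 9, and 10 faults, matching the table above.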

Optimal page replacement: 9 page faults.

3. Least Recently Used (LRU) page replacement: replace the page that has not been used for the longest period of time. LRU makes use of the time of last use of each page. Example:

Number of page faults = 12

LRU requires hardware assistance to determine the time of last use of a page. We can use 1) a stack or 2) a counter to implement the LRU page-replacement algorithm.

1) Stack implementation: keep a stack of page numbers as a doubly linked list. When a page is referenced, it is moved to the top of the stack. Moving an entry requires changing up to 6 pointers.

2) Counter implementation: a time-of-use field is added to each page-table entry, and the CPU maintains a counter that is incremented on every page reference. Whenever a page is referenced, the counter value is copied into the time-of-use field of that page; the page with the smallest time-of-use value is the least recently used.
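The counter scheme can be sketched in a few lines (an illustration only; the dictionary plays the role of the per-page time-of-use fields):

```python
def lru_faults(reference_string, num_frames):
    """Count page faults under LRU using the counter scheme described above:
    each resident page carries a time-of-use stamp; the page with the
    smallest stamp (least recently used) is the victim."""
    time_of_use = {}   # page -> counter value at its last reference
    faults = 0
    for counter, page in enumerate(reference_string):
        if page not in time_of_use:
            faults += 1
            if len(time_of_use) == num_frames:
                victim = min(time_of_use, key=time_of_use.get)
                del time_of_use[victim]
        time_of_use[page] = counter  # updated on EVERY reference: the LRU cost
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # 12, matching the example count above
```

The update on every reference (the last line of the loop) is exactly why LRU is slow without hardware support.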

Disadvantages of the LRU page-replacement algorithm: the counter or stack must be updated on every memory reference, HENCE LRU IS SLOW, and the implementation requires hardware assistance.
==============================================================================
Q) LRU approximation page replacement
1) Using a reference bit
2) Additional-reference-bits algorithm
3) Second-chance algorithm
4) Enhanced second-chance algorithm
5) Counting-based page replacement

Counting-based replacement has two schemes:
1) Least Frequently Used (LFU)
2) Most Frequently Used (MFU)

6) Page-buffering algorithms

Q) What is Thrashing?
Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur with either global or local page replacement, as explained below. The solution to thrashing is the working-set strategy: if we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know exactly how many frames a process needs, we use the locality model of process execution.

Here, a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes locality.

The working-set model uses a parameter Δ, called the working-set window. The set of pages in the most recent Δ page references is the working set.
Let WSSi be the working-set size for process Pi, i.e., Pi needs WSSi frames.
Let D be the total demand for frames: D = Σ WSSi.
Let m be the total number of available frames.

If D > m, thrashing occurs because some processes will not have enough frames. So:
if (D > m) the OS selects a process to suspend; else the OS may initiate another process.
The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.
========================================================================
Q) How does thrashing occur in global and local page replacement?
Ans: With global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, which may cause a page fault in another process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more and more processes queue up for the paging device, the READY queue empties and the CPU becomes idle. The OS then introduces new processes because the CPU is free, which further increases the number of page faults.

With local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs only the frames allocated to that process are replaced. When a process starts thrashing, other processes are not directly affected. But since processes spend most of their time queued for the paging device, the average service time for a page fault increases; hence the effective access time increases even for a process that is not thrashing.

Q) Page Table Structure (or) Structure of the page table in memory

Hierarchical Paging, Hashed Page Tables, Inverted Page Tables

Hierarchical Paging: If the page table is large, we break it up into multiple page tables, i.e., the page table itself is paged. A simple technique is the two-level page table.
Two-Level Paging Example: A logical address (on a 32-bit machine with a 4 KB page size) is divided into:
1. a page number consisting of 20 bits
2. a page offset consisting of 12 bits
Since the page table is paged, the page number is further divided into:
1. a 10-bit outer page number (p1)
2. a 10-bit inner page number (p2)
Thus a logical address is as follows:

    page number     | page offset
    p1      | p2    | d
    10 bits   10 bits   12 bits

where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table. (Two-Level Page-Table Scheme)
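The 10/10/12 split is plain bit arithmetic and can be checked directly (a sketch; the constants reflect the example above):

```python
PAGE_OFFSET_BITS = 12   # 4 KB pages
P2_BITS = 10            # inner page-table index
P1_BITS = 10            # outer page-table index

def split_address(addr):
    """Split a 32-bit logical address into (p1, p2, d) for two-level paging."""
    d  = addr & ((1 << PAGE_OFFSET_BITS) - 1)
    p2 = (addr >> PAGE_OFFSET_BITS) & ((1 << P2_BITS) - 1)
    p1 = addr >> (PAGE_OFFSET_BITS + P2_BITS)
    return p1, p2, d

def join_address(p1, p2, d):
    """Reassemble the pieces; splitting must be lossless."""
    return (p1 << (PAGE_OFFSET_BITS + P2_BITS)) | (p2 << PAGE_OFFSET_BITS) | d

p1, p2, d = split_address(0x12345678)
print(hex(p1), hex(p2), hex(d))  # 0x48 0x345 0x678
```

The round trip join_address(*split_address(a)) == a holds for any 32-bit address, which is a quick sanity check on the field widths.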

                                          Address-Translation Scheme Address-translation scheme for a two-level 32-bit paging architecture

Hashed Page Tables (or hash table): Hashed page tables are common for address spaces larger than 32 bits. Each entry in the hash table contains a linked list of elements that hash to the same location, and the hash table is indexed by hash value. Each element consists of 3 fields:

1. Page number
2. Frame number
3. Pointer to the next element in the linked list

The algorithm works as follows: the page number is passed to a hash function to get a hash value. Locating that value in the hash table is easy, as the table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address. If there is no match, subsequent entries in the linked list are searched.
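The lookup just described can be sketched with chained buckets (illustrative only; the table size and the modulo hash are arbitrary choices for the example):

```python
class HashedPageTable:
    """Chained hash table mapping page number -> frame number."""
    def __init__(self, size=16):
        self.size = size
        self.buckets = [[] for _ in range(size)]  # bucket: list of (page, frame)

    def insert(self, page, frame):
        self.buckets[page % self.size].append((page, frame))

    def lookup(self, page):
        # Walk the linked list of elements that hash to the same location.
        for p, frame in self.buckets[page % self.size]:
            if p == page:
                return frame      # match: field 2 gives the frame number
        return None               # no match: the page is not in memory

pt = HashedPageTable()
pt.insert(5, 42)
pt.insert(21, 7)       # 21 % 16 == 5: collides with page 5, chained in one bucket
print(pt.lookup(21))   # 7
```

Pages 5 and 21 land in the same bucket, so the lookup for 21 has to step past the entry for 5, exactly the chained search described above.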

                                          Inverted Page Table

An inverted page table has one entry for each frame of physical memory. Each entry contains:

Process-id | Page number

A logical address also has 3 parts: process-id, page number, and offset. The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at entry i, then frame number i combined with the offset forms the physical address. If there is no match, the address is illegal.
Advantages and disadvantages:

It decreases the memory needed to store page tables, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to
limit the search to one (or at most a few) page-table entries. (Inverted Page Table Architecture)

                                          Unit-IV CPU Scheduling

Basic concepts: To maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns it to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle: Process execution begins with a CPU burst, followed by an I/O burst, followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

An I/O-bound program has many short CPU bursts. A CPU-bound program has a few long CPU bursts.

Schedulers: a module in the OS for scheduling decisions.
- Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
- Medium-term scheduler: selects which processes should be swapped into/out of memory.
- Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (CPU) scheduler.

Dispatcher: gives control of the CPU to the process selected by the short-term scheduler. This function involves:

1. switching context
2. switching to user mode
3. jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency; it must be kept small.
Ready queue: the ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of processes.

                                          Pre-emptive Scheduling

                                          CPU scheduling decisions take place under one of four conditions

1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state (for example, in response to an interrupt)
3. When a process switches from the waiting state to the ready state (say, at completion of I/O)
4. When a process terminates

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one.

If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive or cooperative. In a non-preemptive system, a process runs until it gives up the CPU itself, i.e., when it waits for I/O or when it completes. Otherwise the system is said to be preemptive.

Windows used non-preemptive scheduling up to Windows 3.x and switched to preemptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to Mac OS X and preemptive scheduling since then. Note that preemptive scheduling is only possible on hardware that supports a timer interrupt.

Note that preemptive scheduling can cause problems when two processes share data, because one process may be interrupted in the middle of updating a shared data structure.

Preemption can also be a problem if the kernel is busy executing a system call (e.g., updating critical kernel data structures) when the preemption occurs. Solution: wait until the system call has either completed or blocked before allowing the preemption. But this solution is problematic for real-time systems.

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. This should be done only for very short pieces of code that take little CPU time.

Criteria used to compare CPU scheduling algorithms:
1. CPU utilization: in a real system, CPU utilization ranges from about 40% to 90%.
2. Throughput: number of processes completed per unit time.
3. Turnaround time: time when the process completed minus time when the process was submitted; it includes waiting time and burst time.
4. Waiting time: amount of time a process spends waiting in the ready queue for the CPU.
5. Response time: time when the first response was produced minus time when the process was submitted.

Optimization criteria:
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time

CPU Scheduling Algorithms
1) First-Come, First-Served (FCFS) Scheduling: FCFS is non-preemptive; processes are scheduled in the order they arrive. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages:
1. The average waiting time can be quite long.
2. Convoy effect (small processes wait in the queue for a big process to leave the CPU).
Example:
Process   Burst Time
P1        24
P2        3
P3        3

Suppose that the processes arrive in the order P1, P2, P3.

The Gantt chart for the schedule is:
| P1 | P2 | P3 |
0    24   27   30
Waiting times: P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.

2) Shortest-Job-First (SJF) Scheduling: Schedule the process with the shortest CPU burst time. If the burst times of two processes are the same, FCFS is used.
Advantages: the average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either preemptive or non-preemptive:
1. Non-preemptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Preemptive SJF, or Shortest-Remaining-Time-First (SRTF): if a newly arrived process is shorter than the remaining time of the currently executing process, the currently executing process is preempted.
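The FCFS arithmetic above is easy to check with a small helper (a sketch; it assumes all processes arrive at time 0 in submission order):

```python
def fcfs_waiting_times(burst_times):
    """Waiting time of each process under FCFS when all arrive at time 0."""
    waits, clock = [], 0
    for burst in burst_times:
        waits.append(clock)   # a process waits until all earlier ones finish
        clock += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])   # P1, P2, P3 from the example
print(waits, sum(waits) / len(waits))    # [0, 24, 27] 17.0
```

Reordering the list to [3, 3, 24] drops the average to (0 + 3 + 6)/3 = 3, which is the convoy effect in miniature.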

Example:
Process   Arrival Time   Burst Time
P1        0              7
P2        2              4
P3        4              1
P4        5              4

SJF (non-preemptive) Gantt chart:
| P1 | P3 | P2 | P4 |
0    7    8    12   16
Average waiting time = [0 + (8-2) + (7-4) + (12-5)] / 4 = 4
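Both SJF variants can be simulated on the four processes above (a sketch; ties are broken by list order, and the preemptive version re-decides after every time unit for simplicity):

```python
def sjf_nonpreemptive(procs):
    """procs: list of (arrival, burst). Returns the average waiting time."""
    pending = list(range(len(procs)))
    clock, total_wait = 0, 0
    while pending:
        ready = [i for i in pending if procs[i][0] <= clock]
        if not ready:
            clock = min(procs[i][0] for i in pending)   # idle until next arrival
            continue
        i = min(ready, key=lambda j: procs[j][1])       # shortest burst first
        total_wait += clock - procs[i][0]
        clock += procs[i][1]
        pending.remove(i)
    return total_wait / len(procs)

def srtf(procs):
    """Preemptive SJF: pick the shortest remaining time at every time unit."""
    remaining = [b for _, b in procs]
    finish = [0] * len(procs)
    clock = 0
    while any(r > 0 for r in remaining):
        ready = [i for i in range(len(procs))
                 if procs[i][0] <= clock and remaining[i] > 0]
        if not ready:
            clock += 1
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1
        clock += 1
        if remaining[i] == 0:
            finish[i] = clock
    waits = [finish[i] - procs[i][0] - procs[i][1] for i in range(len(procs))]
    return sum(waits) / len(procs)

procs = [(0, 7), (2, 4), (4, 1), (5, 4)]   # P1..P4 from the table above
print(sjf_nonpreemptive(procs))  # 4.0
print(srtf(procs))               # 3.0
```

The two results, 4 and 3, match the non-preemptive average above and the preemptive example that follows.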

Example of Preemptive SJF:
Process   Arrival Time   Burst Time
P1        0              7
P2        2              4
P3        4              1
P4        5              4

SJF (preemptive) Gantt chart:
| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16
Average waiting time = (9 + 1 + 0 + 2)/4 = 3

3) Priority Scheduling: A priority number (integer) is associated with each process, and the CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the longer the CPU burst, the lower the priority. There are two schemes: 1. preemptive 2. non-preemptive.

Problem: Starvation - low-priority processes may never execute.
Solution: Aging - as time progresses, increase the priority of the process.

4) Round Robin (RR): Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance:
1. q large: RR behaves like FIFO.
2. q small: q must still be large with respect to the context-switch time, otherwise the overhead is too high.
Example of RR with time quantum = 4:
Process   Burst Time
P1        24
P2        3
P3        3

The Gantt chart is:
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |

0    4    7    10   14   18   22   26   30
Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 ≈ 5.66

5) Multilevel Queue Scheduling: The ready queue is partitioned into separate queues. For example, the ready queue is divided into 2 queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, Round Robin can be used in the foreground queue
and FCFS in the background queue. Scheduling must also be done between the queues. This can be done in 2 ways:
1. Fixed-priority scheduling:
the foreground queue has the highest priority. All processes in the foreground queue must be completed and the queue must be empty before any process in the background queue is given the CPU. Disadvantage: starvation.

2. Time slice: each queue gets a certain amount of CPU time which it can schedule among its processes, e.g., 80% to the foreground queue (RR) and 20% to the background queue (FCFS).
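Returning to the Round Robin example above (quantum = 4), the waiting times can be checked with a short simulation (a sketch; it assumes all processes arrive at time 0 and ignores context-switch overhead):

```python
from collections import deque

def round_robin_waits(bursts, quantum):
    """Waiting time per process under RR; all processes arrive at time 0."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queue = deque(range(len(bursts)))   # FIFO ready queue of process indices
    clock = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)             # preempted: back to the tail of the queue
        else:
            finish[i] = clock
    return [finish[i] - bursts[i] for i in range(len(bursts))]

waits = round_robin_waits([24, 3, 3], quantum=4)   # P1, P2, P3 with q = 4
print(waits, sum(waits) / len(waits))              # [6, 4, 7], average ≈ 5.67
```

The resulting waits of 6, 4, and 7 reproduce the (30-24), 4, and 7 terms in the average computed above.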

6) Multilevel Feedback Queue Scheduling: A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes go in the highest-priority queue. If a process uses too much CPU time, it is moved to a lower-priority queue. If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues: Q0, Q1, Q2.

1. Q0: time quantum 8 milliseconds
2. Q1: time quantum 16 milliseconds
3. Q2: FCFS

Scheduling:

1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 ms; if it does not finish in 8 ms, it is moved to queue Q1.
2. In Q1 the process receives an additional 16 ms; if it still does not complete, it is moved to queue Q2.

Q) Synchronization
Several processes run in an operating system, and some of them share resources, due to which problems like data inconsistency may arise.

An example of a process-synchronization problem is the Producer-Consumer (or Bounded-Buffer) problem: the producer process must not add data when the buffer is full, and the consumer process must not take data when the buffer is empty.

The code for the producer process can be modified as follows:

while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ;  /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed. in is an index that always points to the next empty slot after the last filled slot; out is an index that always points to the first filled slot, at the head of the circular queue.

The code for the consumer process is as shown below:

while (true) {
    while (counter == 0)
        ;  /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The above producer and consumer code produces a RACE CONDITION: the outcome of the execution of several co-operating processes depends on the order in which accesses to shared data take place.
For example, suppose counter = 5. If the producer produces one item and the consumer consumes one item, the correct value of counter is still 5. But counter++ and counter-- are each executed as separate load, modify, and store steps; if those steps interleave, counter can end up as 4 (the increment is lost) or as 6 (the decrement is lost), i.e., an inconsistent result.

Solution to avoid the race condition: make sure that only one process at a time updates the shared variable (here, counter).
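The lost update can be replayed deterministically by spelling out the load/modify/store steps of counter++ and counter-- (an illustration; the schedule lists are invented to show one good and one bad interleaving):

```python
def run(schedule, counter=5):
    """Replay a fixed interleaving of producer counter++ and consumer counter--.
    Each process keeps its own register between its load and its store."""
    registers = {}
    for proc, step in schedule:
        if step == "load":
            registers[proc] = counter
        elif step == "modify":
            registers[proc] += 1 if proc == "producer" else -1
        elif step == "store":
            counter = registers[proc]
    return counter

serial = [("producer", s) for s in ("load", "modify", "store")] + \
         [("consumer", s) for s in ("load", "modify", "store")]
interleaved = [("producer", "load"), ("consumer", "load"),
               ("producer", "modify"), ("consumer", "modify"),
               ("producer", "store"), ("consumer", "store")]

print(run(serial))       # 5: one item produced, one consumed, counter unchanged
print(run(interleaved))  # 4: both loaded 5, so the producer's increment is lost
```

Swapping the two final stores yields 6 instead of 4: the result depends purely on the interleaving, which is exactly what "race condition" means.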

Q) Critical-section problem
Ans: Each process has a segment of code called its critical section: code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical-section code at the same time. For this to happen:
1. Each process must request permission to enter its critical section (the entry section).
2. The end of the critical section is the exit section.
3. The rest of the code after the critical section is the remainder section.

A solution to the critical-section problem must satisfy three requirements:
1. Mutual exclusion: if one process is executing in its critical section, no other process can be in its critical section.
2. Progress: only processes that are not in their remainder sections may take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: no process may wait indefinitely; every process requesting entry must eventually be allowed into its critical section.

Q) Peterson's solution for the critical-section problem (software-based solution)
Ans:
Time  Process 1                                  Process 2
T0    do {                                       do {
          flag[1] = TRUE; turn = 2;                  flag[2] = TRUE;
T1                                                   turn = 1;
T2        while (flag[2] && turn == 2);              while (flag[1] && turn == 1);
          (turn was last set to 1 by                 (loops continuously until
          Process 2, so the condition is             flag[1] becomes false)
          false and the loop does not run)
T3        enter critical section
T4        flag[1] = FALSE;                           enter critical section
T5        enter remainder section                    flag[2] = FALSE;
      } while (TRUE);                                enter remainder section
                                                 } while (TRUE);

10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

                                          File Attributes

1. Name: name of the file, for user understanding.
2. Identifier: used by the OS to identify a file (e.g., inode number).
3. Type: text files, executable files, binary files, etc.
4. Location: location of the file on the hard drive.
5. Size: current size of the file.
6. Protection: controls who can read, write, or execute the file.
7. Time, date, and user identification: record when the file was created, last modified,
and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier locates the other file attributes. The directory structure is kept on disk.

                                          File Operations

The file abstract data type supports many common operations:
1. Creating a file: find free space on disk and add an entry to the directory.
2. Writing a file: write data at the current file-position pointer and update the pointer.
3. Reading a file: read data at the current file-position pointer and update the pointer.
4. Repositioning within a file (file seek): change the pointer to a given value.
5. Deleting a file: free the space allocated to the file and remove its entry from the directory.
6. Truncating a file: delete the data and update the file size.

                                          Open File Table

Most operating systems require that files be opened before access and closed after access. Information about all files that are currently open in the system is stored in an open-file table. This system-wide open-file table is maintained by the OS; in addition, each process keeps its own open-file table for the files it has opened. The open-file table holds the following information:

1. File pointer: records the current position in the file for the next read or write.
2. File-open count: how many times the file has been opened (simultaneously, by different processes) and not yet closed. When this counter reaches zero, the entry can be removed from the table.
3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With an advisory lock, software developers must make sure the locks are acquired and released appropriately.
4. With a mandatory lock, the OS makes sure the locks are acquired and released appropriately.
5. UNIX uses advisory locks and Windows uses mandatory locks.

                                          File Types

Implementing file types: A common technique is to include the type as part of the filename. The name is split into 2 parts:

1. File name
2. Extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on it. For example, when a user opens Microsoft Word, clicks the File menu, and chooses Open, the user specifies only the file name; the application looks for a file with the given name and the extension it expects. In this sense, extensions are merely hints to applications and are not enforced by the operating system. UNIX uses a crude magic number stored at the beginning of some files to indicate the type of file. In the Mac OS X operating system, a file produced by a word processor carries the word processor's name as its creator attribute; when the user double-clicks the file, the word processor opens it. Some common file types are shown below.

File structure: Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and what the location of the first instruction is.

If an OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, find the first executable statement in, etc.).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                                          Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes.

Internally, files are organized in logical units such as a single byte, a data-record size, or a structure size.

The number of logical units that fit into one physical block determines the packing. For example, if a student file is organized internally as records of 64 bytes each, and main memory is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e., 8 records can be stored per block.

Internal fragmentation may occur because of this: when records do not divide the block size evenly, the leftover space in each block is wasted.
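The packing arithmetic can be written out directly (a small illustrative helper; the 100-byte record in the second call is an invented example, not from the text):

```python
def packing(block_size, record_size):
    """How many whole logical records fit in one physical block, and how many
    bytes per block are lost to internal fragmentation as a result."""
    records_per_block = block_size // record_size
    wasted = block_size - records_per_block * record_size
    return records_per_block, wasted

print(packing(512, 64))   # (8, 0): 8 records per block, no waste
print(packing(512, 100))  # (5, 12): 12 bytes wasted per block
```

The 64-byte record divides 512 evenly, so there is no internal fragmentation; a 100-byte record leaves 12 unusable bytes in every block.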

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: records are read and written sequentially, one record after another; we cannot randomly access an arbitrary record.

A sequential-access file emulates magnetic-tape operation and generally supports the following operations:
- read next: read a record and advance to the next position
- write next: write a record and advance to the next position
- rewind
- skip n records: may or may not be supported; n may be limited to positive numbers or limited to +/- 1

2. Direct access (or relative access): a file is made up of fixed-length records, so we can randomly access any record.

                                          No restriction on order of reading or writing

                                          Direct access files are useful when we need to access large amounts of data randomly

                                          Ex database

Operations supported include (n is a relative block number, i.e. relative to the beginning of the file):
1. read n - read record number n
2. write n - write record number n
3. jump to record n - n could be 0 or the end of file
4. query current record - used to return to this record later
Sequential access can easily be emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct access method. We construct an index for the file which contains pointers to the various blocks. To search the file, we first access the index and then use the pointer to access the file.

Suppose a student file has 120000 records sorted by student id. We can create an index consisting of the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120000 records = 120000/64 = 1875 blocks, so the index has 1875 entries. The index is kept in main memory so that searching becomes faster.
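The index-sizing arithmetic above can be checked with a short sketch (one index entry per data block):

```python
import math

def index_entries(num_records, records_per_block):
    """One index entry (the first key in each block) per data block."""
    return math.ceil(num_records / records_per_block)

# The example from the text: 120000 records, 64 records per block.
print(index_entries(120000, 64))  # 1875 data blocks, hence 1875 index entries
```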

                                          If the index itself becomes large we create an index for index file (Ex B-tree)

                                          Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                                          Three major methods of allocating disk space are

1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on the disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file indicates the address of the starting block (i.e. b) and the length (n).

                                          Advantages

1. For direct access to block i, we can find it directly as block b+i. Since the file is stored sequentially, sequential access is also easy. Hence access is easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.
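The logical-to-physical mapping for contiguous allocation (block i of the file lives at disk block b+i) can be sketched as:

```python
def physical_block(start, length, i):
    """Map logical block i of a contiguously allocated file
    (starting at disk block `start`, `length` blocks long)
    to its physical disk block number."""
    if not 0 <= i < length:
        raise IndexError("logical block index outside the file")
    return start + i

# A hypothetical file starting at block b=19 that occupies n=6 blocks:
print(physical_block(19, 6, 0))  # 19: the first block is b itself
print(physical_block(19, 6, 5))  # 24: the last block is b+n-1
```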

                                          Disadvantages

1. It is difficult to find space for a new file (or to add more data to an existing file). First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) in the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to it. Solution: copy all the files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire space may not be used by the file at the beginning, which leads to wastage of space.

If the size of the file is not known in advance, then (i) if we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space and release the previous space. This works as long as such a hole exists, but it is time consuming. (ii) If we overestimate the amount of space needed, much of the space goes unused, leading to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme. A chunk of contiguous blocks is allocated at first; if the file needs more blocks, another chunk of contiguous space, known as an extent, is added. The directory entry records the location and block count, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2 Linked allocation: each file is a linked list of disk blocks, and the blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512-4 = 508 bytes. To create a new file, we simply create a new entry in the directory; the pointer is nil for an empty file and its size field is 0.

To add more data to an existing file, a free block is found, the data is written to it, and the block is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages
1. Easy to find space for a new file (or to add more data to an existing file).
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages
1. Seek time is more.
2. The number of disk seeks required is more.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct access files: to access the ith record, i disk reads are required.
5. Not reliable if pointers are lost or damaged.

The solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk access time, and reduces the space wasted on pointers, since fewer pointers are needed. Free-list management also becomes simpler. Disadvantage: internal fragmentation, as more space is wasted when a cluster is only partially full.

The solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file allocation table (FAT). A section of disk at the beginning of each volume contains the FAT, which has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file. The last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by the table value 0.
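Following a FAT chain can be sketched as below, reusing the five-block file from the linked-allocation example (start at block 9, then 16, 1, 10, 25); the EOF marker value is an assumption for illustration:

```python
EOF = -1  # assumed end-of-file marker (0 marks an unused block)

# FAT entries for the example file: entry[b] holds the next block after b.
fat = {9: 16, 16: 1, 1: 10, 10: 25, 25: EOF}

def fat_chain(start):
    """Follow FAT entries from the file's first block to end of file."""
    chain, block = [], start
    while block != EOF:
        chain.append(block)
        block = fat[block]
    return chain

print(fat_chain(9))  # [9, 16, 1, 10, 25]
```

Because the whole chain lives in the table rather than in the data blocks, random access only needs FAT lookups, not a disk read per block.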

                                          Advantages

1. Easy to find space for a new file (or to add more data to an existing file).
2. Random access time is improved.

Disadvantage: the number of disk seeks is more. The disk head must move to the FAT to find the location of a block, then move to the location of the block itself.

                                          3 Indexed allocation

Each file has its own index block, which is an array of disk block addresses. The ith entry in the index block points to the ith block of the file.

                                          The directory contains the address of the index block

                                          When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found and the data is written to it; the address of the block is then put in the index block.

                                          Advantages

1. Easy to find space for a new file (or to add more data to an existing file).
2. Supports direct (random) access efficiently.
3. No external fragmentation.

                                          Disadvantage

1. Seek time is more.
2. The number of disk seeks required is more.
3. Not reliable.
4. The index block must be kept in memory. If this memory is not available, we have to read the index block and then the desired data block, i.e. 2 disk accesses are required. To access a block near the end of the file, we may need to read several index blocks before reaching the needed data block.

5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

Various mechanisms are used for the size of the index block:

1. Linked scheme: the size of an index block is one disk block. An index block contains a small header with the name of the file and a set of the first 100 disk block addresses; for large files, the last address points to another index block.

2. Multilevel index: to access a block, the OS uses a first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

                                          3 combined scheme

For example, say there are 15 pointers in the file's inode. The first 12 of these point to blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing the addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block.
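Assuming 4 KB blocks and 4-byte disk addresses (illustrative values, not stated in the text), the maximum file size under this combined scheme can be computed as:

```python
block_size = 4096                        # assumed block size in bytes
ptr_size = 4                             # assumed disk-address size in bytes
ptrs_per_block = block_size // ptr_size  # 1024 addresses per index block

direct = 12                    # 12 direct pointers in the inode
single = ptrs_per_block        # one single indirect block
double = ptrs_per_block ** 2   # one double indirect block
triple = ptrs_per_block ** 3   # one triple indirect block

max_blocks = direct + single + double + triple
max_bytes = max_blocks * block_size
print(max_blocks, max_bytes)   # roughly 4 TB with these parameters
```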

                                          Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                          The free space list can be implemented as

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. The free-space bit map would be 001111001000...
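The bit map from the example can be reproduced with a small sketch (each bit shown as one character for readability):

```python
def build_bitmap(num_blocks, free_blocks):
    """Free-space bit map: 1 = free, 0 = allocated, one bit per block."""
    free = set(free_blocks)
    return "".join("1" if b in free else "0" for b in range(num_blocks))

# The example from the text: blocks 2, 3, 4, 5 and 8 free on a 12-block disk.
print(build_bitmap(12, {2, 3, 4, 5, 8}))  # 001111001000
```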

Advantages: simple, and makes it easy to find contiguous runs of free blocks for a file.

                                          Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a smaller disk can be kept in main memory, but it may not be possible to keep the bit map of a larger disk there.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.

2. Linked list: the free-space list is implemented by linking all the free disk blocks together, with a pointer to the first free block stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on. Advantage: no waste of space. Disadvantage: contiguous space cannot be found easily; to traverse the free-space list we must read each block, which requires substantial I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free; the last one contains the addresses of another n free blocks, and so on. Advantage: a large number of free blocks can be found quickly.
4. Counting: the free-space list can be maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it.

Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs, and each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Q) Synchronization Hardware

The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. the TestAndSet() hardware instruction
2. the Swap() instruction

1. The TestAndSet() instruction atomically returns the old value of its Boolean argument and sets the argument to true.

Solution using TestAndSet(): the shared variable lock is initialized to false. Each process spins on TestAndSet(lock) until it returns false, enters its critical section, and sets lock back to false on exit.
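The hardware instruction itself cannot be written in a high-level language, but its behaviour can be emulated. In the sketch below a Python lock stands in for the hardware's atomicity guarantee; on real hardware TestAndSet is a single uninterruptible machine instruction:

```python
import threading

_atomic = threading.Lock()  # emulates the hardware atomicity guarantee
lock = False                # the shared variable from the text, initially false

def test_and_set():
    """Return the old value of lock and set it to True, as one atomic step."""
    global lock
    with _atomic:
        old = lock
        lock = True
        return old

def release():
    """Leave the critical section."""
    global lock
    lock = False

# Entry protocol (busy waiting): spin until TestAndSet returns false.
#   while test_and_set(): pass
#   ... critical section ...
#   release()
print(test_and_set())  # False: the first caller acquires the lock
print(test_and_set())  # True: a second caller would keep spinning
```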

2. The Swap() instruction atomically exchanges the contents of its two arguments.

Solution using Swap(): a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key. Each process sets key to true and repeatedly swaps key with lock until key becomes false, then enters its critical section; on exit it sets lock back to false.

Both the TestAndSet() and Swap() instructions satisfy the mutual exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever. Since there is no guarantee about the relative speeds of the processes, a very fast process could release the lock, whip through its remainder section and re-acquire the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.

The figure below illustrates a solution using TestAndSet() that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

                                          Bounded-waiting mutual exclusion with TestAndSet( )

When a process leaves its critical section, it scans the waiting array in order (starting with the next process after itself) for a process that has been waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby allowing that specific process into the critical section while continuing to block all the others.

Q) What is a semaphore?
Ans: Semaphores are used to solve synchronization problems. A semaphore S is an integer variable that, apart from initialization, is accessed only through 2 standard atomic operations: 1. wait() 2. signal()

Wait primitive:
wait(S) {
    while (S <= 0)
        ; // do nothing
    S--;
}

Signal primitive:
signal(S) {
    S++;
}

                                          Each process that wishes to use a resource performs wait() operation When a process releases a resource it performs signal() operation

Semaphore usage:
1. A binary semaphore (or mutex lock) can take the values 0 or 1.

2. A counting semaphore can take any integer value and is used to count the remaining instances of a resource. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources; when the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the semaphore with a signal() call.
For example, suppose there are 3 resources and 4 processes, so the counting semaphore = 3.
If process P1 requires a resource, it performs wait(); the semaphore becomes 2.
If process P2 requires a resource, it performs wait(); the semaphore becomes 1.
If process P3 requires a resource, it performs wait(); the semaphore becomes 0.
If process P4 requires a resource, it performs wait(); P4 waits until a resource is available.
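The counting-semaphore example above can be sketched with Python's threading.Semaphore (the resource name is hypothetical; non-blocking acquire is used so the fourth request fails visibly instead of blocking):

```python
import threading

printer = threading.Semaphore(3)  # 3 identical resources, counter = 3

# Three processes acquire a resource each; the count drops to 0.
print(printer.acquire(blocking=False))  # True  (count 3 -> 2)
print(printer.acquire(blocking=False))  # True  (count 2 -> 1)
print(printer.acquire(blocking=False))  # True  (count 1 -> 0)
# The fourth request finds no resource; with blocking it would wait here.
print(printer.acquire(blocking=False))  # False (count stays 0)
printer.release()                       # signal(): a resource is returned
print(printer.acquire(blocking=False))  # True  (it can be acquired again)
```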

3. Semaphores can also be used to synchronize certain operations. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2. We first create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code

S1;
signal(synch);

and in process P2 we insert the code

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal().

Semaphore Implementation
While one process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). A semaphore implemented this way is called a spin lock, as the waiting process spins. Each semaphore has an integer value and maintains a queue of waiting processes.
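The S1-before-S2 ordering example above can be run directly with a semaphore initialized to zero:

```python
import threading

synch = threading.Semaphore(0)  # initialized to zero, as in the text
events = []

def p1():
    events.append("S1")   # statement S1
    synch.release()       # signal(synch)

def p2():
    synch.acquire()       # wait(synch): blocks until P1 has signalled
    events.append("S2")   # statement S2

# Start P2 first to show that it really waits for P1's signal.
t2 = threading.Thread(target=p2); t2.start()
t1 = threading.Thread(target=p1); t1.start()
t1.join(); t2.join()
print(events)  # ['S1', 'S2'] regardless of thread scheduling
```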

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; it is later restarted by a wakeup() operation (i.e. the process is moved from the semaphore's waiting queue to the ready state).

Problems due to semaphores:
1. Deadlock: a deadlock occurs when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes.

2. Starvation: one or more processes get blocked forever and never get a chance to enter the critical section. For example, the algorithms for adding processes to the waiting queue, or for selecting one to be removed by the signal() call, may not be specified. If a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M, H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M enters, it pre-empts process L, and M runs now. Here M runs before H even though M has lower priority than H. To avoid this, L temporarily inherits the high priority of H (priority inheritance), so that M cannot pre-empt L.
=====================================================================
Q) Classical problems on synchronization

1. The Bounded Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining Philosophers Problem

                                          These problems are used to test newly proposed synchronization scheme

1. The Bounded Buffer Problem. Consider:

n buffers, each of which holds one item; a producer process which creates items (one at a time); a consumer process which consumes them (one at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is needed. For that we use 3 semaphores:

1. empty = n (counts the number of empty buffers)
2. full = 0 (counts the number of filled buffers)
3. mutex = 1 (provides mutual exclusion for access to the buffer)

Producer process:
do {
    ...
    produce an item
    ...
    wait(empty);
    wait(mutex);
    ...
    add the item to the buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:
do {
    wait(full);
    wait(mutex);
    ...
    remove an item from the buffer
    ...
    signal(mutex);
    signal(empty);
    ...
    consume the item
    ...
} while (1);
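The producer-consumer scheme above can be run with Python semaphores (a minimal sketch: one producer, one consumer, a 3-slot buffer; the buffer size and item count are illustrative):

```python
import threading
from collections import deque

N = 3                           # number of buffer slots
buffer = deque()
empty = threading.Semaphore(N)  # counts empty slots, initialized to n
full = threading.Semaphore(0)   # counts filled slots, initialized to 0
mutex = threading.Semaphore(1)  # mutual exclusion on the buffer
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                    # wait(empty)
        mutex.acquire()                    # wait(mutex)
        buffer.append(item)                # add the item to the buffer
        mutex.release()                    # signal(mutex)
        full.release()                     # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()                     # wait(full)
        mutex.acquire()                    # wait(mutex)
        consumed.append(buffer.popleft())  # remove an item from the buffer
        mutex.release()                    # signal(mutex)
        empty.release()                    # signal(empty)

items = list(range(10))
p = threading.Thread(target=producer, args=(items,))
c = threading.Thread(target=consumer, args=(len(items),))
p.start(); c.start(); p.join(); c.join()
print(consumed == items)  # True: every item is consumed exactly once, in order
```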

2. The Readers-Writers Problem. A data item, such as a file, is shared among several processes, each classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (it cannot share the data with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: no reader waits merely because a writer is waiting; writers priority: if a writer is waiting to access the data, no new readers can start reading.

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses:
1. readcount = 0 (an integer counting how many processes are currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
...
reading is performed
...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

Writer process:
do {
    wait(wrt);
    ...
    writing is performed
    ...
    signal(wrt);
} while (TRUE);

3. The Dining Philosophers Problem. n philosophers sit around a table, alternately thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5]; // initially all values are 1

Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    ...
    eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    ...
    think
    ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (for example, if all five philosophers pick up their left chopstick at the same time).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: in asymmetric multiprocessing (ASMP), not all CPUs are equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, and so on. These systems do not have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM implements a computer application that must complete its execution within a time constraint.

                                          Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement is met under all conditions. Ex: digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e. decision-making is spread across the different computers in the system, so that failure of individual computers or of the network does not cripple operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. Users may not know where their programs are being run or where their files are located; that is all handled automatically by the operating system.

                                          Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

DESKTOP SYSTEMS: a desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8 and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): two of the most popular operating systems for handhelds are Symbian OS and Android. Handheld systems have very little memory.


                                          CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system The components of a cluster are usually connected to each other through fast local area networks each node (computer used as a server) running its own instance of an operating system

                                          Q) Modern operating system

                                          New developments in hardware applications and security threats lead to the development of modern operating system

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client-server computing.

                                          Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as systems now often contain multiple processors, an SMP operating system provides greater efficiency. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                          SMP schedules processes or threads across all of the processors

                                          Advantages

1. Increased performance: if some portion of the program can run in parallel, the performance of an SMP OS increases.

2. Increased availability: as all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can scale with the number of processors used.

                                          4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5. Object-oriented design: adds modularity to the kernel. The OS can be customized without affecting system integrity.

Q) Linux Operating System. Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency and standardization. Components of the Linux system:


                                          Linux Operating System has primarily three components as shown in the below diagram

                                          Kernel - Kernel is the core part of Linux It is responsible for all major activities of this operating system It interacts directly with hardware Kernel hides low level hardware details to system or application programs

System Library - System libraries are special functions or programs through which application programs use the kernel's features.

                                          System Utility - System Utility programs are responsible to do specialized individual level tasks like updating log file accepting login requests from terminals etc

                                          Kernel Modules Sections of kernel code that can be compiled loaded and unloaded independent of the rest of the kernel A kernel module may typically implement a device driver a file system or a networking protocol

Three components of Linux module support: 1. module management: supports loading modules into memory and lets them talk to the rest of the kernel; 2. driver registration: allows modules to inform the rest of the kernel that a new driver has become available; 3. conflict resolution: protects one driver's reserved resources from accidental use by another driver.

Kernel Mode vs User Mode: Kernel component code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space and does not require any context switch; hence it is very efficient and fast. The kernel runs each process, provides system services to processes and gives them protected access to hardware. Code which is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code; they use the system libraries to invoke kernel functions for low-level tasks.

Basic Features
Following are some of the important features of the Linux operating system:

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on many kinds of hardware platforms.

Open Source - Linux source code is freely available and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system: multiple users can access system resources like memory and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system: multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system. It can be used for various types of operations, to call application programs, etc.


Security - Linux provides user security using authentication features like password protection, controlled access to specific files and encryption of data.

Architecture: The Linux system architecture consists of the following layers:

Hardware layer - Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - The core component of the operating system; interacts directly with hardware and provides low-level services to upper-layer components.

Shell - An interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System

1. Extensibility - layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. This modular structure allows additional environmental subsystems to be added without affecting the executive.

2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++; processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.

4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.

5. Performance - XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

6. International support - supports different locales via the national language support (NLS) API.

XP Architecture
1. Layered system of modules.
2. Protected mode - hardware abstraction layer (HAL), kernel, executive.
3. User mode - collection of subsystems:
   a. Environmental subsystems emulate different operating systems.
   b. Protection subsystems provide security functions.



Q) Windows Network Operating System
Example: the Windows 2000 operating system.

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.

Visit my blogs:
enthusiaststudent.blogspot.in
mtechmessenger.blogspot.in



• 10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines or records. A file has a structure which depends on its type.

Protection: protection bits giving read/write/execute privileges are associated with segments. Placing an array in a separate segment lets the hardware check for illegal array indexes.

===============================================================================
Q) Virtual Memory
Ans: In paging and segmentation, a program will execute only if the entire process is in main memory. With virtual memory, only part of the program needs to be in memory for execution.

1. A program larger than the available main memory can still run, since only part of the program needs to be loaded into main memory. The logical address space can therefore be much larger than the physical address space.

2. Virtual memory allows address spaces to be shared by several processes.

Virtual memory can be implemented via demand paging or demand segmentation.

The large blank space between the heap and the stack is part of the virtual address space. Virtual memory allows files and memory to be shared by two or more processes through page sharing, as shown in figure (b). Figure (a) depicts a virtual memory that is larger than physical memory.

                                            ==============================================================================

Q) Demand Paging
Demand paging is the technique of loading pages from disk into main memory only when a page is needed. Hence, using a smaller amount of physical memory, we get a faster response.

                                            Demand paging is similar to PAGING + SWAPPING

Instead of swapping in a whole process, the pager brings only the needed pages into memory.

To distinguish between pages that are in memory and pages that are on disk, a valid/invalid bit is used.

A valid/invalid bit is attached to each entry in the page table. When this bit is set to valid, the page is legal and in memory. When this bit is set to invalid, either the page is illegal or the page is on disk.

When the CPU generates a logical address whose page table entry is set to invalid, a page fault occurs, meaning the required page is on disk. A section of the hard disk called swap space is used to hold pages that are not present in memory.

                                            Q) Procedure for Handling a Page Fault

1) The CPU generates a logical address, and the valid/invalid bit for the page containing this address is set to invalid, so a page fault occurs.

2) The page fault causes a trap to the operating system.

3) Check whether the logical address is within the logical address space of the process (using PTBR and PTLR):
if (the logical address is not within the logical address space of the process)
    terminate the process;
else
    the page is legal but not in memory; it is on disk.

4) To bring the required page into memory: find a free main memory frame, map the address to a disk block, fetch the disk block and load it into the free frame.

5) When the required page has been brought into memory, update the page table to indicate that the page is in memory.

6) Restart the instruction interrupted by the illegal-address trap. The process continues as if the page had always been in memory.
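The steps above can be sketched as a tiny simulation. This is only an illustration: all the names here (handle_reference, free_frames, etc.) are made up, and a real handler runs inside the kernel and talks to the MMU and the disk driver.

```python
# Toy demand-paging simulation of the page-fault steps above.
NUM_PAGES = 8          # logical address space of the process, in pages
disk = {p: f"data-{p}" for p in range(NUM_PAGES)}  # swap space holds every page

page_table = [{"valid": False, "frame": None} for _ in range(NUM_PAGES)]
frames = {}            # frame number -> page contents
free_frames = [0, 1, 2]

def handle_reference(page):
    """Return the frame holding `page`, faulting it in if needed."""
    if page >= NUM_PAGES:                 # step 3: outside logical address space
        raise MemoryError("illegal address - terminate process")
    entry = page_table[page]
    if entry["valid"]:                    # page already in memory: no fault
        return entry["frame"]
    # page fault: page is legal but on disk (steps 4 and 5)
    frame = free_frames.pop()             # find a free frame (no replacement yet)
    frames[frame] = disk[page]            # fetch the disk block into the frame
    entry["valid"], entry["frame"] = True, frame   # update the page table
    return frame                          # step 6: restart the instruction

f = handle_reference(2)   # first touch: page fault, page loaded from disk
g = handle_reference(2)   # second touch: valid bit is set, no fault
print(f == g)             # True - same frame on both references
```

Note how the second reference to the same page takes the fast path: the valid bit short-circuits the whole fault sequence.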

Q) What is pure demand paging?
Ans: If no page belonging to the executing process is in main memory, the process will fault for every page it needs. Page faults occur until every page that is needed is in memory. This scheme is known as pure demand paging: NEVER BRING A PAGE IN UNTIL IT IS REQUIRED.

Q) Page Replacement
1. Find the location of the desired page on the disk.
2. Find a free frame:
   i) If there is a free frame, use it.
   ii) If there is no free frame, use a page replacement algorithm to select a victim frame holding a page that is not really in use.
   iii) If the modify bit associated with the victim frame is set to 1, write the victim page to disk; otherwise there is no need to write it to disk. Change the valid/invalid bit in the victim page's page table entry to invalid, to indicate that the victim page is no longer in memory.
3. Read the desired page into the victim frame and update the frame and page tables.
4. Restart the user process.

=================================================================================================
Q) Page Replacement Algorithms
Goal: produce a low page-fault rate. The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

1. First In First Out (FIFO) Page Replacement Algorithm
FIFO uses the time when a page was brought into memory: the page that was brought in first, i.e. the oldest page, is chosen for replacement. We create a FIFO queue to hold all pages in memory.

When a page is brought into memory, we insert it at the tail of the queue, and we replace the page at the head of the queue.
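The FIFO scheme above can be simulated in a few lines (a sketch, not kernel code); the reference string is the one used in the Belady's anomaly discussion below:

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults for FIFO replacement with `num_frames` frames."""
    queue = deque()           # head = oldest page, tail = newest page
    faults = 0
    for page in refs:
        if page not in queue:
            faults += 1
            if len(queue) == num_frames:
                queue.popleft()        # replace the page at the head
            queue.append(page)         # insert the new page at the tail
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults - more frames, yet more faults
```

Running it with 3 and then 4 frames reproduces Belady's anomaly: adding a frame increases the fault count from 9 to 10.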

FIFO: 15 page faults. A problem with FIFO is Belady's anomaly. For the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: with 1 frame allocated to the process, FIFO page replacement causes 12 page faults; with 2 frames, 12 page faults; with 3 frames, 9 page faults; with 4 frames, 10 page faults. As the number of frames allocated to a process increases, page faults should decrease, but with FIFO replacement the number of page faults can increase.

2. Optimal Page Replacement Algorithm
Replace the page that will not be used for the longest period of time. It is used for measuring how well other algorithms perform. It is not practical to implement the OPT algorithm, as it is not possible to know what the future references will be. Example:

Optimal page replacement: 9 page faults.

3. Least Recently Used (LRU) Page Replacement Algorithm
Replace the page that has not been used for the longest period of time. LRU makes use of the time of last use of each page. Example:

                                            Number of page faults = 12

LRU requires hardware assistance to determine the time of last use of each page. We can use 1) a stack or 2) counters to implement the LRU page replacement algorithm.

1) Stack implementation - keep a stack of page numbers in a doubly linked list form. When a page is referenced, it is moved to the top of the stack. This implementation requires up to 6 pointers to be changed.

2) Counter implementation - a time-of-use field is associated with each allocated frame. The CPU maintains a counter that is incremented on every memory reference, and the counter value is copied into the time-of-use field of the referenced page.
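The stack idea can be mimicked in software with an ordered map: referencing a page moves it to the "top", so the least recently used page always sits at the "bottom". This is only a sketch for counting faults; a real system needs the hardware support discussed here.

```python
from collections import OrderedDict

def lru_faults(refs, num_frames):
    """Count page faults under LRU; OrderedDict plays the role of the stack."""
    stack = OrderedDict()          # last item = most recently used page
    faults = 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)        # referenced page moves to the top
        else:
            faults += 1
            if len(stack) == num_frames:
                stack.popitem(last=False)  # evict the least recently used page
            stack[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))   # 10 faults
```

On the reference string used earlier for Belady's anomaly, LRU with 3 frames gives 10 faults; unlike FIFO, LRU never exhibits the anomaly.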

Disadvantages of the LRU page replacement algorithm: updating the counter or stack must be done for every memory reference, HENCE LRU IS SLOW, and the implementation requires hardware assistance.
==============================================================================
Q) LRU Approximation Page Replacement
1) Using a reference bit
2) Using the additional-reference-bits algorithm
3) Second-chance algorithm
4) Enhanced second-chance algorithm
5) Counting-based page replacement

This has two schemes: 1) Least Frequently Used (LFU) and 2) Most Frequently Used (MFU).

6) Page-buffering algorithms

Q) What is Thrashing?
Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur when either global or local page replacement is used. The solution to thrashing is the working-set strategy: if we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know exactly how many frames a process needs, we use the locality model of process execution.

Here a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes locality.

The working-set model uses a working set and a parameter Δ, the working-set window. The set of pages in the most recent Δ page references is the working set.

Let WSSi be the working-set size for process Pi, i.e. Pi needs WSSi frames. Let D be the total demand for frames, D = Σ WSSi, and let m be the total number of available frames.

If D > m, thrashing occurs, because some processes will not have enough frames. So:

if (D > m)
    the OS selects a process to suspend;
else
    the OS may initiate another process.

The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.
========================================================================
Q) How does thrashing occur in global and local page replacement?
Ans: In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, possibly causing a page fault for some other process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more processes queue up for the paging device,

the READY queue empties and the CPU becomes idle. So the OS introduces new processes since the CPU is free; this further increases the number of page faults.

In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs, only the frames allocated to that process are replaced. When a process starts thrashing, other processes are not directly affected. But since processes queue for the paging device most of the time, the average service time for a page fault increases; hence the effective access time increases even for a process that is not thrashing.

Q) Page Table Structure (or) Structure of the page table in memory

1. Hierarchical paging 2. Hashed page tables 3. Inverted page tables

Hierarchical Paging: If the page table is large, we break it up into multiple page tables, i.e. the page table itself is paged. A simple technique is a two-level page table.

Two-Level Paging Example: A logical address (on a 32-bit machine with a 4K page size) is divided into:
1. a page number consisting of 20 bits
2. a page offset consisting of 12 bits
Since the page table is paged, the page number is further divided into:
1. a 10-bit outer page number p1
2. a 10-bit displacement p2
Thus a logical address is as follows:

page number | page offset
p1 (10 bits) | p2 (10 bits) | d (12 bits)

where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table. Two-Level Page-Table Scheme:

                                            Address-Translation Scheme Address-translation scheme for a two-level 32-bit paging architecture
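The split of a 32-bit logical address into p1, p2 and d described above can be written directly with shifts and masks (the field widths match the 4K-page example; the function name is just for illustration):

```python
PAGE_OFFSET_BITS = 12   # 4K pages -> 12-bit offset d
INNER_BITS = 10         # p2: displacement within a page of the outer table
OUTER_BITS = 10         # p1: index into the outer page table

def split_address(addr):
    """Split a 32-bit logical address into (p1, p2, d) for two-level paging."""
    d  = addr & ((1 << PAGE_OFFSET_BITS) - 1)                  # low 12 bits
    p2 = (addr >> PAGE_OFFSET_BITS) & ((1 << INNER_BITS) - 1)  # next 10 bits
    p1 = addr >> (PAGE_OFFSET_BITS + INNER_BITS)               # top 10 bits
    return p1, p2, d

# example: 0x00403ABC -> p1 = 1, p2 = 3, d = 0xABC
print(split_address(0x00403ABC))
```

Recombining the fields as (p1 << 22) | (p2 << 12) | d reconstructs the original address, which is a quick sanity check on the field widths.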

Hashed Page Tables (or Hash Table): Hashed page tables are common in address spaces larger than 32 bits. Each entry in the hash table contains a linked list of elements that hash to the same location. The hash table is indexed by hash value. Each element consists of 3 fields:

1. page number
2. frame number
3. pointer to the next element in the linked list

The algorithm works as follows: the page number is passed to a hash function to get a hash value. Locating that value in the hash table is easy, as the table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address. If there is no match, subsequent entries in the linked list are searched.
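A minimal sketch of that lookup, using per-bucket chains (the bucket count and function names are made up for the example):

```python
NUM_BUCKETS = 16

# each bucket holds a chain of (page_number, frame_number) pairs that hash together
hash_table = [[] for _ in range(NUM_BUCKETS)]

def insert_mapping(page, frame):
    hash_table[hash(page) % NUM_BUCKETS].append((page, frame))

def lookup(page):
    """Walk the chain in the page's bucket; return its frame or None."""
    for entry_page, entry_frame in hash_table[hash(page) % NUM_BUCKETS]:
        if entry_page == page:        # compare field 1 (page number)
            return entry_frame        # field 2: frame number
    return None                       # no match anywhere in the chain

insert_mapping(7, 3)
insert_mapping(23, 5)     # 23 % 16 == 7, so it chains into the same bucket as 7
print(lookup(23))         # 5 - found after walking past the entry for page 7
print(lookup(99))         # None - not mapped
```

Pages 7 and 23 deliberately collide here, so the second lookup exercises the chain walk described in the text.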

                                            Inverted Page Table

An inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table has:

1. process-id
2. page number

A logical address also has 3 parts: process-id, page number and offset.

The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, the frame number i combined with the offset gives the physical address. If there is no match, the address is illegal.

Advantages and disadvantages:

An inverted page table decreases the memory needed to store the page tables, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to limit the search to one - or at most a few - page-table entries. Inverted Page Table Architecture:
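A sketch of the inverted-table search (the process names and page size are made up; in practice this linear scan is exactly what the hash table above is meant to avoid):

```python
# one entry per physical frame: (process_id, page_number), or None if the frame is free
inverted_table = [("P1", 0), ("P2", 4), ("P1", 2), None]
PAGE_SIZE = 4096

def translate(pid, page, offset):
    """Linear search of the inverted page table; frame index i gives the address."""
    for i, entry in enumerate(inverted_table):
        if entry == (pid, page):
            return i * PAGE_SIZE + offset   # <i, offset> forms the physical address
    raise MemoryError("illegal address")    # no match anywhere in the table

print(translate("P1", 2, 100))   # frame 2 -> 2*4096 + 100 = 8292
```

Note that the table is indexed by frame, not by page, which is why the whole table may have to be scanned on every reference.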

                                            Unit-IV CPU Scheduling

Basic concepts: To maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns it to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle: Process execution begins with a CPU burst, followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

An I/O-bound program has many short CPU bursts. A CPU-bound program has a few long CPU bursts.

Schedulers (modules in the OS that make scheduling decisions):
1. Long-term scheduler (or job scheduler) - selects which processes should be brought into the ready queue.
2. Medium-term scheduler - selects which processes should be swapped in/out of memory.
3. Short-term scheduler (or CPU scheduler) - whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (CPU) scheduler.

Dispatcher: gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

1. switching context
2. switching to user mode
3. jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency; it must be kept small.

Ready Queue: The ready queue can be implemented as a FIFO queue, a priority queue, a tree or simply an unordered linked list. The records in the ready queue are the PCBs of processes.

                                            Pre-emptive Scheduling

                                            CPU scheduling decisions take place under one of four conditions

1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state, for example in response to an interrupt
3. When a process switches from the waiting state to the ready state, say at completion of I/O
4. When a process terminates

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one.

If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive or cooperative. In non-preemptive scheduling, a process runs until it gives up the CPU itself, i.e. when it waits for I/O or when it completes. Otherwise the system is said to be pre-emptive.

Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

Note that pre-emptive scheduling can cause problems when two processes share data, because one process may be interrupted in the middle of updating shared data structures.

Preemption can also be a problem if the kernel is busy executing a system call (e.g. updating critical kernel data structures) when the preemption occurs. Solution: wait until the system call has either completed or blocked before allowing the pre-emption. But this solution is problematic for real-time systems.

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. But this should be done only for very short pieces of code that take little CPU time.

Scheduling criteria used to compare CPU scheduling algorithms:
1. CPU utilization: in a real system, CPU utilization ranges from about 40% to 90%.
2. Throughput: number of processes that are completed per unit time.
3. Turnaround time: (time when the process completed) - (time when the process was submitted); it includes waiting time and burst time.
4. Waiting time: amount of time a process spends waiting for the CPU.
5. Response time: (time when the first response came) - (time when the process was submitted).

Optimization criteria: maximize CPU utilization, maximize throughput, minimize turnaround time, minimize waiting time, minimize response time.

CPU Scheduling Algorithms

1) First-Come First-Served (FCFS) Scheduling
FCFS is non-pre-emptive: processes are scheduled in the order they arrive. FCFS is implemented with a queue; when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages:
1. The average waiting time can be quite long.
2. Convoy effect (small processes wait in the queue for a big process to leave the CPU).
Example:
Process  Burst Time
P1       24
P2       3
P3       3

Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:

P1 | P2 | P3
0    24   27   30

Waiting time: P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.

2) Shortest-Job-First (SJF) Scheduling
Schedule the process with the shortest burst time. If the CPU burst times of two processes are the same, FCFS is used.
Advantages: the average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either pre-emptive or non-pre-emptive:
1. Non-pre-emptive SJF - once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Pre-emptive SJF, or Shortest-Remaining-Time-First (SRTF) - if a newly arrived process is shorter than the remaining time of the currently executing process, pre-empt the currently executing process.

Example:
Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

SJF (non-preemptive) Gantt chart:
P1 | P3 | P2 | P4
0    7    8    12   16

Average waiting time = [0 + (8-2) + (7-4) + (12-5)] / 4 = 4
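The non-preemptive SJF schedule above can be checked with a short simulation (a sketch only; the process data is taken from the example):

```python
def sjf_nonpreemptive(procs):
    """procs: list of (name, arrival, burst). Returns the average waiting time."""
    procs = sorted(procs, key=lambda p: p[1])   # consider arrivals in order
    time, waiting, remaining = 0, {}, list(procs)
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                 # CPU idle until the next arrival
            time = min(p[1] for p in remaining)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest burst
        waiting[name] = time - arrival
        time += burst                 # runs to completion (non-preemptive)
        remaining.remove((name, arrival, burst))
    return sum(waiting.values()) / len(waiting)

procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
print(sjf_nonpreemptive(procs))   # 4.0, matching the Gantt chart above
```

The simulation picks P1, then P3, P2, P4, exactly the order in the Gantt chart, and reproduces the average waiting time of 4.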

Example of Preemptive SJF:
Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

SJF (preemptive) Gantt chart:
P1 | P2 | P3 | P2 | P4 | P1
0    2    4    5    7    11   16

Average waiting time = (9 + 1 + 0 + 2)/4 = 3

3) Priority Scheduling
A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. preemptive 2. non-preemptive.

Problem: starvation - low-priority processes may never execute. Solution: aging - as time progresses, increase the priority of the process.

4) Round Robin (RR)
Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance:
1. q large: behaves like FIFO.
2. q small: q must still be large with respect to the context-switch time, otherwise the overhead is too high.
Example of RR with time quantum = 4:
Process  Burst Time
P1       24
P2       3
P3       3

The Gantt chart is:
P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1
0    4    7    10   14   18   22   26   30

Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 = 5.66

5) Multilevel Queue Scheduling
The ready queue is partitioned into separate queues. For example, the ready queue is divided into 2 queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, the Round Robin algorithm can be used in the foreground queue and the FCFS algorithm in the background queue.

Scheduling must also be done between the queues. This can be done in 2 ways:
1. Fixed-priority scheduling:

The foreground queue has the highest priority. All the processes in the foreground queue must be completed and the queue must be empty; only then are the processes in the background queue given the CPU, as shown in the figure below. Disadvantage: starvation.

2. Time slice: each queue gets a certain amount of CPU time, which it schedules amongst its processes, e.g. 80% to the foreground queue in RR and 20% to the background queue in FCFS.

6) Multilevel Feedback Queue Scheduling
A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes will be in the highest-priority queue. If a process takes more CPU time, it is moved to a lower-priority queue. If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues Q0, Q1, Q2:

1. Q0 – time quantum 8 milliseconds
2. Q1 – time quantum 16 milliseconds
3. Q2 – FCFS

                                            Scheduling

1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 milliseconds. If it does not finish in 8 milliseconds, it is moved to queue Q1.
2. At Q1 the process again receives 16 additional milliseconds. If it still does not complete, it is moved to queue Q2.
3. At Q2 processes run in FCFS order, but only when queues Q0 and Q1 are empty.
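The Round Robin example above (P1 = 24, P2 = 3, P3 = 3, quantum 4) can be checked with a small simulation. This is a minimal sketch, assuming all three processes arrive at time 0; the function name and structure are illustrative, not from the notes.

```python
from collections import deque

def round_robin_waiting(bursts, quantum):
    """Simulate RR for processes that all arrive at time 0.

    bursts: list of (name, burst_time); returns {name: waiting_time}.
    """
    remaining = dict(bursts)
    waiting = {name: 0 for name, _ in bursts}
    ready = deque(name for name, _ in bursts)
    while ready:
        p = ready.popleft()
        run = min(quantum, remaining[p])
        for q in ready:            # everyone still queued waits while p runs
            waiting[q] += run
        remaining[p] -= run
        if remaining[p] > 0:
            ready.append(p)        # preempted: back of the ready queue
    return waiting

w = round_robin_waiting([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
# w["P1"] = 6, w["P2"] = 4, w["P3"] = 7; average = 17/3 = 5.66
```

The per-process waiting times match the Gantt-chart calculation: P1 waits 6, P2 waits 4, P3 waits 7.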

Q) Synchronization
Ans: Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of a process synchronization problem is the Producer-Consumer problem (or Bounded-Buffer problem).
Producer-Consumer problem: make sure that the producer process does not add data when the buffer is full, and the consumer process does not take data when the buffer is empty.

The code for the Producer process can be written as follows:
while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer, and decremented every time an item is removed from the buffer. in is an index that always points to the first empty slot after the last filled slot; out is an index that always points to the first filled slot, at the head of the circular queue.

The code for the Consumer process is as shown below:
while (true) {
    while (counter == 0)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The above producer and consumer code produces a RACE CONDITION. A race condition exists when the outcome of execution of several co-operating processes depends on the order in which their accesses to shared data take place. For example, let counter = 5.

Given counter = 5, if the producer produces one item and the consumer consumes one item, the correct value of counter is still 5. But counter++ and counter-- are each implemented as several machine instructions (load counter into a register, modify the register, store it back), and the instructions of the two processes may interleave. In one interleaving the consumer's final store (call it T5) overwrites the producer's store (T4), leaving counter = 4, an inconsistent result. If T5 is executed before T4, then counter = 6, also inconsistent.

Solution: to avoid the race condition, make sure that only one process at a time updates the shared variable (here, counter).
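The labels T4 and T5 above refer to the interleaved machine steps behind counter++ and counter--: each compiles to a load, a register update, and a store. A sketch of the lost-update interleaving (the register names r1 and r2 are illustrative):

```python
counter = 5

r1 = counter      # T0: producer loads counter (5)
r1 = r1 + 1       # T1: producer increments its register (6)
r2 = counter      # T2: consumer loads counter (still 5)
r2 = r2 - 1       # T3: consumer decrements its register (4)
counter = r1      # T4: producer stores 6
counter = r2      # T5: consumer stores 4 -- the producer's update is lost
```

The final value is 4; swapping T4 and T5 yields 6. Either way the correct value, 5, is lost.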

Q) Critical section problem
Ans: Each process has a segment of code called its critical section. Critical-section code is the code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical-section code at the same time. For this to happen, each process is structured as:
1. Each process must request permission to enter its critical section — entry section.
2. The end of the critical section — exit section.
3. The rest of the code after the critical section — remainder section.

A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is executing in its critical section, no other process can be in its critical section.
2. Progress: only processes that are not in their remainder sections can participate in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: there is a bound on the number of times other processes may enter their critical sections after a process has requested entry and before that request is granted, so no process waits indefinitely.

Q) Peterson's solution for the critical-section problem (software-based solution)
Ans: Two processes P1 and P2 share flag[1] and flag[2] (both initially FALSE) and turn. Each process Pi (with Pj the other process) executes:

do {
    flag[i] = TRUE;    /* Pi wants to enter */
    turn = j;          /* give the other process priority */
    while (flag[j] && turn == j)
        ;              /* busy wait */
    /* critical section */
    flag[i] = FALSE;
    /* remainder section */
} while (TRUE);

A sample interleaving:
T0: P1 executes flag[1] = TRUE and turn = 2; P2 executes flag[2] = TRUE.
T1: P2 executes turn = 1.
T2: P1 evaluates while (flag[2] && turn == 2): since turn is now 1, the loop does not run. P2 evaluates while (flag[1] && turn == 1) and loops continuously until flag[1] becomes false.
T3: P1 enters its critical section.
T4: P1 executes flag[1] = FALSE; P2's loop condition becomes false and P2 enters its critical section.
T5: P1 enters its remainder section; P2 executes flag[2] = FALSE and then enters its remainder section. Both processes then repeat, since the outer loop is while (TRUE).
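Peterson's solution above can be exercised directly with two threads. This is a minimal sketch using Python threads, with indices 0/1 in place of the notes' 1/2; it relies on CPython executing these statements in a sequentially consistent order, which Peterson's algorithm assumes of the hardware.

```python
import threading

flag = [False, False]        # flag[i]: process i wants to enter
turn = 0
counter = 0                  # shared data guarded by the critical section
N = 10000

def process(i):
    global turn, counter
    other = 1 - i
    for _ in range(N):
        flag[i] = True       # entry section: announce intent
        turn = other         # give the other process priority on a tie
        while flag[other] and turn == other:
            pass             # busy wait
        counter += 1         # critical section
        flag[i] = False      # exit section

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because mutual exclusion holds, no increment of counter is lost and it ends at exactly 2N; removing the entry/exit sections would let updates race.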

10.1 File Concept Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines or records. A file has a structure that depends on its type.

                                            File Attributes

1. Name – name of the file, for user understanding.
2. Identifier – used by the OS to identify a file (e.g. inode number).
3. Type – text files, executable files, binary files, etc.
4. Location – location of the file on the hard drive.
5. Size – current size of the file.
6. Protection – controls who can read, write or execute the file.
7. Time, date and user identification – gives information about when the file was created, last modified and last used. This data is used for protection, security and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier locates the other file attributes. The directory structure is kept on disk.

                                            File Operations

The file Abstract Data Type supports many common operations:
1. Creating a file: find free space on disk, add an entry to the directory.
2. Writing a file: write data at the current file-position pointer location and update the pointer.
3. Reading a file: read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek): change the pointer location to a given value.
5. Deleting a file: free the space allocated to the file and remove its entry from the directory.
6. Truncating a file: delete the data and update the file size.

                                            Open File Table

Most OSes require that files be opened before access and closed after access. Information about all files that are currently open in the system is stored in an open-file table. This system-wide open-file table is maintained by the OS, and information about the files opened by a process is stored in the process's own open-file table. The open-file table has the following information:
1. File pointer – records the current position in the file for the next read or write.
2. File-open count – how many times the file has been opened (possibly simultaneously by different processes) and not yet closed. When this counter reaches zero, the file's entry can be removed from the table.
3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. A mandatory lock: the OS makes sure the locks are acquired and released appropriately.
4. An advisory lock: the software developers make sure the locks are acquired and released appropriately.
5. UNIX uses advisory locks and Windows uses mandatory locks.

                                            File Types

Implementing file types: a common technique is to include the type as part of the filename. The name is split into 2 parts:

1. File name
2. Extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on that file. For example, when a user opens Microsoft Word and chooses Open from the File menu, the user specifies only the file name without the extension; the application looks for a file with the given name and the extension it expects. Hence such extensions are only hints and are not enforced by the operating system. UNIX systems use a crude magic number stored at the beginning of the file to indicate the type of the file. In the Mac OS X operating system, a file produced by a word processor has the word processor's name as its creator attribute, so when the user double-clicks on the file, Word opens it.

File Structure Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and where the first instruction is located. If an OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, find the first executable statement, etc.).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                                            Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units such as a single byte, a data-record size, or a structure size. The number of logical units which fit into one physical block determines the packing. For example, if a student file is organized internally as records, each record takes 64 bytes, and disk blocks are 512 bytes, then packing = 512/64 = 8, i.e. 8 records can be stored per block. Internal fragmentation may occur due to this packing.
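The packing and the per-block waste can be computed directly. The 512-byte block and 64-byte record come from the example above; the 100-byte case is a hypothetical added to show internal fragmentation:

```python
BLOCK_SIZE = 512                         # physical block size, bytes

def packing(record_size):
    """Records per block, and bytes lost per block to internal fragmentation."""
    return BLOCK_SIZE // record_size, BLOCK_SIZE % record_size

records_per_block, wasted = packing(64)      # 8 records, 0 bytes wasted
r100, w100 = packing(100)                    # 5 records, 12 bytes wasted
```

With 64-byte records the block divides evenly; with 100-byte records, 12 bytes of every block are unusable.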

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: we cannot randomly access any record. Records are read and written sequentially, one record after another.

A sequential-access file emulates magnetic tape operation and generally supports the following operations:
read next – read a record and advance to the next position.
write next – write a record and advance to the next position.
rewind – return to the beginning of the file.
skip n records – may or may not be supported; n may be limited to positive numbers or to +/- 1.

2. Direct access (or relative access): a file is made up of fixed-length records, so that we can randomly access any record.

                                            No restriction on order of reading or writing

                                            Direct access files are useful when we need to access large amounts of data randomly

Example: a database.

Operations supported include the following, where n is a relative block number, i.e. relative to the beginning of the file:
1. read n – read record number n.
2. write n – write record number n.
3. jump to record n – n could be 0 or the end of the file.
4. query current record – used to return back to this record later.
Sequential access can be easily emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct-access method. We construct an index for the file, which contains pointers to the various blocks; to find a record, we first search the index and then use the pointer to access the file directly.

If a student file has 120000 records sorted by student id, we can create an index consisting of the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120000 records = 120000/64 = 1875 (roughly 2000) blocks, so the index has 1875 entries. The index is kept in main memory so that searching becomes faster.
If the index itself becomes large, we create an index for the index file (e.g. a B-tree).
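The sizing arithmetic above, computed exactly (120000 records at 64 per block needs 1875 blocks, which the notes round to roughly 2000); math.ceil covers record counts that do not divide evenly:

```python
import math

RECORDS = 120000
RECORDS_PER_BLOCK = 64

data_blocks = math.ceil(RECORDS / RECORDS_PER_BLOCK)   # blocks holding records
index_entries = data_blocks        # one entry (the first student id) per block
```

A binary search over 1875 in-memory index entries takes about 11 comparisons, versus reading blocks from disk.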

                                            Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                                            Three major methods of allocating disk space are

1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk. If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, …, b+n-1. The directory entry for each file records the address of the starting block (b) and the length (n).
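The logical-to-physical mapping for contiguous allocation is simple address arithmetic; a sketch, where b and n come from the directory entry (the example values 14 and 3 are illustrative):

```python
def physical_block(b, n, i):
    """Physical block holding logical block i of a file that
    starts at block b and is n blocks long."""
    if not 0 <= i < n:
        raise IndexError("logical block outside the file")
    return b + i

# a file starting at block 14 with length 3
first = physical_block(14, 3, 0)   # block 14
last = physical_block(14, 3, 2)    # block 16
```

This constant-time computation is why contiguous allocation supports direct access so easily.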

                                            Advantages

1. For direct access to block i, we can directly find the block as b+i. Since the file is stored sequentially, sequential access is also easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

                                            Disadvantages

1. It is difficult to find space for a new file (or to add more data to an existing file). First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.
2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.
3. The size of the file must be known when the file is created. Even if the final size is known, the entire space may not be used by the file at the beginning, which leads to wastage of space.
If the size of the file is not known in advance, then: (i) if we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space and release the previous space. This is possible only as long as such space exists, and it is time consuming. (ii) If we overestimate the amount of space needed, much of the space goes unused, leading to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous-allocation scheme. A chunk of contiguous blocks is allocated at first; if the file needs more blocks, another chunk of contiguous space, known as an extent, is added. The directory entry then records the location and block count of the initial allocation, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2. Linked allocation: each file is a linked list of disk blocks, and the disk blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512-4 = 508 bytes. To create a new file, we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0. To add more data to an existing file, a free block is found, the data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. Easy to find space for a new file (or to add more data to an existing file).
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is high.
2. The number of disk seeks required is high.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct-access files: to access the i-th block, i disk reads are required.
5. Not reliable: pointers may be lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (e.g. 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk-access time, reduces the space wasted on pointers (fewer pointers are needed), and simplifies free-list management. Disadvantage: internal fragmentation — more space is wasted when a cluster is only partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file-allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                                            Advantages

1. Easy to find space for a new file (or to add more data to an existing file).
2. Random access time is improved.

Disadvantage: the number of disk seeks can still be high — the disk head must move to read the FAT and find the location of a block, then move to the location of the block itself.
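The FAT chain for the five-block example file above (blocks 9 → 16 → 1 → 10 → 25) can be sketched as follows. The marker value EOF = -1 is an illustrative stand-in for the special end-of-file value a real FAT uses:

```python
FREE, EOF = 0, -1
fat = [FREE] * 32                       # one entry per disk block

# the directory says the file starts at block 9
fat[9], fat[16], fat[1], fat[10], fat[25] = 16, 1, 10, 25, EOF

def file_blocks(start):
    """Follow the FAT chain from the file's starting block."""
    chain = []
    b = start
    while b != EOF:
        chain.append(b)
        b = fat[b]                      # each entry names the next block
    return chain
```

Because the whole table sits in one place, random access only needs FAT lookups rather than reading every intervening data block.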

                                            3 Indexed allocation

Each file has its own index block, which is an array of disk-block addresses. The i-th entry in the index block points to the i-th block of the file.

                                            The directory contains the address of the index block

                                            When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found and the data is written to it; the address of that block is then put in the next free slot of the index block.
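A sketch of that bookkeeping: NIL = -1 stands in for the nil pointer, and the index size of 16 slots is an arbitrary illustrative choice.

```python
NIL = -1

def create_file(max_blocks=16):
    """On creation, all pointers in the index block are nil."""
    return [NIL] * max_blocks

def append_block(index_block, free_block):
    """Record a newly written data block in the file's index block."""
    slot = index_block.index(NIL)       # first unused pointer
    index_block[slot] = free_block
    return slot

idx = create_file()
append_block(idx, 9)
append_block(idx, 25)
```

Reading logical block i is then just a lookup of idx[i], which is why direct access is efficient here.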

                                            Advantages

1. Easy to find space for a new file (or to add more data to an existing file).
2. Supports direct (random) access efficiently.
3. No external fragmentation.

Disadvantages

1. Seek time is high.
2. The number of disk seeks required is high.
3. Not reliable.
4. The index block must be kept in memory. If this memory is not available, we have to read the index block and then the desired data block, i.e. 2 disk accesses are required. For a large file with chained index blocks, accessing a block near the end of the file may require reading several index blocks before the needed data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block must still be allocated to store those one or two pointers.

Various mechanisms are used to manage the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header giving the name of the file and a set of the first 100 disk-block addresses; for large files, the last entry contains the address of another index block.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which in turn points to the file block. This approach can be continued to a third or fourth level.

3. Combined scheme:

For example, say there are 15 pointers in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, an index block that contains the addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block.
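The reach of this combined scheme can be computed once block and pointer sizes are fixed. Assuming 512-byte blocks and 4-byte block addresses (illustrative numbers, not from the notes):

```python
BLOCK = 512                  # bytes per block (assumed)
PTR = 4                      # bytes per block address (assumed)
PPB = BLOCK // PTR           # pointers per index block = 128

direct = 12                  # direct pointers in the inode
single = PPB                 # blocks reachable via the single indirect block
double = PPB ** 2            # via the double indirect block
triple = PPB ** 3            # via the triple indirect block

max_blocks = direct + single + double + triple
max_bytes = max_blocks * BLOCK          # about 1 GB with these sizes
```

Small files are served entirely by the 12 direct pointers, while the indirect levels extend the maximum file size by orders of magnitude.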

                                            Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                            The free space list can be implemented as

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. The free-space bit map would be 001111001000…
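A sketch of allocation against such a bit map, using the example string above (bit i describes block i; 1 = free):

```python
bitmap = list("001111001000")       # blocks 2, 3, 4, 5 and 8 are free

def allocate_block(bitmap):
    """Find the first free block, mark it allocated, return its number."""
    for i, bit in enumerate(bitmap):
        if bit == "1":
            bitmap[i] = "0"
            return i
    return None                      # disk full

first = allocate_block(bitmap)       # block 2
second = allocate_block(bitmap)      # block 3
```

Real implementations scan a word at a time, so finding the first free block is just finding the first non-zero word and its first 1 bit.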

Advantages: simple, and it is easy to find contiguous free blocks for a file.

                                            Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. Bit maps of smaller disks can be kept in main memory, but it may not be possible to keep the bit map of a large disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.

2. Linked list: the free-space list is implemented by linking all the free disk blocks together, keeping a pointer to the first free block in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. We keep a pointer to block 2 as the first free block; block 2 would contain a pointer to block 3, block 3 would point to block 4, block 4 to block 5, and so on.
Advantage: no waste of space.
Disadvantage: we cannot get contiguous space easily, i.e. to traverse the free-space list we must read each block, which requires substantial I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free; the last one contains the addresses of another n free blocks, and so on.
Advantage: the addresses of a large number of free blocks can be found quickly.
4. Counting: the free-space list can be maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it. Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, but the overall list is shorter as long as counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activities, in time order, in counting format. When ZFS allocates or frees space from a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Synchronization Hardware
The hardware solution to synchronization is to provide atomic operations. These operations execute as a single instruction, without interruption. Two such operations are:
1. The TestAndSet() hardware instruction
2. The Swap() instruction

1. TestAndSet() atomically returns the old value of a Boolean variable and sets the variable to true.
Solution using TestAndSet: the shared variable lock is initialized to false. A process repeatedly executes TestAndSet(&lock) until it returns false, then enters its critical section, and sets lock = false on exit.
2. Swap() atomically exchanges the contents of two Boolean variables.
Solution using Swap: a shared Boolean variable lock is initialized to false and each process has a local Boolean variable key. A process sets key = true and swaps lock and key until key becomes false, then enters its critical section, and sets lock = false on exit.

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately do not guarantee bounded waiting. If there are multiple processes trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee as to the relative rates of the processes, a very fast process could theoretically release the lock, whip through its remainder section, and re-lock the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

A solution using TestAndSet that satisfies bounded waiting uses two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

                                            Bounded-waiting mutual exclusion with TestAndSet( )

On exiting its critical section, a process scans the waiting array in order (starting with the next process on the list) for a process that has been waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting a specific process into the critical section while continuing to block all the others.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: wait() and signal().

wait(S) {
    while (S <= 0)
        ; /* do nothing */
    S--;
}

signal(S) {
    S++;
}

Each process that wishes to use a resource performs a wait() operation; when a process releases a resource, it performs a signal() operation.

Semaphore usage:
1. A binary semaphore (or mutex lock) can take the values 0 or 1.
2. A counting semaphore can take any integer value. A counting semaphore is used to count the number of remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources; when the counter = 0 (or negative, in some implementations), a process blocks until another process frees a resource and increments the counting semaphore with a signal() call. For example, suppose there are 3 resources and 4 processes, so the counting semaphore = 3. If process P1 requires a resource, it performs a wait() operation and the counting semaphore becomes 2; when P2 performs wait(), it becomes 1; when P3 performs wait(), it becomes 0. If process P4 now performs wait(), P4 waits until a resource is available.

                                            3 Semaphores can also be used to synchronize certain operations
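The 3-resources/4-processes walkthrough above maps onto Python's standard-library counting semaphore. In this sketch P4's non-blocking acquire stands in for "P4 waits", so the example terminates:

```python
import threading

sem = threading.Semaphore(3)          # 3 resources available

for p in ("P1", "P2", "P3"):
    sem.acquire()                     # wait(): counter 3 -> 2 -> 1 -> 0

# P4's wait() would block here; poll non-blockingly instead
p4_first_try = sem.acquire(blocking=False)    # fails: counter is 0

sem.release()                         # signal(): some process frees a resource
p4_second_try = sem.acquire(blocking=False)   # now succeeds
```

acquire() is the wait() primitive and release() is signal(); the semaphore's internal counter follows exactly the 3 → 2 → 1 → 0 sequence described above.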

Semaphore implementation: while one process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process spins while waiting. Semaphores can also enforce ordering: for example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First we create a semaphore named synch that is shared by the two processes and initialize it to zero. Then in process P1 we insert the code:

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal. Each semaphore has an integer value and maintains a queue of waiting processes.

Semaphore implementation with no busy waiting: to overcome busy waiting, the waiting process is blocked (placed in the semaphore's waiting queue) and is later restarted by a wakeup() operation, i.e. the process is moved from the waiting queue back to the ready state.

Problems due to semaphores:
1. Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes.

2. Starvation: one or more processes remain blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithm for selecting a process to be removed from the waiting queue in the signal() call: if a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M, H have priorities L < M < H. Process H is waiting for a resource that is held by process L, so process L is running. When process M enters, it preempts process L, and M is running now. Here M runs before H even though M has lower priority than H. To avoid this, L temporarily inherits H's higher priority so that M does not preempt L (priority inheritance).
=====================================================================
Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem

Consider n buffers, each of which holds one item; a producer process, which creates items (one at a time); and a consumer process, which consumes them (one at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is needed. For that we use 3 semaphores:

1. empty = n (counts the number of empty buffers)
2. full = 0 (counts the number of filled buffers)
3. mutex = 1 (provides mutual exclusion for access to the buffer)

Producer process:

do {
    ...
    produce an item
    ...
    wait(empty);
    wait(mutex);
    ...
    add item to buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:

do {
    wait(full);
    wait(mutex);
    ...
    remove an item from buffer
    ...
    signal(mutex);
    signal(empty);
    ...
    consume the item
    ...
} while (1);
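The three-semaphore scheme above can be sketched in Python (a minimal sketch; the item values, buffer size, and thread setup are illustrative):

```python
import threading
from collections import deque

N = 4                                   # number of buffer slots
buffer = deque()                        # stands in for the n-slot buffer
empty = threading.Semaphore(N)          # counts empty slots
full = threading.Semaphore(0)           # counts filled slots
mutex = threading.Semaphore(1)          # mutual exclusion on the buffer
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                 # wait(empty)
        mutex.acquire()                 # wait(mutex)
        buffer.append(item)             # add item to buffer
        mutex.release()                 # signal(mutex)
        full.release()                  # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()                  # wait(full)
        mutex.acquire()                 # wait(mutex)
        item = buffer.popleft()         # remove an item from buffer
        mutex.release()                 # signal(mutex)
        empty.release()                 # signal(empty)
        consumed.append(item)           # consume the item

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10,))
c.start(); p.start()
p.join(); c.join()
print(consumed)   # items arrive in production order: [0, 1, ..., 9]
```

Because the producer blocks on `empty` and the consumer on `full`, neither can overrun or underrun the buffer even though it holds only 4 slots.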

2. The Readers-Writers Problem

A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (i.e., it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: a new reader need not wait just because a writer is waiting; writers priority: if a writer is waiting to access the data, no new readers may start reading.

A solution to either version may cause starvation: in the readers-priority version, writers may starve; in the writers-priority version, readers may starve.

A semaphore solution to the readers-priority version uses an integer counter (readcount) and 2 semaphores:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers; it provides mutually exclusive access to the shared data)

Reader process:

wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
...
reading is performed
...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

Writer process:

do {
    wait(wrt);
    ...
    writing is performed
    ...
    signal(wrt);
} while (TRUE);
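The readers-priority protocol above can be sketched in Python. The `state`/`violations` bookkeeping is not part of the classic solution; it is added here (under its own lock) only to check that exclusion is never violated:

```python
import threading, time

readcount = 0                       # number of processes currently reading
mutex = threading.Semaphore(1)      # protects readcount
wrt = threading.Semaphore(1)        # exclusive access to the shared data
shared = {"value": 0}
state = {"readers": 0, "writing": False}
state_lock = threading.Lock()       # protects the checking state only
violations = []

def reader():
    global readcount
    mutex.acquire()                 # wait(mutex)
    readcount += 1
    if readcount == 1:
        wrt.acquire()               # first reader locks out writers
    mutex.release()                 # signal(mutex)
    with state_lock:
        state["readers"] += 1
        if state["writing"]:
            violations.append("read during write")
    time.sleep(0.001)               # reading is performed
    with state_lock:
        state["readers"] -= 1
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()               # last reader lets writers in
    mutex.release()

def writer():
    wrt.acquire()                   # wait(wrt)
    with state_lock:
        if state["readers"] or state["writing"]:
            violations.append("writer not exclusive")
        state["writing"] = True
    shared["value"] += 1            # writing is performed
    time.sleep(0.001)
    with state_lock:
        state["writing"] = False
    wrt.release()                   # signal(wrt)

threads = [threading.Thread(target=f) for f in [reader] * 5 + [writer] * 3]
for t in threads: t.start()
for t in threads: t.join()
print(shared["value"], violations)   # 3 []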

3. The Dining-Philosophers Problem

n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems; it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5]; /* initially all values are 1 */
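A runnable Python sketch of this chopstick-semaphore idea follows. One deliberate change from the pseudocode below: to keep the demo deadlock-free, the last philosopher picks up his chopsticks in the opposite order (a standard resource-ordering fix); the fully symmetric version can deadlock, as the text notes:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds):
    # Resource ordering: the last philosopher reverses pick-up order,
    # which breaks the circular wait that causes deadlock.
    first, second = (i, (i + 1) % N) if i < N - 1 else ((i + 1) % N, i)
    for _ in range(rounds):
        chopstick[first].acquire()   # wait(chopstick[...])
        chopstick[second].acquire()
        meals[i] += 1                # eat
        chopstick[second].release()  # signal(chopstick[...])
        chopstick[first].release()
        # think

threads = [threading.Thread(target=philosopher, args=(i, 3)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)   # every philosopher ate 3 times: [3, 3, 3, 3, 3]
```

Neighbouring philosophers still never eat simultaneously, since each chopstick semaphore admits only one holder.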

Philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    ...
    eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    ...
    think
    ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (for example, if every philosopher picks up his left chopstick at the same moment).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. Unlike an SMP system, they do not have the flexibility to assign processes to the least-loaded CPU.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within a time constraint.

Two kinds of real-time systems have evolved:

1. A hard real-time system can guarantee that the response requirement will be met under all conditions. 2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples: digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e., decision-making is spread across the different computers in the system, so that failures of individual computers or of the network do not cripple operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

                                            Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

DESKTOP SYSTEMS: A desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. Handheld systems have very little memory.


CLUSTERED SYSTEMS: A computer cluster consists of a set of loosely connected computers that work together so that, in many respects, they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                                            Q) Modern operating system

New developments in hardware, applications, and security threats have led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client-server computing.

                                            Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as systems nowadays often have multiple microprocessors, an SMP operating system provides greater efficiency when there are multiple processors. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                            SMP schedules processes or threads across all of the processors

                                            Advantages

1. Increased performance: if some portion of a program can be run in parallel, the performance of an SMP OS increases.

2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can scale with the number of processors used.

4. Distributed operating systems: distributed systems allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux Operating System: Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system, and its main design goals are speed, efficiency, and standardization.

Components of Linux System


                                            Linux Operating System has primarily three components as shown in the below diagram

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system and interacts directly with the hardware, hiding low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs that application programs use to access the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks, like updating log files or accepting login requests from terminals.

                                            Kernel Modules Sections of kernel code that can be compiled loaded and unloaded independent of the rest of the kernel A kernel module may typically implement a device driver a file system or a networking protocol

Three components of Linux module support:
1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver is available.
3. Conflict resolution: protects the resources reserved by one driver from accidental use by another.

Kernel Mode vs User Mode: Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require any context switch; hence it is very efficient and fast. The kernel runs processes and provides system services to them, including protected access to hardware. Code that is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code; they use the system libraries to invoke kernel functions for low-level tasks.

Basic Features

Following are some of the important features of the Linux operating system:

Portable - Portability means that software works in the same way on different types of hardware. The Linux kernel and application programs support installation on any kind of hardware platform.

Open Source - Linux source code is freely available, and Linux is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system, meaning multiple users can access system resources like memory and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system. It can be used for various types of operations, to call application programs, etc.


Security - Linux provides user security via authentication features like password protection, controlled access to specific files, and encryption of data.

Architecture: The Linux system architecture consists of the following layers:

                                            Hardware layer - Hardware consists of all peripheral devices (RAM HDD CPU etc)

Kernel - Core component of the operating system; interacts directly with the hardware and provides low-level services to upper-layer components.

Shell - An interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System

1. Extensibility (layered architecture): The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.

2. Portability: XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3. Reliability: XP uses hardware protection for virtual memory and software protection mechanisms for operating-system resources.

4. Compatibility: applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.

5. Performance: XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. It is designed for symmetric multiprocessing.

6. International support: supports different locales via the national language support (NLS) API.

XP Architecture:
1. Layered system of modules.
2. Protected mode: hardware abstraction layer (HAL), kernel, executive.
3. User mode: collection of subsystems.
   a. Environmental subsystems emulate different operating systems.
   b. Protection subsystems provide security functions.

                                            XP architecture


Q) Windows Network Operating System. Example: the Windows 2000 operating system.

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne, Operating System Concepts, Reading: Addison-Wesley.

Visit my blog: enthusiaststudent.blogspot.in, mtechmessenger.blogspot.in



• 10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

                                              ==============================================================================

Q) Demand paging: Demand paging is a technique of loading pages from disk into main memory only when a page is needed. Hence, using a smaller amount of physical memory, we get faster response.

Demand paging is similar to PAGING + SWAPPING.

Instead of swapping in a whole process, the pager brings into memory only those pages that are needed.

To distinguish between pages that are in memory and pages that are on disk, a valid/invalid bit is used.

A valid/invalid bit is attached to each page-table entry. When this bit is set to valid, the page is legal and in memory. When it is set to invalid, either the page is illegal or the page is on disk.

When the CPU generates a logical address whose page-table entry is set to invalid, a page fault occurs; the required page is on disk. A section of the hard disk called swap space is used to hold pages that are not present in memory.

                                              Q) Procedure for Handling a Page Fault

1) The CPU generates a logical address; if the valid/invalid bit for the page containing this address is set to invalid, a page fault occurs.

2) The page fault causes a trap to the operating system.

3) Check whether the logical address is within the logical address space of the process (i.e., the PTBR and PTLR are checked):

if (the logical address is not within the logical address space of the process)
    terminate the process;
else
    the page is legal but not in memory; it is on disk.

4) To bring the required page into memory, we need to find a free main-memory frame, map the address to a disk block, fetch the disk block, and load it into the free frame. 5) When the required page has been brought into memory, update the page table to indicate that the page is in memory. 6) Restart the instruction that was interrupted by the trap. The process continues as if the page had always been in memory.
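The steps above can be sketched as a toy simulation (all names are hypothetical; real MMUs raise the trap in hardware, and this sketch assumes enough free frames, so no replacement is needed):

```python
DISK = {0: "page0-data", 1: "page1-data", 2: "page2-data"}  # swap space
frames = {}            # frame number -> page contents
page_table = {}        # page -> (valid_bit, frame)
free_frames = [0, 1]   # two physical frames available
faults = 0

def access(page):
    """Return the frame holding `page`, servicing a page fault if needed."""
    global faults
    if page not in DISK:
        raise MemoryError("illegal address: terminate process")  # step 3
    valid, frame = page_table.get(page, (False, None))
    if valid:
        return frame                      # valid bit set: no fault
    faults += 1                           # steps 1-2: trap to the OS
    frame = free_frames.pop()             # step 4: find a free frame
    frames[frame] = DISK[page]            # fetch the block from disk
    page_table[page] = (True, frame)      # step 5: update the page table
    return frame                          # step 6: restart the instruction

access(0); access(1); access(0); access(1)
print(faults)   # 2: only the first touch of each page faults
```

Repeated accesses hit in the page table, matching the "continue as if the page had always been in memory" behaviour.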

Q) What is pure demand paging? Ans: If no page belonging to the executing process is in main memory, the process will fault for every page it needs; page faults occur until every needed page is in memory. This scheme is known as pure demand paging: never bring a page in until it is required.

Q) Page Replacement
1. Find the location of the desired page on the disk.
2. Find a free frame:
   i) If there is a free frame, use it.
   ii) If there is no free frame, use a page-replacement algorithm to select a victim frame holding a page that is not really in use.
   iii) If the modify bit associated with the victim frame is set to 1, write the page to disk; otherwise there is no need to write it to disk. Change the valid/invalid bit of the victim page's page-table entry to invalid, to indicate that the page is no longer in memory.
3. Read the desired page into the freed frame and update the frame and page tables.
4. Restart the user process.

=================================================================================================
Q) Page Replacement Algorithms. Goal: produce a low page-fault rate. The algorithms are evaluated by running them on a particular string of memory references (a reference string) and computing the number of page faults on that string.

1. First-In First-Out (FIFO) page replacement: uses the time when a page was brought into memory; the page that was brought in first, i.e., the oldest page, is chosen for replacement. We create a FIFO queue to hold all pages in memory.

When a page is brought into memory, we insert it at the tail of the queue, and we replace the page at the head of the queue.

FIFO: 15 page faults (for the example reference string above).

A problem with FIFO is Belady's anomaly. For the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5:
When the number of frames allocated to the process is 1, the number of page faults under FIFO replacement = 12.
When the number of frames is 2, the number of page faults = 12.
When the number of frames is 3, the number of page faults = 9.
When the number of frames is 4, the number of page faults = 10.
As the number of frames allocated to a process increases, page faults should decrease; but with FIFO replacement, page faults can increase.

2. Optimal Page Replacement Algorithm: replace the page that will not be used for the longest period of time. It is used for measuring how well other algorithms perform. It is not practical to implement OPT, as it is not possible to know what the future references will be. Ex:

Optimal page replacement: 9 page faults.
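The FIFO fault counts quoted above, including Belady's anomaly, can be verified with a short simulation:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    frames, fifo, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                    # hit
        faults += 1
        if len(frames) == nframes:      # replace the oldest page
            frames.discard(fifo.popleft())
        frames.add(page)
        fifo.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print([fifo_faults(refs, n) for n in (1, 2, 3, 4)])  # [12, 12, 9, 10]
```

Going from 3 frames to 4 raises the fault count from 9 to 10, which is exactly Belady's anomaly.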

3. Least Recently Used (LRU) page replacement: replace the page that has not been used for the longest period of time. LRU makes use of the time of last use of each page. Ex:

                                              Number of page faults = 12

LRU requires hardware assistance to determine the time of last use of a page. We can use 1) a stack or 2) a counter to implement the LRU page-replacement algorithm.

1) Stack implementation: keep a stack of page numbers as a doubly linked list. When a page is referenced, it is moved to the top of the stack. This implementation requires up to 6 pointers to be changed per reference.

2) Counter implementation: a time-of-use field is added to each page-table entry, and the CPU maintains a counter that is incremented on every page reference. On a reference, the counter value is copied into the time-of-use field of the referenced page; the victim is the page with the smallest time-of-use value.
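The counter implementation can be sketched in Python. The reference string used here is the classic textbook example (an assumption: the figure it comes from is not reproduced in these notes), which yields the 12 faults quoted above for 3 frames:

```python
def lru_faults(refs, nframes):
    """Counter-style LRU: each resident page records the time of its last use."""
    time_of_use, clock, faults = {}, 0, 0
    for page in refs:
        clock += 1                         # counter incremented per reference
        if page not in time_of_use:
            faults += 1
            if len(time_of_use) == nframes:
                # victim = page with the smallest time-of-use value
                victim = min(time_of_use, key=time_of_use.get)
                del time_of_use[victim]
        time_of_use[page] = clock          # copy counter into time-of-use field
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12
```

The `min` scan over time-of-use values mirrors the hardware search for the oldest timestamp, which is exactly why pure LRU needs hardware assistance to be fast.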

Disadvantages of LRU page replacement: the counter or stack must be updated on every memory reference, so LRU is slow, and its implementation requires hardware assistance.
==============================================================================
Q) LRU approximation page replacement
1) Using a reference bit
2) Additional-reference-bits algorithm
3) Second-chance algorithm
4) Enhanced second-chance algorithm
5) Counting-based page replacement

Counting-based replacement has two schemes: 1) Least Frequently Used (LFU) and 2) Most Frequently Used (MFU).

6) Page-buffering algorithms

Q) What is Thrashing? Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur whether global or local page replacement is used, as explained below. The solution to thrashing is the working-set strategy: if we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know in advance how many frames a process needs, we use the locality model of process execution.

Here a locality is a set of pages that are actively used together. The locality model states that, as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes locality. The working-set model uses a working set and a parameter ∆, the working-set window. The set of pages in the most recent ∆ page references is the working set. Let WSSi be the working-set size for process Pi, i.e., Pi needs WSSi frames. Let D be the total demand for frames; then D = Σ WSSi,

and let m be the total number of available frames.

If D > m, thrashing occurs because some processes will not have enough frames. So:

if (D > m)
    the OS selects a process to suspend;
else
    the OS may initiate another process.

The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.
========================================================================
Q) How does thrashing occur in global and local page replacement? Ans: In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, possibly causing a page fault in some other process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more and more processes queue up for the paging device,

the READY queue empties and the CPU becomes idle. So the OS introduces new processes because the CPU is free, and this further increases the number of page faults.

In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs, only the frames allocated to that process can be replaced. When a process starts thrashing, other processes are not affected. But since processes still queue for the paging device, the average service time for a page fault increases; hence the effective access time increases even for a process that is not thrashing.

Q) Page Table Structure (structure of the page table in memory)

1. Hierarchical paging
2. Hashed page tables
3. Inverted page tables

Hierarchical Paging: If the page table is large, we break it up into multiple page tables, i.e., the page table itself is paged. A simple technique is the two-level page table.

Two-level paging example: A logical address (on a 32-bit machine with a 4K page size) is divided into: 1. a page number consisting of 20 bits, and 2. a page offset consisting of 12 bits. Since the page table is paged, the page number is further divided into: 1. a 10-bit outer page number p1, and 2. a 10-bit displacement p2. Thus a logical address is as follows:

page number | page offset
  p1  |  p2  |  d
  10  |  10  |  12

where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table.

Two-Level Page-Table Scheme
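The 10/10/12 split above is just bit masking and shifting; a sketch (the example address is arbitrary):

```python
PAGE_OFFSET_BITS = 12   # 4K pages -> 12-bit offset d
P2_BITS = 10            # index within a page of the outer page table
# the remaining high bits form p1, the outer page-table index

def split(addr):
    """Split a 32-bit logical address into (p1, p2, d)."""
    d = addr & ((1 << PAGE_OFFSET_BITS) - 1)
    p2 = (addr >> PAGE_OFFSET_BITS) & ((1 << P2_BITS) - 1)
    p1 = addr >> (PAGE_OFFSET_BITS + P2_BITS)
    return p1, p2, d

print(split(0x12345678))   # (72, 837, 1656)
```

Translation then uses p1 to index the outer table, p2 to index the inner table it points to, and concatenates the resulting frame number with d.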

                                              Address-Translation Scheme Address-translation scheme for a two-level 32-bit paging architecture

Hashed Page Tables (or hash table): Hashed page tables are common in address spaces larger than 32 bits. Each entry in the hash table contains a linked list of elements that hash to the same location; the hash table is indexed by hash value. Each element consists of 3 fields:

1. page number
2. frame number
3. pointer to the next element in the linked list

The algorithm works as follows: the page number is passed to a hash function to obtain a hash value. Locating the bucket is easy, as the hash table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address; if not, subsequent entries in the linked list are searched.
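A sketch of that lookup with chained buckets (bucket count, pages, and frame numbers are made up for illustration; a list stands in for the linked list):

```python
NBUCKETS = 8
table = [[] for _ in range(NBUCKETS)]   # each bucket: chain of (page, frame)

def insert(page, frame):
    table[hash(page) % NBUCKETS].append((page, frame))

def lookup(page):
    """Walk the chain for this hash value, comparing page numbers."""
    for p, frame in table[hash(page) % NBUCKETS]:
        if p == page:
            return frame
    return None        # not resident: page fault

insert(0x42, 7)
insert(0x42 + NBUCKETS * 1024, 9)   # lands in the same bucket: a collision
print(lookup(0x42), lookup(0x999))  # 7 None
```

The second insert shows why each bucket must be a chain: distinct pages can hash to the same slot, and only the page-number comparison disambiguates them.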

                                              Inverted Page Table

An inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table has:

process-id, page number

A logical address also has 3 parts: process-id, page number, offset.

The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, then i (the frame number) is combined with the offset to form the physical address. If there is no match, the address is illegal.

Advantages and disadvantages

                                              Decreases memory needed to store each page table Increases time needed to search the table when a page reference occurs Solution Use hash table to

                                              limit the search to one mdash or at most a few mdash page-table entries Inverted Page Table Architecture
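The lookup described above can be sketched as a linear scan over one entry per frame (table contents and page size are made up for the demo):

```python
PAGE_SIZE = 4096

# One entry per physical frame; the index IS the frame number.
inverted_table = [("P1", 0), ("P2", 0), ("P1", 1)]

def translate(pid, page, offset):
    """Return the physical address for (pid, page, offset), or raise."""
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, page):
            # matching index i = frame number; combine with offset
            return frame * PAGE_SIZE + offset
    raise ValueError("illegal address")

print(translate("P1", 1, 100))   # frame 2 -> 2*4096 + 100 = 8292
```

The linear scan is exactly the cost the hash-table refinement is meant to avoid.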

                                              Unit-IV CPU Scheduling

Basic concepts: To maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns it to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle: Process execution begins with a CPU burst, followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

An I/O-bound program has many short CPU bursts. A CPU-bound program has a few long CPU bursts.

Schedulers: modules in the OS that make scheduling decisions.
1. Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
2. Medium-term scheduler: selects which processes should be swapped in/out of memory.
3. Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term scheduler, also called the CPU scheduler.

Dispatcher: gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

1. switching context
2. switching to user mode
3. jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency. This must be kept small.
Ready Queue: The ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of processes.

                                              Pre-emptive Scheduling

CPU scheduling decisions take place under one of four conditions:

1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state, for example in response to an interrupt
3. When a process switches from the waiting state to the ready state, say at completion of I/O
4. When a process terminates

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one. If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive, or cooperative. In non-preemptive scheduling, a process runs until it gives up the CPU itself, i.e. when it waits for I/O or when it completes. Otherwise, the system is said to be pre-emptive.

Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to Mac OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

Note that pre-emptive scheduling can cause problems when two processes share data, because one process may be interrupted in the middle of updating shared data structures.

Preemption is also a problem if the kernel is busy executing a system call (e.g. updating critical kernel data structures) when the preemption occurs. Solution: wait until the system call has either completed or blocked before allowing the pre-emption. But this solution is problematic for real-time systems.

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. But this should only be done for very short pieces of code that take little CPU time.

Scheduling criteria used to compare CPU scheduling algorithms:
1. CPU utilization: in a real system, CPU utilization ranges from 40% to 90%.
2. Throughput: the number of processes completed per unit time.
3. Turnaround time: the time when the process completed minus the time when the process was submitted; i.e. it includes waiting time and burst time.
4. Waiting time: the amount of time a process spends waiting for the CPU.
5. Response time: the time when the first response came minus the time when the process was submitted.

Optimization criteria: max CPU utilization, max throughput, min turnaround time, min waiting time, min response time.

CPU Scheduling Algorithms
1) First-Come, First-Served (FCFS) Scheduling: FCFS is non-pre-emptive. Processes are scheduled in the order they arrive. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages:
1. Average waiting time is quite long.
2. Convoy effect (small processes wait in the queue for a big process to leave the CPU).
Example:
Process | Burst Time
P1 | 24
P2 | 3
P3 | 3

Suppose that the processes arrive in the order P1, P2, P3.

The Gantt chart for the schedule is: P1 | P2 | P3

0 24 27 30. Waiting time: P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.
2) Shortest-Job-First (SJF) Scheduling: schedule the process with the shortest burst time. If the CPU burst times of two processes are the same, FCFS is used.
Advantages: average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either pre-emptive or non-pre-emptive:
1. Non-pre-emptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Pre-emptive SJF, or Shortest-Remaining-Time-First (SRTF): if a newly arrived process is shorter than what remains of the currently executing process, pre-empt the currently executing process.

Example:
Process | Arrival Time | Burst Time
P1 | 0.0 | 7
P2 | 2.0 | 4
P3 | 4.0 | 1
P4 | 5.0 | 4

SJF (non-preemptive) Gantt chart: P1 | P3 | P2 | P4

0 7 8 12 16. Average waiting time = [0 + (8-2) + (7-4) + (12-5)]/4 = 4
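The two averages above (17 for FCFS, 4 for non-pre-emptive SJF) can be reproduced with a small simulation sketch, using the process data from the examples:

```python
def fcfs(procs):
    """procs: list of (name, burst), all arriving at time 0."""
    t, wait = 0, {}
    for name, burst in procs:
        wait[name] = t          # waits until everything before it finishes
        t += burst
    return wait

def sjf(procs):
    """Non-pre-emptive SJF. procs: list of (name, arrival, burst)."""
    procs, t, wait = list(procs), 0, {}
    while procs:
        # among processes that have arrived, pick the shortest burst
        ready = [p for p in procs if p[1] <= t] or [min(procs, key=lambda p: p[1])]
        name, arrival, burst = min(ready, key=lambda p: p[2])
        t = max(t, arrival)
        wait[name] = t - arrival
        t += burst
        procs.remove((name, arrival, burst))
    return wait

w1 = fcfs([("P1", 24), ("P2", 3), ("P3", 3)])
print(sum(w1.values()) / 3)     # -> 17.0

w2 = sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(sum(w2.values()) / 4)     # -> 4.0
```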

Example of pre-emptive SJF:

Process | Arrival Time | Burst Time
P1 | 0.0 | 7
P2 | 2.0 | 4
P3 | 4.0 | 1
P4 | 5.0 | 4

SJF (preemptive) Gantt chart: P1 | P2 | P3 | P2 | P4 | P1

0 2 4 5 7 11 16. Average waiting time = (9 + 1 + 0 + 2)/4 = 3.
3) Priority Scheduling: a priority number (an integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the longer the CPU burst, the lower the priority. There are two schemes: 1. preemptive 2. non-preemptive.

Problem: starvation; low-priority processes may never execute. Solution: aging; as time progresses, increase the priority of waiting processes.
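The pre-emptive SJF (SRTF) example above can be checked with a unit-time simulation sketch, always running the ready process with the least remaining burst:

```python
def srtf(procs):
    """procs: {name: (arrival, burst)}. Returns waiting time per process."""
    remaining = {n: b for n, (a, b) in procs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= t]
        if not ready:
            t += 1                      # CPU idle until next arrival
            continue
        cur = min(ready, key=lambda n: remaining[n])
        remaining[cur] -= 1             # run one time unit
        t += 1
        if remaining[cur] == 0:
            finish[cur] = t
            del remaining[cur]
    # waiting time = turnaround - burst
    return {n: finish[n] - procs[n][0] - procs[n][1] for n in procs}

w = srtf({"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)})
print(w, sum(w.values()) / 4)   # -> {'P1': 9, 'P2': 1, 'P3': 0, 'P4': 2} 3.0
```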

4) Round Robin (RR): each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance: 1. If q is large, RR degenerates to FIFO. 2. If q is small, q must still be large with respect to the context-switch time, otherwise the overhead is too high.
Example of RR with time quantum = 4:
Process | Burst Time
P1 | 24
P2 | 3
P3 | 3

The Gantt chart is: P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1

0 4 7 10 14 18 22 26 30. Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 = 5.66
5) Multilevel Queue Scheduling: the ready queue is partitioned into separate queues. For example, the ready queue is divided into 2 queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, the Round Robin scheduling algorithm can be used in the foreground queue and the FCFS scheduling algorithm in the background queue. Scheduling must also be done between the queues. This can be done in 2 ways:
1. Fixed-priority scheduling: the foreground queue has the highest priority. All the processes in the foreground queue must be completed and the queue must be empty before the processes in the background queue are given the CPU, as shown in the figure below. Disadvantage: starvation.
2. Time slice: each queue gets a certain amount of CPU time which it can schedule amongst its processes, e.g. 80% to the foreground queue in RR and 20% to the background queue in FCFS.

6) Multilevel Feedback Queue Scheduling: a process can move between queues. The idea is to separate processes according to their CPU bursts. I/O-bound and interactive processes will be in the highest-priority queue. If a process takes more CPU time, it is moved to a lower-priority queue. If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues: Q0, Q1, Q2.

1. Q0: time quantum 8 milliseconds
2. Q1: time quantum 16 milliseconds
3. Q2: FCFS

Scheduling:

1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 milliseconds. If it does not finish in 8 milliseconds, the process is moved to queue Q1.
2. At Q1 the process receives an additional 16 milliseconds. If it still does not complete, it is moved to queue Q2.
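A highly simplified sketch of these rules, assuming all processes arrive at time 0 and ignoring the preemption of lower queues by new arrivals (which a real MLFQ scheduler would add):

```python
from collections import deque

def mlfq(procs):
    """procs: list of (name, total_cpu_need). Returns (name, queue, slice) runs."""
    queues = [deque(procs), deque(), deque()]
    quanta = [8, 16, None]          # None -> run to completion (FCFS in Q2)
    order = []
    for level in range(3):
        while queues[level]:
            name, need = queues[level].popleft()
            q = quanta[level]
            slice_ = need if q is None else min(q, need)
            order.append((name, level, slice_))
            if need > slice_:       # quantum exhausted: demote to next queue
                queues[level + 1].append((name, need - slice_))
    return order

print(mlfq([("A", 5), ("B", 30)]))
# A finishes within Q0's 8 ms quantum; B uses 8 ms in Q0,
# 16 ms in Q1, and its last 6 ms in Q2
```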

Q) Synchronization
Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of process synchronization is the Producer-Consumer (or Bounded-Buffer) problem: make sure that the producer process does not add data when the buffer is full, and that the consumer process does not take data when the buffer is empty.

The code for the producer process can be modified as follows:

while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed from the buffer. in is an index that always points to the next empty slot after the last filled slot; out is an index that points to the first filled slot, at the head of the circular queue.

The code for the consumer process is as shown below:

while (true) {
    while (counter == 0)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The above producer and consumer code produces a RACE CONDITION: a situation where the outcome of the execution of several co-operating processes depends on the order in which accesses to shared data take place. For example, take counter = 5.

Given counter = 5, if the producer produces one item and the consumer consumes one item, the correct value of counter is still 5. But if the machine instructions implementing counter++ and counter-- interleave, counter can end up as 4, an inconsistent result; and if instruction T5 is executed before T4, counter ends up as 6, also inconsistent.

Solution to avoid the race condition: make sure that only one process at a time updates the shared variable (here, counter).
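As an illustration of this fix in Python (rather than the C-like pseudocode above), a lock serializes the updates so an increment and a decrement can never interleave mid-update; the thread structure and counts are made up for the demo:

```python
import threading

counter = 0
lock = threading.Lock()

def producer(n):
    global counter
    for _ in range(n):
        with lock:            # entry section: acquire the lock
            counter += 1      # critical section: update shared variable
        # leaving the with-block releases the lock (exit section)

def consumer(n):
    global counter
    for _ in range(n):
        with lock:
            counter -= 1

t1 = threading.Thread(target=producer, args=(100_000,))
t2 = threading.Thread(target=consumer, args=(100_000,))
t1.start(); t2.start(); t1.join(); t2.join()
print(counter)   # -> 0: equal numbers of protected increments and decrements
```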

Q) Critical section problem
Ans: Each process has a segment of code called its critical section. Critical-section code is code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical-section code at the same time. For this to happen:
1. Each process must request permission to enter its critical section (the entry section).
2. The end of the critical section is the exit section.
3. The rest of the code after the critical section is the remainder section.

A solution to the critical section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes that are not in their remainder sections take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: every process that requests entry must eventually be allowed to enter its critical section; no process may wait indefinitely.

Q) Peterson's solution to the critical section problem (software-based solution)
Ans: Two processes share two variables: a flag array (flag[i] = TRUE means process i wants to enter its critical section) and turn (whose value says which process's turn it is). A sample interleaving:

Time | Process 1 | Process 2
T0 | do { flag[1] = TRUE; turn = 2; | do { flag[2] = TRUE;
T1 | | turn = 1;
T2 | while (flag[2] && turn == 2); since turn == 1, the loop is not entered | while (flag[1] && turn == 1); loops continuously until flag[1] becomes FALSE
T3 | enters critical section |
T4 | flag[1] = FALSE; | enters critical section
T5 | enters remainder section; } while (TRUE); | flag[2] = FALSE; enters remainder section; } while (TRUE);
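A runnable sketch of Peterson's algorithm for two threads (indices 0 and 1). It relies on Python's GIL providing effectively sequential memory behaviour; on real hardware with instruction reordering, memory barriers would be needed:

```python
import threading

flag = [False, False]
turn = 0
count = 0          # shared variable updated only inside the critical section
N = 1000

def worker(i):
    global turn, count
    other = 1 - i
    for _ in range(N):
        flag[i] = True              # entry section: announce intent
        turn = other                # politely give the other thread priority
        while flag[other] and turn == other:
            pass                    # busy wait
        count += 1                  # critical section
        flag[i] = False             # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(count)   # -> 2000: every increment happened under mutual exclusion
```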

10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

File Attributes

1. Name: name of the file, for user understanding.
2. Identifier: used by the OS to identify a file (e.g. inode number).
3. Type: text files, executable files, binary files, etc.
4. Location: location of the file on the hard drive.
5. Size: current size of the file.
6. Protection: controls who can read, write, or execute the file.
7. Time, date, and user identification: when the file was created, last modified, and last used, etc. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. The directory structure is kept on disk.

File Operations

The file abstract data type supports these common operations:
1. Creating a file: find free space on disk, add an entry to the directory.
2. Writing a file: write data at the current file-position pointer location and update the pointer.
3. Reading a file: read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek): change the pointer location to a given value.
5. Deleting a file: free the space allocated to the file and remove its entry from the directory.
6. Truncating a file: delete the data and update the file size.

Open File Table

Most OSes require that files be opened before access and closed after access. Information about all files that are currently open in the system is stored in an open file table. This system-wide open file table is maintained by the OS, and information about the files opened by a process is stored in that process's open file table. The open file table has the following information:

1. File pointer: records the current position in the file for the next read or write.
2. File-open count: how many times the current file has been opened (simultaneously, by different processes) and not yet closed. When this counter reaches zero, the file's entry can be removed from the table.
3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With a mandatory lock, the OS makes sure that locks are acquired and released appropriately.
4. With an advisory lock, software developers make sure that locks are acquired and released appropriately.
5. UNIX uses advisory locks and Windows uses mandatory locks.

File Types

Implementing file types: a common technique is to include the type as part of the file name. The name is split into 2 parts:

1. File name
2. Extension

File name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on that file. When a user opens Microsoft Word, clicks the File menu and chooses Open, the user specifies only the file name without the extension; the application looks for a file with the given name and the extension it expects. Extensions are thus hints used by applications rather than something enforced by the operating system. UNIX systems use a crude magic number stored at the beginning of the file to indicate the type of the file. In the Mac OS X operating system, a file produced by a word processor has the word processor's name recorded as its creator; when the user double-clicks on the file, that word processor opens it. Some common file types are shown below.

File Structure: Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and what the location of the first instruction is.

If an OS supports many file formats, that increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, and for which it must find the first executable statement, etc.).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

Internal File Structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units such as a single byte, a data record, or a structure.

The number of logical units which fit into one physical block determines the packing. If a student file is organized internally as records, each record takes 64 bytes, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e. 8 records can be stored per block.

Internal fragmentation may occur because of this packing.

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: we cannot randomly access an arbitrary record. Records are read and written sequentially, one record after another.

A sequential-access file emulates magnetic-tape operation and generally supports the following operations:
read next - read a record and advance to the next position
write next - write a record and advance to the next position
rewind
skip n records - may or may not be supported; n may be limited to positive numbers or to +/- 1

2. Direct access (or relative access): a file is made up of fixed-length records, so we can randomly access any record. There is no restriction on the order of reading or writing.

Direct-access files are useful when we need to access large amounts of data randomly, e.g. a database.

Operations supported include (n is the relative block number, i.e. relative to the beginning of the file):

1. read n - read record number n
2. write n - write record number n
3. jump to record n - n could be 0 or the end of file
4. query current record - used to return to this record later
5. Sequential access can easily be emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct access method. We construct an index for the file, which contains pointers to the various blocks. To search the file, we first access the index and then use the pointer to access the file.

If a student file has 120,000 records sorted by student id, we can create an index consisting of the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120,000 records = 120000/64 = 1875, roughly 2000 blocks, so the index has about 2000 entries. The index is kept in main memory so that searching becomes faster.

If the index itself becomes large, we create an index for the index file (e.g. a B-tree).
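The index-then-scan lookup above can be sketched as follows; the student ids and block contents are hypothetical, generated just for the demo:

```python
from bisect import bisect_right

RECORDS_PER_BLOCK = 64
# Hypothetical sorted data: student ids 0, 10, 20, ... packed 64 per block.
blocks = [[(i * RECORDS_PER_BLOCK + j) * 10 for j in range(RECORDS_PER_BLOCK)]
          for i in range(30)]
index = [blk[0] for blk in blocks]   # in-memory index: first key of each block

def find(student_id):
    """Return (block, slot) for student_id, or None if absent."""
    b = bisect_right(index, student_id) - 1   # binary search on the index
    if b >= 0 and student_id in blocks[b]:    # one "disk read", then scan
        return (b, blocks[b].index(student_id))
    return None

print(find(1290))   # id 1290 is record 129 -> block 2, slot 1
```

Only the small index is searched in memory; a single data block is then read, instead of scanning the whole file.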

                                              Q) Allocation Methods

Space must be allocated to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                                              Three major methods of allocating disk space are

                                              1 Contiguous 2 Linked

                                              3 Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If a file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file indicates the address of the starting block (i.e. b) and the length (n).

Advantages

1. For direct access to block i of a file, we can compute its location directly as b+i. Since the file is stored sequentially, sequential access is easy too. Hence access is easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

                                              Disadvantages

1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape and then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire space may not be used by the file at the beginning, which leads to wasted space.

If the size of the file is not known in advance, then (i) if we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This is possible only as long as such a hole exists, and it is time consuming. (ii) If we overestimate the amount of space needed, much of the space goes unused; this leads to internal fragmentation.

To minimize these drawbacks, some OSes use a modified contiguous allocation scheme: a run of contiguous blocks is allocated at first, and if the file needs more blocks, another set of contiguous blocks, known as an extent, is added. The directory entry then records the location and block count of the initial run, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2. Linked allocation: each file is a linked list of disk blocks, and the disk blocks can be scattered anywhere on the disk. The directory contains a pointer to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512-4 = 508 bytes. To create a new file, we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0.

To add more data to an existing file, a free block is found, the data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. It is easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is high.
2. The number of disk seeks required is high.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct-access files: to access the ith block, i disk reads are required.
5. Not reliable: pointers may be lost or damaged.

The solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk accesses, reduces the space wasted on pointers (fewer pointers are needed), and simplifies free-list management. Disadvantage: internal fragmentation; more space is wasted when a cluster is partially full.

The solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file-allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. The entry for a block contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Random access time is improved.

Disadvantage: the number of disk seeks is high; the disk head must move to read the FAT and find the location of a block, then move to the location of the block itself.
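A FAT chain traversal can be sketched as follows, reusing the 5-block example file (9, 16, 1, 10, 25) from the linked-allocation discussion; the table size and EOF marker are chosen for the demo:

```python
EOF = -1            # sentinel for "last block of the file"
fat = [0] * 32      # 0 marks a free (unused) block
# A hypothetical file starting at block 9: 9 -> 16 -> 1 -> 10 -> 25
fat[9], fat[16], fat[1], fat[10], fat[25] = 16, 1, 10, 25, EOF

def file_blocks(start):
    """Follow the FAT chain from the file's starting block."""
    chain, b = [], start
    while b != EOF:
        chain.append(b)
        b = fat[b]      # each entry names the next block of the file
    return chain

print(file_blocks(9))   # -> [9, 16, 1, 10, 25]
```

Note that the chain lives entirely in the FAT, not in the data blocks themselves, which is why random access improves over plain linked allocation.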

3. Indexed allocation

Each file has its own index block, an array of disk-block addresses. The ith entry in the index block points to the ith block of the file. The directory contains the address of the index block.

When the file is created, all pointers in the index block are set to nil.

To add more data to an existing file, a free block is found, the data is written to it, and the address of the block is put in the index block.

                                              Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

Disadvantages

1. Seek time is high.
2. The number of disk seeks required is high.
3. Not reliable.
4. The index block should be kept in memory. If this memory is not available, we have to read the index block and then the desired data block, i.e. 2 disk accesses are required. To access a block near the end of a large file, we may need to read a whole chain of index blocks before the needed data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

                                              Various mechanisms followed for size of index block

1 linked scheme: the size of an index block is one disk block. The index block contains a small header with the name of the file and a set of the first 100 disk-block addresses. The last address contains the address of another index block (used for large files)

2 multilevel index: to access a block, the OS uses the first-level index block to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level

                                              3 combined scheme

For example, say there are 15 pointers of the index block in the file's inode. The first 12 of these pointers hold addresses of blocks that contain file data. The next 3 pointers point to indirect blocks. The first points to a single indirect block, which is an index block containing addresses of the file's data blocks

The second points to a double indirect block and the third to a triple indirect block, as shown below
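The maximum file size the combined scheme supports can be worked out with a little arithmetic. This is a sketch under assumed parameters (4 KB blocks and 4-byte block addresses, which are not stated in the notes):

```python
# Back-of-envelope maximum file size for the combined (inode) scheme above,
# assuming 4 KB disk blocks and 4-byte block addresses.
block = 4 * 1024
ptrs_per_block = block // 4           # 1024 addresses fit in one index block

direct = 12 * block                   # 12 direct pointers
single = ptrs_per_block * block       # one single indirect block
double = ptrs_per_block ** 2 * block  # one double indirect block
triple = ptrs_per_block ** 3 * block  # one triple indirect block

max_file = direct + single + double + triple
print(max_file)   # a little over 4 TB under these assumptions
```

Under these assumptions the direct pointers alone cover 48 KB, the single indirect block adds 4 MB, the double indirect 4 GB, and the triple indirect about 4 TB.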

                                              Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file

                                              The free space list can be implemented as

1 Bit vector (or) bit map
2 Linked list
3 Grouping
4 Counting
5 Space maps

1 Bit vector (or) bit map: each block is represented by 1 bit. If the block is free the bit is 1, and if the block is allocated the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. The free-space bit map would be 001111001000

                                              Advantages simple and easy to get contiguous files

                                              Disadvantages

1 The bit map requires extra space
2 The bit map is inefficient if kept on disk. Bit maps of smaller disks can be kept in main memory, but it may not be possible to keep the bit map of a larger disk in main memory
3 The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk
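The bit-vector example above can be sketched directly (a small illustrative model; the helper names are assumptions):

```python
# Bit-vector free-space management for the example above: bit i is 1 when
# block i is free. Allocation scans for the first set bit and clears it,
# which is why every allocate/free must update the map.
bitmap = [int(b) for b in "001111001000"]   # blocks 2, 3, 4, 5, 8 free

def find_free(bm):
    """Return the number of the first free block, or None if disk is full."""
    for i, bit in enumerate(bm):
        if bit == 1:
            return i
    return None

def allocate(bm):
    """Allocate the first free block and mark it used in the map."""
    i = find_free(bm)
    if i is not None:
        bm[i] = 0
    return i
```

Scanning this map first returns block 2; after allocating it, the next free block is 3, which also shows why contiguous runs are easy to spot in a bit map.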

2 Linked list: the free-space list is implemented by linking all the free disk blocks together. A pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: cannot get contiguous space easily, and to traverse the free-space list we must read each block, which requires more I/O time

3 Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last block contains the addresses of another n free blocks, and so on.
Advantage: a large number of free blocks can be found quickly.
4 Counting: the free-space list can be maintained by keeping the address of the first free block and the number (n) of free contiguous blocks that follow it

Each entry in the free-space list thus consists of a disk address and a count. Each entry requires more space, and the list is shorter only if counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion and search
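The (address, count) entries of the counting scheme can be derived from a set of free block numbers; this illustrative helper applies it to the running example (blocks 2, 3, 4, 5 and 8 free):

```python
# Build counting-scheme entries: each entry is (first free block, number of
# contiguous free blocks starting there).
def count_runs(free_blocks):
    entries = []
    for b in sorted(free_blocks):
        if entries and b == entries[-1][0] + entries[-1][1]:
            start, count = entries[-1]
            entries[-1] = (start, count + 1)   # extend the current run
        else:
            entries.append((b, 1))             # start a new run
    return entries

print(count_runs({2, 3, 4, 5, 8}))   # [(2, 4), (8, 1)]
```

Five free blocks collapse to just two entries, which is the space saving the counting scheme is after.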

5 Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks; the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it. Finally, the free-space list is updated on disk.

Synchronization Hardware
The hardware solution to synchronization is to provide atomic operations. These operations operate as a single instruction, without interruption. Two such operations are
1 TestAndSet() hardware instruction
2 Swap() instruction

                                              1 Test and Set instruction is as shown

Solution using TestAndSet: the shared variable lock is initialized to false

                                              2 Swap() instruction definition is as follows

Solution using Swap: the shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately do not guarantee bounded waiting. If there are multiple processes trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn in the critical section. (Since there is no guarantee as to the relative rates of the processes, a very fast process could theoretically release the lock, whip through its remainder section and re-lock the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

                                              Below figure illustrates a solution using test-and-set that satisfies bounded waiting using two shared data structures boolean lock and boolean waiting[ N ] where N is the number of processes in contention for critical sections

                                              Bounded-waiting mutual exclusion with TestAndSet( )

                                              It first looks in an order ( starting with the next process on the list ) for a process that has been waiting and if it finds one then it releases that particular process from its waiting state without unlocking the critical section thereby allowing a specific process into the critical section while continuing to block all the others

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1 wait() 2 signal()

Wait primitive:
wait(S) {
  while (S <= 0)
    ; // do nothing
  S--;
}

Signal primitive:
signal(S) {
  S++;
}

                                              Each process that wishes to use a resource performs wait() operation When a process releases a resource it performs signal() operation

Semaphore Usage
1 Binary semaphores (or mutex locks) can take the values 0 (or) 1

2 Counting semaphores can take any integer value. A counting semaphore is used to count the number of remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter a critical section and use one of the resources; when the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal call.
For example, suppose there are 3 resources and 4 processes. The value of the counting semaphore = 3.
If process P1 requires a resource, it performs a wait() operation; now the counting semaphore = 2.
If process P2 requires a resource, it performs a wait() operation; now the counting semaphore = 1.
If process P3 requires a resource, it performs a wait() operation; now the counting semaphore = 0.
If process P4 requires a resource, it performs a wait() operation; P4 waits until a resource is available
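The 3-resources / 4-processes example can be sketched with Python's threading.Semaphore standing in for the counting semaphore (the bookkeeping lists are illustrative additions, used only to observe the behaviour):

```python
import threading

resources = threading.Semaphore(3)   # counting semaphore initialized to 3
in_use = []                          # which processes currently hold a resource
peak = [0]                           # most resources ever held at once
guard = threading.Lock()             # protects the bookkeeping lists only

def process(name):
    resources.acquire()              # wait(): decrements the count, blocks at 0
    with guard:
        in_use.append(name)
        peak[0] = max(peak[0], len(in_use))
    # ... use the resource ...
    with guard:
        in_use.remove(name)
    resources.release()              # signal(): increments the count

threads = [threading.Thread(target=process, args=(f"P{i}",)) for i in range(1, 5)]
for t in threads: t.start()
for t in threads: t.join()
print(peak[0])   # never exceeds 3: the fourth process had to wait
```

However the four threads interleave, the semaphore guarantees that at most 3 of them hold a resource at the same time.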

                                              3 Semaphores can also be used to synchronize certain operations

Semaphore Implementation: While a process is in its critical section, any other process that tries to enter its critical section must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process spins while waiting. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2

First we create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code

S1;
signal(synch);

and in process P2 we insert the code

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal. Each semaphore has an integer value and maintains a queue of waiting processes
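The S1-before-S2 ordering example above can be run directly, with threading.Semaphore(0) playing the role of synch (the order list is an illustrative addition to observe the result):

```python
import threading

synch = threading.Semaphore(0)   # initialized to zero, as in the example
order = []

def p1():
    order.append("S1")           # statement S1
    synch.release()              # signal(synch)

def p2():
    synch.acquire()              # wait(synch): blocks until P1 signals
    order.append("S2")           # statement S2

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start()                       # even though P2 starts first...
t1.start()
t1.join(); t2.join()
print(order)                     # ['S1', 'S2'] in every interleaving
```

P2 is deliberately started first, yet it still cannot execute S2 early: the acquire blocks until P1's release, so the order is always S1 then S2.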

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e. the process is moved from the waiting queue to the ready state)

Problems due to semaphores
1 Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes, as shown below

2 Starvation: one or more processes gets blocked forever and never gets a chance to enter the critical section. This can happen, for example, if we do not specify the algorithm for adding processes to the waiting queue, or for selecting the one to be removed from the queue in the signal() call. If a LIFO queue is chosen, then the first process that starts waiting may never get a chance

3 Priority inversion: let processes L, M, H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it pre-empts process L, and M runs. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority so that M does not pre-empt L.
=====================================================================
Q) Classical problems on synchronization

1 The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2 The Readers-Writers Problem
3 The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes

1 The Bounded-Buffer Problem. Consider

n buffers, each of which holds one item
a producer process, which creates the items (1 at a time)
a consumer process, which processes them (1 at a time)

The producer process cannot produce an item if all the buffers are full, and the consumer process cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores

                                              1 empty = n (empty semaphore counts the number of empty buffers)2 full=0 (full semaphore counts the number of filled buffers)3 mutex=1 (mutex provides mutual exclusion for access to the buffer)

Producer process:
do {
  ...
  produce an item
  ...
  wait(empty);
  wait(mutex);
  ...
  add item to buffer
  ...
  signal(mutex);
  signal(full);
} while (1);

Consumer process:
do {
  wait(full);
  wait(mutex);
  ...
  remove an item from buffer
  ...
  signal(mutex);
  signal(empty);
  ...
  consume the item
  ...
} while (1);
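The producer and consumer loops above can be sketched as a runnable Python example, with threading.Semaphore standing in for wait/signal (n = 5 and the 20 item values are illustrative choices):

```python
import threading
from collections import deque

n = 5
buffer = deque()
empty = threading.Semaphore(n)   # counts empty buffer slots
full = threading.Semaphore(0)    # counts filled buffer slots
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer
consumed = []

def producer(items):
    for item in items:
        empty.acquire()              # wait(empty)
        mutex.acquire()              # wait(mutex)
        buffer.append(item)          # add item to buffer
        mutex.release()              # signal(mutex)
        full.release()               # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()               # wait(full)
        mutex.acquire()              # wait(mutex)
        consumed.append(buffer.popleft())   # remove an item from buffer
        mutex.release()              # signal(mutex)
        empty.release()              # signal(empty)

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(20)))   # True: nothing lost, nothing duplicated
```

Even though the producer makes 20 items and the buffer holds only 5, the empty/full semaphores throttle both sides: every item passes through exactly once and in order.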

2 The Readers-Writers Problem. A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (i.e. it cannot share the file with either a reader or another writer). Two versions of the readers-writers problem:

                                              readers priority new reader need not wait because a writer is waiting writers priority if a writer is waiting to access the database no new readers can start reading

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve

A semaphore solution to the readers-priority version uses:
1 readcount = 0 (an integer that counts the number of processes currently reading the shared data)
2 mutex = 1 (a semaphore used for mutual exclusion when readcount is updated)
3 wrt = 1 (a semaphore common to both writers and readers)

                                              wrt provides mutually exclusive access to shared data Reader process Writer process

                                              wait(mutex) readcount++ if (readcount == 1) wait(rt) signal(mutex) hellip reading is performed hellip wait(mutex) readcount-- if (readcount == 0) signal(wrt) signal(mutex)

                                              do wait(wrt) hellip writing is performed hellip signal(wrt)

                                              while (TRUE)
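The readers-priority solution above can be sketched in runnable Python. readcount is a plain integer protected by mutex, and wrt gives writers (and the first reader) exclusive access; the shared counter and observed list are illustrative additions used to check the outcome:

```python
import threading

mutex = threading.Semaphore(1)
wrt = threading.Semaphore(1)
readcount = 0
shared = {"value": 0}
observed = []

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()                  # first reader locks out writers
    mutex.release()
    observed.append(shared["value"])   # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()                  # last reader lets writers back in
    mutex.release()

def writer():
    wrt.acquire()
    shared["value"] += 1               # writing is performed, exclusively
    wrt.release()

threads = [threading.Thread(target=writer) for _ in range(5)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(shared["value"])   # 5: no writer update was lost
```

Because every increment happens while holding wrt, all 5 writer updates survive; the readers overlap freely with each other but never with a writer.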

3 The Dining-Philosophers Problem. n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems; it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution: represent each chopstick with a semaphore

                                              semaphore chopstick[5] Initially all values are 1

Philosopher i:
do {
  wait(chopstick[i]);
  wait(chopstick[(i+1) % 5]);
  ...
  eat
  ...
  signal(chopstick[i]);
  signal(chopstick[(i+1) % 5]);
  ...
  think
  ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock.

Q) Types of operating systems

                                              MULTIPROCESSING SYSTEMS has multiple hardware CPUs

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric)


ASYMMETRIC MULTIPROCESSING SYSTEMS In asymmetric multiprocessing (ASMP) all CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system

                                              REAL TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint

                                              Two kinds of real-time systems have evolved

1 A hard real-time system can guarantee that the response requirement will be met under all conditions.
2 A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Ex: digital audio or multimedia systems, digital telephones

                                              DISTRIBUTED SYSTEMS execute parts of a computation in different systems at the same time It uses distributed control ie it spreads its decision-making actions across different computers in the system so that failures of individual computers or the network does not cripple its operation

A distributed operating system appears as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system

                                              Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

                                              DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location Modern desktop systems support multiprogramming Common examples are Linux FreeBSD Windows 8 and the Macintosh operating system

HANDHELD SYSTEMS (palmtop computers) Two of the most popular operating systems for handhelds are Symbian OS and Android OS. Handheld systems have very little memory


                                              CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system The components of a cluster are usually connected to each other through fast local area networks each node (computer used as a server) running its own instance of an operating system

                                              Q) Modern operating system

                                              New developments in hardware applications and security threats lead to the development of modern operating system

                                              New Hardware developments like increased machine speed high speed network increased size and variety of memory devices

                                              New Applications developments like multimedia applications internet and web access and client server computing

                                              Developments in modern operating system can be categorized as

1 Microkernel architecture assigns only a few functions to the kernel (IPC, basic CPU scheduling, etc). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and is well suited to a distributed environment

                                              2 Multithreading a process is divided into threads that can run concurrently Useful for applications that perform a number of independent tasks Example database server that listens and processes many client requests

3 Symmetric multiprocessing As systems now often contain multiple microprocessors, an SMP operating system provides greater efficiency when there are multiple processors. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric)

                                              SMP schedules processes or threads across all of the processors

                                              Advantages

1 Increased performance: if some portion of a program can be run in parallel, the performance of the SMP OS increases.
2 Increased availability: since all processors can perform the same functions, if one processor fails the others continue to work.
3 Incremental growth: one can add a new processor to increase performance.
4 Scaling: cost can depend on the number of processors used

                                              4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5 Object-oriented design adds modularity to the kernel. The OS can be customized without affecting system integrity

Q) Linux Operating System It is open source, as its source code is freely available, and it is free to use. Linux was designed considering UNIX compatibility. Linux is a multi-user, multitasking system. Its main design goals are speed, efficiency and standardization. Components of Linux System


                                              Linux Operating System has primarily three components as shown in the below diagram

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system and interacts directly with the hardware. The kernel hides low-level hardware details from system and application programs

System Library - System libraries are special functions or programs used by application programs to access the Kernel's features

                                              System Utility - System Utility programs are responsible to do specialized individual level tasks like updating log file accepting login requests from terminals etc

                                              Kernel Modules Sections of kernel code that can be compiled loaded and unloaded independent of the rest of the kernel A kernel module may typically implement a device driver a file system or a networking protocol

Three components of Linux module support: 1 module management: supports loading modules into memory and lets them talk to the rest of the kernel. 2 driver registration: allows modules to inform the rest of the kernel that a new driver has become available. 3 conflict resolution: protects the reserved resources of one driver from accidental use by another driver

Kernel Mode vs User Mode
Kernel component code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require any context switch; hence it is very efficient and fast. The kernel runs processes and provides system services to them, including protected access to hardware. Code which is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code; they use the system libraries to ask the kernel to carry out low-level tasks.
Basic Features
Following are some of the important features of the Linux operating system:

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on any kind of hardware platform

Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving

Multi-User - Linux is a multiuser system, meaning multiple users can access system resources like memory and application programs at the same time

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used to perform various types of operations, call application programs, etc


Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data

Architecture - Linux system architecture consists of the following layers:

                                              Hardware layer - Hardware consists of all peripheral devices (RAM HDD CPU etc)

                                              Kernel - Core component of Operating System interacts directly with hardware provides low level services to upper layer components

Shell - An interface to the kernel. Takes commands from the user and executes the kernel's functions

                                              Utilities - Utility programs giving user most of the functionalities of an operating systems

Q) Windows XP operating system
1 Extensibility — layered architecture

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.

2 Portability — XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3 Reliability — XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.

4 Compatibility — applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on 2000 without changing the source code.

5 Performance — XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. Designed for symmetrical multiprocessing.

6 International support — supports different locales via the national language support (NLS) API.

XP Architecture
1 Layered system of modules
2 Protected mode — hardware abstraction layer (HAL), kernel, executive
3 User mode — collection of subsystems
a Environmental subsystems emulate different operating systems
b Protection subsystems provide security functions

                                              XP architecture

                                              56

Q) Windows network operating system
Example: Windows 2000 operating system

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin and Greg Gagne, Operating System Concepts, Reading: Addison-Wesley





4) To bring the required page into memory, we need to find a free main-memory frame, map the page's address to its disk block, fetch the disk block, and load the block into the free frame. 5) When the required page has been brought into memory, update the page table to indicate that the page is in memory. 6) Restart the instruction interrupted by the illegal-address trap. The process continues as if the page had always been in memory

Q) What is pure demand paging?
Ans: If no page belonging to the executing process is in main memory, the process will fault for every page it needs; page faults will occur until every page that is needed is in memory. This scheme is known as pure demand paging: NEVER BRING A PAGE IN UNTIL IT IS REQUIRED

Q) Page Replacement
1 Find the location of the desired page on the disk.
2 Find a free frame:
i) If there is a free frame, use it.
ii) If there is no free frame, use a page-replacement algorithm to select a victim page in memory that is not really in use, and swap it out.
iii) If the modify bit associated with the victim frame is set to 1, the page is written to disk; otherwise there is no need to write it to disk. Change the valid/invalid bit of the victim frame's page-table entry to invalid, to indicate that the victim page is no longer in memory.
3 Read the desired page into the victim frame, and change the frame and page tables.
4 Restart the user process.

=================================================================================================
Q) Page Replacement Algorithms
Goal: produce a low page-fault rate.
An algorithm is evaluated by running it on a particular string of memory references (a reference string) and computing the number of page faults on that string.

1. First In First Out (FIFO) Page replacement algorithm: Uses the time when a page was brought into memory. The page that was brought into memory first is replaced, i.e. the oldest page is chosen for replacement. We create a FIFO queue to hold all pages in memory.

When a page is brought into memory, we insert it at the tail of the queue, and we replace the page at the head of the queue.

FIFO: 15 page faults (for the example reference string shown in the figure).
A problem with FIFO is Belady's anomaly. For the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5:
- with 1 frame allocated to the process, FIFO gives 12 page faults;
- with 2 frames, FIFO gives 12 page faults;
- with 3 frames, FIFO gives 9 page faults;
- with 4 frames, FIFO gives 10 page faults.
As the number of frames allocated to a process increases, page faults should decrease. But with FIFO replacement, page faults can increase. This is Belady's anomaly.
2. Optimal Page Replacement Algorithm: Replace the page that will not be used for the longest period of time. It is used as a benchmark for measuring how well other algorithms perform. OPT is not practical to implement, as it is not possible to know what the future references will be. Ex:

Optimal page replacement: 9 page faults.

3. Least Recently Used (LRU) Page replacement algorithm: Replace the page that has not been used for the longest period of time. LRU makes use of the time of last use of each page. Ex:

                                                Number of page faults = 12
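The three algorithms above can be compared with a short simulation. This is a sketch, not from the notes: the function name `simulate` is ours, and we assume the figures' example uses the classic textbook reference string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 with 3 frames, which reproduces the fault counts quoted above (FIFO 15, OPT 9, LRU 12).

```python
def simulate(ref, frames, policy):
    """Count page faults for FIFO, LRU, or OPT on a reference string."""
    memory, faults = [], 0
    for i, page in enumerate(ref):
        if page in memory:
            if policy == "LRU":            # on a hit, refresh recency order
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
        else:
            if policy in ("FIFO", "LRU"):  # victim = head of the list
                memory.pop(0)
            else:                          # OPT: victim used farthest in the future
                future = ref[i + 1:]
                victim = max(memory, key=lambda p: future.index(p)
                             if p in future else len(future) + 1)
                memory.remove(victim)
            memory.append(page)
    return faults

galvin = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(simulate(galvin, 3, "FIFO"))   # 15 page faults
print(simulate(galvin, 3, "OPT"))    # 9 page faults
print(simulate(galvin, 3, "LRU"))    # 12 page faults

# Belady's anomaly: FIFO gets WORSE with more frames on this string.
belady = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(simulate(belady, 3, "FIFO"))   # 9 faults
print(simulate(belady, 4, "FIFO"))   # 10 faults
```

Running the Belady string with 3 and then 4 frames reproduces the anomaly described above.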

LRU requires hardware assistance to determine the time of last use of a page. We can use 1) a stack or 2) a counter to implement the LRU page-replacement algorithm.

1) Stack implementation: keep a stack of page numbers in doubly linked form. When a page is referenced, it is moved to the top of the stack. In the worst case this requires 6 pointers to be changed.

2) Counter implementation: a time-of-use field is associated with each page-table entry. The CPU maintains a counter that is incremented on every page reference, and the counter value is copied into the time-of-use field of the referenced page.

Disadvantages of the LRU page-replacement algorithm: the counter or stack must be updated on every memory reference. HENCE LRU IS SLOW, and the implementation requires hardware assistance.
==============================================================================
Q) LRU approximation page replacement
1) Using a reference bit
2) Additional-reference-bits algorithm
3) Second-chance algorithm
4) Enhanced second-chance algorithm
5) Counting-based page replacement

Counting-based replacement has two schemes: 1) Least Frequently Used (LFU) and 2) Most Frequently Used (MFU).

6) Page-buffering algorithms
Q) What is Thrashing?
Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur when either global or local page replacement is used, as described below. The solution to thrashing is the working-set strategy: if we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know in advance how many frames a process needs, we use the locality model of process execution.

Here a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate its current locality, it will not fault again until it changes locality.
The Working Set model uses a working set and a parameter ∆, the working-set window. The set of pages in the most recent ∆ page references is the working set.
Let WSSi be the working-set size for process Pi, i.e. Pi needs WSSi frames. Let D be the total demand for frames, D = Σ WSSi,

and let m be the total number of available frames.

If D > m, thrashing occurs, because some processes will not have enough frames. So: IF (D > m)

the OS selects a process to suspend; else the OS may initiate another process. The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.
========================================================================
Q) How does thrashing occur in global and local page replacement?
Ans: In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, possibly causing a page fault for some other process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more processes queue up for the paging device,

the READY queue empties and the CPU becomes idle. Seeing the CPU free, the OS introduces new processes, which further increases the number of page faults.

In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs, only the frames allocated to that process can be replaced. When a process starts thrashing, other processes are not directly affected. But since processes will be queued for the paging device most of the time, the average service time for a page fault increases; hence the effective access time increases even for a process that is not thrashing.
Q) Page Table Structure (or) Structure of the page table in memory

1. Hierarchical Paging  2. Hashed Page Tables  3. Inverted Page Tables

Hierarchical Paging: If the page table is large, we break it up into multiple page tables, i.e. the page table itself is paged. A simple technique is a two-level page table.
Two-Level Paging Example: A logical address (on a 32-bit machine with 4K page size) is divided into:
1. a page number consisting of 20 bits
2. a page offset consisting of 12 bits
Since the page table is paged, the page number is further divided into:
1. a 10-bit page number
2. a 10-bit page offset
Thus a logical address is as follows: page number | page offset

p1 | p2 | d

10 | 10 | 12, where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table. Two-Level Page-Table Scheme:

Address-Translation Scheme: address translation for a two-level 32-bit paging architecture:
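The 10/10/12 bit split above is just shift-and-mask arithmetic. A minimal sketch (the example address 0x00403ABC is our own, not from the notes):

```python
# Split a 32-bit logical address into p1 (10 bits), p2 (10 bits), d (12 bits),
# matching the two-level paging layout described above.
OFFSET_BITS, P2_BITS = 12, 10

def split(addr):
    d = addr & ((1 << OFFSET_BITS) - 1)                 # low 12 bits: page offset
    p2 = (addr >> OFFSET_BITS) & ((1 << P2_BITS) - 1)   # next 10 bits: inner page table index
    p1 = addr >> (OFFSET_BITS + P2_BITS)                # top 10 bits: outer page table index
    return p1, p2, d

# Example (hypothetical address): 0x00403ABC
p1, p2, d = split(0x00403ABC)
print(p1, p2, d)   # 1 3 2748
```

p1 selects the outer page-table entry, p2 indexes the inner page table it points to, and d is the offset within the final frame.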

Hashed Page Tables: Hashed page tables are common for address spaces larger than 32 bits. The hash table is indexed by hash value, and each entry in the hash table contains a linked list of elements that hash to the same location. Each element consists of 3 fields:

1. Page number
2. Frame number
3. Pointer to the next element in the linked list

The algorithm works as follows: the page number is passed to the hash function to get a hash value. Locating that entry is easy, since the hash table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address. If there is no match, subsequent entries in the linked list are searched.
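The lookup just described can be sketched as follows. This is our own toy model (tiny table, Python lists standing in for the linked lists), not an OS implementation:

```python
# A hashed page table: the table is indexed by hash value, and each slot
# holds a chain of (page_number, frame_number) elements that collide there.
TABLE_SIZE = 16
table = [[] for _ in range(TABLE_SIZE)]

def insert(page, frame):
    table[hash(page) % TABLE_SIZE].append((page, frame))

def lookup(page):
    for p, f in table[hash(page) % TABLE_SIZE]:   # walk the chain
        if p == page:                             # field 1 matches: use frame number
            return f
    return None                                   # no match anywhere: page fault

insert(7, 2)
insert(23, 5)      # 23 % 16 == 7, so this chains behind page 7
print(lookup(23))  # 5
print(lookup(9))   # None
```

Pages 7 and 23 hash to the same slot (CPython hashes small ints to themselves), so looking up page 23 walks past page 7's element first, exactly as in the chained search described above.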

                                                Inverted Page Table

An inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table has:

1. Process-id
2. Page number

The logical address also has 3 parts: process-id, page number, and offset. The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, then frame number i combined with the offset gives the physical address. If there is no match, the address is illegal.
Advantages and disadvantages:

- Decreases the memory needed to store page tables (one table for the whole system).
- Increases the time needed to search the table when a page reference occurs. Solution: use a hash table to limit the search to one, or at most a few, page-table entries.
Inverted Page Table Architecture:

                                                Unit-IV CPU Scheduling

Basic concepts: To maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns it to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle: Process execution begins with a CPU burst, followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

An I/O-bound program has many short CPU bursts. A CPU-bound program has a few long CPU bursts.

Schedulers: modules in the OS that make scheduling decisions.
- Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
- Medium-term scheduler: selects which processes should be swapped in/out of memory.
- Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (CPU) scheduler.

Dispatcher: gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

1. switching context
2. switching to user mode
3. jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency; it must be kept small.
Ready Queue: The ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of processes.

                                                Pre-emptive Scheduling

CPU-scheduling decisions take place under one of four conditions:

1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state, for example in response to an interrupt
3. When a process switches from the waiting state to the ready state, say at completion of I/O
4. When a process terminates

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one.
If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive or cooperative. In a non-preemptive system, a process runs until it gives up the CPU itself, i.e. when it waits for I/O or when it completes. Otherwise the system is said to be preemptive.

Windows used non-preemptive scheduling up to Windows 3.x and started using preemptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X and preemptive scheduling since then. Note that preemptive scheduling is only possible on hardware that supports a timer interrupt.

Note that preemptive scheduling can cause problems when two processes share data, because one process may be interrupted in the middle of updating a shared data structure.

Preemption can also be a problem if the kernel is busy executing a system call (e.g. updating critical kernel data structures) when the preemption occurs. One solution is to wait until the system call has either completed or blocked before allowing the preemption; but this solution is problematic for real-time systems.

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. This should be done only for very short pieces of code that take little CPU time.

Scheduling criteria (used to decide among CPU-scheduling algorithms):
1. CPU utilization: in a real system, CPU utilization ranges from about 40% to 90%.
2. Throughput: number of processes completed per unit time.
3. Turnaround time: time when the process completed minus time when the process was submitted; it includes waiting time and burst time.
4. Waiting time: amount of time a process spends waiting for the CPU.
5. Response time: time when the first response came minus time when the process was submitted.

Optimization criteria: maximize CPU utilization, maximize throughput, minimize turnaround time, minimize waiting time, minimize response time.
CPU Scheduling Algorithms
1) First-Come First-Served (FCFS) Scheduling: FCFS is non-preemptive. Processes are scheduled in the order they arrive. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages: 1. The average waiting time is quite long.

2. Convoy effect (small processes wait in the queue for a big process to leave the CPU).
Example:
Process   Burst Time
P1        24
P2        3
P3        3

Suppose that the processes arrive in the order P1, P2, P3.

The Gantt chart for the schedule is: | P1 | P2 | P3 |

0        24   27   30
Waiting time for P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.
2) Shortest-Job-First (SJF) Scheduling: Schedule the process with the shortest CPU burst time. If the burst times of two processes are the same, FCFS is used.
Advantages: the average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either preemptive or non-preemptive:
1. Non-preemptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Preemptive SJF, or Shortest-Remaining-Time-First (SRTF): if the newly arrived process is shorter than what remains of the currently executing process, preempt the currently executing process.

Example:
Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

SJF (non-preemptive) Gantt chart: | P1 | P3 | P2 | P4 |

0    7  8    12    16
Average waiting time = [0 + (8-2) + (7-4) + (12-5)] / 4 = 4
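The FCFS and non-preemptive SJF averages above can be checked with a short sketch (the helper name `avg_wait` and the event-driven structure are ours, not from the notes):

```python
# Each process is (name, arrival, burst). FCFS picks the earliest arrival;
# non-preemptive SJF picks the shortest burst among processes that have arrived.
def avg_wait(procs, sjf=False):
    pending = sorted(procs, key=lambda p: p[1])   # order by arrival time
    time, waits = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time] or [pending[0]]
        pick = min(ready, key=(lambda p: p[2]) if sjf else (lambda p: p[1]))
        pending.remove(pick)
        time = max(time, pick[1])                 # CPU may sit idle until arrival
        waits[pick[0]] = time - pick[1]           # waiting time = start - arrival
        time += pick[2]                           # run the whole burst (non-preemptive)
    return sum(waits.values()) / len(waits)

fcfs_procs = [("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)]
print(avg_wait(fcfs_procs))            # 17.0

sjf_procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
print(avg_wait(sjf_procs, sjf=True))   # 4.0
```

Both results match the worked examples: 17 for FCFS and 4 for non-preemptive SJF.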

                                                Example of Preemptive SJF

Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

SJF (preemptive): | P1 | P2 | P3 | P2 | P4 | P1 |

0  2  4  5  7  11  16
Average waiting time = (9 + 1 + 0 + 2)/4 = 3
3) Priority Scheduling: A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. preemptive 2. non-preemptive.

Problem: starvation (low-priority processes may never execute). Solution: aging (as time progresses, increase the priority of waiting processes).

4) Round Robin (RR): Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n-1)q time units.
Performance: 1. If q is large, RR behaves like FIFO. 2. If q is small, q must still be large with respect to the context-switch time, otherwise the overhead is too high.
Example of RR with time quantum = 4:
Process   Burst Time
P1        24
P2        3

P3        3
The Gantt chart is: | P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |

0   4   7   10   14   18   22   26   30
Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 = 5.66
5) Multilevel Queue Scheduling: The ready queue is partitioned into separate queues. For example, the ready queue is divided into 2 queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, the Round Robin scheduling algorithm can be used in the foreground queue

and the FCFS scheduling algorithm in the background queue. Scheduling must also be done between the queues. This can be done in 2 ways:
1. Fixed-priority scheduling:

The foreground queue has the highest priority. All processes in the foreground queue must be completed and the queue must be empty before the processes in the background queue are given the CPU, as shown in the figure below. Disadvantage: starvation.

2. Time slice: each queue gets a certain amount of CPU time, which it can schedule among its own processes, e.g. 80% to the foreground queue (RR) and 20% to the background queue (FCFS).
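The Round Robin example earlier (quantum = 4) can be checked with a short simulation. A sketch, assuming all processes arrive at time 0 as in the example; the helper name `rr_waits` is ours:

```python
from collections import deque

def rr_waits(bursts, q):
    """Round Robin: waiting time = completion time - burst (arrival = 0)."""
    queue = deque(bursts.items())          # (name, remaining burst)
    time, done = 0, {}
    while queue:
        name, rem = queue.popleft()
        run = min(q, rem)                  # run for a quantum or until finished
        time += run
        if rem - run == 0:
            done[name] = time              # record completion time
        else:
            queue.append((name, rem - run))  # preempted: back of the queue
    return {n: done[n] - bursts[n] for n in bursts}

w = rr_waits({"P1": 24, "P2": 3, "P3": 3}, q=4)
print(w)                       # {'P1': 6, 'P2': 4, 'P3': 7}
print(sum(w.values()) / 3)     # 5.666...
```

The waiting times are P1 = 30 - 24 = 6, P2 = 4, P3 = 7, giving the average 17/3 from the example.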

6) Multilevel Feedback Queue Scheduling: A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes stay in the highest-priority queue. If a process takes too much CPU time, it is moved to a lower-priority queue. If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues: Q0, Q1, Q2.

1. Q0: time quantum 8 milliseconds
2. Q1: time quantum 16 milliseconds
3. Q2: FCFS

                                                Scheduling

1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 ms; if it does not finish in 8 milliseconds, it is moved to queue Q1.
2. In Q1 the process receives an additional 16 milliseconds; if it still does not complete, it is moved to queue Q2.

Q) Synchronization
Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of process synchronization is the Producer-Consumer (or Bounded-Buffer) problem: make sure that the producer process does not add data when the buffer is full, and that the consumer process does not take data when the buffer is empty.

The code for the producer process can be modified as follows:

while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed. in is an index that always points to the next empty slot after the last filled slot; out is an index that points to the first filled slot, at the head of the circular queue.

The code for the consumer process is shown below:

while (true) {
    while (counter == 0)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The producer and consumer code above produces a RACE CONDITION: the outcome of the execution of several cooperating processes depends on the order in which their accesses to shared data take place.
For example, if counter = 5:

Given counter = 5, if the producer produces one item and the consumer consumes one item, the correct value of counter is still 5. But here counter = 4, an inconsistent result; and if T5 is executed before T4, counter = 6, also inconsistent.

Solution to avoid the race condition: make sure that only one process at a time updates the shared variable (here, counter).
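The rule above (only one updater at a time) can be sketched with Python threads. This is our own illustration, not from the notes: the processes become threads, `queue.Queue` plays the role of the bounded buffer, and a lock serializes updates to `counter`:

```python
import threading
from queue import Queue

BUFFER_SIZE, ITEMS = 5, 100
buffer = Queue(maxsize=BUFFER_SIZE)   # blocks the producer when full, the consumer when empty
counter = 0
lock = threading.Lock()

def producer():
    global counter
    for i in range(ITEMS):
        buffer.put(i)                 # waits while the buffer is full
        with lock:                    # critical section: one updater at a time
            counter += 1

def consumer():
    global counter
    for _ in range(ITEMS):
        buffer.get()                  # waits while the buffer is empty
        with lock:
            counter -= 1

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
print(counter)   # 0
```

Because every `counter += 1` and `counter -= 1` happens under the lock, no update is lost and the final value is consistent (0), whatever the interleaving.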

Q) Critical-section problem
Ans: Each process has a segment of code called its critical section. The critical-section code is code that accesses variables shared with cooperating processes. To avoid data inconsistency, two processes must not execute their critical-section code at the same time. For this to happen:
1. Each process must request permission to enter its critical section: the entry section.
2. The end of the critical section: the exit section.
3. The rest of the code after the critical section: the remainder section.

A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes not in their remainder sections take part in deciding which process enters its critical section next.
3. Bounded waiting: every process must eventually be allowed to enter its critical section; no process may wait indefinitely.

Q) Peterson's solution to the critical-section problem (software-based solution)
Ans:
Time   Process 1                                 Process 2
T0     do {                                      do {
           flag[1] = TRUE; turn = 2;                 flag[2] = TRUE;
T1                                                   turn = 1;
T2     while (flag[2] && turn == 2);             while (flag[1] && turn == 1);
       Since turn is now 1, the loop does        Loops continuously until flag[1]
       not run and Process 1 proceeds.           becomes FALSE.
T3     Enters critical section
T4     flag[1] = FALSE;                          Enters critical section
T5     Enters remainder section                  flag[2] = FALSE;
       } while (TRUE);                           Enters remainder section
                                                 } while (TRUE);
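Peterson's algorithm above can be sketched with Python threads. This is our own translation: `flag[0]`/`flag[1]` stand in for the notes' flag[1]/flag[2], and the busy wait works here because CPython's GIL keeps these simple loads and stores effectively sequentially consistent; a real implementation needs memory barriers.

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # whose turn it is to yield
count = 0               # shared variable protected by the algorithm
N = 5000

def process(i):
    global turn, count
    other = 1 - i
    for _ in range(N):
        flag[i] = True                        # entry section
        turn = other
        while flag[other] and turn == other:  # busy wait while the other has priority
            pass
        count += 1                            # critical section (not atomic on its own)
        flag[i] = False                       # exit section

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start(); t0.join(); t1.join()
print(count)   # 10000
```

Without the entry/exit protocol, the two threads' read-modify-write sequences on `count` could interleave and lose updates; with Peterson's mutual exclusion, all 2N increments survive.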

10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure, which depends on its type.

                                                File Attributes

1. Name: name of the file, for user understanding.
2. Identifier: used by the OS to identify a file (e.g. inode number).
3. Type: text file, executable file, binary file, etc.
4. Location: location of the file on the hard drive.
5. Size: current size of the file.
6. Protection: controls who can read, write, or execute the file.
7. Time, date, and user identification: when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier locates the other file attributes. The directory structure is kept on disk.

                                                File Operations

The file abstract data type supports many common operations:
1. Creating a file: find free space on disk and add an entry to the directory.
2. Writing a file: write data at the current file-position pointer location and update the pointer.
3. Reading a file: read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek): change the pointer to a given value.
5. Deleting a file: free the space allocated to the file and remove its entry from the directory.
6. Truncating a file: delete the data and update the file size.

                                                Open File Table

Most OSes require that files be opened before access and closed after access. Information about all files currently open in the system is stored in a system-wide open-file table, maintained by the OS. Information about the files opened by a process is stored in the process's own open-file table. The open-file table has the following information:

1. File pointer: records the current position in the file for the next read or write.
2. File-open count: how many times the current file has been opened (simultaneously, by different processes) and not yet closed. When this counter reaches zero, the file can be removed from the table.

3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With advisory locking, software developers must make sure that locks are acquired and released appropriately.
4. With mandatory locking, the OS ensures that locks are honoured.
5. UNIX uses advisory locks and Windows uses mandatory locks.

                                                File Types

Implementing file types: a common technique is to include the type as part of the filename. The name is split into 2 parts:
1. File name
2. Extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on it. For example, when a user opens Microsoft Word and chooses Open from the File menu, the user may specify the file name without the extension; the application looks for a file with the given name and the extension it expects. In this sense extensions are only hints to applications and are not enforced by the operating system. UNIX systems use a crude magic number stored at the beginning of the file to indicate the file type. In the Mac OS X operating system, a file produced by a word processor has the word processor's name recorded as its creator; when the user double-clicks the file, that word processor opens it. Some common file types are shown below.

File Structure: Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and where the first instruction is.

If an OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, including finding the first executable statement).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                                                Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units, such as a single byte, a data-record size, or a structure size. The number of logical units that fit into one physical block determines the packing. If a student file is organized internally as records, each record taking 64 bytes, and disk blocks are 512 bytes, then packing = 512/64 = 8, i.e. 8 records can be stored per block.

Internal fragmentation may occur due to this, since the last block of a file is rarely completely full.
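The packing arithmetic above can be written out directly. A small sketch using the notes' sizes (64-byte records, 512-byte blocks); the helper name `blocks_and_waste` is ours:

```python
import math

BLOCK, RECORD = 512, 64
packing = BLOCK // RECORD                 # logical units (records) per physical block
print(packing)                            # 8

def blocks_and_waste(n_records):
    """Blocks needed for a file, and the internally fragmented bytes."""
    blocks = math.ceil(n_records / packing)
    waste = blocks * BLOCK - n_records * RECORD   # unused tail of the last block
    return blocks, waste

print(blocks_and_waste(20))   # (3, 256)
```

For example, a 20-record file occupies 3 blocks (1536 bytes) but holds only 1280 bytes of records, so 256 bytes are lost to internal fragmentation.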

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: records are read and written sequentially, one record after another; we cannot randomly access an arbitrary record.

A sequential-access file emulates magnetic-tape operation and generally supports the following operations:
- read next: read a record and advance to the next position.
- write next: write a record and advance to the next position.
- rewind.
- skip n records: may or may not be supported; n may be limited to positive numbers, or may be limited to +/- 1.

2. Direct access (or relative access): a file is made up of fixed-length records, so we can randomly access any record.

There is no restriction on the order of reading or writing.

Direct access files are useful when we need to access large amounts of data randomly.

Example: a database.

Operations supported include (n is the relative block number, i.e. relative to the beginning of the file):

1. read n - read record number n
2. write n - write record number n
3. jump to record n - n could be 0 or the end of the file
4. query current record - used to return to this record later

Sequential access can be easily emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct access method. We construct an index for the file which contains pointers to the various blocks. To search the file, we first access the index and then use the pointer to access the file.

For example, suppose a student file has 120000 records sorted by student id. We can create an index consisting of the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120000 records = 120000/64 = 1875, i.e. roughly 2000 blocks, so the index has about 2000 entries. The index is kept in main memory so that searching becomes faster.

If the index itself becomes large, we create an index for the index file (e.g. a B-tree).

                                                Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                                                Three major methods of allocating disk space are

1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file indicates the address of the starting block (i.e. b) and the length (n).

                                                Advantages

1. Direct access to block i is easy: the block is simply b+i. Since the file is stored sequentially, sequential access is also easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

                                                Disadvantages

1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape, then copy the files back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even if the final size is known, the entire space may not be used by the file at the beginning, which leads to wasted space.

If the size of the file is not known in advance, then (i) if we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This is possible as long as such space exists, but it is time consuming. (ii) If we overestimate the amount of space needed, much of the space goes unused, which leads to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme. A set of contiguous blocks is allocated at first; if the file needs more blocks, another set of contiguous blocks, known as an extent, is added. The directory entry then records the location and block count, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2. Linked allocation: each file is a linked list of disk blocks, and the blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512-4 = 508 bytes. To create a new file, we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0.

To add more data to an existing file, a free block is found, data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. Easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is higher.
2. The number of disk seeks required is higher.
3. Pointers use up a noticeable percentage of the file's disk space.
4. Inefficient for direct access: to access the ith record, i disk reads are required.
5. Not reliable: the file is lost if pointers are lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk access time, and reduces the space wasted on pointers, since fewer pointers are needed. Free-list management also becomes simpler. The disadvantage is internal fragmentation: more space is wasted when a cluster is only partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                                                Advantages

1. Easy to find space for a new file or to add more data to an existing file.
2. Random access time is improved.

Disadvantage: the number of disk seeks is higher. The disk head must move to read the FAT and find the location of the block, then move to the location of the block itself.

                                                3 Indexed allocation

Each file has its own index block, which is an array of disk-block addresses. The ith entry in the index block points to the ith block of the file.

                                                The directory contains the address of the index block

                                                When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found and data is written to it; the address of the block is put in the index block.

                                                Advantages

1. Easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

Disadvantages:

1. Seek time is higher.
2. The number of disk seeks required is higher.
3. Not reliable.
4. The index block should be kept in memory. If this memory is not available, we have to read the index block and then the desired data block, i.e. 2 disk accesses are required. In the linked index scheme, to access a block near the end of the file we may need to read several index blocks before reaching the needed data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store these one or two pointers.

Various mechanisms are used to handle the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header giving the name of the file, plus a set of the first 100 disk-block addresses; for large files, the last address holds the address of another index block.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

3. Combined scheme:

For example, say there are 15 pointers in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing the addresses of the file's data blocks.

The second points to a double indirect block and the third points to a triple indirect block.

                                                Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                                The free space list can be implemented as

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000...

Advantages: simple, and makes it easy to find contiguous free blocks for a file.

                                                Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a large disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1 TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.

2. Linked list: the free-space list is implemented by linking all the free disk blocks together; a pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: cannot find contiguous space easily. To traverse the free-space list, we must read each block, which requires substantial I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free; the last one contains the addresses of another n free blocks, and so on.
Advantage: a large number of free blocks can be found quickly.
4. Counting: the free-space list is maintained by keeping the address of the first free block and the number (n) of free contiguous blocks that follow it.

Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Synchronization Hardware

The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. The TestAndSet() hardware instruction
2. The Swap() instruction

1. The TestAndSet instruction atomically reads a boolean variable, sets it to true, and returns the old value.

Solution using TestAndSet: the shared variable lock is initialized to false. Each process spins on TestAndSet(&lock) until it returns false, executes its critical section, and then sets lock back to false.

2. The Swap() instruction atomically exchanges the contents of two boolean variables.

Solution using Swap: a shared boolean variable lock is initialized to false, and each process has a local boolean variable key.

Both the TestAndSet and Swap instructions satisfy the mutual exclusion requirement, but unfortunately do not guarantee bounded waiting. If multiple processes are trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee as to the relative speeds of the processes, a very fast process could theoretically release the lock, whip through its remainder section, and re-acquire the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The following solution using TestAndSet satisfies bounded waiting. It uses two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

                                                Bounded-waiting mutual exclusion with TestAndSet( )

On leaving its critical section, a process scans the waiting array in order (starting with the next process after itself) for a process that has been waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process while continuing to block all the others; if no one is waiting, it releases the lock.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1. wait() 2. signal()

Wait primitive:

    wait(S) {
        while (S <= 0)
            ; // do nothing
        S--;
    }

Signal primitive:

    signal(S) {
        S++;
    }

                                                Each process that wishes to use a resource performs wait() operation When a process releases a resource it performs signal() operation

Semaphore usage:
1. Binary semaphores (or mutex locks) can take the values 0 or 1.

2. Counting semaphores can take any integer value. A counting semaphore is used to count the remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources; when the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal call.
For example, suppose there are 3 resources and 4 processes, so the counting semaphore = 3.
If process P1 requires a resource, it performs a wait() operation; now the counting semaphore = 2.
If process P2 requires a resource, it performs a wait() operation; now the counting semaphore = 1.
If process P3 requires a resource, it performs a wait() operation; now the counting semaphore = 0.
If process P4 requires a resource, it performs a wait() operation; P4 waits until a resource is available.

                                                3 Semaphores can also be used to synchronize certain operations

Semaphore implementation: while one process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process "spins" while waiting.

As an example of using semaphores for ordering, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First we create a semaphore named synch that is shared by the two processes and initialize it to zero. Then in process P1 we insert the code:

    S1;
    signal(synch);

and in process P2 we insert the code:

    wait(synch);
    S2;

Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal. Each semaphore has an integer value and maintains a queue of waiting processes.

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; it is later restarted by a wakeup() operation (i.e. the process is moved from the waiting queue to the ready state).

Problems due to semaphores:
1. Deadlock: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes.

2. Starvation: one or more processes are blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithm for adding processes to the waiting queue or for selecting one to remove in the signal() call; if a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M enters, it pre-empts process L, and M is running now. Here M runs before H even though M has lower priority than H. To avoid this, L temporarily inherits H's high priority (priority inheritance) so that M cannot pre-empt L.
=====================================================================
Q) Classical problems on synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem. Consider:

n buffers, each of which holds one item
a producer process, which creates the items (1 at a time)
a consumer process, which processes them (1 at a time)

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. Since both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (counts the number of empty buffers)
2. full = 0 (counts the number of filled buffers)
3. mutex = 1 (provides mutual exclusion for access to the buffer)

Producer process:

    do {
        // produce an item
        wait(empty);
        wait(mutex);
        // add the item to the buffer
        signal(mutex);
        signal(full);
    } while (1);

Consumer process:

    do {
        wait(full);
        wait(mutex);
        // remove an item from the buffer
        signal(mutex);
        signal(empty);
        // consume the item
    } while (1);

2. The Readers-Writers Problem. A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (i.e. it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: a new reader need not wait just because a writer is waiting
writers priority: if a writer is waiting to access the data, no new readers can start reading

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses the following shared data:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:

    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    // reading is performed
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);

Writer process:

    do {
        wait(wrt);
        // writing is performed
        signal(wrt);
    } while (TRUE);

3. The Dining-Philosophers Problem. n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

    semaphore chopstick[5]; // initially all values are 1

Philosopher i:

    do {
        wait(chopstick[i]);
        wait(chopstick[(i+1) % 5]);
        // eat
        signal(chopstick[i]);
        signal(chopstick[(i+1) % 5]);
        // think
    } while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (for example, if every philosopher picks up his left chopstick at the same time).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: in asymmetric multiprocessing (ASMP), not all CPUs are equal (there are master and slave CPUs). ASMP systems have processors dedicated to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within a time constraint.

                                                Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples: digital audio and multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e. decision-making actions are spread across the different computers in the system so that the failure of individual computers or of the network does not cripple operation.

A distributed operating system appears to users as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

                                                Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

DESKTOP SYSTEMS: a desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): two of the most popular operating systems for handhelds are Symbian OS and Android. Such systems have very little memory.


CLUSTERED SYSTEMS: a computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                                                Q) Modern operating system

New developments in hardware, applications, and security threats have led to the development of modern operating systems.

                                                New Hardware developments like increased machine speed high speed network increased size and variety of memory devices

                                                New Applications developments like multimedia applications internet and web access and client server computing

                                                Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as systems nowadays have multiple processors, an SMP operating system provides greater efficiency. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                                SMP schedules processes or threads across all of the processors

                                                Advantages

1. Increased performance: if some portion of a program can be run in parallel, performance increases.
2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can scale with the number of processors used.

4. Distributed operating systems: these allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel, so the OS can be customized without affecting system integrity.

Q) Linux operating system. Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization. Components of a Linux system:


The Linux operating system has primarily three components:

Kernel - the kernel is the core part of Linux. It is responsible for all major activities of the operating system, interacts directly with the hardware, and hides low-level hardware details from system and application programs.

System Library - system libraries are special functions or programs used by application programs to access the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks like updating log files, accepting login requests from terminals, etc.

Kernel Modules - Sections of kernel code that can be compiled, loaded and unloaded independently of the rest of the kernel. A kernel module may typically implement a device driver, a file system or a networking protocol.

There are three components to Linux module support:
1 Module management: supports loading modules into memory and lets them talk to the rest of the kernel.
2 Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3 Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode
Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space and does not require any context switch; hence it is very efficient and fast. The kernel runs each process and provides system services and protected access to hardware.

Code which is not required to run in kernel mode is placed in system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code. User programs and utilities use system libraries to ask the kernel to perform low-level tasks.

Basic Features
Following are some of the important features of the Linux operating system:

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on any kind of hardware platform.

Open Source - Linux source code is freely available and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multiuser system, meaning multiple users can access system resources like memory (RAM) and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system. It can be used to do various types of operations, call application programs, etc.


Security - Linux provides user security using authentication features like password protection, controlled access to specific files and encryption of data.

Architecture
The Linux system architecture consists of the following layers:

                                                Hardware layer - Hardware consists of all peripheral devices (RAM HDD CPU etc)

                                                Kernel - Core component of Operating System interacts directly with hardware provides low level services to upper layer components

Shell - An interface to the kernel. Takes commands from the user and executes the kernel's functions.

Utilities - Utility programs giving the user most of the functionality of an operating system.

Q) Windows XP Operating System
1 Extensibility: layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. This modular structure allows additional environmental subsystems to be added without affecting the executive.

2 Portability: XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3 Reliability: XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.

4 Compatibility: applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run without changing the source code.

5 Performance: XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. Designed for symmetric multiprocessing.

6 International support: supports different locales via the national language support (NLS) API.

XP Architecture
1 Layered system of modules.
2 Protected mode: hardware abstraction layer (HAL), kernel, executive.
3 User mode: collection of subsystems.

a Environmental subsystems emulate different operating systems. b Protection subsystems provide security functions.

                                                XP architecture


Q) Windows Network Operating System
Example: the Windows 2000 operating system.

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin and Greg Gagne, Operating System Concepts, Reading: Addison-Wesley.





When a page is brought into memory, we insert it at the tail of the queue, and we replace the page at the head of the queue.
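The FIFO scheme just described can be sketched in a few lines (a minimal illustration, not from the notes; run on the reference string 1 2 3 4 1 2 5 1 2 3 4 5 it reproduces Belady's anomaly, 9 faults with 3 frames but 10 with 4):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement.
    New pages are inserted at the tail; the victim is the page at the head."""
    memory = deque()          # queue of resident pages, head = oldest
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()   # replace the page at the head
            memory.append(page)    # insert the new page at the tail
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))   # → 9 10
```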

FIFO: 15 page faults.
A problem with FIFO is Belady's anomaly. For the reference string
1 2 3 4 1 2 5 1 2 3 4 5
with FIFO page replacement:
When the number of frames allocated to a process is 1, the number of page faults = 12.
When the number of frames is 2, the number of page faults = 12.
When the number of frames is 3, the number of page faults = 9.
When the number of frames is 4, the number of page faults = 10.
As the number of frames allocated to a process increases, page faults should decrease. But with FIFO page replacement, page faults can increase.

2 Optimal Page Replacement Algorithm: Replace the page that will not be used for the longest period of time. It is used for measuring how well other algorithms perform. It is not practical to implement the OPT algorithm, as it is not possible to know what the future references will be. Ex:

                                                  Optimal page replacement 9 page faults

3 Least Recently Used (LRU) Page Replacement Algorithm: Replace the page that has not been used for the longest period of time. LRU makes use of the time of last use of each page. Ex:

                                                  Number of page faults = 12

LRU requires hardware assistance to determine the time of last use of a page. We can use 1) a stack or 2) a counter to implement the LRU page replacement algorithm.

1) Stack implementation - keep a stack of page numbers in a doubly linked form. When a page is referenced, it is moved to the top of the stack. This implementation requires 6 pointers to be changed.

2) Counter implementation - a time-of-use field is allocated to each frame. The CPU maintains a counter which is incremented for every page reference, and the counter value is copied to the time-of-use field of the referenced page.
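The stack scheme above can be sketched with an ordered dictionary standing in for the doubly linked stack (a hypothetical illustration, not the hardware mechanism the notes describe):

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults under LRU. The OrderedDict plays the role of the
    stack: a referenced page is moved to the end (top), so the least
    recently used page is always at the front (bottom)."""
    stack = OrderedDict()
    faults = 0
    for page in refs:
        if page in stack:
            stack.move_to_end(page)        # referenced page goes to the top
        else:
            faults += 1
            if len(stack) == frames:
                stack.popitem(last=False)  # evict the least recently used
            stack[page] = True
    return faults
```

Unlike FIFO, LRU does not suffer from Belady's anomaly: on the reference string 1 2 3 4 1 2 5 1 2 3 4 5 it gives 10 faults with 3 frames and 8 with 4.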

Disadvantages of the LRU page replacement algorithm: updating the counter or stack must be done for every memory reference, hence LRU is slow, and the implementation requires hardware assistance.

Q) LRU approximation page replacement
1) Using a reference bit
2) Using the additional-reference-bits algorithm
3) Second-chance algorithm
4) Enhanced second-chance algorithm
5) Counting-based page replacement

This has two schemes: 1) Least Frequently Used (LFU) 2) Most Frequently Used (MFU)

6) Page-buffering algorithms

Q) What is Thrashing
Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur when global (or) local page replacement is used, as explained below. The solution to thrashing is the working-set strategy: if we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know how many frames a process needs, we use the locality model of process execution.

Here a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes locality.
The working-set model uses a working set and a parameter Δ = the working-set window. The set of pages in the most recent Δ page references is the working set.
Let WSSi be the working-set size for process Pi, i.e. Pi needs WSSi frames.
Let D be the total demand for frames; then D = Σ WSSi,

and let m be the total number of available frames.

If D > m, thrashing occurs because some processes will not have enough frames. So if (D > m),

the OS selects a process to suspend; else the OS may initiate another process. The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.

Q) How does thrashing occur in global and local page replacement
Ans: In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, causing a page fault for some other process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more processes queue up for the paging device,

the READY queue empties and the CPU becomes idle. So the OS introduces new processes as the CPU is free. This further increases the number of page faults.

In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs only the frames allocated to it will be replaced. When a process starts thrashing, other processes are not affected. But since the processes will be in the queue for the paging device most of the time, the average service time for a page fault increases. Hence the effective access time will increase even for a process that is not thrashing.

Q) Page Table Structure (or) Structure of the page table in memory

                                                  Hierarchical Paging Hashed Page Tables Inverted Page Tables

Hierarchical Paging: If the page table is large, we break it up into multiple page tables, i.e. the page table itself is paged. A simple technique is a two-level page table.
Two-Level Paging Example: A logical address (on a 32-bit machine with 4K page size) is divided into
1 a page number consisting of 20 bits
2 a page offset consisting of 12 bits
Since the page table is paged, the page number is further divided into
1 a 10-bit page number
2 a 10-bit page offset
Thus a logical address is as follows:

page number | page offset
p1 | p2 | d
10 | 10 | 12

where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table.
Two-Level Page-Table Scheme
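The 10/10/12 split above is just bit masking and shifting; a minimal sketch (field names p1, p2, d follow the notes):

```python
def split_address(addr):
    """Split a 32-bit logical address into (p1, p2, d) for a two-level
    page table: 10-bit outer index, 10-bit inner index, 12-bit offset."""
    d  = addr & 0xFFF            # low 12 bits: offset within the page
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: index into the inner table
    p1 = addr >> 22              # top 10 bits: index into the outer table
    return p1, p2, d

print(split_address((3 << 22) | (5 << 12) | 7))   # → (3, 5, 7)
```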

                                                  Address-Translation Scheme Address-translation scheme for a two-level 32-bit paging architecture

Hashed Page Tables (or) Hash Table: Hashed page tables are common in address spaces larger than 32 bits. Each entry in the hash table contains a linked list of elements that hash to the same location. The hash table is indexed by hash value. Each element consists of 3 fields:

1 page number, 2 frame number, 3 pointer to the next element in the linked list.

The algorithm works as follows: the page number is passed to the hash function to get a hash value. Locating the hash value in the hash table is easy, as the table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to get the physical address. If there is no match, subsequent entries in the linked list are searched.
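The lookup just described can be sketched with lists standing in for the chained elements (bucket count and hash function are illustrative choices, not from the notes):

```python
BUCKETS = 16

def hash_fn(page):
    # simple illustrative hash; a real system would use a stronger function
    return page % BUCKETS

table = [[] for _ in range(BUCKETS)]   # each bucket: chain of (page, frame)

def insert(page, frame):
    table[hash_fn(page)].append((page, frame))

def lookup(page):
    """Walk the chain in the bucket, comparing page numbers,
    and return the matching frame number (None if absent)."""
    for p, frame in table[hash_fn(page)]:
        if p == page:
            return frame
    return None

insert(5, 42)
insert(21, 7)    # 21 % 16 == 5: collides with page 5, chained in the same bucket
```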

                                                  Inverted Page Table

The inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table has:

Process-id | Page number

A logical address also has 3 parts: Process-id | Page number | Offset.

The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, then i is the frame number, and frame i combined with the offset gives the physical address. If there is no match, the reference is an illegal address.
Advantages and disadvantages:

The inverted page table decreases the memory needed to store page tables, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to limit the search to one (or at most a few) page-table entries.
Inverted Page Table Architecture
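The linear search through the inverted table can be sketched as follows (table contents and page size are made-up values for illustration; the entry index doubles as the frame number):

```python
# Inverted page table: one entry per physical frame, entry = (pid, page).
ipt = [(1, 0), (2, 3), (1, 7), (2, 1)]   # frame i holds the page in entry i

def translate(pid, page, offset, page_size=4096):
    """Linear search of the inverted table; the index of the matching
    entry is the frame number. Physical address = frame * page_size +
    offset. Raises on an illegal reference."""
    for frame, entry in enumerate(ipt):
        if entry == (pid, page):
            return frame * page_size + offset
    raise ValueError("illegal address")

print(translate(1, 7, 5))   # page 7 of process 1 is in frame 2 → 8197
```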

                                                  Unit-IV CPU Scheduling

Basic concepts: To maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns it to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle: Process execution begins with a CPU burst that is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

An I/O-bound program has many short CPU bursts. A CPU-bound program has few long CPU bursts.

Schedulers: modules in the OS that make scheduling decisions.
1 Long-term scheduler (or job scheduler) - selects which processes should be brought into the ready queue.
2 Medium-term scheduler - selects which processes should be swapped in/out of memory.
3 Short-term scheduler (or CPU scheduler) - whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (or CPU) scheduler.

The dispatcher gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

1 switching context
2 switching to user mode
3 jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency. This must be kept small.
Ready Queue: The ready queue can be implemented as a FIFO queue, a priority queue, a tree or simply an unordered linked list. The records in the ready queue are the PCBs of processes.

                                                  Pre-emptive Scheduling

                                                  CPU scheduling decisions take place under one of four conditions

1 When a process switches from the running state to the waiting state.
2 When a process switches from the running state to the ready state, for example in response to an interrupt.
3 When a process switches from the waiting state to the ready state, say at completion of I/O.
4 When a process terminates.

For conditions 1 and 4 there is no choice - a new process must be selected. For conditions 2 and 3 there is a choice - continue running the current process, or select a different one.

If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive or cooperative. In non-preemptive scheduling, a process runs until it gives up the CPU itself, i.e. when it waits for I/O or when it completes. Otherwise the system is said to be pre-emptive.

Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

                                                  Note that pre-emptive scheduling can cause problems when two processes share data because one process may get interrupted in the middle of updating shared data structures

Preemption can also be a problem if the kernel is busy with a system call (e.g. updating critical kernel data structures) when the preemption occurs. Solution: wait until the system call has either completed or blocked before allowing the pre-emption. But this solution is problematic for real-time systems.

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling them on exiting. But this should be done only on very short pieces of code that take little CPU time.

Scheduling criteria used to compare CPU scheduling algorithms:
1 CPU utilization: in a real system, CPU utilization ranges from 40% to 90%.
2 Throughput: the number of processes completed per unit time.
3 Turnaround time: time when the process completed minus time when the process was submitted, i.e. it includes waiting time and burst time.
4 Waiting time: the amount of time a process spends waiting for the CPU.
5 Response time: time when the first response came minus time when the process was submitted.

Optimization criteria: max CPU utilization, max throughput, min turnaround time, min waiting time, min response time.

CPU Scheduling Algorithms
1) First-Come First-Served (FCFS) Scheduling: FCFS is non-pre-emptive. Processes are scheduled in the order they arrive. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages: 1 the average waiting time is quite long;

2 the convoy effect (it occurs as small processes wait in the queue for a big process to leave the CPU).
Example:
Process Burst Time
P1 24
P2 3
P3 3

Suppose that the processes arrive in the order P1, P2, P3.

The Gantt chart for the schedule is: P1 | P2 | P3

0 24 27 30
Waiting time for P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.

2) Shortest-Job-First (SJF) Scheduling: schedule the process with the shortest burst time. If the CPU burst times of two processes are the same, then FCFS is used.
Advantages: the average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either pre-emptive or non-pre-emptive:
1 Non-pre-emptive SJF - once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2 Pre-emptive SJF, or Shortest-Remaining-Time-First (SRTF) - if the newly arrived process is shorter than what remains of the currently executing process, then pre-empt the currently executing process.

Example:
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4

SJF (non-preemptive) Gantt chart: P1 | P3 | P2 | P4

0 7 8 12 16
Average waiting time = [0 + (8-2) + (7-4) + (12-5)]/4 = 4
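Both examples so far can be checked with a small simulation (an illustrative sketch, not part of the notes; ties in SJF are broken by arrival order, matching the FCFS tie-break rule above):

```python
def fcfs_avg_wait(bursts):
    """Average waiting time under FCFS for processes that all arrive
    at time 0, served in list order."""
    wait = done = 0
    for b in bursts:
        wait += done      # this process waits for everything before it
        done += b
    return wait / len(bursts)

def sjf_avg_wait(procs):
    """Non-preemptive SJF. procs = list of (arrival, burst).
    At each step run the shortest available job to completion."""
    procs = sorted(procs)                  # by arrival time
    pending, clock, total_wait = list(procs), 0, 0
    while pending:
        ready = [p for p in pending if p[0] <= clock]
        if not ready:                      # CPU idle until next arrival
            clock = min(p[0] for p in pending)
            continue
        arrival, burst = min(ready, key=lambda p: p[1])
        pending.remove((arrival, burst))
        total_wait += clock - arrival      # waited from arrival to start
        clock += burst
    return total_wait / len(procs)

print(fcfs_avg_wait([24, 3, 3]))                    # → 17.0
print(sjf_avg_wait([(0, 7), (2, 4), (4, 1), (5, 4)]))   # → 4.0
```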

Example of preemptive SJF:

Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4

SJF (preemptive) Gantt chart: P1 | P2 | P3 | P2 | P4 | P1

0 2 4 5 7 11 16
Average waiting time = (9 + 1 + 0 + 2)/4 = 3

3) Priority Scheduling: A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1 preemptive, 2 non-preemptive.

Problem: starvation - low-priority processes may never execute. Solution: aging - as time progresses, increase the priority of the process.

4) Round Robin (RR): Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance: 1 if q is large, RR behaves like FIFO; 2 if q is small, q must still be large with respect to the context-switch time, otherwise the overhead is too high.
Example of RR with time quantum = 4:
Process Burst Time
P1 24
P2 3
P3 3

The Gantt chart is: P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1

0 4 7 10 14 18 22 26 30
Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 ≈ 5.66

5) Multilevel Queue Scheduling: The ready queue is partitioned into separate queues. For example, the ready queue is divided into 2 queues:

1 foreground (interactive) queue
2 background (batch) queue

Each queue has its own scheduling algorithm. For example, the Round Robin scheduling algorithm can be used in the foreground queue and

the FCFS scheduling algorithm in the background queue. Scheduling must also be done between the queues. This can be done in 2 ways:
1 Fixed-priority scheduling:

the foreground queue has the highest priority. All the processes in the foreground queue must be completed and the queue must be empty; only then will the processes in the background queue be given the CPU, as shown in the figure below. Disadvantage: starvation.

2 Time slice - each queue gets a certain amount of CPU time which it can schedule among its processes, e.g. 80% to the foreground queue in RR and 20% to the background queue in FCFS.

6) Multilevel Feedback Queue Scheduling: A process can move between queues. The idea is to separate processes according to their CPU bursts. I/O-bound and interactive processes will be in the highest-priority queue. If a process takes more CPU time, it is moved to a lower-priority queue. If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues Q0, Q1, Q2:

1 Q0 - time quantum 8 milliseconds
2 Q1 - time quantum 16 milliseconds
3 Q2 - FCFS

Scheduling:

1 A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 ms. If it does not finish in 8 milliseconds, it is moved to queue Q1.
2 At Q1 the process receives 16 additional milliseconds. If it still does not complete, it is moved to queue Q2.
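The demotion rule above can be sketched as follows (a hypothetical trace helper, ignoring I/O and other processes, just to show which queues a given CPU burst passes through):

```python
QUANTA = [8, 16, None]   # Q0, Q1, Q2 (FCFS: no quantum at the last level)

def run_process(burst):
    """Return the list of queue levels a CPU burst visits in the
    three-level feedback scheme: finish within the quantum and you
    stay; exhaust it and you are demoted to the next queue."""
    path = []
    for level, q in enumerate(QUANTA):
        path.append(level)
        if q is None or burst <= q:   # finishes at this level
            return path
        burst -= q                    # used its full quantum: demote
    return path

print(run_process(5), run_process(20), run_process(100))
```

So a 5 ms burst stays in Q0, a 20 ms burst is demoted once, and a 100 ms CPU-bound burst ends up in the FCFS queue Q2.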

Q) Synchronization
Ans: Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of a process synchronization problem is the Producer-Consumer (or Bounded-Buffer) problem: make sure that the producer process does not add data when the buffer is full, and that the consumer process does not take data when the buffer is empty.

The code for the producer process can be written as follows:

while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed. in is an index that always points to the next empty slot after the last filled slot; out is an index that always points to the first filled slot, at the head of the circular queue.

The code for the consumer process is shown below:

while (true) {
    while (counter == 0)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The above producer and consumer code produces a RACE CONDITION. When the outcome of the execution of several co-operating processes depends on the order in which accesses take place, it is called a race condition. For example, take counter = 5:

Given counter = 5, if the producer produces one item and the consumer consumes one item, then the correct value of counter is 5. But here counter = 4, an inconsistent result. If T5 is executed before T4, then counter = 6, also an inconsistent result.

Solution to avoid the race condition: make sure only one process at a time updates the shared variable (here, counter).
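The "one updater at a time" rule can be demonstrated with threads and a lock (an illustrative sketch in Python, not the notes' pseudocode; without the lock, the two threads' read-modify-write steps could interleave exactly as in the counter example above):

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    """Each increment is done while holding the lock, so only one
    thread at a time updates the shared variable."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

t1 = threading.Thread(target=add, args=(100_000,))
t2 = threading.Thread(target=add, args=(100_000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(counter)   # → 200000: no updates are lost
```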

Q) Critical Section Problem
Ans: Each process has a segment of code called its critical section. Critical-section code is code that accesses variables common to co-operating processes. To avoid data inconsistency, two processes must not execute critical-section code at the same time. For this to happen:
1 Each process must request permission to enter its critical section - the entry section.
2 The end of the critical section - the exit section.
3 The rest of the code after the critical section - the remainder section.

A solution to the critical-section problem must satisfy the following three requirements:
1 Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2 Progress: only processes not in their remainder sections take part in deciding which process will enter its critical section next.
3 Bounded waiting: every process must eventually be allowed to enter its critical section; no process must wait indefinitely.

Q) Peterson's solution for the critical section problem (software-based solution)
Ans: The two processes share two variables: turn (whose turn it is to enter the critical section) and flag[1..2] (flag[i] = TRUE means process i is ready to enter). A sample interleaving:

Time   Process 1                                  Process 2
T0     do { flag[1] = TRUE; turn = 2;             do { flag[2] = TRUE;
T1                                                turn = 1;
T2     while (flag[2] && turn == 2);              while (flag[1] && turn == 1);
       (turn is now 1, so the loop exits)         (loops until flag[1] becomes FALSE)
T3     enters critical section
T4     flag[1] = FALSE;                           enters critical section
T5     enters remainder section                   flag[2] = FALSE;
       } while (TRUE);                            enters remainder section
                                                  } while (TRUE);
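A sketch of Peterson's algorithm in Python (illustrative only: it uses 0-based thread indices and relies on CPython's GIL for memory visibility; on real hardware the flag/turn accesses would need atomic, sequentially consistent operations):

```python
import threading

# Peterson's algorithm for two threads (0 and 1), protecting a shared counter.
flag = [False, False]   # flag[i]: thread i wants to enter its critical section
turn = 0                # whose turn it is to yield
counter = 0

def worker(i):
    global turn, counter
    other = 1 - i
    for _ in range(1000):
        flag[i] = True
        turn = other                          # politely give the other thread priority
        while flag[other] and turn == other:
            pass                              # busy-wait (spin)
        counter += 1                          # critical section
        flag[i] = False                       # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start(); t0.join(); t1.join()
print(counter)  # 2000: no increment is lost
```

Because each thread sets turn to favor the other, both mutual exclusion and progress hold, and neither thread can starve the other.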

10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure, which depends on its type.

                                                  File Attributes

1. Name: the name of the file, for user understanding.
2. Identifier: used by the OS to identify a file (e.g. inode number).
3. Type: text files, executable files, binary files, etc.
4. Location: location of the file on the disk.
5. Size: current size of the file.
6. Protection: controls who can read, write, or execute the file.
7. Time, date, and user identification: when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. The directory structure is kept on disk.

                                                  File Operations

The file Abstract Data Type supports many common operations:
1. Creating a file: find free space on disk, add an entry to the directory.
2. Writing a file: write data at the current file-position pointer location and update the pointer.
3. Reading a file: read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek): change the pointer location to a given value.
5. Deleting a file: free the space allocated to the file and remove its entry from the directory.
6. Truncating a file: delete the data and update the file size.

                                                  Open File Table

Most OSes require that files be opened before access and closed after access. Information about all files that are currently open in the system is stored in a system-wide open-file table maintained by the OS, and information about files opened by a process is stored in that process's open-file table. An open-file table has the following information:

1. File pointer: records the current position in the file for the next read or write.
2. File-open count: how many times the file is currently open (possibly simultaneously by different processes) and not yet closed. When this counter reaches zero, the file's entry can be removed from the table.
3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With advisory locks, software developers must make sure that locks are acquired and released appropriately.
4. With mandatory locks, the OS makes sure that locks are acquired and released appropriately.
5. UNIX uses advisory locks and Windows uses mandatory locks.

                                                  File Types

Implementing file types: a common technique is to include the type as part of the filename. The name is split into 2 parts:

1. File name
2. Extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on that file. For example, a user opens Microsoft Word, clicks on the File menu, and chooses Open: the user specifies only the filename, and the application looks for a file with the given name and the extension it expects. Hence extensions are only hints to applications; they are not enforced by the operating system. A UNIX system uses a crude magic number stored at the beginning of the file to indicate the type of the file. In the Mac OS X operating system, a file produced by a word processor has the word processor's name recorded as its creator; when the user double-clicks on the file, Word will open it. Some common file types are as shown below.

File Structure: some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and what the location of the first instruction is.

If an OS supports many file formats, its size and complexity increase. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, finding the first executable statement, etc.).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                                                  Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units such as a single byte, a data record size, or a structure size.

The number of logical units that fit into one physical block determines the packing. If a student file is organized internally as records, each record takes 64 bytes, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e. 8 records can be stored per block.

Internal fragmentation may occur because of this.
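The packing arithmetic can be checked with a short calculation (the 100-byte record size in the second part is a made-up value, used only to show where internal fragmentation comes from):

```python
# Packing: how many fixed-size logical records fit in one physical block,
# and how many bytes are lost to internal fragmentation per block.

block_size = 512   # bytes per physical disk block
record_size = 64   # bytes per student record

packing = block_size // record_size          # records per block
wasted = block_size - packing * record_size  # internal fragmentation per block
print(packing, wasted)                       # 8 records per block, 0 bytes wasted

# With 100-byte records the waste appears:
packing2 = block_size // 100                 # only 5 records fit
wasted2 = block_size - packing2 * 100        # 12 bytes lost in every block
print(packing2, wasted2)
```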

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: records are read and written sequentially, one record after another; we cannot randomly access an arbitrary record.

A sequential access file emulates magnetic tape operation and generally supports the following operations:
1. read next: read a record and advance to the next position.
2. write next: write a record and advance to the next position.
3. rewind.
4. skip n records: may or may not be supported; n may be limited to positive numbers or to +1/-1.

2. Direct access (or relative access): a file is made up of fixed-length records, so we can randomly access any record.

                                                  No restriction on order of reading or writing

                                                  Direct access files are useful when we need to access large amounts of data randomly

Example: databases.

Operations supported include (n is the relative block number, i.e. relative to the beginning of the file):
1. read n: read record number n.
2. write n: write record number n.
3. jump to record n: n could be 0 or the end of the file.
4. query current record: used to return to this record later.
5. Sequential access can be easily emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct access method. We construct an index for the file, which contains pointers to the various blocks. To search the file, we first access the index and then use the pointer to access the file.

Suppose a student file has 120,000 records sorted by student id. We can create an index consisting of the first student id in each block. If 64 records can be stored per block, the number of blocks needed to store 120,000 records = 120000/64 = 1875 ≈ 2000 blocks, so the index has about 2000 entries. The index is kept in main memory so that searching becomes faster.

If the index itself becomes too large, we create an index for the index file (for example, a B-tree).
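The index idea can be sketched as follows (the student ids and block layout are invented for illustration; bisect finds the rightmost index entry not greater than the key):

```python
import bisect

# In-memory index holding the first student id of each block;
# a binary search over the index tells us which block holds a given id.
records_per_block = 64
student_ids = list(range(1, 1281))  # 1280 sorted student ids -> 20 blocks

# Build the index: first id in each block (ids 1, 65, 129, ...).
index = [student_ids[b] for b in range(0, len(student_ids), records_per_block)]

def block_for(student_id):
    # Rightmost index entry <= student_id gives the block number.
    return bisect.bisect_right(index, student_id) - 1

print(block_for(1))     # 0  (first block)
print(block_for(70))    # 1  (ids 65..128 live in block 1)
print(block_for(1280))  # 19 (last block)
```

Only one disk read (of the identified block) is then needed, instead of scanning the file.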

                                                  Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                                                  Three major methods of allocating disk space are

1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, …, b+n-1.

The directory entry for each file indicates the address of the starting block (i.e. b) and the length (n).

                                                  Advantages

1. Direct access is easy: to access the ith block of a file that starts at block b, we simply access block b+i. Since the file is stored sequentially, sequential access is also easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

                                                  Disadvantages

1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even when there are enough free blocks for a new file, they may not be contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape and then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire size may not be used by the file at the beginning, which leads to wastage of space.

If the size of the file is not known in advance, then: (i) if we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This is possible as long as such a hole exists, but it is time consuming. (ii) If we overestimate the amount of space needed, much of the space goes unused; this leads to internal fragmentation.

To minimize these drawbacks, some OSes use a modified contiguous allocation scheme. A chunk of contiguous blocks is allocated at first; if the file needs more blocks, another chunk of contiguous space, known as an extent, is added. The directory entry records the location and block count of the first extent, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.
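A minimal first-fit sketch, assuming the free holes are kept as (start, length) pairs (the function name and hole list are hypothetical):

```python
# First-fit hole selection for contiguous allocation.
# holes is a list of (start_block, length) free extents; pick the first
# hole large enough, shrink or remove it, and return the allocated start.

def first_fit(holes, n):
    for i, (start, length) in enumerate(holes):
        if length >= n:
            if length == n:
                del holes[i]                        # hole consumed exactly
            else:
                holes[i] = (start + n, length - n)  # shrink the hole
            return start                            # file occupies start..start+n-1
    return None                                     # no single hole fits

holes = [(0, 3), (10, 8), (30, 5)]
print(first_fit(holes, 5))   # 10: first hole with >= 5 blocks
print(holes)                 # [(0, 3), (15, 3), (30, 5)]
print(first_fit(holes, 6))   # None: 11 blocks are free, but none contiguous
```

The final call illustrates external fragmentation: the total free space is sufficient, yet the request fails because no single hole is large enough.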

2. Linked allocation: each file is a linked list of disk blocks, and the disk blocks can be scattered anywhere on the disk. The directory contains a pointer to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512-4 = 508 bytes. To create a new file, we simply create a new entry in the directory; the pointer is nil for an empty file and the size field is 0.

To add more data to an existing file, a free block is found, the data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. It is easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is high.
2. The number of disk seeks required is high.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct-access files: to access the ith block, i disk reads are required.
5. Not reliable if pointers are lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks to the file. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk accesses, reduces the space wasted on pointers (fewer pointers are needed), and simplifies free-list management. Disadvantage: internal fragmentation; more space is wasted when a cluster is partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the file allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. The entry for a block contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                                                  Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Random access time is improved.

Disadvantage: the number of disk seeks is high; the disk head must move to the start of the volume to read the FAT and find the location of the block, and then move to the location of the block itself (unless the FAT is cached).
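The FAT chain described above can be sketched as a table walk (block numbers taken from the 9 → 16 → 1 → 10 → 25 example; the EOF marker and table size are illustrative):

```python
# FAT chaining: fat[b] holds the number of the block that follows b
# in the file; EOF marks the last block and 0 marks an unused block.

EOF = -1
fat = [0] * 32          # one entry per disk block, 0 = free
# A file starting at block 9: 9 -> 16 -> 1 -> 10 -> 25
fat[9], fat[16], fat[1], fat[10], fat[25] = 16, 1, 10, 25, EOF

def file_blocks(start):
    blocks, b = [], start
    while b != EOF:
        blocks.append(b)
        b = fat[b]       # follow the chain through the table
    return blocks

print(file_blocks(9))    # [9, 16, 1, 10, 25]
```

Note that the pointers live in the table, not in the data blocks themselves, which is why random access improves over plain linked allocation.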

                                                  3 Indexed allocation

Each file has its own index block, which is an array of disk block addresses. The ith entry in the index block points to the ith block of the file.

                                                  The directory contains the address of the index block

When the file is created, all pointers in the index block are set to nil.

To add more data to an existing file, a free block is found and the data is written to it; the address of the block is put in the index block.

                                                  Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

                                                  Disadvantage

1. Seek time is high.
2. The number of disk seeks required is high.
3. Not reliable.
4. The index block must be kept in memory. If this memory is not available, then we have to read the index block and then the desired data block, i.e. 2 disk accesses are required. To access a block near the end of a large file, we may need to read several index blocks before reading the needed data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

Various mechanisms are used to decide the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header with the name of the file and a set of the first 100 disk block addresses. For large files, the last address contains the address of another index block.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

3. Combined scheme: for example, say there are 15 pointers of the index block in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, an index block that contains the addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block, as shown below.

                                                  Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                                  The free space list can be implemented as

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000…

Advantages: simple, and easy to find contiguous runs of free blocks.

                                                  Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a large disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1 TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.
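A sketch of the bit-vector idea for the example above (blocks 2, 3, 4, 5, and 8 free), using a plain Python list for the bits:

```python
# Bit-vector free-space management: bit i is 1 when block i is free.
bitmap = [0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0]   # blocks 2,3,4,5,8 free

def first_free():
    # Scan for the first set bit; real systems scan a word at a time.
    for i, bit in enumerate(bitmap):
        if bit == 1:
            return i
    return None

def allocate():
    i = first_free()
    if i is not None:
        bitmap[i] = 0        # mark the block allocated
    return i

print(allocate())  # 2: the first free block
print(allocate())  # 3: the next one
```

Contiguous runs of free blocks show up as runs of 1 bits, which is why bit maps make contiguous allocation easy.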

2. Linked list: the free-space list is implemented by linking all the free disk blocks together; a pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: we cannot find contiguous space easily; to traverse the free-space list we must read each block, which requires substantial I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last one contains the addresses of another n free blocks, and so on.
Advantage: a large number of free blocks can be found quickly.
4. Counting: the free-space list keeps, for each entry, the address of a first free block and the number n of free contiguous blocks that follow it. Each entry thus consists of a disk address and a count. Each entry requires more space than a single address, but the overall list is shorter as long as counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory, updates it, and finally updates the free-space list on disk.

Synchronization Hardware
The hardware solution to synchronization is to provide atomic operations: operations that execute as a single instruction, without interruption. Two such operations are:
1. the TestAndSet() hardware instruction
2. the Swap() instruction

1. The TestAndSet() instruction is as shown below.

Solution using TestAndSet: the shared variable lock is initialized to false.

2. The Swap() instruction definition is as follows.

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

Both the TestAndSet and Swap instructions satisfy the mutual exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee as to the relative speeds of the processes, a very fast process could release the lock, whip through its remainder section, and re-lock the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The figure below illustrates a solution using test-and-set that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes contending for the critical section.

Bounded-waiting mutual exclusion with TestAndSet()

On exit, a process looks in order (starting with the next process on the list) for a process that has been waiting; if it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting a specific process while continuing to block all the others.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: wait() and signal().

Wait primitive:
    wait(S) {
        while (S <= 0)
            ;   // do nothing
        S--;
    }

Signal primitive:
    signal(S) {
        S++;
    }

Each process that wishes to use a resource performs the wait() operation; when a process releases a resource, it performs the signal() operation.

Semaphore usage:
1. A binary semaphore (or mutex lock) can take the values 0 or 1.
2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining resources. The counter is initialized to the number of resources available; whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources. When the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal call.
For example, suppose there are 3 resources and 4 processes, so the counting semaphore starts at 3.
If process P1 requires a resource, it performs wait(); now the counting semaphore = 2.
If process P2 requires a resource, it performs wait(); now the counting semaphore = 1.
If process P3 requires a resource, it performs wait(); now the counting semaphore = 0.
If process P4 requires a resource, it performs wait(); P4 waits until a resource is available.
3. Semaphores can also be used to synchronize the order of certain operations.
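The 3-resources / 4-processes scenario can be sketched with Python's built-in counting semaphore, where acquire() plays the role of wait() and release() of signal():

```python
import threading

# Counting semaphore initialized to the number of resources (3);
# four processes contend, so at most three hold a resource at once.
resources = threading.Semaphore(3)
order = []

def process(name):
    resources.acquire()              # wait(): blocks while the count is 0
    order.append(name + " got a resource")
    # ... use the resource ...
    resources.release()              # signal(): frees the resource

threads = [threading.Thread(target=process, args=(f"P{i}",)) for i in range(1, 5)]
for t in threads: t.start()
for t in threads: t.join()
print(len(order))  # 4: every process eventually obtained a resource
```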

For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2. First we create a semaphore named synch, shared by the two processes and initialized to zero. Then in process P1 we insert the code:

    S1;
    signal(synch);

and in process P2 we insert the code:

    wait(synch);
    S2;

Because synch is initialized to 0, process P2 will block on the wait until after P1 executes the call to signal().

Semaphore Implementation
While one process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process spins while waiting. Each semaphore has an integer value and maintains a queue of waiting processes.
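The S1-before-S2 ordering described above can be sketched with a semaphore initialized to 0 (thread and variable names are illustrative):

```python
import threading

# Enforcing "S1 before S2": P2's wait(synch) blocks until P1's signal(synch).
synch = threading.Semaphore(0)
trace = []

def p1():
    trace.append("S1")    # statement S1
    synch.release()       # signal(synch)

def p2():
    synch.acquire()       # wait(synch): blocks until P1 has executed S1
    trace.append("S2")    # statement S2

t2 = threading.Thread(target=p2); t2.start()   # start P2 first on purpose
t1 = threading.Thread(target=p1); t1.start()
t1.join(); t2.join()
print(trace)  # ['S1', 'S2'] regardless of which thread started first
```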

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue to the ready state).

Problems due to semaphores:
1. Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can be freed only by one of the other (blocked) processes, as shown below.
2. Starvation: one or more processes get blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithm for adding processes to the waiting queue or for selecting the process to be removed by the signal() call; with a LIFO queue, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it pre-empts L, and M runs. Now M effectively runs before H even though M has lower priority than H. To avoid this, L temporarily inherits H's priority (priority inheritance) so that M cannot pre-empt L.
=====================================================================
Q) Classical problems of synchronization

1. The Bounded Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded Buffer Problem
Consider:
n buffers, each of which holds one item; a producer process, which creates the items (one at a time); and a consumer process, which consumes them (one at a time).

The producer cannot produce an item when all the buffers are full, and the consumer cannot consume when all the buffers are empty. As both processes modify the contents of the buffer, synchronization is needed. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:
    do {
        ...
        produce an item
        ...
        wait(empty);
        wait(mutex);
        ...
        add item to buffer
        ...
        signal(mutex);
        signal(full);
    } while (1);

Consumer process:
    do {
        wait(full);
        wait(mutex);
        ...
        remove an item from buffer
        ...
        signal(mutex);
        signal(empty);
        ...
        consume the item
        ...
    } while (1);
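A runnable sketch of the bounded-buffer solution, mapping wait()/signal() onto Python's Semaphore.acquire()/release() (the buffer size and item count are arbitrary):

```python
import threading

# Bounded buffer with the three semaphores from the text:
# empty = n, full = 0, mutex = 1. Buffer size n = 3, 10 items produced.
n = 3
buffer = []
empty = threading.Semaphore(n)   # counts empty slots
full = threading.Semaphore(0)    # counts filled slots
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer
consumed = []

def producer():
    for item in range(10):
        empty.acquire()                  # wait(empty)
        mutex.acquire()                  # wait(mutex)
        buffer.append(item)              # add item to buffer
        mutex.release()                  # signal(mutex)
        full.release()                   # signal(full)

def consumer():
    for _ in range(10):
        full.acquire()                   # wait(full)
        mutex.acquire()                  # wait(mutex)
        consumed.append(buffer.pop(0))   # remove an item from buffer
        mutex.release()                  # signal(mutex)
        empty.release()                  # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)  # [0, 1, ..., 9]: all items arrive, in order
```

The empty semaphore throttles the producer so that at most n items are ever buffered, and full keeps the consumer from reading an empty buffer.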

2. The Readers-Writers Problem
A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (i.e., it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: no reader is kept waiting merely because a writer is waiting; writers priority: if a writer is waiting to access the data, no new readers may start reading.

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses an integer counter and 2 semaphores:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore shared by readers and writers)

wrt provides mutually exclusive access to the shared data.

Reader process:
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
... reading is performed ...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

Writer process:
do {
    wait(wrt);
    ... writing is performed ...
    signal(wrt);
} while (TRUE);
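A short Python sketch of the readers-priority solution above; the shared value, the number of threads and their ordering are assumptions for illustration:

```python
import threading

readcount = 0
mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # exclusive access to the shared data
shared = {"value": 0}
reads = []

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()
    reads.append(shared["value"])  # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()            # last reader lets writers back in
    mutex.release()

def writer(v):
    wrt.acquire()
    shared["value"] = v          # writing is performed
    wrt.release()

threads = [threading.Thread(target=writer, args=(42,))] + \
          [threading.Thread(target=reader) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(shared["value"])  # 42
```

Each reader sees either the old value (0) or the new value (42), but never a half-written one, because wrt serializes the writer against the group of readers.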

3. The Dining-Philosophers Problem: n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems; it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5];  /* initially all values are 1 */

Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    ... eat ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    ... think ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (for example, if all five philosophers pick up their left chopsticks at the same time).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems have processors dedicated to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraints.

                                                  Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions. 2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Ex:

                                                  Digital audio or multimedia systems Digital telephones

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control, i.e., it spreads its decision-making actions across the different computers in the system, so that failures of individual computers or of the network do not cripple its operation.

                                                  A distributed operating system appears as a uniprocessor system even though it has multiple processors The users may not know where their programs are being run or where their files are located that should all be handled automatically by the operating system

                                                  Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

                                                  DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location Modern desktop systems support multiprogramming Common examples are Linux FreeBSD Windows 8 and the Macintosh operating system

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android OS. Handheld systems have very little memory.


                                                  CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system The components of a cluster are usually connected to each other through fast local area networks each node (computer used as a server) running its own instance of an operating system

                                                  Q) Modern operating system

New developments in hardware, new applications and new security threats have led to the development of modern operating systems.

                                                  New Hardware developments like increased machine speed high speed network increased size and variety of memory devices

                                                  New Applications developments like multimedia applications internet and web access and client server computing

                                                  Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and is well suited to distributed environments.

                                                  2 Multithreading a process is divided into threads that can run concurrently Useful for applications that perform a number of independent tasks Example database server that listens and processes many client requests

3. Symmetric multiprocessing: as there are now often multiple processors in a single system, an SMP operating system provides greater efficiency when multiple processors are available. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                                  SMP schedules processes or threads across all of the processors

                                                  Advantages

1. Increased performance: if some portions of a program can be run in parallel, the performance of an SMP OS increases.

2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working. 3. Incremental growth: one can add a new processor to increase performance. 4. Scaling: cost can scale with the number of processors used.

                                                  4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5. Object-oriented design: adds modularity to the kernel. The OS can be customized without affecting system integrity.

Q) Linux Operating System
It is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. Linux is a multi-user, multitasking system. Its main design goals are speed, efficiency and standardization. Components of Linux System:


                                                  Linux Operating System has primarily three components as shown in the below diagram

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of this operating system. It interacts directly with the hardware and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs. They are used by application programs to access the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks like updating log files, accepting login requests from terminals, etc.

                                                  Kernel Modules Sections of kernel code that can be compiled loaded and unloaded independent of the rest of the kernel A kernel module may typically implement a device driver a file system or a networking protocol

Three components of Linux module support: 1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel. 2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available. 3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode: Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, does not require any context switch, and hence is very efficient and fast. The kernel runs each process and provides system services to processes, including protected access to hardware. Code which is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code. User programs and utilities use the system libraries to call kernel functions to get low-level tasks done.

Basic Features: Following are some of the important features of the Linux operating system.

Portable - Portability means the software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on any kind of hardware platform.

Open Source - Linux source code is freely available and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system, meaning multiple users can access system resources like memory/RAM and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system. It can be used to perform various types of operations, call application programs, etc.

Security - Linux provides user security using authentication features like password protection, controlled access to specific files and encryption of data.

Architecture: The Linux system architecture consists of the following layers:

Hardware layer - Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - The core component of the operating system; it interacts directly with hardware and provides low-level services to upper-layer components.

Shell - An interface to the kernel; it takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System
1. Extensibility: layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. This modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability: XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3. Reliability: XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.
4. Compatibility: applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
5. Performance: XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.
6. International support: supports different locales via the national language support (NLS) API.

XP Architecture: 1. Layered system of modules. 2. Protected mode: hardware abstraction layer (HAL), kernel, executive. 3. User mode: collection of subsystems.

a. Environmental subsystems emulate different operating systems. b. Protection subsystems provide security functions.

                                                  XP architecture

                                                  56

Q) Windows Network Operating System
Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin and Greg Gagne, Operating System Concepts, Addison-Wesley.

Visit my blog: enthusiaststudent.blogspot.in, mtechmessenger.blogspot.in

                                                  Keywords OS exam notes OS notesOperating system notes Opearing system galvin notesSimple batch system Multi- programmed Batch System Time sharing System Difference between Batch Multi-programming and Time sharing What is process structure of process in memory Process states PCB Operations on Processes Process Creation Process Termination cascading terminationcontext switch Basic Concepts of threads two modes of CPU execution Inter process communication Shared ndash Memory Systems Message passing systemswhat is deadlock four conditions that are necessary for deadlock to occur Methods for handling deadlocks Deadlock prevention Deadlock avoidanceResource Allocation graphBankers AlgorithmSafe State Unsafe state Safety Algorithm Resource-Request Algorithm Memory management Address Binding Logical vs Physical Address Space MMU Memory-Management Unit Dynamic loading Dynamic Linking Overlays Swapping Contiguous memory allocation Single-partition allocation Multiple-partition allocation Internal fragmentation Variable Size partitions External fragmentation First-fit Best-fit Worst-fit Non-Contiguous Memory allocation Paging Implementation of Page Table PTLR PTBR page table base register Page-table length register translation look-aside buffers TLB Protection in Paging Shared pages in Paging Segmentation Shared Segments Virtual memory Demand paging Procedure for Handling a Page Fault pure demand paging Page Replacement Page Replacement Algorithms FIFO Optimal Page replacement AlgorithmLeast Recently used (LRU) Page replacement algorithm ThrashingWorking Set model global and local page replacement


                                                  • 101 File Concept Data is stored in secondary memory in the form of files A file is a sequence of bits bytes lines or records A file has structure which depends on its type
                                                  • File Attributes
                                                  • File Operations

2) Counter implementation: a time-of-use field is allocated to each frame. The CPU maintains a counter that is incremented on every page reference; the counter value is copied into the time-of-use field of the referenced page, so the page with the smallest value is the least recently used.
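The counter-based bookkeeping above can be sketched in a few lines of Python. The reference string below is a standard textbook example, not one from these notes:

```python
def lru_replace(reference_string, num_frames):
    """Simulate LRU using a time-of-use counter per resident page."""
    time_of_use = {}   # page -> counter value at its last reference
    counter = 0
    faults = 0
    for page in reference_string:
        counter += 1   # counter is incremented on every page reference
        if page not in time_of_use:
            faults += 1
            if len(time_of_use) == num_frames:
                # victim = page with the smallest time-of-use value
                victim = min(time_of_use, key=time_of_use.get)
                del time_of_use[victim]
        time_of_use[page] = counter   # copy counter into time-of-use field
    return faults

faults = lru_replace([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1], 3)
print(faults)  # 12 page faults with 3 frames
```

The dictionary stands in for the per-frame time-of-use fields; a real MMU would need hardware support to update them on every reference, which is exactly the disadvantage noted below.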

Disadvantages of the LRU page-replacement algorithm: updating the counter or stack must be done for every memory reference, HENCE LRU IS SLOW, and implementation requires hardware assistance.
==============================================================================
Q) LRU approximation page replacement
1) Using a reference bit
2) Using the additional-reference-bits algorithm
3) Second-chance algorithm
4) Enhanced second-chance algorithm
5) Counting-based page replacement, which has two schemes:
   1) Least Frequently Used (LFU)
   2) Most Frequently Used (MFU)

6) Page Buffering Algorithms

Q) What is Thrashing?
Ans: A process is thrashing if it is spending more time paging than executing. Thrashing may occur when either global or local page replacement is used, as explained below. The solution to thrashing is the working-set strategy: if we provide a process with as many frames as it needs, thrashing can be avoided. Since it is not possible to know how many frames a process needs, we use the locality model of process execution.

Here, a locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from locality to locality. If we allocate enough frames to a process to accommodate the size of its current locality, it will not fault again until it changes its current locality. The working-set model uses a working set and a parameter ∆ = the working-set window. The set of pages in the most recent ∆ page references is the working set. Let WSSi be the working-set size for process Pi, i.e., Pi needs WSSi frames. Let D be the total demand for frames; then D = sum of WSSi,

and let m be the total number of available frames.

If D > m, thrashing occurs because some processes will not have enough frames. So:
if (D > m) the OS selects a process to suspend; else the OS may initiate another process.
The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The main difficulty is keeping track of the moving working-set window.
========================================================================
Q) How does thrashing occur in global and local page replacement?
Ans: In global page replacement, when a page fault occurs for a process, any frame in main memory can be replaced, possibly causing a page fault for some other process that needs the replaced frame. Faulting processes must use the paging device to swap pages in and out. As more processes queue up for the paging device,

the READY queue empties and the CPU becomes idle. So the OS introduces new processes since the CPU is free. This further increases the number of page faults.
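The working-set test described earlier (compare the total demand D, the sum of the WSSi, with the number of available frames m) can be sketched as follows; the per-process reference strings, ∆ and m below are assumptions for illustration:

```python
def working_set(references, delta, t):
    """Pages referenced in the window of the last `delta` references ending at time t."""
    start = max(0, t - delta + 1)
    return set(references[start:t + 1])

# hypothetical per-process page-reference strings
procs = {
    "P1": [1, 2, 3, 2, 1, 2],
    "P2": [5, 6, 5, 6, 7, 7],
}
delta = 4   # assumed working-set window
wss = {pid: len(working_set(refs, delta, len(refs) - 1))
       for pid, refs in procs.items()}
D = sum(wss.values())   # total demand D = sum of WSSi
m = 4                   # assumed number of available frames
print(wss, D, "suspend a process" if D > m else "may start another process")
# D = 6 > m = 4, so the OS would suspend a process
```

The code only illustrates the bookkeeping; a real OS approximates the window with reference bits and timer interrupts rather than storing full reference strings.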

In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs, only the frames allocated to that process will be replaced. When a process starts thrashing, other processes are not affected. But since the processes will be queued for the paging device most of the time, the average service time for a page fault increases. Hence effective access time will increase even for a process that is not thrashing.

Q) Page Table Structure (or) Structure of the page table in memory

                                                    Hierarchical Paging Hashed Page Tables Inverted Page Tables

Hierarchical Paging: If the page table is large, we break it up into multiple page tables, i.e., the page table itself is paged. A simple technique is a two-level page table.
Two-Level Paging Example: A logical address (on a 32-bit machine with a 4K page size) is divided into:
1. a page number consisting of 20 bits
2. a page offset consisting of 12 bits
Since the page table is paged, the page number is further divided into:
1. a 10-bit outer page number (p1)
2. a 10-bit inner page number (p2)
Thus a logical address is as follows:

page number | page offset
  p1  |  p2 |     d
  10  |  10 |    12

where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table.
Two-Level Page-Table Scheme

                                                    Address-Translation Scheme Address-translation scheme for a two-level 32-bit paging architecture
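The 10/10/12 split above is just bit shifting and masking; a small Python sketch (the sample address is arbitrary):

```python
PAGE_OFFSET_BITS = 12   # 4K pages -> 12-bit offset d
P2_BITS = 10            # inner page number
P1_BITS = 10            # outer page number

def split(logical_address):
    """Split a 32-bit logical address into (p1, p2, d)."""
    d = logical_address & ((1 << PAGE_OFFSET_BITS) - 1)
    p2 = (logical_address >> PAGE_OFFSET_BITS) & ((1 << P2_BITS) - 1)
    p1 = logical_address >> (PAGE_OFFSET_BITS + P2_BITS)
    return p1, p2, d

p1, p2, d = split(0x12345678)
print(hex(p1), hex(p2), hex(d))  # 0x48 0x345 0x678
```

p1 indexes the outer page table, p2 indexes the inner page table it points to, and d is added to the frame base to form the physical address.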

Hashed Page Tables (or hash tables): Hashed page tables are common in address spaces larger than 32 bits. The hash table is indexed by hash value, and each entry in the hash table contains a linked list of elements that hash to the same location. Each element consists of 3 fields:

                                                    1 Page number2 Frame number3 Pointer to next element in linked list

                                                    The algorithm works as follows Page number is passed to hash function to get hash value Locating the hash value in hash table is easy as Hash table is indexed by hash value Page number is then compared with field 1 in the first element in linked list If there is a match the corresponding frame number is used to get physical address If there is no match next entries in linked list are searched
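The lookup just described can be sketched with Python lists standing in for the chained elements; the bucket count, hash function and sample mappings are assumptions:

```python
NUM_BUCKETS = 16

def hash_fn(page_number):
    return page_number % NUM_BUCKETS   # simple illustrative hash

# each bucket holds a list of (page_number, frame_number) pairs,
# standing in for the linked list of elements
hash_table = [[] for _ in range(NUM_BUCKETS)]

def insert(page, frame):
    hash_table[hash_fn(page)].append((page, frame))

def lookup(page):
    for p, frame in hash_table[hash_fn(page)]:  # walk the chain
        if p == page:                           # match on field 1
            return frame                        # field 2: frame number
    return None                                 # no match -> page fault

insert(3, 7)
insert(19, 2)   # 19 % 16 == 3: collides with page 3, shares the bucket
print(lookup(19), lookup(3), lookup(5))  # 2 7 None
```

Pages 3 and 19 hash to the same bucket, so the chain must be searched linearly, exactly as the algorithm above describes.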

                                                    Inverted Page Table

An inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table has: process-id, page number.

The logical address also has 3 parts: process-id, page number, offset.
The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, then i combined with the offset gives the physical address. If there is no match, the address is illegal.
Advantages and disadvantages

It decreases the memory needed to store the page tables, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to limit the search to one (or at most a few) page-table entries.
Inverted Page Table Architecture
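The linear search through the inverted page table can be sketched as follows; the table contents and page size are assumed for illustration:

```python
# one entry per physical frame: (process_id, page_number)
inverted_table = [("P1", 0), ("P2", 3), ("P1", 2), ("P3", 1)]

PAGE_SIZE = 4096   # assumed 4K frames

def translate(pid, page, offset):
    """Search the table; the matching index i is the frame number."""
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, page):
            return frame * PAGE_SIZE + offset   # physical address
    raise ValueError("illegal address")         # no match

print(translate("P1", 2, 100))  # frame 2 -> 2*4096 + 100 = 8292
```

Note that the search cost grows with the number of physical frames, which is why a hashed variant is used in practice.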

                                                    Unit-IV CPU Scheduling

                                                    Basic conceptsTo maximize CPU utilization some process must be running at all times

                                                    In multi programming several processes will be in main memory at a given time If running process has to wait for IO the OS takes the CPU away from that process and assigns CPU to other process in ready queue So CPU must be scheduled and CPU scheduling is fundamental OS function

                                                    CPU- IO burst cycleProcess execution begins with a CPU burst that is followed by an IO burst which is followed by another CPU burst then another IO burst and so on as shown in the figure

                                                    An IO bound program has many short CPU bursts A CPU-bound program has few long CPU bursts

Schedulers: a scheduler is a module of the OS that makes scheduling decisions.
1. Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
2. Medium-term scheduler: selects which processes should be swapped in/out of memory.
3. Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (or CPU) scheduler.

                                                    Dispatcher gives CPU control to the process selected by short term scheduler This function involves following

                                                    1 switching context2 switching to user mode3 jumping to proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency; it must be kept small.
Ready Queue: The ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of processes.

                                                    Pre-emptive Scheduling

                                                    CPU scheduling decisions take place under one of four conditions

                                                    1 When a process switches from the running state to the waiting state2 When a process switches from the running state to the ready state for example in response to an

                                                    interrupt3 When a process switches from the waiting state to the ready state say at completion of IO 4 When a process terminates

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one.

If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive or cooperative. In non-preemptive scheduling, a process runs until it itself gives up the CPU, i.e., when it waits for I/O or when it completes. Otherwise the system is said to be pre-emptive.

                                                    Windows used non-preemptive scheduling up to Windows 3x and started using pre-emptive scheduling with Win95 Macs used non-preemptive prior to OSX and pre-emptive since then Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt

                                                    Note that pre-emptive scheduling can cause problems when two processes share data because one process may get interrupted in the middle of updating shared data structures

                                                    Preemption can also be a problem if the kernel is busy implementing a system call ( eg updating critical kernel data structures ) when the preemption occurs Solution wait until the system call has either completed or blocked before allowing the pre-emption But this solution is problematic for real-time systems

                                                    Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting the section But this should only be done only on very short pieces of code that will take less CPU time

Scheduling criteria used to decide among CPU scheduling algorithms:
1. CPU utilization: in a real system, CPU utilization ranges from 40% to 90%.
2. Throughput: the number of processes completed per unit time.
3. Turnaround time: (time when the process completed) - (time when the process was submitted); it includes waiting time and burst time.
4. Waiting time: the amount of time a process spends waiting for the CPU.
5. Response time: (time when the first response came) - (time when the process was submitted).

Optimization criteria: max CPU utilization, max throughput, min turnaround time, min waiting time, min response time.

CPU Scheduling Algorithms
1) First-Come, First-Served (FCFS) Scheduling
FCFS is non-pre-emptive. Processes are scheduled in the order they arrive. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages: 1. Average waiting time is quite long.

2. Convoy effect (it occurs when small processes wait in the queue for a big process to leave the CPU).
Example:
Process  Burst Time
P1       24
P2       3
P3       3

                                                    Suppose that the processes arrive in the order P1 P2 P3

The Gantt chart for the schedule is:
| P1 | P2 | P3 |
0    24   27   30
Waiting time: P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.

2) Shortest-Job-First (SJF) Scheduling
Schedule the process with the shortest burst time. If the CPU burst times of two processes are the same, FCFS is used.
Advantages: average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either pre-emptive or non-pre-emptive:
1. Non-pre-emptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Pre-emptive SJF, or Shortest-Remaining-Time-First (SRTF): if the newly arrived process is shorter than what remains of the currently executing process, pre-empt the currently executing process.

Example:
Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4

SJF (non-preemptive) Gantt chart:
| P1 | P3 | P2 | P4 |
0    7    8    12   16
Average waiting time = [0 + (8-2) + (7-4) + (12-5)]/4 = 4
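The two non-preemptive examples above (FCFS and SJF) can be checked with a short simulator; the process tuples mirror the tables in the text:

```python
def avg_waiting(processes, policy):
    """processes: list of (name, arrival, burst); returns average waiting time."""
    remaining = list(processes)
    time, total_wait = 0, 0
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                       # CPU idle until next arrival
            time = min(p[1] for p in remaining)
            continue
        if policy == "FCFS":
            ready.sort(key=lambda p: p[1])  # earliest arrival first
        else:                               # non-preemptive SJF
            ready.sort(key=lambda p: p[2])  # shortest burst first
        name, arrival, burst = ready[0]
        total_wait += time - arrival        # waited from arrival until now
        time += burst                       # run to completion (no preemption)
        remaining.remove((name, arrival, burst))
    return total_wait / len(processes)

# FCFS example from the text: all arrive at time 0 in order P1, P2, P3
print(avg_waiting([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)], "FCFS"))  # 17.0
# non-preemptive SJF example from the text
print(avg_waiting([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)], "SJF"))  # 4.0
```

Both results match the hand-computed averages of 17 and 4 above.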

                                                    Example of Preemptive SJF

Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4

SJF (preemptive) Gantt chart:
| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16
Average waiting time = (9 + 1 + 0 + 2)/4 = 3

3) Priority Scheduling
A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. preemptive 2. non-preemptive.

Problem: starvation - low-priority processes may never execute. Solution: aging - as time progresses, increase the priority of the process.

4) Round Robin (RR)
Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance:
1. q large => RR degenerates to FIFO
2. q small => q must still be large with respect to the context-switch time, otherwise the overhead is too high

Example of RR with Time Quantum = 4:
Process   Burst Time
P1        24
P2        3
P3        3

The Gantt chart is:
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 = 5.66

5) Multilevel Queue Scheduling
The ready queue is partitioned into separate queues. For example, the ready queue is divided into 2 queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, the Round Robin scheduling algorithm can be used in the foreground queue and the FCFS scheduling algorithm in the background queue.
Scheduling must also be done between the queues. This can be done in 2 ways:
1. Fixed priority scheduling
The foreground queue has the highest priority. All the processes in the foreground queue must be completed and the queue must be empty; only then are the processes in the background queue given the CPU, as shown in the figure below. Disadvantage: starvation.
2. Time slice - each queue gets a certain amount of CPU time which it can schedule amongst its processes, e.g. 80% to the foreground queue in RR and 20% to the background queue in FCFS.

6) Multilevel Feedback Queue Scheduling
A process can move between queues. The idea is to separate processes according to their CPU bursts. I/O-bound and interactive processes will be in the highest priority queue. If a process takes more CPU time, it is moved to a lower priority queue. If a process waits too long in a lower priority queue, it is moved to a higher priority queue to prevent starvation.
As shown in the figure below, let there be 3 queues Q0, Q1, Q2:
1. Q0 - time quantum 8 milliseconds
2. Q1 - time quantum 16 milliseconds
3. Q2 - FCFS

Scheduling:
1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 milliseconds. If it does not finish in 8 milliseconds, the process is moved to queue Q1.
2. At Q1 the process receives an additional 16 milliseconds. If it still does not complete, it is moved to queue Q2.

Q) Synchronization
Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.
An example of process synchronization is the Producer-Consumer problem (or) Bounded-Buffer problem.
Producer-Consumer problem: make sure that the producer process does not add data when the buffer is full and that the consumer process does not take data when the buffer is empty.

The code for the producer process can be modified as follows:

while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. Counter is incremented every time a new item is added to the buffer and decremented every time an item is removed from the buffer. "in" is an index that always points to the next empty slot after the last filled slot; "out" is an index that always points to the first filled slot, at the head of the circular queue.

The code for the consumer process is as shown below:

while (true) {
    while (counter == 0)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The above producer and consumer code produces a RACE CONDITION: the outcome of the execution of several co-operating processes depends on the order in which their accesses to shared data take place.
For example, let counter = 5. counter++ may be implemented in machine language as three steps (register1 = counter; register1 = register1 + 1; counter = register1), and counter-- similarly with register2. Consider this interleaving:

T0: producer executes register1 = counter         {register1 = 5}
T1: producer executes register1 = register1 + 1   {register1 = 6}
T2: consumer executes register2 = counter         {register2 = 5}
T3: consumer executes register2 = register2 - 1   {register2 = 4}
T4: producer executes counter = register1         {counter = 6}
T5: consumer executes counter = register2         {counter = 4}

Given counter = 5, if the producer produces one item and the consumer consumes one item, the correct value of counter is 5. But here counter = 4, an inconsistent result. If T5 is executed before T4, then counter = 6, again an inconsistent result.

Solution to avoid the race condition: make sure that only one process at a time updates the shared variable (here, counter).
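The two inconsistent outcomes can be reproduced by simulating the machine-level steps of counter++ (load, increment, store) and counter-- (load, decrement, store) in different orders (a sketch; the step labels T0-T5 follow the example above):

```python
def run(schedule, counter=5):
    """Execute the machine-level steps of counter++ (producer) and
    counter-- (consumer) in the order given by `schedule`."""
    state = {"counter": counter, "r1": 0, "r2": 0}
    steps = {
        "T0": lambda s: s.update(r1=s["counter"]),    # producer: r1 = counter
        "T1": lambda s: s.update(r1=s["r1"] + 1),     # producer: r1 = r1 + 1
        "T2": lambda s: s.update(r2=s["counter"]),    # consumer: r2 = counter
        "T3": lambda s: s.update(r2=s["r2"] - 1),     # consumer: r2 = r2 - 1
        "T4": lambda s: s.update(counter=s["r1"]),    # producer: counter = r1
        "T5": lambda s: s.update(counter=s["r2"]),    # consumer: counter = r2
    }
    for t in schedule:
        steps[t](state)
    return state["counter"]

print(run(["T0", "T1", "T2", "T3", "T4", "T5"]))   # 4  (T5 after T4)
print(run(["T0", "T1", "T2", "T3", "T5", "T4"]))   # 6  (T5 before T4)
```

Neither 4 nor 6 is the correct value 5, which is exactly the data inconsistency the text describes.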

Q) Critical section problem
Ans: Each process has a segment of code called its critical section. Critical section code is the code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute critical section code at the same time. For this to happen, each process is structured as:
1. Entry section - each process must request permission to enter its critical section.
2. Exit section - marks the end of the critical section.
3. Remainder section - the rest of the code after the critical section.

A solution to the critical section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: if no process is in its critical section, only processes not in their remainder sections take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: there is a bound on the number of times other processes may enter their critical sections after a process has requested entry and before that request is granted, so no process waits indefinitely.

Q) Peterson's solution for critical section problem (Software based solution)
Ans:
Time   Process 1                                 Process 2
T0     do {                                      do {
       flag[1] = TRUE; turn = 2;                 flag[2] = TRUE;
T1                                               turn = 1;
T2     while (flag[2] && turn == 2);             while (flag[1] && turn == 1);
       Since turn is now 1, the loop             Loops continuously until flag[1]
       condition is false and P1 proceeds        becomes false
T3     Enters critical section
T4     flag[1] = FALSE;                          Enters critical section
T5     Enters remainder section                  flag[2] = FALSE;
       } while (TRUE);                           Enters remainder section
                                                 } while (TRUE);
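Peterson's entry and exit protocol can be exercised with two Python threads protecting a shared counter (a sketch: CPython's GIL already serializes individual bytecodes, so this only illustrates the logic; on real hardware, compiler and memory reordering would require additional barriers):

```python
import threading

N = 10000
flag = [False, False]   # flag[i]: process i wants to enter
turn = [0]              # index of the process asked to yield
count = [0]             # shared variable updated only in the critical section

def worker(i):
    other = 1 - i
    for _ in range(N):
        flag[i] = True                 # entry section: announce intent
        turn[0] = other                # give priority to the other process
        while flag[other] and turn[0] == other:
            pass                       # busy wait
        count[0] += 1                  # critical section (non-atomic in general)
        flag[i] = False                # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start(); t0.join(); t1.join()
print(count[0])   # 20000: no increment was lost
```

Without the entry/exit protocol, the two threads could interleave the load-increment-store of `count[0] += 1` and lose updates, exactly as in the race-condition example.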

10.1 File Concept
Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

                                                    File Attributes

1. Name - name of the file, for user understanding.
2. Identifier - used by the OS to identify a file (e.g. inode number).
3. Type - text files, executable files, binary files, etc.
4. Location - location of the file on the hard drive.
5. Size - current size of the file.
6. Protection - controls who can read, write, or execute the file.
7. Time, date, and user identification - records when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. The directory structure is kept on disk.

                                                    File Operations

The file Abstract Data Type supports many common operations:
1. Creating a file: find free space on disk, add an entry to the directory.
2. Writing a file: write data at the current file-position pointer location and update the pointer.
3. Reading a file: read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek): change the pointer location to a given value.
5. Deleting a file: free the space allocated to the file and remove its entry from the directory.
6. Truncating a file: delete the data and update the file size.

                                                    Open File Table

Most OSes require that files be opened before access and closed after access. Information about all files that are currently open in the system is stored in an open file table. This system-wide open file table is maintained by the OS, and information about the files opened by a process is stored in that process's own open file table. The open file table has the following information:

1. File pointer - records the current position in the file for the next read or write.
2. File-open count - how many times the file has been opened (simultaneously, by different processes) and not yet closed. When this counter reaches zero, the file entry can be removed from the table.
3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. An advisory lock: software developers make sure the locks are acquired and released appropriately.
4. A mandatory lock: the OS makes sure the locks are acquired and released appropriately.
5. UNIX uses advisory locks and Windows uses mandatory locks.

                                                    File Types

Implementing file types: a common technique is to include the type as part of the filename. The name is split into 2 parts:
1. File Name 2. Extension
The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on that file. For example, a user opens Microsoft Word, clicks the File menu and the Open option, and specifies only the file name without the extension; the application looks for a file with the given name and the extension it expects. In such cases extensions are only hints to applications and are not enforced by the operating system. A UNIX system uses a crude magic number stored at the beginning of the file to indicate the type of file. In the Mac OS X operating system, every file records a creator attribute: a file produced by a word processor has the word processor's name as its creator, so when the user double-clicks the file, that word processor opens it. Some common file types are as shown below.

File Structure: some files contain an internal structure which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and what the location of the first instruction is.
If an OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, locating the first executable statement, etc.).

Macintosh files have two forks - a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                                                    Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units such as a single byte, a data record, or a structure. The number of logical units which fit into one physical block determines the packing. If a student file is organized internally as records, each record takes 64 bytes, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e. 8 records can be stored per block. Internal fragmentation may occur because of this.
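The packing arithmetic can be written out directly (a sketch; the 512/64 numbers are the ones from the example, the 100-byte record is an illustrative case where internal fragmentation appears):

```python
def packing(block_size, record_size):
    """Logical records per physical block, and bytes wasted per block."""
    records_per_block = block_size // record_size            # the packing
    wasted = block_size - records_per_block * record_size    # internal fragmentation
    return records_per_block, wasted

print(packing(512, 64))    # (8, 0): 8 records per block, no waste
print(packing(512, 100))   # (5, 12): 12 bytes lost per block
```

When the record size does not divide the block size evenly, the leftover bytes in every block are the internal fragmentation the text mentions.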

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or) Relative access

1. Sequential access: we cannot randomly access any record. Records are read and written sequentially, one record after another. A sequential access file emulates magnetic tape operation and generally supports the following operations:
- read next: read a record and advance to the next position
- write next: write a record and advance to the next position
- rewind
- skip n records: may or may not be supported; n may be limited to positive numbers or may be limited to +/- 1

2. Direct access (or) Relative access: a file is made up of fixed-length records so that we can randomly access any record. There is no restriction on the order of reading or writing. Direct access files are useful when we need to access large amounts of data randomly, e.g. a database.
The operations supported include (n is the relative block number, i.e. relative to the beginning of the file):
1. read n - read record number n
2. write n - write record number n
3. jump to record n - n could be 0 or the end of the file
4. query current record - used to return to this record later
5. Sequential access can be easily emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct access method. We construct an index for the file which contains pointers to the various blocks. To search the file, we first access the index and then use the pointer to access the file directly.
If a student file has 120,000 records sorted by student id, we can create an index consisting of the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120,000 records = 120000/64 = 1875 blocks, so the index has 1875 entries. The index is kept in main memory so that searching becomes faster.
If the index itself becomes large, we create an index for the index file (e.g. a B-tree).
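The index sizing in this example is simple ceiling arithmetic (a sketch using the example's figures):

```python
import math

def index_entries(num_records, records_per_block):
    """One index entry (the first key in each block) per data block."""
    return math.ceil(num_records / records_per_block)

blocks = index_entries(120000, 64)
print(blocks)   # 1875 data blocks, hence 1875 index entries
```

With 1875 entries the index easily fits in main memory; only when an index grows much larger does a second index level (or a B-tree) become necessary.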

                                                    Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                                                    Three major methods of allocating disk space are

1. Contiguous
2. Linked
3. Indexed

1) Contiguous allocation: each file occupies a set of contiguous blocks on disk. If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1. The directory entry for each file indicates the address of the starting block (i.e. b) and the length (n).

                                                    Advantages

1. Direct access to block i is easy: its disk address is simply b+i. Since the file is stored sequentially, sequential access is also easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

                                                    Disadvantages

1. It is difficult to find space for a new file (or) to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.
2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.
3. The size of the file must be known when the file is created. Even when the final size is known, the entire size may not be used by the file at the beginning, which leads to wastage of space.
If the size of the file is not known in advance, then:
(i) If we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This is possible only as long as such a hole exists, and it is time consuming.
(ii) If we overestimate the amount of space needed, much of the space is unused. This leads to internal fragmentation.
To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme. A set of contiguous blocks is allocated at first; if the file needs more blocks, another set of contiguous blocks, known as an extent, is added. The directory entry then records the location and block count of the initial allocation plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2) Linked Allocation
Each file is a linked list of disk blocks; the disk blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512-4 = 508 bytes. To create a new file, we simply create a new entry in the directory; the pointer is nil for an empty file and the size field is 0.
To add more data to an existing file, a free block is found, the data is written to it, and it is linked to the end of the file. To read a file, we read the blocks by following the pointers from block to block.

Advantages:
1. Easy to find space for a new file (or) to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is more.
2. The number of disk seeks required is more.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct access files: to access the ith block, i disk reads are required.
5. Not reliable: the rest of a file is lost if a pointer is lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk accesses, reduces the space wasted on pointers since fewer pointers are needed, and simplifies free-list management. Disadvantage: internal fragmentation - more space is wasted when a cluster is partially full.
A solution to disadvantage 5 is to use a doubly linked list (or) to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file allocation table (FAT). A section of the disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. The entry for a block contains the block number of the next block of the file. The last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

Advantages:
1. Easy to find space for a new file (or) to add more data to an existing file.
2. Random access time is improved, since the location of any block can be found by reading the FAT.

Disadvantage: the number of disk seeks is more. The disk head must move to read the FAT and find the location of the block, then move to the location of the block itself.
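FAT chaining can be sketched as a table indexed by block number, where each entry holds the next block of the file (the block numbers reuse the 5-block linked-allocation example above; the EOF marker value is illustrative):

```python
EOF = -1   # illustrative end-of-file marker

# One FAT entry per disk block: next block of the file, EOF, or 0 if free
fat = {9: 16, 16: 1, 1: 10, 10: 25, 25: EOF, 7: 0}

def file_blocks(start):
    """Follow the FAT chain from a file's starting block to end-of-file."""
    blocks, b = [], start
    while b != EOF:
        blocks.append(b)
        b = fat[b]            # the table entry names the next block
    return blocks

print(file_blocks(9))   # [9, 16, 1, 10, 25]
```

Note that the traversal reads only the FAT, not the data blocks themselves, which is why random access improves: the chain for any block can be followed in the (cached) table.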

3) Indexed allocation
Each file has its own index block, which is an array of disk block addresses: the ith entry in the index block points to the ith block of the file. The directory contains the address of the index block. When the file is created, all pointers in the index block are set to nil. To add more data to an existing file, a free block is found, the data is written to it, and the address of the block is put in the index block.

                                                    Advantages

1. Easy to find space for a new file (or) to add more data to an existing file.
2. Supports direct (or) random access efficiently.
3. No external fragmentation.

Disadvantages:
1. Seek time is more.
2. The number of disk seeks required is more.
3. Not reliable: losing the index block loses the whole file.
4. The index block should be kept in memory. If this memory is not available, we must read the index block and then the desired data block, i.e. 2 disk accesses are required. With multiple levels of index blocks, accessing a block near the end of the file may require reading several index blocks before the needed data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store these one or two pointers.

Various mechanisms are used for the size of the index block:
1. Linked scheme: the size of an index block is one disk block. The index block contains a small header with the name of the file and a set of the first 100 disk block addresses. The last address contains the address of another index block, for large files.
2. Multilevel index: to access a block, the OS uses the first-level index block to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.
3. Combined scheme: for example, say there are 15 index-block pointers in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, an index block containing the addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block, as shown below.

                                                    Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                                    The free space list can be implemented as

1. Bit vector (or) Bit map
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or) Bit map: each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000...

Advantages: simple, and makes it easy to find contiguous free space for files.

Disadvantages:
1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. Bit maps of smaller disks can be kept in main memory, but it may not be possible to keep the bit map of a larger disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1 TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks may be scattered all over the disk.
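Allocating a block with the bit-vector scheme is a scan for the first 1 bit (a sketch using the example bitmap, with the convention above that bit i = 1 means block i is free):

```python
bitmap = "001111001000"   # blocks 2, 3, 4, 5, and 8 are free

def first_free(bitmap):
    """Return the block number of the first free block, or None if full."""
    for i, bit in enumerate(bitmap):
        if bit == "1":
            return i
    return None

print(first_free(bitmap))   # 2
```

Real implementations speed this up by skipping whole words that are all zero and using hardware instructions to find the first set bit within a word.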

2. Linked List: the free-space list is implemented by linking all the free disk blocks together. A pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: contiguous space cannot be found easily, and to traverse the free-space list we must read each block, which requires substantial I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last block contains the addresses of another n free blocks, and so on.
Advantage: a large number of free blocks can be found quickly.

4. Counting: the free-space list can be maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it. Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, but the list is shorter as long as counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.
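The counting scheme compresses runs of contiguous free blocks into (address, count) pairs (a sketch, using the same free blocks as the bitmap example):

```python
def to_counting(free_blocks):
    """Convert a list of free block numbers into (first_block, count) runs."""
    runs = []
    for b in sorted(free_blocks):
        if runs and b == runs[-1][0] + runs[-1][1]:
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((b, 1))                         # start a new run
    return runs

print(to_counting([2, 3, 4, 5, 8]))   # [(2, 4), (8, 1)]
```

Five free blocks shrink to two entries here; the scheme pays off exactly when free space tends to be contiguous, as the text notes.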

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activities (allocations and frees), in time order, in counting format. When ZFS allocates or frees space from a metaslab, it loads the associated space map into memory and updates it. Finally, the free-space list is updated on disk.

Synchronization Hardware
A hardware solution to synchronization is to provide atomic operations. These operations execute as a single instruction, without interruption. Two such operations are:
1. TestAndSet() hardware instruction
2. Swap() instruction

1. The TestAndSet() instruction definition is as shown.
Solution using TestAndSet(): the shared variable lock is initialized to false.

2. The Swap() instruction definition is as follows.
Solution using Swap(): the shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

Both the TestAndSet() and Swap() instructions satisfy the mutual exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee as to the relative rates of the processes, a very fast process could theoretically release the lock, whip through its remainder section, and re-lock the lock before a slower process got a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The figure below illustrates a solution using TestAndSet() that satisfies bounded waiting. It uses two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

Bounded-waiting mutual exclusion with TestAndSet():

On exiting its critical section, a process looks in order (starting with the next process on the list) for a process that has been waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby allowing that specific process into the critical section while continuing to block all the others.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1. wait() 2. signal()

Wait primitive:
wait(S) {
    while (S <= 0)
        ; /* do nothing */
    S--;
}

Signal primitive:
signal(S) {
    S++;
}

Each process that wishes to use a resource performs the wait() operation. When a process releases a resource, it performs the signal() operation.

                                                    Semaphores Usage1 binary semaphore or mutex locks can take values 0 (or) 1

2. Counting semaphores can take any integer value. A counting semaphore is used to count the remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources; when the counter is 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal() call. For example, suppose there are 3 resources and 4 processes, so the counting semaphore starts at 3. If process P1 requires a resource, it performs a wait() operation; the counting semaphore is now 2. If process P2 requires a resource, it performs a wait() operation; the counting semaphore is now 1. If process P3 requires a resource, it performs a wait() operation; the counting semaphore is now 0. If process P4 then requires a resource and performs a wait() operation, P4 waits until a resource becomes available.
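The 3-resources / 4-processes scenario above can be sketched with Python's built-in counting semaphore. This is only a sketch: `Semaphore.acquire()` plays the role of wait() and `release()` the role of signal(), and the thread names are illustrative.

```python
import threading

resources = threading.Semaphore(3)   # 3 identical resources
in_use = []
lock = threading.Lock()              # protects the in_use list

def process(name):
    resources.acquire()              # wait(): blocks when the count reaches 0
    with lock:
        in_use.append(name)
    # ... use the resource ...
    resources.release()              # signal(): frees the resource

threads = [threading.Thread(target=process, args=(f"P{i}",)) for i in range(1, 5)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(in_use))   # all four processes eventually got a resource
```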

                                                    3 Semaphores can also be used to synchronize certain operations

Semaphore implementation: While one process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spinlock, as the process "spins" while waiting. Semaphores can also enforce ordering: for example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

                                                    First we create a semaphore named synch that is shared by the two processes and initialize it to zero Then in process P1 we insert the code

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait(synch) until after P1 executes signal(synch). Each semaphore has an integer value and maintains a queue of waiting processes.
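A minimal Python sketch of this ordering trick, assuming the two "processes" are threads; `Semaphore(0)` makes the first acquire block until the release:

```python
import threading

synch = threading.Semaphore(0)   # initialized to 0, as in the text
order = []

def p1():
    order.append("S1")   # statement S1
    synch.release()      # signal(synch)

def p2():
    synch.acquire()      # wait(synch): blocks until P1 signals
    order.append("S2")   # statement S2

t2 = threading.Thread(target=p2); t2.start()   # start P2 first on purpose
t1 = threading.Thread(target=p1); t1.start()
t1.join(); t2.join()
print(order)   # always ['S1', 'S2']
```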

Semaphore implementation with no busy waiting: To overcome busy waiting, we block the waiting process; the blocked process is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue to the ready state).
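One way to sketch such a blocking semaphore in Python is with a condition variable. Note this follows the block/wakeup idea rather than the textbook's exact negative-count bookkeeping:

```python
import threading

class BlockingSemaphore:
    """Semaphore with no busy waiting: a caller that cannot proceed is
    blocked on a condition variable and later woken by signal()."""
    def __init__(self, value):
        self.value = value
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            while self.value == 0:
                self.cond.wait()     # block: no CPU wasted spinning
            self.value -= 1

    def signal(self):
        with self.cond:
            self.value += 1
            self.cond.notify()       # wakeup(): move one waiter to ready

s = BlockingSemaphore(1)             # used here as a mutex
counter = 0

def worker():
    global counter
    for _ in range(1000):
        s.wait()
        counter += 1                 # critical section
        s.signal()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 3000
```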

Problems due to semaphores:
1. Deadlock: a deadlock occurs when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes.

2. Starvation: one or more processes remain blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithm for adding processes to the waiting queue or for selecting the one to be removed by the signal() call: if a LIFO queue is chosen, the first process that starts waiting may never get its turn.

3. Priority inversion: Let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it pre-empts L, and M runs. Here M runs before H even though M has a lower priority than H. To avoid this, L is temporarily given high priority so that M does not pre-empt it.

Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem. Consider n buffers, each holding one item; a producer process which creates items (one at a time); and a consumer process which consumes them (one at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, they must be synchronized. For that we use 3 semaphores:

1. empty = n (counts the number of empty buffers)
2. full = 0 (counts the number of filled buffers)
3. mutex = 1 (provides mutual exclusion for access to the buffer)

Producer process:

do {
    ...
    produce an item
    ...
    wait(empty);
    wait(mutex);
    ...
    add item to buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:

do {
    wait(full);
    wait(mutex);
    ...
    remove an item from buffer
    ...
    signal(mutex);
    signal(empty);
    ...
    consume the item
    ...
} while (1);
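The pseudocode above maps directly onto Python semaphores. This is a sketch with one producer, one consumer, and an assumed buffer size of 5:

```python
import threading, collections

N = 5
buffer = collections.deque()
empty = threading.Semaphore(N)   # counts empty slots, initially n
full  = threading.Semaphore(0)   # counts filled slots
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                    # wait(empty)
        mutex.acquire()                    # wait(mutex)
        buffer.append(item)                # add item to buffer
        mutex.release()                    # signal(mutex)
        full.release()                     # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()                     # wait(full)
        mutex.acquire()                    # wait(mutex)
        consumed.append(buffer.popleft())  # remove an item from buffer
        mutex.release()                    # signal(mutex)
        empty.release()                    # signal(empty)

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start(); p.join(); c.join()
print(consumed)   # items arrive in production order: [0, 1, ..., 19]
```

With a single producer and a single FIFO buffer, the consumer always sees items in production order.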

2. The Readers-Writers Problem: A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (i.e., it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

Readers priority: no new reader waits merely because a writer is waiting. Writers priority: if a writer is waiting to access the data, no new readers may start reading.

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses an integer counter and two semaphores:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:

wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
...
reading is performed
...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

Writer process:

do {
    wait(wrt);
    ...
    writing is performed
    ...
    signal(wrt);
} while (TRUE);
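A runnable sketch of the readers-priority solution, with threads standing in for processes; the shared `data` dictionary and the thread counts are illustrative assumptions:

```python
import threading

readcount = 0                    # number of active readers
mutex = threading.Semaphore(1)   # protects readcount
wrt   = threading.Semaphore(1)   # exclusive access for writers
data  = {"value": 0}
reads = []

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:           # first reader locks out writers
        wrt.acquire()
    mutex.release()
    reads.append(data["value"])  # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:           # last reader lets writers back in
        wrt.release()
    mutex.release()

def writer():
    wrt.acquire()
    data["value"] += 1           # writing is performed
    wrt.release()

threads = [threading.Thread(target=writer) for _ in range(5)] + \
          [threading.Thread(target=reader) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(data["value"])   # all five writes applied: 5
```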

3. The Dining-Philosophers Problem: n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already held by a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5];   // initially all values are 1

Philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    ...
    eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    ...
    think
    ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock.

Q) Types of operating systems
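A Python sketch of the chopstick semaphores. Note this is a deadlock-free variant, not the plain solution above: the last philosopher picks up the chopsticks in the opposite order, which breaks the circular-wait condition.

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds):
    for _ in range(rounds):
        # Everyone picks left then right, except the last philosopher,
        # who reverses the order; the plain solution in the text (all
        # left-then-right) can deadlock if all grab their left at once.
        first, second = i, (i + 1) % N
        if i == N - 1:
            first, second = second, first
        chopstick[first].acquire()    # wait(chopstick[...])
        chopstick[second].acquire()
        meals[i] += 1                 # eat
        chopstick[second].release()   # signal(chopstick[...])
        chopstick[first].release()
        # think

threads = [threading.Thread(target=philosopher, args=(i, 10)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)   # every philosopher ate 10 times, with no deadlock
```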

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. These systems do not have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within a time constraint.

                                                    Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that response requirements will be met under all conditions.
2. A soft real-time system cannot guarantee that response requirements will be met under all conditions. Ex:

                                                    Digital audio or multimedia systems Digital telephones

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control, i.e., it spreads its decision-making actions across the different computers in the system, so that failures of individual computers or of the network do not cripple its operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; all of that should be handled automatically by the operating system.

                                                    Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

                                                    DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location Modern desktop systems support multiprogramming Common examples are Linux FreeBSD Windows 8 and the Macintosh operating system

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. They have very limited memory.


                                                    CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system The components of a cluster are usually connected to each other through fast local area networks each node (computer used as a server) running its own instance of an operating system

                                                    Q) Modern operating system

New developments in hardware, in applications, and in security threats lead to the development of modern operating systems.

                                                    New Hardware developments like increased machine speed high speed network increased size and variety of memory devices

                                                    New Applications developments like multimedia applications internet and web access and client server computing

                                                    Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as there are now often multiple processors in a single system, an SMP operating system provides greater efficiency when multiple processors are available. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                                    SMP schedules processes or threads across all of the processors

                                                    Advantages

1. Increased performance: if some portion of a program can be run in parallel, the performance of an SMP OS increases.

2. Increased availability: as all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can be made dependent on the number of processors used.

4. Distributed operating systems: distributed systems allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel. The OS can be customized without affecting system integrity.

Q) Linux Operating System: Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system, and its main design goals are speed, efficiency, and standardization. Components of the Linux System:


The Linux operating system has primarily three components, as shown in the diagram below:

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system, interacts directly with hardware, and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs used by application programs to access the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks such as updating log files and accepting login requests from terminals.

Kernel Modules: sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support: 1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel. 2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available. 3. Conflict resolution: protects one driver's reserved resources from accidental use by another driver.

Kernel Mode vs User Mode
Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require a context switch; hence it is very efficient and fast. The kernel runs each process and provides system services and protected access to hardware.
Code which is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or to kernel code. User programs and utilities use the system libraries to invoke kernel functions for low-level tasks.
Basic Features
Following are some of the important features of the Linux operating system:

Portable - Portability means the software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on almost any kind of hardware platform.

Open Source - Linux source code is freely available, and Linux is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system, meaning multiple users can access system resources (memory, application programs, etc.) at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system, perform various types of operations, call application programs, etc.


Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

Architecture: The Linux system architecture consists of the following layers:

Hardware layer - Hardware consists of all physical devices (RAM, HDD, CPU, etc.).

Kernel - The core component of the operating system; interacts directly with hardware and provides low-level services to upper-layer components.

Shell - An interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System
1. Extensibility: layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability: XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3. Reliability: XP uses hardware protection for virtual memory and software protection mechanisms for operating-system resources.
4. Compatibility: applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
5. Performance: XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. It is designed for symmetric multiprocessing.
6. International support: different locales are supported via the national language support (NLS) API.

XP Architecture: 1. A layered system of modules. 2. Protected mode: hardware abstraction layer (HAL), kernel, executive. 3. User mode: a collection of subsystems.
a. Environmental subsystems emulate different operating systems. b. Protection subsystems provide security functions.

                                                    XP architecture


Q) Windows Network Operating System. Example: the Windows 2000 operating system.

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.




the READY queue empties and the CPU becomes idle. So the OS introduces new processes, as the CPU is free. This further increases the number of page faults.

In local page replacement, each process is allocated certain frames in main memory, and when a page fault occurs only the frames allocated to that process may be replaced. When a process starts thrashing, other processes are not affected. But since the processes queue for the paging device most of the time, the average service time for a page fault increases; hence the effective access time increases even for a process that is not thrashing.

Q) Page-Table Structure (structure of the page table in memory)

                                                      Hierarchical Paging Hashed Page Tables Inverted Page Tables

Hierarchical Paging: If the page table is large, we break it up into multiple page tables, i.e., the page table itself is paged. A simple technique is a two-level page table.
Two-Level Paging Example: A logical address (on a 32-bit machine with a 4K page size) is divided into 1. a page number consisting of 20 bits and 2. a page offset consisting of 12 bits. Since the page table is paged, the page number is further divided into 1. a 10-bit outer page number (p1) and 2. a 10-bit inner page number (p2). Thus a logical address is as follows:

       page number        | page offset
p1 (10 bits) | p2 (10 bits) | d (12 bits)

where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table. Two-Level Page-Table Scheme:

                                                      Address-Translation Scheme Address-translation scheme for a two-level 32-bit paging architecture
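The 10/10/12 split behind this scheme can be sketched as plain bit manipulation; the sample address below is arbitrary:

```python
# Split a 32-bit logical address into a 10-bit outer index p1,
# a 10-bit inner index p2, and a 12-bit offset d (4K pages).
def split_address(addr):
    d  = addr & 0xFFF            # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: inner page-table index
    p1 = (addr >> 22) & 0x3FF    # top 10 bits: outer page-table index
    return p1, p2, d

p1, p2, d = split_address(0x12345678)
print(p1, p2, d)   # 72 837 1656
```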

Hashed Page Tables (or hash table): Hashed page tables are common in address spaces larger than 32 bits. Each entry in the hash table contains a linked list of elements that hash to the same location, and the hash table is indexed by hash value. Each element consists of 3 fields:

                                                      1 Page number2 Frame number3 Pointer to next element in linked list

The algorithm works as follows: the page number is passed to the hash function to get a hash value. Locating the hash value in the hash table is easy, as the table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address. If there is no match, the next entries in the linked list are searched.
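A sketch of this lookup in Python, using lists of pairs in place of linked lists; the table size and hash function here are illustrative assumptions:

```python
TABLE_SIZE = 8

def hash_fn(page):
    # a toy hash function; real systems use stronger mixing
    return page % TABLE_SIZE

table = [[] for _ in range(TABLE_SIZE)]   # one chain per hash value

def insert(page, frame):
    table[hash_fn(page)].append((page, frame))

def lookup(page):
    # walk the chain, comparing page numbers as the algorithm describes
    for p, f in table[hash_fn(page)]:
        if p == page:
            return f
    return None   # not mapped: page fault

insert(3, 10); insert(11, 42)   # pages 3 and 11 collide (both hash to 3)
print(lookup(3), lookup(11), lookup(19))   # 10 42 None
```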

                                                      Inverted Page Table

An inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table has:

Process-id, Page number

A logical address also has 3 parts: Process-id, Page number, Offset. The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, the frame number i is combined with the offset to form the physical address. If there is no match, the address is illegal.

Advantages and disadvantages
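A small Python sketch of the lookup; the frame contents and the 4K page size are illustrative assumptions:

```python
# One entry per physical frame, holding (process_id, page_number);
# the index of the matching entry IS the frame number.
inverted = [("P1", 0), ("P2", 0), ("P1", 1), (None, None)]  # 4 frames
PAGE_SIZE = 4096

def translate(pid, page, offset):
    for frame, entry in enumerate(inverted):
        if entry == (pid, page):
            return frame * PAGE_SIZE + offset   # physical address
    raise MemoryError("illegal address")        # no matching entry

print(translate("P1", 1, 100))   # frame 2 -> 2*4096 + 100 = 8292
```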

An inverted page table decreases the memory needed to store page tables, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to limit the search to one, or at most a few, page-table entries.

Inverted Page Table Architecture

                                                      Unit-IV CPU Scheduling

Basic concepts: To maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns it to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle: Process execution begins with a CPU burst, followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

An I/O-bound program has many short CPU bursts; a CPU-bound program has a few long CPU bursts.

Schedulers: a module in the OS for scheduling decisions.
1. Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
2. Medium-term scheduler: selects which processes should be swapped in/out of memory.
3. Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (CPU) scheduler.

                                                      Dispatcher gives CPU control to the process selected by short term scheduler This function involves following

1. switching context
2. switching to user mode
3. jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency; it must be kept small.
Ready Queue: The ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of processes.

                                                      Pre-emptive Scheduling

                                                      CPU scheduling decisions take place under one of four conditions

1. When a process switches from the running state to the waiting state.
2. When a process switches from the running state to the ready state, for example in response to an interrupt.
3. When a process switches from the waiting state to the ready state, say at completion of I/O.
4. When a process terminates.

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one.

If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive or cooperative. In non-preemptive scheduling, a process runs until it gives up the CPU itself, i.e., when it waits for I/O or when it completes. Otherwise the system is said to be pre-emptive.

Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

Note that pre-emptive scheduling can cause problems when two processes share data, because one process may be interrupted in the middle of updating a shared data structure.

Preemption can also be a problem if the kernel is busy with a system call (e.g., updating critical kernel data structures) when the preemption occurs. Solution: wait until the system call has either completed or blocked before allowing the pre-emption. But this solution is problematic for real-time systems.

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling them on exit. This should be done only for very short pieces of code that take little CPU time.

Scheduling criteria used to decide between CPU scheduling algorithms:
1. CPU utilization: in a real system, CPU utilization ranges from about 40% to 90%
2. Throughput: number of processes completed per unit time
3. Turnaround time: time when the process completed minus time when the process was submitted, i.e. it includes both waiting time and burst time
4. Waiting time: amount of time a process spends waiting for the CPU
5. Response time: time when the first response came minus time when the process was submitted

Optimization criteria: maximize CPU utilization, maximize throughput, minimize turnaround time, minimize waiting time, minimize response time.

CPU Scheduling Algorithms

1) First-Come, First-Served (FCFS) Scheduling
FCFS is non-preemptive: processes are scheduled in the order they arrived. FCFS is implemented with a queue; when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages:
1. Average waiting time is quite long

2. Convoy effect (small processes wait in the queue for a big process to leave the CPU)

Example:
Process  Burst Time
P1       24
P2       3
P3       3

Suppose the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

Waiting time for P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.

2) Shortest-Job-First (SJF) Scheduling
Schedule the process with the shortest CPU burst time. If the burst times of two processes are equal, FCFS is used.
Advantages: average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either preemptive or non-preemptive:
1. Non-preemptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Preemptive SJF, also called Shortest-Remaining-Time-First (SRTF): if a newly arrived process is shorter than what remains of the currently executing process, preempt the currently executing process.

Example:
Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

SJF (non-preemptive) Gantt chart:

| P1 | P3 | P2 | P4 |
0    7    8    12   16

Average waiting time = [0 + (8-2) + (7-4) + (12-5)] / 4 = 4
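The averages in the FCFS and non-preemptive SJF examples above can be checked with a short simulation (a sketch; the process data is taken directly from the examples):

```python
def fcfs_waiting(bursts):
    """Waiting time of each process when run in arrival order (FCFS)."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)        # a process waits until the CPU frees up
        clock += burst
    return waits

def sjf_waiting(procs):
    """Non-preemptive SJF over (arrival, burst) pairs; returns waits per pair."""
    remaining, clock, waits = list(procs), 0, {}
    while remaining:
        # among processes that have arrived, pick the shortest burst
        ready = [p for p in remaining if p[0] <= clock]
        if not ready:                       # CPU idle: jump to the next arrival
            ready = [min(remaining)]
        arrival, burst = min(ready, key=lambda p: p[1])
        clock = max(clock, arrival)
        waits[(arrival, burst)] = clock - arrival
        clock += burst
        remaining.remove((arrival, burst))
    return waits

# FCFS example above: bursts 24, 3, 3 -> waits 0, 24, 27, average 17
print(sum(fcfs_waiting([24, 3, 3])) / 3)                                # 17.0

# SJF example above -> average waiting time 4
print(sum(sjf_waiting([(0, 7), (2, 4), (4, 1), (5, 4)]).values()) / 4)  # 4.0
```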

Example of preemptive SJF:

Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

SJF (preemptive) Gantt chart:

| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16

Average waiting time = (9 + 1 + 0 + 2) / 4 = 3

3) Priority Scheduling
A priority number (integer) is associated with each process, and the CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is itself a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. preemptive, 2. non-preemptive.

Problem: starvation. Low-priority processes may never execute.
Solution: aging. As time progresses, increase the priority of the process.

4) Round Robin (RR)
Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance:
1. If q is large, RR degenerates to FCFS.
2. If q is small, q must still be large with respect to context-switch time, otherwise the overhead is too high.

Example of RR with time quantum = 4:
Process  Burst Time
P1       24
P2       3
P3       3

The Gantt chart is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Average waiting time = [(30-24) + 4 + 7] / 3 = 17/3 = 5.66

5) Multilevel Queue Scheduling
The ready queue is partitioned into separate queues. For example, the ready queue can be divided into two queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, Round Robin can be used in the foreground queue and FCFS in the background queue. Scheduling must also be done between the queues, which can be done in two ways:
1. Fixed-priority scheduling

The foreground queue has the highest priority: all processes in the foreground queue must complete and the queue must be empty before any process in the background queue is given the CPU, as shown in the figure below. Disadvantage: starvation.

2. Time slice: each queue gets a certain portion of CPU time, which it schedules among its own processes, e.g. 80% to the foreground queue (RR) and 20% to the background queue (FCFS).

6) Multilevel Feedback Queue Scheduling
A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes stay in the highest-priority queue. If a process uses too much CPU time, it is moved to a lower-priority queue; if a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be three queues, Q0, Q1, Q2:

1. Q0 - RR with time quantum 8 milliseconds
2. Q1 - RR with time quantum 16 milliseconds
3. Q2 - FCFS

                                                      Scheduling

1. A process entering the ready queue is put in Q0. When it gains the CPU, it receives 8 milliseconds; if it does not finish within 8 milliseconds, it is moved to Q1.
2. At Q1 the process receives 16 additional milliseconds; if it still does not complete, it is moved to Q2.
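The three-queue scheme above can be sketched as a small simulation. This is only an illustration: all processes are assumed to arrive at time 0 (so preemption of lower queues by new arrivals is not modelled), and the process names and burst times are made up.

```python
from collections import deque

def mlfq(bursts, quanta=(8, 16)):
    """bursts maps process name -> total CPU time needed; returns finish times.
    Queues are Q0 (quantum 8), Q1 (quantum 16), then FCFS at the last level."""
    queues = [deque(bursts)] + [deque() for _ in quanta]
    left, clock, finish = dict(bursts), 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name = queues[level].popleft()
        if level == len(quanta):                 # last queue: FCFS, run to end
            slice_ = left[name]
        else:                                    # RR with this level's quantum
            slice_ = min(quanta[level], left[name])
        clock += slice_
        left[name] -= slice_
        if left[name] == 0:
            finish[name] = clock
        else:
            queues[level + 1].append(name)       # used the whole quantum: demote

    return finish

# A needs 5 ms (finishes within Q0); B needs 30 ms (demoted to Q1, then Q2)
print(mlfq({"A": 5, "B": 30}))                   # {'A': 5, 'B': 35}
```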

Q) Synchronization
Ans: Several processes run in an operating system, and some of them share resources, due to which problems like data inconsistency may arise.

A classic example of process synchronization is the Producer-Consumer problem, also called the Bounded-Buffer problem: make sure that the producer does not add data when the buffer is full, and the consumer does not take data when the buffer is empty.

The code for the producer process:

    while (true) {
        /* produce an item in nextProduced */
        while (counter == BUFFER_SIZE)
            ; /* do nothing */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed. in is an index that always points to the next empty slot after the last filled slot; out is an index that points to the first filled slot, i.e. the head of the circular queue.

The code for the consumer process:

    while (true) {
        while (counter == 0)
            ; /* do nothing */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
    }

The producer and consumer code above produces a RACE CONDITION. A race condition is a situation where the outcome of execution of several cooperating processes depends on the order in which their accesses to shared data take place. For example, let counter = 5. If the producer produces one item and the consumer consumes one item, the correct value of counter is still 5. But counter++ and counter-- each consist of several machine instructions (load, add or subtract, store), and if these interleave, the result may be counter = 4, an inconsistent result. If T5 is executed before T4 (in the interleaving of those machine instructions), the result is counter = 6, again inconsistent.

Solution to avoid the race condition: make sure that only one process at a time updates the shared variable (here, counter).
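A minimal sketch of this rule using Python threads: a lock serializes the updates, so the producer's increment and the consumer's decrement can no longer interleave mid-update.

```python
import threading

counter = 0
lock = threading.Lock()

def producer(n):
    global counter
    for _ in range(n):
        with lock:             # only one thread updates counter at a time
            counter += 1

def consumer(n):
    global counter
    for _ in range(n):
        with lock:
            counter -= 1

t1 = threading.Thread(target=producer, args=(100000,))
t2 = threading.Thread(target=consumer, args=(100000,))
t1.start(); t2.start(); t1.join(); t2.join()
print(counter)                 # 0: equal numbers of ++ and -- survive intact
```

Without the `with lock:` lines, the final value can come out as anything between -100000 and 100000, which is exactly the inconsistency described above.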

Q) Critical section problem
Ans: Each process has a segment of code called its critical section: the code that accesses variables shared with cooperating processes. To avoid data inconsistency, two processes must not execute their critical sections at the same time. For this to happen:
1. Each process must request permission to enter its critical section: the entry section.
2. The end of the critical section is the exit section.
3. The rest of the code after the critical section is the remainder section.

A solution to the critical section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes that are not in their remainder sections take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: there is a bound on how long a process waits to enter its critical section; no process must wait indefinitely.

Q) Peterson's solution for the critical section problem (software-based solution)
Ans: The two processes share the variables flag[1] and flag[2] (each set when the corresponding process wants to enter its critical section) and turn (whose turn it is to enter). A sample run:

Time  Process 1                               Process 2
T0    do { flag[1] = TRUE; turn = 2;          do { flag[2] = TRUE;
T1                                            turn = 1;
T2    while (flag[2] && turn == 2);           while (flag[1] && turn == 1);
      since turn is now 1, the loop           loops continuously until flag[1]
      exits immediately                       becomes false
T3    enters critical section
T4    flag[1] = FALSE;                        enters critical section
T5    enters remainder section;               flag[2] = FALSE;
      } while (TRUE);                         enters remainder section;
                                              } while (TRUE);
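Peterson's protocol can be sketched with Python threads. This is a demo, not OS code: thread indices 0 and 1 replace the 1 and 2 used above, and CPython's one-bytecode-at-a-time execution stands in for the sequentially consistent memory that real hardware would need barriers to guarantee.

```python
import sys
import threading

sys.setswitchinterval(1e-4)     # switch threads often so busy-waits stay short

flag = [False, False]
turn = 0
count = 0
N = 2000

def worker(i):
    global turn, count
    other = 1 - i
    for _ in range(N):
        flag[i] = True                      # entry section: I want to enter
        turn = other                        # ...but you may go first
        while flag[other] and turn == other:
            pass                            # busy-wait
        count += 1                          # critical section (not atomic!)
        flag[i] = False                     # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(count)                                # 4000: no updates were lost
```

The unprotected `count += 1` would lose updates if both threads could be in the critical section at once; the final count confirms mutual exclusion held.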

10.1 File Concept

Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

                                                      File Attributes

1. Name: name of the file, for user understanding.
2. Identifier: used by the OS to identify a file (e.g. inode number).
3. Type: text file, executable file, binary file, etc.
4. Location: location of the file on the disk.
5. Size: current size of the file.
6. Protection: controls who can read, write, or execute the file.
7. Time, date, and user identification: when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. The directory structure is kept on disk.

                                                      File Operations

The file abstract data type supports these common operations:
1. Creating a file: find free space on disk and add an entry to the directory.
2. Writing a file: write data at the current file-position pointer location and update the pointer.
3. Reading a file: read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek): change the pointer to a given value.
5. Deleting a file: free the space allocated to the file and remove its directory entry.
6. Truncating a file: delete the data and update the file size.

                                                      Open File Table

Most operating systems require that files be opened before access and closed after access. Information about all files currently open in the system is stored in a system-wide open-file table maintained by the OS, and information about the files opened by a particular process is stored in that process's own open-file table. The open-file table holds the following information:

1. File pointer: records the current position in the file for the next read or write.
2. File-open count: how many times the file has been opened (simultaneously, by different processes) and not yet closed. When this counter reaches zero, the entry can be removed from the table.

3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With an advisory lock, software developers must make sure locks are acquired and released appropriately.
4. With a mandatory lock, the OS makes sure locks are acquired and released appropriately.
5. UNIX uses advisory locks; Windows uses mandatory locks.

                                                      File Types

Implementing file types: a common technique is to include the type as part of the filename. The name is split into two parts:

1. file name
2. extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the operations that can be done on it. For example, when a user opens Microsoft Word and chooses File > Open, the user specifies only the file name without an extension; the application looks for a file with the given name and the extension it expects. Hence such extensions are hints used by applications, not enforced by the operating system. UNIX systems use a crude magic number stored at the beginning of the file to indicate its type. In the Mac OS X operating system, a file produced by a word processor has the word processor's name recorded as its creator, so when the user double-clicks the file, that application opens it. Some common file types are shown below.

File Structure

Some files have an internal structure which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and the location of the first instruction.

If the OS supports many file formats, its size and complexity increase. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load and where to find the first executable statement).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                                                      Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units, such as single bytes, data records, or structures. The number of logical units that fit into one physical block determines the packing. If a student file is organized internally as records of 64 bytes each, and the disk uses blocks of 512 bytes, then the packing is 512/64 = 8, i.e. 8 records can be stored per block.

Internal fragmentation may occur when the logical unit size does not divide the block size evenly.
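The packing arithmetic can be sketched as follows; the 100-byte record size is an assumed variant, added only to show how waste arises when records do not divide the block evenly.

```python
BLOCK = 512                                # bytes per physical block

for record in (64, 100):
    packing = BLOCK // record              # logical records per physical block
    wasted = BLOCK - packing * record      # internal fragmentation per block
    print(record, packing, wasted)

# 64-byte records pack exactly: 8 per block, 0 bytes wasted.
# 100-byte records: only 5 fit, leaving 12 bytes unused in every block.
```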

                                                      102 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: records are read and written in order, one record after another; we cannot randomly access an arbitrary record. A sequential-access file emulates magnetic tape operation and generally supports the following operations:
- read next: read a record and advance to the next position
- write next: write a record and advance to the next position
- rewind
- skip n records: may or may not be supported; n may be limited to positive numbers, or to +/- 1

2. Direct access (or relative access): the file is made up of fixed-length records, so any record can be accessed at random, with no restriction on the order of reading or writing. Direct-access files are useful when large amounts of data must be accessed randomly, e.g. databases.

The supported operations take a relative block number n, i.e. relative to the beginning of the file:
1. read n: read record number n
2. write n: write record number n
3. jump to record n: n could be 0 or the end of the file
4. query current record: used to return to this record later
Sequential access is easily emulated on top of direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct-access method. We construct an index for the file, containing pointers to the various blocks. To find a record, we first search the index and then use the pointer to access the file directly.

Suppose a student file has 120,000 records sorted by student id. We can create an index holding the first student id of each block. If 64 records can be stored per block, then the number of blocks needed for 120,000 records is 120,000/64 = 1,875, i.e. roughly 2,000 blocks, so the index has about 2,000 entries. The index is kept in main memory so that searching is fast.

If the index itself becomes too large, we create an index for the index file (e.g. a B-tree).
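The student-file lookup above can be sketched with an in-memory index and a binary search. The ids themselves (1000, 1001, ...) are illustrative assumptions, not from the notes.

```python
import bisect

RECORDS_PER_BLOCK = 64
ids = list(range(1000, 1000 + 120000))        # 120,000 sorted student ids
index = ids[::RECORDS_PER_BLOCK]              # first id of each disk block

def block_of(student_id):
    """Binary-search the in-memory index for the block holding the record."""
    return bisect.bisect_right(index, student_id) - 1

print(len(index))          # 1875 entries (the notes round this to 2000)
print(block_of(1000))      # 0: first record of the first block
print(block_of(1063))      # 0: last record of the first block
print(block_of(1064))      # 1: first record of the second block
```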

                                                      Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                                                      Three major methods of allocating disk space are

1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file records the address of the starting block (b) and the length (n).

                                                      Advantages

1. Access is easy: for direct access to block i, we can compute the physical block directly as b+i, and since the file is stored sequentially, sequential access is easy too.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

                                                      Disadvantages

1. It is difficult to find space for a new file, or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a run of free contiguous blocks) in the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to it. Solution: copy all files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but it is time-consuming.

3. The size of the file must be known when the file is created. Even if the final size is known, the entire space may not be used at the beginning, which leads to wasted space.

If the size of the file is not known in advance, then (i) if we allocate too little space, the file cannot be extended; the solution is to find a larger hole, copy the file's contents into it, and release the old space, which works as long as such a hole exists but is time-consuming; (ii) if we overestimate the amount of space needed, much of it goes unused, leading to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme: a run of contiguous blocks is allocated at first, and if the file needs more, another run of contiguous space, known as an extent, is added. The directory entry then records the location and block count, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable size, external fragmentation occurs.
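The direct-access arithmetic of contiguous allocation is simple enough to sketch: a directory entry is just (start block b, length n), and logical block i lives at physical block b+i. The file parameters below are illustrative.

```python
def physical_block(start, length, i):
    """Map logical block i of a contiguous file (start, length) to disk."""
    if not 0 <= i < length:
        raise IndexError("logical block number out of range")
    return start + i

# an illustrative file: 5 blocks long, starting at block 14
print([physical_block(14, 5, i) for i in range(5)])    # [14, 15, 16, 17, 18]
```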

Linked Allocation
Each file is a linked list of disk blocks, and the blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0.

To add more data to an existing file, a free block is found, data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. Easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is higher.
2. The number of disk seeks required is higher.
3. Pointers use up a noticeable percentage of the file's disk space.
4. Inefficient for direct-access files: to access the i-th record, i disk reads are required.
5. Not reliable if pointers are lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk accesses, reduces the space wasted on pointers since fewer pointers are needed, and also simplifies free-list management. Disadvantage: internal fragmentation; more space is wasted when a cluster is only partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the file-allocation table (FAT). A section of disk at the beginning of each volume contains the table, which has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                                                      Advantages

1. Easy to find space for a new file or to add more data to an existing file.
2. Random access time is improved.

Disadvantage: the number of disk seeks increases, because the disk head must move to the FAT to find the location of a block, then move to the location of the block itself.
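Following a FAT chain for the 5-block example file above (blocks 9 -> 16 -> 1 -> 10 -> 25) can be sketched as follows. A dict stands in for the on-disk table; EOF marks the last block of the file, and a value of 0 would mark an unused block, as described in the notes.

```python
EOF = -1
fat = {9: 16, 16: 1, 1: 10, 10: 25, 25: EOF}

def file_blocks(start):
    """Collect the block numbers of a file by chasing FAT entries."""
    blocks, block = [], start
    while block != EOF:
        blocks.append(block)
        block = fat[block]      # each entry names the next block of the file
    return blocks

print(file_blocks(9))           # [9, 16, 1, 10, 25]
```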

                                                      3 Indexed allocation

Each file has its own index block, which is an array of disk-block addresses: the i-th entry in the index block points to the i-th block of the file.

The directory contains the address of the index block.

When a file is created, all pointers in the index block are set to nil.

To add more data to an existing file, a free block is found, data is written to it, and the block's address is put in the index block.

                                                      Advantages

1. Easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

                                                      Disadvantage

1. Seek time is higher.
2. The number of disk seeks required is higher.
3. Not reliable.
4. The index block must be kept in memory. If that memory is not available, we must read the index block and then the desired data block, i.e. 2 disk accesses. With multilevel index blocks, accessing a block near the end of the file requires reading several index blocks before the data block.

5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

Several mechanisms are used to deal with the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header with the name of the file and a set of the first 100 disk-block addresses; the last address is the address of another index block, used for large files.

2. Multilevel index: to access a block, the OS uses the first-level index block to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

                                                      3 combined scheme

For example, say there are 15 pointers in the file's inode. The first 12 point to blocks that contain file data. The next 3 point to indirect blocks: the first points to a single indirect block, an index block containing the addresses of data blocks; the second points to a double indirect block; and the third points to a triple indirect block, as shown below.
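The capacity of the combined scheme can be worked out with a little arithmetic. The 4 KB block size and 4-byte block addresses below are typical assumed values, not figures given in the notes.

```python
BLOCK = 4096                        # assumed block size in bytes
PTRS = BLOCK // 4                   # 4-byte addresses per index block: 1024

direct = 12                         # 12 direct pointers
single = PTRS                       # single indirect: one index block
double = PTRS ** 2                  # double indirect: index of index blocks
triple = PTRS ** 3                  # triple indirect

max_blocks = direct + single + double + triple
print(max_blocks * BLOCK // 2 ** 30)   # 4100 GiB, i.e. files of about 4 TiB
```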

                                                      Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                                      The free space list can be implemented as

1. Bit vector (bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (bit map): each block is represented by 1 bit. If the block is free the bit is 1, and if the block is allocated the bit is 0. For example, on a disk where blocks 2, 3, 4, 5, and 8 are free, the free-space bit map would be 001111001000.

Advantages: simple, and makes it easy to find contiguous free blocks for a file.

                                                      Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a small disk can be held in main memory, but it may not be possible to hold the bit map of a large disk there.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1 TB disk could cause thousands of blocks of bit maps to be updated, because the freed data blocks may be scattered all over the disk.
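The bit-vector example above (blocks 2, 3, 4, 5, and 8 free; bit value 1 = free) can be sketched as a first-fit allocator:

```python
bitmap = [0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0]      # 001111001000

def first_free(bits):
    """Index of the first free block, or -1 if no block is free."""
    for i, bit in enumerate(bits):
        if bit:
            return i
    return -1

def allocate(bits):
    """Grab the first free block and mark it allocated (bit cleared)."""
    i = first_free(bits)
    if i != -1:
        bits[i] = 0
    return i

print(allocate(bitmap))    # 2
print(allocate(bitmap))    # 3
print(first_free(bitmap))  # 4
```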

2. Linked list: the free-space list is implemented by linking all free disk blocks together, with a pointer to the first free block stored in a special location on disk. For example, on a disk where blocks 2, 3, 4, 5, and 8 are free, we keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no wasted space.
Disadvantage: contiguous space cannot be found easily, because traversing the free-space list means reading each block, which requires substantial I/O time.

3. Grouping: The first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free; the last block contains the addresses of another n free blocks, and so on.
Advantage: A large number of free blocks can be found quickly.

4. Counting: The free-space list can be maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it. Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if the count is generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion and search.
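The counting representation can be sketched as follows: compress a set of free block numbers into (first block, count) runs. A hypothetical helper, not from the source:

```python
def to_counting_list(free_blocks):
    """Compress free block numbers into (first_block, count) runs."""
    runs = []
    for b in sorted(free_blocks):
        if runs and b == runs[-1][0] + runs[-1][1]:
            # b extends the current contiguous run
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)
        else:
            runs.append((b, 1))  # start a new run
    return runs
```

For the running example, `to_counting_list({2, 3, 4, 5, 8})` yields `[(2, 4), (8, 1)]`: two entries instead of five, which is the space saving counting offers when free space is contiguous.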

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it. Finally, the free-space list is updated on disk.

Synchronization hardware: The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. The TestAndSet() instruction
2. The Swap() instruction

1. The TestAndSet() instruction is defined as shown below.

Solution using TestAndSet: The shared variable lock is initialized to false.

2. The Swap() instruction is defined as follows.

Solution using Swap: A shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee about the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee as to the relative speeds of the processes, a very fast process could theoretically release the lock, run through its remainder section, and re-acquire the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)
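Since the TestAndSet figure is not reproduced here, the following sketch simulates it in software: a Python lock stands in for the hardware's atomicity guarantee, and the spin-lock built on it demonstrates mutual exclusion (class and variable names are ours):

```python
import threading

class TestAndSetLock:
    """Spin lock built on a simulated atomic TestAndSet instruction."""
    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()  # stands in for hardware atomicity

    def test_and_set(self):
        # Atomically return the old value and set the flag to True.
        with self._atomic:
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        while self.test_and_set():
            pass  # busy wait: spin until the returned old value was False

    def release(self):
        self._flag = False

# Mutual-exclusion demo: four threads increment a shared counter.
lock = TestAndSetLock()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1          # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```

With the lock in place the final counter is always 4000; the sketch shows mutual exclusion only, not bounded waiting, exactly as the text above warns.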

The figure below illustrates a solution using TestAndSet that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes contending for critical sections.

                                                      Bounded-waiting mutual exclusion with TestAndSet( )

On exit, a process first looks, in order (starting with the next process on the list), for a process that is waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process to the critical section while continuing to block all the others. If no process is waiting, it releases the lock.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1. wait()  2. signal()

Wait primitive:
    wait(S):
        while (S <= 0)
            ; // do nothing
        S--;

Signal primitive:
    signal(S):
        S++;

Each process that wishes to use a resource performs the wait() operation. When a process releases a resource, it performs the signal() operation.

Semaphore usage:
1. A binary semaphore (or mutex lock) can take the values 0 or 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the number of remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is greater than 0, a process can enter the critical section and use one of the resources. When the counter equals 0 (or is negative, in some implementations), a process blocks until another process frees a resource and increments the counting semaphore with a signal() call.
For example, suppose there are 3 resources and 4 processes, so the counting semaphore is initialized to 3.
If process P1 requires a resource, it performs wait(); now the counting semaphore = 2.
If process P2 requires a resource, it performs wait(); now the counting semaphore = 1.
If process P3 requires a resource, it performs wait(); now the counting semaphore = 0.
If process P4 requires a resource, it performs wait(); P4 waits until a resource is available.
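Python's `threading.Semaphore` behaves like the counting semaphore just described; in this sketch a non-blocking acquire() stands in for "the process would have to wait":

```python
import threading

resources = threading.Semaphore(3)   # 3 resources available

# P1, P2 and P3 each perform wait() and obtain a resource.
got = [resources.acquire(blocking=False) for _ in range(3)]

# P4 performs wait(): no resource is left, so it would block.
p4 = resources.acquire(blocking=False)

resources.release()                  # some process performs signal()
p4_retry = resources.acquire(blocking=False)  # now P4 can proceed
```

The three acquires succeed, the fourth fails until a release() (signal) makes a resource available again, mirroring the four-process walkthrough above.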

3. Semaphores can also be used to synchronize operations. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2. First we create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code:

    S1;
    signal(synch);

and in process P2 we insert the code:

    wait(synch);
    S2;

Because synch was initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal().

Semaphore implementation: Each semaphore has an integer value and maintains a queue of waiting processes. With the definition given above, while one process is in its critical section, any other process that tries to enter its critical section must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process spins while waiting.
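The synch ordering example runs directly with `threading.Semaphore`. Starting P2 first shows that it really does wait for P1's signal:

```python
import threading

synch = threading.Semaphore(0)   # initialized to 0
order = []                       # records the order in which S1 and S2 run

def p1():
    order.append("S1")           # S1
    synch.release()              # signal(synch)

def p2():
    synch.acquire()              # wait(synch): blocks until P1 signals
    order.append("S2")           # S2

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start()                      # start P2 first to show it genuinely waits
t1.start()
t1.join(); t2.join()
```

Regardless of thread scheduling, `order` always ends up `["S1", "S2"]`, because the semaphore imposes a happens-before relation between signal() in P1 and wait() in P2.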

Semaphore implementation with no busy waiting: To overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue to the ready state).

Problems due to semaphores:
1. Deadlock: Deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes, as shown below.

2. Starvation: In starvation, one or more processes is blocked forever and never gets a chance to enter the critical section. This can happen, for example, if we do not specify the algorithms for adding processes to the waiting queue, or for selecting the one to be removed from the queue by the signal() call. If a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: Let processes L, M and H have priorities L < M < H. Suppose process H is waiting for a resource held by process L, so process L is running. When process M arrives, it pre-empts L, and M runs. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority so that M cannot pre-empt it.
=====================================================================
Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem
Consider:
n buffers, each of which holds one item; a producer process, which creates items (one at a time); and a consumer process, which consumes items (one at a time).

The producer process cannot produce an item if all the buffers are full, and the consumer process cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:
    do {
        ...
        produce an item
        ...
        wait(empty);
        wait(mutex);
        ...
        add the item to the buffer
        ...
        signal(mutex);
        signal(full);
    } while (1);

Consumer process:
    do {
        wait(full);
        wait(mutex);
        ...
        remove an item from the buffer
        ...
        signal(mutex);
        signal(empty);
        ...
        consume the item
        ...
    } while (1);
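The producer and consumer loops above translate almost line for line into Python with `threading.Semaphore` (the deque plays the role of the bounded buffer; ten items and one thread per side are arbitrary choices for the sketch):

```python
import threading
from collections import deque

N = 4                                   # number of buffer slots
buffer = deque()
empty = threading.Semaphore(N)          # counts empty buffer slots
full = threading.Semaphore(0)           # counts filled buffer slots
mutex = threading.Semaphore(1)          # mutual exclusion on the buffer

consumed = []

def producer(items):
    for item in items:
        empty.acquire()                 # wait(empty)
        mutex.acquire()                 # wait(mutex)
        buffer.append(item)             # add the item to the buffer
        mutex.release()                 # signal(mutex)
        full.release()                  # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()                  # wait(full)
        mutex.acquire()                 # wait(mutex)
        consumed.append(buffer.popleft())  # remove an item from the buffer
        mutex.release()                 # signal(mutex)
        empty.release()                 # signal(empty)

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10,))
c.start(); p.start()
p.join(); c.join()
```

With one producer and one consumer and a FIFO buffer, the items are consumed in exactly the order produced; the producer never overfills the 4-slot buffer and the consumer never reads an empty one.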

2. The Readers-Writers Problem
A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time; a writer must have exclusive access (it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

Readers priority: no reader waits merely because a writer is waiting. Writers priority: if a writer is waiting to access the data, no new readers can start reading.

A solution to either version may cause starvation: in the readers-priority version, writers may starve; in the writers-priority version, readers may starve.

A semaphore solution to the readers-priority version uses a shared counter and 2 semaphores:
1. readcount = 0 (a counter that records how many processes are currently reading the shared data)
2. mutex = 1 (a semaphore used for mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers; it provides mutually exclusive access to the shared data)

Reader process and writer process:

Reader process:
    do {
        wait(mutex);
        readcount++;
        if (readcount == 1)
            wait(wrt);
        signal(mutex);
        ...
        reading is performed
        ...
        wait(mutex);
        readcount--;
        if (readcount == 0)
            signal(wrt);
        signal(mutex);
    } while (TRUE);

Writer process:
    do {
        wait(wrt);
        ...
        writing is performed
        ...
        signal(wrt);
    } while (TRUE);
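The reader/writer protocol above can be wrapped in a small class (the class and method names are ours; the logic follows the readcount/mutex/wrt scheme described in the text):

```python
import threading

class ReadersWriterLock:
    """Readers-priority readers-writers lock using readcount, mutex and wrt."""
    def __init__(self):
        self.readcount = 0                    # number of active readers
        self.mutex = threading.Semaphore(1)   # protects readcount
        self.wrt = threading.Semaphore(1)     # exclusive access to the data

    def acquire_read(self):
        self.mutex.acquire()                  # wait(mutex)
        self.readcount += 1
        if self.readcount == 1:               # first reader locks out writers
            self.wrt.acquire()                # wait(wrt)
        self.mutex.release()                  # signal(mutex)

    def release_read(self):
        self.mutex.acquire()
        self.readcount -= 1
        if self.readcount == 0:               # last reader readmits writers
            self.wrt.release()                # signal(wrt)
        self.mutex.release()

    def acquire_write(self):
        self.wrt.acquire()                    # wait(wrt)

    def release_write(self):
        self.wrt.release()                    # signal(wrt)
```

Two readers can hold the lock simultaneously (only the first one acquires wrt), while a writer must wait until readcount drops back to 0; and because each arriving reader only touches mutex, readers are never delayed merely because a writer is waiting.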

3. The Dining-Philosophers Problem
n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5];  (initially all values are 1)

Philosopher i:
    do {
        wait(chopstick[i]);
        wait(chopstick[(i+1) % 5]);
        ...
        eat
        ...
        signal(chopstick[i]);
        signal(chopstick[(i+1) % 5]);
        ...
        think
        ...
    } while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (for example, if all five philosophers pick up their left chopsticks at the same time).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks: for example, one processor may be dedicated to disk operations, another to video operations, etc. Unlike an SMP system, these systems do not have the flexibility to assign processes to the least-loaded CPU.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint.

                                                      Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement is met under all conditions. Examples: digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control: it spreads its decision-making actions across the computers in the system, so that failures of individual computers or of the network do not cripple its operation.

A distributed operating system appears to its users as a uniprocessor system even though it has multiple processors. Users may not know where their programs are run or where their files are located; that should all be handled automatically by the operating system.

Distributed systems allow applications to run on several processors at the same time, thus requiring more complex processor-scheduling algorithms.

DESKTOP SYSTEMS: A desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8 and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. Handheld systems have very little memory.


CLUSTERED SYSTEMS: A computer cluster consists of a set of loosely connected computers that work together so that, in many respects, they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                                                      Q) Modern operating system

New developments in hardware, applications and security threats have led to the development of the modern operating system.

New hardware developments include increased machine speed, high-speed networks, and increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client-server computing.

                                                      Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and is well suited to a distributed environment.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as there are now often multiple microprocessors in a single system, an SMP operating system provides greater efficiency when there are multiple processors. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric). The SMP OS schedules processes or threads across all of the processors.

Advantages:

1. Increased performance: if some portion of a program can be run in parallel, the performance of an SMP OS increases.
2. Increased availability: as all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: vendors can offer a range of products whose cost depends on the number of processors used.

4. Distributed operating systems: distributed systems allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux operating system
Ans: Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. Linux is a multi-user, multitasking system. Its main design goals are speed, efficiency and standardization.
Components of a Linux system:


The Linux operating system has primarily three components, as shown in the diagram below.

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system and interacts directly with the hardware. The kernel hides low-level hardware details from system and application programs.

System libraries - System libraries are special functions or programs used by application programs to access the kernel's features.

System utilities - System utility programs are responsible for specialized, individual-level tasks such as updating log files and accepting login requests from terminals.

Kernel modules - Sections of kernel code that can be compiled, loaded and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel mode vs user mode: Kernel component code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, does not require context switches, and hence is very efficient and fast. The kernel runs processes and provides system services to them, including protected access to hardware. Code that is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code; they use the system libraries to invoke kernel functions for low-level tasks.

Basic features: Following are some of the important features of the Linux operating system.

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on any kind of hardware platform.

Open source - Linux source code is freely available and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.

Multi-user - Linux is a multi-user system, meaning multiple users can access system resources like memory, RAM and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical file system - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used to perform various types of operations, call application programs, etc.


Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

Architecture: The Linux system architecture consists of the following layers.

Hardware layer - Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - The core component of the operating system; it interacts directly with the hardware and provides low-level services to the upper-layer components.

Shell - An interface to the kernel; it takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP operating system
1. Extensibility: layered architecture.

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability: XP can be moved from one hardware architecture to another with relatively few changes.

XP is written in C and C++. Processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3. Reliability: XP uses hardware protection for virtual memory and software protection mechanisms for operating-system resources.
4. Compatibility: applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
5. Performance: XP subsystems can communicate with one another via high-performance message passing.

Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

6. International support: XP supports different locales via the national language support (NLS) API.

XP architecture: 1. Layered system of modules. 2. Protected mode: hardware abstraction layer (HAL), kernel, executive. 3. User mode: a collection of subsystems:

a. Environmental subsystems emulate different operating systems. b. Protection subsystems provide security functions.

                                                      XP architecture


Q) Windows network operating system
Ans: An example is the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.


• 10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines or records. A file has a structure which depends on its type.

Hashed page tables (or hash table): Hashed page tables are common in address spaces larger than 32 bits. The hash table is indexed by hash value, and each entry contains a linked list of elements that hash to the same location. Each element consists of 3 fields:

1. Page number
2. Frame number
3. Pointer to the next element in the linked list

The algorithm works as follows: the page number is passed to the hash function to get a hash value. Locating the hash value in the hash table is easy, as the table is indexed by hash value. The page number is then compared with field 1 of the first element in the linked list. If there is a match, the corresponding frame number is used to form the physical address. If there is no match, subsequent entries in the linked list are searched.
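The lookup algorithm above can be sketched with a list of buckets, where each bucket chains (page number, frame number) elements; the table size and hash function are illustrative choices:

```python
TABLE_SIZE = 8

def hash_fn(page):
    return page % TABLE_SIZE   # illustrative hash function

# Each bucket chains (page number, frame number) elements; the
# "pointer to next element" is implicit in the Python list.
hash_table = [[] for _ in range(TABLE_SIZE)]

def map_page(page, frame):
    hash_table[hash_fn(page)].append((page, frame))

def lookup(page):
    """Return the frame number for `page`, or None on a miss."""
    for p, frame in hash_table[hash_fn(page)]:
        if p == page:          # compare with field 1 of each element
            return frame
    return None

map_page(3, 42)
map_page(11, 7)   # 11 % 8 == 3: collides with page 3, chained in the same bucket
```

Pages 3 and 11 hash to the same bucket, so the lookup walks the chain and compares page numbers, exactly the collision handling the text describes.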

                                                        Inverted Page Table

An inverted page table has one entry for each frame of physical memory. Each entry in the inverted page table contains:

Process-id, Page number

The logical address has 3 parts: Process-id, Page number, Offset.

The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the i-th entry, then <i, offset> forms the physical address. If there is no match, the reference is an illegal address.

Advantages and disadvantages:

An inverted page table decreases the memory needed to store page tables, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to limit the search to one (or at most a few) page-table entries.

Inverted Page Table Architecture
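The inverted-table translation described above can be sketched as a linear search over per-frame entries (the process ids, page size and table contents are made up for the illustration):

```python
# One entry per physical frame: entry i holds (process id, page number).
inverted_table = [
    ("P1", 0),   # frame 0
    ("P2", 3),   # frame 1
    ("P1", 1),   # frame 2
]

PAGE_SIZE = 4096

def translate(pid, page, offset):
    """Search the table; a match at entry i gives physical address <i, offset>."""
    for i, entry in enumerate(inverted_table):
        if entry == (pid, page):
            return i * PAGE_SIZE + offset
    raise ValueError("illegal address")   # no matching entry
```

Translating (P1, page 1, offset 100) matches entry 2, giving physical address 2 * 4096 + 100; note the whole table must be scanned on a miss, which is the search-time disadvantage noted above.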

                                                        Unit-IV CPU Scheduling

Basic concepts: To maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns the CPU to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle: Process execution begins with a CPU burst, followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

An I/O-bound program has many short CPU bursts. A CPU-bound program has a few long CPU bursts.

Schedulers (modules in the OS that make scheduling decisions):
1. Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
2. Medium-term scheduler: selects which processes should be swapped in/out of memory.
3. Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term scheduler (or CPU scheduler).

Dispatcher: gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

1. switching context
2. switching to user mode
3. jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency. This must be kept small.

Ready queue: the ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of the processes.

                                                        Pre-emptive Scheduling

CPU scheduling decisions take place under one of four conditions:

1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state, for example in response to an interrupt
3. When a process switches from the waiting state to the ready state, say at completion of I/O
4. When a process terminates

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one. If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive or cooperative. In a non-preemptive system, a process runs until it itself gives up the CPU, i.e., when it waits for I/O or when it completes. Otherwise the system is said to be pre-emptive.

Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. Macintosh systems used non-preemptive scheduling prior to Mac OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

Note that pre-emptive scheduling can cause problems when two processes share data, because one process may be interrupted in the middle of updating a shared data structure.

Preemption can also be a problem if the kernel is busy servicing a system call (e.g., updating critical kernel data structures) when the preemption occurs. Solution: wait until the system call has either completed or blocked before allowing the pre-emption. But this solution is problematic for real-time systems.

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. But this should be done only for very short pieces of code that take little CPU time.

Scheduling criteria (used to compare CPU scheduling algorithms):
1. CPU utilization: in a real system, CPU utilization ranges from 40% to 90%
2. Throughput: the number of processes that are completed per unit time
3. Turnaround time: the time when the process completed minus the time when the process was submitted, i.e., it includes both waiting time and burst time
4. Waiting time: the amount of time that a process spends waiting for the CPU
5. Response time: the time when the first response came minus the time when the process was submitted

Optimization criteria: max CPU utilization, max throughput, min turnaround time, min waiting time, min response time.

CPU Scheduling Algorithms

1) First-Come, First-Served (FCFS) Scheduling
FCFS scheduling is non-pre-emptive. Processes are scheduled in the order they arrive. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages:
1. Average waiting time is quite long

2. Convoy effect (occurs as small processes wait in the queue for a big process to leave the CPU)

Example:

Process  Burst Time
P1       24
P2       3
P3       3

Suppose that the processes arrive in the order P1, P2, P3.

The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

Waiting time for P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.

2) Shortest-Job-First (SJF) Scheduling
Schedule the process with the shortest burst time. If the CPU burst times of two processes are the same, then FCFS is used.
Advantages: average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either pre-emptive or non-pre-emptive:
1. Non-pre-emptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Pre-emptive SJF, or Shortest-Remaining-Time-First (SRTF): if the newly arrived process has a burst shorter than what remains of the currently executing process, pre-empt the currently executing process.

Example:

Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

SJF (non-preemptive) Gantt chart:

| P1 | P3 | P2 | P4 |
0    7    8    12   16

Average waiting time = [0 + (8-2) + (7-4) + (12-5)]/4 = 4

Example of pre-emptive SJF:

Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

SJF (preemptive) Gantt chart:

| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16

Average waiting time = (9 + 1 + 0 + 2)/4 = 3

3) Priority Scheduling
A priority number (an integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, then FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. preemptive, 2. non-preemptive.

Problem: starvation (low-priority processes may never execute). Solution: aging (as time progresses, increase the priority of waiting processes).

4) Round Robin (RR)
Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once, and no process waits more than (n-1)q time units.
Performance:
1. If q is large, RR behaves like FIFO.
2. If q is small, q must still be large with respect to the context-switch time; otherwise the overhead is too high.

Example of RR with time quantum = 4:

Process  Burst Time
P1       24
P2       3
P3       3

The Gantt chart is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 ≈ 5.66

5) Multilevel Queue Scheduling
The ready queue is partitioned into separate queues. For example, the ready queue is divided into 2 queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, the Round Robin scheduling algorithm can be used in the foreground queue and the FCFS scheduling algorithm in the background queue.

Scheduling must also be done between the queues. This can be done in 2 ways:

1. Fixed-priority scheduling: the foreground queue has the highest priority. All the processes in the foreground queue must be completed and the queue must be empty; only then are the processes in the background queue given the CPU, as shown in the figure below. Disadvantage: starvation.

2. Time slice: each queue gets a certain amount of CPU time, which it can schedule amongst its processes, e.g., 80% to the foreground queue (RR) and 20% to the background queue (FCFS).

6) Multilevel Feedback Queue Scheduling
A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes are placed in the highest-priority queue. If a process takes more CPU time, it is moved to a lower-priority queue. If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues Q0, Q1, Q2:

1. Q0: time quantum 8 milliseconds
2. Q1: time quantum 16 milliseconds
3. Q2: FCFS

Scheduling:

1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 milliseconds. If it does not finish within 8 milliseconds, the process is moved to queue Q1.
2. At Q1 the process receives 16 additional milliseconds. If it still does not complete, it is moved to queue Q2.
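As a toy illustration of this demotion rule (using the quanta from the three-queue example; the function is hypothetical), the queue in which a single CPU burst finally completes depends only on the burst length:

```c
/* Queue in which a CPU burst completes under the 3-queue feedback scheme:
 * up to 8 ms at Q0, then 16 more ms at Q1, then FCFS at Q2. */
int final_queue(int burst_ms) {
    if (burst_ms <= 8)
        return 0;        /* finishes within the Q0 quantum */
    if (burst_ms <= 8 + 16)
        return 1;        /* finishes within the additional Q1 quantum */
    return 2;            /* demoted to the FCFS queue Q2 */
}
```

A 5 ms interactive burst stays in Q0, a 20 ms burst ends in Q1, and a long CPU-bound burst sinks to Q2, which is exactly the separation the scheme is designed to achieve.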

Q) Synchronization
Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of process synchronization is the Producer-Consumer problem (or Bounded-Buffer problem). Producer-Consumer problem: make sure that the producer process does not add data when the buffer is full, and that the consumer process does not take data when the buffer is empty.

The code for the producer process can be written as follows:

while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ;  /* do nothing: buffer is full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed from the buffer. in is an index that always points to the next empty slot after the last filled slot; out is an index that always points to the first filled slot, at the head of the circular queue.

The code for the consumer process is as shown below:

while (true) {
    while (counter == 0)
        ;  /* do nothing: buffer is empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The above producer and consumer code produces a RACE CONDITION. A race condition occurs when the outcome of the execution of several co-operating processes depends on the order in which their accesses to shared data take place.

For example, let counter = 5. If the producer produces one item and the consumer consumes one item, the correct value is counter = 5. But here counter = 4, an inconsistent result. If T5 is executed before T4, then counter = 6, again an inconsistent result.
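The inconsistency arises because counter++ and counter-- each compile into a load, a modify, and a store. This single-threaded sketch replays one bad interleaving of those machine-level steps (the T1..T6 labels are one possible numbering of them):

```c
/* Replays one bad interleaving of the producer's counter++ and the
 * consumer's counter-- at the machine level, starting from counter = 5. */
int race_demo(void) {
    int counter = 5;
    int reg1, reg2;
    reg1 = counter;      /* T1: producer loads counter   (reg1 = 5)    */
    reg1 = reg1 + 1;     /* T2: producer increments      (reg1 = 6)    */
    reg2 = counter;      /* T3: consumer loads counter   (reg2 = 5)    */
    reg2 = reg2 - 1;     /* T4: consumer decrements      (reg2 = 4)    */
    counter = reg1;      /* T5: producer stores          (counter = 6) */
    counter = reg2;      /* T6: consumer stores          (counter = 4) */
    return counter;      /* 4, although the correct value is 5 */
}
```

Swapping the two final stores would leave counter = 6 instead; only mutual exclusion over the whole load-modify-store sequence guarantees the correct value 5.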

Solution to avoid the race condition: make sure that only one process at a time updates the shared variable (here, counter).

Q) Critical section problem
Ans: Each process has a segment of code called its critical section. The critical section is the code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical section code at the same time. For this to happen:
1. Each process must request permission to enter its critical section: the entry section.
2. The end of the critical section: the exit section.
3. The rest of the code after the critical section: the remainder section.

A solution to the critical section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes not in their remainder section take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: there is a bound on how many times other processes may enter their critical sections after a process has made a request and before that request is granted, so no process waits indefinitely.

Q) Peterson's solution for the critical section problem (software-based solution)
Ans:

Time  Process 1                               Process 2
T0    do { flag[1] = TRUE; turn = 2;          do { flag[2] = TRUE;
T1                                            turn = 1;
T2    while (flag[2] && turn == 2);           while (flag[1] && turn == 1);
      Since turn is now 1, the loop           Loops continuously until flag[1]
      does not run.                           becomes false.
T3    Enters critical section.
T4    flag[1] = FALSE;                        Enters critical section.
T5    Enters remainder section.               flag[2] = FALSE;
      } while (TRUE);                         Enters remainder section.
                                              } while (TRUE);
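The same algorithm can be written as plain C, here indexing the two processes 0 and 1 rather than 1 and 2. This is a sketch of the shared variables and the three sections; it ignores the memory-ordering fences that real multiprocessor hardware would require.

```c
int flag[2] = {0, 0};   /* flag[i] = 1 means process i wants to enter */
int turn = 0;           /* decides the winner when both want in       */

/* Entry section for process i: raise the flag, then yield the turn. */
void request_entry(int i) {
    int other = 1 - i;
    flag[i] = 1;
    turn = other;
}

/* The busy-wait's exit condition: process i may proceed when the other
 * process is not interested, or it is not the other's turn. */
int may_enter(int i) {
    int other = 1 - i;
    return !(flag[other] && turn == other);
}

/* Exit section. */
void leave(int i) {
    flag[i] = 0;
}
```

If both processes have executed their entry sections, turn holds whichever value was written last, so exactly one of may_enter(0) and may_enter(1) is true: mutual exclusion holds.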

10.1 File Concept

Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure, which depends on its type.

                                                        File Attributes

1. Name: the name of the file, for user understanding.
2. Identifier: used by the OS to identify a file (e.g., inode number).
3. Type: text files, executable files, binary files, etc.
4. Location: the location of the file on the hard drive.
5. Size: the current size of the file.
6. Protection: controls who can read, write, or execute the file.
7. Time, date, and user identification: gives information on when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. The directory structure is kept on disk.

                                                        File Operations

The file Abstract Data Type supports many common operations:
1. Creating a file: find free space on disk, add an entry to the directory.
2. Writing a file: write data at the current file-position pointer location and update the pointer.
3. Reading a file: read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek): change the pointer location to a given value.
5. Deleting a file: free the space allocated to the file and remove its entry from the directory.
6. Truncating a file: delete the data and update the file size.

                                                        Open File Table

Most operating systems require that files be opened before access and closed after access. Information about all files that are currently open in the system is stored in an open file table. This system-wide open file table is maintained by the OS, and information about the files opened by a process is stored in the process's own open file table. The open file table has the following information:

1. File pointer: records the current position in the file for the next read or write.
2. File-open count: how many times the current file has been opened (simultaneously by different processes) and not yet closed. When this counter reaches zero, the file's entry can be removed from the table.
3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With a mandatory lock, the OS makes sure the locks are acquired and released appropriately.
4. With an advisory lock, software developers make sure the locks are acquired and released appropriately.
5. UNIX uses advisory locks and Windows uses mandatory locks.

                                                        File Types

Implementing file types: a common technique is to include the type as part of the filename. The name is split into 2 parts:

1. File name
2. Extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on that file. For example, a user opens Microsoft Word and clicks the Open option in the File menu; the user specifies only the file name without the extension, and the application looks for a file with the given name and the extension it expects. Hence extensions are only hints to applications and are not enforced by the operating system. The UNIX system uses a crude magic number stored at the beginning of the file to indicate the type of the file. In the Mac OS X operating system, a file produced by a word processor has the word processor's name recorded as its creator; when the user double-clicks on the file, Word opens the required file. Some common file types are as shown below.

File structure: some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and the location of the first instruction.

If an OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, finding the first executable statement, etc.).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                                                        Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units such as a single byte, a data record, or a structure.

The number of logical units that fit into one physical block determines the packing. If a student file is organized internally as records, each record takes 64 bytes, and the disk is divided into blocks of 512 bytes, then the packing = 512/64 = 8, i.e., 8 records can be stored per block.

Internal fragmentation may occur due to this.
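The packing arithmetic above, as a small sketch (the helper names are hypothetical):

```c
/* How many whole records fit in one physical block, and how many bytes
 * of each block are lost to internal fragmentation. */
int packing(int block_size, int record_size) {
    return block_size / record_size;
}

int wasted_per_block(int block_size, int record_size) {
    return block_size % record_size;
}
```

With 512-byte blocks and 64-byte records the packing is 8 with no waste, but a 100-byte record would pack only 5 per block and waste 12 bytes of every block, which is the internal fragmentation just mentioned.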

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: records are read and written sequentially, one record after another; we cannot randomly access an arbitrary record.

A sequential access file emulates magnetic tape operation and generally supports the following operations:
read next: read a record and advance to the next position
write next: write a record and advance to the next position
rewind
skip n records: may or may not be supported; n may be limited to positive numbers, or to +/- 1

2. Direct access (or relative access): the file is made up of fixed-length records, so that we can randomly access any record. There is no restriction on the order of reading or writing.

Direct access files are useful when we need to access large amounts of data randomly, e.g., a database.

Operations supported include (n is the relative block number, i.e., relative to the beginning of the file):

1. read n: read record number n
2. write n: write record number n
3. jump to record n: n could be 0 or the end of the file
4. query current record: used to return to this record later
5. Sequential access can be easily emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct access method. We construct an index for the file, which contains pointers to the various blocks. To search the file, we first access the index and then use the pointer to access the file.

If a student file has 120,000 records sorted by student id, we can create an index consisting of the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120,000 records = 120,000/64 ≈ 2,000 blocks, so the index has about 2,000 entries. The index is kept in main memory so that searching becomes faster.

If the index itself becomes large, we create an index for the index file (e.g., a B-tree).
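A sketch of that index lookup (names hypothetical): if index[b] holds the first student id stored in block b, with ids sorted across blocks, a binary search over the in-memory index picks the one block that must be read from disk.

```c
/* Returns the block that must contain `id`: the last block whose first
 * id is <= id. Assumes index[] is sorted and id >= index[0]. */
int find_block(const int index[], int nblocks, int id) {
    int lo = 0, hi = nblocks - 1, ans = 0;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (index[mid] <= id) {
            ans = mid;          /* candidate block; look for a later one */
            lo = mid + 1;
        } else {
            hi = mid - 1;
        }
    }
    return ans;
}
```

For example, with 64 records per block and first-ids {1, 65, 129, 193}, student id 70 maps to block 1, so only one data block is read after the in-memory search.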

                                                        Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

Three major methods of allocating disk space are:

1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file indicates the address of the starting block (b) and the length (n).

                                                        Advantages

1. For direct access to block i of the file, we can compute its disk address directly as b + i. Since the file is stored sequentially, sequential access is also easy. Hence access is easy.

2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.
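The direct-access claim in advantage 1 is pure address arithmetic; a sketch mapping a byte offset within a contiguously allocated file to its disk block (names hypothetical):

```c
/* Disk block holding byte `offset` of a file that starts at disk
 * block `b`, for the given block size. */
int disk_block_of(int b, int offset, int block_size) {
    return b + offset / block_size;
}
```

No table lookups or pointer chasing are needed, which is why contiguous allocation needs the fewest seeks of the three methods.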

                                                        Disadvantages

1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape, then copy the files back onto the original disk contiguously. This is known as compaction, but compaction is time-consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire size may not be used by the file at the beginning, which leads to wasted space.

If the size of the file is not known in advance, then: (i) if we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This is possible only as long as such space exists, and it is time-consuming. (ii) If we overestimate the amount of space needed, much of the space goes unused; this leads to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme. A chunk of contiguous blocks is allocated at first; if the file needs more blocks, another chunk of contiguous space, known as an extent, is added. The directory entry then holds the starting location, a block count, and a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2. Linked allocation: each file is a linked list of disk blocks, and the disk blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0.

To add more data to an existing file, a free block is found, the data is written to it, and the block is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. Easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is high.
2. The number of disk seeks required is high.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct-access files: to access the i-th record, i disk reads are required.
5. Not reliable: pointers can be lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (e.g., 4 blocks = 1 cluster) and allocate clusters instead of blocks to files. This makes the logical-to-physical block mapping simple, improves disk throughput by reducing disk access time, reduces the space wasted on pointers (since fewer pointers are needed), and simplifies free-list management. Disadvantage: internal fragmentation; more space is wasted when a cluster is only partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                                                        Advantages

1. Easy to find space for a new file or to add more data to an existing file.
2. Random access time is improved.

Disadvantage: the number of disk seeks is high. The disk head must move to read the FAT and find the location of the block, then move to the location of the block itself.
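Following a FAT chain can be sketched as below. The table values come from the five-block example in the linked-allocation section; -1 is a stand-in for the real reserved end-of-file marker.

```c
#define FAT_EOF (-1)   /* stand-in for the reserved end-of-file value */

/* Walks the chain starting at `start`: fat[b] holds the block that
 * follows block b in the file. Returns the file's length in blocks. */
int chain_length(const int fat[], int start) {
    int count = 0;
    for (int b = start; b != FAT_EOF; b = fat[b])
        count++;
    return count;
}
```

For the example file occupying blocks 9, 16, 1, 10, 25 the entries are fat[9]=16, fat[16]=1, fat[1]=10, fat[10]=25, fat[25]=FAT_EOF, and the chain length is 5. Each step of this walk may cost a seek into the FAT, which is the disadvantage noted above.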

                                                        3 Indexed allocation

Each file has its own index block, which is an array of disk-block addresses. The i-th entry in the index block points to the i-th block of the file.

                                                        The directory contains the address of the index block

                                                        When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found and the data is written to it; the address of the block is then recorded in the index block.

                                                        Advantages

1. Easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

Disadvantages

1. Seek time is high.
2. The number of disk seeks required is high.
3. Not reliable.
4. The index block must be kept in memory. If this memory is not available, we have to read the index block and then the desired data block, i.e., 2 disk accesses are required. To access a block near the end of a large file, we may need to read several index blocks before reading the needed data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store these one or two pointers.

Various mechanisms are used for the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header with the name of the file and a set of the first 100 disk-block addresses; for large files, the next address holds the address of another index block.

2. Multilevel index: to access a block, the OS uses the first-level index block to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

3. Combined scheme

For example, say there are 15 pointers in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing the addresses of the file's data blocks.

The second points to a double indirect block and the third to a triple indirect block. This is the scheme used by UNIX inodes.
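The addressing capacity of the combined scheme is easy to compute. A short sketch, assuming 4 KB blocks and 4-byte pointers (so each index block holds 1024 pointers; these sizes are illustrative, not fixed by the scheme):

```python
def max_file_blocks(ptrs_per_block, direct=12):
    """Blocks addressable by an inode with 12 direct pointers plus
    single, double, and triple indirect blocks."""
    n = ptrs_per_block
    return direct + n + n**2 + n**3

# With 1024 pointers per index block the inode can address roughly
# a billion blocks, i.e. about 4 TB of data at 4 KB per block.
assert max_file_blocks(1024) == 12 + 1024 + 1024**2 + 1024**3
```

Almost all of the capacity comes from the triple indirect level, while small files (under 48 KB here) never pay for any indirect block at all.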

                                                        Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                                        The free space list can be implemented as

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000.

Advantages: simple, and easy to find contiguous free blocks.

                                                        Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a large disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.
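The bit-vector bookkeeping above can be sketched directly in Python; the 12-block map reproduces the example from the text (the helper names are illustrative):

```python
def make_bitmap(total_blocks, free_blocks):
    # Bit i is 1 when block i is free, matching the text's convention.
    return [1 if i in free_blocks else 0 for i in range(total_blocks)]

def first_free(bitmap):
    """Scan for the first free block; -1 if the disk is full."""
    for i, bit in enumerate(bitmap):
        if bit:
            return i
    return -1

bm = make_bitmap(12, {2, 3, 4, 5, 8})
assert "".join(map(str, bm)) == "001111001000"  # matches the example
assert first_free(bm) == 2
bm[2] = 0                    # allocate block 2: clear its bit
assert first_free(bm) == 3
```

Real systems scan a word at a time rather than bit by bit, checking whole words against zero before examining individual bits.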

2. Linked list: the free-space list is implemented by linking all the free disk blocks together. A pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.

Advantage: no waste of space.

Disadvantage: cannot get contiguous space easily; to traverse the free-space list we must read each block, which requires more I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last one contains the addresses of another n free blocks, and so on.

Advantage: a large number of free blocks can be found quickly.

4. Counting: the free-space list can be maintained by keeping the address of the first free block and the number (n) of free contiguous blocks that follow it.

Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, but the overall list is shorter as long as counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it. Finally, the free-space list is updated on disk.

Q) Synchronization hardware

The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:

1. The TestAndSet() hardware instruction
2. The Swap() instruction

1. The TestAndSet instruction atomically returns the old value of its Boolean argument and sets the argument to true.

Solution using TestAndSet: the shared variable lock is initialized to false. A process spins on TestAndSet(&lock) until it returns false, enters its critical section, and sets lock back to false on exit.

2. The Swap() instruction atomically exchanges the contents of its two arguments.

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key. A process sets key to true and repeatedly swaps lock and key; when key becomes false it has acquired the lock and may enter its critical section.
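The Swap-based entry section can likewise be modeled in Python (again, real hardware performs the swap atomically; the lists model shared and local Booleans):

```python
def swap(a, b):
    """Model of the atomic Swap instruction: exchange the contents
    of its two arguments (indivisible on real hardware)."""
    a[0], b[0] = b[0], a[0]

lock = [False]        # shared, initialized to false
key = [True]          # local to each process

# Entry section: keep swapping until we pull the False out of lock.
while key[0]:
    swap(lock, key)

# --- critical section ---
assert key[0] is False and lock[0] is True

lock[0] = False       # exit section
```

Exactly one swap can move the single False from lock into some process's key, which is what makes the scheme mutually exclusive.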

Both the TestAndSet and Swap instructions satisfy the mutual exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee about the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee about the relative rates of the processes, a very fast process could release the lock, whip through its remainder section, and re-lock the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

A solution using TestAndSet that does satisfy bounded waiting uses two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes contending for critical sections.

Bounded-waiting mutual exclusion with TestAndSet():

On exit, a process scans the waiting array in order (starting with the next process on the list) for a process that has been waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process while continuing to block all the others; if no process is waiting, it releases the lock.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1. wait() 2. signal()

Wait primitive:

    wait(S):
        while (S <= 0)
            ; /* do nothing */
        S--;

Signal primitive:

    signal(S):
        S++;

Each process that wishes to use a resource performs a wait() operation. When a process releases a resource, it performs a signal() operation.

Semaphore usage:
1. A binary semaphore (or mutex lock) can take only the values 0 and 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the number of remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources. When the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal() call.

For example, suppose there are 3 resources and 4 processes; the counting semaphore starts at 3.
- If process P1 requires a resource, it performs wait(); the semaphore becomes 2.
- If process P2 requires a resource, it performs wait(); the semaphore becomes 1.
- If process P3 requires a resource, it performs wait(); the semaphore becomes 0.
- If process P4 requires a resource, it performs wait(); P4 waits until a resource is available.
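The 3-resources/4-processes example maps directly onto Python's threading.Semaphore. In this sketch, non-blocking acquire() calls stand in for the wait() operations so the "P4 must wait" case can be observed without real threads:

```python
import threading

# 3 identical resources shared by 4 processes: the counter starts at 3.
resources = threading.Semaphore(3)

assert resources.acquire(blocking=False)      # P1 gets a resource (count 2)
assert resources.acquire(blocking=False)      # P2 gets a resource (count 1)
assert resources.acquire(blocking=False)      # P3 gets a resource (count 0)
assert not resources.acquire(blocking=False)  # P4 would block: count is 0

resources.release()                           # P1 signals: a resource freed
assert resources.acquire(blocking=False)      # now P4 can proceed
```

In a real multithreaded program P4 would simply call acquire() with the default blocking behavior and sleep until some holder calls release().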

3. Semaphores can also be used to synchronize certain operations. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2. We create a semaphore named synch, shared by the two processes and initialized to zero. In process P1 we insert the code:

    S1;
    signal(synch);

and in process P2 we insert the code:

    wait(synch);
    S2;

Because synch was initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal().

Semaphore implementation
When a process is in its critical section, any other process that tries to enter the critical section must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process spins while waiting. Each semaphore has an integer value and maintains a queue of waiting processes.

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process (it is placed on the semaphore's waiting queue); the waiting process is later restarted by a wakeup() operation, which moves it from the waiting queue to the ready state.
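The no-busy-wait semaphore can be sketched as a class holding the counter and the waiting queue. This is a single-threaded teaching model: the return values stand in for the block() and wakeup() operations:

```python
from collections import deque

class Semaphore:
    """Textbook no-busy-wait semaphore: a negative value means
    abs(value) processes are blocked on the waiting queue."""
    def __init__(self, value):
        self.value = value
        self.queue = deque()          # queue of blocked process ids

    def wait(self, pid):
        self.value -= 1
        if self.value < 0:
            self.queue.append(pid)    # block(): pid joins the waiting queue
            return "blocked"
        return "running"

    def signal(self):
        self.value += 1
        if self.value <= 0:
            return self.queue.popleft()   # wakeup(): a pid moves to ready
        return None

s = Semaphore(1)
assert s.wait("P1") == "running"   # P1 enters its critical section
assert s.wait("P2") == "blocked"   # P2 blocks instead of spinning
assert s.signal() == "P2"          # P1's signal wakes P2
assert s.signal() is None          # nobody left waiting
```

Using a FIFO queue for the blocked processes is what prevents the LIFO starvation scenario described under the problems below.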

Problems due to semaphores:

1. Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes.

2. Starvation: in starvation, one or more processes are blocked forever and never get a chance to enter the critical section. For example, this can happen if we do not specify the algorithm for adding processes to the waiting queue, or for selecting the one to be removed from the queue in the signal() call. If a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it pre-empts L, and M runs. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority (priority inheritance) so that M cannot pre-empt it.

Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem

Consider:
- n buffers, each of which holds one item
- a producer process, which creates items (one at a time)
- a consumer process, which consumes them (one at a time)

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. Since both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (counts the number of empty buffers)
2. full = 0 (counts the number of filled buffers)
3. mutex = 1 (provides mutual exclusion for access to the buffer)

Producer process:

    do {
        ...
        /* produce an item */
        ...
        wait(empty);
        wait(mutex);
        /* add item to buffer */
        signal(mutex);
        signal(full);
    } while (1);

Consumer process:

    do {
        wait(full);
        wait(mutex);
        /* remove an item from buffer */
        signal(mutex);
        signal(empty);
        /* consume the item */
    } while (1);
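The producer/consumer protocol above can be modeled with Python semaphores. A sketch with n = 2 buffers; non-blocking acquires stand in for the wait() calls so the full/empty cases are visible without threads:

```python
import threading
from collections import deque

N = 2                                   # 2 buffers for the demo
empty = threading.Semaphore(N)          # counts empty buffers
full = threading.Semaphore(0)           # counts filled buffers
mutex = threading.Lock()                # mutual exclusion on the buffer
buffer = deque()

def produce(item):
    if not empty.acquire(blocking=False):
        return False                    # all buffers full: producer waits
    with mutex:
        buffer.append(item)
    full.release()                      # signal(full)
    return True

def consume():
    if not full.acquire(blocking=False):
        return None                     # all buffers empty: consumer waits
    with mutex:
        item = buffer.popleft()
    empty.release()                     # signal(empty)
    return item

assert produce("a") and produce("b")
assert not produce("c")                 # buffer full: would block
assert consume() == "a"
assert produce("c")                     # a slot was freed
```

Note that each process acquires empty/full before mutex, exactly as in the listing; reversing that order can deadlock the pair.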

2. The Readers-Writers Problem

A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

- readers priority: no reader waits merely because a writer is waiting
- writers priority: if a writer is waiting to access the data, no new readers may start reading

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:

    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    /* reading is performed */
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);

Writer process:

    do {
        wait(wrt);
        /* writing is performed */
        signal(wrt);
    } while (TRUE);
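The readers-priority protocol translates naturally into Python. In this sketch the first reader locks out writers and the last reader lets them back in, exactly as in the pseudocode above (a non-blocking acquire shows the writer being held off):

```python
import threading

read_count = 0                    # plain integer: number of active readers
mutex = threading.Lock()          # protects read_count
wrt = threading.Semaphore(1)      # exclusive access for writers

def start_read():
    global read_count
    with mutex:
        read_count += 1
        if read_count == 1:       # first reader locks out writers
            wrt.acquire()

def end_read():
    global read_count
    with mutex:
        read_count -= 1
        if read_count == 0:       # last reader lets writers back in
            wrt.release()

start_read(); start_read()               # two readers share the file
assert not wrt.acquire(blocking=False)   # a writer must wait meanwhile
end_read(); end_read()
assert wrt.acquire(blocking=False)       # writers may enter now
wrt.release()
```

The writer-starvation risk is visible here: as long as new readers keep arriving before read_count drops to 0, wrt is never released.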

3. The Dining-Philosophers Problem

n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner.

A semaphore solution represents each chopstick with a semaphore:

    semaphore chopstick[5];   /* initially all values are 1 */

Philosopher i:

    do {
        wait(chopstick[i]);
        wait(chopstick[(i + 1) % 5]);
        /* eat */
        signal(chopstick[i]);
        signal(chopstick[(i + 1) % 5]);
        /* think */
    } while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock: if all five philosophers pick up their left chopsticks at the same time, each waits forever for the right one.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: in asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. Unlike an SMP system, these systems don't have the flexibility to assign processes to the least-loaded CPU.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within a time constraint.

Two kinds of real-time systems have evolved:

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples: digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control; i.e., it spreads its decision-making actions across the different computers in the system, so that the failure of individual computers or of the network does not cripple its operation.

A distributed operating system appears to its users as a uniprocessor system even though it has multiple processors. The users need not know where their programs run or where their files are located; that is all handled automatically by the operating system.

Distributed systems allow applications to run on several processors at the same time, thus requiring more complex processor-scheduling algorithms.

DESKTOP SYSTEMS: a desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): two of the most popular operating systems for handhelds are Symbian OS and Android. Handheld systems have very little memory.


CLUSTERED SYSTEMS: a computer cluster consists of a set of loosely connected computers that work together so that, in many respects, they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

Q) Modern operating systems

New developments in hardware, applications, and security threats have led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client/server computing.

Developments in modern operating systems can be categorized as:

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and is well suited to a distributed environment.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as systems nowadays commonly contain multiple processors, an SMP operating system provides greater efficiency by scheduling processes or threads across all of the processors. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

Advantages:

1. Increased performance: if some portion of a program can be run in parallel, performance increases.
2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can scale with the number of processors used.

4. Distributed operating systems: distributed systems allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux operating system

Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. Linux is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization.

Components of a Linux system:


The Linux operating system has primarily three components:

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system and interacts directly with the hardware. The kernel hides low-level hardware details from system and application programs.

System libraries - System libraries are special functions or programs that application programs use to access the kernel's features.

System utilities - System utility programs are responsible for specialized, individual-level tasks such as updating log files and accepting login requests from terminals.

Kernel modules are sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and letting them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects the resources reserved by one driver from accidental use by another driver.

Kernel mode vs user mode

Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require any context switch; hence it is very efficient and fast. The kernel runs processes and provides system services to them, including protected access to hardware. Code that is not required to run in kernel mode lives in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code; they use the system libraries to ask the kernel to carry out low-level tasks.

Basic features

The following are some of the important features of the Linux operating system:

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on almost any kind of hardware platform.

Open source - Linux source code is freely available, and Linux is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-user - Linux is a multi-user system, meaning multiple users can access system resources (memory, RAM, application programs) at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical file system - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used for various types of operations, such as calling application programs.


Security - Linux provides user security using authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture

The Linux system architecture consists of the following layers:

Hardware layer - Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - The core component of the operating system; interacts directly with the hardware and provides low-level services to the upper-layer components.

Shell - An interface to the kernel that takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP operating system

1. Extensibility - layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. This modular structure allows additional environmental subsystems to be added without affecting the executive.

2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.

4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run without changing the source code.

5. Performance - XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

6. International support - XP supports different locales via the national language support (NLS) API.

XP architecture:
1. A layered system of modules.
2. Protected mode - hardware abstraction layer (HAL), kernel, and executive.
3. User mode - a collection of subsystems:
   a. Environmental subsystems emulate different operating systems.
   b. Protection subsystems provide security functions.

                                                        XP architecture


Q) Windows network operating system

Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.




• 10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has structure, which depends on its type.

The process-id and page number of the logical address are compared with each entry in the inverted page table. If a match is found at the ith entry, then i is the frame number, which is combined with the offset to form the physical address. If there is no match, the reference is an illegal address.

Advantages and disadvantages: the inverted page table decreases the memory needed to store page tables, but increases the time needed to search the table when a page reference occurs. Solution: use a hash table to limit the search to one, or at most a few, page-table entries.

Inverted Page Table Architecture

                                                          Unit-IV CPU Scheduling

Basic concepts: to maximize CPU utilization, some process must be running at all times.

In multiprogramming, several processes are in main memory at a given time. If the running process has to wait for I/O, the OS takes the CPU away from that process and assigns the CPU to another process in the ready queue. So the CPU must be scheduled, and CPU scheduling is a fundamental OS function.

CPU-I/O burst cycle: process execution begins with a CPU burst, which is followed by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on, as shown in the figure.

An I/O-bound program has many short CPU bursts. A CPU-bound program has a few long CPU bursts.

Schedulers: a scheduler is a module in the OS that makes scheduling decisions.
1. Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue.
2. Medium-term scheduler: selects which processes should be swapped in/out of memory.
3. Short-term scheduler (or CPU scheduler): whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. This selection is done by the short-term (CPU) scheduler.

The dispatcher gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

1. switching context
2. switching to user mode
3. jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency; it must be kept small.
Ready queue: the ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of the processes.

                                                          Pre-emptive Scheduling

                                                          CPU scheduling decisions take place under one of four conditions

1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state, for example in response to an interrupt
3. When a process switches from the waiting state to the ready state, say at completion of I/O
4. When a process terminates

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: continue running the current process, or select a different one.

If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive, or cooperative. Under non-preemptive scheduling, a process runs until it gives up the CPU itself, i.e., when it waits for I/O or when it completes. Otherwise, the system is said to be pre-emptive.

Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

Note that pre-emptive scheduling can cause problems when two processes share data, because one process may be interrupted in the middle of updating a shared data structure.

Preemption can also be a problem if the kernel is busy servicing a system call (e.g., updating critical kernel data structures) when the preemption occurs. Solution: wait until the system call has either completed or blocked before allowing the pre-emption. But this solution is problematic for real-time systems.

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. But this should be done only for very short pieces of code that take little CPU time.

Scheduling criteria (used to compare CPU scheduling algorithms):
1. CPU utilization: in a real system, CPU utilization ranges from about 40% to 90%.
2. Throughput: the number of processes that are completed per unit time.
3. Turnaround time: the time when the process completed minus the time when the process was submitted; i.e., it includes waiting time and burst time.
4. Waiting time: the amount of time a process spends waiting for the CPU.
5. Response time: the time when the first response came minus the time when the process was submitted.

Optimization criteria: maximize CPU utilization and throughput; minimize turnaround time, waiting time, and response time.

CPU Scheduling Algorithms

1) First-Come, First-Served (FCFS) Scheduling
FCFS is non-pre-emptive: processes are scheduled in the order they arrive. FCFS is implemented with a queue; when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages:
1. Average waiting time is often quite long.
2. Convoy effect (small processes wait in the queue for a big process to leave the CPU).
Example:
Process  Burst Time
P1       24
P2       3
P3       3

                                                          Suppose that the processes arrive in the order P1 P2 P3

                                                          The Gantt Chart for the schedule is P1 P2 P3

0    24   27   30
Waiting time for P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.

2) Shortest-Job-First (SJF) Scheduling
Schedule the process with the shortest next CPU burst. If the CPU burst times of two processes are the same, FCFS is used.
Advantages: average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either pre-emptive or non-pre-emptive:
1. Non-pre-emptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Pre-emptive SJF, or Shortest-Remaining-Time-First (SRTF): if a newly arrived process is shorter than what remains of the currently executing process, the currently executing process is pre-empted.

Example:
Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

                                                          SJF (non-preemptive) Gantt Chart P1 P3 P2 P4

0    7    8    12   16
Average waiting time = [0 + (8-2) + (7-4) + (12-5)]/4 = 4
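The two worked examples above (FCFS with bursts 24, 3, 3, and non-pre-emptive SJF with the four processes) can be checked with a small simulation. This is an illustrative sketch, not OS code; the function names are invented for this example.

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS (all arrive at time 0)."""
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)      # a process waits while all earlier bursts run
        clock += b
    return waits

def sjf_waiting_times(procs):
    """Non-pre-emptive SJF. procs: list of (arrival, burst).
    Returns waiting times in input order."""
    n = len(procs)
    done, waits, clock = [False] * n, [0] * n, 0
    for _ in range(n):
        # among arrived, unfinished processes pick the shortest burst
        ready = [i for i in range(n) if not done[i] and procs[i][0] <= clock]
        if not ready:            # CPU idle until the next arrival
            clock = min(procs[i][0] for i in range(n) if not done[i])
            ready = [i for i in range(n) if not done[i] and procs[i][0] <= clock]
        i = min(ready, key=lambda k: procs[k][1])
        waits[i] = clock - procs[i][0]
        clock += procs[i][1]
        done[i] = True
    return waits

# FCFS example: waits 0, 24, 27 -> average 17
assert sum(fcfs_waiting_times([24, 3, 3])) / 3 == 17
# SJF example: waits 0, 6, 3, 7 -> average 4
assert sum(sjf_waiting_times([(0, 7), (2, 4), (4, 1), (5, 4)])) / 4 == 4
```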

                                                          Example of Preemptive SJF

Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

                                                          SJF (preemptive) P1 P2 P3 P2 P4 P1

0   2   4   5   7   11   16
Average waiting time = (9 + 1 + 0 + 2)/4 = 3

3) Priority Scheduling
A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. pre-emptive, 2. non-pre-emptive.

Problem: starvation (low-priority processes may never execute). Solution: aging (as time progresses, increase the priority of waiting processes).

4) Round Robin (RR)
Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n-1)q time units.
Performance:
1. If q is large, RR degenerates to FIFO.
2. If q is small, q must still be large with respect to context-switch time, otherwise the overhead is too high.
Example of RR with time quantum = 4:
Process  Burst Time
P1       24
P2       3

P3       3
The Gantt chart is: P1 P2 P3 P1 P1 P1 P1 P1

0   4   7   10   14   18   22   26   30
Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 = 5.66

5) Multilevel Queue Scheduling
The ready queue is partitioned into separate queues. For example, the ready queue is divided into 2 queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, the Round Robin algorithm can be used in the foreground queue and FCFS in the background queue. Scheduling must also be done between the queues. This can be done in 2 ways:
1. Fixed-priority scheduling

The foreground queue has the highest priority. All processes in the foreground queue must be completed and the queue must be empty before the processes in the background queue are given the CPU, as shown in the figure below. Disadvantage: starvation.

2. Time slice: each queue gets a certain amount of CPU time, which it can schedule amongst its processes, e.g., 80% to the foreground queue (RR) and 20% to the background queue (FCFS).
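Returning to the Round Robin example above (time quantum 4, bursts 24, 3, 3): its Gantt chart timings can be reproduced with a short ready-queue simulation. The function name is invented for illustration.

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    """Round Robin with all processes arriving at time 0.
    Waiting time = completion time - burst time."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    clock, finish = 0, [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)      # preempted: back to the tail of the ready queue
        else:
            finish[i] = clock
    return [finish[i] - bursts[i] for i in range(len(bursts))]

waits = rr_waiting_times([24, 3, 3], quantum=4)
assert waits == [6, 4, 7]        # average = 17/3, as in the worked example
```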

6) Multilevel Feedback Queue Scheduling
A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes stay in the highest-priority queue, a process that takes too much CPU time is moved to a lower-priority queue, and a process that waits too long in a lower-priority queue is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues Q0, Q1, Q2:

1. Q0: time quantum 8 milliseconds
2. Q1: time quantum 16 milliseconds
3. Q2: FCFS

                                                          Scheduling

1. A process entering the ready queue is put in Q0. When it gains the CPU, the process receives 8 milliseconds; if it does not finish in 8 milliseconds, it is moved to Q1.
2. At Q1 the process receives an additional 16 milliseconds; if it still does not complete, it is moved to Q2.
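As a sketch of the demotion rule, the following toy function (invented name; a single process with no competition for the CPU) reports which queue a process with a given total CPU burst finally runs in:

```python
def mlfq_final_queue(burst, quanta=(8, 16)):
    """Which queue does a process with the given CPU burst finish in?
    Q0 and Q1 are round-robin with the given quanta; Q2 is FCFS."""
    used = 0
    for level, q in enumerate(quanta):
        used += min(q, burst - used)   # run for at most this level's quantum
        if used == burst:
            return level               # finished within this queue
    return len(quanta)                 # demoted to the FCFS queue

assert mlfq_final_queue(5) == 0    # finishes within Q0's 8 ms quantum
assert mlfq_final_queue(20) == 1   # needs 8 + 12 ms, finishes in Q1
assert mlfq_final_queue(30) == 2   # exhausts 8 + 16 ms, demoted to Q2
```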

Q) Synchronization
Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of process synchronization is the Producer-Consumer (or Bounded-Buffer) problem: make sure that the producer process does not add data when the buffer is full, and that the consumer process does not take data when the buffer is empty.

The code for the producer process can be modified as follows:

    while (true) {
        /* produce an item in nextProduced */
        while (counter == BUFFER_SIZE)
            ;  /* do nothing */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed. in is an index that always points to the next free slot after the last filled slot; out is an index that always points to the first filled slot, at the head of the circular queue.

The code for the consumer process is as shown below:

    while (true) {
        while (counter == 0)
            ;  /* do nothing */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
    }

The above producer and consumer code produces a RACE CONDITION. A race condition exists when the outcome of execution of several co-operating processes depends on the order in which accesses to shared data take place.
For example, suppose counter = 5. If the producer produces one item and the consumer consumes one item, the correct value of counter is still 5. But with an unlucky interleaving the counter ends up as 4, an inconsistent result; and if T5 is executed before T4, the counter ends up as 6, also inconsistent.

Solution to avoid the race condition: make sure that only one process at a time updates the shared variable (here, counter).
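The inconsistent results described above come from interleaving the machine-level steps (load, modify, store) that counter++ and counter-- each compile to. A deterministic, single-threaded sketch (names invented) reproduces all three outcomes:

```python
def run(schedule):
    """Interleave the machine-level steps of counter++ (producer) and
    counter-- (consumer). Each high-level statement is three steps:
    load into a register, modify the register, store back."""
    counter = 5
    p = c = 0                        # the producer's and consumer's registers
    for step in schedule:
        if step == "p_load":    p = counter
        elif step == "p_inc":   p = p + 1
        elif step == "p_store": counter = p
        elif step == "c_load":  c = counter
        elif step == "c_dec":   c = c - 1
        elif step == "c_store": counter = c
    return counter

# One statement fully before the other: correct result 5
assert run(["p_load", "p_inc", "p_store", "c_load", "c_dec", "c_store"]) == 5
# Bad interleaving: the consumer's stale store wins -> counter = 4
assert run(["p_load", "c_load", "p_inc", "p_store", "c_dec", "c_store"]) == 4
# The other bad interleaving: the producer's stale store wins -> counter = 6
assert run(["p_load", "c_load", "c_dec", "c_store", "p_inc", "p_store"]) == 6
```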

Q) Critical section problem
Ans: Each process has a segment of code called its critical section. Critical-section code is the code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical-section code at the same time. For this to happen:
1. Each process must request permission to enter its critical section: the entry section.
2. The end of the critical section is the exit section.
3. The rest of the code after the critical section is the remainder section.

A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes not in their remainder section take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: every process that requests entry must eventually be allowed into its critical section; no process waits indefinitely.

Q) Peterson's solution for the critical-section problem (software-based solution)
Ans:

Time  Process 1                                Process 2
T0    do {                                     do {
      flag[1] = TRUE; turn = 2;                  flag[2] = TRUE;
T1                                               turn = 1;
T2    while (flag[2] && turn == 2);            while (flag[1] && turn == 1);
      (turn is now 1, so the loop exits)       (loops until flag[1] becomes FALSE)
T3    enters critical section
T4    flag[1] = FALSE;                         enters critical section
T5    enters remainder section                 flag[2] = FALSE;
      } while (TRUE);                          enters remainder section
                                               } while (TRUE);
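Peterson's algorithm can be sketched with two Python threads sharing flag and turn. This relies on CPython executing these simple operations effectively sequentially consistently; it is an illustration of the algorithm's logic, not production synchronization code.

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0
counter = 0             # shared variable protected by the algorithm

def worker(i, iterations):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        flag[i] = True                      # entry section: announce intent
        turn = other                        # politely give the other priority
        while flag[other] and turn == other:
            pass                            # busy-wait
        counter += 1                        # critical section
        flag[i] = False                     # exit section

threads = [threading.Thread(target=worker, args=(i, 100)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
assert counter == 200   # mutual exclusion kept the shared counter consistent
```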

10.1 File Concept
Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

                                                          File Attributes

1. Name: the name of the file, for user understanding.
2. Identifier: used by the OS to identify a file (e.g., an inode number).
3. Type: text file, executable file, binary file, etc.
4. Location: the location of the file on the drive.
5. Size: the current size of the file.
6. Protection: controls who can read, write, or execute the file.
7. Time, date, and user identification: when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. The directory structure is kept on disk.

                                                          File Operations

The file Abstract Data Type supports many common operations:
1. Creating a file: find free space on disk and add an entry to the directory.
2. Writing a file: write data at the current file-position pointer location and update the pointer.
3. Reading a file: read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek): change the pointer location to a given value.
5. Deleting a file: free the space allocated to the file and remove its entry from the directory.
6. Truncating a file: delete the data and update the file size.

                                                          Open File Table

Most OSes require that files be opened before access and closed after access. Information about all files currently open in the system is stored in a system-wide open-file table maintained by the OS; information about the files opened by a particular process is stored in that process's own open-file table. An open-file table entry holds the following information:
1. File pointer: records the current position in the file, for the next read or write.
2. File-open count: how many times the file has been opened (simultaneously, by different processes) and not yet closed. When this counter reaches zero, the entry can be removed from the table.
3. Disk location of the file.
4. Access rights.
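A toy version of the system-wide open-file table (all names invented) shows how the file-open count governs when an entry can be removed:

```python
class OpenFileTable:
    """Minimal sketch of a system-wide open-file table."""

    def __init__(self):
        self.entries = {}   # filename -> entry dict

    def open(self, name, disk_location, rights="r"):
        e = self.entries.get(name)
        if e is None:       # first open: create the entry
            e = {"pos": 0, "count": 0, "loc": disk_location, "rights": rights}
            self.entries[name] = e
        e["count"] += 1     # one more process has this file open
        return e

    def close(self, name):
        e = self.entries[name]
        e["count"] -= 1
        if e["count"] == 0:             # last close: drop the entry
            del self.entries[name]

oft = OpenFileTable()
oft.open("notes.txt", disk_location=42)     # opened by one process
oft.open("notes.txt", disk_location=42)     # opened by a second process
assert oft.entries["notes.txt"]["count"] == 2
oft.close("notes.txt")
oft.close("notes.txt")
assert "notes.txt" not in oft.entries       # removed when the count hits zero
```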

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With an advisory lock, software developers must make sure locks are acquired and released appropriately.
4. With a mandatory lock, the OS makes sure locks are acquired and released appropriately.
5. UNIX uses advisory locks; Windows uses mandatory locks.

                                                          File Types

Implementing file types: a common technique is to include the type as part of the file name. The name is split into 2 parts:

1. File name
2. Extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on it. For example, when a user opens Microsoft Word and clicks File > Open, the user may specify only the file name without the extension; the application looks for a file with the given name and the extension it expects. In this sense extensions are hints to applications rather than something enforced by the operating system. UNIX systems instead use a crude magic number stored at the beginning of some files to indicate the file type. In the Mac OS X operating system, a file produced by a word processor has the word processor's name recorded as its creator; when the user double-clicks the file, that word processor opens it. Some common file types are shown below.

File Structure
Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and the location of the first instruction.

If an OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load and in which it must find the first executable statement, etc.).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently; the data fork contains the code or data.

                                                          Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units such as a single byte, a data-record size, or a structure size. The number of logical units that fit into one physical block determines the packing. For example, if a student file is organized internally as records, each record takes 64 bytes, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e., 8 records can be stored per block.

                                                          Internal fragmentation may occur due to this
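The packing arithmetic, including the bytes lost per block to internal fragmentation, can be captured in a couple of lines (function name invented):

```python
def packing(block_size, record_size):
    """How many whole records fit in one physical block, and how many
    bytes of each block are lost to internal fragmentation."""
    per_block = block_size // record_size
    wasted = block_size - per_block * record_size
    return per_block, wasted

assert packing(512, 64) == (8, 0)     # the student-file example: 8 records/block
assert packing(512, 100) == (5, 12)   # 5 records fit; 12 bytes wasted per block
```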

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: records are read and written sequentially, one record after another; we cannot randomly access an arbitrary record. A sequential-access file emulates magnetic-tape operation and generally supports the following operations:
read next - read a record and advance to the next position
write next - write a record and advance to the next position
rewind
skip n records - may or may not be supported; n may be limited to positive numbers, or may be limited to +/- 1

2. Direct access (or relative access): the file is made up of fixed-length records, so we can randomly access any record. There is no restriction on the order of reading or writing. Direct-access files are useful when we need to access large amounts of data randomly, e.g., databases.
Operations supported include (n is a relative block number, i.e., relative to the beginning of the file):
1. read n - read record number n
2. write n - write record number n
3. jump to record n - n could be 0 or the end of the file
4. query current record - used to return to this record later
Sequential access can easily be emulated on top of direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct-access method. We construct an index for the file which contains pointers to the various blocks; to find a record, we first search the index and then use the pointer to access the file directly.

If a student file has 120,000 records sorted by student id, we can create an index consisting of the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120,000 records = 120000/64 = 1875 blocks, so the index has 1875 entries. The index is kept in main memory so that searching becomes faster.

If the index itself becomes too large, we create an index for the index file (e.g., a B-tree).
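The index example can be sketched as follows: the number of index entries equals the number of data blocks, and a binary search over the per-block first keys picks the block to read (names and the small sample index are invented):

```python
from bisect import bisect_right

def blocks_needed(records, per_block):
    """Ceiling division: how many blocks hold this many records."""
    return -(-records // per_block)

def find_block(index, student_id):
    """index[i] is the first (smallest) student id stored in block i;
    ids are sorted across blocks, so a binary search over the in-memory
    index tells us which data block to read from disk."""
    return bisect_right(index, student_id) - 1

assert blocks_needed(120000, 64) == 1875   # the index has 1875 entries

index = [1, 65, 129, 193]                  # first id in each of 4 sample blocks
assert find_block(index, 70) == 1          # id 70 lives in block 1
assert find_block(index, 193) == 3
```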

                                                          Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                                                          Three major methods of allocating disk space are

                                                          1 Contiguous 2 Linked

                                                          3 Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If a file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file indicates the address of the starting block (b) and the length (n).

                                                          Advantages

1. Direct access is easy: block i of the file is simply disk block b+i. Since the file is stored sequentially, sequential access is also easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

                                                          Disadvantages

1. It is difficult to find space for a new file, or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.
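First-fit and best-fit over a hole list can be sketched as follows (the hole-list format and function names are invented for this example):

```python
def first_fit(holes, n):
    """holes: list of (start_block, length), in disk order.
    Return the start of the first hole with at least n blocks, or None."""
    for start, length in holes:
        if length >= n:
            return start
    return None

def best_fit(holes, n):
    """Return the start of the smallest hole that is still big enough."""
    candidates = [(length, start) for start, length in holes if length >= n]
    return min(candidates)[1] if candidates else None

holes = [(0, 5), (9, 12), (30, 6)]
assert first_fit(holes, 6) == 9    # first hole that fits, even if oversized
assert best_fit(holes, 6) == 30    # the tightest fit
assert first_fit(holes, 20) is None  # external fragmentation: no single hole fits
```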

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape and then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire size may not be used by the file at the beginning, which leads to wastage of space.

If the size of the file is not known in advance, then: (i) if we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file into the new space, and release the previous space. This works as long as such a hole exists, but it is time consuming. (ii) If we overestimate the amount of space needed, much of the space is unused, which leads to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous-allocation scheme: a chunk of contiguous blocks is allocated at first, and if the file needs more blocks, another chunk of contiguous space, known as an extent, is added. A file's location is then recorded as a location and a block count, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2. Linked allocation: each file is a linked list of disk blocks, and the disk blocks can be scattered anywhere on the disk. The directory contains a pointer to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; the pointer is nil for an empty file, and the size field of an empty file is 0.

To add more data to an existing file, a free block is found, the data is written to it, and the block is linked to the end of the file. To read a file, we read the blocks by following the pointers from block to block.

Advantages:
1. It is easy to find space for a new file, or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is greater.
2. The number of disk seeks required is greater.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct-access files: to access the i-th block, i disk reads are required.
5. Not reliable: pointers may be lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks to files. This makes the logical-to-physical block mapping simple, improves disk throughput by reducing disk accesses, and reduces the space wasted on pointers because fewer pointers are needed. Free-list management also becomes simpler. Disadvantage: internal fragmentation; more space is wasted when a cluster is partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file-allocation table (FAT). A section of disk at the beginning of each volume contains the FAT, which has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                                                          Advantages

1. It is easy to find space for a new file, or to add more data to an existing file.
2. Random-access time is improved.

Disadvantage: the number of disk seeks is greater. The disk head must move to the start of the volume to read the FAT and find the location of the block, and then move to the location of the block itself.
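A FAT chain walk can be sketched as follows, reusing the earlier linked-allocation example (a file occupying blocks 9, 16, 1, 10, 25). The EOF marker value here is invented for illustration; real FATs use a reserved bit pattern, and 0 marks an unused block as in the text.

```python
EOF, FREE = -1, 0   # illustrative marker values, not real FAT constants

def fat_chain(fat, start):
    """Follow a file's chain through the FAT, starting from its first
    block (which is kept in the directory entry), until end-of-file."""
    blocks = []
    b = start
    while b != EOF:
        blocks.append(b)
        b = fat[b]      # each FAT entry names the file's next block
    return blocks

# FAT for a small disk of 32 blocks; one file: 9 -> 16 -> 1 -> 10 -> 25
fat = [FREE] * 32
fat[9], fat[16], fat[1], fat[10], fat[25] = 16, 1, 10, 25, EOF
assert fat_chain(fat, 9) == [9, 16, 1, 10, 25]
```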

                                                          3 Indexed allocation

Each file has its own index block. The index block is an array of disk-block addresses; the i-th entry in the index block points to the i-th block of the file.

                                                          The directory contains the address of the index block

                                                          When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found, the data is written to it, and the address of the block is put in the index block.
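The steps above can be sketched in a few lines (illustrative names only; real file systems keep the index block on disk, not in a Python object):

```python
# Sketch of indexed allocation: each file has an index block, an array of
# disk-block addresses; entry i points to logical block i of the file.

NIL = None  # pointer value used when the file is created

class File:
    def __init__(self, index_block_size=8):
        # On creation, all pointers in the index block are set to nil.
        self.index_block = [NIL] * index_block_size

    def append_block(self, free_list):
        """Take a free block and record its address in the index block."""
        block = free_list.pop(0)
        slot = self.index_block.index(NIL)   # first unused entry
        self.index_block[slot] = block
        return block

    def physical_block(self, i):
        """Direct access: logical block i -> disk address in one lookup."""
        return self.index_block[i]

free = [17, 4, 30, 8]
f = File()
f.append_block(free)
f.append_block(free)
print(f.physical_block(1))   # 4
```

The single-lookup `physical_block` is what makes indexed allocation efficient for direct (random) access, in contrast to linked allocation.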

                                                          Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

Disadvantages:

1. Seek time is more.
2. The number of disk seeks required is more.
3. Not reliable.
4. The index block must be kept in memory. If this memory is not available, we have to read the index block and then the desired data block, i.e. 2 disk accesses are required. To access a block near the end of a large file, we may need to read all the index blocks before reaching the needed data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

Various mechanisms are followed for the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header giving the name of the file and a set of the first 100 disk-block addresses. For large files, the last address contains the address of another index block.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

                                                          3 combined scheme

For example, say there are 15 pointers in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing the addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block.
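A quick worked example of what this scheme buys (the block size and pointer size below are assumed, illustrative values, not mandated by the text):

```python
# Maximum file size under the combined scheme with 12 direct pointers,
# one single-, one double- and one triple-indirect pointer, assuming
# 4 KB blocks and 4-byte block addresses.

BLOCK = 4096                       # bytes per block (assumed)
PTRS = BLOCK // 4                  # addresses per index block = 1024

direct = 12
single = PTRS                      # 1024 blocks
double = PTRS ** 2                 # 1,048,576 blocks
triple = PTRS ** 3                 # 1,073,741,824 blocks

max_blocks = direct + single + double + triple
max_bytes = max_blocks * BLOCK
print(max_blocks)                  # 1074791436
print(max_bytes // 2**30)          # 4100 (GiB), i.e. about 4 TB
```

The triple-indirect term dominates, so the scheme keeps small files cheap (direct pointers only) while still allowing very large files.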

                                                          Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                                          The free space list can be implemented as

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. The free-space bit map would be 001111001000.

Advantages: simple, and it is easy to find contiguous free space for files.

                                                          Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a large disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.
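The bit-vector scheme is short enough to sketch directly (using the same example disk as above; a real implementation would pack the bits into words and scan a word at a time):

```python
# Bit-vector free-space list: bit i is 1 when block i is free.
# Example disk: blocks 2, 3, 4, 5 and 8 are free.

bitmap = [0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0]

def first_free(bm):
    """Scan for the first free block - what allocation does."""
    for i, bit in enumerate(bm):
        if bit:
            return i
    return None

def allocate(bm):
    """Claim the first free block, clearing its bit."""
    i = first_free(bm)
    if i is not None:
        bm[i] = 0        # mark the block as allocated
    return i

print(allocate(bitmap))    # 2
print(first_free(bitmap))  # 3
```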

2. Linked list: the free-space list is implemented by linking all the free disk blocks together. A pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: we cannot get contiguous space easily, i.e. to traverse the free-space list we must read each block, which requires more I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last block contains the addresses of another n free blocks, and so on.
Advantage: a large number of free blocks can be found quickly.

4. Counting: the free-space list is maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it. Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if the count is generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion and search.
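To see why counting shortens the list, here is a small sketch (illustrative only) that compresses a sorted list of free block numbers into (start, count) runs:

```python
# Counting representation of the free-space list: runs of contiguous
# free blocks become (first_block, count) pairs.
# Blocks 2-5 and 8 free -> two entries instead of five addresses.

def to_runs(free_blocks):
    """Compress a sorted list of free block numbers into (start, count) runs."""
    runs = []
    for b in free_blocks:
        if runs and runs[-1][0] + runs[-1][1] == b:
            runs[-1][1] += 1          # b extends the current run
        else:
            runs.append([b, 1])       # b starts a new run
    return [tuple(r) for r in runs]

print(to_runs([2, 3, 4, 5, 8]))       # [(2, 4), (8, 1)]
```

When free space is badly fragmented (every run has count 1), the representation degenerates to one entry per block, which is why the list is shorter only when counts are generally greater than 1.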

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activities, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Synchronization Hardware
The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. the TestAndSet() hardware instruction
2. the Swap() instruction

1. The TestAndSet() instruction atomically reads a Boolean variable, sets it to true, and returns the old value.

Solution using TestAndSet: the shared variable lock is initialized to false. A process repeatedly calls TestAndSet(lock) until it returns false, then enters its critical section; on exit, it sets lock back to false.

2. The Swap() instruction atomically exchanges the contents of its two Boolean arguments.

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key. A process sets key to true and repeatedly swaps lock and key until key becomes false, then enters its critical section; on exit, it sets lock back to false.
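A sketch of the Swap()-based lock, under the same caveat as before (an internal lock only simulates the hardware's atomicity):

```python
import threading

# Swap()-based lock: each process swaps its local key with the shared
# lock until it reads back False, meaning the lock was free.
_atomic = threading.Lock()              # simulates atomicity only
lock = [False]                          # shared, initialized to false

def swap(a, b):
    """Atomically exchange the contents of a and b (both 1-cell lists)."""
    with _atomic:
        a[0], b[0] = b[0], a[0]

def acquire():
    key = [True]                        # local variable of the process
    while key[0]:                       # keep swapping until key becomes False
        swap(lock, key)

def release():
    lock[0] = False

acquire()
print(lock[0])   # True: lock held
release()
print(lock[0])   # False
```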

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee about the order in which they will enter, and any one process could have the bad luck of waiting forever for its turn. (Since there is no guarantee about the relative speeds of the processes, a very fast process could release the lock, run through its remainder section, and re-acquire the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

A solution using TestAndSet that satisfies bounded waiting uses two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

                                                          Bounded-waiting mutual exclusion with TestAndSet( )

On exit, a process looks in order (starting with the next process on the list) for a process that has been waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process into the critical section while continuing to block all the others.
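The exit-section handoff just described can be sketched on its own (a simplified, single-threaded illustration of the scan, not the full concurrent algorithm):

```python
# Exit section of the bounded-waiting algorithm: process i scans
# waiting[] in order from i+1 and hands the lock to the first waiting
# process, or releases the lock if nobody is waiting.

N = 5

def exit_section(i, waiting, lock):
    j = (i + 1) % N
    while j != i and not waiting[j]:   # find the next waiting process
        j = (j + 1) % N
    if j == i:
        lock[0] = False                # nobody waits: release the lock
        return None
    waiting[j] = False                 # hand over without unlocking
    return j

waiting = [False, False, True, False, True]
lock = [True]
print(exit_section(0, waiting, lock))  # 2: first waiter after process 0
print(lock[0])                         # True: the lock was handed over, not freed
```

Because the scan always starts just past the exiting process, every waiter is reached within N-1 turns, which is exactly the bounded-waiting guarantee.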

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable that is accessed only through 2 standard atomic operations: 1. wait() 2. signal()

Wait primitive:
    wait(S):
        while (S <= 0)
            ; // do nothing
        S--;

Signal primitive:
    signal(S):
        S++;

                                                          Each process that wishes to use a resource performs wait() operation When a process releases a resource it performs signal() operation

Semaphore Usage:
1. A binary semaphore (or mutex lock) can take the values 0 or 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the number of remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources. When the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal() call.
For example, suppose there are 3 resources and 4 processes. The value of the counting semaphore is 3.
If process P1 requires a resource, it performs a wait() operation; now the counting semaphore = 2.
If process P2 requires a resource, it performs a wait() operation; now the counting semaphore = 1.
If process P3 requires a resource, it performs a wait() operation; now the counting semaphore = 0.
If process P4 requires a resource, it performs a wait() operation; P4 waits until a resource is available.
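Python's `threading.Semaphore` is exactly such a counting semaphore (`acquire()` is wait(), `release()` is signal()), so the 3-resources/4-processes walkthrough above can be reproduced directly:

```python
import threading

# Counting semaphore initialized to the number of resources (3).
resources = threading.Semaphore(3)

# Four processes each try to grab a resource; non-blocking acquire
# returns False instead of waiting, so we can observe the 4th failing.
got = [resources.acquire(blocking=False) for _ in range(4)]
print(got)            # [True, True, True, False]: the 4th must wait

resources.release()   # some process frees a resource (signal)
print(resources.acquire(blocking=False))   # True: the waiter can now proceed
```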

                                                          3 Semaphores can also be used to synchronize certain operations

Semaphore Implementation:
When a process is in its critical section, any other process that tries to enter its critical section must loop continuously, wasting CPU time (busy waiting). A semaphore implemented this way is called a spin lock, as the process spins while waiting.

As an example of using a semaphore for synchronization (usage 3 above), suppose it is important that process P1 execute statement S1 before process P2 executes statement S2. First we create a semaphore named synch that is shared by the two processes and initialize it to zero. Then in process P1 we insert the code

    S1;
    signal(synch);

and in process P2 we insert the code

    wait(synch);
    S2;

Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal(). Each semaphore has an integer value and maintains a queue of waiting processes.
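The S1-before-S2 ordering can be demonstrated with two threads and a semaphore initialized to zero:

```python
import threading

# synch starts at 0, so P2's wait(synch) blocks until P1
# performs signal(synch) after executing S1.
synch = threading.Semaphore(0)
trace = []

def p1():
    trace.append("S1")     # statement S1
    synch.release()        # signal(synch)

def p2():
    synch.acquire()        # wait(synch)
    trace.append("S2")     # statement S2

t2 = threading.Thread(target=p2)
t2.start()                 # start P2 first: it blocks on wait(synch)
t1 = threading.Thread(target=p1)
t1.start()
t1.join(); t2.join()
print(trace)               # ['S1', 'S2'] in every interleaving
```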

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process (placing it on the semaphore's waiting queue), and the waiting process is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue to the ready state).

Problems due to semaphores:
1. Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes.

2. Starvation: one or more processes is blocked forever and never gets a chance to enter its critical section. This can happen, for example, if we do not specify the algorithms for adding processes to the waiting queue or for selecting the process to be removed by the signal() call: if a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M, H have priorities L < M < H. Process H is waiting for a resource that is held by process L, so process L is running. When process M arrives, it pre-empts process L, and M is now running. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority (priority inheritance) so that M cannot pre-empt it.
=====================================================================
Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem
Consider:
    n buffers, each of which holds one item
    a producer process, which creates the items (one at a time)
    a consumer process, which processes them (one at a time)

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is needed. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:
    do {
        ...
        produce an item
        ...
        wait(empty);
        wait(mutex);
        ...
        add the item to the buffer
        ...
        signal(mutex);
        signal(full);
    } while (1);

Consumer process:
    do {
        wait(full);
        wait(mutex);
        ...
        remove an item from the buffer
        ...
        signal(mutex);
        signal(empty);
        ...
        consume the item
        ...
    } while (1);
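The three-semaphore scheme runs directly with Python threads (here with n = 3 slots and 10 items, purely as a demonstration):

```python
import threading
from collections import deque

# empty counts free slots, full counts filled slots,
# mutex guards the buffer itself.
n = 3
buffer = deque()
empty = threading.Semaphore(n)
full = threading.Semaphore(0)
mutex = threading.Semaphore(1)
consumed = []

def producer(items):
    for item in items:
        empty.acquire()            # wait(empty)
        mutex.acquire()            # wait(mutex)
        buffer.append(item)        # add the item to the buffer
        mutex.release()            # signal(mutex)
        full.release()             # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()             # wait(full)
        mutex.acquire()            # wait(mutex)
        consumed.append(buffer.popleft())   # remove an item
        mutex.release()            # signal(mutex)
        empty.release()            # signal(empty)

t1 = threading.Thread(target=producer, args=(range(10),))
t2 = threading.Thread(target=consumer, args=(10,))
t1.start(); t2.start()
t1.join(); t2.join()
print(consumed == list(range(10)))   # True: all items arrive, in order
```

The producer can never be more than n items ahead of the consumer, and neither ever touches the buffer without holding mutex.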

2. The Readers-Writers Problem
A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (i.e., it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

    readers priority: a new reader need not wait just because a writer is waiting
    writers priority: if a writer is waiting to access the data, no new readers may start reading

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses an integer counter and 2 semaphores:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both readers and writers; it provides mutually exclusive access to the shared data)

Reader process:
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    ...
    reading is performed
    ...
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);

Writer process:
    do {
        wait(wrt);
        ...
        writing is performed
        ...
        signal(wrt);
    } while (TRUE);
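The same structure in runnable form (a minimal single-exercise sketch; the threads-and-contention behaviour is the same code paths exercised concurrently):

```python
import threading

# Readers-priority solution: readcount is a plain integer, mutex
# protects it, and wrt gives writers (and the first reader)
# exclusive access to the shared data.
readcount = 0
mutex = threading.Semaphore(1)
wrt = threading.Semaphore(1)
shared = []

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()          # first reader locks writers out
    mutex.release()
    snapshot = list(shared)    # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()          # last reader lets writers back in
    mutex.release()
    return snapshot

def writer(value):
    wrt.acquire()
    shared.append(value)       # writing is performed
    wrt.release()

writer(1)
print(reader())                # [1]
```

Note that only the first reader waits on wrt and only the last reader signals it; readers in between pass straight through, which is what gives readers priority.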

3. The Dining-Philosophers Problem
n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner.

A semaphore solution represents each chopstick with a semaphore:

    semaphore chopstick[5];   // initially all values are 1

Philosopher i:
    do {
        wait(chopstick[i]);
        wait(chopstick[(i + 1) % 5]);
        ...
        eat
        ...
        signal(chopstick[i]);
        signal(chopstick[(i + 1) % 5]);
        ...
        think
        ...
    } while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (for example, if all five philosophers pick up their left chopsticks at the same time).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: in asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems have processors dedicated to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

                                                          REAL TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint

                                                          Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples: digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control, i.e. it spreads its decision-making actions across the different computers in the system, so that the failure of individual computers or of the network does not cripple its operation.

A distributed operating system appears to its users as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

                                                          Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

                                                          DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location Modern desktop systems support multiprogramming Common examples are Linux FreeBSD Windows 8 and the Macintosh operating system

HANDHELD SYSTEMS (palmtop computers): two of the most popular operating systems for handhelds are Symbian OS and Android OS. These systems have very little memory.


                                                          CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system The components of a cluster are usually connected to each other through fast local area networks each node (computer used as a server) running its own instance of an operating system

                                                          Q) Modern operating system

New developments in hardware, applications and security threats have led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client-server computing.

                                                          Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and it is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as systems nowadays often contain multiple microprocessors, an SMP operating system provides greater efficiency when multiple processors are available. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                                          SMP schedules processes or threads across all of the processors

                                                          Advantages

1. Increased performance: if some portion of a program can run in parallel, the performance of the SMP OS increases.
2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can be made dependent on the number of processors used.

                                                          4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5. Object-oriented design: adds modularity to the kernel. The OS can be customized without affecting system integrity.

Q) Linux Operating System
Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency and standardization.

Components of the Linux System


The Linux operating system has primarily three components:

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system, interacts directly with the hardware, and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs that application programs use to access the kernel's features.

System Utility - System utility programs perform specialized, individual-level tasks such as updating log files and accepting login requests from terminals.

Kernel Modules: sections of kernel code that can be compiled, loaded and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode
Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, does not require any context switch, and hence is very efficient and fast. The kernel runs processes, provides system services to them, and gives processes protected access to the hardware. Code that is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code; they use the system libraries to access kernel functions for the system's low-level tasks.

Basic Features
The following are some of the important features of the Linux operating system:

Portable - Portability means software can work in the same way on different types of hardware. The Linux kernel and application programs support installation on many kinds of hardware platforms.

Open Source - Linux source code is freely available, and Linux is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system: multiple users can access system resources like memory, RAM and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system: multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used to perform various types of operations, call application programs, etc.


Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

Architecture
The Linux system architecture consists of the following layers:

                                                          Hardware layer - Hardware consists of all peripheral devices (RAM HDD CPU etc)

Kernel - The core component of the operating system; it interacts directly with the hardware and provides low-level services to the upper-layer components.

Shell - An interface to the kernel that takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System
1. Extensibility - layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.
4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run without changing the source code.
5. Performance - XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.
6. International support - supports different locales via the national language support (NLS) API.

XP Architecture:
1. Layered system of modules.
2. Protected mode - hardware abstraction layer (HAL), kernel, executive.
3. User mode - collection of subsystems:
    a. Environmental subsystems emulate different operating systems.
    b. Protection subsystems provide security functions.

                                                          XP architecture


Q) Windows Network Operating System
Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.





Dispatcher: The dispatcher gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the user program to restart that program

The time taken to stop one process and start another is known as dispatch latency. This must be kept small.

Ready Queue: The ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. The records in the ready queue are the PCBs of processes.

                                                            Pre-emptive Scheduling

CPU scheduling decisions take place under one of four conditions:

1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state, for example in response to an interrupt
3. When a process switches from the waiting state to the ready state, say at completion of I/O
4. When a process terminates

For conditions 1 and 4 there is no choice - a new process must be selected. For conditions 2 and 3 there is a choice - continue running the current process, or select a different one. If CPU scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive or cooperative. In non-preemptive scheduling, a process runs until it gives up the CPU itself, i.e. when it waits for I/O or when it completes. Otherwise the system is said to be pre-emptive.

Windows used non-preemptive scheduling up to Windows 3.x and started using pre-emptive scheduling with Windows 95. The Macintosh used non-preemptive scheduling prior to Mac OS X and pre-emptive scheduling since then. Note that pre-emptive scheduling is only possible on hardware that supports a timer interrupt.

                                                            Note that pre-emptive scheduling can cause problems when two processes share data because one process may get interrupted in the middle of updating shared data structures

Preemption can also be a problem if the kernel is busy with a system call (e.g. updating critical kernel data structures) when the preemption occurs. Solution: wait until the system call has either completed or blocked before allowing the pre-emption. But this solution is problematic for real-time systems.

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. But this should be done only on very short pieces of code that take little CPU time.

Scheduling criteria used to compare CPU scheduling algorithms:
1. CPU utilization: in a real system, CPU utilization ranges from 40% to 90%
2. Throughput: number of processes that are completed per unit time
3. Turnaround time: time when the process completed - time when the process was submitted, i.e. it includes waiting time and burst time
4. Waiting time: amount of time that a process spends waiting for the CPU
5. Response time: time when the first response came - time when the process was submitted

Optimization criteria: max CPU utilization, max throughput, min turnaround time, min waiting time, min response time.

CPU Scheduling Algorithms

1) First-Come, First-Served (FCFS) Scheduling
FCFS CPU scheduling is non-pre-emptive. Processes are scheduled in the order they arrive. FCFS is implemented with a queue: when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages:
1. Average waiting time is quite long

2. Convoy effect (it occurs as small processes wait in the queue for a big process to leave the CPU)

Example:
Process   Burst Time
P1        24
P2        3
P3        3

Suppose that the processes arrive in the order P1, P2, P3.

The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

Waiting time for P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.

2) Shortest-Job-First (SJF) Scheduling
Schedule the process with the shortest CPU burst time. If the CPU burst times of two processes are the same, then FCFS is used.
Advantages: average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either pre-emptive or non-pre-emptive:
1. Non-pre-emptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Pre-emptive SJF, or Shortest-Remaining-Time-First (SRTF): if a newly arrived process is shorter than what remains of the currently executing process, pre-empt the currently executing process.

Example:
Process   Arrival Time   Burst Time
P1        0              7
P2        2              4
P3        4              1
P4        5              4

SJF (non-preemptive) Gantt chart:

| P1 | P3 | P2 | P4 |
0    7    8    12   16

Average waiting time = [0 + (8-2) + (7-4) + (12-5)]/4 = 4

Example of Pre-emptive SJF:

Process   Arrival Time   Burst Time
P1        0              7
P2        2              4
P3        4              1
P4        5              4

SJF (preemptive) Gantt chart:

| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16

Average waiting time = (9 + 1 + 0 + 2)/4 = 3

3) Priority Scheduling
A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, then FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. pre-emptive, 2. non-pre-emptive.

Problem: starvation - low-priority processes may never execute. Solution: aging - as time progresses, increase the priority of the process.

4) Round Robin (RR)
Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Performance:
1. q large => behaves like FIFO
2. q small => context-switch overhead is too high; q must be large with respect to context-switch time

Example of RR with time quantum = 4:
Process   Burst Time
P1        24
P2        3
P3        3

The Gantt chart is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 = 5.66

5) Multilevel Queue Scheduling
The ready queue is partitioned into separate queues. For example, the ready queue is divided into 2 queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, the Round Robin algorithm can be used in the foreground queue and the FCFS algorithm in the background queue.

Scheduling must also be done between the queues. This can be done in 2 ways:
1. Fixed-priority scheduling: the foreground queue has the highest priority. All the processes in the foreground queue must be completed and the queue must be empty; only then will the processes in the background queue be given the CPU. Disadvantage: starvation.
2. Time slice: each queue gets a certain amount of CPU time which it can schedule amongst its processes, e.g. 80% to the foreground queue in RR and 20% to the background queue in FCFS.

6) Multilevel Feedback Queue Scheduling
A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes will be in the highest-priority queue. If a process uses too much CPU time, it is moved to a lower-priority queue. If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. For example, let there be 3 queues Q0, Q1, Q2:

1. Q0 - time quantum 8 milliseconds
2. Q1 - time quantum 16 milliseconds
3. Q2 - FCFS

Scheduling:

1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 milliseconds. If it does not finish in 8 milliseconds, the process is moved to queue Q1.
2. At Q1 the process again receives 16 additional milliseconds. If it still does not complete, it is moved to queue Q2.

Q) Synchronization
Ans: Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of process synchronization is the Producer-Consumer problem, or Bounded-Buffer problem.
Producer-Consumer problem: make sure that the producer process does not add data when the buffer is full, and the consumer process does not take data when the buffer is empty.

The code for the producer process can be modified as follows:

    while (true) {
        /* produce an item in nextProduced */
        while (counter == BUFFER_SIZE)
            ; /* do nothing */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed from the buffer. in is an index that always points to the first empty slot after the last filled slot; out is an index that always points to the first filled slot, at the head of the circular queue.

The code for the consumer process is as shown below:

    while (true) {
        while (counter == 0)
            ; /* do nothing */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
    }

The above producer and consumer code produces a RACE CONDITION. When the outcome of the execution of several co-operating processes depends on the order in which their accesses to shared data take place, it is called a race condition.

For example, let counter = 5. The statements counter++ and counter-- are each compiled into several machine instructions (load counter into a register, modify the register, store it back), and these instruction sequences may be interleaved. If the producer produces one item and the consumer consumes one item, the correct value is counter = 5. But depending on which store executes last, the interleaving can leave counter = 4 or counter = 6, i.e. an inconsistent result either way.

Solution to avoid the race condition: make sure only one process at a time updates the shared variable (here, counter).

Q) Critical Section Problem
Ans: Each process has a segment of code called its critical section. Critical-section code is code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical-section code at the same time. For this to happen, the code is structured as:
1. Each process must request permission to enter its critical section - entry section
2. The end of the critical section - exit section
3. The rest of the code after the critical section - remainder section

A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes that are not in their remainder sections may participate in deciding which process will enter its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: there is a bound on the number of times other processes may enter their critical sections after a process has requested entry. No process must wait indefinitely.

Q) Peterson's solution for the critical-section problem (software-based solution)
Ans: For two processes, Peterson's solution uses two shared variables: an array flag[] (flag[i] = TRUE means process i wants to enter its critical section) and turn (whose value says whose turn it is to enter). Each process i (with j the other process) runs:

    do {
        flag[i] = TRUE;
        turn = j;
        while (flag[j] && turn == j)
            ; /* busy wait */
        /* critical section */
        flag[i] = FALSE;
        /* remainder section */
    } while (TRUE);

A sample interleaving for processes 1 and 2:

    Time  Process 1                         Process 2
    T0    flag[1] = TRUE; turn = 2          flag[2] = TRUE
    T1                                      turn = 1
    T2    while (flag[2] && turn == 2)      while (flag[1] && turn == 1)
          since turn = 1, the loop exits    loops until flag[1] becomes FALSE
    T3    enters critical section
    T4    flag[1] = FALSE                   enters critical section
    T5    enters remainder section          flag[2] = FALSE
                                            enters remainder section

10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

                                                            File Attributes

1. Name - name of the file, for user understanding
2. Identifier - used by the OS to identify a file (e.g. inode number)
3. Type - text file, executable file, binary file, etc.
4. Location - location of the file on the hard drive
5. Size - current size of the file
6. Protection - controls who can read, write, or execute the file
7. Time, date, and user identification - gives information about when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. The directory structure is kept on disk.

                                                            File Operations

The file Abstract Data Type supports many common operations:
1. Creating a file: find free space on disk, add an entry to the directory
2. Writing a file: write data at the current file-position pointer location and update the pointer
3. Reading a file: read data at the current file-position pointer location and update the pointer
4. Repositioning within a file (file seek): change the pointer location to a given value
5. Deleting a file: free the space allocated to the file and remove its entry from the directory
6. Truncating a file: delete the data and update the file size

                                                            Open File Table

Most operating systems require that files be opened before access and closed after access. Information about all files that are currently open in the system is stored in an open file table. This system-wide open file table is maintained by the OS, and information about the files opened by a process is stored in the process's own open file table. The open file table has the following information:

1. File pointer - records the current position in the file, for the next read or write
2. File-open count - how many times the file has been opened (simultaneously, by different processes) and not yet closed. When this counter reaches zero, the file's entry can be removed from the table.
3. Disk location of the file
4. Access rights

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With mandatory locks, the OS makes sure the locks are acquired and released appropriately.
4. With advisory locks, software developers must make sure the locks are acquired and released appropriately.
5. UNIX uses advisory locks and Windows uses mandatory locks.

                                                            File Types

A common technique for implementing file types is to include the type as part of the filename. The name is split into 2 parts:

1. File name
2. Extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on that file. For example, when a user opens Microsoft Word and chooses File > Open, the user specifies only the file name without an extension; the application looks for a file with the given name and the extension it expects. Hence extensions are only hints to applications and are not enforced by the operating system. UNIX systems use a crude magic number stored at the beginning of the file to indicate the type of the file. In the Mac OS X operating system, a file produced by a word processor has the word processor's name recorded as its creator attribute; when the user double-clicks on the file, the creating word processor opens it.

File Structure: Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and where the first instruction is located.

If an OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, including finding the first executable statement).

Macintosh files have two forks - a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

Internal File Structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units: a single byte, a data record, or a structure size. The number of logical units which fit into one physical block determines the packing. For example, if a student file is organized internally as records, each record takes 64 bytes, and the disk is divided into blocks of 512 bytes, then the packing = 512/64 = 8, i.e. 8 records can be stored per block. Internal fragmentation may occur because of this.

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: we cannot randomly access any record. Records are read and written sequentially, one record after another. A sequential-access file emulates magnetic tape operation and generally supports the following operations:
- read next: read a record and advance the tape to the next position
- write next: write a record and advance the tape to the next position
- rewind
- skip n records: may or may not be supported; n may be limited to positive numbers, or may be limited to +/- 1

2. Direct access (or relative access): the file is made up of fixed-length records, so we can randomly access any record. There is no restriction on the order of reading or writing. Direct-access files are useful when we need to access large amounts of data randomly, e.g. a database.

Operations supported include (n is the relative block number, i.e. relative to the beginning of the file):
1. read n - read record number n
2. write n - write record number n
3. jump to record n - n could be 0 or the end of the file
4. query current record - used to return back to this record later

Sequential access can easily be emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct-access method. We construct an index for the file which contains pointers to the various blocks. To search the file, we first access the index and then use the pointer to access the file directly.

For example, if a student file has 120000 records sorted by student id, we can create an index consisting of the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120000 records = 120000/64 = 1875 blocks, so the index has 1875 entries. The index is kept in main memory so that searching becomes faster. If the index itself becomes too large, we create an index for the index file (e.g. a B-tree).

                                                            Q) Allocation Methods

Ans: Space is allocated to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

The three major methods of allocating disk space are:

1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk. If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1. The directory entry for each file indicates the address of the starting block (b) and the length (n).

Advantages:

1. Access is easy: for direct access to block i, we can find it directly as block b+i; since the file is stored sequentially, sequential access is also easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

                                                            Disadvantages

1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even if the final size is known, the entire size may not be used by the file at the beginning, which leads to wastage of space. If the size of the file is not known in advance, then (i) if we allocate too little space, the file cannot be extended; the solution is to find a larger hole, copy the contents of the file to the new space, and release the previous space — possible only as long as such space exists, and also time consuming; (ii) if we overestimate the amount of space needed, much of the space goes unused, which leads to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous-allocation scheme. A set of contiguous blocks is allocated at first; if the file needs more blocks, another set of contiguous blocks, known as an extent, is added. The directory entry then records the location of the first block, a block count, and a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2. Linked allocation: each file is a linked list of disk blocks, and the disk blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; the pointer is nil for an empty file, and the size field of an empty file is 0.

To add more data to an existing file, a free block is found, the data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. It is easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is high.
2. The number of disk seeks required is high.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct-access files: to access the ith block, i disk reads are required.
5. Not reliable: pointers may be lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (e.g. 4 blocks = 1 cluster) and allocate clusters instead of blocks to files. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk accesses, reduces the space wasted on pointers since fewer pointers are needed, and simplifies free-list management. Disadvantage: internal fragmentation - more space is wasted when a cluster is partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation on linked allocation is the use of a file-allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. The entry for a block contains the block number of the next block of the file. The last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

Advantages:

1. It is easy to find space for a new file or to add more data to an existing file.
2. Random access time is improved.

Disadvantage: the number of disk seeks is high. The disk head must move to read the FAT and find the location of a block, then move to the location of the block itself.

3. Indexed allocation

Each file has its own index block. The index block is an array of disk-block addresses; the ith entry in the index block points to the ith block of the file. The directory contains the address of the index block. When the file is created, all pointers in the index block are set to nil. To add more data to an existing file, a free block is found, the data is written to it, and the address of the block is put in the index block.

                                                            Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Direct (random) access is supported efficiently.
3. No external fragmentation.

Disadvantages

1. Seek time is higher.
2. The number of disk seeks required is higher.
3. Not reliable: losing an index block can lose the whole file.
4. The index block must be kept in memory. If this memory is not available, we have to read the index block and then the desired data block, i.e., 2 disk accesses are required. To access a block near the end of a large file, we may need to read several index blocks before reaching the needed data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store these one or two pointers.

Various mechanisms are used to decide the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header with the name of the file and a set of, say, the first 100 disk-block addresses. For large files, the last address points to another index block.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

                                                            3 combined scheme

For example, say there are 15 pointers in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, an index block that contains the addresses of the file's data blocks.

The second points to a double indirect block, and the third points to a triple indirect block.
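The maximum file size supported by this combined scheme can be worked out directly. The block and pointer sizes below are assumptions (the notes give no numbers); common UNIX-style values are used:

```python
# Sketch: maximum file size under the combined (12 direct + single +
# double + triple indirect) scheme. Block/pointer sizes are assumed.
BLOCK = 4096          # assumed block size in bytes (4 KB)
PTR = 4               # assumed pointer size in bytes
PPB = BLOCK // PTR    # pointers per index block = 1024

direct = 12 * BLOCK               # 12 direct pointers
single = PPB * BLOCK              # one single indirect block
double = PPB * PPB * BLOCK        # one double indirect block
triple = PPB * PPB * PPB * BLOCK  # one triple indirect block

max_file_size = direct + single + double + triple
# Dominated by the triple indirect term: over 4 TiB with these parameters.
assert max_file_size == (12 + 1024 + 1024**2 + 1024**3) * 4096
```

Note how the first 12 blocks (48 KB here) need no extra disk access at all, which suits the many small files a typical system holds.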

                                                            Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                                            The free space list can be implemented as

1. Bit vector (bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (bit map): each block is represented by 1 bit. If the block is free the bit is 1; if the block is allocated the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. The free-space bit map would be 001111001000.

Advantages: simple, and it is easy to find contiguous blocks for files.

                                                            Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a large disk there.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1 TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.
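Scanning the bit map for free space, including a contiguous run, is straightforward; this sketch uses the 12-block example from above:

```python
# Sketch: free-space bit map for the example disk where blocks
# 2, 3, 4, 5 and 8 are free (1 = free, 0 = allocated).
bitmap = [0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0]

def first_free_run(n):
    """Return the block number starting the first run of n free blocks, or -1."""
    run_start, run_len = -1, 0
    for block, bit in enumerate(bitmap):
        if bit == 1:
            if run_len == 0:
                run_start = block   # a new run of free blocks begins here
            run_len += 1
            if run_len == n:
                return run_start
        else:
            run_len = 0             # run broken by an allocated block
    return -1

assert first_free_run(1) == 2    # first free block
assert first_free_run(4) == 2    # blocks 2-5 are contiguous and free
assert first_free_run(5) == -1   # no run of 5 free blocks exists
```

This ease of finding contiguous runs is exactly the advantage the notes mention; real systems speed the scan up by checking a whole word at a time.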

2. Linked list: the free-space list is implemented by linking all the free disk blocks together, with a pointer to the first free block stored in a special location on disk. For example, on a disk where blocks 2, 3, 4, 5 and 8 are free, we keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.

Advantage: no waste of space.

Disadvantage: contiguous space cannot be found easily; to traverse the free-space list we must read each block, which requires substantial I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last one contains the addresses of another n free blocks, and so on.

Advantage: the addresses of a large number of free blocks can be found quickly.

4. Counting: the free-space list is maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it.

Each entry in the free-space list thus consists of a disk address and a count. Each entry requires more space, and the list is shorter only if counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion and search.
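Building the counting representation from a list of free blocks can be sketched as follows, again using the example disk where blocks 2, 3, 4, 5 and 8 are free:

```python
# Sketch: compressing a sorted list of free blocks into the counting
# scheme's (first block, run length) entries.
free_blocks = [2, 3, 4, 5, 8]

def to_counting(blocks):
    """Turn a sorted list of free block numbers into (address, count) pairs."""
    entries = []
    for b in blocks:
        if entries and entries[-1][0] + entries[-1][1] == b:
            entries[-1][1] += 1        # b extends the current contiguous run
        else:
            entries.append([b, 1])     # b starts a new run
    return [tuple(e) for e in entries]

# Blocks 2-5 collapse into a single (2, 4) entry; block 8 stands alone.
assert to_counting(free_blocks) == [(2, 4), (8, 1)]
```

Five free blocks become two entries here, which is the space saving counting aims for when free space comes in runs.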

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it. Finally, the free-space list is updated on disk.

Q) Synchronization Hardware

The hardware solution to synchronization is to provide atomic operations: operations that execute as a single instruction, without interruption. Two such operations are:

1. The TestAndSet() hardware instruction
2. The Swap() instruction

1. The TestAndSet() instruction atomically returns the old value of a memory word and sets the word to true.

Solution using TestAndSet: the shared variable lock is initialized to false.
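The definition and the lock built on it (the figures are missing from these notes) can be sketched as follows. Python stands in for the hardware instruction here, so the atomicity is only simulated; on real hardware the whole TestAndSet executes as one uninterruptible instruction:

```python
# Sketch of TestAndSet() and the simple spin lock built on it.
lock = False   # shared variable, initialized to false

def test_and_set():
    """Atomically return the old value of lock and set it to True."""
    global lock
    old = lock
    lock = True
    return old

def enter_critical_section():
    while test_and_set():   # spin while another process holds the lock
        pass

def leave_critical_section():
    global lock
    lock = False

enter_critical_section()
assert lock is True         # the lock is held inside the critical section
leave_critical_section()
assert lock is False
```

The first process to execute test_and_set() sees False and proceeds; every later caller sees True and spins until the holder resets lock.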

2. The Swap() instruction atomically exchanges the contents of two words.

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee about the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee about the relative speeds of the processes, a very fast process could release the lock, run through its remainder section, and re-acquire the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The following solution using TestAndSet satisfies bounded waiting. It uses two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for the critical section.

                                                            Bounded-waiting mutual exclusion with TestAndSet( )

On exit, a process first looks, in order (starting with the next process on the list), for a process that has been waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process into the critical section while continuing to block all the others.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1. wait()  2. signal()

Wait primitive:

    wait(S) {
        while (S <= 0)
            ;   // do nothing
        S--;
    }

Signal primitive:

    signal(S) {
        S++;
    }

Each process that wishes to use a resource performs the wait() operation. When a process releases a resource, it performs the signal() operation.

Semaphore usage:
1. A binary semaphore, or mutex lock, can take only the values 0 and 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining instances of a resource. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources. When the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal() call.

For example, suppose there are 3 resources and 4 processes. The value of the counting semaphore = 3.
If process P1 requires a resource, it performs a wait() operation; now the counting semaphore = 2.
If process P2 requires a resource, it performs a wait() operation; now the counting semaphore = 1.
If process P3 requires a resource, it performs a wait() operation; now the counting semaphore = 0.
If process P4 requires a resource, it performs a wait() operation; P4 waits until a resource is available.
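The 3-resources / 4-processes example maps directly onto Python's threading.Semaphore, where acquire() plays wait() and release() plays signal(). The worker names and the in_use list are illustrative scaffolding, not part of the notes:

```python
import threading

resources = threading.Semaphore(3)   # counter initialized to 3 resources
in_use = []                          # which processes currently hold a resource
lock = threading.Lock()              # protects the in_use list itself

def worker(name):
    resources.acquire()              # wait(): blocks when the counter is 0
    with lock:
        in_use.append(name)
        assert len(in_use) <= 3      # never more holders than resources
    with lock:
        in_use.remove(name)
    resources.release()              # signal(): wakes one blocked waiter

threads = [threading.Thread(target=worker, args=(f"P{i}",)) for i in range(1, 5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert in_use == []                  # every process eventually got a resource
```

P4 is never refused outright; it simply blocks in acquire() until one of P1-P3 calls release().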

                                                            3 Semaphores can also be used to synchronize certain operations

Semaphore implementation: while one process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). A semaphore implemented this way is called a spin lock, because the process spins while waiting.

As an example of using a semaphore for synchronization (usage 3 above), suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First we create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code:

    S1;
    signal(synch);

and in process P2 we insert the code:

    wait(synch);
    S2;

Because synch was initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal(). Each semaphore has an integer value and maintains a queue of waiting processes.
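The S1-before-S2 ordering above can be demonstrated with a semaphore initialized to 0; the trace list is illustrative scaffolding used to observe the order:

```python
import threading

synch = threading.Semaphore(0)   # initialized to zero
trace = []

def p1():
    trace.append("S1")           # statement S1
    synch.release()              # signal(synch)

def p2():
    synch.acquire()              # wait(synch): blocks until P1 signals
    trace.append("S2")           # statement S2

# Start P2 first to show it really waits for P1's signal.
t2 = threading.Thread(target=p2)
t2.start()
t1 = threading.Thread(target=p1)
t1.start()
t1.join()
t2.join()
assert trace == ["S1", "S2"]     # S2 can never run before S1
```

Even though P2 is started first, it cannot pass the acquire() until P1 has executed S1 and released the semaphore.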

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; it is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue to the ready state).

Problems due to semaphores:
1. Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can be freed only by one of the other (blocked) processes.

2. Starvation: one or more processes remain blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithm for adding processes to the waiting queue, or for selecting one to be removed in the signal() call: if a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M, H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it pre-empts process L, and M runs. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority (priority inheritance) so that M cannot pre-empt L.

Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem

Consider:
- n buffers, each of which holds one item
- a producer process, which creates items (one at a time)
- a consumer process, which consumes items (one at a time)

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. Since both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (counts the number of empty buffers)
2. full = 0 (counts the number of filled buffers)
3. mutex = 1 (provides mutual exclusion for access to the buffer)

Producer process:

    do {
        ...
        produce an item
        ...
        wait(empty);
        wait(mutex);
        ...
        add the item to the buffer
        ...
        signal(mutex);
        signal(full);
    } while (1);

Consumer process:

    do {
        wait(full);
        wait(mutex);
        ...
        remove an item from the buffer
        ...
        signal(mutex);
        signal(empty);
        ...
        consume the item
        ...
    } while (1);
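The bounded-buffer solution above runs essentially unchanged with Python's threading primitives; the buffer size, item count and list names are illustrative:

```python
import threading
from collections import deque

N = 5
buffer = deque()
empty = threading.Semaphore(N)   # counts empty slots, initialized to n
full = threading.Semaphore(0)    # counts filled slots
mutex = threading.Lock()         # mutual exclusion on the buffer

produced = list(range(20))       # items the producer will create
consumed = []

def producer():
    for item in produced:
        empty.acquire()          # wait(empty)
        with mutex:              # wait(mutex) ... signal(mutex)
            buffer.append(item)
        full.release()           # signal(full)

def consumer():
    for _ in range(len(produced)):
        full.acquire()           # wait(full)
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()          # signal(empty)

tp = threading.Thread(target=producer)
tc = threading.Thread(target=consumer)
tp.start(); tc.start()
tp.join(); tc.join()
assert consumed == produced      # nothing lost, nothing duplicated
```

With 20 items but only 5 slots, the producer is forced to block on empty whenever it gets too far ahead, which is exactly the bound the problem asks for.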

2. The Readers-Writers Problem

A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

- Readers' priority: no reader waits merely because a writer is waiting.
- Writers' priority: if a writer is waiting to access the data, no new readers may start reading.

A solution to either version may cause starvation: in the readers'-priority version writers may starve; in the writers'-priority version readers may starve.

A semaphore solution to the readers'-priority version uses an integer counter and 2 semaphores:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:

    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    ...
    reading is performed
    ...
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);

Writer process:

    do {
        wait(wrt);
        ...
        writing is performed
        ...
        signal(wrt);
    } while (TRUE);
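The readers'-priority scheme above translates to Python threads as follows; the shared dictionary and the value 42 are illustrative:

```python
import threading

wrt = threading.Semaphore(1)   # exclusive access for writers
mutex = threading.Lock()       # protects the readcount counter
readcount = 0
shared = {"value": 0}          # the shared data item

def reader(results):
    global readcount
    with mutex:
        readcount += 1
        if readcount == 1:
            wrt.acquire()      # first reader locks writers out
    results.append(shared["value"])   # reading is performed
    with mutex:
        readcount -= 1
        if readcount == 0:
            wrt.release()      # last reader lets writers back in

def writer(v):
    wrt.acquire()              # wait(wrt)
    shared["value"] = v        # writing is performed
    wrt.release()              # signal(wrt)

seen = []
threads = [threading.Thread(target=reader, args=(seen,)) for _ in range(3)]
threads.append(threading.Thread(target=writer, args=(42,)))
for t in threads:
    t.start()
for t in threads:
    t.join()
assert shared["value"] == 42
assert readcount == 0          # all readers have left
```

Only the first reader in and the last reader out touch wrt, so any number of readers overlap freely while every writer still gets exclusive access.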

3. The Dining-Philosophers Problem

n philosophers sit around a table, alternately thinking and eating. From time to time a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already held by a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5];   // initially all values are 1

Philosopher i:

    do {
        wait(chopstick[i]);
        wait(chopstick[(i + 1) % 5]);
        ...
        eat
        ...
        signal(chopstick[i]);
        signal(chopstick[(i + 1) % 5]);
        ...
        think
        ...
    } while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (for example, if every philosopher simultaneously picks up his left chopstick).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: in asymmetric multiprocessing (ASMP), not all CPUs are equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, and so on. These systems do not have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within a time constraint.

                                                            Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that response requirements will be met under all conditions.
2. A soft real-time system cannot guarantee that response requirements will be met under all conditions. Examples: digital audio and multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e., they spread decision-making across the different computers in the system, so that failures of individual computers or of the network do not cripple operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. Users may not know where their programs run or where their files are located; that should all be handled automatically by the operating system.

                                                            Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

                                                            DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location Modern desktop systems support multiprogramming Common examples are Linux FreeBSD Windows 8 and the Macintosh operating system

HANDHELD SYSTEMS (palmtop computers): two of the most popular operating systems for handhelds are Symbian OS and Android. They have very little memory.


CLUSTERED SYSTEMS: a computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                                                            Q) Modern operating system

New developments in hardware, applications and security threats have led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and increased size and variety of memory devices.

New application developments include multimedia applications, Internet and Web access, and client-server computing.

                                                            Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, basic CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and is well suited to a distributed environment.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks; for example, a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: since systems nowadays often contain multiple microprocessors, an SMP operating system provides greater efficiency when several processors are available. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                                            SMP schedules processes or threads across all of the processors

                                                            Advantages

1. Increased performance: if some portion of a program can run in parallel, the performance of an SMP OS increases.
2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: a new processor can be added to increase performance.
4. Scaling: cost can scale with the number of processors used.

4. Distributed operating systems: distributed systems allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux operating system

Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency and standardization.

Components of a Linux system:


The Linux operating system has primarily three components:

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system and interacts directly with the hardware. The kernel hides low-level hardware details from system and application programs.

System libraries - Special functions or programs used by application programs to access the kernel's features.

System utilities - Programs responsible for specialized, individual-level tasks such as updating log files or accepting login requests from terminals.

Kernel modules: sections of kernel code that can be compiled, loaded and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver is available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode

Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space and does not require a context switch, and hence is very efficient and fast. The kernel runs processes, provides system services to them, and gives them protected access to the hardware.

Code that does not need to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code; they use the system libraries to ask the kernel to carry out low-level tasks.

Basic Features

Following are some of the important features of the Linux operating system:

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on almost any kind of hardware platform.

Open source - Linux source code is freely available, and Linux is a community-based development project. Multiple teams work in collaboration to enhance its capabilities, and it is continuously evolving.

Multi-user - Linux is a multi-user system: multiple users can access system resources (memory, RAM, application programs) at the same time.

Multiprogramming - Linux is a multiprogramming system: multiple applications can run at the same time.

Hierarchical file system - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used to perform various operations, call application programs, and so on.


Security - Linux provides user security through authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture

The Linux system architecture consists of the following layers:

Hardware layer - Consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - The core component of the operating system; interacts directly with the hardware and provides low-level services to the upper-layer components.

Shell - An interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP operating system

1. Extensibility - layered architecture.

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.

2. Portability - XP can be moved from one hardware architecture to another with relatively few changes.

It is written in C and C++. Processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating-system resources.

4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.

5. Performance - XP subsystems can communicate with one another via high-performance message passing.

Pre-emption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

6. International support - different locales are supported via the national language support (NLS) API.

XP architecture:
1. A layered system of modules.
2. Protected mode - the hardware abstraction layer (HAL), the kernel and the executive.
3. User mode - a collection of subsystems:
   a. Environmental subsystems emulate different operating systems.
   b. Protection subsystems provide security functions.

                                                            XP architecture


Q) Windows network operating system

Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.





Scheduling criteria: minimize turnaround time, minimize waiting time, minimize response time.

CPU Scheduling Algorithms

1) First-Come, First-Served (FCFS) Scheduling
FCFS is non-preemptive: processes are scheduled in the order in which they arrive. FCFS is implemented with a FIFO queue; when the CPU is free, it is allocated to the process at the head of the queue.
Advantages: easy to implement.
Disadvantages:
1. The average waiting time is often quite long.

2. Convoy effect: short processes wait in the queue for one long process to release the CPU.

Example:

Process  Burst Time
P1       24
P2       3
P3       3

Suppose that the processes arrive in the order P1, P2, P3.

The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

Waiting times: P1 = 0, P2 = 24, P3 = 27. Average waiting time = (0 + 24 + 27)/3 = 17.

2) Shortest-Job-First (SJF) Scheduling
Schedule the process with the shortest next CPU burst; if the burst times of two processes are equal, FCFS is used.
Advantages: the average waiting time decreases.
Disadvantages: it is difficult to know the length of the next CPU burst.
SJF can be either preemptive or non-preemptive:
1. Non-preemptive SJF: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
2. Preemptive SJF, also called Shortest-Remaining-Time-First (SRTF): if a newly arrived process is shorter than the remaining time of the currently executing process, the executing process is preempted.

Example:

Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4

SJF (non-preemptive) Gantt chart:

| P1 | P3 | P2 | P4 |
0    7    8    12   16

Average waiting time = [0 + (8-2) + (7-4) + (12-5)]/4 = 4.
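The non-preemptive SJF schedule above can be checked with a short simulation (a Python sketch; the function name and structure are illustrative, not from the notes):

```python
def sjf_waiting_times(procs):
    """Non-preemptive SJF. procs: list of (name, arrival, burst).
    Returns {name: waiting_time}, where waiting = start time - arrival."""
    time, waits, pending = 0, {}, sorted(procs, key=lambda p: p[1])
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                        # CPU idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[2])  # pick the shortest CPU burst
        name, arrival, burst = job
        waits[name] = time - arrival
        time += burst                        # run the job to completion
        pending.remove(job)
    return waits

waits = sjf_waiting_times([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
# waits: P1 = 0, P3 = 3, P2 = 6, P4 = 7 -> average = 16/4 = 4
```

Running it on the example reproduces the Gantt chart's waiting times and the average of 4.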

Example of preemptive SJF:

Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4

SJF (preemptive) Gantt chart:

| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16

Average waiting time = (9 + 1 + 0 + 2)/4 = 3.

3) Priority Scheduling
A priority number (integer) is associated with each process, and the CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. preemptive  2. non-preemptive.

Problem: starvation - low-priority processes may never execute.
Solution: aging - as time progresses, increase the priority of waiting processes.

4) Round Robin (RR)
Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once, and no process waits more than (n-1)q time units.
Performance:
1. If q is large, RR behaves like FCFS.
2. If q is small, q must still be large with respect to the context-switch time, otherwise the overhead is too high.

Example of RR with time quantum = 4:

Process  Burst Time
P1       24
P2       3
P3       3

The Gantt chart is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Average waiting time = [(30-24) + 4 + 7]/3 = 17/3 ≈ 5.67.

5) Multilevel Queue Scheduling
The ready queue is partitioned into separate queues. For example, the ready queue can be divided into two queues:

1. a foreground (interactive) queue
2. a background (batch) queue

Each queue has its own scheduling algorithm. For example, the Round Robin algorithm can be used in the foreground queue and the FCFS algorithm in the background queue.

Scheduling must also be done between the queues. This can be done in two ways:

1. Fixed-priority scheduling: the foreground queue has the highest priority. All processes in the foreground queue must be completed (the queue must be empty) before the processes in the background queue are given the CPU. Disadvantage: starvation of background processes.

2. Time slice: each queue gets a certain share of CPU time, which it schedules among its own processes; e.g., 80% to the foreground queue in RR and 20% to the background queue in FCFS.

6) Multilevel Feedback Queue Scheduling
A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes stay in the highest-priority queue. If a process uses too much CPU time, it is moved to a lower-priority queue; if a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. For example, let there be three queues, Q0, Q1, and Q2:

1. Q0 - time quantum 8 milliseconds
2. Q1 - time quantum 16 milliseconds
3. Q2 - FCFS

Scheduling:

1. A process entering the ready queue is put in Q0. When it gains the CPU, it receives 8 ms; if it does not finish within 8 ms, it is moved to Q1.
2. In Q1 the process receives an additional 16 ms; if it still does not complete, it is moved to Q2 and served FCFS.

Q) Synchronization
Ans: Several processes run in an operating system, and some of them share resources, which can lead to problems such as data inconsistency.

The classic example of process synchronization is the Producer-Consumer (or Bounded-Buffer) problem: the producer process must not add data when the buffer is full, and the consumer process must not take data when the buffer is empty.

The code for the producer process can be written as follows:

    while (true) {
        /* produce an item in nextProduced */
        while (counter == BUFFER_SIZE)
            ;  /* do nothing: buffer is full */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

Here, the integer variable counter is initialized to zero; it is incremented every time a new item is added to the buffer and decremented every time an item is removed. The index in always points to the next empty slot after the last filled slot, and the index out points to the first filled slot at the head of the circular queue.

The code for the consumer process is as shown below:

    while (true) {
        while (counter == 0)
            ;  /* do nothing: buffer is empty */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
    }

The producer and consumer code above can produce a RACE CONDITION: a situation where the outcome of the execution of several co-operating processes depends on the particular order in which their accesses to shared data are interleaved.

For example, suppose counter = 5. If the producer produces one item and the consumer consumes one item, the correct value of counter is still 5. But counter++ and counter-- are each implemented as several machine instructions (load counter into a register, modify the register, store it back). If the two instruction sequences interleave, counter may end up as 4 or 6, i.e., an inconsistent result.

Solution: to avoid the race condition, make sure that only one process at a time updates the shared variable (here, counter).
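As a concrete illustration of this fix, a mutual-exclusion lock serializes the updates to the shared counter (a Python sketch; the notes use C-like pseudocode, and the thread counts here are arbitrary):

```python
import threading

counter = 0
lock = threading.Lock()

def produce(n):
    global counter
    for _ in range(n):
        with lock:           # only one thread updates counter at a time
            counter += 1

def consume(n):
    global counter
    for _ in range(n):
        with lock:
            counter -= 1

t1 = threading.Thread(target=produce, args=(100_000,))
t2 = threading.Thread(target=consume, args=(100_000,))
t1.start(); t2.start(); t1.join(); t2.join()
# with the lock, counter always returns to 0; without it, the
# interleaved += and -= can lose updates
```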

Q) Critical section problem
Ans: Each process has a segment of code called its critical section, in which it accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical sections at the same time. For this to happen:
1. Each process must request permission to enter its critical section - the entry section.
2. The code at the end of the critical section - the exit section.
3. The rest of the code after the critical section - the remainder section.

A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is executing in its critical section, no other process can be executing in its critical section.
2. Progress: only processes that are not in their remainder sections take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: there is a bound on how long a process waits to enter its critical section after requesting entry; no process waits indefinitely.

Q) Peterson's solution for the critical-section problem (software-based solution)
Ans: The two processes share a flag array and a turn variable: flag[i] = TRUE means process i wants to enter its critical section, and turn says which process must yield. A sample interleaving:

Time  Process 1                              Process 2
T0    do { flag[1] = TRUE; turn = 2;         do { flag[2] = TRUE;
T1                                           turn = 1;
T2    while (flag[2] && turn == 2);          while (flag[1] && turn == 1);
      since turn is now 1, the loop          loops continuously until flag[1]
      does not run                           becomes FALSE
T3    enters critical section
T4    flag[1] = FALSE;                       enters critical section
T5    enters remainder section               flag[2] = FALSE;
      } while (TRUE);                        enters remainder section
                                             } while (TRUE);
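Peterson's algorithm can be sketched with two Python threads (illustrative only: this relies on CPython's global interpreter lock for memory visibility; on real hardware, compiler reordering and weak memory models mean the flag/turn accesses would need memory barriers):

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # which process must yield
counter = 0             # shared variable protected by the algorithm

def worker(i, n):
    global turn, counter
    other = 1 - i
    for _ in range(n):
        # entry section
        flag[i] = True
        turn = other                     # politely give priority to the other
        while flag[other] and turn == other:
            pass                         # busy-wait
        # critical section
        counter += 1
        # exit section
        flag[i] = False

t0 = threading.Thread(target=worker, args=(0, 10_000))
t1 = threading.Thread(target=worker, args=(1, 10_000))
t0.start(); t1.start(); t0.join(); t1.join()
# mutual exclusion held: counter == 20_000
```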

10.1 File Concept
Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

                                                              File Attributes

1. Name - the name of the file, kept in human-readable form.
2. Identifier - a unique tag used by the OS to identify the file (e.g., an inode number).
3. Type - text file, executable file, binary file, etc.
4. Location - the location of the file on the device.
5. Size - the current size of the file.
6. Protection - controls who can read, write, or execute the file.
7. Time, date, and user identification - when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure, which itself is kept on disk. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes.

                                                              File Operations

The file abstract data type supports many common operations:
1. Creating a file: find free space on disk and add an entry to the directory.
2. Writing a file: write data at the current file-position pointer and update the pointer.
3. Reading a file: read data at the current file-position pointer and update the pointer.
4. Repositioning within a file (file seek): set the pointer to a given value.
5. Deleting a file: free the space allocated to the file and remove its directory entry.
6. Truncating a file: delete the data and update the file size.

                                                              Open File Table

Most operating systems require that files be opened before access and closed after access. Information about all files that are currently open in the system is stored in a system-wide open-file table maintained by the OS; in addition, each process keeps its own open-file table for the files it has opened. An open-file table entry holds the following information:

1. File pointer: the current position in the file for the next read or write.
2. File-open count: how many times the file is currently open (possibly by different processes) and not yet closed. When this counter reaches zero, the entry can be removed from the table.
3. The disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With advisory locking, the OS only keeps track of the locks; software developers must make sure locks are acquired and released appropriately.
4. With mandatory locking, the OS itself guarantees that the locks are respected.
5. UNIX uses advisory locks and Windows uses mandatory locks.

                                                              File Types

A common technique for implementing file types is to include the type as part of the file name. The name is split into two parts:

1. file name
2. extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on it. For example, when a user opens Microsoft Word and chooses File > Open, the user can give just the file name without the extension; the application looks for a file with the given name and the extension it expects. In this sense extensions are hints used by applications rather than something enforced by the operating system. UNIX uses a crude magic number stored at the beginning of some files to indicate their type. In Mac OS X, each file records its creator attribute: a file produced by a word processor has the word processor's name as its creator, so when the user double-clicks the file, that word processor opens it. Common file types include source, object, executable, text, and archive files.

File Structure: Some files have an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and where the first instruction is. If the OS supports many file formats, its size and complexity increase. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load and in which it must find the first executable statement).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently; the data fork contains the program code or data.

Internal File Structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units such as a single byte, a data-record size, or a structure size. The number of logical units that fit into one physical block determines the packing. For example, if a student file is organized internally as records of 64 bytes each, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e., 8 records can be stored per block. Internal fragmentation may occur because the last block of a file is usually only partially full.
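The packing arithmetic maps a logical record number to a physical block and an offset within it (a sketch using the notes' 512-byte blocks and 64-byte records; the function name is illustrative):

```python
BLOCK_SIZE = 512                       # physical block size in bytes
RECORD_SIZE = 64                       # logical record size (student-file example)
PACKING = BLOCK_SIZE // RECORD_SIZE    # records per block = 8

def locate(record_no):
    """Map a logical record number to (physical block, byte offset in block)."""
    block = record_no // PACKING
    offset = (record_no % PACKING) * RECORD_SIZE
    return block, offset

# record 10 lives in block 1 (which holds records 8..15), at byte 2 * 64 = 128
```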

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: records are read and written in order, one record after another; we cannot randomly access any record.

A sequential-access file emulates magnetic-tape operation and generally supports the following operations:
- read next: read a record and advance to the next position.
- write next: write a record and advance to the next position.
- rewind.
- skip n records: may or may not be supported; n may be limited to positive numbers, or to +/- 1.

2. Direct access (or relative access): the file is made up of fixed-length records, so we can randomly access any record; there is no restriction on the order of reading or writing. Direct-access files are useful when large amounts of data must be accessed randomly, e.g., databases.

Operations supported include (n is the relative block number, i.e., relative to the beginning of the file):
1. read n: read record number n.
2. write n: write record number n.
3. jump to record n: n could be 0 or the end of the file.
4. query current record: used to return to this record later.

Sequential access can easily be emulated using direct access; the inverse is awkward and inefficient.

3. Other access methods can be built on top of the direct-access method. We construct an index for the file containing pointers to its blocks: to find a record, we first search the index and then use the pointer to access the file directly.

For example, suppose a student file has 120,000 records sorted by student id, and we create an index consisting of the first student id in each block. If 64 records can be stored in each block, the number of blocks needed = 120,000/64 ≈ 2,000 blocks, so the index has about 2,000 entries. The index is kept in main memory so that searching is fast. If the index itself becomes too large, we create an index for the index file (e.g., a B-tree).
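The index lookup can be sketched as a binary search over the in-memory index (the block contents and student ids here are hypothetical, chosen only to illustrate the scheme):

```python
import bisect

# Hypothetical index: first student id stored in each data block of a sorted file.
index = [0, 64, 128, 192]   # block b holds ids index[b] .. index[b+1] - 1
blocks = {b: list(range(first, first + 64)) for b, first in enumerate(index)}

def find_block(student_id):
    """Binary-search the in-memory index to pick the one data block to read."""
    return bisect.bisect_right(index, student_id) - 1

b = find_block(130)          # id 130 falls in block 2 (ids 128..191)
found = 130 in blocks[b]     # a single block read, not a scan of the file
```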

                                                              Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

Three major methods of allocating disk space are:

1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk. If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1. The directory entry for each file records the address of the starting block (b) and the length (n).

                                                              Advantages

1. Direct access to block i of the file is easy: it is simply disk block b + i. Since the file is stored sequentially, sequential access is also easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal, so seek time is minimal.
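The direct-access property reduces to a simple mapping from a file-relative block number i to the physical block b + i (a sketch; the directory-entry layout and file name are illustrative):

```python
# Directory stores (start, length) per file, as described in the notes.
directory = {"report.txt": {"start": 19, "length": 6}}   # occupies blocks 19..24

def physical_block(name, i):
    """Physical block holding logical block i of the named file."""
    entry = directory[name]
    if not 0 <= i < entry["length"]:
        raise ValueError("block index outside file")
    return entry["start"] + i    # direct access: b + i, no pointer chasing

# logical block 2 of report.txt is disk block 19 + 2 = 21
```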

                                                              Disadvantages

1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) in the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though enough free blocks exist in total for a new file, they are not contiguous and hence cannot be allocated to it. Solution: copy all files onto another disk or tape and then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time-consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire space may not be used at the beginning, which wastes space.

If the size of the file is not known in advance, then: (i) if we allocate too little space, the file cannot be extended; the workaround - find a larger hole, copy the contents of the file to the new space, and release the old space - works only as long as such a hole exists, and is time-consuming; (ii) if we overestimate the amount of space needed, much of the space goes unused, leading to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous-allocation scheme: a contiguous chunk of blocks is allocated at first, and when the file needs more room, another chunk of contiguous blocks, known as an extent, is added. The file's location is then recorded as a location and a block count, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable size, external fragmentation occurs.

2. Linked Allocation
Each file is a linked list of disk blocks; the disk blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of five blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0.

To add more data to an existing file, a free block is found, the data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. It is easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is greater.
2. The number of disk seeks required is greater.
3. The pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct access: to access the i-th block, i disk reads are required.
5. Not reliable: if a pointer is lost or damaged, the rest of the file is unreachable.

A solution to disadvantage 3 is to group multiple blocks into clusters (e.g., 4 blocks = 1 cluster) and allocate clusters to files instead of blocks. This makes the logical-to-physical block mapping simple, improves disk throughput by reducing disk accesses, reduces the space wasted on pointers (fewer pointers are needed), and simplifies free-list management. Disadvantage: internal fragmentation - more space is wasted when a cluster is only partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation on linked allocation is the file-allocation table (FAT). A section of the disk at the beginning of each volume contains the table, which has one entry for each disk block and is indexed by block number. The entry for a block contains the block number of the next block of the file; the last block of a file holds a special end-of-file value, and an unused block is indicated by a table value of 0.

Advantages:

1. It is easy to find space for a new file or to add more data to an existing file.
2. Random-access time is improved, because the chain of blocks can be followed by reading the FAT rather than the data blocks themselves.

Disadvantage: the number of disk seeks can increase; the disk head must move to read the FAT and find the location of a block, then move to the location of the block itself (unless the FAT is cached in memory).
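Following a file's chain through the FAT is a table walk (a sketch; block numbers are taken from the linked-allocation example above, and the EOF marker is illustrative):

```python
EOF = -1   # stand-in for the special end-of-file table value; 0 would mean free

# FAT entries for the notes' five-block example file: 9 -> 16 -> 1 -> 10 -> 25
fat = {9: 16, 16: 1, 1: 10, 10: 25, 25: EOF}

def file_blocks(start):
    """Return the physical blocks of the file whose first block is `start`."""
    blocks = []
    b = start
    while b != EOF:
        blocks.append(b)
        b = fat[b]       # the next-block pointer lives in the table, not the data
    return blocks

# file_blocks(9) -> [9, 16, 1, 10, 25]
```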

3. Indexed allocation

Each file has its own index block, which is an array of disk-block addresses: the i-th entry in the index block points to the i-th block of the file. The directory contains the address of the index block. When the file is created, all pointers in the index block are set to nil. To add more data to an existing file, a free block is found, the data is written to it, and the block's address is put into the index block.

                                                              Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

Disadvantages:

1. Seek time is greater.
2. The number of disk seeks required is greater.
3. Not reliable: losing the index block loses the entire file.
4. The index block must be kept in memory. If it is not, we must read the index block and then the desired data block, i.e., two disk accesses per read; with multilevel indexes, accessing a block near the end of the file may require reading several index blocks before the data block.
5. The pointer overhead of the index block is greater than that of linked allocation: for a file of only one or two blocks, an entire index block is still allocated to hold those one or two pointers.

Several mechanisms are used to handle the size of the index block:

1. Linked scheme: an index block is one disk block. It contains a small header giving the name of the file and a set of (for example) the first 100 disk-block addresses; for large files, the last address points to another index block.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which in turn points to the file block. This approach can be continued to a third or fourth level.

3. Combined scheme: for example, the file's inode holds 15 pointers. The first 12 point directly to blocks containing file data. The next three point to indirect blocks: the first is a single indirect block, an index block containing the addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block.
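The capacity of the combined scheme follows from the pointer counts. With assumed parameters - 4 KB blocks and 4-byte block addresses, which are not stated in the notes - the maximum file size works out as:

```python
# Assumed parameters (illustrative, not from the notes).
BLOCK = 4096                 # block size in bytes
ADDR = 4                     # bytes per block address
PTRS = BLOCK // ADDR         # addresses per index block = 1024

direct = 12                  # 12 direct pointers in the inode
single = PTRS                # one single-indirect block
double = PTRS ** 2           # one double-indirect block
triple = PTRS ** 3           # one triple-indirect block

max_blocks = direct + single + double + triple
max_bytes = max_blocks * BLOCK   # roughly 4 TB with these parameters
```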

                                                              Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

The free-space list can be implemented as:

1. a bit vector (or bit map)
2. a linked list
3. grouping
4. counting
5. space maps

1. Bit vector (or bit map): each block is represented by one bit; if the block is free the bit is 1, and if the block is allocated the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free: the free-space bit map would be 001111001000...

Advantages: simple, and it is efficient to find contiguous free blocks.

Disadvantages:

1. The bit map requires extra space.
2. The bit map is inefficient unless it is kept in main memory. The bit map of a small disk can be kept in main memory, but that may not be possible for a large disk.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1 TB disk could cause thousands of blocks of the bit map to be updated, because the freed data blocks may be scattered all over the disk.

2. Linked list: The free-space list is implemented by linking all the free disk blocks together. A pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: contiguous space cannot be found easily; to traverse the free-space list we must read each block, which requires substantial I/O time.

3. Grouping: The first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last one contains the addresses of another n free blocks, and so on.
Advantage: a large number of free blocks can be found quickly.

4. Counting: The free-space list can be maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it. Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, but the overall list is shorter as long as counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion and search.
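A minimal sketch of the counting representation, assuming a fixed-size table of (address, count) runs. The names extent and alloc_contiguous are illustrative, not from any real file system:

```c
#include <assert.h>

/* Each free-space entry records a run of contiguous free blocks:
   the disk address of the first block and how many blocks follow. */
struct extent { int start; int count; };

#define MAXRUNS 8
static struct extent runs[MAXRUNS];
static int nruns;

/* Find a run holding at least n contiguous blocks, carve n blocks
   off its front, and return the starting address (-1 if none fits). */
int alloc_contiguous(int n) {
    for (int i = 0; i < nruns; i++) {
        if (runs[i].count >= n) {
            int addr = runs[i].start;
            runs[i].start += n;
            runs[i].count -= n;
            return addr;
        }
    }
    return -1;
}
```

A disk with blocks 2-5 and 8 free needs only two entries, (2,4) and (8,1), instead of five separate addresses.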

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory, updates it, and finally updates the free-space list on disk.

Q) Synchronization Hardware

The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction, without interruption. Two such operations are:
1. the TestAndSet() hardware instruction
2. the Swap() instruction

1. The TestAndSet() instruction atomically returns the old value of its argument and sets the argument to true.

Solution using TestAndSet: the shared Boolean variable lock is initialized to false. A process repeatedly executes TestAndSet(&lock) until it returns false, enters its critical section, and sets lock back to false on exit.
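A C model of the instruction and the resulting lock skeleton is shown below. Real atomicity comes from the hardware (or from compiler builtins such as __atomic_test_and_set); this plain C function only models the effect:

```c
#include <assert.h>
#include <stdbool.h>

static bool lock = false;      /* shared variable, initialized to false */

/* Model of TestAndSet(): return the old value of *target and set it
   to true. On real hardware this is one uninterruptible instruction. */
bool test_and_set(bool *target) {
    bool rv = *target;
    *target = true;
    return rv;
}

/* Mutual-exclusion skeleton for each process:
   do {
       while (test_and_set(&lock))
           ;               // spin: lock was already true
       // ... critical section ...
       lock = false;       // release the lock
       // ... remainder section ...
   } while (1);
*/
```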

2. The Swap() instruction atomically exchanges the contents of its two arguments.

Solution using Swap: the shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key. A process sets key to true and repeatedly swaps lock and key until key becomes false, enters its critical section, and sets lock back to false on exit.
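The Swap()-based lock can be modelled the same way; again, the C function only models the effect of the atomic instruction:

```c
#include <assert.h>
#include <stdbool.h>

static bool lock = false;      /* shared variable */

/* Model of Swap(): exchange the contents of its two arguments.
   Real atomicity comes from hardware; this only models it. */
void swap(bool *a, bool *b) {
    bool tmp = *a;
    *a = *b;
    *b = tmp;
}

/* Entry/exit skeleton; each process has a local Boolean 'key':
   do {
       bool key = true;
       while (key)         // keeps spinning while it swaps out 'true'
           swap(&lock, &key);
       // ... critical section ...
       lock = false;
       // ... remainder section ...
   } while (1);
*/
```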

Both the TestAndSet and Swap solutions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee of the order in which they will succeed, and any one process could be unlucky enough to wait forever. Since there is no guarantee about the relative speeds of the processes, a very fast process could release the lock, run through its remainder section, and re-acquire the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.

The algorithm below is a solution using TestAndSet that satisfies bounded waiting. It uses two shared data structures: a Boolean lock and a Boolean array waiting[N], where N is the number of processes contending for the critical section.

                                                              Bounded-waiting mutual exclusion with TestAndSet( )

On exit, a process first scans, in order starting with the next process in the list, for a process that has been waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process while continuing to block all the others; only if no process is waiting does it release the lock.
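Since the figure is not reproduced here, the following is a reconstruction of the bounded-waiting algorithm (after the version in Silberschatz et al.) as a single-threaded C model. test_and_set only models the atomic hardware instruction, and enter/leave are illustrative names for the entry and exit sections:

```c
#include <assert.h>
#include <stdbool.h>

#define N 5                    /* number of contending processes */

static bool lock = false;      /* shared */
static bool waiting[N];        /* shared; all initially false */

/* Model of the atomic TestAndSet() hardware instruction. */
bool test_and_set(bool *target) {
    bool rv = *target;
    *target = true;
    return rv;
}

/* Entry section for process i. */
void enter(int i) {
    waiting[i] = true;
    bool key = true;
    while (waiting[i] && key)
        key = test_and_set(&lock);
    waiting[i] = false;
}

/* Exit section for process i: scan (i+1, i+2, ...) mod N for a
   waiting process; if one exists, hand the critical section to it
   directly by clearing its waiting flag; otherwise release the lock. */
void leave(int i) {
    int j = (i + 1) % N;
    while (j != i && !waiting[j])
        j = (j + 1) % N;
    if (j == i)
        lock = false;          /* nobody is waiting */
    else
        waiting[j] = false;    /* hand off without unlocking */
}
```

Because the exit scan proceeds cyclically, every waiting process is admitted within N-1 turns, which is exactly the bounded-waiting guarantee.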

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: wait() and signal().

wait(S):
    while (S <= 0)
        ;        // do nothing
    S--;

signal(S):
    S++;

Each process that wishes to use a resource performs the wait() operation; when a process releases a resource, it performs the signal() operation.

Semaphore usage:
1. A binary semaphore (or mutex lock) can take the values 0 or 1.

2. A counting semaphore can take any integer value. It is used to count the number of remaining instances of a resource, so the counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources; when the counter is 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the semaphore with a signal() call. For example, suppose there are 3 resources and 4 processes, so the counting semaphore starts at 3:
if process P1 requires a resource, it performs wait(); the semaphore becomes 2;
if process P2 requires a resource, it performs wait(); the semaphore becomes 1;
if process P3 requires a resource, it performs wait(); the semaphore becomes 0;
if process P4 requires a resource, it performs wait(), and P4 waits until a resource is available.

                                                              3 Semaphores can also be used to synchronize certain operations
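The counting-semaphore example above can be traced with a tiny sequential model. sem_wait_try is an illustrative name: it returns 0 where a real wait() would block the caller instead.

```c
#include <assert.h>

/* Sequential model of a counting semaphore guarding 3 identical
   resources. A real semaphore would suspend a caller that finds
   the counter at 0; this model just reports that it would block. */
static int sem = 3;            /* number of available resources */

int sem_wait_try(void) {
    if (sem > 0) { sem--; return 1; }   /* resource acquired */
    return 0;                           /* caller would block here */
}

void sem_signal(void) { sem++; }        /* resource released */
```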

Semaphore implementation: With the definition above, while one process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process spins while waiting.

Semaphores can also order operations. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First, we create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code:

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch is initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal(). Each semaphore has an integer value and maintains a queue of waiting processes.

Semaphore implementation with no busy waiting: To overcome busy waiting, the implementation blocks the waiting process (placing it on the semaphore's waiting queue); the waiting process is later restarted by a wakeup() operation, i.e., it is moved from the waiting queue to the ready state.

Problems due to semaphores:
1. Deadlock: Deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes.

2. Starvation: One or more processes is blocked forever and never gets a chance to enter its critical section. This can happen, for example, if we do not specify the algorithm for adding processes to the waiting queue, or for selecting one to remove in the signal() call: with a LIFO queue, the first process to start waiting may never get a chance.

3. Priority inversion: Let processes L, M, H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it preempts L, and M runs. Here M effectively runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority so that M cannot preempt it.

Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem. Consider:

n buffers, each holding one item; a producer process that creates items (one at a time); and a consumer process that consumes them (one at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. Since both processes modify the contents of the buffer, synchronization is required. We use 3 semaphores:

1. empty = n (counts the number of empty buffers)
2. full = 0 (counts the number of filled buffers)
3. mutex = 1 (provides mutual exclusion for access to the buffer)

Producer process:
do {
    ... produce an item ...
    wait(empty);
    wait(mutex);
    ... add the item to the buffer ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:
do {
    wait(full);
    wait(mutex);
    ... remove an item from the buffer ...
    signal(mutex);
    signal(empty);
    ... consume the item ...
} while (1);
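The semaphore bookkeeping above can be checked with a sequential model. produce() and consume() are illustrative names; where a real process would block in wait(), this model simply returns 0:

```c
#include <assert.h>

#define N 5                    /* number of buffer slots */

/* Sequential model of the empty/full semaphores. The invariant
   empty + full == N holds at every step; mutex is omitted because
   this model has only one thread of execution. */
static int empty = N, full = 0;

int produce(void) {
    if (empty == 0) return 0;  /* all buffers full: producer blocks */
    empty--;                   /* wait(empty) */
    full++;                    /* signal(full) */
    return 1;
}

int consume(void) {
    if (full == 0) return 0;   /* all buffers empty: consumer blocks */
    full--;                    /* wait(full) */
    empty++;                   /* signal(empty) */
    return 1;
}
```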

2. The Readers-Writers Problem. A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (it cannot share with either a reader or another writer). There are two versions of the problem:

readers priority: no reader waits merely because a writer is waiting;
writers priority: if a writer is waiting to access the data, no new reader may start reading.

A solution to either version may cause starvation: in the readers-priority version writers may starve, and in the writers-priority version readers may starve.

A solution to the readers-priority version uses an integer counter and 2 semaphores:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore shared by readers and writers; it provides mutually exclusive access to the shared data)

Reader process:
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);        // first reader locks out writers
signal(mutex);
... reading is performed ...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);      // last reader lets writers in
signal(mutex);

Writer process:
do {
    wait(wrt);
    ... writing is performed ...
    signal(wrt);
} while (TRUE);
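The reader-entry and reader-exit bookkeeping can be sketched sequentially. start_read and end_read are illustrative names; mutex is omitted because this model has only one thread of execution, and wrt is modelled as a plain counter (1 = free, 0 = held by the reader group):

```c
#include <assert.h>

static int readcount = 0;      /* number of active readers */
static int wrt = 1;            /* 1 when writers may enter, 0 otherwise */

void start_read(void) {
    readcount++;
    if (readcount == 1)
        wrt--;                 /* first reader: wait(wrt) */
}

void end_read(void) {
    readcount--;
    if (readcount == 0)
        wrt++;                 /* last reader: signal(wrt) */
}
```

Note that only the first reader in and the last reader out touch wrt, which is exactly why any number of readers can overlap while writers stay locked out.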

3. The Dining-Philosophers Problem. n philosophers sit around a table, thinking and eating. From time to time a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already in the hand of a neighbour. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner.

A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5];   // initially all values are 1

Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    ... eat ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    ... think ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it can create a deadlock: if all five philosophers pick up their left chopstick at the same time, each waits forever for the right one.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), the CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, and so on. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint.

                                                              Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples:

digital audio and multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e., decision-making is spread across the different computers in the system, so that failures of individual computers or of the network do not cripple operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; all of that is handled automatically by the operating system.

                                                              Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

DESKTOP SYSTEMS: A desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8 and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. These systems have very little memory.


                                                              CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system The components of a cluster are usually connected to each other through fast local area networks each node (computer used as a server) running its own instance of an operating system

                                                              Q) Modern operating system

New developments in hardware, applications and security threats led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client-server computing.

                                                              Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: As systems nowadays often contain multiple processors, an SMP operating system provides greater efficiency. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                                              SMP schedules processes or threads across all of the processors

                                                              Advantages

1. Increased performance: if some portion of a program can run in parallel, performance increases.
2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: a new processor can be added to increase performance.
4. Scaling: cost can scale with the number of processors used.

4. Distributed operating systems: allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux Operating System: Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency and standardization. Components of a Linux System:


                                                              Linux Operating System has primarily three components as shown in the below diagram

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system, interacts directly with the hardware, and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs used by application programs to access the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks such as updating log files and accepting login requests from terminals.

Kernel Modules: Sections of kernel code that can be compiled, loaded and unloaded independently of the rest of the kernel. A kernel module may typically implement a device driver, a file system, or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode: Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require a context switch, so it is very efficient and fast. The kernel runs processes and provides them with system services and protected access to hardware. Code that is not required to run in kernel mode lives in the system libraries. User programs and other system programs work in user mode, with no direct access to system hardware or kernel code; they use the system libraries to ask the kernel to carry out low-level tasks.

Basic Features: Following are some of the important features of the Linux operating system.

Portable - Portability means software works in the same way on different types of hardware. The Linux kernel and application programs support installation on almost any hardware platform.

Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system, meaning multiple users can access system resources like memory, RAM and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

                                                              Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system It can be used to do various types of operations call application programs etc


Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

Architecture: The Linux system architecture consists of the following layers:

                                                              Hardware layer - Hardware consists of all peripheral devices (RAM HDD CPU etc)

                                                              Kernel - Core component of Operating System interacts directly with hardware provides low level services to upper layer components

Shell - An interface to the kernel; it takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP operating system
1. Extensibility - layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. This modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating-system resources.
4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
5. Performance - XP subsystems can communicate with one another via high-performance message passing.

Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

6. International support - supports different locales via the national language support (NLS) API.

XP Architecture:
1. Layered system of modules.
2. Protected mode - hardware abstraction layer (HAL), kernel, executive.
3. User mode - a collection of subsystems:
   a. Environmental subsystems emulate different operating systems.
   b. Protection subsystems provide security functions.

                                                              XP architecture


Q) Windows network operating system. Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading, MA: Addison-Wesley.





SJF (non-preemptive) Gantt chart:
| P1 (0-7) | P3 (7-8) | P2 (8-12) | P4 (12-16) |

Average waiting time = [0 + (8-2) + (7-4) + (12-5)] / 4 = 4
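The waiting times above can be checked with a short simulation. This is a sketch for this worked example only; the function name sjf and the fixed four-process table are assumptions, not part of the notes:

```c
#include <assert.h>

#define NP 4     /* P1..P4 from the example */

/* Simulate non-preemptive SJF: at each decision point, among the
   processes that have already arrived, run the one with the shortest
   burst to completion. Fills wait[i] with each process's waiting
   time (start time minus arrival time). */
void sjf(const int arrival[NP], const int burst[NP], int wait[NP]) {
    int done[NP] = {0};
    int time = 0;
    for (int n = 0; n < NP; n++) {
        int pick = -1;
        /* choose the arrived, unfinished process with shortest burst */
        for (int i = 0; i < NP; i++)
            if (!done[i] && arrival[i] <= time &&
                (pick == -1 || burst[i] < burst[pick]))
                pick = i;
        /* if nothing has arrived yet, jump to the next arrival */
        if (pick == -1) {
            for (int i = 0; i < NP; i++)
                if (!done[i] && (pick == -1 || arrival[i] < arrival[pick]))
                    pick = i;
            time = arrival[pick];
        }
        wait[pick] = time - arrival[pick];
        time += burst[pick];
        done[pick] = 1;
    }
}
```

With arrivals {0, 2, 4, 5} and bursts {7, 4, 1, 4}, the simulation reproduces the schedule P1, P3, P2, P4 and the average waiting time of 4.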

                                                                Example of Preemptive SJF

Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4

SJF (preemptive) Gantt chart:
| P1 (0-2) | P2 (2-4) | P3 (4-5) | P2 (5-7) | P4 (7-11) | P1 (11-16) |

Average waiting time = (9 + 1 + 0 + 2) / 4 = 3

3) Priority Scheduling: A priority number (an integer) is associated with each process, and the CPU is allocated to the process with the highest priority (smallest integer = highest priority). If two processes have equal priority, FCFS is used. SJF is itself a priority algorithm in which the larger the CPU burst, the lower the priority. There are two schemes: 1. preemptive and 2. non-preemptive.

Problem: starvation - low-priority processes may never execute.
Solution: aging - as time progresses, increase the priority of the process.

4) Round Robin (RR): Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once, and no process waits more than (n-1)q time units.
Performance: 1. If q is very large, RR behaves like FIFO. 2. If q is very small, q must still be large with respect to the context-switch time, otherwise the overhead is too high.

Example of RR with time quantum = 4:
Process  Burst Time
P1       24
P2       3
P3       3

The Gantt chart is:
| P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30) |

Average waiting time = [(30-24) + 4 + 7] / 3 = 17/3 = 5.66

5) Multilevel Queue Scheduling: The ready queue is partitioned into separate queues. For example, the ready queue can be divided into 2 queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, the Round Robin scheduling algorithm can be used in the foreground queue,

and the FCFS scheduling algorithm can be used in the background queue. Scheduling must also be done between the queues. This can be done in 2 ways:
1. Fixed-priority scheduling

The foreground queue has the highest priority. All processes in the foreground queue must be completed and the queue must be empty before the processes in the background queue are given the CPU. Disadvantage: starvation.

2. Time slice - each queue gets a certain amount of CPU time which it can schedule amongst its processes, e.g., 80% to the foreground queue (RR) and 20% to the background queue (FCFS).

6) Multilevel Feedback Queue Scheduling
A process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes will be in the highest-priority queue. If a process uses too much CPU time, it is moved to a lower-priority queue. If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. For example, let there be 3 queues Q0, Q1, Q2:

1. Q0 - time quantum 8 milliseconds
2. Q1 - time quantum 16 milliseconds
3. Q2 - FCFS

                                                                Scheduling

1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 milliseconds. If it does not finish in 8 milliseconds, it is moved to queue Q1.
2. At Q1 the process receives an additional 16 milliseconds. If it still does not complete, it is moved to queue Q2.

Q) Synchronization
Ans: Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

A classic example of process synchronization is the Producer-Consumer problem (or Bounded-Buffer problem). Producer-Consumer problem: make sure that the producer process does not add data when the buffer is full, and that the consumer process does not take data when the buffer is empty.

The code for the producer process can be written as follows:

    while (true) {
        /* produce an item in nextProduced */
        while (counter == BUFFER_SIZE)
            ; /* do nothing */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer, and decremented every time an item is removed from the buffer. in is an index that always points to the next empty slot after the last filled slot; out is an index that always points to the first filled slot, at the head of the circular queue.

The code for the consumer process is as shown below:

    while (true) {
        while (counter == 0)
            ; /* do nothing */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
    }

The above producer and consumer code produces a RACE CONDITION. A race condition occurs when the outcome of the execution of several co-operating processes depends on the order in which their accesses to shared data take place.

For example, suppose counter = 5. If the producer produces one item and the consumer consumes one item, the correct value of counter is still 5. But counter++ and counter-- each compile to several machine instructions (load, add or subtract, store), and if those instruction sequences interleave, the final value of counter can end up as 4 or 6, i.e., an inconsistent result.

Solution to avoid the race condition: make sure only one process at a time updates the shared variable (here, counter).

Q) Critical section problem
Ans: Each process has a segment of code called a critical section. Critical section code is the code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute critical section code at the same time. For this to happen:
1. Each process must request permission to enter its critical section - the entry section.
2. The end of the critical section - the exit section.
3. The rest of the code after the critical section - the remainder section.

A solution to the critical section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes not in their remainder sections take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: there is a bound on how many times other processes may enter their critical sections after a process has requested entry; no process waits indefinitely.

Q) Peterson's solution for the critical section problem (software-based solution)
Ans:

Time  Process 1                              Process 2
T0    do {                                   do {
      flag[1] = TRUE; turn = 2;              flag[2] = TRUE;
T1                                           turn = 1;
T2    while (flag[2] && turn == 2);          while (flag[1] && turn == 1);
      Since turn is now 1, the loop          Loops continuously until flag[1]
      condition is false                     becomes FALSE
T3    Enters critical section
T4    flag[1] = FALSE;                       Enters critical section
T5    Enters remainder section               flag[2] = FALSE;
      } while (TRUE);                        Enters remainder section
                                             } while (TRUE);

10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

                                                                File Attributes

1. Name - name of the file, for user understanding.
2. Identifier - used by the OS to identify a file (e.g., inode number).
3. Type - text files, executable files, binary files, etc.
4. Location - location of the file on the hard drive.
5. Size - current size of the file.
6. Protection - controls who can read, write, or execute the file.
7. Time, date, and user identification - give information on when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. The directory structure is kept on disk.

                                                                File Operations

The file Abstract Data Type supports many common operations:
1. Creating a file: find free space on disk and add an entry to the directory.
2. Writing a file: write data at the current file-position pointer location and update the pointer.
3. Reading a file: read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek): change the pointer location to a given value.
5. Deleting a file: free the space allocated to the file and remove its entry from the directory.
6. Truncating a file: delete the data and update the file size.

                                                                Open File Table

Most OSes require that files be opened before access and closed after access. Information about all files that are currently open in the system is stored in an open-file table. This system-wide open-file table is maintained by the OS, and information about the files opened by a particular process is stored in that process's per-process open-file table. The open-file table has the following information:

1. File pointer - records the current position in the file for the next read or write.
2. File-open count - how many times the current file has been opened (simultaneously, by different processes) and not yet closed. When this counter reaches zero, the entry can be removed from the table.

3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With mandatory locking, the OS makes sure that locks are acquired and released appropriately.
4. With advisory locking, software developers must make sure that locks are acquired and released appropriately.
5. UNIX uses advisory locks and Windows uses mandatory locks.

                                                                File Types

Implementing file types: a common technique is to include the type as part of the filename. The name is split into 2 parts:
1. File name
2. Extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on that file. For example, when a user opens Microsoft Word and clicks File > Open, the user specifies only the file name without an extension; the application looks for a file with the given name and the extension it expects. Such extensions are hints used by applications and need not be enforced by the operating system. UNIX systems use a crude magic number stored at the beginning of the file to indicate its type. In the Mac OS X operating system, every file records its creator application: a file produced by a word processor has the word processor's name as its creator, so when the user double-clicks the file, that word processor opens it.

File Structure: Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and what the location of the first instruction is.

If the OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, and in which it must find the first executable statement, etc.).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                                                                Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units such as a single byte, a data-record size, or a structure size.

The number of logical units which fit into one physical block determines the packing. If a student file is organized internally as records, each record takes 64 bytes, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e., 8 records can be stored per block.

                                                                Internal fragmentation may occur due to this

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: we cannot randomly access any record; records are read and written sequentially, one record after another.

A sequential-access file emulates magnetic tape operation and generally supports the following operations:
- read next: read a record and advance to the next position.
- write next: write a record and advance to the next position.
- rewind.
- skip n records: may or may not be supported; n may be limited to positive numbers or to +/- 1.

2. Direct access (or relative access): the file is made up of fixed-length records, so we can randomly access any record.

                                                                No restriction on order of reading or writing

Direct-access files are useful when we need to access large amounts of data randomly, e.g., databases.

Operations supported include (n is the relative block number, i.e., relative to the beginning of the file):
1. read n - read record number n.
2. write n - write record number n.
3. jump to record n - n could be 0 or the end of the file.
4. query current record - used to return to this record later.
5. Sequential access can easily be emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct access method. We construct an index for the file which contains pointers to the various blocks. To search the file, we first access the index and then use the pointer to access the file.

If a student file has 120,000 records sorted by student id, we can create an index consisting of the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120,000 records = 120000/64 = 1875, so the index has 1875 entries. The index is kept in main memory so that searching becomes faster.

If the index itself becomes too large, we create an index for the index file (e.g., a B-tree).

                                                                Q) Allocation Methods

Ans: To allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                                                                Three major methods of allocating disk space are

1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file indicates the address of the starting block (i.e., b) and the length (n).

                                                                Advantages

1. For direct access to block i of a file starting at block b, we can immediately compute its address as b + i. Since the file is stored sequentially, sequential access is also easy. Hence access is easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

                                                                Disadvantages

1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time-consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire space may not be used by the file at the beginning, which leads to wasted space.

If the size of the file is not known in advance, then: (i) if we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This is possible only as long as such space exists, and it is time-consuming. (ii) If we overestimate the amount of space needed, much of the space goes unused, which leads to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme. A contiguous chunk of space is allocated at first; if the file needs more blocks, another chunk of contiguous space, known as an extent, is added. The directory entry then records the location and block count of the first extent, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

Linked Allocation: Each file is a linked list of disk blocks; the disk blocks may be scattered anywhere on the disk. The directory contains a pointer to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0.

To add more data to an existing file, a free block is found, the data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. Easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is greater.
2. The number of disk seeks required is greater.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct-access files: to access the i-th block, i disk reads are required.
5. Not reliable: pointers may be lost or damaged.

The solution to disadvantage 3 is to group multiple blocks into clusters (e.g., 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk accesses, reduces the space wasted on pointers (fewer pointers are needed), and simplifies free-list management. Disadvantage: internal fragmentation - more space is wasted when a cluster is only partially full.

The solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file. The last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                                                                Advantages

1. Easy to find space for a new file or to add more data to an existing file.
2. Random access time is improved.

Disadvantage: the number of disk seeks is greater. The disk head must move to read the FAT and find the location of the block, then move to the location of the block itself.

                                                                3 Indexed allocation

Each file has its own index block, which is an array of disk-block addresses. The i-th entry in the index block points to the i-th block of the file.

                                                                The directory contains the address of the index block

                                                                When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found and the data is written to it; the address of the block is then put in the index block.

                                                                Advantages

1. Easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

Disadvantages

1. Seek time is greater.
2. The number of disk seeks required is greater.
3. Not reliable.
4. The index block should be kept in memory. If this memory is not available, we have to read the index block and then the desired data block, i.e., 2 disk accesses are required. With multilevel indexes, accessing a block near the end of the file may require reading several index blocks before the needed data block.

5. The pointer overhead of an index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store these one or two pointers.

Various mechanisms are used to decide the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header giving the name of the file and a set of the first 100 disk-block addresses. For large files, the last address contains the address of another index block.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which in turn points to the file block. This approach can be continued to a third or fourth level.

3. Combined scheme:

For example, say there are 15 pointers in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing the addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block.

                                                                Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                                                The free space list can be implemented as

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000...

Advantages: simple, and easy to find contiguous runs of free blocks.

                                                                Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a large disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.

2. Linked list: the free-space list is implemented by linking all the free disk blocks together, with a pointer to the first free block stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free: we keep a pointer to block 2 as the first free block, block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: contiguous space cannot be found easily; to traverse the free-space list we must read each block, which requires substantial I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free; the last one contains the addresses of another n free blocks, and so on.
Advantage: a large number of free blocks can be found quickly.
4. Counting: the free-space list can be maintained by keeping the address of the first free block and the number (n) of free contiguous blocks that follow it. Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory, updates it, and finally updates the free-space list on disk.

Synchronization Hardware: the hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. The TestAndSet() hardware instruction
2. The Swap() instruction

1. The TestAndSet instruction atomically reads a boolean variable, sets it to true, and returns the old value.

Solution using TestAndSet: the shared variable lock is initialized to false. Each process loops on TestAndSet(&lock) until it returns false, executes its critical section, and then sets lock back to false.

2. The Swap() instruction atomically exchanges the contents of two boolean variables.

Solution using Swap: a shared boolean variable lock is initialized to false, and each process has a local boolean variable key set to true. The process repeatedly swaps lock and key until key becomes false, executes its critical section, and then sets lock back to false.

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee about the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee about the relative rates of the processes, a very fast process could release the lock, run through its remainder section, and re-acquire the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The classic solution using test-and-set that satisfies bounded waiting uses two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

                                                                Bounded-waiting mutual exclusion with TestAndSet( )

On exit, a process looks in order (starting with the next process on the list) for a process that has been waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process into the critical section while continuing to block all the others; if no process is waiting, it releases the lock.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through two standard atomic operations: 1. wait() 2. signal()

Wait primitive:

    wait(S) {
        while (S <= 0)
            ; /* do nothing */
        S--;
    }

Signal primitive:

    signal(S) {
        S++;
    }

Each process that wishes to use a resource performs the wait() operation. When a process releases a resource, it performs the signal() operation.
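In Python's threading module, Semaphore.acquire() plays the role of wait() and release() the role of signal(). A minimal sketch protecting a shared counter:

```python
import threading

sem = threading.Semaphore(1)   # binary semaphore used as a mutex
shared = 0

def worker():
    global shared
    for _ in range(1000):
        sem.acquire()          # wait(S)
        shared += 1            # critical section
        sem.release()          # signal(S)

ts = [threading.Thread(target=worker) for _ in range(4)]
for t in ts: t.start()
for t in ts: t.join()
print(shared)   # 4000: no increments are lost
```

Without the semaphore, the four threads' read-modify-write sequences could interleave and lose updates.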

Semaphore usage:
1. A binary semaphore (or mutex lock) can take only the values 0 or 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining resources: the counter is initialized to the number of resources available. Whenever the counting semaphore > 0, a process can enter the critical section and use one of the resources. When the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal() call.
For example, suppose there are 3 resources and 4 processes; the value of the counting semaphore = 3.
If process P1 requires a resource, it performs wait(); now the counting semaphore = 2.
If process P2 requires a resource, it performs wait(); now the counting semaphore = 1.
If process P3 requires a resource, it performs wait(); now the counting semaphore = 0.
If process P4 requires a resource, it performs wait(); P4 waits until a resource becomes available.
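The four-process example above can be reproduced with Python's threading.Semaphore; acquire(blocking=False) returns False instead of blocking, which makes the counting visible in a single thread:

```python
import threading

counting = threading.Semaphore(3)          # three identical resources

print(counting.acquire(blocking=False))    # P1 takes one -> True  (count 2)
print(counting.acquire(blocking=False))    # P2 takes one -> True  (count 1)
print(counting.acquire(blocking=False))    # P3 takes one -> True  (count 0)
print(counting.acquire(blocking=False))    # P4 -> False: would have to wait
counting.release()                         # P1 frees its resource (count 1)
print(counting.acquire(blocking=False))    # now P4 succeeds -> True
```

In a real program P4 would call a blocking acquire() and simply sleep until some process releases.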

                                                                3 Semaphores can also be used to synchronize certain operations

Semaphore Implementation
While one process is in its critical section, any other process that tries to enter its critical section must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process "spins" while waiting.

Semaphores can also be used to enforce ordering. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First, we create a semaphore named synch that is shared by the two processes and initialize it to zero. Then in process P1 we insert the code:

    S1;
    signal(synch);

and in process P2 we insert the code:

    wait(synch);
    S2;

Because synch is initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal(). Each semaphore has an integer value and maintains a queue of waiting processes.
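The same ordering constraint, sketched with Python threads (the process names P1 and P2 follow the text):

```python
import threading

synch = threading.Semaphore(0)   # initialized to zero, as in the text
order = []

def p1():
    order.append("S1")           # statement S1
    synch.release()              # signal(synch)

def p2():
    synch.acquire()              # wait(synch): blocks until P1 signals
    order.append("S2")           # statement S2

t2 = threading.Thread(target=p2); t2.start()
t1 = threading.Thread(target=p1); t1.start()
t1.join(); t2.join()
print(order)   # ['S1', 'S2']
```

Even though P2 is started first, it cannot record S2 until P1 has executed S1 and signalled.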

Semaphore implementation with no busy waiting: to overcome busy waiting, the waiting process is blocked, and is later restarted by a wakeup() operation (i.e. the process is moved from the semaphore's waiting queue to the ready state).

Problems due to semaphores:
1. Deadlock: deadlock occurs when multiple processes are blocked, each waiting for a resource that can be freed only by one of the other (blocked) processes, as shown below.

2. Starvation: one or more processes remain blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithm for adding processes to the waiting queue, or for selecting one to be removed in the signal() call: with a LIFO queue, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M, H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it preempts L, and M runs. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority so that M cannot preempt it (priority inheritance).

Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem. Consider:

n buffers, each holding one item; a producer process, which creates the items (one at a time); a consumer process, which consumes them (one at a time).

The producer cannot produce an item when all buffers are full, and the consumer cannot consume when all buffers are empty. As both processes modify the contents of the buffer, synchronization is needed. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:

    do {
        ...
        produce an item
        ...
        wait(empty);
        wait(mutex);
        ...
        add item to buffer
        ...
        signal(mutex);
        signal(full);
    } while (1);

Consumer process:

    do {
        wait(full);
        wait(mutex);
        ...
        remove an item from buffer
        ...
        signal(mutex);
        signal(empty);
        ...
        consume the item
        ...
    } while (1);
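A runnable sketch of the same scheme in Python, assuming one producer, one consumer, and an arbitrary buffer size N = 5:

```python
import threading
from collections import deque

N = 5
buffer = deque()
empty = threading.Semaphore(N)   # counts empty slots
full = threading.Semaphore(0)    # counts filled slots
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer

ITEMS = 100
consumed = []

def producer():
    for i in range(ITEMS):
        empty.acquire()          # wait(empty)
        mutex.acquire()          # wait(mutex)
        buffer.append(i)         # add item to buffer
        mutex.release()          # signal(mutex)
        full.release()           # signal(full)

def consumer():
    for _ in range(ITEMS):
        full.acquire()           # wait(full)
        mutex.acquire()          # wait(mutex)
        consumed.append(buffer.popleft())   # remove item from buffer
        mutex.release()          # signal(mutex)
        empty.release()          # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(len(consumed))   # 100: every item produced was consumed, in order
```

Note the wait(empty)/wait(mutex) ordering: acquiring mutex before empty could deadlock with the consumer.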

2. The Readers-Writers Problem. A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time; a writer must have exclusive access (it cannot share with either a reader or another writer). There are two versions of the readers-writers problem:

1. readers priority: no reader waits merely because a writer is waiting;
2. writers priority: if a writer is waiting to access the data, no new readers may start reading.

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses a shared counter and two semaphores:

1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to readers and writers; it provides mutually exclusive access to the shared data)

Reader process:

    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);      /* first reader locks out writers */
    signal(mutex);
    ...
    reading is performed
    ...
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);    /* last reader lets writers in */
    signal(mutex);

Writer process:

    do {
        wait(wrt);
        ...
        writing is performed
        ...
        signal(wrt);
    } while (TRUE);
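The readers-priority pseudocode above, sketched in Python; the reader/writer bodies and the shared value are illustrative:

```python
import threading

readcount = 0
mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # exclusive access for writers

value = [0]   # the shared data item
log = []      # records completed operations

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()
    log.append(("read", value[0]))   # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()            # last reader lets writers in
    mutex.release()

def writer(v):
    wrt.acquire()
    value[0] = v                 # writing is performed
    log.append(("write", v))
    wrt.release()

threads = [threading.Thread(target=writer, args=(1,)),
           threading.Thread(target=reader),
           threading.Thread(target=reader)]
for t in threads: t.start()
for t in threads: t.join()
print(len(log))   # 3: one write and two reads completed
```

Whether the readers see 0 or 1 depends on scheduling, but the write is never interleaved with a read.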

3. The Dining-Philosophers Problem. n philosophers sit around a table, alternately thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner.

A semaphore solution represents each chopstick with a semaphore:

    semaphore chopstick[5];   /* initially all values are 1 */

Philosopher i:

    do {
        wait(chopstick[i]);
        wait(chopstick[(i+1) % 5]);
        ...
        eat
        ...
        signal(chopstick[i]);
        signal(chopstick[(i+1) % 5]);
        ...
        think
        ...
    } while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it can create a deadlock (for example, if every philosopher picks up his left chopstick at the same moment).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: in asymmetric multiprocessing (ASMP), the CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. Unlike an SMP system, they do not have the flexibility to assign processes to the least-loaded CPU.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within a time constraint.

Two kinds of real-time systems have evolved:

1. A hard real-time system can guarantee that response requirements will be met under all conditions.
2. A soft real-time system cannot guarantee that response requirements will be met under all conditions. Examples: digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control: decision-making is spread across the computers in the system, so that failures of individual computers or of the network do not cripple operation.

A distributed operating system appears to users as a uniprocessor system even though it has multiple processors. Users need not know where their programs run or where their files are located; that is all handled automatically by the operating system.

Distributed systems allow applications to run on several processors at the same time, thus requiring more complex processor-scheduling algorithms.

DESKTOP SYSTEMS: a desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): two of the most popular operating systems for handhelds are Symbian OS and Android. Handheld systems have very little memory.


                                                                CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system The components of a cluster are usually connected to each other through fast local area networks each node (computer used as a server) running its own instance of an operating system

                                                                Q) Modern operating system

New developments in hardware, applications, and security threats led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, Internet and Web access, and client/server computing.

                                                                Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, basic CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as systems nowadays often contain multiple processors, an SMP operating system provides greater efficiency by using all of them. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                                                SMP schedules processes or threads across all of the processors

                                                                Advantages

1. Increased performance: if some portion of a program can be run in parallel, performance increases.
2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: a new processor can be added to increase performance.
4. Scaling: cost can scale with the number of processors used.

                                                                4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux Operating System
Linux is open source (its source code is freely available) and free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization.

Components of the Linux System


                                                                Linux Operating System has primarily three components as shown in the below diagram

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system and interacts directly with the hardware. The kernel hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs used by application programs to access the kernel's features.

System Utility - System utility programs perform specialized, individual-level tasks such as updating log files and accepting login requests from terminals.

                                                                Kernel Modules Sections of kernel code that can be compiled loaded and unloaded independent of the rest of the kernel A kernel module may typically implement a device driver a file system or a networking protocol

Three components of Linux module support:
1. Module management: supports loading modules into memory and letting them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode
Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, does not require context switches, and hence is very efficient and fast. The kernel runs each process and provides system services and protected access to hardware.
Code that does not need to run in kernel mode lives in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code. User programs and utilities use the system libraries to invoke kernel functions for low-level tasks.

Basic Features
The following are some important features of the Linux operating system:

Portable - Portability means software works on different types of hardware in the same way. The Linux kernel and application programs support installation on almost any kind of hardware platform.

Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system: multiple users can access system resources like memory and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system: multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that executes operating-system commands. It can be used to perform various kinds of operations, call application programs, etc.

                                                                55

Security - Linux provides user security through authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture: the Linux system architecture consists of the following layers:

Hardware layer - Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - The core component of the operating system; it interacts directly with hardware and provides low-level services to upper-layer components.

Shell - An interface to the kernel; it takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP operating system
1. Extensibility - layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating-system resources.
4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
5. Performance - XP subsystems can communicate with one another via high-performance message passing.

Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

6. International support - supports different locales via the national language support (NLS) API.

XP Architecture:
1. Layered system of modules.
2. Protected mode - hardware abstraction layer (HAL), kernel, executive.
3. User mode - collection of subsystems:
   a. Environmental subsystems emulate different operating systems.
   b. Protection subsystems provide security functions.

                                                                XP architecture


Q) Windows network operating system
Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.

Visit my blog: enthusiaststudent.blogspot.in, mtechmessenger.blogspot.in



P3 3. The Gantt chart is:

    | P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
    0    4    7    10   14   18   22   26   30

Average waiting time = [(30-24) + 4 + 7] / 3 = 17/3 = 5.66

5) Multilevel Queue Scheduling: the ready queue is partitioned into separate queues. For example, the ready queue can be divided into 2 queues:

1. foreground (interactive) queue
2. background (batch) queue

Each queue has its own scheduling algorithm. For example, the Round Robin scheduling algorithm can be used in the foreground queue and the FCFS scheduling algorithm in the background queue. Scheduling must also be done between the queues; this can be done in 2 ways:

1. Fixed-priority scheduling: the foreground queue has the highest priority. All processes in the foreground queue must complete (the queue must be empty) before the processes in the background queue are given the CPU, as shown in the figure below. Disadvantage: starvation.

2. Time slice: each queue gets a certain amount of CPU time, which it can schedule among its own processes, e.g. 80% to the foreground queue (RR) and 20% to the background queue (FCFS).

6) Multilevel Feedback Queue Scheduling: a process can move between queues. The idea is to separate processes according to their CPU bursts: I/O-bound and interactive processes stay in the highest-priority queue. If a process uses too much CPU time, it is moved to a lower-priority queue. If a process waits too long in a lower-priority queue, it is moved to a higher-priority queue to prevent starvation. As shown in the figure below, let there be 3 queues:

1. Q0 - time quantum 8 milliseconds
2. Q1 - time quantum 16 milliseconds
3. Q2 - FCFS

Scheduling:

1. A process entering the ready queue is put in queue Q0. When it gains the CPU, it receives 8 milliseconds; if it does not finish within 8 milliseconds, it is moved to queue Q1.
2. At Q1 the process receives an additional 16 milliseconds; if it still does not complete, it is moved to queue Q2.
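The three-queue scheme can be simulated in a few lines. This sketch is our own simplification (no arrivals mid-run, no I/O blocking); it records which queue each process finishes in, given its total CPU burst:

```python
from collections import deque

def mlfq(bursts, quanta=(8, 16)):
    """Simulate a 3-queue multilevel feedback queue.
    Q0: RR with quantum 8; Q1: RR with quantum 16; Q2: FCFS.
    bursts maps process id -> total CPU burst."""
    queues = [deque(), deque(), deque()]
    remaining = dict(bursts)
    finished_in = {}
    for pid in remaining:
        queues[0].append(pid)          # every process starts in Q0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        pid = queues[level].popleft()
        # Q2 (FCFS) runs to completion; Q0/Q1 run for one quantum
        slice_ = remaining[pid] if level == 2 else min(quanta[level], remaining[pid])
        remaining[pid] -= slice_
        if remaining[pid] == 0:
            finished_in[pid] = level
        else:
            queues[min(level + 1, 2)].append(pid)   # demote
    return finished_in

print(mlfq({"A": 5, "B": 20, "C": 40}))
# {'A': 0, 'B': 1, 'C': 2}: the short burst finishes in Q0,
# the medium one in Q1, and the long one falls through to FCFS
```

The promotion rule mentioned in the text (aging back to a higher queue) is omitted here for brevity.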

Q) Synchronization
Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of process synchronization is the Producer-Consumer (or Bounded-Buffer) problem: the producer process must not add data when the buffer is full, and the consumer process must not take data when the buffer is empty.

The code for the producer process can be written as follows:

    while (true) {
        /* produce an item in nextProduced */
        while (counter == BUFFER_SIZE)
            ; /* do nothing */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer and decremented every time an item is removed. in is an index that always points to the next empty slot after the last filled slot; out is an index that always points to the first filled slot, at the head of the circular queue.

The code for the consumer process is shown below:

    while (true) {
        while (counter == 0)
            ; /* do nothing */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
    }

The above producer and consumer code produces a RACE CONDITION. When the outcome of the execution of several co-operating processes depends on the order in which shared accesses take place, it is called a race condition. For example, let counter = 5. If the producer produces one item and the consumer consumes one item, the correct value of counter is 5. But because counter++ and counter-- are each executed as several machine instructions, an unlucky interleaving can leave counter = 4, an inconsistent result; if step T5 is executed before T4, then counter = 6, also inconsistent.

Solution to avoid the race condition: ensure that only one process at a time updates the shared variable (here, counter).
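The lost-update interleaving can be replayed deterministically by spelling out the three machine steps of counter++ and counter-- (load, compute, store) by hand:

```python
counter = 5

# counter++ and counter-- each compile to: load, compute, store.
# One unlucky interleaving of the two sequences:
p_reg = counter        # producer loads 5
c_reg = counter        # consumer loads 5 (before the producer stores!)
p_reg = p_reg + 1      # producer computes 6
c_reg = c_reg - 1      # consumer computes 4
counter = p_reg        # producer stores 6
counter = c_reg        # consumer stores 4: the producer's update is lost

print(counter)   # 4, not the correct value 5
```

Swapping the last two stores yields 6 instead; either way the result is inconsistent, which is exactly the race condition described above.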

Q) Critical-section problem
Ans: Each process has a segment of code called its critical section. Critical-section code is the code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute their critical-section code at the same time. For this to happen:
1. Each process must request permission to enter its critical section - the entry section.
2. The end of the critical section - the exit section.
3. The rest of the code after the critical section - the remainder section.

A solution to the critical-section problem must satisfy three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: only processes not in their remainder sections take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: there is a bound on how many times other processes may enter their critical sections after a process has requested entry; no process waits indefinitely.

Q) Peterson's solution for the critical section problem (software-based solution)
Ans:

Time  Process 1                               Process 2
T0    do { flag[1] = TRUE; turn = 2;          do { flag[2] = TRUE;
T1                                            turn = 1;
T2    while (flag[2] && turn == 2) ;          while (flag[1] && turn == 1) ;
      Since turn = 1, the loop is             Loops continuously until flag[1]
      not entered                             becomes FALSE
T3    Enters critical section
T4    flag[1] = FALSE;                        Enters critical section
T5    Enters remainder section                flag[2] = FALSE;
      } while (TRUE);                         Enters remainder section
                                              } while (TRUE);
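A runnable sketch of Peterson's algorithm is shown below, using indices 0 and 1 rather than 1 and 2. It relies on CPython's interpreter for memory visibility; a real implementation would need hardware memory barriers. The shared variable count is protected only by the algorithm itself:

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter
turn = 0                # whose turn it is to defer
count = 0               # shared variable protected by the algorithm
N = 1000

def process(i):
    global turn, count
    other = 1 - i
    for _ in range(N):
        flag[i] = True                        # entry section: announce intent
        turn = other                          # politely give the other priority
        while flag[other] and turn == other:
            pass                              # busy wait
        count += 1                            # critical section
        flag[i] = False                       # exit section

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
```

If mutual exclusion held throughout, count equals exactly 2 * N; without the entry/exit sections, increments could be lost as in the race-condition example.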

10.1 File Concept Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines or records. A file has a structure, which depends on its type.

                                                                  File Attributes

1. Name - name of the file, for user understanding.
2. Identifier - used by the OS to identify a file (e.g. inode number).
3. Type - text file, executable file, binary file, etc.
4. Location - location of the file on the hard drive.
5. Size - current size of the file.
6. Protection - controls who can read, write or execute the file.
7. Time, date and user identification - when the file was created, last modified and last used. This data is used for protection, security and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. The directory structure is kept on disk.

                                                                  File Operations

The file Abstract Data Type supports many common operations:
1. Creating a file: find free space on disk and add an entry to the directory.
2. Writing a file: write data at the current file-position pointer location and update the pointer.
3. Reading a file: read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek): change the pointer location to a given value.
5. Deleting a file: free the space allocated to the file and remove its entry from the directory.
6. Truncating a file: delete the data and update the file size.

                                                                  Open File Table

Most OSes require that files be opened before access and closed after access. Information about all files that are currently open in the system is stored in a system-wide open file table maintained by the OS, and information about the files opened by a process is stored in that process's open file table. The open file table holds the following information:

1. File pointer - records the current position in the file for the next read or write.
2. File-open count - how many times the file has been opened (possibly simultaneously by different processes) and not yet closed. When this counter reaches zero, the entry can be removed from the table.
3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With advisory locks, software developers must make sure the locks are acquired and released appropriately; the OS does not enforce them.
4. With mandatory locks, the OS makes sure the locks are acquired and released appropriately.
5. UNIX uses advisory locks and Windows uses mandatory locks.

                                                                  File Types

Implementing file types: a common technique is to include the type as part of the filename. The name is split into two parts:

1. File name
2. Extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on it. For example, when a user opens Microsoft Word and clicks File > Open, the user specifies only the file name without the extension; the application looks for a file with the given name and the extension it expects. In this sense, extensions are hints to applications and need not be enforced by the operating system. UNIX uses a crude magic number stored at the beginning of a file to indicate its type. In the Mac OS X operating system, a file produced by a word processor records that word processor's name as its creator; when the user double-clicks the file, the creator application opens it. Some common file types are as shown below.

File Structure: Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and what the location of the first instruction is.

If an OS supports many file formats, its size and complexity increase. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, including where to find the first executable statement).

Macintosh files have two forks - a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                                                                  Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units such as single bytes, data records or structures. The number of logical units that fit into one physical block determines the packing. If a student file is organized internally as records of 64 bytes each, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e., 8 records can be stored per block. Internal fragmentation may occur when the logical unit size does not evenly divide the block size.
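The packing arithmetic can be checked directly. The 64-byte record is the example from the notes; the 100-byte record is a hypothetical size added here to show internal fragmentation:

```python
BLOCK_SIZE = 512   # physical block size in bytes

# 64-byte records (example from the notes) pack evenly into a block.
packing_64 = BLOCK_SIZE // 64                 # records per block
waste_64 = BLOCK_SIZE - packing_64 * 64       # bytes lost per block

# Hypothetical 100-byte records do not pack evenly.
packing_100 = BLOCK_SIZE // 100               # records per block
waste_100 = BLOCK_SIZE - packing_100 * 100    # internal fragmentation per block
```

Here 8 records fit per block with no waste, while 100-byte records leave 12 unusable bytes in every block.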

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: records are read and written sequentially, one record after another; we cannot randomly access an arbitrary record. A sequential access file emulates magnetic tape operation and generally supports the following operations:
- read next: read a record and advance to the next position
- write next: write a record and advance to the next position
- rewind
- skip n records: may or may not be supported; n may be limited to positive numbers or to +/- 1

2. Direct access (or relative access): the file is made up of fixed-length records, so we can randomly access any record. There is no restriction on the order of reading or writing. Direct access files are useful when we need to access large amounts of data randomly, e.g. a database.

Operations supported include (n is a relative block number, i.e. relative to the beginning of the file):
1. read n: read record number n
2. write n: write record number n
3. jump to record n: n could be 0 or the end of file
4. query current record: used to return to this record later
5. Sequential access can be easily emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct access method. We construct an index for the file, which contains pointers to the various blocks. To find a record, we first search the index and then use the pointer to access the file directly.

If a student file has 120,000 records sorted by student id, we can create an index consisting of the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120,000 records = 120000/64 = 1875 blocks, so the index has 1875 entries. The index is kept in main memory so that searching becomes faster.

If the index itself becomes too large, we create an index for the index file (e.g. a B-tree).
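The index-then-block lookup can be sketched with a small in-memory index. The student ids and the 4-records-per-block figure below are toy numbers chosen for illustration:

```python
import bisect

RECORDS_PER_BLOCK = 4
# Hypothetical sorted student ids, split into disk blocks of 4 records each.
ids = list(range(100, 180, 2))     # 40 sorted ids: 100, 102, ..., 178
blocks = [ids[i:i + RECORDS_PER_BLOCK]
          for i in range(0, len(ids), RECORDS_PER_BLOCK)]

# The index holds the first id of each block and is kept in main memory.
index = [blk[0] for blk in blocks]

def find(student_id):
    # Binary-search the in-memory index to pick the block, then scan
    # only that one block (one disk read in a real system).
    b = bisect.bisect_right(index, student_id) - 1
    if b < 0:
        return None
    return student_id if student_id in blocks[b] else None
```

Only the small index is searched in memory; a single data block is then read, instead of scanning the whole file.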

                                                                  Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

Three major methods of allocating disk space are:
1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If a file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1. The directory entry for each file indicates the address of the starting block (b) and the length (n).

                                                                  Advantages

1. Direct access to block i is easy: it is simply block b+i. Since the file is stored sequentially, sequential access is also easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.
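The b+i calculation can be sketched with a toy directory entry (the file name and block numbers are hypothetical):

```python
# Directory entry for a contiguously allocated file: (start block b, length n).
directory = {"notes.txt": (14, 5)}   # hypothetical file occupying blocks 14..18

def block_of(name, i):
    # Direct access: logical block i of the file is physical block b + i.
    b, n = directory[name]
    if not 0 <= i < n:
        raise IndexError("block index outside file")
    return b + i
```

No pointer chasing is needed: any block of the file is reachable with one address computation.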

                                                                  Disadvantages

1. It is difficult to find space for a new file, or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to it. Solution: copy all the files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire space may not be used by the file at the beginning, leading to wasted space.

If the size of the file is not known in advance, then (i) if we allocate too little space, the file cannot be extended; the solution is to find a larger hole, copy the contents of the file to the new space and release the previous space, which works only as long as such a hole exists and is time consuming; (ii) if we overestimate the amount of space needed, much of the space goes unused, leading to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme. A chunk of contiguous blocks is allocated at first; if the file needs more blocks, another chunk of contiguous space, known as an extent, is added. The directory entry then records the location and block count of the first extent, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2. Linked Allocation: each file is a linked list of disk blocks, and the blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0.

To add more data to an existing file, a free block is found, the data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. Easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is higher.
2. The number of disk seeks required is higher.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct-access files: to access the i-th record, i disk reads are required.
5. Not reliable: if a pointer is lost or damaged, the rest of the file is unreachable.

A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes the logical-to-physical block mapping simpler, improves disk throughput by reducing disk access time, and reduces the space wasted on pointers, since fewer pointers are needed; free-list management also becomes simpler. The disadvantage is internal fragmentation: more space is wasted when a cluster is only partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the file allocation table (FAT). A section of the disk at the beginning of each volume contains the FAT, which has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                                                                  Advantages

1. Easy to find space for a new file or to add more data to an existing file.
2. Random access time is improved.

Disadvantage: the number of disk seeks is higher. The disk head must move to read the FAT and find the location of a block, then move to the location of the block itself (unless the FAT is cached in memory).
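A FAT chain walk can be sketched using the 5-block example file from the linked-allocation discussion (blocks 9 → 16 → 1 → 10 → 25):

```python
EOF = -1          # special end-of-file table value; 0 marks an unused block

# The FAT is indexed by block number; each entry holds the next block.
fat = [0] * 32    # small toy disk of 32 blocks
for cur, nxt in [(9, 16), (16, 1), (1, 10), (10, 25)]:
    fat[cur] = nxt
fat[25] = EOF     # block 25 is the last block of the file

def file_blocks(start):
    # Follow the chain from the directory's start block until EOF.
    blocks, cur = [], start
    while cur != EOF:
        blocks.append(cur)
        cur = fat[cur]
    return blocks
```

Random access improves because the chain is walked inside the (small, often cached) table rather than by reading every data block on disk.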

3. Indexed allocation

Each file has its own index block, which is an array of disk block addresses. The i-th entry in the index block points to the i-th block of the file.

The directory contains the address of the index block.

When the file is created, all pointers in the index block are set to nil.

To add more data to an existing file, a free block is found and the data is written to it; the address of that block is put in the index block.

                                                                  Advantages

1. Easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

Disadvantages

1. Seek time is higher.
2. The number of disk seeks required is higher.
3. Not reliable.
4. The index block should be kept in memory. If this memory is not available, we must first read the index block and then the desired data block, i.e. 2 disk accesses are required; with chained index blocks, accessing a block near the end of the file may require reading several index blocks before the needed data block.
5. The pointer overhead of an index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

Various mechanisms are used to handle the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header with the name of the file and a set of the first 100 disk-block addresses. For large files, the last address contains the address of another index block.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

3. Combined scheme

For example, say there are 15 pointers in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing the addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block, as shown below.
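The maximum file size under this combined scheme can be worked out once block and pointer sizes are fixed. The 512-byte block and 4-byte pointer below are assumptions chosen for illustration; the notes do not state them:

```python
BLOCK = 512                 # bytes per block (assumed)
PTR = 4                     # bytes per block address (assumed)
PPB = BLOCK // PTR          # pointers per index block = 128

direct = 12                 # blocks reachable via the 12 direct pointers
single = PPB                # via the single indirect block
double = PPB ** 2           # via the double indirect block
triple = PPB ** 3           # via the triple indirect block

max_blocks = direct + single + double + triple
max_bytes = max_blocks * BLOCK
```

With these numbers the scheme addresses 2,113,676 blocks, i.e. slightly over 1 GiB; larger blocks or pointers change the limit accordingly.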

                                                                  Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                                                  The free space list can be implemented as

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free the bit is 1, and if the block is allocated the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free; the free-space bit map would be 001111001000...

Advantages: simple, and makes it easy to find contiguous free blocks.

                                                                  Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient unless it is kept in main memory. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a large disk there.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because the freed blocks may be scattered all over the disk.
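The bit-vector example (blocks 2, 3, 4, 5 and 8 free on a 12-block disk) can be sketched as follows; allocate_first_free is a hypothetical helper showing how the map is scanned:

```python
DISK_BLOCKS = 12
free = [False] * DISK_BLOCKS
for b in (2, 3, 4, 5, 8):
    free[b] = True          # 1 = free, 0 = allocated

# Render the map as a bit string, matching the example in the text.
bitmap = "".join("1" if f else "0" for f in free)

def allocate_first_free():
    # Scan for the first set bit, mark the block allocated, return its number.
    for i, f in enumerate(free):
        if f:
            free[i] = False
            return i
    return None             # disk full
```

Successive allocations return blocks 2, then 3, and so on; both allocation and freeing must update the map.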

2. Linked list: the free-space list is implemented by linking all the free disk blocks together, with a pointer to the first free block stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: contiguous space cannot be found easily; to traverse the free-space list we must read each block, which requires substantial I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last one contains the addresses of another n free blocks, and so on.
Advantage: a large number of free blocks can be found quickly.

4. Counting: the free-space list keeps the address of the first free block and the number n of free contiguous blocks that follow it. Each entry in the free-space list thus consists of a disk address and a count. Each entry requires more space than a simple address, but the overall list is shorter as long as the counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion and search.
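The counting representation can be sketched by collapsing a sorted set of free block numbers into (address, count) runs:

```python
def runs_from_free(free_blocks):
    # Collapse sorted free block numbers into (start address, count) entries.
    runs = []
    for b in sorted(free_blocks):
        if runs and b == runs[-1][0] + runs[-1][1]:
            # Block extends the current contiguous run.
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)
        else:
            runs.append((b, 1))   # start a new run
    return runs
```

For the recurring example, free blocks 2, 3, 4, 5 and 8 collapse to just two entries: (2, 4) and (8, 1).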

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks; the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Synchronization Hardware
The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. The TestAndSet() hardware instruction
2. The Swap() instruction

1. The TestAndSet instruction atomically reads a boolean variable, sets it to TRUE, and returns the old value.

Solution using TestAndSet: the shared variable lock is initialized to false. Each process spins on TestAndSet(&lock) until it returns false, enters its critical section, and sets lock back to false on exit.

2. The Swap() instruction atomically exchanges the contents of two boolean variables.

Solution using Swap: the shared boolean variable lock is initialized to false, and each process has a local boolean variable key.

Both the TestAndSet and Swap instructions satisfy the mutual exclusion requirement but unfortunately do not guarantee bounded waiting. If multiple processes are trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee as to the relative rates of the processes, a very fast process could theoretically release the lock, run through its remainder section, and re-acquire the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The following solution using test-and-set satisfies bounded waiting. It uses two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

                                                                  Bounded-waiting mutual exclusion with TestAndSet( )

On exit, a process looks, in order (starting with the next process on the list), for a process that has been waiting; if it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting a specific process while continuing to block all the others.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1. wait()  2. signal()

Wait primitive:

    wait(S) {
        while (S <= 0)
            ;   /* do nothing */
        S--;
    }

Signal primitive:

    signal(S) {
        S++;
    }

Each process that wishes to use a resource performs the wait() operation; when a process releases a resource, it performs the signal() operation.

Semaphore usage:
1. A binary semaphore (or mutex lock) can take the values 0 or 1.

2. A counting semaphore can take any integer value and is used to count the number of remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources; when the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal call.

For example, suppose there are 3 resources and 4 processes, so the counting semaphore = 3.
- If process P1 requires a resource, it performs wait(); now the counting semaphore = 2.
- If process P2 requires a resource, it performs wait(); now the counting semaphore = 1.
- If process P3 requires a resource, it performs wait(); now the counting semaphore = 0.
- If process P4 requires a resource, it performs wait(), P4 waits until a resource is available.
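The 3-resources/4-processes scenario can be mirrored with Python's threading.Semaphore. Non-blocking acquires are used here so the sketch stays deterministic; a real fourth process would simply block in wait():

```python
import threading

resources = threading.Semaphore(3)   # counting semaphore initialized to 3

# P1, P2 and P3 each perform wait() and obtain a resource.
got = [resources.acquire(blocking=False) for _ in range(3)]

# P4's wait() finds the count at 0: a non-blocking acquire fails here,
# where a real wait() would block until some process calls signal().
p4 = resources.acquire(blocking=False)

resources.release()                  # some process signals, freeing a resource
p4_retry = resources.acquire(blocking=False)   # now P4 succeeds
```

The semaphore's count tracks remaining resources exactly as in the walkthrough above: 3, 2, 1, 0, then back to 1 after the signal.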

3. Semaphores can also be used to synchronize the order of certain operations. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2. First we create a semaphore named synch, shared by the two processes and initialized to zero. In process P1 we insert the code

    S1;
    signal(synch);

and in process P2 we insert the code

    wait(synch);
    S2;

Because synch was initialized to 0, process P2 blocks on the wait until after P1 executes the call to signal.

Semaphore Implementation: While one process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process spins while waiting. Each semaphore has an integer value and maintains a queue of waiting processes.
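The S1-before-S2 ordering can be demonstrated with a semaphore initialized to zero; the order list below is just instrumentation to record what happened:

```python
import threading

synch = threading.Semaphore(0)   # initialized to 0, as in the text
order = []                       # records the execution order

def p1():
    order.append("S1")           # statement S1
    synch.release()              # signal(synch)

def p2():
    synch.acquire()              # wait(synch): blocks until P1 signals
    order.append("S2")           # statement S2

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start()                       # start P2 first to show that it really waits
t1.start()
t1.join(); t2.join()
```

Even though P2 starts first, its wait() blocks until P1's signal(), so S1 always precedes S2.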

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e. the process is moved from the waiting queue to the ready state).

Problems due to semaphores:
1. Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes.

2. Starvation: one or more processes are blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithms for adding processes to the waiting queue or for selecting the process to be removed in the signal() call: if a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M becomes ready, it pre-empts L, and M runs. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority (priority inheritance) so that M cannot pre-empt it.
=====================================================================
Q) Classical problems of synchronization

1. The Bounded Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

                                                                  1 The Bounded Buffer ProblemConsider

                                                                  n buffers and each buffer holds one item a producer process which creates the items (1 at a time) a consumer process which processes them (1 at a time)

The producer cannot produce an item when all buffers are full, and the consumer cannot consume when all buffers are empty. Since both processes modify the buffer, synchronization is required. We use three semaphores:

1. empty = n (counts the number of empty buffers)
2. full = 0 (counts the number of filled buffers)
3. mutex = 1 (provides mutual exclusion for access to the buffer)

Producer process:

do {
    ...
    /* produce an item */
    ...
    wait(empty);
    wait(mutex);
    ...
    /* add item to buffer */
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:

do {
    wait(full);
    wait(mutex);
    ...
    /* remove an item from buffer */
    ...
    signal(mutex);
    signal(empty);
    ...
    /* consume the item */
    ...
} while (1);
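The pseudocode above maps directly onto Python's threading.Semaphore. Here is a runnable sketch; the buffer size and item count are illustrative:

```python
import threading
from collections import deque

N = 4                                   # number of buffer slots (illustrative)
buffer = deque()
empty = threading.Semaphore(N)          # counts empty slots
full = threading.Semaphore(0)           # counts filled slots
mutex = threading.Semaphore(1)          # mutual exclusion for the buffer

def producer(items):
    for item in items:
        empty.acquire()                 # wait(empty)
        mutex.acquire()                 # wait(mutex)
        buffer.append(item)             # add item to buffer
        mutex.release()                 # signal(mutex)
        full.release()                  # signal(full)

def consumer(n, out):
    for _ in range(n):
        full.acquire()                  # wait(full)
        mutex.acquire()                 # wait(mutex)
        out.append(buffer.popleft())    # remove an item from buffer
        mutex.release()                 # signal(mutex)
        empty.release()                 # signal(empty)

consumed = []
p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10, consumed))
p.start()
c.start()
p.join()
c.join()
print(consumed)   # items arrive in FIFO order: [0, 1, ..., 9]
```

Because the producer blocks on empty and the consumer blocks on full, neither ever busy-waits, and mutex keeps the buffer consistent.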

2. The Readers-Writers Problem. A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

1. Readers priority: a new reader need not wait just because a writer is waiting.
2. Writers priority: if a writer is waiting to access the data, no new readers may start reading.

Both versions may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses an integer and two semaphores:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore used for mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:

do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    ...
    /* reading is performed */
    ...
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);
} while (TRUE);

Writer process:

do {
    wait(wrt);
    ...
    /* writing is performed */
    ...
    signal(wrt);
} while (TRUE);
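The readers-priority scheme above can be sketched in runnable Python, with threading.Semaphore standing in for wait()/signal(). The shared dictionary, log list, and thread counts are illustrative:

```python
import threading

read_count = 0                  # number of active readers (a plain integer)
mutex = threading.Semaphore(1)  # protects read_count
wrt = threading.Semaphore(1)    # exclusive access to the shared data
shared = {"value": 0}
log = []

def reader(name):
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:         # first reader locks out writers
        wrt.acquire()
    mutex.release()
    log.append((name, shared["value"]))   # reading is performed
    mutex.acquire()
    read_count -= 1
    if read_count == 0:         # last reader lets writers back in
        wrt.release()
    mutex.release()

def writer(value):
    wrt.acquire()
    shared["value"] = value     # writing is performed
    wrt.release()

threads = [threading.Thread(target=writer, args=(42,))]
threads += [threading.Thread(target=reader, args=(f"r{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Only the first and last reader touch wrt, so any number of readers can overlap, while the single writer is excluded whenever at least one reader is active.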

3. The Dining Philosophers Problem. n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already held by a neighbour. The dining philosophers problem is an example of a large class of concurrency-control problems; it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5];   /* initially all values are 1 */

Philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    ...
    /* eat */
    ...
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    ...
    /* think */
    ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it can create a deadlock (for example, if all five philosophers pick up their left chopstick at the same moment).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), not all CPUs are equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, and so on. These systems lack the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within a time constraint.

                                                                  Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement is met under all conditions. Examples: digital audio and multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e., decision-making is spread across the computers in the system so that the failure of individual computers or of the network does not cripple operation.

A distributed operating system appears to users as a uniprocessor system even though it has multiple processors. Users may not know where their programs run or where their files are located; all of that is handled automatically by the operating system.

Distributed systems allow applications to run on several processors at the same time, thus requiring more complex processor-scheduling algorithms.

DESKTOP SYSTEMS: A desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. Such devices have very little memory.


CLUSTERED SYSTEMS: A computer cluster consists of a set of loosely connected computers that work together so that, in many respects, they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                                                                  Q) Modern operating system

New developments in hardware, applications, and security threats have led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, Internet and web access, and client-server computing.

Developments in modern operating systems can be categorized as follows:

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: since systems nowadays contain multiple processors, an SMP operating system provides greater efficiency. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                                                  SMP schedules processes or threads across all of the processors

                                                                  Advantages

1. Increased performance: if some portion of a program can run in parallel, performance increases.

2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: a new processor can be added to increase performance.
4. Scaling: cost can scale with the number of processors used.

4. Distributed operating systems: distributed systems allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux Operating System
Linux is open source: its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization.

Components of Linux System


The Linux operating system has primarily three components, as shown in the diagram below:

Kernel - the core part of Linux, responsible for all major activities of the operating system. It interacts directly with hardware and hides low-level hardware details from system and application programs.

System Library - system libraries are special functions or programs used by application programs to access the kernel's features.

System Utility - system utility programs are responsible for specialized, individual-level tasks such as updating log files or accepting login requests from terminals.

Kernel Modules - sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver is available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode
Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require a context switch, so it is very efficient and fast. The kernel runs each process, provides system services to processes, and gives them protected access to hardware. Code that is not required to run in kernel mode lives in the system libraries. User programs and other system programs work in user mode, with no direct access to system hardware or kernel code; they use the system libraries to ask the kernel to perform low-level tasks.

Basic Features
Following are some of the important features of the Linux operating system:

Portable - portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on many kinds of hardware platforms.

Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system: multiple users can access system resources such as memory or application programs at the same time.

Multiprogramming - Linux is a multiprogramming system: multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system, perform various kinds of operations, call application programs, and so on.


Security - Linux provides user security via authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture
The Linux system architecture consists of the following layers:

Hardware layer - consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - the core component of the operating system; interacts directly with hardware and provides low-level services to the upper layers.

Shell - an interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System
1. Extensibility: layered architecture.

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability: XP can be moved from one hardware architecture to another with relatively few changes.

It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3. Reliability: XP uses hardware protection for virtual memory and software protection mechanisms for operating-system resources.
4. Compatibility: applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
5. Performance: XP subsystems can communicate with one another via high-performance message passing.

Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

6. International support: different locales are supported via the national language support (NLS) API.

XP Architecture
1. Layered system of modules.
2. Protected mode: hardware abstraction layer (HAL), kernel, executive.
3. User mode: a collection of subsystems:

a. Environmental subsystems emulate different operating systems.
b. Protection subsystems provide security functions.

                                                                  XP architecture


Q) Windows Network Operating System
Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.

Visit my blogs: enthusiaststudent.blogspot.in, mtechmessenger.blogspot.in




1. A process entering the ready queue is put in queue Q0. When it gains the CPU, the process receives 8 milliseconds; if it does not finish in 8 milliseconds, it is moved to queue Q1.
2. At Q1 the process receives an additional 16 milliseconds; if it still does not complete, it is moved to queue Q2.
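The two-step demotion above can be simulated with a small sketch. The function name and the sample burst times are ours; it reports, for each CPU burst, the queue in which the process finally completes (Q0's 8 ms quantum, then Q1's 16 ms, then the FCFS queue Q2):

```python
def mlfq_queue_levels(burst_times, quanta=(8, 16)):
    """For each process burst (in ms), return the queue level (0, 1, or 2)
    in which it completes: 8 ms in Q0, 16 more ms in Q1, then FCFS in Q2."""
    levels = {}
    for pid, burst in burst_times.items():
        remaining = burst
        for level, q in enumerate(quanta):
            remaining -= q              # the process runs one quantum at this level
            if remaining <= 0:
                levels[pid] = level     # finished within this queue's quantum
                break
        else:
            levels[pid] = len(quanta)   # still unfinished: it ends up in FCFS Q2
    return levels

# a 5 ms job finishes in Q0, a 20 ms job in Q1, a 40 ms job in Q2
print(mlfq_queue_levels({"A": 5, "B": 20, "C": 40}))   # {'A': 0, 'B': 1, 'C': 2}
```

The sketch ignores interleaving between processes; it only tracks how much CPU time each burst consumes at each level, which is enough to see the demotion rule at work.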

Q) Synchronization
Several processes run in an operating system. Some processes share resources, due to which problems like data inconsistency may arise.

An example of process synchronization is the Producer-Consumer problem (or Bounded-Buffer problem). Producer-Consumer problem: make sure that the producer process does not add data when the buffer is full, and that the consumer process does not take data when the buffer is empty.

The code for the producer process can be modified as follows:

while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Here an integer variable counter is initialized to zero. counter is incremented every time a new item is added to the buffer, and decremented every time an item is removed. in is an index that always points to the first empty slot after the last filled slot; out is an index that always points to the first filled slot at the head of the circular queue.

The code for the consumer process is shown below:

while (true) {
    while (counter == 0)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

The above producer and consumer code produces a RACE CONDITION. When the outcome of the execution of several co-operating processes depends on the order in which their accesses take place, we have a race condition. For example, suppose counter = 5.

Given counter = 5, if the producer produces one item and the consumer consumes one item, then the correct final value is counter = 5. But here counter may end up as 4, an inconsistent result; and if T5 is executed before T4, then counter = 6, also inconsistent.

Solution to avoid the race condition: make sure that only one process at a time updates the shared variable (here, counter).
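The single-updater rule can be illustrated with Python threads. Demonstrating the race itself is unreliable (CPython may happen to serialize the updates), so this sketch shows the fix: a lock ensures only one thread updates the shared counter at a time. Names and iteration counts are illustrative:

```python
import threading

counter = 0
counter_lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        with counter_lock:      # only one thread updates counter at a time
            counter += 1

def sub(n):
    global counter
    for _ in range(n):
        with counter_lock:
            counter -= 1

t1 = threading.Thread(target=add, args=(100_000,))
t2 = threading.Thread(target=sub, args=(100_000,))
t1.start()
t2.start()
t1.join()
t2.join()
print(counter)   # 0: equal numbers of increments and decrements
```

Without the lock, the read-modify-write of `counter` could interleave between threads, reproducing exactly the counter = 4 or counter = 6 inconsistency described above.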

Q) Critical section problem
Ans: Each process has a segment of code called its critical section. Critical-section code is the code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute critical-section code at the same time. For this to happen:
1. Each process must request permission to enter its critical section (the entry section).
2. The end of the critical section is the exit section.
3. The rest of the code after the critical section is the remainder section.

A solution to the critical-section problem must satisfy three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process may be in its critical section.
2. Progress: only processes not in their remainder sections take part in deciding which process enters its critical section next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: every process must eventually be allowed to enter its critical section; no process may wait indefinitely.

Q) Peterson's solution to the critical-section problem (software-based solution)
Ans: Shared variables are boolean flag[2] (both initially FALSE) and int turn. A sample execution trace of the two processes:

Time  Process 1                                Process 2
T0    do { flag[1] = TRUE; turn = 2;           do { flag[2] = TRUE;
T1                                             turn = 1;
T2    while (flag[2] && turn == 2);            while (flag[1] && turn == 1);
      (turn is now 1, so the loop exits)       (loops until flag[1] becomes FALSE)
T3    enter critical section
T4    flag[1] = FALSE;                         enter critical section
T5    enter remainder section                  flag[2] = FALSE;
      } while (TRUE);                          enter remainder section
                                               } while (TRUE);
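Peterson's algorithm can be sketched in runnable Python. A caveat: this relies on the interpreter executing memory operations in program order, which CPython's GIL provides in practice; on real hardware with weaker memory models, the algorithm needs memory barriers. The worker structure and iteration count are illustrative:

```python
import threading

flag = [False, False]
turn = 0
counter = 0          # shared variable protected by Peterson's algorithm

def worker(i, iterations):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        flag[i] = True                       # entry section: declare interest
        turn = other                         # politely give the other process priority
        while flag[other] and turn == other:
            pass                             # busy-wait until it is safe to enter
        counter += 1                         # critical section
        flag[i] = False                      # exit section

threads = [threading.Thread(target=worker, args=(i, 2_000)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 4000: every increment happened under mutual exclusion
```

If mutual exclusion were violated, some `counter += 1` operations would be lost and the final value would fall short of 4000.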

10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

                                                                    File Attributes

1. Name - the name of the file, for user understanding.
2. Identifier - used by the OS to identify a file (e.g., inode number).
3. Type - text files, executable files, binary files, etc.
4. Location - the location of the file on the hard drive.
5. Size - the current size of the file.
6. Protection - controls who can read, write, or execute the file.
7. Time, date, and user identification - record when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier locates the other file attributes. The directory structure is kept on disk.

                                                                    File Operations

The file abstract data type supports many common operations:
1. Creating a file: find free space on disk and add an entry to the directory.
2. Writing a file: write data at the current file-position pointer and update the pointer.
3. Reading a file: read data at the current file-position pointer and update the pointer.
4. Repositioning within a file (file seek): change the pointer to a given value.
5. Deleting a file: free the space allocated to the file and remove its entry from the directory.
6. Truncating a file: delete the data and update the file size.

                                                                    Open File Table

Most operating systems require that files be opened before access and closed after access. Information about all files currently open in the system is stored in a system-wide open-file table maintained by the OS, and information about the files opened by a process is stored in that process's own open-file table. The open-file table holds the following information:

1. File pointer - records the current position in the file for the next read or write.
2. File-open count - how many times the file is currently open (possibly by different processes) and not yet closed. When this counter reaches zero, the file's entry can be removed from the table.

3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With advisory locking, the software developers make sure locks are acquired and released appropriately.
4. With mandatory locking, the OS makes sure locks are acquired and released appropriately.
5. UNIX uses advisory locks, and Windows uses mandatory locks.

                                                                    File Types

Implementing file types: a common technique is to include the type as part of the filename. The name is split into two parts:

1. the file name
2. the extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on it. When a user opens Microsoft Word, clicks the File menu, and chooses Open, the user may specify the filename without an extension; the application looks for a file with the given name and the extension it expects. In this sense extensions are hints, not enforced by the operating system. UNIX uses a crude magic number stored at the beginning of the file to indicate its type. In the Mac OS X operating system, a file produced by a word processor records the word processor's name as its creator; when the user double-clicks the file, Word opens it. Some common file types are shown below.
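Extension-based typing can be sketched in a few lines. The extension-to-type table below is a made-up illustration (real systems consult much larger registries):

```python
import os

# illustrative mapping from extension to file type (not an OS standard)
FILE_TYPES = {
    ".txt": "text file",
    ".exe": "executable",
    ".c":   "C source code",
    ".doc": "word processor document",
}

def file_type(filename):
    # split "report.doc" into ("report", ".doc") and look up the extension
    _, ext = os.path.splitext(filename)
    return FILE_TYPES.get(ext.lower(), "unknown")

print(file_type("report.doc"))   # word processor document
```

Note that the lookup tells us nothing about the file's actual contents, which is exactly why UNIX prefers a magic number inside the file itself.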

File Structure: Some files have an internal structure which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and where the first instruction is located.

If an OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, including how to find the first executable statement).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                                                                    Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes.

Internally, files are organized in logical units such as a single byte, a data-record size, or a structure size.

The number of logical units that fit into one physical block determines the packing. For example, if a student file is organized internally as records of 64 bytes each and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e., 8 records can be stored per block.

                                                                    Internal fragmentation may occur due to this

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: records are read and written in order, one record after another; we cannot randomly access an arbitrary record.

A sequential-access file emulates magnetic tape operation and generally supports the following operations:
read next - read a record and advance to the next position
write next - write a record and advance to the next position
rewind
skip n records - may or may not be supported; n may be limited to positive numbers or to +/- 1

2. Direct access (or relative access): the file is made up of fixed-length records, so we can randomly access any record.

There is no restriction on the order of reading or writing.

Direct-access files are useful when large amounts of data must be accessed randomly, e.g., a database.

Operations supported include (n is the relative block number, i.e., relative to the beginning of the file):
1. read n - read record number n
2. write n - write record number n
3. jump to record n - n could be 0 or the end of the file
4. query current record - used so a process can return to this record later
Sequential access can be easily emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct-access method. We construct an index for the file, containing pointers to the various blocks; to find a record, we first search the index and then use the pointer to access the file directly.

For example, suppose a student file has 120,000 records sorted by student id. We can create an index consisting of the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120,000 records = 120000/64 = 1875 blocks, so the index has 1875 entries. The index is kept in main memory so that searching becomes faster.

If the index itself becomes too large, we create an index for the index file (e.g., a B-tree).

                                                                    Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                                                                    Three major methods of allocating disk space are

                                                                    1 Contiguous 2 Linked

                                                                    3 Indexed

                                                                    1 Contiguous allocation Each file occupies set of contiguous blocks on disk

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file indicates the address of the starting block (i.e., b) and the length (n).

                                                                    Advantages

1. Access is easy: for direct access to block i, we can find it immediately at disk block b+i, and since the file is stored sequentially, sequential access is easy as well.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal, so seek time is minimal.

                                                                    Disadvantages

1. It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) in the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to it. Solution: copy all the files onto another disk or tape and then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time-consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire space may not be used by the file at the beginning, which leads to wasted space.

If the size of the file is not known in advance, then: (i) if we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This is possible as long as such space exists, but it is time-consuming. (ii) If we overestimate the amount of space needed, much of the space is unused, which leads to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous-allocation scheme: a chunk of contiguous blocks is allocated at first, and if the file needs more blocks, another chunk of contiguous space, known as an extent, is added. The directory entry records the location and block count of the file, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2. Linked allocation: each file is a linked list of disk blocks, and the blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; for an empty file the pointer is nil and the size field is 0.

To add more data to an existing file, a free block is found, data is written to it, and the block is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. It is easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is high.
2. The number of disk seeks required is high.
3. Pointers consume a noticeable percentage of the file's disk space.
4. Inefficient for direct access: to access the ith block, i disk reads are required.
5. Not reliable: the file is lost if pointers are lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (e.g., 4 blocks = 1 cluster) and allocate clusters to files instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk accesses, reduces the space wasted on pointers (since only a few pointers are needed), and simplifies free-list management. Disadvantage: internal fragmentation, as more space is wasted when a cluster is only partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file allocation table (FAT). A section of disk at the beginning of each volume contains the table, which has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                                                                    Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Random access time is improved.

Disadvantage: the number of disk seeks is high. The disk head must move to the start of the volume to read the FAT and find the location of the block, and then move to the location of the block itself.

                                                                    3 Indexed allocation

Each file has its own index block, an array of disk-block addresses: the ith entry in the index block points to the ith block of the file.

                                                                    The directory contains the address of the index block

                                                                    When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found, data is written to it, and the address of the block is put in the index block.

                                                                    Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

Disadvantages:

1. Seek time is high.
2. The number of disk seeks required is high.
3. Not reliable: if the index block is lost or damaged, the whole file is lost.
4. The index block must be kept in memory. If that memory is not available, we must read the index block and then the desired data block, i.e., 2 disk accesses are required; to access a block near the end of a large file, we may need to read several index blocks before reaching the needed data block.
5. The pointer overhead of an index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

Various mechanisms are used to handle the size of the index block:

1. Linked scheme: an index block is one disk block. It contains a small header giving the name of the file and a set of the first 100 disk-block addresses; for large files, the last address points to another index block.

2. Multilevel index: to access a block, the OS uses a first-level index to find a second-level index block, which in turn points to the file block. This approach can be continued to a third or fourth level.

                                                                    3 combined scheme

For example, say there are 15 pointers in the file's inode. The first 12 of these point to direct blocks that contain file data. The next 3 point to indirect blocks: the first points to a single indirect block, an index block containing the addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block, as shown below.

                                                                    Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                                                    The free space list can be implemented as

1. Bit vector (bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, on a disk where blocks 2, 3, 4, 5, and 8 are free, the free-space bit map would be 001111001000...

Advantages: simple, and it is easy to find contiguous free blocks for a file.

                                                                    Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient unless kept in main memory. The bit map of a small disk can be kept in main memory, but this may not be possible for a large disk.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because the freed data blocks may be scattered all over the disk.

2. Linked list: the free-space list is implemented by linking together all the free disk blocks, keeping a pointer to the first free block in a special location on the disk. For example, on a disk where blocks 2, 3, 4, 5, and 8 are free, we keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no wasted space.
Disadvantage: contiguous space cannot be found easily; traversing the free-space list requires reading each block, which takes substantial IO time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last one contains the addresses of another n free blocks, and so on.
Advantage: the addresses of a large number of free blocks can be found quickly.

4. Counting: the free-space list keeps, for each run of free blocks, the address of the first free block and the number n of free contiguous blocks that follow it.

Each entry in the free-space list thus consists of a disk address and a count. Each entry requires more space, but the overall list is shorter as long as the counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs, and each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Q) Synchronization hardware
Ans: The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. The TestAndSet() hardware instruction
2. The Swap() instruction

1. The TestAndSet() instruction is defined as shown below.

Solution using TestAndSet: the shared variable lock is initialized to false.

2. The Swap() instruction is defined as follows.

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee about the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee about the relative speeds of the processes, a very fast process could theoretically release the lock, whip through its remainder section, and re-acquire the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

A solution using test-and-set that satisfies bounded waiting uses two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

Bounded-waiting mutual exclusion with TestAndSet():

On exit, a process first scans the waiting array in order (starting with the next process after itself) for a process that has been waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process while continuing to block all the others; if no process is waiting, it releases the lock.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: wait() and signal().

Wait primitive:
wait(S) {
    while (S <= 0)
        ; // do nothing
    S--;
}

Signal primitive:
signal(S) {
    S++;
}

Each process that wishes to use a resource performs the wait() operation; when a process releases a resource, it performs the signal() operation.

Semaphore usage:
1. A binary semaphore (mutex lock) can take only the values 0 and 1.

2. A counting semaphore can take any integer value and is used to count the remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources; when the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal call.
For example, suppose there are 3 resources and 4 processes, so the counting semaphore = 3.
If process P1 requires a resource, it performs wait(); now the counting semaphore = 2.
If process P2 requires a resource, it performs wait(); now the counting semaphore = 1.
If process P3 requires a resource, it performs wait(); now the counting semaphore = 0.
If process P4 requires a resource, it performs wait() and must wait until a resource is available.

3. Semaphores can also be used to synchronize the order of certain operations.

Semaphore implementation: while one process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process spins while waiting.

Semaphores can also enforce ordering between processes. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First, we create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code:

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal. Each semaphore has an integer value and maintains a queue of waiting processes.

Semaphore implementation with no busy waiting: to avoid busy waiting, we block the waiting process; it is later restarted by a wakeup() operation, i.e., the process is moved from the semaphore's waiting queue back to the ready state.

Problems due to semaphores:
1. Deadlock: deadlocks occur when multiple processes are blocked, each waiting for a resource that can be freed only by one of the other (blocked) processes.

2. Starvation: one or more processes remain blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithms for adding processes to the waiting queue or for selecting the process to be removed by the signal() call: if a LIFO queue is used, the first process to start waiting may never get a turn.

3. Priority inversion: let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it preempts L and runs; thus M runs before H even though M has lower priority than H. To avoid this, L temporarily inherits H's priority (priority inheritance), so that M cannot preempt L.
=====================================================================
Q) Classical problems of synchronization

1. The bounded-buffer problem (also called the producer-consumer problem)
2. The readers-writers problem
3. The dining-philosophers problem

These problems are used to test newly proposed synchronization schemes.

1. The bounded-buffer problem. Consider:
n buffers, each of which holds one item;
a producer process that creates the items (one at a time);
a consumer process that consumes them (one at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. Since both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:
do {
    ...
    produce an item
    ...
    wait(empty);
    wait(mutex);
    ...
    add the item to the buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:
do {
    wait(full);
    wait(mutex);
    ...
    remove an item from the buffer
    ...
    signal(mutex);
    signal(empty);
    ...
    consume the item
    ...
} while (1);

2. The readers-writers problem. A data item, such as a file, is shared among several processes, each classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: a new reader need not wait just because a writer is waiting;
writers priority: if a writer is waiting to access the data, no new readers can start reading.

A solution to either version may cause starvation: in the readers-priority version, writers may starve; in the writers-priority version, readers may starve.

A semaphore solution to the readers-priority version uses two semaphores and a counter:
1. readcount = 0 (a counter recording the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
...
reading is performed
...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

Writer process:
do {
    wait(wrt);
    ...
    writing is performed
    ...
    signal(wrt);
} while (TRUE);

3. The dining-philosophers problem. n philosophers sit around a table, alternately thinking and eating. From time to time a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already held by a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5]; // initially all values are 1

Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    ...
    eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    ...
    think
    ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: in asymmetric multiprocessing (ASMP), not all CPUs are equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, and so on. These systems do not have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM implements a computer application that must complete its execution within its time constraint.

                                                                    Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement is met under all conditions. Examples: digital audio and multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e., decision-making is spread across the computers in the system so that failures of individual computers or of the network do not cripple operation.

A distributed operating system appears to be a uniprocessor system even though it has multiple processors. Users may not know where their programs are run or where their files are located; that is all handled automatically by the operating system.

                                                                    Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers) Two of the most popular operating systems for handhelds are Symbian OS and Android OS. They have very little memory.


CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                                                                    Q) Modern operating system

New developments in hardware, applications, and security threats lead to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client-server computing.

Developments in modern operating systems can be categorized as:

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: since nowadays there are often multiple processors in a single system, an SMP operating system provides greater efficiency when several processors are available. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

SMP schedules processes or threads across all of the processors.

                                                                    Advantages

1. Increased performance: if some portion of the program can be run in parallel, the performance of an SMP OS increases.

2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can scale with the number of processors used.

4. Distributed operating systems: distributed systems allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel. The OS can be customized without affecting system integrity.

Q) Linux Operating System It is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. Linux is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization. Components of the Linux system:


The Linux operating system primarily has three components, as shown in the diagram below.

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of this operating system. It interacts directly with hardware and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs. These are used by application programs to use the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks like updating log files, accepting login requests from terminals, etc.

Kernel Modules - Sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support: 1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel. 2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available. 3. Conflict resolution: protects resources reserved by one driver from accidental use by another driver.

Kernel Mode vs User Mode

Kernel component code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, does not require any context switch, and hence is very efficient and fast. The kernel runs each process, provides system services to processes, and provides processes protected access to hardware. Code which is not required to run in kernel mode is placed in system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code. User programs and utilities use system libraries to access kernel functions for the system's low-level tasks.

Basic Features

Following are some of the important features of the Linux operating system:

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on any kind of hardware platform.

Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multiuser system, meaning multiple users can access system resources like memory, RAM, and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system. It can be used for various types of operations, to call application programs, etc.


Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

Architecture The Linux system architecture consists of the following layers:

Hardware layer - Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - The core component of the operating system; interacts directly with hardware and provides low-level services to upper-layer components.

Shell - An interface to the kernel. Takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System
1. Extensibility - layered architecture.

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability - XP can be moved from one hardware architecture to another with relatively few changes.

It is written in C and C++. Processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.
4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run without changing the source code.
5. Performance - XP subsystems can communicate with one another via high-performance message passing.

Preemption of low-priority threads enables the system to respond quickly to external events. It is designed for symmetric multiprocessing.

6. International support - supports different locales via the national language support (NLS) API.

XP Architecture: 1. A layered system of modules. 2. Protected mode - hardware abstraction layer (HAL), kernel, executive. 3. User mode - a collection of subsystems:

a. Environmental subsystems emulate different operating systems. b. Protection subsystems provide security functions.

                                                                    XP architecture


Q) Windows Network Operating System Example: the Windows 2000 operating system.

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.

Visit my blog: enthusiaststudent.blogspot.in

mtechmessenger.blogspot.in




Given counter = 5: if the producer produces one item and the consumer consumes one item, then the correct value is counter = 5. But here counter = 4, i.e., an inconsistent result. If T5 is executed before T4, then counter = 6, again an inconsistent result.

Solution to avoid the race condition: make sure only one process at a time updates the shared variable (here, counter).
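The lost update above can be reproduced by splitting `counter += 1` and `counter -= 1` into their load/modify/store steps; a minimal sketch (the register variable names are illustrative):

```python
# Simulate the producer/consumer interleaving that loses an update.
counter = 5

# Each "process" performs load, modify, store as separate machine-level steps.
producer_reg = counter      # producer loads counter (5)
consumer_reg = counter      # consumer loads counter (5)
producer_reg += 1           # producer increments its private copy (6)
consumer_reg -= 1           # consumer decrements its private copy (4)
counter = producer_reg      # producer stores 6
counter = consumer_reg      # consumer stores 4 -- the increment is lost

print(counter)  # 4, although the correct value is 5
```

Swapping the two store steps yields counter = 6 instead, matching the T5-before-T4 case in the text.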

Q) Critical Section Problem
Ans: Each process has a segment of code called its critical section. Critical section code is the code that accesses variables shared with co-operating processes. To avoid data inconsistency, two processes must not execute critical section code at the same time. For this to happen:
1. Each process must request permission to enter its critical section - the entry section.
2. The end of the critical section - the exit section.
3. The rest of the code after the critical section - the remainder section.

A solution to the critical section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is in its critical section, no other process can be in its critical section.
2. Progress: processes not in their remainder sections must decide which process will enter its critical section next.
3. Bounded waiting: every process must eventually be allowed to enter its critical section; no process must wait indefinitely.

Q) Peterson's solution to the critical section problem (software-based solution)
Ans: The trace below shows the two processes interleaving over time.

Time  Process 1                                Process 2
T0    do { flag[1] = TRUE; turn = 2;           do { flag[2] = TRUE;
T1                                             turn = 1;
T2    while (flag[2] && turn == 2);            while (flag[1] && turn == 1);
      (turn is now 1, so the while             (loops continuously until flag[1]
      condition is false and P1 proceeds)      becomes false)
T3    enter critical section
T4    flag[1] = FALSE                          enter critical section
T5    enters remainder section                 flag[2] = FALSE
      } while (TRUE);                          enters remainder section
                                               } while (TRUE);
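A runnable sketch of Peterson's algorithm, using two threads protecting a shared counter (indices 0/1 are used instead of the 1/2 above; the iteration count is arbitrary):

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so the busy-wait makes progress

flag = [False, False]  # flag[i]: process i wants to enter its critical section
turn = 0               # whose turn it is to yield
counter = 0            # shared variable protected by Peterson's lock
N = 2000               # critical-section entries per thread

def worker(i):
    global counter, turn
    other = 1 - i
    for _ in range(N):
        # entry section
        flag[i] = True
        turn = other                       # politely let the other go first
        while flag[other] and turn == other:
            pass                           # busy wait
        # critical section
        counter += 1
        # exit section
        flag[i] = False

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4000: no updates were lost
```

Without the entry/exit sections, the two `counter += 1` updates could interleave and lose increments, exactly as in the producer/consumer example earlier.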

10.1 File Concept Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

                                                                      File Attributes

1. Name - name of the file, for user understanding.
2. Identifier - used by the OS to identify a file (e.g., inode number).
3. Type - text files, executable files, binary files, etc.
4. Location - location of the file on the hard drive.
5. Size - current size of the file.
6. Protection - controls who can read, write, or execute the file.
7. Time, date, and user identification - gives information on when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier locates the other file attributes. The directory structure is kept on disk.

                                                                      File Operations

The file abstract data type supports many common operations: 1. Creating a file: find free space on disk and add an entry to the directory. 2. Writing a file: write data at the current file-position pointer location and update the pointer. 3. Reading a file: read data at the current file-position pointer location and update the pointer. 4. Repositioning within a file (file seek): change the pointer location to a given value. 5. Deleting a file: free the space allocated to the file and remove its entry from the directory. 6. Truncating a file: delete the data and update the file size.

                                                                      Open File Table

Most OSes require that files be opened before access and closed after access. Information about all files currently open in the system is stored in an open file table. This system-wide open file table is maintained by the OS, and information about the files opened by a process is stored in the process's open file table. The open file table has the following information:

1. File pointer - records the current position in the file for the next read or write.
2. File-open count - how many times the file has been opened (simultaneously, by different processes) and not yet closed. When this counter reaches zero, the file can be removed from the table.
3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking: 1. A shared lock is for reading only. 2. An exclusive lock is for writing as well as reading. 3. With an advisory lock, software developers make sure the locks are acquired and released appropriately. 4. With a mandatory lock, the OS makes sure the locks are acquired and released appropriately. 5. UNIX uses advisory locks and Windows uses mandatory locks.

                                                                      File Types

Implementing file types: a common technique is to include the type as part of the filename. The name is split into 2 parts:

                                                                      1 File Name 2 Extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on it. When a user opens Microsoft Word, clicks on the File menu, and chooses the Open option, the user may specify only the filename without the extension; the application looks for a file with the given name and the extension it expects. Hence extensions are hints to applications, not enforced by the operating system. UNIX systems use a crude magic number stored at the beginning of the file to indicate the type of the file. In the Mac OS X operating system, a file produced by a word processor has the word processor's name recorded as its creator; when the user double clicks on the file, Word opens it. Some common file types are as shown below.
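Splitting a name into its two parts at the final period can be sketched as follows (the example filename is illustrative):

```python
import os.path

# The extension is a naming convention interpreted by applications,
# not a type enforced by the operating system.
name = "report.docx"                 # hypothetical filename
base, ext = os.path.splitext(name)
print(base, ext)                     # report .docx
```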

File Structure Some files contain an internal structure which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and what the location of the first instruction is.

If the OS supports many file formats, it increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, where to find the first executable statement, etc.).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

Internal File Structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units such as a single byte, a data record, or a structure. The number of logical units which fit into one physical block determines the packing. If a student file is organized internally as records, each record takes 64 bytes, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, so 8 records can be stored per block. Internal fragmentation may occur due to this.
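The packing arithmetic above, including the internal fragmentation that appears when records do not divide the block evenly (the 60-byte record size is an extra illustrative case):

```python
BLOCK_SIZE = 512  # physical block size in bytes

def packing(record_size, block_size=BLOCK_SIZE):
    """Records per block, and bytes wasted per block (internal fragmentation)."""
    per_block = block_size // record_size
    wasted = block_size - per_block * record_size
    return per_block, wasted

print(packing(64))  # (8, 0): 8 records fit exactly, no waste
print(packing(60))  # (8, 32): 8 records fit, 32 bytes wasted per block
```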

10.2 Access Methods

The information in a file can be accessed in the following ways: 1. Sequential access 2. Direct access (or relative access)

1. Sequential access: records are read and written sequentially, one record after another; we cannot randomly access an arbitrary record.

A sequential access file emulates magnetic tape operation and generally supports the following operations:
- read next: read a record and advance to the next position.
- write next: write a record and advance to the next position.
- rewind.
- skip n records: may or may not be supported; n may be limited to positive numbers or to +/- 1.

2. Direct access (or relative access): a file is made up of fixed-length records so that we can randomly access any record. There is no restriction on the order of reading or writing. Direct access files are useful when we need to access large amounts of data randomly, e.g., a database.

Operations supported (n is the relative block number, i.e., relative to the beginning of the file):

1. read n - read record number n.
2. write n - write record number n.
3. jump to record n - n could be 0 or the end of the file.
4. query current record - used to return to this record later.
5. Sequential access can easily be emulated using direct access; the inverse is complicated and inefficient.
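Point 5 can be sketched by keeping a current-position pointer on top of a direct-access read (the in-memory list stands in for on-disk records):

```python
class SequentialOverDirect:
    """Emulate sequential access on top of a direct-access file."""

    def __init__(self, records):
        self.records = records   # direct-access storage: record n is records[n]
        self.pos = 0             # current file-position pointer

    def read_next(self):
        rec = self.records[self.pos]  # direct read of record number pos
        self.pos += 1                 # advance, as a tape would
        return rec

    def rewind(self):
        self.pos = 0

f = SequentialOverDirect(["rec0", "rec1", "rec2"])
print(f.read_next(), f.read_next())  # rec0 rec1
```

The inverse direction is harder because a purely sequential device can only reach record n by reading past the n records before it.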

3. Other access methods can be built on top of the direct access method. We construct an index for the file, which contains pointers to the various blocks. To search the file, we first search the index and then use the pointer to access the file directly.

If a student file has 120,000 records sorted by student id, we can create an index holding the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120,000 records = 120000/64 = 1875 ≈ 2000 blocks, so the index has about 2000 entries. The index is kept in main memory so that searching becomes faster.

If the index itself becomes too large, we create an index for the index file (e.g., a B-tree).
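The index sizing above as a quick calculation (ceiling division guards the general case where the records do not divide evenly into blocks):

```python
import math

def index_entries(num_records, records_per_block):
    """One index entry (the first key in each block) per data block."""
    return math.ceil(num_records / records_per_block)

print(index_entries(120_000, 64))  # 1875 blocks -> 1875 index entries (roughly 2000)
```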

Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

The three major methods of allocating disk space are:

1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file indicates the address of the starting block (i.e., b) and the length (n).
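The logical-to-physical mapping for contiguous allocation is a single addition; a sketch with a bounds check (the directory values are illustrative):

```python
def contiguous_block(start, length, i):
    """Physical block of logical block i for a file at (start, length)."""
    if not 0 <= i < length:
        raise IndexError("logical block outside the file")
    return start + i  # direct access: logical block i lives at b + i

# directory entry: file starts at block b = 14 and is n = 3 blocks long
print(contiguous_block(14, 3, 0))  # 14
print(contiguous_block(14, 3, 2))  # 16
```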

                                                                      Advantages

1. Direct access to block i is easy: we find it directly as block b + i. Since the file is stored sequentially, sequential access is also easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

                                                                      Disadvantages

1. It is difficult to find space for a new file (or to add more data to an existing file). First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire space may not be used by the file at the beginning, which leads to wasted space.

If the size of the file is not known in advance, then: (i) if we allocate too little space, the file cannot be extended; the solution is to find a larger hole, copy the contents of the file to the new space, and release the previous space, which works as long as such space exists but is time consuming; (ii) if we overestimate the amount of space needed, much of the space goes unused, leading to internal fragmentation.

To minimize these drawbacks, some OSes use a modified contiguous allocation scheme. A contiguous chunk of blocks is allocated at first; if the file needs more blocks, another chunk of contiguous space, known as an extent, is added. The directory entry then records a location and a block count, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2. Linked allocation: each file is a linked list of disk blocks, and the blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512 - 4 = 508 bytes. To create a new file, we simply create a new entry in the directory; the pointer is nil for an empty file, and the size field of an empty file is 0.

To add more data to an existing file, a free block is found, the data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.
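Reading the i-th block therefore means chasing i pointers from the head; a sketch using a dict as the stand-in disk, with the block numbers from the example above:

```python
# next_block[b] is the pointer stored in block b; None marks end of file.
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: None}  # the 5-block example file

def nth_block(first, i):
    """Follow i pointers from the first block: i disk reads to reach block i."""
    b = first
    for _ in range(i):
        b = next_block[b]  # one disk read per hop
    return b

print(nth_block(9, 0))  # 9
print(nth_block(9, 4))  # 25
```

This is exactly why linked allocation is poor for direct access: reaching block i costs i reads.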

Advantages: 1. It is easy to find space for a new file (or to add more data to an existing file). 2. The size of the file need not be declared when the file is created. 3. There is no external fragmentation.

Disadvantages: 1. Seek time is higher. 2. The number of disk seeks required is higher. 3. Pointers use a noticeable percentage of the file's disk space. 4. It is inefficient for direct-access files: to access the i-th record, i disk reads are required. 5. It is not reliable: pointers can be lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk access time, and reduces the space wasted on pointers, since fewer pointers are needed. Free-list management also becomes simpler. The disadvantage is internal fragmentation: more space is wasted when a cluster is only partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the file-allocation table (FAT). A section of disk at the beginning of each volume contains the table, which has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the entry for the last block holds a special end-of-file value, and an unused block is indicated by a table value of 0.

                                                                      Advantages

1 Easy to find space for a new file or to add more data to an existing file
2 Random access time is improved

Disadvantage: the number of disk seeks is more. The disk head must first move to the FAT to find the location of a block, and then move to the location of the block itself.
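The FAT chaining described above can be sketched in Python. This is only an illustration of the data structure, not a real on-disk FAT layout; the EOF sentinel value is an assumption.

```python
# Minimal sketch of FAT-style chaining.
# fat[i] holds the number of the next block of the file; 0 marks an unused
# block, and a special end-of-file value terminates the chain.

EOF = -1  # assumed sentinel for "end of file"

def read_chain(fat, start):
    """Follow the FAT from a file's first block; return block numbers in order."""
    blocks = []
    b = start
    while b != EOF:
        blocks.append(b)
        b = fat[b]
    return blocks

# A file occupying blocks 9 -> 16 -> 1 -> 10 -> 25 (the example from linked allocation)
fat = [0] * 32
fat[9], fat[16], fat[1], fat[10], fat[25] = 16, 1, 10, 25, EOF

print(read_chain(fat, 9))  # [9, 16, 1, 10, 25]
```

Note that each lookup here touches only the table, which is why keeping the FAT cached in memory makes random access cheap.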

                                                                      3 Indexed allocation

Each file has its own index block, which is an array of disk-block addresses. The ith entry in the index block points to the ith block of the file.

                                                                      The directory contains the address of the index block

                                                                      When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found and the data is written to it. The address of the block is put in the index block.

                                                                      Advantages

1 Easy to find space for a new file or to add more data to an existing file
2 Supports direct (random) access efficiently
3 No external fragmentation

Disadvantages

1 Seek time is more
2 Number of disk seeks required is more
3 Not reliable
4 The index block must be kept in memory. If this memory is not available, then we have to read the index block and then the desired data block, i.e., 2 disk accesses are required. To access a block near the end of the file, we may need to read all the index blocks before reading the needed data block.

5 The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

Various mechanisms are used for the size of the index block:

1 Linked scheme: the size of an index block is one disk block. An index block contains a small header holding the name of the file and a set of the first 100 disk-block addresses. For large files, the last address points to another index block.

2 Multilevel index: to access a block, the OS uses a first-level index block to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

                                                                      3 combined scheme

For example, say there are 15 pointers of the index block in the file's inode. The first 12 of these pointers hold addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing addresses of the file's data blocks.

The second points to a double indirect block, and the third points to a triple indirect block, as shown below.
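A rough capacity calculation for this combined scheme, assuming 4 KB blocks and 4-byte block addresses (these parameters are assumptions for illustration, not from the notes):

```python
# Maximum file size under the combined (inode) scheme:
# 12 direct pointers + single + double + triple indirect blocks.

block = 4096                         # assumed block size: 4 KB
ptrs_per_block = block // 4          # 1024 four-byte addresses fit in one index block

direct = 12 * block                  # data reachable via the 12 direct pointers
single = ptrs_per_block * block      # via the single indirect block
double = ptrs_per_block ** 2 * block # via the double indirect block
triple = ptrs_per_block ** 3 * block # via the triple indirect block

total = direct + single + double + triple
print(total, "bytes, i.e. about", total // 2**40, "TiB")
```

With these parameters the triple indirect block dominates, giving a maximum file size of roughly 4 TiB.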

                                                                      Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                                                      The free space list can be implemented as

1 Bit vector (or bit map)
2 Linked list
3 Grouping
4 Counting
5 Space maps

1 Bit vector (or bit map): Each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. The free-space bit map would be 001111001000.

Advantages: simple, and makes it easy to find contiguous free blocks for a file.

                                                                      Disadvantages

1 The bit map requires extra space
2 The bit map is inefficient if kept on disk. Bit maps of smaller disks can be kept in main memory, but it may not be possible to keep the bit map of a larger disk in main memory
3 The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit-map updates, because those data blocks are scattered all over the disk
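The bit-map scheme can be sketched in a few lines of Python, using the convention above (bit = 1 means free):

```python
# Sketch of a free-space bit map: bit i is 1 when block i is free.

def first_free(bits):
    """Return the number of the first free block, or -1 if none."""
    for i, b in enumerate(bits):
        if b == 1:
            return i
    return -1

def allocate(bits, i):
    bits[i] = 0  # mark block i as in use

# Blocks 2, 3, 4, 5 and 8 free, as in the example above
bitmap = [0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0]
i = first_free(bitmap)            # 2
allocate(bitmap, i)
print(i, first_free(bitmap))      # 2 3
```

Real implementations pack the bits into machine words so a whole word of zeros (no free blocks) can be skipped in one comparison.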

2 Linked list: The free-space list is implemented by linking all the free disk blocks together, with a pointer to the first free block stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: cannot find contiguous space easily; to traverse the free-space list we must read each block, which requires more IO time.

3 Grouping: The first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last block contains the addresses of another n free blocks, and so on.
Advantage: a large number of free blocks can be found quickly.
4 Counting: The free-space list is maintained by keeping the address of the first free block and the number (n) of free contiguous blocks that follow it. Each entry in the free-space list thus consists of a disk address and a count. Each entry requires more space, and the list is shorter only if the count is generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion and search.
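The counting representation can be sketched by converting a set of free blocks into (first block, count) runs:

```python
# Sketch of the "counting" free-space representation: runs of contiguous
# free blocks stored as (first block, count) pairs.

def to_runs(free_blocks):
    runs = []
    for b in sorted(free_blocks):
        if runs and b == runs[-1][0] + runs[-1][1]:
            first, n = runs[-1]
            runs[-1] = (first, n + 1)   # block extends the current run
        else:
            runs.append((b, 1))          # block starts a new run
    return runs

print(to_runs({2, 3, 4, 5, 8}))  # [(2, 4), (8, 1)]
```

For the running example (blocks 2, 3, 4, 5 and 8 free), five bit-map entries collapse to two runs; the saving grows with the size of the contiguous regions.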

5 Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Synchronization Hardware
The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are
1 the TestAndSet() hardware instruction
2 the Swap() instruction

                                                                      1 Test and Set instruction is as shown

Solution using TestAndSet: the shared variable lock is initialized to false.

                                                                      2 Swap() instruction definition is as follows

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately do not guarantee bounded waiting. If multiple processes are trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn in the critical section. (Since there is no guarantee as to the relative speeds of the processes, a very fast process could theoretically release the lock, whip through its remainder section, and re-lock the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

                                                                      Below figure illustrates a solution using test-and-set that satisfies bounded waiting using two shared data structures boolean lock and boolean waiting[ N ] where N is the number of processes in contention for critical sections

                                                                      Bounded-waiting mutual exclusion with TestAndSet( )

On exit, a process first looks, in order (starting with the next process on the list), for a process that has been waiting; if it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting a specific process while continuing to block all the others.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1 wait() 2 signal()

wait(S)
{
    while (S <= 0)
        ; // do nothing
    S--;
}

signal(S)
{
    S++;
}

                                                                      Each process that wishes to use a resource performs wait() operation When a process releases a resource it performs signal() operation

Semaphore Usage
1 Binary semaphores (or mutex locks) can take only the values 0 or 1

2 Counting semaphores can take any integer value. A counting semaphore is used to count the remaining resources: the counter is initialized to the number of resources available. Whenever the counting semaphore is greater than 0, a process can enter a critical section and use one of the resources; when the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal() call.
For example, suppose there are 3 resources and 4 processes, so the counting semaphore = 3.
If process P1 requires a resource, it performs a wait() operation; now the counting semaphore = 2.
If process P2 requires a resource, it performs a wait() operation; now the counting semaphore = 1.
If process P3 requires a resource, it performs a wait() operation; now the counting semaphore = 0.
If process P4 requires a resource, it performs a wait() operation and must wait until a resource is available.
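The four-process walkthrough above can be run directly, using Python's threading.Semaphore as a stand-in for wait()/signal() (acquire corresponds to wait, release to signal):

```python
# 3 resources, 4 processes, sketched with a counting semaphore.
import threading

sem = threading.Semaphore(3)              # counting semaphore = 3

p1 = sem.acquire(blocking=False)          # P1: succeeds, count -> 2
p2 = sem.acquire(blocking=False)          # P2: succeeds, count -> 1
p3 = sem.acquire(blocking=False)          # P3: succeeds, count -> 0
p4 = sem.acquire(blocking=False)          # P4: fails; a real wait() would block here

sem.release()                             # some process signals a resource free
p4_retry = sem.acquire(blocking=False)    # now P4 succeeds

print(p1, p2, p3, p4, p4_retry)  # True True True False True
```

Non-blocking acquire is used here only so the walkthrough terminates; a blocking acquire reproduces the actual wait() behaviour.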

                                                                      3 Semaphores can also be used to synchronize certain operations

Semaphore Implementation: When a process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process spins while waiting.
Semaphores can also enforce ordering between processes. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

                                                                      First we create a semaphore named synch that is shared by the two processes and initialize it to zero Then in process P1 we insert the code

S1;
signal(synch);

and in process P2 we insert the code

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal. Each semaphore has an integer value and maintains a queue of waiting processes.
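The S1-before-S2 ordering above can be demonstrated with a semaphore initialized to zero:

```python
# Enforcing that P1's statement S1 runs before P2's statement S2.
import threading

synch = threading.Semaphore(0)   # initialized to 0, as in the text
order = []

def p1():
    order.append("S1")           # S1
    synch.release()              # signal(synch)

def p2():
    synch.acquire()              # wait(synch) -- blocks until P1 has signalled
    order.append("S2")           # S2

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start()                       # started first, but must wait
t1.start()
t1.join(); t2.join()
print(order)  # ['S1', 'S2']
```

Even though P2 is started first, it cannot append "S2" until P1 performs the signal, so the order is always S1 then S2.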

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; the blocked process is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue to the ready state).

Problems due to semaphores
1 Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can be freed only by one of the other (blocked) processes, as shown below.

2 Starvation: one or more processes is blocked forever and never gets a chance to enter its critical section. This can happen, for example, if we do not specify the algorithm for adding processes to the waiting queue, or for selecting one to be removed by the signal() call: if a LIFO queue is chosen, the first process that starts waiting may never get a turn.

3 Priority inversion: Let processes L, M and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it pre-empts L, and M runs; thus M runs before H even though M has lower priority than H. To avoid this, L temporarily inherits the priority of H (priority inheritance), so that M cannot pre-empt L.

Q) Classical problems on synchronization

1 The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2 The Readers-Writers Problem
3 The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1 The Bounded-Buffer Problem
Consider
n buffers, each of which holds one item
a producer process, which creates the items (1 at a time)
a consumer process, which processes them (1 at a time)

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is needed. For that we use 3 semaphores:

1 empty = n (counts the number of empty buffers)
2 full = 0 (counts the number of filled buffers)
3 mutex = 1 (provides mutual exclusion for access to the buffer)

Producer process

do {
    ...
    produce an item
    ...
    wait(empty);
    wait(mutex);
    ...
    add the item to the buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process

do {
    wait(full);
    wait(mutex);
    ...
    remove an item from the buffer
    ...
    signal(mutex);
    signal(empty);
    ...
    consume the item
    ...
} while (1);
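The bounded-buffer pseudocode above can be run as a sketch with n = 3 buffers, using Python semaphores for empty, full and mutex:

```python
# Runnable sketch of the bounded-buffer (producer-consumer) solution.
import threading
from collections import deque

n = 3
buffer = deque()
empty = threading.Semaphore(n)   # counts empty buffer slots
full = threading.Semaphore(0)    # counts filled buffer slots
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer

consumed = []

def producer(items):
    for item in items:
        empty.acquire()          # wait(empty)
        mutex.acquire()          # wait(mutex)
        buffer.append(item)      # add the item to the buffer
        mutex.release()          # signal(mutex)
        full.release()           # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()           # wait(full)
        mutex.acquire()          # wait(mutex)
        consumed.append(buffer.popleft())  # remove an item from the buffer
        mutex.release()          # signal(mutex)
        empty.release()          # signal(empty)

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()
print(consumed)  # items arrive in FIFO order: [0, 1, 2, ..., 9]
```

The empty semaphore stops the producer when all 3 slots are full, and the full semaphore stops the consumer when the buffer is empty, so neither side ever over- or under-runs the buffer.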

2 The Readers-Writers Problem: A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (i.e., it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: a new reader need not wait just because a writer is waiting
writers priority: if a writer is waiting to access the data, no new readers may start reading

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses an integer counter and 2 semaphores:
1 readcount = 0 (an integer counting the number of processes currently reading the shared data)
2 mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3 wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process

wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
...
reading is performed
...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

Writer process

do {
    wait(wrt);
    ...
    writing is performed
    ...
    signal(wrt);
} while (TRUE);
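The reader entry/exit protocol above can be sketched in Python; the check at the end shows that a writer is locked out while any reader is active:

```python
# Sketch of the readers-priority protocol: first reader locks out writers,
# last reader lets them back in.
import threading

readcount = 0
mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # exclusive access for writers

def start_read():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()

def end_read():
    global readcount
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()            # last reader lets writers in
    mutex.release()

start_read()
start_read()                                  # two readers share the data
writer_ok = wrt.acquire(blocking=False)       # a writer must wait: False
end_read()
end_read()
writer_ok_after = wrt.acquire(blocking=False) # no readers left: True
print(writer_ok, writer_ok_after)  # False True
```

A writer simply brackets its writing with wait(wrt)/signal(wrt); readers pay the wrt cost only on first entry and last exit.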

3 The Dining-Philosophers Problem: n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5]; // all values initially 1

Philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    ...
    eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    ...
    think
    ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (for example, if every philosopher picks up his left chopstick at the same moment).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and IO facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP) all CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, and so on. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint.

                                                                      Two kinds of real-time systems have evolved

1 A hard real-time system can guarantee that the response requirement will be met under all conditions.
2 A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples:

                                                                      Digital audio or multimedia systems Digital telephones

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e., decision-making is spread across the different computers in the system, so that failure of an individual computer or of the network does not cripple operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. Users may not know where their programs are run or where their files are located; all of that is handled automatically by the operating system.

                                                                      Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

                                                                      DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location Modern desktop systems support multiprogramming Common examples are Linux FreeBSD Windows 8 and the Macintosh operating system

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. Handheld systems have very limited memory.


CLUSTERED SYSTEMS: A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                                                                      Q) Modern operating system

New developments in hardware, applications and security threats have led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client-server computing.

Developments in modern operating systems can be categorized as follows:

1 Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and is well suited to a distributed environment.

2 Multithreading: a process is divided into threads that can run concurrently. Useful for applications that perform a number of independent tasks; for example, a database server that listens for and processes many client requests.

3 Symmetric multiprocessing: since systems now often contain multiple microprocessors, an SMP operating system provides greater efficiency when multiple processors are present. The processors share the same main memory and IO facilities, and all processors can perform the same functions (hence the term symmetric).

                                                                      SMP schedules processes or threads across all of the processors

                                                                      Advantages

1 Increased performance: if some portion of a program can run in parallel, performance increases
2 Increased availability: since all processors can perform the same tasks, if one processor fails the others continue working
3 Incremental growth: a new processor can be added to increase performance
4 Scaling: cost can be made proportional to the number of processors used

4 Distributed operating systems: distributed systems allow applications to run on several processors at the same time.

5 Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux Operating System
Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system, and its main design goals are speed, efficiency and standardization.
Components of a Linux System


The Linux operating system has primarily three components, as shown in the diagram below:

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system, interacts directly with the hardware, and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs used by application programs to access the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks, such as updating log files and accepting login requests from terminals.

Kernel Modules: sections of kernel code that can be compiled, loaded and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system or a networking protocol.

There are three components to Linux module support:
1 Module management: supports loading modules into memory and lets them talk to the rest of the kernel
2 Driver registration: allows modules to inform the rest of the kernel that a new driver has become available
3 Conflict resolution: protects the resources reserved by one driver from accidental use by another

Kernel Mode vs User Mode
Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require context switches; hence it is very efficient and fast. The kernel runs each process and provides system services and protected access to hardware for processes.
Code that is not required to run in kernel mode lives in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code; they use the system libraries to invoke kernel functions for low-level tasks.
Basic Features
The following are some of the important features of the Linux operating system:

Portable - Portability means software works on different types of hardware in the same way. The Linux kernel and application programs support installation on almost any kind of hardware platform.

Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system: multiple users can access system resources like memory, RAM and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system: multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system, perform various types of operations, call application programs, and so on.


Security - Linux provides user security via authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture: the Linux system architecture consists of the following layers:

Hardware layer - Hardware consists of all peripheral devices (RAM, HDD, CPU, etc).

Kernel - Core component of the operating system; interacts directly with hardware and provides low-level services to upper-layer components.

Shell - An interface to the kernel. Takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP operating system

1. Extensibility - layered architecture

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.

2. Portability - XP can be moved from one hardware architecture to another with relatively few changes.

Written in C and C++. Processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.
4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
5. Performance - XP subsystems can communicate with one another via high-performance message passing.

Preemption of low-priority threads enables the system to respond quickly to external events. Designed for symmetric multiprocessing.

6. International support - supports different locales via the national language support (NLS) API.

XP Architecture
1. Layered system of modules
2. Protected mode - hardware abstraction layer (HAL), kernel, executive
3. User mode - collection of subsystems

a. Environmental subsystems emulate different operating systems. b. Protection subsystems provide security functions.

                                                                      XP architecture


Q) Windows network operating system. Example: Windows 2000 operating system.

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.






10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

                                                                        File Attributes

1. Name - name of the file, for user understanding.
2. Identifier - used by the OS to identify a file (e.g., an inode number).
3. Type - text files, executable files, binary files, etc.
4. Location - location of the file on the hard drive.
5. Size - current size of the file.
6. Protection - controls who can read, write, or execute the file.
7. Time, date, and user identification - records when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier locates the other file attributes. The directory structure is kept on disk.
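As a sketch of this idea (the struct layout, field sizes, and names here are illustrative, not any real on-disk format), a directory entry pairing a name with an identifier might look like this in C:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NAME_MAX_LEN 28

/* Hypothetical directory entry: just the name and a unique identifier
 * (e.g. an inode number). All other attributes (size, protection,
 * timestamps) live in the structure the identifier points to. */
struct dir_entry {
    uint32_t inode;              /* unique identifier; 0 means "no entry" */
    char     name[NAME_MAX_LEN]; /* file name, for user understanding */
};

/* Look up a name in an in-memory directory; return the inode number,
 * or 0 if the name is not present. */
uint32_t dir_lookup(const struct dir_entry *dir, int n, const char *name)
{
    for (int i = 0; i < n; i++)
        if (dir[i].inode != 0 && strcmp(dir[i].name, name) == 0)
            return dir[i].inode;
    return 0;
}
```

A real file system would of course store these entries on disk and cache them, but the split — name plus identifier in the directory, everything else elsewhere — is the point being made above.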

                                                                        File Operations

The file abstract data type supports many common operations:
1. Creating a file - find free space on disk and add an entry to the directory.
2. Writing a file - write data at the current file-position pointer location and update the pointer.
3. Reading a file - read data at the current file-position pointer location and update the pointer.
4. Repositioning within a file (file seek) - change the pointer location to a given value.
5. Deleting a file - free the space allocated to the file and remove its entry from the directory.
6. Truncating a file - delete the data and update the file size.

                                                                        Open File Table

Most operating systems require that files be opened before access and closed after access. Information about all files that are currently open in the system is stored in an open-file table. This system-wide open-file table is maintained by the OS, and information about the files opened by a particular process is stored in that process's open-file table. The open-file table has the following information:

1. File pointer - records the current position in the file for the next read or write.
2. File-open count - how many times the current file has been opened (simultaneously, by different processes) and not yet closed. When this counter reaches zero, the file can be removed from the table.

3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With an advisory lock, the software developers must make sure the locks are acquired and released appropriately; the OS does not enforce them.
4. With a mandatory lock, the OS makes sure the locks are acquired and released appropriately.
5. UNIX uses advisory locks, and Windows uses mandatory locks.
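On UNIX-like systems, advisory locks are available through the POSIX fcntl() interface. A minimal sketch (the helper name lock_whole_file is ours, not part of any API):

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* Acquire an advisory exclusive (write) lock on the whole file.
 * "Advisory" means cooperating processes must all call fcntl();
 * the kernel does not stop a process that simply ignores locks. */
int lock_whole_file(int fd)
{
    struct flock fl = {0};
    fl.l_type   = F_WRLCK;   /* exclusive lock: writing as well as reading */
    fl.l_whence = SEEK_SET;
    fl.l_start  = 0;
    fl.l_len    = 0;         /* length 0 means "to end of file" */
    return fcntl(fd, F_SETLK, &fl);  /* 0 on success, -1 if another process holds it */
}
```

Because the lock is advisory, a second process that opens the same file and never calls fcntl() can still read and write it; the lock only coordinates processes that choose to check.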

                                                                        File Types

Implementing file types: a common technique is to include the type as part of the filename. The name is split into 2 parts:
1. File name
2. Extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of file and the type of operations that can be done on that file. For example, when a user opens Microsoft Word, clicks the File menu, and chooses Open, the user specifies only the filename without the extension; the application looks for a file with the given name and the extension it expects. Extensions are thus hints to applications and are not enforced by the operating system. UNIX systems use a crude magic number stored at the beginning of the file to indicate the type of file. In the Mac OS X operating system, a file produced by a word processor has the word processor's name recorded as its creator; when the user double-clicks the file, that word processor opens it. Some common file types are as shown below.

File Structure: Some files contain an internal structure, which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and the location of the first instruction.

If the OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, where to find the first executable statement, etc).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                                                                        Internal File structure

                                                                        Disk files are accessed in units of physical blocks typically 512 bytes

Internally, files are organized in logical units, such as a single byte, a data record, or a structure.

The number of logical units that fit into one physical block determines its packing. Suppose a student file is organized internally as records, each record takes 64 bytes, and the disk is divided into blocks of 512 bytes. Then packing = 512/64 = 8, i.e., 8 records can be stored per block.

                                                                        Internal fragmentation may occur due to this
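The packing arithmetic above can be captured in two small helpers (the function names are ours; the 512-byte block and 64-byte record figures come from the example):

```c
#include <assert.h>

/* Packing: how many fixed-size logical records fit in one physical block. */
int records_per_block(int block_size, int record_size)
{
    return block_size / record_size;
}

/* Bytes per block lost to internal fragmentation. With 64-byte records
 * in 512-byte blocks there is no waste (64 divides 512 evenly), but
 * e.g. 100-byte records would waste 12 bytes in every block. */
int internal_fragmentation(int block_size, int record_size)
{
    return block_size % record_size;
}
```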

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or relative access)

1. Sequential access: we cannot randomly access any record. Records are read and written sequentially, one record after another.

A sequential-access file emulates magnetic tape operation and generally supports the following operations:
read next - read a record and advance the tape to the next position.
write next - write a record and advance the tape to the next position.
rewind.
skip n records - may or may not be supported; n may be limited to positive numbers or may be limited to +/- 1.

2. Direct access (or relative access): a file is made up of fixed-length records, so we can randomly access any record. There is no restriction on the order of reading or writing.

Direct-access files are useful when we need to access large amounts of data randomly, for example a database.

The operations supported include the following (n is the relative block number, i.e., relative to the beginning of the file):

1. read n - read record number n.
2. write n - write record number n.
3. jump to record n - n could be 0 or the end of the file.
4. query current record - used to return to this record later.
5. Sequential access can be easily emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct-access method. We construct an index for the file, which contains pointers to the various blocks. To search the file, we first access the index and then use the pointer to access the file.

Suppose a student file has 120,000 records sorted by student id. We can create an index consisting of the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120,000 records = 120000/64 ≈ 2000 blocks, so the index has about 2000 entries. The index is kept in main memory so that searching becomes faster.

If the index itself becomes large, we create an index for the index file (e.g., a B-tree).
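A sketch of the index lookup described above: given an in-memory array holding each block's first student id (in sorted order), a binary search picks the single data block that must then be read from disk. The function name and layout are illustrative:

```c
#include <assert.h>

/* Return the index of the block whose id range covers 'key', i.e. the
 * last block whose first id is <= key. Only this one block then needs
 * a disk read; the index itself stays in main memory. */
int find_block(const int *first_ids, int nblocks, int key)
{
    int lo = 0, hi = nblocks - 1;
    while (lo < hi) {
        int mid = (lo + hi + 1) / 2;     /* round up so the loop terminates */
        if (first_ids[mid] <= key)
            lo = mid;                    /* key lies in this block or later */
        else
            hi = mid - 1;                /* key lies in an earlier block */
    }
    return lo;
}
```

With ~2000 index entries this search touches about 11 entries instead of scanning all 2000, and either way it costs a single disk access for the data block.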

                                                                        Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

Three major methods of allocating disk space are:
1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: Each file occupies a set of contiguous blocks on disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1. The directory entry for each file records the address of the starting block (i.e., b) and the length (n).

                                                                        Advantages

1. Direct access is easy: block i of the file can be found directly as disk block b+i. Since the file is stored sequentially, sequential access is also easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.
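The b+i mapping can be written directly; a small sketch, with a bounds check against the length stored in the directory entry:

```c
#include <assert.h>

/* Contiguous allocation: the directory records start block b and
 * length n; logical block i of the file is disk block b + i.
 * Returns -1 if i is outside the file. */
long contiguous_block(long b, long n, long i)
{
    if (i < 0 || i >= n)
        return -1;       /* logical block not in this file */
    return b + i;
}
```

This constant-time address computation is exactly why contiguous allocation supports direct access so cheaply.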

                                                                        Disadvantages

1. It is difficult to find space for a new file (or to add more data to an existing file). First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even if the final size is known, the entire space may not be used by the file at the beginning, which leads to wastage of space.

If the size of the file is not known in advance, then: (i) If we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This is possible only as long as such space exists, and it is time consuming. (ii) If we overestimate the amount of space needed, much of the space goes unused, leading to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme. A set of contiguous blocks is allocated at first; if the file needs more blocks, another set of contiguous blocks, known as an extent, is added. The directory entry then holds the location and a block count, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2. Linked Allocation: Each file is a linked list of disk blocks, and the disk blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512-4 = 508 bytes. To create a new file, we simply create a new entry in the directory; the pointer is nil for an empty file, and the size field of an empty file is 0.

To add more data to an existing file, a free block is found, the data is written to it, and the block is linked to the end of the file. To read a file, we read the blocks by following the pointers from block to block.

Advantages:
1. Easy to find space for a new file or to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is greater.
2. The number of disk seeks required is greater.
3. Pointers consume a noticeable percentage of the file's disk space.
4. Inefficient for direct-access files: to access the ith record, i disk reads are required.
5. Not reliable if pointers are lost or damaged.

The solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks to files. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk access time, and reduces the space wasted on pointers since fewer pointers are needed. Free-list management also becomes simpler. Disadvantage: internal fragmentation; more space is wasted when a cluster is only partially full.

The solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file-allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

Advantages:
1. Easy to find space for a new file or to add more data to an existing file.
2. Random-access time is improved.

Disadvantage: the number of disk seeks is greater. The disk head must move to read the FAT and find the location of a block, then move to the location of the block itself.
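The FAT chaining described above can be sketched as a simple table walk. The block numbers in the test mirror the 9 → 16 → 1 → 10 → 25 example from linked allocation; EOF_MARK is our stand-in for the special end-of-file table value:

```c
#include <assert.h>

#define EOF_MARK -1   /* stand-in for the special end-of-file entry */

/* fat[j] holds the number of the block that follows block j in its
 * file; EOF_MARK ends a chain, and 0 marks an unused block.
 * Returns the number of blocks in the file starting at 'start'. */
int fat_chain_length(const int *fat, int start)
{
    int count = 0;
    for (int b = start; b != EOF_MARK; b = fat[b])
        count++;
    return count;
}
```

Note that every step of the walk reads the FAT, which is why a cached (in-memory) FAT is important for random access.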

                                                                        3 Indexed allocation

Each file has its own index block. The index block is an array of disk-block addresses; the ith entry in the index block points to the ith block of the file.

                                                                        The directory contains the address of the index block

                                                                        When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found and the data is written to it; the address of the block is then put in the index block.

Advantages:
1. Easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

Disadvantages:

1. Seek time is greater.
2. The number of disk seeks required is greater.
3. Not reliable.
4. The index block must be kept in memory. If this memory is not available, we must read the index block and then the desired data block, i.e., 2 disk accesses are required. To access a block near the end of a large file, we may need to read through the index blocks before reading the needed data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

Various mechanisms are used to handle the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header with the name of the file and a set of the first 100 disk-block addresses; the last address contains the address of another index block, for large files.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

3. Combined scheme: For example, say there are 15 pointers in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, an index block containing addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block, as shown below.
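Under this combined scheme, the maximum file size follows directly from the pointer counts. A sketch, assuming 4 KB blocks and 4-byte block addresses (so one index block holds 1024 pointers) — these numbers are illustrative, not tied to a specific file system:

```c
#include <assert.h>

/* Maximum file size with 12 direct pointers plus one single-, one
 * double- and one triple-indirect block, where each index block
 * holds block_size / ptr_size addresses. */
long long max_file_size(long long block_size, long long ptr_size)
{
    long long p = block_size / ptr_size;            /* pointers per index block */
    long long blocks = 12 + p + p * p + p * p * p;  /* direct + 1x + 2x + 3x indirect */
    return blocks * block_size;
}
```

With 4 KB blocks and 4-byte pointers this comes to roughly 4 TB, almost all of it contributed by the triple-indirect tree.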

                                                                        Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                                                        The free space list can be implemented as

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or bit map): Each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000...

Advantages: simple, and it is easy to find contiguous free blocks for files.

                                                                        Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. Bit maps of smaller disks can be kept in main memory, but it may not be possible to keep the bit map of a larger disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.
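A first-free search over such a bit map is a short loop. The bit numbering here is our assumption (most significant bit of each byte first, so the byte sequence reads like the 001111001000 string above):

```c
#include <assert.h>

/* Find the first free block in a bit map where bit value 1 means
 * free. Bit i lives in byte i/8, counted from the byte's most
 * significant bit. Returns -1 if no block is free. */
int first_free_block(const unsigned char *bitmap, int nblocks)
{
    for (int i = 0; i < nblocks; i++)
        if (bitmap[i / 8] & (1 << (7 - i % 8)))
            return i;
    return -1;
}
```

Real implementations speed this up by first skipping whole bytes (or words) equal to 0, since a zero byte contains no free block at all.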

2. Linked list: The free-space list is implemented by linking all the free disk blocks together, with a pointer to the first free block stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on. Advantage: no waste of space. Disadvantage: we cannot find contiguous space easily; to traverse the free-space list we must read each block, which requires substantial I/O time.

3. Grouping: The first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last one contains the addresses of another n free blocks, and so on. Advantage: the addresses of a large number of free blocks can be found quickly.
4. Counting: The free-space list can be maintained by keeping the address of the first free block and the number (n) of free contiguous blocks that follow it.

Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs; each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks. The space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Synchronization Hardware

The hardware solution to synchronization is to provide atomic operations. These operations execute as a single instruction, without interruption. Two such operations are:
1. The TestAndSet() hardware instruction
2. The Swap() instruction

1. The TestAndSet() instruction is defined as shown below.

Solution using TestAndSet: the shared variable lock is initialized to false.

2. The Swap() instruction is defined as follows.

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck of waiting forever for its turn. (Since there is no guarantee about the relative rates of the processes, a very fast process could theoretically release the lock, whip through its remainder section, and re-lock the lock before a slower process got a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The figure below illustrates a solution using TestAndSet that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

                                                                        Bounded-waiting mutual exclusion with TestAndSet( )

On exit, the leaving process first looks, in order (starting with the next process on the list), for a process that is waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process into the critical section while continuing to block all the others.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through two standard atomic operations: 1. wait() 2. signal()

Wait primitive:
wait(S) {
    while (S <= 0)
        ; // do nothing
    S--;
}

Signal primitive:
signal(S) {
    S++;
}

Each process that wishes to use a resource performs a wait() operation; when a process releases a resource, it performs a signal() operation.
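A busy-waiting sketch of these wait()/signal() primitives in Python (a helper lock makes the test-and-decrement atomic, standing in for the atomicity the definition assumes):

```python
import threading

class SpinSemaphore:
    """Busy-waiting semaphore matching the wait()/signal() definitions above."""
    def __init__(self, value):
        self.S = value
        self._atomic = threading.Lock()  # makes test-and-decrement atomic

    def wait(self):
        while True:                      # while (S <= 0) do nothing
            with self._atomic:
                if self.S > 0:
                    self.S -= 1          # S--
                    return

    def signal(self):
        with self._atomic:
            self.S += 1                  # S++

sem = SpinSemaphore(1)
counter = [0]

def worker():
    for _ in range(1000):
        sem.wait()                       # acquire the resource
        counter[0] += 1                  # critical section
        sem.signal()                     # release the resource

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter[0])                        # -> 4000
```

Because every increment of counter happens between wait() and signal(), no updates are lost even with four competing threads.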

Semaphore usage:
1. A binary semaphore (or mutex lock) can take the values 0 or 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore > 0, a process can enter the critical section and use one of the resources. When the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal() call.
For example, suppose there are 3 resources and 4 processes. The value of the counting semaphore = 3.
If process P1 requires a resource, it performs a wait() operation; now the counting semaphore = 2.
If process P2 requires a resource, it performs a wait() operation; now the counting semaphore = 1.
If process P3 requires a resource, it performs a wait() operation; now the counting semaphore = 0.
If process P4 requires a resource, it performs a wait() operation; P4 waits until a resource is available.
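The 3-resources/4-processes walk-through above can be reproduced with Python's built-in counting semaphore; the bookkeeping names (`in_use`, `max_in_use`) are just for illustration:

```python
import threading
import time

resources = threading.Semaphore(3)   # counting semaphore initialized to 3
guard = threading.Lock()             # protects the bookkeeping lists
in_use = []
max_in_use = [0]

def process(name):
    resources.acquire()              # wait(): P4 blocks here until a resource frees
    with guard:
        in_use.append(name)
        max_in_use[0] = max(max_in_use[0], len(in_use))
    time.sleep(0.01)                 # use the resource briefly
    with guard:
        in_use.remove(name)
    resources.release()              # signal(): wakes a blocked process, if any

threads = [threading.Thread(target=process, args=("P%d" % i,)) for i in range(1, 5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_in_use[0] <= 3)            # -> True: never more than 3 holders
```

Whichever three processes acquire first proceed; the fourth sleeps inside acquire() until one of them releases.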

3. Semaphores can also be used to synchronize certain operations.

Semaphore implementation: when a process is in its critical section, any other process that tries to enter the critical section must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spinlock, as the process spins while waiting.

Semaphores can also enforce ordering. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First we create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code:

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal(). Each semaphore has an integer value and maintains a queue of waiting processes.
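A runnable sketch of this ordering idiom, using Python's semaphore initialized to zero:

```python
import threading

synch = threading.Semaphore(0)  # initialized to zero, as in the text
trace = []

def P1():
    trace.append("S1")          # statement S1
    synch.release()             # signal(synch)

def P2():
    synch.acquire()             # wait(synch): blocks until P1 has run S1
    trace.append("S2")          # statement S2

t2 = threading.Thread(target=P2)
t1 = threading.Thread(target=P1)
t2.start()                      # start P2 first to show that it really waits
t1.start()
t1.join()
t2.join()
print(trace)                    # -> ['S1', 'S2']
```

Even though P2 starts first, it cannot append "S2" until P1 has appended "S1" and signalled.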

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; it is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue back to the ready state).

Problems due to semaphores:

1. Deadlock: deadlocks occur when multiple processes are blocked, each waiting for a resource that can be freed only by one of the other (blocked) processes, as shown below.

2. Starvation: one or more processes remain blocked forever and never get a chance to enter the critical section. For example, this can happen if we do not specify the algorithms for adding processes to the waiting queue, or for selecting the one to be removed by the signal() call. If a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it preempts process L, and M runs instead. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority so that M does not preempt it.
=====================================================================
Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem. Consider:
- n buffers, each holding one item;
- a producer process, which creates items (one at a time);
- a consumer process, which consumes them (one at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is needed. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (the mutex semaphore provides mutual exclusion for access to the buffer)

Producer process:
do {
    ...
    produce an item
    ...
    wait(empty);
    wait(mutex);
    ...
    add the item to the buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:
do {
    wait(full);
    wait(mutex);
    ...
    remove an item from the buffer
    ...
    signal(mutex);
    signal(empty);
    ...
    consume the item
    ...
} while (1);
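A runnable sketch of this solution with n = 5 slots and a single producer and consumer (the buffer indices live in one-element lists only so the nested functions can update them):

```python
import threading

n = 5
buf = [None] * n
in_ptr, out_ptr = [0], [0]
empty = threading.Semaphore(n)   # counts empty buffer slots
full = threading.Semaphore(0)    # counts filled buffer slots
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer
consumed = []

def producer():
    for item in range(20):       # produce an item
        empty.acquire()          # wait(empty)
        mutex.acquire()          # wait(mutex)
        buf[in_ptr[0]] = item    # add the item to the buffer
        in_ptr[0] = (in_ptr[0] + 1) % n
        mutex.release()          # signal(mutex)
        full.release()           # signal(full)

def consumer():
    for _ in range(20):
        full.acquire()           # wait(full)
        mutex.acquire()          # wait(mutex)
        item = buf[out_ptr[0]]   # remove an item from the buffer
        out_ptr[0] = (out_ptr[0] + 1) % n
        mutex.release()          # signal(mutex)
        empty.release()          # signal(empty)
        consumed.append(item)    # consume the item

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(20)))  # -> True
```

The producer stalls on empty when it gets 5 items ahead, and the consumer stalls on full when the buffer drains, so the 20 items arrive intact and in order.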

2. The Readers-Writers Problem. A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (i.e., it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

- Readers priority: a new reader need not wait just because a writer is waiting.
- Writers priority: if a writer is waiting to access the data, no new readers may start reading.

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses:
1. readcount = 0 (an integer that counts the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore used for mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
...
reading is performed
...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

Writer process:
do {
    wait(wrt);
    ...
    writing is performed
    ...
    signal(wrt);
} while (TRUE);
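A runnable sketch of the readers-priority solution, with one writer and three readers:

```python
import threading

mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # exclusive access for writers
readcount = [0]
shared = [0]
reads_seen = []

def writer(value):
    wrt.acquire()                # wait(wrt)
    shared[0] = value            # writing is performed
    wrt.release()                # signal(wrt)

def reader():
    mutex.acquire()              # wait(mutex)
    readcount[0] += 1
    if readcount[0] == 1:        # first reader locks writers out
        wrt.acquire()            # wait(wrt)
    mutex.release()              # signal(mutex)
    reads_seen.append(shared[0]) # reading is performed
    mutex.acquire()              # wait(mutex)
    readcount[0] -= 1
    if readcount[0] == 0:        # last reader lets writers back in
        wrt.release()            # signal(wrt)
    mutex.release()              # signal(mutex)

threads = [threading.Thread(target=writer, args=(1,))]
threads += [threading.Thread(target=reader) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Readers overlap freely with one another, but each read observes either the value before the write (0) or after it (1), never a torn state, because the writer holds wrt while writing.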

3. The Dining-Philosophers Problem. n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems; it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5]; // initially all values are 1

Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    ...
    eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    ...
    think
    ...

} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it can create a deadlock (for example, if every philosopher simultaneously picks up his left chopstick).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: in asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. Unlike an SMP system, these systems do not have the flexibility to assign processes to the least-loaded CPU.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within a time constraint.

Two kinds of real-time systems have evolved:

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples: digital audio and multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control; i.e., it spreads its decision-making actions across the computers in the system, so that the failure of individual computers or of the network does not cripple its operation.

A distributed operating system appears as a uniprocessor system even though it runs on multiple processors. Users may not know where their programs are being run or where their files are located; all of that is handled automatically by the operating system.

Distributed systems allow applications to run on several processors at the same time, thus requiring more complex processor-scheduling algorithms.

DESKTOP SYSTEMS: a desktop system is a personal computer (PC) system used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): two of the most popular operating systems for handhelds are Symbian OS and Android. Handheld systems have very little memory.


CLUSTERED SYSTEMS: a computer cluster consists of a set of loosely connected computers that work together so that, in many respects, they can be viewed as a single system. The components of a cluster are usually connected through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

Q) Modern operating systems

New developments in hardware, in applications, and in security threats have led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, Internet and Web access, and client-server computing.

Developments in modern operating systems can be categorized as follows:

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and it is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as systems nowadays often contain multiple processors, an SMP operating system provides greater efficiency when several processors are available. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

SMP schedules processes or threads across all of the processors.

                                                                        Advantages

1. Increased performance: if some portion of a program can be run in parallel, the performance of an SMP system increases.
2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can scale with the number of processors used.

4. Distributed operating systems: distributed systems allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux operating system. Linux is open source: its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. Linux is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization.

Components of a Linux System


The Linux operating system has primarily three components, as shown in the diagram below:

Kernel - the kernel is the core part of Linux. It is responsible for all major activities of the operating system and interacts directly with the hardware, hiding low-level hardware details from system and application programs.

System libraries - special functions or programs that application programs use to access the kernel's features.

System utilities - programs responsible for specialized, individual-level tasks such as updating log files and accepting login requests from terminals.

Kernel modules: sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel mode vs user mode: kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require context switches; hence it is very efficient and fast. The kernel runs each process and provides system services to processes, including protected access to hardware. Code that is not required to run in kernel mode lives in the system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code; they use the system libraries to reach kernel functions for low-level tasks.

Basic features: following are some of the important features of the Linux operating system.

Portable - portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on many kinds of hardware platforms.

Open source - Linux source code is freely available, and Linux is a community-based development project. Multiple teams work in collaboration to enhance its capabilities, and it is continuously evolving.

Multi-user - multiple users can access system resources such as memory and application programs at the same time.

Multiprogramming - multiple applications can run at the same time.

Hierarchical file system - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system, perform various kinds of operations, call application programs, etc.


Security - Linux provides user security through authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture: the Linux system architecture consists of the following layers:

Hardware layer - all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - the core component of the operating system; interacts directly with the hardware and provides low-level services to the upper-layer components.

Shell - an interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - utility programs that give the user most of the functionality of an operating system.

Q) Windows XP operating system

1. Extensibility: layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.

2. Portability: XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3. Reliability: XP uses hardware protection for virtual memory and software protection mechanisms for operating-system resources.

4. Compatibility: applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.

5. Performance: XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

6. International support: XP supports different locales via the national language support (NLS) API.

XP architecture:
1. A layered system of modules.
2. Protected mode: hardware abstraction layer (HAL), kernel, executive.
3. User mode: a collection of subsystems.
a. Environmental subsystems emulate different operating systems.
b. Protection subsystems provide security functions.

                                                                        XP architecture


Q) Windows network operating system. Example: the Windows 2000 operating system.

All pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne, Operating System Concepts, Reading: Addison-Wesley.

Visit my blogs: enthusiaststudent.blogspot.in and mtechmessenger.blogspot.in




10.1 File Concept. Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

                                                                          File Attributes

1. Name - the name of the file, for user understanding.
2. Identifier - used by the OS to identify the file (e.g., the inode number).
3. Type - text file, executable file, binary file, etc.
4. Location - the location of the file on the drive.
5. Size - the current size of the file.
6. Protection - controls who can read, write, or execute the file.
7. Time, date, and user identification - record when the file was created, last modified, and last used. This data is used for protection, security, and usage monitoring.

All information about files is kept in the directory structure. A directory entry consists of the file's name and its unique identifier; the identifier in turn locates the other file attributes. The directory structure is kept on disk.

                                                                          File Operations

The file abstract data type supports many common operations:
1. Creating a file: find free space on disk and add an entry to the directory.
2. Writing a file: write data at the current file-position pointer and update the pointer.
3. Reading a file: read data at the current file-position pointer and update the pointer.
4. Repositioning within a file (file seek): change the pointer to a given value.
5. Deleting a file: free the space allocated to the file and remove its directory entry.
6. Truncating a file: delete the data and update the file size.
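These operations map directly onto Python's file API; a sketch (the directory and file name are made up for the example):

```python
import os
import tempfile

# a scratch directory so the example is self-contained
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

f = open(path, "w+")          # 1. create: directory entry added, space found
f.write("hello world")        # 2. write at the current file-position pointer
f.seek(0)                     # 4. reposition (file seek) back to byte 0
data = f.read(5)              # 3. read 5 bytes; the pointer advances to 5
f.truncate(5)                 # 6. truncate: drop the data past byte 5
f.close()

size = os.path.getsize(path)
os.remove(path)               # 5. delete: free the space, remove the entry
print(data, size)             # -> hello 5
```

Note how read and write share one position pointer, exactly as described above: after reading 5 bytes the pointer sits at offset 5, which is where truncate cuts.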

                                                                          Open File Table

Most operating systems require that files be opened before access and closed after access. Information about all files currently open in the system is stored in an open-file table. This system-wide open-file table is maintained by the OS; in addition, each process keeps its own open-file table for the files it has opened. The open-file table holds the following information:

1. File pointer - records the current position in the file for the next read or write.
2. File-open count - how many times the file is currently open (simultaneously, by different processes) and not yet closed. When this counter reaches zero, the file's entry can be removed from the table.
3. Disk location of the file.
4. Access rights.

Some systems provide support for file locking:
1. A shared lock is for reading only.
2. An exclusive lock is for writing as well as reading.
3. With an advisory lock, it is up to software developers to ensure that locks are acquired and released appropriately.
4. With a mandatory lock, the operating system ensures locking integrity.
5. UNIX uses advisory locks; Windows uses mandatory locks.

                                                                          File Types

Implementing file types: a common technique is to include the type as part of the file name. The name is split into two parts:

1. File name
2. Extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the kinds of operations that can be done on it. For example, a user opens Microsoft Word and clicks Open in the File menu; the user specifies only the file name, without the extension, and the application looks for a file with the given name and the extension it expects. In this sense, extensions need not be supported by the operating system itself. UNIX systems use a crude magic number stored at the beginning of the file to indicate the type of the file. In Mac OS X, a file produced by a word processor records the word processor's name as its creator; when the user double-clicks the file, that word processor opens it. Some common file types are shown below.
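A sketch of the name/extension split; the extension-to-type table here is hypothetical, mirroring the kind of entries the common-file-types table contains:

```python
import os

# hypothetical lookup table: an application, not the OS, maps extension to type
FILE_TYPES = {".exe": "executable", ".c": "C source code",
              ".txt": "textual data", ".mp3": "multimedia"}

def file_type(filename):
    _, ext = os.path.splitext(filename)      # split at the last period
    return FILE_TYPES.get(ext.lower(), "unknown")

print(file_type("a.out.EXE"))  # -> executable
print(file_type("notes.txt"))  # -> textual data
print(file_type("README"))     # -> unknown (no extension)
```

Only the last period matters, so a name like a.out.EXE still classifies by its final extension.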

File Structure: some files contain an internal structure which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and what the location of the first instruction is.

If the OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, which it must know how to load, where to find the first executable instruction, and so on).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                                                                          Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes.

Internally, files are organized in logical units such as a single byte, a data record, or a structure.

The number of logical units which fit into one physical block determines its packing. For example, if a student file is organized internally as records, each record takes 64 bytes, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e. 8 records can be stored per block.

                                                                          Internal fragmentation may occur due to this
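The packing arithmetic above can be sketched in a short calculation; the 512-byte block and 64-byte record come from the example, while the 100-byte record is an invented value to show where internal fragmentation appears:

```python
BLOCK_SIZE = 512    # physical block size in bytes (from the example)
RECORD_SIZE = 64    # logical record size in bytes

packing = BLOCK_SIZE // RECORD_SIZE    # records that fit in one block
wasted = BLOCK_SIZE % RECORD_SIZE      # internal fragmentation per block
print(packing, wasted)                 # 8 0

# With 100-byte records the fit is not exact, so bytes are wasted per block:
print(512 // 100, 512 % 100)           # 5 12
```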

10.2 Access Methods

The information in a file can be accessed in the following ways: 1. Sequential access 2. Direct access (or relative access)

1. Sequential access: records are read and written sequentially, one record after another; we cannot randomly access an arbitrary record.

A sequential access file emulates magnetic tape operation and generally supports the following operations:
read next - read a record and advance to the next position
write next - write a record and advance to the next position
rewind
skip n records - may or may not be supported; n may be limited to positive numbers or may be limited to +/- 1

2. Direct access (or relative access): a file is made up of fixed-length records, so we can randomly access any record.

There is no restriction on the order of reading or writing.

Direct access files are useful when we need to access large amounts of data randomly, e.g. a database.

Operations supported include (n is a relative block number, i.e. relative to the beginning of the file):

1. read n - read record number n
2. write n - write record number n
3. jump to record n - n could be 0 or the end of file
4. query current record - used to return to this record later
5. Sequential access can be easily emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct access method. We construct an index for the file which contains pointers to the various blocks. To search the file, we first access the index and then use the pointer to access the file directly.

For example, suppose a student file has 120,000 records sorted by student id. We can create an index consisting of the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120,000 records = 120000/64 = 1875 blocks, so the index has 1875 entries. The index is kept in main memory so that searching becomes faster.

If the index itself becomes too large, we create an index for the index file (e.g. a B-tree).
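The index lookup described above can be sketched in miniature; the student ids here are invented, and the index holds the first id stored in each block:

```python
from bisect import bisect_right

# Hypothetical in-memory index: the first student id stored in each block.
index = [1000, 1064, 1128, 1192]

def block_for(student_id):
    """Pick the disk block whose records may contain student_id."""
    return bisect_right(index, student_id) - 1

print(block_for(1000))  # 0: the first block starts at id 1000
print(block_for(1100))  # 1: ids 1064..1127 live in block 1
```

Only the chosen block then needs to be read from disk.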

                                                                          Q) Allocation Methods

The problem is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                                                                          Three major methods of allocating disk space are

1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file indicates the address of the starting block (i.e. b) and the length (n).

                                                                          Advantages

1. For direct access to block i of a file that starts at block b, we can immediately compute its location as b+i. Since the file is stored sequentially, sequential access is also easy. Hence access is easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.
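A minimal sketch of why direct access is trivial under contiguous allocation; the start block and length are invented values standing in for a directory entry:

```python
start, length = 19, 6   # hypothetical directory entry: blocks 19..24

def disk_block(i):
    """Disk block holding logical block i of the file."""
    if not 0 <= i < length:
        raise IndexError("block out of range")
    return start + i    # direct access is just one addition

print([disk_block(i) for i in range(length)])  # [19, 20, 21, 22, 23, 24]
```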

                                                                          Disadvantages

1. It is difficult to find space for a new file (or to add more data to an existing file). First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even when the final size is known, the entire space may not be used by the file at the beginning, which leads to wastage of space.

If the size of the file is not known in advance, then (i) if we allocate too little space, the file cannot be extended; the solution is to find a larger hole, copy the contents of the file to the new space, and release the previous space, which is possible only as long as such space exists and is also time consuming; (ii) if we overestimate the amount of space needed, much of the space goes unused, which leads to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme: a contiguous chunk of space is allocated at first, and if the file needs more blocks, another chunk of contiguous space, known as an extent, is added. The directory entry then records the location and block count of the initial area plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2. Linked allocation: each file is a linked list of disk blocks, and the disk blocks can be scattered anywhere on the disk. The directory contains a pointer to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512-4 = 508 bytes. To create a new file, we simply create a new entry in the directory; the pointer is nil for an empty file, and the size field of an empty file is 0.

To add more data to an existing file, a free block is found, the data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.

Advantages:
1. Easy to find space for a new file (or to add more data to an existing file).
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is higher.
2. The number of disk seeks required is higher.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct access: to access the ith block, i disk reads are required.
5. Not reliable if pointers are lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk accesses, reduces the space wasted on pointers (as fewer pointers are needed), and also simplifies free-list management. The disadvantage is internal fragmentation: more space is wasted when a cluster is only partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. The entry for a block contains the block number of the next block of the file. The last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                                                                          Advantages

1. Easy to find space for a new file (or to add more data to an existing file).
2. Random access time is improved.

Disadvantage: the number of disk seeks is higher. The disk head must move to read the FAT and find the location of a block, then move to the location of the block itself.
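The FAT chain for the five-block file used in the linked-allocation example (blocks 9, 16, 1, 10, 25) can be sketched as follows; EOF is a stand-in for the special end-of-file table value:

```python
EOF = -1   # stand-in for the special end-of-file table value

# FAT fragment: the entry for block b gives the next block of the file.
fat = {9: 16, 16: 1, 1: 10, 10: 25, 25: EOF}

def chain(start):
    """Follow the FAT from the file's starting block to end of file."""
    blocks, b = [], start
    while b != EOF:
        blocks.append(b)
        b = fat[b]      # next block number comes from the table entry
    return blocks

print(chain(9))  # [9, 16, 1, 10, 25]
```

Note that the pointers live in the table rather than in the data blocks themselves, which is what makes random access faster once the FAT is in memory.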

                                                                          3 Indexed allocation

Each file has its own index block, which is an array of disk block addresses. The ith entry in the index block points to the ith block of the file.

                                                                          The directory contains the address of the index block

When the file is created, all pointers in the index block are set to nil.

To add more data to an existing file, a free block is found and the data is written to it; the address of the block is then put in the index block.

                                                                          Advantages

1. Easy to find space for a new file (or to add more data to an existing file).
2. Supports direct (random) access efficiently.
3. No external fragmentation.

Disadvantages

1. Seek time is higher.
2. The number of disk seeks required is higher.
3. Not reliable.
4. The index block must be kept in memory. If this memory is not available, we have to read the index block and then the desired data block, i.e. 2 disk accesses are required. In the linked index scheme, to access a block near the end of the file we may need to read several index blocks before reaching the needed data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, if we have a file of only one or two blocks, an entire index block is allocated just to store these one or two pointers.

The following mechanisms are used to handle the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header giving the name of the file and a set of the first 100 disk block addresses; the last address is the address of another index block (for large files).

2. Multilevel index: to access a block, the OS uses a first-level index to find a second-level index block, which in turn points to the file block. This approach can be continued to a third or fourth level.

3. Combined scheme

For example, say there are 15 pointers of the index block in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing the addresses of the file's data blocks; the second points to a double indirect block, and the third points to a triple indirect block, as shown below.
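Assuming a 4 KB block and 4-byte disk addresses (both assumptions, not stated in the notes), the maximum file size under this combined scheme works out as:

```python
BLOCK = 4096                  # assumed block size in bytes
PTR = 4                       # assumed size of one disk address
per_block = BLOCK // PTR      # 1024 pointers fit in one index block

direct = 12 * BLOCK                 # 12 direct pointers
single = per_block * BLOCK          # single indirect block
double = per_block ** 2 * BLOCK     # double indirect block
triple = per_block ** 3 * BLOCK     # triple indirect block

print(direct + single + double + triple)  # about 4 TB with these parameters
```

The triple indirect term dominates; with larger blocks or smaller pointers the limit changes accordingly.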

                                                                          Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                                                          The free space list can be implemented as

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000...

Advantages: simple, and it is easy to find contiguous free blocks for a file.

                                                                          Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a smaller disk can be kept in main memory, but it may not be possible to keep the bit map of a larger disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.
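A sketch of bit-map allocation using the free blocks from the example above (2, 3, 4, 5, and 8):

```python
# Bit i is 1 when block i is free: blocks 2, 3, 4, 5 and 8 are free.
bitmap = [0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0]

def allocate():
    """Find the first free block, mark it used, and return its number."""
    for i, bit in enumerate(bitmap):
        if bit:
            bitmap[i] = 0   # clear the bit: block is now allocated
            return i
    return -1               # disk full

print(allocate())  # 2
print(allocate())  # 3
```

Real systems scan the map a word at a time rather than a bit at a time, but the idea is the same.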

2. Linked list: the free-space list is implemented by linking all the free disk blocks together, with a pointer to the first free block stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, block 5 points to block 8, and so on. Advantage: no waste of space. Disadvantage: contiguous space cannot be found easily, and traversing the free-space list requires reading each block, which requires substantial I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last contains the addresses of another n free blocks, and so on. Advantage: the addresses of a large number of free blocks can be found quickly.

4. Counting: the free-space list can be maintained by keeping the address of the first free block and the number (n) of free contiguous blocks that follow it. Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.
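The counting scheme stores (first block, run length) pairs; encoding the free blocks from the earlier example shows when it wins over a plain list:

```python
def to_counting(free_blocks):
    """Encode sorted free block numbers as (first block, count) entries."""
    entries = []
    for b in free_blocks:
        if entries and entries[-1][0] + entries[-1][1] == b:
            entries[-1][1] += 1        # block extends the current run
        else:
            entries.append([b, 1])     # start a new run
    return [tuple(e) for e in entries]

print(to_counting([2, 3, 4, 5, 8]))    # [(2, 4), (8, 1)]
```

Five free blocks compress to two entries here; a badly fragmented disk, with mostly runs of length 1, gains nothing.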

5. Space maps: ZFS divides the disk space into chunks of manageable sizes called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space from a metaslab, it loads the associated space map into memory and updates it. Finally, the free-space list is updated on disk.

Q) Synchronization Hardware

The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. The TestAndSet() hardware instruction
2. The Swap() instruction

1. The TestAndSet() instruction is as shown.

Solution using TestAndSet: the shared variable lock is initialized to false.

2. The Swap() instruction definition is as follows.

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

Both the TestAndSet and Swap instructions satisfy the mutual exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee as to the relative rates of the processes, a very fast process could theoretically release the lock, whip through its remainder section, and re-lock the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The figure below illustrates a solution using test-and-set that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

Bounded-waiting mutual exclusion with TestAndSet()

When a process leaves its critical section, it first scans, in order (starting with the next process on the list), for a process that has been waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby allowing that specific process into the critical section while continuing to block all the others.
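A sketch of the basic (unbounded-waiting) spin lock built on TestAndSet. User code cannot issue the real hardware instruction, so a threading.Lock stands in for the hardware's atomicity guarantee:

```python
import threading

class SpinLock:
    def __init__(self):
        self._atomic = threading.Lock()  # stands in for hardware atomicity
        self.lock = False

    def test_and_set(self):
        """Atomically set lock to True and return its previous value."""
        with self._atomic:
            old = self.lock
            self.lock = True
            return old

    def acquire(self):
        while self.test_and_set():   # busy-wait while old value was True
            pass

    def release(self):
        self.lock = False

# Two threads increment a shared counter under the spin lock.
sl, count = SpinLock(), 0
def work():
    global count
    for _ in range(10000):
        sl.acquire()
        count += 1               # critical section
        sl.release()

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(count)  # 20000: mutual exclusion preserved the count
```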

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1. wait() 2. signal()

Wait primitive:

    wait(S) {
        while (S <= 0)
            ; // do nothing
        S--;
    }

Signal primitive:

    signal(S) {
        S++;
    }

Each process that wishes to use a resource performs the wait() operation; when a process releases a resource, it performs the signal() operation.

Semaphore usage:
1. A binary semaphore (or mutex lock) can take only the values 0 or 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining resources: the counter is initialized to the number of resources available. Whenever the counting semaphore > 0, a process can enter a critical section and use one of the resources; when the counter = 0 (or negative in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal() call. For example, suppose there are 3 resources and 4 processes, so the value of the counting semaphore is 3. If process P1 requires a resource, it performs a wait() operation; now the counting semaphore = 2. If process P2 requires a resource, it performs wait(); now the semaphore = 1. If process P3 requires a resource, it performs wait(); now the semaphore = 0. If process P4 then requires a resource and performs wait(), P4 waits until a resource becomes available.
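The four-process example above can be sketched with Python's counting semaphore; the non-blocking acquire() (a feature of Python's API, not of the notes' wait() primitive) makes the fourth request visibly fail while three resources are held:

```python
import threading

sem = threading.Semaphore(3)   # 3 resources available

# P1, P2 and P3 each grab a resource; P4's request cannot be satisfied.
grants = [sem.acquire(blocking=False) for _ in range(4)]
print(grants)                  # [True, True, True, False]

sem.release()                  # some process frees a resource (signal)
print(sem.acquire(blocking=False))  # True: now the waiter can proceed
```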

3. Semaphores can also be used to synchronize the order of certain operations.

Semaphore implementation: while a process is in its critical section, any other process that tries to enter its critical section must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process "spins" while waiting.

As an example of using a semaphore for ordering, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First we create a semaphore named synch that is shared by the two processes and initialize it to zero. Then in process P1 we insert the code:

    S1;
    signal(synch);

and in process P2 we insert the code:

    wait(synch);
    S2;

Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal. Each semaphore has an integer value and maintains a queue of waiting processes.
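The S1-before-S2 ordering can be demonstrated with Python threads; synch starts at 0, so P2 blocks until P1 signals, regardless of scheduling order:

```python
import threading

synch = threading.Semaphore(0)   # initialized to zero
order = []

def p1():
    order.append("S1")           # statement S1
    synch.release()              # signal(synch)

def p2():
    synch.acquire()              # wait(synch): blocks until P1 signals
    order.append("S2")           # statement S2

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start(); t1.start()           # start P2 first on purpose
t1.join(); t2.join()
print(order)  # ['S1', 'S2'] no matter which thread is scheduled first
```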

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; it is later restarted by a wakeup() operation (i.e. the process is moved from the waiting queue to the ready state).

Problems due to semaphores:
1. Deadlock: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes, as shown below.

2. Starvation: one or more processes is blocked forever and never gets a chance to enter the critical section. For example, suppose we do not specify the algorithm for adding processes to the waiting queue, or for selecting the one to be removed in the signal() call: if a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M, H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it pre-empts process L, and M runs now. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority so that M does not pre-empt L.

Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem. Consider:

n buffers, each of which holds one item; a producer process which creates items (one at a time); and a consumer process which consumes them (one at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:

    do {
        ...
        produce an item
        ...
        wait(empty);
        wait(mutex);
        ...
        add the item to the buffer
        ...
        signal(mutex);
        signal(full);
    } while (1);

Consumer process:

    do {
        wait(full);
        wait(mutex);
        ...
        remove an item from the buffer
        ...
        signal(mutex);
        signal(empty);
        ...
        consume the item
        ...
    } while (1);
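The scheme above runs as real code using Python's semaphores; the buffer size N and the item count are invented values:

```python
import threading
from collections import deque

N = 4
buffer = deque()
empty = threading.Semaphore(N)   # counts empty buffer slots
full = threading.Semaphore(0)    # counts filled buffer slots
mutex = threading.Lock()         # mutual exclusion on the buffer
consumed = []

def producer():
    for item in range(10):
        empty.acquire()          # wait(empty)
        with mutex:              # wait(mutex) ... signal(mutex)
            buffer.append(item)  # add the item to the buffer
        full.release()           # signal(full)

def consumer():
    for _ in range(10):
        full.acquire()           # wait(full)
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()          # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)   # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Note the ordering: the producer waits on empty before mutex; reversing them can deadlock, because a producer holding mutex while blocked on empty stops the consumer from ever signalling empty.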

2. The Readers-Writers Problem. A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (i.e. it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: no new reader should wait merely because a writer is waiting; writers priority: if a writer is waiting to access the data, no new readers may start reading.

A solution to either version may cause starvation: in the readers-priority version writers may starve, and in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (the mutex semaphore provides mutual exclusion when readcount is updated)
3. wrt = 1 (the wrt semaphore is common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:

    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    ...
    reading is performed
    ...
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);

Writer process:

    do {
        wait(wrt);
        ...
        writing is performed
        ...
        signal(wrt);
    } while (TRUE);
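The readers-priority protocol translates directly into Python; the first reader locks writers out and the last reader lets them back in (shared data and values here are invented):

```python
import threading

mutex = threading.Lock()        # protects readcount
wrt = threading.Semaphore(1)    # exclusive access for writers
readcount = 0
shared = []

def start_read():
    global readcount
    with mutex:
        readcount += 1
        if readcount == 1:
            wrt.acquire()       # first reader locks writers out

def end_read():
    global readcount
    with mutex:
        readcount -= 1
        if readcount == 0:
            wrt.release()       # last reader lets writers in

def write(x):
    wrt.acquire()               # wait(wrt)
    shared.append(x)            # writing is performed
    wrt.release()               # signal(wrt)

write(1)
start_read(); snapshot = list(shared); end_read()
print(snapshot)  # [1]
```

While any reader is between start_read() and end_read(), a call to write() blocks on wrt; additional readers pass straight through, which is exactly why writers can starve.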

3. The Dining-Philosophers Problem. n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already in the hand of a neighbour. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5]; // initially all values are 1

Philosopher i:

    do {
        wait(chopstick[i]);
        wait(chopstick[(i+1) % 5]);
        ...
        eat
        ...
        signal(chopstick[i]);
        signal(chopstick[(i+1) % 5]);
        ...
        think
        ...

    } while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), not all CPUs are equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. These systems lack the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraints.

                                                                          Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions. 2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples of soft real-time systems:

Digital audio and multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e., decision-making actions are spread across the computers in the system so that failures of individual computers or of the network do not cripple operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

Distributed systems allow applications to run on several processors at the same time, thus requiring more complex processor-scheduling algorithms.

DESKTOP SYSTEMS: A desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. They have very little memory.


CLUSTERED SYSTEMS: A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                                                                          Q) Modern operating system

New developments in hardware, applications, and security threats have led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client-server computing.

                                                                          Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as systems nowadays often contain multiple processors, an SMP operating system provides greater efficiency when several processors are available. The processors share the same main memory and I/O facilities, and all can perform the same functions (hence the term symmetric).

                                                                          SMP schedules processes or threads across all of the processors

                                                                          Advantages

1. Increased performance: if some portion of a program can be run in parallel, the performance of an SMP OS increases.

2. Increased availability: since all processors can perform the same task, if one processor fails the others continue working. 3. Incremental growth: a new processor can be added to increase performance. 4. Scaling: cost can be made proportional to the number of processors used.

4. Distributed operating systems: distributed systems allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel, so the OS can be customized without affecting system integrity.

Q) Linux Operating System: Linux is open source, as its source code is freely available, and it is free to use. It was designed with UNIX compatibility in mind. Linux is a multi-user, multitasking system whose main design goals are speed, efficiency, and standardization. Components of the Linux system:


The Linux operating system has primarily three components, as shown in the diagram below:

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system, interacts directly with the hardware, and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs used by application programs to access the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks like updating log files, accepting login requests from terminals, etc.

Kernel Modules: sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

Three components of Linux module support: 1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel. 2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available. 3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode: Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require a context switch; hence it is very efficient and fast. The kernel runs each process, provides system services to processes, and gives them protected access to hardware. Code that is not required to run in kernel mode resides in the system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code; they use the system libraries to invoke kernel functions for low-level tasks.

Basic Features: Following are some of the important features of the Linux operating system.

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on any kind of hardware platform.

Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system, meaning multiple users can access system resources like memory and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used to perform various types of operations, call application programs, etc.


Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

Architecture: The Linux system architecture consists of the following layers.

Hardware layer - consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - the core component of the operating system; interacts directly with the hardware and provides low-level services to upper-layer components.

Shell - an interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System: 1. Extensibility: layered architecture.

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive. 2. Portability: XP can be moved from one hardware architecture to another with relatively few changes.

It is written in C and C++. Processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL). 3. Reliability: XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources. 4. Compatibility: applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run without changing the source code. 5. Performance: XP subsystems can communicate with one another via high-performance message passing.

Preemption of low-priority threads enables the system to respond quickly to external events. It is designed for symmetric multiprocessing.

6. International support: supports different locales via the national language support (NLS) API.

XP Architecture: 1. Layered system of modules. 2. Protected mode: hardware abstraction layer (HAL), kernel, executive. 3. User mode: collection of subsystems.

a. Environmental subsystems emulate different operating systems. b. Protection subsystems provide security functions.

                                                                          XP architecture


Q) Windows Network Operating System. Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.

Visit my blog: enthusiaststudent.blogspot.in

mtechmessenger.blogspot.in



10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

                                                                            File Types

Implementing file types: a common technique is to include the type as part of the file name. The name is split into two parts:

                                                                            1 File Name 2 Extension

The file name and extension are separated by a period. The system uses the extension to indicate the type of the file and the type of operations that can be done on it. For example, a user opens Microsoft Word, clicks the File menu and the Open option, and specifies only the file name without the extension; the application looks for a file with the given name and the extension it expects. Hence such extensions need not be enforced by the operating system. A UNIX system uses a crude magic number stored at the beginning of the file to indicate its type. In the Mac OS X operating system, a file produced by a word processor has the word processor's name as its creator; when the user double-clicks the file, Word opens it. Some common file types are as shown below.

File Structure: Some files contain an internal structure which may or may not be known to the OS. For example, executable files have a specific structure so that the OS can determine where in memory to load the file and the location of the first instruction.

If an OS supports many file formats, this increases the size and complexity of the OS. UNIX treats all files as sequences of bytes with no internal structure (with the exception of executable binary programs, for which it must know how to load the file and find the first executable statement).

Macintosh files have two forks: a resource fork and a data fork. The resource fork contains information relating to the UI, such as icons and button images, and can be modified independently. The data fork contains the code or data.

                                                                            Internal File structure

Disk files are accessed in units of physical blocks, typically 512 bytes. Internally, files are organized in logical units such as a single byte, a data record, or a structure. The number of logical units that fit into one physical block determines the packing. If a student file is organized internally as records, each record takes 64 bytes, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, so 8 records can be stored per block.
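The packing arithmetic above, as a quick sketch (the 100-byte record size at the end is an extra illustration, not from the notes):

```python
block_size = 512          # physical block size in bytes
record_size = 64          # one student record

packing = block_size // record_size   # logical records per physical block
leftover = block_size % record_size   # bytes unused in each block
print(packing, leftover)              # 8 records per block, 0 wasted bytes

# Records that do not divide the block size leave internal fragmentation:
print(512 // 100, 512 % 100)          # 5 records per block, 12 wasted bytes
```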

Internal fragmentation may occur when the records do not pack exactly into a block.

10.2 Access Methods

The information in a file can be accessed in the following ways: 1. Sequential access 2. Direct access (or relative access)

1. Sequential access: we cannot randomly access an arbitrary record. Records are read and written sequentially, one record after another.

A sequential-access file emulates magnetic tape operation and generally supports the following operations: read next (read a record and advance to the next position); write next (write a record and advance to the next position); rewind; skip n records (may or may not be supported; n may be limited to positive numbers or to +/- 1).
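A toy Python model of these tape-like operations (the class and method names are illustrative, not an OS API):

```python
class SequentialFile:
    """Tape-like file: reads and writes advance an implicit current position."""
    def __init__(self):
        self.records = []
        self.pos = 0

    def read_next(self):
        rec = self.records[self.pos]
        self.pos += 1
        return rec

    def write_next(self, rec):
        if self.pos < len(self.records):
            self.records[self.pos] = rec  # overwrite in place
        else:
            self.records.append(rec)      # extend at the end
        self.pos += 1

    def rewind(self):
        self.pos = 0

    def skip(self, n=1):
        self.pos += n                     # n may be restricted, as noted above

f = SequentialFile()
f.write_next("r0")
f.write_next("r1")
f.rewind()
print(f.read_next(), f.read_next())  # r0 r1
```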

2. Direct access (or relative access): a file is made up of fixed-length records, so we can randomly access any record.

                                                                            No restriction on order of reading or writing

                                                                            Direct access files are useful when we need to access large amounts of data randomly

Example: databases.

Operations supported (n is the relative block number, i.e., relative to the beginning of the file): 1. read n: read record number n. 2. write n: write record number n. 3. jump to record n: n could be 0 or the end of the file. 4. query current record: used to return to this record later. 5. Sequential access can be easily emulated using direct access; the inverse is complicated and inefficient.
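A toy sketch of direct access by relative block number, plus sequential access emulated on top of it (all names are illustrative):

```python
class DirectFile:
    """Fixed-length records addressed by relative block number n."""
    def __init__(self, nblocks):
        self.blocks = [None] * nblocks

    def read(self, n):
        return self.blocks[n]       # read n

    def write(self, n, rec):
        self.blocks[n] = rec        # write n

def sequential_scan(f):
    # Emulating sequential access: just read n for n = 0, 1, 2, ...
    return [f.read(n) for n in range(len(f.blocks))]

f = DirectFile(3)
f.write(2, "rec2")
f.write(0, "rec0")
print(f.read(2))            # rec2
print(sequential_scan(f))   # ['rec0', None, 'rec2']
```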

3. Other access methods can be built on top of the direct-access method. We construct an index for the file which contains pointers to the various blocks. To find a record, we first search the index and then use the pointer to access the file directly.

If a student file has 120,000 records sorted by student id, we can create an index consisting of the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120,000 records = 120000/64 = 1875 blocks, so the index has 1875 entries. The index is kept in main memory so that searching becomes faster.

If the index itself becomes too large, we create an index for the index file (e.g., a B-tree).
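The index-size arithmetic, checked as a quick sketch:

```python
import math

records = 120_000
records_per_block = 64

data_blocks = math.ceil(records / records_per_block)
print(data_blocks)   # 1875 blocks, hence 1875 index entries
```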

                                                                            Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                                                                            Three major methods of allocating disk space are

                                                                            1 Contiguous 2 Linked

                                                                            3 Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file indicates the address of the starting block (i.e., b) and the length (n).

                                                                            Advantages

1. Direct access to block i is easy: its physical block is simply b+i. Since the file is stored sequentially, sequential access is also easy.

2. Supports both direct and sequential access. 3. The number of disk seeks required is minimal. 4. Seek time is minimal.
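The b+i mapping as a tiny sketch (b, n, and i as defined above; the start block 14 is just an example value):

```python
def physical_block(b, n, i):
    """Physical block holding logical block i of a file starting at b, length n."""
    if not 0 <= i < n:
        raise IndexError("logical block outside the file")
    return b + i

print(physical_block(14, 3, 0))  # 14
print(physical_block(14, 3, 2))  # 16
```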

                                                                            Disadvantages

1. Difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to it. Solution: copy all the files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but it is time-consuming.

3. The size of the file must be known when the file is created. Even if the final size is known, the entire space may not be used at the beginning, leading to wasted space.

If the size of the file is not known in advance, then: (i) If we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This is possible as long as such space exists, but it is time-consuming. (ii) If we overestimate the amount of space needed, much of it goes unused, leading to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme. A contiguous chunk of space is allocated initially; if the file needs more blocks, another chunk of contiguous space, known as an extent, is added. A location and a block count, plus a link to the first block of the next extent, are recorded. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

Linked Allocation: Each file is a linked list of disk blocks, which can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, the user sees blocks of 512-4 = 508 bytes. To create a new file, we simply create a new entry in the directory; the pointer is nil for an empty file, and the size field of an empty file is 0.

To add more data to an existing file, a free block is found, the data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.
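A quick sketch of the 512-4 = 508 arithmetic, and how many linked blocks a hypothetical 5000-byte file would need:

```python
import math

block_size = 512
pointer_size = 4
usable = block_size - pointer_size       # data bytes per block
print(usable)                            # 508

file_bytes = 5000                        # hypothetical file size
print(math.ceil(file_bytes / usable))    # 10 blocks needed
```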

Advantages: 1. Easy to find space for a new file or to add more data to an existing file. 2. The size of the file need not be declared when the file is created. 3. There is no external fragmentation.

Disadvantages: 1. Seek time is high. 2. The number of disk seeks required is high. 3. Pointers use up a percentage of the file's disk space. 4. Inefficient for direct-access files: to access the ith record, i disk reads are required. 5. Not reliable if pointers are lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (e.g., 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk accesses, reduces the space wasted on pointers (fewer pointers are needed), and simplifies free-list management. Disadvantage: internal fragmentation; more space is wasted when a cluster is partially full.
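The cluster mapping described above, as a sketch (4 blocks per cluster is the assumed example size):

```python
blocks_per_cluster = 4

def locate(logical_block):
    # Logical-to-physical mapping stays simple: a single divmod.
    cluster, offset = divmod(logical_block, blocks_per_cluster)
    return cluster, offset

print(locate(9))   # (2, 1): logical block 9 is the 2nd slot of the 3rd cluster
```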

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file-allocation table (FAT). A section of disk at the beginning of each volume contains the FAT, which has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file. The last block has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                                                                            Advantages

1. Easy to find space for a new file or to add more data to an existing file. 2. Random access time is improved.

Disadvantage: the number of disk seeks is higher. The disk head must move to read the FAT and find the location of a block, then move to the location of the block itself.
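A toy FAT for the 5-block example file used earlier (blocks 9 to 16 to 1 to 10 to 25); the end-of-file marker value of -1 is just this sketch's choice:

```python
EOF = -1                 # special end-of-file table value (assumed here)
FAT = [0] * 32           # 0 marks an unused block
FAT[9], FAT[16], FAT[1], FAT[10], FAT[25] = 16, 1, 10, 25, EOF

def file_blocks(start):
    """Follow FAT entries from the starting block until EOF."""
    blocks, b = [], start
    while b != EOF:
        blocks.append(b)
        b = FAT[b]
    return blocks

print(file_blocks(9))   # [9, 16, 1, 10, 25]
```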

                                                                            3 Indexed allocation

Each file has its own index block, an array of disk-block addresses: the ith entry in the index block points to the ith block of the file.

                                                                            The directory contains the address of the index block

                                                                            When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found and the data is written to it; the address of the block is then recorded in the index block.
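A sketch of an index block for the same 5-block example file; the ith entry gives the physical block of the file's ith block, so random access is a single lookup:

```python
index_block = [9, 16, 1, 10, 25]   # the directory points at this block

def read_block(i):
    return index_block[i]           # direct access: one table lookup

print(read_block(0))   # 9
print(read_block(3))   # 10
```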

                                                                            Advantages

1. Easy to find space for a new file or to add more data to an existing file. 2. Supports direct (random) access efficiently. 3. No external fragmentation.

Disadvantages

1. Seek time is high. 2. The number of disk seeks required is high. 3. Not reliable. 4. The index block must be kept in memory; if this memory is not available, we have to read the index block and then the desired data block, i.e., 2 disk accesses are required. To access a block near the end of the file, we may need to read all the index blocks before reaching the needed data block.

5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated to store those one or two pointers.

Various mechanisms are used to decide the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header (including the name of the file) and a set of the first 100 disk-block addresses. For large files, the last address points to another index block.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which in turn points to the file block. This approach can be continued to a third or fourth level.

                                                                            3 combined scheme

For example, say there are 15 pointers in the index block in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing the addresses of the file's data blocks.

The second points to a double indirect block and the third to a triple indirect block, as shown below.
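As a quick worked example of how far this combined scheme can reach, assume (these numbers are illustrative, not from the notes) 4 KB blocks and 4-byte block addresses, so one index block holds 1024 pointers:

```python
BLOCK = 4096               # assumed block size in bytes
PTR = 4                    # assumed size of one disk-block address
PER_BLOCK = BLOCK // PTR   # pointers per index block = 1024

direct = 12 * BLOCK              # 12 direct pointers
single = PER_BLOCK * BLOCK       # 1 single-indirect block
double = PER_BLOCK**2 * BLOCK    # 1 double-indirect block
triple = PER_BLOCK**3 * BLOCK    # 1 triple-indirect block

max_file = direct + single + double + triple
print(max_file)   # a little over 4 TB with these assumed sizes
```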

                                                                            Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                                                            The free space list can be implemented as

1. Bit vector (or bit map) 2. Linked list 3. Grouping 4. Counting 5. Space maps

1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free the bit is 1; if the block is allocated the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000...

Advantages: simple, and it is easy to find contiguous free blocks to allocate to a file.

                                                                            Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. Bit maps of smaller disks can be kept in main memory, but it may not be possible to keep the bit map of a larger disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.
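The bit-vector idea can be sketched as a toy Python class (a model for illustration only, not any real file system's code):

```python
class BitmapFreeSpace:
    """Toy free-space manager: bit i is 1 when block i is free."""
    def __init__(self, nblocks, free_blocks):
        self.bits = [0] * nblocks
        for b in free_blocks:
            self.bits[b] = 1

    def as_string(self):
        return "".join(str(b) for b in self.bits)

    def allocate(self):
        """Return the first free block, marking it allocated."""
        for i, bit in enumerate(self.bits):
            if bit == 1:
                self.bits[i] = 0
                return i
        return None  # no free block available

    def free(self, i):
        self.bits[i] = 1

# Blocks 2, 3, 4, 5 and 8 free on a 12-block disk (the example above)
fs = BitmapFreeSpace(12, [2, 3, 4, 5, 8])
print(fs.as_string())   # 001111001000
print(fs.allocate())    # 2  (first free block)
```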

2. Linked list: the free-space list is implemented by linking all the free disk blocks together. A pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: cannot get contiguous space easily; to traverse the free-space list we must read each block, which requires more I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last block contains the addresses of another n free blocks, and so on.
Advantage: a large number of free blocks can be found quickly.
4. Counting: the free-space list is maintained by keeping the address of the first free block and the number (n) of free contiguous blocks that follow it.

Each entry in the free-space list thus consists of a disk address and a count. Each entry requires more space, and the list is shorter only if the count is generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Synchronization Hardware
The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. The TestAndSet() hardware instruction
2. The Swap() instruction

                                                                            1 Test and Set instruction is as shown

Solution using TestAndSet: the shared variable lock is initialized to false.

                                                                            2 Swap() instruction definition is as follows

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.
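A matching sketch for Swap(): it atomically exchanges two Boolean values, and each process keeps swapping its local key with lock until key comes back false (atomicity again emulated):

```python
import threading

_atomic = threading.Lock()   # stands in for hardware atomicity

def swap(a, b):
    """Atomically exchange a[0] and b[0] (emulated hardware Swap)."""
    with _atomic:
        a[0], b[0] = b[0], a[0]

lock = [False]              # shared, initialized to false

def enter_and_leave(counter):
    key = [True]            # per-process local variable
    while key[0]:           # keep swapping until we grabbed lock == False
        swap(lock, key)
    counter[0] += 1         # critical section
    lock[0] = False         # release the lock

counter = [0]
threads = [threading.Thread(target=enter_and_leave, args=(counter,))
           for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(counter[0])   # 8
```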

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn in the critical section. (Since there is no guarantee as to the relative speeds of the processes, a very fast process could theoretically release the lock, run through its remainder section, and re-acquire the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The figure below illustrates a solution using test-and-set that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

                                                                            Bounded-waiting mutual exclusion with TestAndSet( )

On exit from the critical section, a process first scans, in cyclic order (starting with the next process), for a process that has been waiting; if it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby allowing that specific process into the critical section while continuing to block all the others.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable that is accessed only through 2 standard atomic operations: 1. wait()  2. signal()

Wait primitive:

wait(S) {
    while (S <= 0)
        ;   // do nothing
    S--;
}

Signal primitive:

signal(S) {
    S++;
}

Each process that wishes to use a resource performs the wait() operation. When a process releases a resource, it performs the signal() operation.

Semaphore Usage:
1. A binary semaphore (or mutex lock) can take only the values 0 or 1.

2. Counting semaphore: can take any integer value. A counting semaphore is used to count the remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore > 0, a process can enter the critical section and use one of the resources. When the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal() call.
For example, suppose there are 3 resources and 4 processes, so the counting semaphore = 3.
If process P1 requires a resource, it performs a wait() operation; now the counting semaphore = 2.
If process P2 requires a resource, it performs wait(); now the counting semaphore = 1.
If process P3 requires a resource, it performs wait(); now the counting semaphore = 0.
If process P4 requires a resource and performs wait(), P4 waits until a resource is available.
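This resource-counting behaviour maps directly onto a counting semaphore; here is a small Python demonstration of the 3-resources/4-processes example using threading.Semaphore:

```python
import threading
import time

sem = threading.Semaphore(3)       # counting semaphore: 3 resources
state = threading.Lock()
in_use = []                        # names of processes holding a resource
peak = [0]                         # maximum concurrent holders observed

def worker(name):
    sem.acquire()                  # wait(): blocks while the count is 0
    with state:
        in_use.append(name)
        peak[0] = max(peak[0], len(in_use))
    time.sleep(0.01)               # pretend to use the resource
    with state:
        in_use.remove(name)
    sem.release()                  # signal(): frees the resource

threads = [threading.Thread(target=worker, args=("P%d" % i,))
           for i in range(1, 5)]
for t in threads: t.start()
for t in threads: t.join()
print(len(in_use), peak[0] <= 3)   # 0 True: never more than 3 holders
```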

3. Semaphores can also be used to synchronize certain operations. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2. First we create a semaphore named synch, shared by the two processes and initialized to zero. Then in process P1 we insert the code:

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal().

Semaphore Implementation
Each semaphore has an integer value and maintains a queue of waiting processes. When a process is in its critical section, any other process that tries to enter the critical section must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process "spins" while waiting.
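The S1-before-S2 ordering described above can be demonstrated in Python; even if P2 starts first, it blocks on wait(synch) until P1 signals:

```python
import threading
import time

synch = threading.Semaphore(0)   # initialized to zero
order = []

def p1():
    order.append("S1")           # statement S1
    synch.release()              # signal(synch)

def p2():
    synch.acquire()              # wait(synch): blocks until P1 signals
    order.append("S2")           # statement S2

t2 = threading.Thread(target=p2); t2.start()
time.sleep(0.05)                 # give P2 a head start; it still waits
t1 = threading.Thread(target=p1); t1.start()
t1.join(); t2.join()
print(order)   # ['S1', 'S2']: S1 always runs before S2
```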

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue to the ready state).

Problems due to semaphores:
1. Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes, as shown below.

2. Starvation: in starvation, one or more processes are blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithm for adding processes to the waiting queue, or for selecting the one to be removed from the queue in the signal() call: if a LIFO queue is chosen, the first process that starts waiting will never get a chance.

3. Priority inversion: let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it pre-empts L, and M runs. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority so that M does not pre-empt L.

Q) Classical problems on synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

                                                                            These problems are used to test newly proposed synchronization scheme

1. The Bounded-Buffer Problem
Consider n buffers, each holding one item; a producer process which creates the items (one at a time); and a consumer process which processes them (one at a time).

The producer process cannot produce an item if all the buffers are full, and the consumer process cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (the mutex semaphore provides mutual exclusion for access to the buffer)

Producer process:

do {
    // produce an item
    wait(empty);
    wait(mutex);
    // add item to buffer
    signal(mutex);
    signal(full);
} while (1);

Consumer process:

do {
    wait(full);
    wait(mutex);
    // remove an item from buffer
    signal(mutex);
    signal(empty);
    // consume the item
} while (1);
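A runnable Python version of this bounded-buffer solution (the buffer size and item count are arbitrary choices for the demo):

```python
import threading
from collections import deque

N = 5
buffer = deque()
empty = threading.Semaphore(N)   # counts empty buffer slots
full = threading.Semaphore(0)    # counts filled slots
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer
consumed = []

def producer():
    for item in range(20):
        empty.acquire()          # wait(empty)
        mutex.acquire()          # wait(mutex)
        buffer.append(item)      # add item to buffer
        mutex.release()          # signal(mutex)
        full.release()           # signal(full)

def consumer():
    for _ in range(20):
        full.acquire()           # wait(full)
        mutex.acquire()          # wait(mutex)
        item = buffer.popleft()  # remove an item from buffer
        mutex.release()          # signal(mutex)
        empty.release()          # signal(empty)
        consumed.append(item)    # consume the item

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(20)))   # True
```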

2. The Readers-Writers Problem
A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time; a writer must have exclusive access (i.e., it cannot share the file with either a reader or another writer).
Two versions of the readers-writers problem:

readers priority: a new reader need not wait just because a writer is waiting
writers priority: if a writer is waiting to access the data, no new readers can start reading

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses an integer and two semaphores:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore used for mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:

wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
// reading is performed
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

Writer process:

do {
    wait(wrt);
    // writing is performed
    signal(wrt);
} while (TRUE);
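A runnable Python sketch of this readers-priority solution; readcount is an ordinary integer guarded by mutex, exactly as in the pseudocode:

```python
import threading

readcount = 0                      # number of active readers
mutex = threading.Semaphore(1)     # protects readcount
wrt = threading.Semaphore(1)       # exclusive access for writers
data = [0]
log = []
log_lock = threading.Lock()

def reader():
    global readcount
    mutex.acquire()                # wait(mutex)
    readcount += 1
    if readcount == 1:
        wrt.acquire()              # first reader locks out writers
    mutex.release()                # signal(mutex)
    with log_lock:
        log.append(("read", data[0]))   # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()              # last reader lets writers back in
    mutex.release()

def writer():
    wrt.acquire()                  # wait(wrt)
    data[0] += 1                   # writing is performed
    wrt.release()                  # signal(wrt)

threads = [threading.Thread(target=writer)]
threads += [threading.Thread(target=reader) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(data[0])   # 1
```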

3. The Dining-Philosophers Problem
n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopstick on his left and the one on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick that is already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems; it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner.
A semaphore solution: represent each chopstick with a semaphore.

semaphore chopstick[5];   // initially all values are 1
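As a runnable illustration, here is a Python sketch of the chopstick-semaphore scheme. Note one deliberate change that is not in the notes' symmetric version: each philosopher picks up the lower-numbered chopstick first, a standard ordering fix that prevents the deadlock mentioned below.

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]   # all initialized to 1
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    # deadlock fix: always acquire the lower-numbered chopstick first
    first, second = min(left, right), max(left, right)
    for _ in range(3):
        chopstick[first].acquire()    # wait(chopstick[...])
        chopstick[second].acquire()
        meals[i] += 1                 # eat
        chopstick[second].release()   # signal(chopstick[...])
        chopstick[first].release()
        # think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)   # [3, 3, 3, 3, 3]
```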

Philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    // think
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: in asymmetric multiprocessing (ASMP), not all CPUs are equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint.

                                                                            Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that response requirements will be met under all conditions.
2. A soft real-time system cannot guarantee that response requirements will be met under all conditions. Examples: digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e., decision-making is spread across the different computers in the system, so that failures of individual computers or of the network do not cripple operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. Users may not know where their programs are being run or where their files are located; that is all handled automatically by the operating system.

                                                                            Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

DESKTOP SYSTEMS: a desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): two of the most popular operating systems for handhelds are Symbian OS and Android. These systems have very limited memory.


CLUSTERED SYSTEMS: a computer cluster consists of a set of loosely connected computers that work together so that, in many respects, they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                                                                            Q) Modern operating system

New developments in hardware, in applications, and in security threats have led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client-server computing.

                                                                            Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and it is well suited to a distributed environment.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as systems nowadays often contain multiple processors, an SMP operating system provides greater efficiency when multiple processors are present. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                                                            SMP schedules processes or threads across all of the processors

                                                                            Advantages

1. Increased performance: if some portion of a program can run in parallel, performance increases.
2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can depend on the number of processors used.

                                                                            4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux Operating System
It is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. Linux is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization.
Components of Linux System


                                                                            Linux Operating System has primarily three components as shown in the below diagram

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system. It interacts directly with the hardware and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs used by application programs to access the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks like updating log files, accepting login requests from terminals, etc.

Kernel Modules - Sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and allows them to talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode
Kernel component code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require any context switch; hence it is very efficient and fast. The kernel runs each process and provides system services to processes, including protected access to hardware. Code which is not required to run in kernel mode is placed in system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code; they use system libraries to ask the kernel to perform low-level tasks.
Basic Features
Following are some of the important features of the Linux operating system:

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on any kind of hardware platform.

Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system, meaning multiple users can access system resources (memory, RAM, application programs) at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system. It can be used to perform various types of operations, call application programs, etc.


Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

Architecture
The Linux system architecture consists of the following layers:

Hardware layer - Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).

                                                                            Kernel - Core component of Operating System interacts directly with hardware provides low level services to upper layer components

Shell - An interface to the kernel. It takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System
1. Extensibility: layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability: XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3. Reliability: XP uses hardware protection for virtual memory and software protection mechanisms for operating-system resources.
4. Compatibility: applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
5. Performance: XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. Designed for symmetric multiprocessing.
6. International support: supports different locales via the national language support (NLS) API.

XP Architecture
1. Layered system of modules.
2. Protected mode: hardware abstraction layer (HAL), kernel, executive.
3. User mode: collection of subsystems.
   a. Environmental subsystems emulate different operating systems.
   b. Protection subsystems provide security functions.

                                                                            XP architecture


Q) Windows network operating system
Example: the Windows 2000 operating system

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.

Visit my blog: enthusiaststudent.blogspot.in, mtechmessenger.blogspot.in



10.1 File Concept
Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

Internally, files are organized in logical units such as a single byte, a data record, or a structure.

The number of logical units which fit into one physical block determines its packing. For example, if a student file is organized internally as records, each record taking 64 bytes, and the disk is divided into blocks of 512 bytes, then packing = 512/64 = 8, i.e., 8 records can be stored per block.

                                                                              Internal fragmentation may occur due to this
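The packing arithmetic above can be sketched in a few lines (sizes taken from the student-file example; the 100-byte record is an extra, hypothetical case to show internal fragmentation):

```python
def packing(block_size, record_size):
    """Records per physical block, and bytes lost per block to internal fragmentation."""
    records_per_block = block_size // record_size
    wasted = block_size - records_per_block * record_size
    return records_per_block, wasted

print(packing(512, 64))   # the example above: 8 records per block, 0 bytes wasted
print(packing(512, 100))  # a 100-byte record: 5 records per block, 12 bytes wasted
```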

10.2 Access Methods

The information in a file can be accessed in the following ways:
1. Sequential access
2. Direct access (or) relative access

1. Sequential access: we cannot randomly access any record. Records are read and written sequentially, one record after another.

A sequential access file emulates magnetic tape operation and generally supports the following operations:
read next - read a record and advance the tape to the next position
write next - write a record and advance the tape to the next position
rewind
skip n records - may or may not be supported; n may be limited to positive numbers or may be limited to +/- 1

2. Direct access (or) relative access: a file is made up of fixed-length records, so that we can randomly access any record.

There is no restriction on the order of reading or writing.

Direct access files are useful when we need to access large amounts of data randomly.

Ex: a database

Operations supported (n is the relative block number, i.e., relative to the beginning of the file):
1. read n - read record number n
2. write n - write record number n
3. jump to record n - n could be 0 or the end of file
4. query current record - used to return back to this record later

Sequential access can be easily emulated using direct access; the inverse is complicated and inefficient.

3. Other access methods can be built on top of the direct access method. We construct an index for the file which contains pointers to various blocks. To search the file, we first access the index and then use the pointer to access the file.

If a student file has 120,000 records of students sorted by student id, we can create an index consisting of the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120,000 records = 120000/64 = 1875 blocks. So the index has 1875 entries. The index is kept in main memory so that searching becomes faster.

If the index itself becomes large, we create an index for the index file (Ex: a B-tree).
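The index lookup described above can be sketched as follows. This is a simplified model, not a real file system: student ids are assumed to be the consecutive integers 0, 1, 2, ... so that block k starts at id k*64.

```python
import math
from bisect import bisect_right

NUM_RECORDS = 120000
RECORDS_PER_BLOCK = 64
num_blocks = math.ceil(NUM_RECORDS / RECORDS_PER_BLOCK)  # 1875 data blocks

# In-memory index: the first student id stored in each block.
index = [k * RECORDS_PER_BLOCK for k in range(num_blocks)]

def block_of(student_id):
    """Binary-search the index to find which data block holds the record."""
    return bisect_right(index, student_id) - 1

print(num_blocks)        # 1875 index entries
print(block_of(1000))    # record 1000 lives in block 15
```

With the index in main memory, only one disk access (the data block itself) is needed per lookup.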

                                                                              Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

The three major methods of allocating disk space are:
1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file indicates the address of the starting block (i.e., b) and the length (n).

                                                                              Advantages

1. For direct access to block i of the file, we can find the physical block directly as b+i. Since the file is stored sequentially, sequential access is also easy. Hence access is easy.
2. Supports both direct and sequential access.
3. The number of disk seeks required is minimal.
4. Seek time is minimal.

                                                                              Disadvantages

1. It is difficult to find space for a new file (or) to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.
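The first-fit and best-fit strategies mentioned above can be sketched as below (a toy model: each hole is a hypothetical (starting block, length) pair, and the functions only pick a hole; they do not split or update the free list):

```python
def first_fit(holes, need):
    """Return the index of the first hole large enough, else None."""
    for i, (_start, size) in enumerate(holes):
        if size >= need:
            return i
    return None

def best_fit(holes, need):
    """Return the index of the smallest hole that is still large enough, else None."""
    best = None
    for i, (_start, size) in enumerate(holes):
        if size >= need and (best is None or size < holes[best][1]):
            best = i
    return best

holes = [(0, 10), (20, 4), (30, 7)]   # (starting block, length) of each free hole
print(first_fit(holes, 5))  # 0: the 10-block hole is encountered first
print(best_fit(holes, 5))   # 2: the 7-block hole leaves the least leftover space
```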

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3. The size of the file must be known when the file is created. Even if the final size is known, the entire space may not be used by the file at the beginning, which leads to wastage of space.

If the size of the file is not known in advance, then (i) if we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This is possible only as long as such space exists, and it is time consuming. (ii) If we overestimate the amount of space needed, much of the space is unused. This leads to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme. A chunk of contiguous blocks is allocated at first; if the file needs more blocks, another chunk of contiguous space, known as an extent, is added. The directory entry then records the location and block count, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2. Linked allocation: each file is a linked list of disk blocks, and the blocks can be scattered anywhere on the disk. The directory contains a pointer to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512-4 = 508 bytes. To create a new file, we simply create a new entry in the directory. The pointer is nil for an empty file, and the size field of an empty file is 0.

To add more data to an existing file, a free block is found, data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.
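Following the pointers block to block can be sketched with the 5-block example file above (the dictionary stands in for the on-disk pointers; None marks end of file):

```python
# The example file: starts at block 9, continues 16 -> 1 -> 10 -> 25.
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: None}

def read_file(start):
    """Sequential read: each block must be fetched before the next one is known."""
    blocks, b = [], start
    while b is not None:
        blocks.append(b)       # read the data portion of block b
        b = next_block[b]      # then follow its pointer
    return blocks

print(read_file(9))  # [9, 16, 1, 10, 25] -- five separate disk reads
```

Note why direct access is inefficient here: reaching the i-th block requires i reads, because each pointer lives inside the previous block.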

Advantages:
1. Easy to find space for a new file (or) to add more data to an existing file.
2. The size of the file need not be declared when the file is created.
3. There is no external fragmentation.

Disadvantages:
1. Seek time is more.
2. The number of disk seeks required is more.
3. Pointers use a noticeable percentage of the file's disk space.
4. Inefficient for direct access files: to access the i-th record, i disk reads are required.
5. Not reliable if pointers are lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk accesses, and reduces the space wasted on pointers, as fewer pointers are needed. Free-list management also becomes simpler. Disadvantage: internal fragmentation; more space is wasted when a cluster is partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. The entry for a block contains the block number of the next block of the file. The last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                                                                              Advantages

1. Easy to find space for a new file (or) to add more data to an existing file.
2. Random access time is improved.

Disadvantage: the number of disk seeks is more. The disk head must move to the start of the volume to read the FAT and find the location of the block, then move to the location of the block itself.
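The key FAT property, that the chain of pointers lives in the table rather than in the data blocks, can be sketched on a toy 32-block volume (the block numbers 4, 7, 2 are hypothetical):

```python
FREE, EOF = 0, -1          # table values: 0 = unused block, -1 = end of file
fat = [FREE] * 32          # one FAT entry per disk block

# A file occupying blocks 4 -> 7 -> 2:
fat[4], fat[7], fat[2] = 7, 2, EOF

def fat_chain(start):
    """Walk the table entries; no data block needs to be read to find the next one."""
    blocks, b = [], start
    while b != EOF:
        blocks.append(b)
        b = fat[b]
    return blocks

print(fat_chain(4))  # [4, 7, 2]
```

Because the whole table can be cached in memory, random access improves: the i-th block is found by i table lookups instead of i disk reads.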

                                                                              3 Indexed allocation

Each file has its own index block. The index block is an array of disk block addresses; the i-th entry in the index block points to the i-th block of the file.

                                                                              The directory contains the address of the index block

                                                                              When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found, data is written to it, and the address of the block is put in the index block.
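The create/grow/access operations above can be sketched with a small model class (the class name, index size, and block numbers are illustrative, and no free-list bookkeeping is modeled):

```python
NIL = None

class IndexedFile:
    def __init__(self, index_size=16):
        self.index = [NIL] * index_size   # all pointers set to nil at creation

    def append_block(self, free_block):
        """Grow the file: record the newly allocated block in the first free slot."""
        i = self.index.index(NIL)         # raises ValueError if the index is full
        self.index[i] = free_block

    def block_of(self, i):
        """Direct access: the i-th entry points to the i-th file block."""
        return self.index[i]

f = IndexedFile()
for blk in (19, 3, 44):      # hypothetical free blocks handed out by the OS
    f.append_block(blk)
print(f.block_of(2))         # 44 -- one index lookup, no pointer chain to follow
```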

                                                                              Advantages

1. Easy to find space for a new file (or) to add more data to an existing file.
2. Supports direct (or) random access efficiently.
3. No external fragmentation.

Disadvantages

1. Seek time is more.
2. The number of disk seeks required is more.
3. Not reliable: if the index block is lost or damaged, the whole file is lost.
4. The index block must be kept in memory. If this memory is not available, then we have to read the index block and then the desired data block, i.e., 2 disk accesses are required. To access a block near the end of a large file, we may need to read several index blocks before reading the needed data block.
5. The pointer overhead of an index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store these one or two pointers.

Various mechanisms are used for the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header holding the name of the file and a set of the first 100 disk block addresses. The last address contains the address of another index block for large files.

2. Multilevel index: to access a block, the OS uses a first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

3. Combined scheme:

For example, say there are 15 index-block pointers in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing the addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block.
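The maximum file size under the combined scheme follows directly from the pointer counts above. A sketch, assuming 4 KB blocks and 4-byte block pointers (these sizes are not from the notes; real file systems vary):

```python
BLOCK = 4096                       # assumed block size in bytes
PTR = 4                            # assumed size of one block pointer
ppb = BLOCK // PTR                 # pointers per indirect block = 1024

direct = 12                        # the 12 direct pointers from the text
single = ppb                       # blocks reachable via the single indirect block
double = ppb ** 2                  # via the double indirect block
triple = ppb ** 3                  # via the triple indirect block

max_blocks = direct + single + double + triple
print(max_blocks * BLOCK)          # maximum file size in bytes (about 4 TB here)
```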

                                                                              Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                                                              The free space list can be implemented as

1. Bit vector (or) bit map
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or) bit map: each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000...
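The example bitmap can be reproduced with a short sketch (function names are illustrative; a real implementation would pack bits into words rather than use a string):

```python
def free_bitmap(total_blocks, free_blocks):
    """bit = 1 if the block is free, 0 if allocated (the convention above)."""
    return ''.join('1' if b in free_blocks else '0' for b in range(total_blocks))

bm = free_bitmap(12, {2, 3, 4, 5, 8})
print(bm)                        # 001111001000

def first_free(bitmap):
    """Finding the first free block is a simple scan for a set bit."""
    return bitmap.index('1')

print(first_free(bm))            # 2
```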

Advantages: simple, and it is easy to find contiguous free blocks for a file.

                                                                              Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. Bit maps of smaller disks can be kept in main memory, but it may not be possible to keep the bit map of a larger disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.

2. Linked list: the free-space list is implemented by linking all the free disk blocks together; a pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: contiguous space cannot be found easily; to traverse the free-space list we must read each block, which requires substantial I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free; the last block contains the addresses of another n free blocks, and so on.
Advantage: a large number of free blocks can be found quickly.

4. Counting: the free-space list can be maintained by keeping the address of the first free block and the number (n) of free contiguous blocks that follow it. Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if the count is generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.
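The counting representation can be sketched by compressing a sorted list of free blocks into (first block, count) runs, reusing the bitmap example's free blocks:

```python
def to_counting(free_blocks):
    """Compress a sorted list of free block numbers into (first block, count) runs."""
    runs = []
    for b in free_blocks:
        if runs and b == runs[-1][0] + runs[-1][1]:
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((b, 1))                          # start a new run
    return runs

print(to_counting([2, 3, 4, 5, 8]))  # [(2, 4), (8, 1)] -- two entries for 5 blocks
```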

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space from a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Q) Synchronization Hardware
A hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. The TestAndSet() hardware instruction
2. The Swap() instruction

1. The TestAndSet() instruction atomically returns the old value of its argument and sets the argument to true.

Solution using TestAndSet: the shared variable lock is initialized to false.

2. The Swap() instruction atomically exchanges the contents of its two arguments.

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

                                                                              Both Test and Set Swap instructions satisfy the mutual exclusion requirement but unfortunately do not guarantee bounded waiting If there are multiple processes trying to get into their critical sections there is no guarantee of what order they will enter and any one process could have the bad luck to wait forever until they got their turn in the critical section (Since there is no guarantee as to the relative rates of the processes a very fast process could theoretically release the lock whip through their remainder section and re-lock the lock before a slower process got a chance As more and more processes are involved vying for the same resource the odds of a slow process getting locked out completely increase )
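A hedged Python emulation of the TestAndSet-based spin lock (the `_atomic` lock merely stands in for the hardware's atomicity guarantee; real hardware performs this in one uninterruptible instruction):

```python
import threading

_atomic = threading.Lock()   # emulates the hardware atomicity of the instruction
lock = False                 # the shared lock variable from the text

def test_and_set():
    """Atomically return the old value of lock and set it to True."""
    global lock
    with _atomic:
        old = lock
        lock = True
        return old

# Mutual-exclusion skeleton for one process:
#   while test_and_set():
#       pass              # busy wait (spin) while another process holds the lock
#   ...critical section...
#   lock = False          # release

print(test_and_set())  # False: the lock was free, the caller now holds it
print(test_and_set())  # True: a second caller would keep spinning
```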

The figure below illustrates a solution using test-and-set that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

                                                                              Bounded-waiting mutual exclusion with TestAndSet( )

On exit, the process looks in order (starting with the next process on the list) for a process that has been waiting; if it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process into the critical section while continuing to block all the others.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable, accessed only through 2 standard atomic operations: 1. wait() 2. signal()

Wait primitive:
wait(S) {
    while (S <= 0)
        ; // do nothing
    S--;
}

Signal primitive:
signal(S) {
    S++;
}

Each process that wishes to use a resource performs a wait() operation; when a process releases a resource, it performs a signal() operation.

Semaphore usage:
1. A binary semaphore (or mutex lock) can take the values 0 (or) 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter a critical section and use one of the resources. When the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal call. For example, suppose there are 3 resources and 4 processes, so the counting semaphore = 3. If process P1 requires a resource, it performs a wait() operation; now the counting semaphore = 2. If P2 requires a resource, it performs wait(); now the semaphore = 1. If P3 requires a resource, it performs wait(); now the semaphore = 0. If P4 requires a resource and performs wait(), P4 waits until a resource is available.
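The 3-resources/4-processes example above can be reproduced with Python's `threading.Semaphore` (a sketch: `acquire()` plays the role of wait() and `release()` of signal(); non-blocking acquires stand in for P4 discovering it must wait):

```python
import threading

resources = threading.Semaphore(3)   # 3 resources, as in the example above

# P1, P2, P3 each grab a resource:
got = [resources.acquire(blocking=False) for _ in range(3)]
print(got)                                   # [True, True, True]

# P4 finds the count at 0 and would have to block:
print(resources.acquire(blocking=False))     # False -- P4 must wait

resources.release()                          # some process signals (frees a resource)
print(resources.acquire(blocking=False))     # True -- now P4 can proceed
```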

3. Semaphores can also be used to synchronize the order of certain operations.

Semaphore implementation
When a process is in its critical section, any other process that tries to enter the critical section must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process spins while waiting.

For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First, we create a semaphore named synch that is shared by the two processes and initialize it to zero. Then in process P1 we insert the code:

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal. Each semaphore has an integer value and maintains a queue of waiting processes.
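The S1-before-S2 ordering can be demonstrated with two threads and a semaphore initialized to zero (a sketch; the thread and list names are illustrative):

```python
import threading

synch = threading.Semaphore(0)   # initialized to zero, as in the text
order = []

def p1():
    order.append("S1")           # statement S1
    synch.release()              # signal(synch)

def p2():
    synch.acquire()              # wait(synch): blocks until P1 signals
    order.append("S2")           # statement S2

t2 = threading.Thread(target=p2)
t2.start()                       # start P2 first to show it really waits
t1 = threading.Thread(target=p1)
t1.start()
t1.join(); t2.join()
print(order)                     # always ['S1', 'S2'], regardless of scheduling
```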

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue to the ready state).

Problems due to semaphores:
1. Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes.

2. Starvation: one or more processes get blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithms for adding processes to the waiting queue or for selecting the one to be removed by the signal() call. If a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource that is held by process L, so process L is running. When process M enters, it preempts process L, and M runs now. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority so that M does not preempt L (priority inheritance).

Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem
Consider:
n buffers, each able to hold one item; a producer process which creates items (1 at a time); and a consumer process which consumes them (1 at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:
do {
    ...
    produce an item
    ...
    wait(empty);
    wait(mutex);
    ...
    add item to buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:
do {
    wait(full);
    wait(mutex);
    ...
    remove an item from buffer
    ...
    signal(mutex);
    signal(empty);
    ...
    consume the item
    ...
} while (1);
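The semaphore scheme above can be run end to end in Python. This sketch replaces the infinite loops with a fixed item count so it terminates, and uses a `Lock` for mutex (buffer size and item count are arbitrary):

```python
import threading
from collections import deque

N_BUFFERS, N_ITEMS = 4, 20
buffer = deque()
empty = threading.Semaphore(N_BUFFERS)   # counts empty buffers
full = threading.Semaphore(0)            # counts filled buffers
mutex = threading.Lock()                 # mutual exclusion on the buffer
consumed = []

def producer():
    for item in range(N_ITEMS):
        empty.acquire()          # wait(empty)
        with mutex:              # wait(mutex) ... signal(mutex)
            buffer.append(item)  # add item to buffer
        full.release()           # signal(full)

def consumer():
    for _ in range(N_ITEMS):
        full.acquire()           # wait(full)
        with mutex:
            consumed.append(buffer.popleft())  # remove an item from buffer
        empty.release()          # signal(empty)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(consumed == list(range(N_ITEMS)))  # True: every item consumed, in order
```

Note the ordering discipline: wait(empty)/wait(full) must come before wait(mutex), otherwise a full (or empty) buffer would leave one process asleep while holding the mutex, deadlocking the other.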

2. The Readers-Writers Problem
A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time; a writer must have exclusive access (i.e., cannot share with either a reader or another writer).

There are two versions of the readers-writers problem:
Readers priority: a new reader need not wait just because a writer is waiting.
Writers priority: if a writer is waiting to access the data, no new readers may start reading.

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

                                                                              A semaphore solution to the readers priority version uses 3 semaphores1 readcount = 0 (readcount semaphore counts the number of reading shared data2 mutex =1 (mutex semaphore is used for mutually exclusion when readcount is updated)3 wrt = 1 (wrt semaphore is common to both writer and reader)

wrt provides mutually exclusive access to the shared data.

Reader process:

do {
   wait(mutex);
   readcount++;
   if (readcount == 1)
      wait(wrt);
   signal(mutex);
   ... reading is performed ...
   wait(mutex);
   readcount--;
   if (readcount == 0)
      signal(wrt);
   signal(mutex);
} while (TRUE);

Writer process:

do {
   wait(wrt);
   ... writing is performed ...
   signal(wrt);
} while (TRUE);
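A runnable Python sketch of the readers-priority solution above; the shared data value and the thread counts are illustrative assumptions:

```python
import threading

readcount = 0
mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # exclusive access to shared data
shared_data = {"value": 0}
reads_seen = []

def writer():
    wrt.acquire()                # wait(wrt)
    shared_data["value"] += 1    # writing is performed
    wrt.release()                # signal(wrt)

def reader():
    global readcount
    mutex.acquire()              # wait(mutex)
    readcount += 1
    if readcount == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()              # signal(mutex)
    reads_seen.append(shared_data["value"])  # reading is performed
    mutex.acquire()              # wait(mutex)
    readcount -= 1
    if readcount == 0:
        wrt.release()            # last reader lets writers back in
    mutex.release()              # signal(mutex)

threads = [threading.Thread(target=writer) for _ in range(3)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(shared_data["value"])  # 3 (three completed writes)
```

Note that only the first reader waits on wrt and only the last reader signals it, so any number of readers can overlap, while every writer holds wrt exclusively.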

3. The Dining Philosophers Problem: n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick that is already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5]; (initially all values are 1)

Philosopher i:

do {
   wait(chopstick[i]);
   wait(chopstick[(i+1) % 5]);
   ... eat ...
   signal(chopstick[i]);
   signal(chopstick[(i+1) % 5]);
   ... think ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (for example, if all five philosophers become hungry at the same time and each picks up his left chopstick).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint.

                                                                              Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions. 2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples:

Digital audio or multimedia systems; digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control, i.e., it spreads its decision-making actions across different computers in the system, so that failure of individual computers or of the network does not cripple its operation.

A distributed operating system appears to its users as a uniprocessor system even though it has multiple processors. The users need not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

Distributed systems allow applications to run on several processors at the same time, thus requiring more complex processor-scheduling algorithms.

DESKTOP SYSTEMS: A desktop system is a personal computer (PC) system used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. Handheld systems have very limited memory.


CLUSTERED SYSTEMS: A computer cluster consists of a set of loosely connected computers that work together so that, in many respects, they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                                                                              Q) Modern operating system

New developments in hardware, applications, and security threats have led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, Internet and Web access, and client/server computing.

Developments in modern operating systems can be categorized as follows:

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development and is well suited to a distributed environment.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.
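The multithreaded-server idea above can be sketched in Python, with one thread per independent request (the request handler and its squaring "work" are illustrative assumptions, not part of any real server):

```python
import threading

results = {}
lock = threading.Lock()

def handle_request(client_id):
    # Each request is independent, so requests can be served concurrently.
    answer = client_id * client_id   # stand-in for real per-request work
    with lock:                       # protect the shared results table
        results[client_id] = answer

# The "server" spawns one thread per client request.
threads = [threading.Thread(target=handle_request, args=(i,)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(results)  # e.g. {0: 0, 1: 1, 2: 4, 3: 9}
```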

3. Symmetric multiprocessing: as systems now often contain multiple microprocessors, an SMP operating system provides greater efficiency when there are multiple processors. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                                                              SMP schedules processes or threads across all of the processors

                                                                              Advantages

1. Increased performance: if some portion of a program can be run in parallel, the performance of an SMP OS increases.

2. Increased availability: since all processors can perform the same tasks, if one processor fails the others continue working. 3. Incremental growth: one can add a new processor to increase performance. 4. Scaling: cost can be scaled with the number of processors used.

                                                                              4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5. Object-oriented design: adds modularity to the kernel. The OS can be customized without affecting system integrity.

Q) Linux Operating System: Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization. Components of the Linux system:


The Linux operating system has primarily three components, as shown in the diagram below:

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system and interacts directly with the hardware. The kernel hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs used by application programs to access the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks such as updating log files, accepting login requests from terminals, etc.

Kernel Modules: sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support: 1. Module management: supports loading modules into memory and letting them talk to the rest of the kernel. 2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available. 3. Conflict resolution: protects one driver's reserved resources from accidental use by another driver.

Kernel Mode vs User Mode: Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require any context switch; hence it is very efficient and fast. The kernel runs processes, provides system services to them, and gives them protected access to the hardware. Code which is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code. User programs and utilities use the system libraries to invoke kernel functions for low-level tasks.

Basic Features: Following are some of the important features of the Linux operating system:

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on many kinds of hardware platforms.

Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system, meaning multiple users can access system resources such as memory and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system. It can be used to perform various types of operations, call application programs, etc.


Security - Linux provides user security using authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture: The Linux system architecture consists of the following layers:

Hardware layer - consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - the core component of the operating system; interacts directly with the hardware and provides low-level services to upper-layer components.

Shell - an interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System

1. Extensibility - layered architecture:

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.

2. Portability - XP can be moved from one hardware architecture to another with relatively few changes:

It is written in C and C++. Processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources. 4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code. 5. Performance - XP subsystems can communicate with one another via high-performance message passing.

Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

6. International support - supports different locales via the national language support (NLS) API.

XP Architecture: 1. A layered system of modules. 2. Protected mode - the hardware abstraction layer (HAL), kernel, and executive. 3. User mode - a collection of subsystems: a. Environmental subsystems emulate different operating systems. b. Protection subsystems provide security functions.

                                                                              XP architecture


Q) Windows Network Operating System. Example: the Windows 2000 operating system.

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne, Operating System Concepts, Reading: Addison-Wesley.

Visit my blogs: enthusiaststudent.blogspot.in, mtechmessenger.blogspot.in



10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

3. Other access methods can be built on top of the direct-access method. We construct an index for the file which contains pointers to the various blocks. To search the file, we first access the index and then use the pointer to access the file.

Suppose a student file has 120,000 records of students sorted by student id. We can create an index consisting of the first student id in each block. If 64 records can be stored in each block, then the number of blocks needed to store 120,000 records = 120000/64 = 1875 ≈ 2000 blocks, so the index has about 2000 entries. The index is kept in main memory so that searching becomes faster.

If the index itself becomes too large, we create an index for the index file (e.g., a B-tree).
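The index search described above can be sketched in Python: the index holds the first id of each block, and a binary search on the index picks the one block to read (small illustrative numbers are used here instead of the 120,000-record example):

```python
import bisect

RECORDS_PER_BLOCK = 4
student_ids = list(range(100, 180, 2))   # 40 sorted ids -> 10 blocks
# Split the sorted records into fixed-size "disk blocks".
blocks = [student_ids[i:i + RECORDS_PER_BLOCK]
          for i in range(0, len(student_ids), RECORDS_PER_BLOCK)]
index = [blk[0] for blk in blocks]       # first student id in each block

def find(sid):
    # Binary-search the in-memory index to locate the block covering sid,
    # then read just that one block.
    b = bisect.bisect_right(index, sid) - 1
    return sid in blocks[b]

print(find(128), find(129))  # True False
```

Only the small index is searched in memory; a single block read then settles the lookup, instead of scanning all blocks.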

                                                                                Q) Allocation Methods

The goal is to allocate space to files on disk in such a way that disk space is utilized effectively and files can be accessed quickly.

                                                                                Three major methods of allocating disk space are

1. Contiguous
2. Linked
3. Indexed

1. Contiguous allocation: each file occupies a set of contiguous blocks on disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.

The directory entry for each file indicates the address of the starting block (i.e., b) and the length (n).

                                                                                Advantages

1. For direct access to block i, we can compute its address directly as b+i. Since the file is stored sequentially, sequential access is also easy. Hence access is easy.

2. Supports both direct and sequential access. 3. The number of disk seeks required is minimal. 4. Seek time is minimal.
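The b+i direct-access computation above can be sketched as follows (the directory entry shown is an illustrative assumption):

```python
# Directory entry for contiguous allocation: only start block b and length n.
directory = {"report.txt": {"start": 14, "length": 3}}  # hypothetical file

def block_of(name, i):
    entry = directory[name]
    assert 0 <= i < entry["length"], "block index past end of file"
    return entry["start"] + i            # direct access: block i lives at b + i

print([block_of("report.txt", i) for i in range(3)])  # [14, 15, 16]
```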

                                                                                Disadvantages

1. It is difficult to find space for a new file (or to add more data to an existing file). First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2. External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to the new file. Solution: copy all the files onto another disk or tape, and then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time-consuming.

3. The size of the file must be known when the file is created. Even if the final size is known, the entire space may not be used by the file at the beginning, which leads to wastage of space.

If the size of the file is not known in advance, then: (i) If we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This is possible as long as such space exists, but it is time-consuming. (ii) If we overestimate the amount of space needed, much of the space goes unused. This leads to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous-allocation scheme. A set of contiguous blocks is allocated at first; if the file needs more blocks, another set of contiguous blocks, known as an extent, is added. The directory entry then records the location and block count of the first extent, plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable size, external fragmentation occurs.

2. Linked allocation: each file is a linked list of disk blocks, and the disk blocks can be scattered anywhere on the disk. The directory contains a pointer to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512-4 = 508 bytes. To create a new file, we simply create a new entry in the directory; the pointer is nil for an empty file, and the size field of an empty file is 0.

To add more data to an existing file, a free block is found, the data is written to it, and it is linked to the end of the file. To read a file, we follow the pointers from block to block.
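Following the pointers block to block can be sketched in Python using the 9 → 16 → 1 → 10 → 25 example chain from the text (the file name "jeep" and the block contents are illustrative assumptions):

```python
# Each "disk block" here is a pair: (data, pointer to next block).
# None marks the end of the file, standing in for the nil pointer.
disk = {9: ("A", 16), 16: ("B", 1), 1: ("C", 10), 10: ("D", 25), 25: ("E", None)}
directory = {"jeep": {"first": 9, "last": 25}}   # directory keeps first/last

def read_file(name):
    data, block = [], directory[name]["first"]
    while block is not None:          # follow the pointers block to block
        contents, nxt = disk[block]
        data.append(contents)
        block = nxt
    return "".join(data)

print(read_file("jeep"))  # ABCDE
```

Note that reaching the ith block requires i sequential reads, which is why direct access is inefficient under linked allocation.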

Advantages: 1. Easy to find space for a new file (or to add more data to an existing file). 2. The size of the file need not be declared when the file is created. 3. There is no external fragmentation.

Disadvantages: 1. Seek time is high. 2. The number of disk seeks required is high. 3. Pointers use a noticeable percentage of the file's disk space. 4. Inefficient for direct-access files: to access the ith block, i disk reads are required. 5. Not reliable: the file is lost if pointers are lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks. This makes logical-to-physical block mapping simpler, improves disk throughput by reducing disk access time, reduces the space wasted on pointers (since fewer pointers are needed), and also simplifies free-list management. Disadvantage: internal fragmentation; more space is wasted when a cluster is only partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file. The last block of a file has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                                                                                Advantages

1. Easy to find space for a new file (or to add more data to an existing file). 2. Random access time is improved.

Disadvantage: the number of disk seeks is high. The disk head must move to read the FAT and find the location of a block, and then move to the location of the block itself.
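The FAT chain walk described above can be sketched in Python (the block numbers 217 → 618 → 339 and the file name are illustrative values):

```python
# FAT: indexed by block number; each entry gives the next block of the file.
# EOF marks the last block; unused blocks would hold 0.
EOF = -1
fat = {217: 618, 618: 339, 339: EOF}   # a file occupying blocks 217, 618, 339
directory = {"test": 217}              # the directory stores the start block

def file_blocks(name):
    blocks, b = [], directory[name]
    while b != EOF:                    # walk the chain through the table
        blocks.append(b)
        b = fat[b]
    return blocks

print(file_blocks("test"))  # [217, 618, 339]
```

Because the chain lives in the table rather than in the data blocks, random access only reads FAT entries, not every intervening data block.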

                                                                                3 Indexed allocation

Each file has its own index block, which is an array of disk-block addresses. The ith entry in the index block points to the ith block of the file.

                                                                                The directory contains the address of the index block

                                                                                When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found and the data is written to it; the address of the block is put in the index block.

                                                                                Advantages

1. Easy to find space for a new file (or to add more data to an existing file). 2. Supports direct (random) access efficiently. 3. No external fragmentation.

Disadvantages

1. Seek time is high. 2. The number of disk seeks required is high. 3. Not reliable. 4. The index block must be kept in memory. If this memory is not available, then we have to read the index block and then the desired data block, i.e., 2 disk accesses are required. To access a block near the end of a large file, we may need to read several index blocks before the needed data block.

5. The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store these one or two pointers.

Various mechanisms are used for the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header giving the name of the file and a set of the first 100 disk-block addresses. For large files, the last address contains the address of another index block.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which in turn points to the file block. This approach can be continued to a third or fourth level.

3. Combined scheme:

For example, say there are 15 pointers in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing the addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block, as shown below.
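A worked calculation shows how large a file the combined scheme above can address, assuming 4 KB blocks and 4-byte block addresses (illustrative values, not given in the text):

```python
block_size = 4096
ptr_size = 4
ptrs_per_block = block_size // ptr_size   # 1024 addresses fit in one index block

direct = 12                               # 12 direct pointers
single = ptrs_per_block                   # single indirect: 1024 blocks
double = ptrs_per_block ** 2              # double indirect: 1024^2 blocks
triple = ptrs_per_block ** 3              # triple indirect: 1024^3 blocks

max_blocks = direct + single + double + triple
max_bytes = max_blocks * block_size
print(max_bytes // 2**30)                 # 4100 (about 4 TB of addressable data)
```

The triple indirect level dominates: small files stay cheap (direct pointers only), while huge files remain reachable through the extra levels of indirection.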

                                                                                Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                                                                The free space list can be implemented as

                                                                                1 Bit vector (or) Bit map2 Linked list3 Grouping4 Counting 5 Space maps

1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000...
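The example above can be reproduced in a short Python sketch:

```python
# Bit i is 1 when block i is free, 0 when allocated (12-block toy disk).
n_blocks = 12
free_blocks = {2, 3, 4, 5, 8}
bitmap = "".join("1" if b in free_blocks else "0" for b in range(n_blocks))
print(bitmap)               # 001111001000

def first_free(bitmap):
    # Scan for the lowest-numbered free block.
    return bitmap.index("1")

print(first_free(bitmap))   # 2
```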

Advantages: simple, and it is easy to find contiguous free blocks for a file.

                                                                                Disadvantages

1. The bit map requires extra space.
2. A bit map is inefficient if kept on disk. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a large disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.
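The bit-map example above can be reproduced in a short sketch (Python, illustrative only):

```python
# Sketch: building the free-space bit map from the example above
# (blocks 2, 3, 4, 5 and 8 free on a 12-block disk; 1 = free, 0 = allocated).

def build_bitmap(num_blocks, free_blocks):
    """Return the bit map as a string of '0'/'1' characters."""
    return "".join("1" if b in free_blocks else "0" for b in range(num_blocks))

def first_free(bitmap):
    """Scan for the first free block; -1 if no block is free."""
    return bitmap.find("1")

bitmap = build_bitmap(12, {2, 3, 4, 5, 8})
```

A real implementation scans the bit map a word at a time rather than bit by bit, but the idea is the same.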

2. Linked list: The free-space list is implemented by linking all the free disk blocks together. A pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: contiguous space cannot be found easily; to traverse the free-space list we must read each block, which requires more I/O time.

3. Grouping: The first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last block contains the addresses of another n free blocks, and so on.
Advantage: a large number of free blocks can be found quickly.

4. Counting: The free-space list is maintained by keeping the address of the first free block and the number (n) of free contiguous blocks that follow it. Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion and search.
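A sketch of the counting representation, applied to the earlier example of free blocks 2, 3, 4, 5 and 8 (illustrative, not from the notes):

```python
# Sketch: converting a set of free block numbers into the counting
# representation, i.e. a list of (first-free-block, count) pairs.

def to_counting(free_blocks):
    entries = []
    for b in sorted(free_blocks):
        if entries and b == entries[-1][0] + entries[-1][1]:
            # block continues the current contiguous run: extend its count
            entries[-1] = (entries[-1][0], entries[-1][1] + 1)
        else:
            # block starts a new run
            entries.append((b, 1))
    return entries
```

For the example above this yields [(2, 4), (8, 1)]: a run of four free blocks starting at 2, and a single free block at 8.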

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs, and each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Q) Synchronization hardware
The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. The TestAndSet() instruction
2. The Swap() instruction

1. The TestAndSet() instruction is defined as follows:

    boolean TestAndSet(boolean *target) {
        boolean rv = *target;
        *target = TRUE;
        return rv;
    }

Solution using TestAndSet: the shared variable lock is initialized to false.

    do {
        while (TestAndSet(&lock))
            ; // do nothing
        // critical section
        lock = FALSE;
        // remainder section
    } while (TRUE);

2. The Swap() instruction is defined as follows:

    void Swap(boolean *a, boolean *b) {
        boolean temp = *a;
        *a = *b;
        *b = temp;
    }

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

    do {
        key = TRUE;
        while (key == TRUE)
            Swap(&lock, &key);
        // critical section
        lock = FALSE;
        // remainder section
    } while (TRUE);

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement but unfortunately do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee as to the relative speeds of the processes, a very fast process could theoretically release the lock, whip through its remainder section, and re-acquire the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The following solution using TestAndSet satisfies bounded waiting. It uses two shared data structures: boolean lock and boolean waiting[n], where n is the number of processes in contention for critical sections.

Bounded-waiting mutual exclusion with TestAndSet():

    do {
        waiting[i] = TRUE;
        key = TRUE;
        while (waiting[i] && key)
            key = TestAndSet(&lock);
        waiting[i] = FALSE;
        // critical section
        j = (i + 1) % n;
        while ((j != i) && !waiting[j])
            j = (j + 1) % n;
        if (j == i)
            lock = FALSE;
        else
            waiting[j] = FALSE;
        // remainder section
    } while (TRUE);

On exit, a process scans in order (starting with the next process on the list) for a process that has been waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting a specific waiting process while continuing to block all the others; otherwise it releases the lock.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through two standard atomic operations: 1. wait() 2. signal()

Wait primitive:

    wait(S) {
        while (S <= 0)
            ; // do nothing
        S--;
    }

Signal primitive:

    signal(S) {
        S++;
    }

                                                                                Each process that wishes to use a resource performs wait() operation When a process releases a resource it performs signal() operation

Semaphore usage:
1. A binary semaphore (or mutex lock) can take only the values 0 and 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is greater than 0, a process can enter the critical section and use one of the resources; when the counter reaches 0 (or goes negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal() call.
For example, suppose there are 3 resources and 4 processes, so the counting semaphore starts at 3:
If process P1 requires a resource, it performs wait(); the semaphore becomes 2.
If process P2 requires a resource, it performs wait(); the semaphore becomes 1.
If process P3 requires a resource, it performs wait(); the semaphore becomes 0.
If process P4 requires a resource, it performs wait(), and P4 waits until a resource is available.
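As an illustrative sketch (not from the notes), the 3-resources example can be reproduced with Python's threading.Semaphore; acquire(blocking=False) stands in for wait() so the fourth request fails immediately instead of blocking:

```python
# Sketch of the 3-resources / 4-processes example with a counting semaphore.
import threading

resources = threading.Semaphore(3)   # counting semaphore initialized to 3

# Four processes each perform wait(); with blocking=False the call returns
# False instead of blocking when the count is 0.
granted = [resources.acquire(blocking=False) for _ in range(4)]
# granted == [True, True, True, False]: the fourth process must wait.

resources.release()                  # some process performs signal()
retry = resources.acquire(blocking=False)   # now the waiter can proceed
```

The non-blocking acquire is only for demonstration; real processes would block inside wait() until a signal() arrives.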

                                                                                3 Semaphores can also be used to synchronize certain operations

Semaphore implementation: When a process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, since the process spins while waiting.

Semaphores can also enforce ordering. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First we create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code:

    S1;
    signal(synch);

and in process P2 we insert the code:

    wait(synch);
    S2;

Because synch was initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal(). Each semaphore has an integer value and maintains a queue of waiting processes.
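The ordering idiom above can be sketched with Python's threading.Semaphore (illustrative; names are made up):

```python
# Sketch: forcing S1 (in P1) to execute before S2 (in P2) via a semaphore
# initialized to 0.
import threading

order = []
synch = threading.Semaphore(0)   # initialized to 0

def p1():
    order.append("S1")           # statement S1
    synch.release()              # signal(synch)

def p2():
    synch.acquire()              # wait(synch): blocks until P1 signals
    order.append("S2")           # statement S2

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start()                       # start P2 first to show the wait really blocks
t1.start()
t1.join()
t2.join()
# order is always ["S1", "S2"], regardless of which thread starts first
```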

Semaphore implementation with no busy waiting: To overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue to the ready state).

Problems due to semaphores:
1. Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes.

2. Starvation: one or more processes is blocked forever and never gets a chance to enter the critical section. This can happen, for example, if we do not specify the algorithm for adding processes to the waiting queue, or for selecting the one to be removed by the signal() call; if a LIFO queue is used, the first process to start waiting may never get a chance.

3. Priority inversion: Let processes L, M and H have priorities L < M < H. Suppose process H is waiting for a resource held by process L, so process L is running. When process M arrives, it preempts L, and M runs. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority so that M cannot preempt it.

Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem
Consider:
n buffers, each of which holds one item;
a producer process, which creates items (one at a time);
a consumer process, which consumes them (one at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:

    do {
        ...
        // produce an item
        ...
        wait(empty);
        wait(mutex);
        ...
        // add the item to the buffer
        ...
        signal(mutex);
        signal(full);
    } while (1);

Consumer process:

    do {
        wait(full);
        wait(mutex);
        ...
        // remove an item from the buffer
        ...
        signal(mutex);
        signal(empty);
        ...
        // consume the item
        ...
    } while (1);
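A runnable sketch of this pseudocode using Python's threading primitives (the buffer size and item count are made up for the demo):

```python
# Sketch: bounded buffer with empty/full counting semaphores and a mutex.
import threading
from collections import deque

N_BUFFERS, N_ITEMS = 3, 10
buffer = deque()
empty = threading.Semaphore(N_BUFFERS)  # counts empty buffer slots
full = threading.Semaphore(0)           # counts filled buffer slots
mutex = threading.Lock()                # mutual exclusion on the buffer
consumed = []

def producer():
    for item in range(N_ITEMS):
        empty.acquire()                 # wait(empty)
        with mutex:                     # wait(mutex) ... signal(mutex)
            buffer.append(item)         # add the item to the buffer
        full.release()                  # signal(full)

def consumer():
    for _ in range(N_ITEMS):
        full.acquire()                  # wait(full)
        with mutex:
            consumed.append(buffer.popleft())  # remove an item
        empty.release()                 # signal(empty)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With a single producer and a single FIFO buffer, the consumer always sees the items in production order.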

2. The Readers-Writers Problem
A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (i.e., it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

Readers priority: a new reader need not wait just because a writer is waiting.
Writers priority: if a writer is waiting to access the data, no new readers may start reading.

Either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses an integer counter and two semaphores:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both readers and writers; it provides mutually exclusive access to the shared data)

Reader process:

    do {
        wait(mutex);
        readcount++;
        if (readcount == 1)
            wait(wrt);
        signal(mutex);
        ...
        // reading is performed
        ...
        wait(mutex);
        readcount--;
        if (readcount == 0)
            signal(wrt);
        signal(mutex);
    } while (TRUE);

Writer process:

    do {
        wait(wrt);
        ...
        // writing is performed
        ...
        signal(wrt);
    } while (TRUE);
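A sketch of the readers-priority protocol in Python (illustrative; the "file" is just a list, and the demo runs one write followed by three concurrent reads):

```python
# Sketch: readers-priority readers-writers with a readcount and two locks.
import threading

mutex = threading.Lock()   # protects readcount
wrt = threading.Lock()     # exclusive access to the shared data
readcount = 0
data = []                  # the shared "file"

def reader(results):
    global readcount
    with mutex:
        readcount += 1
        if readcount == 1:
            wrt.acquire()          # first reader locks out writers
    results.append(list(data))     # reading is performed
    with mutex:
        readcount -= 1
        if readcount == 0:
            wrt.release()          # last reader readmits writers

def writer(item):
    with wrt:                      # wait(wrt) ... signal(wrt)
        data.append(item)          # writing is performed

w = threading.Thread(target=writer, args=("v1",))
w.start()
w.join()
seen = []
readers = [threading.Thread(target=reader, args=(seen,)) for _ in range(3)]
for t in readers:
    t.start()
for t in readers:
    t.join()
```

Note that wrt may be acquired by one reader and released by a different one, which is why it is a plain lock shared by the whole reader group rather than per-thread state.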

3. The Dining-Philosophers Problem
n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

    semaphore chopstick[5];  // initially all values are 1

Philosopher i:

    do {
        wait(chopstick[i]);
        wait(chopstick[(i+1) % 5]);
        ...
        // eat
        ...
        signal(chopstick[i]);
        signal(chopstick[(i+1) % 5]);
        ...
        // think
        ...
    } while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock: if all five philosophers pick up their left chopsticks at the same time, each waits forever for the right one.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems have processors dedicated to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. Unlike an SMP system, these systems do not have the flexibility to assign processes to the least-loaded CPU.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint.

                                                                                Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement is met under all conditions. Examples: digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e., decision-making is spread across the different computers in the system so that failures of individual computers or of the network do not cripple operation.

A distributed operating system appears to its users as a uniprocessor system even though it has multiple processors. The users may not know where their programs are run or where their files are located; that is all handled automatically by the operating system.

                                                                                Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

                                                                                DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location Modern desktop systems support multiprogramming Common examples are Linux FreeBSD Windows 8 and the Macintosh operating system

HANDHELD SYSTEMS (palmtop computers): two of the most popular operating systems for handhelds are Symbian OS and Android. Such systems have very little memory.


                                                                                CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system The components of a cluster are usually connected to each other through fast local area networks each node (computer used as a server) running its own instance of an operating system

                                                                                Q) Modern operating system

New developments in hardware, applications and security threats led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client-server computing.

Developments in modern operating systems can be categorized as follows:

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as nowadays there are often multiple processors in a single system, an SMP operating system provides greater efficiency when multiple processors are available. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                                                                SMP schedules processes or threads across all of the processors

                                                                                Advantages

1. Increased performance: if some portion of a program can run in parallel, performance increases.
2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can scale with the number of processors used.

4. Distributed operating systems: distributed systems allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux Operating System
Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system, and its main design goals are speed, efficiency and standardization.
Components of a Linux system:


The Linux operating system has primarily three components, shown in the diagram below:

Kernel - the kernel is the core part of Linux. It is responsible for all major activities of the operating system, interacts directly with the hardware, and hides low-level hardware details from system and application programs.

System libraries - special functions or programs used by application programs to access the kernel's features.

System utilities - programs responsible for specialized, individual-level tasks such as updating log files and accepting login requests from terminals.

Kernel modules - sections of kernel code that can be compiled, loaded and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system or a networking protocol. There are three components to Linux module support:
1. Module management: supports loading modules into memory and lets them communicate with the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver is available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode
Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require any context switch; hence it is very efficient and fast. The kernel runs processes and provides system services to them, including protected access to hardware. Code that is not required to run in kernel mode lives in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code; they use the system libraries to invoke kernel functions for low-level tasks.

Basic Features
The following are some of the important features of the Linux operating system:

Portable - portability means that software works on different types of hardware in the same way. The Linux kernel and application programs support installation on almost any hardware platform.

Open source - the Linux source code is freely available, and Linux is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the operating system, and it is continuously evolving.

Multi-user - Linux is a multi-user system: multiple users can access system resources such as memory and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system: multiple applications can run at the same time.

Hierarchical file system - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system, perform various types of operations, call application programs, etc.

Security - Linux provides user security using authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture: the Linux system architecture consists of the following layers:

Hardware layer - consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - the core component of the operating system; it interacts directly with the hardware and provides low-level services to the upper-layer components.

Shell - an interface to the kernel that takes commands from the user and executes the kernel's functions.

Utilities - utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System
1. Extensibility - layered architecture: the executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. This modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the hardware abstraction layer (HAL).
3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating-system resources.
4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
5. Performance - XP subsystems communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.
6. International support - XP supports different locales via the national language support (NLS) API.

XP Architecture
1. A layered system of modules.
2. Protected mode - the hardware abstraction layer (HAL), the kernel and the executive.
3. User mode - a collection of subsystems:
a. Environmental subsystems emulate different operating systems.
b. Protection subsystems provide security functions.

                                                                                XP architecture


Q) Windows network operating system
Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.




• 101 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.
                                                                                • File Attributes
                                                                                • File Operations

Disk space can be allocated to files in three ways: 1 Contiguous 2 Linked 3 Indexed

1 Contiguous allocation: Each file occupies a set of contiguous blocks on the disk.

If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, …, b+n-1.

The directory entry for each file indicates the address of the starting block (i.e., b) and the length (n).

Advantages

1 For direct access to block i, we can compute its address directly as b+i. Since the file is stored sequentially, sequential access is equally easy. Hence access is easy.

2 Supports both direct and sequential access.
3 The number of disk seeks required is minimal.
4 Seek time is minimal.
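The arithmetic behind direct access can be sketched in a few lines. The function below is purely illustrative (the names b, n, and i follow the text; nothing here comes from a real OS):

```python
def contiguous_block(b, n, i):
    """Map logical block i of a file to its physical block, given the
    directory entry: starting block b and length n (in blocks)."""
    if not 0 <= i < n:
        raise IndexError("logical block out of range")
    return b + i  # the file occupies blocks b, b+1, ..., b+n-1

# A file starting at block 14 and 3 blocks long occupies blocks 14-16,
# so its logical block 2 is physical block 16.
```

Because the mapping is a single addition, no extra disk reads are needed to locate any block of the file.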

Disadvantages

1 It is difficult to find space for a new file or to add more data to an existing file. First-fit and best-fit strategies are used to find a free hole (a set of free contiguous blocks) from the list of available holes, but both are inefficient with respect to storage utilization.

2 External fragmentation: even though there are enough free blocks for a new file, they are not contiguous and hence cannot be allocated to it. Solution: copy all the files onto another disk or tape, then copy them back onto the original disk contiguously. This is known as compaction, but compaction is time consuming.

3 The size of the file must be known when the file is created. Even when the final size is known, the entire space may not be used by the file at the beginning, which leads to wastage of space.

If the size of the file is not known in advance, then (i) if we allocate too little space, the file cannot be extended. Solution: find a larger hole, copy the contents of the file to the new space, and release the previous space. This works as long as such a hole exists, but it is time consuming. (ii) If we overestimate the amount of space needed, much of the space goes unused, which leads to internal fragmentation.

To minimize these drawbacks, some operating systems use a modified contiguous allocation scheme. A contiguous chunk of blocks is allocated at first; if the file needs more blocks, another chunk of contiguous space, known as an extent, is added, and the directory entry records the location and block count plus a link to the first block of the next extent. If extents are too large and of fixed size, internal fragmentation occurs; if extents are of variable sizes, external fragmentation occurs.

2 Linked allocation: Each file is a linked list of disk blocks, and the blocks can be scattered anywhere on the disk. The directory contains pointers to the first and last blocks of the file. For example, a file of 5 blocks might start at block 9 and continue at block 16, then block 1, then block 10, and finally block 25. Each block contains a pointer to the next block. If each block is 512 bytes and a disk address (pointer) requires 4 bytes, then the user sees blocks of 512-4 = 508 bytes. To create a new file, we simply create a new entry in the directory; the pointer is nil for an empty file, and the size field of an empty file is 0.

To add more data to an existing file, a free block is found, the data is written to it, and the block is linked to the end of the file. To read a file, we simply follow the pointers from block to block.
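Following the pointers can be sketched as below; the next_block table is a stand-in for the per-block pointers actually stored on disk:

```python
def read_chain(start, next_block):
    """Collect the blocks of a file by following each block's pointer
    to the next, stopping at the end-of-file marker (None here)."""
    chain = []
    b = start
    while b is not None:
        chain.append(b)      # one disk read per block
        b = next_block[b]    # the pointer stored inside block b
    return chain

# The 5-block example file from the text: 9 -> 16 -> 1 -> 10 -> 25.
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: None}
```

Here read_chain(9, next_block) walks 9, 16, 1, 10, 25 in order, which is exactly why direct access to the i-th block costs i disk reads.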

Advantages

1 Easy to find space for a new file or to add more data to an existing file.
2 The size of the file need not be declared when the file is created.
3 There is no external fragmentation.

Disadvantages

1 Seek time is greater.
2 The number of disk seeks required is greater.
3 Pointers use a larger percentage of the file's disk space.
4 Inefficient for direct-access files: to access the i-th block, i disk reads are required.
5 Not reliable: pointers may be lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster) and allocate clusters instead of blocks to a file. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk access time, and reduces the space wasted on pointers, since far fewer pointers are needed. Free-list management also becomes simpler. Disadvantage: internal fragmentation; more space is wasted when a cluster is only partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation of linked allocation is the use of a file allocation table (FAT). A section of disk at the beginning of each volume contains the FAT, which has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file; the entry for the last block holds a special end-of-file value, and an unused block is indicated by a table value of 0.

Advantages

1 Easy to find space for a new file or to add more data to an existing file.
2 Random access time is improved.

Disadvantage: the number of disk seeks is greater. The disk head must move to read the FAT and find the location of the block, then move to the location of the block itself.
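When the FAT is cached in memory, finding the i-th block of a file only walks the table, not the disk chain, which is why random access improves. A minimal sketch (the sentinel values and the tiny 10-block disk are invented for illustration):

```python
EOF, FREE = -1, 0  # illustrative sentinel values for FAT entries

def nth_block(fat, start, i):
    """Return the physical block holding logical block i of a file by
    following the chain of next-block numbers in the in-memory FAT."""
    b = start
    for _ in range(i):
        b = fat[b]
        if b == EOF:
            raise IndexError("past end of file")
    return b

# A file occupying blocks 2 -> 5 -> 7 on a tiny 10-block disk:
fat = [FREE] * 10
fat[2], fat[5], fat[7] = 5, 7, EOF
```

With this layout, nth_block(fat, 2, 2) follows 2 → 5 → 7 entirely in memory and only block 7 itself needs a disk read.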

3 Indexed allocation

Each file has its own index block, which is an array of disk-block addresses. The i-th entry in the index block points to the i-th block of the file.

The directory contains the address of the index block.

When the file is created, all pointers in the index block are set to nil.

To add more data to an existing file, a free block is found and the data is written to it. The address of the block is then put in the index block.

Advantages

1 Easy to find space for a new file or to add more data to an existing file.
2 Supports direct (random) access efficiently.
3 No external fragmentation.

Disadvantages

1 Seek time is greater.
2 The number of disk seeks required is greater.
3 Not reliable.
4 The index block must be kept in memory. If this memory is not available, we have to read the index block and then the desired data block, i.e., 2 disk accesses are required. To access a block near the end of a large file, we may need to read several index blocks before reaching the needed data block.
5 The pointer overhead of the index block is greater than that of linked allocation. For example, for a file of only one or two blocks, an entire index block is allocated just to store those one or two pointers.

Various mechanisms are used for the size of the index block:

1 Linked scheme: an index block is one disk block. It contains a small header with the name of the file and a set of the first 100 disk-block addresses; for large files, the last address points to another index block.

2 Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which in turn points to the file block. This approach can be continued to a third or fourth level.

3 Combined scheme

For example, say there are 15 pointers in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, an index block containing the addresses of the file's data blocks; the second points to a double indirect block; and the third points to a triple indirect block.
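The capacity of this combined scheme is easy to work out. Assuming the classic layout just described (12 direct pointers plus one single, one double, and one triple indirect pointer), the maximum file size is:

```python
def max_file_size(block_size, ptr_size, direct=12):
    """Largest file the combined scheme can address: direct blocks,
    plus one single, one double, and one triple indirect block."""
    ptrs = block_size // ptr_size                  # pointers per index block
    blocks = direct + ptrs + ptrs ** 2 + ptrs ** 3
    return blocks * block_size                     # bytes

# With 4 KB blocks and 4-byte pointers, each index block holds 1024
# pointers, so the limit is (12 + 1024 + 1024**2 + 1024**3) * 4096 bytes,
# roughly 4 TB.
```

The triple indirect term dominates: almost all of the addressable blocks come from the three-level tree hanging off the last pointer.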

                                                                                  Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

The free-space list can be implemented as:

1 Bit vector (bit map) 2 Linked list 3 Grouping 4 Counting 5 Space maps

1 Bit vector (bit map): Each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000…
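Scanning the map for a free block is then a simple bit search. A sketch using the example map above (a real implementation would check a whole word of bits at a time rather than one bit):

```python
def first_free(bitmap):
    """Return the number of the first free block (bit = 1) in a
    free-space bit map, or None if no block is free."""
    for block, bit in enumerate(bitmap):
        if bit == "1":
            return block
    return None

# Blocks 2, 3, 4, 5 and 8 free: first_free("001111001000") is 2.
```

The same scan extended to count consecutive 1-bits is how contiguous runs of free blocks are found quickly.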

Advantages: simple, and makes it easy to find contiguous free blocks.

Disadvantages

1 The bit map requires extra space.
2 The bit map is inefficient if kept on disk. The bit map of a smaller disk can be kept in main memory, but it may not be possible to keep the bit map of a larger disk there.
3 The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.

2 Linked list: The free-space list is implemented by linking all the free disk blocks together, with a pointer to the first free block stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on. Advantage: no waste of space. Disadvantage: we cannot get contiguous space easily; to traverse the free-space list we must read each block, which requires substantial I/O time.

3 Grouping: The first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last contains the addresses of another n free blocks, and so on. Advantage: a large number of free blocks can be found quickly.

4 Counting: The free-space list is maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it.

Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only when counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.
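Compressing the earlier example free list into counting-scheme entries can be sketched as follows (the tuple representation is illustrative, not from any real file system):

```python
def to_counting(free_blocks):
    """Turn a sorted list of free block numbers into counting-scheme
    entries: (address of first free block, length of the free run)."""
    entries = []
    for b in free_blocks:
        if entries and entries[-1][0] + entries[-1][1] == b:
            start, n = entries[-1]
            entries[-1] = (start, n + 1)   # extend the current run
        else:
            entries.append((b, 1))         # start a new run
    return entries

# Blocks 2, 3, 4, 5 and 8 free: two entries, (2, 4) and (8, 1).
```

Five free blocks collapse to two entries here, which shows when the scheme pays off: long contiguous runs compress well, isolated free blocks do not.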

5 Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs, and each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory, updates it, and finally updates the free-space list on disk.

Q) Synchronization hardware

Ans The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are: 1 the TestAndSet() instruction 2 the Swap() instruction

1 The TestAndSet() instruction atomically reads the old value of a Boolean variable, sets the variable to true, and returns the old value.

Solution using TestAndSet(): the shared variable lock is initialized to false. Each process spins, calling TestAndSet(&lock) until it returns false, then enters its critical section and resets lock to false on exit.
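The original figure is not reproduced in these notes, but the behaviour of a TestAndSet()-based spin lock can be sketched in Python. The hardware instruction's atomicity is imitated here with an internal guard lock, since Python cannot issue the real instruction:

```python
import threading

class TestAndSetLock:
    """Software imitation of a TestAndSet()-based spin lock."""

    def __init__(self):
        self._guard = threading.Lock()
        self.lock = False

    def test_and_set(self):
        with self._guard:        # stands in for hardware atomicity
            old, self.lock = self.lock, True
            return old

    def acquire(self):
        while self.test_and_set():
            pass                 # busy-wait: lock was already held

    def release(self):
        self.lock = False

lock = TestAndSetLock()
count = 0

def worker():
    global count
    for _ in range(1000):
        lock.acquire()   # entry section
        count += 1       # critical section
        lock.release()   # exit section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# count ends at 4000: mutual exclusion prevented lost updates
```

The busy-wait in acquire() is exactly the spinning the notes describe: CPU time is burned while the lock is held by another process.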

2 The Swap() instruction atomically exchanges the contents of its two arguments.

Solution using Swap(): the shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

Both the TestAndSet() and Swap() instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee of the relative rates of the processes, a very fast process could theoretically release the lock, whip through its remainder section, and re-lock the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The figure below illustrates a solution using TestAndSet() that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

Bounded-waiting mutual exclusion with TestAndSet()

On exit, a process first looks, in order (starting with the next process on the list), for a process that has been waiting; if it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process while continuing to block all the others.

Q) What is a semaphore?

Ans A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1 wait() 2 signal()

Wait primitive:

wait(S) {
    while (S <= 0)
        ; // do nothing
    S--;
}

Signal primitive:

signal(S) {
    S++;
}

Each process that wishes to use a resource performs a wait() operation; when a process releases a resource, it performs a signal() operation.

Semaphore usage: 1 A binary semaphore (or mutex lock) can take the values 0 or 1.

2 A counting semaphore can take any integer value and is used to count the remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources; when the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the semaphore with a signal() call. For example, suppose there are 3 resources and 4 processes, so the counting semaphore starts at 3. If process P1 requires a resource, it performs a wait() operation; now the semaphore = 2. If P2 requires a resource, it performs wait(); now the semaphore = 1. If P3 requires a resource, it performs wait(); now the semaphore = 0. If P4 requires a resource and performs wait(), P4 waits until a resource is available.
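The 3-resources/4-processes example maps directly onto Python's threading.Semaphore. A non-blocking acquire stands in for each process's wait(), so the fourth attempt reports failure instead of blocking:

```python
import threading

pool = threading.Semaphore(3)   # three identical resources

# P1, P2, P3 each get a resource; P4's wait() would block, which a
# non-blocking acquire reports as False.
got = [pool.acquire(blocking=False) for _ in range(4)]
# got == [True, True, True, False]

pool.release()                  # some process signals: frees a resource
p4_got_it = pool.acquire(blocking=False)  # now P4 can proceed
```

In real code the acquires would block rather than fail, but the counting behaviour, 3, 2, 1, 0, then wait, is identical.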

3 Semaphores can also be used to synchronize the order of certain operations.

Semaphore implementation: While a process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). A semaphore implemented this way is called a spin lock, as the process spins while waiting. As an example of using a semaphore for ordering, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First we create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code

S1;
signal(synch);

and in process P2 we insert the code

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal(). Each semaphore has an integer value and maintains a queue of waiting processes.
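The same ordering trick can be demonstrated with Python threads, where release() plays the role of signal() and acquire() plays wait():

```python
import threading

synch = threading.Semaphore(0)   # initialized to 0, as in the text
events = []

def p1():
    events.append("S1")          # statement S1
    synch.release()              # signal(synch)

def p2():
    synch.acquire()              # wait(synch): blocks until P1 signals
    events.append("S2")          # statement S2

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start()                       # start P2 first to show it really waits
t1.start()
t1.join()
t2.join()
# events is ["S1", "S2"] no matter how the threads are scheduled
```

Starting P2 first makes the point: even when P2 is scheduled before P1, the zero-initialized semaphore forces S1 to happen before S2.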

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; it is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue to the ready state).

Problems due to semaphores: 1 Deadlocks: a deadlock occurs when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes.

2 Starvation: one or more processes remain blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithm for adding processes to the waiting queue or for selecting the one to be removed by the signal() call; if a LIFO queue is chosen, the first process that starts waiting may never get a turn.

3 Priority inversion: Let processes L, M, and H have priorities L < M < H. Suppose process H is waiting for a resource held by process L, so L is running. When process M arrives, it pre-empts L, and M runs. Thus M runs before H even though M has lower priority than H. To avoid this, L temporarily inherits H's priority so that M cannot pre-empt it (priority inheritance).

Q) Classical problems of synchronization

1 The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2 The Readers-Writers Problem
3 The Dining Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1 The Bounded-Buffer Problem: Consider

n buffers, each holding one item; a producer process, which creates items (one at a time); and a consumer process, which consumes them (one at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is needed. For that we use 3 semaphores:

1 empty = n (empty counts the number of empty buffers)
2 full = 0 (full counts the number of filled buffers)
3 mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:

do {
    ...
    // produce an item
    ...
    wait(empty);
    wait(mutex);
    ...
    // add the item to the buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:

do {
    wait(full);
    wait(mutex);
    ...
    // remove an item from the buffer
    ...
    signal(mutex);
    signal(empty);
    ...
    // consume the item
    ...
} while (1);
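The same structure can be run for real with Python threads; the three semaphores play exactly the roles described above (the buffer size and item count are arbitrary choices for the demo):

```python
import collections
import threading

N = 5                                 # number of buffer slots
buffer = collections.deque()
empty = threading.Semaphore(N)        # counts empty slots
full = threading.Semaphore(0)         # counts filled slots
mutex = threading.Lock()              # mutual exclusion on the buffer
consumed = []

def producer(items):
    for item in items:
        empty.acquire()               # wait(empty)
        with mutex:                   # wait(mutex) ... signal(mutex)
            buffer.append(item)       # add item to buffer
        full.release()                # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()                # wait(full)
        with mutex:
            item = buffer.popleft()   # remove an item from buffer
        empty.release()               # signal(empty)
        consumed.append(item)         # consume the item

p = threading.Thread(target=producer, args=(range(20),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start()
p.join(); c.join()
# consumed == [0, 1, ..., 19]; the buffer never held more than N items
```

Note the ordering discipline: the producer waits on empty before mutex (and the consumer on full before mutex); reversing those waits can deadlock with the buffer full or empty.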

2 The Readers-Writers Problem: A data item, such as a file, is shared among several processes, each of which is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (i.e., it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: no reader waits merely because a writer is waiting; writers priority: if a writer is waiting to access the data, no new readers may start reading.

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses two semaphores and a counter:
1 readcount = 0 (an integer counting the number of processes currently reading the shared data)
2 mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3 wrt = 1 (a semaphore shared by both readers and writers)

wrt provides mutually exclusive access to the shared data.

Reader process:

wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
...
// reading is performed
...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

Writer process:

do {
    wait(wrt);
    ...
    // writing is performed
    ...
    signal(wrt);
} while (TRUE);
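A runnable version of the readers-priority solution, with Python semaphores standing in for wait() and signal() (the shared data and thread counts here are arbitrary demo choices):

```python
import threading

mutex = threading.Semaphore(1)   # protects read_count
wrt = threading.Semaphore(1)     # exclusive access to the shared data
read_count = 0
shared = {"value": 0}
seen = []

def reader():
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()
    seen.append(shared["value"])  # reading is performed
    mutex.acquire()
    read_count -= 1
    if read_count == 0:
        wrt.release()            # last reader lets writers back in
    mutex.release()

def writer():
    wrt.acquire()
    shared["value"] += 1         # writing is performed
    wrt.release()

threads = [threading.Thread(target=writer) for _ in range(5)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# all 5 writes complete, and each reader saw a consistent value 0..5
```

Only the first reader in and the last reader out touch wrt, which is exactly how a whole group of readers shares the data while writers are excluded.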

3 The Dining Philosophers Problem: n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining philosophers problem is an example of a large class of concurrency-control problems; it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5]; // initially all values are 1

Philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    ...
    // eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    ...
    // think
    ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (for example, if all five philosophers pick up their left chopstick at the same moment).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), the CPUs are not all equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, and so on. Unlike an SMP system, these systems lack the flexibility to assign processes to the least-loaded CPU.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraints.

Two kinds of real-time systems have evolved:

1 A hard real-time system can guarantee that the response requirement will be met under all conditions.
2 A soft real-time system cannot guarantee that the response requirement is met under all conditions. Examples: digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control; that is, decision-making is spread across the different computers in the system, so that failures of individual computers or of the network do not cripple operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. The users need not know where their programs are being run or where their files are located; all of that is handled automatically by the operating system.

                                                                                  Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

DESKTOP SYSTEMS: A desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. Such devices have very little memory.


CLUSTERED SYSTEMS: A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                                                                                  Q) Modern operating system

New developments in hardware, in applications, and in security threats have led to the development of the modern operating system.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, Internet and Web access, and client-server computing.

Developments in modern operating systems can be categorized as follows:

1 Microkernel architecture assigns only a few functions to the kernel (IPC, CPU scheduling etc). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and is well suited to distributed environments.

2 Multithreading a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3 Symmetric multiprocessing Nowadays there are multiple processors in a single system. An SMP operating system provides greater efficiency when there are multiple processors. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                                                                  SMP schedules processes or threads across all of the processors

                                                                                  Advantages

1 Increased performance: if some portion of a program can be run in parallel, the performance of an SMP OS increases.

2 Increased availability: as all processors can perform the same task, if one processor fails the others keep working.

3 Incremental growth: one can add a new processor to increase performance.

4 Scaling: cost can be scaled with the number of processors used.

                                                                                  4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5 Object-oriented design adds modularity to the kernel. The OS can be customized without affecting system integrity.

Q) Linux Operating system It is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. Linux is a multi-user, multitasking system. Its main design goals are speed, efficiency and standardization. Components of Linux System:


                                                                                  Linux Operating System has primarily three components as shown in the below diagram

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system and interacts directly with hardware. The kernel hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs. These are used by application programs to access the Kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks like updating log files, accepting login requests from terminals, etc.

Kernel Modules Sections of kernel code that can be compiled, loaded and unloaded independently of the rest of the kernel. A kernel module may typically implement a device driver, a file system or a networking protocol.

There are three components to Linux module support:
1 Module management: supports loading modules into memory and lets them talk to the rest of the kernel.
2 Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3 Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode
Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space and does not require any context switch, and hence is very efficient and fast. The kernel runs each process and provides system services to processes, including protected access to hardware. Code which is not required to run in kernel mode is placed in system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code. User programs and utilities use system libraries to access kernel functions for low-level system tasks.

Basic Features
Following are some of the important features of Linux Operating System:

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on any kind of hardware platform.

Open Source - Linux source code is freely available and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multiuser system, meaning multiple users can access system resources like memory, RAM and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system. It can be used to do various types of operations, call application programs, etc.


Security - Linux provides user security using authentication features like password protection, controlled access to specific files and encryption of data.

Architecture Linux system architecture consists of the following layers:

                                                                                  Hardware layer - Hardware consists of all peripheral devices (RAM HDD CPU etc)

                                                                                  Kernel - Core component of Operating System interacts directly with hardware provides low level services to upper layer components

Shell - An interface to the kernel. Takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP operating system
1 Extensibility - layered architecture

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.

2 Portability - XP can be moved from one hardware architecture to another with relatively few changes.

Written in C and C++. Processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3 Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.

4 Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.

5 Performance - XP subsystems can communicate with one another via high-performance message passing.

Preemption of low-priority threads enables the system to respond quickly to external events. Designed for symmetric multiprocessing.

6 International support - supports different locales via the national language support (NLS) API.

XP Architecture: 1 Layered system of modules 2 Protected mode - hardware abstraction layer (HAL), kernel, executive 3 User mode - collection of subsystems

a Environmental subsystems emulate different operating systems. b Protection subsystems provide security functions.

                                                                                  XP architecture


Q) Windows network operating system Example: the Windows 2000 operating system.

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin and Greg Gagne, Operating System Concepts, Reading: Addison-Wesley.



File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines or records. A file has a structure which depends on its type.

To add more data to an existing file, a free block is found, data is written to it, and the block is linked to the end of the file. To read a file, we read block by block, following the pointers.

Advantages
1 Easy to find space for a new file, or to add more data to an existing file.
2 The size of the file need not be declared when the file is created.
3 There is no external fragmentation.

Disadvantages
1 Seek time is more.
2 The number of disk seeks required is more.
3 Pointers use a larger percentage of the file's disk space.
4 Inefficient for direct-access files: to access the ith record, i disk reads are required.
5 Not reliable if pointers are lost or damaged.

A solution to disadvantage 3 is to group multiple blocks into clusters (for example, 4 blocks = 1 cluster). We allocate clusters instead of blocks to a file. This makes logical-to-physical block mapping simple, improves disk throughput by reducing disk access time, reduces the space wasted on pointers (as only a few pointers are needed), and makes free-list management simpler. Disadvantage: internal fragmentation; more space is wasted when a cluster is partially full.

A solution to disadvantage 5 is to use a doubly linked list, or to store the file name and relative block number in each block.

A variation on linked allocation is the use of a file-allocation table (FAT). A section of disk at the beginning of each volume contains the FAT. The FAT has one entry for each disk block and is indexed by block number. Each entry contains the block number of the next block of the file. The last block has a special end-of-file value as its table entry, and an unused block is indicated by a table value of 0.

                                                                                    Advantages

1 Easy to find space for a new file, or to add more data to an existing file. 2 Random access time is improved.

Disadvantage: the number of disk seeks is more. The disk head must move to read the FAT and find the location of a block, then move to the location of the block itself.

                                                                                    3 Indexed allocation

Each file has its own index block. The index block is an array of disk-block addresses: the ith entry in the index block points to the ith block of the file.

                                                                                    The directory contains the address of the index block

                                                                                    When the file is created all pointers in the index block are set to nil

                                                                                    To add more data to existing file free block is found and data is written to it The address of block is put in the index block

                                                                                    Advantages

1 Easy to find space for a new file, or to add more data to an existing file. 2 Supports direct (random) access efficiently. 3 No external fragmentation.

                                                                                    Disadvantage

1 Seek time is more.
2 The number of disk seeks required is more.
3 Not reliable.
4 The index block must be kept in memory. If this memory is not available, then we have to read the index block and then the desired data block, i.e. 2 disk accesses are required. To access a block near the end of the file, we need to read all the index blocks before reading the needed data block.
5 The pointer overhead of the index block is greater than that of linked allocation. For example, if we have a file of only one or two blocks, an entire index block is allocated just to store these one or two pointers.

Various mechanisms are used for the size of the index block:

1 Linked scheme: the size of an index block is one disk block. The index block contains a small header with the name of the file and a set of the first 100 disk-block addresses. The next address contains the address of another index block, for large files.

2 Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

                                                                                    3 combined scheme

For example, say there are 15 pointers of the index block in the file's inode. The first 12 of these pointers contain the addresses of blocks that hold file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing the addresses of the file's data blocks.

The second points to a double indirect block, and the third points to a triple indirect block, as shown below.

                                                                                    Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                                                                    The free space list can be implemented as

                                                                                    1 Bit vector (or) Bit map2 Linked list3 Grouping4 Counting 5 Space maps

1 Bit vector (or bit map): Each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. The free-space bit map would be 001111001000.

                                                                                    Advantages simple and easy to get contiguous files

                                                                                    Disadvantages

1 The bit map requires extra space.
2 The bit map is inefficient if kept on disk. Bit maps of smaller disks can be kept in main memory, but it may not be possible to keep the bit map of a larger disk in main memory.
3 The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.

2 Linked List: The free-space list is implemented by linking all the free disk blocks together. A pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: cannot get contiguous space easily; to traverse the free-space list we must read each block, which requires more I/O time.

3 Grouping: The first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last block contains the addresses of another n free blocks, and so on.
Advantage: a large number of free blocks can be found quickly.
4 Counting: The free-space list can be maintained by keeping the address of the first free block and the number (n) of free contiguous blocks that follow it.

Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if the count is generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion and search.

5 Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks. The space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space from a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Q) Synchronization Hardware
The hardware solution to synchronization is to provide atomic operations. These operations execute as a single instruction, without interruption. Two such operations are: 1 the TestAndSet() hardware instruction 2 the Swap() instruction

1 The TestAndSet() instruction is defined as shown below.

Solution using TestAndSet: the shared variable lock is initialized to false.

                                                                                    2 Swap() instruction definition is as follows

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If there are multiple processes trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn in the critical section. (Since there is no guarantee as to the relative rates of the processes, a very fast process could theoretically release the lock, whip through its remainder section and re-lock the lock before a slower process got a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The figure below illustrates a solution using TestAndSet that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

                                                                                    Bounded-waiting mutual exclusion with TestAndSet( )

On exit, it first looks in order (starting with the next process on the list) for a process that has been waiting, and if it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby allowing that specific process into the critical section while continuing to block all the others.

Q) What is a semaphore?
Ans A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1 wait() 2 signal(). The two primitives are defined as:

wait(S) {
    while (S <= 0)
        ;  // do nothing
    S--;
}

signal(S) {
    S++;
}

                                                                                    Each process that wishes to use a resource performs wait() operation When a process releases a resource it performs signal() operation

Semaphore Usage
1 Binary semaphore (or mutex lock): can take the values 0 or 1.

2 Counting semaphore: can take any integer value. A counting semaphore is used to count the number of remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is greater than 0, a process can enter the critical section and use one of the resources; when the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal call. For example, suppose there are 3 resources and 4 processes, so the value of the counting semaphore = 3. If process P1 requires a resource, it performs a wait() operation; now the counting semaphore = 2. If process P2 requires a resource, it performs a wait() operation; now the counting semaphore = 1. If process P3 requires a resource, it performs a wait() operation; now the counting semaphore = 0. If process P4 requires a resource, it performs a wait() operation; P4 waits until a resource is available.

                                                                                    3 Semaphores can also be used to synchronize certain operations

Semaphore Implementation
When a process is in its critical section, any other process that tries to enter the critical section must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process spins while waiting.

As an example of using a semaphore for synchronization, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First we create a semaphore named synch that is shared by the two processes and initialize it to zero. Then in process P1 we insert the code:

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal. Each semaphore has an integer value and maintains a queue of waiting processes.

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e. the process is moved from the waiting queue to the ready state).

Problems due to semaphores
1 Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes, as shown below.

2 Starvation: one or more processes remain blocked forever and never get a chance to enter the critical section. For example, this can happen if we do not specify the algorithms for adding processes to the waiting queue, or for selecting the one to be removed by the signal() call. If a LIFO queue is chosen, then the first process that starts waiting may never get a chance.

3 Priority inversion: Let processes L, M, H have priorities L < M < H. Process H is waiting for a resource that is held by process L, so process L is running. When process M enters, it preempts process L, and M is running now. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority so that M does not preempt L.
=====================================================================
Q) Classical problems of synchronization

1 The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2 The Readers-Writers Problem
3 The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1 The Bounded-Buffer Problem. Consider:

n buffers, each of which holds one item; a producer process which creates the items (1 at a time); and a consumer process which processes them (1 at a time).

The producer process cannot produce an item if all the buffers are full, and the consumer process cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:

do {
    // produce an item
    wait(empty);
    wait(mutex);
    // add item to buffer
    signal(mutex);
    signal(full);
} while (1);

Consumer process:

do {
    wait(full);
    wait(mutex);
    // remove an item from buffer
    signal(mutex);
    signal(empty);
    // consume the item
} while (1);
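A runnable sketch of the same scheme, using Python's threading.Semaphore in place of the pseudocode's wait()/signal() calls (the buffer contents, item values, and thread setup are illustrative, not part of the classical statement):

```python
import threading
from collections import deque

N = 5                               # number of buffer slots
buffer = deque()
empty = threading.Semaphore(N)      # counts empty slots
full = threading.Semaphore(0)       # counts filled slots
mutex = threading.Semaphore(1)      # mutual exclusion on the buffer

consumed = []

def producer(items):
    for item in items:
        empty.acquire()             # wait(empty)
        mutex.acquire()             # wait(mutex)
        buffer.append(item)         # add item to buffer
        mutex.release()             # signal(mutex)
        full.release()              # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()              # wait(full)
        mutex.acquire()             # wait(mutex)
        item = buffer.popleft()     # remove an item from buffer
        mutex.release()             # signal(mutex)
        empty.release()             # signal(empty)
        consumed.append(item)       # consume the item

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start()
p.join(); c.join()
```

With one producer and one consumer, the items arrive in FIFO order even though the producer may run several slots ahead of the consumer (bounded by N).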

2. The Readers-Writers Problem. A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (i.e., it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

Readers priority: a new reader need not wait just because a writer is waiting. Writers priority: if a writer is waiting to access the data, no new readers may start reading.

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses a shared counter and two semaphores:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:

do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    // reading is performed
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);
} while (TRUE);

Writer process:

do {
    wait(wrt);
    // writing is performed
    signal(wrt);
} while (TRUE);
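Assuming the same three shared variables, the readers-priority scheme can be sketched in Python (the shared list, the thread counts, and the read log are illustrative additions for demonstration):

```python
import threading

readcount = 0
mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # exclusive access for writers
shared = []                      # the shared data
log = []                         # records what each reader saw

def reader(i):
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:           # first reader locks writers out
        wrt.acquire()
    mutex.release()
    log.append(("read", i, list(shared)))   # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:           # last reader lets writers back in
        wrt.release()
    mutex.release()

def writer(value):
    wrt.acquire()
    shared.append(value)         # writing is performed
    wrt.release()

threads = [threading.Thread(target=writer, args=(v,)) for v in (1, 2)]
threads += [threading.Thread(target=reader, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
```

Note that while any reader is active, wrt is held, so a writer cannot interleave with readers; readers, however, freely overlap with each other.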

3. The Dining-Philosophers Problem. n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5]; // initially all values are 1

Philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    // think
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock: if all five philosophers pick up their left chopsticks at the same time, each waits forever for the right one.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems have processors dedicated to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, and so on. Unlike an SMP system, these systems lack the flexibility to assign processes to the least-loaded CPU.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within a given time constraint.

                                                                                    Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions. 2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples:

Digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control: it spreads its decision-making actions across the different computers in the system, so that the failure of individual computers or of the network does not cripple its operation.

A distributed operating system appears to users as a uniprocessor system even though it has multiple processors. Users may not know where their programs are run or where their files are located; that is all handled automatically by the operating system.

Distributed systems allow applications to run on several processors at the same time, thus requiring more complex processor-scheduling algorithms.

                                                                                    DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location Modern desktop systems support multiprogramming Common examples are Linux FreeBSD Windows 8 and the Macintosh operating system

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android OS. Handheld systems have very little memory.


                                                                                    CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system The components of a cluster are usually connected to each other through fast local area networks each node (computer used as a server) running its own instance of an operating system

                                                                                    Q) Modern operating system

New developments in hardware, applications, and security threats led to the development of modern operating systems.

New hardware developments: increased machine speed, high-speed networks, and increased size and variety of memory devices.

New application developments: multimedia applications, internet and web access, and client-server computing.

                                                                                    Developments in modern operating system can be categorized as

1. Microkernel architecture assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and it is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as there are now multiple microprocessors in a single system, an SMP operating system provides greater efficiency when there are multiple processors. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                                                                    SMP schedules processes or threads across all of the processors

                                                                                    Advantages

1. Increased performance: if some portion of a program can be run in parallel, the performance of an SMP OS increases.

2. Increased availability: as all processors can perform the same tasks, if one processor fails, the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can depend on the number of processors used.

                                                                                    4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5. Object-oriented design adds modularity to the kernel. The OS can be customized without affecting system integrity.

Q) Linux Operating System. Linux is open source: its source code is freely available and it is free to use. Linux was designed with UNIX compatibility in mind. Linux is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization.

Components of Linux System


                                                                                    Linux Operating System has primarily three components as shown in the below diagram

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system and interacts directly with the hardware, hiding low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs that application programs use to access the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks such as updating log files or accepting login requests from terminals.

Kernel Modules: sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support: 1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel. 2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available. 3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode
Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require a context switch, so it is very efficient and fast. The kernel runs each process and provides system services to processes, including protected access to hardware. Code that does not need to run in kernel mode lives in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code; they use the system libraries to ask the kernel to carry out low-level tasks.

Basic Features
The following are some of the important features of the Linux operating system:

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on many kinds of hardware platforms.

Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, which is continuously evolving.

Multi-User - Linux is a multi-user system: multiple users can access system resources such as memory or application programs at the same time.

Multiprogramming - Linux is a multiprogramming system: multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system, perform various types of operations, call application programs, etc.


Security - Linux provides user security using authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture
Linux system architecture consists of the following layers:

                                                                                    Hardware layer - Hardware consists of all peripheral devices (RAM HDD CPU etc)

Kernel - Core component of the operating system; interacts directly with hardware and provides low-level services to upper-layer components.

Shell - An interface to the kernel. Takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System
1. Extensibility - layered architecture:

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. This modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.
4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
5. Performance - XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.
6. International support - XP supports different locales via the national language support (NLS) API.

XP Architecture
1. Layered system of modules.
2. Protected mode - hardware abstraction layer (HAL), kernel, executive.
3. User mode - collection of subsystems:
a. Environmental subsystems emulate different operating systems.
b. Protection subsystems provide security functions.



Q) Windows Network Operating System
Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.




10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure that depends on its type.

                                                                                      Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Random access time is improved.

Disadvantage: the number of disk seeks is high. The disk head must move to read the FAT to find the location of a block, and then move to the location of the block itself.

                                                                                      3 Indexed allocation

Each file has its own index block, which is an array of disk-block addresses. The ith entry in the index block points to the ith block of the file.

                                                                                      The directory contains the address of the index block

                                                                                      When the file is created all pointers in the index block are set to nil

To add more data to an existing file, a free block is found and the data is written to it. The address of the block is then put in the index block.

                                                                                      Advantages

1. Easy to find space for a new file or to add more data to an existing file.
2. Supports direct (random) access efficiently.
3. No external fragmentation.

                                                                                      Disadvantage

1. Seek time is more.
2. The number of disk seeks required is more.
3. Not reliable: losing the index block loses access to the whole file.
4. The index block must be kept in memory. If this memory is not available, then we have to read the index block and then the desired data block, i.e., 2 disk accesses are required. To access a block near the end of the file, we may need to read all the index blocks to reach the needed data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, with a file of only one or two blocks, an entire index block is allocated just to store these one or two pointers.

Various mechanisms are used for the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header with the name of the file and a set of the first 100 disk-block addresses. For large files, the last address contains the address of another index block.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

                                                                                      3 combined scheme

For example, say there are 15 pointers in the index block in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing the addresses of the file's data blocks.

The second points to a double indirect block and the third to a triple indirect block (the original diagram is omitted here).
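As a worked example of the combined scheme, assuming a 4 KB block size and 4-byte block addresses (illustrative values, not stated in the notes), the maximum file size can be computed:

```python
BLOCK = 4 * 1024          # block size in bytes (assumed)
PTR = 4                   # size of one block address in bytes (assumed)
PER_BLOCK = BLOCK // PTR  # addresses that fit in one index block -> 1024

direct = 12 * BLOCK                 # 12 direct pointers
single = PER_BLOCK * BLOCK          # one single indirect block
double = PER_BLOCK ** 2 * BLOCK     # one double indirect block
triple = PER_BLOCK ** 3 * BLOCK     # one triple indirect block

max_file = direct + single + double + triple
print(max_file)   # about 4.4 TB with these parameters
```

The triple indirect block dominates: almost all of the maximum file size comes from the 1024^3 data-block addresses it can reach.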

                                                                                      Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                                                                      The free space list can be implemented as

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000.

Advantages: simple, and it is easy to find contiguous blocks for a file.

Disadvantages:
1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a larger disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1 TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks may be scattered all over the disk.
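A minimal sketch of a bit-map free-space manager, using the example blocks above (the class and method names are illustrative):

```python
class BitmapFreeSpace:
    """Free-space bit map: bit i is 1 when block i is free (sketch)."""

    def __init__(self, nblocks, free_blocks):
        self.bits = [0] * nblocks       # all blocks start allocated
        for b in free_blocks:
            self.bits[b] = 1            # mark the given blocks free

    def allocate(self):
        """Return the first free block, marking it allocated."""
        for i, bit in enumerate(self.bits):
            if bit:
                self.bits[i] = 0
                return i
        raise RuntimeError("disk full")

    def free(self, block):
        self.bits[block] = 1

    def as_string(self):
        return "".join(map(str, self.bits))

# Disk of 12 blocks where blocks 2, 3, 4, 5, and 8 are free,
# matching the example in the notes.
fs = BitmapFreeSpace(12, free_blocks=[2, 3, 4, 5, 8])
```

Here fs.as_string() starts as "001111001000", and allocate() returns block 2 first because the linear scan naturally favours low-numbered (and hence often contiguous) blocks.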

2. Linked list: the free-space list is implemented by linking all the free disk blocks together. A pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: contiguous space cannot be found easily; to traverse the free-space list we must read each block, which requires more I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last block contains the addresses of another n free blocks, and so on.
Advantage: the addresses of a large number of free blocks can be found quickly.
4. Counting: the free-space list can be maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it.

Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space than a single address, but the overall list is shorter, provided the counts are generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Q) Synchronization Hardware
The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. the TestAndSet() hardware instruction
2. the Swap() instruction

1. The TestAndSet() instruction (definition figure omitted in these notes) atomically reads the old value of a boolean variable, sets the variable to true, and returns the old value.

Solution using TestAndSet: the shared variable lock is initialized to false. Each process repeatedly calls TestAndSet(&lock) until it returns false, then enters its critical section, and finally sets lock back to false.

2. The Swap() instruction (definition figure omitted) atomically exchanges the contents of two boolean variables.

Solution using Swap: a shared boolean variable lock is initialized to false, and each process has a local boolean variable key. A process sets key to true and repeatedly swaps key with lock until key becomes false, then enters its critical section, and sets lock back to false when leaving.

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee of the relative speeds of the processes, a very fast process could release the lock, whip through its remainder section, and re-acquire the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The figure below illustrates a solution using TestAndSet that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes contending for critical sections.

                                                                                      Bounded-waiting mutual exclusion with TestAndSet( )
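The figure can be reconstructed as follows (the standard textbook structure, written here as enter/leave routines for process i; lock and waiting[] are shared, and test_and_set is the atomic hardware instruction):

```c
#include <stdbool.h>

#define N 5                     /* number of contending processes */

bool test_and_set(bool *target) {   /* atomic in hardware */
    bool rv = *target;
    *target = true;
    return rv;
}

bool waiting[N];                /* all false initially */
bool lock = false;

void enter_region(int i) {
    waiting[i] = true;
    bool key = true;
    while (waiting[i] && key)   /* proceed when the lock was free OR
                                   a leaver cleared waiting[i]      */
        key = test_and_set(&lock);
    waiting[i] = false;
    /* critical section */
}

void leave_region(int i) {
    int j = (i + 1) % N;        /* scan for the next waiter */
    while (j != i && !waiting[j])
        j = (j + 1) % N;
    if (j == i)
        lock = false;           /* nobody waiting: free the lock   */
    else
        waiting[j] = false;     /* hand the lock to process j
                                   without ever unlocking it       */
}
```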

On exit, a process first looks, in order (starting with the next process on the list), for a process that has been waiting; if it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby allowing that specific process into the critical section while continuing to block all the others.

Q) What is a semaphore?
Ans A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1. wait() 2. signal()

Wait primitive:
wait(S) {
    while (S <= 0)
        ;   // do nothing
    S--;
}

Signal primitive:
signal(S) {
    S++;
}

                                                                                      Each process that wishes to use a resource performs wait() operation When a process releases a resource it performs signal() operation

Semaphore Usage
1. A binary semaphore (or mutex lock) can take the values 0 (or) 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the number of remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore > 0, a process can enter the critical section and use one of the resources. When the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal() call.
For example, suppose there are 3 resources and 4 processes, so the value of the counting semaphore = 3.
If process P1 requires a resource, it performs wait(); now the counting semaphore = 2.
If process P2 requires a resource, it performs wait(); now the counting semaphore = 1.
If process P3 requires a resource, it performs wait(); now the counting semaphore = 0.
If process P4 requires a resource, it performs wait(); P4 waits until a resource is available.
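The bookkeeping in this example can be sketched as below. This is a single-threaded illustration only: a real counting semaphore makes wait/signal atomic and blocks the caller instead of returning a failure code.

```c
/* Counting-semaphore bookkeeping, mirroring the 3-resource example. */
typedef struct {
    int value;                  /* number of free resources */
} csem;

/* Returns 1 if a resource was acquired, 0 if the caller would block. */
int csem_wait(csem *s) {
    if (s->value <= 0)
        return 0;               /* P4's case: must wait */
    s->value--;
    return 1;
}

void csem_signal(csem *s) {
    s->value++;                 /* release one resource */
}
```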

                                                                                      3 Semaphores can also be used to synchronize certain operations

Semaphore Implementation
When a process is in its critical section, any other process that tries to enter the critical section must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spinlock, as the process spins while waiting.

Semaphores can also be used to order operations. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

                                                                                      First we create a semaphore named synch that is shared by the two processes and initialize it to zero Then in process P1 we insert the code

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal(). Each semaphore has an integer value and maintains a queue of waiting processes.

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; it is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue to the ready state).
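A sketch of this blocking implementation is below. The block()/wakeup() functions are stubs standing in for the real scheduler calls, and a negative value counts the processes queued on the semaphore.

```c
/* Semaphore with a waiting queue instead of busy waiting.
 * block()/wakeup() are stand-ins for real scheduler calls. */
int blocked = 0;                    /* processes currently asleep */

void block(void)  { blocked++; }    /* suspend the caller */
void wakeup(void) { blocked--; }    /* resume one waiter  */

typedef struct {
    int value;                      /* negative => |value| waiters */
} bsem;

void bsem_wait(bsem *s) {
    s->value--;
    if (s->value < 0)
        block();                    /* enqueue caller and sleep */
}

void bsem_signal(bsem *s) {
    s->value++;
    if (s->value <= 0)
        wakeup();                   /* dequeue one waiter */
}
```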

Problems due to semaphores
1. Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes, as shown below.

2. Starvation: in starvation, one or more processes get blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithm for adding processes to the waiting queue, or for selecting the one to be removed by the signal() call. If a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M, H have priorities L < M < H. Process H is waiting for a resource that is held by process L, so process L is running. When process M enters, it pre-empts process L, and M is running now. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority (priority inheritance), so that M does not pre-empt L.
=====================================================================
Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem
Consider: n buffers, each of which holds one item; a producer process which creates the items (1 at a time); and a consumer process which consumes them (1 at a time).

The producer process cannot produce an item if all the buffers are full, and the consumer process cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:
do {
    ...
    // produce an item
    ...
    wait(empty);
    wait(mutex);
    ...
    // add the item to the buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:
do {
    wait(full);
    wait(mutex);
    ...
    // remove an item from the buffer
    ...
    signal(mutex);
    signal(empty);
    ...
    // consume the item
    ...
} while (1);

2. The Readers-Writers Problem
A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time; a writer must have exclusive access (i.e., it cannot share with either a reader or another writer). There are two versions of the readers-writers problem:

Readers priority: a new reader need not wait just because a writer is waiting.
Writers priority: if a writer is waiting to access the data, no new readers may start reading.

A solution to either version may cause starvation: in the readers-priority version, writers may starve; in the writers-priority version, readers may starve.

A semaphore solution to the readers-priority version uses an integer counter and 2 semaphores:
1. readcount = 0 (an integer that counts the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore used for mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
...
// reading is performed
...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

Writer process:
do {
    wait(wrt);
    ...
    // writing is performed
    ...
    signal(wrt);
} while (TRUE);

3. The Dining-Philosophers Problem
n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems; it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5];   // initially all values are 1

Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    ...
    // eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    ...
    // think
    ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS In asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within a time constraint.

                                                                                      Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Ex:

digital audio or multimedia systems, digital telephones

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e., decision-making is spread across different computers in the system, so that failures of individual computers or of the network do not cripple operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

                                                                                      Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. They have very little memory.


CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                                                                                      Q) Modern operating system

New developments in hardware, applications, and security threats led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client-server computing.

                                                                                      Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as there are now often multiple processors in a single system, an SMP operating system provides greater efficiency when multiple processors are present. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                                                                      SMP schedules processes or threads across all of the processors

                                                                                      Advantages

1. Increased performance: if some portion of a program can be run in parallel, the performance of an SMP system increases.
2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can scale with the number of processors used.

4. Distributed operating systems: distributed systems allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux Operating System
It is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. Linux is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization.
Components of Linux System


                                                                                      Linux Operating System has primarily three components as shown in the below diagram

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system and interacts directly with the hardware, hiding low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs that application programs use to access the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks such as updating log files and accepting login requests from terminals.

Kernel Modules: sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode
Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require any context switch; hence it is very efficient and fast. The kernel runs processes and provides system services to them, including protected access to hardware. Code that is not required to run in kernel mode lives in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code; they use the system libraries to reach kernel functions for low-level tasks.
Basic Features
Following are some of the important features of the Linux operating system:

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on many kinds of hardware platforms.

Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system, meaning multiple users can access system resources (memory, RAM, application programs) at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used for various types of operations, to call application programs, etc.


Security - Linux provides user security using authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture
The Linux system architecture consists of the following layers:

Hardware layer - Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - Core component of the operating system; interacts directly with hardware and provides low-level services to the upper-layer components.

Shell - An interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionalities of an operating system.

Q) Windows XP Operating System
1. Extensibility: layered architecture.

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability: XP can be moved from one hardware architecture to another with relatively few changes.

Written in C and C++. Processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3. Reliability: XP uses hardware protection for virtual memory and software protection mechanisms for operating-system resources.
4. Compatibility: applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
5. Performance: XP subsystems can communicate with one another via high-performance message passing.

Preemption of low-priority threads enables the system to respond quickly to external events. Designed for symmetric multiprocessing.

6. International support: supports different locales via the national language support (NLS) API.
XP Architecture:
1. Layered system of modules.
2. Protected mode: hardware abstraction layer (HAL), kernel, executive.
3. User mode: collection of subsystems.

a. Environmental subsystems emulate different operating systems.
b. Protection subsystems provide security functions.

                                                                                      XP architecture


Q) Windows Network Operating System
Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.

Visit my blog: enthusiaststudent.blogspot.in
mtechmessenger.blogspot.in

                                                                                      Keywords OS exam notes OS notesOperating system notes Opearing system galvin notesSimple batch system Multi- programmed Batch System Time sharing System Difference between Batch Multi-programming and Time sharing What is process structure of process in memory Process states PCB Operations on Processes Process Creation Process Termination cascading terminationcontext switch Basic Concepts of threads two modes of CPU execution Inter process communication Shared ndash Memory Systems Message passing systemswhat is deadlock four conditions that are necessary for deadlock to occur Methods for handling deadlocks Deadlock prevention Deadlock avoidanceResource Allocation graphBankers AlgorithmSafe State Unsafe state Safety Algorithm Resource-Request Algorithm Memory management Address Binding Logical vs Physical Address Space MMU Memory-Management Unit Dynamic loading Dynamic Linking Overlays Swapping Contiguous memory allocation Single-partition allocation Multiple-partition allocation Internal fragmentation Variable Size partitions External fragmentation First-fit Best-fit Worst-fit Non-Contiguous Memory allocation Paging Implementation of Page Table PTLR PTBR page table base register Page-table length register translation look-aside buffers TLB Protection in Paging Shared pages in Paging Segmentation Shared Segments Virtual memory Demand paging Procedure for Handling a Page Fault pure demand paging Page Replacement Page Replacement Algorithms FIFO Optimal Page replacement AlgorithmLeast Recently used (LRU) Page replacement algorithm ThrashingWorking Set model global and local page replacement


10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has structure, which depends on its type.

                                                                                        Advantages

1. It is easy to find space for a new file or to add more data to an existing file.
2. Supports direct (or random) access efficiently.
3. No external fragmentation.

Disadvantages

1. Seek time is more.
2. The number of disk seeks required is more.
3. Not reliable.
4. The index block must be kept in memory. If this memory is not available, we have to read the index block and then the desired data block, i.e., 2 disk accesses are required. To access a block near the end of the file, we need to read all the index blocks before reading the needed data block.
5. The pointer overhead of the index block is greater than that of linked allocation. For example, if we have a file of only one or two blocks, an entire index block is allocated just to store these one or two pointers.

Various mechanisms are followed for the size of the index block:

1. Linked scheme: the size of an index block is one disk block. The index block contains a small header with the name of the file and a set of the first 100 disk-block addresses. For large files, the last address contains the address of another index block.

2. Multilevel index: to access a block, the OS uses the first-level index to find a second-level index block, which points to the file block. This approach can be continued to a third or fourth level.

                                                                                        3 combined scheme

For example, say there are 15 pointers in the index block in the file's inode. The first 12 of these pointers hold the addresses of blocks that contain file data. The next 3 pointers point to indirect blocks: the first points to a single indirect block, which is an index block containing the addresses of the file's data blocks.

The second points to a double indirect block, and the third points to a triple indirect block.
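The maximum file size reachable through such an inode can be computed directly. The block size (4 KB) and pointer size (4 bytes) below are assumptions for illustration, not values given in these notes:

```python
# Maximum file size for a UNIX-style inode with 12 direct pointers,
# one single, one double and one triple indirect pointer.
BLOCK = 4096                    # assumed block size in bytes
PTR = 4                         # assumed disk-address size in bytes
PTRS_PER_BLOCK = BLOCK // PTR   # 1024 addresses fit in one index block

direct = 12 * BLOCK                  # data addressed directly from the inode
single = PTRS_PER_BLOCK * BLOCK      # one extra level of indexing
double = PTRS_PER_BLOCK ** 2 * BLOCK # two levels
triple = PTRS_PER_BLOCK ** 3 * BLOCK # three levels

max_file_size = direct + single + double + triple
print(max_file_size)   # a little over 4 TB with these parameters
```

With these parameters the single indirect block adds 4 MB, the double 4 GB, and the triple 4 TB of addressable data.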

                                                                                        Q) Free space management

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

                                                                                        The free space list can be implemented as

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or bit map): each block is represented by 1 bit. If the block is free the bit is 1; if the block is allocated the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. The free-space bit map would be 001111001000.

Advantages: simple, and it is easy to find contiguous free blocks.

                                                                                        Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a small disk can be kept in main memory, but it may not be possible to keep the bit map of a large disk there.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.
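The bit-vector example above can be sketched in a few lines of Python; the `allocate` helper is illustrative, not part of the notes:

```python
# Free-space bit map for a 12-block disk where blocks 2, 3, 4, 5 and 8
# are free (1 = free, 0 = allocated), matching the example above.
free_blocks = {2, 3, 4, 5, 8}
bitmap = ''.join('1' if b in free_blocks else '0' for b in range(12))
print(bitmap)   # 001111001000

# Allocation scans for the first 1 bit and clears it.
def allocate(bitmap):
    i = bitmap.index('1')                    # first free block number
    return i, bitmap[:i] + '0' + bitmap[i+1:]

block, bitmap = allocate(bitmap)
print(block)    # 2
```

Scanning for the first 1 bit is also why contiguous allocation is easy with a bit map: runs of 1s are visible directly.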

2. Linked list: the free-space list is implemented by linking all the free disk blocks together. A pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5 and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on.
Advantage: no waste of space.
Disadvantage: contiguous space cannot be found easily; to traverse the free-space list we must read each block, which requires more I/O time.

3. Grouping: the first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last one contains the addresses of another n free blocks, and so on.
Advantage: a large number of free blocks can be found quickly.

4. Counting: the free-space list can be maintained by keeping the address of the first free block and the number n of free contiguous blocks that follow it.

Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if the count is generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion and search.
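The counting representation can be illustrated with a small sketch; the `to_counting` helper name is ours, not from the notes:

```python
# Encode a set of free block numbers as (first block, count) pairs,
# one pair per run of contiguous free blocks.
def to_counting(free_blocks):
    runs, start, prev = [], None, None
    for b in sorted(free_blocks):
        if start is None:
            start = prev = b          # open the first run
        elif b == prev + 1:
            prev = b                  # extend the current run
        else:
            runs.append((start, prev - start + 1))
            start = prev = b          # start a new run
    if start is not None:
        runs.append((start, prev - start + 1))
    return runs

print(to_counting({2, 3, 4, 5, 8}))   # [(2, 4), (8, 1)]
```

For the same example disk as above, five free blocks collapse into just two (address, count) entries.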

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks: the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Synchronization Hardware

The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. the TestAndSet() hardware instruction
2. the Swap() instruction

                                                                                        1 Test and Set instruction is as shown

Solution using TestAndSet: the shared variable lock is initialized to false.

                                                                                        2 Swap() instruction definition is as follows

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.
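The Swap definition figure is also missing; in its usual textbook form it atomically exchanges two Boolean words. A Python sketch, again with one-element lists standing in for shared memory words:

```python
# Swap: atomically exchange the contents of two words.
def swap(a, b):
    # executed atomically in hardware
    a[0], b[0] = b[0], a[0]

lock = [False]                  # shared variable, initialized to false

def enter_critical_section(lock):
    key = [True]                # local variable of the process
    while key[0]:
        swap(lock, key)         # spin until we swap in a False (lock was free)

enter_critical_section(lock)
print(lock[0])   # True: the lock is now held; setting it False releases it
```

Only the process whose swap picks up the False ends up with key == False and proceeds; all others keep swapping True for True.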

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately do not guarantee bounded waiting. If multiple processes are trying to enter their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn. (Since there is no guarantee of the relative rates of the processes, a very fast process could theoretically release the lock, whip through its remainder section and re-acquire the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

A solution using TestAndSet that satisfies bounded waiting uses two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

                                                                                        Bounded-waiting mutual exclusion with TestAndSet( )

On exit, a process first looks, in order (starting with the next process on the list), for a process that has been waiting; if it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process while continuing to block all the others.

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1. wait() 2. signal()

Wait primitive:
wait(S) {
    while (S <= 0)
        ; // do nothing
    S--;
}

Signal primitive:
signal(S) {
    S++;
}

Each process that wishes to use a resource performs a wait() operation; when a process releases a resource, it performs a signal() operation.

Semaphore usage:
1. A binary semaphore (or mutex lock) can take the values 0 or 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter the critical section and use one of the resources; when the counter reaches 0 (or goes negative, in some implementations), a process blocks until another process frees a resource and increments the counting semaphore with a signal() call.
For example, suppose there are 3 resources and 4 processes, so the counting semaphore starts at 3:
if process P1 requires a resource, it performs wait(); now the counting semaphore = 2;
if process P2 requires a resource, it performs wait(); now the counting semaphore = 1;
if process P3 requires a resource, it performs wait(); now the counting semaphore = 0;
if process P4 requires a resource, it performs wait(), and P4 waits until a resource is available.
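The 3-resource / 4-process example above can be run with Python's built-in counting semaphore, where acquire() plays the role of wait() and release() of signal():

```python
import threading

resources = threading.Semaphore(3)   # counter initialized to 3 resources

# P1, P2, P3 each grab a resource: the count drops 3 -> 2 -> 1 -> 0.
for p in ("P1", "P2", "P3"):
    got = resources.acquire(blocking=False)
    print(p, got)        # True each time

# P4 finds the count at 0 and would block; non-blocking acquire shows it.
print("P4", resources.acquire(blocking=False))   # False

resources.release()      # some process signals: count goes back to 1
print("P4", resources.acquire(blocking=False))   # True now
```

A non-blocking acquire is used here only so the demonstration terminates; a real wait() would simply block P4 until the release.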

3. Semaphores can also be used to synchronize certain operations. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2. First we create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code

    S1;
    signal(synch);

and in process P2 we insert the code

    wait(synch);
    S2;

Because synch was initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal().

Semaphore implementation: each semaphore has an integer value and maintains a queue of waiting processes. In the simple implementation above, while one process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process spins while waiting.
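The S1-before-S2 example above can be demonstrated with two Python threads; the order list is just instrumentation to show the result:

```python
import threading

synch = threading.Semaphore(0)   # initialized to 0, as in the text
order = []

def p1():
    order.append("S1")           # statement S1
    synch.release()              # signal(synch)

def p2():
    synch.acquire()              # wait(synch): blocks until P1 signals
    order.append("S2")           # statement S2

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start()                       # start P2 first: it blocks on the semaphore
t1.start()
t1.join(); t2.join()
print(order)    # ['S1', 'S2'] regardless of scheduling
```

Even though P2 is started first, the zero-initialized semaphore forces S1 to happen before S2.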

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue to the ready state).

Problems due to semaphores:
1. Deadlock: deadlock occurs when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes.
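The classic two-semaphore deadlock (P0 does wait(S); wait(Q) while P1 does wait(Q); wait(S)) can be demonstrated safely; the timeouts below are our addition so the demonstration terminates instead of actually hanging:

```python
import threading

S, Q = threading.Semaphore(1), threading.Semaphore(1)
barrier = threading.Barrier(2)   # ensure each process holds its first semaphore
stuck = {}

def p0():
    S.acquire()                  # wait(S)
    barrier.wait()
    stuck["P0"] = not Q.acquire(timeout=0.2)   # wait(Q): would block forever

def p1():
    Q.acquire()                  # wait(Q)
    barrier.wait()
    stuck["P1"] = not S.acquire(timeout=0.2)   # wait(S): would block forever

threads = [threading.Thread(target=p0), threading.Thread(target=p1)]
for t in threads: t.start()
for t in threads: t.join()
print(stuck)    # both values are True: each process waits on the other
```

Acquiring the semaphores in the same order in every process removes the circular wait and hence the deadlock.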

2. Starvation: in starvation, one or more processes stay blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithms for adding processes to the waiting queue or for selecting the one to be removed by the signal() call. If a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it preempts L, and M runs. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given H's priority (priority inheritance), so that M cannot preempt L.

Q) Classical problems on synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem. Consider:
n buffers, each of which holds one item;
a producer process, which creates items (one at a time);
a consumer process, which consumes them (one at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is needed. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:
do {
    ...
    produce an item
    ...
    wait(empty);
    wait(mutex);
    ...
    add item to buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:
do {
    wait(full);
    wait(mutex);
    ...
    remove an item from buffer
    ...
    signal(mutex);
    signal(empty);
    ...
    consume the item
    ...
} while (1);
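The producer/consumer pseudocode above runs directly with Python semaphores; the buffer size of 3 and 10 items are arbitrary choices for the demonstration:

```python
import threading
from collections import deque

N = 3                               # number of buffers
buffer = deque()
empty = threading.Semaphore(N)      # empty = n
full = threading.Semaphore(0)       # full = 0
mutex = threading.Semaphore(1)      # mutex = 1
consumed = []

def producer(items):
    for item in items:
        empty.acquire()             # wait(empty)
        mutex.acquire()             # wait(mutex)
        buffer.append(item)         # add item to buffer
        mutex.release()             # signal(mutex)
        full.release()              # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()              # wait(full)
        mutex.acquire()             # wait(mutex)
        item = buffer.popleft()     # remove an item from buffer
        mutex.release()             # signal(mutex)
        empty.release()             # signal(empty)
        consumed.append(item)       # consume the item

items = list(range(10))
t1 = threading.Thread(target=producer, args=(items,))
t2 = threading.Thread(target=consumer, args=(len(items),))
t1.start(); t2.start(); t1.join(); t2.join()
print(consumed)   # [0, 1, ..., 9]; at most 3 items are ever buffered
```

The empty semaphore stops the producer when all N buffers are full, and the full semaphore stops the consumer when the buffer is empty, exactly as described above.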

2. The Readers-Writers Problem. A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time; a writer must have exclusive access (i.e., it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: no new reader waits merely because a writer is waiting;
writers priority: if a writer is waiting to access the data, no new readers may start reading.

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers; it provides mutually exclusive access to the shared data)

Reader process:
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
...
reading is performed
...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

Writer process:
do {
    wait(wrt);
    ...
    writing is performed
    ...
    signal(wrt);
} while (TRUE);
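The readers-priority protocol above can be packaged as a pair of helpers; the names start_read/end_read are ours, and a non-blocking acquire on wrt stands in for a writer so the demonstration stays deterministic:

```python
import threading

mutex = threading.Semaphore(1)
wrt = threading.Semaphore(1)
readcount = 0

def start_read():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()        # the first reader locks out writers
    mutex.release()

def end_read():
    global readcount
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()        # the last reader lets writers in again
    mutex.release()

start_read()
start_read()                 # two concurrent readers are fine
print(wrt.acquire(blocking=False))   # False: a writer would block here
end_read()
end_read()
print(wrt.acquire(blocking=False))   # True: no readers left
wrt.release()
```

Only the first reader in and the last reader out touch wrt, so any number of readers share the data while writers wait.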

3. The Dining-Philosophers Problem. n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5]; initially all values are 1.

Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    ...
    eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    ...
    think
    ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (if every philosopher picks up his left chopstick at the same moment).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: in asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. Unlike an SMP system, they do not have the flexibility to assign processes to the least-loaded CPU.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint.

                                                                                        Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples: digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control, i.e., it spreads its decision-making actions across the different computers in the system, so that the failure of individual computers or of the network does not cripple its operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

Distributed systems allow applications to run on several processors at the same time, thus requiring more complex processor-scheduling algorithms.

DESKTOP SYSTEMS: a desktop system is a personal computer (PC) system used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8 and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): two of the most popular operating systems for handhelds are Symbian OS and Android OS. Handheld systems have very little memory.


CLUSTERED SYSTEMS: a computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

Q) Modern operating systems

New developments in hardware, applications and security threats have led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, Internet and Web access, and client-server computing.

Developments in modern operating systems can be categorized as:

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and it is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as there are nowadays multiple processors in a single system, an SMP operating system provides greater efficiency when several processors are available. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                                                                        SMP schedules processes or threads across all of the processors

                                                                                        Advantages

1. Increased performance: if some portion of a program can run in parallel, the performance of an SMP OS increases.
2. Increased availability: since all processors can perform the same task, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can be made dependent on the number of processors used.

4. Distributed operating systems: distributed systems allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux operating system. Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency and standardization. Components of a Linux system:


The Linux operating system has primarily three components:

Kernel - the kernel is the core part of Linux. It is responsible for all major activities of the operating system, interacts directly with the hardware, and hides low-level hardware details from system and application programs.

System libraries - special functions or programs used by application programs to access the kernel's features.

System utilities - programs responsible for specialized, individual-level tasks such as updating log files or accepting login requests from terminals.

Kernel modules - sections of kernel code that can be compiled, loaded and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel mode vs user mode: kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, requires no context switch, and hence is very efficient and fast. The kernel runs each process and provides system services to processes, along with protected access to hardware. Code that is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code; user programs and utilities use the system libraries to reach kernel functions for low-level tasks.

Basic features. The following are some of the important features of the Linux operating system:

Portable - portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on almost any kind of hardware platform.

Open source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.

Multi-user - Linux is a multi-user system, meaning multiple users can access system resources (memory, RAM, application programs) at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical file system - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used for various types of operations, to call application programs, etc.


Security - Linux provides user security via authentication features such as password protection, controlled access to specific files and encryption of data.

Architecture: the Linux system architecture consists of the following layers:

Hardware layer - consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - the core component of the operating system; interacts directly with the hardware and provides low-level services to the upper-layer components.

Shell - an interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - utility programs that give the user most of the functionality of an operating system.

                                                                                        Q) windows XP operating System1Extensibility mdash layered architecture

                                                                                        Executive which runs in protected mode provides the basic system services On top of the executive several server subsystems operate in user mode Modular structure allows additional environmental subsystems to be added without affecting the

                                                                                        executive 2Portability mdash XP can be moved from on hardware architecture to another with relatively few changes

                                                                                        Written in C and C++ Processor-dependent code is isolated in a dynamic link library (DLL) called the ldquohardware abstraction

                                                                                        layerrdquo (HAL)3 Reliability mdashXP uses hardware protection for virtual memory and software protection mechanisms for operating system resources 4 Compatibility mdash applications that follow the IEEE 10031 (POSIX) standard can be complied to run on 2000 without changing the source code 5 Performance mdash XP subsystems can communicate with one another via high-performance message passing

                                                                                        Preemption of low priority threads enables the system to respond quickly to external events Designed for symmetrical multiprocessing

                                                                                        6 International support mdash supports different locales via the national language support (NLS) APIXP Architecture 1 Layered system of modules 2 Protected mode mdash hardware abstraction layer (HAL) kernel executive 3 User mode mdash collection of subsystems

a. Environmental subsystems emulate different operating systems.
b. Protection subsystems provide security functions.

                                                                                        XP architecture


Q) Windows network operating system. Example: the Windows 2000 operating system.

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne, Operating System Concepts, Reading: Addison-Wesley.




• 10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure, which depends on its type.

To keep track of free disk space, the system maintains a free-space list. To create a file, we search the free-space list and allocate the required space to the new file.

The free-space list can be implemented as:

1. Bit vector (or bit map)
2. Linked list
3. Grouping
4. Counting
5. Space maps

1. Bit vector (or bit map): Each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. The free-space bit map would be 001111001000...

Advantages: simple, and it is easy to find contiguous space for files.

                                                                                          Disadvantages

1. The bit map requires extra space.
2. The bit map is inefficient if kept on disk. The bit map of a smaller disk can be kept in main memory, but it may not be possible to keep the bit map of a larger disk in main memory.
3. The bit map must be modified both when blocks are allocated and when they are freed. Freeing 1 GB of data on a 1-TB disk could cause thousands of blocks of bit maps to be updated, because those data blocks are scattered all over the disk.
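As an illustration, the bit-vector bookkeeping described above can be sketched in C. The block count and the bit-per-block layout are assumptions chosen for the example, not part of any particular file system:

```c
#include <stdint.h>

#define NBLOCKS 12                          /* assumed tiny "disk" for the example */
static uint8_t bitmap[(NBLOCKS + 7) / 8];   /* bit = 1 means the block is free */

void mark_free(int b)      { bitmap[b / 8] |=  (uint8_t)(1 << (b % 8)); }
void mark_allocated(int b) { bitmap[b / 8] &= (uint8_t)~(1 << (b % 8)); }
int  is_free(int b)        { return (bitmap[b / 8] >> (b % 8)) & 1; }

/* Return the number of the first free block, or -1 if none is free. */
int first_free(void) {
    for (int b = 0; b < NBLOCKS; b++)
        if (is_free(b))
            return b;
    return -1;
}
```

Marking blocks 2, 3, 4, 5, and 8 free reproduces the 001111001000 bit map from the text, and `first_free()` then returns block 2.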

2. Linked list: The free-space list is implemented by linking all the free disk blocks together. A pointer to the first free block is stored in a special location on the disk. For example, consider a disk where blocks 2, 3, 4, 5, and 8 are free. We keep a pointer to block 2 as the first free block; block 2 contains a pointer to block 3, block 3 points to block 4, block 4 points to block 5, and so on. Advantage: no waste of space. Disadvantage: we cannot get contiguous space easily, i.e., to traverse the free-space list we must read each block, which requires more I/O time.

3. Grouping: The first free block stores the addresses of n free blocks. The first n-1 of these blocks are actually free, and the last block contains the addresses of another n free blocks, and so on. Advantage: a large number of free blocks can be found quickly.

4. Counting: The free-space list can be maintained by keeping the address of the first free block and the number (n) of free contiguous blocks that follow it.

Each entry in the free-space list then consists of a disk address and a count. Each entry requires more space, and the list is shorter only if the count is generally greater than 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks. The space map is a log of all block activity in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it. Finally, the free-space list is updated on disk.

Synchronization Hardware: The hardware solution to synchronization is to provide atomic operations. These operations execute as a single instruction, without interruption. Two such operations are:
1. The TestAndSet() hardware instruction
2. The Swap() instruction

1. The TestAndSet() instruction atomically reads the old value of a Boolean variable, sets the variable to true, and returns the old value.

Solution using TestAndSet: the shared variable lock is initialized to false. A process enters its critical section only when TestAndSet(&lock) returns false, and sets lock back to false when it leaves.
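A C sketch of these semantics follows. On real hardware the whole of test_and_set() executes atomically as one instruction; this plain-C version only illustrates the logic and is not actually atomic:

```c
#include <stdbool.h>

/* Semantics of the TestAndSet() instruction: return the old value
 * of *target and set it to true. Atomic on real hardware. */
bool test_and_set(bool *target) {
    bool rv = *target;
    *target = true;
    return rv;
}

/* Mutual-exclusion protocol: shared lock initialized to false. */
bool lock = false;

void enter_critical_section(void) {
    while (test_and_set(&lock))
        ;  /* busy wait until the old value seen was false */
}

void leave_critical_section(void) {
    lock = false;
}
```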

2. The Swap() instruction atomically exchanges the contents of two Boolean variables.

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key. A process sets key to true and repeatedly swaps key with lock until key becomes false, which means it has acquired the lock.
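The Swap()-based protocol can be sketched the same way; again, the real instruction is atomic and this C version is only illustrative:

```c
#include <stdbool.h>

/* Semantics of the Swap() instruction; atomic on real hardware. */
void swap(bool *a, bool *b) {
    bool temp = *a;
    *a = *b;
    *b = temp;
}

bool swap_lock = false;  /* shared, initialized to false */

void enter_cs_swap(void) {
    bool key = true;          /* local to each process */
    while (key == true)
        swap(&swap_lock, &key);  /* loop until we swap out a false */
}

void leave_cs_swap(void) {
    swap_lock = false;
}
```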

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn in the critical section. (Since there is no guarantee about the relative speeds of the processes, a very fast process could release the lock, whip through its remainder section, and re-lock the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

A solution using TestAndSet() that also satisfies bounded waiting uses two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes contending for critical sections.

                                                                                          Bounded-waiting mutual exclusion with TestAndSet( )

On exit, a process first scans the waiting array in cyclic order (starting with the next process) for a process that has been waiting. If it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process into the critical section while continuing to block all the others; if no process is waiting, it releases the lock.
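The exit-section scan just described can be sketched as a small helper. The function name next_waiting is invented for this example; it returns the index of the next waiting process in cyclic order, or -1 if none is waiting (in which case the exiting process would set lock = false):

```c
#include <stdbool.h>

/* Exit-section scan for the bounded-waiting TestAndSet() solution:
 * starting from process i+1, find the next process that is waiting.
 * If one is found, process i would set waiting[j] = false, handing
 * over the critical section without releasing the lock. */
int next_waiting(const bool waiting[], int n, int i) {
    int j = (i + 1) % n;
    while (j != i && !waiting[j])
        j = (j + 1) % n;
    return (j == i) ? -1 : j;
}
```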

Q) What is a semaphore?
Ans) A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through two standard atomic operations: wait() and signal().

Wait primitive:

    wait(S) {
        while (S <= 0)
            ;  /* do nothing (busy wait) */
        S--;
    }

Signal primitive:

    signal(S) {
        S++;
    }

Each process that wishes to use a resource performs the wait() operation. When a process releases a resource, it performs the signal() operation.

Semaphore usage:
1. A binary semaphore (or mutex lock) can take only the values 0 and 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is greater than 0, a process can enter the critical section and use one of the resources; when the counter reaches 0 (or goes negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal() call. For example, suppose there are 3 resources and 4 processes. The counting semaphore starts at 3:
If process P1 requires a resource, it performs wait(); the counting semaphore becomes 2.
If process P2 requires a resource, it performs wait(); the counting semaphore becomes 1.
If process P3 requires a resource, it performs wait(); the counting semaphore becomes 0.
If process P4 requires a resource, it performs wait(); P4 waits until a resource is available.

3. Semaphores can also be used to synchronize the order of certain operations.
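The resource-counting walk-through above can be traced with a tiny single-threaded simulation. The struct and function names here are invented for the example, and try_wait() returns 0 where a real semaphore would block the caller:

```c
/* Non-blocking, single-threaded simulation of a counting semaphore,
 * used only to trace the 3-resources / 4-processes example. */
typedef struct { int value; } csem;

/* Returns 1 if a resource was acquired, 0 if the caller would block. */
int try_wait(csem *s) {
    if (s->value > 0) {
        s->value--;
        return 1;
    }
    return 0;
}

void signal_sem(csem *s) { s->value++; }
```

With csem s = {3}, three try_wait() calls succeed (P1, P2, P3), the fourth (P4) would block, and one signal_sem() then lets P4 proceed.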

Semaphore implementation: While a process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spinlock, as the process spins while waiting.

As an example of using a semaphore for ordering, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First we create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code:

    S1;
    signal(synch);

and in process P2 we insert the code:

    wait(synch);
    S2;

Because synch was initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal(). Each semaphore has an integer value and maintains a queue of waiting processes.

Semaphore implementation with no busy waiting: To overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue to the ready state).

Problems due to semaphores:
1. Deadlock: Deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes. For example, let P0 and P1 share semaphores S and Q, each initialized to 1. If P0 executes wait(S) and P1 executes wait(Q), then P0's subsequent wait(Q) and P1's subsequent wait(S) both block forever.

2. Starvation: One or more processes are blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithms for adding processes to the waiting queue, or for selecting one to be removed from the queue in the signal() call: if a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: Let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it preempts L, and M runs. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given H's priority (priority inheritance) so that M cannot preempt L.

Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem. Consider:
n buffers, each holding one item;
a producer process, which creates the items (one at a time);
a consumer process, which consumes them (one at a time).

The producer process cannot produce an item if all the buffers are full, and the consumer process cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:

    do {
        /* produce an item */
        wait(empty);
        wait(mutex);
        /* add the item to the buffer */
        signal(mutex);
        signal(full);
    } while (1);

Consumer process:

    do {
        wait(full);
        wait(mutex);
        /* remove an item from the buffer */
        signal(mutex);
        signal(empty);
        /* consume the item */
    } while (1);
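The same scheme can be made runnable with POSIX counting semaphores and two pthreads. This is a minimal sketch: the buffer size, item count, and function names are arbitrary choices for the example, and the single producer/single consumer pairing keeps it simple:

```c
#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

#define N 4        /* number of buffer slots (arbitrary) */
#define ITEMS 20   /* items to transfer (arbitrary)      */

static int buffer[N];
static int in = 0, out = 0;
static sem_t empty_slots, full_slots, mutex;
static int consumed_sum = 0;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 1; i <= ITEMS; i++) {
        sem_wait(&empty_slots);        /* wait for a free slot   */
        sem_wait(&mutex);
        buffer[in] = i;                /* add item to buffer     */
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full_slots);         /* one more filled slot   */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full_slots);         /* wait for a filled slot */
        sem_wait(&mutex);
        int item = buffer[out];        /* remove item from buffer */
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty_slots);        /* one more free slot     */
        consumed_sum += item;          /* "consume" the item     */
    }
    return NULL;
}

/* Run one producer and one consumer; returns the sum of consumed items. */
int run_bounded_buffer(void) {
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return consumed_sum;
}
```

The producer sends the values 1..20, so the consumer's sum must be 210 regardless of how the two threads interleave.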

2. The Readers-Writers Problem. A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (i.e., it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: no reader is kept waiting merely because a writer is waiting;
writers priority: if a writer is waiting to access the data, no new readers may start reading.

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses an integer and two semaphores:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore shared by both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:

    do {
        wait(mutex);
        readcount++;
        if (readcount == 1)
            wait(wrt);
        signal(mutex);
        /* reading is performed */
        wait(mutex);
        readcount--;
        if (readcount == 0)
            signal(wrt);
        signal(mutex);
    } while (TRUE);

Writer process:

    do {
        wait(wrt);
        /* writing is performed */
        signal(wrt);
    } while (TRUE);

3. The Dining-Philosophers Problem. n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems; it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

    semaphore chopstick[5];  /* initially all values are 1 */

Philosopher i:

    do {
        wait(chopstick[i]);
        wait(chopstick[(i + 1) % 5]);
        /* eat */
        signal(chopstick[i]);
        signal(chopstick[(i + 1) % 5]);
        /* think */
    } while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (for example, if all five philosophers pick up their left chopstick at the same time).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), the CPUs are not all equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within a time constraint.

                                                                                          Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that response requirements will be met under all conditions.
2. A soft real-time system cannot guarantee that response requirements will be met under all conditions. Examples: digital audio and multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e., decision-making is spread across different computers in the system, so that failures of individual computers or of the network do not cripple operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

Distributed systems allow applications to run on several processors at the same time, thus requiring more complex processor-scheduling algorithms.

DESKTOP SYSTEMS: A desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. They have very little memory.


CLUSTERED SYSTEMS: A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                                                                                          Q) Modern operating system

New developments in hardware, applications, and security threats led to the development of the modern operating system.

New hardware developments include increased machine speed, high-speed networks, and increased size and variety of memory devices.

New application developments include multimedia applications, Internet and web access, and client-server computing.

Developments in modern operating systems can be categorized as follows.

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: As systems nowadays often contain multiple processors, an SMP operating system provides greater efficiency when multiple processors are available. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric). SMP schedules processes or threads across all of the processors.

Advantages:

1. Increased performance: if some portion of a program can be run in parallel, the performance of an SMP OS increases.
2. Increased availability: as all processors can perform the same task, if one processor fails the others continue to work.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can scale with the number of processors used.

4. Distributed operating systems: distributed systems allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux operating system. It is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. Linux is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization. Components of a Linux system:


The Linux operating system has primarily three components:

Kernel - The kernel is the core part of Linux. It is responsible for all the major activities of the operating system, interacts directly with the hardware, and hides low-level hardware details from system and application programs.

System libraries - System libraries are special functions or programs. They are used by application programs to access the kernel's features.

System utilities - System utility programs are responsible for specialized, individual-level tasks like updating log files and accepting login requests from terminals.

Kernel modules: sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components of Linux module support: 1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel. 2. Driver registration: allows modules to inform the rest of the kernel that a new driver is available. 3. Conflict resolution: protects the reserved resources of one driver from accidental use by another.

Kernel mode vs user mode: Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, does not require context switches, and hence is very efficient and fast. The kernel runs each process and provides system services to processes, as well as protected access to hardware. Code that is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or kernel code. User programs and utilities use the system libraries to access kernel functions for the system's low-level tasks.

Basic features: Following are some of the important features of the Linux operating system.

Portable - Portability means the software works on different types of hardware in the same way. The Linux kernel and application programs support installation on any kind of hardware platform.

Open source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-user - Linux is a multi-user system, meaning multiple users can access system resources like memory and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical file system - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used to perform various types of operations, call application programs, and so on.


Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

Architecture
The Linux system architecture consists of the following layers:

Hardware layer - Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - The core component of the operating system; it interacts directly with hardware and provides low-level services to upper-layer components.

Shell - An interface to the kernel; it takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System
1. Extensibility — layered architecture. The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.
2. Portability — XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++. Processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).
3. Reliability — XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.
4. Compatibility — applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.
5. Performance — XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. It is designed for symmetric multiprocessing.
6. International support — supports different locales via the national language support (NLS) API.
XP Architecture
1. Layered system of modules.
2. Protected mode — hardware abstraction layer (HAL), kernel, executive.
3. User mode — collection of subsystems:
a. Environmental subsystems emulate different operating systems.
b. Protection subsystems provide security functions.

                                                                                          XP architecture


Q) Windows Network Operating System
Example: the Windows 2000 operating system.

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.

Visit my blogs: enthusiaststudent.blogspot.in and mtechmessenger.blogspot.in




Each entry in the free-space list consists of a disk address and a count. Each entry requires more space, and the list is shorter only if the count is > 1. These entries can be stored in a B-tree for efficient insertion, deletion, and search.
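The counting scheme above can be sketched as a small data structure. This is a minimal illustration only — the struct and function names are my own, not taken from any particular operating system:

```c
/* One free-space entry under the counting scheme: the disk address of
 * the first free block and the number of contiguous free blocks. */
struct run {
    unsigned start;   /* disk address of first free block */
    unsigned count;   /* number of contiguous free blocks */
};

/* Build a counting-style free list from a block bitmap
 * (1 = free, 0 = allocated). Returns the number of runs found. */
int build_runs(const int *bitmap, int nblocks, struct run *out) {
    int nruns = 0;
    for (int i = 0; i < nblocks; ) {
        if (bitmap[i]) {
            out[nruns].start = (unsigned)i;
            out[nruns].count = 0;
            while (i < nblocks && bitmap[i]) {   /* extend the run */
                out[nruns].count++;
                i++;
            }
            nruns++;
        } else {
            i++;
        }
    }
    return nruns;
}
```

For the bitmap 0 1 1 1 0 1 this yields two entries, (1, 3) and (5, 1) — shorter than listing the four free blocks individually, which is why the scheme wins when free blocks tend to be contiguous.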

5. Space maps: ZFS divides the disk space into chunks of manageable size called metaslabs. Each metaslab has a space map. ZFS uses the counting algorithm to store information about free blocks; the space map is a log of all block activity, in time order, in counting format. When ZFS allocates or frees space in a metaslab, it loads the associated space map into memory and updates it; finally, the free-space list is updated on disk.

Synchronization Hardware
The hardware solution to synchronization is to provide atomic operations, which execute as a single instruction without interruption. Two such operations are:
1. the TestAndSet() hardware instruction
2. the Swap() instruction

1. The TestAndSet() instruction is as shown.

Solution using TestAndSet: the shared variable lock is initialized to false.
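Since the figure with the definition and lock solution is not reproduced here, a sketch of the textbook TestAndSet() and the spin-lock entry/exit sections follows. The C function only illustrates the semantics — real hardware executes TestAndSet atomically as one uninterruptible instruction:

```c
#include <stdbool.h>

/* TestAndSet: return the old value of *target and set it to true.
 * Plain C shown for illustration; hardware does this atomically. */
bool test_and_set(bool *target) {
    bool rv = *target;  /* remember the old value */
    *target = true;     /* the lock is now taken */
    return rv;
}

bool lock = false;      /* shared variable, initialized to false */

/* A process using the lock:
 *   do {
 *       while (test_and_set(&lock))
 *           ;              // spin: lock was already held
 *       // critical section
 *       lock = false;      // release the lock
 *       // remainder section
 *   } while (1);
 */
```

The key property: the first caller sees the old value false (lock acquired); every later caller sees true and keeps spinning until the holder writes false.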

2. The Swap() instruction is defined as follows.

Solution using Swap: a shared Boolean variable lock is initialized to false, and each process has a local Boolean variable key.
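The missing Swap() figure can be reconstructed along the same lines. Again, the C function only illustrates the semantics; hardware performs the exchange atomically:

```c
#include <stdbool.h>

/* Swap: exchange the contents of two Boolean variables.
 * Plain C shown for illustration; hardware does this atomically. */
void swap(bool *a, bool *b) {
    bool tmp = *a;
    *a = *b;
    *b = tmp;
}

bool lock = false;      /* shared, initialized to false */

/* Each process uses a local key:
 *   do {
 *       bool key = true;
 *       while (key)             // spin until we swap in 'false'
 *           swap(&lock, &key);  // key becomes the old lock value
 *       // critical section
 *       lock = false;           // release the lock
 *       // remainder section
 *   } while (1);
 */
```

After a successful swap, key holds the old lock value false (so the spin ends) and lock holds true (so other processes keep spinning).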

Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If multiple processes are trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn in the critical section. (Since there is no guarantee as to the relative speeds of the processes, a very fast process could theoretically release the lock, whip through its remainder section, and re-lock the lock before a slower process gets a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The figure below illustrates a solution using test-and-set that satisfies the bounded-waiting requirement, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

Bounded-waiting mutual exclusion with TestAndSet()

The exit section first looks, in order (starting with the next process on the list), for a process that has been waiting; if it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby admitting that specific process into the critical section while continuing to block all the others.
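The bounded-waiting figure referred to above is not reproduced here; the algorithm it shows (as given in Silberschatz et al.) can be reconstructed as the following pseudocode, for process i of n, with waiting[] and lock shared and initialized to false:

```c
do {
    waiting[i] = true;
    key = true;
    while (waiting[i] && key)
        key = test_and_set(&lock);   /* spin until lock is free, or released to us */
    waiting[i] = false;

    /* critical section */

    j = (i + 1) % n;                 /* scan for the next waiting process */
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = false;                /* nobody is waiting: unlock */
    else
        waiting[j] = false;          /* hand the critical section to j directly */

    /* remainder section */
} while (true);
```

Because the exit section scans in a fixed circular order, each waiting process gets its turn within n - 1 entries by other processes, which is exactly the bounded-waiting guarantee.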

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through 2 standard atomic operations: 1. wait() 2. signal()

Wait primitive:

wait(S) {
    while (S <= 0)
        ;  // do nothing
    S--;
}

Signal primitive:

signal(S) {
    S++;
}

Each process that wishes to use a resource performs the wait() operation; when a process releases a resource, it performs the signal() operation.
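The wait()/signal() definitions above can be written as a minimal C sketch. This is a single-threaded illustration only — a real implementation must make the two operations atomic (the function names here are my own, to avoid clashing with the POSIX sem_wait/sem_post API):

```c
/* Busy-waiting (spinlock-style) semaphore sketch. In a real kernel,
 * sema_wait() and sema_signal() must execute atomically. */
typedef struct {
    volatile int value;
} semaphore;

void sema_wait(semaphore *s) {
    while (s->value <= 0)
        ;             /* busy wait: spin until a resource is available */
    s->value--;       /* claim one resource */
}

void sema_signal(semaphore *s) {
    s->value++;       /* release one resource */
}
```

With value initialized to 3, three sema_wait() calls drive it to 0, matching the counting-semaphore walk-through below; a fourth caller would spin in sema_wait() until some process calls sema_signal().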

Semaphore Usage
1. A binary semaphore (or mutex lock) can take the values 0 or 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining resources. The counter is initialized to the number of resources available. Whenever the counting semaphore is > 0, a process can enter a critical section and use one of the resources; when the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal() call.
For example, suppose there are 3 resources and 4 processes. The value of the counting semaphore = 3.
If process P1 requires a resource, it performs wait(); now the counting semaphore = 2.
If process P2 requires a resource, it performs wait(); now the counting semaphore = 1.
If process P3 requires a resource, it performs wait(); now the counting semaphore = 0.
If process P4 requires a resource, it performs wait(); P4 waits until a resource is available.

                                                                                            3 Semaphores can also be used to synchronize certain operations

Semaphore Implementation
While a process is in its critical section, any other process that tries to enter the critical section must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spin lock, as the process "spins" while waiting. Semaphores can also enforce ordering: for example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First we create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code:

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch is initialized to 0, process P2 will block on the wait() until after P1 executes the call to signal(). Each semaphore has an integer value and maintains a queue of waiting processes.

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue to the ready state).
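The blocking implementation just described is usually written as the following pseudocode (in the style of Silberschatz et al.), where block() suspends the invoking process and wakeup(P) resumes process P:

```c
typedef struct {
    int value;
    struct process *list;   /* queue of processes waiting on this semaphore */
} semaphore;

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        /* add this process to S->list */
        block();            /* suspend: no busy waiting */
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        /* remove a process P from S->list */
        wakeup(P);          /* move P from the waiting queue to the ready state */
    }
}
```

Note that value may go negative here; its magnitude then equals the number of processes waiting on the semaphore.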

Problems due to semaphores
1. Deadlocks: deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes, as shown below.

2. Starvation: in starvation, one or more processes are blocked forever and never get a chance to enter the critical section. For example, suppose we do not specify the algorithm for adding processes to the waiting queue, or for selecting the one to be removed by the signal() call: if a LIFO queue is chosen, then the first process that starts waiting may never get a chance.

3. Priority inversion: let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M arrives, it preempts process L, and M runs; here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given the high priority of H so that M cannot preempt L (priority inheritance).
=====================================================================
Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem
Consider n buffers, each holding one item; a producer process, which creates items (one at a time); and a consumer process, which processes them (one at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. Since both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:

do {
    ...
    // produce an item
    ...
    wait(empty);
    wait(mutex);
    ...
    // add the item to the buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:

do {
    wait(full);
    wait(mutex);
    ...
    // remove an item from the buffer
    ...
    signal(mutex);
    signal(empty);
    ...
    // consume the item
    ...
} while (1);

2. The Readers-Writers Problem
A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time; a writer must have exclusive access (i.e., it cannot share with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: a new reader need not wait just because a writer is waiting
writers priority: if a writer is waiting to access the data, no new readers can start reading

A solution to either version may cause starvation: in the readers-priority version, writers may starve; in the writers-priority version, readers may starve.

A semaphore solution to the readers-priority version uses:
1. readcount = 0 (an integer that counts the number of processes currently reading the shared data)
2. mutex = 1 (the mutex semaphore provides mutual exclusion when readcount is updated)
3. wrt = 1 (the wrt semaphore is common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:

do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    ...
    // reading is performed
    ...
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);
} while (TRUE);

Writer process:

do {
    wait(wrt);
    ...
    // writing is performed
    ...
    signal(wrt);
} while (TRUE);

3. The Dining-Philosophers Problem
n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems; it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5];  // initially all values are 1

Philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    ...
    // eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    ...
    // think
    ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (if all five pick up their left chopstick at once).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems have processors dedicated to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint.

                                                                                            Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement is met under all conditions. Examples: digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control, i.e., it spreads its decision-making actions across the different computers in the system, so that failure of individual computers or of the network does not cripple its operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

Distributed systems allow applications to run on several processors at the same time, thus requiring more complex processor-scheduling algorithms.

DESKTOP SYSTEMS: A desktop system is a personal computer (PC) system used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. Handheld systems have very little memory.


CLUSTERED SYSTEMS: A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

Q) Modern operating systems

New developments in hardware, applications, and security threats have led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client-server computing.

                                                                                            Developments in modern operating system can be categorized as

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and it is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as there are now often multiple microprocessors in a single system, an SMP operating system provides greater efficiency when multiple processors are present. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric). SMP schedules processes or threads across all of the processors.

Advantages:
1. Increased performance: if some portion of a program can run in parallel, the performance of an SMP OS increases.
2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can be made dependent on the number of processors used.

4. Distributed operating systems: distributed systems allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel. The OS can be customized without affecting system integrity.

Q) Linux operating system
It is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. Linux is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization.
Components of a Linux System


The Linux operating system has primarily three components, as shown in the diagram below:

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system, interacts directly with hardware, and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs used by application programs to access the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks like updating log files and accepting login requests from terminals.

Kernel Modules: sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

                                                                                            Kernel Mode vs User ModeKernel component code executes in a special privileged mode called kernel mode with full access to all resources of the computer This code represents a single process executes in single address space and do not require any context switch and hence is very efficient and fast Kernel runs each processes and provides system services to processes provides protected access to hardwares to processesCode which is not required to run in kernel mode is in System Library User programs and other system programs works in User Mode which has no access to system hardwares and kernel code User programs utilities use System libraries to access Kernel functions to get systems low level tasksBasic FeaturesFollowing are some of the important features of Linux Operating System

Portable - Portability means software works in the same way on different types of hardware. The Linux kernel and application programs support installation on almost any kind of hardware platform.

Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system, meaning multiple users can access system resources (memory, application programs, and so on) at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used to carry out various operations, call application programs, and so on.


Security - Linux provides user security using authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture: The Linux system architecture consists of the following layers:

Hardware layer - consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - the core component of the operating system; interacts directly with the hardware and provides low-level services to the upper-layer components.

Shell - an interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - utility programs that give the user most of the functionality of an operating system.

Q) Windows XP operating system
1. Extensibility - layered architecture.

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.

2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.

4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.

5. Performance - XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

6. International support - supports different locales via the national language support (NLS) API.

XP Architecture:
1. Layered system of modules.
2. Protected mode - hardware abstraction layer (HAL), kernel, executive.
3. User mode - collection of subsystems:
a. Environmental subsystems emulate different operating systems.
b. Protection subsystems provide security functions.

                                                                                            XP architecture


Q) Windows network operating system. Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.




Both the TestAndSet and Swap instructions satisfy the mutual-exclusion requirement, but unfortunately they do not guarantee bounded waiting. If there are multiple processes trying to get into their critical sections, there is no guarantee of the order in which they will enter, and any one process could have the bad luck to wait forever for its turn in the critical section. (Since there is no guarantee as to the relative rates of the processes, a very fast process could theoretically release the lock, whip through its remainder section, and re-lock the lock before a slower process got a chance. As more and more processes vie for the same resource, the odds of a slow process being locked out completely increase.)

The figure below illustrates a solution using test-and-set that satisfies bounded waiting, using two shared data structures: boolean lock and boolean waiting[N], where N is the number of processes in contention for critical sections.

Bounded-waiting mutual exclusion with TestAndSet()

On exit from the critical section, a process first looks, in order (starting with the next process on the list), for a process that is waiting; if it finds one, it releases that particular process from its waiting state without unlocking the critical section, thereby allowing that specific process into the critical section while continuing to block all the others.
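The entry/exit protocol described above can be sketched in Python. This is an illustrative simulation, not the hardware mechanism: the atomicity of TestAndSet is faked with a small internal lock (`_tas`), and the shared counter is instrumentation added for the demo.

```python
import threading

N = 3                     # number of contending processes
K = 100                   # critical-section entries per process
waiting = [False] * N     # waiting[i]: process i wants to enter
lock = False              # the shared lock word
_tas = threading.Lock()   # simulates the atomicity of the TestAndSet instruction
counter = 0               # shared data; correct only if mutual exclusion holds

def test_and_set():
    """Atomically return the old value of lock and set it to True."""
    global lock
    with _tas:
        old, lock = lock, True
        return old

def set_lock(value):
    global lock
    with _tas:
        lock = value

def process(i):
    global counter
    for _ in range(K):
        # entry section
        waiting[i] = True
        key = True
        while waiting[i] and key:
            key = test_and_set()
        waiting[i] = False
        # critical section
        counter += 1
        # exit section: scan, in order, for the next waiting process
        j = (i + 1) % N
        while j != i and not waiting[j]:
            j = (j + 1) % N
        if j == i:
            set_lock(False)     # nobody is waiting; release the lock
        else:
            waiting[j] = False  # hand over to j without unlocking

threads = [threading.Thread(target=process, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # N * K when mutual exclusion held
```

The hand-off in the exit section (clearing waiting[j] while leaving lock set) is exactly what bounds the waiting: a process can be bypassed at most N - 1 times.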

Q) What is a semaphore?
Ans: A semaphore is used to solve synchronization problems. A semaphore S is an integer variable accessed only through two standard atomic operations: 1. wait() 2. signal()

Wait primitive:
wait(S) {
    while (S <= 0)
        ; // do nothing
    S--;
}

Signal primitive:
signal(S) {
    S++;
}

Each process that wishes to use a resource performs the wait() operation. When a process releases a resource, it performs the signal() operation.

Semaphore usage:
1. A binary semaphore (or mutex lock) can take the values 0 or 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining resources: the counter is initialized to the number of resources available. Whenever the counting semaphore > 0, a process can enter the critical section and use one of the resources. When the counter = 0 (or negative, in some implementations), the process blocks until another process frees a resource and increments the counting semaphore with a signal call.
For example, suppose there are 3 resources and 4 processes, so the value of the counting semaphore = 3.
If process P1 requires a resource, it performs a wait() operation; now the counting semaphore = 2.
If process P2 requires a resource, it performs a wait() operation; now the counting semaphore = 1.
If process P3 requires a resource, it performs a wait() operation; now the counting semaphore = 0.
If process P4 requires a resource, it performs a wait() operation; P4 waits until a resource is available.
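The 3-resources/4-processes walkthrough above can be reproduced with Python's built-in counting semaphore. The `in_use`/`peak` counters are instrumentation added for this sketch, not part of the scheme itself.

```python
import threading

resources = threading.Semaphore(3)  # counting semaphore: 3 identical resources
in_use = 0
peak = 0
guard = threading.Lock()            # protects the instrumentation counters only

def use_resource(pid):
    global in_use, peak
    resources.acquire()             # wait(): blocks once all 3 are taken
    with guard:
        in_use += 1
        peak = max(peak, in_use)
    # ... use the resource ...
    with guard:
        in_use -= 1
    resources.release()             # signal(): frees one resource

procs = [threading.Thread(target=use_resource, args=(p,)) for p in range(4)]
for t in procs: t.start()
for t in procs: t.join()
print(peak)  # at most 3, even though 4 processes asked for a resource
```

The fourth process simply blocks inside acquire() until one of the first three releases, mirroring P4 in the walkthrough.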

3. Semaphores can also be used to synchronize the order of certain operations.

Semaphore implementation: When a process is in its critical section, any other process that tries to enter the critical section must loop continuously, wasting CPU time (busy waiting). This type of semaphore is called a spinlock, as the process "spins" while waiting.

Semaphores can also enforce ordering. For example, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2. First we create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code:

S1;
signal(synch);

and in process P2 we insert the code:

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal(). Each semaphore has an integer value and maintains a queue of waiting processes.
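The S1-before-S2 ordering example runs directly on Python's semaphores; the `order` list is instrumentation added for the sketch so the result can be observed.

```python
import threading

synch = threading.Semaphore(0)  # initialized to 0: P2 must wait for P1
order = []

def p1():
    order.append("S1")  # statement S1
    synch.release()     # signal(synch)

def p2():
    synch.acquire()     # wait(synch): blocks until P1 signals
    order.append("S2")  # statement S2

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start()  # start P2 first, to show that it really does wait
t1.start()
t1.join(); t2.join()
print(order)  # always ['S1', 'S2']
```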

Semaphore implementation with no busy waiting: To overcome busy waiting, we block the waiting process; the waiting process is later restarted by a wakeup() operation (i.e., the process is moved from the waiting queue back to the ready state).
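A no-busy-waiting semaphore along these lines can be sketched with a condition variable standing in for the block()/wakeup() machinery. The class name and the use of `threading.Condition` are choices of this sketch, not part of the notes; a negative value records how many processes are blocked, as in the textbook scheme.

```python
import threading

class BlockingSemaphore:
    """Semaphore whose wait() blocks instead of spinning."""
    def __init__(self, value):
        self.value = value
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            self.value -= 1
            if self.value < 0:
                self.cond.wait()    # block(): join the waiting queue

    def signal(self):
        with self.cond:
            self.value += 1
            if self.value <= 0:
                self.cond.notify()  # wakeup(): move one waiter to ready

# use it as a mutex protecting a shared counter
mutex = BlockingSemaphore(1)
count = 0

def worker():
    global count
    for _ in range(1000):
        mutex.wait()
        count += 1   # critical section
        mutex.signal()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(count)
```

While blocked, a process consumes no CPU time, unlike the spinlock version above.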

Problems due to semaphores:
1. Deadlocks: Deadlocks occur when multiple processes are blocked, each waiting for a resource that can only be freed by one of the other (blocked) processes.

2. Starvation: In starvation, one or more processes get blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify the algorithm for adding processes to the waiting queue, or for selecting the one to be removed by the signal() call. If a LIFO queue is chosen, the first process that starts waiting may never get a chance.

3. Priority inversion: Let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M enters, it preempts process L, and M is running now. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given high priority so that M does not preempt L.
=====================================================================
Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem. Consider:

n buffers, each of which holds one item; a producer process, which creates the items (one at a time); and a consumer process, which consumes them (one at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is required. For that we use 3 semaphores:

1. empty = n (the empty semaphore counts the number of empty buffers)
2. full = 0 (the full semaphore counts the number of filled buffers)
3. mutex = 1 (mutex provides mutual exclusion for access to the buffer)

Producer process:
do {
    ...
    produce an item
    ...
    wait(empty);
    wait(mutex);
    ...
    add the item to the buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:
do {
    wait(full);
    wait(mutex);
    ...
    remove an item from the buffer
    ...
    signal(mutex);
    signal(empty);
    ...
    consume the item
    ...
} while (1);
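The pseudocode maps directly onto Python's semaphores. In this sketch the buffer is a deque of n = 3 slots, the produced items are just the integers 0..9, and the `consumed` list is instrumentation for the demo.

```python
import threading
from collections import deque

n = 3
empty = threading.Semaphore(n)  # counts empty buffer slots
full = threading.Semaphore(0)   # counts filled buffer slots
mutex = threading.Lock()        # mutual exclusion on the buffer
buffer = deque()
consumed = []

def producer():
    for item in range(10):      # produce items 0..9
        empty.acquire()         # wait(empty)
        with mutex:             # wait(mutex) ... signal(mutex)
            buffer.append(item)
        full.release()          # signal(full)

def consumer():
    for _ in range(10):
        full.acquire()          # wait(full)
        with mutex:
            item = buffer.popleft()
        empty.release()         # signal(empty)
        consumed.append(item)   # consume the item

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, ..., 9]: nothing lost, nothing duplicated
```

The producer blocks on empty when all 3 slots are filled, and the consumer blocks on full when the buffer is empty, exactly as the problem statement requires.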

2. The Readers-Writers Problem: A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers may read the file at the same time, but a writer must have exclusive access (i.e., it cannot share the data with either a reader or another writer). There are two versions of the readers-writers problem:

readers' priority: a new reader need not wait just because a writer is waiting; writers' priority: if a writer is waiting to access the data, no new readers may start reading.

A solution to either version may cause starvation: in the readers' priority version writers may starve; in the writers' priority version readers may starve.

A semaphore solution to the readers' priority version uses an integer counter and two semaphores:
1. readcount = 0 (readcount counts the number of processes currently reading the shared data)
2. mutex = 1 (the mutex semaphore provides mutual exclusion when readcount is updated)
3. wrt = 1 (the wrt semaphore is common to both readers and writers)

wrt provides mutually exclusive access to the shared data.

Reader process:
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);    // the first reader locks out writers
signal(mutex);
...
reading is performed
...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);  // the last reader lets writers back in
signal(mutex);

Writer process:
do {
    wait(wrt);
    ...
    writing is performed
    ...
    signal(wrt);
} while (TRUE);
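A runnable sketch of the readers-priority solution, with the thread counts, loop sizes, and the `snapshots` list chosen for this demo rather than taken from the notes. The writers increment a shared value; if wrt really gives writers exclusive access, no increment is ever lost.

```python
import threading

readcount = 0
mutex = threading.Lock()      # protects readcount
wrt = threading.Semaphore(1)  # exclusive access to the shared data
shared = 0
snapshots = []

def reader():
    global readcount
    with mutex:               # wait(mutex) ... signal(mutex)
        readcount += 1
        if readcount == 1:
            wrt.acquire()     # first reader locks out writers
    snapshots.append(shared)  # reading is performed
    with mutex:
        readcount -= 1
        if readcount == 0:
            wrt.release()     # last reader lets writers back in

def writer():
    global shared
    for _ in range(100):
        wrt.acquire()         # wait(wrt)
        shared += 1           # writing is performed
        wrt.release()         # signal(wrt)

threads = [threading.Thread(target=writer) for _ in range(3)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(shared)  # 3 writers x 100 increments, none interleaved
```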

3. The Dining-Philosophers Problem: n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick that is already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems; it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5]; // initially all values are 1

Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    ...
    eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    ...
    think
    ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (for example, if every philosopher picks up his left chopstick at the same time).

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. Unlike an SMP system, they do not have the flexibility to assign processes to the least-loaded CPU.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint.

Two kinds of real-time systems have evolved:

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Ex:

Digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control, i.e., it spreads its decision-making actions across the different computers in the system, so that the failure of individual computers or of the network does not cripple its operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

Distributed systems allow applications to run on several processors at the same time, thus requiring more complex processor-scheduling algorithms.

DESKTOP SYSTEMS: A desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. Handheld systems have very little memory.


CLUSTERED SYSTEMS: A computer cluster consists of a set of loosely connected computers that work together so that, in many respects, they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

Q) Modern operating systems

New developments in hardware, in applications, and in security threats lead to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client-server computing.

Developments in modern operating systems can be categorized as follows:

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks; for example, a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as there are now often multiple processors in a single system, an SMP operating system provides greater efficiency when multiple processors are present. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                                                                              SMP schedules processes or threads across all of the processors

                                                                                              Advantages

1. Increased performance: if some portion of a program can be run in parallel, the performance of an SMP OS increases.

2. Increased availability: as all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scalable cost: cost can scale with the number of processors used.

4. Distributed operating systems: distributed systems allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux operating system: Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization. Components of Linux System:


The Linux operating system has primarily three components, as shown in the diagram below:

Kernel - The kernel is the core part of Linux. It is responsible for all the major activities of the operating system. It interacts directly with hardware, and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs. They are used by application programs to access the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks such as updating log files and accepting login requests from terminals.


                                                                                              executive 2Portability mdash XP can be moved from on hardware architecture to another with relatively few changes

                                                                                              Written in C and C++ Processor-dependent code is isolated in a dynamic link library (DLL) called the ldquohardware abstraction

                                                                                              layerrdquo (HAL)3 Reliability mdashXP uses hardware protection for virtual memory and software protection mechanisms for operating system resources 4 Compatibility mdash applications that follow the IEEE 10031 (POSIX) standard can be complied to run on 2000 without changing the source code 5 Performance mdash XP subsystems can communicate with one another via high-performance message passing

                                                                                              Preemption of low priority threads enables the system to respond quickly to external events Designed for symmetrical multiprocessing

                                                                                              6 International support mdash supports different locales via the national language support (NLS) APIXP Architecture 1 Layered system of modules 2 Protected mode mdash hardware abstraction layer (HAL) kernel executive 3 User mode mdash collection of subsystems

                                                                                              a Environmental subsystems emulate different operating systems b Protection subsystems provide security functions

                                                                                              XP architecture

                                                                                              56

                                                                                              Q) Windows network Operating systemExample Windows 2000 operating system

                                                                                              All the pictures are taken from Silberschatz Abraham et al Peter Baer Galvin and Greg Gagne Operating system concepts Reading Addison-Wesley

                                                                                              Visit my blog enthusiaststudentblogspotin57

                                                                                              mtechmessengerblogspotin

                                                                                              Keywords OS exam notes OS notesOperating system notes Opearing system galvin notesSimple batch system Multi- programmed Batch System Time sharing System Difference between Batch Multi-programming and Time sharing What is process structure of process in memory Process states PCB Operations on Processes Process Creation Process Termination cascading terminationcontext switch Basic Concepts of threads two modes of CPU execution Inter process communication Shared ndash Memory Systems Message passing systemswhat is deadlock four conditions that are necessary for deadlock to occur Methods for handling deadlocks Deadlock prevention Deadlock avoidanceResource Allocation graphBankers AlgorithmSafe State Unsafe state Safety Algorithm Resource-Request Algorithm Memory management Address Binding Logical vs Physical Address Space MMU Memory-Management Unit Dynamic loading Dynamic Linking Overlays Swapping Contiguous memory allocation Single-partition allocation Multiple-partition allocation Internal fragmentation Variable Size partitions External fragmentation First-fit Best-fit Worst-fit Non-Contiguous Memory allocation Paging Implementation of Page Table PTLR PTBR page table base register Page-table length register translation look-aside buffers TLB Protection in Paging Shared pages in Paging Segmentation Shared Segments Virtual memory Demand paging Procedure for Handling a Page Fault pure demand paging Page Replacement Page Replacement Algorithms FIFO Optimal Page replacement AlgorithmLeast Recently used (LRU) Page replacement algorithm ThrashingWorking Set model global and local page replacement

                                                                                              58


Semaphore Usage

1. A binary semaphore (also called a mutex lock) can take only the values 0 and 1.

2. A counting semaphore can take any integer value. A counting semaphore is used to count the remaining instances of a resource. The counter is initialized to the number of resources available. Whenever the counting semaphore is greater than 0, a process can enter its critical section and use one of the resources. When the counter reaches 0 (or goes negative, in some implementations), a process blocks until another process frees a resource and increments the counting semaphore with a signal() call. For example, suppose there are 3 resources and 4 processes, so the counting semaphore starts at 3: if process P1 requires a resource, it performs wait() and the semaphore becomes 2; if P2 then performs wait(), it becomes 1; if P3 performs wait(), it becomes 0; if P4 now performs wait(), P4 must wait until a resource is released.
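The counting-semaphore bookkeeping above can be sketched with Python's `threading.Semaphore`. This is a minimal illustration, not part of the notes: the process names P1-P4 are labels, and a non-blocking `acquire()` stands in for a `wait()` that would otherwise block.

```python
import threading

counting = threading.Semaphore(3)  # 3 resources available, as in the example

results = []
for process in ["P1", "P2", "P3", "P4"]:
    # blocking=False models "attempt wait()": returns False instead of blocking
    got_resource = counting.acquire(blocking=False)
    results.append((process, got_resource))

# P1-P3 each get a resource; P4 finds the count at 0 and would have to wait
print(results)  # [('P1', True), ('P2', True), ('P3', True), ('P4', False)]

counting.release()                       # some process frees a resource...
print(counting.acquire(blocking=False))  # ...now P4's retry succeeds: True
```

Note that `acquire()` and `release()` correspond to the notes' wait() and signal() operations.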

3. Semaphores can also be used to synchronize the order of certain operations across processes.

Semaphore Implementation

While one process is in its critical section, any other process that tries to enter must loop continuously, wasting CPU time (busy waiting). A semaphore implemented this way is called a spinlock, because the waiting process spins while it waits. As an example of using a semaphore for ordering, suppose it is important that process P1 execute statement S1 before process P2 executes statement S2.

First we create a semaphore named synch, shared by the two processes, and initialize it to zero. Then in process P1 we insert the code

S1;
signal(synch);

and in process P2 we insert the code

wait(synch);
S2;

Because synch was initialized to 0, process P2 will block on the wait until after P1 executes the call to signal. Each semaphore has an integer value and maintains a queue of waiting processes.
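The synch example can be transcribed into runnable form. Below is a hedged sketch using Python threads, where `acquire()` and `release()` play the roles of the notes' wait() and signal(); the statements S1 and S2 are modelled as appends to a list.

```python
import threading

synch = threading.Semaphore(0)  # initialized to zero, as in the notes
order = []

def p1():
    order.append("S1")   # S1
    synch.release()      # signal(synch)

def p2():
    synch.acquire()      # wait(synch): blocks until P1 signals
    order.append("S2")   # S2

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start()   # start P2 first to show that it really waits for P1
t1.start()
t1.join()
t2.join()

print(order)  # ['S1', 'S2'] regardless of start order
```

Because synch starts at 0, S2 can only ever run after S1, even though P2 was started first.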

Semaphore implementation with no busy waiting: to overcome busy waiting, we block the waiting process (place it on a waiting queue); the waiting process is later restarted by a wakeup() operation, i.e., it is moved from the waiting queue back to the ready state.

Problems due to semaphores

1. Deadlock: a deadlock occurs when multiple processes are blocked, each waiting for a resource that can be freed only by one of the other (blocked) processes.

2. Starvation: one or more processes are blocked forever and never get a chance to enter the critical section. This can happen, for example, if we do not specify how processes are added to the waiting queue or which one is removed by the signal() call: with a LIFO queue, the first process to start waiting may never get its turn.

3. Priority inversion: let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource held by process L, so L is running. When process M arrives, it preempts L, and M runs. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given H's high priority so that M cannot preempt it (priority inheritance).
=====================================================================
Q) Classical problems of synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining-Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded-Buffer Problem

Consider n buffers, each able to hold one item; a producer process, which creates items (one at a time); and a consumer process, which consumes them (one at a time).

The producer cannot produce an item if all the buffers are full, and the consumer cannot consume if all the buffers are empty. Because both processes modify the contents of the buffer, synchronization is required. We use three semaphores:

1. empty = n (counts the number of empty buffers)
2. full = 0 (counts the number of filled buffers)
3. mutex = 1 (provides mutual exclusion for access to the buffer)

Producer process:

do {
    ...
    produce an item
    ...
    wait(empty);
    wait(mutex);
    ...
    add item to buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:

do {
    wait(full);
    wait(mutex);
    ...
    remove an item from buffer
    ...
    signal(mutex);
    signal(empty);
    ...
    consume the item
    ...
} while (1);
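The two loops above can be run as real threads. The following is a minimal Python sketch of the same solution; the buffer size of 3 and the item count of 10 are arbitrary choices for illustration, and `acquire()`/`release()` correspond to wait()/signal().

```python
import threading
from collections import deque

N_ITEMS, N_BUFFERS = 10, 3
buffer = deque()
empty = threading.Semaphore(N_BUFFERS)  # counts empty slots
full = threading.Semaphore(0)           # counts filled slots
mutex = threading.Semaphore(1)          # mutual exclusion on the buffer
consumed = []

def producer():
    for item in range(N_ITEMS):
        empty.acquire()                 # wait(empty)
        mutex.acquire()                 # wait(mutex)
        buffer.append(item)             # add item to buffer
        mutex.release()                 # signal(mutex)
        full.release()                  # signal(full)

def consumer():
    for _ in range(N_ITEMS):
        full.acquire()                  # wait(full)
        mutex.acquire()                 # wait(mutex)
        consumed.append(buffer.popleft())  # remove an item from buffer
        mutex.release()                 # signal(mutex)
        empty.release()                 # signal(empty)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(consumed)  # [0, 1, ..., 9]: items arrive in order; at most 3 ever buffered
```

The empty semaphore prevents the producer from overfilling the buffer, and the full semaphore prevents the consumer from reading an empty one.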

2. The Readers-Writers Problem

A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: a new reader need not wait just because a writer is waiting; writers priority: if a writer is waiting to access the data, no new readers may start reading.

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses a counter and two semaphores:
1. readcount = 0 (an integer counting the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore providing mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore shared by both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:

wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
...
reading is performed
...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

Writer process:

do {
    wait(wrt);
    ...
    writing is performed
    ...
    signal(wrt);
} while (TRUE);
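Here is a runnable transcription of the readers-priority solution, assuming Python threads and treating readcount as a plain integer guarded by mutex; the thread names R0-R2 and W1 are illustrative, not from the notes.

```python
import threading

read_count = 0                   # the notes' readcount: a plain integer
mutex = threading.Semaphore(1)   # protects read_count
wrt = threading.Semaphore(1)     # exclusive access for writers
log = []

def reader(name):
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()
    log.append((name, "reading"))  # reading is performed
    mutex.acquire()
    read_count -= 1
    if read_count == 0:
        wrt.release()            # last reader lets writers back in
    mutex.release()

def writer(name):
    wrt.acquire()
    log.append((name, "writing"))  # writing is performed
    wrt.release()

threads = [threading.Thread(target=reader, args=(f"R{i}",)) for i in range(3)]
threads.append(threading.Thread(target=writer, args=("W1",)))
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(log))  # every reader reads once and the writer writes exactly once
```

The interleaving of readers and the writer is nondeterministic, but the writer can never run while any reader holds wrt.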

3. The Dining-Philosophers Problem

n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining-philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5];   // initially all values are 1

Philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    ...
    eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    ...
    think
    ...
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock (for example, if every philosopher picks up his left chopstick at the same moment).

Q) Types of operating systems
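The solution above can be made runnable. This Python sketch also adds one well-known fix for the deadlock just mentioned: the last philosopher picks up the chopsticks in the opposite order, which breaks the circular wait. Note that this fix is an addition for illustration, not part of the notes' original solution.

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]  # all initially 1
meals = []

def philosopher(i):
    first, second = i, (i + 1) % N
    if i == N - 1:
        first, second = second, first   # asymmetry breaks the circular wait
    chopstick[first].acquire()          # wait(chopstick[...])
    chopstick[second].acquire()
    meals.append(i)                     # eat
    chopstick[second].release()         # signal(chopstick[...])
    chopstick[first].release()
    # think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(meals))  # [0, 1, 2, 3, 4]: every philosopher eats; no deadlock
```

With the original symmetric ordering, all five threads could each hold one chopstick and wait forever; reversing one philosopher's order makes that cycle impossible.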

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: in asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks: for example, one processor may be dedicated to disk operations, another to video operations, and so on. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within a time constraint.

                                                                                                Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that response requirements will be met under all conditions.
2. A soft real-time system cannot guarantee that response requirements will be met under all conditions. Examples: digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control: it spreads its decision-making actions across the computers in the system, so that the failure of individual computers or of the network does not cripple its operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. Users may not know where their programs are run or where their files are located; all of that is handled automatically by the operating system.

                                                                                                Distributed systems allow applications to run on several processors at the same time thus requiring more complex processor scheduling algorithms

                                                                                                DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location Modern desktop systems support multiprogramming Common examples are Linux FreeBSD Windows 8 and the Macintosh operating system

HANDHELD SYSTEMS (palmtop computers): two of the most popular operating systems for handhelds are Symbian OS and Android OS. Handheld systems have very limited memory.


                                                                                                CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system The components of a cluster are usually connected to each other through fast local area networks each node (computer used as a server) running its own instance of an operating system

                                                                                                Q) Modern operating system

New developments in hardware, applications, and security threats have led to the development of the modern operating system.

New hardware developments include increased machine speed, high-speed networks, and increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client/server computing.

Developments in modern operating systems can be categorized as follows:

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and it is well suited to a distributed environment.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks; for example, a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: as systems now often contain multiple processors, an SMP operating system provides greater efficiency when multiple processors are available. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                                                                                SMP schedules processes or threads across all of the processors

                                                                                                Advantages

1. Increased performance: if some portion of a program can be run in parallel, the performance of an SMP OS increases.
2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can scale with the number of processors used.

                                                                                                4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

5. Object-oriented design: adds modularity to the kernel. The OS can be customized without affecting system integrity.

Q) Linux Operating System

Linux is open source: its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system, and its main design goals are speed, efficiency, and standardization.

Components of a Linux System


The Linux operating system has primarily three components, as shown in the diagram below:

Kernel - The kernel is the core part of Linux and is responsible for all major activities of the operating system. It interacts directly with the hardware and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs that application programs use to access the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks such as updating log files or accepting login requests from terminals.

Kernel Modules: sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and allows them to talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode

Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code runs as a single process in a single address space, requires no context switch, and is therefore very efficient and fast. The kernel runs each process, provides system services to processes, and gives processes protected access to the hardware. Code that does not need to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or to kernel code. User programs and utilities use the system libraries to invoke kernel functions for low-level system tasks.

Basic Features

The following are some of the important features of the Linux operating system:

Portable - Portability means software works in the same way on different types of hardware. The Linux kernel and application programs support installation on any kind of hardware platform.

Open Source - Linux source code is freely available, and Linux is a community-based development project. Multiple teams work in collaboration to enhance its capabilities, and it is continuously evolving.

Multi-User - Linux is a multi-user system: multiple users can access system resources (memory, RAM, application programs) at the same time.

Multiprogramming - Linux is a multiprogramming system: multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used to perform various kinds of operations, call application programs, and so on.


Security - Linux provides user security using authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture - The Linux system architecture consists of the following layers:

Hardware layer - Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - The core component of the operating system; interacts directly with the hardware and provides low-level services to the upper-layer components.

Shell - An interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP operating system

1. Extensibility: layered architecture.

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. This modular structure allows additional environmental subsystems to be added without affecting the executive.

2. Portability: XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3. Reliability: XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.

4. Compatibility: applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.

5. Performance: XP subsystems can communicate with one another via high-performance message passing.

                                                                                                Preemption of low priority threads enables the system to respond quickly to external events Designed for symmetrical multiprocessing

                                                                                                6 International support mdash supports different locales via the national language support (NLS) APIXP Architecture 1 Layered system of modules 2 Protected mode mdash hardware abstraction layer (HAL) kernel executive 3 User mode mdash collection of subsystems

                                                                                                a Environmental subsystems emulate different operating systems b Protection subsystems provide security functions

                                                                                                XP architecture

                                                                                                56

                                                                                                Q) Windows network Operating systemExample Windows 2000 operating system

                                                                                                All the pictures are taken from Silberschatz Abraham et al Peter Baer Galvin and Greg Gagne Operating system concepts Reading Addison-Wesley

                                                                                                Visit my blog enthusiaststudentblogspotin57

                                                                                                mtechmessengerblogspotin

                                                                                                Keywords OS exam notes OS notesOperating system notes Opearing system galvin notesSimple batch system Multi- programmed Batch System Time sharing System Difference between Batch Multi-programming and Time sharing What is process structure of process in memory Process states PCB Operations on Processes Process Creation Process Termination cascading terminationcontext switch Basic Concepts of threads two modes of CPU execution Inter process communication Shared ndash Memory Systems Message passing systemswhat is deadlock four conditions that are necessary for deadlock to occur Methods for handling deadlocks Deadlock prevention Deadlock avoidanceResource Allocation graphBankers AlgorithmSafe State Unsafe state Safety Algorithm Resource-Request Algorithm Memory management Address Binding Logical vs Physical Address Space MMU Memory-Management Unit Dynamic loading Dynamic Linking Overlays Swapping Contiguous memory allocation Single-partition allocation Multiple-partition allocation Internal fragmentation Variable Size partitions External fragmentation First-fit Best-fit Worst-fit Non-Contiguous Memory allocation Paging Implementation of Page Table PTLR PTBR page table base register Page-table length register translation look-aside buffers TLB Protection in Paging Shared pages in Paging Segmentation Shared Segments Virtual memory Demand paging Procedure for Handling a Page Fault pure demand paging Page Replacement Page Replacement Algorithms FIFO Optimal Page replacement AlgorithmLeast Recently used (LRU) Page replacement algorithm ThrashingWorking Set model global and local page replacement

                                                                                                58

                                                                                                • 101 File Concept Data is stored in secondary memory in the form of files A file is a sequence of bits bytes lines or records A file has structure which depends on its type
                                                                                                • File Attributes
                                                                                                • File Operations
                                                                                                • Kernel Mode vs User Mode
                                                                                                • Basic Features
                                                                                                • Architecture

2. Starvation: In starvation, one or more processes stay blocked forever and never get a chance to enter the critical section. This can happen if we do not specify how processes are added to the waiting queue, or which one is removed from the queue in the signal( ) call. For example, if a LIFO queue is chosen, the first process that starts waiting may never get a chance as long as newer processes keep arriving.
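The LIFO case can be made concrete with a small sketch (the process names and arrival pattern are hypothetical, chosen only to illustrate the point):

```python
# Toy model of a LIFO waiting queue: each signal() wakes the process that
# blocked most recently, so under a steady stream of new arrivals the
# earliest waiter is never selected.
wait_queue = ["P1"]          # P1 blocks on the semaphore first
served = []
for i in range(5):           # five signal() calls follow...
    wait_queue.append(f"P{i + 2}")   # ...but a newer process blocks before each one
    served.append(wait_queue.pop())  # LIFO choice: wake the newest waiter
print(served)                # ['P2', 'P3', 'P4', 'P5', 'P6'] -- P1 starves
```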

3. Priority inversion: Let processes L, M, and H have priorities L < M < H. Process H is waiting for a resource held by process L, so process L is running. When process M enters, it preempts L, and M runs. Here M runs before H even though M has lower priority than H. To avoid this, L is temporarily given H's high priority so that M cannot preempt L (priority inheritance).

=====================================================================

Q) Classical problems on synchronization

1. The Bounded-Buffer Problem (also called the Producer-Consumer Problem)
2. The Readers-Writers Problem
3. The Dining Philosophers Problem

These problems are used to test newly proposed synchronization schemes.

1. The Bounded Buffer Problem

Consider n buffers, each of which holds one item; a producer process, which creates the items (one at a time); and a consumer process, which consumes them (one at a time).

The producer process cannot produce an item if all the buffers are full, and the consumer process cannot consume if all the buffers are empty. As both processes modify the contents of the buffer, synchronization is needed. For that we use 3 semaphores:

1. empty = n (counts the number of empty buffers)
2. full = 0 (counts the number of filled buffers)
3. mutex = 1 (provides mutual exclusion for access to the buffer)

Producer process:

    do {
        ...
        produce an item
        ...
        wait(empty);
        wait(mutex);
        ...
        add the item to the buffer
        ...
        signal(mutex);
        signal(full);
    } while (1);

Consumer process:

    do {
        wait(full);
        wait(mutex);
        ...
        remove an item from the buffer
        ...
        signal(mutex);
        signal(empty);
        ...
        consume the item
        ...
    } while (1);
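The pseudocode maps directly onto Python's `threading.Semaphore`; this runnable sketch (the buffer size of 3 and item count of 10 are arbitrary choices) runs one producer and one consumer:

```python
import threading
from collections import deque

N_BUFFERS, N_ITEMS = 3, 10
empty = threading.Semaphore(N_BUFFERS)  # counts empty buffer slots
full = threading.Semaphore(0)           # counts filled buffer slots
mutex = threading.Semaphore(1)          # mutual exclusion on the buffer
buffer = deque()
consumed = []

def producer():
    for item in range(N_ITEMS):
        empty.acquire()                  # wait(empty)
        mutex.acquire()                  # wait(mutex)
        buffer.append(item)              # add the item to the buffer
        mutex.release()                  # signal(mutex)
        full.release()                   # signal(full)

def consumer():
    for _ in range(N_ITEMS):
        full.acquire()                   # wait(full)
        mutex.acquire()                  # wait(mutex)
        consumed.append(buffer.popleft())  # remove an item from the buffer
        mutex.release()                  # signal(mutex)
        empty.release()                  # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)   # with one producer and one consumer, items arrive in order
```

Note that `empty` blocks the producer when all three slots are filled, exactly as wait(empty) does in the pseudocode.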

2. The Readers-Writers Problem

A data item, such as a file, is shared among several processes. Each process is classified as a reader or a writer. Multiple readers can read the file at the same time, but a writer must have exclusive access (it cannot share the file with either a reader or another writer). There are two versions of the readers-writers problem:

readers priority: a new reader need not wait just because a writer is waiting. writers priority: if a writer is waiting to access the data, no new readers can start reading.

A solution to either version may cause starvation: in the readers-priority version writers may starve; in the writers-priority version readers may starve.

A semaphore solution to the readers-priority version uses:
1. readcount = 0 (an integer that counts the number of processes currently reading the shared data)
2. mutex = 1 (a semaphore used for mutual exclusion when readcount is updated)
3. wrt = 1 (a semaphore common to both writers and readers)

wrt provides mutually exclusive access to the shared data.

Reader process:

    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    ...
    reading is performed
    ...
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);

Writer process:

    do {
        wait(wrt);
        ...
        writing is performed
        ...
        signal(wrt);
    } while (TRUE);
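A runnable sketch of this readers-priority scheme with Python semaphores (three readers and one writer are an arbitrary choice; the log only records that each operation completed):

```python
import threading

mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # exclusive access to the shared data
readcount = 0
log = []

def reader(name):
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()            # first reader locks writers out
    mutex.release()
    log.append(f"{name} reading")
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()            # last reader lets writers back in
    mutex.release()

def writer(name):
    wrt.acquire()                # writer needs exclusive access
    log.append(f"{name} writing")
    wrt.release()

threads = [threading.Thread(target=reader, args=(f"R{i}",)) for i in range(3)]
threads.append(threading.Thread(target=writer, args=("W1",)))
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(log))                  # all 4 operations complete without deadlock
```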

3. The Dining Philosophers Problem

n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

    semaphore chopstick[5];   // initially all values are 1

Philosopher i:

    do {
        wait(chopstick[i]);
        wait(chopstick[(i+1) % 5]);
        ...
        eat
        ...
        signal(chopstick[i]);
        signal(chopstick[(i+1) % 5]);
        ...
        think
        ...
    } while (1);
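The same chopstick scheme can be sketched with Python semaphores. To keep the demo deterministic, the philosophers here eat one after another rather than in concurrent threads (true concurrency is exactly where the deadlock risk lies, if every philosopher grabs his left chopstick at the same instant):

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]  # all initially 1
ate = []

def philosopher(i):
    chopstick[i].acquire()             # wait(chopstick[i])
    chopstick[(i + 1) % N].acquire()   # wait(chopstick[(i+1) % 5])
    ate.append(i)                      # eat: both chopsticks held
    chopstick[i].release()             # signal(chopstick[i])
    chopstick[(i + 1) % N].release()   # signal(chopstick[(i+1) % 5])

for i in range(N):                     # sequential rounds for determinism
    philosopher(i)
print(ate)                             # [0, 1, 2, 3, 4]
```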

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems have processors dedicated to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. These systems don't have the flexibility to assign processes to the least-loaded CPU, unlike an SMP system.

A REAL-TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint.

                                                                                                  Two kinds of real-time systems have evolved

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples: digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control, i.e., it spreads its decision-making across the different computers in the system, so that the failure of individual computers or of the network does not cripple its operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. The users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

Distributed systems allow applications to run on several processors at the same time, thus requiring more complex processor-scheduling algorithms.

DESKTOP SYSTEMS: A desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. Handheld systems have very limited memory.


CLUSTERED SYSTEMS: A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                                                                                                  Q) Modern operating system

New developments in hardware, applications, and security threats led to the development of modern operating systems.

Hardware developments include increased machine speed, high-speed networks, and increased size and variety of memory devices.

Application developments include multimedia applications, Internet and Web access, and client-server computing.

Developments in modern operating systems can be categorized as:

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.
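A minimal sketch of this idea: one process, several threads, each serving an independent "client request" (the request IDs and the `handle` function are hypothetical stand-ins for a real server's work):

```python
import threading

results = {}

def handle(request_id):
    # each thread works on its own request, independent of the others
    results[request_id] = f"done-{request_id}"

# one thread per pending request, all within a single process
threads = [threading.Thread(target=handle, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))   # [0, 1, 2, 3] -- all requests handled
```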

3. Symmetric multiprocessing: as there are now often multiple processors in a single system, an SMP operating system provides greater efficiency when multiple processors are present. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                                                                                  SMP schedules processes or threads across all of the processors

                                                                                                  Advantages

1. Increased performance: if some portion of a program can be run in parallel, the performance of an SMP OS increases.
2. Increased availability: since all processors can perform the same task, if one processor fails the others keep working.
3. Incremental growth: one can add a new processor to increase performance.
4. Scaling: cost can be made dependent on the number of processors used.

4. Distributed operating systems: allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel. The OS can be customized without affecting system integrity.

Q) Linux Operating System

Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization.

Components of the Linux System


The Linux operating system has primarily three components:

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system, interacts directly with the hardware, and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs that application programs use to access the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks such as updating log files and accepting login requests from terminals.

Kernel Modules - Sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support:
1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel.
2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available.
3. Conflict resolution: protects one driver's reserved resources from accidental use by another driver.

Kernel Mode vs User Mode

Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require any context switch, hence it is very efficient and fast. The kernel runs each process and provides system services to processes, including protected access to hardware. Code which is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code; they use the system libraries to invoke kernel functions for low-level tasks.

Basic Features

Following are some of the important features of the Linux operating system:

Portable - Portability means the software works in the same way on different types of hardware. The Linux kernel and application programs support installation on many kinds of hardware platforms.

Open Source - Linux source code is freely available, and Linux is a community-based development project. Multiple teams work in collaboration to enhance the capability of the operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system, meaning multiple users can access system resources (memory, RAM, application programs) at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system, perform various types of operations, call application programs, etc.


Security - Linux provides user security using authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture

The Linux system architecture consists of the following layers:

Hardware layer - Consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - The core component of the operating system; interacts directly with the hardware and provides low-level services to upper-layer components.

Shell - An interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System

1. Extensibility: layered architecture.

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. This modular structure allows additional environmental subsystems to be added without affecting the executive.

2. Portability: XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3. Reliability: XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.

4. Compatibility: applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.

5. Performance: XP subsystems can communicate with one another via high-performance message passing.

Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

6. International support: supports different locales via the national language support (NLS) API.

XP Architecture
1. Layered system of modules.
2. Protected mode: hardware abstraction layer (HAL), kernel, executive.
3. User mode: a collection of subsystems.

a. Environmental subsystems emulate different operating systems. b. Protection subsystems provide security functions.

                                                                                                  XP architecture


Q) Windows Network Operating System

Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.

Visit my blog: enthusiaststudent.blogspot.in / mtechmessenger.blogspot.in



• 10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records. A file has a structure which depends on its type.

                                                                                                    add item to buffer hellip signal(mutex) signal(full)

                                                                                                    while (1)

                                                                                                    signal(mutex) signal(empty) hellip consume the item hellip while (1)

                                                                                                    2 The Readers-Writers ProblemA data item such as a file is shared among several processesEach process is classified as reader or writerMultiple readers can read the file at same timeA writer must have exclusive access (ie cannot share with either a reader or another writer)Two versions of the readers-writers problem

                                                                                                    readers priority new reader need not wait because a writer is waiting writers priority if a writer is waiting to access the database no new readers can start reading

                                                                                                    A solution to both version may cause starvation in the readers priority version writers may starve in the writers priority version readers may starve

                                                                                                    A semaphore solution to the readers priority version uses 3 semaphores1 readcount = 0 (readcount semaphore counts the number of reading shared data2 mutex =1 (mutex semaphore is used for mutually exclusion when readcount is updated)3 wrt = 1 (wrt semaphore is common to both writer and reader)

wrt provides mutually exclusive access to the shared data.

Reader process:

wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
… reading is performed …
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

Writer process:

do {
    wait(wrt);
    … writing is performed …
    signal(wrt);
} while (TRUE);
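The readers-priority pseudocode above maps directly onto Python semaphores. A minimal sketch; the shared data (a plain counter) and the thread counts are illustrative choices, not from the notes:

```python
import threading

readcount = 0
mutex = threading.Semaphore(1)  # protects readcount
wrt = threading.Semaphore(1)    # exclusive access to the shared data
shared = {"value": 0}
reads = []

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:          # first reader locks out writers
        wrt.acquire()
    mutex.release()
    reads.append(shared["value"])   # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:          # last reader lets writers back in
        wrt.release()
    mutex.release()

def writer():
    wrt.acquire()
    shared["value"] += 1        # writing is performed
    wrt.release()

threads = [threading.Thread(target=writer) for _ in range(5)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(shared["value"])  # all 5 writes applied: 5
```

Each reader sees some consistent intermediate value of the counter; no read ever overlaps a write because the first reader holds wrt on behalf of the whole reading group.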

3. The Dining Philosophers Problem. n philosophers sit around a table, thinking and eating. As time passes, a philosopher gets hungry and tries to pick up the chopsticks on his left and on his right. A philosopher may pick up only one chopstick at a time and cannot pick up a chopstick already in the hand of a neighbouring philosopher. The dining philosophers problem is an example of a large class of concurrency-control problems: it is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. A semaphore solution represents each chopstick with a semaphore:

semaphore chopstick[5]; /* initially all values are 1 */

Philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    … eat …
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    … think …
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it can create a deadlock: if all five philosophers pick up their left chopsticks at the same time, each waits forever for the right one.

Q) Types of operating systems
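One standard remedy for the deadlock (not given in the notes above, but a common textbook fix) is to break the circular wait: make one philosopher pick up the chopsticks in the opposite order, so all philosophers effectively acquire semaphores in increasing index order. A Python sketch; the number of rounds is an illustrative choice:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds=3):
    left, right = i, (i + 1) % N
    if i == N - 1:                  # last philosopher reverses the order,
        left, right = right, left   # breaking the circular-wait condition
    for _ in range(rounds):
        chopstick[left].acquire()   # wait(chopstick[left])
        chopstick[right].acquire()  # wait(chopstick[right])
        meals[i] += 1               # eat
        chopstick[right].release()  # signal(chopstick[right])
        chopstick[left].release()   # signal(chopstick[left])
                                    # think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # every philosopher ate 3 times: [3, 3, 3, 3, 3]
```

With this ordering every philosopher always acquires the lower-numbered chopstick first, so no cycle of waiting philosophers can form and the program always terminates.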

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), the CPUs are not all equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, and so on. Unlike an SMP system, these systems lack the flexibility to assign processes to the least-loaded CPU.

A REAL-TIME SYSTEM implements a computer application that must complete its execution within a time constraint.

Two kinds of real-time systems have evolved:

1. A hard real-time system can guarantee that response requirements will be met under all conditions. 2. A soft real-time system cannot guarantee that response requirements will be met under all conditions.

Examples: digital audio and multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. A distributed system uses distributed control; that is, it spreads its decision-making across the computers in the system, so that failures of individual computers or of the network do not cripple its operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. Users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

Distributed systems allow applications to run on several processors at the same time, and thus require more complex processor-scheduling algorithms.

DESKTOP SYSTEMS: A desktop system is a personal computer (PC) system used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. Handheld systems have very little memory.


CLUSTERED SYSTEMS: A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                                                                                                    Q) Modern operating system

New developments in hardware, applications, and security threats have led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client-server computing.

Developments in modern operating systems can be categorized as:

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.
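The server scenario just described can be sketched in a few lines: one process, several threads, each handling an independent request. The request strings and the handler are made up for illustration:

```python
import threading

requests = [f"client-{i}" for i in range(4)]
responses = {}
lock = threading.Lock()

def handle(request):
    result = request.upper()   # stand-in for real request processing
    with lock:                 # protect the shared responses dict
        responses[request] = result

# One thread per request; all threads run concurrently in one process.
threads = [threading.Thread(target=handle, args=(r,)) for r in requests]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(responses.values()))
# → ['CLIENT-0', 'CLIENT-1', 'CLIENT-2', 'CLIENT-3']
```

Because the tasks are independent, no thread waits on another except briefly at the shared dictionary.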

3. Symmetric multiprocessing: nowadays there are often multiple processors in a single system, and an SMP operating system exploits them for greater efficiency. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

                                                                                                    SMP schedules processes or threads across all of the processors

                                                                                                    Advantages

1. Increased performance: if some portion of a program can be run in parallel, the performance of an SMP OS increases.

2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working. 3. Incremental growth: one can add a new processor to increase performance. 4. Scaling: cost can depend on the number of processors used.

4. Distributed operating systems: allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux Operating System

Linux is open source: its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization.

Components of a Linux System:


The Linux operating system has primarily three components:

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system, interacts directly with the hardware, and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs used by application programs to access the kernel's features.

System Utility - System utility programs perform specialized, individual-level tasks such as updating log files and accepting login requests from terminals.

Kernel Modules - Sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

Linux module support has three components: 1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel. 2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available. 3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode: Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, requires no context switch, and is therefore very efficient and fast. The kernel runs processes, provides system services to them, and gives them protected access to hardware. Code that does not need to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or to kernel code; they use the system libraries to invoke kernel functions for low-level tasks.

Basic Features: Following are some of the important features of the Linux operating system.

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on almost any kind of hardware platform.

Open Source - The Linux source code is freely available, and Linux is a community-based development project; multiple teams work in collaboration to enhance its capabilities, so it is continuously evolving.

Multi-User - Linux is a multi-user system: multiple users can access system resources (memory, devices, application programs) at the same time.

Multiprogramming - Linux is a multiprogramming system: multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute operating-system commands, perform various operations, call application programs, and so on.


Security - Linux provides user security using authentication features such as password protection, controlled access to specific files, and encryption of data.
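The kernel-mode/user-mode split described earlier can be seen from any user program: the program never enters the kernel directly, but calls a system library, which issues the actual system call on its behalf. A tiny sketch (Python's os module plays the "system library" role here):

```python
import os

pid = os.getpid()   # wraps the getpid() system call via the C library
cwd = os.getcwd()   # wraps the getcwd() system call
# The process stays in user mode; the CPU switches to kernel mode only
# for the duration of each system call, then returns.
print(pid > 0, isinstance(cwd, str))  # → True True
```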

Architecture: The Linux system architecture consists of the following layers.

Hardware layer - Consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - The core component of the operating system; it interacts directly with the hardware and provides low-level services to the upper layers.

Shell - An interface to the kernel; it takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP Operating System

1. Extensibility - layered architecture.

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. This modular structure allows additional environmental subsystems to be added without affecting the executive.

2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating-system resources.

4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.

5. Performance - XP subsystems can communicate with one another via high-performance message passing.

Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

6. International support - supports different locales via the national language support (NLS) API.

XP Architecture: 1. A layered system of modules. 2. Protected mode - hardware abstraction layer (HAL), kernel, executive. 3. User mode - a collection of subsystems: a. environmental subsystems emulate different operating systems; b. protection subsystems provide security functions.

                                                                                                    XP architecture

                                                                                                    56

Q) Windows Network Operating System

Example: the Windows 2000 operating system.

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.

Visit my blogs: enthusiaststudent.blogspot.in, mtechmessenger.blogspot.in



                                                                                                      Three components to Linux module support 1 module management Supports loading modules into memory and allows to talk to the rest of the kernel 2driver registration Allows modules to inform rest of the kernel that a new driver has become available 3 conflict resolution protect reserved resources of one driver from accidental use by another driver

                                                                                                      Kernel Mode vs User ModeKernel component code executes in a special privileged mode called kernel mode with full access to all resources of the computer This code represents a single process executes in single address space and do not require any context switch and hence is very efficient and fast Kernel runs each processes and provides system services to processes provides protected access to hardwares to processesCode which is not required to run in kernel mode is in System Library User programs and other system programs works in User Mode which has no access to system hardwares and kernel code User programs utilities use System libraries to access Kernel functions to get systems low level tasksBasic FeaturesFollowing are some of the important features of Linux Operating System

                                                                                                      Portable - Portability means softwares can works on different types of hardwares in same way Linux kernel and application programs supports their installation on any kind of hardware platform

                                                                                                      Open Source - Linux source code is freely available and it is community based development project Multiple teams works in collaboration to enhance the capability of Linux operating system and it is continuously evolving

                                                                                                      Multi-User - Linux is a multiuser system means multiple users can access system resources like memory ram application programs at same time

                                                                                                      Multiprogramming - Linux is a multiprogramming system means multiple applications can run at same time

                                                                                                      Hierarchical File System - Linux provides a standard file structure in which system files user files are arranged

                                                                                                      Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system It can be used to do various types of operations call application programs etc

                                                                                                      55

                                                                                                      Security - Linux provides user security using authentication features like password protection controlled access to specific files encryption of data

                                                                                                      Architecture Linux System Architecture is consists of following layers

                                                                                                      Hardware layer - Hardware consists of all peripheral devices (RAM HDD CPU etc)

                                                                                                      Kernel - Core component of Operating System interacts directly with hardware provides low level services to upper layer components

                                                                                                      Shell - An interface to kernel Takes commands from user and executes kernels functions

                                                                                                      Utilities - Utility programs giving user most of the functionalities of an operating systems

                                                                                                      Q) windows XP operating System1Extensibility mdash layered architecture

                                                                                                      Executive which runs in protected mode provides the basic system services On top of the executive several server subsystems operate in user mode Modular structure allows additional environmental subsystems to be added without affecting the

                                                                                                      executive 2Portability mdash XP can be moved from on hardware architecture to another with relatively few changes

                                                                                                      Written in C and C++ Processor-dependent code is isolated in a dynamic link library (DLL) called the ldquohardware abstraction

                                                                                                      layerrdquo (HAL)3 Reliability mdashXP uses hardware protection for virtual memory and software protection mechanisms for operating system resources 4 Compatibility mdash applications that follow the IEEE 10031 (POSIX) standard can be complied to run on 2000 without changing the source code 5 Performance mdash XP subsystems can communicate with one another via high-performance message passing

                                                                                                      Preemption of low priority threads enables the system to respond quickly to external events Designed for symmetrical multiprocessing

                                                                                                      6 International support mdash supports different locales via the national language support (NLS) APIXP Architecture 1 Layered system of modules 2 Protected mode mdash hardware abstraction layer (HAL) kernel executive 3 User mode mdash collection of subsystems

                                                                                                      a Environmental subsystems emulate different operating systems b Protection subsystems provide security functions

                                                                                                      XP architecture

                                                                                                      56

                                                                                                      Q) Windows network Operating systemExample Windows 2000 operating system

                                                                                                      All the pictures are taken from Silberschatz Abraham et al Peter Baer Galvin and Greg Gagne Operating system concepts Reading Addison-Wesley

                                                                                                      Visit my blog enthusiaststudentblogspotin57

                                                                                                      mtechmessengerblogspotin

                                                                                                      Keywords OS exam notes OS notesOperating system notes Opearing system galvin notesSimple batch system Multi- programmed Batch System Time sharing System Difference between Batch Multi-programming and Time sharing What is process structure of process in memory Process states PCB Operations on Processes Process Creation Process Termination cascading terminationcontext switch Basic Concepts of threads two modes of CPU execution Inter process communication Shared ndash Memory Systems Message passing systemswhat is deadlock four conditions that are necessary for deadlock to occur Methods for handling deadlocks Deadlock prevention Deadlock avoidanceResource Allocation graphBankers AlgorithmSafe State Unsafe state Safety Algorithm Resource-Request Algorithm Memory management Address Binding Logical vs Physical Address Space MMU Memory-Management Unit Dynamic loading Dynamic Linking Overlays Swapping Contiguous memory allocation Single-partition allocation Multiple-partition allocation Internal fragmentation Variable Size partitions External fragmentation First-fit Best-fit Worst-fit Non-Contiguous Memory allocation Paging Implementation of Page Table PTLR PTBR page table base register Page-table length register translation look-aside buffers TLB Protection in Paging Shared pages in Paging Segmentation Shared Segments Virtual memory Demand paging Procedure for Handling a Page Fault pure demand paging Page Replacement Page Replacement Algorithms FIFO Optimal Page replacement AlgorithmLeast Recently used (LRU) Page replacement algorithm ThrashingWorking Set model global and local page replacement

                                                                                                      58

                                                                                                      • 101 File Concept Data is stored in secondary memory in the form of files A file is a sequence of bits bytes lines or records A file has structure which depends on its type
                                                                                                      • File Attributes
                                                                                                      • File Operations
                                                                                                      • Kernel Mode vs User Mode
                                                                                                      • Basic Features
                                                                                                      • Architecture

do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    /* ... eat ... */
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    /* ... think ... */
} while (1);

This solution guarantees that no two neighbouring philosophers eat simultaneously, but it has the possibility of creating a deadlock: if every philosopher picks up the left chopstick at the same moment, each waits forever for the right one.

Q) Types of operating systems

MULTIPROCESSING SYSTEMS have multiple hardware CPUs.

In symmetric multiprocessing (SMP) systems, the processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).


ASYMMETRIC MULTIPROCESSING SYSTEMS In asymmetric multiprocessing (ASMP), all CPUs are not equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, etc. Unlike an SMP system, they do not have the flexibility to assign processes to the least-loaded CPU.

A REAL TIME SYSTEM is used to implement a computer application that must complete its execution within its time constraint.

Two kinds of real-time systems have evolved:

1. A hard real-time system can guarantee that the response requirement will be met under all conditions. 2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples:

digital audio or multimedia systems, digital telephones.

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e., they spread their decision-making actions across different computers in the system, so that failures of individual computers or of the network do not cripple operation.

A distributed operating system appears as a uniprocessor system even though it has multiple processors. Users may not know where their programs are being run or where their files are located; that should all be handled automatically by the operating system.

Distributed systems allow applications to run on several processors at the same time, thus requiring more complex processor-scheduling algorithms.

DESKTOP SYSTEMS A desktop system is a personal computer (PC) system used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers) Two of the most popular operating systems for handhelds are Symbian OS and Android OS. Handheld systems have very little memory.


CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that, in many respects, they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

                                                                                                        Q) Modern operating system

New developments in hardware, applications, and security threats led to the development of the modern operating system.

New hardware developments include increased machine speed, high-speed networks, and an increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client-server computing.

Developments in modern operating systems can be categorized as follows:

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and it is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks, for example a database server that listens for and processes many client requests.

3. Symmetric multiprocessing: nowadays there are often multiple processors in a single system, and an SMP operating system provides greater efficiency when multiple processors are present. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

SMP schedules processes or threads across all of the processors.

Advantages:

1. Increased performance: if some portion of the program can be run in parallel, the performance of an SMP OS increases.

2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.

3. Incremental growth: one can add a new processor to increase performance.

4. Scaling: cost can depend on the number of processors used.

4. Distributed operating systems: distributed systems allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux Operating system Linux is open source: its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization. Components of the Linux system:


The Linux operating system has primarily three components:

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system. It interacts directly with hardware and hides low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs through which application programs use the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks like updating log files, accepting login requests from terminals, etc.

Kernel Modules Sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module may typically implement a device driver, a file system, or a networking protocol.

There are three components to Linux module support: 1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel. 2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available. 3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode Kernel component code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, does not require any context switch, and hence is very efficient and fast. The kernel runs each process, provides system services to processes, and gives processes protected access to hardware. Code that is not required to run in kernel mode is placed in system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or to kernel code. User programs and utilities use system libraries to invoke kernel functions for low-level system tasks. Basic Features Following are some of the important features of the Linux operating system.

Portable - Portability means software can work in the same way on different types of hardware. The Linux kernel and application programs support installation on almost any kind of hardware platform.

Open Source - Linux source code is freely available, and Linux is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system, meaning multiple users can access system resources like memory (RAM) and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used to do various types of operations, call application programs, etc.


Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

Architecture The Linux system architecture consists of the following layers:

Hardware layer - Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - The core component of the operating system; it interacts directly with hardware and provides low-level services to upper-layer components.

Shell - An interface to the kernel. It takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

Q) Windows XP operating system 1. Extensibility - layered architecture.

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.

2. Portability - XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++; processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3. Reliability - XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.

4. Compatibility - applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.

5. Performance - XP subsystems can communicate with one another via high-performance message passing. Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

6. International support - supports different locales via the national language support (NLS) API.

XP Architecture: 1. Layered system of modules. 2. Protected mode - hardware abstraction layer (HAL), kernel, executive. 3. User mode - collection of subsystems:

a. Environmental subsystems emulate different operating systems. b. Protection subsystems provide security functions.

                                                                                                        XP architecture


Q) Windows network operating system Example: the Windows 2000 operating system.

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.

Visit my blog: enthusiaststudent.blogspot.in

mtechmessenger.blogspot.in

                                                                                                        Keywords OS exam notes OS notesOperating system notes Opearing system galvin notesSimple batch system Multi- programmed Batch System Time sharing System Difference between Batch Multi-programming and Time sharing What is process structure of process in memory Process states PCB Operations on Processes Process Creation Process Termination cascading terminationcontext switch Basic Concepts of threads two modes of CPU execution Inter process communication Shared ndash Memory Systems Message passing systemswhat is deadlock four conditions that are necessary for deadlock to occur Methods for handling deadlocks Deadlock prevention Deadlock avoidanceResource Allocation graphBankers AlgorithmSafe State Unsafe state Safety Algorithm Resource-Request Algorithm Memory management Address Binding Logical vs Physical Address Space MMU Memory-Management Unit Dynamic loading Dynamic Linking Overlays Swapping Contiguous memory allocation Single-partition allocation Multiple-partition allocation Internal fragmentation Variable Size partitions External fragmentation First-fit Best-fit Worst-fit Non-Contiguous Memory allocation Paging Implementation of Page Table PTLR PTBR page table base register Page-table length register translation look-aside buffers TLB Protection in Paging Shared pages in Paging Segmentation Shared Segments Virtual memory Demand paging Procedure for Handling a Page Fault pure demand paging Page Replacement Page Replacement Algorithms FIFO Optimal Page replacement AlgorithmLeast Recently used (LRU) Page replacement algorithm ThrashingWorking Set model global and local page replacement


ASYMMETRIC MULTIPROCESSING SYSTEMS: In asymmetric multiprocessing (ASMP), not all CPUs are equal (there are master and slave CPUs). ASMP systems dedicate processors to specific tasks; for example, one processor may be dedicated to disk operations, another to video operations, and so on. Unlike an SMP system, they lack the flexibility to assign processes to the least-loaded CPU.

REAL-TIME SYSTEMS are used to implement computer applications that must complete their execution within a time constraint.

Two kinds of real-time systems have evolved:

1. A hard real-time system can guarantee that the response requirement will be met under all conditions.
2. A soft real-time system cannot guarantee that the response requirement will be met under all conditions. Examples:

Digital audio or multimedia systems, digital telephones.
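The hard/soft distinction above boils down to deadlines. A minimal sketch, assuming an illustrative deadline value (nothing here comes from a real real-time OS API): run a task, then check whether its deadline was met — which is all a soft real-time system can report after the fact, whereas a hard real-time system must guarantee it in advance.

```python
import time

# Illustrative deadline, generously chosen so the toy task always meets it.
DEADLINE_SECONDS = 1.0

start = time.monotonic()
result = sum(range(100_000))          # the "task" with a deadline
elapsed = time.monotonic() - start

# A soft real-time system can only observe this; a hard one must guarantee it.
deadline_met = elapsed <= DEADLINE_SECONDS
print(deadline_met)
```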

DISTRIBUTED SYSTEMS execute parts of a computation on different systems at the same time. They use distributed control, i.e., they spread decision-making across the different computers in the system, so that the failure of individual computers or of the network does not cripple operation.

A distributed operating system appears to users as a uniprocessor system even though it runs on multiple processors. Users may not know where their programs are run or where their files are located; all of that is handled automatically by the operating system.

Distributed systems allow applications to run on several processors at the same time, and thus require more complex processor-scheduling algorithms.

DESKTOP SYSTEMS: A desktop system is a personal computer (PC) used at a single location. Modern desktop systems support multiprogramming. Common examples are Linux, FreeBSD, Windows 8, and the Macintosh operating system.

HANDHELD SYSTEMS (palmtop computers): Two of the most popular operating systems for handhelds are Symbian OS and Android. Handheld systems have very little memory.


CLUSTERED SYSTEMS: A computer cluster consists of a set of loosely connected computers that work together so that, in many respects, they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system.

Q) Modern operating systems

New developments in hardware, in applications, and in security threats have led to the development of modern operating systems.

New hardware developments include increased machine speed, high-speed networks, and increased size and variety of memory devices.

New application developments include multimedia applications, internet and web access, and client-server computing.

Developments in modern operating systems can be categorized as:

1. Microkernel architecture: assigns only a few functions to the kernel (IPC, CPU scheduling, etc.). Other OS services are provided by processes called servers that run in user mode. This approach decouples server and kernel development, and is well suited to distributed environments.

2. Multithreading: a process is divided into threads that can run concurrently. This is useful for applications that perform a number of independent tasks; for example, a database server that listens for and processes many client requests.
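A minimal sketch of that server pattern, with hypothetical names (`handle_request`, `worker`) standing in for real request-processing code: one process, a pool of threads, each pulling independent "client requests" off a shared queue.

```python
import threading
import queue

def handle_request(req):
    # Stand-in for real per-client work; requests are independent,
    # so threads can process them concurrently.
    return req * 2

def worker(requests, results):
    while True:
        try:
            req = requests.get_nowait()
        except queue.Empty:
            return                      # no work left; thread exits
        results.append(handle_request(req))

requests = queue.Queue()
for i in range(8):                      # eight pending "client requests"
    requests.put(i)

results = []
threads = [threading.Thread(target=worker, args=(requests, results))
           for _ in range(4)]           # four worker threads in one process
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))                  # every request serviced exactly once
```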

3. Symmetric multiprocessing (SMP): as systems nowadays often contain multiple processors, an SMP operating system provides greater efficiency when several processors are available. The processors share the same main memory and I/O facilities, and all processors can perform the same functions (hence the term symmetric).

An SMP OS schedules processes or threads across all of the processors.

                                                                                                          Advantages

1. Increased performance: if some portion of the program can run in parallel, performance increases.

2. Increased availability: since all processors can perform the same tasks, if one processor fails the others keep working.
3. Incremental growth: a new processor can be added to increase performance.
4. Scaling: cost can scale with the number of processors used.
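The first advantage can be made quantitative with Amdahl's law, which the notes do not state but which follows directly from splitting a program into a serial fraction and a parallelizable fraction: with fraction p parallelizable on n processors, the best speedup is 1 / ((1 − p) + p/n).

```python
def amdahl_speedup(p, n):
    # p: fraction of the program that can run in parallel (0..1)
    # n: number of processors
    return 1.0 / ((1.0 - p) + p / n)

# A program that is 90% parallelizable on a 4-processor SMP system:
print(round(amdahl_speedup(0.9, 4), 2))
# Even with unlimited processors, speedup is capped at 1 / (1 - p) = 10
# by the serial 10% -- which is why the serial fraction dominates scaling.
```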

4. Distributed operating systems: allow applications to run on several processors at the same time.

5. Object-oriented design: adds modularity to the kernel; the OS can be customized without affecting system integrity.

Q) Linux operating system: Linux is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind. It is a multi-user, multitasking system. Its main design goals are speed, efficiency, and standardization. Components of a Linux system:


The Linux operating system has primarily three components, as shown in the diagram below.

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system and interacts directly with the hardware, hiding low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs through which application programs access the kernel's features.

System Utility - System utility programs are responsible for specialized, individual-level tasks such as updating log files or accepting login requests from terminals.

Kernel Modules: sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support: 1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel. 2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available. 3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode: Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require a context switch, so it is very efficient and fast. The kernel runs every process, provides system services to processes, and gives them protected access to hardware. Code that is not required to run in kernel mode lives in the system libraries. User programs and other system programs work in user mode, which has no direct access to system hardware or kernel code; they use the system libraries to invoke kernel functions for low-level tasks.

Basic Features: Following are some of the important features of the Linux operating system.
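The kernel/user split above can be seen in miniature from any user program: the program never touches hardware itself, it asks the kernel via system calls, and libraries are thin wrappers over those calls. Each `os.*` call below corresponds to one POSIX system call (pipe, write, read, close) executed by the kernel in kernel mode on the process's behalf.

```python
import os

read_fd, write_fd = os.pipe()                   # pipe() system call: kernel creates the channel
os.write(write_fd, b"serviced in kernel mode")  # write() system call
os.close(write_fd)                              # close() system call
data = os.read(read_fd, 64)                     # read() system call
os.close(read_fd)
print(data.decode())
```

Higher-level library I/O (`print`, `open`) ultimately funnels into the same small set of system calls; the library layer only adds buffering and convenience in user mode.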

Portable - Portability means software works the same way on different types of hardware. The Linux kernel and application programs support installation on almost any kind of hardware platform.

Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system: multiple users can access system resources such as memory and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system: multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can perform various kinds of operations, call application programs, and so on.


Security - Linux provides user security through authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture: The Linux system architecture consists of the following layers.

Hardware layer - consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - the core component of the operating system; interacts directly with the hardware and provides low-level services to upper-layer components.

Shell - an interface to the kernel; takes commands from the user and executes the kernel's functions.

Utilities - utility programs that give the user most of the functionality of an operating system.
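The shell layer described above is just a command interpreter: it reads a command line, asks the kernel to run the named program, and returns its output. The same read-run-return loop can be driven programmatically, which is a quick way to see the shell sitting between a program and the kernel:

```python
import subprocess

# Hand a command line to the system shell (/bin/sh on POSIX systems);
# the shell locates the program, the kernel runs it, and we collect its output.
completed = subprocess.run("echo hello from the shell",
                           shell=True, capture_output=True, text=True)
print(completed.stdout.strip())
```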

Q) Windows XP operating system

1. Extensibility — layered architecture.

The executive, which runs in protected mode, provides the basic system services. On top of the executive, several server subsystems operate in user mode. The modular structure allows additional environmental subsystems to be added without affecting the executive.

2. Portability — XP can be moved from one hardware architecture to another with relatively few changes. It is written in C and C++, and processor-dependent code is isolated in a dynamic link library (DLL) called the "hardware abstraction layer" (HAL).

3. Reliability — XP uses hardware protection for virtual memory and software protection mechanisms for operating system resources.

4. Compatibility — applications that follow the IEEE 1003.1 (POSIX) standard can be compiled to run on XP without changing the source code.

5. Performance — XP subsystems can communicate with one another via high-performance message passing.

Preemption of low-priority threads enables the system to respond quickly to external events. XP is designed for symmetric multiprocessing.

6. International support — supports different locales via the national language support (NLS) API.

XP architecture: 1. A layered system of modules. 2. Protected mode — hardware abstraction layer (HAL), kernel, executive. 3. User mode — a collection of subsystems:

a. Environmental subsystems emulate different operating systems. b. Protection subsystems provide security functions.

                                                                                                          XP architecture


Q) Windows network operating system. Example: the Windows 2000 operating system.

All the pictures are taken from Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.

Visit my blog: enthusiaststudent.blogspot.in

mtechmessenger.blogspot.in

                                                                                                          Keywords OS exam notes OS notesOperating system notes Opearing system galvin notesSimple batch system Multi- programmed Batch System Time sharing System Difference between Batch Multi-programming and Time sharing What is process structure of process in memory Process states PCB Operations on Processes Process Creation Process Termination cascading terminationcontext switch Basic Concepts of threads two modes of CPU execution Inter process communication Shared ndash Memory Systems Message passing systemswhat is deadlock four conditions that are necessary for deadlock to occur Methods for handling deadlocks Deadlock prevention Deadlock avoidanceResource Allocation graphBankers AlgorithmSafe State Unsafe state Safety Algorithm Resource-Request Algorithm Memory management Address Binding Logical vs Physical Address Space MMU Memory-Management Unit Dynamic loading Dynamic Linking Overlays Swapping Contiguous memory allocation Single-partition allocation Multiple-partition allocation Internal fragmentation Variable Size partitions External fragmentation First-fit Best-fit Worst-fit Non-Contiguous Memory allocation Paging Implementation of Page Table PTLR PTBR page table base register Page-table length register translation look-aside buffers TLB Protection in Paging Shared pages in Paging Segmentation Shared Segments Virtual memory Demand paging Procedure for Handling a Page Fault pure demand paging Page Replacement Page Replacement Algorithms FIFO Optimal Page replacement AlgorithmLeast Recently used (LRU) Page replacement algorithm ThrashingWorking Set model global and local page replacement

                                                                                                          58

                                                                                                          • 101 File Concept Data is stored in secondary memory in the form of files A file is a sequence of bits bytes lines or records A file has structure which depends on its type
                                                                                                          • File Attributes
                                                                                                          • File Operations
                                                                                                          • Kernel Mode vs User Mode
                                                                                                          • Basic Features
                                                                                                          • Architecture

                                                                                                            CLUSTERED SYSTEMS A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system The components of a cluster are usually connected to each other through fast local area networks each node (computer used as a server) running its own instance of an operating system

                                                                                                            Q) Modern operating system

                                                                                                            New developments in hardware applications and security threats lead to the development of modern operating system

                                                                                                            New Hardware developments like increased machine speed high speed network increased size and variety of memory devices

                                                                                                            New Applications developments like multimedia applications internet and web access and client server computing

                                                                                                            Developments in modern operating system can be categorized as

                                                                                                            1 Microkernel architecture assigns only few functions to kernel ( IPC and CPU scheduling etc) Other OS services are provided by processes called servers that run in user mode This approach decouples server and kernel development This approach is well suited to distributed environment

                                                                                                            2 Multithreading a process is divided into threads that can run concurrently Useful for applications that perform a number of independent tasks Example database server that listens and processes many client requests

                                                                                                            3 Symmetric multiprocessing As now-a-days there are multiple microprocessors in a single system SMP operating system provides greater efficiency when there are multiple processors The processes share the same main memory and io facilities All processors can perform the same functions (hence term symmetric)

                                                                                                            SMP schedules processes or threads across all of the processors

                                                                                                            Advantages

                                                                                                            1 Increased performance if some portion of the program can be run in parallel the performance of SMP OS increases

                                                                                                            2 increased availability as all processes can perform the same task if one processor fails others will work 3 incremental growth one can add a new processor to increase the performance4 Scaling cost can be dependent on the number of processors used

                                                                                                            4 Distributed operating systems Distributed systems allow applications to run on several processors at the same time

                                                                                                            5 Object oriented design adds modularity to kernel Os can be customized without effecting system integrity

                                                                                                            Q) Linux Operating systems It is open source as its source code is freely available It is free to use Linux was designed considering UNIX compatibility Linux is a multi-user multitasking system Main design goals are speed efficiency and standardization Components of Linux System

                                                                                                            54

                                                                                                            Linux Operating System has primarily three components as shown in the below diagram

                                                                                                            Kernel - Kernel is the core part of Linux It is responsible for all major activities of this operating system It interacts directly with hardware Kernel hides low level hardware details to system or application programs

                                                                                                            System Library - System libraries are special functions or programs These are used by application programs to use Kernelrsquos features

                                                                                                            System Utility - System Utility programs are responsible to do specialized individual level tasks like updating log file accepting login requests from terminals etc

                                                                                                            Kernel Modules Sections of kernel code that can be compiled loaded and unloaded independent of the rest of the kernel A kernel module may typically implement a device driver a file system or a networking protocol

                                                                                                            Three components to Linux module support 1 module management Supports loading modules into memory and allows to talk to the rest of the kernel 2driver registration Allows modules to inform rest of the kernel that a new driver has become available 3 conflict resolution protect reserved resources of one driver from accidental use by another driver

                                                                                                            Kernel Mode vs User ModeKernel component code executes in a special privileged mode called kernel mode with full access to all resources of the computer This code represents a single process executes in single address space and do not require any context switch and hence is very efficient and fast Kernel runs each processes and provides system services to processes provides protected access to hardwares to processesCode which is not required to run in kernel mode is in System Library User programs and other system programs works in User Mode which has no access to system hardwares and kernel code User programs utilities use System libraries to access Kernel functions to get systems low level tasksBasic FeaturesFollowing are some of the important features of Linux Operating System

                                                                                                            Portable - Portability means softwares can works on different types of hardwares in same way Linux kernel and application programs supports their installation on any kind of hardware platform

                                                                                                            Open Source - Linux source code is freely available and it is community based development project Multiple teams works in collaboration to enhance the capability of Linux operating system and it is continuously evolving

                                                                                                            Multi-User - Linux is a multiuser system means multiple users can access system resources like memory ram application programs at same time

                                                                                                            Multiprogramming - Linux is a multiprogramming system means multiple applications can run at same time

                                                                                                            Hierarchical File System - Linux provides a standard file structure in which system files user files are arranged

                                                                                                            Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system It can be used to do various types of operations call application programs etc

                                                                                                            55

                                                                                                            Security - Linux provides user security using authentication features like password protection controlled access to specific files encryption of data

                                                                                                            Architecture Linux System Architecture is consists of following layers

                                                                                                            Hardware layer - Hardware consists of all peripheral devices (RAM HDD CPU etc)

                                                                                                            Kernel - Core component of Operating System interacts directly with hardware provides low level services to upper layer components

                                                                                                            Shell - An interface to kernel Takes commands from user and executes kernels functions

                                                                                                            Utilities - Utility programs giving user most of the functionalities of an operating systems

                                                                                                            Q) windows XP operating System1Extensibility mdash layered architecture

                                                                                                            Executive which runs in protected mode provides the basic system services On top of the executive several server subsystems operate in user mode Modular structure allows additional environmental subsystems to be added without affecting the

                                                                                                            executive 2Portability mdash XP can be moved from on hardware architecture to another with relatively few changes

                                                                                                            Written in C and C++ Processor-dependent code is isolated in a dynamic link library (DLL) called the ldquohardware abstraction

                                                                                                            layerrdquo (HAL)3 Reliability mdashXP uses hardware protection for virtual memory and software protection mechanisms for operating system resources 4 Compatibility mdash applications that follow the IEEE 10031 (POSIX) standard can be complied to run on 2000 without changing the source code 5 Performance mdash XP subsystems can communicate with one another via high-performance message passing

                                                                                                            Preemption of low priority threads enables the system to respond quickly to external events Designed for symmetrical multiprocessing

                                                                                                            6 International support mdash supports different locales via the national language support (NLS) APIXP Architecture 1 Layered system of modules 2 Protected mode mdash hardware abstraction layer (HAL) kernel executive 3 User mode mdash collection of subsystems

                                                                                                            a Environmental subsystems emulate different operating systems b Protection subsystems provide security functions

                                                                                                            XP architecture

                                                                                                            56

                                                                                                            Q) Windows network Operating systemExample Windows 2000 operating system

All the pictures are taken from: Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Concepts. Reading: Addison-Wesley.

Visit my blogs: enthusiaststudent.blogspot.in, mtechmessenger.blogspot.in

Keywords: OS exam notes, OS notes, operating system notes, operating system Galvin notes, simple batch system, multiprogrammed batch system, time-sharing system, difference between batch, multiprogramming and time-sharing, what is a process, structure of a process in memory, process states, PCB, operations on processes, process creation, process termination, cascading termination, context switch, basic concepts of threads, two modes of CPU execution, inter-process communication, shared-memory systems, message-passing systems, what is deadlock, four conditions necessary for deadlock to occur, methods for handling deadlocks, deadlock prevention, deadlock avoidance, resource-allocation graph, Banker's algorithm, safe state, unsafe state, safety algorithm, resource-request algorithm, memory management, address binding, logical vs physical address space, MMU (memory-management unit), dynamic loading, dynamic linking, overlays, swapping, contiguous memory allocation, single-partition allocation, multiple-partition allocation, internal fragmentation, variable-size partitions, external fragmentation, first fit, best fit, worst fit, non-contiguous memory allocation, paging, implementation of the page table, PTBR (page-table base register), PTLR (page-table length register), translation look-aside buffers (TLB), protection in paging, shared pages in paging, segmentation, shared segments, virtual memory, demand paging, procedure for handling a page fault, pure demand paging, page replacement, page-replacement algorithms, FIFO, optimal page-replacement algorithm, least recently used (LRU) page-replacement algorithm, thrashing, working-set model, global and local page replacement


• 10.1 File Concept: Data is stored in secondary memory in the form of files. A file is a sequence of bits, bytes, lines, or records; a file has a structure which depends on its type.
• File Attributes
• File Operations
• Kernel Mode vs User Mode
• Basic Features
• Architecture

The Linux operating system has primarily three components, as shown in the diagram below.

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of the operating system and interacts directly with the hardware, hiding low-level hardware details from system and application programs.

System Library - System libraries are special functions or programs through which application programs access the kernel's features.

System Utility - System utility programs perform specialized, individual-level tasks such as updating a log file or accepting login requests from terminals.

Kernel Modules: Sections of kernel code that can be compiled, loaded, and unloaded independently of the rest of the kernel. A kernel module typically implements a device driver, a file system, or a networking protocol.

There are three components to Linux module support: 1. Module management: supports loading modules into memory and lets them talk to the rest of the kernel. 2. Driver registration: allows modules to inform the rest of the kernel that a new driver has become available. 3. Conflict resolution: protects the reserved resources of one driver from accidental use by another driver.

Kernel Mode vs User Mode
Kernel code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require a context switch, so it is very efficient and fast. The kernel runs each process and provides system services to processes, giving them protected access to the hardware. Code that is not required to run in kernel mode lives in the system libraries. User programs and other system programs work in user mode, which has no direct access to the system hardware or to kernel code; they use the system libraries to invoke kernel functions for low-level tasks.

Basic Features
Following are some of the important features of the Linux operating system:
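The user-mode/kernel-mode boundary can be observed from user space. A minimal sketch in Python, whose os module wraps the underlying system calls (the message text is illustrative):

```python
import os

# User-mode code never touches hardware directly; it asks the kernel
# through system calls. os.getpid() wraps the getpid() system call and
# os.write() wraps write(): each call traps into kernel mode, the kernel
# does the privileged work, and control returns to user mode.
pid = os.getpid()                              # getpid() system call
msg = f"user-mode process {pid} asked the kernel to write this\n".encode()
n = os.write(1, msg)                           # write() system call on fd 1 (stdout)
assert n == len(msg)                           # kernel reports how many bytes it wrote
```

On Linux, running such a script under strace makes the individual system-call traps visible.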

Portable - Portability means that software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on almost any kind of hardware platform.

Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capabilities of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multi-user system, meaning multiple users can access system resources (memory/RAM, application programs) at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.
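From a program's point of view, multiprogramming starts with the fork() system call: the kernel creates a second process and then schedules both. A minimal POSIX-only sketch (the printed strings are illustrative):

```python
import os

# fork() system call: the kernel duplicates the calling process; from then
# on the scheduler runs parent and child as independent processes.
pid = os.fork()
if pid == 0:
    # Child process: does its own work, then exits.
    os.write(1, b"child: running as its own process\n")
    os._exit(0)
else:
    # Parent process: waits for the child to terminate.
    _, status = os.waitpid(pid, 0)
    os.write(1, b"parent: child finished\n")
    assert os.WEXITSTATUS(status) == 0
```

With more children and no immediate waitpid(), the kernel interleaves them all, which is exactly the multiprogramming the note describes.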

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program that can be used to execute commands of the operating system. It can be used to perform various kinds of operations, call application programs, and so on.
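The core of such an interpreter is a fork/exec/wait loop. A hedged sketch under stated assumptions: the run_command helper and the echo invocation below are illustrative, not part of any real shell.

```python
import os

def run_command(argv):
    """What a shell does for one command line: fork a child,
    exec the requested program in it, and wait for it to finish."""
    pid = os.fork()
    if pid == 0:
        try:
            os.execvp(argv[0], argv)   # child: replace its image with the command
        finally:
            os._exit(127)              # only reached if exec failed
    _, status = os.waitpid(pid, 0)     # parent (the shell) waits for the child
    return os.WEXITSTATUS(status)

# A real shell loops: read a line, split it into argv, run it, repeat.
run_command(["echo", "hello from a tiny shell"])
```

Everything else a shell adds (prompting, pipes, redirection, job control) is layered on top of this same fork/exec/wait skeleton.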


Security - Linux provides user security using authentication features such as password protection, controlled access to specific files, and encryption of data.

Architecture: The Linux system architecture consists of the following layers.

Hardware layer - Hardware consists of all the physical devices (RAM, HDD, CPU, etc.).

Kernel - The core component of the operating system; it interacts directly with the hardware and provides low-level services to the upper-layer components.

Shell - An interface to the kernel. It takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.
