MAHALAKSHMI ENGINEERING COLLEGE Page 1
QUESTION BANK
Sub Code: CS2411 Sub Name: Operating Systems
Dept: EEE Sem/Year: VII/IV
UNIT-V
PART – A (2 Marks)
1. Define swap space. (AUC NOV’08, NOV ‘10, APR’10)
Swap space is the disk space that virtual memory uses as an extension of main memory. The main goal in the design and implementation of swap space is to provide the best throughput for the virtual-memory system. Swap space can be carved out of the normal file system or, more commonly, placed in a separate disk partition.
2. Write the basic functions which are provided by the hardware clocks and timers.
(AUC APR/MAY2011)
Most computers have hardware clocks and timers that provide three basic functions:
1. Give the current time.
2. Give the elapsed time.
3. Set a timer to trigger operation X at time T.
These functions are used heavily by the operating system and by time-sensitive applications. The hardware used to measure elapsed time and to trigger operations is called a programmable interval timer.
3. What is polling?
Polling: the host repeatedly reads the busy bit until that bit becomes clear.
The interaction between the host and the controller is done using a handshaking protocol, in these steps:
1. Determine the state of the device by reading its status bits: command-ready, busy, error.
2. Busy-wait (poll) in a cycle, waiting for I/O from the device.
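The handshake above can be sketched as a toy simulation. The class and register names below are illustrative only, not a real device interface:

```python
class Controller:
    """Toy device controller with a status register (busy bit)."""
    def __init__(self):
        self.busy = False
        self.data_out = None

    def issue(self, command, data):
        self.busy = True              # controller sets busy while it works
        self.data_out = data.upper()  # stand-in for the real device operation
        self.busy = False             # controller clears busy when done

def polled_write(controller, data):
    # 1. Host busy-waits: repeatedly read the busy bit until it is clear.
    while controller.busy:
        pass
    # 2. Host writes the command; the controller performs it.
    controller.issue("WRITE", data)
    # 3. Host polls again until the controller finishes.
    while controller.busy:
        pass
    return controller.data_out

print(polled_write(Controller(), "hello"))  # -> HELLO
```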
4. What is storage-area network? . (AUC APR/MAY2011)
It is a private network among the servers and storage units, separate from the LAN and WAN that connect the servers to the clients.
5. What is rotational latency? (AUC NOV 2010)
Rotational latency is the additional time spent waiting for the disk to rotate the desired sector to the disk head.
6. What are the advantages of DMA?
DMA can be used with either polling or interrupt software. DMA is particularly useful on devices like disks, where many bytes of information can be transferred in single I/O operations. When used in conjunction with an interrupt, the CPU is notified only after the entire block of data has been transferred.
7. What are the responsibilities of a DMA controller?
For each byte or word transferred, it must provide the memory address and all the bus signals that control the data transfer. The work of moving data between devices and main memory is performed by the CPU as programmed I/O or is offloaded to the DMA controller.
8. What are the differences between blocking I/O and non-blocking I/O?
Blocking: the process is suspended until the I/O completes.
- Easy to use and understand
- Insufficient for some needs
Non-blocking: the I/O call returns as much data as is available.
- Used for user interfaces and data copying (buffered I/O)
- Implemented via multi-threading
- Returns quickly with a count of bytes read or written
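The difference can be seen with a POSIX-style sketch on a pipe: a non-blocking read returns immediately (with an error if no data is available) instead of suspending the process:

```python
import os

r, w = os.pipe()
os.set_blocking(r, False)      # make reads on this descriptor non-blocking

try:
    os.read(r, 64)             # nothing has been written yet...
    outcome = "data"
except BlockingIOError:
    outcome = "would block"    # ...so the call returns immediately with an error

os.write(w, b"hi")
data = os.read(r, 64)          # now the call returns the available bytes at once
print(outcome, data)
```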
9. Define caching
A cache is a region of fast memory that holds copies of data. Access to the cached copy is more efficient than access to the original. Caching and buffering are distinct functions, but sometimes a region of memory can be used for both purposes.
10. Define spooling. (AUC NOV 2007,10)
A spool is a buffer that holds output for a device, such as a printer, that cannot accept
interleaved data streams. When an application finishes printing, the spooling system queues
the corresponding spool file for output to the printer. The spooling system copies the queued
spool files to the printer one at a time.
11. What is the need for disk scheduling? .(AUC APR/MAY2010,13)
The operating system is responsible for using the hardware efficiently; for disk drives, this means having a fast access time and high disk bandwidth.
Access time has two major components:
a. Seek time: the time for the disk arm to move the heads to the cylinder containing the desired sector.
b. Rotational latency: the additional time spent waiting for the disk to rotate the desired sector to the disk head.
Minimizing seek time is the major goal of disk scheduling.
12. What is low-level formatting?
Before a disk can store data, it must be divided into sectors that the disk controller can read
and write. This process is called low-level formatting or physical formatting. Low-level
formatting fills the disk with a special data structure for each sector. The data structure for a
sector consists of a header, a data area, and a trailer.
13. What is the use of boot block?
For a computer to start running when powered up or rebooted, it needs to have an initial program to run. This bootstrap program tends to be simple. It finds the operating system on the disk, loads that kernel into memory, and jumps to an initial address to begin operating-system execution. The full bootstrap program is stored in a partition called the boot blocks, at a fixed location on the disk. A disk that has a boot partition is called a boot disk or system disk.
14. What is sector sparing?
Low-level formatting also sets aside spare sectors not visible to the operating system. The
controller can be told to replace each bad sector logically with one of the spare sectors. This
scheme is known as sector sparing or forwarding.
15. What is RAID? List out its advantages
Redundant Array of Inexpensive Disks (RAID) is a series of increasingly reliable and expensive ways of organizing multiple physical hard disks into groups that work as a single logical disk. Its advantages are improved reliability through redundant data and improved performance through parallel use of multiple disks.
16. Writable CD-ROM media are available in both 650 MB and 700 MB versions. What is the principal disadvantage, other than cost, of the 700 MB version? (AUC NOV/DEC 2011)
The 700 MB version packs the data more densely on the same disc, so it is more error-prone and may not be readable on some older drives.
17. Which disk scheduling algorithm would be best to optimize the performance of a RAM
disk? (AUC NOV/DEC2011,13)
Since a RAM disk has no moving head, seek time and rotational latency are zero, so the simple FCFS algorithm is best; a seek-optimizing algorithm such as SSTF would add overhead with no benefit.
18. What is mirroring?
It duplicates the data from one disk onto a second disk using a single disk controller.
19. Give some examples of tertiary storage.
1. Low cost is the defining characteristic of tertiary storage.
2. Generally, tertiary storage is built using removable media.
3. Common examples of removable media are floppy disks and CD-ROMs.
20. What is seek time? (AUC MAY/JUNE 2012,14)
The seek time is the time for the disk arm to move the heads to the cylinder containing the desired sector.
21. What characteristics determine the disk access speed? (AUC MAY/JUNE 2012)
Access time has two major components
a. Seek time is the time for the disk arm to move the heads to the cylinder containing the
desired sector.
b. Rotational latency is the additional time waiting for the disk to rotate the desired sector
to the disk head.
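These two components can be combined in a quick back-of-the-envelope calculation. The drive figures below (7200 RPM, 9 ms average seek) are assumptions for illustration, not values from the text:

```python
# average access time = average seek time + average rotational latency
rpm = 7200
avg_seek_ms = 9.0

# One full rotation takes 60000/rpm milliseconds; on average the head
# waits half a rotation for the desired sector to come around.
avg_rot_latency_ms = (60_000 / rpm) / 2
avg_access_ms = avg_seek_ms + avg_rot_latency_ms

print(round(avg_rot_latency_ms, 2), round(avg_access_ms, 2))
```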
PART-B ( 16 Marks )
1. Explain in detail various disk scheduling algorithms with suitable example. (16)
(AUC NOV 2010), (AUC NOV/DEC2011)
The operating system is responsible for using hardware efficiently — for the disk drives,
this means having a fast access time and disk bandwidth.
2. Access time has two major components
a. Seek time is the time for the disk arm to move the heads to the cylinder containing the
desired sector.
b. Rotational latency is the additional time waiting for the disk to rotate the desired sector
to the disk head.
3. Minimize seek time
4. Seek time is approximately proportional to seek distance.
5. Disk bandwidth is the total number of bytes transferred, divided by the total time
between the first request for service and the completion of the last transfer.
6. Several algorithms exist to schedule the servicing of disk I/O requests.
Different types of scheduling algorithms are as follows.
1. First Come, First Served scheduling algorithm(FCFS).
2. Shortest Seek Time First (SSTF) algorithm
3. SCAN algorithm
4. Circular SCAN (C-SCAN) algorithm
FCFS
The simplest form of scheduling is first-in-first-out (FIFO) scheduling, which processes
items from the queue in sequential order. This strategy has the advantage of being fair,
because every request is honored and the requests are honored in the order received.
With FIFO, if there are only a few processes that require access and if many of the
requests are to clustered file sectors, then we can hope for good performance.
Priority. With a system based on priority (PRI), the control of the scheduling is outside the control of the disk-management software.
Last In First Out. In transaction-processing systems, giving the device to the most recent user should result in little or no arm movement when moving through a sequential file. Taking advantage of this locality improves throughput and reduces queue length.
The illustration shows a total head movement of 640 cylinders.
FCFS disk scheduling
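The illustration itself is not reproduced here, but the stated total of 640 cylinders matches the classic textbook request queue 98, 183, 37, 122, 14, 124, 65, 67 with the head starting at cylinder 53. Assuming that queue, FCFS can be sketched as:

```python
def fcfs_head_movement(start, requests):
    """Total cylinders moved when requests are serviced in arrival order."""
    total, pos = 0, start
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

# Assumed queue matching the 640-cylinder figure (head starts at 53):
queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_head_movement(53, queue))  # -> 640
```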
Shortest Seek Time First (SSTF) algorithm
The SSTF policy is to select the disk I/O request that requires the least movement of the disk arm from its current position. With the exception of FIFO, all of the policies described so far can leave some request unfulfilled until the entire queue is emptied; that is, there may always be new requests arriving that will be chosen before an existing request.
The choice should provide better performance than the FCFS algorithm.
1. Selects the request with the minimum seek time from the current head position.
2. SSTF scheduling is a form of SJF scheduling; may cause starvation of some
requests.
3. Illustration shows total head movement of 236 cylinders
SSTF scheduling
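Assuming the same classic queue as before (98, 183, 37, 122, 14, 124, 65, 67, head at 53), the 236-cylinder figure can be reproduced with a short greedy sketch:

```python
def sstf_head_movement(start, requests):
    """Always service the pending request closest to the current head position."""
    pending, pos, total = list(requests), start, 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - pos))
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(sstf_head_movement(53, queue))  # -> 236
```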
Under heavy load, SSTF can prevent distant requests from ever being serviced. This phenomenon is known as starvation. SSTF scheduling is essentially a form of shortest-job-first scheduling. The SSTF scheduling algorithm is not very popular for two reasons:
1. Starvation may occur.
2. It incurs higher overhead.
3. SCAN scheduling algorithm
The SCAN algorithm has the head start at track 0 and move towards the highest-numbered track, servicing all requests for a track as it passes. The service direction is then reversed and the scan proceeds in the opposite direction, again picking up all requests in order.
The SCAN algorithm is guaranteed to service every request in one complete pass through the disk. SCAN behaves almost identically to the SSTF algorithm. The SCAN algorithm is sometimes called the elevator algorithm.
1. The disk arm starts at one end of the disk, and moves toward the other end, servicing
requests until it gets to the other end of the disk, where the head movement is reversed
and servicing continues.
2. Sometimes called the elevator algorithm.
3. Illustration shows total head movement of 208 cylinders.
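Assuming the same queue (98, 183, 37, 122, 14, 124, 65, 67, head at 53, moving toward lower cylinders), the sketch below reproduces the 208-cylinder figure. Note the illustration's 208 total implies the arm reverses at the last request in the sweep direction; strictly that is LOOK behavior, since a SCAN that continues all the way to cylinder 0 before reversing would move 53 + 183 = 236 cylinders:

```python
def scan_head_movement(start, requests, direction="down"):
    """Sweep in one direction servicing requests, reverse at the last request."""
    lower = sorted(r for r in requests if r <= start)   # requests below the head
    upper = sorted(r for r in requests if r > start)    # requests above the head
    order = lower[::-1] + upper if direction == "down" else upper + lower[::-1]
    total, pos = 0, start
    for r in order:
        total += abs(r - pos)
        pos = r
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(scan_head_movement(53, queue))  # -> 208
```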
4. C-SCAN Scheduling Algorithm
The C-SCAN policy restricts scanning to one direction only. Thus, when the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again.
1. Provides a more uniform wait time than SCAN.
2. The head moves from one end of the disk to the other, servicing requests as it goes. When it reaches the other end, however, it immediately returns to the beginning of the disk without servicing any requests on the return trip.
3. Treats the cylinders as a circular list that wraps around from the last cylinder to the first one.
C-SCAN scheduling
5.C - LOOK Scheduling Algorithm
Start the head moving in one direction and satisfy the request for the closest track in that direction. When there are no more requests in the direction the head is traveling, reverse direction and repeat. The arm thus travels only between the innermost and outermost requested tracks on each circuit.
1. A version of C-SCAN.
2. The arm only goes as far as the last request in each direction, then reverses direction immediately, without first going all the way to the end of the disk.
C-LOOK Scheduling
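Both circular policies can be sketched together. The disk size (200 cylinders, 0-199) and the queue are assumptions for illustration; with them, C-SCAN moves 382 cylinders (counting the wrap-around return seek) and C-LOOK only 322, showing the saving from not sweeping to the physical ends:

```python
def cscan_head_movement(start, requests, max_cyl=199):
    """C-SCAN: service upward to the disk end, jump back to 0, continue upward."""
    upper = sorted(r for r in requests if r >= start)
    lower = sorted(r for r in requests if r < start)
    total, pos = 0, start
    for r in upper + [max_cyl]:      # sweep up to the physical end of the disk
        total += abs(r - pos); pos = r
    total += max_cyl                 # wrap-around seek back to cylinder 0
    pos = 0
    for r in lower:                  # continue servicing in the same direction
        total += abs(r - pos); pos = r
    return total

def clook_head_movement(start, requests):
    """C-LOOK: only go as far as the last request before wrapping around."""
    upper = sorted(r for r in requests if r >= start)
    lower = sorted(r for r in requests if r < start)
    total, pos = 0, start
    for r in upper:
        total += abs(r - pos); pos = r
    if lower:
        total += abs(pos - lower[0])  # jump straight to the lowest pending request
        pos = lower[0]
        for r in lower[1:]:
            total += abs(r - pos); pos = r
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(cscan_head_movement(53, queue), clook_head_movement(53, queue))
```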
Selecting a Disk-Scheduling Algorithm
1. SSTF is common and has a natural appeal
2. SCAN and C-SCAN perform better for systems that place a heavy load on the disk.
3. Performance depends on the number and types of requests.
4. Requests for disk service can be influenced by the file-allocation method.
5. The disk-scheduling algorithm should be written as a separate module of the operating system, allowing it to be replaced with a different algorithm if necessary.
6. Either SSTF or LOOK is a reasonable choice for the default algorithm.
2. Write short notes on the following :
(i) I/O Hardware (8 Marks)
(ii) RAID structure. (8 Marks) (AUC NOV 2010/MAY 2010)
(i) I/O Hardware
Computers operate a great variety of I/O devices:
- Storage devices (disks, tapes)
- Transmission devices (network cards, modems)
- Human-interface devices (screen, keyboard, mouse)
A device communicates with the machine via a port or a bus.
- A port is a connection point for a single device.
- A bus is a set of wires and a rigidly defined protocol that specifies a set of messages that can be sent on the wires.
An I/O device controller contains registers; the processor communicates with the controller by reading and writing bit patterns in these registers.
- Special I/O instructions trigger bus lines to select the proper device and to move bits into or out of a device register.
- Alternatively, the controller can support memory-mapped I/O: the device-control registers are mapped into the address space of the processor.
A Typical PC Bus Structure
Device I/O Port Locations on PCs (partial)
1. Polling
The interaction between the host and the controller is done using a handshaking protocol, in these steps:
1. Determine the state of the device by reading its status bits: command-ready, busy, error.
2. Busy-wait (poll) in a cycle, waiting for I/O from the device.
2. Interrupts
CPU Interrupt request line triggered by I/O device
Interrupt handler receives interrupts
Maskable to ignore or delay some interrupts
Interrupt vector to dispatch interrupt to correct handler
Based on priority
Some unmaskable
Interrupt mechanism also used for exceptions
Interrupt-Driven I/O Cycle
The interrupt vector contains the memory addresses of specialized interrupt handlers. The purpose of a vectored interrupt mechanism is to reduce the need for a single interrupt handler to search all possible sources of interrupts to determine which one needs service.
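A toy sketch of vectored dispatch makes the idea concrete: the interrupt number indexes directly into a table of handlers, so no single handler has to search all sources. The interrupt numbers and handler names are illustrative only:

```python
TIMER_IRQ, KEYBOARD_IRQ = 0, 1   # illustrative interrupt numbers

def timer_handler():
    return "timer tick handled"

def keyboard_handler():
    return "key press handled"

# The interrupt vector maps an interrupt number straight to its handler.
interrupt_vector = {TIMER_IRQ: timer_handler, KEYBOARD_IRQ: keyboard_handler}

def dispatch(irq, masked=frozenset()):
    if irq in masked:                  # maskable interrupts can be deferred
        return "deferred"
    return interrupt_vector[irq]()     # direct dispatch via the vector

print(dispatch(KEYBOARD_IRQ))
print(dispatch(TIMER_IRQ, masked={TIMER_IRQ}))
```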
RAID structure
Schemes that provide redundancy at lower cost use the idea of disk striping combined with "parity" bits (described next). These schemes have different cost-performance trade-offs and are classified according to levels called RAID levels.
1. RAID: multiple disk drives provide reliability via redundancy.
2. RAID is arranged into six different levels.
3. Several improvements in disk-use techniques involve the use of multiple disks working cooperatively.
4. Disk striping uses a group of disks as one storage unit.
5. RAID schemes improve performance and improve the reliability of the storage system by storing redundant data.
RAID Level 0. RAID level 0 refers to disk arrays with striping at the level of blocks but without any redundancy.
RAID Level 1. RAID level 1 refers to disk mirroring.
RAID Level 2. RAID level 2 is also known as memory-style error-correcting-code (ECC) organization. Memory systems have long detected certain errors by using parity bits. Each byte in a memory system may have a parity bit associated with it that records whether the number of bits in the byte set to 1 is even (parity = 0) or odd (parity = 1). If one of the bits in the byte is damaged (either a 1 becomes a 0, or a 0 becomes a 1), the parity of the byte changes and thus will not match the stored parity.
ECC can be used directly in disk arrays via striping of bytes across disks. If one of the disks fails, the remaining bits of the byte and the associated error-correction bits can be read from the other disks and used to reconstruct the damaged data.
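The reconstruction idea reduces to XOR: the parity byte is the XOR of the data bytes, so any single lost byte can be rebuilt by XOR-ing the parity with the surviving bytes. A minimal sketch with made-up byte values:

```python
from functools import reduce

data_disks = [0b10110010, 0b01101100, 0b11100001]   # illustrative data bytes
parity = reduce(lambda a, b: a ^ b, data_disks)     # parity = XOR of all data

# Simulate losing disk 1 and reconstructing its byte from the survivors:
surviving = data_disks[:1] + data_disks[2:]
rebuilt = reduce(lambda a, b: a ^ b, surviving, parity)

print(rebuilt == data_disks[1])  # -> True
```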
RAID Level 3. RAID level 3 is the bit-interleaved parity organization. If one of the sectors is damaged, we can determine whether any bit in the sector is a 1 or a 0 by computing the parity of the corresponding bits from sectors on the other disks. If the parity of the remaining bits equals the stored parity, the missing bit is 0; otherwise, it is 1. RAID level 3 is as good as level 2 but is less expensive in the number of extra disks required. RAID level 3 has two advantages over level 1.
First, the storage overhead is reduced because only one parity disk is needed for several regular disks, whereas one mirror disk is needed for every disk in level 1.
Second, since reads and writes of a byte are spread out over multiple disks with N-way striping of data, the transfer rate for reading or writing a single block is N times as fast as with RAID level 1. On the negative side, RAID level 3 supports fewer I/Os per second, since every disk has to participate in every I/O request.
RAID Level 4. RAID level 4, the block-interleaved parity organization, uses block-level striping, as in RAID 0, and in addition keeps a parity block on a separate disk for corresponding blocks from the N other disks. If one of the disks fails, the parity block can be used with the corresponding blocks from the other disks to restore the blocks of the failed disk. The data-transfer rate for each access is slower, but multiple read accesses can proceed in parallel, leading to a higher overall I/O rate.
The transfer rates for large reads are high, since all the disks can be read in parallel; large writes also have high transfer rates, since the data and parity can be written in parallel. An operating-system write of data smaller than a block requires that the block be read, modified with the new data, and written back. The parity block has to be updated as well. This is known as the read-modify-write cycle. Thus, a single write requires four disk accesses: two to read the two old blocks and two to write the two new blocks.
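The read-modify-write cycle works because the new parity can be computed from just the old data block and the old parity block (new_parity = old_parity XOR old_data XOR new_data), without touching the other disks. A minimal sketch with made-up 4-bit "blocks":

```python
old_data, other_data = 0b1010, 0b0110     # illustrative data blocks
old_parity = old_data ^ other_data        # parity over all data blocks

new_data = 0b1111
# Two reads (old data block, old parity block) ...
new_parity = old_parity ^ old_data ^ new_data
# ... then two writes (new data block, new parity block): four accesses total.

# The shortcut agrees with recomputing parity over all the data blocks:
print(new_parity == (new_data ^ other_data))  # -> True
```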
RAID Level 5. RAID level 5, block-interleaved distributed parity, differs from level 4 by spreading data and parity among all N + 1 disks, rather than storing data on N disks and parity on one disk. For each block, one of the disks stores the parity and the others store data.
RAID Level 6. RAID level 6, also called the P + Q redundancy scheme, is much like
RAID level 5 but stores extra redundant information to guard against multiple disk
failures. Instead of parity, error-correcting codes such as the Reed-Solomon codes are
used.
RAID Level 0 + 1. RAID level 0 + 1 refers to a combination of RAID levels 0 and 1. RAID 0 provides the performance, while RAID 1 provides the reliability.
Kernel I/O Subsystem
Kernels provide many services related to I/O, among them scheduling, buffering, caching, and spooling. The I/O subsystem is also responsible for protecting itself from errant processes and malicious users.
1. I/O Scheduling
Operating-system developers implement scheduling by maintaining a wait queue of requests for each device. When an application issues a blocking I/O system call, the request is placed on the queue for that device. The I/O scheduler rearranges the order of the queue to improve the overall system efficiency and the average response time experienced by applications.
The operating system may also try to be fair, so that no one application receives
especially poor service, or it may give priority service for delay-sensitive requests.
When a kernel supports asynchronous I/O, it must be able to keep track of many I/O requests at the same time. For this purpose, the operating system might attach the wait queue to a device-status table. The kernel manages this table, which contains an entry for each I/O device. Scheduling I/O operations is one way the I/O subsystem improves the efficiency of the computer; another is using storage space in main memory or on disk, via techniques called buffering, caching, and spooling.
2. Buffering
A buffer is a memory area that stores data while they are transferred between
two devices or between a device and an application. Buffering is done for three reasons.
One reason is to cope with a speed mismatch between the producer and consumer of a
data stream.
This double buffering decouples the producer of data from the consumer, thus
relaxing timing requirements between them.
A second use of buffering is to adapt between devices that have different data-transfer sizes. Such disparities are especially common in computer networking, where buffers are used widely for fragmentation and reassembly of messages.
A third use of buffering is to support copy semantics for application I/O. An example will clarify the meaning of "copy semantics". Suppose that an application has a buffer of data that it wishes to write to disk. It calls the write() system call, providing a pointer to the buffer and an integer specifying the number of bytes to write.
3. Caching
A cache is a region of fast memory that holds copies of data. Access to the cached copy is more efficient than access to the original. Caching and buffering are distinct functions, but sometimes a region of memory can be used for both purposes.
4. Spooling and Device Reservation
A spool is a buffer that holds output for a device, such as a printer, that cannot accept
interleaved data streams. Although a printer can serve only one job at a time, several
applications may wish to print their output concurrently. The operating system solves this problem by intercepting all output to the printer. Each application's output is spooled to a separate disk file. When an application finishes printing, the spooling system queues the corresponding spool file for output to the printer. The spooling system copies the queued spool files to the printer one at a time. In some operating systems, spooling is managed by a system daemon process; in others, it is handled by an in-kernel thread.
5. Error Handling
An operating system that uses protected memory can guard against many kinds of hardware
and application errors, so that a complete system failure is not the usual result of each minor
mechanical glitch. Operating systems can often compensate effectively for transient failures.
For instance, a disk read() failure results in a read() retry, and a network send() error results in a resend(), if the protocol specifies it. Some hardware can provide highly detailed error information, although many current operating systems are not designed to convey this information to the application.
3. Explain the services provided by a kernel I/O subsystem. (8)(refer Q.no.2)
Explain and compare the C-LOOK and C-SCAN disk scheduling
algorithms. (8) (AUC APR ‘10)
The C-SCAN policy restricts scanning to one direction only. Thus, when the last track has been
visited in one direction, the arm is returned to the opposite end of the disk and the scan begins
again.
This reduces the maximum delay experienced by new requests
1. Provides a more uniform wait time than SCAN.
2. The head moves from one end of the disk to the other, servicing requests as it goes. When it reaches the other end, however, it immediately returns to the beginning of the disk without servicing any requests on the return trip.
3. Treats the cylinders as a circular list that wraps around from the last cylinder to the first one.
C-LOOK Scheduling
Start the head moving in one direction and satisfy the request for the closest track in that direction. When there are no more requests in the direction the head is traveling, reverse direction and repeat. The arm thus travels only between the innermost and outermost requested tracks on each circuit.
1. A version of C-SCAN.
2. The arm only goes as far as the last request in each direction, then reverses direction immediately, without first going all the way to the end of the disk.
4. Explain in detail the salient features of Linux I/O. (10) (AUC APR/MAY 2010)
To the user, the I/O system in Linux looks much like that in any UNIX system. That is, to the extent possible, all device drivers appear as normal files. The system administrator can create special files within a file system that contain references to a specific device driver, and a user opening such a file will be able to read from and write to the device referenced. Linux splits all devices into three classes:
block devices,
character devices,
network devices.
Block devices include all devices that allow random access to completely independent, fixed-sized blocks of data, including hard disks, floppy disks, CD-ROMs, and flash memory. Block devices are typically used to store file systems, but direct access to a block device is also allowed so that programs can create and repair the file system.
A block represents the unit with which the kernel performs I/O. When a block is read into memory, it is stored in a buffer. The request manager is the layer of software that manages the reading and writing of buffer contents to and from a block-device driver.
A separate list of requests is kept for each block-device driver. Traditionally, these requests
have been scheduled according to a unidirectional-elevator (C-SCAN) algorithm that exploits
the order in which requests are inserted in and removed from the per-device lists.
The fundamental problem with the elevator algorithm is that I/O operations concentrated in a
specific region of the disk can result in starvation of requests that need to occur in other
regions of the disk.
The deadline I/O scheduler used in version 2.6 works similarly to the elevator algorithm, except that it also associates a deadline with each request, thus addressing the starvation issue. The deadline scheduler maintains a sorted queue of pending I/O operations, sorted by sector number. However, it also maintains two other queues: a read queue for read operations and a write queue for write operations. These two queues are ordered according to deadline. Every I/O request is placed in both the sorted queue and either the read or the write queue.
Character devices include most other devices, such as mice and keyboards. The fundamental difference between block and character devices is random access: block devices may be accessed randomly, while character devices are accessed only serially. For example, seeking to a certain position in a file might be supported for a DVD but makes no sense for a pointing device such as a mouse.
The kernel maintains a standard interface to these drivers by means of a set of tty_struct
structures. Each of these structures provides buffering and flow control on the data stream from
the terminal device and feeds those data to a line discipline.
A line discipline is an interpreter for the information from the terminal device. The most common line discipline is the tty discipline, which glues the terminal's data stream onto the standard input and output streams of a user's running processes, allowing those processes to communicate directly with the user's terminal. This job is complicated by the fact that several such processes may be running simultaneously, and the tty line discipline is responsible for attaching and detaching the terminal's input and output from the various processes connected to it as those processes are suspended or awakened by the user.
Network devices are dealt with differently from block and character devices. Users cannot directly transfer data to network devices; instead, they must communicate indirectly by opening a connection to the kernel's networking subsystem.
5. Describe the important concepts of application I/O interface. (16) (AUC NOV’11)
Structuring techniques and interfaces for the operating system enable I/O devices to be treated in a standard, uniform way. For instance, an application can open a file on a disk without knowing what kind of disk it is, and new disks and other devices can be added to a computer without the operating system being disrupted.
The actual differences are encapsulated in kernel modules called device drivers that internally are custom-tailored to each device but that export one of the standard interfaces. The purpose of the device-driver layer is to hide the differences among device controllers from the I/O subsystem of the kernel, much as the I/O system calls hide hardware differences from applications.
Character-stream or block. A character-stream device transfers bytes one by one, whereas a
block device transfers a block of bytes as a unit.
Sequential or random-access. A sequential device transfers data in a fixed order that is
determined by the device, whereas the user of a random-access device can instruct the device
to seek to any of the available data storage locations.
Synchronous or asynchronous. A synchronous device is one that performs data transfers with predictable response times. An asynchronous device exhibits irregular or unpredictable response times.
Sharable or dedicated. A sharable device can be used concurrently by several processes or threads; a dedicated device cannot.
Speed of operation. Device speeds range from a few bytes per second to a few gigabytes per
second.
Read-write, read only, or write only. Some devices perform both input and output, but others
support only one data direction. For the purpose of application access, many of these
differences are hidden by the operating system, and the devices are grouped into a few
conventional types.
Operating systems also provide special system calls to access a few additional devices, such as a time-of-day clock and a timer. Because the performance and addressing characteristics of network I/O differ significantly from those of disk I/O, most operating systems provide a network I/O interface that is different from the read-write-seek interface used for disks.
6. (i) Consider the following I/O scenarios on a single-user PC.
(1) A mouse used with a graphical user interface
(2) A tape drive on a multi tasking operating system (assume no device
reallocation is available)
(3) A disk drive containing user files
(4) A graphics card with direct bus connection, accessible through memory-
mapped I/O
For each of these I/O scenarios, would you design the operating system to use
buffering, spooling, caching, or a combination? Would you use polled I/O, or
interrupt driven I/O?
Give reasons for your choices. (8)
(ii) How do you choose a optimal technique among the various disk scheduling
techniques? Explain.(8) (AUC MAY/JUNE 2012)
7. Describe the various levels of RAID. (8) (AUC MAY/JUNE 2012)
RAID structure
Schemes that provide redundancy at lower cost use the idea of disk striping combined with "parity" bits (described next). These schemes have different cost-performance trade-offs and are classified according to levels called RAID levels.
1. RAID: multiple disk drives provide reliability via redundancy.
2. RAID is arranged into six different levels.
3. Several improvements in disk-use techniques involve the use of multiple disks working cooperatively.
4. Disk striping uses a group of disks as one storage unit.
5. RAID schemes improve performance and improve the reliability of the storage system by storing redundant data.
RAID Level 0. RAID level 0 refers to disk arrays with striping at the level of blocks but without any redundancy.
RAID Level 1. RAID level 1 refers to disk mirroring.
RAID Level 2. RAID level 2 is also known as memory-style error-correcting-code (ECC) organization. Memory systems have long detected certain errors by using parity bits. Each byte in a memory system may have a parity bit associated with it that records whether the number of bits in the byte set to 1 is even (parity = 0) or odd (parity = 1). If one of the bits in the byte is damaged (either a 1 becomes a 0, or a 0 becomes a 1), the parity of the byte changes and thus will not match the stored parity.
ECC can be used directly in disk arrays via striping of bytes across disks. If one of the disks fails, the remaining bits of the byte and the associated error-correction bits can be read from the other disks and used to reconstruct the damaged data.
RAID Level 3. RAID level 3 is the bit-interleaved parity organization. If one of the sectors is damaged, we can determine whether any bit in the sector is a 1 or a 0 by computing the parity of the corresponding bits from sectors on the other disks. If the parity of the remaining bits equals the stored parity, the missing bit is 0; otherwise, it is 1. RAID level 3 is as good as level 2 but is less expensive in the number of extra disks required. RAID level 3 has two advantages over level 1.
First, the storage overhead is reduced because only one parity disk is
needed for several regular disks, whereas one mirror disk is needed for
every disk in level 1.
Second, since reads and writes of a byte are spread out over multiple
disks with A/-way striping of data, the transfer rate for reading or writing a
single block is N times as fast as with RAID level 1. On the negative side,
RAID level 3 supports fewer I/O’s per second, since every disk has to
participate in every I/O request..
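The reconstruction rule above amounts to XOR-ing the surviving bits with the stored parity; a minimal Python sketch:

```python
def reconstruct(surviving_bits, stored_parity):
    """Recover the bit from a failed disk in a bit-interleaved parity array.

    If the parity of the surviving bits equals the stored parity, the missing
    bit is 0; otherwise it is 1. Equivalently, XOR everything together.
    """
    p = 0
    for b in surviving_bits:
        p ^= b
    return p ^ stored_parity

# bits 1, 0, 1 survive on three disks and the parity disk stored 1,
# so the bit on the failed disk must have been 1 (1^0^1^1 = 1... checks out)
assert reconstruct([1, 0, 1], 1) == 1
```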
RAID Level 4. RAID level 4, or block-interleaved parity organization, uses block-level
striping, as in RAID 0, and in addition keeps a parity block on a separate disk for
corresponding blocks from the N other disks. If one of the disks fails, the parity block can be
used with the corresponding blocks from the other disks to restore the blocks of the
failed disk. The data-transfer rate for each access is slower, but multiple read accesses can
proceed in parallel, leading to a higher overall I/O rate.
The transfer rates for large reads are high, since all the disks can be read in
parallel; large writes also have high transfer rates, since the data and parity can be
written in parallel. An operating-system write of data smaller than a block requires that
the block be read, modified with the new data, and written back. The parity block has to
be updated as well. This is known as the read-modify-write cycle. Thus, a single write
requires four disk accesses: two to read the two old blocks and two to write the two new
blocks.
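The read-modify-write cycle can be sketched with per-block XOR (treating blocks as integers for illustration):

```python
def small_write(old_data, new_data, old_parity):
    """RAID 4/5 read-modify-write for a write smaller than a full stripe.

    The drive reads the old data block and old parity block (2 reads),
    then writes the new data block and the updated parity block (2 writes):
    four disk accesses in total. The new parity is computed by cancelling
    the old data out of the old parity and XOR-ing in the new data.
    """
    new_parity = old_parity ^ old_data ^ new_data
    return new_data, new_parity

# old data 0b1100 replaced by 0b1010 under old parity 0b0110
assert small_write(0b1100, 0b1010, 0b0110) == (0b1010, 0b0000)
```

This is why RAID 4/5 small writes are expensive: the other N-1 data disks never need to be touched, but the parity disk is involved in every write.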
RAID Level 5. RAID level 5, or block-interleaved distributed parity, differs from level 4 by
spreading data and parity among all N + 1 disks, rather than storing data in N disks and
parity in one disk. For each block, one of the disks stores the parity, and the others store
data.
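One simple (assumed) convention for distributing the parity is to rotate it round-robin across the N + 1 disks, stripe by stripe; real controllers use several named layouts, but the idea is the same:

```python
def parity_disk(stripe, n_plus_1):
    """Index of the disk holding parity for a given stripe.

    A simple round-robin rotation -- an illustrative convention, not the
    only layout used in practice.
    """
    return stripe % n_plus_1

# with 5 disks, parity lands on a different disk each stripe, then wraps
assert [parity_disk(s, 5) for s in range(6)] == [0, 1, 2, 3, 4, 0]
```

Rotating the parity removes the RAID 4 bottleneck in which one dedicated parity disk participates in every write.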
RAID Level 6. RAID level 6, also called the P + Q redundancy scheme, is much like
RAID level 5 but stores extra redundant information to guard against multiple disk
failures. Instead of parity, error-correcting codes such as Reed-Solomon codes are
used.
RAID Level 0 + 1. RAID level 0 + 1 refers to a combination of RAID levels 0 and 1. RAID
0 provides the performance, while RAID 1 provides the reliability.
8. Suppose that a disk drive has 5000 cylinders, numbered 0 to 4999. The drive is
currently serving a request at cylinder 143, and the previous request was at
cylinder 125. The queue of pending requests, in FIFO order, is 86, 1470, 913, 1774,
948, 1509, 1022, 1750, 130. Starting from the current head position, what is the
total distance (in cylinders) that the disk arm moves to satisfy all the pending
requests, for each of the following disk-scheduling algorithms?
a. FCFS b. SSTF c. SCAN d. LOOK e. C-SCAN
a. The FCFS schedule is 143, 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130. The total seek distance is 7081.
b. The SSTF schedule is 143, 130, 86, 913, 948, 1022, 1470, 1509, 1750, 1774. The total seek distance is 1745.
c. The SCAN schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 130, 86. The total seek distance is 9769.
d. The LOOK schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 130, 86. The total seek distance is 3319.
e. The C-SCAN schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 86, 130. The total seek distance is 9813.
f. (Bonus.) The C-LOOK schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 86, 130. The total seek distance is 3363.
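The seek distances above are easy to check mechanically; a short Python sketch for the FCFS and SSTF cases:

```python
def seek_distance(start, schedule):
    """Total cylinders traversed when servicing requests in the given order."""
    total, pos = 0, start
    for cyl in schedule:
        total += abs(cyl - pos)
        pos = cyl
    return total

def sstf(start, requests):
    """Shortest-seek-time-first: always service the closest pending request."""
    pending, order, pos = list(requests), [], start
    while pending:
        nxt = min(pending, key=lambda c: abs(c - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

queue = [86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130]
assert seek_distance(143, queue) == 7081                 # FCFS, as above
assert seek_distance(143, sstf(143, queue)) == 1745      # SSTF, as above
```

SCAN, LOOK, and the circular variants follow the same pattern; only the ordering rule (and, for C-SCAN, how the wrap-around is counted) changes.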
9. Compare the performance of C-SCAN and SCAN scheduling, assuming a uniform
distribution of requests. Consider the average response time (the time between the
arrival of a request and the completion of that request’s service), the variation in
response time, and the effective bandwidth. How does performance depend on the
relative sizes of seek time and rotational latency?
(16)
10. Write notes on
(i) Disk attachment
(ii) Streams
(iii) Tertiary Storage
Disk attachment
Disks may be attached in one of two ways:
1. Host-attached, via an I/O port
2. Network-attached storage, via a network connection
Figure: Network-Attached Storage
Storage-Area Network
Figure: Storage-Area Networks
(ii) Streams
A stream is a full-duplex connection between a device driver and a user-level process. It
consists of a stream head that interfaces with the user process, a driver end that
controls the device, and zero or more modules between the stream head and the driver
end. Each of these components contains a pair of queues: a read queue and a write
queue. Message passing is used to transfer data between queues.
Modules provide the functionality of STREAMS processing; they are pushed onto a stream by
use of the ioctl() system call.
STREAMS I/O is asynchronous (or nonblocking) except when the user process communicates
with the stream head. When writing to the stream, the user process will block, assuming the
next queue uses flow control, until there is room to copy the message.
The benefit of using STREAMS is that it provides a framework for a modular and incremental
approach to writing device drivers and network protocols. Modules may be used by different
streams and hence by different devices.
For example, a networking module may be used by both an Ethernet network card and an
802.11 wireless network card.
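The structure described above can be modelled in a few lines of Python. This is only a toy sketch of the queue pairs and message passing (the class and method names are invented for illustration; real STREAMS is a kernel framework manipulated via ioctl()):

```python
from collections import deque

class Module:
    """One STREAMS module: a read queue and a write queue (toy model)."""
    def __init__(self, transform=lambda msg: msg):
        self.read_q, self.write_q = deque(), deque()
        self.transform = transform  # per-module processing, e.g. a protocol layer

class Stream:
    """Stream head -> pushed modules -> driver end, linked by message passing."""
    def __init__(self):
        self.modules = []
    def push(self, module):
        # models ioctl(fd, I_PUSH, ...): the new module sits just below the head
        self.modules.insert(0, module)
    def write(self, msg):
        # a message flows down the write queues, transformed by each module
        for m in self.modules:
            m.write_q.append(msg)
            msg = m.transform(m.write_q.popleft())
        return msg  # what the driver end would finally receive

s = Stream()
s.push(Module(transform=str.upper))  # e.g. a toy "protocol" module
assert s.write("data") == "DATA"
```

The point of the model: because each module only talks to the queues above and below it, the same module can be pushed onto streams for different devices, which is exactly the Ethernet/802.11 reuse mentioned above.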
(iii) Tertiary Storage
1. Low cost is the defining characteristic of tertiary storage.
2. Generally, tertiary storage is built using removable media
3. Common examples of removable media are floppy disks and CD-ROMs; other types are
available
Removable Disks
1. Floppy disk — thin flexible disk coated with magnetic material, enclosed in a protective plastic
case.
- Similar technology is used for removable disks that hold more than 1 GB.
- Removable disks are at a greater risk of damage from exposure than fixed hard disks.
2. A magneto-optic disk records data on a rigid platter coated with ferromagnetic material.
- Laser heat is used to amplify a large, weak magnetic field to record a bit.
- The magneto-optic head flies much farther from the disk surface than a magnetic disk head,
and the magnetic material is covered with a protective layer of plastic or glass, so it is
resistant to head crashes.
3. Optical disks do not use magnetism; they employ special materials that are altered by laser
light.
2. WORM Disks
1. The data on read-write disks can be modified over and over.
2. WORM ("Write Once, Read Many Times") disks can be written only once.
3. A thin aluminum film is sandwiched between two glass or plastic platters.
4. To write a bit, the drive uses laser light to burn a small hole through the aluminum;
information can be destroyed but not altered.
5. Very durable and reliable.
6. Read-only disks, such as CD-ROM and DVD, come from the factory with the data
prerecorded.
3. Tapes
1. Compared to a disk, a tape is less expensive and holds more data, but random access is
much slower.
2. Tape is an economical medium for purposes that do not require fast random access, e.g.,
backup copies of disk data, holding huge volumes of data.
3. Large tape installations typically use robotic tape changers that move tapes between tape
drives and storage slots in a tape library.
a. stacker – library that holds a few tapes
b. silo – library that holds thousands of tapes
4. A disk-resident file can be archived to