I/O Performance Measures
• Diversity: The variety of I/O devices that can be connected to the system.
• Capacity: The maximum number of I/O devices that can be connected to the system.
• Producer/server Model of I/O: The producer (CPU, human, etc.) creates tasks to be performed and places them in a task buffer (queue); the server (I/O device or controller) takes tasks from the queue and performs them.
• I/O Throughput: The maximum data rate that can be transferred to/from an I/O device or sub-system, or the maximum number of I/O tasks or transactions completed by the I/O system in a certain period of time.
⇒ Maximized when task buffer is never empty.
• I/O Latency or response time: The time an I/O task takes from the time it is placed in the task buffer or queue until the server (I/O system) finishes the task. Includes buffer waiting or queuing time.
Since the 1980s, smaller form factor disk drives have grown in storage capacity. Today's 3.5-inch form factor drives designed for the entry-server market can store more than 75 GB in a 1.6-inch-high package on 5 platters.
Drive areal density has increased by a factor of 8.5 million since the first disk drive, IBM's RAMAC, was introduced in 1957. Since 1991, the rate of increase in areal density has accelerated to 60% per year, and since 1997 this rate has further accelerated to an incredible 100% per year.
The price per megabyte of disk storage has been decreasing at about 40% per year, driven by improvements in data density, even faster than the price decline for flash memory chips. Recent trends in HDD price per megabyte show an even steeper reduction.
I/O data transfer methods
Direct Memory Access (DMA):
• Implemented with a specialized controller that transfers data between an I/O device and memory independent of the processor.
• The DMA controller becomes the bus master and directs reads and writes between itself and memory.
• Interrupts are still used, but only on completion of the transfer or when an error occurs.
• DMA transfer steps:
– The CPU sets up the DMA by supplying the device identity, the operation, the memory addresses of the source and destination of the data, and the number of bytes to be transferred.
– The DMA controller starts the operation. When the data is available it transfers the data, including generating the memory addresses for the data to be transferred.
– Once the DMA transfer is complete, the controller interrupts the processor, which determines whether the entire operation is complete.
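To make these steps concrete, here is a minimal C sketch of the CPU side of such a transfer, assuming a hypothetical memory-mapped controller; every register name and address below is invented for illustration and does not correspond to any real device.

```c
#include <stdint.h>

/* Hypothetical memory-mapped DMA controller registers (illustrative only). */
#define DMA_BASE        0x40000000u
#define DMA_REG(off)    (*(volatile uint32_t *)(uintptr_t)(DMA_BASE + (off)))
#define DMA_DEV_ID      DMA_REG(0x00)  /* device identity              */
#define DMA_OP          DMA_REG(0x04)  /* operation: read or write     */
#define DMA_SRC         DMA_REG(0x08)  /* memory address of the source */
#define DMA_DST         DMA_REG(0x0C)  /* memory address of the dest.  */
#define DMA_COUNT       DMA_REG(0x10)  /* number of bytes to transfer  */
#define DMA_CTRL        DMA_REG(0x14)  /* control/status register      */
#define DMA_CTRL_START  0x1u

/* Step 1: the CPU supplies device, operation, addresses, and byte count. */
void dma_start(uint32_t dev, uint32_t op,
               uint32_t src, uint32_t dst, uint32_t nbytes)
{
    DMA_DEV_ID = dev;
    DMA_OP     = op;
    DMA_SRC    = src;
    DMA_DST    = dst;
    DMA_COUNT  = nbytes;
    /* Step 2 then runs without the CPU: the controller masters the bus
       and generates the memory addresses itself. */
    DMA_CTRL   = DMA_CTRL_START;
}

/* Step 3: the controller interrupts only on completion (or error); the
   handler decides whether the entire operation is complete. */
void dma_done_isr(void)
{
    /* check DMA_CTRL status; re-arm the controller or report completion */
}
```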
• Given: An I/O system in equilibrium (input rate is equal to output rate) and:
– Tser: Average time to service a task
– Tq: Average time per task in the queue
– Tsys: Average time per task in the system, or the response time; the sum of Tser and Tq
– r: Average number of arriving tasks/sec
– Lser: Average number of tasks in service
– Lq: Average length of queue
– Lsys: Average number of tasks in the system; the sum of Lq and Lser
• Little’s Law states: Lsys = r x Tsys
• Server utilization = u = r / service rate = r x Tser
u must be between 0 and 1; otherwise more tasks would be arriving than could be serviced.
• Service time completions vs. waiting time for a busy server: a randomly arriving event joins a queue of arbitrary length when the server is busy; otherwise it is serviced immediately.
– Unlimited-length queues are a key simplification.
• A single server queue: the combination of a servicing facility that accommodates one customer at a time (the server) plus a waiting area (the queue); together these are called a system.
• Server spends a variable amount of time with customers; how do you characterize variability?
– Distribution of a random variable: histogram? curve?
A Little Queuing Theory: Variable Service Time
• Server spends a variable amount of time with customers
– Weighted mean m1 = (f1 x T1 + f2 x T2 + ... + fn x Tn)/F, where F = f1 + f2 + ...
– Squared coefficient of variance: C = variance/m1², a unitless measure of that variability
• Disk response times: C ≈ 1.5 (the majority of seeks are shorter than the average)
• Yet usually pick C = 1.0 for simplicity
• Another useful value is the average time an arrival must wait for the server to complete the task already in service: m1(z)
– Not just 1/2 x m1, because that doesn't capture the variance
– Can derive m1(z) = 1/2 x m1 x (1 + C)
– No variance => C = 0 => m1(z) = 1/2 x m1
A Little Queuing Theory: Average Wait Time
• Calculating the average wait time in the queue, Tq:
– If something is at the server, it takes on average m1(z) to complete
– Chance the server is busy = u; average delay is u x m1(z)
– All customers in line must complete; each takes Tser on average
Tq = u x m1(z) + Lq x Tser = 1/2 x u x Tser x (1 + C) + Lq x Tser
Tq = 1/2 x u x Tser x (1 + C) + r x Tq x Tser    (substituting Lq = r x Tq, by Little's Law)
Tq = 1/2 x u x Tser x (1 + C) + u x Tq           (since r x Tser = u)
Tq x (1 – u) = 1/2 x u x Tser x (1 + C)
Tq = Tser x u x (1 + C) / (2 x (1 – u))
• Notation:
r     average number of arriving customers/second
Tser  average time to service a customer
u     server utilization (0..1): u = r x Tser
Tq    average time/customer in queue
Lq    average length of queue: Lq = r x Tq
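The result is easy to evaluate mechanically. Below is a minimal C sketch (the helper name tq_mg1 is ours; it anticipates the M/G/1 name introduced on the next slide) that computes Tq from r, Tser, and C, and derives Tsys, Lq, and Lsys via Little's Law:

```c
#include <stdio.h>

/* Mean time in queue from the derivation above:
   Tq = Tser x u x (1 + C) / (2 x (1 - u)), with u = r x Tser.
   C = 1.0 gives the M/M/1 special case. */
double tq_mg1(double r, double tser, double c)
{
    double u = r * tser;   /* server utilization; must be < 1 */
    return tser * u * (1.0 + c) / (2.0 * (1.0 - u));
}

int main(void)
{
    double r = 10.0, tser = 0.020, c = 1.0;   /* 10 tasks/s, 20 ms service */
    double tq   = tq_mg1(r, tser, c);
    double tsys = tq + tser;                  /* response time */
    /* Lq and Lsys follow from Little's Law: L = r x T */
    printf("u = %.2f  Tq = %.3f s  Tsys = %.3f s  Lq = %.3f  Lsys = %.3f\n",
           r * tser, tq, tsys, r * tq, r * tsys);
    return 0;
}
```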
A Little Queuing Theory: M/G/1 and M/M/1
• Assumptions so far:
– System in equilibrium
– The times between two successive arrivals in line are random
– Server can start on the next customer immediately after the prior one finishes
– No limit to the queue: works first-in, first-out
– All customers in line must complete; each takes Tser on average
• Described as "memoryless" or Markovian request arrival (M for C = 1, exponentially random), general service distribution (no restrictions), 1 server: an M/G/1 queue
• When service times have C = 1 (an M/M/1 queue):
Tq = Tser x u x (1 + C) / (2 x (1 – u)) = Tser x u / (1 – u)
Tser  average time to service a customer
u     server utilization (0..1): u = r x Tser
Tq    average time/customer in queue
I/O Queuing Performance: An Example
• A processor sends 10 x 8 KB disk I/O requests per second; requests and service are exponentially distributed, and the average disk service time = 20 ms
• On average:
– How utilized is the disk, u?
– What is the average time spent in the queue, Tq?
– What is the average response time for a disk request, Tsys?
– What is the number of requests in the queue, Lq? In the system, Lsys?
• We have:
r     average number of arriving requests/second = 10
Tser  average time to service a request = 20 ms (0.02s)
• We obtain:
u     server utilization: u = r x Tser = 10/s x .02s = 0.2
Tq    average time/request in queue = Tser x u / (1 – u)
      = 20 x 0.2/(1 – 0.2) = 20 x 0.25 = 5 ms (0.005s)
Tsys  average time/request in system: Tsys = Tq + Tser = 25 ms
Lq    average length of queue: Lq = r x Tq = 10/s x .005s = 0.05 requests in queue
Lsys  average # tasks in system: Lsys = r x Tsys = 10/s x .025s = 0.25
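These are exactly the numbers the tq_mg1 sketch shown earlier prints when called with r = 10/s, Tser = 0.02 s, and C = 1.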
A Little Queuing Theory: Another Example
• A processor sends 20 x 8 KB disk I/Os per second; requests and service are exponentially distributed, and the average disk service time = 12 ms
• On average:
– How utilized is the disk?
– What is the number of requests in the queue?
– What is the average time spent in the queue?
– What is the average response time for a disk request?
• Notation:
r     average number of arriving customers/second = 20
Tser  average time to service a customer = 12 ms
u     server utilization (0..1): u = r x Tser = 20/s x .012s = 0.24
Tq    average time/customer in queue = Tser x u / (1 – u)
      = 12 x 0.24/(1 – 0.24) = 12 x 0.32 = 3.8 ms
Tsys  average time/customer in system: Tsys = Tq + Tser = 15.8 ms
Lq    average length of queue: Lq = r x Tq = 20/s x .0038s = 0.076 requests in queue
Lsys  average # tasks in system: Lsys = r x Tsys = 20/s x .0158s = 0.32
A Little Queuing Theory: Yet Another Example
• Suppose a processor sends 10 x 8 KB disk I/Os per second, the squared coefficient of variance C = 1.5, and the average disk service time = 20 ms
• On average:
– How utilized is the disk?
– What is the number of requests in the queue?
– What is the average time spent in the queue?
– What is the average response time for a disk request?
• Notation:
r     average number of arriving customers/second = 10
Tser  average time to service a customer = 20 ms
u     server utilization (0..1): u = r x Tser = 10/s x .02s = 0.2
Tq    average time/customer in queue = Tser x u x (1 + C) / (2 x (1 – u))
      = 20 x 0.2 x 2.5 / (2 x (1 – 0.2)) = 20 x 0.3125 = 6.25 ms
Tsys  average time/customer in system: Tsys = Tq + Tser = 26.25 ms
Lq    average length of queue: Lq = r x Tq = 10/s x .00625s = 0.0625 requests in queue
Lsys  average # tasks in system: Lsys = r x Tsys = 10/s x .02625s = 0.26
RAID (Redundant Array of Inexpensive Disks)
• The term RAID was coined in a 1988 paper by Patterson, Gibson, and Katz of the University of California at Berkeley.
• In that article, the authors proposed that large arrays of small, inexpensive disks could be used to replace the large, expensive disks used on mainframes and minicomputers.
• In such arrays, files are "striped" and/or mirrored across multiple drives.
• Their analysis showed that the cost per megabyte could be substantially reduced, while both performance and fault tolerance could be increased.
• Array Reliability: Reliability of N disks = Reliability of 1 disk ÷ N
50,000 hours ÷ 70 disks = 700 hours
– Disk system MTTF drops from 6 years to 1 month!
– Arrays (without redundancy) are too unreliable to be useful!
Non-Redundant (RAID Level 0)
• RAID 0 simply stripes data across all drives (minimum 2 drives) to increase data throughput but provides no fault protection.
– Sequential blocks of data are written across multiple disks in stripes (a mapping sketch follows this list).
• The size of a data block, which is known as the "stripe width", varies with the implementation but is always at least as large as a disk's sector size.
• This scheme offers the best write performance since it never needs to update redundant information.
• It does not have the best read performance.
– Redundancy schemes that duplicate data, such as mirroring, can perform better on reads by selectively scheduling requests on the disk with the shortest expected seek and rotational delays.
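As a sketch of how striping maps addresses, assuming a striping unit of one block and disks numbered 0..N–1 (the function and struct names are illustrative, not from any particular implementation):

```c
/* Map a logical block number to a (disk, offset) pair under RAID 0,
   assuming a striping unit of one block across ndisks drives. */
struct stripe_loc {
    unsigned disk;    /* which drive holds the block   */
    unsigned offset;  /* block index within that drive */
};

struct stripe_loc raid0_map(unsigned logical_block, unsigned ndisks)
{
    struct stripe_loc loc;
    loc.disk   = logical_block % ndisks;  /* round-robin across drives   */
    loc.offset = logical_block / ndisks;  /* advance one stripe per pass */
    return loc;
}
```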
Optimal Size of Data Striping Unit
• Lee and Katz [1991] use an analytic model of non-redundant disk arrays to derive an equation for the optimal size of the data striping unit.
• They show that the optimal size of the data striping unit is equal to:
sqrt(P x X x (L – 1) x Z / N)
• Where:
– P is the average disk positioning time,
– X is the average disk transfer rate,
– L is the concurrency, Z is the request size, and
– N is the array size in disks.
• Their equation also predicts that the optimal size of the data striping unit depends only on the relative rates at which a disk positions and transfers data, PX, rather than on P or X individually.
• Lee and Katz show that the optimal striping unit depends on request size; Chen and Patterson show that this dependency can be ignored without significantly affecting performance.
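Assuming the square-root form of the Lee and Katz equation given above, the computation is a one-liner; the function name is illustrative:

```c
#include <math.h>

/* Lee-Katz optimal striping unit, per the equation above:
   sqrt(P x X x (L - 1) x Z / N).
   p: average positioning time (s)   x: average transfer rate (bytes/s)
   l: concurrency                    z: request size (bytes)
   n: array size in disks            returns the striping unit in bytes */
double optimal_striping_unit(double p, double x, double l, double z, double n)
{
    return sqrt(p * x * (l - 1.0) * z / n);
}
```

Note that p and x enter only as the product p * x, which is the PX-only dependence noted above.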
Memory-Style ECC (RAID Level 2)
• RAID 2 performs data striping with a block size of one bit or byte, so that all disks in the array must be read to perform any read operation.
• A RAID 2 system would normally have as many data disks as the word size of the computer, typically 32.
• In addition, RAID 2 requires the use of extra disks to store an error-correcting code for redundancy.
– With 32 data disks, a RAID 2 system would require 7 additional disks for a Hamming-code ECC.
– Such an array of 39 disks was the subject of a U.S. patent granted to Unisys Corporation in 1988, but no commercial product was ever released.
• For a number of reasons, including the fact that modern disk drives contain their own internal ECC, RAID 2 is not a practical disk array scheme.
Bit-Interleaved Parity (RAID Level 3)
• One can improve upon memory-style ECC disk arrays (RAID 2) by noting that, unlike memory component failures, disk controllers can easily identify which disk has failed. Thus, one can use a single parity disk rather than a set of parity disks to recover lost information.
• As with RAID 2, RAID 3 must read all data disks for every read operation.
– This requires synchronized disk spindles for optimal performance, and works best on a single-tasking system with large sequential data requirements. An example might be a system used to perform video editing, where huge video files must be read sequentially.
Block-Interleaved Parity (RAID Level 4)
• RAID 4 is similar to RAID 3 except that blocks of data are striped across the disks rather than bits/bytes.
• Read requests smaller than the striping unit access only a single data disk.
• Write requests must update the requested data blocks and must also compute and update the parity block.
– For large writes that touch blocks on all disks, parity is easily computed by exclusive-or'ing the new data for each disk.
– For small write requests that update only one data disk, parity is computed by noting how the new data differs from the old data and applying those differences to the parity block (see the sketch after this list).
• This can be an important performance improvement for small or random file access (like a typical database application) if the application record size can be matched to the RAID 4 block size.
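The small-write parity update is just a bitwise exclusive-or, as the following minimal C sketch shows (illustrative only; a real controller must also order and persist the four disk accesses involved):

```c
#include <stddef.h>
#include <stdint.h>

/* RAID 4 small-write parity update: the new parity depends only on the
   old data, the new data, and the old parity:
   new_parity = old_parity XOR old_data XOR new_data.
   The parity buffer is updated in place. */
void raid4_small_write_parity(uint8_t *parity, const uint8_t *old_data,
                              const uint8_t *new_data, size_t len)
{
    for (size_t i = 0; i < len; i++)
        parity[i] ^= old_data[i] ^ new_data[i];
}
```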
Block-Interleaved Distributed-Parity (RAID Level 5)
• The block-interleaved distributed-parity disk array eliminates the parity disk bottleneck present in RAID 4 by distributing the parity uniformly over all of the disks.
• An additional, frequently overlooked advantage to distributing the parity is that it also distributes data over all of the disks rather than over all but one.
• RAID 5 has the best small read, large read, and large write performance of any redundant disk array.
– Small write requests are somewhat inefficient compared with redundancy schemes such as mirroring, however, due to the need to perform read-modify-write operations to update parity.
• RAID 5 can be enhanced with stronger error-correcting codes.
• One such scheme, called P+Q redundancy, uses Reed-Solomon codes, in addition to parity, to protect against up to two disk failures using the bare minimum of two redundant disks.
• The P+Q redundant disk arrays are structurally very similar to the block-interleaved distributed-parity disk arrays (RAID 5) and operate in much the same manner.
– In particular, P+Q redundant disk arrays also perform small write operations using a read-modify-write procedure, except that instead of four disk accesses per write request, P+Q redundant disk arrays require six disk accesses due to the need to update both the 'P' and 'Q' information.
RAID 10 (Striped Mirrors)
• RAID 10 (also known as RAID 1+0) was not mentioned in the original 1988 article that defined RAID 1 through RAID 5.
• The term is now used to mean the combination of RAID 0 (striping) and RAID 1 (mirroring).
• Disks are mirrored in pairs for redundancy and improved performance; data is then striped across multiple disks for maximum performance.
• For example, in a four-disk array, disks 0 & 2 and disks 1 & 3 form the mirrored pairs.
• Obviously, RAID 10 uses more disk space to provide redundant data than RAID 5. However, it also provides a performance advantage by reading from all disks in parallel while eliminating the write penalty of RAID 5.
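A sketch of one possible address mapping consistent with the pairing above (disk d mirrored by disk d + npairs; all names are illustrative):

```c
/* One possible RAID 10 layout: stripe across npairs mirror pairs, where
   disk d and disk d + npairs hold identical data (with 4 disks this
   yields the 0 & 2 and 1 & 3 pairing described above). */
struct r10_loc {
    unsigned primary;  /* drive holding the first copy  */
    unsigned mirror;   /* drive holding the second copy */
    unsigned offset;   /* block index within each drive */
};

struct r10_loc raid10_map(unsigned logical_block, unsigned npairs)
{
    struct r10_loc loc;
    unsigned pair = logical_block % npairs;  /* stripe across the pairs */
    loc.primary = pair;
    loc.mirror  = pair + npairs;             /* the mirrored drive      */
    loc.offset  = logical_block / npairs;
    return loc;
}
```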
RAID Reliability
• Redundancy in disk arrays is motivated by the need to overcome disk failures.
• When only independent disk failures are considered, a simple parity scheme works admirably. Patterson, Gibson, and Katz derive the mean time between failures for a RAID level 5 to be:
MTTF(array) = MTTF(disk)² / (N x (G – 1) x MTTR(disk))
• where MTTF(disk) is the mean-time-to-failure of a single disk,
• MTTR(disk) is the mean-time-to-repair of a single disk,
• N is the total number of disks in the disk array
• G is the parity group size
• For illustration purposes, let us assume we have 100 disks, each with a mean time to failure (MTTF) of 200,000 hours and a mean time to repair (MTTR) of one hour. If we organized these 100 disks into parity groups of average size 16, then the mean time to failure of the system would be an astounding 3000 years! Mean times to failure of this magnitude lower the chances of failure over any given period of time.
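Plugging these numbers into the MTTF formula above reproduces the claim (a small C sketch; the helper name is ours):

```c
#include <stdio.h>

/* RAID 5 mean time to failure per Patterson, Gibson, and Katz:
   MTTF(array) = MTTF(disk)^2 / (N x (G - 1) x MTTR(disk)). */
double raid5_mttf(double mttf_disk, double n, double g, double mttr_disk)
{
    return (mttf_disk * mttf_disk) / (n * (g - 1.0) * mttr_disk);
}

int main(void)
{
    /* 100 disks, 200,000-hour disk MTTF, group size 16, 1-hour MTTR */
    double hours = raid5_mttf(200000.0, 100.0, 16.0, 1.0);
    printf("%.0f hours = about %.0f years\n", hours, hours / 8760.0);
    /* prints: 26666667 hours = about 3044 years */
    return 0;
}
```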
RAID Case Studies: NCR 6298
• The NCR 6298 Disk Array Subsystem, released in 1992, is a low-cost RAID subsystem supporting RAID levels 0, 1, 3, and 5.
• Designed for commercial environments, the system supports up to four controllers, redundant power supplies and fans, and up to 20 3.5" SCSI-2 drives. All components (power supplies, drives, and controllers) can be replaced while the system services requests.
• The array controller architecture features a unique lock-step design that requires almost no buffering. For all requests except RAID level 5 writes, data flows directly through the controller to the drives. The controller duplexes the data stream for mirroring configurations and generates parity for RAID level 3 synchronously with data transfer.
• The host interface is fast, wide, differential SCSI-2 (20 MB/s), while the drive channels are fast, narrow SCSI-2 (10 MB/s). Because of the lock-step architecture, transfer bandwidth to the host is limited to 10 MB/s for RAID levels 0, 1, and 5. However, in RAID level 3 configurations, performance on large transfers has been measured at over 14 MB/s.
The RAID-II Storage Server
• RAID-II is a high-bandwidth network file server designed and implemented at the University of California at Berkeley.
• RAID-II interfaces a SCSI-based disk array to a HIPPI network.
• One of RAID-II's unique features is its ability to provide high-bandwidth access from the network to the disks without transferring data through the relatively slow file server (a Sun 4/280 workstation) memory system. To do this, the RAID project designed a custom printed-circuit board called the XBUS card.
• The XBUS card provides a high-bandwidth path between the major system components: the HIPPI network, four VME busses that connect to VME disk controllers, and an interleaved, multi-ported semiconductor memory. The XBUS card also contains a parity computation engine that generates parity for writes and reconstruction on the disk array.
• The data path between these system components is a 4 x 8 crossbar switch that can sustain approximately 160 MB/s. The entire system is controlled by an external Sun 4/280 file server through a memory-mapped control register interface.
• The maximum bandwidth of RAID-II is between 20 and 30 MB/s, enough to support the full disk bandwidth of approximately 20 disk drives.