Page 1:

CS152 Computer Architecture and Engineering

Lecture 25

I/O and Storage Systems, Continued

April 24, 2001

John Kubiatowicz (http.cs.berkeley.edu/~kubitron)

lecture slides: http://www-inst.eecs.berkeley.edu/~cs152/

Page 2: The Big Picture: Where are We Now?


° Today’s Topic: I/O Systems

[Figure: two computer systems, each with a Processor (Control + Datapath), Memory, Input, and Output, connected by a Network/Bus]

Page 3: Recap: A Multi-Bus System

[Figure: Processor with a backside cache bus to an L2 cache; a NorthBridge joins the processor and memory on the memory bus; bus adaptors and a SouthBridge hang I/O buses off the system]

° Separate sets of pins for different functions
• Memory bus
• Caches
• Graphics bus (for fast frame buffer)
• I/O buses are connected to the backplane bus

° Advantage:
• Buses can run at different speeds
• Much less overall loading!


Page 4: Recap: Main components of Intel Chipset: Pentium II/III

° Northbridge:

• Handles memory

• Graphics

° Southbridge: I/O

• PCI bus

• Disk controllers

• USB controllers

• Audio

• Serial I/O

• Interrupt controller

• Timers

Page 5: Recap: Bus Summary

° Buses are an important technique for building large-scale systems

• Their speed is critically dependent on factors such as length, number of devices, etc.

• Critically limited by capacitance
• Tricks: esoteric drive technology such as GTL

° Important terminology:
• Master: the device that can initiate new transactions
• Slaves: devices that respond to the master

° Two types of bus timing:
• Synchronous: bus includes a clock
• Asynchronous: no clock, just REQ/ACK strobing

° Direct Memory Access (DMA) allows fast, burst transfer into processor’s memory:

• Processor's memory acts like a slave
• Probably requires some form of cache coherence so that DMA'ed memory can be invalidated from the cache

Page 6: Recap: Technology Trends

Disk Capacity now doubles every 18 months; before 1990 it was every 36 months

• Today: Processing Power Doubles Every 18 months

• Today: Memory Size Doubles Every 18 months (4X/3yr)

• Today: Disk Capacity Doubles Every 18 months

• Disk Positioning Rate (Seek + Rotate) Doubles Every Ten Years!

The I/O GAP

Page 7: Recap: MBits per square inch: DRAM as % of Disk over time

[Chart, 1974-1998: DRAM areal density as a percentage of disk areal density, y-axis 0% to 40%. Comparison points over time: 0.2 v. 1.7 Mb/in², 9 v. 22 Mb/in², 470 v. 3000 Mb/in².]

source: New York Times, 2/23/98, page C3, "Makers of disk drives crowd even more data into even smaller spaces"

Page 8: Recap: Nano-layered Disk Heads

° The special sensitivity of the disk head comes from the "Giant Magneto-Resistive" (GMR) effect

° IBM is the leader in this technology
• Same technology as the TMJ-RAM breakthrough we described in an earlier class

[Figure: disk head cross-section, showing the coil for writing]

Page 9: Typical Numbers of a Magnetic Disk

° Rotational Latency:

• Most disks rotate at 3,600 to 7200 RPM

• Approximately 16 ms to 8 ms per revolution, respectively

• An average latency to the desired information is halfway around the disk: 8 ms at 3600 RPM, 4 ms at 7200 RPM

° Transfer Time is a function of:

• Transfer size (usually a sector): 1 KB / sector

• Rotation speed: 3600 RPM to 10000 RPM

• Recording density: bits per inch on a track

• Diameter: typical diameters range from 2.5 to 5.25 inches

• Typical values: 2 to 40 MB per second

[Figure: disk geometry, showing platter, head, track, sector, and cylinder]

Page 10: Disk I/O Performance

° Disk Access Time = Seek Time + Rotational Latency + Transfer Time + Controller Time + Queueing Delay

° Estimating Queue Length:

• Utilization: u = Request Rate / Service Rate = λ/µ
• Mean Queue Length = u / (1 - u) (see the code sketch below)
• As Request Rate -> Service Rate:
- Mean Queue Length -> Infinity

[Figure: the processor issues requests at rate λ into a queue feeding the disk controller and disk, which service them at rate µ]
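To see how the queue blows up near saturation, here is a minimal C sketch of the mean-queue-length formula above; the service rate of 100 requests/s is a made-up value for illustration:

#include <stdio.h>

/* Mean queue length for a single-server queue: u/(1-u). */
double mean_queue_length(double request_rate, double service_rate) {
    double u = request_rate / service_rate;   /* utilization */
    return u / (1.0 - u);
}

int main(void) {
    /* Hypothetical disk that can service 100 requests/s. */
    double mu = 100.0;
    for (double lambda = 50.0; lambda <= 95.0; lambda += 15.0)
        printf("u = %.2f -> mean queue length = %.2f\n",
               lambda / mu, mean_queue_length(lambda, mu));
    return 0;
}

At u = 0.95 the mean queue length is already 19; as the request rate approaches the service rate, it grows without bound.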

Page 11: Disk Device Terminology

Disk Latency = Queueing Time + Controller time + Seek Time + Rotation Time + Xfer Time

Order of magnitude times for 4K byte transfers:

Average Seek: 8 ms or less

Rotate: 4.2 ms @ 7200 rpm

Xfer: 1 ms @ 7200 rpm


Page 12: Example

° 512-byte sector, rotate at 5400 RPM, advertised seek is 12 ms, transfer rate is 4 MB/sec, controller overhead is 1 ms, queue idle so no queueing delay

° Disk Access Time = Seek Time + Rotational Latency + Transfer Time + Controller Time + Queueing Delay

° Disk Access Time = 12 ms + 0.5 / 5400 RPM + 0.5 KB / 4 MB/s + 1 ms + 0

° Disk Access Time = 12 ms + 0.5 / 90 RPS + 0.125 / 1024 s + 1 ms + 0

° Disk Access Time = 12 ms + 5.5 ms + 0.1 ms + 1 ms + 0 ms

° Disk Access Time = 18.6 ms

° If real seeks are 1/3 of advertised seeks, then it's 10.6 ms, with rotational delay at 50% of the time!
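The arithmetic above is easy to check in code; a minimal C sketch using the slide's parameters (the function name is mine):

#include <stdio.h>

/* Disk access time = seek + rotational latency + transfer + controller
 * (+ queueing delay, zero here since the queue is idle). */
double disk_access_ms(double seek_ms, double rpm, double sector_kb,
                      double rate_mb_s, double ctrl_ms) {
    double rotate_ms = 0.5 * 60000.0 / rpm;             /* half a revolution */
    double xfer_ms   = sector_kb / (rate_mb_s * 1024.0) * 1000.0;
    return seek_ms + rotate_ms + xfer_ms + ctrl_ms;
}

int main(void) {
    /* 512-byte sector, 5400 RPM, 12 ms advertised seek, 4 MB/s, 1 ms. */
    printf("%.1f ms\n", disk_access_ms(12.0, 5400.0, 0.5, 4.0, 1.0));
    return 0;
}

This prints 18.7 ms; the slide rounds the rotational term to 5.5 ms and the transfer term to 0.1 ms to get 18.6 ms.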

Page 13: Reliability and Availability

° Two terms that are often confused:

• Reliability: Is anything broken?

• Availability: Is the system still available to the user?

° Availability can be improved by adding hardware:

• Example: adding ECC on memory

° Reliability can only be improved by:

• Better environmental conditions

• Building more reliable components

• Building with fewer components

- Improving availability may come at the cost of lower reliability

Page 14: Simple Producer-Server Model

° Throughput:

• The number of tasks completed by the server in unit time

• In order to get the highest possible throughput:

- The server should never be idle

- The queue should never be empty

° Response time:

• Begins when a task is placed in the queue

• Ends when it is completed by the server

• In order to minimize the response time:

- The queue should be empty

- The server will be idle

[Figure: Producer -> Queue -> Server]

Page 15: Disk I/O Performance

Response time = Queue + Device Service time

[Graph: response time (ms, 0 to 300) vs. throughput (utilization, % of total bandwidth, 0% to 100%); response time stays low at modest utilization and rises sharply as utilization approaches 100%]

[Figure: Proc -> Queue -> IOC -> Device]

Metrics: Response Time, Throughput

Latency goes as Tser x u/(1 - u), where u = utilization

Page 16: Introduction to Queueing Theory

° Queueing theory applies to long-term, steady-state behavior: arrival rate = departure rate

° Little's Law: mean number of tasks in system = arrival rate x mean response time

• Observed by many; Little was first to prove it
• Simple interpretation: you should see the same number of tasks in the queue when entering as when leaving

° Applies to any system in equilibrium, as long as nothing in the black box is creating or destroying tasks

[Figure: a "black box" queueing system, with arrivals entering and departures leaving]

Page 17: A Little Queuing Theory: Notation

° Queuing models assume state of equilibrium: input rate = output rate

° Notation:
λ      average number of arriving customers/second
Tser   average time to service a customer (traditionally µ = 1/Tser)
u      server utilization (0..1): u = λ x Tser (or u = λ/µ)
Tq     average time/customer in queue
Tsys   average time/customer in system: Tsys = Tq + Tser
Lq     average length of queue: Lq = λ x Tq
Lsys   average length of system: Lsys = λ x Tsys

° Little's Law: Lsys = λ x Tsys (mean number of customers = arrival rate x mean time in system)

[Figure: Proc -> queue -> server (IOC + device); the queue and server together form the system]

Page 18: A Little Queuing Theory: Use of random distributions

° Server spends a variable amount of time with customers
• Weighted mean: m1 = (f1 x T1 + f2 x T2 + ... + fn x Tn)/F = Σ p(T) x T
• Variance = (f1 x T1² + f2 x T2² + ... + fn x Tn²)/F - m1² = Σ p(T) x T² - m1²
• Squared coefficient of variance: C = variance/m1²
- Unitless measure (100 ms² vs. 0.1 s²)

° Exponential distribution, C = 1: most are short relative to the average, a few are long; 90% < 2.3 x average, 63% < average
° Hypoexponential distribution, C < 1: most are close to the average; C = 0.5 => 90% < 2.0 x average, only 57% < average
° Hyperexponential distribution, C > 1: further from the average; C = 2.0 => 90% < 2.8 x average, 69% < average

[Figure: Proc -> queue -> server (IOC + device); sketches of the distributions around their averages]

Page 19: A Little Queuing Theory: Variable Service Time

° Disk response times have C ≈ 1.5 (majority of seeks < average)
° Yet we usually pick C = 1.0 for simplicity
• Memoryless, exponential distribution
• Many complex systems are well described by the memoryless distribution!

° Another useful value is the average time you must wait for the server to complete its current task: m1(z)
• Called the "Average Residual Wait Time"
• Not just 1/2 x m1, because that doesn't capture the variance
• Can derive: m1(z) = 1/2 x m1 x (1 + C)
• No variance: C = 0 => m1(z) = 1/2 x m1
• Exponential: C = 1 => m1(z) = m1

[Figure: Proc -> queue -> server (IOC + device); service-time distribution with its average marked]

Page 20: A Little Queuing Theory: Average Wait Time

° Calculating the average wait time in queue, Tq:
• All customers in line must complete; avg time m1 = Tser = 1/µ
• If something is at the server, it takes on average m1(z) to complete
- Chance the server is busy: u = λ/µ; average delay is u x m1(z)

Tq = u x m1(z) + Lq x Tser
Tq = u x m1(z) + λ x Tq x Tser      (Little's Law)
Tq = u x m1(z) + u x Tq             (defn of utilization u)
Tq x (1 - u) = m1(z) x u
Tq = m1(z) x u/(1 - u) = Tser x {1/2 x (1 + C)} x u/(1 - u)

Notation:
λ      average number of arriving customers/second
Tser   average time to service a customer
u      server utilization (0..1): u = λ x Tser
Tq     average time/customer in queue
Lq     average length of queue: Lq = λ x Tq
m1(z)  average residual wait time = Tser x {1/2 x (1 + C)}

Page 21: A Little Queuing Theory: M/G/1 and M/M/1

° Assumptions so far:
• System in equilibrium
• Times between two successive arrivals in line are random
• Server can start on the next customer immediately after the prior one finishes
• No limit to the queue; works First-In-First-Out
• Afterward, all customers in line must complete; each takes avg Tser

° Described: "memoryless" or Markovian request arrival (M for C = 1, exponentially random), General service distribution (no restrictions), 1 server: the M/G/1 queue

° When service times have C = 1: the M/M/1 queue
Tq = Tser x u/(1 - u)

Tser   average time to service a customer
u      server utilization (0..1): u = λ x Tser
Tq     average time/customer in queue
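A small C sketch of these two formulas; the parameter names are mine, and c2 stands for the squared coefficient of variance C from the previous slides:

#include <stdio.h>

/* M/G/1 mean time in queue: Tq = Tser * {1/2 * (1 + C)} * u/(1 - u).
 * With C = 1 this reduces to the M/M/1 result Tq = Tser * u/(1 - u). */
double tq_mg1(double tser, double u, double c2) {
    return tser * 0.5 * (1.0 + c2) * u / (1.0 - u);
}

int main(void) {
    double tser = 20.0, u = 0.2;    /* ms; a 20%-utilized server */
    printf("C = 0.0 (no variance): Tq = %.2f ms\n", tq_mg1(tser, u, 0.0));
    printf("C = 1.0 (M/M/1):       Tq = %.2f ms\n", tq_mg1(tser, u, 1.0));
    printf("C = 1.5 (disk-like):   Tq = %.2f ms\n", tq_mg1(tser, u, 1.5));
    return 0;
}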

Page 22: A Little Queuing Theory: An Example

° Processor sends 10 x 8 KB disk I/Os per second; requests & service times exponentially distributed; avg. disk service = 20 ms
• This number comes from the disk equation: Service time = avg seek + avg rotational delay + transfer time + controller overhead

° On average, how utilized is the disk?
• What is the number of requests in the queue?
• What is the average time spent in the queue?
• What is the average response time for a disk request?

° Notation:
λ      average number of arriving customers/second = 10
Tser   average time to service a customer = 20 ms (0.02 s)
u      server utilization (0..1): u = λ x Tser = 10/s x 0.02 s = 0.2
Tq     average time/customer in queue = Tser x u/(1 - u) = 20 x 0.2/(1 - 0.2) = 20 x 0.25 = 5 ms (0.005 s)
Tsys   average time/customer in system: Tsys = Tq + Tser = 25 ms
Lq     average length of queue: Lq = λ x Tq = 10/s x 0.005 s = 0.05 requests in queue
Lsys   average # of tasks in system: Lsys = λ x Tsys = 10/s x 0.025 s = 0.25
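For reference, a minimal C sketch that reproduces the slide's numbers:

#include <stdio.h>

int main(void) {
    double lambda = 10.0;    /* arriving requests per second  */
    double tser   = 0.020;   /* average service time, seconds */

    double u    = lambda * tser;          /* utilization = 0.2             */
    double tq   = tser * u / (1.0 - u);   /* M/M/1 queue time = 5 ms       */
    double tsys = tq + tser;              /* response time = 25 ms         */
    double lq   = lambda * tq;            /* Little's Law: 0.05 in queue   */
    double lsys = lambda * tsys;          /* Little's Law: 0.25 in system  */

    printf("u = %.2f, Tq = %.0f ms, Tsys = %.0f ms, Lq = %.2f, Lsys = %.2f\n",
           u, tq * 1000.0, tsys * 1000.0, lq, lsys);
    return 0;
}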

Page 23: Administrivia: Not much left

° Only 2 groups have put up project descriptions!
• Go to the "Projects" link on the home page

° Tomorrow: Sections in lab again (119 Cory)
• Bring complete Lab 6 status with you
- What are you doing?
- How are you doing?
- What is your testing strategy?

° NO LECTURE ON THURSDAY! (4/26)

° Midterm II next Tuesday (5/1)
• 277 Cory as before
• Pizza afterwards
• Topics:
- Pipelining, out-of-order scheduling, caches, memory, buses, I/O

° Review session this Sunday (4/29)

° Remaining schedule:
• Lecture on power/assorted topics (quantum computing?) on 5/3
• Wrap-up lecture on 5/8
• Oral presentations/contest on 5/10
• Grades out by 5/12

Page 24: Giving Commands to I/O Devices

° Two methods are used to address the device:

• Special I/O instructions

• Memory-mapped I/O

° Special I/O instructions specify:

• Both the device number and the command word

- Device number: the processor communicates this via a set of wires normally included as part of the I/O bus

- Command word: this is usually sent on the bus's data lines

° Memory-mapped I/O:

• Portions of the address space are assigned to I/O devices

• Reads and writes to those addresses are interpreted as commands to the I/O devices (sketched in code below)

• User programs are prevented from issuing I/O operations directly:

- The I/O address space is protected by the address translation
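To illustrate the memory-mapped style, here is a minimal C sketch; the base address, register layout, and command code are hypothetical, not from the slide:

#include <stdint.h>

/* Hypothetical memory-mapped device registers. 'volatile' keeps the
 * compiler from caching or reordering these accesses. */
#define DEV_BASE   0xFFFF0000u
#define DEV_STATUS (*(volatile uint32_t *)(DEV_BASE + 0x0))
#define DEV_CMD    (*(volatile uint32_t *)(DEV_BASE + 0x4))
#define DEV_DATA   (*(volatile uint32_t *)(DEV_BASE + 0x8))

#define CMD_READ_SECTOR 0x1u    /* made-up command encoding */

void start_read(uint32_t sector) {
    DEV_DATA = sector;           /* an ordinary store carries the argument */
    DEV_CMD  = CMD_READ_SECTOR;  /* writing the command register starts the I/O */
}

Because these are ordinary loads and stores, the usual address-translation hardware can keep user programs away from the I/O region, as the slide notes.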

Page 25: Memory Mapped I/O

Single memory & I/O bus, no separate I/O instructions:

[Figure 1: CPU, memory (ROM, RAM, I/O$), and peripheral interfaces sharing a single bus]

[Figure 2: CPU with L2 $ on a memory bus; a memory bus adaptor bridges to an I/O bus holding the peripheral interfaces]

Page 26: I/O Device Notifying the OS

° The OS needs to know when:

• The I/O device has completed an operation

• The I/O operation has encountered an error

° This can be accomplished in two different ways

• I/O Interrupt:

- Whenever an I/O device needs attention from the processor, it interrupts the processor from what it is currently doing.

• Polling:

- The I/O device puts information in a status register

- The OS periodically checks the status register

Page 27: I/O Interrupt

° An I/O interrupt is just like an exception, except:

• An I/O interrupt is asynchronous

• Further information needs to be conveyed

° An I/O interrupt is asynchronous with respect to instruction execution:

• I/O interrupt is not associated with any instruction

• I/O interrupt does not prevent any instruction from completing

- You can pick your own convenient point to take an interrupt

° I/O interrupt is more complicated than an exception:

• Needs to convey the identity of the device generating the interrupt

• Interrupt requests can have different urgencies:

- Interrupt request needs to be prioritized

Page 28: Example: Device Interrupt

[Figure: timeline of a device interrupt; the external interrupt arrives mid-program, the PC is saved, all interrupts are disabled, and the machine enters supervisor mode; after the handler's RTI, the PC is restored and user mode resumes]

User program (user mode):

add   $r1,$r2,$r3
subi  $r4,$r1,#4
slli  $r4,$r4,#2
                      <- external interrupt ("hiccup!")
lw    $r2,0($r4)      <- execution resumes here afterward
lw    $r3,4($r4)
add   $r2,$r2,$r3
sw    8($r4),$r2

"Interrupt Handler" (supervisor mode):

Raise priority
Re-enable all interrupts
Save registers

lw    $r1,20($r0)
lw    $r2,0($r1)
addi  $r3,$r0,#5
sw    $r3,0($r1)

Restore registers
Clear current interrupt
Disable all interrupts
Restore priority
RTI

° Advantage:
• User program progress is only halted during the actual transfer

° Disadvantage: special hardware is needed to:
• Cause an interrupt (I/O device)
• Detect an interrupt (processor)
• Save the proper states to resume after the interrupt (processor)

Page 29: Alternative: Polling

With external interrupts from the network device disabled, the code checks a device register at an explicit "polling point":

Disable network interrupts

subi  $r4,$r1,#4
slli  $r4,$r4,#2
lw    $r2,0($r4)
lw    $r3,4($r4)
add   $r2,$r2,$r3
sw    8($r4),$r2
lw    $r1,12($zero)    <- polling point (check device register)
beq   $r1,no_mess
lw    $r1,20($r0)      <- "handler"
lw    $r2,0($r1)
addi  $r3,$r0,#5
sw    0($r1),$r3
Clear network interrupt
no_mess:

Page 30: Polling: Programmed I/O

° Advantage:

• Simple: the processor is totally in control and does all the work

° Disadvantage:

• Polling overhead can consume a lot of CPU time

[Flowchart: CPU asks the IOC/device "is the data ready?"; if no, loop back (busy wait); if yes, read the data and store it to memory; if not done, repeat]

Busy-wait loops are not an efficient way to use the CPU unless the device is very fast! But checks for I/O completion can be dispersed among computation-intensive code.
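A minimal C sketch of the busy-wait loop in the flowchart above; the status bit and registers are hypothetical:

#include <stdint.h>

#define STATUS_READY 0x1u                 /* made-up "data ready" bit */

extern volatile uint32_t *dev_status;     /* memory-mapped status register */
extern volatile uint32_t *dev_data;       /* memory-mapped data register   */

/* Programmed I/O: the CPU itself moves every word. */
void pio_read(uint32_t *buf, int nwords) {
    for (int i = 0; i < nwords; i++) {
        while ((*dev_status & STATUS_READY) == 0)
            ;                             /* busy wait: is the data ready? */
        buf[i] = *dev_data;               /* read data, store to memory */
    }
}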

Page 31: Polling is faster/slower than Interrupts

° Polling is faster than interrupts because

• The compiler knows which registers are in use at the polling point; hence, it need not save and restore registers (or not as many).

• Other interrupt overhead avoided (pipeline flush, trap priorities, etc).

° Polling is slower than interrupts because

• Overhead of polling instructions is incurred regardless of whether or not handler is run. This could add to inner-loop delay.

• Device may have to wait for service for a long time.

° When to use one or the other?

• Multi-axis tradeoff

- Frequent/regular events good for polling, as long as device can be controlled at user level.

- Interrupts good for infrequent/irregular events

- Interrupts good for ensuring regular/predictable service of events.


Page 32: Delegating I/O Responsibility from the CPU: DMA

° Direct Memory Access (DMA):

• External to the CPU

• Acts as a master on the bus

• Transfers blocks of data to or from memory without CPU intervention

[Figure: CPU, memory, DMAC, and IOC + device sharing the bus]

CPU sends a starting address, direction, and length count to the DMAC, then issues "start" (see the sketch below).

The DMAC provides handshake signals for the peripheral controller, and memory addresses and handshake signals for memory.
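A C sketch of that setup sequence; the DMAC register layout below is hypothetical (real controllers differ), with one field per item the slide names:

#include <stdint.h>

struct dmac {
    volatile uint32_t addr;    /* starting memory address                    */
    volatile uint32_t count;   /* length count, in bytes                     */
    volatile uint32_t dir;     /* 0 = device-to-memory, 1 = memory-to-device */
    volatile uint32_t start;   /* writing 1 kicks off the transfer           */
};

void dma_from_device(struct dmac *d, void *buf, uint32_t nbytes) {
    d->addr  = (uint32_t)(uintptr_t)buf;
    d->count = nbytes;
    d->dir   = 0;
    d->start = 1;    /* CPU is now free; the DMAC masters the bus itself */
}

As the bus summary earlier noted, the DMA'ed region may then need to be invalidated from the processor's cache.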

Page 33: Delegating I/O Responsibility from the CPU: IOP

[Figure: CPU and IOP share the main memory bus; the IOP drives a separate I/O bus with devices D1, D2, ..., Dn]

(1) CPU issues an instruction to the IOP: OP, Device, Address (which target device, and where the commands are)
(2) IOP looks in memory for commands: OP, Addr, Cnt, Other (what to do, where to put the data, how much, special requests)
(3) Device to/from memory transfers are controlled by the IOP directly; the IOP steals memory cycles
(4) IOP interrupts the CPU when done
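A C sketch of what one of those in-memory command blocks might look like; the field names and widths are guesses for illustration, not a real IOP format:

#include <stdint.h>

/* Hypothetical IOP command block, one field per item on the slide. */
struct iop_cmd {
    uint8_t  op;       /* what to do (read, write, seek, ...)          */
    uint32_t addr;     /* where to put (or fetch) the data             */
    uint32_t count;    /* how much to transfer                         */
    uint32_t other;    /* special requests (e.g., interrupt when done) */
};

/* The CPU builds a chain of these in memory and points the IOP at the
 * first one; the IOP then walks the chain without CPU involvement. */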

Page 34: Responsibilities of the Operating System

° The operating system acts as the interface between:

• The I/O hardware and the program that requests I/O

° Three characteristics of the I/O systems:

• The I/O system is shared by multiple programs using the processor

• I/O systems often use interrupts (externally generated exceptions) to communicate information about I/O operations.

- Interrupts must be handled by the OS because they cause a transfer to supervisor mode

• The low-level control of an I/O device is complex:

- Managing a set of concurrent events

- The requirements for correct device control are very detailed

Page 35: Operating System Requirements

° Provide protection to shared I/O resources

• Guarantees that a user's program can only access the portions of an I/O device to which the user has rights

° Provides abstraction for accessing devices:

• Supply routines that handle low-level device operation

° Handles the interrupts generated by I/O devices

° Provide equitable access to the shared I/O resources

• All user programs must have equal access to the I/O resources

° Schedule accesses in order to enhance system throughput

Page 36: OS and I/O Systems Communication Requirements

° The Operating System must be able to prevent:

• The user program from communicating with the I/O device directly

° If user programs could perform I/O directly:

• Protection to the shared I/O resources could not be provided

° Three types of communication are required:

• The OS must be able to give commands to the I/O devices

• The I/O device must be able to notify the OS when the I/O device has completed an operation or has encountered an error

• Data must be transferred between memory and an I/O device

Page 37: Manufacturing Advantages of Disk Arrays

Conventional: 4 disk designs (14", 10", 5.25", 3.5"), spanning low-end to high-end disk product families

Disk array: 1 disk design (3.5")

Page 38: Small # of Large Disks => Large # of Small Disks!

                IBM 3390 (K)   IBM 3.5" 0061   x70
Data Capacity   20 GBytes      320 MBytes      23 GBytes
Volume          97 cu. ft.     0.1 cu. ft.     11 cu. ft.
Power           3 KW           11 W            1 KW
Data Rate       15 MB/s        1.5 MB/s        120 MB/s
I/O Rate        600 I/Os/s     55 I/Os/s       3900 I/Os/s
MTTF            250 KHrs       50 KHrs         ??? Hrs
Cost            $250K          $2K             $150K

Disk arrays have potential for:
• large data and I/O rates
• high MB per cu. ft., high MB per KW
• reliability?

Page 39: Array Reliability

• Reliability of N disks = Reliability of 1 disk ÷ N
• 50,000 hours ÷ 70 disks = 700 hours
• Disk system MTTF drops from 6 years to 1 month!
• Arrays (without redundancy) are too unreliable to be useful!

Hot spares support reconstruction in parallel with access: very high media availability can be achieved.

Page 40: Redundant Arrays of Disks

• Files are "striped" across multiple spindles
• Redundancy yields high data availability

Disks will fail; contents are reconstructed from data redundantly stored in the array. There is a capacity penalty to store the redundancy and a bandwidth penalty to update it.

Techniques:
• Mirroring/Shadowing (high capacity cost)
• Horizontal Hamming codes (overkill)
• Parity & Reed-Solomon codes
• Failure prediction (no capacity overhead!): VaxSimPlus. Technique is controversial.

Page 41: RAID 1: Disk Mirroring/Shadowing

[Figure: each disk fully duplicated onto a "shadow" disk in its recovery group]

• Each disk is fully duplicated onto its "shadow"; very high availability can be achieved
• Bandwidth sacrifice on write: logical write = two physical writes
• Reads may be optimized
• Most expensive solution: 100% capacity overhead

Targeted for high I/O rate, high availability environments

Page 42: RAID 3: Parity Disk

[Figure: a logical record (10010011 11001101 10010011 00110000) is striped as physical records across four data disks, with a fifth parity disk P protecting the group]

• Parity is computed across the recovery group to protect against hard disk failures; 33% capacity cost for parity in this configuration. Wider arrays reduce the capacity cost but decrease expected availability and increase reconstruction time.
• Arms are logically synchronized and spindles rotationally synchronized; logically a single high-capacity, high-transfer-rate disk

Targeted for high bandwidth applications: scientific computing, image processing
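The parity here is just a bytewise XOR across the data disks; a minimal C sketch (the geometry parameters are mine):

#include <stddef.h>
#include <stdint.h>

/* Compute the parity block for one stripe: XOR of all data blocks.
 * If any single data disk fails, the same XOR over the survivors
 * plus the parity block reconstructs the lost data. */
void compute_parity(const uint8_t *data[], int ndisks,
                    uint8_t *parity, size_t blocksize) {
    for (size_t i = 0; i < blocksize; i++) {
        uint8_t p = 0;
        for (int d = 0; d < ndisks; d++)
            p ^= data[d][i];
        parity[i] = p;
    }
}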

Page 43: RAID 5+: High I/O Rate Parity

A logical write becomes four physical I/Os. Independent writes are possible because of the interleaved parity. Reed-Solomon codes ("Q") can provide protection during reconstruction.

Parity rotates across the disk columns; logical disk addresses increase down the array, and a row of stripe units forms a stripe:

D0   D1   D2   D3   P
D4   D5   D6   P    D7
D8   D9   P    D10  D11
D12  P    D13  D14  D15
P    D16  D17  D18  D19
D20  D21  D22  D23  P
.    .    .    .    .

Targeted for mixed applications
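One way to express the rotating layout in the diagram above; this is a sketch of this particular picture, not a definitive RAID-5 mapping (real arrays vary in rotation direction and ordering):

#include <stdio.h>

#define NDISKS 5    /* columns in the diagram above */

/* Map logical data unit b to its stripe, its disk, and the stripe's
 * parity disk, following the picture: parity starts on the rightmost
 * disk and rotates left by one disk per stripe. */
void raid5_map(int b, int *stripe, int *disk, int *pdisk) {
    *stripe = b / (NDISKS - 1);                   /* 4 data units per stripe */
    *pdisk  = (NDISKS - 1) - (*stripe % NDISKS);  /* rotating parity column  */
    int slot = b % (NDISKS - 1);
    *disk = (slot >= *pdisk) ? slot + 1 : slot;   /* skip the parity column  */
}

int main(void) {
    int s, d, p;
    raid5_map(7, &s, &d, &p);   /* D7 -> stripe 1, disk 4, parity on disk 3 */
    printf("D7: stripe %d, disk %d, parity disk %d\n", s, d, p);
    return 0;
}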

Page 44: Problems of Disk Arrays: Small Writes

RAID-5 Small Write Algorithm: 1 logical write = 2 physical reads + 2 physical writes

To replace old data D0 with new data D0' in the stripe (D0 D1 D2 D3 P):
(1. Read) old data D0
(2. Read) old parity P
XOR the old data and the new data into the parity: P' = D0' xor D0 xor P
(3. Write) new data D0'
(4. Write) new parity P'
The stripe is now (D0' D1 D2 D3 P').
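The parity update itself is two XORs per byte; a minimal C sketch of the small-write step above:

#include <stddef.h>
#include <stdint.h>

/* RAID-5 small write: fold the old data out of the parity and the new
 * data in, leaving the other disks in the stripe untouched. The caller
 * has already done the two physical reads (old_data and parity). */
void small_write_parity(uint8_t *parity, const uint8_t *old_data,
                        const uint8_t *new_data, size_t n) {
    for (size_t i = 0; i < n; i++)
        parity[i] ^= old_data[i] ^ new_data[i];   /* P' = P xor D xor D' */
}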

Page 45: Hewlett-Packard (HP) AutoRAID

° HP has an interesting solution which combines both mirroring and RAID level 5.

• Dynamically adapts disk storage

- For recent or highly used data, uses mirroring

- For less recently used data, uses RAID 5

• Gets speed of mirroring when it matters and density of RAID 5 on average

Page 46: Subsystem Organization

[Figure: host -> host adapter -> array controller -> several single-board disk controllers, each driving a disk]

• Host adapter: manages the interface to the host, DMA
• Array controller: control, buffering, parity logic
• Single-board disk controllers: physical device control; often piggy-backed in small-format devices

• Striping software is off-loaded from the host to the array controller
• No application modifications
• No reduction of host performance

Page 47: System Availability: Orthogonal RAIDs

[Figure: an array controller fans out to several string controllers, each driving a string of disks; redundancy groups run orthogonally to the strings]

• Data Recovery Group: unit of data redundancy
• Redundant support components: fans, power supplies, controller, cables
• End-to-end data integrity: internal parity-protected data paths

Page 48: System-Level Availability

[Figure: two hosts, each with a fully dual-redundant path (I/O controller -> array controller) into shared recovery groups of disks]

Goal: no single points of failure

With duplicated paths, higher performance can be obtained when there are no failures.

Page 49: Network Attached Storage

High-performance storage service on a high-speed network: the network provides well-defined physical and logical interfaces, separating the CPU from the storage system; OS structures support remote file access (network file services).

• Decreasing disk diameters: 14" » 10" » 8" » 5.25" » 3.5" » 2.5" » 1.8" » 1.3" » . . . high-bandwidth disk systems based on arrays of disks
• Increasing network bandwidth: 3 Mb/s » 10 Mb/s » 50 Mb/s » 100 Mb/s » 1 Gb/s » 10 Gb/s; networks capable of sustaining high-bandwidth transfers

Page 50: OceanStore: The Oceanic Data Utility

Page 51: OceanStore Context: Ubiquitous Computing

° Computing everywhere:
• Desktop, laptop, palmtop
• Cars, cellphones
• Shoes? Clothing? Walls?

° Connectivity everywhere:
• Rapid growth of bandwidth in the interior of the net
• Broadband to the home and office
• Wireless technologies such as CDMA, satellite, laser

Page 52: Questions about information

° Where is persistent information stored?
• Want: geographic independence for availability, durability, and freedom to adapt to circumstances

° How is it protected?
• Want: encryption for privacy, signatures for authenticity, and Byzantine commitment for integrity

° Can we make it indestructible?
• Want: redundancy with continuous repair and redistribution for long-term durability

° Is it hard to manage?
• Want: automatic optimization, diagnosis and repair

° Who owns the aggregate resources?
• Want: utility infrastructure!

Page 53: First Observation: Want Utility Infrastructure

° Mark Weiser from Xerox: transparent computing is the ultimate goal
• Computers should disappear into the background

° In the storage context:
• Don't want to worry about backup
• Don't want to worry about obsolescence
• Need lots of resources to make data secure and highly available, BUT don't want to own them
• Outsourcing of storage is already becoming popular

° Pay a monthly fee and your "data is out there"
• Simple payment interface: one bill from one company

Page 54: Second Observation: Want Automatic Maintenance

° Can't possibly manage billions of servers by hand!

° System should automatically:
• Adapt to failure
• Repair itself
• Incorporate new elements

° Can we guarantee data is available for 1000 years?
• New servers added from time to time
• Old servers removed from time to time
• Everything just works

° Many components with geographic separation:
• System not disabled by natural disasters
• Can adapt to changes in demand and regional outages
• Gain in stability through statistics

Page 55: Utility-based Infrastructure

[Figure: a federation of providers (Pac Bell, Sprint, IBM, AT&T, Canadian OceanStore) cooperating to store data]

° Transparent data service provided by a federation of companies:
• Monthly fee paid to one service provider
• Companies buy and sell capacity from each other

Page 56: OceanStore: Everyone's Data, One Big Utility

° How many files in the OceanStore?
• Assume 10^10 people in the world
• Say 10,000 files/person (very conservative?)
• So 10^14 files in the OceanStore!
• If files were 1 GB (OK, a stretch), we would get 1 mole of bytes!

Truly impressive number of elements... but small relative to physical constants.

Aside: new results: 1.5 exabytes/year (1.5 x 10^18 bytes)

Page 57: OceanStore Assumptions

° Untrusted Infrastructure:
• The OceanStore is comprised of untrusted components
• Only ciphertext within the infrastructure

° Responsible Party:
• Some organization (i.e., a service provider) guarantees that your data is consistent and durable
• Not trusted with the content of data, merely its integrity

° Mostly Well-Connected:
• Data producers and consumers are connected to a high-bandwidth network most of the time
• Exploit multicast for quicker consistency when possible

° Promiscuous Caching:
• Data may be cached anywhere, anytime

° Optimistic Concurrency via Conflict Resolution:
• Avoid locking in the wide area
• Applications use an object-based interface for updates

Page 58: Use of Moore's law gains: The OceanStore Creed

° Question: Can we use Moore's law gains for something other than just raw performance?

° Examples:
• Stability through statistics
- Use of redundancy of servers, network packets, etc. in order to gain more predictable behavior
- Systems version of thermodynamics!
• Extreme durability (1000-year time scale?)
- Use of erasure coding and continuous repair
• Security and authentication
- Signatures and secure hashes in many places
• Continuous dynamic optimization

Page 59: Basic Structure: Irregular Mesh of "Pools"

Page 60: I/O Summary

° I/O performance is limited by the weakest link in the chain between the OS and the device

° Queueing theory is important

• 100% utilization means very large latency

• Remember, for M/M/1 queue (exponential source of requests/service)

- queue size goes as u/(1-u)

- latency goes as Tser×u/(1-u)

• For M/G/1 queue (more general server, exponential sources)

- latency goes as m1(z) x u/(1-u) = Tser x {1/2 x (1+C)} x u/(1-u)

° Three Components of Disk Access Time:

• Seek Time: advertised to be 8 to 12 ms. May be lower in real life.

• Rotational Latency: 4.1 ms at 7200 RPM and 8.3 ms at 3600 RPM

• Transfer Time: 2 to 12 MB per second

° I/O device notifying the operating system:

• Polling: it can waste a lot of processor time

• I/O interrupt: similar to an exception, except it is asynchronous

° Delegating I/O responsibility from the CPU: DMA, or even IOP

° Today: researchers are thinking about the wide scale for I/O