Chapter 12-2: Mass-Storage Systems


Overview of Mass Storage Structure – have spent lots of time here.

Chapter 12-2:
Disk Attachment
Disk Scheduling

Chapter 12-3:
Disk Management
Swap-Space Management
RAID Structure
Disk Attachment
Stable-Storage Implementation
Tertiary Storage Devices
Operating System Issues
Performance Issues


Disk Attachment


Essentially two approaches:

1. Host-attached storage – typical on small systems

2. Network-attached storage – storage accessed via a remote host in a distributed file system.


Disk Attachment – Host-Attached Storage (1 of 2)

Host-attached storage is accessed through I/O ports talking to I/O busses.

The typical attachment is the IDE (or ATA) I/O bus architecture. This, however, supports only two drives per I/O bus, which may be acceptable in many environments.

IDE is simply an abbreviation of either Intelligent Drive Electronics or Integrated Drive Electronics, depending on whom you ask.

An IDE interface is an interface for mass storage devices in which the controller is integrated into the disk or CD-ROM drive.

Although this really refers to a general technology, most people use the term to refer to the ATA specification, which uses this technology.


Disk Attachment – Host-Attached Storage (2 of 2)

The SCSI architecture itself is a bus that can support up to 16 devices, typically on one ribbon cable.

These sixteen devices may consist of the controller card in the host (the SCSI initiator) and up to 15 storage devices (the SCSI targets). We showed this picture in the last set of slides. Most SCSI targets are SCSI disks.

The SCSI protocol allows addressing of up to eight logical units in each SCSI target.

The SCSI architecture is very powerful. “Logical units” are often used to direct commands to components of a RAID array or components of a removable media library.
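As a tiny illustration of the addressing limits just described (16 bus IDs, 8 logical units per target in this classic parallel-SCSI picture), a validity check might look like the sketch below; this only encodes the numbers quoted above and is not part of any real SCSI stack.

```python
# Illustrative only: encodes the classic parallel-SCSI limits quoted above.
def valid_scsi_address(target_id: int, lun: int) -> bool:
    """True if (target_id, lun) fits the 16-ID bus / 8-LUN-per-target limits."""
    return 0 <= target_id <= 15 and 0 <= lun <= 7

assert valid_scsi_address(3, 5)       # target 3, LUN 5: addressable
assert not valid_scsi_address(3, 9)   # LUN 9 exceeds the 8-LUN-per-target limit
```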


Disk Attachment – Network-Attached Storage

Network-attached storage (NAS) is special-purpose storage made available over a network rather than over a local connection (such as a bus).

Two common protocols:

NFS (Network File System) and

CIFS (Common Internet File System).

A network file system is any computer file system that supports sharing of files, printers and other resources as persistent storage over a computer network.

NFS

the first widely used network file system

implemented via remote procedure calls (RPCs) between host and storage, carried over TCP or UDP on an IP network.

Often the network-attached storage units are implemented as a RAID array with software that implements the RPC interface.

Let’s see a visual on a network-attached storage setup…


In a typical network-attached storage setup, clients attached to a LAN or WAN connect to the network storage over that network.

Downside to network-attached storage: the need for large amounts of bandwidth, which slows data communications.

This bandwidth issue becomes quite significant in large client-server shops. As your book points out, there is competition for bandwidth between servers and clients, and then between servers and storage devices (in both directions).

Let’s expand this concept of “network-attached storage” to storage area networks.


Storage Area Network

Storage-area networks are private networks that are becoming more and more common. They use storage protocols rather than networking protocols.

A storage area network (SAN) is a high-speed, special-purpose network that interconnects different kinds of data storage devices with associated data servers on behalf of a larger network of users. It is often part of an overall network of computing resources, perhaps for a specific enterprise, such that the SAN may be clustered closely with other computing resources, e.g., a mainframe or a cluster of servers.

SANs are often found in remote locations, often for backup and archival storage. A nice feature: if a host is running low on storage, the SAN can allocate more storage to that host. SANs also allow clusters of servers to share the same storage, and allow storage arrays to include multiple direct host connections.


Boiling this all down, SANs are really a way of offering one of the oldest networking services – making access to data storage devices available to clients.

A SAN can be anything from two servers on a network accessing a central pool of storage devices to thousands of servers accessing many millions of megabytes of storage.

A SAN is really a separate network of storage devices, physically removed from but still connected to a network. (These sentences are taken from Storage Basics: Storage Area Networks, February 26, 2002, by Drew Bird.)


Disk Scheduling


The operating system is responsible for using hardware efficiently — for the disk drives, this means fast access time and disk bandwidth.

Access Time: As discussed, access time has two major components

Seek time is the time for the disk arm to move the heads to the cylinder containing the desired sector(s).

Rotational latency is the additional time waiting for the disk to rotate the desired sector to the disk head.

We want to minimize seek time; we do this by minimizing seeks, if possible; this means, normally, minimizing seek distance.

Disk bandwidth: the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last byte transferred.

Our concern: we can improve both access time and bandwidth by scheduling disk I/O requests in good order.

To that end, we will consider disk scheduling algorithms.
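Before doing so, here is a small worked example of the two metrics just defined. The RPM, seek-time, and byte-count values below are made-up illustration numbers, not measurements of any particular drive.

```python
# Rough per-request access time: seek time + average rotational latency.
# Average rotational latency is half a revolution: (60 / rpm) / 2 seconds.
def average_rotational_latency_ms(rpm: float) -> float:
    return (60.0 / rpm) / 2.0 * 1000.0

def access_time_ms(avg_seek_ms: float, rpm: float) -> float:
    return avg_seek_ms + average_rotational_latency_ms(rpm)

# Disk bandwidth as defined above: total bytes transferred divided by the
# total time from the first request to completion of the last transfer.
def disk_bandwidth_mb_per_s(total_bytes: int, total_seconds: float) -> float:
    return total_bytes / total_seconds / 1_000_000

# Illustration: a 7200 RPM drive with a 9 ms average seek
#   -> about 9 + 4.17 = 13.2 ms per random access.
print(access_time_ms(9.0, 7200))
# Illustration: 100 MB moved in 2.5 s of wall-clock time -> 40 MB/s.
print(disk_bandwidth_mb_per_s(100_000_000, 2.5))
```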


Disk Scheduling (Cont.)

Lots of parameters go into disk scheduling activities:
Is the request for service (system call) an input or output operation?
What is the disk address for the transfer to / from?
What is the memory address associated with the transfer?
How much is to be transferred?

Of course, if the disk and controller are immediately available, then the system call can be accommodated very quickly.

Otherwise, these calls will need to be queued for that particular drive. Then, too, once a request is completed, another request will be handled, but the selection algorithm that decides which request is handled next impacts overall disk performance very significantly!

What approach will provide the best performance and service to awaiting processes?

That’s what this section is all about.
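As a minimal sketch (the field names are illustrative, not taken from any particular kernel), the per-request record that ends up in the per-drive queue might look like this:

```python
from dataclasses import dataclass

@dataclass
class DiskRequest:
    """One pending disk I/O request, mirroring the parameters listed above."""
    is_write: bool        # input (read) or output (write) operation?
    disk_address: int     # cylinder/block to transfer to or from
    memory_address: int   # buffer address associated with the transfer
    byte_count: int       # how much is to be transferred

# The per-drive queue is then simply an ordered collection of such records;
# the scheduling algorithm decides which one is serviced next.
pending: list[DiskRequest] = []
```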


First Come First Served (FCFS)

Illustration shows total head movement of 640 cylinders.

Fair but not very efficient! Note the block requests for reading specific cylinders: there is a tremendous efficiency loss in the large head movements!
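A minimal Python sketch of FCFS head-movement accounting, assuming the request queue from the book's classic illustration (head at cylinder 53; requests 98, 183, 37, 122, 14, 124, 65, 67), which reproduces the 640-cylinder total quoted above.

```python
def fcfs(head: int, queue: list[int]) -> tuple[list[int], int]:
    """Service requests strictly in arrival order; return (order, total movement)."""
    total = 0
    for cylinder in queue:
        total += abs(cylinder - head)
        head = cylinder
    return list(queue), total

# Assumed textbook-style example: head at 53, queue of cylinder requests.
order, moved = fcfs(53, [98, 183, 37, 122, 14, 124, 65, 67])
print(order, moved)   # total movement is 640 cylinders for this queue
```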


Shortest Seek Time First (SSTF)

Selects the request in the queue with the minimum seek time from the current head position.

In truth, this algorithm does provide for major increases in performance, but disk requests may arrive in a random order. If so, we might experience starvation for a request not ‘near’ the current activity!

Stated equivalently, ‘next’ access served might have just arrived at the expense of other requests for disk access to ‘far away’ cylinders which may have been in the queue for a long time.

The next slide provides an illustration of total head movement of 236 cylinders. Much improved over FCFS, but… contrast the access served with the queue!


SSTF (Cont.)

Contrast the access served with the queue!

Not the best: a better algorithm could reduce total head movement to 208 cylinders. Having a queue of requests is reasonable, since requests can arrive much more quickly than they can be served.
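A sketch of SSTF under the same assumed queue: at each step it greedily picks the pending request closest to the current head position, which is exactly how starvation of far-away requests can arise.

```python
def sstf(head: int, queue: list[int]) -> tuple[list[int], int]:
    """Always service the pending request with the shortest seek from the head."""
    pending = list(queue)
    order, total = [], 0
    while pending:
        nearest = min(pending, key=lambda c: abs(c - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
        order.append(nearest)
    return order, total

order, moved = sstf(53, [98, 183, 37, 122, 14, 124, 65, 67])
print(order, moved)   # 236 cylinders for the assumed queue, as quoted above
```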


SCAN Scheduling

The disk arm starts at one end of the disk, and moves toward the other end, servicing requests until it gets to the other end of the disk, where the head movement is reversed and servicing continues.

Sometimes called the elevator algorithm.

Illustration shows total head movement of 208 cylinders. Improvement.

Overhead: need to know the direction disk arm will be moving relative to its current position.

Unfortunately, if a request arrives near to the disk arm but in the opposite direction, the wait might be very long!

This approach ‘assumes’ a relatively uniform distribution of requests.

In practice, this is rarely the case. Requests are usually ‘clumped.’


SCAN (Cont.)

Note: request 183 has a long wait, while 14 is serviced very quickly. SCAN minimizes head movement, but is it ‘fair’?

See the order of requests in the queue – but the scan is going to the left…
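A sketch of SCAN (the elevator algorithm) under the same assumed queue: the arm keeps moving in its current direction, servicing requests as it passes them, and only reverses at the end of the disk. The direction and disk size are parameters you supply.

```python
def scan(head: int, queue: list[int], direction: str = "down",
         max_cylinder: int = 199) -> tuple[list[int], int]:
    """Elevator algorithm: sweep to one end of the disk, then reverse."""
    lower = sorted(c for c in queue if c < head)      # requests below the head
    upper = sorted(c for c in queue if c >= head)     # requests at or above it
    if direction == "down":
        # Sweep down to cylinder 0, then back up through the remaining requests.
        order = lower[::-1] + upper
        total = head + (upper[-1] if upper else 0)
    else:
        # Sweep up to the last cylinder, then back down.
        order = upper + lower[::-1]
        total = (max_cylinder - head) + (max_cylinder - lower[0] if lower else 0)
    return order, total

print(scan(53, [98, 183, 37, 122, 14, 124, 65, 67], "down"))
```

Note that for this assumed queue a strict sweep all the way to cylinder 0 before reversing travels 53 + 183 = 236 cylinders; the 208-cylinder figure in the illustration corresponds to reversing at the last request (14) rather than at the disk edge, which is the LOOK behaviour discussed a few slides later.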


Circular Scan (C-SCAN)

Provides a more uniform (more fair?) wait time than SCAN.

The head moves from one end of the disk to the other, servicing requests as it goes (no change here from SCAN).

But when it reaches the other end, it immediately returns to the beginning of the disk, without servicing any requests on the return trip.

Treats the cylinders as a circular list that wraps around from the last cylinder to the first one.

Can you see the inefficiencies? At least from the requestor perspective….

See next slide.


C-SCAN (Cont.)

Note how long the request for cylinder 37 must wait, while 65 is serviced right away!

Then a complete reset back to the starting cylinder.
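A sketch of C-SCAN under the same assumptions: the arm sweeps upward servicing requests, then returns to cylinder 0 without servicing anything, treating the cylinders as a circular list. The return-sweep distance is counted as head movement here; some presentations omit it, so totals can differ.

```python
def c_scan(head: int, queue: list[int],
           max_cylinder: int = 199) -> tuple[list[int], int]:
    """Sweep upward servicing requests, jump back to cylinder 0, and continue."""
    upper = sorted(c for c in queue if c >= head)   # serviced on the way up
    lower = sorted(c for c in queue if c < head)    # serviced after the wrap-around
    order = upper + lower
    total = max_cylinder - head                     # finish the upward sweep
    if lower:
        total += max_cylinder                       # return sweep to cylinder 0
        total += lower[-1]                          # then up to the last low request
    return order, total

print(c_scan(53, [98, 183, 37, 122, 14, 124, 65, 67]))
```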


C-LOOK

Both of the previous algorithms are easy to understand, but inherent unfairness is a concern.

A better approach is, upon reaching the last request in one direction – but not proceeding all the way to the end of the disk – to change direction and proceed in the opposite direction.

These are the LOOK variants: LOOK is this version of SCAN, and C-LOOK is the corresponding version of C-SCAN. Both look for a pending request before continuing to move in a given direction, so the arm goes only as far as the final request in each direction rather than all the way to the end of the disk.


C-LOOK (Cont.)
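Sketches of LOOK and C-LOOK under the same assumed queue: each reverses (or wraps) at the last pending request in the current direction rather than at the physical end of the disk.

```python
def look(head: int, queue: list[int],
         direction: str = "down") -> tuple[list[int], int]:
    """Like SCAN, but reverse at the last request instead of the disk edge."""
    lower = sorted(c for c in queue if c < head)
    upper = sorted(c for c in queue if c >= head)
    order = lower[::-1] + upper if direction == "down" else upper + lower[::-1]
    total = sum(abs(b - a) for a, b in zip([head] + order, order))
    return order, total

def c_look(head: int, queue: list[int]) -> tuple[list[int], int]:
    """Like C-SCAN, but wrap from the highest pending request to the lowest."""
    upper = sorted(c for c in queue if c >= head)
    lower = sorted(c for c in queue if c < head)
    order = upper + lower
    total = sum(abs(b - a) for a, b in zip([head] + order, order))
    return order, total

print(look(53, [98, 183, 37, 122, 14, 124, 65, 67], "down"))   # 208 cylinders
print(c_look(53, [98, 183, 37, 122, 14, 124, 65, 67]))
```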


Selecting a Disk-Scheduling Algorithm

So which is the right choice – if there is in fact an optimal choice?

SSTF is common and has a natural appeal.

Does better than FCFS for sure.

SCAN and C-SCAN perform better for systems that place a heavy load on the disk.

Much less likely to have starvation problems since they do ‘scan’ the entire disk.

Overall performance always depends on the number and types of requests.

Attempts at optimization may not be worth savings over SSTF or SCAN.
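Pulling the earlier sketches together, a small driver can compare the algorithms on one workload. It reuses the fcfs, sstf, scan, c_scan, look, and c_look functions defined above, and the queue is the same assumed textbook-style example; real results will of course vary with the number and types of requests.

```python
# Uses the fcfs, sstf, scan, c_scan, look and c_look sketches defined earlier.
head, queue = 53, [98, 183, 37, 122, 14, 124, 65, 67]

for name, run in [("FCFS",   lambda: fcfs(head, queue)),
                  ("SSTF",   lambda: sstf(head, queue)),
                  ("SCAN",   lambda: scan(head, queue, "down")),
                  ("C-SCAN", lambda: c_scan(head, queue)),
                  ("LOOK",   lambda: look(head, queue, "down")),
                  ("C-LOOK", lambda: c_look(head, queue))]:
    order, moved = run()
    print(f"{name:7s} {moved:4d} cylinders  order: {order}")
```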


Disk Scheduling – continued

Requests for disk service can be influenced by the file-allocation method.

Contiguous file allocation means that subsequent requests will probably be nearby (theory of locality).

Linked file allocation – one cannot assert this at all.

Also, the location of the directory is important, because directories are always searched / accessed first for opens and closes. This suggests that these structures are searched and updated very frequently.

Many times directories are on the first cylinder of the disk.

But the data could be on the last cylinder.

Put directories in the middle?


Disk Scheduling – continued

Indices and their locations for, say, indexed sequential file organizations require access, searching, and updating too.

Caching directories and indices will help – especially for read and write operations.

Your book points out:

The disk-scheduling algorithm should be written as a separate module of the operating system, allowing it to be replaced with a different algorithm if necessary.

Overall, either SSTF or LOOK is a reasonable choice for the default algorithm.
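One way to picture the "separate module" advice is as a pluggable policy: the rest of the I/O path holds a reference to a scheduling function and never cares which one is installed. A minimal sketch follows, reusing the functions defined above; the names and registry are illustrative, not from any real kernel.

```python
# A registry of interchangeable policies; swapping the default is one assignment.
SCHEDULERS = {
    "FCFS": fcfs,
    "SSTF": sstf,
    "LOOK": lambda h, q: look(h, q, "down"),
}

current_policy = SCHEDULERS["LOOK"]          # e.g., LOOK as a reasonable default

def service_queue(head: int, queue: list[int]) -> list[int]:
    """The rest of the I/O subsystem calls this without knowing the policy."""
    order, _ = current_policy(head, queue)
    return order
```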


Disk Scheduling – other than for Seeks

There are other issues in trying to gain improved performance.

Latency (determined by the speed of rotation) can be very significant.

In some cases, latency can be as large as average seek time.

Some manufacturers burn disk-scheduling algorithms into the disk controller hardware.

In this way, the controller can queue requests and schedule them to improve both seek time and rotational latency.

Any code implemented in hardware will be faster than corresponding code implemented in software.

There are other very real, subtle problems not addressed here.


End of Chapter 12.2