

An Analysis of Data Corruption in the Storage Stack

Lakshmi N. Bairavasundaram, Andrea C. Arpaci-Dusseau, Remzi H. Arpaci-Dusseau
University of Wisconsin-Madison

Garth R. Goodson, Network Appliance, Inc.

Bianca Schroeder, University of Toronto

2

Does Data Corruption Occur?

The corrupt remains of a photo I stored on my laptop

3

Corruption Anecdote

• The thumbnail is still uncorrupted
• A few other photos in the same directory were corrupted
  – Spatial locality?
• Another file was unreadable
  – Corrupt metadata or latent sector errors?
  – Does corruption correlate with other errors?
• System designers know of similar occurrences
  – Data protection is often based on anecdotes
• Anecdotes: interesting and useful, but not enough
  – A more rigorous understanding is needed

4

Our Analysis

• First large-scale study of data corruption
  – 1.53 million disks in 1000s of NetApp systems
• Time period
  – 41 months (Jan 2004 – Jun 2007)
• Corruption detection
  – Using various data protection techniques
  – Network Appliance Autosupport Database
• Also used in latent sector error [Bairavasundaram07] and disk and storage failure [Jiang08] studies

5

Questions

• What kinds of corruption occur and how often?
• Does disk class matter?
  – Expensive enterprise (FC) disks versus cheaper nearline (SATA) disks
• Does disk drive product matter?
• Are corruption instances independent?
• Do corruption instances have spatial locality?

6

Talk Outline

• Introduction

• Background
  – Data corruption
  – Protection techniques

• Results

• Lessons

• Conclusion

7

Data Corruption

• Data stored on a disk block is incorrect
• Many sources
  – Software bugs: file system, software RAID, device drivers, etc.
  – Firmware bugs: disk drives, shelf controllers, adapters, etc.
• Corruption is silent
  – Not reported by the disk drive
  – Could have greater impact than other errors

8

Forms of Data Corruption

• Bit corruption
  – Contents of an existing disk block are modified
  – Data being written to a disk block is corrupted
• Lost writes
  – Disk write not performed but completion is reported
• Misdirected writes
  – Data is written to the wrong disk block
• Torn writes
  – Data only partially written but completion is reported
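The four forms can be illustrated with a toy model. In the sketch below the "disk" is just a Python dict and all function names are hypothetical; each failure mode diverges from a correct write while the caller still sees success.

```python
# Illustrative model of the four corruption forms listed above.
# The "disk" is a dict mapping block number -> bytes; all names are
# hypothetical and the model ignores real drive mechanics.

disk = {}

def good_write(block_no: int, data: bytes) -> None:
    disk[block_no] = data                      # data lands where intended

def bit_corruption(block_no: int, data: bytes) -> None:
    flipped = bytes([data[0] ^ 0x01]) + data[1:]
    disk[block_no] = flipped                   # contents silently modified

def lost_write(block_no: int, data: bytes) -> None:
    pass                                       # nothing written, yet "success" is reported

def misdirected_write(block_no: int, data: bytes) -> None:
    disk[block_no + 1] = data                  # data lands on the wrong block

def torn_write(block_no: int, data: bytes) -> None:
    disk[block_no] = data[: len(data) // 2]    # only part of the data is written

# In every failure case the caller sees a successful completion,
# which is why these corruptions are silent.
```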

9

NetApp® System

[Diagram: the storage stack (Client interface (NFS), WAFL® file system, RAID layer, Storage layer, Disk drives), with corruption reports collected in the Autosupport database]

• WAFL® file system
  – Store, verify block identity (Inode X, offset Y)
  – Detect identity discrepancy
  – Lost or misdirected writes
• RAID layer
  – Parity generation, reconstruction on failure
  – Data scrubbing: read blocks, verify parity
  – Detect parity inconsistency
  – Lost or misdirected writes, parity miscalculations
• Storage layer
  – Store, verify checksum
  – Detect checksum mismatch
  – Bit corruptions, torn writes
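A minimal sketch of the two per-block checks described above, assuming a simplified block format; the field names, the SHA-1 choice, and the error messages are illustrative, not the on-disk layout used by the systems in the study.

```python
import hashlib
from dataclasses import dataclass

# Minimal sketch of per-block protection: a checksum detects bit corruption
# and torn writes; a stored identity (inode, offset) detects lost or
# misdirected writes. Field layout and names are illustrative only.

@dataclass
class StoredBlock:
    data: bytes
    checksum: bytes       # written by the storage layer alongside the data
    inode: int            # block identity written by the file system
    offset: int

def write_block(data: bytes, inode: int, offset: int) -> StoredBlock:
    return StoredBlock(data, hashlib.sha1(data).digest(), inode, offset)

def read_block(blk: StoredBlock, inode: int, offset: int) -> bytes:
    if hashlib.sha1(blk.data).digest() != blk.checksum:
        raise IOError("checksum mismatch: bit corruption or torn write")
    if (blk.inode, blk.offset) != (inode, offset):
        raise IOError("identity discrepancy: lost or misdirected write")
    return blk.data
```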

10

Talk Outline

• Introduction

• Background

• Results

• Lessons

• Conclusion

11

Overall Numbers

What percentage of disks are affected by the different kinds of corruption?

12

Overall Numbers (% disks affected in 17 months of use)

• ~10 times fewer disks than latent sector errors
• Higher % of Nearline disks affected
  – An order of magnitude more than enterprise disks
• Bit corruptions or torn writes affect more disks than lost or misdirected writes

  Corruption type          Nearline (SATA)   Enterprise (FC)
  Checksum mismatches      0.661%            0.059%
  Parity inconsistencies   0.147%            0.017%
  Identity discrepancies   0.042%            0.006%

13

Checksum Mismatch (CM) Analysis

1. Factors
   • Disk class (Nearline / Enterprise)
   • Disk model
   • Disk age
   • Disk size (capacity)
   • Workload
2. Characteristics
   • CMs per corrupt disk
   • Independence
   • Spatial locality
   • Temporal locality
3. Correlations with other errors
   • Not-ready conditions
   • Latent sector errors
   • System reset
4. Request type
   • Scrubs vs. file system reads, etc.

14

Checksum Mismatch (CM) Analysis

1. Factors
   • Disk class
   • Disk model
   • Disk age
   • Disk size
   • Workload
2. Characteristics
3. Correlations with other errors
4. Request type

15

Factors

• Do disk class, model, or age affect the development of checksum mismatches?
  – Disk class: Nearline (SATA) or Enterprise (FC)
  – Disk model: specific disk drive product (say, vendor V's disk product P of capacity 80 GB)
  – Disk age: time in the field since ship date
• Can we use these factors to determine corruption-handling policies or mechanisms?
  – Example: aggressive scrubbing for some disks (a hypothetical sketch follows)
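As a purely hypothetical illustration of such a policy, the sketch below picks a scrub interval from disk class and model; the intervals, thresholds, and the "risky model" set are invented for illustration and are not findings of the study.

```python
# Hypothetical corruption-handling policy keyed on disk class and model.
# Intervals and the "risky" set are illustrative, not from the study.

RISKY_NEARLINE_MODELS = {"E-1"}      # e.g. the anomalous model discussed later

def scrub_interval_days(disk_class: str, model: str) -> int:
    if model in RISKY_NEARLINE_MODELS:
        return 3                     # scrub aggressively for known-bad models
    if disk_class == "nearline":
        return 7                     # nearline (SATA) disks see more corruption
    return 14                        # enterprise (FC) disks see the least

print(scrub_interval_days("nearline", "E-1"))   # -> 3
```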

16

Class, Model, Age – Nearline

• Fraction of disks affected varies across models
  – From 0.27% to 3.51%
• More than 3%: 4 out of 6 models
• Response to age also varies

[Figure: % of disks with at least 1 CM vs. disk age in months (0–18), per nearline disk model]

17

Class, Model, Age – Enterprise

• Fraction of disks affected varies across models
  – From 0% to 0.17%
  – All less than the lowest nearline model (0.27%)
• Response to age also varies

[Figure: % of disks with at least 1 CM vs. disk age in months (0–18), per enterprise disk model]

18

Factors – Summary

• Class and model matter
  – Nearline disks require greater attention
• Effect of age is unclear
  – Cannot use age-specific corruption handling

19

Checksum Mismatch (CM) Analysis

1. Factors
2. Characteristics
   • CMs per corrupt disk
   • Independence
   • Spatial locality
   • Temporal locality
3. Correlations with other errors
4. Request type

20

CMs per Corrupt Disk

• Corrupt disk: A disk with at least 1 checksum mismatch (CM)

• How many CMs does a corrupt disk have?

• Should we “fail-out” disks when one corruption is detected?

21

CMs per Corrupt Disk – Nearline

• CMs per corrupt disk is low
  – 50% of corrupt disks have ≤ 2 CMs
  – 90% of corrupt disks have ≤ 100 CMs
• Anomaly: E-1
  – Develops many CMs

[Figure: cumulative % of corrupt disks with ≤ X CMs, for X from 1 to 1K, per nearline disk model]
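A cumulative distribution like the one in the figure can be computed directly from per-disk mismatch counts. In the sketch below the counts are made-up placeholders, not data from the study.

```python
# Sketch: cumulative fraction of corrupt disks with <= X checksum mismatches.
# cm_counts holds one entry per corrupt disk; the values here are made up.

cm_counts = [1, 1, 2, 2, 3, 7, 40, 120, 650]          # illustrative only
thresholds = [1, 2, 3, 4, 5, 10, 20, 50, 100, 200, 500, 1000]

for x in thresholds:
    frac = sum(c <= x for c in cm_counts) / len(cm_counts)
    print(f"<= {x:4d} CMs: {frac:.0%} of corrupt disks")
```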

22

CMs per Corrupt Disk – Enterprise

• CMs per corrupt disk are higher
  – 50% of corrupt disks have ≤ 10 CMs (2 for Nearline)
  – 90% of corrupt disks have ≤ 200 CMs (100 for Nearline)

[Figure: cumulative % of corrupt disks with ≤ X CMs, for X from 1 to 1K, per enterprise disk model]

23

CMs per Corrupt Disk – Summary

• Class and model matter

• Fewer enterprise disks have CMs, but corrupt enterprise disks have more CMs
  – Fail out enterprise disks on the first CM
• Corrupt nearline disks develop fewer CMs
  – There can be anomalies (disk model E-1)

24

Other Characteristics

• Very high spatial locality
  – When multiple CMs occur, they are often for consecutive disk blocks
• High temporal locality
• Not independent
  – Even over different disks in the same system
  – Defect may be in common hardware components (example: shelf controller)

25

Checksum Mismatch (CM) Analysis

1. Factors
2. Characteristics
3. Correlations with other errors
4. Request type
   • Scrubs vs. file system reads, etc.

26

Request Type

• What types of disk requests detect checksum mismatches?

• Is data scrubbing useful?

27

Request Type

• Data scrubbing finds most CMs
  – Nearline: 49%
  – Enterprise: 73%
• Reconstruction also finds CMs
  – Nearline: 9%
  – Enterprise: 4%

[Figure: % of CMs discovered by each request type, per disk model]

28

Request Type – Summary

• Data scrubbing appears to be very useful
  – A study of scrub rates and workload impact is needed
• Mismatches are found during reconstruction
  – Data loss without double-disk-failure protection [Alvarez97, Blaum94, Corbett04, Park95, Hafner05]
  – More aggressive scrubbing may be needed (a minimal scrub sketch follows)
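For reference, a scrub pass over a single parity-protected stripe boils down to recomputing the XOR of the data blocks and comparing it with the stored parity. The sketch below is a minimal illustration under that assumption, not the RAID layer's actual implementation.

```python
from functools import reduce

# Sketch of a scrub check on one RAID stripe: recompute parity from the
# data blocks and compare with the stored parity block. A mismatch points
# at a lost/misdirected write or a parity miscalculation.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def scrub_stripe(data_blocks: list[bytes], stored_parity: bytes) -> bool:
    computed = reduce(xor_blocks, data_blocks)
    return computed == stored_parity          # False -> parity inconsistency

d0, d1, d2 = b"\x01" * 8, b"\x02" * 8, b"\x04" * 8
parity = reduce(xor_blocks, [d0, d1, d2])
print(scrub_stripe([d0, d1, d2], parity))           # True: stripe is consistent
print(scrub_stripe([d0, d1, b"\x00" * 8], parity))  # False: inconsistency found
```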

29

Interesting Behavior

Do system designers need to factor in any abnormal behavior?

30

Block numbers are not created equal!

• Typically, each block number has 1 disk where it is corrupt
• A series of block numbers is corrupt on many disks
  – A block-number-specific bug?

[Figure: number of disks with a CM at block X, plotted over the block number space, for disk model E-1]
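The counting behind a plot like this is straightforward: for each block number, count the distinct disks of one model that reported a checksum mismatch there. The sketch below uses made-up (disk, block) pairs rather than the study's data.

```python
from collections import Counter

# Sketch of the analysis behind the plot: for each block number, count how
# many distinct disks (of one model) saw a checksum mismatch there.
# cm_events pairs are (disk_id, block_number); the values are made up.

cm_events = [("d1", 1000), ("d2", 1000), ("d3", 1000),   # shared block number
             ("d1", 52),   ("d4", 7)]                    # one-off corruptions

disks_per_block = Counter(block for _, block in set(cm_events))
for block, ndisks in disks_per_block.most_common():
    print(f"block {block}: {ndisks} disk(s) with a CM")
```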

31

Talk Outline

• Introduction

• Background

• Results

• Lessons

• Conclusion

32

Lessons

• Data corruption does occur
  – Even rare errors like lost writes do occur
  – Corruption-handling mechanisms are essential
• Very few enterprise disks develop corruption
  – “Fail out” these disks on the first corruption detection
• High spatial locality
  – Spread out redundant data within the same disk


34

Lessons (contd.)

• Temporal locality, consecutive blocks affected
  – Corruption may occur during the same write operation
  – Write redundant data with separate disk requests, spaced out over time
• Corruption could be block-number specific
  – “Staggered” RAID stripes could be used (see the sketch below)

[Diagram: staggered RAID stripes across the block number space from 0 to N]
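The "staggered stripes" idea can be sketched as rotating the mapping from stripe position to disk with the stripe number, so a bug tied to one block number or stripe position does not hit the same role on every stripe. This is a toy layout under assumed geometry, not the layout proposed or evaluated in the study.

```python
# Toy sketch of "staggered" striping: rotate the mapping from stripe
# position to disk by the stripe number. Real RAID geometry is more involved.

def disk_for(stripe_no: int, position: int, num_disks: int) -> int:
    return (position + stripe_no) % num_disks

for stripe in range(4):
    layout = [disk_for(stripe, pos, 4) for pos in range(4)]
    print(f"stripe {stripe}: positions map to disks {layout}")
```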

35

Conclusion

• Our analysis
  – First large-scale study of data corruption
  – Corruptions detected by NetApp production systems
• Data corruptions do occur
  – Affect ~10 times fewer disks than latent sector errors
  – Nearline (SATA) disks are the most affected
  – Corruption-handling mechanisms are essential
• Data corruption characteristics
  – Depend on disk class and disk model
  – Not independent (both within a disk and within a system)
  – High spatial and temporal locality
  – May occur at specific block numbers

36

Thank You!

Advanced Systems Lab (ADSL), University of Wisconsin-Madison
http://www.cs.wisc.edu/adsl

Advanced Technology Group (ATG), Network Appliance, Inc.
http://www.netapp.com/company/research/

Department of Computer Science, University of Toronto
http://www.cs.toronto.edu/~bianca

37

Spatial Locality – Nearline

38

Spatial Locality – Enterprise

39

Temporal Locality – Nearline

40

Temporal Locality – Enterprise

41

Temporal Locality

• Isn’t temporal locality tied to when the mismatch is detected?
  – Yes, it may be due to scrubbing
  – We also looked at mismatches detected more than 2 weeks apart
• We found strong autocorrelation across mismatches up to 10 months apart
• Indicates temporal locality is fairly independent of detection time
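Autocorrelation of per-interval mismatch counts is one way to make this check concrete. The sketch below uses an invented series and a plain lag-k autocorrelation, not the study's actual method or data.

```python
import statistics

# Sketch: lag-k autocorrelation of per-interval checksum-mismatch counts.
# A strong positive value at large lags suggests temporal locality that is
# not just an artifact of when scrubs run. The series below is made up.

def autocorr(series: list[float], lag: int) -> float:
    mean = statistics.fmean(series)
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(len(series) - lag))
    return cov / var

cm_per_interval = [0, 0, 5, 6, 4, 0, 0, 7, 8, 6, 0, 0]   # illustrative counts
print(autocorr(cm_per_interval, lag=1))
```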

42

Non-independence within systems

• We found a system with 92 corrupt disks
• Fraction of systems = 1 × 10^-5
  (about 100,000 systems in the study)
• Probability of such an occurrence under independence ≈ 1 × 10^-12
  (based on the fraction of disks that develop CMs)
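The back-of-the-envelope nature of this number can be reproduced with a binomial tail probability. In the sketch below the disks-per-system count and the per-disk mismatch probability are assumed values chosen only to show the calculation, not the study's parameters.

```python
from math import comb

# Back-of-the-envelope sketch: if checksum mismatches were independent
# across disks, how likely is a system with many corrupt disks?
# n (disks per system) and p (per-disk CM probability) are assumed values.

def tail_prob(n: int, p: float, k: int) -> float:
    """P(at least k of n independent disks are corrupt)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(tail_prob(n=120, p=0.0066, k=92))   # vanishingly small under independence
```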

43

Comparison: Latent Sector Errors

  Characteristic                           Latent sector errors            Checksum mismatches
  As age increases, % of disks affected    NL: increases, ES: decreases    Varies across disk models
  Disk size                                Matters                         Unclear
  Spatial locality                         At 1–10 MB                      At 4 KB

In addition: Both errors are not independent, have very high temporal locality, are often detected by scrubbing, and have correlations with each other

44

Spatial Locality

• Use locality radius to measure locality
  – 100-block radius: 2/5 errors have a neighbor
  – 1000-block radius: 4/5 errors have a neighbor

[Diagram: errors plotted over the logical block number space, from the beginning of the disk (block 0) to the end, with 100-block and 1000-block locality radii marked]
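A locality-radius measurement amounts to asking, for each corrupt block, whether another corrupt block on the same disk lies within R blocks. The sketch below uses invented block numbers chosen to reproduce the 2/5 vs. 4/5 example in the diagram.

```python
# Sketch of the locality-radius measure: for each corrupt block, does another
# corrupt block on the same disk fall within R blocks? Block numbers are
# made up to mirror the 2/5 vs. 4/5 example on the slide.

def fraction_with_neighbor(blocks: list[int], radius: int) -> float:
    hits = sum(any(b != other and abs(b - other) <= radius for other in blocks)
               for b in blocks)
    return hits / len(blocks)

errors = [1000, 1050, 5000, 5600, 9000]
print(fraction_with_neighbor(errors, 100))    # 0.4 -> 2 of 5 have a neighbor
print(fraction_with_neighbor(errors, 1000))   # 0.8 -> 4 of 5 have a neighbor
```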

45

References

• [Alvarez97] G. A. Alvarez, W. A. Burkhard, and F. Cristian. "Tolerating Multiple Failures in RAID Architectures with Optimal Storage and Uniform Declustering". In Proceedings of the 24th Annual International Symposium on Computer Architecture, pages 62-72, 1997.
• [Blaum94] M. Blaum, J. Brady, J. Bruck, and J. Menon. "EVENODD: An Efficient Scheme for Tolerating Double Disk Failures in RAID Architectures". In Proceedings of the Annual International Symposium on Computer Architecture, pages 245-254, 1994.
• [Corbett04] P. Corbett, B. English, A. Goel, T. Grcanac, S. Kleiman, J. Leong, and S. Sankar. "Row-Diagonal Parity for Double Disk Failure Correction". In Proceedings of the Third USENIX Conference on File and Storage Technologies (FAST), pages 1-14, 2004.
• [Hafner05a] J. L. Hafner, V. Deenadhayalan, K. K. Rao, and J. A. Tomlin. "Matrix Methods for Lost Data Reconstruction in Erasure Codes". In Proceedings of the Fourth USENIX Conference on File and Storage Technologies (FAST), San Francisco, CA, December 2005.
• [Hafner05b] J. L. Hafner. "WEAVER Codes: Highly Fault Tolerant Erasure Codes for Storage Systems". In Proceedings of the Fourth USENIX Conference on File and Storage Technologies (FAST), San Francisco, CA, December 2005.
• [Park95] C. I. Park. "Efficient Placement of Parity and Data to Tolerate Two Disk Failures in Disk Array Systems". IEEE Transactions on Parallel and Distributed Systems, November 1995.

46

Terminology

• Disk class
  – Nearline: ATA interface, secondary storage
  – Enterprise: Fibre Channel interface, primary storage
  – Enterprise disks are higher-performance, better-built, more flexible disks
• Disk family
  – A particular disk product
  – E.g., Quantum Fireball EX
  – Denoted ‘A’ to ‘E’ (nearline), ‘f’ to ‘o’ (enterprise)

47

Terminology (contd.)

• Disk model
  – Combination of disk family and a particular size
  – E.g., Quantum Fireball EX 6.4 GB
  – Denoted ‘E-1’, ‘E-2’, etc.
• Disk age
  – Amount of time in the field since ship date
• Corrupt disk
  – A disk with at least one checksum mismatch

48

Types of Data Corruption

• Checksum mismatch (CM)
  – The data does not match the checksum
  – Causes: data bit corruption, torn writes
• Parity inconsistency (PI)
  – The parity does not match the data blocks
  – Causes: lost writes, misdirected writes, incorrect parity calculation
• Identity discrepancy (ID)
  – The disk block identity does not match the expected identity on a file read
  – Causes: lost writes, misdirected writes

49

Overall Numbers (% disks affected in 17 months of use)

• Checksum mismatches
  – Nearline: 0.66%, Enterprise: 0.06%
• Parity inconsistencies
  – Nearline: 0.147%, Enterprise: 0.017%
• Identity discrepancies
  – Nearline: 0.042%, Enterprise: 0.006%
• Higher % of Nearline disks affected
  – An order of magnitude more than enterprise disks
• Bit corruptions or torn writes are important
  – Affect more disks than lost or misdirected writes

50

Important Results

• Data corruption does occur
  – Affects ~10 times fewer disks than latent sector errors
• Class matters: expensive is better
  – More Nearline (SATA) disks affected (0.66%)
  – Fewer Enterprise (FC) disks affected (0.06%)
• Model matters
  – Different disk models show different behavior
• Corruption instances are not independent
  – Instances show high spatial and temporal locality
• The number of corruptions is low in general

51

52

Disk Block Protection
