236601 - Coding and Algorithms for Memories, Lecture 2

Feb 22, 2016
Transcript
Page 1: 236601 - Coding and Algorithms  for  Memories Lecture 2

236601 - Coding and Algorithms for Memories
Lecture 2

Page 2: 236601 - Coding and Algorithms  for  Memories Lecture 2

2

Overview
• Lecturer: Eitan Yaakobi, [email protected], Taub 638
• Lecture hours: Thur 12:30-14:30 @ Taub 8
• Course website: http://webcourse.cs.technion.ac.il/236601/Spring2014/
• Office hours: Thur 14:30-15:30 and/or other times (please contact by email beforehand)
• Final grade:
  – Class participation (10%)
  – Homeworks (50%)
  – Take-home exam/final homework + project (40%)

Page 3: 236601 - Coding and Algorithms  for  Memories Lecture 2

3

What is this class about? Coding and Algorithms for Memories
• Memories – HDDs, flash memories, and other non-volatile memories
• Coding and algorithms – how to manage the memory and handle the interface between the physical level and the operating system
• Both from the theoretical and practical points of view
• Q: What is the difference between theory and practice?

Page 4: 236601 - Coding and Algorithms  for  Memories Lecture 2

4

You do not really understand something unless you can explain it to your grandmother.

Page 5: 236601 - Coding and Algorithms  for  Memories Lecture 2

5

One of the focuses of this class: how to ask the right questions, both as a theorist and as a practical engineer

Page 6: 236601 - Coding and Algorithms  for  Memories Lecture 2

6

Memory Storage
• Computer data storage (from Wikipedia): computer components, devices, and recording media that retain digital data used for computing for some interval of time
• What kind of data?
  – Pictures, Word files, movies, other computer files, etc.
• What kind of memories?
  – Many kinds…

Page 7: 236601 - Coding and Algorithms  for  Memories Lecture 2

7

Memories
• Volatile memories – need power to maintain the information
  – Ex: RAM memories (DRAM, SRAM)
• Non-volatile memories – do NOT need power to maintain the information
  – Ex: HDD, optical discs (CD, DVD), flash memories
• Q: Examples of old non-volatile memories?

Page 8: 236601 - Coding and Algorithms  for  Memories Lecture 2

8

Some of the main goals in designing computer storage:
• Price
• Capacity (size)
• Endurance
• Speed
• Power consumption

Page 9: 236601 - Coding and Algorithms  for  Memories Lecture 2

9

Optical Storage
• First generation – CD (Compact Disc), 700MB
• Second generation – DVD (Digital Versatile Disc), 4.7GB, 1995
• Third generation – BD (Blu-ray Disc)
  – Blue laser (shorter wavelength)
  – A single layer can store 25GB, dual layer – 50GB
  – Supported by Sony, Apple, Dell, Panasonic, LG, Pioneer

Page 10: 236601 - Coding and Algorithms  for  Memories Lecture 2

10

Page 11: 236601 - Coding and Algorithms  for  Memories Lecture 2

11

Page 12: 236601 - Coding and Algorithms  for  Memories Lecture 2

12

The Magnetic Hard Disk Drive
[Figure: disk drive with a suspended MR head and slider over a rotating thin-film disk; a recording track and its track width, with "1"/"0" magnetization patterns]

Page 13: 236601 - Coding and Algorithms  for  Memories Lecture 2

13

Flash Memories


Page 14: 236601 - Coding and Algorithms  for  Memories Lecture 2

14

[Figure; source: Gartner & Phison]

Page 15: 236601 - Coding and Algorithms  for  Memories Lecture 2

15

SLC, MLC and TLC Flash
[Figure: cell voltage distributions, from low voltage to high voltage, for the three cell types]
• SLC flash – 1 bit per cell, 2 states
• MLC flash – 2 bits per cell, 4 states (01, 00, 10, 11)
• TLC flash – 3 bits per cell, 8 states

Page 16: 236601 - Coding and Algorithms  for  Memories Lecture 2

16

Flash Memory Structure
• A group of cells constitutes a page
• A group of pages constitutes a block
  – In SLC flash, a typical block layout is as follows:

    page 0    page 1
    page 2    page 3
    page 4    page 5
      ⋮         ⋮
    page 62   page 63

Page 17: 236601 - Coding and Algorithms  for  Memories Lecture 2

17

Flash Memory Structure
• In MLC flash the two bits within a cell DO NOT belong to the same page – there is an MSB page and an LSB page
• Given a group of cells, all the MSBs constitute one page and all the LSBs constitute another page

Row index | MSB of first 2^14 cells | LSB of first 2^14 cells | MSB of last 2^14 cells | LSB of last 2^14 cells
0  | page 0   | page 4   | page 1   | page 5
1  | page 2   | page 8   | page 3   | page 9
2  | page 6   | page 12  | page 7   | page 13
3  | page 10  | page 16  | page 11  | page 17
⋮  | ⋮        | ⋮        | ⋮        | ⋮
30 | page 118 | page 124 | page 119 | page 125
31 | page 122 | page 126 | page 123 | page 127

Page 18: 236601 - Coding and Algorithms  for  Memories Lecture 2

18

MLC Write Process
[Figure: two-step programming. First the MSB is written, giving two distributions (MSB=1, MSB=0) separated by Vread; then the LSB is written, splitting them into the four states (MSB=1,LSB=1), (MSB=1,LSB=0), (MSB=0,LSB=0), (MSB=0,LSB=1) with program-verify levels PV1, PV2, PV3]

Page 19: 236601 - Coding and Algorithms  for  Memories Lecture 2

19

Flash Memory Structure

Row index | MSB of first 2^16 cells | CSB of first 2^16 cells | LSB of first 2^16 cells | MSB of last 2^16 cells | CSB of last 2^16 cells | LSB of last 2^16 cells
0  | page 0   | –        | –        | page 1   | –        | –
1  | page 2   | page 6   | page 12  | page 3   | page 7   | page 13
2  | page 4   | page 10  | page 18  | page 5   | page 11  | page 19
3  | page 8   | page 16  | page 24  | page 9   | page 17  | page 25
4  | page 14  | page 22  | page 30  | page 15  | page 23  | page 31
⋮  | ⋮        | ⋮        | ⋮        | ⋮        | ⋮        | ⋮
62 | page 362 | page 370 | page 378 | page 363 | page 371 | page 379
63 | page 368 | page 376 | –        | page 369 | page 377 | –
64 | page 374 | page 382 | –        | page 375 | page 383 | –
65 | page 380 | –        | –        | page 381 | –        | –

Page 20: 236601 - Coding and Algorithms  for  Memories Lecture 2

20

Flash Memories Programming
• Array of cells made from floating-gate transistors
• Typical size can be 32×2^15
• The cells are programmed by pulsing electrons via hot-electron injection

Page 21: 236601 - Coding and Algorithms  for  Memories Lecture 2

21

Flash Memories Programming
• Array of cells made from floating-gate transistors
• Typical size can be 32×2^15
• The cells are programmed by pulsing electrons via hot-electron injection
• Each cell can have q levels, represented by different amounts of electrons
• In order to reduce a cell's level, the cell and its containing block must be reset to level 0 before rewriting – a VERY EXPENSIVE OPERATION

Page 22: 236601 - Coding and Algorithms  for  Memories Lecture 2

Programming of Flash Memory Cells

• Flash memory cells are programmed in parallel in order to increase the write speed
  – Cells can only increase their value
  – In order to decrease a cell's level, its entire containing block (~10^6 cells) has to be erased first
• Flash memory cells do not behave identically
  – When charge is injected, only a fraction of it is trapped in the cell
  – Easy cells – most of the charge is trapped in the cell
  – Hard cells – a small fraction of the charge is trapped in the cell

22

Page 23: 236601 - Coding and Algorithms  for  Memories Lecture 2

Programming of Flash Memory Cells

• Flash memory cells are programmed in parallel in order to increase the write speed
  – Cells can only increase their value
  – In order to decrease a cell's level, its entire containing block (~10^6 cells) has to be erased first
• Flash memory cells do not behave identically
  – When charge is injected, only a fraction of it is trapped in the cell
  – Easy cells – most of the charge is trapped in the cell
  – Hard cells – a small fraction of the charge is trapped in the cell
• Goals:
  – Programming is done cautiously to prevent over-shooting
  – Programming should work for both easy and hard cells
  – And still… fast enough

23

Page 24: 236601 - Coding and Algorithms  for  Memories Lecture 2

Incremental Step Pulse Programming (ISPP)

• Incremental Step Pulse Programming
  – Gradually increase the program voltage
  – First the easy cells reach their level
  – On subsequent steps, only cells which didn't reach their level are programmed
  – Enables fast programming of both easy and hard cells

24
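The ISPP loop described above can be sketched as a toy simulation. The step size, verify level, and charge-trapping efficiencies below are illustrative assumptions, not values from the slides:

```python
def ispp_program(targets, efficiencies, step=0.1, max_pulses=1000):
    """Toy ISPP: pulse all cells in parallel; a cell that has reached its
    verify level is inhibited on later pulses (levels only increase)."""
    levels = [0.0] * len(targets)
    for pulse in range(1, max_pulses + 1):
        busy = [i for i in range(len(targets)) if levels[i] < targets[i]]
        if not busy:
            return levels, pulse - 1          # all cells verified
        for i in busy:
            # only a fraction of the injected charge is trapped:
            # close to 1 for "easy" cells, small for "hard" cells
            levels[i] += step * efficiencies[i]
    return levels, max_pulses

# One easy cell (90% of the charge trapped) and one hard cell (20%),
# both programmed toward the same verify level of 1.0.
levels, pulses = ispp_program([1.0, 1.0], [0.9, 0.2])
```

The easy cell finishes in a few pulses and is then inhibited, while the hard cell keeps receiving pulses, which is exactly why the per-cell inhibit makes one schedule work for both cell types.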

Page 25: 236601 - Coding and Algorithms  for  Memories Lecture 2

25

Rewriting Codes
• Array of cells, made of floating-gate transistors
  – Each cell can store q different levels
  – Today, q typically ranges between 2 and 16
  – The levels are represented by the number of electrons
  – The cell's level is increased by pulsing electrons
  – To reduce a cell's level, all cells in its containing block must first be reset to level 0 – a VERY EXPENSIVE OPERATION

Page 26: 236601 - Coding and Algorithms  for  Memories Lecture 2

26

Rewriting Codes
• Problem: Cannot rewrite the memory without an erasure
• However… it is still possible to rewrite if only cells in a low level are programmed

Page 27: 236601 - Coding and Algorithms  for  Memories Lecture 2

27

From Wikipedia:One limitation of flash memory is that, although it can be read or programmed a byte or a word at a time in a random access fashion, it can only be erased a "block" at a time. This generally sets all bits in the block to 1. Starting with a freshly erased block, any location within that block can be programmed. However, once a bit has been set to 0, only by erasing the entire block can it be changed back to 1. In other words, flash memory (specifically NOR flash) offers random-access read and programming operations, but does not offer arbitrary random-access rewrite or erase operations. A location can, however, be rewritten as long as the new value's 0 bits are a superset of the over-written values. For example, a nibble value may be erased to 1111, then written e.g. as 1110. Successive writes to that nibble can change it to 1010, then 0010, and finally 0000. Essentially, erasure sets all bits to 1, and programming can only clear bits to 0. File systems designed for flash devices can make use of this capability, for example to represent sector metadata.
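The "new value's 0 bits are a superset" rule in the quote reduces to a one-line predicate; a minimal sketch using the quote's own nibble sequence:

```python
def can_overwrite(old: int, new: int) -> bool:
    """Programming can only clear bits (1 -> 0), so `new` can replace `old`
    without a block erase iff no bit goes from 0 back to 1."""
    return old & new == new

# The nibble sequence from the quote: 1111 -> 1110 -> 1010 -> 0010 -> 0000
sequence = [0b1111, 0b1110, 0b1010, 0b0010, 0b0000]
print(all(can_overwrite(a, b) for a, b in zip(sequence, sequence[1:])))  # True
```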

Page 28: 236601 - Coding and Algorithms  for  Memories Lecture 2

28

Rewriting Codes
[Figure: with the same cells one can, e.g., store 3 bits once or store 1 bit 8 times; store 4 bits once or store 1 bit 16 times]
Rewriting codes significantly reduce the number of block erasures

Page 29: 236601 - Coding and Algorithms  for  Memories Lecture 2

29

Rewriting Codes
• One of the most efficient schemes to decrease the number of block erasures
• Floating codes
• Buffer codes
• Trajectory codes
• Rank modulation codes
• WOM codes

Page 30: 236601 - Coding and Algorithms  for  Memories Lecture 2

30

Write-Once Memories (WOM)
• Introduced by Rivest and Shamir, "How to reuse a write-once memory", 1982
• The memory elements represent bits (2 levels) and are irreversibly programmed from '0' to '1'
[Figure: 1st write / 2nd write]

Page 31: 236601 - Coding and Algorithms  for  Memories Lecture 2

31

Write-Once Memories (WOM)
• Examples (each pair shows a 1st write followed by a 2nd write):
  – 1st write: data 00 → memory state 000; 2nd write: data 11 → memory state 011
  – 1st write: data 10 → memory state 010; 2nd write: data 00 → memory state 111
  – 1st write: data 11 → memory state 100; 2nd write: data 10 → memory state 101
  – 1st write: data 01 → memory state 001; 2nd write: data 01 → memory state 001

Page 32: 236601 - Coding and Algorithms  for  Memories Lecture 2

WOM Implementation in SLC Flash

• A scheme for storing two bits twice using only three cells before erasing the cells
• The cells only increase their level
• How to implement? (in an SLC block)
  – Each page stores 2KB/1.5 = 4/3KB per write
  – A page can be written twice before erasing
  – Pages are encoded using the WOM code
  – When the block has to be rewritten, mark its pages as invalid
  – Again write pages using the WOM code without erasing
  – Read before write on the second write

data 1st write 2nd write

00 000 111

01 100 011

10 010 101

11 001 110
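The table above fully specifies the code, so it can be checked mechanically. A small sketch (the `encode2`/`decode` names are mine; the second-write codeword of each value is the bitwise complement of its first-write codeword):

```python
# First-write map, taken directly from the table above.
FIRST = {'00': '000', '01': '100', '10': '010', '11': '001'}
# Second-write codewords are the bitwise complements of the first-write ones.
SECOND = {d: c.translate(str.maketrans('01', '10')) for d, c in FIRST.items()}

def decode(state):
    """At most one programmed cell means a first-write codeword;
    otherwise it is a second-write codeword (a complement)."""
    table = FIRST if state.count('1') <= 1 else SECOND
    return next(d for d, c in table.items() if c == state)

def encode2(data, state):
    """Second write: leave the cells untouched if they already decode to
    `data`; otherwise program up to the second-write codeword."""
    return state if decode(state) == data else SECOND[data]

# Check: both writes decode correctly and no cell ever decreases.
for d1 in FIRST:
    for d2 in FIRST:
        s1, s2 = FIRST[d1], encode2(d2, FIRST[d1])
        assert decode(s1) == d1 and decode(s2) == d2
        assert all(a <= b for a, b in zip(s1, s2))  # cells only go 0 -> 1
```

The "read before write" step of the scheme is visible here: `encode2` must read the current state to decide whether any cell needs programming at all.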

[Figure: a block written twice with the WOM code – the pages are first programmed with first-write codewords, the block is then marked INVALID, and the pages are programmed again with second-write codewords without an erase]

Page 33: 236601 - Coding and Algorithms  for  Memories Lecture 2

33

BER for the First and Second Write

Page 34: 236601 - Coding and Algorithms  for  Memories Lecture 2

34

Why/When to Use WOM Codes?
• Disadvantage: sacrifices a large amount of the capacity
  – Ex: two-write WOM codes
    • The best sum-rate is log3 ≈ 1.58
    • Can write (at most) only 0.79n bits per write, so there is a loss of (at least) 21% of the capacity
• Advantage: can increase the lifetime of the memory and reduce the write amplification

Page 35: 236601 - Coding and Algorithms  for  Memories Lecture 2

35

Why/When to Use WOM Codes?
• Advantage: can increase the lifetime of the memory and reduce the write amplification
• Example:
  – A user has 3GB of flash with a lifetime of 100 P/E cycles
  – Each day the user writes 2GB of new data (no need to store the old data)
  – Without WOM, the memory lasts 3/2×100 = 150 days
  – With WOM (the Rivest-Shamir scheme), every two days the memory is erased once, so the memory lasts 2×100 = 200 days
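The example's arithmetic, spelled out as a quick sanity check (units: GB and days):

```python
capacity_gb, pe_cycles, daily_write_gb = 3, 100, 2

# Without WOM: writing 2GB/day of fresh data on 3GB of flash erases the
# whole memory once every 1.5 days, so it survives 1.5 * 100 = 150 days.
days_without_wom = (capacity_gb / daily_write_gb) * pe_cycles

# With the Rivest-Shamir code, one day's 2GB occupies the full 3GB as a
# first write and the next day's 2GB fits as a second write, so the memory
# is erased only once every two days: 2 * 100 = 200 days.
days_with_wom = 2 * pe_cycles

print(days_without_wom, days_with_wom)  # 150.0 200
```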

Page 36: 236601 - Coding and Algorithms  for  Memories Lecture 2

36

Write-Once Memories (WOM)
• Introduced by Rivest and Shamir, "How to reuse a write-once memory", 1982
• The memory elements represent bits (2 levels) and are irreversibly programmed from '0' to '1'
• Q: How many cells are required to write 100 bits twice?
• P1: Is it possible to do better…?
• P2: How many cells to write k bits twice?
• P3: How many cells to write k bits t times?
• P3': What is the total number of bits that it is possible to write in n cells in t writes?
[Figure: 1st write / 2nd write]

Page 37: 236601 - Coding and Algorithms  for  Memories Lecture 2

37

Binary WOM Codes
• k1,…,kt: the number of bits written on each write
  – n cells and t writes
• The sum-rate of the WOM code is R = (Σi=1..t ki)/n
  – Rivest-Shamir: R = (2+2)/3 = 4/3

Page 38: 236601 - Coding and Algorithms  for  Memories Lecture 2

38

Definition: WOM Codes
• Definition: An [n,t; M1,…,Mt] t-write WOM code is a coding scheme which consists of n cells and guarantees any t writes of alphabet sizes M1,…,Mt by programming cells from zero to one
  – A WOM code consists of t encoding and decoding maps Ei, Di, 1 ≤ i ≤ t
  – E1: {1,…,M1} → {0,1}^n
  – For 2 ≤ i ≤ t, Ei: {1,…,Mi}×{0,1}^n → {0,1}^n such that Ei(m,c) ≥ c for all (m,c) ∊ {1,…,Mi}×{0,1}^n
  – For 1 ≤ i ≤ t, Di: {0,1}^n → {1,…,Mi} such that Di(Ei(m,c)) = m for all (m,c) ∊ {1,…,Mi}×{0,1}^n
• The sum-rate of the WOM code is R = (Σi=1..t log Mi)/n
  – Rivest-Shamir: [3,2;4,4], R = (log4+log4)/3 = 4/3

Page 39: 236601 - Coding and Algorithms  for  Memories Lecture 2

39

Definition: WOM Codes
• There are two cases:
  – The individual rates on each write must all be the same: fixed-rate
  – The individual rates are allowed to be different: unrestricted-rate
• We assume that the write number on each write is known. This knowledge does not affect the rate
  – Assume there exists an [n,t; M1,…,Mt] t-write WOM code where the write number is known
  – It is possible to construct an [Nn+t, t; M1^N,…,Mt^N] t-write WOM code where the write number is not known, so asymptotically the sum-rate is the same

Page 40: 236601 - Coding and Algorithms  for  Memories Lecture 2

40

The Capacity of WOM Codes
• The capacity region for two writes is
  C2-WOM = {(R1,R2) | ∃p ∊ [0,0.5], R1 ≤ h(p), R2 ≤ 1-p}
  where h(p) is the binary entropy function, h(p) = -p·log(p) - (1-p)·log(1-p)
• The maximum achievable sum-rate is max{p∊[0,0.5]} {h(p) + (1-p)} = log3, achieved for p = 1/3:
  R1 = h(1/3) = log(3) - 2/3
  R2 = 1 - 1/3 = 2/3
• Capacity region for t writes (Heegard '86, Fu and Han Vinck '99):
  Ct-WOM = {(R1,…,Rt) | R1 ≤ h(p1), R2 ≤ (1-p1)h(p2), …,
  Rt-1 ≤ (1-p1)⋯(1-pt-2)h(pt-1), Rt ≤ (1-p1)⋯(1-pt-2)(1-pt-1)}
• The maximum achievable sum-rate is log(t+1)
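The two-write maximum quoted above can be checked numerically; a grid-search sketch:

```python
import math

def h(p):
    """Binary entropy function, defined as 0 at the endpoints."""
    return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

# Maximize h(p) + (1 - p) over p in [0, 0.5] on a fine grid.
grid = [i / 100000 for i in range(50001)]
best_p = max(grid, key=lambda p: h(p) + (1 - p))
best_rate = h(best_p) + (1 - best_p)

# The maximum is log2(3) ≈ 1.585, attained at p = 1/3, giving
# R1 = h(1/3) = log2(3) - 2/3 and R2 = 2/3.
```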

Page 41: 236601 - Coding and Algorithms  for  Memories Lecture 2

41

The Capacity for Fixed Rate
• The capacity region for two writes:
  C2-WOM = {(R1,R2) | ∃p ∊ [0,0.5], R1 ≤ h(p), R2 ≤ 1-p}
• When forcing R1 = R2 we get h(p) = 1-p
• The (numerical) solution is p = 0.2271; the sum-rate is 1.54
• Multiple writes: a recursive formula gives the maximum achievable fixed sum-rate:
  RF(1) = 1
  RF(t+1) = (t+1)·root{h(z·t/RF(t)) - z}
  where root{f(z)} is the minimum positive value z s.t. f(z) = 0
• For example:
  RF(2) = 2·root{h(z)-z} = 2·0.7729 = 1.5458
  RF(3) = 3·root{h(2z/1.54)-z} = 3·0.6437 = 1.9311
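The recursion can be evaluated with simple bisection; a sketch (the bracketing interval is my own choice, and note the slide plugs the rounded value 1.54 into the RF(3) step, so the exact recursion lands near, not exactly on, 1.9311):

```python
import math

def h(p):
    """Binary entropy; defined as 0 outside (0, 1)."""
    return 0.0 if p <= 0.0 or p >= 1.0 else -p*math.log2(p) - (1-p)*math.log2(1-p)

def rf(t):
    """RF(1) = 1; RF(t+1) = (t+1) * root{h(z*t/RF(t)) - z}, where root{} is
    the smallest positive zero, found here by bisection."""
    r = 1.0
    for s in range(1, t):
        f = lambda z: h(z * s / r) - z
        lo, hi = 1e-12, r / s          # f(lo) > 0 and f(hi) = -hi < 0
        for _ in range(80):            # bisection to machine precision
            mid = (lo + hi) / 2
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        r = (s + 1) * hi
    return r

# rf(2) ≈ 1.5458 and rf(3) ≈ 1.93, matching the slide's values.
```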