CPU-DRAM Gap
• 1980: no cache in µproc; 1995: 2-level cache on chip (1989: first Intel µproc with an on-chip cache)
• Time of a full cache miss, in instructions executed:
  – 1st Alpha: 340 ns / 5.0 ns = 68 clks, x 2 issue = 136 instructions
  – 2nd Alpha: 266 ns / 3.3 ns = 80 clks, x 4 issue = 320 instructions
  – 3rd Alpha: 180 ns / 1.7 ns = 108 clks, x 6 issue = 648 instructions
Caching
• Principle: results of expensive operations should be kept around for reuse
• Examples:
  – CPU caching
  – Forwarding table caching
  – File caching
  – Web caching
  – Query caching
  – Computation caching
• Most processor performance improvements in the last decade have come from caching
What is a cache?
• Small, fast storage used to improve average access time to slow memory.
• Exploits spatial and temporal locality
• In computer architecture, almost everything is a cache!
  – Registers: a cache on variables
  – First-level cache: a cache on second-level cache
  – Second-level cache: a cache on memory
  – Memory: a cache on disk (virtual memory)
  – TLB: a cache on the page table
  – Branch prediction: a cache on prediction information?

[Figure: the memory hierarchy, from Proc/Regs at the top through L1-Cache, L2-Cache, and Memory down to Disk, Tape, etc.; levels get bigger going down and faster going up]
Example: 1 KB Direct Mapped Cache
• For a 2 ** N byte cache:
  – The uppermost (32 - N) bits are always the Cache Tag
  – The lowest M bits are the Byte Select (Block Size = 2 ** M)
[Figure: 1 KB direct-mapped cache with 32-byte blocks. A 32-bit address splits into a Cache Tag (bits 31:10, ex: 0x50, stored as part of the cache "state" along with the Valid Bit), a Cache Index (bits 9:5, ex: 0x01), and a Byte Select (bits 4:0, ex: 0x00). The data array holds 32 lines of Byte 0..Byte 31 (Byte 0..Byte 1023 overall); the index selects a line, and the stored tag is compared against the address tag.]
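To make the field arithmetic concrete, here is a minimal C sketch for this 1 KB, 32-byte-block geometry (M = 5 byte-select bits, 5 index bits; the function name and example address are ours, chosen to reproduce the slide's tag/index/byte values):

#include <stdint.h>
#include <stdio.h>

#define BLOCK_BITS 5                 /* M: 32-byte blocks        */
#define INDEX_BITS 5                 /* 1 KB / 32 B = 32 lines   */

/* Split a 32-bit address into tag, cache index, and byte select. */
static void split_address(uint32_t addr)
{
    uint32_t byte_sel = addr & ((1u << BLOCK_BITS) - 1);
    uint32_t index    = (addr >> BLOCK_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t tag      = addr >> (BLOCK_BITS + INDEX_BITS);
    printf("addr 0x%08x -> tag 0x%x, index %u, byte %u\n",
           addr, tag, index, byte_sel);
}

int main(void)
{
    split_address(0x00014020);  /* prints tag 0x50, index 1, byte 0,
                                   matching the slide's example      */
    return 0;
}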
Set Associative Cache
• N-way set associative: N entries for each Cache Index
  – N direct-mapped caches operate in parallel
• Example: Two-way set associative cache
  – Cache Index selects a "set" from the cache
  – The two tags in the set are compared to the input in parallel
  – Data is selected based on the tag result
[Figure: two-way set-associative cache. The Cache Index selects one entry from each of two arrays of (Valid, Cache Tag, Cache Data / Cache Block); the address tag is compared against both stored tags in parallel, the compare results are ORed into Hit, and Sel1/Sel0 drive a mux that picks the hitting way's Cache Block.]
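A minimal C sketch of the lookup just described (structure names and geometry are ours; real hardware compares the two tags in parallel, which software can only approximate with a loop):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define WAYS 2
#define SETS 64            /* example geometry, not from the slide */

struct line { bool valid; uint32_t tag; uint8_t data[32]; };
struct line cache[SETS][WAYS];

/* Return the matching way's data block, or NULL on a miss.
 * Hardware compares both tags at once and ORs the hit signals;
 * a mux then selects the data of the way that hit. */
uint8_t *lookup(uint32_t tag, uint32_t index)
{
    for (int way = 0; way < WAYS; way++)
        if (cache[index][way].valid && cache[index][way].tag == tag)
            return cache[index][way].data;
    return NULL;  /* miss */
}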
Disadvantage of Set Associative Cache
• N-way Set Associative Cache versus Direct Mapped Cache:
  – N comparators vs. 1
  – Extra MUX delay for the data
  – Data comes AFTER the Hit/Miss decision and set selection
• In a direct mapped cache, the Cache Block is available BEFORE Hit/Miss:
  – Possible to assume a hit and continue; recover later if it was a miss.
Basic Units of Cache
• Cache Line/Set (index)
• Cache Block (tag)
• Cache Sector or Subblock (valid bit)
• S: cache size, A: degree of associativity, B: block size, N: # of cache lines, I: # of index bits

  S = B x A x N
  N = 2 ** I   (equivalently, I = log2(S / (A x B)))
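A quick numeric check of these relations, with example parameters of our own choosing:

#include <stdio.h>

int main(void)
{
    long S = 32 * 1024;    /* cache size in bytes (assumed example) */
    long A = 2;            /* degree of associativity               */
    long B = 64;           /* block size in bytes                   */
    long N = S / (A * B);  /* # of cache lines, from S = B x A x N  */
    int  I = 0;
    while ((1L << I) < N)  /* index bits, from N = 2 ** I           */
        I++;
    printf("N = %ld lines, I = %d index bits\n", N, I);  /* 256, 8 */
    return 0;
}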
• Miss-oriented Approach to Memory Access:
  – CPI_Execution includes ALU and Memory instructions

  CPUtime = IC x (CPI_Execution + MemAccess/Inst x MissRate x MissPenalty) x CycleTime

  CPUtime = IC x (CPI_Execution + MemMisses/Inst x MissPenalty) x CycleTime
Cache performance
• Separating out the Memory component entirely
  – AMAT = Average Memory Access Time
  – CPI_AluOps does not include memory instructions

  CPUtime = IC x (AluOps/Inst x CPI_AluOps + MemAccess/Inst x AMAT) x CycleTime

  AMAT = HitTime + MissRate x MissPenalty
       = (HitTime_Inst + MissRate_Inst x MissPenalty_Inst)
       + (HitTime_Data + MissRate_Data x MissPenalty_Data)
Impact on Performance
• Suppose a processor executes at
  – Clock Rate = 200 MHz (5 ns per cycle), ideal (no misses) CPI = 1.1
  – 50% arith/logic, 30% ld/st, 20% control
• Suppose that 10% of memory operations get a 50 cycle miss penalty
• Suppose that 1% of instructions get the same miss penalty
• CPI = ideal CPI + average stalls per instruction
     = 1.1 (cycles/ins)
     + [0.30 (DataMops/ins) x 0.10 (miss/DataMop) x 50 (cycles/miss)]
     + [1 (InstMop/ins) x 0.01 (miss/InstMop) x 50 (cycles/miss)]
     = (1.1 + 1.5 + 0.5) cycles/ins = 3.1
• About 65% of the time the proc is stalled waiting for memory (2.0 of 3.1 cycles per instruction are memory stalls)!
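The same arithmetic as a throwaway check (all numbers are from the slide):

#include <stdio.h>

int main(void)
{
    double ideal_cpi  = 1.1;
    double data_stall = 0.30 * 0.10 * 50;  /* ld/st frac x miss rate x penalty = 1.5 */
    double inst_stall = 1.00 * 0.01 * 50;  /* every instruction is an I-fetch  = 0.5 */
    double cpi        = ideal_cpi + data_stall + inst_stall;       /* 3.1   */
    double stall_frac = (data_stall + inst_stall) / cpi;           /* 0.645 */
    printf("CPI = %.1f, stalled %.0f%% of the time\n", cpi, 100 * stall_frac);
    return 0;
}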
Example: Harvard Architecture
• Unified vs. Separate I&D (Harvard)
• Table on page 384:
  – 16KB I & 16KB D: Inst miss rate = 0.64%, Data miss rate = 6.47%
  – 32KB unified: Aggregate miss rate = 1.99%
• Which is better (ignoring the L2 cache)?
  – Assume 33% data ops, so 75% of accesses are instruction fetches (1.0/1.33)
  – hit time = 1, miss time = 50
  – Note that a data hit has 1 extra stall for the unified cache (only one port)
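A worked comparison using the slide's numbers (hit time 1, miss time 50, and one extra stall cycle for data hits on the single-ported unified cache):

#include <stdio.h>

int main(void)
{
    double f_inst = 0.75, f_data = 0.25;      /* 1.0/1.33 and 0.33/1.33 */
    double penalty = 50.0;

    /* Split (Harvard) 16KB I + 16KB D caches */
    double amat_split = f_inst * (1 + 0.0064 * penalty)
                      + f_data * (1 + 0.0647 * penalty);   /* ~2.05 */

    /* 32KB unified: data hits pay 1 extra stall (one port) */
    double amat_unified = f_inst * (1 + 0.0199 * penalty)
                        + f_data * (2 + 0.0199 * penalty); /* ~2.24 */

    printf("split %.2f vs unified %.2f cycles\n", amat_split, amat_unified);
    return 0;   /* split wins despite its higher aggregate miss rate */
}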
Four Questions for Memory Hierarchy Designers
• Q1: Where can a block be placed in the upper level? (Block placement)
  – Fully Associative, Set Associative, Direct Mapped
• Q2: How is a block found if it is in the upper level? (Block identification)
  – Tag/Block
• Q3: Which block should be replaced on a miss? (Block replacement)
  – Random, LRU
• Q4: What happens on a write? (Write strategy)
  – Write Back or Write Through (with Write Buffer)
Improving Cache Performance
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.
Reducing Misses
• Classifying Misses: 3 Cs
  – Compulsory: The first access to a block can never be in the cache, so the block must be brought in. Also called cold start misses or first reference misses. (Misses in even an Infinite Cache)
  – Capacity: If the cache cannot contain all the blocks needed during execution of a program, capacity misses will occur as blocks are discarded and later retrieved. (Misses in a Fully Associative cache of Size X)
  – Conflict: If the block-placement strategy is set associative or direct mapped, conflict misses (in addition to compulsory and capacity misses) will occur because a block can be discarded and later retrieved if too many blocks map to its set. Also called collision misses or interference misses. (Misses in an N-way Associative cache of Size X)
• A more recent, 4th "C":
  – Coherence: misses caused by cache coherence.
3Cs Absolute Miss Rate (SPEC92)
[Figure: miss rate per type (0 to 0.14) vs. cache size (1 KB to 128 KB) for 1-way, 2-way, 4-way, and 8-way associativity; conflict misses shrink as associativity grows, capacity misses dominate, and compulsory misses are vanishingly small]
2:1 Cache Rule
[Figure: the same SPEC92 miss-rate-per-type plot, annotated with the 2:1 Cache Rule:
  miss rate of a 1-way associative cache of size X = miss rate of a 2-way associative cache of size X/2]
3Cs Relative Miss Rate
[Figure: the same data normalized to 100%: relative miss rate per type (0% to 100%) vs. cache size (1 KB to 128 KB), 1-way through 8-way]
• Flaws: for fixed block size
• Good: insight => invention
How Can We Reduce Misses?
• 3 Cs: Compulsory, Capacity, Conflict
• In all cases, assume total cache size is not changed. What happens if we:
  1) Change Block Size: Which of the 3Cs is obviously affected?
  2) Change Associativity: Which of the 3Cs is obviously affected?
  3) Change Compiler: Which of the 3Cs is obviously affected?
Mapping Between Cache and Memory
[Figure: mapping diagram: memory block addresses (0000 through 0111, …) map onto the eight cache lines 000 through 111 via their low-order index bits]
Locality
• Temporal Locality: items referenced recently tend to be referenced again in the near future
• Spatial Locality: items close to recently referenced items tend to be referenced in the near future
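A small C illustration of both kinds of locality:

double sum_array(const double *a, int n)
{
    double sum = 0.0;             /* temporal locality: sum (and i) are   */
    for (int i = 0; i < n; i++)   /* re-referenced on every iteration     */
        sum += a[i];              /* spatial locality: a[i+1] lies next   */
    return sum;                   /* to a[i], usually in the same block   */
}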
1. Reduce Misses via Larger Block Size
[Figure: miss rate (0% to 25%) vs. block size (16 to 256 bytes) for cache sizes 1K, 4K, 16K, 64K, and 256K]
2. Reduce Misses via Higher Associativity
• 2:1 Cache Rule:
  – Miss Rate of a DM cache of size N = Miss Rate of a 2-way cache of size N/2
• Beware: execution time is the only final measure!
  – Will clock cycle time increase?
  – Hill [1988] suggested the hit time for 2-way vs. 1-way is +10% for an external cache, +2% for an internal one
Example: Avg. Memory Access Time vs. Miss Rate
• Example: assume CCT = 1.10 for 2-way, 1.12 for 4-way, 1.14 for 8-way vs. the CCT of direct mapped
• (In the accompanying table, not reproduced here, red means A.M.A.T. is not improved by more associativity)
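A sketch of how such a table is computed; the clock-cycle-time factors are the slide's, while the miss rates and miss penalty are stand-in assumptions:

#include <stdio.h>

int main(void)
{
    /* CCT factors from the slide; miss rates/penalty are made up */
    double cct[]  = { 1.00, 1.10, 1.12, 1.14 };      /* 1-, 2-, 4-, 8-way */
    double miss[] = { 0.050, 0.042, 0.040, 0.039 };  /* assumed           */
    double penalty = 25.0;                           /* assumed           */

    for (int i = 0; i < 4; i++) {
        /* hit time scales with the slower clock */
        double amat = cct[i] * 1.0 + miss[i] * penalty;
        printf("%d-way: AMAT = %.2f\n", 1 << i, amat);
    }
    return 0;
}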
3. Reducing Misses via a "Victim Cache"
• How to combine the fast hit time of direct mapped and still avoid conflict misses?
• Add a small buffer that holds data discarded from the cache
• Jouppi [1990]: a 4-entry victim cache removed 20% to 95% of conflicts for a 4 KB direct-mapped data cache
• Used in Alpha, HP machines
[Figure: victim cache: a small fully associative buffer of four entries (each one cache line of data with its own tag and comparator) between the direct-mapped cache (TAGS/DATA) and the next lower level in the hierarchy]
4. Reducing Misses via "Pseudo-Associativity"
• How to combine the fast hit time of Direct Mapped with the lower conflict misses of a 2-way SA cache?
• Divide the cache: on a miss, check the other half of the cache to see if the data is there; if so, we have a pseudo-hit (slow hit)
• Drawback: the CPU pipeline is hard to design if a hit can take 1 or 2 cycles
  – Used in the MIPS R10000 L2 cache; similar in UltraSPARC
  – Better for caches not tied directly to the processor (L2)
• Conflict misses in non-fully-associative caches vs. blocking size
  – Lam et al. [1991]: a blocking factor of 24 had one fifth the misses of a blocking factor of 48, despite both fitting in the cache
[Figure: miss rate (0 to 0.1) vs. blocking factor (0 to 150) for a fully associative cache and a direct-mapped cache]
Summary of Compiler Optimizations to Reduce Cache Misses (by hand)
[Figure: performance improvement (1x to 3x) from merged arrays, loop interchange, loop fusion, and blocking, across benchmarks: compress, cholesky (nasa7), spice, mxm (nasa7), btrix (nasa7), tomcatv, gmty (nasa7), vpenta (nasa7)]
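For reference, generic textbook forms of two of these optimizations (not the benchmarks' actual code):

#define N  512
#define BS 64   /* blocking factor; N is a multiple of BS */

double a[N][N], b[N][N], c[N][N];

void interchange_example(double x[N][N])
{
    /* Loop interchange: make the inner loop walk memory sequentially
     * (C stores x[i][j] row-major, so j should be innermost). */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            x[i][j] = 2 * x[i][j];
}

void blocked_matmul(void)
{
    /* Blocking: work on BS x BS tiles so each tile stays resident
     * in the cache while it is reused (classic matmul tiling). */
    for (int ii = 0; ii < N; ii += BS)
      for (int jj = 0; jj < N; jj += BS)
        for (int kk = 0; kk < N; kk += BS)
          for (int i = ii; i < ii + BS; i++)
            for (int j = jj; j < jj + BS; j++)
              for (int k = kk; k < kk + BS; k++)
                c[i][j] += a[i][k] * b[k][j];
}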
Summary: Miss Rate Reduction
• 3 Cs: Compulsory, Capacity, Conflict
  1. Reduce Misses via Larger Block Size
  2. Reduce Misses via Higher Associativity
  3. Reducing Misses via Victim Cache
  4. Reducing Misses via Pseudo-Associativity
  5. Reducing Misses by HW Prefetching Instr, Data
  6. Reducing Misses by SW Prefetching Data
  7. Reducing Misses by Compiler Optimizations
• Prefetching comes in two flavors (see the sketch below):
  – Binding prefetch: requests load directly into a register.
    » Must be the correct address and register!
  – Non-Binding prefetch: load into the cache.
    » Can be incorrect. Frees HW/SW to guess!
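A sketch of a non-binding software prefetch using the GCC/Clang builtin __builtin_prefetch; the prefetch distance of 16 elements is an assumption that would be tuned per machine:

void scale(double *a, long n)
{
    const long dist = 16;        /* assumed prefetch distance */
    for (long i = 0; i < n; i++) {
        if (i + dist < n)
            /* non-binding: pulls the line into the cache only;
             * args: address, rw = 1 (write), locality = 3 (high) */
            __builtin_prefetch(&a[i + dist], 1, 3);
        a[i] *= 2.0;
    }
}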
CPUtime = IC x (CPI_Execution + Memory accesses/Instruction x Miss rate x Miss penalty) x Clock cycle time
Improving Cache Performance
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.
Write Policy: Write-Through vs. Write-Back
• Write-through: all writes update the cache and the underlying memory/cache
  – Can always discard cached data: the most up-to-date data is in memory
  – Cache control bit: only a valid bit
• Write-back: all writes simply update the cache
  – Can't just discard cached data: may have to write it back to memory
  – Cache control bits: both valid and dirty bits
• Other Advantages:
  – Write-through:
    » memory (or other processors) always has the latest data
    » simpler management of the cache
  – Write-back:
    » much lower bandwidth, since data is often overwritten multiple times
    » better tolerance to long-latency memory?
WT vs. WB
• Write bursts
• Error tolerance
• Speculative write: WB and WT
What happens on a Cache miss?
• For an in-order pipeline, 2 options:
  – Freeze the pipeline in the Mem stage (popular early on: Sparc, R4000)

      IF ID EX Mem stall stall stall … stall Mem Wr
         IF ID EX  stall stall stall … stall stall Ex Wr

  – Use Full/Empty bits in registers + an MSHR queue
    » MSHR = "Miss Status/Handler Registers" (Kroft)
      Each entry in this queue tracks the status of an outstanding memory request to one complete memory line.
      • Per cache line: keep info about the memory address.
      • For each word: the register (if any) that is waiting for the result.
      • Used to "merge" multiple requests to one memory line
    » A new load creates an MSHR entry and sets the destination register to "Empty". The load is "released" from the pipeline.
    » An attempt to use the register before the result returns causes the instruction to block in the decode stage.
    » Limited "out-of-order" execution with respect to loads. Popular with in-order superscalar architectures.
• Out-of-order pipelines already have this functionality built in… (load queues, etc.)
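A data-structure sketch of one MSHR entry following the description above (all names and sizes are ours):

#include <stdbool.h>
#include <stdint.h>

#define WORDS_PER_LINE 8
#define NUM_MSHRS      4

/* One entry tracks one outstanding miss to one memory line; later
 * requests to the same line are merged into the per-word wait list. */
struct mshr_entry {
    bool     valid;                      /* entry in use?                  */
    uint64_t line_addr;                  /* memory line being fetched      */
    int      dest_reg[WORDS_PER_LINE];   /* register waiting on each word, */
                                         /* or -1 if no one is waiting     */
};

struct mshr_entry mshr[NUM_MSHRS];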
Write Policy 2: Write Allocate vs. Non-Allocate
(What happens on a write miss?)
• Write allocate: allocate a new cache line in the cache
  – Usually means a "read miss" to fill in the rest of the cache line!
  – Alternative: per-word valid bits
• Write non-allocate (or "write-around"):
  – Simply send the write data through to the underlying memory/cache; don't allocate a new cache line!
Write Miss Policy
• Allocate and fetch: the normal case
• Allocate but no fetch
• No allocate: write around/bypassing
• Cacheability
Review: Improving Cache Performance
1. Reduce the miss rate,
2. Reduce the time to hit in the cache.
3. Reduce the miss penalty
AMAT = HitTime + MissRate x MissPenalty
1. Fast Hit Times via Small and Simple Caches
• Why does the Alpha 21164 have an 8KB Instruction cache and an 8KB Data cache, plus a 96KB second-level cache?
  – A small data cache keeps the clock rate high
• Direct Mapped, on chip
2. Fast Hits by Avoiding Address Translation
• Send the virtual address to the cache? Called a Virtually Addressed Cache, or just Virtual Cache, vs. a Physical Cache
  – Every time a process is switched, the cache must logically be flushed; otherwise we get false hits
    » Cost is the time to flush + "compulsory" misses from an empty cache
  – Must deal with aliases (sometimes called synonyms): two different virtual addresses that map to the same physical address
  – I/O must interact with the cache, so it needs virtual addresses
• Solution to aliases
  – HW guarantee: as long as the unchanging page-offset bits cover the index field and the cache is direct mapped, aliases must land in a unique line; with OS help this is called page coloring
• Solution to cache flush
  – Add a process-identifier tag that identifies the process as well as the address within the process: can't get a hit if the process is wrong
Virtual Memory Hardware
• Translation Lookaside Buffer (TLB): a cache for page table entries
• Typically fully associative
• Block size?
• TLB refill: HW or SW
• Variable page size
• Process ID in TLB tags
Virtually Addressed Caches
[Figure: three organizations.
 Conventional: CPU -> TB -> $ -> MEM; the cache is accessed with the PA after translation.
 Virtually Addressed Cache: CPU -> $ -> TB -> MEM; the cache is accessed with the VA and translation happens only on a miss; raises the Synonym Problem.
 Overlapped: the CPU accesses the $ and the TB in parallel (VA tags at L1, PA tags at the L2 $); overlapping $ access with VA translation requires the $ index to remain invariant across translation.]
Virtually Indexed and Tagged
• Homonym problem
  – The same virtual address A1 in process P1 and in process P2 maps to different PAs
  – A Process ID in the tag comes to the rescue
• Synonym problem
  – A1 in P1 and A2 in P2 map to the same PA
  – Multiple copies of the PA can become inconsistent
  – For a direct-mapped cache, it is OK if the index parts of A1 and A2 are the same, i.e., if A1 and A2 map to the same cache set.
Virtually Indexed Physically Tagged
• If the index comes from the physical part of the address, tag access can start in parallel with translation, so the result can be compared against the physical tag
• This limits the cache to the page size: what if we want bigger caches using the same trick?
  – Higher associativity moves the barrier to the right
  – Page coloring
[Figure: address layout: Page Address (bits 31:12) | Page Offset (bits 11:0); the cache sees Address Tag | Index | Block Offset, with Index and Block Offset contained within the Page Offset]
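The page-size limit can be written as a one-line check: the index and block-offset bits must come entirely from the untranslated page-offset bits, i.e. cache size / associativity <= page size. A sketch, assuming 4 KB pages:

#include <stdbool.h>

/* True if tag access can start in parallel with translation: the
 * index + block-offset bits must all be page-offset bits. */
bool can_overlap(long cache_size, long assoc, long page_size)
{
    return cache_size / assoc <= page_size;
}
/* e.g. can_overlap(8*1024, 1, 4096) is false: an 8 KB direct-mapped
 * cache needs 2-way associativity (or page coloring) with 4 KB pages. */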
3. Fast Hit Times Via Pipelined Writes
• Pipeline Tag Check and Update Cache as separate stages; the current write's tag check overlaps the previous write's cache update
• Only STORES are in the pipeline; it is empty during a miss
• The "Delayed Write Buffer" (shaded in the original figure) must be checked on reads; either complete the write or read from the buffer
4. Fast Writes on Misses Via Small Subblocks
• If most writes are 1 word, the subblock size is 1 word, and we write through, then always write the subblock & tag immediately:
  – Tag match and valid bit already set: writing the block was proper, and nothing is lost by setting the valid bit again.
  – Tag match and valid bit not set: the tag match means this is the proper block; writing the data into the subblock makes it appropriate to turn the valid bit on.
  – Tag mismatch: this is a miss and will modify the data portion of the block. Since this is a write-through cache, no harm is done; memory still has an up-to-date copy of the old value. Only the tag needs to change to the address of the write, and the valid bits of the other subblocks need to be cleared; the valid bit for this subblock has already been set.
• Doesn't work with write back, due to the last case
Review: Improving Cache Performance
1. Reduce the miss rate,
2. Reduce the time to hit in the cache.
3. Reduce the miss penalty
AMAT = HitTime + MissRate x MissPenalty
0. Faster Memory
• This requires a bit of discussion. Hold this thought until we discuss memory.
1. Reducing Miss Penalty: Read Priority over Write on Miss
• Write-through with write buffers creates RAW conflicts with main memory reads on cache misses
  – If we simply wait for the write buffer to empty, we might increase the read miss penalty (by 50% on the old MIPS 1000)
  – Check the write buffer contents before a read; if there are no conflicts, let the memory access continue
• Alternative: Write Back
  – Read miss replacing a dirty block
  – Normal: write the dirty block to memory, and then do the read
  – Instead: copy the dirty block to a write buffer, then do the read, and then do the write
  – The CPU stalls less since it restarts as soon as the read is done
• A Write Buffer is needed between the Cache and Memory
  – Processor: writes data into the cache and the write buffer
  – Memory controller: writes the contents of the buffer to memory
• The write buffer is just a FIFO:
  – Typical number of entries: 4
  – Works fine if: store frequency (w.r.t. time) << 1 / DRAM write cycle
  – Must handle burst behavior as well!
[Figure: Processor -> Cache and Write Buffer -> DRAM]
1. Reducing Penalty: Read Priority over Write on Miss
• Write-Buffer Issues: could introduce a RAW hazard with memory!
  – The write buffer may contain the only copy of valid data; reads from memory may get the wrong result if we ignore the write buffer
• Solutions (sketched in code below):
  – Simply wait for the write buffer to empty before servicing reads:
    » Might increase the read miss penalty (by 50% on the old MIPS 1000)
  – Check the write buffer contents before a read ("fully associative"):
    » If no conflicts, let the memory access continue
    » Else grab the data from the buffer
• Can the Write Buffer help with Write Back?
  – Read miss replacing a dirty block
    » Copy the dirty block to the write buffer while starting the read to memory
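A software sketch of the "check write buffer contents before read" solution mentioned above (the FIFO layout is ours):

#include <stdbool.h>
#include <stdint.h>

#define WB_ENTRIES 4

struct wb_entry { bool valid; uint64_t addr; uint64_t data; };
struct wb_entry write_buffer[WB_ENTRIES];

/* On a read miss, check the write buffer first ("fully associative"
 * compare); only if no entry conflicts may the read go to memory. */
bool read_hits_write_buffer(uint64_t addr, uint64_t *data)
{
    for (int i = 0; i < WB_ENTRIES; i++)
        if (write_buffer[i].valid && write_buffer[i].addr == addr) {
            *data = write_buffer[i].data;   /* forward buffered value */
            return true;
        }
    return false;                           /* safe to read memory */
}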
RAW Hazards from Write Buffer!
[Figure: DRAM timing. Top: the processor issues RAS/CAS + Write DATA and then RAS/CAS + Read DATA in program order. Below: with a write buffer, the read (RAS/CAS + Read DATA) is issued to DRAM ahead of the buffered write, which is why reads must be checked against the buffer.]
2. Reduce Miss Penalty: Subblock Placement
• Don't have to load the full block on a miss
• Have valid bits per subblock to indicate validity
• (Originally invented to reduce tag storage)
[Figure: cache lines with one tag each but per-subblock Valid Bits]
3. Reduce Miss Penalty: Early Restart and Critical Word First
• Don't wait for the full block to be loaded before restarting the CPU
  – Early restart: as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution
  – Critical Word First: request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while filling the rest of the words in the block. Also called wrapped fetch and requested word first
• Generally useful only with large blocks
• Spatial locality is a problem: the CPU tends to want the next sequential word soon, so it is not clear how much early restart helps
4. Reduce Miss Penalty: Non-blocking Caches to Reduce Stalls on Misses
• A non-blocking cache (lockup-free cache) allows the data cache to continue to supply cache hits during a miss
  – requires Full/Empty bits on registers or out-of-order execution
  – requires multi-bank memories
• "hit under miss" reduces the effective miss penalty by working during a miss instead of ignoring CPU requests
• "hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses
  – Significantly increases the complexity of the cache controller, as there can be multiple outstanding memory accesses
5. Reduce Miss Penalty: Second-Level Cache
• Global miss rate is close to the single-level cache rate, provided L2 >> L1
• Don't use the local miss rate
• L2 is not tied to the CPU clock cycle!
• Cost & A.M.A.T.
• Generally fast hit times and fewer misses
• Since hits are few, target miss reduction
[Figure: miss rate vs. cache size, plotted on linear and log scales]
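The local/global distinction in formula form: AMAT = HitTime_L1 + MissRate_L1 x (HitTime_L2 + LocalMissRate_L2 x MissPenalty_L2), and the global L2 miss rate is MissRate_L1 x LocalMissRate_L2. A sketch with illustrative numbers of our own:

#include <stdio.h>

int main(void)
{
    double l1_miss = 0.04, l2_local_miss = 0.25;   /* assumed rates  */
    double l1_hit = 1, l2_hit = 10, mem = 100;     /* assumed cycles */

    double amat = l1_hit + l1_miss * (l2_hit + l2_local_miss * mem);
    double l2_global = l1_miss * l2_local_miss;    /* the rate to use */

    printf("AMAT = %.2f cycles, global L2 miss rate = %.1f%%\n",
           amat, 100 * l2_global);                 /* 2.40, 1.0% */
    return 0;
}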
Reducing Misses: Which Apply to L2 Cache?
• Reducing Miss Rate
  1. Reduce Misses via Larger Block Size
  2. Reduce Conflict Misses via Higher Associativity
  3. Reducing Conflict Misses via Victim Cache
  4. Reducing Conflict Misses via Pseudo-Associativity
  5. Reducing Misses by HW Prefetching Instr, Data
  6. Reducing Misses by SW Prefetching Data
  7. Reducing Capacity/Conf. Misses by Compiler Optimizations
L2 cache block size & A.M.A.T.
• 32KB L1, 8 byte path to memory

  Block Size (bytes)    16    32    64    128   256   512
  Relative CPU Time     1.36  1.28  1.27  1.34  1.54  1.95
Reducing Miss Penalty Summary
• Five techniques
  – Read priority over write on miss
  – Subblock placement
  – Early Restart and Critical Word First on miss
  – Non-blocking Caches (Hit under Miss, Miss under Miss)
  – Second Level Cache
• Can be applied recursively to Multilevel Caches
  – The danger is that the time to DRAM will grow with multiple levels in between
  – First attempts at L2 caches can make things worse, since the increased worst case is worse
• What does this mean for
  – Compilers? Operating Systems? Algorithms? Data Structures?
[Figure: the processor-DRAM performance gap, 1980-2000: performance on a log scale (1 to 1000), with the CPU curve pulling away from the DRAM curve year after year]
Cache Parameter Estimation
• D: cache size, b: cache line size, a: degree of associativity
• A strided access to an N-element array, with stride s = 1, 2, 4, …, N/2
• Each iteration contains a read and a write of the same array element
• Four cases (T_no-miss is the time with no misses, M the miss penalty):
  – N <= D: T_no-miss
  – N > D and 1 <= s < b: T_no-miss + M x s/b
  – N > D and N/s > D/b and b <= s < N/a: T_no-miss + M
  – N > D and N/a <= s <= N/2: T_no-miss
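A sketch of one pass of the implied microbenchmark (timing harness omitted; the caller sweeps the array size n and stride s and times this loop to expose D, b, and a through the four cases above):

/* One pass of the cache-parameter microbenchmark: a strided
 * read-modify-write over an n-element array. */
void stride_pass(int *x, long n, long s)
{
    for (long i = 0; i < n; i += s)
        x[i] = x[i] + 1;    /* one read + one write per element */
}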
Virtual Memory Hardware in X86 Architecture
[Figure: x86 translation path. A Virtual Address = Segment Selector + Offset; the selector indexes the GDT/LDT to fetch a Segment Descriptor (fields: Base 31:24, Base 23:16, Base 15:00; Limit 19:16, Limit 15:00; P, DPL), a Segment Limit Check is applied, and the resulting Linear Address walks a Two-Level Page Table whose Page Table Entry (Page Frame Address, P, W, U bits) yields the Physical Address.]
• The X86 architecture's virtual memory hardware supports both segmentation and paging:
  – Virtual Address = Segment Selector + Offset
  – segmentation: Virtual Address -> Linear Address, with the check base + offset <= limit
  – paging: Linear Address -> Physical Address
Array Bound Checking
• Prevent unauthorized modification of the address space (e.g., a return address or a bank account) through buffer overflow; bound checking is the cleanest solution
• Check each memory reference against the upper/lower limits of its associated object:
  1. Figure out which is the associated object
  2. Perform the limit check (more time-consuming)
• Current software-based array bound checking methods: 2-30x slowdown
The CASH Approach
• Goal: exploit the segment limit check hardware to perform array bound checking for free
• Idea: each array or buffer is treated as a separate segment and referenced accordingly

Original code:

    for (i = M; i < N; i++) {
        B[i] = 5;
    }

With CASH (pseudo-code):

    offset = &(B[M]) - B_Segment_Base;
    GS = B_Segment_Selector;
    for (i = M; i < N; i++) {
        GS:offset = 5;      /* segment-relative store, limit-checked by HW */
        offset += 4;
    }
Array Access Code Generation
A[i] = 10

Without array bound checking:

    movl -60(%ebp), %eax        ; load i
    leal 0(, %eax, 4), %edx     ; i * 4
    movl -56(%ebp), %eax        ; load a
    movl $10, (%edx, %eax)      ; mem[a + 4*i] = 10

Checking array bounds using CASH (the segment base replaces the load of a):

    movl -60(%ebp), %eax        ; load i
    leal 0(, %eax, 4), %edx     ; i * 4
    movl $10, %gs:(%edx)        ; HW checks bounds; mem[base + 4*i] = 10
Intra-AS Protection
• A program and an untrusted component: an OS and its device drivers, Apache and CGIs, a Java program and C components, etc.
• Run the kernel at SPL0 (0-4GB), the extensible application at SPL2 (0-3GB), and the extension at SPL3 (0-3GB)
• Exposed pages of the extensible application at PPL 1
• Design Issues
  – Control transfer
  – Data sharing
  – Libraries
PaX
• Non-executable stack and heap
• Invariant: VM areas that can be modified cannot be executed; VM areas that can be executed cannot be modified
• Partition the address space into two disjoint segments, one CS and one DS
• Updates happen in DS, and instruction fetches happen in CS
• Use randomization to address return-to-libc attacks
Main Memory Background
• Performance of Main Memory:
  – Latency: Cache Miss Penalty
    » Access Time: time between the request and the word arriving
    » Cycle Time: time between requests
  – Bandwidth: I/O & Large Block Miss Penalty (L2)
• Main Memory is DRAM: Dynamic Random Access Memory
  – Dynamic since it needs to be refreshed periodically (8 ms, 1% of time)
  – Addresses divided into 2 halves (memory as a 2D matrix):
    » RAS or Row Access Strobe
    » CAS or Column Access Strobe
• Cache uses SRAM: Static Random Access Memory
  – No refresh (6 transistors/bit vs. 1 transistor/bit for DRAM)
Avoiding Bank Conflicts
• Example:

    for (j = 0; j < 512; j = j + 1)
        for (i = 0; i < 256; i = i + 1)
            x[i][j] = 2 * x[i][j];

• Even with 128 banks, since 512 is a multiple of 128, word accesses conflict on the same bank
• SW: loop interchange, or declaring the array dimension not a power of 2 ("array padding", sketched after this list)
• HW: prime number of banks
  – bank number = address mod number of banks
  – address within bank = address / number of words in bank
  – a modulo & divide per memory access with a prime number of banks?
  – better: address within bank = address mod number of words in bank
  – bank number? easy if 2 ** N words per bank
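Array padding in its simplest form (an illustrative sketch: one extra column makes the row stride 513 words, which is no longer a multiple of a power-of-two bank count):

/* Conflicting: rows are 512 words apart, a multiple of the
 * 128-bank count, so walking a column hits one bank repeatedly. */
int x_bad[256][512];

/* Padded: rows are 513 words apart; 513 shares no factor with a
 * power-of-two number of banks, so column walks spread out. */
int x_padded[256][513];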
• Chinese Remainder Theorem: as long as two sets of integers ai and bi follow these rules

    bi = x mod ai,  0 <= bi < ai,  0 <= x < a0 x a1 x a2 x …

  and ai and aj are co-prime if i ≠ j, then the integer x has only one solution (unambiguous mapping):
  – bank number = b0, number of banks = a0 (= 3 in the example)
  – address within bank = b1, number of words in bank = a1 (= 8 in the example)
  – N word addresses 0 to N-1, a prime number of banks, words per bank a power of 2
• Memory banks for independent accesses vs. faster sequential accesses
  – Multiprocessor
  – I/O
  – CPU with Hit under n Misses, Non-blocking Cache
• Superbank: all memory active on one block transfer (or Bank)
• Bank: portion within a superbank that is word interleaved (or Subbank)

[Figure: address fields: Superbank Number | Superbank Offset, where Superbank Offset = Bank Number | Bank Offset]
Independent Memory Banks
• How many banks?
  – number of banks >= number of clocks to access a word in a bank
  – For sequential accesses; otherwise we return to the original bank before it has the next word ready
  – (as in the vector case)
• Increasing DRAM capacity => fewer chips => harder to have many banks
Fast Memory Systems: DRAM Specific
• Multiple CAS accesses: several names (page mode)
  – Extended Data Out (EDO): 30% faster in page mode
• New DRAMs to address the gap; what will they cost, will they survive?
  – RAMBUS: startup company; reinvented the DRAM interface
    » Each chip a module vs. a slice of memory
    » Short bus between CPU and chips
    » Does its own refresh
    » Variable amount of data returned
    » 1 byte / 2 ns (500 MB/s per chip)
  – Synchronous DRAM: 2 banks on chip, a clock signal to the DRAM, transfers synchronous to the system clock (66-150 MHz)
  – Intel claims RAMBUS Direct (16 bits wide) is the future of PC memory
• Niche memory or main memory?
  – e.g., Video RAM for frame buffers: DRAM + fast serial output
Fast Page Mode Operation
• Regular DRAM organization:
  – N rows x N columns x M bits
  – Read & write M bits at a time
  – Each M-bit access requires a RAS/CAS cycle
• Fast Page Mode DRAM
  – Adds an N x M "SRAM" register to save a row
• After a row is read into the register:
  – Only a CAS is needed to access other M-bit blocks on that row
  – RAS_L remains asserted while CAS_L is toggled
[Figure: a DRAM array of N rows x N cols with an N x M "SRAM" row register feeding an M-bit output. Timing: RAS_L goes low with a Row Address; CAS_L is then toggled with successive Col Addresses for the 1st, 2nd, 3rd, and 4th M-bit accesses while RAS_L stays asserted.]
SDRAM Timing
• Micron 128 Mbit DRAM (using the 2 Meg x 16 bit x 4 bank version)
  – Row (12 bits), bank (2 bits), column (9 bits)
[Figure: SDRAM timing: RAS opens a row (possibly in a new bank), CAS follows and data returns as a burst READ after the CAS latency; RAS ends the row access]
DRAM History
• DRAMs: capacity +60%/yr, cost -30%/yr
  – 2.5X cells/area, 1.5X die size in 3 years
• A '98 DRAM fab line costs $2B
  – DRAM only: density, leakage vs. speed
• Rely on increasing numbers of computers & memory per computer (60% market)
  – SIMM or DIMM is the replaceable unit => computers use any generation DRAM
• Commodity, second-source industry => high volume, low profit, conservative
  – Little organization innovation in 20 years
• Order of importance: 1) Cost/bit 2) Capacity
  – First RAMBUS: 10X BW, +30% cost => little impact
DRAM Future: 1 Gbit+ DRAM

                   Mitsubishi     Samsung
  Blocks           512 x 2 Mbit   1024 x 1 Mbit
  Clock            200 MHz        250 MHz
  Data Pins        64             16
  Die Size         24 x 24 mm     31 x 21 mm
  Metal Layers     3              4
  Technology       0.15 micron    0.16 micron

  – Die sizes will be much smaller in production

• Slow, and needs refreshing
• Slow -> larger transistors
• Refreshing: a cell that is not accessed (refreshed) in time becomes invalid
• Similar technique used in switch/router buffers
Memory Wall
• What if the CPU is so fast that we cannot even afford compulsory misses?
• Intelligent RAM (IRAM) or Processor-in-Memory
• Fast data copying within DRAM
• Memory-oriented architecture: assuming processing logic is abundant
Need for Error Correction!
• Motivation:
  – Failures/time proportional to the number of bits!
  – As DRAM cells shrink, they become more vulnerable
• We went through a period in which the failure rate was low enough without error correction that people didn't do correction
  – DRAM banks are too large now
  – Servers have always corrected their memory systems
• Basic idea: add redundancy through parity bits
  – Simple but wasteful version:
    » Keep three copies of everything, vote to find the right value (sketched below)
    » 200% overhead, so not good!
  – Common configuration: random error correction
    » SEC-DED (single error correct, double error detect)
    » One example: 64 data bits + 8 parity bits (11% overhead)
  – Really want to handle failures of physical components as well
    » Organization is multiple DRAMs/SIMM, multiple SIMMs
    » Want to recover from a failed DRAM and a failed SIMM!
    » Requires more redundancy to do this
    » All major vendors are thinking about this in high-end machines
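The "three copies, vote" scheme from the list above is a one-liner (purely illustrative; the SEC-DED codes mentioned are far cheaper):

#include <stdint.h>

/* Bitwise majority vote over three replicas: each result bit is
 * whatever at least two of the three copies agree on, so any
 * single corrupted copy is outvoted.  200% storage overhead. */
uint64_t vote(uint64_t a, uint64_t b, uint64_t c)
{
    return (a & b) | (b & c) | (a & c);
}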
More esoteric Storage Technologies?
• Tunneling Magnetic Junction RAM (TMJ-RAM)
  – Speed of SRAM, density of DRAM, non-volatile (no refresh)
  – "Spintronics": a new field combining quantum spin and electronics
  – Same technology used in high-density disk drives
[Figure: something new: structure of a Tunneling Magnetic Junction]
MEMS-based Storage
• Magnetic "sled" floats on an array of read/write heads
  – Approx 250 Gbit/in2
  – Data rates: IBM: 250 MB/s with 1000 heads; CMU: 3.1 MB/s with 400 heads
• Electrostatic actuators move the media around to align it with the heads
  – Sweep the sled ±50 µm in < 0.5 µs
• Capacity estimated to be 1-10 GB in 10 cm2

See Ganger et al.: http://www.lcs.ece.cmu.edu/research/MEMS