Transcript
Page 1: Memory/Storage Architecture Lab, Computer Architecture, Virtual Memory

Page 2: What do we want?

[Figure: physical vs. logical view of memory]

Memory with infinite capacity

Page 3: Virtual Memory Concept

Hide all physical aspects of memory from users.

Memory is a logically unbounded virtual (logical) address space of 2^n bytes.

Only portions of virtual address space are in physical memory at any one time.

Page 4: Paging

A process’s virtual address space is divided into equal-sized pages.

A virtual address is a pair (p, o).
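To make the (p, o) split concrete, here is a minimal Python sketch (mine, not from the slides) assuming 4 KiB pages; the page number comes from the high-order address bits and the offset from the low-order bits:

```python
# Minimal sketch (assumed parameters): split a virtual address into (p, o).
PAGE_SIZE = 4096                            # assume 4 KiB pages => 12 offset bits
OFFSET_BITS = PAGE_SIZE.bit_length() - 1    # 12

def split_virtual_address(va: int) -> tuple[int, int]:
    """Return (page number p, offset o) for a virtual address."""
    p = va >> OFFSET_BITS          # high-order bits select the page
    o = va & (PAGE_SIZE - 1)       # low-order bits are the offset within the page
    return p, o

# Example: virtual address 0x12345 -> page 0x12, offset 0x345
print(split_virtual_address(0x12345))   # (18, 837)
```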

Page 5: Paging

Physical memory is divided into equal-sized frames.

Size of page = size of frame.

A physical memory address is a pair (f, o).

Page 6: Paging

Page 7: Mapping from a Virtual to a Physical Address

Page 8: Paging: Virtual Address Translation

Page 9: Paging: Page Table Structure

One table for each process (part of the process’s state).

Contents:

Flags: valid/invalid (also called resident) bit, dirty bit, reference (also called clock or used) bit.

Page frame number.
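Putting the last two slides together, the following sketch (an illustration with assumed field names, not the lecture's code) shows a page table whose entries carry the flags and frame number above, and the translation from (p, o) to (f, o):

```python
from dataclasses import dataclass

PAGE_SIZE = 4096   # assumed page size; frame size is the same

@dataclass
class PTE:
    valid: bool = False       # valid/invalid (resident) bit
    dirty: bool = False       # set when the page has been written
    referenced: bool = False  # reference (clock/used) bit
    frame: int = 0            # page frame number

class PageFault(Exception):
    pass

def translate(page_table: list[PTE], va: int) -> int:
    """Translate a virtual address to a physical address via the page table."""
    p, o = va // PAGE_SIZE, va % PAGE_SIZE
    pte = page_table[p]
    if not pte.valid:
        raise PageFault(p)             # page not resident -> the OS must bring it in
    pte.referenced = True              # hardware would set this bit on each access
    return pte.frame * PAGE_SIZE + o   # physical address = (f, o)

# Example: page 2 is mapped to frame 7
table = [PTE() for _ in range(16)]
table[2] = PTE(valid=True, frame=7)
print(hex(translate(table, 2 * PAGE_SIZE + 0x10)))   # 0x7010
```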

Page 10: Paging: Example

Page 11: Demand Paging

Bring a page into physical memory (i.e., map a page to a frame) only when it is needed.

Advantages:
− Program size is no longer constrained by the physical memory size.
− Less memory needed → more processes.
− Less I/O needed → faster response.

Advantages from paging:
− Contiguous allocation is no longer needed → no external fragmentation problem.
− Arbitrary relocation is possible.
− Variable-sized I/O is no longer needed.

Page 12: Translation Look-aside Buffer (TLB)

Problem - Each (virtual) memory reference requires two memory references: one for the page table entry and one for the data!

Solution: Translation lookaside buffer.
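To show how the TLB avoids the extra reference, here is a rough sketch assuming a small fully associative TLB with a simple eviction policy (my illustration, not a real hardware design):

```python
class TLB:
    """Tiny fully associative TLB sketch with insertion-order eviction."""
    def __init__(self, capacity: int = 16):
        self.capacity = capacity
        self.entries: dict[int, int] = {}   # page number -> frame number

    def lookup(self, p: int) -> int | None:
        return self.entries.get(p)          # hit: frame number, miss: None

    def insert(self, p: int, f: int) -> None:
        if len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))   # evict the oldest entry
        self.entries[p] = f

def translate_with_tlb(tlb: TLB, page_table: dict[int, int], p: int) -> int:
    f = tlb.lookup(p)
    if f is None:             # TLB miss: walk the page table (the extra memory reference)
        f = page_table[p]     # would page-fault if p is not mapped
        tlb.insert(p, f)      # fill the TLB so the next access to p hits
    return f
```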

Page 13: A Big Picture

Page 14: On TLB misses

If the page is in memory:
− Load the PTE (page table entry) from memory and retry.
− Could be handled in hardware: can get complex for more complicated page table structures.
− Or in software: raise a special exception, with an optimized handler.

If the page is not in memory (page fault):
− The OS handles fetching the page and updating the page table.
− Then restart the faulting instruction.

Page 15: TLB Miss Handler

A TLB miss indicates either:
− Page present, but PTE not in the TLB, or
− Page not present.

Must recognize the TLB miss before the destination register is overwritten.

Raise an exception. The handler copies the PTE from memory to the TLB, then restarts the instruction. If the page is not present, a page fault will occur.

Page 16: Page Fault Handler

− Use the faulting virtual address to find the PTE.
− Locate the page on disk.
− Choose a page to replace; if dirty, write it to disk first.
− Read the page into memory and update the page table.
− Make the process runnable again.
− Restart from the faulting instruction.
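The sequence above can be sketched as Python-flavored pseudocode; the disk object and choose_victim helper are hypothetical stand-ins for OS services, not real APIs:

```python
PAGE_SIZE = 4096   # assumed page size

def handle_page_fault(faulting_va, page_table, resident_pages, disk, choose_victim):
    """Sketch of the page-fault steps on this slide (helpers are hypothetical)."""
    p = faulting_va // PAGE_SIZE
    pte = page_table[p]                        # 1. use the faulting address to find the PTE
    block = disk.locate(p)                     # 2. locate the page on disk (hypothetical call)
    victim = choose_victim(resident_pages)     # 3. choose a page to replace
    victim_pte = page_table[victim]
    if victim_pte.dirty:                       #    if dirty, write it to disk first
        disk.write(victim, victim_pte.frame)
    victim_pte.valid = False
    resident_pages.remove(victim)
    disk.read(block, victim_pte.frame)         # 4. read the faulting page into the freed frame
    pte.frame, pte.valid, pte.dirty = victim_pte.frame, True, False   # update the page table
    resident_pages.add(p)
    # 5. make the process runnable again and restart the faulting instruction
```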

Page 17: Paging: Protection and Sharing

Protection: protection is specified on a per-page basis.

Sharing: sharing is done by mapping pages in different processes to the same frames.

Page 18: Virtual Memory Performance

Example:
− Memory access time: 100 ns
− Disk access time: 25 ms (25,000,000 ns)

Effective access time:
− Let p = the probability of a page fault.
− Effective access time = 100(1 − p) + 25,000,000p ns.
− If we want only 10% degradation:
110 > 100 + 25,000,000p
10 > 25,000,000p
p < 0.0000004 (one fault every 2,500,000 references)

Lesson: the OS had better do a good job of page replacement!
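The numbers can be checked directly; this snippet only reproduces the slide's arithmetic (100 ns memory access, 25 ms disk access, at most 10% slowdown):

```python
mem_ns, disk_ns = 100, 25_000_000           # 100 ns memory access, 25 ms disk access

def effective_access_time(p: float) -> float:
    return mem_ns * (1 - p) + disk_ns * p   # in nanoseconds

# Largest fault rate that keeps the effective access time within 110 ns (10% slowdown):
p_max = (1.10 * mem_ns - mem_ns) / (disk_ns - mem_ns)
print(p_max)                                # ~4.0e-7, about one fault per 2,500,000 references
print(effective_access_time(p_max))         # ~110 ns
```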

Page 19: Replacement Algorithm - LRU (Least Recently Used) Algorithm

Replace the page that has not been used for the longest time.

Page 20: LRU Algorithm - Implementation

Maintain a stack of recently used pages according to the recency of their uses.

Top: Most recently used (MRU) page. Bottom: Least recently used (LRU) page.

Always replace the bottom (LRU) page.
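One compact way to model this stack is Python's OrderedDict, moving a page to the MRU end on each reference and evicting from the LRU end; this is a sketch, not the lecture's implementation:

```python
from collections import OrderedDict

class LRU:
    """LRU page replacement over a fixed number of frames (sketch)."""
    def __init__(self, num_frames: int):
        self.num_frames = num_frames
        self.resident = OrderedDict()   # page -> frame; first item = LRU, last item = MRU

    def access(self, page: int) -> bool:
        """Reference a page; return True if it caused a page fault."""
        if page in self.resident:
            self.resident.move_to_end(page)      # becomes the most recently used
            return False
        if len(self.resident) >= self.num_frames:
            self.resident.popitem(last=False)    # evict the least recently used page
        self.resident[page] = None
        return True

lru = LRU(num_frames=3)
faults = sum(lru.access(p) for p in [1, 2, 3, 1, 4, 2])
print(faults)   # 5 faults for this reference string
```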

Page 21: LRU Approximation - Second-Chance Algorithm

Also called the clock algorithm; a variation is used in UNIX. Maintain a circular list of pages resident in memory.

At each reference, the reference (also called used or clock) bit is simply set by hardware.

At a page fault, clock sweeps over pages looking for one with reference bit = 0.

− Replace a page that has not been referenced for one complete revolution of the clock.
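A minimal sketch of the clock sweep described above (assumed data layout, not the UNIX source): the sweep clears reference bits as it passes and replaces the first page whose bit is already 0.

```python
class Clock:
    """Second-chance (clock) replacement sketch over a fixed set of frames."""
    def __init__(self, num_frames: int):
        self.frames = [None] * num_frames   # page currently in each frame
        self.ref = [False] * num_frames     # reference (used) bit per frame
        self.hand = 0                       # clock hand position

    def access(self, page: int) -> bool:
        """Reference a page; return True on a page fault."""
        if page in self.frames:
            self.ref[self.frames.index(page)] = True   # hardware sets the reference bit
            return False
        while True:                                    # page fault: sweep for a victim
            if not self.ref[self.hand]:                # reference bit 0 -> replace this page
                self.frames[self.hand] = page
                self.ref[self.hand] = True
                self.hand = (self.hand + 1) % len(self.frames)
                return True
            self.ref[self.hand] = False                # clear the bit: give a second chance
            self.hand = (self.hand + 1) % len(self.frames)
```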

Page 22: Second-Chance Algorithm

[Figure: page table entries showing the valid/invalid bit, reference (used) bit, and frame number]

Page 23: Page Size

Small page sizes:
+ Less internal fragmentation, better memory utilization.
- Large page table, high page-fault handling overheads.

Large page sizes:
+ Small page table, small page-fault handling overheads.
- More internal fragmentation, worse memory utilization.
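The tradeoff can be made concrete with rough numbers; the parameters below (32-bit address space, 4-byte PTEs, about half a page of internal fragmentation per allocated region) are illustrative assumptions, not values from the slides:

```python
ADDRESS_SPACE = 2**32   # assumed 32-bit virtual address space
PTE_SIZE = 4            # assumed 4 bytes per page table entry

for page_size in (1024, 4096, 64 * 1024):
    num_ptes = ADDRESS_SPACE // page_size
    table_bytes = num_ptes * PTE_SIZE
    avg_internal_frag = page_size // 2   # ~half a page wasted per allocated region
    print(f"page size {page_size:>6} B: page table {table_bytes // 1024:>6} KiB, "
          f"avg internal fragmentation ~{avg_internal_frag} B per region")
```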

Page 24: I/O Interlock

Problem - DMA:
− Assume global page replacement. A process blocked on an I/O operation appears to be an ideal candidate for replacement.
− If replaced, however, the I/O operation can corrupt the system.

Solutions:
1. Lock pages in physical memory using lock bits, or
2. Perform all I/O into and out of OS space.

Page 25: Segmentation with Paging

Page 26: Segmentation with Paging

Individual segments are implemented as a paged, virtual address space.

A logical address is now a triple (s, p, o).
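A small sketch of the two-level lookup implied by (s, p, o), using a segment table whose entries point to per-segment page tables; the structures and page size are assumed for illustration:

```python
PAGE_SIZE = 4096   # assumed page size

def translate_spo(segment_table: dict[int, dict[int, int]], s: int, p: int, o: int) -> int:
    """Translate a (segment, page, offset) triple to a physical address (sketch)."""
    page_table = segment_table[s]    # segment table entry points to a per-segment page table
    frame = page_table[p]            # that page table maps page p to a frame
    return frame * PAGE_SIZE + o     # physical address = (f, o)

# Example: segment 1, page 3 is mapped to frame 9
segs = {1: {3: 9}}
print(hex(translate_spo(segs, 1, 3, 0x2C)))   # 0x902c
```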

Page 27: Segmentation with Paging

Address translation

Page 28: Segmentation with Paging

Additional benefits:
− Protection: protection can be specified on a per-segment basis rather than a per-page basis.
− Sharing.

Page 29: Typical Memory Hierarchy - The Big Picture

Page 30: Typical Memory Hierarchy - The Big Picture

Page 31: Typical Memory Hierarchy - The Big Picture

Page 32: A Common Framework for Memory Hierarchies

Question 1: Where can a Block be Placed? One place (direct-mapped), a few places (set associative), or any place (fully associative).

Question 2: How is a Block Found? Indexing (direct-mapped), limited search (set associative), full search (fully associative).

Question 3: Which Block is Replaced on a Miss? Typically LRU or random.

Question 4: How are Writes Handled? Write-through or write-back.
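For Questions 1 and 2, block placement and lookup can be illustrated by computing the set index and tag under different associativities; the block size and cache size below are assumed for illustration:

```python
BLOCK_SIZE = 64    # assumed 64-byte blocks
NUM_BLOCKS = 128   # assumed cache of 128 blocks

def candidate_set(addr: int, ways: int) -> tuple[int, int]:
    """Return (set index, tag); ways = 1 is direct-mapped, ways = NUM_BLOCKS is fully associative."""
    block_number = addr // BLOCK_SIZE
    num_sets = NUM_BLOCKS // ways        # direct-mapped: 128 sets; fully associative: 1 set
    set_index = block_number % num_sets  # indexing picks the set; the tag is searched within it
    tag = block_number // num_sets
    return set_index, tag

addr = 0x1A2B3C
print(candidate_set(addr, ways=1))            # direct-mapped: exactly one possible place
print(candidate_set(addr, ways=4))            # 4-way set associative: one of 4 places in the set
print(candidate_set(addr, ways=NUM_BLOCKS))   # fully associative: any block in the single set
```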