10/25/2007 ecs150, Fall 2007 1 UCDavis, ecs150 Fall 2007 ecs150 Fall 2007: Operating System Operating System #4: Memory Management (chapter 5) Dr. S. Felix Wu Computer Science Department University of California, Davis http://www.cs.ucdavis.edu/~wu/ [email protected]
Fetches for clean text or data are typically fill-from-file.
Modified (dirty) pages are pushed to backing store (swap) on eviction.
Paged-out pages are fetched from backing store when needed.
Initial references to user stack and BSS are satisfied by zero-fill on demand.
Logical vs. Physical Address

The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.
– Logical address: generated by the CPU; also referred to as virtual address.
– Physical address: address seen by the memory unit.
Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in execution-time address-binding schemes.
Memory-Management Unit (MMU)
Hardware device that maps virtual addresses to physical addresses.
In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
The user program deals with logical addresses; it never sees the real physical addresses.
[Figure: the CPU generates a virtual address; the MMU translates it to a physical address; data is returned from memory.]
Paging: Page and Frame

Logical address space of a process can be noncontiguous; the process is allocated physical memory wherever the latter is available.
– Divide physical memory into fixed-size blocks called frames (size is a power of 2, between 512 bytes and 8192 bytes).
– Divide logical memory into blocks of the same size called pages.
– Keep track of all free frames.
– To run a program of size n pages, find n free frames and load the program.
– Set up a page table to translate logical to physical addresses.
– Internal fragmentation.
Address Translation Scheme

Address generated by the CPU is divided into:
– Page number (p): used as an index into a page table, which contains the base address of each page in physical memory.
– Page offset (d): combined with the base address to define the physical memory address that is sent to the memory unit.
Virtual Memory

MAPPING in MMU
[Figure: address-space layout; the kernel region is shared by all user processes.]
[Figure: an executable file (header, program sections: text, data (idata, wdata), symbol table, etc.) is mapped into process segments (text, data, BSS, user stack, args/env, kernel data); virtual memory (big) is mapped by virtual-to-physical translations in the MMU onto physical page frames (small), with page fetch from the executable file and pageout/eviction to backing storage.]

How to represent?
Paging
Advantages? Disadvantages?
Fragmentation
External fragmentation: total memory space exists to satisfy a request, but it is not contiguous.
Internal fragmentation: allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used.
Reduce external fragmentation by compaction:
– Shuffle memory contents to place all free memory together in one large block.
– Compaction is possible only if relocation is dynamic, and is done at execution time.
– I/O problem: latch the job in memory while it is involved in I/O, or do I/O only into OS buffers.
Page Faults

Page table access; load the missing page (replacing one); re-access the page table.
How large is the page table?
– 2^32 address space, 4K (2^12) page size.
– How many entries? 2^20 entries (1 MB).
– If 2^46, you need to access both a segment table and a page table… (2^26 GB or 2^16 TB)
Cache the page table!!
Page Faults

Hardware trap
– /usr/src/sys/i386/i386/trap.c
VM page fault handler vm_fault()
– /usr/src/sys/vm/vm_fault.c
/usr/src/sys/vm/vm_map.h
How to implement?
On the hard disk or Cache – Page Faults
Implementation of Page Table

The page table is kept in main memory.
Page-table base register (PTBR) points to the page table.
Page-table length register (PTLR) indicates the size of the page table.
In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction.
The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB).
Two Issues

– Virtual address access overhead.
– The size of the page table.
TLB: expensive, but fast (parallel searching); select a small number of page-table entries and store them in the TLB.

virt-page | modified | protection | page frame
140       | 1        | RW         | 31
20        | 0        | RX         | 38
130       | 1        | RW         | 29
129       | 1        | RW         | 62
Paging & Virtual Memory

CPU addressability: 32 bits, i.e. 2^32 bytes!!
– 2^32 is 4 gigabytes (un-segmented).
– Pentium II can support up to 2^46 (64 tera) bytes: 32-bit address, 14-bit segment#, 2-bit protection.
Very large addressable space (64 bits), and relatively smaller physical memory available…
– Let the programs/processes enjoy a much larger virtual space!!
VM with 1 Segment

MAPPING in MMU
Eventually…

MAPPING in MMU
???
On-Demand Paging
On-demand paging:
– we have to kick someone out… but which one?
– triggered by page faults.
Loading in advance (predictive/proactive):
– try to avoid page faults altogether.
Demand Paging
On a page fault the OS:
– Saves user registers and process state.
– Determines that the exception was a page fault.
– Finds a free page frame.
– Issues a read from disk into the free page frame.
– Waits for seek and rotational latency while the page transfers into memory.
– Restores process state and resumes execution.
Reference bit (one-bit timestamp):
– With each page associate a bit, initially 0.
– When the page is referenced, the bit is set to 1.
– Replace a page whose bit is 0 (if one exists). We do not know the order, however.
Second chance:
– Needs the reference bit.
– Clock replacement.
– If the page to be replaced (in clock order) has reference bit = 1, then: set the reference bit to 0, leave the page in memory, and consider the next page (in clock order), subject to the same rules.
NRU
Not Recently Used: clear the bits every 20 milliseconds.
Two bits per page: referenced, modified.
What is the problem??
Page Replacement??
An efficient approximation of LRU, with no periodic refreshing.
How to do that?
Second Chance/Clock Paging
– Does not need any "periodic" bit clearing.
– Has a "current candidate pointer" moving along the "clock".
– Chooses the first page with zero flag(s).
Clock Pages

[Figure sequence: six pages arranged on a clock (A, B, C, D, E, F). Successive faults advance the hand and replace victims in turn: A is replaced by G, then C by H (B gets a second chance), then B by I.]
10/25/2007 ecs150, Fall 2007 62
UCDavis, ecs150Fall 2007
Evaluation

Goal: the lowest page-fault rate. Evaluate an algorithm by running it on a particular string of memory references (a reference string) and computing the number of page faults on that string.
In all our examples, the reference string is
2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2.
FIFO, 3 physical pages
Reference string: 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2

Frame contents after each reference (evicted page in parentheses):
[2]  [2 3]  [2 3]  [2 3 1]  [5 (2) 3 1]  [5 2 (3) 1]  …

Page faults.
Page Replacement
2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2 OPT/LRU/FIFO/CLOCK and 3 pages how many page faults?
Thrashing
If a process does not have "enough" pages, the page-fault rate is very high. This leads to:
– low CPU utilization;
– the operating system thinking that it needs to increase the degree of multiprogramming;
– another process being added to the system.
Thrashing: a process is busy swapping pages in and out.
Thrashing
Why does paging work? The locality model:
– A process migrates from one locality to another.
– Localities may overlap.
Why does thrashing occur? Size of locality > total memory size.
How to Handle Thrashing?
Brainstorming!!
Locality In A Memory-Reference Pattern
FreeBSD VM
/usr/src/sys/vm/vm_map.h
How to implement?
Text
Initialized Data (Copy on Write)
Uninitialized Data (Zero-Fill), Anonymous Object
Stack (Zero-Fill), Anonymous Object
Page-level Allocation

• The kernel maintains a list of free physical pages.
• Two principal clients: the paging system and the kernel memory allocator.
– Not restricted to memory allocation: any collection of objects that are sequentially ordered and require allocation and freeing in contiguous chunks.
– Can allocate the exact size within any alignment restrictions; thus no internal fragmentation.
– A client may release a portion of allocated memory.
– Adjacent free regions are coalesced.
Resource Map: Good/Bad
• Disadvantages:
– The map may become highly fragmented, resulting in low utilization; poor for large requests.
– Resource map size increases with fragmentation: a static table will overflow; a dynamic table needs its own allocator.
– The map must be sorted for free-region coalescing, and sorting operations are expensive.
– Requires a linear search of the map to find a free region that matches an allocation request.
– Difficult to return borrowed pages to the paging system.
Simple Power of Twos

Has been used to implement malloc() and free() in the user-level C library (libc).
Do you know how it is implemented?
Simple Power of Twos

Uses a set of free lists, with each list storing a particular size of buffer; buffer sizes are powers of two.
Each buffer has a one-word header:
– when free, the header stores a pointer to the next free-list element;
– when allocated, the header stores a pointer to the associated free list (where the buffer is returned when freed); alternatively, the header may contain the size of the buffer.
How to allocate?
– char *ptr = (char *) malloc(100);
How to free?
– char *ptr = (char *) malloc(100);
– free(ptr);
Extra FOUR bytes for a pointer or size:
– Free: next free block
– Used: size
Free list: one-word header per buffer (pointer).
– malloc(X): size = roundup(X + sizeof(header))
– roundup(Y) = 2^n, where 2^(n-1) < Y <= 2^n
free(buf) must free the entire buffer.
Simple and reasonably fast: eliminates linear searches and fragmentation.
– Bounded time for allocations when buffers are available.
Familiar API.
Simple to share buffers between kernel modules, since freeing a buffer does not require knowing its size.
Rounding requests to a power of 2 results in wasted memory and poor utilization.
– Aggravated by requiring buffer headers, since it is not unusual for memory requests to already be a power of two.
No provision for coalescing free buffers, since buffer sizes are generally fixed.
No provision for borrowing pages from the paging system (although some implementations do this).
No provision for returning unused buffers to the page allocator.
Simple Power of Twos

void *malloc(size_t size)
{
    int ndx = 0;                  /* free list index */
    int bufsize = 1 << MINPOWER;  /* size of smallest buffer */

    size += 4;                    /* add for header */
    assert(size <= MAXBUFSIZE);
    while (bufsize < size) {
        ndx++;
        bufsize <<= 1;
    }
    /* ndx is the index on the freelist array from which a buffer
     * will be allocated */
}
Can we eliminate the need for the extra FOUR bytes?

Improved power-of-twos implementation:
– All buffers within a page must be of equal size.
– Adds a page usage array, kmemsizes[], to manage pages.
– Managed memory must be contiguous pages.
– Does not require buffer headers to indicate page size.
When freeing memory, free(buf) simply masks off the low-order bits to get the page address (actually the page offset, pg), which is used as an index into the kmemsizes array.
McKusick-Karels Allocator

• Advantages:
– eliminates space wastage in the common case where the allocation request is a power of two;
– optimizes the round-up computation, and eliminates it if the size is known at compile time.
• Disadvantages:
– similar drawbacks to the simple power-of-twos allocator;
– vulnerable to burst-usage patterns, since there is no provision for moving buffers between lists.
The Buddy System
Another interesting power-of-2 memory allocator, used in the Linux kernel.