MEMORY MANAGEMENT
prepared by Visakh V, Assistant Professor, LBSITW
Jun 26, 2015
Operating System
• An operating system (OS) is a collection of software that manages computer hardware resources and provides common services for computer programs.
• The operating system is a vital component of the system software in a computer system.
• Examples: Android, iOS, Linux, OS X, QNX, Microsoft Windows, Windows Phone, and IBM z/OS.
Memory management
• The memory management portion of the operating system is responsible for the efficient use of main memory, especially in a multiprogramming environment where processes contend for memory.
• It must also protect each process's address space from the others (including protecting the system address space from user processes).
Memory Hierarchy
Issues
• Allocation schemes
• Protecting processes from each other
• Protecting OS code
• Translating logical addresses to physical addresses
• Swapping programs
• What if physical memory is small? Virtual memory
• Sharing memory
Memory Management Schemes
• Single contiguous memory allocation
• Fixed partition memory allocation
• Variable partition memory allocation
Single contiguous memory allocation
• The user's job is assigned complete control of the CPU until the job completes or an error occurs.
• During this time, the user's job is the only program residing in memory apart from the operating system.
• Case 1: The user's job occupies the complete available memory (the upper part of memory is, however, reserved for the OS program).
• Case 2: 30K of memory is free, but a new job cannot be placed in memory because of the single contiguous allocation technique. From these cases, the following advantages and disadvantages of single contiguous allocation can be seen.
Advantages:
• It is simple to implement.
Disadvantages:
• It leads to wastage of memory, which is called fragmentation.
• It permits only uniprogramming; hence it cannot be used for multiprogramming.
• It leads to wastage of CPU time: when the current job in memory is waiting for an input or output operation, the CPU is left idle.
Fixed partition memory allocation
• The memory is divided into partitions, each of fixed size.
• This allows several user jobs to reside in memory simultaneously.
Case 1: Three jobs reside in memory, each fitting exactly into its partition, and one more partition is available for a user job. This type of fixed partition allocation supports multiprogramming.
Case 2: Suppose a new job of size 40K arrives for execution. The total amount of free memory is 40K, but the new job cannot fit into memory for execution because there is no contiguous free space of that size.
Case 2 illustrates external fragmentation, wherein there is enough free memory for a new job but it is not contiguous.
Case 3: Job 4 is allocated a memory partition of 20K but occupies only 10K of it; the remaining 10K is unused.
Case 3 illustrates internal fragmentation, wherein part of the memory internal to a partition goes unused.
Advantages:
• Provides multiprogramming.
Disadvantages:
• Internal and external fragmentation of memory.
Variable partition memory allocation
• There is no pre-determined (fixed) partitioning of memory.
• This technique allocates the exact amount of memory required by a job.
Advantages:
• Prevents internal fragmentation.
Disadvantages:
• Causes external fragmentation of memory.
Memory Management Strategies
Memory Allocation Techniques
• First Fit
• Best Fit
• Worst Fit
• Next Fit
First Fit
• Allocate the first free block that is large enough. (Figure: free blocks of 1 KB, 4 KB, 2 KB, and 2 KB.)

Best Fit
• Allocate the smallest free block that is large enough. (Figure: free blocks of 1 KB, 4 KB, 2 KB, and 2 KB.)
• Disadvantage: searching time increases, since the free list must be scanned for the closest (exact) fit.

Worst Fit
• Allocate the largest free block. (Figure: free blocks of 1 KB, 10 KB, 2 KB, and 1 KB.)

Next Fit
• Similar to First Fit, but the search starts from where the previous search finished rather than from the beginning of the free list. (Figure: free blocks of 2 KB, 4 KB, 2 KB, and 1 KB.)
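The first-, best-, and worst-fit placement decisions above can be sketched as follows (a minimal illustrative function, not a real allocator API; Next Fit is omitted because it must additionally remember where the previous search stopped):

```python
def allocate(free_blocks, request, strategy="first"):
    """Pick a free block for a `request`-KB job under the given strategy.

    free_blocks: list of free block sizes in KB, in address order.
    Returns the index of the chosen block, or None if nothing fits.
    """
    candidates = [(size, i) for i, size in enumerate(free_blocks) if size >= request]
    if not candidates:
        return None
    if strategy == "first":          # first block that is large enough
        return min(candidates, key=lambda c: c[1])[1]
    if strategy == "best":           # smallest block that is large enough
        return min(candidates)[1]
    if strategy == "worst":          # largest free block
        return max(candidates)[1]
    raise ValueError(strategy)

blocks = [1, 4, 2, 2]                # the free list from the figures, in KB
assert allocate(blocks, 2, "first") == 1   # the 4 KB block is reached first
assert allocate(blocks, 2, "best") == 2    # a 2 KB block fits exactly
assert allocate(blocks, 2, "worst") == 1   # the 4 KB block is the largest
```

Note that for a 2 KB request, Best Fit leaves the 4 KB block intact for later large requests, while Worst Fit deliberately splits it so the leftover piece stays usable.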
Buddy Systems
History
• According to Donald Knuth, the buddy system was invented in 1963 by Harry Markowitz, who won the 1990 Nobel Memorial Prize in Economics.
• It was first described by Kenneth C. Knowlton (published 1965).
• Nowadays Linux uses the buddy system to manage allocation of memory, possibly because it allocates many structures whose sizes are already powers of two, such as frames.
INTRODUCTION
• The buddy memory allocation technique is a memory allocation algorithm that divides memory into partitions to try to satisfy a memory request as suitably as possible.
• This system works by splitting memory into halves to try to give a best fit.
• Compared to the more complex memory allocation techniques that some modern operating systems use, buddy memory allocation is relatively easy to implement.
• It supports limited but efficient splitting and coalescing of memory blocks.
Why the Buddy System?
• A fixed partitioning scheme limits the number of active processes and may use space inefficiently if there is a poor match between available partition sizes and process sizes.
• A dynamic partitioning scheme is more complex to maintain and includes the overhead of compaction.
• An interesting compromise between fixed and dynamic partitioning is the buddy system.
What are Buddies?
• The (binary) buddy system allows a single allocation block to be split to form two blocks, each half the size of the parent block. These two blocks are known as 'buddies'.
• Part of the definition of a 'buddy' is that the buddy of block B must be the same size as B and adjacent in memory (so that it is possible to merge them later).
• The other important property of buddies stems from the fact that in the buddy system, every block is at an address in memory which is exactly divisible by its size.
• So all 16-byte blocks are at addresses which are multiples of 16; all 64K blocks are at addresses which are multiples of 64K; and so on.
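A convenient consequence of this alignment property (a one-line sketch, not from the slides) is that a block's buddy can be found by flipping a single address bit with XOR:

```python
def buddy_address(addr, size):
    """Address of the buddy of a block at `addr` of power-of-two `size` bytes.

    Relies on the invariant that every block's address is a multiple of its
    size, so XOR-ing with the size flips exactly the bit that distinguishes
    the two halves of the parent block.
    """
    return addr ^ size

# Two 64-byte buddies at addresses 0 and 64 point at each other:
assert buddy_address(0, 64) == 64
assert buddy_address(64, 64) == 0
# The 128-byte block at address 128 has its buddy at 0:
assert buddy_address(128, 128) == 0
```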
TYPES OF BUDDY SYSTEM
• A number of buddy systems have been proposed by researchers, which are capable of reducing execution time and increasing memory utilization.
Four types of buddy system:
• Binary buddy system
• Fibonacci buddy system
• Weighted buddy system
• Tertiary buddy system
How Do They Differ?
• These buddy systems are similar in the design of the algorithm; the major difference lies in the sizes of the memory blocks.
• They also differ in memory utilization and execution time.
• A buddy system that performs well in one situation may not perform well in another.
• Performance depends on the pattern of memory requests, which can make external and internal fragmentation higher in some situations.
BINARY BUDDY SYSTEM
• In the binary buddy system, a memory block of size 2^m is split into two equal parts of size 2^(m-1).
• It satisfies the following recurrence relation:
  L_i = L_{i-1} + L_{i-1}
• Example split tree: a block of size 8 splits into two blocks of size 4, each of which can split into two blocks of size 2.
Binary Buddy System
• The memory consists of a collection of blocks of consecutive memory, each of which is a power of two in size.
• Each block is marked either occupied or free, depending on whether it is allocated to the user.
• For each block we also know its size.
• The system provides two operations for supporting dynamic memory allocation:
  1. Allocate(2^k): finds a free block of size 2^k, marks it as occupied, and returns a pointer to it.
  2. Deallocate(B): marks the previously allocated block B as free and may merge it with others to form a larger free block.
Allocation in the Binary Buddy System
• The buddy system maintains a list of the free blocks of each size (called a free list), so that it is easy to find a block of the desired size, if one is available.
• If no block of the requested size is available, Allocate searches for the first nonempty free list of blocks of at least the requested size.
• In either case, a block is removed from the free list.
• This process of finding a large enough free block is the most difficult operation to perform quickly.
• If the found block is larger than the requested size, say 2^k instead of the desired 2^i, the block is split in half, making two blocks of size 2^(k-1).
• If this is still too large (k − 1 > i), one of the blocks of size 2^(k-1) is split in half.
• This process is repeated until we have blocks of size 2^(k-1), 2^(k-2), ..., 2^(i+1), 2^i, and 2^i.
• Then one of the blocks of size 2^i is marked as occupied and returned to the user.
• The others are added to the appropriate free lists.
• Each such split creates two halves B1 and B2; B1 is the buddy of B2, and B2 is the buddy of B1.
Deallocation in the Binary Buddy System
• When a block is deallocated, the buddy system checks whether the block can be merged with any others, or more precisely, whether we can undo any splits that were performed to create this block.
• The merging process checks whether the buddy of the deallocated block is also free, in which case the two blocks are merged.
• It then checks whether the buddy of the resulting block is also free, in which case they are merged, and so on.
Block Header in the Buddy System
• It is crucial for performance purposes to know, given a block address, the size of the block and whether it is occupied.
• This is usually done by storing a block header in the first few bits of the block.
• More precisely, we use headers in which the first bit is the occupied bit and the remaining bits specify the size of the block.
• E.g., to determine whether the buddy of a block is free, we compute the buddy's address, look at the first bit at that address, and also check that the two sizes match.
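The allocate/split and deallocate/merge procedures described above can be sketched as a small allocator (an illustrative design of our own, not a production implementation; for simplicity it tracks block metadata in Python dictionaries rather than in-block headers):

```python
class BuddyAllocator:
    """Toy binary buddy allocator: free_lists[k] holds addresses of free 2**k blocks."""

    def __init__(self, total_power):
        self.top = total_power
        self.free_lists = {k: set() for k in range(total_power + 1)}
        self.free_lists[total_power].add(0)      # one big free block at address 0
        self.allocated = {}                      # addr -> k (block is 2**k bytes)

    def allocate(self, k):
        """Return the address of a free 2**k block, or None if none can be made."""
        # Find the first nonempty free list of at least the requested size.
        j = next((j for j in range(k, self.top + 1) if self.free_lists[j]), None)
        if j is None:
            return None
        addr = self.free_lists[j].pop()
        while j > k:                             # split down to size 2**k
            j -= 1
            self.free_lists[j].add(addr + 2 ** j)    # right half stays free
        self.allocated[addr] = k
        return addr

    def deallocate(self, addr):
        """Free a block and merge with its buddy as long as the buddy is free."""
        k = self.allocated.pop(addr)
        while k < self.top:
            buddy = addr ^ (2 ** k)              # buddy address via the XOR trick
            if buddy not in self.free_lists[k]:
                break                            # buddy occupied: stop merging
            self.free_lists[k].remove(buddy)
            addr = min(addr, buddy)              # merged block starts at the lower address
            k += 1
        self.free_lists[k].add(addr)

alloc = BuddyAllocator(10)        # a 1024-byte pool
a = alloc.allocate(6)             # 64-byte block, carved out by repeated splits
b = alloc.allocate(6)             # its buddy
alloc.deallocate(a)
alloc.deallocate(b)               # cascading merges restore the single 1024-byte block
assert alloc.free_lists[10] == {0}
```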
Example: 1 MB of memory is managed using the buddy system. Show the binary tree form and list form for the following sequence:
Request 100K (A); Request 240K (B); Request 64K (C); Request 256K (D); Release B; Release A; Request 75K (E); Release C; Release E; Release D.
Memory layout after each operation (an allocated block is shown as (job) used K + internal fragmentation K; the remaining entries are free block sizes in K):

Initially:         1 MB free
Request 100K (A):  (A)100+28 | 128 | 256 | 512
Request 240K (B):  (A)100+28 | 128 | (B)240+16 | 512
Request 64K (C):   (A)100+28 | (C)64 | 64 | (B)240+16 | 512
Request 256K (D):  (A)100+28 | (C)64 | 64 | (B)240+16 | (D)256 | 256
Release B:         (A)100+28 | (C)64 | 64 | 256 | (D)256 | 256
Release A:         128 | (C)64 | 64 | 256 | (D)256 | 256
Request 75K (E):   (E)75+53 | (C)64 | 64 | 256 | (D)256 | 256
Release C:         (E)75+53 | 128 | 256 | (D)256 | 256
Release E:         512 | (D)256 | 256
Release D:         1 MB free again
Snapshot (after Release B): A=128 | C=64 | 64 | 256 | D=256 | 256, where the labeled blocks are used memory and the unlabeled entries are unused (free) memory.
FIBONACCI BUDDY SYSTEM
• Hirschberg, taking Knuth's suggestion, designed a Fibonacci buddy system with block sizes that are Fibonacci numbers.
• It satisfies the following recurrence relation:
  L_i = L_{i-1} + L_{i-2}
• Block sizes: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, ...
• Example split tree: a block of size 610 splits into 377 and 233; the 377 block can split into 233 and 144, and the 233 block into 144 and 89.
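The recurrence L_i = L_{i-1} + L_{i-2} generates the legal block sizes; a quick sketch (the choice of the two smallest base sizes is a design parameter, assumed here to be 1 and 2):

```python
def fibonacci_sizes(n):
    """First n block sizes satisfying L[i] = L[i-1] + L[i-2],
    as in the Fibonacci buddy system, starting from base sizes 1 and 2."""
    sizes = [1, 2]
    while len(sizes) < n:
        sizes.append(sizes[-1] + sizes[-2])   # each size is the sum of the two before it
    return sizes[:n]

assert fibonacci_sizes(10) == [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```

Unlike the binary system, a split here yields two *unequal* buddies (e.g. 610 splits into 377 and 233), which gives a finer-grained set of block sizes and hence less internal fragmentation.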
Advantages of the Buddy System
• Less external fragmentation.
• Searching for a block of the right size is cheaper than best fit, because we need only find the first available block on the free list for blocks of size 2^k.
• Merging adjacent free blocks is easy.
• In buddy systems, the cost to allocate and free a block of memory is low compared to that of best-fit or first-fit algorithms.
Disadvantages of the Buddy System
• It allows internal fragmentation. For example, a request for 515K requires a block of size 1024K, wasting 509K.
• Splitting and merging adjacent areas is a recurrent operation and thus unpredictable and inefficient.
• Another drawback of the buddy system is the time required to fragment and merge blocks.
Freeing Memory
• Whenever a memory block is freed, it has to be added to the free list*.
• Sometimes there will be many small free blocks on the free list, and it will not be possible to fulfill a request for a large memory block.
*A free list is a data structure used in a scheme for dynamic memory allocation. It operates by connecting unallocated regions of memory together in a linked list, using the first word of each unallocated region as a pointer to the next.
• At allocation time, bigger blocks are split into smaller blocks, so we need some combination procedure that combines free blocks into larger blocks.
• If any neighbor of the freed block is free, we can remove it from the free list, combine the contiguous free blocks to form a larger free block, and put this larger free block on the free list.
• Usually the free list is arranged in order of increasing memory address.
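The combination procedure over an address-ordered free list can be sketched as follows (an illustrative batch version; a real allocator would combine neighbors incrementally at free time):

```python
def coalesce(free_list):
    """Merge contiguous free blocks in an address-ordered free list.

    free_list: list of (address, size) tuples.
    Returns a new list in which touching neighbors have been combined.
    """
    merged = []
    for addr, size in sorted(free_list):
        if merged and merged[-1][0] + merged[-1][1] == addr:
            prev_addr, prev_size = merged.pop()
            merged.append((prev_addr, prev_size + size))  # neighbors touch: combine
        else:
            merged.append((addr, size))
    return merged

# The blocks at 0..99 and 100..149 are contiguous and combine into one;
# the block at 300 has no free neighbor and stays as it is:
assert coalesce([(0, 100), (100, 50), (300, 20)]) == [(0, 150), (300, 20)]
```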
Boundary Tag Method
• In this method, there is no need to traverse the free list to find the address and free status of adjacent free blocks.
• To achieve this, we store some extra information in every block.
• When a block is freed, we need to locate its left and right neighbors and determine whether they are free.
In approach (a), each block is bracketed with the size and status of that block, allowing one end of any block to be found from the other, and allowing the status of a block to be inspected from either end.
In approach (b), each block is prefixed with its status and with the address of each of its neighbors.
Tag: descriptive information associated with a block of data.
Boundary tag algorithms: these tags are stored in the boundaries between adjacent blocks, and such approaches are called boundary tag algorithms.
Tag fields:
• Status bit: 0 if the block is free, 1 otherwise
• Block size
• Address of previous free block
• Address of next free block
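Approach (a) can be sketched as follows: each block carries a (status, size) tag at both ends, so a freed block can inspect its left neighbor via that neighbor's footer without walking the free list. (Memory is modeled here as a Python list of word-sized slots; field layout and function names are illustrative simplifications.)

```python
FREE, USED = 0, 1

def write_block(mem, addr, size, status):
    """Lay out a block as [header][payload][footer], sizes in words.
    The same (status, size) tag is written at both boundaries."""
    mem[addr] = (status, size)               # header tag
    mem[addr + size - 1] = (status, size)    # footer tag

def left_neighbor_free(mem, addr):
    """One word to the left of a block's header is the left neighbor's footer,
    so the neighbor's status is readable in constant time."""
    status, size = mem[addr - 1]
    return status == FREE

mem = [None] * 20
write_block(mem, 0, 8, FREE)     # a free 8-word block at address 0
write_block(mem, 8, 6, USED)     # a used 6-word block right after it
assert left_neighbor_free(mem, 8)        # block at 0 is free: merge is possible
assert not left_neighbor_free(mem, 14)   # block at 8 is used: no merge
```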
Fig. A: (figure of a memory layout containing blocks P1–P4 with boundary tags)
Exercise: using the boundary tag method, free P3, P4, P2, and P1 in Fig. A.
Compaction
• After repeated allocation and deallocation of blocks, the memory becomes fragmented.
• Compaction is a technique that joins the noncontiguous free memory blocks to form one large block, so that the total free memory becomes contiguous.
• All the memory blocks that are in use are moved towards the beginning of the memory.
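The relocation step can be sketched as follows (an illustrative simplification: real compaction must also update every pointer into the moved blocks, which is the expensive part):

```python
def compact(blocks, total):
    """Slide all in-use blocks to the start of memory.

    blocks: list of (address, size, in_use) tuples; total: memory size.
    Returns the relocated in-use blocks followed by one merged free region.
    """
    next_addr = 0
    relocated = []
    for addr, size, in_use in sorted(blocks):    # walk blocks in address order
        if in_use:
            relocated.append((next_addr, size, True))
            next_addr += size                    # pack used blocks back to back
    free_region = (next_addr, total - next_addr, False)
    return relocated + [free_region]

# Used blocks at 0 and 500 pack together, leaving one contiguous 700-unit free block:
layout = [(0, 100, True), (100, 400, False), (500, 200, True), (700, 300, False)]
assert compact(layout, 1000) == [(0, 100, True), (100, 200, True), (300, 700, False)]
```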
Garbage Collection
• Garbage: memory blocks that are allocated but no longer in use.
• Garbage collection techniques are used to recognize garbage blocks and automatically free them.
• It is also known as automatic memory management.
• The main work of a garbage collector is to differentiate between garbage and non-garbage blocks and return the garbage blocks to the free list.
• Two common approaches to garbage collection are:
  i. Reference Counting
  ii. Mark and Sweep
i. Reference Counting
• Each allocated block contains a reference count.
• Reference count: indicates the number of pointers pointing to this block.
• It is incremented each time we create or copy a pointer to the block, and decremented when a pointer to the block is destroyed.
• When the reference count of a block becomes zero, the block is unreachable and is considered garbage.
• The garbage block is immediately made reusable by placing it on the free list.
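The counting rules above can be sketched as a toy scheme (names like `add_ref` and `release` are illustrative, not the API of any particular runtime):

```python
class Block:
    """A heap block carrying its own reference count."""
    def __init__(self):
        self.refcount = 0

free_list = []                       # reclaimed blocks land here

def add_ref(block):
    block.refcount += 1              # a pointer to the block was created or copied

def release(block):
    block.refcount -= 1              # a pointer to the block was destroyed
    if block.refcount == 0:          # unreachable: reclaim immediately
        free_list.append(block)

b = Block()
add_ref(b)                           # first pointer
add_ref(b)                           # a copy of the pointer
release(b)
assert b.refcount == 1 and free_list == []
release(b)                           # last pointer gone: block becomes garbage
assert free_list == [b]
```

Note how reclamation happens the instant the count hits zero, which is exactly the "freed as soon as it becomes garbage" property below; a cycle of blocks pointing at each other, however, would keep every count above zero forever.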
Advantage: a memory block is freed as soon as it becomes garbage.
Disadvantage: it cannot handle cyclic references correctly.
ii. Mark and Sweep
• The mark-and-sweep garbage collector runs when the system is very low on memory and it is not possible to allocate any space for the user.
• All application programs halt temporarily while this garbage collector runs.
• This takes place in two phases:
• Mark phase: all the non-garbage (reachable) blocks are marked.
• Sweep phase: the collector sweeps over the memory and returns all the unmarked (garbage) blocks to the free list. (No movement of blocks occurs here.)
Advantages:
a) Can handle cyclic references.
b) No overhead of maintaining reference counts.
Disadvantages:
a) It uses a stop-the-world approach.
b) Thrashing occurs when most of the memory is in use.
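The two phases can be sketched over a toy heap modeled as a reference graph (the dict-based heap and the function name are illustrative, not any runtime's real representation):

```python
def mark_and_sweep(heap, roots):
    """Toy mark-and-sweep pass.

    heap: dict mapping block id -> list of block ids it references.
    roots: ids directly reachable by the program (stack, globals, ...).
    Returns (live, garbage) as sets of block ids.
    """
    marked = set()
    stack = list(roots)
    while stack:                        # mark phase: flag everything reachable
        node = stack.pop()
        if node not in marked:
            marked.add(node)
            stack.extend(heap[node])    # follow outgoing references
    garbage = set(heap) - marked        # sweep phase: unmarked blocks are garbage
    return marked, garbage

# 'a' -> 'b' is reachable from the roots; 'c' and 'd' form an
# unreachable cycle, which mark-and-sweep still collects:
heap = {"a": ["b"], "b": [], "c": ["d"], "d": ["c"]}
live, garbage = mark_and_sweep(heap, roots=["a"])
assert live == {"a", "b"} and garbage == {"c", "d"}
```

The cycle example is the key contrast with reference counting: `c` and `d` keep each other's counts nonzero, yet neither is marked here, so both are swept.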