Page 1: CS 61C: Great Ideas in Computer Architecture (Machine Structures)

Instruction Level Parallelism: Multiple Instruction Issue
Summer 2011 -- Lecture #23 (7/28/2011)

Guest Lecturer: Justin Hsia

Page 2: You Are Here!

• Parallel Requests: assigned to computer, e.g., search "Katz"
• Parallel Threads: assigned to core, e.g., lookup, ads
• Parallel Instructions: >1 instruction @ one time, e.g., 5 pipelined instructions
• Parallel Data: >1 data item @ one time, e.g., add of 4 pairs of words
• Hardware descriptions: all gates functioning in parallel at same time

[Figure: the course's software/hardware stack, from warehouse-scale computer and smartphone down through computer, core, memory (cache), input/output, instruction unit(s) and functional unit(s) (A0+B0 ... A3+B3), to logic gates; the goal is to harness parallelism and achieve high performance. Today's lecture sits at the parallel-instructions level.]

Page 3: Agenda

• Control Hazards
• Higher Level ILP
• Administrivia
• Dynamic Scheduling
• Technology Break
• Example: AMD Barcelona
• Big Picture: Types of Parallelism
• Summary

Page 4: Review: Hazards

Situations that prevent starting the next instruction in the next clock cycle:

1. Structural hazards
   – A required resource is busy (e.g., needed in multiple stages)
2. Data hazards
   – Data dependency between instructions (see the C sketch after this list)
   – Need to wait for previous instruction to complete its data read/write
3. Control hazards
   – Flow of execution depends on previous instruction
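To make the data-hazard case concrete, here is a minimal C sketch (an illustration, not from the slides); on a classic 5-stage MIPS pipeline the compiled lw/addi pair stalls one cycle even with forwarding:

    int sum_first_two(const int *a) {
        int t0 = a[0];      /* lw   $t0, 0($a0) : value ready after MEM  */
        int t1 = t0 + 7;    /* addi $t1, $t0, 7 : needs $t0 in EX, so a  */
                            /* load-use (data) hazard forces one stall   */
        return t1 + a[1];
    }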

Page 5: Review: Load / Branch Delay Slots

• Stall is equivalent to nop

    lw  $t0, 0($t1)
    nop                 # bubble fills the load delay slot
    sub $t3, $t0, $t2
    and $t5, $t0, $t4
    or  $t7, $t0, $t6

[Figure: pipeline diagram (I$, Reg, ALU, D$, Reg stages per instruction) showing bubbles inserted so that the sub does not read $t0 before the lw has fetched it from memory.]

Page 6: Agenda

• Control Hazards
• Higher Level ILP
• Administrivia
• Dynamic Scheduling
• Technology Break
• Example: AMD Barcelona
• Big Picture: Types of Parallelism
• Summary

Page 7: Greater Instruction-Level Parallelism (ILP)

• Deeper pipeline (5 => 10 => 15 stages)
  – Less work per stage => shorter clock cycle
• Multiple issue ("superscalar")
  – Replicate pipeline stages => multiple pipelines
  – Start multiple instructions per clock cycle
  – CPI < 1, so use Instructions Per Cycle (IPC) instead
  – E.g., 4 GHz 4-way multiple-issue: 16 BIPS, peak CPI = 0.25, peak IPC = 4
  – But dependencies reduce this in practice

(§4.10 Parallelism and Advanced Instruction Level Parallelism)
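Written out, the peak-throughput arithmetic on this slide:

    peak instruction rate = clock rate x issue width
                          = (4 x 10^9 cycles/s) x (4 instructions/cycle)
                          = 16 x 10^9 instructions/s  (16 BIPS)

    peak CPI = 1 / peak IPC = 1/4 = 0.25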

Page 8: Multiple Issue

• Static multiple issue
  – Compiler groups instructions to be issued together
  – Packages them into "issue slots"
  – Compiler detects and avoids hazards
• Dynamic multiple issue
  – CPU examines instruction stream and chooses instructions to issue each cycle
  – Compiler can help by reordering instructions
  – CPU resolves hazards using advanced techniques at runtime

Page 9: Superscalar Laundry: Parallel per Stage

• More resources, HW to match mix of parallel tasks?

[Figure: laundry timeline from 6 PM to 2 AM in 30-minute slots. With duplicated hardware, tasks A-F (light, dark, and very dirty clothing) go through the wash/dry/fold stages several at a time, so the whole batch finishes far earlier than one load per stage.]

Page 10: Pipeline Depth and Issue Width

• Intel Processors over Time

    Microprocessor    Year  Clock Rate  Pipeline Stages  Issue Width  Cores  Power
    i486              1989    25 MHz     5               1            1        5 W
    Pentium           1993    66 MHz     5               2            1       10 W
    Pentium Pro       1997   200 MHz    10               3            1       29 W
    P4 Willamette     2001  2000 MHz    22               3            1       75 W
    P4 Prescott       2004  3600 MHz    31               3            1      103 W
    Core 2 Conroe     2006  2930 MHz    14               4            2       75 W
    Core 2 Yorkfield  2008  2930 MHz    16               4            4       95 W
    Core i7 Gulftown  2010  3460 MHz    16               4            6      130 W

Page 11: Pipeline Depth and Issue Width

[Figure: the table from the previous slide plotted on a log scale, 1989-2010. Clock rate and power rise steeply, while pipeline stages, issue width, and core count grow much more modestly.]

Page 12: Static Multiple Issue

• Compiler groups instructions into "issue packets"
  – Group of instructions that can be issued on a single cycle
  – Determined by pipeline resources required
• Think of an issue packet as a very long instruction word (VLIW)
  – Specifies multiple concurrent operations

Page 13: Scheduling Static Multiple Issue

• Compiler must remove some/all hazards
  – Reorders instructions into issue packets
  – No dependencies within a packet
  – Possibly some dependencies between packets
    • Varies between ISAs; compiler must know!
  – Pad with nop if necessary

Page 14: MIPS with Static Dual Issue

• Dual-issue packets
  – One ALU/branch instruction
  – One load/store instruction
  – 64-bit aligned
    • ALU/branch first, then load/store
    • Pad an unused slot with nop

    Address   Instruction type   Pipeline stages
    n         ALU/branch         IF ID EX MEM WB
    n + 4     Load/store         IF ID EX MEM WB
    n + 8     ALU/branch            IF ID EX MEM WB
    n + 12    Load/store            IF ID EX MEM WB
    n + 16    ALU/branch               IF ID EX MEM WB
    n + 20    Load/store               IF ID EX MEM WB

Page 15: Hazards in the Dual-Issue MIPS

• More instructions executing in parallel
• EX data hazard
  – Forwarding avoided stalls with single-issue
  – Now can't use an ALU result in a load/store in the same packet:

        add $t0, $s0, $s1
        lw  $s2, 0($t0)

    • Must split into two packets, effectively a stall
• Load-use hazard
  – Still one cycle of use latency, but now two instructions wide
• More aggressive scheduling required

Page 16: Scheduling Example

• Schedule this for dual-issue MIPS:

    Loop: lw   $t0, 0($s1)       # $t0 = array element
          addu $t0, $t0, $s2     # add scalar in $s2
          sw   $t0, 0($s1)       # store result
          addi $s1, $s1, -4      # decrement pointer
          bne  $s1, $zero, Loop  # branch if $s1 != 0

          ALU/branch               Load/store          Cycle
    Loop: nop                      lw  $t0, 0($s1)     1
          addi $s1, $s1, -4        nop                 2
          addu $t0, $t0, $s2       nop                 3
          bne  $s1, $zero, Loop    sw  $t0, 4($s1)     4

• IPC = 5/4 = 1.25 (c.f. peak IPC = 2)
  – Note the sw offset becomes 4($s1) because the addi has already decremented $s1
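For reference, a plausible C source for this loop (an assumption; the slide gives only the assembly): add a scalar s to every element of an array, walking backward.

    /* Hypothetical source of the loop above: $s1 walks the array from */
    /* the last element down to the first, $s2 holds the scalar s.     */
    void add_scalar(int a[], int n, int s) {
        for (int i = n - 1; i >= 0; i--)   /* addi/bne drive the loop  */
            a[i] = a[i] + s;               /* lw, addu, sw             */
    }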

Page 17: Loop Unrolling

• Replicate the loop body to expose more parallelism
• Use different registers per replication
  – Called "register renaming"
  – Avoids loop-carried "anti-dependencies"
    • Store followed by a load of the same register
    • Aka "name dependence": reuse of a register name

Page 18: Loop Unrolling Example

          ALU/branch               Load/store          Cycle
    Loop: addi $s1, $s1, -16       lw  $t0, 0($s1)     1
          nop                      lw  $t1, 12($s1)    2
          addu $t0, $t0, $s2       lw  $t2, 8($s1)     3
          addu $t1, $t1, $s2       lw  $t3, 4($s1)     4
          addu $t2, $t2, $s2       sw  $t0, 16($s1)    5
          addu $t3, $t3, $s2       sw  $t1, 12($s1)    6
          nop                      sw  $t2, 8($s1)     7
          bne  $s1, $zero, Loop    sw  $t3, 4($s1)     8

• IPC = 14/8 = 1.75
  – Closer to 2, but at the cost of registers and code size
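The same hypothetical C loop after 4x unrolling, sketching what the compiler has done here; the four temporaries play the role of the renamed registers $t0-$t3:

    /* 4x-unrolled add_scalar (assumes n is a multiple of 4).          */
    /* Distinct temporaries mirror the renamed registers, so the four  */
    /* adds have no dependences and can be scheduled freely.           */
    void add_scalar_unrolled(int a[], int n, int s) {
        for (int i = n - 4; i >= 0; i -= 4) {
            int t0 = a[i + 3], t1 = a[i + 2], t2 = a[i + 1], t3 = a[i];
            a[i + 3] = t0 + s;
            a[i + 2] = t1 + s;
            a[i + 1] = t2 + s;
            a[i]     = t3 + s;
        }
    }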

Page 19: Agenda

• Control Hazards
• Higher Level ILP
• Administrivia
• Dynamic Scheduling
• Technology Break
• Example: AMD Barcelona
• Big Picture: Types of Parallelism
• Summary

Page 20: Administrivia

• Project 2 Part 2 due Sunday
  – Slides at the end of the July 12 lecture contain useful info
• Lab 12 cancelled!
  – Replaced with a free study session where you can catch up on labs / work on Project 2
  – The TAs will still be there
• Project 3 will be posted late Sunday (7/31)
  – Two-stage pipelined CPU in Logisim

Page 21: Agenda

• Control Hazards
• Administrivia
• Higher Level ILP
• Dynamic Scheduling
• Technology Break
• Example: AMD Barcelona
• Big Picture: Types of Parallelism
• Summary

Page 22: Dynamic Multiple Issue

• "Superscalar" processors
• CPU decides whether to issue 0, 1, 2, … instructions each cycle
  – Avoiding structural and data hazards
• Avoids the need for compiler scheduling
  – Though it may still help
  – Code semantics ensured by the CPU

Page 23: Dynamic Pipeline Scheduling

• Allow the CPU to execute instructions out of order to avoid stalls
  – But commit results to registers in order
• Example:

    lw   $t0, 20($s2)
    addu $t1, $t0, $t2
    subu $s4, $s4, $t3
    slti $t5, $s4, 20

  – Can start subu while addu is waiting for lw

Page 24: Why Do Dynamic Scheduling?

• Why not just let the compiler schedule code?
• Not all stalls are predictable
  – e.g., cache misses
• Can't always schedule around branches
  – Branch outcome is dynamically determined
• Different implementations of an ISA have different latencies and hazards

Page 25: Speculation

• "Guess" what to do with an instruction
  – Start the operation as soon as possible
  – Check whether the guess was right
    • If so, complete the operation
    • If not, roll back and do the right thing
• Common to both static and dynamic multiple issue
• Examples
  – Speculate on branch outcome (branch prediction; sketch below)
    • Roll back if the path taken is different
  – Speculate on load
    • Roll back if the location is updated
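Branch prediction is the most common form of speculation. Here is a minimal C sketch of a 2-bit saturating-counter predictor (illustrative only, not from the slides; table size and indexing are assumptions):

    #include <stdbool.h>

    /* One 2-bit counter per branch, indexed by PC bits:              */
    /* 0,1 = predict not-taken; 2,3 = predict taken. Two wrong        */
    /* guesses in a row are needed to flip the prediction.            */
    static unsigned char counters[1024];   /* all start at 0          */

    bool predict(unsigned pc) {
        return counters[(pc >> 2) % 1024] >= 2;
    }

    void update(unsigned pc, bool taken) {
        unsigned char *c = &counters[(pc >> 2) % 1024];
        if (taken  && *c < 3) (*c)++;   /* strengthen toward taken     */
        if (!taken && *c > 0) (*c)--;   /* strengthen toward not-taken */
    }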

Page 26: Pipeline Hazard: Matching Socks in Later Load

• A depends on D; stall since the folder is tied up

[Figure: laundry timeline, 6 PM to 2 AM in 30-minute slots. Task A cannot be folded until its matching socks arrive in load D, so a bubble is inserted and tasks A-F slip later.]

Page 27: Out-of-Order Laundry: Don't Wait

• A depends on D; the rest continue; need more resources to allow out-of-order execution

[Figure: the same laundry timeline, but loads B, C, E, and F proceed around the bubble while A waits for D.]

Page 28: Out-of-Order Execution (1/3)

Basically, unroll loops in hardware:

1. Fetch instructions in program order (<= 4 per clock)
2. Predict branches as taken/untaken
3. To avoid hazards on registers, rename registers using a set of internal registers (~80 registers; toy sketch below)
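A toy C sketch of step 3, register renaming (illustrative; real hardware uses a rename table plus a free list of recycled physical registers):

    #define NUM_ARCH 32    /* architectural registers named by the ISA */
    #define NUM_PHYS 80    /* internal physical registers (~80)        */

    static int rename_table[NUM_ARCH];  /* arch reg -> current phys reg */
    static int next_free = NUM_ARCH;    /* toy free list: bump a count  */

    static void rename_init(void) {
        for (int i = 0; i < NUM_ARCH; i++)
            rename_table[i] = i;        /* identity mapping to start    */
    }

    /* Rename one instruction "rd = rs op rt". Sources read the current */
    /* mapping; the destination gets a fresh physical register, which   */
    /* removes WAR/WAW ("name") dependences between nearby instructions.*/
    static void rename(int rd, int rs, int rt,
                       int *prd, int *prs, int *prt) {
        *prs = rename_table[rs];
        *prt = rename_table[rt];
        *prd = next_free++ % NUM_PHYS;  /* real HW recycles freed regs  */
        rename_table[rd] = *prd;
    }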

Page 29: Out-of-Order Execution (2/3)

4. The collection of renamed instructions might execute in a window (~60 instructions)
5. Execute instructions with ready operands in one of multiple functional units (ALUs, FPUs, Ld/St)
6. Buffer results of executed instructions in a reorder buffer until predicted branches are resolved

Page 30: Out-of-Order Execution (3/3)

7. If a branch was predicted correctly, commit results in program order
8. If a branch was predicted incorrectly, discard all dependent results and restart with the correct PC

Page 31: Dynamically Scheduled CPU

[Figure: block diagram of a dynamically scheduled CPU. The front end performs branch prediction and register renaming. Instructions wait in reservation stations until all operands are available, then execute in one of several functional units ("execute … and hold"). Results go to a reorder buffer for register and memory writes, which preserves dependencies and can supply operands for issued instructions; results are also sent to any waiting reservation stations.]

Page 32: Out-of-Order Intel

• All use out-of-order execution since 2001

    Microprocessor    Year  Clock Rate  Pipeline Stages  Issue Width  OoO/Speculation  Cores  Power
    i486              1989    25 MHz     5               1            No               1        5 W
    Pentium           1993    66 MHz     5               2            No               1       10 W
    Pentium Pro       1997   200 MHz    10               3            Yes              1       29 W
    P4 Willamette     2001  2000 MHz    22               3            Yes              1       75 W
    P4 Prescott       2004  3600 MHz    31               3            Yes              1      103 W
    Core 2 Conroe     2006  2930 MHz    14               4            Yes              2       75 W
    Core 2 Yorkfield  2008  2930 MHz    16               4            Yes              4       95 W
    Core i7 Gulftown  2010  3460 MHz    16               4            Yes              6      130 W

Page 33: Agenda

• Control Hazards
• Administrivia
• Higher Level ILP
• Dynamic Scheduling
• Technology Break
• Example: AMD Barcelona
• Big Picture: Types of Parallelism
• Summary

Page 34: Agenda

• Control Hazards
• Administrivia
• Higher Level ILP
• Dynamic Scheduling
• Technology Break
• Example: AMD Barcelona
• Big Picture: Types of Parallelism
• Summary

Page 35: AMD Opteron X4 Microarchitecture

(§4.11 Real Stuff: The AMD Opteron X4 (Barcelona) Pipeline)

[Figure: Barcelona pipeline diagram. x86 instructions are decoded into RISC operations; register renaming maps the 16 architectural registers onto 72 physical registers. Queues: 106 RISC ops total, 24 integer ops, 36 FP/SSE ops, 44 loads/stores.]

Page 36: AMD Opteron X4 Pipeline Flow

• For integer operations: 12 stages (floating point is 17 stages)
  – Up to 106 RISC ops in progress
• Intel Nehalem is 16 stages for integer operations; details not revealed, but likely similar. Intel calls RISC operations "micro-operations" or "μops".

Page 37: Does Multiple Issue Work? (The BIG Picture)

• Yes, but not as much as we'd like
• Programs have real dependencies that limit ILP
• Some dependencies are hard to eliminate
  – e.g., pointer aliasing (example below)
• Some parallelism is hard to expose
  – Limited window size during instruction issue
• Memory delays and limited bandwidth
  – Hard to keep pipelines full
• Speculation can help if done well
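A minimal sketch of the pointer-aliasing problem mentioned above (a hypothetical example, not from the slides): unless it can prove a and b never overlap, neither the compiler nor the hardware may freely reorder these accesses.

    /* If a and b might point into the same array, the store to a[0]  */
    /* may change b[0], so b[0] must be re-read for the second line:  */
    /* the two statements carry a possible dependence and cannot be   */
    /* issued in parallel.                                            */
    void scale2(int *a, const int *b) {
        a[0] = b[0] * 2;
        a[1] = b[0] * 2;   /* must reload b[0]: a[0] may alias it */
    }

    /* With C99 'restrict' the programmer asserts no aliasing, letting */
    /* the compiler load b[0] once and schedule both stores together.  */
    void scale2_restrict(int *restrict a, const int *restrict b) {
        a[0] = b[0] * 2;
        a[1] = b[0] * 2;
    }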

Page 38: Agenda

• Higher Level ILP
• Administrivia
• Dynamic Scheduling
• Example: AMD Barcelona
• Technology Break
• Big Picture: Types of Parallelism
• Summary

Page 39: New-School Machine Structures (It's a bit more complicated!)

• Parallel Requests: assigned to computer, e.g., search "Katz"
• Parallel Threads: assigned to core, e.g., lookup, ads
• Parallel Instructions: >1 instruction @ one time, e.g., 5 pipelined instructions
• Parallel Data: >1 data item @ one time, e.g., add of 4 pairs of words
• Hardware descriptions: all gates functioning in parallel at same time

[Figure: the same software/hardware stack as the "You Are Here!" slide (warehouse-scale computer, computer, core, main memory, input/output, instruction and functional units, logic gates), now annotated with the course assignments: Project 1, Project 2, Project 3, Lab 14.]

Page 40: Big Picture on Parallelism

Two types of parallelism in applications:

1. Data-Level Parallelism (DLP): arises because there are many data items that can be operated on at the same time
2. Task-Level Parallelism: arises because tasks of work are created that can operate largely in parallel

Page 41: Big Picture on Parallelism

Hardware can exploit app DLP and task-level parallelism in four ways:

1. Instruction-Level Parallelism: hardware exploits application DLP using ideas like pipelining and speculative execution
2. SIMD architectures: exploit app DLP by applying a single instruction to a collection of data in parallel
3. Thread-Level Parallelism: exploits either app DLP or TLP in a tightly coupled hardware model that allows interaction among parallel threads
4. Request-Level Parallelism: exploits parallelism among largely decoupled tasks and is specified by the programmer or the operating system

Page 42: Peer Instruction

Instr LP, SIMD, Thread LP, and Request LP are examples of:

• Parallelism above the Instruction Set Architecture (∧)
• Parallelism explicitly at the level of the ISA (=)
• Parallelism below the level of the ISA (∨)

            Inst. LP   SIMD   Thr. LP   Req. LP
    Red     =          =      =         ∧
    White   ∨          =      =         ∧
    Green   =          =      ∧         ∧
    Yellow  ∨          =      ∧         ∧
    Purple  =          ∧      ∧         ∧
    Blue    ∧          ∧      ∧         ∧

Page 43: Peer Answer

Instr LP, SIMD, Thread LP, and Request LP are examples of:

• Parallelism above the Instruction Set Architecture (∧)
• Parallelism explicitly at the level of the ISA (=)
• Parallelism below the level of the ISA (∨)

            Inst. LP   SIMD   Thr. LP   Req. LP
    Red     =          =      =         ∧
    White   ∨          =      =         ∧
    Green   =          =      ∧         ∧
    Yellow  ∨          =      ∧         ∧
    Purple  =          ∧      ∧         ∧
    Blue    ∧          ∧      ∧         ∧

Page 44: Peer Question

State whether the following techniques are associated primarily with a software- or hardware-based approach to exploiting ILP (in some cases, the answer may be both): superscalar, out-of-order execution, speculation, register renaming.

            Superscalar   Out of Order   Speculation   Register Renaming
    Red     HW            HW             HW            HW
    White   SW            SW             SW            SW
    Green   Both          Both           Both          Both
    Yellow  HW            HW             Both          Both
    Purple  HW            HW             HW            Both
    Blue    HW            HW             HW            SW

Page 45: Peer Answer

State whether the following techniques are associated primarily with a software- or hardware-based approach to exploiting ILP (in some cases, the answer may be both): superscalar, out-of-order execution, speculation, register renaming.

            Superscalar   Out of Order   Speculation   Register Renaming
    Red     HW            HW             HW            HW
    Orange  SW            SW             SW            SW
    Green   Both          Both           Both          Both
    Yellow  HW            HW             Both          Both
    Pink    HW            HW             HW            Both
    Blue    HW            HW             HW            SW

Page 46: "And in Conclusion, …"

• Big ideas of instruction-level parallelism
• Pipelining, hazards, and stalls
• Forwarding and speculation to overcome hazards
• Multiple issue to increase performance
  – IPC instead of CPI
• Dynamic execution: superscalar in-order issue, branch prediction, register renaming, out-of-order execution, in-order commit
  – "Unroll loops in HW", hide cache misses

Page 47: But wait… there's more???

If we have time…

Page 48: Review: C Memory Management

• C has three pools of data memory (+ code memory)
  – Static storage: global variable storage; basically permanent, for the entire program run
  – The Stack: local variable storage, parameters, return address
  – The Heap (dynamic storage): malloc() grabs space from here, free() returns it
• Common (dynamic) memory problems
  – Using uninitialized values
  – Accessing memory beyond your allocated region
  – Improper use of free()/realloc() by messing with the pointer handle returned by malloc()
  – Memory leaks: mismatched malloc/free pairs

[Figure: memory layout from ~0hex to ~FFFF FFFFhex: code, static data, heap (growing up), and stack (growing down); the OS prevents accesses between stack and heap (via virtual memory).]
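A short, deliberately buggy C sketch exhibiting the problems listed above (illustrative, not from the slides):

    #include <stdlib.h>

    int main(void) {
        int *p = malloc(4 * sizeof(int));
        int x;

        int y = x + 1;   /* bug: using an uninitialized value          */
        p[4] = 0;        /* bug: access beyond the allocated region    */
                         /*      (valid indices are 0..3)              */
        p++;             /* bug: messing with the handle from malloc,  */
        free(p);         /*      so this free() is undefined behavior  */

        p = malloc(16);  /* bug: old block never freed -> memory leak  */
        (void)y;
        return 0;        /* p leaks again at exit                      */
    }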

Page 49: Simplest Model

• Only one program running on the computer
  – Addresses in the program are exactly the physical memory addresses
• Extensions to the simple model:
  – What if there is less physical memory than the full address space?
  – What if we want to run multiple programs at the same time?

Page 50: Problem #1: Physical Memory Less Than the Full Address Space

• One architecture, many implementations, with possibly different amounts of memory
• Memory used to be very expensive and physically bulky
• Where does the stack grow from then?

[Figure: the "logical"/"virtual" address space (code, static data, heap, stack, ~0hex to ~FFFF FFFFhex) next to a smaller "real" physical memory.]

Page 51: Idea: Level of Indirection to Create Illusion of Large Physical Memory

[Figure: an address map (table) translates virtual "page" addresses to physical "page" addresses, pages numbered 7 down to 0. The high-order bits of the virtual address index the map, which supplies the high-order bits of the physical address; the real memory is smaller than the logical/virtual address space.]
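A toy C sketch of that address map (illustrative; the page size and table contents are assumptions, and a real system would fault on unmapped pages):

    #include <stdint.h>

    #define PAGE_BITS 12   /* assume 4 KiB pages                      */
    #define NUM_PAGES 8    /* the figure's 8-entry address map        */

    static uint32_t page_table[NUM_PAGES];   /* virtual page -> physical page */

    /* High-order bits of the virtual address index the map; the     */
    /* low-order bits (the offset within the page) pass through.     */
    uint32_t translate_page(uint32_t vaddr) {
        uint32_t vpage  = (vaddr >> PAGE_BITS) & (NUM_PAGES - 1);
        uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1);
        return (page_table[vpage] << PAGE_BITS) | offset;
    }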

Page 52: Problem #2: Multiple Programs Sharing the Machine's Address Space

• How can we run multiple programs without accidentally stepping on the same addresses?
• How can we protect programs from clobbering each other?

[Figure: two applications, each with its own full code/static data/heap/stack layout from ~0hex to ~FFFF FFFFhex.]

Page 53: Idea: Level of Indirection to Create Illusion of Separate Address Spaces

[Figure: two virtual address spaces (each with code, static data, heap, and stack from ~0hex to ~FFFF FFFFhex), each translated through its own 8-entry address map into the one real memory. One table per running application, OR swap table contents when switching.]

Page 54: Extension to the Simple Model

• Multiple programs sharing the same address space
  – E.g., the operating system uses the low end of the address range, shared with the application
  – Multiple programs in a shared (virtual) address space
    • Static management: fixed partitioning/allocation of space
    • Dynamic management: programs come and go, take different amounts of time to execute, use different amounts of memory
• How can we protect programs from clobbering each other?
• How can we allocate memory to applications on demand?

Page 55: Static Division of Shared Address Space

• E.g., how to manage the carving up of the address space among the OS and applications?
• Where does the OS end and the application begin?
• Dynamic management, with protection, would be better!

[Figure: a static split of the 4 GB address space. The operating system occupies ~0hex to ~0FFF FFFFhex (2^28 bytes = 256 MB) and the application occupies ~1000 0000hex to ~FFFF FFFFhex (4 GB - 256 MB); each region holds its own code, static data, heap, and stack.]

Page 56: First Idea: Base + Bounds Registers for Location Independence

• Location-independent programs
  – Programming and storage management ease: need for a base register
• Protection
  – Independent programs should not affect each other inadvertently: need for a bound register
• Historically, base + bounds registers were a very early idea in computer architecture

[Figure: physical memory from address 0 to Max addr, with prog1 and prog2 loaded at different locations.]

Page 57: Simple Base and Bound Translation

• The effective address is added to the base register (base physical address) to form the physical address of the current segment
• The effective address is also compared against the bound register (segment length): trap to the OS if a bounds violation is detected ("seg fault" / "core dumped")
• Base and bound registers are visible/accessible to the programmer

[Figure: a lw X from the program's address space; the effective address goes both to the bounds-violation check and to the adder with the base register, producing the physical address within the current segment of physical memory.]
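A minimal C sketch of this translation (illustrative; a real machine does the check and add in hardware and traps to the OS rather than calling exit()):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static uint32_t base_reg;    /* base physical address of the segment */
    static uint32_t bound_reg;   /* segment length                       */

    /* Every effective address is checked against the bound register  */
    /* and then relocated by the base register.                       */
    uint32_t translate_base_bound(uint32_t effective_addr) {
        if (effective_addr >= bound_reg) {
            fprintf(stderr, "bounds violation (\"seg fault\")\n");
            exit(1);                 /* stand-in for the trap to the OS */
        }
        return base_reg + effective_addr;
    }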

Page 58: Programs Sharing Memory

• Why do we want to run multiple programs? Run others while waiting for I/O.
• What prevents programs from accessing each other's data?

[Figure: snapshots of physical memory over time: OS space plus pgm 1, pgm 2, and pgm 3 (8K-32K regions each); pgms 4 & 5 arrive and fill the gaps; then pgms 2 & 5 leave, leaving free fragments between pgm 1, pgm 4, and pgm 3.]

Page 59: Restriction on Base + Bounds Regs

Want only the operating system to be able to change the base and bound registers.

Processors need different execution modes:

1. User mode: can use the base and bound registers, but cannot change them
2. Supervisor mode: can use and change the base and bound registers

• Also need a mode bit (0 = User, 1 = Supervisor) to determine the processor mode
• Also need a way for a program in user mode to invoke the operating system in supervisor mode, and vice versa

Page 60: Programs Sharing Memory

As programs come and go, the storage becomes "fragmented", so at some stage programs have to be moved around to compact the storage. Is there an easy way to do this?

[Figure: the same memory snapshots as the previous "Programs Sharing Memory" slide; after pgms 2 & 5 leave, the free space is split into non-adjacent fragments around pgm 1, pgm 4, and pgm 3.]