Page 1: ch09 morris mano

Computer System Architecture (Dept. of Info. of Computer)
Chap. 9 Pipeline and Vector Processing


9-1 Parallel Processing

- Simultaneous data processing tasks for the purpose of increasing the computational speed
- Perform concurrent data processing to achieve faster execution time
- Multiple functional units: Fig. 9-1
» Separates the execution unit into eight functional units operating in parallel

Computer architectural classification
- Data-Instruction Stream: Flynn
- Serial versus Parallel Processing: Feng
- Parallelism and Pipelining: Händler

Flynn's Classification

1) SISD (Single Instruction - Single Data stream)
» For practical purposes, this is the conventional single-processor machine
» Example systems: Amdahl 470V/6, IBM 360/91

[Fig. 9-1 (parallel processing example): a processor with multiple functional units operating in parallel: adder-subtractor, integer multiply, floating-point add-subtract, incrementer, shift unit, logic unit, floating-point divide, floating-point multiply, all fed from the processor registers and connected to memory.]

[SISD organization: a single control unit (CU) issues one instruction stream (IS) to one processing unit (PU), which exchanges one data stream (DS) with the memory module (MM).]

Page 2: ch09 morris mano


2) SIMD (Single Instruction - Multiple Data stream)

» Well suited to vector or array operations: one vector instruction performs many operations on a data stream

» Example systems: CRAY-1, ILLIAC-IV

3) MISD (Multiple Instruction - Single Data stream)

» Not used in practice, because the single data stream becomes a bottleneck

[SIMD organization: one control unit (CU) broadcasts a single instruction stream (IS) to processing units PU 1 ... PU n; each PU i exchanges its own data stream DS i with memory modules MM1 ... MMn through a shared memory.]

[MISD organization: control units CU 1 ... CU n issue separate instruction streams IS 1 ... IS n to PU 1 ... PU n, all of which operate on a single data stream (DS) through a shared memory.]

Page 3: ch09 morris mano


4) MIMD (Multiple Instruction - Multiple Data stream)

» Used in most multiprocessor systems

Main topics in this chapter
- Pipeline processing: Sec. 9-2
» Arithmetic pipeline: Sec. 9-3
» Instruction pipeline: Sec. 9-4
- Vector processing: uses adder/multiplier pipelines, Sec. 9-6
- Array processing: uses a separate array processor, Sec. 9-7
» Attached array processor: Fig. 9-14
» SIMD array processor: Fig. 9-15
- Computation on large vectors, matrices, and array data

[MIMD organization: control units CU 1 ... CU n issue instruction streams IS 1 ... IS n to PU 1 ... PU n; each PU i has its own data stream DS and the processors share memory modules MM1 ... MMn through a shared memory.]

Page 4: ch09 morris mano


9-2 Pipelining

Principle of pipelining
- Decompose a sequential process into suboperations
- Each suboperation is executed concurrently in its own dedicated segment

Pipelining example: Fig. 9-2
- Multiply-and-add operation Ai * Bi + Ci (for i = 1, 2, ..., 7), split into three suboperation segments:
» 1) R1 <- Ai, R2 <- Bi : input Ai and Bi
» 2) R3 <- R1 * R2, R4 <- Ci : multiply and input Ci
» 3) R5 <- R3 + R4 : add Ci
- Contents of the registers in the pipeline example: Tab. 9-1 (see the simulation sketch below)
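The register transfers above can be checked with a small simulation. The following is a minimal C sketch (not from the textbook; the data values are made up) that steps the three segments once per clock, back to front, and prints the register contents in the spirit of Tab. 9-1:

    /*
     * Minimal sketch of the three-segment pipeline of Fig. 9-2 computing
     * Ai*Bi + Ci. Register names R1-R5 follow the figure; the data values
     * are made up. Segments are updated back to front so each one consumes
     * the values latched on the previous clock.
     */
    #include <stdio.h>

    #define N 7                                   /* number of tasks (i = 1..7) */

    int main(void) {
        double A[N], B[N], C[N];
        for (int i = 0; i < N; i++) { A[i] = i + 1; B[i] = 2; C[i] = 10 * (i + 1); }

        double R1 = 0, R2 = 0, R3 = 0, R4 = 0, R5 = 0;
        printf("clk    R1    R2      R3    R4       R5\n");
        for (int clk = 1; clk <= N + 2; clk++) {         /* k + n - 1 = 9 clocks */
            if (clk >= 3) { R5 = R3 + R4; }                       /* segment 3: add Ci       */
            if (clk >= 2 && clk <= N + 1) { R3 = R1 * R2; R4 = C[clk - 2]; } /* segment 2    */
            if (clk <= N) { R1 = A[clk - 1]; R2 = B[clk - 1]; }   /* segment 1: load Ai, Bi  */
            printf("%3d %5.0f %5.0f %7.0f %5.0f %8.0f\n", clk, R1, R2, R3, R4, R5);
        }
        return 0;
    }

The last result, R5 = A7*B7 + C7, appears in clock cycle 9, i.e. after k + n - 1 cycles.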

General considerations
- 4-segment pipeline: Fig. 9-3
» S: combinational circuit performing the suboperation of a segment
» R: register holding the intermediate result between segments
- Space-time diagram: Fig. 9-4
» Shows segment utilization as a function of time
- Task T1, T2, T3, ..., T6
» The total operation performed going through all the segments

[Figure residue: Fig. 9-2 register/segment diagram (registers R1-R5, inputs Ai, Bi, Ci, product Ai * Bi) and Fig. 9-4 axis labels (segment versus clock cycle).]

Page 5: ch09 morris mano


Speedup S: nonpipelined time / pipelined time

S = n • tn / [(k + n - 1) • tp] = (6 • 6tp) / [(4 + 6 - 1) • tp] = 36tp / 9tp = 4

» n: number of tasks (6)

» tn: time to complete each task on the nonpipelined machine (6 clock cycles = 6tp)

» tp: clock cycle time (1 clock cycle)

» k: number of segments (4)

If n → ∞, then S → tn / tp.

If we assume that one task takes the same time in both cases, i.e. nonpipelined tn = pipelined k • tp, then

S = tn / tp = k • tp / tp = k

so in theory the pipeline is up to k times faster (k = number of segments). With k = 4 and n = 6, the pipeline finishes all tasks in k + n - 1 = 9 clock cycles (Fig. 9-4).

Pipelines are divided into arithmetic pipelines (Sec. 9-3) and instruction pipelines (Sec. 9-4).
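A quick numeric check of the speedup formula with the slide's values (k = 4, n = 6, tn = 6tp); a throwaway sketch, not part of the original material:

    #include <stdio.h>

    int main(void) {
        int    k  = 4;                 /* number of pipeline segments            */
        int    n  = 6;                 /* number of tasks                        */
        double tp = 1.0;               /* clock cycle time                       */
        double tn = 6.0 * tp;          /* nonpipelined time per task (6 cycles)  */

        double S     = (n * tn) / ((k + n - 1) * tp);  /* = 36tp / 9tp = 4       */
        double S_max = tn / tp;                        /* limit as n -> infinity */
        printf("S = %.2f (limit %.2f)\n", S, S_max);
        return 0;
    }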

9-3 Arithmetic Pipeline

Floating-point adder pipeline example: Fig. 9-6
- Add/subtract two normalized floating-point binary numbers
» X = A x 2^a = 0.9504 x 10^3
» Y = B x 2^b = 0.8200 x 10^2

[Fig. 9-4 space-time diagram residue: tasks T1-T6 pass through segments 1-4 over clock cycles 1-9; total processing time in the pipeline = k + n - 1 = 9 clock cycles.]

Page 6: ch09 morris mano


4-segment suboperations

» 1) Compare exponents by subtraction: 3 - 2 = 1
   X = 0.9504 x 10^3
   Y = 0.8200 x 10^2

» 2) Align mantissas
   X = 0.9504 x 10^3
   Y = 0.0820 x 10^3

» 3) Add mantissas
   Z = 1.0324 x 10^3

» 4) Normalize result
   Z = 0.10324 x 10^4
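The four suboperations can be traced in software on the slide's decimal example. A minimal C sketch follows; it keeps the mantissa as a plain double and the exponent as an int, which only illustrates the data flow, not the binary hardware:

    #include <stdio.h>

    int main(void) {
        double mA = 0.9504; int eA = 3;      /* X = 0.9504 x 10^3 */
        double mB = 0.8200; int eB = 2;      /* Y = 0.8200 x 10^2 */

        /* Segment 1: compare exponents by subtraction */
        int diff = eA - eB;                  /* 3 - 2 = 1 */

        /* Segment 2: choose the larger exponent, align the smaller mantissa */
        int e = (diff >= 0) ? eA : eB;
        if (diff >= 0) { for (int i = 0; i <  diff; i++) mB /= 10.0; }
        else           { for (int i = 0; i < -diff; i++) mA /= 10.0; }

        /* Segment 3: add mantissas */
        double m = mA + mB;                  /* 0.9504 + 0.0820 = 1.0324 */

        /* Segment 4: normalize the result (mantissa in [0.1, 1)) and adjust exponent */
        while (m >= 1.0)         { m /= 10.0; e++; }
        while (m > 0 && m < 0.1) { m *= 10.0; e--; }

        printf("Z = %.5f x 10^%d\n", m, e);  /* 0.10324 x 10^4 */
        return 0;
    }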

[Fig. 9-6: pipeline for floating-point addition/subtraction. Inputs: exponents a, b and mantissas A, B. Segment 1: compare exponents by subtraction (difference); Segment 2: choose exponent, align mantissas; Segment 3: add or subtract mantissas; Segment 4: adjust exponent, normalize result; registers R between the segments.]

Page 7: ch09 morris mano


9-4 Instruction Pipeline

Instruction cycle

1) Fetch the instruction from memory

2) Decode the instruction

3) Calculate the effective address

4) Fetch the operands from memory

5) Execute the instruction

6) Store the result in the proper place

Example: Four-Segment Instruction Pipeline
- Four-segment CPU pipeline: Fig. 9-7

» 1) FI: fetch the instruction

» 2) DA: decode the instruction and calculate the effective address

» 3) FO: fetch the operand

» 4) EX: execute the instruction

- Timing of the instruction pipeline: Fig. 9-8

» In the example, instruction 3 is a branch instruction; a simple timing-chart sketch (without branches) follows
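For the no-branch case, the diagonal pattern of Fig. 9-8 follows directly from "instruction i occupies segment s in clock cycle i + s - 1". A small C sketch (illustrative only; it ignores branches) that prints such a chart:

    #include <stdio.h>

    int main(void) {
        const char *seg[4] = {"FI", "DA", "FO", "EX"};
        int n = 7;                                  /* instructions 1..7        */

        printf("instr:");
        for (int c = 1; c <= n + 3; c++) printf(" %3d", c);
        printf("\n");

        for (int i = 1; i <= n; i++) {
            printf("  %2d  :", i);
            for (int c = 1; c <= n + 3; c++) {
                int s = c - i + 1;                  /* segment index in cycle c */
                if (s >= 1 && s <= 4) printf("  %s", seg[s - 1]);
                else                  printf("    ");
            }
            printf("\n");
        }
        return 0;
    }

A branch taken at instruction 3 would, as in Fig. 9-8, squash the instructions fetched behind it and restart fetching at the branch target, stretching the chart to 13 cycles.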

[Fig. 9-7: four-segment CPU pipeline. Segment 1: fetch instruction from memory; Segment 2: decode instruction and calculate effective address; Segment 3: fetch operand from memory; Segment 4: execute instruction; decision boxes test for branch and interrupt, update the PC, handle interrupts, and empty the pipe.

Fig. 9-8: timing of the instruction pipeline; seven instructions stepping through FI/DA/FO/EX over 13 clock cycles, with instruction 3 a branch (branch and no-branch cases shown).]

Page 8: ch09 morris mano


Pipeline conflicts: 3 major difficulties

1) Resource conflicts
» Memory access by two segments at the same time

2) Data dependency
» An instruction depends on the result of a previous instruction, but that result is not yet available

3) Branch difficulties
» Branch and other instructions (interrupt, return, ...) that change the value of the PC

Resolving data dependency
- Hardware approaches

» Hardware interlock: the hardware inserts a delay until the result of the previous instruction becomes available

» Operand forwarding: the result of the previous instruction is routed directly to the ALU (normally it would first go through a register)

- Software approach
» Delayed load: the compiler inserts no-operation instructions until the result of the previous instruction is available

(A sketch contrasting interlock and forwarding follows.)
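The difference between the two hardware schemes can be illustrated with a toy timing model. The sketch below is a deliberately simplified assumption (result written at the end of EX; operands normally read in FO; forwarding feeds the EX stage directly), not Mano's circuit:

    #include <stdio.h>

    typedef struct { int dest, src1, src2; } Instr;   /* register numbers, -1 = none */

    static int total_cycles(const Instr *ins, int n, int forwarding) {
        int issue[32];                                 /* clock at which FI starts    */
        for (int i = 0; i < n; i++) {
            issue[i] = (i == 0) ? 1 : issue[i - 1] + 1;
            for (int j = 0; j < i; j++) {
                int d = ins[j].dest;
                if (d < 0 || (d != ins[i].src1 && d != ins[i].src2)) continue;
                int ready = issue[j] + 4;                    /* first cycle value is usable */
                int need  = issue[i] + (forwarding ? 3 : 2); /* consumer's EX vs FO         */
                if (need < ready) issue[i] += ready - need;  /* interlock inserts stalls    */
            }
        }
        return issue[n - 1] + 3;                       /* last instruction leaves EX  */
    }

    int main(void) {
        /* R1 = R2 + R3; R4 = R1 + R5; R6 = R4 + R7 (two back-to-back dependencies) */
        Instr prog[] = { {1, 2, 3}, {4, 1, 5}, {6, 4, 7} };
        printf("interlock only : %d cycles\n", total_cycles(prog, 3, 0));
        printf("with forwarding: %d cycles\n", total_cycles(prog, 3, 1));
        return 0;
    }

With the two back-to-back dependencies in this example program, the interlock-only run takes 8 cycles and the forwarding run takes 6: forwarding removes the bubbles but does not change program order.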

Handling of branch instructions
- Prefetch target instruction

» For a conditional branch, both the branch target instruction (condition true) and the next sequential instruction (condition false) are prefetched

Page 9: ch09 morris mano


- Branch target buffer (BTB)
» 1) An associative memory stores the branch target address together with the next few instructions that follow the branch target
» 2) When a branch instruction is encountered, the BTB is searched first; on a hit, the instructions are taken directly from the BTB (the cache idea applied to branches)

- Loop buffer
» 1) A small, very high-speed register file (RAM) is used to detect loops in the program
» 2) When a loop is detected, the entire loop is loaded into the loop buffer and executed from there, without accessing external memory

- Branch prediction
» Additional hardware logic is used to predict the outcome of a branch before it is executed

- Delayed branch
» Resolves the case where a branch instruction delays the pipeline operation, as in Fig. 9-8; example: Fig. 9-10, p. 318, Sec. 9-5
» 1) Insert no-operation instructions
» 2) Rearrange the instructions (with compiler support)

[Fig. 9-10: example of delayed branch, three-segment (I, A, E) pipeline.
(a) Using no-operation instructions (10 clock cycles): 1. Load, 2. Increment, 3. Add, 4. Subtract, 5. Branch to X, 6. No-operation, 7. No-operation, 8. Instruction in X.
(b) Rearranging the instructions (8 clock cycles): 1. Load, 2. Increment, 3. Branch to X, 4. Add, 5. Subtract, 6. Instruction in X.]

Page 10: ch09 morris mano


9-5 RISC Pipeline

Characteristics of a RISC CPU
- Uses an instruction pipeline
- Single-cycle instruction execution
- Compiler support

Example: three-segment instruction pipeline
- The instruction cycle is divided into three suboperations:

» 1) I: instruction fetch

» 2) A: instruction decode and ALU operation

» 3) E: transfer the output of the ALU to a register, memory, or the PC

Delayed load: Fig. 9-9(a)
» A conflict occurs at the third instruction (ADD R1 + R2): in clock cycle 4 it needs R2 for its ALU operation, while the second instruction (LOAD R2) is still completing its E stage in that same cycle

» Resolution of the delayed load: Fig. 9-9(b), insert a no-operation instruction (see the sketch below)

Delayed branch: already covered in Sec. 9-4
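The delayed-load fix of Fig. 9-9(b) amounts to a simple rule the compiler can apply: if an instruction uses a register that the immediately preceding load is still fetching, insert a no-op. A minimal C sketch (the instruction encoding is made up for illustration):

    #include <stdio.h>

    typedef struct { char text[24]; int is_load; int dest, src1, src2; } Instr;

    int main(void) {
        Instr prog[] = {
            {"LOAD  R1",    1,  1, -1, -1},
            {"LOAD  R2",    1,  2, -1, -1},
            {"ADD   R1+R2", 0,  3,  1,  2},
            {"STORE R3",    0, -1,  3, -1},
        };
        int n = sizeof prog / sizeof prog[0];

        for (int i = 0; i < n; i++) {
            printf("%s\n", prog[i].text);
            /* delay slot: the value loaded by instruction i is not yet available
             * to the ALU (A) stage of instruction i+1, so pad with a no-op      */
            if (prog[i].is_load && i + 1 < n &&
                (prog[i].dest == prog[i + 1].src1 || prog[i].dest == prog[i + 1].src2))
                printf("NOP\n");
        }
        return 0;
    }

Run on the four-instruction program of Fig. 9-9(a), this prints the rearranged sequence of Fig. 9-9(b): LOAD R1, LOAD R2, NOP, ADD R1+R2, STORE R3.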

[Fig. 9-9: three-segment (I, A, E) pipeline timing.
(a) Pipeline timing with data conflict (6 clock cycles): 1. Load R1, 2. Load R2, 3. Add R1+R2, 4. Store R3; the conflict occurs where the Add needs R2 while Load R2 is still in its E stage.
(b) Pipeline timing with delayed load (7 clock cycles): 1. Load R1, 2. Load R2, 3. No-operation, 4. Add R1+R2, 5. Store R3.]

Page 11: ch09 morris mano


9-6 Vector Processing

Science and engineering applications
- Long-range weather forecasting, petroleum exploration, seismic data analysis, medical diagnosis, aerodynamics and space flight simulations, artificial intelligence and expert systems, mapping the human genome, image processing

Vector operations
- Arithmetic operations on large arrays of numbers

- Conventional scalar processor (machine-language loop):

        Initialize I = 0
    20  Read A(I)
        Read B(I)
        Store C(I) = A(I) + B(I)
        Increment I = I + 1
        If I <= 100 go to 20
        Continue

- Fortran language:

        DO 20 I = 1, 100
    20  C(I) = A(I) + B(I)

- Vector processor (single vector instruction):

    C(1:100) = A(1:100) + B(1:100)
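For comparison, the same computation as a C loop: a scalar processor executes the read/add/store/increment/branch sequence once per element, while a vector processor performs the whole loop as the single instruction C(1:100) = A(1:100) + B(1:100):

    #include <stdio.h>

    #define N 100

    int main(void) {
        double A[N], B[N], C[N];
        for (int i = 0; i < N; i++) { A[i] = i; B[i] = 2.0 * i; }  /* sample data */

        for (int i = 0; i < N; i++)      /* one scalar iteration per element      */
            C[i] = A[i] + B[i];

        printf("C[99] = %g\n", C[99]);
        return 0;
    }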

Page 12: ch09 morris mano


Vector instruction format: Fig. 9-11

    ADD  A  B  C  100

Matrix multiplication
- 3 x 3 matrix multiplication: n^2 = 9 inner products
» Each element of the product is one such inner product, so there are 9 of them

- Cumulative multiply-add operation: n^3 = 27 multiply-adds
» Each inner product takes 3 multiply-adds, so 9 x 3 = 27 multiply-adds in total

[Fig. 9-11 vector instruction format: operation code | base address source 1 | base address source 2 | base address destination | vector length]

For C = A · B with 3 x 3 matrices (elements a_ij, b_ij, c_ij), each element is an inner product, e.g.

    c11 = a11 b11 + a12 b21 + a13 b31

computed as a cumulative multiply-add, c11 <- c11 + a1k bk1, with initial value c11 = 0, so that after three steps

    c11 = c11 + a11 b11 + a12 b21 + a13 b31

(A small sketch of the full product follows.)
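The operation counts can be verified with a direct triple loop. A small C sketch (the matrix values are chosen arbitrarily):

    #include <stdio.h>

    #define N 3

    int main(void) {
        double a[N][N] = {{1,2,3},{4,5,6},{7,8,9}};
        double b[N][N] = {{9,8,7},{6,5,4},{3,2,1}};
        double c[N][N] = {{0}};
        int muladds = 0;

        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)          /* 9 inner products c[i][j]    */
                for (int k = 0; k < N; k++) {    /* 3 multiply-adds per product */
                    c[i][j] += a[i][k] * b[k][j];
                    muladds++;
                }

        printf("c11 = %g, multiply-adds = %d\n", c[0][0], muladds);  /* 27 */
        return 0;
    }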

Page 13: ch09 morris mano


Pipeline for calculating an inner product: Fig. 9-12
- Floating-point multiplier pipeline: 4 segments
- Floating-point adder pipeline: 4 segments
- Example:

    C = A1 B1 + A2 B2 + A3 B3 + ... + Ak Bk

» After the 1st clock, the product A1 B1 has entered the multiplier pipeline
» After the 4th clock, A4 B4, A3 B3, A2 B2, A1 B1 occupy the four multiplier segments
» After the 8th clock, A1 B1 ... A4 B4 have moved into the adder pipeline while A5 B5 ... A8 B8 fill the multiplier pipeline
» After the 9th, 10th, 11th, ... clocks, the adder output is fed back to its own input, forming the sums A1 B1 + A5 B5, A2 B2 + A6 B6, A3 B3 + A7 B7, A4 B4 + A8 B8, ...
» Four-section summation (the adder pipeline accumulates four partial sums that are combined at the end):

    C = (A1 B1 + A5 B5 + A9 B9   + A13 B13 + ...)
      + (A2 B2 + A6 B6 + A10 B10 + A14 B14 + ...)
      + (A3 B3 + A7 B7 + A11 B11 + A15 B15 + ...)
      + (A4 B4 + A8 B8 + A12 B12 + A16 B16 + ...)

[Fig. 9-12: source A and source B feed the multiplier pipeline; its output feeds the adder pipeline, whose output is fed back as the second adder input.]
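The four-section summation can be mimicked in software by keeping four interleaved partial sums, which is effectively what the 4-segment adder pipeline does. A minimal C sketch with made-up data:

    #include <stdio.h>

    #define K 16

    int main(void) {
        double A[K], B[K], partial[4] = {0.0, 0.0, 0.0, 0.0};
        for (int i = 0; i < K; i++) { A[i] = i + 1; B[i] = 2.0; }  /* sample data */

        for (int i = 0; i < K; i++)
            partial[i % 4] += A[i] * B[i];     /* four interleaved partial sums  */

        double C = partial[0] + partial[1] + partial[2] + partial[3];
        printf("C = %g\n", C);                 /* inner product of A and B       */
        return 0;
    }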

Page 14: ch09 morris mano


Memory interleaving: Fig. 9-13
- Simultaneous access to memory from two or more sources using one memory bus system
- The low-order 2 bits of the address register (AR) select one of the four memory modules
- Example: even/odd address memory access

(An address-decoding sketch follows.)
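A sketch of the address decoding for four-way interleaving; the module/word split below assumes the low-order 2 bits select the module, as stated above:

    #include <stdio.h>

    int main(void) {
        for (unsigned addr = 0; addr < 8; addr++) {
            unsigned module = addr & 0x3;   /* low-order 2 bits -> 1 of 4 modules */
            unsigned word   = addr >> 2;    /* remaining bits  -> word in module  */
            printf("address %u -> module %u, word %u\n", addr, module, word);
        }
        return 0;
    }

Consecutive addresses fall in different modules, so sequential accesses can proceed in parallel over the common bus.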

[Fig. 9-13: four memory modules, each with its own address register (AR), memory array, and data register (DR), all connected to a common address bus and data bus.]

Supercomputer
- Supercomputer = vector instructions + pipelined floating-point arithmetic
- Performance evaluation indexes:

» MIPS: million instructions per second

» FLOPS: floating-point operations per second (megaflops: 10^6, gigaflops: 10^9)

Cray supercomputers: Cray Research
» Cray-1: 80 megaflops, 4 million 64-bit words of memory

» Cray-2: 12 times more powerful than the Cray-1

VP supercomputers: Fujitsu
» VP-200: 300 megaflops, 32 million words of memory, 83 vector instructions, 195 scalar instructions

» VP-2600: 5 gigaflops

Page 15: ch09 morris mano


9-7 Array Processors

Performs computations on large arrays of data

Array processing
- Attached array processor: Fig. 9-14
» An auxiliary processor attached to a general-purpose computer

- SIMD array processor: Fig. 9-15
» A computer with multiple processing units operating in parallel
» For a vector computation C = A + B, each PEi computes ci = ai + bi simultaneously

- Vector processing uses adder/multiplier pipelines; array processing uses a separate array processor

[Fig. 9-14: attached array processor; a general-purpose computer (with main memory) connects through an input-output interface to the attached array processor (with local memory) over a high-speed memory-to-memory bus.

Fig. 9-15: SIMD array processor; a master control unit and main memory drive processing elements PE1 ... PEn, each with its own local memory M1 ... Mn.]

