Parallel Processing – KSTN2002 – Lecture 13: Matrix Multiplication
Thoai Nam
Oct 07, 2019

Transcript
Page 1: Parallel Processing – KSTN2002 – Lecture 13: Matrix Multiplication (nam/PP_Undergraduate/PP_MatrixMultiplication.pdf)

Thoai Nam

Page 2

Khoa Công Nghệ Thông Tin – Đại Học Bách Khoa Tp.HCM (Faculty of Information Technology, Ho Chi Minh City University of Technology)

- Sequential matrix multiplication
- Algorithms for processor arrays
  – Matrix multiplication on 2-D mesh SIMD model
  – Matrix multiplication on hypercube SIMD model
- Matrix multiplication on UMA multiprocessors
- Matrix multiplication on multicomputers

Page 3

Global a[0..l-1,0..m-1], b[0..m-1,0..n-1]; {Matrices to be multiplied}
       c[0..l-1,0..n-1];                   {Product matrix}
       t,                                  {Accumulates dot product}
       i, j, k;
Begin
  for i := 0 to l-1 do
    for j := 0 to n-1 do
      t := 0;
      for k := 0 to m-1 do
        t := t + a[i,k]*b[k,j];
      endfor k;
      c[i,j] := t;
    endfor j;
  endfor i;
End.
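The pseudocode above maps line-for-line onto Python; this sketch (function name ours) multiplies an l×m matrix by an m×n matrix:

```python
def seq_matmul(a, b):
    """Sequential matrix multiplication: c[i][j] = sum over k of a[i][k]*b[k][j]."""
    l, m = len(a), len(a[0])
    n = len(b[0])
    c = [[0] * n for _ in range(l)]
    for i in range(l):
        for j in range(n):
            t = 0                      # accumulates the dot product
            for k in range(m):
                t += a[i][k] * b[k][j]
            c[i][j] = t
    return c
```

The three nested loops give the familiar O(l·m·n) running time (O(n³) for square matrices), which is the baseline every parallel version below tries to beat.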

Page 4

- Matrix multiplication on 2-D mesh SIMD model
- Matrix multiplication on hypercube SIMD model

Page 5

- Gentleman (1978) has shown that multiplication of two n×n matrices on the 2-D mesh SIMD model requires O(n) routing steps
- We will consider a multiplication algorithm on a 2-D mesh SIMD model with wraparound connections

Page 6

- For simplicity, we suppose that
  – Size of the mesh is n×n
  – Size of each matrix (A and B) is n×n
  – Each processor P(i,j) in the mesh (located at row i, column j) contains a(i,j) and b(i,j)
- At the end of the algorithm, P(i,j) will hold the element c(i,j) of the product matrix

Page 7

- Major phases

(a) Initial distribution of matrices A and B: each processor P(i,j) holds a(i,j) and b(i,j)

    a0,0 b0,0 | a0,1 b0,1 | a0,2 b0,2 | a0,3 b0,3
    a1,0 b1,0 | a1,1 b1,1 | a1,2 b1,2 | a1,3 b1,3
    a2,0 b2,0 | a2,1 b2,1 | a2,2 b2,2 | a2,3 b2,3
    a3,0 b3,0 | a3,1 b3,1 | a3,2 b3,2 | a3,3 b3,3

(b) Staggering: all of A's elements in row i are shifted to the left by i positions, and all of B's elements in column j are shifted upwards by j positions (the original slide shows a mid-stagger snapshot with some elements of A and B still in transit)

Page 8

(c) Distribution of the two matrices A and B after staggering in a 2-D mesh with wraparound connections: P(i,j) holds a(i, (i+j) mod 4) and b((i+j) mod 4, j)

    a0,0 b0,0 | a0,1 b1,1 | a0,2 b2,2 | a0,3 b3,3
    a1,1 b1,0 | a1,2 b2,1 | a1,3 b3,2 | a1,0 b0,3
    a2,2 b2,0 | a2,3 b3,1 | a2,0 b0,2 | a2,1 b1,3
    a3,3 b3,0 | a3,0 b0,1 | a3,1 b1,2 | a3,2 b2,3

Each processor P(i,j) now has a pair of elements to multiply: some a(i,k) and b(k,j)
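The staggered placement can be stated in closed form: after staggering, P(i,j) holds a(i, (i+j) mod n) and b((i+j) mod n, j). A small sketch (function name ours) that reproduces the distribution on the slide:

```python
def staggered_layout(n):
    """Index pairs held by each P(i,j) after staggering: row i of A is
    shifted left by i positions, column j of B is shifted up by j positions.
    Returns a grid of ((A row, A col), (B row, B col)) tuples."""
    return [[((i, (i + j) % n), ((i + j) % n, j)) for j in range(n)]
            for i in range(n)]
```

For n = 4 this places a1,3 and b3,2 on P(1,2), and a3,3 and b3,0 on P(3,0), exactly as on the slide.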

Page 9

- The remaining steps of the algorithm from the viewpoint of processor P(1,2)

(a) First scalar multiplication step: P(1,2) holds a1,3 and b3,2 and computes their product (the other elements of row 1 of A – a1,0, a1,1, a1,2 – and of column 2 of B – b0,2, b1,2, b2,2 – are held by the other processors in its row and column)

(b) Second scalar multiplication step, after the elements of A are cycled to the left and the elements of B are cycled upwards: P(1,2) now holds a1,0 and b0,2

Page 10

(c) Third scalar multiplication step, after the second cycle step: P(1,2) holds a1,1 and b1,2

(d) Fourth scalar multiplication step, after the third cycle step: P(1,2) holds a1,2 and b2,2. At this point processor P(1,2) has computed the dot product c1,2

Page 11

Detailed Algorithm

Global n,            {Dimension of matrices}
       k;
Local  a, b, c;
Begin
  {Stagger the two matrices a[0..n-1,0..n-1] and b[0..n-1,0..n-1]}
  for k := 1 to n-1 do
    forall P(i,j) where 1 ≤ i,j < n do
      if i ≥ k then a := fromleft(a);
      if j ≥ k then b := fromdown(b);
    end forall;
  endfor k;

Page 12

  {Compute the dot product}
  forall P(i,j) where 0 ≤ i,j < n do
    c := a*b;
  end forall;
  for k := 1 to n-1 do
    forall P(i,j) where 0 ≤ i,j < n do
      a := fromleft(a);
      b := fromdown(b);
      c := c + a*b;
    end forall;
  endfor k;
End.
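As a sanity check, the whole algorithm (stagger, multiply, then n-1 cycle-and-accumulate steps) can be simulated serially. In this sketch (names ours) the fromleft/fromdown routing becomes a cyclic shift of the whole mesh:

```python
def mesh_matmul(a, b):
    """Serial simulation of n*n matrix multiplication on a 2-D wraparound
    mesh: stagger A and B, then perform n multiply-accumulate steps,
    cycling A left and B up between steps."""
    n = len(a)
    # Stagger: shift row i of A left by i, column j of B up by j.
    A = [[a[i][(i + j) % n] for j in range(n)] for i in range(n)]
    B = [[b[(i + j) % n][j] for j in range(n)] for i in range(n)]
    # First scalar multiplication step on every processor.
    c = [[A[i][j] * B[i][j] for j in range(n)] for i in range(n)]
    for _ in range(n - 1):
        A = [[A[i][(j + 1) % n] for j in range(n)] for i in range(n)]  # cycle A left
        B = [[B[(i + 1) % n][j] for j in range(n)] for i in range(n)]  # cycle B up
        for i in range(n):
            for j in range(n):
                c[i][j] += A[i][j] * B[i][j]   # accumulate dot product
    return c
```

Each processor performs n multiplications and the data movement is O(n) routing steps, consistent with Gentleman's bound.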

Page 13

- Can we implement the above-mentioned algorithm on a 2-D mesh SIMD model without wraparound connections?

Page 14

- Design strategy 5
  – If load balancing is not a problem, maximize grain size
- Grain size: the amount of work performed between processor interactions
- Things to be considered
  – Parallelizing the outermost loop of the sequential algorithm is a good choice, since the attained grain size (O(n³/p)) is the biggest
  – Resolving memory contention as much as possible

Page 15

Algorithm using p processors

Global n,                                  {Dimension of matrices}
       a[0..n-1,0..n-1], b[0..n-1,0..n-1]; {Two input matrices}
       c[0..n-1,0..n-1];                   {Product matrix}
Local  i, j, k, t;
Begin
  forall Pm where 0 ≤ m < p do
    for i := m to n-1 step p do
      for j := 0 to n-1 do
        t := 0;
        for k := 0 to n-1 do
          t := t + a[i,k]*b[k,j];
        endfor k;
        c[i,j] := t;
      endfor j;
    endfor i;
  end forall;
End.
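A minimal shared-memory rendition of this scheme, using Python threads in place of the p processors (names ours; CPython threads illustrate the decomposition but do not run truly in parallel because of the GIL):

```python
import threading

def uma_matmul(a, b, p=4):
    """Parallelize the outermost loop: process m computes rows m, m+p,
    m+2p, ...  Each c[i][j] is written by exactly one thread, so no
    locking is needed on the shared result matrix."""
    n = len(a)
    c = [[0] * n for _ in range(n)]

    def worker(m):
        for i in range(m, n, p):           # rows assigned to process m
            for j in range(n):
                c[i][j] = sum(a[i][k] * b[k][j] for k in range(n))

    threads = [threading.Thread(target=worker, args=(m,)) for m in range(p)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return c
```

Striding rows by p gives each process n/p rows, i.e. the O(n³/p) grain size discussed above, with no write conflicts on c.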

Page 16

- Things to be considered
  – Try to resolve memory contention as much as possible
  – Increase the locality of memory references to reduce memory access time
- Design strategy 6
  – Reduce average memory latency by increasing locality
- The block matrix multiplication algorithm is a reasonable choice in this situation – Section 7.3, p. 187, Parallel Computing: Theory and Practice
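For reference, a sketch of the block algorithm alluded to here (the slide gives no code; the blocking factor s is an assumed parameter): each s×s block of c accumulates products of s×s blocks of a and b, so a block is reused many times while it is cache-resident:

```python
def block_matmul(a, b, s=2):
    """Blocked matrix multiplication for n*n matrices where s divides n.
    The (I,J) block of c accumulates the products of block (I,K) of a
    and block (K,J) of b, improving locality of memory references."""
    n = len(a)
    c = [[0] * n for _ in range(n)]
    for I in range(0, n, s):
        for J in range(0, n, s):
            for K in range(0, n, s):
                # Multiply block (I,K) of a by block (K,J) of b.
                for i in range(I, I + s):
                    for j in range(J, J + s):
                        c[i][j] += sum(a[i][k] * b[k][j]
                                       for k in range(K, K + s))
    return c
```

The arithmetic count is unchanged; only the order of the accesses differs, which is exactly the point of design strategy 6.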

Page 17

- We will study two algorithms on multicomputers
  – Row-Column-Oriented Algorithm
  – Block-Oriented Algorithm

Page 18

- The processes are organized as a ring
  – Step 1: Initially, each process is given one row of matrix A and one column of matrix B
  – Step 2: Each process uses vector multiplication to get one element of the product matrix C
  – Step 3: After a process has used its column of matrix B, it fetches the next column of B from its successor in the ring
  – Step 4: If all columns of B have already been processed, quit; otherwise, go to Step 2
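The four steps can be simulated serially; in this sketch (names ours) "fetching from the successor" becomes a cyclic rotation of the column indices held by the processes:

```python
def ring_matmul(a, b):
    """Serial simulation of the row-column ring algorithm: process i starts
    with row i of A and column i of B; after each step every process takes
    the column of B currently held by its successor in the ring."""
    n = len(a)
    rows = [row[:] for row in a]                            # row i of A at process i
    cols = [[b[k][j] for k in range(n)] for j in range(n)]  # cols[j] = column j of B
    held = list(range(n))            # held[i] = index of the column process i holds
    c = [[0] * n for _ in range(n)]
    for _ in range(n):
        for i in range(n):
            j = held[i]
            # Step 2: vector multiplication yields one element of C.
            c[i][j] = sum(x * y for x, y in zip(rows[i], cols[j]))
        # Step 3: fetch the next column from the successor in the ring.
        held = [held[(i + 1) % n] for i in range(n)]
    return c
```

After n steps every column has visited every process exactly once, so all n² elements of C are filled in, and at any moment each column of B is in use by exactly one process.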

Page 19

- Why do we have to organize the processes as a ring and make them use B's columns in turn?
- Design strategy 7
  – Eliminate contention for shared resources by changing the order of data access

Page 20

Example: use 4 processes to multiply two 4×4 matrices A and B

[Diagram: 1st iteration – each of the four ring processes holds its row of A, the column of B it is currently using, and its partial results in C]

Page 21

Example: use 4 processes to multiply two 4×4 matrices A and B

[Diagram: 2nd iteration – each process now works on the column of B fetched from its successor]

Page 22

Example: use 4 processes to multiply two 4×4 matrices A and B

[Diagram: 3rd iteration – the columns of B continue to rotate around the ring]

Page 23

Example: use 4 processes to multiply two 4×4 matrices A and B

[Diagram: 4th iteration (the last) – every column of B has now visited every process, so all elements of C are computed]