How to Compute and Prove Lower and Upper Bounds on the Communication Costs of Your Algorithm
Part II: Geometric Embedding
Oded Schwartz
CS294, Lecture #3, Fall 2011: Communication-Avoiding Algorithms
www.cs.berkeley.edu/~odedsc/CS294
Based on:
D. Irony, S. Toledo, and A. Tiskin: Communication lower bounds for distributed-memory matrix multiplication.
G. Ballard, J. Demmel, O. Holtz, and O. Schwartz: Minimizing communication in linear algebra.
From Sequential Lower Bound to Parallel Lower Bound

We showed: any algorithm that agrees with Form (1) on the sequential model requires
Bandwidth: BW = Ω(G / M^{1/2})
Latency: L = Ω(G / M^{3/2})
where G is the number of g_ijk operations.

Corollary: any algorithm that agrees with Form (1) on a P-processor machine, where at least two processors each perform a Θ(1/P) fraction of the G operations, requires
Bandwidth: BW = Ω(G / (P ⋅ M^{1/2}))
Latency: L = Ω(G / (P ⋅ M^{3/2}))
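To spell out the reduction (a sketch; this is just the sequential bound applied to one processor): pick a processor that performs at least a Θ(1/P) fraction of the G operations, and view its local memory of size M as the fast memory and the rest of the machine as slow memory. The sequential bound then gives

```latex
BW \;\ge\; \Omega\!\left(\frac{G/P}{M^{1/2}}\right) = \Omega\!\left(\frac{G}{P\,M^{1/2}}\right),
\qquad
L \;\ge\; \Omega\!\left(\frac{G/P}{M^{3/2}}\right) = \Omega\!\left(\frac{G}{P\,M^{3/2}}\right).
```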
Geometric Embedding (2nd approach) [Ballard, Demmel, Holtz, S. 2011a], following [Irony, Toledo, Tiskin 04], based on [Loomis & Whitney 49].

Lower bounds for algorithms with the "flavor" of 3 nested loops: BLAS, LU, Cholesky, LDL^T, and QR factorizations, eigenvalue and SVD computations, i.e., essentially all direct methods of linear algebra.
• Dense or sparse matrices (in the sparse case the bandwidth is a function of NNZ, the number of nonzeros).
• Bandwidth and latency.
• Sequential, hierarchical, and parallel (distributed- and shared-memory) models.
• Compositions of linear algebra operations.
• Certain graph optimization problems [Demmel, Pearson, Poloni, Van Loan 11], [Ballard, Demmel, S. 11].
• Tensor contractions.
For dense matrices G = Θ(n³), so the bounds read BW = Ω(n³ / M^{1/2}) on the sequential model, and BW = Ω(n³ / (P ⋅ M^{1/2})) in parallel.
Do conventional dense algorithms as implemented in LAPACK and ScaLAPACK attain these bounds?
Mostly not.
Are there other algorithms that do? Mostly yes.
Dense Linear Algebra: Sequential Model
All lower bounds below are BW = Ω(n³ / M^{1/2}) and L = Ω(n³ / M^{3/2}) [Ballard, Demmel, Holtz, S. 11].

Algorithm                                  Attaining algorithm
Matrix multiplication                      [Frigo, Leiserson, Prokop, Ramachandran 99]
Cholesky                                   [Ahmad, Pingali 00], [Ballard, Demmel, Holtz, S. 09]
LU                                         [Toledo 97] (bandwidth), [DGX08] (latency)
QR                                         [EG98] (bandwidth), [DGHL08a] (latency)
Symmetric eigenvalues                      [Ballard, Demmel, Dumitriu 10]
SVD                                        [Ballard, Demmel, Dumitriu 10]
(Generalized) nonsymmetric eigenvalues     [Ballard, Demmel, Dumitriu 10]
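As a concrete reference point, here is a minimal sketch (not LAPACK's implementation) of the classic blocked matrix multiplication that attains the BW = O(n³ / M^{1/2}) bound on the sequential model; the function name and the fast-memory parameter M are ours, and the block size b is chosen so that three b×b blocks fit in fast memory.

```python
import numpy as np

def blocked_matmul(A, B, M):
    """C = A @ B moving O(n^3 / sqrt(M)) words between slow and fast memory.

    M is the fast-memory size in words; b is picked so that the three b-by-b
    working blocks of A, B, and C fit in fast memory simultaneously.
    """
    n = A.shape[0]
    b = max(1, int((M / 3) ** 0.5))  # 3 * b^2 <= M
    C = np.zeros((n, n))
    for i in range(0, n, b):
        for j in range(0, n, b):
            for k in range(0, n, b):
                # Each iteration moves 3*b^2 words; there are (n/b)^3 iterations,
                # for a total of 3*n^3/b = O(n^3 / sqrt(M)) words.
                C[i:i+b, j:j+b] += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
    return C
```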
Dense 2D parallel algorithms: assume n×n matrices on P processors, with memory per processor M = O(n² / P).
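Plugging M = Θ(n²/P) into the parallel bounds gives the familiar 2D limits (a worked instance; constants dropped):

```latex
BW = \Omega\!\left(\frac{n^3}{P\,M^{1/2}}\right)
   = \Omega\!\left(\frac{n^3}{P\,(n^2/P)^{1/2}}\right)
   = \Omega\!\left(\frac{n^2}{P^{1/2}}\right),
\qquad
L = \Omega\!\left(\frac{n^3}{P\,M^{3/2}}\right)
  = \Omega\!\left(P^{1/2}\right).
```

For matrix multiplication these are attained, e.g., by Cannon's algorithm.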
But many algorithms just don’t fit the generalized form!
For example: Strassen’s fast matrix multiplication
Beyond 3-nested loops
How about the communication costs of algorithms that have a more complex structure?
Communication Lower Bounds – to be continued…

Approaches:
1. Reduction [Ballard, Demmel, Holtz, S. 2009]
2. Geometric embedding [Irony, Toledo, Tiskin 04], [Ballard, Demmel, Holtz, S. 2011a]
3. Graph analysis [Hong & Kung 81], [Ballard, Demmel, Holtz, S. 2011b]
Proving that your algorithm/implementation is as good as it gets.
Further reduction techniques: imposing reads and writes

Example: computing ||A⋅B||, where each matrix element is a formula, computed only once.

Problem: the input/output does not agree with Form (1).
Solution:
• Impose writes/reads of the (computed) entries of A and B.
• Impose writes of the entries of C.
• The new algorithm agrees with Form (1), and so has lower bound BW = Ω(n³ / M^{1/2}), as accounted for below.
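The accounting behind this, as a sketch: the imposed reads and writes add only O(n²) transfers, so subtracting them leaves the Form (1) bound intact whenever M = O(n²):

```latex
BW_{\mathrm{orig}} \;\ge\; BW_{\mathrm{imposed}} - O(n^2)
\;=\; \Omega\!\left(\frac{n^3}{M^{1/2}}\right) - O(n^2)
\;=\; \Omega\!\left(\frac{n^3}{M^{1/2}}\right).
```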
Further reduction techniques: Imposing reads and writes
The previous example generalizes to other "black-box" uses of algorithms that fit Form (1).

Consider a more general class of algorithms: some arguments of the generalized form may be computed "on the fly" and discarded immediately after use. …
[Figure: a run's sequence of reads, writes, and FLOPs along the time line, partitioned into segments S1, S2, S3, …; example of a partition with M = 3.]
Recall: for a given run (Algorithm, Machine, Input):
1. Partition the computation into segments of M reads/writes each.
2. Any segment S has at most 3M inputs/outputs: at most M residing in fast memory at the start of the segment, at most M at its end, and at most M read or written during it.
3. Show that S performs at most G(3M) of the FLOPs g_ijk.
4. The total communication is then BW = (BW of one segment) ⋅ (#segments) ≥ M ⋅ ⌊G / G(3M)⌋.
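A worked instance of steps 1-4 for matrix multiplication: G = n³, and by Loomis-Whitney a segment with at most 3M available entries of A, B, and C performs G(3M) ≤ (3M)^{3/2} multiplications, so

```latex
BW \;\ge\; M \cdot \left\lfloor \frac{G}{G(3M)} \right\rfloor
   \;\ge\; M \cdot \frac{n^3}{(3M)^{3/2}} - M
   \;=\; \Omega\!\left(\frac{n^3}{M^{1/2}}\right).
```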
But now some operands inside a segment may be computed on the fly and discarded, so no read or write is performed for them.
How to generalize this lower bound: how to deal with on-the-fly generated operands
We need to distinguish the sources and destinations of each operand in fast memory during a segment.

Possible sources:
• R1: already in fast memory at the start of the segment, or read; at most 2M.
• R2: created during the segment; no bound without more information.

Possible destinations:
• D1: left in fast memory at the end of the segment, or written; at most 2M.
• D2: discarded; no bound without more information.
How to generalize this lower bound: how to deal with on-the-fly generated operands (continued)
There are at most 4M operands of types R1/D1, R1/D2, and R2/D1.
We need to assume (or prove) that there are not too many R2/D2 arguments; then we can apply the Loomis-Whitney inequality and obtain the lower bound of Form (1).
Bounding the number of R2/D2 operands is sometimes quite subtle.
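A sketch of why bounding R2/D2 suffices, assuming (our notation) at most c⋅M such operands per segment: the segment then touches O(M) operands in total, Loomis-Whitney bounds its g_ijk count by G(O(M)) = O(M^{3/2}), and the Form (1) bound survives:

```latex
\#\mathrm{ops}(S) \;\le\; G\bigl((4+c)M\bigr) = O\!\left(M^{3/2}\right)
\quad\Longrightarrow\quad
BW \;\ge\; M \cdot \frac{G}{O(M^{3/2})} \;=\; \Omega\!\left(\frac{G}{M^{1/2}}\right).
```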
[Figure: the Loomis-Whitney view of a segment: the set V of g_ijk operations performed in the segment, embedded in a cube, with its projections ("A shadow", "B shadow", "C shadow") onto the faces indexed by the accessed entries of A, B, and C.]
Composition of algorithms
Many algorithms and applications use compositions of other (linear algebra) algorithms.
How to compute lower and upper bounds for such cases?
Example: dense matrix powering. Compute A^n by repeated squaring (log n squarings):

A → A² → A⁴ → … → A^n

Each squaring step agrees with Form (1). Do we get BW = Θ((log n) ⋅ n³ / M^{1/2}), or is there a way to reorder (interleave) the computations to reduce communication?
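A minimal sketch of the phased approach (reusing the hypothetical blocked_matmul from the earlier sketch; assumes n is a power of 2), in which each of the log n squarings pays the full Form (1) cost:

```python
import math

def matrix_power(A, n, M):
    """A^n by repeated squaring: log2(n) squarings, each a Form (1)
    multiplication, so BW = O(log(n) * n^3 / sqrt(M)) in total."""
    P = A
    for _ in range(int(math.log2(n))):
        P = blocked_matmul(P, P, M)  # one communication-optimal squaring per step
    return P
```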
Communication hiding vs. Communication avoiding
Q. The model assumes that computation and communication do not overlap. Is this a realistic assumption? Can we not gain time by such overlapping?

A. Right. This is called communication hiding. It is done in practice, and is ignored in our model. It may save up to a factor of 2 in running time, as shown below. Note that the speedup gained by avoiding (minimizing) communication is typically larger than a constant factor.
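The factor-of-2 claim in one line: even with perfect overlap, the running time satisfies

```latex
T \;\ge\; \max\left(T_{\mathrm{comp}},\, T_{\mathrm{comm}}\right)
  \;\ge\; \tfrac{1}{2}\left(T_{\mathrm{comp}} + T_{\mathrm{comm}}\right),
```

so hiding can recover at most half of the non-overlapped time, while avoiding communication reduces T_comm itself, often by polynomial factors.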
Two nested loops: when the input/output size dominates

Q. Do two-nested-loop algorithms fall into the paradigm of Form (1)? For example, what lower bound do we obtain for matrix-vector multiplication?

A. Yes, but the lower bound we obtain is BW = Ω(n² / M^{1/2}), whereas just reading the input already costs BW ≥ n². More generally, the communication cost lower bound for algorithms that agree with Form (1) is

BW = Ω(max(LW, #inputs + #outputs)),

where LW is the bound obtained from the geometric embedding and #inputs + #outputs is the size of the input and output. For some algorithms LW dominates; for others #inputs + #outputs dominates.
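A worked instance for matrix-vector multiplication, with G = n² operations:

```latex
LW = \Omega\!\left(\frac{n^2}{M^{1/2}}\right), \qquad
\#\mathrm{inputs} + \#\mathrm{outputs} = n^2 + 2n,
\qquad\text{so}\qquad
BW = \Theta\!\left(n^2\right).
```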
Composition of algorithms

Claim: any implementation of A^n by repeated squaring (log n squarings) requires BW = Ω((log n) ⋅ n³ / M^{1/2}).

Therefore we cannot reduce communication by more than a constant factor (compared to log n separate calls to matrix multiplication) by reordering the computations.
Composition of algorithms: when interleaving does matter
Example 1: Input: A, v1, v2, …, vn.
Output: Av1, Av2, …, Avn.

The phased solution (n separate matrix-vector multiplications) costs BW = n ⋅ Θ(n²) = Θ(n³). But we already know that we can save an M^{1/2} factor: set B = (v1, v2, …, vn) and compute A⋅B; then the cost is BW = Θ(n³ / M^{1/2}), as in the sketch below.

Other examples?
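A minimal sketch of the two variants (the function names are ours; in practice the batched version would call a blocked multiplication such as blocked_matmul above to realize the M^{1/2} saving):

```python
import numpy as np

def phased_matvecs(A, vs):
    # n separate matrix-vector products: A (n^2 words) is re-read for each
    # of the n vectors, so BW = Theta(n^3).
    return [A @ v for v in vs]

def batched_matvecs(A, vs):
    # Interleaved: stack the vectors into a matrix B and run one (blocked)
    # matrix multiplication, so BW = Theta(n^3 / sqrt(M)).
    B = np.column_stack(vs)
    return A @ B
```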
Composition of algorithms: when interleaving does matter
Example 2: Input: A, B, t.
Output: C(k) = A ⋅ B(k) for k = 1, 2, …, t, where B(k)_{i,j} = (B_{i,j})^{1/k}.

Phased solution:
Upper bound: BW = O(t ⋅ n³ / M^{1/2}) (by adding up the BW costs of t matrix-multiplication calls).
Lower bound: BW = Ω(t ⋅ n³ / M^{1/2}) (by imposing writes/reads between the phases).
Can we do better than BW = Θ(t ⋅ n³ / M^{1/2})?
Yes. Claim: there exists an implementation of the above algorithm that beats the phased bound, with matching (tight) lower and upper bounds on its communication cost.
Proof idea:
• Upper bound: having both A_{i,k} and B_{k,j} in fast memory lets us do up to t evaluations of g_ijk, as in the sketch below.
• Lower bound: the union of all these t⋅n³ operations does not match Form (1), since the inputs B_{k,j} cannot be indexed in a one-to-one fashion. We need a more careful argument, bounding the number of g_ijk operations in a segment as a function of the number of accessed elements of A, B, and the C(k).
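A sketch of the upper-bound idea only (naive loop order, no blocking; assumes the entries of B are positive so real k-th roots exist): once A[i, l] and B[l, j] are loaded, all t evaluations of g_ijk are performed before moving on, and each B(k)_{l,j} is generated on the fly rather than read.

```python
import numpy as np

def interleaved_powers_product(A, B, t):
    """C(k) = A @ B(k) for k = 1..t, with B(k)[l, j] = B[l, j] ** (1/k).

    Illustrates the interleaving: each loaded pair (A[i, l], B[l, j])
    serves up to t g_ijk evaluations, one per output matrix C(k).
    """
    n = A.shape[0]
    C = np.zeros((t, n, n))
    for i in range(n):
        for j in range(n):
            for l in range(n):
                a, b = A[i, l], B[l, j]
                for k in range(1, t + 1):
                    C[k - 1, i, j] += a * b ** (1.0 / k)  # B(k)_{l,j} computed on the fly
    return C
```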
Composition of algorithms: when interleaving does matter
Can you think of natural examples where reordering / interleaving of known algorithms may improve the communication costs, compared to the phased implementation?
Summary
How to compute an upper bound on the communication costs of your algorithm?
Typically straightforward. Not always.
How to compute and prove a lower bound on the communication costs of your algorithm?
• Reductions: from another algorithm/problem, or from another model of computing.
• By using the generalized form ("flavor" of 3 nested loops) and imposing reads/writes: black-box-wise, or by bounding the number of R2/D2 operands.
• By carefully composing the lower bounds of the building blocks.
Next time: by graph analysis
Open Problems
Find algorithms that attain the lower bounds:
• Sparse matrix algorithms.
• Algorithms that auto-tune or are cache-oblivious.
• Cache-oblivious algorithms for parallel (distributed-memory) machines.
• Cache-oblivious parallel matrix multiplication? (Cilk++?)

Address complex heterogeneous hardware:
• Lower bounds and algorithms.