Page 1: Iterative Row Sampling

Iterative Row Sampling

Richard Peng

Joint work with Mu Li (CMU) and Gary Miller (CMU)

CMU MIT

Page 2: Iterative Row Sampling

OUTLINE

• Matrix sketches
• Existence
• Samples → better samples
• Iterative algorithms

Page 3: Iterative Row Sampling

DATA

• n-by-d matrix A, m entries
• Columns: data
• Rows: attributes

Goal:
• Classification / clustering
• Identify patterns
• Interpret new data

Page 4: Iterative Row Sampling

LINEAR MODEL

• Can add/scale data points
• x1, x2, x3: coefficients; combination: Ax = x1·A:,1 + x2·A:,2 + x3·A:,3

Page 5: Iterative Row Sampling

PROBLEM

Interpret a new data point b as a combination of known ones: b ≈ Ax?

Page 6: Iterative Row Sampling

REGRESSION

• Express as a combination of current examples
• Regression: minx ║Ax − b║p
• p = 2: least squares
• p = 1: compressive sensing
• ║x║2: Euclidean norm of x
• ║x║1: sum of absolute values

(a minimal p = 2 example follows below)
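To make the p = 2 case concrete, here is a minimal numpy sketch (not from the talk; the matrix sizes are made up) that solves minx ║Ax − b║2 directly:

```python
import numpy as np

# Least squares: x = argmin_x ||Ax - b||_2, the p=2 regression above.
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 20))      # n-by-d with n >> d
b = rng.standard_normal(1000)            # new data point to interpret

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print("residual:", np.linalg.norm(A @ x - b))
```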

Page 7: Iterative Row Sampling

VARIANTS OF COMPRESSIVE SENSING

• minx ║Ax − b║1 + ║x║1
• minx ║Ax − b║2 + ║x║1
• minx ║x║1 s.t. Ax = b
• minx ║Ax║1 s.t. Bx = y
• minx ║Ax − b║1 + ║Bx − y║1

All similar to minx ║Ax − b║1

Page 8: Iterative Row Sampling

SIMPLIFIED

• minx ║Ax − b║p = minx ║[A, b]·[x; −1]║p
• Regression is equivalent to min ║Ax║p with one entry of x fixed (a numerical check follows below)
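A quick numerical check of this rewrite for p = 2 (a toy instance, not from the talk):

```python
import numpy as np

# Verify min_x ||Ax - b||_2 = min_x ||[A, b][x; -1]||_2 on a small example.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
b = rng.standard_normal(50)

x = np.linalg.lstsq(A, b, rcond=None)[0]
M = np.hstack([A, b[:, None]])           # [A, b]
y = np.append(x, -1.0)                   # [x; -1], last entry fixed at -1
assert np.isclose(np.linalg.norm(A @ x - b), np.linalg.norm(M @ y))
```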

Page 9: Iterative Row Sampling

‘BIG’ DATA POINTS

• Each data point has many attributes
• #rows (n) >> #columns (d)
• Examples: genetic data, time series (videos)
• Reverse (d >> n) also common: images + SIFT

Page 10: Iterative Row Sampling

FASTER?

• Smaller, equivalent A′: a matrix sketch

Page 11: Iterative Row Sampling

ROW SAMPLING

• Pick some rows of A to be A′
• How to pick? Random

Page 12: Iterative Row Sampling

SHORTER EQUIVALENT

• Find a shorter A′ that preserves the answer: ║Ax║p ≈1+ε ║A′x║p for all x
• Run the algorithm on A′; the answer is also good for A

Simplified error notation ≈: a ≈k b if there exist k1, k2 s.t. k2/k1 ≤ k and k1·a ≤ b ≤ k2·a

Page 13: Iterative Row Sampling

OUTLINE

• Matrix sketches
• How? Existence
• Samples → better samples
• Iterative algorithms

Page 14: Iterative Row Sampling

SKETCHES EXIST

• Linear sketches: A′ = S·A
• [Drineas et al. `12] Row sampling: one non-zero in each row of S
• [Clarkson-Woodruff `12] S = CountSketch, one non-zero per column

║Ax║p ≈ ║A′x║p for all x

Page 15: Iterative Row Sampling

SKETCHES EXIST

Known sketch sizes (rows):

                           p=2          p=1
Dasgupta et al. `09                     d^2.5
Magdon-Ismail `10          dlog^2(d)
Sohler & Woodruff `11                   d^3.5
Drineas et al. `12         dlogd
Clarkson et al. `12                     d^4.5 log^1.5(d)
Clarkson & Woodruff `12    d^2 logd     d^8
Mahoney & Meng `12         d^2          d^3.5
Nelson & Nguyen `12        d^(1+α)
This Paper                 dlogd        d^3.66

Hidden: runtime costs, ε^-2 dependency

Page 16: Iterative Row Sampling

WHY IS ≈D POSSIBLE?

• ║Ax║2^2 = x^T·A^T·A·x
• A^T·A: d-by-d matrix
• Any factorization (e.g. QR) of A^T·A suffices as A′ (see the sketch below)

║Ax║p ≈ ║A′x║p for all x
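A small numpy illustration of this point (assumptions: dense A, exact arithmetic): the factor R from A = QR is a d-row matrix with exactly the same quadratic form as A.

```python
import numpy as np

# R^T R = A^T A, so ||Ax||_2 = ||Rx||_2 for every x: R is an exact
# d-row 'sketch' of A, at the cost of computing the factorization.
rng = np.random.default_rng(2)
A = rng.standard_normal((10000, 30))

R = np.linalg.qr(A, mode='r')            # d-by-d upper triangular
x = rng.standard_normal(30)
assert np.isclose(np.linalg.norm(A @ x), np.linalg.norm(R @ x))
```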

Page 17: Iterative Row Sampling

A^T·A

• Covariance matrix
• Dot products of all pairs of columns (data)
• Covariance: cov(j1, j2) = Σi Ai,j1 · Ai,j2

Page 18: Iterative Row Sampling

USE OF COVARIANCE MATRIX

• Clustering: l2 distances of all pairs are given by C
• Kernel methods: all pairwise dot products suffice for many models

C = A^T·A

Page 19: Iterative Row Sampling

OTHER USE OF COVARIANCE

• Covariance of attributes used to tune parameters
• Images + SIFT: many data points, few attributes
• http://www.image-net.org/: 14,197,122 images, 1000 SIFT features

Page 20: Iterative Row Sampling

HOW EXPENSIVE IS THIS?

• d^2 dot products of length-n vectors
• Total: O(nd^2)
• Faster: O(n·d^(ω-1))
• Expensive: nd^2 > nd > m

Page 21: Iterative Row Sampling

EQUIVALENT VIEW OF SKETCHES

• Approximate covariance matrix: C′ = (A′)^T·A′
• ║Ax║2 ≈ ║A′x║2 is the same as C ≈ C′

Page 22: Iterative Row Sampling

APPLICATION OF SKETCHES

• A′: n′ rows
• d^2 dot products of length-n′ vectors
• Total cost: O(n′·d^(ω-1))

Page 23: Iterative Row Sampling

SKETCHES IN INPUT SPARSITY TIME

• Need: cost of computing C′ < cost of computing C = A^T·A
• Two goals: n′ small, and A′ found efficiently

Page 24: Iterative Row Sampling

COST AND QUALITY OF A’

                           p=2 cost     p=2 size     p=1 cost       p=1 size
Dasgupta et al. `09                                  nd^5           d^2.5
Magdon-Ismail `10          nd^2/logd    dlog^2(d)
Sohler & Woodruff `11                                nd^(ω-1+α)     d^3.5
Drineas et al. `12         ndlogd+d^ω   dlogd
Clarkson et al. `12                                  ndlogd         d^4.5 log^1.5(d)
Clarkson & Woodruff `12    m            d^2 logd     m + d^7        d^8
Mahoney & Meng `12         m            d^2          mlogn + d^8    d^3.5
Nelson & Nguyen `12        m            d^(1+α)      same as above
This Paper                 m + d^(ω+α)  dlogd        m + d^(ω+α)    d^3.66

Page 25: Iterative Row Sampling

OUTLINE

• Matrix sketches
• How? Existence
• Samples → better samples
• Iterative algorithms

Page 26: Iterative Row Sampling

PREVIOUS APPROACHES

• Go to poly(d) rows directly
• Projection to obtain key info, or the sketch itself

[Figure: A (m rows) → A′ (poly(d) rows); caption: "a miracle happens"]

Page 27: Iterative Row Sampling

OUR MAIN APPROACH

• Utilize the robustness of sketches, covariance matrices, and sampling
• Iteratively reduce errors and sizes

[Figure: A → A″ → A′]

Page 28: Iterative Row Sampling

BETTER ALGORITHM FOR P=2

(The cost/size table from Page 24 is repeated here; for p=2, this paper reaches cost m + d^(ω+α) with dlogd rows.)

Page 29: Iterative Row Sampling

COMPOSING SKETCHES

Total cost: O(m + n′dlogd + d^ω) = O(m + d^ω)

A (n rows) → A″ (n′ = d^(1+α) rows), in O(m) → A′ (dlogd rows), in O(n′dlogd + d^ω)

Page 30: Iterative Row Sampling

ACCUMULATION OF ERRORS

A (n rows) → A″ (n′ = d^(1+α) rows) → A′ (dlogd rows):

║Ax║2 ≈k ║A″x║2
║A″x║2 ≈k′ ║A′x║2
║Ax║2 ≈kk′ ║A′x║2

Page 31: Iterative Row Sampling

ACCUMULATION OF ERRORS

║Ax║2 ≈kk′ ║A′x║2

• Final error: product of both errors
• Dependency of error in cost: usually ε^-2 or more for 1±ε error
• [Avron & Toledo `11]: only the final step needs to be accurate
• Idea: compute sketches indirectly

Page 32: Iterative Row Sampling

ROW SAMPLING

• Pick some rows of A to be A′
• How to pick? Random

Page 33: Iterative Row Sampling

ARE ALL ROWS EQUAL?

[Figure: A with one non-zero row; A with a column containing a single entry]

║A·[1; 0; …; 0]║p ≠ 0: rows like these cannot be dropped.

Page 34: Iterative Row Sampling

ROW SAMPLING

• τ′: weights on rows → a sampling distribution
• Pick a number of rows independently from this distribution and rescale them to form A′ (see the sketch below)
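One standard way to implement this step (a hedged sketch; the function name, sample size, and weights are illustrative placeholders, not the talk's exact choices):

```python
import numpy as np

def row_sample(A, tau, N, rng):
    """Sample N rows i.i.d. with prob. p_i = tau_i / sum(tau) and
    rescale each picked row by 1/sqrt(N p_i), so E[(A')^T A'] = A^T A."""
    p = tau / tau.sum()
    idx = rng.choice(A.shape[0], size=N, p=p)
    return A[idx] / np.sqrt(N * p[idx])[:, None]

rng = np.random.default_rng(3)
A = rng.standard_normal((5000, 10))
A1 = row_sample(A, np.ones(5000), 400, rng)   # uniform tau', for illustration
```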

Page 35: Iterative Row Sampling

MATRIX CHERNOFF BOUNDS

• Sufficient property of τ′
• τ: statistical leverage scores
• If τ′ ≥ τ, then ║τ′║1·logd (rescaled) rows suffice for A′ ≈ A

Page 36: Iterative Row Sampling

STATISTICAL LEVERAGE SCORES

• Studied in statistics since the 70s
• Importance of rows
• Leverage score of row i, Ai: τi = Ai·(A^T·A)^-1·Ai^T
• Key fact: ║τ║1 = rank ≤ d, so ║τ′║1·logd = dlogd rows

Page 37: Iterative Row Sampling

COMPUTING LEVERAGE SCORES

τi = Ai·(A^T·A)^-1·Ai^T = Ai·C^-1·Ai^T

• A^T·A: covariance matrix, C
• Given C^-1, can compute each τi in O(d^2) time
• Total cost: O(nd^2 + d^ω)

(a direct implementation follows below)
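For reference, a direct rendering of this computation (a sketch under the slide's assumptions: dense A with full column rank; the function name is hypothetical):

```python
import numpy as np

def leverage_scores(A):
    """tau_i = A_i C^-1 A_i^T with C = A^T A, for all rows at once."""
    C = A.T @ A
    X = np.linalg.solve(C, A.T)          # C^-1 A^T, avoids forming C^-1
    return np.einsum('ij,ji->i', A, X)   # row-wise A_i (C^-1 A_i^T)

rng = np.random.default_rng(4)
A = rng.standard_normal((2000, 15))
tau = leverage_scores(A)
print(tau.sum())                         # equals rank(A) <= d, here 15.0
```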

Page 38: Iterative Row Sampling

COMPUTING LEVERAGE SCORES

τi = Ai·C^-1·Ai^T = ║Ai·C^-1/2║2^2

• The 2-norm of a vector, Ai·C^-1/2
• Puts rows in isotropic position
• Decorrelates the columns

Page 39: Iterative Row Sampling

ASIDE: WHAT IS LEVERAGE?

Geometric view:
• Rows define 'energy' directions
• Normalize so total energy is uniform
• τi: norm of row i after normalizing (Ai → Ai·C^-1/2)

Page 40: Iterative Row Sampling

ASIDE: WHAT IS LEVERAGE?

How to interpret statistical leverage scores?
• Statistics ([Hoaglin-Welsh `78], [Chatterjee-Hadi `86]): influence on the data set; likelihood of being an outlier
• Uniqueness of the row

Page 41: Iterative Row Sampling

ASIDE: WHAT IS LEVERAGE?

High leverage score:
• Key attribute?
• Outlier (measurement error)?

Page 42: Iterative Row Sampling

ASIDE: WHAT IS LEVERAGE?

My current view (motivated by graph sparsification):
• Sampling probabilities
• Use them to find sketches

Page 43: Iterative Row Sampling

COMPUTING LEVERAGE SCORES

τi = ║Ai·C^-1/2║2^2

• Only need τ′ ≥ τ
• Can use approximations after scaling them up
• Error leads to a larger ║τ′║1

Page 44: Iterative Row Sampling

DIMENSIONALITY REDUCTION

║x║2^2 ≈jl ║Gx║2^2

• Johnson-Lindenstrauss transform
• G: d-by-O(1/α) Gaussian
• Error: jl = d^α

Page 45: Iterative Row Sampling

ESTIMATING LEVERAGE SCORES

τi = ║Ai·C^-1/2║2^2 ≈jl ║Ai·C^-1/2·G║2^2

• G: d-by-O(1/α) Gaussian
• C^-1/2·G: d-by-O(1/α)
• Cost: O(α·nnz(Ai)); total: O(α·m + α·d^2·logd)

(a minimal implementation sketch follows below)
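A minimal sketch of this estimator (assumptions: dense A, a Cholesky factor in place of C^-1/2, a fixed number of Gaussian columns k, and a constant scale-up so τ′ ≥ τ with good probability; names are illustrative):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def approx_leverage_scores(A, k=20, slack=2.0, rng=None):
    """Estimate tau_i = ||A_i C^-1/2||_2^2 as ||A_i R^-1 G||_2^2,
    where C = A^T A = R^T R (Cholesky) and G is d-by-k Gaussian."""
    rng = rng or np.random.default_rng()
    R = cholesky(A.T @ A)                    # upper triangular, C = R^T R
    G = rng.standard_normal((A.shape[1], k)) / np.sqrt(k)
    Y = A @ solve_triangular(R, G)           # n-by-k: A R^-1 G
    return slack * (Y * Y).sum(axis=1)       # scaled-up estimates tau'
```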

Page 46: Iterative Row Sampling

ESTIMATING LEVERAGE SCORES

• C ≈k C′ gives ║C^-1/2·x║2 ≈k ║C′^-1/2·x║2
• Using C′ as a preconditioner for C
• Can also combine with JL

τi = ║Ai·C^-1/2║2^2 ≈ ║Ai·C′^-1/2║2^2

Page 47: Iterative Row Sampling

ESTIMATING LEVERAGE SCORES

τ′i = ║Ai·C′^-1/2·G║2^2 ≈jl ║Ai·C′^-1/2║2^2 ≈jl·k τi

• (jl·k)·τ′ ≥ τ
• Total number of rows: ║jl·k·τ′║1 ≤ jl·k·║τ′║1 ≤ k·d^(1+α)

Page 48: Iterative Row Sampling

ESTIMATING LEVERAGE SCORES

• Quality of A′ does not depend on quality of τ′
• C ≈k C′ gives A′ ≈2 A with O(k·d^(1+α)) rows in O(m + d^ω) time

(jl·k)·τ′ ≥ τ, and ║jl·k·τ′║1 ≤ jl·k·d^(1+α)

Some fixable issues when n ≫ d

Page 49: Iterative Row Sampling

SIZE REDUCTION

• A″ ≈O(1) A
• C″ ≈O(1) C
• τ′ ≈O(1) τ
• A′ ≈O(1) A, with O(d^(1+α)·logd) rows

Page 50: Iterative Row Sampling

HIGH ERROR SETTING

• A″ ≈k A
• C″ ≈k C
• τ′ ≈k τ
• A′ ≈O(1) A, with O(k·d^(1+α)·logd) rows

Page 51: Iterative Row Sampling

ACCURACY BOOSTING

• Can reduce any error k in O(m + k·d^(ω+α)) time
• All intermediate steps can have large (constant) error

[Figure: A → A″ → A′]

Page 52: Iterative Row Sampling

OUTLINE

• Matrix sketches
• How? Existence
• Samples → better samples
• Iterative algorithms

Page 53: Iterative Row Sampling

ONE STEP SKETCHING

• Obtain a sketch of size poly(d)
• Error-correct to O(dlogd) rows in poly(d) time

[Figure: A (m rows) → A″ (poly(d) rows, "a miracle happens") → A′ (dlogd rows)]

Page 54: Iterative Row Sampling

WHAT WE WILL SHOW

• A number of iterative steps can give a similar result
• More work, less miraculous, more robust
• Key idea: find leverage scores

Page 55: Iterative Row Sampling

ALGORITHMIC PICTURE

A′ → C′ → τ′: a sketch, covariance matrix, or set of leverage scores with error k gives all three with high accuracy in O(m + k·d^(ω+α)) time

Page 56: Iterative Row Sampling

OBSERVATIONS

A′ → C′ → τ′ cycle (≈k, ≈k, then ≈O(1) with an O(k) size increase)

• Error does not accumulate
• Can loop around many times
• Unused parameter: size of A

Page 57: Iterative Row Sampling

OUR APPROACH

Create a shorter matrix As (A → As) s.t. the total leverage score of each block is close

Page 58: Iterative Row Sampling

LEVERAGE SCORE OF A BLOCK

• l2^2 of a block's leverage scores: the Frobenius norm of A1:k·C^-1/2
• ≈ preserved under random projection
• G: O(1)-by-k, so G·A1:k has O(1) rows

║τ1..k║2^2 = ║A1:k·C^-1/2║F^2 ≈ ║G·A1:k·C^-1/2║F^2

(a sketch of this compression step follows below)
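A hedged sketch of the block-compression step (block size, output rows, and the function name are arbitrary illustrative choices):

```python
import numpy as np

def compress_blocks(A, blk=8, out_rows=2, rng=None):
    """Replace each block of `blk` consecutive rows by `out_rows`
    Gaussian combinations of them, roughly preserving each block's
    ||A_block C^-1/2||_F^2 (its total leverage)."""
    rng = rng or np.random.default_rng()
    pieces = []
    for s in range(0, A.shape[0], blk):
        Ab = A[s:s + blk]
        G = rng.standard_normal((out_rows, Ab.shape[0])) / np.sqrt(out_rows)
        pieces.append(G @ Ab)                # O(1) rows per block
    return np.vstack(pieces)                 # the shorter matrix As
```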

Page 59: Iterative Row Sampling

SIZE REDUCTION

Recursing on As gives leverage scores that:
• Sum to ≤ d
• Suffice to row sample A

Page 60: Iterative Row Sampling

ALGORITHM

A′ → C′ → τ′ cycle (≈k, ≈k, then ≈O(1) with an O(k) size increase):

• Decrease size by d^α, recurse
• Bring back leverage scores
• Reduce error

(a toy rendering of this loop follows below)
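A toy, non-recursive rendering of this loop (assumptions: dense numpy, exact solves on the sample's covariance, a fixed sample size; the real algorithm recurses on the compressed matrix instead):

```python
import numpy as np

def iterate_sampling(A, rounds=3, size=None, rng=None):
    """Loop A' -> C' -> tau' -> A': sample rows with current scores,
    recompute scores from the sample's covariance, repeat.  Errors do
    not accumulate: each pass only needs tau' >= tau up to a constant."""
    rng = rng or np.random.default_rng()
    n, d = A.shape
    size = size or 20 * d
    tau = np.ones(n)                          # crude initial weights
    for _ in range(rounds):
        p = tau / tau.sum()
        idx = rng.choice(n, size=size, p=p)
        A1 = A[idx] / np.sqrt(size * p[idx])[:, None]   # sketch A'
        X = np.linalg.solve(A1.T @ A1, A.T)             # C'^-1 A^T
        tau = np.maximum(np.einsum('ij,ji->i', A, X), 1e-12)
    return A1
```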

Page 61: Iterative Row Sampling

PROBLEM

• Leverage scores in As are measured using Cs = As^T·As
• Already have a bound on the total; suffices to show ║x·C^-1/2║2 ≤ k·║x·Cs^-1/2║2

Page 62: Iterative Row Sampling

PROOF SKETCH

Need: ║x·C^-1/2║2 ≤ k·║x·Cs^-1/2║2

• Show ║Cs^1/2·x║2 ≤ k·║C^1/2·x║2
• Invert both sides
• Some issues when As has smaller rank than A

Page 63: Iterative Row Sampling

║Cs^1/2·x║2 ≤ k·║C^1/2·x║2

║Cs^1/2·x║2^2 = ║As·x║2^2
  = Σb Σi (Gi,b · Ab·x)^2
  ≤ Σb Σi ║Gi,b║2^2 · ║Ab·x║2^2
  ≤ maxb,i ║Gi,b║2^2 · ║Ax║2^2
  ≤ O(k·logn) · ║Ax║2^2

(b ranges over the blocks of As)

Page 64: Iterative Row Sampling

P=1, OR ARBITRARY P

• Same approach can still work
• p-norm leverage scores
• Need: a well-conditioned basis U for the column space

║Ax║p ≈ ║A′x║p for any x

Page 65: Iterative Row Sampling

QUALITY OF BASIS (P=1)

• Quality of U: maximum distortion in the dual norm: β = maxx≠0 ║Ux║∞ / ║x║∞
• Analog of leverage scores: τi = β·║Ui,:║1
• Total number of rows: β·║U║1 (a small sketch follows below)
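For a given basis U, both quantities are easy to read off, since ║Ux║∞/║x║∞ is maximized by a sign vector; a small hedged sketch (the function name is hypothetical):

```python
import numpy as np

def l1_scores(U):
    """beta = max_x ||Ux||_inf / ||x||_inf = largest row l1 norm of U;
    p=1 leverage analog: tau_i = beta * ||U_i,:||_1."""
    row_l1 = np.abs(U).sum(axis=1)
    beta = row_l1.max()
    return beta * row_l1                     # sums to beta * ||U||_1
```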

Page 66: Iterative Row Sampling

BASIS CONSTRUCTION

• Basis via a linear transform: U = AC
• Compute ║Ui║1 using p-stable distributions (Indyk `06) instead of JL

A′ → (C1, U) → τ′ cycle (≈k, ≈k, then ≈O(1) with an O(k) size increase)

Page 67: Iterative Row Sampling

ITERATIVE ALGORITHM FOR P=1

• C1 = C^-1/2, the l2 basis
• Quality of U = A·C1: β·║U║1 = n^1/2·d
• Too coarse for a single step, but good enough to iterate
• n approaches poly(d) quickly
• Need to run the l2 algorithm for C

Page 68: Iterative Row Sampling

SUMMARY

                           p=2 cost (dlogd rows)   p=1 cost       p=1 size
Sohler & Woodruff `11                              nd^(ω-1+α)     d^3.5
Drineas et al. `12         ndlogd + d^ω
Clarkson et al. `12                                ndlogd         d^4.5 log^1.5(d)
Clarkson & Woodruff `12    m + d^3 log^2(d)        m + d^7        d^8
Mahoney & Meng `12         m + d^3 logd            mlogn + d^8    d^3.5
Nelson & Nguyen `12        m + d^ω                 same as above
This Paper                 m + d^(ω+α)             m + d^(ω+α)    d^3.66

• Robust steps → algorithms
• l2: more complicated than sketching
• Smaller overhead for p-norm

Page 69: Iterative Row Sampling

FUTURE WORK

• What are leverage scores???
• Iterative low-rank approximation?
• Better p-norm leverage scores?
• More streamlined view of the projections in our algorithm?
• Empirical evaluation?

Page 70: Iterative Row Sampling

THANK YOU!

Questions?