Singular Value Decomposition (matrix factorization)


Nov 14, 2021



dariahiddleston
Transcript
Page 1: Singular Value Decomposition (matrix factorization)

Singular Value Decomposition (matrix factorization)

Page 2: Singular Value Decomposition (matrix factorization)

Singular Value Decomposition

The SVD is a factorization of an $m \times n$ matrix:

$$A = U \Sigma V^T$$

where $U$ is an $m \times m$ orthogonal matrix, $V^T$ is an $n \times n$ orthogonal matrix, and $\Sigma$ is an $m \times n$ diagonal matrix.

For a square matrix ($m = n$):

$$A = \begin{bmatrix} \vdots & & \vdots \\ u_1 & \cdots & u_n \\ \vdots & & \vdots \end{bmatrix}
\begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_n \end{bmatrix}
\begin{bmatrix} \cdots & v_1^T & \cdots \\ & \vdots & \\ \cdots & v_n^T & \cdots \end{bmatrix}$$

with the singular values ordered so that $\sigma_1 \ge \sigma_2 \ge \sigma_3 \ge \cdots$
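As a numerical sanity check, here is a minimal NumPy sketch (the 3×2 matrix is an arbitrary example) that computes the SVD and verifies both the factorization and the orthogonality of $U$ and $V$:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])  # arbitrary 3x2 example matrix

# full_matrices=True returns U (m x m), the min(m, n) singular values, and VT (n x n)
U, s, VT = np.linalg.svd(A, full_matrices=True)

Sigma = np.zeros(A.shape)                  # build the m x n diagonal matrix
Sigma[:len(s), :len(s)] = np.diag(s)

print(np.allclose(A, U @ Sigma @ VT))      # True: A = U Sigma V^T
print(np.allclose(U.T @ U, np.eye(3)))     # True: U is orthogonal
print(np.allclose(VT @ VT.T, np.eye(2)))   # True: V is orthogonal
print(s)                                   # singular values, decreasing
```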

Page 3: Singular Value Decomposition (matrix factorization)

Reduced SVD

What happens when $A$ is not a square matrix?

1) $m > n$:

$$\underbrace{A}_{m \times n} = \underbrace{U}_{m \times m}\,\underbrace{\Sigma}_{m \times n}\,\underbrace{V^T}_{n \times n}
= \begin{bmatrix} u_1 & \cdots & u_n & \cdots & u_m \end{bmatrix}
\begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_n \\ 0 & \cdots & 0 \end{bmatrix}
\begin{bmatrix} v_1^T \\ \vdots \\ v_n^T \end{bmatrix}$$

The last $m - n$ columns of $U$ only multiply the zero rows of $\Sigma$, so we can instead re-write the above as:

$$A = U_R\, \Sigma_R\, V^T$$

where $U_R$ is an $m \times n$ matrix and $\Sigma_R$ is an $n \times n$ matrix.

Page 4: Singular Value Decomposition (matrix factorization)

Reduced SVD

2) $n > m$:

$$\underbrace{A}_{m \times n} = \underbrace{U}_{m \times m}\,\underbrace{\Sigma}_{m \times n}\,\underbrace{V^T}_{n \times n}
= \begin{bmatrix} u_1 & \cdots & u_m \end{bmatrix}
\begin{bmatrix} \sigma_1 & & & 0 & \cdots & 0 \\ & \ddots & & \vdots & & \vdots \\ & & \sigma_m & 0 & \cdots & 0 \end{bmatrix}
\begin{bmatrix} v_1^T \\ \vdots \\ v_n^T \end{bmatrix}$$

We can instead re-write the above as:

$$A = U\, \Sigma_R\, V_R^T$$

where $V_R$ is an $n \times m$ matrix and $\Sigma_R$ is an $m \times m$ matrix.

In general:

$$A = U_R\, \Sigma_R\, V_R^T$$

where $U_R$ is an $m \times k$ matrix, $\Sigma_R$ is a $k \times k$ matrix, $V_R$ is an $n \times k$ matrix, and $k = \min(m, n)$.
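The full and reduced shapes can be illustrated with NumPy's `full_matrices` flag (the random 5×3 matrix is an arbitrary example):

```python
import numpy as np

A = np.random.rand(5, 3)  # m = 5, n = 3, so k = min(m, n) = 3

# full_matrices=False returns the reduced factors directly
UR, s, VRT = np.linalg.svd(A, full_matrices=False)

print(UR.shape)   # (5, 3): m x k
print(s.shape)    # (3,):  the k singular values (Sigma_R = diag(s) is k x k)
print(VRT.shape)  # (3, 3): k x n  (so V_R is n x k)

print(np.allclose(A, UR @ np.diag(s) @ VRT))  # True: A = U_R Sigma_R V_R^T
```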

Page 5: Singular Value Decomposition (matrix factorization)

Let's take a look at the product $\Sigma^T \Sigma$, where $\Sigma$ holds the singular values of $A$, an $m \times n$ matrix.

Case $m > n$ ($\Sigma^T$ is $n \times m$, $\Sigma$ is $m \times n$, so the product is $n \times n$): here $\Sigma = \begin{bmatrix} \mathrm{diag}(\sigma_1, \dots, \sigma_n) \\ 0 \end{bmatrix}$, so

$$\Sigma^T \Sigma = \mathrm{diag}(\sigma_1^2, \dots, \sigma_n^2)$$

Case $n > m$: here $\Sigma = \begin{bmatrix} \mathrm{diag}(\sigma_1, \dots, \sigma_m) & 0 \end{bmatrix}$, so

$$\Sigma^T \Sigma = \mathrm{diag}(\sigma_1^2, \dots, \sigma_m^2, 0, \dots, 0)$$

In both cases $\Sigma^T \Sigma$ is an $n \times n$ diagonal matrix with the squared singular values on its diagonal.

Page 6: Singular Value Decomposition (matrix factorization)

Assume $A$ has the singular value decomposition $A = U \Sigma V^T$. Let's take a look at the eigenpairs corresponding to $A^T A$:

$$A^T A = (U \Sigma V^T)^T (U \Sigma V^T) = V \Sigma^T U^T U \Sigma V^T = V (\Sigma^T \Sigma) V^T$$

Hence $A^T A = V \Sigma^2 V^T$.

Recall that the columns of $V$ are all linearly independent ($V$ is an orthogonal matrix), so from diagonalization ($B = X D X^{-1}$) we get:

• the columns of $V$ are the eigenvectors of the matrix $A^T A$
• the diagonal entries of $\Sigma^2$ are the eigenvalues of $A^T A$

Let $\lambda_i$ be the eigenvalues of $A^T A$; then $\sigma_i^2 = \lambda_i$.
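This relationship is easy to verify numerically; a small NumPy sketch (arbitrary random matrix) compares the squared singular values of $A$ with the eigenvalues of $A^T A$:

```python
import numpy as np

A = np.random.rand(4, 3)                 # arbitrary example matrix
s = np.linalg.svd(A, compute_uv=False)   # singular values, decreasing order
lam = np.linalg.eigvalsh(A.T @ A)[::-1]  # eigenvalues of A^T A, reordered decreasing

print(np.allclose(s**2, lam))  # True: sigma_i^2 = lambda_i
```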

Page 7: Singular Value Decomposition (matrix factorization)

In a similar way,

$$A A^T = (U \Sigma V^T)(U \Sigma V^T)^T = U \Sigma V^T V \Sigma^T U^T = U (\Sigma \Sigma^T) U^T$$

Hence $A A^T = U \Sigma^2 U^T$.

Recall that the columns of $U$ are all linearly independent ($U$ is an orthogonal matrix), so from diagonalization ($B = X D X^{-1}$) we get:

• the columns of $U$ are the eigenvectors of the matrix $A A^T$

Page 8: Singular Value Decomposition (matrix factorization)

How can we compute an SVD of a matrix $A$?

1. Evaluate the $n$ eigenvectors $v_i$ and eigenvalues $\lambda_i$ of $A^T A$.
2. Make a matrix $V$ from the normalized vectors $v_i$. The columns are called the "right singular vectors":

$$V = \begin{bmatrix} v_1 & \cdots & v_n \end{bmatrix}$$

3. Make a diagonal matrix from the square roots of the eigenvalues:

$$\Sigma = \mathrm{diag}(\sigma_1, \dots, \sigma_n), \qquad \sigma_i = \sqrt{\lambda_i}, \qquad \sigma_1 \ge \sigma_2 \ge \sigma_3 \ge \cdots$$

4. Find $U$: $A = U \Sigma V^T \implies U \Sigma = A V \implies U = A V \Sigma^{-1}$. The columns are called the "left singular vectors".
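The four steps above can be sketched in NumPy (the 3×2 full-column-rank matrix is an arbitrary example; step 4 requires all $\sigma_i > 0$ so that $\Sigma$ is invertible):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])  # arbitrary full-column-rank example

# Steps 1-2: eigenpairs of A^T A give V (right singular vectors)
lam, V = np.linalg.eigh(A.T @ A)
order = np.argsort(lam)[::-1]          # sort eigenvalues in decreasing order
lam, V = lam[order], V[:, order]

# Step 3: singular values are the square roots of the eigenvalues
sigma = np.sqrt(lam)

# Step 4: U = A V Sigma^{-1} (valid here since all sigma_i > 0)
U = A @ V @ np.diag(1.0 / sigma)

print(np.allclose(A, U @ np.diag(sigma) @ V.T))  # True: factorization recovered
```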

Page 9: Singular Value Decomposition (matrix factorization)

True or False?

$A$ has the singular value decomposition $A = U \Sigma V^T$.

• The matrices $U$ and $V$ are not singular

• The matrix $\Sigma$ can have zero diagonal entries

• $\|U\|_2 = 1$

• The SVD exists when the matrix $A$ is singular

• The algorithm to evaluate the SVD will fail when taking the square root of a negative eigenvalue

Page 10: Singular Value Decomposition (matrix factorization)

Singular values cannot be negative, since $A^T A$ is a positive semi-definite matrix (for real matrices $A$).

• A matrix $B$ is positive definite if $x^T B x > 0$ for all $x \ne 0$
• A matrix $B$ is positive semi-definite if $x^T B x \ge 0$ for all $x$

• What do we know about the matrix $A^T A$?

$$x^T (A^T A)\, x = (Ax)^T (Ax) = \|Ax\|_2^2 \ge 0$$

• Hence we know that $A^T A$ is a positive semi-definite matrix.

• A positive semi-definite matrix has non-negative eigenvalues:

$$Bx = \lambda x \implies x^T B x = x^T \lambda x = \lambda \|x\|_2^2 \ge 0 \implies \lambda \ge 0$$

Singular values are always non-negative.

Page 11: Singular Value Decomposition (matrix factorization)

Cost of SVD

The cost of an SVD is proportional to $m n^2 + n^3$, where the constant of proportionality ranges from 4 to 10 (or more) depending on the algorithm.

$$C_{SVD} = \alpha\,(m n^2 + n^3) = O(n^3) \qquad C_{LU} = \frac{2 n^3}{3} = O(n^3)$$

Page 12: Singular Value Decomposition (matrix factorization)

SVD summary:

• The SVD is a factorization of an $m \times n$ matrix: $A = U \Sigma V^T$, where $U$ is an $m \times m$ orthogonal matrix, $V^T$ is an $n \times n$ orthogonal matrix, and $\Sigma$ is an $m \times n$ diagonal matrix.

• In reduced form: $A = U_R \Sigma_R V_R^T$, where $U_R$ is an $m \times k$ matrix, $\Sigma_R$ is a $k \times k$ matrix, $V_R$ is an $n \times k$ matrix, and $k = \min(m, n)$.

• The columns of $V$ are the eigenvectors of the matrix $A^T A$, denoted the right singular vectors.

• The columns of $U$ are the eigenvectors of the matrix $A A^T$, denoted the left singular vectors.

• The diagonal entries of $\Sigma^2$ are the eigenvalues of $A^T A$; $\sigma_i = \sqrt{\lambda_i}$ are called the singular values.

• The singular values are always non-negative (since $A^T A$ is a positive semi-definite matrix, its eigenvalues are always $\lambda \ge 0$).

Page 13: Singular Value Decomposition (matrix factorization)

Singular Value Decomposition (applications)

Page 14: Singular Value Decomposition (matrix factorization)

1) Determining the rank of a matrix

Suppose $A$ is an $m \times n$ rectangular matrix with $m > n$:

$$A = U \Sigma V^T = \begin{bmatrix} u_1 & \cdots & u_n & \cdots & u_m \end{bmatrix}
\begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_n \\ 0 & \cdots & 0 \end{bmatrix}
\begin{bmatrix} v_1^T \\ \vdots \\ v_n^T \end{bmatrix}
= \begin{bmatrix} u_1 & \cdots & u_n \end{bmatrix}
\begin{bmatrix} \sigma_1 v_1^T \\ \vdots \\ \sigma_n v_n^T \end{bmatrix}
= \sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T + \cdots + \sigma_n u_n v_n^T$$

so that

$$A = \sum_{i=1}^{n} \sigma_i u_i v_i^T$$

If $A_1 = \sigma_1 u_1 v_1^T$, what is $\mathrm{rank}(A_1)$?

A) 1  B) $n$  C) depends on the matrix  D) NOTA

In general, $\mathrm{rank}(A_k) = k$.

Page 15: Singular Value Decomposition (matrix factorization)

Rank of a matrix

For a general rectangular matrix $A$ with dimensions $m \times n$, the reduced SVD is:

$$\underbrace{A}_{m \times n} = \underbrace{U_R}_{m \times k}\,\underbrace{\Sigma_R}_{k \times k}\,\underbrace{V_R^T}_{k \times n} = \sum_{i=1}^{k} \sigma_i u_i v_i^T, \qquad k = \min(m, n)$$

If $\sigma_i \ne 0\ \forall i$, then $\mathrm{rank}(A) = k$ (full rank matrix).

In general, $\mathrm{rank}(A) = r$, where $r$ is the number of non-zero singular values $\sigma_i$. When some singular values vanish, for example

$$\Sigma = \mathrm{diag}(\sigma_1, \dots, \sigma_r, 0, \dots, 0)$$

then $r < k$ (rank deficient).

Page 16: Singular Value Decomposition (matrix factorization)

• The rank of $A$ equals the number of non-zero singular values, which is the same as the number of non-zero diagonal elements in $\Sigma$.

• Rounding errors may lead to small but non-zero singular values in a rank-deficient matrix; hence the rank of a matrix determined by the number of non-zero singular values is sometimes called the "effective rank".

• The right singular vectors (columns of $V$) corresponding to vanishing singular values span the null space of $A$.

• The left singular vectors (columns of $U$) corresponding to the non-zero singular values of $A$ span the range of $A$.
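A minimal NumPy sketch of the "effective rank" idea (the rank-1 example matrix and the tolerance formula are illustrative choices; NumPy's `matrix_rank` uses a similar SVD-based cutoff):

```python
import numpy as np

# a rank-1 matrix built as a single outer product
A = np.outer([1.0, 2.0, 3.0], [4.0, 5.0])

s = np.linalg.svd(A, compute_uv=False)
tol = max(A.shape) * np.finfo(float).eps * s[0]  # threshold for "numerically zero"
effective_rank = int(np.sum(s > tol))

print(effective_rank)             # 1
print(np.linalg.matrix_rank(A))   # 1 (also computed from the singular values)
```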


Page 17: Singular Value Decomposition (matrix factorization)

2) Pseudo-inverse

• Problem: if $A$ is rank-deficient, $\Sigma$ is not invertible.

• How to fix it: define the pseudo-inverse.

• Pseudo-inverse of a diagonal matrix:

$$(\Sigma^+)_{ii} = \begin{cases} \dfrac{1}{\sigma_i}, & \text{if } \sigma_i \ne 0 \\ 0, & \text{if } \sigma_i = 0 \end{cases}$$

• Pseudo-inverse of a matrix $A$:

$$A^+ = V \Sigma^+ U^T$$
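A small NumPy sketch of this construction (the rank-deficient 2×2 matrix and the 1e-12 cutoff are illustrative choices), compared against `np.linalg.pinv`:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])  # rank deficient: sigma_2 = 0

U, s, VT = np.linalg.svd(A)

# invert only the non-zero singular values
s_plus = np.zeros_like(s)
nonzero = s > 1e-12
s_plus[nonzero] = 1.0 / s[nonzero]

A_plus = VT.T @ np.diag(s_plus) @ U.T  # A^+ = V Sigma^+ U^T

print(np.allclose(A_plus, np.linalg.pinv(A)))  # True
```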

Page 18: Singular Value Decomposition (matrix factorization)

3) Matrix norms

The Euclidean norm of an orthogonal matrix is equal to 1:

$$\|U\|_2 = \max_{\|x\|_2 = 1} \|Ux\|_2 = \max_{\|x\|_2 = 1} \sqrt{(Ux)^T (Ux)} = \max_{\|x\|_2 = 1} \sqrt{x^T x} = \max_{\|x\|_2 = 1} \|x\|_2 = 1$$

The Euclidean norm of a matrix is given by its largest singular value:

$$\|A\|_2 = \max_{\|x\|_2 = 1} \|Ax\|_2 = \max_{\|x\|_2 = 1} \|U \Sigma V^T x\|_2 = \max_{\|x\|_2 = 1} \|\Sigma V^T x\|_2 = \max_{\|V^T x\|_2 = 1} \|\Sigma V^T x\|_2 = \max_{\|y\|_2 = 1} \|\Sigma y\|_2 = \max_i (\sigma_i)$$

where we used the facts that $\|U\|_2 = 1$, $\|V\|_2 = 1$, and $\Sigma$ is diagonal.

$$\|A\|_2 = \max_i \sigma_i = \sigma_{\max}$$

$\sigma_{\max}$ is the largest singular value.
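This identity can be checked directly, since `np.linalg.norm(A, 2)` computes the induced 2-norm (arbitrary random example):

```python
import numpy as np

A = np.random.rand(4, 3)                 # arbitrary example matrix
s = np.linalg.svd(A, compute_uv=False)   # singular values, decreasing

print(np.isclose(np.linalg.norm(A, 2), s[0]))  # True: ||A||_2 = sigma_max
```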

Page 19: Singular Value Decomposition (matrix factorization)

4) Norm of the inverse of a matrix

The Euclidean norm of the inverse of a square matrix is given by (assume here $A$ is full rank, so that $A^{-1}$ exists):

$$\|A^{-1}\|_2 = \max_{\|x\|_2 = 1} \|(U \Sigma V^T)^{-1} x\|_2 = \max_{\|x\|_2 = 1} \|V \Sigma^{-1} U^T x\|_2$$

Since $\|U\|_2 = 1$, $\|V\|_2 = 1$, and $\Sigma$ is diagonal,

$$\|A^{-1}\|_2 = \frac{1}{\sigma_{\min}}$$

$\sigma_{\min}$ is the smallest singular value.

Page 20: Singular Value Decomposition (matrix factorization)

5) Norm of the pseudo-inverse matrix

The norm of the pseudo-inverse of an $m \times n$ matrix is:

$$\|A^+\|_2 = \frac{1}{\sigma_r}$$

where $\sigma_r$ is the smallest non-zero singular value. This is valid for any matrix, regardless of its shape or rank.

Note that for a full-rank square matrix, $\|A^+\|_2$ is the same as $\|A^{-1}\|_2$.

Zero matrix: if $A$ is a zero matrix, then $A^+$ is also the zero matrix, and $\|A^+\|_2 = 0$.

Page 21: Singular Value Decomposition (matrix factorization)

6) Condition number of a matrix

The condition number of a matrix is given by

$$\mathrm{cond}_2(A) = \|A\|_2\, \|A^+\|_2$$

If the matrix is full rank ($\mathrm{rank}(A) = \min(m, n)$):

$$\mathrm{cond}_2(A) = \frac{\sigma_{\max}}{\sigma_{\min}}$$

where $\sigma_{\max}$ is the largest singular value and $\sigma_{\min}$ is the smallest singular value.

If the matrix is rank deficient ($\mathrm{rank}(A) < \min(m, n)$): $\mathrm{cond}_2(A) = \infty$.
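Both cases can be illustrated with `np.linalg.cond` (the matrices are arbitrary examples; for a numerically rank-deficient matrix the computed condition number is huge rather than literally infinite):

```python
import numpy as np

A = np.random.rand(4, 4) + 4 * np.eye(4)  # a well-conditioned square example
s = np.linalg.svd(A, compute_uv=False)

print(np.isclose(np.linalg.cond(A, 2), s[0] / s[-1]))  # True: sigma_max / sigma_min

# a rank-deficient (rank-1) matrix: condition number blows up
B = np.outer([1.0, 2.0], [3.0, 4.0])
print(np.linalg.cond(B, 2))  # enormous (effectively infinite)
```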

Page 22: Singular Value Decomposition (matrix factorization)

7) Low-Rank Approximation

Another way to write the SVD (assuming for now $m > n$ for simplicity):

$$A = \begin{bmatrix} u_1 & \cdots & u_m \end{bmatrix}
\begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_n \\ 0 & \cdots & 0 \end{bmatrix}
\begin{bmatrix} v_1^T \\ \vdots \\ v_n^T \end{bmatrix}
= \begin{bmatrix} u_1 & \cdots & u_n \end{bmatrix}
\begin{bmatrix} \sigma_1 v_1^T \\ \vdots \\ \sigma_n v_n^T \end{bmatrix}
= \sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T + \cdots + \sigma_n u_n v_n^T$$

The SVD writes the matrix $A$ as a sum of outer products (of left and right singular vectors).

Page 23: Singular Value Decomposition (matrix factorization)

7) Low-Rank Approximation (cont.)

$$\sigma_1 \ge \sigma_2 \ge \sigma_3 \ge \cdots \ge 0$$

$$A_k = \sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T + \cdots + \sigma_k u_k v_k^T$$

Note that $\mathrm{rank}(A) = n$ and $\mathrm{rank}(A_k) = k$, and the norm of the difference between the matrix and its approximation is

$$\|A - A_k\|_2 = \|\sigma_{k+1} u_{k+1} v_{k+1}^T + \sigma_{k+2} u_{k+2} v_{k+2}^T + \cdots + \sigma_n u_n v_n^T\|_2 = \sigma_{k+1}$$

The best rank-$k$ approximation for an $m \times n$ matrix $A$ (where $k \le \min(m, n)$) is the one that minimizes $\|A - B\|$ over all matrices $B$ of rank $k$. When using the induced 2-norm, the best rank-$k$ approximation is given by $A_k$.
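A short NumPy sketch of the truncated SVD (arbitrary random matrix, $k = 2$), checking that the 2-norm error equals $\sigma_{k+1}$:

```python
import numpy as np

A = np.random.rand(6, 4)  # arbitrary example matrix
U, s, VT = np.linalg.svd(A, full_matrices=False)

k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ VT[:k, :]  # best rank-k approximation

print(np.linalg.matrix_rank(A_k))                    # 2
print(np.isclose(np.linalg.norm(A - A_k, 2), s[k]))  # True: error = sigma_{k+1}
```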

Page 24: Singular Value Decomposition (matrix factorization)

Example: Image compression

[Figure: original image, with dimensions labeled 500 and 1417]

Page 25: Singular Value Decomposition (matrix factorization)

Example: Image compression

[Figure: the same image (dimensions labeled 500 and 1417) reconstructed using a rank-50 approximation]

Page 26: Singular Value Decomposition (matrix factorization)

8) Using the SVD to solve a square system of linear equations

If $A$ is an $n \times n$ square matrix and we want to solve $A x = b$, we can use the SVD of $A$:

$$U \Sigma V^T x = b \implies \Sigma V^T x = U^T b$$

Solve: $\Sigma y = U^T b$ (diagonal matrix, easy to solve!)
Evaluate: $x = V y$

Cost of solve: $O(n^2)$. Cost of decomposition: $O(n^3)$ (recall that the SVD and LU have the same asymptotic cost; however, the number of operations, i.e. the constant factor in front of $n^3$, is larger for the SVD than for LU).
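The two-step solve can be sketched in NumPy (the 2×2 system is an arbitrary example), compared against `np.linalg.solve`:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])  # arbitrary nonsingular example
b = np.array([1.0, 2.0])

U, s, VT = np.linalg.svd(A)

# Solve Sigma y = U^T b: Sigma is diagonal, so this is an elementwise division
y = (U.T @ b) / s
# Evaluate x = V y
x = VT.T @ y

print(np.allclose(A @ x, b))                  # True
print(np.allclose(x, np.linalg.solve(A, b)))  # True
```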