
An Introduction to the Conjugate Gradient Method without the Agonizing Pain

Jonathan Richard Shewchuk

Reading Group Presentation by David Cline

Linear System

Ax = b

A: square matrix
x: unknown vector (what we want to find)
b: known vector

Matrix Multiplication

The product of the matrix A and the vector x is the vector whose i-th component is

  (Ax)_i = Σ_j A_ij x_j

Positive Definite Matrix

A matrix A is positive definite if

  [ x1 x2 … xn ] A [ x1 x2 … xn ]^T = x^T A x > 0

for every nonzero vector x.

* Also, all eigenvalues of the matrix are positive
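Both characterizations can be checked numerically. Below is a small NumPy sketch (the matrix and the random-sampling check are illustrative, not from the slides) that tests x^T A x > 0 on sample vectors and inspects the eigenvalues:

```python
import numpy as np

# A small symmetric matrix to test (illustrative, not from the slides).
A = np.array([[3.0, 2.0],
              [2.0, 6.0]])

# Definition-based check: x^T A x > 0 for a sample of nonzero vectors x.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 2))
print(np.all(np.einsum('ij,jk,ik->i', X, A, X) > 0))   # True

# Equivalent check for a symmetric matrix: all eigenvalues are positive.
print(np.all(np.linalg.eigvalsh(A) > 0))               # True
```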

Quadratic form

• An expression of the form

  f(x) = (1/2) x^T A x − b^T x + c

Why do we care?

The gradient of the quadratic form is our original system if A is symmetric:

  f'(x) = (1/2) A^T x + (1/2) A x − b = Ax − b   (when A^T = A)

so setting the gradient to zero is the same as solving Ax = b.
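To make the connection concrete, here is a small NumPy sketch (the matrix, vector, and finite-difference check are illustrative assumptions, not from the slides) that verifies the gradient of f really is Ax − b for a symmetric A:

```python
import numpy as np

# Illustrative values; the quadratic form itself is the one on the slide.
A = np.array([[3.0, 2.0],
              [2.0, 6.0]])    # symmetric
b = np.array([2.0, -8.0])
c = 0.0

def f(x):
    """Quadratic form f(x) = 1/2 x^T A x - b^T x + c."""
    return 0.5 * x @ A @ x - b @ x + c

def grad_f(x):
    """Analytic gradient for symmetric A: f'(x) = A x - b."""
    return A @ x - b

# Finite-difference check that the analytic gradient matches f.
x0 = np.array([1.0, 1.0])
eps = 1e-6
numeric = np.array([(f(x0 + eps * e) - f(x0 - eps * e)) / (2 * eps)
                    for e in np.eye(2)])
print(np.allclose(numeric, grad_f(x0)))   # True
```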


Visual interpretation

Example Problem:

A = [ 3  2 ]   b = [  2 ]   c = 0
    [ 2  6 ]       [ −8 ]
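Assuming those example values (they match the sample problem in Shewchuk's paper), a direct solve shows the minimizer of f:

```python
import numpy as np

# The assumed example problem: a 2x2 symmetric positive definite system.
A = np.array([[3.0, 2.0],
              [2.0, 6.0]])
b = np.array([2.0, -8.0])

# The minimum of f is the solution of Ax = b.
x = np.linalg.solve(A, b)
print(x)   # [ 2. -2.]
```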

Visual representation

[Figures: the graph of the quadratic form f(x), its contour plot, and the gradient field f'(x)]

Solution

• If A is symmetric, the solution x of the system Ax = b is a critical point of f (the gradient Ax − b vanishes there).

• And since A is also positive definite, that critical point x is the global minimum of f.

Definitions

• Error: e(i) = x(i) − x, how far the current iterate is from the solution.

• Residual: r(i) = b − A x(i) = −A e(i) = −f'(x(i)), how far we are from the correct value of b.

Whenever you read ‘residual’, think ‘the direction of steepest descent’.

Method of steepest descent

• Start with an arbitrary point x(0)

• Move in the direction opposite the gradient of f, which is the residual r(0)

• Reach the minimum along that direction at distance alpha

• Repeat

Steepest descent, mathematically

  r(i) = b − A x(i)
  α(i) = ( r(i)^T r(i) ) / ( r(i)^T A r(i) )
  x(i+1) = x(i) + α(i) r(i)

The new residual can be recomputed directly, r(i+1) = b − A x(i+1)

- OR -

updated recursively, r(i+1) = r(i) − α(i) A r(i), which saves one matrix-vector product per iteration.
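Written as code, the iteration above looks roughly like this (NumPy; the tolerance, iteration cap, and starting point are illustrative choices, not from the slides):

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=10_000):
    """Minimize f(x) = 1/2 x^T A x - b^T x for SPD A by steepest descent.

    Tolerance and iteration cap are illustrative choices, not from the slides.
    """
    x = x0.astype(float)
    r = b - A @ x                   # residual = direction of steepest descent
    for i in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r
        alpha = (r @ r) / (r @ Ar)  # exact line search: distance to bottom of the parabola
        x = x + alpha * r           # slide down to the bottom of the parabola
        r = r - alpha * Ar          # recursive residual update
    return x, i

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
x, iters = steepest_descent(A, b, np.zeros(2))
print(x, iters)                     # converges to [ 2. -2.]
```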


Steepest descent, graphically

Eigenvectors

A nonzero vector v is an eigenvector of A with eigenvalue λ if Av = λv: multiplying by A only scales v, it does not change its direction.

Steepest descent does well:

Steepest descent converges in one iteration if the error term is an eigenvector.

Steepest descent converges in one iteration if all the eigenvalues are equal.

Steepest descent does poorly

If the error term is a mix of eigenvectors with large and small eigenvalues, steepest descent zig-zags back and forth toward the solution and takes many iterations to converge.

The worst-case convergence is governed by the ratio of the largest and smallest eigenvalues of A, called the “condition number”:

  κ = λmax / λmin

Convergence of steepest descent:

  ||e(i)||_A ≤ ( (κ − 1) / (κ + 1) )^i ||e(0)||_A

where i is the number of iterations and ||e||_A = sqrt( e^T A e ) is the “energy norm”; ||e(i)||_A is the energy norm at iteration i and ||e(0)||_A is the energy norm at iteration 0.

How can we speed up or guarantee convergence?

• Use the eigenvectors as search directions: each step then eliminates one component of the error exactly.

• Terminates in n iterations.

Method of conjugate directions

• Instead of eigenvectors, which are too hard to compute, use directions that are “conjugate” or “A-orthogonal”:

  d(i)^T A d(j) = 0   for i ≠ j

Method of conjugate directions

  α(i) = ( d(i)^T r(i) ) / ( d(i)^T A d(i) )
  x(i+1) = x(i) + α(i) d(i)

How to find conjugate directions?

• Gram-Schmidt Conjugation: start with n linearly independent vectors u(0) … u(n−1).

• For each vector, subtract the components that are not A-orthogonal to the previously processed directions (see the sketch below):

  d(i) = u(i) + Σ_{k<i} β_ik d(k),   where β_ik = − ( u(i)^T A d(k) ) / ( d(k)^T A d(k) )
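A minimal sketch of that conjugation step, assuming NumPy and the notation above (the function name and the identity-matrix test input are illustrative):

```python
import numpy as np

def gram_schmidt_conjugate(U, A):
    """Turn linearly independent vectors (rows of U) into A-orthogonal directions.

    A deliberately simple illustration of the formula above; doing this for all n
    vectors is expensive and requires storing every direction, which is exactly
    the problem the conjugate gradient method avoids.
    """
    n = U.shape[0]
    D = np.zeros_like(U, dtype=float)
    for i in range(n):
        d = U[i].astype(float)
        for k in range(i):
            Adk = A @ D[k]
            beta_ik = -(U[i] @ Adk) / (D[k] @ Adk)   # β_ik from the slide
            d = d + beta_ik * D[k]
        D[i] = d
    return D

# Quick check on the example matrix: D A D^T should be (numerically) diagonal.
A = np.array([[3.0, 2.0], [2.0, 6.0]])
D = gram_schmidt_conjugate(np.eye(2), A)
print(np.round(D @ A @ D.T, 10))
```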


Problem

• Gram-Schmidt conjugation is slow and we have to store all of the vectors we have created.

Conjugate Gradient Method

• Apply the method of conjugate directions, but use the residuals for the u values:

  u(i) = r(i)

How does this help us?

• It turns out that the residual r(i) is A-orthogonal to all of the previous residuals except r(i−1), so we only need to make it A-orthogonal to r(i−1), and we are set.

Simplifying further

Only the k = i−1 term of the Gram-Schmidt sum survives, and it simplifies to

  β(i) = ( r(i)^T r(i) ) / ( r(i−1)^T r(i−1) )

Putting it all together

  d(0) = r(0) = b − A x(0)                         Start with steepest descent

  α(i) = ( r(i)^T r(i) ) / ( d(i)^T A d(i) )       Compute distance to bottom of parabola

  x(i+1) = x(i) + α(i) d(i)                        Slide down to bottom of parabola

  r(i+1) = r(i) − α(i) A d(i)                      Compute steepest descent at next location

  β(i+1) = ( r(i+1)^T r(i+1) ) / ( r(i)^T r(i) )
  d(i+1) = r(i+1) + β(i+1) d(i)                    Remove part of vector that is not A-orthogonal to d(i)
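Collected into code, the annotated steps give the sketch below (NumPy; the stopping tolerance and default starting point are illustrative choices rather than anything specified on the slides):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Solve Ax = b for symmetric positive definite A, following the steps above.

    The tolerance and iteration cap are illustrative defaults, not from the slides.
    """
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float)
    max_iter = n if max_iter is None else max_iter

    r = b - A @ x                    # start with steepest descent: d(0) = r(0)
    d = r.copy()
    rr = r @ r
    for _ in range(max_iter):
        if np.sqrt(rr) < tol:        # stop when the residual norm is small enough
            break
        Ad = A @ d
        alpha = rr / (d @ Ad)        # distance to the bottom of the parabola
        x = x + alpha * d            # slide down to the bottom of the parabola
        r = r - alpha * Ad           # steepest descent direction at the next location
        rr_new = r @ r
        beta = rr_new / rr           # only the previous direction needs conjugating
        d = r + beta * d             # remove the part not A-orthogonal to d(i)
        rr = rr_new
    return x

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
print(conjugate_gradient(A, b))      # [ 2. -2.]
```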


Starting and stopping

• Start either with a rough estimate of the solution, or the zero vector.

• Stop when the norm of the residual is small enough.

Benefit over steepest descent

In exact arithmetic CG finds the solution in at most n iterations, and its worst-case energy-norm bound depends on √κ rather than κ:

  ||e(i)||_A ≤ 2 ( (√κ − 1) / (√κ + 1) )^i ||e(0)||_A

Preconditioning

Solve M^-1 A x = M^-1 b instead of Ax = b, where M approximates A but is easy to invert. If M^-1 A has a smaller condition number (or better clustered eigenvalues) than A, CG converges faster.

Diagonal preconditioning

• Just use the diagonal of A as M. A diagonal matrix is trivial to invert, but of course it isn’t the best preconditioner out there.
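Here is one way the diagonal (Jacobi) preconditioner can be plugged into the CG loop; this follows the standard preconditioned-CG recurrences rather than anything shown on the slide, and the defaults are illustrative:

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=None):
    """Preconditioned CG with M = diag(A) (Jacobi preconditioning).

    Standard preconditioned-CG recurrences; tolerance and defaults are illustrative.
    """
    n = b.shape[0]
    max_iter = n if max_iter is None else max_iter
    M_inv = 1.0 / np.diag(A)         # applying M^-1 is just elementwise division

    x = np.zeros(n)
    r = b - A @ x
    z = M_inv * r                    # preconditioned residual z = M^-1 r
    d = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ad = A @ d
        alpha = rz / (d @ Ad)
        x = x + alpha * d
        r = r - alpha * Ad
        z = M_inv * r
        rz_new = r @ z
        d = z + (rz_new / rz) * d
        rz = rz_new
    return x

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
print(jacobi_pcg(A, b))              # [ 2. -2.]
```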

CG on the normal equations

If A is not symmetric, not positive-definite, or not square, we can’t use CG directly to solve Ax = b.

However, we can use it to solve the normal equations A^T A x = A^T b: the matrix A^T A is always symmetric, positive definite (when A has full column rank), and square.

The problem we solve this way is the least-squares fit min ||Ax − b||^2, but the condition number is squared (κ(A^T A) = κ(A)^2), so convergence is slower.

Also note that we never actually have to form A^T A. Instead we multiply by A and then by A^T: A^T A x = A^T (A x).
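A sketch of that idea (often called CGNR): plain CG applied to A^T A x = A^T b, where A^T A is applied as two matrix-vector products and never formed. The function name, defaults, and the small least-squares example are illustrative:

```python
import numpy as np

def cg_normal_equations(A, b, tol=1e-10, max_iter=None):
    """Least-squares solve of min ||Ax - b||^2 via CG on A^T A x = A^T b.

    A^T A is never formed: each iteration multiplies by A and then by A^T.
    Defaults are illustrative.
    """
    n = A.shape[1]
    max_iter = n if max_iter is None else max_iter

    x = np.zeros(n)
    r = A.T @ (b - A @ x)            # residual of the normal equations
    d = r.copy()
    rr = r @ r
    for _ in range(max_iter):
        if np.sqrt(rr) < tol:
            break
        Ad = A @ d                   # multiply by A ...
        AtAd = A.T @ Ad              # ... and then by A^T
        alpha = rr / (d @ AtAd)
        x = x + alpha * d
        r = r - alpha * AtAd
        rr_new = r @ r
        d = r + (rr_new / rr) * d
        rr = rr_new
    return x

# Small overdetermined example (illustrative): least-squares line fit.
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 2.0, 2.0])
print(cg_normal_equations(A, b))     # matches np.linalg.lstsq(A, b, rcond=None)[0]
```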
