Transcript
Page 1: Chapter 5


Iterative Methods for Solution of Systems of Linear Equations

By Erika Villarreal

Page 2: Chapter 5


1. Jacobi Method

The Jacobi method is an algorithm for determining the solutions of a system of linear equations in which each diagonal element dominates, in absolute value, the off-diagonal elements of its row (a diagonally dominant system). Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization.

Given a square system of n linear equations

Ax = b

where A = [a_ij] is an n×n matrix, x is the vector of unknowns and b is the right-hand-side vector, A can be decomposed into a diagonal component D and the remainder R:

A = D + R
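
Written out element-wise (a standard reconstruction of this decomposition, not copied from the slide), it reads:

A = D + R, \qquad
D = \begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{pmatrix}, \qquad
R = \begin{pmatrix} 0 & a_{12} & \cdots & a_{1n} \\ a_{21} & 0 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & 0 \end{pmatrix}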

Page 3: Chapter 5


1. Jacobi Method

The system of linear equations may be rewritten as:

Dx = b − Rx

and finally:

x = D^{-1}(b − Rx)

The Jacobi method is an iterative technique that solves the left-hand side of this expression for x, using the previous value of x on the right-hand side. Analytically, this may be written as:

x^(k+1) = D^{-1}(b − R x^(k))

The element-based formula is thus:

x_i^(k+1) = (b_i − Σ_{j≠i} a_ij x_j^(k)) / a_ii,   i = 1, 2, …, n

Note that the computation of x_i^(k+1) requires every element of x^(k) except x_i^(k) itself.

Unlike the Gauss–Seidel method, we can't overwrite x_i^(k) with x_i^(k+1), as that value will be needed by the rest of the computation. This is the most meaningful difference between the Jacobi and Gauss–Seidel methods, and is the reason why the former can be implemented as a parallel algorithm, unlike the latter. The minimum amount of storage is two vectors of size n.
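
A minimal sketch of this element-based iteration (the function name, tolerance, and stopping test are illustrative choices, not taken from the slides) could look like this:

import numpy as np

def jacobi(A, b, x0, tol=1e-10, max_iter=100):
    # Iterates x_i^(k+1) = (b_i - sum_{j != i} a_ij * x_j^(k)) / a_ii.
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.array(x0, dtype=float)
    n = len(b)
    for _ in range(max_iter):
        x_new = np.empty_like(x)              # second vector of size n
        for i in range(n):
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x_new - x) < tol:   # change below tolerance: stop
            return x_new
        x = x_new                             # whole vector replaced only after the sweep
    return x

The two vectors x and x_new are exactly the "two vectors of size n" mentioned above, and the inner loop over i is the part that could run in parallel.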

Page 4: Chapter 5


1. Jacobi Method

Example

A linear system of the form Ax = b, with an initial estimate x^(0), is given.

We use the equation x^(k+1) = D^{-1}(b − R x^(k)), described above, to estimate x. First, we rewrite the equation in a more convenient form, D^{-1}(b − R x^(k)) = T x^(k) + C, where T = −D^{-1} R and C = D^{-1} b. Note that R = L + U, where L and U are the strictly lower and strictly upper triangular parts of A. From the known values we determine T = −D^{-1}(L + U).

Further, C is found as C = D^{-1} b.

Page 5: Chapter 5


1. Jacobi Method

Example

With T and C calculated, we estimate x as x^(1) = T x^(0) + C.

The next iteration yields x^(2) = T x^(1) + C.

This process is repeated until convergence (i.e., until the residual ||A x^(k) − b|| is small); in this example the iterates settle on the solution after 25 iterations.
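
A rough illustration of this matrix form x^(k+1) = T x^(k) + C, using a made-up diagonally dominant 2×2 system rather than the one shown on the slide, might look like:

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 5.0]])          # hypothetical diagonally dominant matrix
b = np.array([9.0, 12.0])           # hypothetical right-hand side

D = np.diag(np.diag(A))             # diagonal component D
R = A - D                           # remainder R = L + U
T = -np.linalg.inv(D) @ R           # T = -D^{-1} R
C = np.linalg.inv(D) @ b            # C = D^{-1} b

x = np.zeros(2)                     # initial estimate x^(0)
for k in range(25):                 # 25 iterations, as in the example above
    x = T @ x + C                   # x^(k+1) = T x^(k) + C

print(x)                            # agrees with np.linalg.solve(A, b) to high accuracy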

Page 6: Chapter 5


1. Jacobi Method

Example

Page 7: Chapter 5


2. Gauss–Seidel method

Given a square system of n linear equations with unknown x:

Ax = b

where:

A = [a_ij], x = (x_1, …, x_n)^T, b = (b_1, …, b_n)^T

Then A can be decomposed into a lower triangular component L*, and a strictly upper triangular component U:

A = L* + U

The system of linear equations may be rewritten as:

L* x = b − U x
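
Element-wise, this decomposition can be written out as follows (standard notation, reconstructed rather than copied from the slide):

A = L_* + U, \qquad
L_* = \begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}, \qquad
U = \begin{pmatrix} 0 & a_{12} & \cdots & a_{1n} \\ 0 & 0 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}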

Page 8: Chapter 5


2. Gauss–Seidel method

The Gauss–Seidel method is an iterative technique that solves the left-hand side of this expression for x, using the previous value of x on the right-hand side. Analytically, this may be written as:

x^(k+1) = L*^{-1}(b − U x^(k))

However, by taking advantage of the triangular form of L*, the elements of x^(k+1) can be computed sequentially using forward substitution:

x_i^(k+1) = (b_i − Σ_{j<i} a_ij x_j^(k+1) − Σ_{j>i} a_ij x_j^(k)) / a_ii,   i = 1, 2, …, n

Note that the computation of x_i^(k+1) uses the elements of x^(k+1) that have already been computed (j < i) and only those elements of x^(k) with j > i; x_i^(k) itself is not needed.

The procedure is generally continued until the changes made by an iteration are below some tolerance.
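
A minimal sketch of this forward-substitution sweep (the function name and tolerance are illustrative, not taken from the slides) is:

import numpy as np

def gauss_seidel(A, b, x0, tol=1e-10, max_iter=100):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.array(x0, dtype=float)   # updated in place: a single vector of storage suffices
    n = len(b)
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(n):
            # entries j < i already hold x^(k+1); entries j > i still hold x^(k)
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            xi_new = (b[i] - s) / A[i, i]
            max_change = max(max_change, abs(xi_new - x[i]))
            x[i] = xi_new           # overwrite immediately, unlike the Jacobi method
        if max_change < tol:        # changes made by this iteration are below tolerance
            break
    return x

Because each x[i] is overwritten as soon as it is computed, the sweep is inherently sequential, which is why this method does not parallelize the way the Jacobi method does.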

Page 9: Chapter 5


2. Gauss–Seidel method

Example

A linear system of the form Ax = b is given.

We want to use the iteration x^(k+1) = T x^(k) + C, where T = −L*^{-1} U and C = L*^{-1} b. We must therefore decompose A into the sum of a lower triangular component L* and a strictly upper triangular component U, and compute the inverse of L*.

Page 10: Chapter 5


2. Gauss–Seidel method

Example

Now we can find T = −L*^{-1} U and C = L*^{-1} b.

With T and C in hand, we can use them to obtain the vectors x^(k) iteratively. First of all, we have to choose x^(0): we can only guess. The better the guess, the faster the algorithm converges. We suppose an initial guess x^(0).

Page 11: Chapter 5


2. Gauss–Seidel method

Example

We can then calculate the iterates x^(1), x^(2), … from x^(k+1) = T x^(k) + C.

As expected, the algorithm converges to the exact solution x = A^{-1} b. In fact, the matrix A is diagonally dominant (but not positive definite).
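
The whole matrix-form procedure can be sketched as follows, again on a made-up diagonally dominant system rather than the slide's example:

import numpy as np

A = np.array([[10.0, -2.0],
              [ 3.0,  8.0]])        # hypothetical diagonally dominant matrix
b = np.array([6.0, 11.0])           # hypothetical right-hand side

L_star = np.tril(A)                 # lower triangular component L* (includes the diagonal)
U = np.triu(A, k=1)                 # strictly upper triangular component U
T = -np.linalg.inv(L_star) @ U      # T = -L*^{-1} U
C = np.linalg.inv(L_star) @ b       # C = L*^{-1} b

x = np.zeros(2)                     # initial guess x^(0) = 0: we can only guess
for k in range(10):
    x = T @ x + C                   # x^(k+1) = T x^(k) + C

print(x)                            # essentially equal to np.linalg.solve(A, b)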

Page 12: Chapter 5


BIBLIOGRAPHY

•This article incorporates text from the article Jacobi_method on CFD-Wiki, which is under the GFDL license.

•Black, Noel; Moore, Shirley; and Weisstein, Eric W., "Jacobi Method" from MathWorld.

•Jacobi Method from www.math-linux.com

•Module for Jacobi and Gauss–Seidel Iteration

•Numerical matrix inversion

•Gauss–Seidel from www.math-linux.com

•Module for Gauss–Seidel Iteration

•Gauss–Seidel from Holistic Numerical Methods Institute

•Gauss–Seidel Iteration from www.geocities.com

•The Gauss–Seidel Method