Section 5: Linear Systems and Matrices

Washkewicz College of Engineering


Solution Methods – System of Linear Equations

Earlier we saw that a generic system of n equations in n unknowns could be represented in the following matrix format

$$
\begin{aligned}
b_1 &= a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\
b_2 &= a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \\
&\;\;\vdots \\
b_n &= a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n
\end{aligned}
$$

$$
\{B\} = [A]\{X\}
\qquad
\begin{Bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{Bmatrix}
=
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix}
\begin{Bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{Bmatrix}
$$

The elements of the square [A] matrix and the {b} vector will be known, and our goal is finding the elements of the vector {x}.


Finding the elements of the {x} vector can be accomplished using approaches from an extensive library of methods that are quite diverse. All methods seek to solve a linear system of equations that can be expressed in a matrix format as

$$[A]\{x\} = \{b\}$$

for the vector {x}. If we could simply "divide" this expression by the matrix [A], i.e.,

$$\{x\} = [A]^{-1}\{b\}$$

then we could easily formulate the vector {x}. As we will see, this task is labor intensive. The methods used to accomplish this can be broadly grouped into the following two categories:

1. direct methods and
2. iterative methods

Each group contains a number of methods and we will look at several in each category. Keep in mind that hybrid methods exist that combine methods from the two categories.
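As a point of reference, the sketch below solves a small made-up 3 x 3 system with NumPy. The call np.linalg.solve performs an elimination internally; forming [A]⁻¹ explicitly, as in the "division" above, is also possible but is more work and generally less accurate. The numbers are illustrative only.

```python
import numpy as np

# Illustrative 3x3 system [A]{x} = {b}; the entries are arbitrary examples.
A = np.array([[4.0, -2.0, 1.0],
              [3.0,  6.0, -4.0],
              [2.0,  1.0,  8.0]])
b = np.array([12.0, -25.0, 32.0])

# Preferred: solve the system directly (an elimination-style method internally).
x = np.linalg.solve(A, b)

# Conceptually the same as "dividing" by [A], i.e. x = A^{-1} b,
# but forming the inverse explicitly is more costly and less accurate.
x_via_inverse = np.linalg.inv(A) @ b

print(x, x_via_inverse)
```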


Basic Definitions

In scalar algebra we easily make use of the concept of zero and one as follows:

$$\alpha \cdot 1 = 1 \cdot \alpha = \alpha \qquad\qquad \alpha \cdot 0 = 0 \cdot \alpha = 0$$

where α is a scalar quantity. A scalar certainly possesses a reciprocal, or multiplicative inverse, that when applied to the scalar quantity produces one:

$$\alpha^{-1}\alpha = \alpha\,\alpha^{-1} = 1$$

The above can be extended to n x n matrices. Here the scalar one (1) becomes the identity matrix [I], and zero becomes the null matrix [0], i.e.,

$$[A][I] = [I][A] = [A] \qquad\qquad [A][0] = [0][A] = [0]$$


At this point we note that if there is an n x n matrix [A]⁻¹ that pre- and post-multiplies the matrix [A] such that

$$[A]^{-1}[A] = [A][A]^{-1} = [I]$$

then the matrix [A]⁻¹ is termed the inverse of the matrix [A] with respect to matrix multiplication. The matrix [A] is said to be invertible, or non-singular, if [A]⁻¹ exists, and non-invertible, or singular, if [A]⁻¹ does not exist.

The concept of matrix inversion is important in the study of structural analysis with matrix methods. We will study this topic in detail several times, and refer to it often throughout the course.

We will formally define the inverse of a matrix through the use of the determinant of the matrix and its adjoint matrix. We will do that in a formal manner after revisiting properties of the determinants and cofactors of a matrix.

However, there are a number of methods that enable one to find the solution without finding the inverse of the matrix. Probably the best known of these is Cramer's Rule, followed by Gaussian elimination and the Gauss-Jordan method.


Cramer’s Rule – Three Equations and Three Unknowns

It is unfortunate that usually the method for the solution of linear equations that students remember from secondary education is Cramer's rule, which is really an expansion by minors (a topic discussed subsequently). This method is rather inefficient and relatively difficult to program. However, as it forms sort of a standard by which other methods can be judged, we will review it here for a system of three equations and three unknowns. The more general formulation is inductive.

Consider the following system of three equations in terms of three unknowns {x1, x2, x3}

$$
\begin{Bmatrix} b_1 \\ b_2 \\ b_3 \end{Bmatrix}
=
\begin{bmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{bmatrix}
\begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix}
$$

where we identify

$$
|A| =
\begin{vmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{vmatrix}
$$


and

$$
|A|_1 =
\begin{vmatrix}
b_1 & a_{12} & a_{13} \\
b_2 & a_{22} & a_{23} \\
b_3 & a_{32} & a_{33}
\end{vmatrix}
\qquad
|A|_2 =
\begin{vmatrix}
a_{11} & b_1 & a_{13} \\
a_{21} & b_2 & a_{23} \\
a_{31} & b_3 & a_{33}
\end{vmatrix}
\qquad
|A|_3 =
\begin{vmatrix}
a_{11} & a_{12} & b_1 \\
a_{21} & a_{22} & b_2 \\
a_{31} & a_{32} & b_3
\end{vmatrix}
$$

The solution is formulated as follows

$$
x_1 = \frac{|A|_1}{|A|} =
\frac{\begin{vmatrix}
b_1 & a_{12} & a_{13} \\
b_2 & a_{22} & a_{23} \\
b_3 & a_{32} & a_{33}
\end{vmatrix}}
{\begin{vmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{vmatrix}}
\qquad\qquad
x_2 = \frac{|A|_2}{|A|} =
\frac{\begin{vmatrix}
a_{11} & b_1 & a_{13} \\
a_{21} & b_2 & a_{23} \\
a_{31} & b_3 & a_{33}
\end{vmatrix}}
{\begin{vmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{vmatrix}}
$$


and

$$
x_3 = \frac{|A|_3}{|A|} =
\frac{\begin{vmatrix}
a_{11} & a_{12} & b_1 \\
a_{21} & a_{22} & b_2 \\
a_{31} & a_{32} & b_3
\end{vmatrix}}
{\begin{vmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{vmatrix}}
$$

Proof follows from the solution of a system of two equations and two unknowns.

For a system of n equations with n unknowns this solution method requires evaluating the determinant of the matrix [A] as well as augmented matrices (see above and the previous page) where the jth column has been replaced by the elements of the vector {B}. Evaluation of the determinant of an n × n matrix requires about 3n² operations, and this must be repeated for each unknown. Thus solution by Cramer's rule will require at least 3n³ operations.
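The sketch below is a minimal illustration of Cramer's rule for the 3 x 3 case using NumPy's determinant routine. The column replacement mirrors the formulas above; the function name and the test system are invented for the example.

```python
import numpy as np

def cramer_3x3(A, b):
    """Solve a 3x3 system by Cramer's rule: x_j = |A_j| / |A|,
    where A_j is A with its j-th column replaced by {b}."""
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("Matrix is singular; Cramer's rule does not apply.")
    x = np.empty(3)
    for j in range(3):
        Aj = A.copy()
        Aj[:, j] = b          # replace the j-th column with {b}
        x[j] = np.linalg.det(Aj) / det_A
    return x

A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [1.0, 0.0,  0.0]])
b = np.array([8.0, 13.0, 1.0])
print(cramer_3x3(A, b))   # should agree with np.linalg.solve(A, b)
```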


Gaussian Elimination

Let us consider a simpler algorithm, which forms the basis for one of the most reliable and stable direct methods for the solution of linear equations. It also provides a method for the inversion of matrices. Let us begin by describing the method and then trying to understand why it works.

Consider representing the set of linear equations as

$$
\left(
\begin{array}{cccc|c}
a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\
a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} & b_n
\end{array}
\right)
$$

Here we have suppressed the presence of the elements of the solution vector {x}, and parentheses are used in lieu of brackets and braces so as not to infer matrix multiplication in this expression. We will refer to the above as an "augmented matrix."


Now we perform a series of operations on the rows and columns of the coefficient matrix [A], and we shall carry through the row operations to include the elements of the constant vector {B}. The rows are treated as if they were the equations, so that anything done to one element is done to all. Start by dividing each row, including the vector {B}, by the lead element in the row, initially a11. The first row is then multiplied by an appropriate constant and subtracted from all the lower rows. Thus all rows but the first will have zero in the first column. That row should have a one (1) in the first column.

This is repeated for each succeeding row. The second row is divided by the second element, producing a one in the second column. This row is multiplied by appropriate constants and subtracted from the lower rows, producing zeroes in the second column. This process is repeated until the following matrix is obtained

$$
\left(
\begin{array}{ccccc|c}
1 & a'_{12} & a'_{13} & \cdots & a'_{1n} & b'_1 \\
0 & 1 & a'_{23} & \cdots & a'_{2n} & b'_2 \\
\vdots & & \ddots & & \vdots & \vdots \\
0 & 0 & \cdots & 1 & a'_{n-1,n} & b'_{n-1} \\
0 & 0 & \cdots & 0 & 1 & b'_n
\end{array}
\right)
$$


When the diagonal coefficients are all unity, the last term of the vector {b′} contains the value of xn, i.e.,

$$x_n = b'_n$$

This can be used in the (n − 1)th equation, represented by the second to the last line, to obtain x(n−1), and so on right up to the first line, which will yield the value of x1:

$$x_i = b'_i - \sum_{j=i+1}^{n} a'_{ij}\,x_j$$
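A compact sketch of the procedure just described (forward elimination that normalizes each pivot row, followed by back substitution) is shown below. It assumes no pivot becomes zero and performs none of the row interchanges a production routine would use; the function name and test system are mine.

```python
import numpy as np

def gauss_eliminate(A, b):
    """Naive Gaussian elimination with back substitution.
    Assumes all pivots are nonzero (no row interchanges)."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)

    # Forward elimination: scale each pivot row to 1, zero the column below it.
    for k in range(n):
        pivot = A[k, k]
        A[k, k:] /= pivot
        b[k] /= pivot
        for i in range(k + 1, n):
            factor = A[i, k]
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]

    # Back substitution: x_n = b'_n, then x_i = b'_i - sum(a'_ij x_j).
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = b[i] - A[i, i + 1:] @ x[i + 1:]
    return x

A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gauss_eliminate(A, b))   # expected: [2., 3., -1.]
```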


Gauss-Jordan Elimination

A simple modification to the Gauss elimination method allows us to obtain the inverse of the matrix [A] as well as the solution vector {x}. Consider representing the set of linear equations as

$$
\left(
\begin{array}{cccc|c|cccc}
a_{11} & a_{12} & \cdots & a_{1n} & b_1 & 1 & 0 & \cdots & 0 \\
a_{21} & a_{22} & \cdots & a_{2n} & b_2 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} & b_n & 0 & 0 & \cdots & 1
\end{array}
\right)
$$

Now the unit matrix [I] is included in the augmented matrix. The procedure is carried out as before, the Gauss elimination method producing zeros in the columns below and to the left of the diagonal element. However, the same row operations are conducted on the unit matrix as well.

At the end of the procedure we have both solved the system of equations and found the inverse of the original matrix.
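Below is a sketch of this idea, under the assumption that the reduction is carried all the way to the identity (eliminating above as well as below each pivot, the usual Gauss-Jordan form), so that [A | b | I] becomes [I | x | A⁻¹]. Names and the small test matrix are illustrative.

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce the augmented matrix [A | b | I] to [I | x | A^-1].
    A minimal sketch: no pivoting, assumes nonzero pivots."""
    A = A.astype(float)
    n = A.shape[0]
    aug = np.hstack([A, b.reshape(n, 1).astype(float), np.eye(n)])

    for k in range(n):
        aug[k] /= aug[k, k]                   # scale the pivot row to get a 1
        for i in range(n):
            if i != k:
                aug[i] -= aug[i, k] * aug[k]  # zero out the rest of column k

    x = aug[:, n]            # solution vector
    A_inv = aug[:, n + 1:]   # inverse of the original matrix
    return x, A_inv

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([10.0, 12.0])
x, A_inv = gauss_jordan(A, b)
print(x)                     # expected [1., 2.]
print(A_inv @ A)             # should be (close to) the identity
```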


Example 5.1



Example 5.2



The Determinant of a Square Matrix

A square matrix of order n (an n x n matrix), i.e.,

$$
[A] =
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix}
$$

possesses a uniquely defined scalar that is designated as the determinant of the matrix, or merely the determinant

$$\det[A] = |A|$$

Observe that only square matrices possess determinants.


Vertical lines, and not brackets, designate a determinant, and while det[A] is a number and has no elements, it is customary to represent it as an array of elements of the matrix

$$
\det[A] =
\begin{vmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{vmatrix}
$$

A general procedure for finding the value of a determinant is sometimes called "expansion by minors." We will discuss this method after going over some ground rules for operating with determinants.


Rules for Operating with Determinants

Rules pertaining to the manipulation of determinants are presented in this section without formal proof. Their validity is demonstrated through examples presented at the end of the section.

Rule #1: Interchanging any row (or column) of a determinant with its immediately adjacent row (or column) flips the sign of the determinant.

Rule #2: The multiplication of any single row (column) of a determinant by a scalar constant is equivalent to the multiplication of the determinant by the scalar.

Rule #3: If any two rows (columns) of a determinant are identical, the value of the determinant is zero and the matrix from which the determinant is derived is said to be singular.

Rule #4: If any row (column) of a determinant contains nothing but zeroes, then the matrix from which the determinant is derived is singular.

Rule #5: If any two rows (two columns) of a determinant are proportional, i.e., the two rows (two columns) are linearly dependent, then the determinant is zero and the matrix from which the determinant is derived is singular.


Rule #6: If the elements of any row (column) of a determinant are added to or subtracted from the corresponding elements of another row (column), the value of the determinant is unchanged.

Rule #6a: If the elements of any row (column) of a determinant are multiplied by a constant and then added to or subtracted from the corresponding elements of another row (column), the value of the determinant is unchanged.

Rule #7: The value of the determinant of a diagonal matrix is equal to the product of the terms on the diagonal.

Rule #8: The value of the determinant of a matrix is equal to the value of the determinant of the transpose of the matrix.

Rule #9: The determinant of the product of two matrices is equal to the product of the determinants of the two matrices.

Rule #10: If the determinant of the product of two square matrices is zero, then at least one of the two matrices is singular.

Rule #11: If an m x n rectangular matrix [A] is post-multiplied by an n x m rectangular matrix [B], the resulting square matrix [C] = [A][B] of order m will, in general, be singular if m > n.


Rule #12: A determinant may be evaluated by summing the products of every element in any row or column by the respective cofactor. This is known as Laplace's expansion.

Rule #13: If all cofactors in a row or a column are zero, the determinant is zero and the matrix from which they are derived is singular.

Rule #14: If the elements in a row or a column of a determinant are multiplied by the cofactors of the corresponding elements of a different row or column, the resulting sum of these products is zero.


Example 5.3



Example 5.4



Example 5.5



Minors and Cofactors

Consider the nth order determinant:

$$
\det[A] =
\begin{vmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{vmatrix}
$$

The mth order minor of the nth order matrix is the determinant formed by deleting (n – m) rows and (n – m) columns in the nth order determinant. For example, the minor |M|ir of the determinant |A| is formed by deleting the ith row and the rth column. Because |A| is an nth order determinant, the minor |M|ir is of order m = n – 1 and contains m² elements.

In general, a minor formed by deleting p rows and p columns in the nth order determinant |A| is an (n – p)th order minor. If p = n – 1, the minor is of first order and contains only a single element from |A|.

From this it is easy to see that the determinant |A| contains n² first order minors, each containing a single element.


When dealing with minors other than the (n – 1)th order, the designation of the eliminated rows and columns of the determinant |A| must be considered carefully. It is best to consider consecutive rows j, k, l, m … and consecutive columns r, s, t, u … so that the (n – 1)th, (n – 2)th, and (n – 3)th order minors would be designated, respectively, as |M|j,r, |M|jk,rs and |M|jkl,rst.

The complementary minor, or the complement of the minor, is designated as |N| (with subscripts). This minor is the determinant formed by placing the elements that lie at the intersections of the deleted rows and columns of the original determinant into a square array in the same order that they appear in the original determinant. For example, given the determinant from the previous page, then

$$
|N|_{23} = a_{23}
\qquad\qquad
|N|_{23,31} =
\begin{vmatrix}
a_{21} & a_{23} \\
a_{31} & a_{33}
\end{vmatrix}
$$


The algebraic complement of the minor |M| is the "signed" complementary minor. If a minor is obtained by deleting rows i, k, l and columns r, s, t from the determinant |A|, the minor is designated

$$|M|_{ikl,rst}$$

the complementary minor is designated

$$|N|_{ikl,rst}$$

and the algebraic complement is designated

$$(-1)^{i+k+l+r+s+t}\,|N|_{ikl,rst}$$

The cofactor, designated with capital letters and subscripts, is the signed (n – 1)th order minor formed from the nth order determinant. Suppose that the (n – 1)th order minor is formed by deleting the ith row and jth column from the determinant |A|. Then the corresponding cofactor is

$$A_{ij} = (-1)^{i+j}\,|M|_{ij}$$
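A small sketch of these definitions: the minor |M|ij is the determinant that remains after deleting row i and column j, and the cofactor attaches the sign (−1)^(i+j). The helper names and the sample matrix are mine.

```python
import numpy as np

def minor(A, i, j):
    """|M|_ij: determinant of A with row i and column j deleted (0-based)."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

def cofactor(A, i, j):
    """A_ij = (-1)^(i+j) |M|_ij."""
    return (-1) ** (i + j) * minor(A, i, j)

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [1.0, 0.0, 6.0]])
print(minor(A, 0, 0))     # |4 5; 0 6| = 24
print(cofactor(A, 0, 1))  # -(|0 5; 1 6|) = 5
```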


Observe that the cofactor has no meaning for minors with orders smaller than (n – 1) unless the minor itself is being treated as a determinant of order one less than the determinant |A| from which it was derived.

Also observe that when the minor is of order (n – 1), the product of the cofactor and the complement is equal to the product of the minor and the algebraic complement.

We can assemble the cofactors of a square matrix of order n (an n x n matrix) into a square cofactor matrix, i.e.,

$$
[A]^{C} =
\begin{bmatrix}
A_{11} & A_{12} & \cdots & A_{1n} \\
A_{21} & A_{22} & \cdots & A_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \cdots & A_{nn}
\end{bmatrix}
$$

So when the elements of a matrix are denoted with capital letters, the matrix represents a matrix of cofactors for another matrix.


Example 5.6


Example 5.7


Determinants through Expansion by Minors

Using Rule #12, the determinant for a three by three matrix can be computed via the expansion of the matrix by minors as follows:

$$
\det[A] =
\begin{vmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{vmatrix}
= a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}
- a_{21}\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix}
+ a_{31}\begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix}
$$

This can be confirmed using the classic expansion technique for 3 x 3 determinants. This expression can be rewritten as:

$$
\det[A] =
\begin{vmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{vmatrix}
= a_{11}|M|_{11} - a_{21}|M|_{21} + a_{31}|M|_{31}
$$

or using cofactor notation:

$$
\det[A] = |A| = a_{11}A_{11} + a_{21}A_{21} + a_{31}A_{31}
$$
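The sketch below implements this Laplace expansion recursively down the first column, i.e., det[A] = a11 A11 + a21 A21 + a31 A31 generalized to order n. It is meant only to mirror the formula (the cost grows factorially), and the function name is mine.

```python
import numpy as np

def det_by_cofactors(A):
    """Laplace expansion down the first column:
    det(A) = sum_i a_i1 * (-1)^(i+1) * |M|_i1   (1-based indices)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for i in range(n):
        minor = np.delete(np.delete(A, i, axis=0), 0, axis=1)
        total += (-1) ** i * A[i, 0] * det_by_cofactors(minor)
    return total

A = np.array([[2.0, -1.0, 3.0],
              [4.0,  0.0, 1.0],
              [-2.0, 5.0, 2.0]])
print(det_by_cofactors(A), np.linalg.det(A))  # the two values should agree
```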


Using the Adjoint Matrix to Formulate the Inverse

The adjoint of the matrix [A] is the matrix of transposed cofactors. If we have an nth order matrix [A], this matrix possesses the following matrix of cofactors

$$
[A]^{C} =
\begin{bmatrix}
A_{11} & A_{12} & \cdots & A_{1n} \\
A_{21} & A_{22} & \cdots & A_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \cdots & A_{nn}
\end{bmatrix}
$$

and the adjoint of the matrix is defined as the transpose of the cofactor matrix

$$
\operatorname{adj}[A] = \left([A]^{C}\right)^{T} =
\begin{bmatrix}
A_{11} & A_{21} & \cdots & A_{n1} \\
A_{12} & A_{22} & \cdots & A_{n2} \\
\vdots & \vdots & \ddots & \vdots \\
A_{1n} & A_{2n} & \cdots & A_{nn}
\end{bmatrix}
$$


Suppose this n x n matrix is post-multiplied by its adjoint and the resulting n x n matrix is identified as [P]

$$
[P] = [A]\,\operatorname{adj}[A] =
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix}
\begin{bmatrix}
A_{11} & A_{21} & \cdots & A_{n1} \\
A_{12} & A_{22} & \cdots & A_{n2} \\
\vdots & \vdots & \ddots & \vdots \\
A_{1n} & A_{2n} & \cdots & A_{nn}
\end{bmatrix}
$$

The elements of the matrix [P] are divided into two categories, i.e., elements that lie along the diagonal

$$
\begin{aligned}
p_{11} &= a_{11}A_{11} + a_{12}A_{12} + \cdots + a_{1n}A_{1n} \\
p_{22} &= a_{21}A_{21} + a_{22}A_{22} + \cdots + a_{2n}A_{2n} \\
&\;\;\vdots \\
p_{nn} &= a_{n1}A_{n1} + a_{n2}A_{n2} + \cdots + a_{nn}A_{nn}
\end{aligned}
$$


and those that do not

$$
\begin{aligned}
p_{12} &= a_{11}A_{21} + a_{12}A_{22} + \cdots + a_{1n}A_{2n} \\
p_{13} &= a_{11}A_{31} + a_{12}A_{32} + \cdots + a_{1n}A_{3n} \\
p_{21} &= a_{21}A_{11} + a_{22}A_{12} + \cdots + a_{2n}A_{1n} \\
p_{23} &= a_{21}A_{31} + a_{22}A_{32} + \cdots + a_{2n}A_{3n} \\
p_{31} &= a_{31}A_{11} + a_{32}A_{12} + \cdots + a_{3n}A_{1n} \\
&\;\;\vdots
\end{aligned}
$$

The elements of [P] that lie on the diagonal are all equal to the determinant of [A] (see Rule #12 and recognize the Laplace expansion for each diagonal value). Note that the non-diagonal elements will be equal to zero since they involve the expansion of one row of matrix [A] with the cofactors of an entirely different row (see Rule #14).


Thus

$$
\begin{aligned}
p_{11} &= a_{11}A_{11} + a_{12}A_{12} + \cdots + a_{1n}A_{1n} = |A| \\
p_{22} &= a_{21}A_{21} + a_{22}A_{22} + \cdots + a_{2n}A_{2n} = |A| \\
&\;\;\vdots \\
p_{nn} &= a_{n1}A_{n1} + a_{n2}A_{n2} + \cdots + a_{nn}A_{nn} = |A|
\end{aligned}
$$

and

$$
\begin{aligned}
p_{12} &= a_{11}A_{21} + a_{12}A_{22} + \cdots + a_{1n}A_{2n} = 0 \\
p_{13} &= a_{11}A_{31} + a_{12}A_{32} + \cdots + a_{1n}A_{3n} = 0 \\
p_{21} &= a_{21}A_{11} + a_{22}A_{12} + \cdots + a_{2n}A_{1n} = 0 \\
p_{23} &= a_{21}A_{31} + a_{22}A_{32} + \cdots + a_{2n}A_{3n} = 0 \\
p_{31} &= a_{31}A_{11} + a_{32}A_{12} + \cdots + a_{3n}A_{1n} = 0 \\
&\;\;\vdots
\end{aligned}
$$


which leads to

$$
[P] = [A]\,\operatorname{adj}[A] =
\begin{bmatrix}
|A| & 0 & \cdots & 0 \\
0 & |A| & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & |A|
\end{bmatrix}
= |A|\,[I]
$$

or

$$
[I] = \frac{[A]\,\operatorname{adj}[A]}{|A|}
$$

When this expression is compared to

$$[I] = [A][A]^{-1}$$

then it is evident that

$$[A]^{-1} = \frac{\operatorname{adj}[A]}{|A|}$$

The inverse exists only when the determinant of [A] is not zero, i.e., when [A] is not singular.
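As a check on this result, the sketch below builds the cofactor matrix element by element, transposes it to obtain adj[A], and divides by the determinant. NumPy is used only for the determinants of the small minors; the function names and test matrix are mine.

```python
import numpy as np

def adjoint(A):
    """adj[A]: transpose of the matrix of cofactors A_ij = (-1)^(i+j) |M|_ij."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

def inverse_via_adjoint(A):
    """[A]^-1 = adj[A] / |A|  (fails if [A] is singular)."""
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("Matrix is singular; no inverse exists.")
    return adjoint(A) / det_A

A = np.array([[3.0, 0.0, 2.0],
              [2.0, 0.0, -2.0],
              [0.0, 1.0, 1.0]])
print(inverse_via_adjoint(A) @ A)   # should be (close to) the identity
```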


If we count the computations required in finding an inverse using adjoints and determinants, then this approach is as much of a "brute force" approach as finding the solution of a system of linear equations by Cramer's rule. From a computational standpoint the method is inefficient (but doable) when the matrix is quite large. There are more efficient methods for solving large systems of linear equations that do not involve finding the inverse.

Generally these approaches are divided into the following two categories:

• Direct Elimination (not inversion) Methods (LDU decomposition, Gauss elimination, Cholesky)
• Iterative Methods (Gauss-Seidel, Jacobi)

We will look at methods from both categories.



Example 5.8


Direct Elimination Methods

Elimination methods factor the matrix [A] into products of triangular and diagonal matrices, i.e., the matrix can be expressed as

$$[A] = [L][D][U]$$

where [L] and [U] are lower and upper triangular matrices with all diagonal entries equal to "1". The matrix [D] is a diagonal matrix.

Variations of this decomposition are obtained if the matrix [D] is associated with either the matrix [L] or the matrix [U], i.e.,

$$[A] = [L][U]$$

where [L] and [U] in this last expression are not necessarily the same as the matrices identified in the previous expression.


In an expanded format

$$
[A] =
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix}
=
\begin{bmatrix}
l_{11} & 0 & \cdots & 0 \\
l_{21} & l_{22} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
l_{n1} & l_{n2} & \cdots & l_{nn}
\end{bmatrix}
\begin{bmatrix}
u_{11} & u_{12} & \cdots & u_{1n} \\
0 & u_{22} & \cdots & u_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & u_{nn}
\end{bmatrix}
$$

and using a generalized index notation

$$a_{ij} = \sum_{k=1}^{n} l_{ik}\,u_{kj}$$

The matrices [L] and [U] in this decomposition are not unique. Differences in the many variations of elimination methods are simply differences in how these two matrices are constructed. Consider, for example, i = 4 and j = 3; then for any n x n matrix

$$
a_{43} = l_{41}u_{13} + l_{42}u_{23} + l_{43}u_{33} + l_{44}u_{43} + \cdots + l_{4n}u_{n3}
       = l_{41}u_{13} + l_{42}u_{23} + l_{43}u_{33} + 0 + \cdots + 0
$$


If by definition we stipulate that the diagonal entries of the upper triangular matrix are all equal to "1", i.e.,

$$u_{jj} = 1$$

then, returning to the previous expression,

$$a_{43} = l_{41}u_{13} + l_{42}u_{23} + l_{43}u_{33} + 0 + \cdots + 0 = l_{41}u_{13} + l_{42}u_{23} + l_{43}$$

and in general we can write for i > j

$$a_{ij} = l_{ij}\,u_{jj} + \sum_{k=1}^{j-1} l_{ik}\,u_{kj}$$

Solving for l_ij

$$l_{ij} = a_{ij} - \sum_{k=1}^{j-1} l_{ik}\,u_{kj}, \qquad i = j, \ldots, n$$


In solving a system of linear equations we can now write in matrix notation

$$[A]\{x\} = [L][U]\{x\} = \{b\}$$

If we let

$$[U]\{x\} = \{y\}$$

then

$$[L][U]\{x\} = [L]\{y\} = \{b\}$$

which is an easier computation. Using generalized index notation

$$\sum_{j=1}^{i} l_{ij}\,y_j = b_i$$

For example, for i = 3

$$l_{31}y_1 + l_{32}y_2 + l_{33}y_3 = b_3$$


From this we can rearrange the generalized index formulation as

$$l_{ii}\,y_i = b_i - \sum_{j=1}^{i-1} l_{ij}\,y_j$$

Solving this expression for y_i yields

$$y_i = \frac{b_i - \displaystyle\sum_{j=1}^{i-1} l_{ij}\,y_j}{l_{ii}}, \qquad i = 1, 2, \ldots, n$$

Similarly

$$x_i = \frac{y_i - \displaystyle\sum_{j=i+1}^{n} u_{ij}\,x_j}{u_{ii}}, \qquad i = n, n-1, \ldots, 1$$

The process for solving for the unknown vector quantities {x} can be completed without computing the inverse of [A].
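The two substitution formulas translate directly into a forward solve with [L] and a back solve with [U]. The sketch below assumes the factors have already been found and that no diagonal entry is zero; function names and the small test factors are illustrative.

```python
import numpy as np

def forward_substitution(L, b):
    """Solve [L]{y} = {b}:  y_i = (b_i - sum_{j<i} l_ij y_j) / l_ii."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_substitution(U, y):
    """Solve [U]{x} = {y}:  x_i = (y_i - sum_{j>i} u_ij x_j) / u_ii."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Small illustrative triangular factors (arbitrary numbers).
L = np.array([[2.0, 0.0], [3.0, 1.0]])
U = np.array([[1.0, 0.5], [0.0, 1.0]])
b = np.array([4.0, 7.0])
y = forward_substitution(L, b)
x = back_substitution(U, y)
print(x, (L @ U) @ x)   # (L U) x should reproduce {b}
```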


In general we can write for i < j

$$a_{ij} = l_{ii}\,u_{ij} + \sum_{k=1}^{i-1} l_{ik}\,u_{kj}$$

Solving for u_ij

$$u_{ij} = \frac{a_{ij} - \displaystyle\sum_{k=1}^{i-1} l_{ik}\,u_{kj}}{l_{ii}}, \qquad j = i+1, \ldots, n$$

If at any stage in this algorithm the coefficient a_jj (often referred to as the pivot point) or l_jj becomes zero, the method fails.
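Putting the two recurrences together gives a Crout-style factorization in which [L] carries the diagonal and [U] has ones on its diagonal. The sketch below follows those formulas directly, with no pivoting, so it fails exactly when a pivot l_ii becomes zero; the function name and test matrix are mine.

```python
import numpy as np

def crout_lu(A):
    """Crout decomposition A = L U with u_ii = 1.
    l_ij = a_ij - sum_k l_ik u_kj            (i >= j)
    u_ij = (a_ij - sum_k l_ik u_kj) / l_ii   (i <  j)"""
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)
    for j in range(n):
        for i in range(j, n):            # column j of L
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        if np.isclose(L[j, j], 0.0):
            raise ZeroDivisionError("Zero pivot encountered; the method fails.")
        for k in range(j + 1, n):        # row j of U
            U[j, k] = (A[j, k] - L[j, :j] @ U[:j, k]) / L[j, j]
    return L, U

A = np.array([[4.0, 3.0, 1.0],
              [6.0, 3.0, 2.0],
              [2.0, 1.0, 5.0]])
L, U = crout_lu(A)
print(np.allclose(L @ U, A))   # True: the factors reproduce [A]
```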


Example 5.9


Cholesky's Decomposition – A Direct Elimination Method

In linear algebra, the Cholesky algorithm is a decomposition of a Hermitian, positive-definite matrix (square symmetric matrix) into the product of a lower triangular matrix and an upper triangular matrix that is the conjugate transpose of the lower triangular matrix, or

$$[A] = [L][L]^{T}$$

The approach was derived by André-Louis Cholesky. When applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.

Finding [L] can be loosely thought of as the matrix equivalent of taking the square root of [A].

Note that [A] is a positive definite matrix if for all non-zero vectors {z} the inner product

$$\{z\}^{T}[A]\{z\}$$

is always greater than zero. This is guaranteed if all the eigenvalues of the matrix are positive.


With

$$
[A] = [L][L]^{T} =
\begin{bmatrix}
l_{11} & 0 & \cdots & 0 \\
l_{21} & l_{22} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
l_{n1} & l_{n2} & \cdots & l_{nn}
\end{bmatrix}
\begin{bmatrix}
l_{11} & l_{21} & \cdots & l_{n1} \\
0 & l_{22} & \cdots & l_{n2} \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & l_{nn}
\end{bmatrix}
=
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix}
$$

then by columns (the second number/letter of the subscripts)

$$
l_{11} = \sqrt{a_{11}}, \qquad
l_{21} = \frac{a_{21}}{l_{11}}, \qquad
l_{31} = \frac{a_{31}}{l_{11}}, \qquad \ldots, \qquad
l_{n1} = \frac{a_{n1}}{l_{11}}
$$

$$
l_{22} = \sqrt{a_{22} - l_{21}^{2}}, \qquad
l_{32} = \frac{a_{32} - l_{31}l_{21}}{l_{22}}, \qquad \ldots, \qquad
l_{n2} = \frac{a_{n2} - l_{n1}l_{21}}{l_{22}}
$$

$$
l_{33} = \sqrt{a_{33} - l_{31}^{2} - l_{32}^{2}}, \qquad \ldots
$$


The decomposition of [A] proceeds by forward substitution. As the decomposition is performed, the following recurrence relationships for each successive column (ith index) value in the lower triangular matrix can be extracted from the previous results

$$
l_{ii} = \sqrt{a_{ii} - \sum_{k=1}^{i-1} l_{ik}^{2}}
\qquad\qquad
l_{ji} = \frac{a_{ji} - \displaystyle\sum_{k=1}^{i-1} l_{jk}\,l_{ik}}{l_{ii}}, \qquad j = i+1, \ldots, n
$$

These expressions can be modified so that there is no need to take a square root (an additional computation) in the first expression. To accomplish this, recast the previous matrix expression such that

$$[A] = [L][D][L]^{T}$$

where again [D] is a diagonal matrix (not necessarily the identity matrix).
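The column-by-column recurrences above map directly onto code. The sketch below is a bare-bones Cholesky factorization for a symmetric positive-definite matrix; there is no pivoting or error checking beyond the square root, the function name is mine, and the test matrix is chosen so the factor has small integer entries.

```python
import numpy as np

def cholesky(A):
    """A = L L^T for a symmetric positive-definite [A].
    l_ii = sqrt(a_ii - sum_k l_ik^2)
    l_ji = (a_ji - sum_k l_jk l_ik) / l_ii   for j > i"""
    n = A.shape[0]
    L = np.zeros((n, n))
    for i in range(n):
        L[i, i] = np.sqrt(A[i, i] - L[i, :i] @ L[i, :i])
        for j in range(i + 1, n):
            L[j, i] = (A[j, i] - L[j, :i] @ L[i, :i]) / L[i, i]
    return L

A = np.array([[4.0, 12.0, -16.0],
              [12.0, 37.0, -43.0],
              [-16.0, -43.0, 98.0]])
L = cholesky(A)
print(L)                        # factor: [[2,0,0],[6,1,0],[-8,5,3]]
print(np.allclose(L @ L.T, A))  # True
```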


The recurrence relationships for this form of the Cholesky decomposition (LDL) can be expressed as follows for each successive column (ith index) entry

$$
d_{ii} = a_{ii} - \sum_{k=1}^{i-1} d_{kk}\,l_{ik}^{2}
\qquad\qquad
l_{ji} = \frac{a_{ji} - \displaystyle\sum_{k=1}^{i-1} d_{kk}\,l_{jk}\,l_{ik}}{d_{ii}}, \qquad j = i+1, \ldots, n
$$

With [A] decomposed into a triple matrix product, the solution to the system of equations proceeds with

$$\{b\} = [A]\{x\} = [L][D][L]^{T}\{x\} = [L]\{y\}$$


Again

$$y_i = \frac{b_i - \displaystyle\sum_{j=1}^{i-1} l_{ij}\,y_j}{l_{ii}}, \qquad i = 1, 2, \ldots, n$$

but now

$$x_i = \frac{y_i}{d_{ii}} - \sum_{j=i+1}^{n} l_{ji}\,x_j, \qquad i = n, n-1, \ldots, 1$$
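A sketch of this square-root-free variant is shown below: it builds a unit-diagonal [L] and the diagonal entries d_ii from the recurrences above, then solves with a forward pass, a diagonal division, and a backward pass. Function names and the test data are mine.

```python
import numpy as np

def ldl_decompose(A):
    """A = L D L^T with unit-diagonal L; returns (L, d) where d holds d_ii."""
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for i in range(n):
        d[i] = A[i, i] - (d[:i] * L[i, :i] ** 2).sum()
        for j in range(i + 1, n):
            L[j, i] = (A[j, i] - (d[:i] * L[j, :i] * L[i, :i]).sum()) / d[i]
    return L, d

def ldl_solve(L, d, b):
    """Solve L D L^T x = b with forward, diagonal, and backward passes."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                      # [L]{y} = {b}  (l_ii = 1)
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # x_i = y_i/d_ii - sum l_ji x_j
        x[i] = y[i] / d[i] - L[i + 1:, i] @ x[i + 1:]
    return x

A = np.array([[4.0, 12.0, -16.0],
              [12.0, 37.0, -43.0],
              [-16.0, -43.0, 98.0]])
b = np.array([1.0, 2.0, 3.0])
L, d = ldl_decompose(A)
print(np.allclose(A @ ldl_solve(L, d, b), b))   # True
```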


Example 5.10