dariahiddleston · Aug 25, 2020

2.1 Matrices

Defs. A matrix is a table of entries (usually numbers). It is denoted by a capital letter such as A.

The plural of matrix is matrices.

Rows run horizontally.

Columns run vertically.

The entries of a matrix are called elements.

If there are m rows and n columns, then the matrix is called an m by n matrix.

The expression m × n is used to denote an m by n matrix.

The expression m × n is called the size of the matrix.

The numbers m and n themselves are called the dimensions of the matrix.

A 1 × n matrix is called a row vector.

An m × 1 matrix is called a column vector.

The elements of a row vector or a column vector are called the components of the vector.

The rows of an m × n matrix are sometimes called the row vectors of the matrix.

The columns of an m × n matrix are sometimes called the column vectors of the matrix.

The index form of an m × n matrix A is the following representation:

A = [aᵢⱼ] = | a₁₁ a₁₂ … a₁ₙ |
            | a₂₁ a₂₂ … a₂ₙ |
            | ⋮    ⋮       ⋮ |
            | aₘ₁ aₘ₂ … aₘₙ |
The transpose of an m × n matrix A is formed by writing its columns as rows.

The transpose of A is denoted Aᵀ and has size n × m.

Equal matrices have the same size and their corresponding elements are equal.

The matrix –A is obtained by multiplying each element of A by –1.

An n × n matrix is called a square matrix.

Defs. The following definitions apply only to square matrices.

Let A be an n × n square matrix.

The main diagonal is the diagonal containing the elements aᵢᵢ.

The trace is the sum of the elements along the main diagonal. It is denoted tr(A).

A matrix that has all zero elements above its main diagonal is called lower triangular.

A matrix that has all zero elements below its main diagonal is called upper triangular.

A matrix that is both lower and upper triangular is called diagonal.

It is denoted diag(a₁₁, a₂₂, …, aₙₙ).

A matrix A is symmetric if Aᵀ = A.

A matrix A is skew-symmetric if Aᵀ = −A.

Def. A matrix function is a matrix whose elements are functions of a single real variable t.
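These definitions can be checked numerically. A sketch in NumPy (my own example matrices, not from the notes):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])          # a 2 x 3 matrix

assert A.shape == (2, 3)           # size m x n

At = A.T                           # transpose: columns written as rows
assert At.shape == (3, 2)          # the transpose has size n x m

S = np.array([[1, 7],
              [7, 2]])             # square, and equal to its transpose
assert np.array_equal(S, S.T)      # so S is symmetric
assert np.trace(S) == 1 + 2        # trace = sum of main-diagonal elements
```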


2.2 Matrix Arithmetic

Let A = [aᵢⱼ] and B = [bᵢⱼ], and let s be a scalar.

Def. If A is an m × n matrix, then scalar multiplication is defined:

sA = [saᵢⱼ]

Def. If A and B are both m × n matrices, then matrix addition is defined:

A + B = [aᵢⱼ + bᵢⱼ]

Def. If A and B are both m × n matrices, then matrix subtraction is defined:

A − B = [aᵢⱼ − bᵢⱼ]

Def. If A is an m × n matrix and B is an n × p matrix, then matrix multiplication is defined:

AB = [cᵢⱼ] where

cᵢⱼ = Σₖ₌₁ⁿ aᵢₖbₖⱼ = aᵢ₁b₁ⱼ + aᵢ₂b₂ⱼ + … + aᵢₙbₙⱼ

The size of AB is m × p.

Def. The zero matrix is a matrix where every element is 0. This exists for any matrix.

Def. The identity matrix is a matrix with all 1s in the main diagonal and 0s everywhere else.

This exists for square matrices only.

Def. The derivative of a matrix function is obtained by taking derivatives of each element.

Def. The antiderivative of a matrix function is obtained by taking antiderivatives of each

element.

Thms. (1) (Aᵀ)ᵀ = A

(2) (A + B)ᵀ = Aᵀ + Bᵀ

(3) (AB)ᵀ = BᵀAᵀ

Thms. (1) The product of two lower triangular matrices is lower triangular.

(2) The product of two upper triangular matrices is upper triangular.
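The entry formula cᵢⱼ = Σₖ aᵢₖbₖⱼ can be verified against a library product. A NumPy sketch (arbitrary example matrices):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])             # 3 x 2
B = np.array([[7, 8, 9],
              [0, 1, 2]])          # 2 x 3

m, n = A.shape
_, p = B.shape
C = np.zeros((m, p), dtype=int)
for i in range(m):
    for j in range(p):
        # the definition: sum over the inner index k
        C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))

assert np.array_equal(C, A @ B)    # agrees with NumPy's matrix product
assert C.shape == (3, 3)           # the size of AB is m x p
```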


2.3 Terminology for Linear Systems

Let Ax = b be an m × n linear system.

Defs. A is called the coefficient matrix.

b is called the right-hand-side vector.

[ A b ] is called the augmented matrix. It is denoted by A#.

Thm. The linear system has either no solution, exactly one solution, or infinitely many solutions.

Def. A system that has at least one solution is called consistent.

Def. A system that has no solution is called inconsistent.

Def. If b = 0, then the linear system is called homogeneous.

Otherwise it is called nonhomogeneous.

Def. If x = 0 is a solution to the linear system, then x = 0 is called the trivial solution.

Thm. If the linear system is homogeneous, then it has the trivial solution.

Thm. A homogeneous system can never be inconsistent.

________________________________________________________________________________

2.4 REF and RREF

Def. Matrix in Row-Echelon Form (REF)

1. If there are any rows consisting entirely of zeros, then they are grouped together at the

bottom of the matrix.

2. The first nonzero element in any nonzero row is a 1 (called a leading 1).

3. The leading 1 of any row below the first row is to the right of the leading 1 of the row above

it.

Def. The number of nonzero rows in a REF matrix is called the rank of the matrix.

Def. Matrix in Reduced Row-Echelon Form (RREF)

1,2,3. The matrix is in REF.

4. Any column that contains a leading 1 has zeros everywhere else.


Def. Elementary Row Operations (3)

1. Interchange (or permute) rows.  Rᵢ ↔ Rⱼ

2. Multiply a row by a nonzero constant.  sRᵢ → Rᵢ

3. Add a multiple of one row to another.  sRᵢ + Rⱼ → Rⱼ

Def. Joe’s Nonelementary Row Operation

4. Add a multiple of one row to a multiple of another.  sRᵢ + tRⱼ → Rⱼ

Def. Let A and B be two matrices. B is row-equivalent to A iff B can be obtained from A by a

finite sequence of elementary row operations. Notation: B ~ A.

Thm. Every matrix can be reduced to REF.

Thm. If B ~ A, then rank(B) = rank(A).

Thm. To find the rank of any matrix A, reduce A to REF and find the rank of the reduced matrix.

Thm. Reducing a matrix to REF

1. Go to the leftmost nonzero column.

2. Find a pivot number (preferably 1) and move that row to the top.

3. Force zeros below the pivot by using row operations.

4. Let S be the submatrix below the pivot. Repeat the above process on S.

5. Scale each nonzero row so that each pivot becomes a leading one.

Thm. Reducing a matrix to RREF

1. Reduce to REF.

2. Force zeros above each leading one using row operations, starting with the bottom

leading one and working your way up to the top leading one.

Thm. Every matrix can be reduced to RREF.
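The reduction steps above can be sketched in code. A minimal NumPy implementation (my own, using the largest available entry as the pivot; the notes do not prescribe a pivot choice):

```python
import numpy as np

def rref(A, tol=1e-12):
    """Reduce A to reduced row-echelon form with elementary row operations."""
    M = A.astype(float).copy()
    rows, cols = M.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # find a pivot in this column at or below pivot_row
        pivot = pivot_row + np.argmax(np.abs(M[pivot_row:, col]))
        if abs(M[pivot, col]) < tol:
            continue                                    # no pivot here
        M[[pivot_row, pivot]] = M[[pivot, pivot_row]]   # 1. interchange rows
        M[pivot_row] /= M[pivot_row, col]               # 2. scale to a leading 1
        for r in range(rows):                           # 3. clear the column
            if r != pivot_row:
                M[r] -= M[r, col] * M[pivot_row]
        pivot_row += 1
    return M

A = np.array([[1., 2., 3.],
              [2., 4., 7.]])
R = rref(A)
assert np.allclose(R, [[1., 2., 0.],
                       [0., 0., 1.]])   # two nonzero rows, so rank(A) = 2
```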


2.5 Gaussian Elimination

Def. The process of reducing the augmented matrix to REF and then using back substitution to

solve the equivalent system is called Gaussian Elimination.

Def. The process of reducing the augmented matrix to RREF and then solving the equivalent

system is called Gauss-Jordan Elimination.

Thm. Let A be an n × n matrix with real elements.

The following conditions on A are equivalent.

1. Ax = b has a unique solution for every b in ℝⁿ.

2. Ax = 0 has only the trivial solution x = 0.

3. rank(A) = n.

4. A is row-equivalent to In.

Thm. Consider the m × n linear system Ax = b.

1. inconsistent iff rank(A) < rank(A#)

2. consistent iff rank(A) = rank(A#)

Def. Consider the m × n linear system Ax = b that is consistent. Reduce A to REF.

The variables that have leading 1s in their columns are called bound.

The variables that don’t have leading 1s in their columns are called free.

Thm. Let a linear system be consistent.

All variables are bound iff the system has exactly one solution.
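The rank test for consistency can be checked numerically. A NumPy sketch (the system is my own example):

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])           # rank 1: the second row is twice the first
b_bad  = np.array([[1.], [3.]])    # incompatible right-hand side
b_good = np.array([[1.], [2.]])    # compatible right-hand side

aug_bad  = np.hstack([A, b_bad])   # the augmented matrix A#
aug_good = np.hstack([A, b_good])

# inconsistent iff rank(A) < rank(A#)
assert np.linalg.matrix_rank(A) < np.linalg.matrix_rank(aug_bad)
# consistent iff rank(A) = rank(A#)
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug_good)
```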

________________________________________________________________________________

2.6 Inverse Matrices

Def. If AB = BA = In , then A is called invertible (nonsingular).

B is the inverse of A and can be written B = A⁻¹.

Thm. A⁻¹ is unique.

Def. A matrix with no inverse is noninvertible (singular).

Thm. If A and B are square matrices and AB = In , then BA = In.

Thm. Gauss-Jordan Technique

Let A be an n × n matrix.

[A | Iₙ] ~ [Iₙ | A⁻¹]


Thm. Invertible Matrix Theorem

Let A be an n × n matrix with real elements.

The following conditions on A are equivalent.

1. A is invertible

2. Ax = b has a unique solution for every b in ℝⁿ.

3. Ax = 0 has only the trivial solution x = 0.

4. rank(A) = n.

5. A is row-equivalent to In.

Props. (1) (A⁻¹)⁻¹ = A

(2) (AB)⁻¹ = B⁻¹A⁻¹

(3) (Aᵀ)⁻¹ = (A⁻¹)ᵀ
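A numerical check of the inverse properties (NumPy sketch; the invertible matrices are arbitrary examples):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])           # det = 1, so invertible
B = np.array([[1., 3.],
              [0., 1.]])           # det = 1, so invertible
Ainv = np.linalg.inv(A)
I = np.eye(2)

assert np.allclose(A @ Ainv, I) and np.allclose(Ainv @ A, I)
assert np.allclose(np.linalg.inv(Ainv), A)               # (A^-1)^-1 = A
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))  # (AB)^-1 = B^-1 A^-1
assert np.allclose(np.linalg.inv(A.T), Ainv.T)           # (A^T)^-1 = (A^-1)^T
```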

________________________________________________________________________________

3 Determinants

Def. The determinant of the matrix A = [a] is given by det(A) = a.

Def. The determinant of the matrix A = | a b |
                                       | c d |

is given by det(A) = |A| = ad − bc.

Def. If A is a square matrix, then the minor Mij of the element aij is the determinant of the matrix

obtained by deleting the ith row and jth column of A.

Def. The cofactor Cᵢⱼ is given by Cᵢⱼ = (−1)ⁱ⁺ʲ Mᵢⱼ.

Def. If A is a square matrix (of size 2 or greater), then the determinant of A is the sum of the

entries in the first row of A multiplied by their cofactors.

That is, |A| = a₁₁C₁₁ + a₁₂C₁₂ + … + a₁ₙC₁ₙ.

Thm. Cofactor Expansion Theorem

If A is a square matrix (of size 2 or greater), then the determinant of A is given by

|A| = aᵢ₁Cᵢ₁ + aᵢ₂Cᵢ₂ + … + aᵢₙCᵢₙ (ith row expansion)

|A| = a₁ⱼC₁ⱼ + a₂ⱼC₂ⱼ + … + aₙⱼCₙⱼ (jth column expansion)


Props. (1) If A has a row or column of 0s, then |A| = 0.

(2) If A is a triangular matrix, then |A| = product of the diagonal elements.

(3) |AB| = |A||B|

(4) |A⁻¹| = 1/|A|

(5) |Aᵀ| = |A|

(6) |cA| = cⁿ|A| (when A is n × n)

(7) A is invertible iff |A| ≠ 0.
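A numerical check of several of these determinant properties (NumPy sketch; the matrices are arbitrary examples):

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 5.]])           # det = 1*5 - 2*3 = -1
B = np.array([[2., 0.],
              [1., 3.]])           # lower triangular

assert np.isclose(np.linalg.det(B), 2 * 3)                  # product of diagonals
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))      # |AB| = |A||B|
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))     # |A^T| = |A|
n, c = 2, 4.0
assert np.isclose(np.linalg.det(c * A),
                  c**n * np.linalg.det(A))                  # |cA| = c^n |A|
assert np.isclose(np.linalg.det(np.linalg.inv(A)),
                  1 / np.linalg.det(A))                     # |A^-1| = 1/|A|
```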

________________________________________________________________________________

4.1 Vectors in ℝⁿ

Let u, v, and w be vectors in ℝⁿ.

Let r and s be scalars.

(1) u + v is in ℝⁿ closed under add.

(2) ru is in ℝⁿ closed under scalar mult.

(3) u + v = v + u commutative prop. of add.

(4) (u + v) + w = u + (v + w) associative prop. of add.

(5) u + 0 = u existence of a zero vector

(6) u + (–u) = 0 existence of additive inverses

(7) 1u = u unit property

(8) (rs)u = r(su) associative prop. of scalar mult.

(9) r(u + v) = ru + rv distributive prop. over vector add.

(10) (r + s)u = ru + su distributive prop. over scalar add.

Thm. Let a = (a₁, a₂), b = (b₁, b₂), and A = | a₁ b₁ |
                                             | a₂ b₂ |.

The area of a parallelogram with sides determined by a and b is |det A|.

a and b are collinear iff det A = 0.

Thm. Let a = (a₁, a₂, a₃), b = (b₁, b₂, b₃), c = (c₁, c₂, c₃), and

A = | a₁ b₁ c₁ |
    | a₂ b₂ c₂ |
    | a₃ b₃ c₃ |.

The volume of a parallelepiped determined by a, b, and c is |det A|.

a, b, and c are coplanar iff det A = 0.
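A quick numerical check of the area formula (NumPy sketch; the vectors are my own example):

```python
import numpy as np

# Sides a = (3, 0) and b = (1, 2): base 3, height 2, so the area should be 6.
a = np.array([3., 0.])
b = np.array([1., 2.])
A = np.column_stack([a, b])        # columns are the side vectors
area = abs(np.linalg.det(A))
assert np.isclose(area, 6.0)

# collinear vectors give a zero determinant
assert np.isclose(np.linalg.det(np.column_stack([a, 2 * a])), 0.0)
```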


4.2 Vector Spaces

Def. Let u, v, and w be vectors in V. Let r and s be scalars in F.

(A1) u + v is in V closed under add.

(A2) ru is in V closed under scalar mult.

(A3) u + v = v + u commutative prop. of add.

(A4) (u+v)+w = u+(v+w) associative prop. of add.

(A5) u + 0 = u existence of a zero vector

(A6) u + (–u) = 0 existence of additive inverses

(A7) 1u = u unit property

(A8) (rs)u = r(su) associative prop. of scalar mult.

(A9) r(u + v) = ru + rv distributive prop. over vector add.

(A10) (r + s)u = ru + su distributive prop. over scalar add.

If the preceding axioms are satisfied for every u, v, and w in V and every r and s in F, then V

is called a vector space over F.

Additional Properties of Vector Spaces

Let V be a vector space over F. Let u ∈ V and s ∈ F.

1. The zero vector is unique.

2. 0u = 0.

3. s0 = 0.

4. The additive inverse of each element in V is unique.

5. –u = (–1)u.

6. If su = 0, then s = 0 or u = 0.

Important Vector Spaces (with standard operations)

1. ℝⁿ = set of all n-tuples of real numbers

2. ℂⁿ = set of all n-tuples of complex numbers

3. Mₘ×ₙ = set of all m × n matrices with real elements

4. Mₙ = set of all n × n matrices with real elements

5. Pₙ = set of all polynomials of degree ≤ n with real coeffs.

6. Cᵏ(I) = real-valued functions that are continuous and

have (at least) k continuous derivatives on I.


4.3 Subspaces

Def. Let S be a nonempty subset of a vector space V over F. If S is itself a vector space over F

with the same operations of addition and scalar multiplication as used in V, then we say that

S is a subspace of V.

Thm. S is a subspace of V iff S is closed under the operations of addition and scalar multiplication

in V.

Thm. If a subset S of a vector space V fails to contain the zero vector 0, then it cannot form a

subspace.

Def. V and {0} are called the trivial subspaces of V.

Def. Let A be an m × n matrix with real elements. The solution set to the corresponding

homogeneous linear system Ax = 0 is called the null space of A and is denoted nullspace(A).

Thm. nullspace(A) is a subspace of ℝⁿ.

________________________________________________________________________________

4.4 Spanning Sets

Let V be a vector space.

Let S = {v₁, v₂, …, vₖ} be a subset of V.

Let c₁, c₂, …, cₖ be scalars.

Def. u is a linear combination of v₁, v₂, …, vₖ iff

u = c₁v₁ + c₂v₂ + … + cₖvₖ.

Def. Forming all possible linear combinations of v₁, v₂, …, vₖ generates a subset of V called the

linear span of v₁, v₂, …, vₖ, denoted by Span{v₁, v₂, …, vₖ} or Span(S).

Thm. Span(S) is a subspace of V.

Def. If Span(S) = T, then S is a spanning set of T.

We can say, " S spans T " or " T is spanned by S ".

Thm. Any 3 nonzero, non-coplanar vectors in ℝ³ span ℝ³.

Any 2 nonzero, non-collinear vectors in ℝ² span ℝ².


4.5 Linear Independence

Let V be a vector space.

Let S = {v₁, v₂, …, vₖ} be a subset of V.

Let c₁, c₂, …, cₖ be scalars.

Def. A spanning set of V with the smallest number of vectors is called a minimal spanning set.

Thm. Let T = {v₁, v₂, …, vₖ, w}.

If w can be written as a linear combination of vectors from S, then span(S) = span(T).

T would not be a minimal spanning set.

Def. S is linearly independent if the vector equation c₁v₁ + c₂v₂ + … + cₖvₖ = 0

has only the trivial solution, c₁ = c₂ = … = cₖ = 0.

If there are also nontrivial solutions, then S is linearly dependent.

Thm. If S is linearly dependent, then at least one of the vectors in S can be expressed as a linear

combination of the others.

Thm. If S is linearly dependent, then there exists a linearly independent subset of S that has the

same linear span as S.

Thm. If S contains the zero vector, then S is linearly dependent.

Thm. If S contains proportional vectors, then S is linearly dependent.

Thm. Let S contain exactly 2 vectors.

If those vectors are not proportional, then S is linearly independent.

Thm. Let S contain exactly 1 vector.

If that vector is nonzero, then S is linearly independent.

If that vector is the zero vector, then S is linearly dependent.

Thm. Let S = {v₁, v₂, …, vₖ} be a set of vectors in ℝⁿ and A = [v₁, v₂, …, vₖ].

If k > n, then S is linearly dependent.

If k = n, then S is linearly dependent iff det(A) = 0.
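The k = n determinant test can be checked numerically. A NumPy sketch (example vectors chosen by me):

```python
import numpy as np

# Three vectors in R^3: dependent iff det of the column matrix is 0.
v1 = np.array([1., 0., 0.])
v2 = np.array([0., 1., 0.])
v3 = v1 + v2                       # a linear combination, so the set is dependent
A = np.column_stack([v1, v2, v3])
assert np.isclose(np.linalg.det(A), 0.0)

w3 = np.array([0., 0., 1.])        # replace v3 with an independent vector
assert not np.isclose(np.linalg.det(np.column_stack([v1, v2, w3])), 0.0)
```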


Def. The set of functions {f₁, f₂, …, fₖ} is linearly independent on an interval I if the equation

c₁f₁(x) + c₂f₂(x) + … + cₖfₖ(x) = 0 for all x in I has only the trivial solution c₁ = c₂ = … = cₖ = 0.

Def. The Wronskian of f₁, f₂, and f₃:

W[f₁, f₂, f₃](x) = | f₁(x)  f₂(x)  f₃(x)  |
                   | f₁′(x) f₂′(x) f₃′(x) |
                   | f₁″(x) f₂″(x) f₃″(x) |

Thm. If W ≠ 0 for some point in I, then the functions are linearly independent on I.

If W = 0 for all points in I, then the test is inconclusive.
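For two functions the Wronskian reduces to W = f₁f₂′ − f₁′f₂. A numerical sketch with f₁ = sin and f₂ = cos on I = ℝ (derivatives entered by hand, not computed symbolically):

```python
import numpy as np

def wronskian_sin_cos(x):
    f1, df1 = np.sin(x), np.cos(x)     # f1 and its derivative
    f2, df2 = np.cos(x), -np.sin(x)    # f2 and its derivative
    return f1 * df2 - df1 * f2         # = -sin^2 x - cos^2 x = -1

# W = -1 != 0 at every point, so {sin, cos} is linearly independent on R
for x in (0.0, 1.0, 2.5):
    assert np.isclose(wronskian_sin_cos(x), -1.0)
```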

________________________________________________________________________________

4.6 Basis and Dimension

Def. A minimal spanning set of a vector space is called a basis of that vector space.

The plural of basis is bases (pronounced bey-seez).

Let V be a vector space.

Let S = {v₁, v₂, v₃, v₄, v₅} be a basis of V.

Props. (1) All bases of V have exactly 5 vectors.

(2) The dimension of V is 5. This is a definition.

(3) S has no “dead weight”.

(4) minimal spanning set

If you add any vectors to S, then the bigger set is still a spanning set of V.

If you take away vectors from S, then the smaller set is not a spanning set of V.

(5) maximal independence set

If you take away vectors from S (provided you don’t take the last one),

then the smaller set is still an independent set.

If you add any vectors to S, then the bigger set is not an independent set.

(6) Any set of 5 independent vectors is a basis.

(7) Any set of 5 vectors that spans V is a basis.

(8) Any spanning set of V must contain 5 vectors or more.

(9) Any independent set must contain 5 vectors or less.

(10) If T is a subspace of V, then dim[T] ≤ 5

(11) If T is a subspace of V, then any basis for T is part of a basis for V.

Def. If a vector space V has a basis consisting of a finite number of vectors, then V is called a

finite-dimensional vector space.

Otherwise V is called an infinite-dimensional vector space.


Exs. (vector space · standard basis · dimension)

(1) ℝ² · {(1, 0), (0, 1)} = {i, j} = {e₁, e₂} · 2

(2) ℝⁿ · {e₁, e₂, …, eₙ} · n

(3) P₃ · {1, x, x², x³} · 4

(4) Pₙ · {1, x, x², …, xⁿ} · n + 1

(5) M₂ · {E₁₁, E₁₂, E₂₁, E₂₂} (the four 2 × 2 matrices with a single 1 entry and 0s elsewhere) · 4

(6) Mₘ×ₙ · {Eᵢⱼ : 1 ≤ i ≤ m, 1 ≤ j ≤ n} · mn

Thm. If dim[V] = n and S is a set of exactly n vectors in V, then the following statements are equivalent.

(1) S is a basis.

(2) S is linearly independent.

(3) S spans V.

Thm. If V is a vector space with basis B, then every vector in V can be written uniquely as a linear

combination of the vectors in B.

________________________________________________________________________________

4.7 Change of Basis

Def. An ordered basis is a basis in which we are keeping track of the order in which the basis

vectors are written.

Def. Let V be a vector space with ordered basis B = {v₁, v₂, v₃}.

Let v be a vector in V.

Then v = c₁v₁ + c₂v₂ + c₃v₃ is the unique representation of v as a linear combination of B.

The component vector of v relative to B is written

| c₁ |
| c₂ |
| c₃ |

or (c₁, c₂, c₃).

We use the notation [v]_B = (c₁, c₂, c₃).

Def. Let V be a vector space with ordered bases B = {v₁, v₂, v₃} and C = {w₁, w₂, w₃}.

The matrix P_{C←B} = [ [v₁]_C, [v₂]_C, [v₃]_C ] is called the

change-of-basis matrix from B to C.

This matrix changes vectors written in terms of B to vectors written in terms of C.

Thm. P_{C←B} [v]_B = [v]_C

Thm. P_{B←C} is the inverse of P_{C←B}.


4.8 Row Space & Column Space

Let A be an m × n matrix.

Def. The row space of A is the subspace of ℝⁿ spanned by the row vectors of A.

It is denoted by rowspace(A).

Thm. If A is row equivalent to B, then rowspace(A) = rowspace(B).

Thm. The nonzero rows in any row-echelon form of A form a basis for rowspace(A).

Def. The column space of A is the subspace of ℝᵐ spanned by the column vectors of A.

It is denoted by colspace(A).

Thm. The set of column vectors of A corresponding to those column vectors containing leading

ones in any row-echelon form of A is a basis for colspace(A).

Thm. dim[rowspace(A)] = dim[colspace(A)] = rank(A)

________________________________________________________________________________

4.9 The Rank-Nullity Theorem

Def. The dimension of nullspace(A) is called the nullity of A.

Thm. Rank-Nullity Theorem

rank(A) + nullity(A) = n (the number of columns of A); equivalently, bound variables + free variables = total variables.
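The Rank-Nullity Theorem can be checked numerically; here nullity is estimated by counting near-zero singular values (a NumPy sketch with a tolerance I chose):

```python
import numpy as np

A = np.array([[1., 2., 3., 4.],
              [2., 4., 6., 8.],    # twice the first row
              [0., 0., 1., 1.]])   # m = 3, n = 4
n = A.shape[1]
rank = np.linalg.matrix_rank(A)

# nullity = n minus the number of (numerically) nonzero singular values
s = np.linalg.svd(A, compute_uv=False)
nullity = n - np.count_nonzero(s > 1e-10)

assert rank == 2
assert rank + nullity == n         # rank + nullity = number of columns
```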

Thm. If xₚ is a particular solution to Ax = b,

then every solution can be written x = xₕ + xₚ, where xₕ is a solution to Ax = 0.

The solution set {xₕ} has dimension nullity(A).

Furthermore, b must be in colspace(A).

Thm. Let A be an n × n matrix with real elements.

The following conditions on A are equivalent.

1. A is invertible.

2. Ax = b has a unique solution for every b in ℝⁿ.

3. Ax = 0 has only the trivial solution x = 0.

4. rank(A) = n.

5. A is row-equivalent to In.

6. det(A) ≠ 0.

7. Aᵀ is invertible.

8. nullity(A) = 0.

9. colspace(A) = ℝⁿ (that is, the columns of A form a basis for ℝⁿ).

10. rowspace(A) = ℝⁿ (that is, the rows of A form a basis for ℝⁿ).


5.1 Inner Product Spaces

Let u, v, and w be vectors in V. Let r be a scalar in F.

Def. An inner product on V is a function that associates a real number ⟨u, v⟩ with each pair of

vectors u and v and satisfies the following axioms.

(i1) ⟨u, u⟩ ≥ 0, and ⟨u, u⟩ = 0 iff u = 0

(i2) ⟨u, v⟩ = ⟨v, u⟩

(i3) ⟨ru, v⟩ = r⟨u, v⟩

(i4) ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩

Def. If V has an inner product, it is called an inner product space.

Def. The norm (or length) of u is ‖u‖ = √⟨u, u⟩.

Def. The angle between two nonzero vectors u and v is given by

cos θ = ⟨u, v⟩ / (‖u‖ ‖v‖), 0 ≤ θ ≤ π

Def. u and v are orthogonal iff ⟨u, v⟩ = 0.

________________________________________________________________________________

5.2 Orthogonal Sets

Def. If a vector has norm of 1, then it is called a unit vector.

Thm. If v is any nonzero vector in an inner product space V,

then the vector u = v / ‖v‖ is a unit vector in the direction of v.

The process of replacing v with u is called normalization.

Def. A set S of nonzero vectors in an inner product space V is called orthogonal if every pair of

vectors in S is orthogonal. If, in addition, each vector in the set is a unit vector, then S is

called orthonormal.


Thm. If S is an orthogonal set of nonzero vectors in an inner product space V, then S is linearly

independent.

Thm. If V is an inner product space of dimension n, then any orthogonal set of n vectors is a basis

of V.

Thm. Let V be an inner product space with orthonormal basis B = {v₁, v₂, v₃}.

Let u be a vector in V.

Then the component vector of u relative to B is

[u]_B = (⟨u, v₁⟩, ⟨u, v₂⟩, ⟨u, v₃⟩).

________________________________________________________________________________

5.3 The Gram-Schmidt Process

Thm. The Gram-Schmidt Process

(1) Let B = {v₁, v₂, v₃} be a basis for an inner product space V.

Scale the vᵢ's if desired.

(2) Let B = {w₁, w₂, w₃} where

w₁ = v₁

w₂ = v₂ − (⟨v₂, w₁⟩ / ⟨w₁, w₁⟩) w₁ = v₂ − proj_{w₁} v₂

w₃ = v₃ − (⟨v₃, w₁⟩ / ⟨w₁, w₁⟩) w₁ − (⟨v₃, w₂⟩ / ⟨w₂, w₂⟩) w₂ = v₃ − proj_{w₁} v₃ − proj_{w₂} v₃

Then B is an orthogonal basis for V.

Scale the wᵢ's if desired.

(3) Let B = {u₁, u₂, u₃} where uᵢ = wᵢ / ‖wᵢ‖.

Then B is an orthonormal basis for V.
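The process above can be sketched in code. A minimal NumPy implementation for the standard dot product on ℝⁿ (an assumption; the theorem allows any inner product), combining steps (2) and (3) by normalizing as it goes:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors in R^n."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for u in basis:                 # subtract projections onto earlier
            w -= np.dot(v, u) * u       # vectors; each u is already unit length
        basis.append(w / np.linalg.norm(w))
    return basis

v1 = np.array([1., 1., 0.])
v2 = np.array([1., 0., 1.])
u1, u2 = gram_schmidt([v1, v2])

assert np.isclose(np.dot(u1, u2), 0.0)      # orthogonal
assert np.isclose(np.linalg.norm(u1), 1.0)  # unit vectors
assert np.isclose(np.linalg.norm(u2), 1.0)
```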


6.1 Linear Transformations

Def. Let V and W be vector spaces. A mapping T from V into W is a rule that assigns to each

vector v in V precisely one vector w = T(v) in W. We denote such a mapping by

T : V → W.

w is called the image of v.

v is called the preimage of w.

Exs. 1. T : ℝ³ → ℝ² defined by T(x, y, z) = (x − y, x − 2z)

2. T : ℝ² → ℝ² defined by T(x, y) = (x + 3, y)

3. T : ℝ → ℝ defined by T(x) = x + 1

Def. A mapping T : V → W is called a linear transformation from V to W if it satisfies the

following properties:

(1) T(u + v) = T(u) + T(v) for all u, v in V.

(2) T(c v) = c T(v) for all v in V and all scalars c.

These properties are called the linearity properties.

V is called the domain of T.

W is called the codomain of T.

Thm. Let T : V → W be a linear transformation.

(1) T(c1v1 + c2v2) = c1T(v1) + c2T(v2) for all v1, v2 in V and scalars c1, c2.

(2) T(0V) = 0W.

(3) T(–v) = –T(v) for all v in V.

Def. If T : ℝⁿ → ℝᵐ is a linear transformation, then the matrix of T is

A = [T(e₁), T(e₂), …, T(eₙ)].

Thm. Every m × n matrix represents a linear transformation from ℝⁿ to ℝᵐ.

Every linear transformation from ℝⁿ to ℝᵐ can be represented by an m × n matrix.

App. Tᵢ : ℝ³ → ℝ³ is a linear transformation that rotates a point in 3-space around axis i through

an angle θ in counterclockwise direction relative to a person facing the negative direction of

axis i, defined by the matrices

A_x = | 1    0       0     |
      | 0  cos θ  −sin θ   |
      | 0  sin θ   cos θ   |

A_y = |  cos θ  0  sin θ |
      |   0     1    0   |
      | −sin θ  0  cos θ |

A_z = | cos θ  −sin θ  0 |
      | sin θ   cos θ  0 |
      |  0       0     1 |


6.3 Kernel & Range

Def. Let T : V → W be a linear transformation. The set of all vectors v in V such that T(v) = 0 is

called the kernel of T and is denoted Ker(T).

Def. The range of T : V → W is the subset of W consisting of all transformed vectors from V.

The range of T is denoted by Rng(T).

Thm. If T : ℝⁿ → ℝᵐ is a linear transformation with matrix A, then

(1) Ker(T) = nullspace(A) (which is a subspace of ℝⁿ)

(2) Rng(T) = colspace(A) (which is a subspace of ℝᵐ)

Thm. If T : V → W is a linear transformation, then

(1) Ker(T) is a subspace of V.

(2) Rng(T) is a subspace of W.

Thm. General Rank-Nullity Theorem

If T : V → W is a linear transformation and V is finite-dimensional, then

dim[Ker(T)] + dim[Rng(T)] = dim[V].

Def. Let T : V → W be a linear transformation.

T is one-to-one iff the preimage of every w in the range consists of a single vector;

that is, whenever v1 ≠ v2 in V, we have T(v1) ≠ T(v2).

T is onto iff every element of W has a preimage in V.

Thm. T is one-to-one iff Ker(T) = {0}.

T is onto iff Rng(T) = W. (This one is immediate from the definition.)

Thm. Let T : V → W be a linear transformation and let V and W be finite-dimensional.

(1) If T is one-to-one, then dim[V] ≤ dim[W].

(2) If T is onto, then dim[V] ≥ dim[W].

(3) If T is both one-to-one and onto, then dim[V] = dim[W].

(4) If dim[V] = dim[W], then T is one-to-one iff T is onto.

________________________________________________________________________________

6.4 Additional Properties of Linear Transformations

Def. If T is both one-to-one and onto, then the inverse transformation T⁻¹ : W → V is defined

by T⁻¹(w) = v iff T(v) = w.

Thm. Let T : ℝⁿ → ℝⁿ be a linear transformation with matrix A.

(1) T⁻¹ exists iff det(A) ≠ 0.

(2) T⁻¹ is a linear transformation with matrix A⁻¹.


Def. If T : V → W is a linear transformation that is both one-to-one and onto, then T is called an

isomorphism, and we say that V and W are isomorphic, written V ≅ W.

Thm. All n-dimensional (real) vector spaces are isomorphic to ℝⁿ.

Thm. Let A be an n × n matrix with real elements.

Let T : ℝⁿ → ℝⁿ be the matrix transformation defined by T(x) = Ax.

The following conditions on A are equivalent.

1. A is invertible.

2. Ax = b has a unique solution for every b in ℝⁿ.

3. Ax = 0 has only the trivial solution x = 0.

4. rank(A) = n.

5. A is row-equivalent to Iₙ.

6. det(A) ≠ 0.

7. Aᵀ is invertible.

8. nullity(A) = 0.

9. colspace(A) = ℝⁿ (that is, the columns of A form a basis for ℝⁿ).

10. rowspace(A) = ℝⁿ (that is, the rows of A form a basis for ℝⁿ).

11. T is an isomorphism.

6.5 The Matrix of a Linear Transformation

Let V and W be vector spaces

with ordered bases B = {v₁, v₂, …, vₙ} and C = {w₁, w₂, …, wₘ}, respectively.

Let T : V → W be a linear transformation.

Def. The m × n matrix

[T]_B^C = [ [T(v₁)]_C, [T(v₂)]_C, …, [T(vₙ)]_C ]

is called the matrix representation of T relative to the bases B and C.

Thm. If V = ℝⁿ and W = ℝᵐ are each equipped with the standard bases, then [T]_B^C is just the

matrix of T defined earlier.

Thm. If V = W and T(v) = v for all v in V, then [T]_B^C is just the change-of-basis matrix from B to C

defined earlier.

Thm. [T]_B^C [v]_B = [T(v)]_C


7.1 Eigenvalues, Eigenvectors

7.2 Eigenspaces, Nondefective Matrices

Let A be an n × n matrix.

Def. The scalar λ is called an eigenvalue of A if there is a nonzero vector v such that

Av = λv.

The vector v is an eigenvector of A corresponding to λ.

Def. The polynomial det(A – λI) is the characteristic polynomial of A.

The equation det(A – λI) = 0 is the characteristic equation of A.

Thm. 1. An eigenvalue of A is a scalar λ that satisfies the characteristic equation.

2. The eigenvectors of A corresponding to λ are the nonzero solutions of (A – λI)v = 0.

Thm. Let λ be an eigenvalue of A. The set of all eigenvectors corresponding to λ, together with the zero vector,

is a subspace of ℝⁿ. We call this subspace the eigenspace of λ.

Def. Let λ be an eigenvalue of A. The multiplicity of λ in the characteristic polynomial is called

the algebraic multiplicity of λ. The dimension of the eigenspace corresponding to λ is

called the geometric multiplicity of λ.

Thm. The union of the bases of all eigenspaces of A is linearly independent.

Def. If A has n linearly independent eigenvectors, then A is called nondefective.

Otherwise, A is called defective.

Thm. If the algebraic multiplicity equals the geometric multiplicity for each eigenvalue of A, then

A is nondefective. Otherwise, A is defective.

Thm. Let A have real elements, and let λ, v be an eigenvalue/eigenvector pair of A.

Then the complex conjugates λ̄, v̄ also form an eigenvalue/eigenvector pair of A.

________________________________________________________________________________

7.3 Diagonalization

Let A and B be n × n matrices.

Thm. If A is triangular, then its eigenvalues are the entries on its main diagonal.

Def. A is similar to B if there exists an invertible matrix S such that B = S⁻¹AS.

Thm. If A and B are similar, then they have the same eigenvalues (including multiplicities).

Def. If A is similar to a diagonal matrix, then A is diagonalizable.


Thm. A is diagonalizable iff A is nondefective.

Thm. Let A be diagonalizable.

Suppose λ₁, λ₂, …, λₙ are the eigenvalues of A (not necessarily distinct) corresponding to the

following set of linearly independent eigenvectors: {v₁, v₂, …, vₙ}.

Then D = S⁻¹AS, where

D = diag(λ₁, λ₂, …, λₙ) and S = [v₁, v₂, …, vₙ]
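The diagonalization D = S⁻¹AS can be checked numerically (NumPy sketch; the matrix is an arbitrary nondefective example with distinct eigenvalues):

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])           # eigenvalues 5 and 2, so nondefective
eigvals, S = np.linalg.eig(A)      # columns of S are eigenvectors
D = np.diag(eigvals)

assert np.allclose(np.linalg.inv(S) @ A @ S, D)   # D = S^-1 A S
assert np.allclose(A @ S, S @ D)                  # equivalently, AS = SD
```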

________________________________________________________________________________

7.5 Orthogonal Diagonalization

Let A be a real n × n matrix.

Def. A is orthogonal if A⁻¹ = Aᵀ.

Thm. A is orthogonal iff the row (or column) vectors form an orthonormal set.

Thm. If A is symmetric, then

1. A is diagonalizable (nondefective).

2. All the eigenvalues are real.

3. If λ1 and λ2 are distinct eigenvalues, then their corresponding eigenvectors are orthogonal.

4. A has a set of n orthonormal eigenvectors.

5. A can be diagonalized with an orthogonal matrix S.

That is, S⁻¹AS = SᵀAS is diagonal. That is,

S⁻¹AS = SᵀAS = | λ₁  0   0  |
               | 0   λ₂  0  |
               | 0   0   λ₃ |

(in the case that A is a 3 × 3 matrix).
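A numerical check of orthogonal diagonalization for a symmetric matrix (NumPy sketch; numpy.linalg.eigh returns real eigenvalues and an orthonormal set of eigenvectors for symmetric input):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])           # symmetric
eigvals, S = np.linalg.eigh(A)     # columns of S: orthonormal eigenvectors

assert np.allclose(S.T @ S, np.eye(2))            # S is orthogonal: S^-1 = S^T
assert np.allclose(S.T @ A @ S, np.diag(eigvals)) # S^T A S is diagonal
assert np.all(np.isreal(eigvals))                 # eigenvalues are real
```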