
Problem F02.10. Let T be a linear operator on a finite dimensional complex inner product space V such that T∗T = TT∗. Show that there is an orthonormal basis of V consisting of eigenvectors of T.

Solution. When T satisfies T∗T = TT∗, we call T normal.

We prove this by induction on the dimension of the space that T operates on. If T is operating on a 1-dimensional space, the claim is obvious.

Suppose the claim holds for any normal T operating on an (n − 1)-dimensional space (n ≥ 2). By the fundamental theorem of algebra, the characteristic polynomial of T∗ has a root λ which is an eigenvalue of T∗. Let v be a corresponding non-zero eigenvector (wlog ||v|| = 1). Then v⊥ = {x ∈ V : (x, v) = 0} has dimension n − 1. Also, if x ∈ v⊥, then

(Tx, v) = (x, T∗v) = (x, λv) = λ̄(x, v) = 0.

Thus v⊥ is T-invariant. Then the restriction of T to v⊥ is a normal operator on an (n − 1)-dimensional space. By our inductive hypothesis, there is an orthonormal basis {v2, . . . , vn} for v⊥ consisting of eigenvectors of T. Then {v, v2, . . . , vn} is an orthonormal set with n elements and is thus a basis for V. It remains to prove that v is also an eigenvector of T; then we will have the required basis. Since T is normal, so is T∗ and thus T∗ − cI for every c ∈ C. Also, for any normal operator S and any vector x, we see

||Sx||² = (Sx, Sx) = (x, S∗Sx) = (x, SS∗x) = (S∗x, S∗x) = ||S∗x||².

Then since v is an eigenvector of T∗, we have (T∗ − λI)v = 0. Then

0 = ||(T∗ − λI)v||² = ||(T∗ − λI)∗v||² = ||(T − λ̄I)v||².

Thus Tv = λ̄v so v is also an eigenvector of T. Thus {v, v2, . . . , vn} is a basis of V consisting of eigenvectors of T. This completes the induction and the proof.
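
As a numerical sanity check of this fact, one can use the Schur decomposition: for a normal matrix the triangular Schur factor is forced to be diagonal, so the columns of the unitary factor are an orthonormal eigenbasis. A minimal sketch, assuming Python with NumPy and SciPy (the random matrix and tolerances are illustrative, not part of the exam solution):

import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
# Build a normal matrix A = U D U* with U unitary and D diagonal.
U, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
D = np.diag(rng.standard_normal(4) + 1j * rng.standard_normal(4))
A = U @ D @ U.conj().T
assert np.allclose(A @ A.conj().T, A.conj().T @ A)   # A is normal

# For normal A, the Schur form A = Z T Z* has diagonal T, so the
# columns of the unitary Z are an orthonormal basis of eigenvectors.
T, Z = schur(A, output='complex')
assert np.allclose(T, np.diag(np.diag(T)), atol=1e-10)
assert np.allclose(Z.conj().T @ Z, np.eye(4))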

Problem W02.8. Let T : V → W and S : W → X be linear transformations of real finite dimensional vector spaces. Prove that

rank(T ) + rank(S)− dim(W ) ≤ rank(S ◦ T ) ≤ max{rank(T ), rank(S)}.

Solution. By the Rank-Nullity Theorem,

rank(T ) + dim(ker(T )) = dim(V ), (1)

rank(S) + dim(ker(S)) = dim(W ), (2)

rank(S ◦ T ) + dim(ker(S ◦ T )) = dim(V ). (3)

Adding (1) and (2) and then subtracting (3) gives

rank(T ) + rank(S)− rank(S ◦T ) + dim(ker(T )) + dim(ker(S))− dim(ker(S ◦T )) = dim(W ).

Let {v1, . . . , vℓ} be a basis for ker(T). Then (S ◦ T)(vi) = 0 for each i, so ker(T) ⊂ ker(S ◦ T). Thus we can extend this to a basis {v1, . . . , vℓ, y1, . . . , yk} for ker(S ◦ T). Then for each j = 1, . . . , k, we have

0 = (S ◦ T )(yj) = S(T (yj)).


Hence T(yj) ∈ ker(S) for each j. Further, if a1, . . . , ak ∈ R are such that

a1T(y1) + · · · + akT(yk) = 0,

then
T(a1y1 + · · · + akyk) = 0

so a1y1 + · · · + akyk ∈ ker(T) and there are b1, . . . , bℓ ∈ R such that

a1y1 + · · · + akyk = b1v1 + · · · + bℓvℓ =⇒ a1y1 + · · · + akyk − b1v1 − · · · − bℓvℓ = 0.

But these vectors form a basis for ker(S ◦ T), so in particular a1 = · · · = ak = 0. Thus {T(y1), . . . , T(yk)} is a linearly independent subset of ker(S) and so dim(ker(S)) ≥ k. Hence

dim(ker(T)) + dim(ker(S)) ≥ dim(ker(S ◦ T))

and so the equation above yields

rank(T) + rank(S) − rank(S ◦ T) ≤ dim(W)

or
rank(T) + rank(S) − dim(W) ≤ rank(S ◦ T)

which is the first half of the inequality.

Now suppose that {x1, . . . , xm} is a basis for im(S ◦ T). Then there are u1, . . . , um ∈ V such that (S ◦ T)(ui) = xi, i = 1, . . . , m. Thus S(T(ui)) = xi for each i, and so in particular xi ∈ im(S) for each i; hence we have m linearly independent vectors in im(S). This gives rank(S ◦ T) ≤ rank(S) ≤ max{rank(S), rank(T)}. This is the second half of the inequality.
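
The two-sided bound is easy to spot-check numerically. A small sketch, assuming Python with NumPy (the shapes are arbitrary choices):

import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 5))   # T : R^5 -> R^4, so dim(W) = 4
S = rng.standard_normal((3, 4))   # S : R^4 -> R^3

rT = np.linalg.matrix_rank(T)
rS = np.linalg.matrix_rank(S)
rST = np.linalg.matrix_rank(S @ T)
assert rT + rS - 4 <= rST <= max(rT, rS)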

Problem W02.10. Let V be a finite dimensional complex inner product space and f : V → C a linear functional. Show that there exists a vector w ∈ V such that f(v) = (v, w) for all v ∈ V.

Solution. Let v1, . . . , vn be an orthonormal basis for V. Given f ∈ V∗, put f(vi) = αi ∈ C. Next set

w = ᾱ1v1 + · · · + ᾱnvn

(the conjugates are needed since the inner product is conjugate-linear in its second argument). For any v ∈ V, there are β1, . . . , βn ∈ C such that

v = β1v1 + · · · + βnvn.

Then
f(v) = β1f(v1) + · · · + βnf(vn) = β1α1 + · · · + βnαn

and

(v, w) = (β1v1 + · · · + βnvn, ᾱ1v1 + · · · + ᾱnvn) = ∑_{i=1}^n ∑_{j=1}^n (βivi, ᾱjvj) = ∑_{i=1}^n ∑_{j=1}^n βiαj(vi, vj).

Thus by orthonormality,

(v, w) = ∑_{i=1}^n βiαi = f(v).

2

Page 3: Problem F02.10. Let T V T T TT V consisting Bchparkin/index/BasicExam...Problem S03.10. Let V be a nite dimensional real inner product space and T: V !V a hermitian linear operator.

Since v was arbitrary, f(v) = (v, w) for all v ∈ V .
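
The construction of w is concrete enough to verify directly. A minimal sketch, assuming Python with NumPy and the convention (x, y) = ∑ xᵢȳᵢ used above (the functional f below is an arbitrary example):

import numpy as np

rng = np.random.default_rng(2)
n = 4
ip = lambda x, y: np.vdot(y, x)        # (x, y) = sum_i x_i * conj(y_i)

c = rng.standard_normal(n) + 1j * rng.standard_normal(n)
f = lambda v: c @ v                    # a linear functional on C^n

# The standard basis is orthonormal and f(e_i) = c_i, so the proof's
# recipe gives w = sum_i conj(f(e_i)) e_i = conj(c).
w = np.conj(c)

v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
assert np.isclose(f(v), ip(v, w))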

Problem W02.11. Let V be a finite dimensional complex inner product space and T : V → V a linear transformation. Prove that there exists an orthonormal ordered basis for V such that the matrix representation of T in this basis is upper triangular.

Solution. We prove this by induction on the dimension of the space T acts upon. If T is acting on a 1-dimensional space, the claim is obvious.

Suppose the claim holds for linear maps acting on (n − 1)-dimensional spaces. Let dim(V) = n. By the fundamental theorem of algebra, there is an eigenvalue λ ∈ C and corresponding non-zero eigenvector 0 ≠ v1 ∈ V of T∗; wlog ||v1|| = 1. Then v1⊥ = {x ∈ V : (x, v1) = 0} is an (n − 1)-dimensional space. Also for x ∈ v1⊥, we have

(T(x), v1) = (x, T∗(v1)) = (x, λv1) = λ̄(x, v1) = 0.

Hence v1⊥ is T-invariant. Thus T|v1⊥ is an operator acting on an (n − 1)-dimensional space. By our inductive hypothesis, there is an orthonormal basis {v2, . . . , vn} for v1⊥ such that the matrix of T|v1⊥ is upper-triangular with respect to this basis. Then {v1, v2, . . . , vn} is an orthonormal basis for V. Further, T(v1) = ∑_{j=1}^n a1jvj for some a1j ∈ C and, by the inductive hypothesis, T(vi) = ∑_{j=i}^n aijvj for i = 2, . . . , n. Thus the matrix A = (aij) of T with respect to this basis (taking the convention T(vi) = ∑_j aijvj) is

A =
[ a11 a12 a13 · · · a1n ]
[ 0   a22 a23 · · · a2n ]
[ 0   0   a33 · · · a3n ]
[ ⋮             ⋱    ⋮  ]
[ 0   0   0   · · · ann ].

Problem S03.8. Let V be an n-dimensional complex vector space and T : V → V a linear operator. Suppose that the characteristic polynomial of T has n distinct roots. Show that there is a basis B of V such that the matrix representation of T in the basis B is diagonal.

Solution. Since each root of the characteristic polynomial (and thus each eigenvalue of T) is distinct, and since eigenvectors corresponding to distinct eigenvalues are linearly independent, each eigenspace Eλ is a one-dimensional T-invariant subspace. Let λ1, . . . , λn be the distinct eigenvalues of T with corresponding eigenvectors v1, . . . , vn. Eigenvectors corresponding to distinct eigenvalues are linearly independent, thus Eλi ∩ Eλj = {0} whenever i ≠ j. Further, since we have n linearly independent vectors, {v1, . . . , vn} is a basis for V. Since T(vi) = λivi, i = 1, . . . , n, the matrix of T with respect to this basis is

[T] = diag(λ1, λ2, . . . , λn).

Problem S03.9. Let A ∈ M3(R) satisfy det(A) = 1 and AᵗA = I = AAᵗ where I is the identity matrix. Prove that the characteristic polynomial of A has 1 as a root.

Solution. Clearly the characteristic polynomial of A has a real root since it has odd degree. Let λ be a real root of the characteristic polynomial. Then λ is an eigenvalue of A. Suppose 0 ≠ v ∈ R3 is a normalized eigenvector corresponding to λ. Then

λ² = λ²(v, v) = (λv, λv) = (Av, Av) = (v, AᵗAv) = (v, v) = 1.

Thus λ = ±1. If λ = 1, then we are done. If λ = −1, suppose µ, ν ∈ C are the other eigenvalues of A. Then −1 = −det(A) = −µνλ = µν. If µ, ν are not real, they must be a conjugate pair since A is real. But this is impossible, because then µν = |µ|² ≥ 0. Thus both µ, ν are real. By the same reasoning as above, µ, ν = ±1. Then µν = −1 forces µ = 1, ν = −1 (or vice versa). Thus A has 1 as an eigenvalue and so the characteristic polynomial of A has 1 as a root.
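
A numerical illustration, assuming Python with SciPy (special_ortho_group samples matrices with AᵗA = I and det(A) = 1; the check is illustrative only):

import numpy as np
from scipy.stats import special_ortho_group

A = special_ortho_group.rvs(3, random_state=7)   # A^t A = I, det(A) = 1
assert np.isclose(np.linalg.det(A), 1.0)
# 1 appears among the eigenvalues, as the proof predicts.
assert np.isclose(np.linalg.eigvals(A), 1.0).any()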

Problem S03.10. Let V be a finite dimensional real inner product space and T : V → V a hermitian linear operator. Suppose the matrix representation of T² in the standard basis has trace zero. Prove that T is the zero operator.

Solution. Let dim(V) = n and let A be the matrix of T in the standard basis. Since T is hermitian, so is A, and thus by the spectral theorem there is an orthonormal basis {v1, . . . , vn} for Rn consisting of eigenvectors of A. Let λ1, . . . , λn ∈ R be the corresponding eigenvalues (repeats are allowed and the eigenvalues are real since A is hermitian). Then

Avi = λivi =⇒ A²vi = λiAvi = λi²vi.

Thus (λi², vi) is an eigenpair for A², which is the matrix of T². We are given that the trace of this matrix is zero, but the trace is the sum of the eigenvalues. Hence

∑_{i=1}^n λi² = 0 =⇒ λ1 = · · · = λn = 0;

again this holds since all eigenvalues of a hermitian operator are real. Then Avi = 0, i = 1, . . . , n. But this means A sends a basis for Rn to zero. This is only possible if A is the zero matrix. Thus T is the zero transformation.

Problem F03.9. Consider a 3 × 3 real symmetric matrix with determinant 6. Assume that (1, 2, 3) and (0, 3, −2) are eigenvectors with corresponding eigenvalues 1 and 2 respectively.

(a) Give an eigenvector of the form (1, x, y) which is linearly independent from the two vectors above.

(b) What is the eigenvalue of this eigenvector?


Solution. We answer the questions in reverse order. The product of the eigenvalues equals the determinant, so the third eigenvalue is 3. This answers (b).

By the spectral theorem, the eigenspaces corresponding to distinct eigenvalues will be orthogonal. Here all eigenvalues are distinct. Since the first two eigenvectors span a two dimensional space, any vector orthogonal to both will necessarily be a third eigenvector. Taking the cross product of the two vectors gives a vector which is orthogonal to both. We see

(1, 2, 3) × (0, 3, −2) =
| î  ĵ  k̂ |
| 1  2  3 |
| 0  3  −2 |
= (−13, 2, 3).

Then, scaling by −1/13, v = (1, −2/13, −3/13) is an eigenvector of the desired form. This answers (a).
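
The arithmetic is quick to confirm; a sketch assuming Python with NumPy:

import numpy as np

v1, v2 = np.array([1.0, 2.0, 3.0]), np.array([0.0, 3.0, -2.0])
v3 = np.cross(v1, v2)                  # = (-13, 2, 3)
assert np.isclose(v3 @ v1, 0) and np.isclose(v3 @ v2, 0)
print(v3 / v3[0])                      # [1.0, -2/13, -3/13]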

Problem F03.10.

(a) Take t ∈ R such that t is not an integer multiple of π. Prove that if

A =
[ cos(t)   sin(t) ]
[ −sin(t)  cos(t) ]

then there is no invertible real matrix B such that BAB−1 is diagonal.

(b) Do the same for

A =
[ 1 λ ]
[ 0 1 ]

where λ ∈ R − {0}.

Solution.

(a) A doesn't have any real eigenvalues, so it cannot be diagonalizable in M2(R). Indeed, any eigenvalue λ of A would need to satisfy

(cos(t) − λ)² + sin²(t) = 0 =⇒ cos(t) − λ = ±i sin(t) =⇒ λ = cos(t) ± i sin(t)

which is not real since t is not an integer multiple of π.

(b) The only eigenvalue of A is 1 and it has algebraic multiplicity 2. However, the only eigenvector corresponding to this eigenvalue (up to scaling) is v = (1, 0). Thus the geometric multiplicity of the eigenvalue is 1. Since A has an eigenvalue whose algebraic and geometric multiplicities are unequal, A is not diagonalizable.

Problem S04.7. Let V be a finite dimensional real vector space and U, W ⊂ V be subspaces of V. Show both of the following:

(a) U⁰ ∩ W⁰ = (U + W)⁰

(b) (U ∩ W)⁰ = U⁰ + W⁰

Solution.


(a) Let f ∈ U⁰ ∩ W⁰. Then f ∈ U⁰ and f ∈ W⁰. Take x ∈ U + W. Then x = u + w for some u ∈ U, w ∈ W. We see

f(x) = f(u + w) = f(u) + f(w) = 0 + 0 = 0.

Thus f ∈ (U + W)⁰.

Now take f ∈ (U + W)⁰. For any u ∈ U, we have u ∈ U + W. Then f(u) = 0. Hence, since u was arbitrary, f ∈ U⁰. Similarly, f(w) = 0 for all w ∈ W so f ∈ W⁰. Thus f ∈ U⁰ ∩ W⁰.

We conclude that U⁰ ∩ W⁰ = (U + W)⁰.

(b) Let f ∈ (U ∩ W)⁰. Let {x1, . . . , xk} be a basis for U ∩ W, extend it to a basis {x1, . . . , xk, u1, . . . , uℓ} of U, then to a basis {x1, . . . , xk, u1, . . . , uℓ, w1, . . . , wm} of U + W (the new vectors can be chosen from W), and finally to a basis B of V. Counting dimensions, {x1, . . . , xk, w1, . . . , wm} is a basis of W. Define f1 ∈ V∗ on the basis B by letting f1 agree with f on the ui and on the vectors of B outside U + W, and setting f1 = 0 on the xi and wi; put f2 = f − f1. Then f1 vanishes on a basis of W, so f1 ∈ W⁰. Also f2 vanishes on each ui (where f1 = f) and on each xi (since f(xi) = 0 and f1(xi) = 0), i.e., on a basis of U, so f2 ∈ U⁰. Hence

f = f2 + f1 ∈ U⁰ + W⁰.

Take f ∈ U⁰ + W⁰. Then f = f1 + f2, f1 ∈ U⁰, f2 ∈ W⁰. For any v ∈ U ∩ W, we have v ∈ U and v ∈ W. Then

f(v) = f1(v) + f2(v) = 0 + 0 = 0.

Hence f(v) = 0 for all v ∈ U ∩ W so f ∈ (U ∩ W)⁰.

We conclude (U ∩ W)⁰ = U⁰ + W⁰.

Problem S04.9. Let V be a finite dimensional real inner product space and T : V → V a linear operator. Show the following are equivalent:

(a) (Tx, Ty) = (x, y) for all x, y ∈ V ,

(b) ||Tx|| = ||x|| for all x ∈ V ,

(c) T ∗T = I, where T ∗ is the adjoint of T and I : V → V is the identity map,

(d) TT ∗ = I.

Solution. (a) =⇒ (b) by taking x = y.

Assume (b) is true. Then for any x, y ∈ V,

(x, x) + 2(x, y) + (y, y) = (x + y, x + y)
                          = (Tx + Ty, Tx + Ty)
                          = (Tx, Tx) + 2(Tx, Ty) + (Ty, Ty)
                          = (x, x) + 2(Tx, Ty) + (y, y).


Thus 2(x, y) = 2(Tx, Ty) and so (b) =⇒ (a).

Assume (a), (b) are true. Consider, for any v ∈ V,

((T∗T − I)v, (T∗T − I)v) = (T∗Tv − v, T∗Tv − v)
= (T∗Tv, T∗Tv) − (T∗Tv, v) − (v, T∗Tv) + (v, v)
= (T∗Tv, T∗Tv) − (Tv, Tv) − (Tv, Tv) + (v, v)   [adjoint]
= (T∗Tv, T∗Tv) − (v, v) − (v, v) + (v, v)        [by (b)]
= (T∗Tv, T∗Tv) − (v, v)
= (Tv, TT∗Tv) − (v, v)                            [adjoint]
= (v, T∗Tv) − (v, v)                              [by (a)]
= (Tv, Tv) − (v, v) = 0                           [adjoint and (b)].

Thus (T∗T − I)v = 0 for all v ∈ V so T∗T = I. Thus (b) =⇒ (c).

Assume (c) is true. Then for any v ∈ V,

(v, v) = (v, Iv) = (v, T∗Tv) = (Tv, Tv).

Hence (c) =⇒ (b).

Thus far we have that (a), (b), (c) are equivalent. Assume these hold. Then (a) implies

(TT∗x, TT∗y) = (T∗x, T∗y) = (TT∗x, y) =⇒ (TT∗x, (TT∗ − I)y) = 0

for all x, y ∈ V. From (c) we see that T, T∗ are bijective, thus so is TT∗. Hence (TT∗ − I)y is orthogonal to all of V so (TT∗ − I)y = 0. However y was arbitrary, so this implies that TT∗ − I = 0 so TT∗ = I. Thus (a), (b), (c) imply (d).

Assume (d) is true. Then for any x, y ∈ V, we have

(x, y) = (x, Iy) = (x, TT∗y) = (T∗x, T∗y).

Then

(T ∗Tx, T ∗Ty) = (Tx, Ty) = (T ∗Tx, y) =⇒ (T ∗Tx, (T ∗T − I)y) = 0.

This implies (c) (and thus (a),(b)) by the same reasoning as above.

Problem F04.10. Let T : Rn → Rn be a linear transformation and for λ ∈ C, define the subspace V(λ) = {v ∈ Rn : (T − λI)ᵐv = 0 for some m ≥ 1}. This is called the generalized eigenspace of λ.

1. Prove that for each λ ∈ C there is a fixed N ∈ N such that V(λ) = ker((T − λI)^N).

2. Prove that if λ ≠ µ then V(λ) ∩ V(µ) = {0}. Hint: use the fact that

I = (T − λI)/(µ − λ) + (T − µI)/(λ − µ).

Solution.


1. Let v1, . . . , vk be a basis for V(λ). For i = 1, . . . , k, define Ni ∈ N to be the least m ∈ N such that (T − λI)ᵐvi = 0. Take N = max{N1, . . . , Nk}. Then (T − λI)^N vi = 0 for all i = 1, . . . , k. Since we can build any vector in V(λ) from a linear combination of v1, . . . , vk, we see V(λ) = ker((T − λI)^N).

2. By part 1, there are N1, N2 ∈ N such that

V(λ) = ker((T − λI)^{N1}) and V(µ) = ker((T − µI)^{N2}).

Take M = max{N1, N2}. We see that

I = I^{2M} = ((T − λI)/(µ − λ) + (T − µI)/(λ − µ))^{2M}.

Since (T − λI) and (T − µI) commute with each other, by the binomial theorem, if we expand the right hand side above, each summand will have a factor of (T − λI) or (T − µI) raised to a power greater than or equal to M. Thus if v ∈ V(λ) ∩ V(µ), then

v = Iv = ((T − λI)/(µ − λ) + (T − µI)/(λ − µ))^{2M} v = 0.

Hence V(λ) ∩ V(µ) = {0}.

Problem S05.1. Given n ≥ 1, let tr : Mn(C) → C denote the trace operator:

tr(A) = ∑_{k=1}^n Akk, A ∈ Mn(C).

(a) Determine a basis for the kernel of tr.

(b) For X ∈ Mn(C), show that tr(X) = 0 if and only if there are matrices A1, . . . , Am, B1, . . . , Bm ∈ Mn(C) such that

X = ∑_{j=1}^m (AjBj − BjAj).

Solution.

(a) Since dim(Mn(C)) = n² and since the range of tr is C, which has dimension 1 as a vector space over itself, we know from the Rank-Nullity theorem that dim ker(tr) = n² − 1. Thus to find a basis for the kernel, it is sufficient to find n² − 1 linearly independent matrices on which the trace operator vanishes. For i = 1, . . . , n, j = 1, . . . , n, define Eij ∈ Mn(C) to be the matrix whose entry in the ith row and jth column is 1 and whose other entries are 0. Then the matrices Eij, i ≠ j, are clearly linearly independent and have 0 trace. This constitutes n² − n linearly independent matrices with 0 trace. For n − 1 more, consider Fk = E11 − Ekk, k = 2, . . . , n. These are also linearly independent. Thus the collection

{Eij : i, j ∈ {1, . . . , n}, i ≠ j} ∪ {Fk : k = 2, . . . , n}

forms a basis for the kernel of tr.


(b) It is well-known that tr(AB) = tr(BA) for A, B ∈ Mn(C). Thus if X has the specified form,

tr(X) = ∑_{j=1}^m [tr(AjBj) − tr(BjAj)] = ∑_{j=1}^m [tr(AjBj) − tr(AjBj)] = 0.

Conversely, if X has trace 0, we must have some complex numbers αij, 1 ≤ i, j ≤ n, i ≠ j, and βk, k = 2, . . . , n, such that

X = ∑_{i≠j} αijEij + ∑_{k=2}^n βkFk.

Further, we see that

Eij = EiiEij and 0 = EijEii for i ≠ j.

Thus
αijEij = Eii(αijEij) − (αijEij)Eii, i, j = 1, . . . , n, i ≠ j.

Further,
Fk = E11 − Ekk = E1kEk1 − Ek1E1k, k = 2, . . . , n.

Thus

X = ∑_{i≠j} [Eii(αijEij) − (αijEij)Eii] + ∑_{k=2}^n [(βkE1k)Ek1 − Ek1(βkE1k)].

This is the desired form up to renaming matrices and indices.
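
The key identity tr(AB) = tr(BA), and hence the tracelessness of every commutator, is easy to spot-check; a sketch assuming Python with NumPy:

import numpy as np

rng = np.random.default_rng(3)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
assert np.isclose(np.trace(A @ B - B @ A), 0)   # commutators are traceless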

Problem S05.2. Let V be a finite-dimensional vector space and let V∗ denote the dual space of V. For a set W ⊂ V, define

W⊥ = {f ∈ V ∗ : f(w) = 0 for all w ∈ W} ⊂ V ∗

and for a set U ⊂ V ∗, define

⊥U = {v ∈ V : f(v) = 0 for all f ∈ U} ⊂ V.

(a) Show that for any W ⊂ V , ⊥(W⊥) = Span (W ) .

(b) Let W be a subspace of V. Give an explicit isomorphism between (V/W)∗ and W⊥. Show that it is an isomorphism.

Solution.

(a) Let W ⊂ V. Take x ∈ Span (W). Then there are scalars α1, . . . , αn and vectors w1, . . . , wn ∈ W such that

x = α1w1 + · · ·+ αnwn.

Then for any f ∈ W⊥ we have

f(x) = α1f(w1) + · · ·+ αnf(wn) = α1 · 0 + · · ·+ αn · 0 = 0.


Since f was arbitrary, f(x) = 0 for all f ∈ W⊥. Hence by definition x ∈ ⊥(W⊥). Thus Span (W) ⊂ ⊥(W⊥).

Now suppose x ∉ Span (W). Then a basis of Span (W) together with x is a linearly independent set, which we may extend to a basis of V. Define f ∈ V∗ on this basis by f(x) = 1 and f = 0 on every other basis vector. Then f ∈ W⊥ but f(x) ≠ 0, so x ∉ ⊥(W⊥). By contraposition, ⊥(W⊥) ⊂ Span (W).

Thus ⊥(W⊥) = Span (W).

(b) Define φ : (V/W)∗ → W⊥ by

φ(f) = gf, f ∈ (V/W)∗,

where gf is the functional which sends x to f(x + W). Then for any w ∈ W, we have

[φ(f)](w) = gf(w) = f(w + W) = f(0 + W) = 0.

Thus φ(f) is indeed in W⊥ when f ∈ (V/W)∗. Further, if φ(f) = 0, then

[φ(f)](x) = 0 =⇒ f(x + W) = 0 for all x ∈ V.

Hence f is the zero functional. Thus φ(f) = 0 implies f = 0, which tells us that φ is injective. Then since dim((V/W)∗) = dim(V/W) = dim(V) − dim(W) = dim(W⊥), the injectivity of φ implies bijectivity, so φ is an isomorphism.

Problem S05.3. Let A be a Hermitian n × n complex matrix. Show that if (Av, v) ≥ 0 for all v ∈ Cn then there exists an n × n matrix T such that A = T∗T.

Solution. By the spectral theorem, there is an orthonormal basis {v1, . . . , vn} of Cn consisting of eigenvectors of A. Let λ1, . . . , λn be the corresponding eigenvalues. Since A is Hermitian, all λi are real. Further, for each i,

0 ≤ (Avi, vi) = (λivi, vi) = λi(vi, vi) = λi.

Thus we can define a matrix T by Tvi = √λi vi (recall that a linear operator is uniquely determined by how it treats basis vectors). Since T is diagonal with respect to this orthonormal basis, with real entries √λi, we also have T∗vi = √λi vi. Then

Avi = λivi = √λi (√λi vi) = √λi T∗vi = T∗(√λi vi) = T∗(Tvi) = (T∗T)vi.

Then since A and T∗T agree on basis elements, they are the same matrix.
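
The construction of T from the spectral decomposition can be carried out numerically. A minimal sketch, assuming Python with NumPy (the positive semidefinite A below is a random example):

import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = M.conj().T @ M                     # Hermitian with (Av, v) >= 0

lam, V = np.linalg.eigh(A)             # A = V diag(lam) V*, lam real and >= 0
T = np.diag(np.sqrt(lam.clip(min=0))) @ V.conj().T   # T v_i = sqrt(lam_i) v_i
assert np.allclose(T.conj().T @ T, A)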

Problem S05.4. We say that I ⊂Mn(C) is a two-sided ideal in Mn(C) if

(i) for all A,B ∈ I, A+B ∈ I,

(ii) for all A ∈ I and B ∈Mn(C), AB and BA are in I.


Show that the only two-sided ideals in Mn(C) are {0} and Mn(C) itself.

Solution. Let I be a two-sided ideal in Mn(C) and suppose I contains a non-zero matrix A. By multiplying A on the left and right by permutation matrices, we can transform A into a matrix Ã ∈ I which has a non-zero entry in its first row and first column. Next, multiplying Ã on the left and right by the matrix B with (B)11 = 1 and (B)ij = 0 otherwise, we arrive at a matrix A∗ ∈ I which is zero everywhere except for a non-zero (1,1) entry. Multiplying by a scalar matrix, we can reduce A∗ to a matrix A11 ∈ I which has a 1 in the first row and first column and zeroes elsewhere. Performing similar steps, we can create matrices Aii ∈ I which have a 1 in the ith row and ith column and zeroes elsewhere. Adding all these matrices together, we see that the identity matrix Id belongs to I. Then for any C ∈ Mn(C), we have C = C·Id ∈ I. Hence I = Mn(C). Thus the only two-sided ideals in Mn(C) are {0} and Mn(C).

Problem F05.6.

(a) Prove that if P is a real-coefficient polynomial and A a real-symmetric matrix, then λ is an eigenvalue of A if and only if P(λ) is an eigenvalue of P(A).

(b) Use (a) to prove that if A is real symmetric, then A² is non-negative definite.

(c) Check part (b) by verifying directly that det(A²) and trace(A²) are non-negative when A is real-symmetric.

Solution.

(a) Suppose λ is an eigenvalue of A. Then there is an eigenvector v ≠ 0 such that Av = λv. Then

A²v = A(Av) = A(λv) = λAv = λ²v.

Similarly, Aᵏv = λᵏv for k = 3, 4, 5, . . . . Let P be the polynomial

P(x) = a0 + a1x + · · · + amxᵐ.

Then

P(A)v = a0Iv + a1Av + · · · + amAᵐv
      = a0v + a1λv + · · · + amλᵐv
      = (a0 + a1λ + · · · + amλᵐ)v = P(λ)v.

So P(λ) is an eigenvalue of P(A) (note, we didn't need the assumption that A is real-symmetric for this direction).

By the spectral theorem, A is (orthogonally) similar to a diagonal matrix D. Say A = UDU∗ where U is orthogonal. Then

P(A) = a0I + a1UDU∗ + · · · + am(UDU∗)ᵐ
     = a0UU∗ + a1UDU∗ + · · · + amUDᵐU∗
     = U(a0I + a1D + · · · + amDᵐ)U∗ = UP(D)U∗.


Thus P(A) is similar to P(D). Now if P(λ) is an eigenvalue of P(A) then it is also an eigenvalue of P(D). But P(D) is diagonal so P(λ) lies on the diagonal of P(D) and so λ lies on the diagonal of D and thus is an eigenvalue of D. Then λ is also an eigenvalue of A since A is similar to D (note, this argument also didn't require that A is symmetric, just that A is diagonalizable).

(b) If A is n × n, then there is an orthonormal basis for Rn consisting of eigenvectors {v1, . . . , vn} of A. Suppose the corresponding eigenvalues are λ1, . . . , λn (possibly with repetitions). All these eigenvalues are real since A is symmetric. Also, λ1², . . . , λn² are eigenvalues of A² corresponding to the same eigenvectors. Finally, for any x ∈ Rn, x = α1v1 + · · · + αnvn for some α1, . . . , αn ∈ R. This gives

xᵗA²x = (α1v1 + · · · + αnvn)ᵗ(α1λ1²v1 + · · · + αnλn²vn) = ∑_{i=1}^n αi²λi², by orthonormality.

This last expression is clearly non-negative since all λi and αi are real. Thus A² is non-negative definite.

(c) We see

det(A²) = det(AA) = det(A)det(A) = (det(A))² ≥ 0.

Also, the entries of A² are

(A²)ij = ∑_{k=1}^n aikakj.

Thus the diagonal entries are

(A²)ii = ∑_{k=1}^n aikaki = ∑_{k=1}^n aik², by symmetry.

Thus all diagonal entries of A² are non-negative and so the trace is non-negative.

Problem F05.9. Suppose U,W are subspaces of a finite-dimensional vector space V .

(a) Show that dim(U ∩W ) = dim(U) + dim(W )− dim(Span (U,W )).

(b) Let n = dim(V). Use part (a) to show that if k < n then an intersection of k subspaces of dimension n − 1 always has dimension at least n − k.

Solution.

(a) This is called the dimension formula and is proven as follows.

Let {x1, . . . , xk} be a basis for U ∩ W. Extend this separately to a basis

{x1, . . . , xk, u1, . . . , uℓ} of U and

{x1, . . . , xk, w1, . . . , wm} of W.

Then dim(U ∩ W) = k, dim(U) = k + ℓ and dim(W) = k + m. So it remains to prove that dim(Span (U,W)) = k + ℓ + m; the result will follow. To do this, we just throw all the vectors together and, if there is any justice in the world,

{x1, . . . , xk, u1, . . . , uℓ, w1, . . . , wm}

will form a basis for Span (U,W). Take any y ∈ Span (U,W). Then y can be built from vectors in U and W. But each of those vectors can be built from basis vectors of U and W, and so y can be built using the basis vectors of U and W. That is,

{x1, . . . , xk, u1, . . . , uℓ} ∪ {x1, . . . , xk, w1, . . . , wm} = {x1, . . . , xk, u1, . . . , uℓ, w1, . . . , wm}

forms a spanning set for Span (U,W). Take scalars α1, . . . , αk, β1, . . . , βℓ, γ1, . . . , γm such that

(α1x1 + · · · + αkxk) + (β1u1 + · · · + βℓuℓ) + (γ1w1 + · · · + γmwm) = 0,

and write x, u, w for the three parenthesized sums. Then w = −x − u ∈ U. Also, it is clear w ∈ W. Then w ∈ U ∩ W. Hence there are scalars µ1, . . . , µk such that

w = µ1x1 + · · · + µkxk.

Then
γ1w1 + · · · + γmwm − µ1x1 − · · · − µkxk = 0.

But these vectors form a basis for W and thus a linearly independent set. Thus γ1 = · · · = γm = 0 (and the same for the µi, but these won't matter). Then w = 0 so x + u = 0. But the vectors comprising x and u form a basis for U, thus α1 = · · · = αk = β1 = · · · = βℓ = 0. Thus

{x1, . . . , xk, u1, . . . , uℓ, w1, . . . , wm}

is a linearly independent set.

From this we see that there is a basis for Span (U,W) which has k + ℓ + m elements so dim(Span (U,W)) = k + ℓ + m and the proof is completed.

(b) We prove the claim by induction on k. If k = 1, the result is trivial.

Suppose the result holds for some k ≥ 1. Let V1, . . . , Vk, Vk+1 be subspaces of V of dimension n − 1. Then

dim(V1 ∩ · · · ∩ Vk+1) = dim(Vk+1 ∩ (V1 ∩ · · · ∩ Vk)) = dim(Vk+1) + dim(V1 ∩ · · · ∩ Vk) − dim(Span (Vk+1, V1 ∩ · · · ∩ Vk)),

by the dimension formula. But Span (Vk+1, V1 ∩ · · · ∩ Vk) has dimension at most n, Vk+1 has dimension n − 1 and, by our inductive hypothesis, V1 ∩ · · · ∩ Vk has dimension at least n − k. Then

dim(V1 ∩ · · · ∩ Vk+1) ≥ (n − 1) + (n − k) − n = n − k − 1 = n − (k + 1).

Thus the claim holds for k + 1. This completes the proof.


Problem F05.10.

(a) For n = 2, 3, 4, . . . , is there an n × n matrix A with A^{n−1} ≠ 0 and A^n = 0?

(b) Is there an n × n upper triangular matrix A with A^n ≠ 0 and A^{n+1} = 0?

Solution.

(a) Yes. Let A be the matrix with Ai,i+1 = 1 for i = 1, . . . , n − 1 and Aij = 0 otherwise. That is, 1 on the first superdiagonal and 0 elsewhere. Then Aᵏ has its ones on the kth superdiagonal, so A^{n−1} ≠ 0 while A^n = 0.

(b) No. Suppose A^{n+1} = 0 and suppose that λ is an eigenvalue of A with non-zero eigenvector v. Then

Av = λv =⇒ A²v = λAv = λ²v =⇒ · · · =⇒ A^{n+1}v = λ^nAv = λ^{n+1}v.

But A^{n+1} = 0 so λ^{n+1}v = 0. Then v ≠ 0 implies that λ^{n+1} = 0 which gives λ = 0. Thus all eigenvalues of A are zero. Then the minimal polynomial of A has only zero as a root and thus mA(x) = xᵏ for some k ∈ N. However, the degree of mA is at most n. So

mA(A) = 0 =⇒ Aᵏ = 0 for some k ≤ n =⇒ Aⁿ = 0.

[Note: the assumption that A is upper triangular is unnecessary.]
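
The superdiagonal example in (a) is easy to check; a sketch assuming Python with NumPy:

import numpy as np

n = 5
A = np.diag(np.ones(n - 1), k=1)       # ones on the first superdiagonal
assert np.linalg.matrix_power(A, n - 1).any()    # A^(n-1) != 0
assert not np.linalg.matrix_power(A, n).any()    # A^n = 0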

Problem S06.7. Prove that if a, λ ∈ C with a ≠ 0, then

T =
[ 1 a 0 ]
[ 0 1 a ]
[ 0 0 λ ]

is not diagonalizable.

Solution. A matrix is diagonalizable if and only if the geometric multiplicity of each eigenvalue is equal to the algebraic multiplicity of the eigenvalue. Here 1 is an eigenvalue of T of algebraic multiplicity at least 2, but

(T − I)v = 0 =⇒
[ 0 a 0   ] [ v1 ]   [ 0 ]
[ 0 0 a   ] [ v2 ] = [ 0 ]
[ 0 0 λ−1 ] [ v3 ]   [ 0 ].

Then, since a ≠ 0, v2 = v3 = 0 and so v = (1, 0, 0)ᵗ spans the eigenspace corresponding to the eigenvalue 1. Hence the geometric multiplicity of this eigenvalue is only 1. Hence the matrix is not diagonalizable.

Problem S06.8. A linear transformation T is called orthogonal if it is non-singular and Tᵗ = T−1. Prove that if T : R2n+1 → R2n+1 is orthogonal, then there is v ∈ R2n+1 such that T(v) = v or T(v) = −v.


Solution. The characteristic polynomial pT(t) is an odd degree polynomial over R so there is a real root λ. Then there is a non-zero vector v ∈ R2n+1 such that T(v) = λv so

λ²(v, v) = (λv, λv) = (T(v), T(v)) = (v, TᵗT(v)) = (v, Iv) = (v, v).

Since (v, v) ≠ 0, this gives λ² = 1 so λ = ±1 and hence T(v) = ±v.

Problem S06.9. Let S be a real symmetric matrix.

(a) Prove that all eigenvalues of S are real.

(b) State and prove the spectral theorem.

Solution.

(a) Let λ ∈ C be an eigenvalue of S with non-zero eigenvector v. Then

λ(v, v) = (λv, v) = (Sv, v) = (v, Sv) = (v, λv) = λ̄(v, v).

Hence since v ≠ 0, we have λ = λ̄ so λ ∈ R.

(b) Spectral Theorem. Let S be a symmetric matrix in Mn(R), or more generally a Hermitian matrix in Mn(C); write F = R or F = C accordingly. Then there is an orthonormal basis for Fn consisting of eigenvectors of S. In particular, S is orthogonally (respectively unitarily) diagonalizable.

Proof. We prove this by induction on n, the dimension of the space that S operates on.

For n = 1, the statement is obvious since the operator S simply scales vectors.

Assume the statement holds whenever such a matrix operates on an (n − 1)-dimensional space. If S is an n × n matrix, then it operates on Fn. By the fundamental theorem of algebra, there is an eigenvalue λ of S and, by the proof in (a), λ ∈ R. Let 0 ≠ v ∈ Fn be a corresponding eigenvector; without loss of generality, ||v|| = 1. Then v⊥ = {x ∈ Fn : (x, v) = 0} is an (n − 1)-dimensional subspace of Fn. Also, if x ∈ v⊥, then

(Sx, v) = (x, Sv) = (x, λv) = λ(x, v) = 0,

using that λ is real. Hence v⊥ is an S-invariant subspace. Thus S|v⊥ is a matrix of the same kind operating on an (n − 1)-dimensional space. By the inductive hypothesis, there is an orthonormal basis {x1, . . . , xn−1} of v⊥ which consists of eigenvectors of S|v⊥ and thus of eigenvectors of S. Then {x1, . . . , xn−1, v} is an orthonormal set in Fn (and thus a basis for Fn) which consists of eigenvectors of S. In particular, if λ1, . . . , λn−1, λ are the corresponding eigenvalues (there can be repetitions) and we put P = [x1 · · · xn−1 v], then

SP = P diag(λ1, . . . , λn−1, λ) =: PD,

so S = PDP−1, and since the columns of P are orthonormal, P−1 = P∗ (= Pᵗ when F = R). Thus S = PDP∗ and S is orthogonally diagonalizable.


Problem S06.10. Let Y be an arbitrary set of commuting matrices in Mn(C). Prove that there exists a non-zero vector v ∈ Cn which is a common eigenvector of all matrices in Y.

Solution. We argue by induction on n. If n = 1, any non-zero v ∈ C works. Suppose the claim holds in all dimensions smaller than n. If every matrix in Y is a scalar multiple of the identity, then any non-zero v ∈ Cn is a common eigenvector. Otherwise, fix A ∈ Y which is not scalar. By the fundamental theorem of algebra, A has an eigenvalue λ; let Eλ = ker(A − λI), so that 0 < dim(Eλ) < n. For any x ∈ Eλ and any B ∈ Y,

Ax = λx =⇒ BAx = B(λx) =⇒ A(Bx) = λ(Bx),

since AB = BA. Hence Bx ∈ Eλ; that is, Eλ is invariant under every B ∈ Y. The restrictions B|Eλ, B ∈ Y, form a commuting family of operators on the lower-dimensional space Eλ, so by the inductive hypothesis they have a common eigenvector v ≠ 0. Since v ∈ Eλ, v is also an eigenvector of A. Thus v is a common eigenvector of all matrices in Y.

Problem W06.7. Let V be a complex inner product space. State and prove the Cauchy-Schwarz inequality for V .

Solution.

Cauchy-Schwarz Inequality. Let V be an inner product space over C and v, w ∈ V. Then

|(v, w)| ≤ ||v|| ||w||

with equality if and only if v, w are linearly dependent.

Proof. If w = 0, the inequality is trivially satisfied (it is actually equality and v, w are linearly dependent so all statements hold). Assume w ≠ 0. For any c ∈ C, we have

0 ≤ (v − cw, v − cw) = (v, v) − c̄(v, w) − c(w, v) + |c|²(w, w).

Putting c = (v, w)/(w, w) (possible since w ≠ 0) we see

0 ≤ (v, v) − |(v, w)|²/(w, w) − |(v, w)|²/(w, w) + |(v, w)|²(w, w)/(w, w)² =⇒ |(v, w)|²/(w, w) ≤ (v, v).

Multiplying by (w, w), we see

|(v, w)|² ≤ (v, v)(w, w) = ||v||² ||w||² =⇒ |(v, w)| ≤ ||v|| ||w||.

We also notice that equality holds if and only if

(v − cw, v − cw) = 0

which holds if and only if v = cw.

Problem W06.8. Let T : V → W be a linear transformation of finite dimensional inner product spaces. Show that there exists a unique linear transformation Tᵗ : W → V such that

(Tv, w)W = (v, Tᵗw)V for all v ∈ V, w ∈ W.


Solution. We prove uniqueness first. Suppose there are linear maps R, S : W → V both satisfying the condition. Then for all v ∈ V, w ∈ W,

(Tv, w)W = (v, Rw)V = (v, Sw)V =⇒ (v, (R − S)w)V = 0.

Thus (R − S)w = 0 since it is orthogonal to all of V. However, w was arbitrary so this implies R = S.

For existence, let {v1, . . . , vn} and {w1, . . . , wm} be orthonormal bases of V and W respectively. If T has matrix A with respect to these bases, define Tᵗ to have matrix Aᵗ with respect to these bases. Then Tᵗ satisfies the condition.

Problem W06.9. Let A ∈ M3(R) be invertible and satisfy A = Aᵗ and det(A) = 1. Prove that 1 is an eigenvalue of A.

Solution. The conclusion isn't actually true. The matrix

A =
[ 1/2 0   0 ]
[ 0   1/3 0 ]
[ 0   0   6 ]

is symmetric and has determinant 1 but doesn't have 1 as an eigenvalue.

It is likely that we were supposed to assume that Aᵗ = A−1 rather than Aᵗ = A. In this case, see S03.9.

Problem S07.2. Let U, V, W be n-dimensional vector spaces and T : U → V, S : V → W be linear transformations. Prove that if S ◦ T : U → W is invertible, then both T, S are invertible.

Solution. First suppose that T is not invertible. Then the kernel of T is nontrivial, so there is a non-zero vector u ∈ U such that T(u) = 0. But then S ◦ T(u) = 0, so the kernel of S ◦ T is nontrivial and so S ◦ T is not invertible; a contradiction. Thus T is invertible.

Now assume that S is not invertible. Then the kernel of S is nontrivial so there is a non-zero vector v ∈ V such that S(v) = 0. However, T is invertible, and thus surjective, so there is u ∈ U such that T(u) = v (and u ≠ 0 since v ≠ 0). Then S ◦ T(u) = S(T(u)) = S(v) = 0, so the kernel of S ◦ T is nontrivial and S ◦ T is not invertible; a contradiction. Hence S is invertible.

Problem S07.3. Consider the space of infinite sequences of real numbers

S = {(a0, a1, a2, . . .) : an ∈ R, n = 0, 1, 2, . . .}.

For each pair of real numbers A and B, prove that the set of solutions (x0, x1, x2, . . .) of the linear recursion xn+2 = Axn+1 + Bxn, n = 0, 1, 2, . . . , is a subspace of S of dimension 2.

Solution. Let S0 be the set of all solutions to the recurrence relation. We first show that S0 is indeed a subspace of S. Take x, y ∈ S0. Put z = x + y. Then

zn+2 = xn+2+yn+2 = Axn+1+Bxn+Ayn+1+Byn = A(xn+1+yn+1)+B(xn+yn) = Azn+1+Bzn.


Thus z ∈ S0. Next for α ∈ R and x ∈ S0, put w = αx. Then

wn+2 = αxn+2 = α(Axn+1 +Bxn) = A(αxn+1) +B(αxn) = Awn+1 +Bwn.

Thus w ∈ S0. Hence S0 is closed under addition and multiplication by scalars and is thus a subspace of S.

Next we show that if u, v ∈ S0 are such that u0 = 1, u1 = 0, v0 = 0, v1 = 1, then u, v form a basis for S0. It is clear that they are linearly independent, and for x = (x0, x1, x2, . . .) ∈ S0 we have x = x0u + x1v. Indeed, x0u + x1v agrees with x in the first and second entries. Assume that it agrees with x in the (n + 1)th and (n + 2)th entries for some n ≥ 0. Then

xn+1 = x0un+1 + x1vn+1 and xn+2 = x0un+2 + x1vn+2.

Consider

xn+3 = Axn+2 +Bxn+1

= A(x0un+2 + x1vn+2) +B(x0un+1 + x1vn+1)

= x0(Aun+2 +Bun+1) + x1(Avn+2 +Bvn+1)

= x0un+3 + x1vn+3.

Hence by induction it is true that x = x0u+ x1v. Thus u, v span S0. Hence dim(S0) = 2.
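
The basis {u, v} from the proof can be checked termwise. A small sketch, assuming Python with NumPy (A, B and the initial data are arbitrary choices):

import numpy as np

A, B, N = 1.0, 2.0, 12

def seq(x0, x1):
    # first N terms of x_{n+2} = A x_{n+1} + B x_n
    x = [x0, x1]
    for _ in range(N - 2):
        x.append(A * x[-1] + B * x[-2])
    return np.array(x)

u, v = seq(1.0, 0.0), seq(0.0, 1.0)    # the basis from the proof
x = seq(3.0, -5.0)                     # an arbitrary solution
assert np.allclose(x, 3.0 * u - 5.0 * v)   # x = x0 u + x1 v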

Problem S07.4. Suppose that A is a symmetric n × n real matrix and let λ1, . . . , λℓ be the distinct eigenvalues of A. Find the sets

X = {x ∈ Rn : lim_{k→∞} (xᵗA^{2k}x)^{1/k} exists}

and

L = {lim_{k→∞} (xᵗA^{2k}x)^{1/k} : x ∈ X}.

Solution. We prove that X = Rn and L = {0, λ1², . . . , λℓ²}.

By taking the zero vector for x, it is clear that the zero vector is in X and zero is in L. We'll focus on non-zero vectors henceforth.

Since A is symmetric, by the spectral theorem there is an orthonormal basis {x1, . . . , xn} for Rn consisting of eigenvectors of A. Suppose that µ1, . . . , µn ∈ R are the corresponding eigenvalues [the eigenvalues are real since A is symmetric; also repetitions are allowed so this is a slightly different list from λ1, . . . , λℓ]. Let 0 ≠ x ∈ Rn. Then there are α1, . . . , αn ∈ R (not all zero) such that

x = α1x1 + · · · + αnxn.

Then
A^{2k}x = α1A^{2k}x1 + · · · + αnA^{2k}xn = α1µ1^{2k}x1 + · · · + αnµn^{2k}xn

so by orthonormality,
xᵗA^{2k}x = α1²µ1^{2k} + · · · + αn²µn^{2k}.

From here, it is clear that
xᵗA^{2k}x ≥ αj²µj^{2k}

for all j = 1, . . . , n.

Let m ∈ {1, . . . , n} be an index satisfying αm ≠ 0 and such that if αj ≠ 0 for some j = 1, . . . , n, then |µj| ≤ |µm|. That is, m is the index of the largest of the µ's which has a non-zero coefficient in the representation of x in this basis (note: such m always exists; it is not necessarily unique, but that won't matter). Putting C = max{α1², . . . , αn²}, we get

xᵗA^{2k}x = α1²µ1^{2k} + · · · + αn²µn^{2k} ≤ nCµm^{2k}.

Then using our two bounds, we see that

(αm²)^{1/k} µm² ≤ (xᵗA^{2k}x)^{1/k} ≤ (nC)^{1/k} µm².

Thus by the squeeze theorem, lim_{k→∞} (xᵗA^{2k}x)^{1/k} = µm².

This shows that X = Rn. It also shows that for every non-zero x ∈ Rn, we have lim_{k→∞} (xᵗA^{2k}x)^{1/k} = µj² for some j = 1, . . . , n. Thus the squared eigenvalues of A (together with zero) are the only possible values of the limit. To see that each squared eigenvalue is indeed achieved, notice that

(xjᵗA^{2k}xj)^{1/k} = (µj^{2k})^{1/k} = µj²

for all k ∈ N. Thus lim_{k→∞} (xjᵗA^{2k}xj)^{1/k} = µj² for each j = 1, . . . , n.

Problem S07.5. Let T be a normal linear operator on a finite dimensional complex inner product space V. Prove that if v is an eigenvector of T, then v is also an eigenvector of the adjoint T∗.

Solution. First, consider: if S is any normal operator and x ∈ Cn, then

(Sx, Sx) = (x, S∗Sx) = (x, SS∗x) = (S∗x, S∗x).

Thus ||Sx|| = ||S∗x|| for all x ∈ Cn.

Let v be an eigenvector of T with corresponding eigenvalue λ ∈ C. Since T is normal, so is T − λI. Then

0 = ||(T − λI)v|| = ||(T − λI)∗v|| = ||(T∗ − λ̄I)v||.

Hence T∗v = λ̄v so v is an eigenvector of T∗ corresponding to the eigenvalue λ̄.

Problem F07.3. Let V be a vector space and T a linear transformation such that Tv and v are linearly dependent for every v ∈ V. Prove that T is a scalar multiple of the identity transformation.

Solution. If dim(V) = 1, the result is trivial. Assume that dim(V) > 1. Suppose x, y ∈ V are linearly independent. Then there are scalars α, β, γ such that

Tx = αx, Ty = βy, T (x+ y) = γ(x+ y).


Then
0 = (γ − α)x + (γ − β)y.

But x, y were taken to be linearly independent, so α = γ = β. Thus any two linearly independent vectors have the same associated scalar α; and if v = cx is a scalar multiple of x, then Tv = cTx = cαx = αv as well. Hence Tv = αv for all v ∈ V, so T = αI.

Problem F07.12.

(a) Suppose that x0 < x1 < . . . < xn are points in [a, b]. Define linear functionals on Pn (the space of all polynomials of degree less than or equal to n) by

ℓj(p) = p(xj), j = 0, 1, . . . , n, p ∈ Pn.

Show that the set {ℓj}_{j=0}^n is linearly independent.

(b) Show that there are unique coefficients cj ∈ R such that

∫_a^b p(t) dt = ∑_{j=0}^n cjℓj(p)

for all p ∈ Pn.

Solution.

(a) Let α0, α1, . . . , αn ∈ R be such that

α0ℓ0 + α1ℓ1 + · · · + αnℓn = 0.

Put

pi(x) = ∏_{m=0, m≠i}^n (x − xm), i = 0, 1, . . . , n.

Then each pi is a polynomial of degree n and ℓi(pi) ≠ 0 since xi ≠ xm when i ≠ m. However, ℓj(pi) = 0 when i ≠ j since (x − xj) is a factor of pi when i ≠ j. Thus for each i = 0, 1, . . . , n,

α0ℓ0(pi) + α1ℓ1(pi) + · · · + αnℓn(pi) = 0 =⇒ αiℓi(pi) = 0 =⇒ αi = 0.

Hence ℓ0, ℓ1, . . . , ℓn are linearly independent.

(b) For any finite dimensional vector space V, we know dim(V) = dim(V∗). Here Pn has dimension n + 1 and we have found n + 1 linearly independent members of (Pn)∗. Thus ℓ0, ℓ1, . . . , ℓn form a basis for (Pn)∗. Since ℓ(p) = ∫_a^b p(x) dx, p ∈ Pn, defines a linear functional on Pn, we know there are unique c0, c1, . . . , cn ∈ R such that ℓ = c0ℓ0 + c1ℓ1 + · · · + cnℓn. Hence

∫_a^b p(x) dx = ∑_{j=0}^n cjℓj(p) for all p ∈ Pn.
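
Concretely, the cj can be computed by imposing exactness on the monomials 1, t, . . . , tⁿ, which is an invertible Vandermonde system since the xj are distinct. A sketch, assuming Python with NumPy (the interval, nodes and test polynomial are arbitrary choices):

import numpy as np
from numpy.polynomial import polynomial as P

a, b = -1.0, 1.0
nodes = np.array([-1.0, -0.3, 0.2, 1.0])   # x_0 < ... < x_n in [a, b]
n = len(nodes) - 1

# Solve sum_j c_j x_j^k = integral of t^k over [a, b], k = 0, ..., n.
V = np.vander(nodes, increasing=True).T    # row k holds nodes**k
moments = [(b**(k + 1) - a**(k + 1)) / (k + 1) for k in range(n + 1)]
c = np.linalg.solve(V, moments)

coef = np.array([2.0, -1.0, 0.5, 3.0])     # p(t) = 2 - t + 0.5 t^2 + 3 t^3
anti = P.polyint(coef)                     # antiderivative of p
assert np.isclose(P.polyval(b, anti) - P.polyval(a, anti),
                  c @ P.polyval(nodes, coef))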


Problem S08.8. Assume that V is an n-dimensional vector space and that T is a linear transformation T : V → V such that T² = T. Prove that every v ∈ V can be written uniquely as v = v1 + v2 such that T(v1) = v1 and T(v2) = 0.

Solution. Note that if v ∈ im(T), then v = T(w) for some w ∈ V and so

T(v) = T²(w) = T(w) = v.

Thus T fixes members of its image, so T(v − T(v)) = 0 for all v ∈ V.

For any v ∈ V, write v = T(v) + (v − T(v)) =: v1 + v2. It's clear that this is one way to represent v in the desired way. Assume that v = v1 + v2 = x1 + x2, where T(v1) = v1, T(x1) = x1 and T(v2) = 0, T(x2) = 0. Then

T(v) = T(v1) + T(v2) = T(x1) + T(x2) =⇒ T(v1) = T(x1) =⇒ v1 = x1.

Then v1 + v2 = v1 + x2 =⇒ v2 = x2. This gives uniqueness of such a representation.

Problem S08.9. Let V be a finite-dimensional vector space over R.

(a) Show that if V has odd dimension and T : V → V is a real linear transformation, then T has a non-zero eigenvector v ∈ V.

(b) Show that for every even positive integer n, there is a vector space V over R of dimension n and a real linear transformation T : V → V such that there is no non-zero v ∈ V that satisfies T(v) = λv for some λ ∈ R.

Solution.

(a) The characteristic polynomial pT of T is a polynomial over R of degree dim(V), which is odd. Every odd degree polynomial over R has a root in R. We know this root (say λ ∈ R) is an eigenvalue of T and thus there is a non-zero eigenvector v ∈ V such that T(v) = λv. Hence T has an eigenvector.

(b) Let V = Rn for positive even n and let T be the left multiplication operator for

A =
[ 0 0 0 · · · 0 −1 ]
[ 1 0 0 · · · 0 0  ]
[ 0 1 0 · · · 0 0  ]
[ 0 0 1 · · · 0 0  ]
[ ⋮          ⋱   ⋮ ]
[ 0 0 0 · · · 1 0  ].

It is easy to see that T has characteristic polynomial pT(t) = (−t)ⁿ + (−1)ⁿ. But n is even, so pT(t) = tⁿ + 1. This equation has no roots in R when n is even and so there is no real eigenvalue and thus no non-zero v ∈ Rn such that T(v) = λv for some λ ∈ R.


Problem F08.7. Suppose that T is a complex n × n matrix and that λ1, . . . , λk are distinct eigenvalues of T with corresponding eigenvectors v1, . . . , vk. Show that v1, . . . , vk are linearly independent.

Solution. We use induction.

First, it is clear that {v1} is a linearly independent set because eigenvectors are necessarily non-zero.

Now assume that {v1, . . . , vℓ} is linearly independent for some 1 ≤ ℓ < k. Let α1, . . . , αℓ, αℓ+1 ∈ C be such that

α1v1 + · · · + αℓvℓ + αℓ+1vℓ+1 = 0.

Then
(T − λℓ+1I)(α1v1 + · · · + αℓvℓ + αℓ+1vℓ+1) = 0.

But (T − λℓ+1I)vi = (λi − λℓ+1)vi for i = 1, . . . , ℓ and (T − λℓ+1I)vℓ+1 = 0. Thus

α1(λ1 − λℓ+1)v1 + · · · + αℓ(λℓ − λℓ+1)vℓ = 0.

But these vectors are linearly independent by our inductive hypothesis, so α1(λ1 − λℓ+1) = · · · = αℓ(λℓ − λℓ+1) = 0. However, since the eigenvalues are distinct, this implies that α1 = · · · = αℓ = 0. Hence

0 = α1v1 + · · · + αℓvℓ + αℓ+1vℓ+1 = αℓ+1vℓ+1 =⇒ αℓ+1 = 0.

Thus {v1, . . . , vℓ, vℓ+1} is a linearly independent set.

By induction, we conclude that {v1, . . . , vk} is linearly independent.

Problem F08.8. Must the eigenvectors of a linear transformation T : Cn → Cn span Cn?

Solution. No. Take any non-diagonalizable matrix A ∈ Mn(C) and let T(x) = Ax, x ∈ Cn. For example, take

A =
[ 1 1 0 ]
[ 0 1 1 ]
[ 0 0 2 ].

Then, up to multiplication by scalars, the eigenvectors of A (and thus T) are

v1 = (1, 0, 0)ᵗ and v2 = (1, 1, 1)ᵗ,

which clearly do not span C3.

Problem F08.9.

(a) Prove that any linear transformation T : Cn → Cn must have an eigenvector.

(b) Is (a) true for any linear transformation T : Rn → Rn?


Solution.

(a) By the fundamental theorem of algebra, pT(t) = det(T − tI) has a root λ ∈ C. Then det(T − λI) = 0 so (T − λI) is a singular transformation. Hence there is a non-zero v ∈ Cn such that (T − λI)(v) = 0 or T(v) = λv; that is, v is an eigenvector of T.

(b) No. Let

A =
[ 0 −1 ]
[ 1 0  ]

and define T : R2 → R2 by T(x) = Ax, x ∈ R2. Suppose T has an eigenvector 0 ≠ v ∈ R2. Then there is λ ∈ R such that T(v) = λv. If v = (v1, v2), this implies that −v2 = λv1 and v1 = λv2, where at least one of v1, v2 is non-zero. If v1 = 0, then the first equation gives −v2 = λv1 = 0, so v = 0, a contradiction; so assume v1 ≠ 0. Then by the second equation, neither λ nor v2 is zero. Also, by the first equation

−v2 = λv1 = λ²v2 =⇒ λ² = −1,

a contradiction to the fact that λ ∈ R. Hence there is no eigenvector for T.

Problem F08.11. Consider the Poisson equation with periodic boundary condition

∂²u/∂x² = f, x ∈ (0, 1),
u(0) = u(1).

A second order accurate approximation to the problem is given by Au = ∆x²f where

A =
[ −2  1  0  · · ·  0  1  ]
[ 1  −2  1  0  · · ·  0  ]
[ 0  1  −2  1  0  · · ·  ]
[        ⋱   ⋱   ⋱       ]
[ 0  · · ·  0  1  −2  1  ]
[ 1  0  · · ·  0  1  −2  ],

u = [u0, u1, . . . , un−1]ᵗ, f = [f0, f1, . . . , fn−1]ᵗ and ui ≈ u(xi) with xi = i∆x, ∆x = 1/n and fi = f(xi) for i = 0, 1, . . . , n − 1.

(a) Show that A is singular.

(b) What conditions must f satisfy so that a solution exists?

Solution.

(a) Notice Av = 0 when v = [1 1 · · · 1]ᵗ, so A has a nontrivial null space and is thus singular (in fact, up to scalar multiplication, this is the only vector in the null space of A).

(b) We need f in the range of A for a solution to exist. The Fredholm alternative tells us that range(A) = null(Aᵗ)⊥. But Aᵗ = A, so we simply need f ∈ null(A)⊥. By (a), this is equivalent to

(v, f) = 0 ⇐⇒ f0 + f1 + · · · + fn−1 = 0.
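
Both parts can be checked numerically. A sketch, assuming Python with NumPy (n is an arbitrary choice):

import numpy as np

n = 8
A = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
A[0, -1] = A[-1, 0] = 1                    # periodic wrap-around entries

assert np.allclose(A @ np.ones(n), 0)      # (a): A [1 ... 1]^t = 0, so A is singular
assert np.linalg.matrix_rank(A) == n - 1

f = np.random.default_rng(5).standard_normal(n)
f -= f.mean()                              # (b): enforce f_0 + ... + f_{n-1} = 0
u, *_ = np.linalg.lstsq(A, f, rcond=None)
assert np.allclose(A @ u, f)               # a solution exists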


Problem F08.12. Consider the least squares problem

min_{x∈Rn} ||Ax − b||

where A ∈ Rm×n, b ∈ Rm and m ≥ n. Prove that if x and x + αz (α ≠ 0) are both minimizers, then z ∈ null(A).

Solution. Write b = b1 + b2 where b1 is the projection of b onto im(A); i.e., b1 is the unique vector in im(A) such that b1 − b ∈ im(A)⊥. We know that b1 is the closest vector to b which lies in im(A). Then the minimizers of ||Ax − b|| are exactly those vectors x ∈ Rn such that Ax = b1. Let x be such a minimizer. Then for any y ∈ Rn, Ay ∈ im(A) and so (Ay, Ax − b) = (Ay, b1 − b) = 0. But then

0 = (Ay, Ax − b) = (y, A∗(Ax − b)).

Since y is arbitrary, this implies that A∗(Ax − b) is orthogonal to all of Rn so A∗(Ax − b) = 0 or A∗Ax = A∗b. That is, any minimizer x of ||Ax − b|| must satisfy the normal equations: A∗Ax = A∗b.

Assume that both x and x + αz, α ≠ 0, are minimizers of ||Ax − b||. Then

A∗b = A∗Ax = A∗A(x + αz) =⇒ αA∗Az = 0 =⇒ A∗Az = 0.

Taking the inner product of A∗Az with z, we see

(z, A∗Az) = 0 =⇒ (Az, Az) = 0 =⇒ ||Az||² = 0 =⇒ Az = 0.

Thus z ∈ null(A).
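
A quick numerical illustration, assuming Python with NumPy (the rank-deficient A is constructed by hand so that null(A) is non-trivial):

import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((6, 3))
A[:, 2] = A[:, 0] + A[:, 1]                # force the null vector z = (1, 1, -1)^t
b = rng.standard_normal(6)
z = np.array([1.0, 1.0, -1.0])
assert np.allclose(A @ z, 0)

x, *_ = np.linalg.lstsq(A, b, rcond=None)  # one minimizer of ||Ax - b||
# x + a z is a minimizer for every a: the residual is unchanged.
assert np.isclose(np.linalg.norm(A @ x - b),
                  np.linalg.norm(A @ (x + 2.5 * z) - b))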

Problem F09.4. Let V be a finite dimensional inner product space and let U be a subspace of V. Show that dim(U) + dim(U⊥) = dim(V).

Solution. Suppose dim(U) = n. Then dim(V) = n + k for some k ∈ N0. If k = 0, then U = V and the result holds trivially. Assume k > 0. Let {x1, . . . , xn} form an orthonormal basis for U. We can extend this to an orthonormal basis {x1, . . . , xn, y1, . . . , yk} for V. If we can prove that {y1, . . . , yk} is an orthonormal basis for U⊥, then we will be done.

Since the set {x1, . . . , xn, y1, . . . , yk} is orthonormal, for any j = 1, . . . , k and i = 1, . . . , n, we have (yj, xi) = 0. Hence yj is orthogonal to each basis member for U and hence orthogonal to U; that is, yj ∈ U⊥ for each j.

Take y ∈ U⊥. Then y ∈ V so there are scalars α1, . . . , αn, β1, . . . , βk such that

y = α1x1 + · · · + αnxn + β1y1 + · · · + βkyk.

Since each xi ∈ U, we have (y, xi) = 0. Taking the inner product of y with each xi in turn, we find αi = 0 for each i. Thus

y = β1y1 + · · · + βkyk.

Hence {y1, . . . , yk} is a spanning set for U⊥; it is also a linearly independent set since it is part of the basis for V. Thus {y1, . . . , yk} is a basis for U⊥ so dim(U⊥) = k and the claim is proven.


Problem F09.5. Show that if α1, . . . , αn ∈ R are all different and some b1, . . . , bn ∈ R satisfy

∑_{i=1}^n bie^{αit} = 0 for all t ∈ (−1, 1),

then necessarily b1 = · · · = bn = 0.

Solution. Let T : C∞(−1, 1)→ C∞(−1, 1) be defined by

(Tf)(t) = f ′(t), t ∈ (−1, 1).

We see that for each αi, if we define fi ∈ C∞(−1, 1) by fi(t) = e^{αit}, t ∈ (−1, 1), then Tfi = αifi. Hence the functions fi are eigenvectors of T corresponding to different eigenvalues. But eigenvectors of a linear operator corresponding to different eigenvalues are linearly independent. Hence

b1f1 + · · ·+ bnfn = 0 =⇒ b1 = · · · = bn = 0

which is the desired conclusion.

Problem F09.12. Let n ≥ 2 and let V be an n-dimensional vector space over C with a set of basis vectors e1, . . . , en. Let T be the linear transformation of V satisfying

T (ei) = ei+1, i = 1, . . . , n− 1 and T (en) = e1.

(a) Show that T has 1 as an eigenvalue. Find an eigenvector with eigenvalue 1 and show that it is unique up to scaling.

(b) Is T diagonalizable?

Solution.

(a) Let x = e1 + e2 + · · · + en ∈ V. Then T(x) = T(e1) + T(e2) + · · · + T(en) = e2 + e3 + · · · + e1 = e1 + e2 + · · · + en = x. Further x ≠ 0 since the basis vectors are linearly independent. Thus x is an eigenvector of T with eigenvalue 1. Conversely, if v is an eigenvector of T corresponding to eigenvalue 1, then T(v) = v. Also there are α1, . . . , αn ∈ C such that v = α1e1 + · · · + αnen. Then

α1e1 + · · · + αnen = v = T(v) = α1e2 + α2e3 + · · · + αne1

which gives
(α1 − αn)e1 + (α2 − α1)e2 + · · · + (αn − αn−1)en = 0.

But these vectors are linearly independent so

α1 − αn = 0, α2 − α1 = 0, . . . , αn − αn−1 = 0

which gives α1 = · · · = αn. Call this value α ∈ C. Then v = αe1 + · · · + αen = αx. Hence the eigenvector is unique up to scaling.


(b) Since T(ej) = ej+1 for all j = 1, . . . , n − 1 and T(en) = e1, the matrix [T]B is given by

[T]B =
[ 0 0 0 · · · 0 1 ]
[ 1 0 0 · · · 0 0 ]
[ 0 1 0 · · · 0 0 ]
[ ⋮          ⋱  ⋮ ]
[ 0 0 0 · · · 1 0 ].

Then pT(t) = det([T]B − tI) = ±(tⁿ − 1). But tⁿ − 1 has n distinct roots (the nth roots of unity) for all n. Thus the eigenvalues of T are distinct and so T is diagonalizable.

Problem S10.1. Let u1, . . . , un be an orthonormal basis for Rn and let y1, . . . , yn be a collection of vectors in Rn such that ∑_{j=1}^n ||yj||² < 1. Show that u1 + y1, . . . , un + yn form a basis for Rn.

Solution. Since invertible linear maps send one basis to another, it suffices to show that there is an invertible linear map L : Rn → Rn such that L(uj) = uj + yj for all j = 1, . . . , n. Define a linear map T : Rn → Rn by T(uj) = −yj, j = 1, . . . , n (a linear map is uniquely determined by how it treats basis elements).

Let x ∈ Rn. Then there are scalars α1, . . . , αn such that x = ∑_{j=1}^n αjuj. Then, by the triangle inequality and the Cauchy-Schwarz inequality,

||T(x)|| = ||∑_{j=1}^n αjT(uj)|| = ||∑_{j=1}^n αjyj|| ≤ ∑_{j=1}^n |αj| ||yj|| ≤ (∑_{j=1}^n |αj|²)^{1/2} (∑_{j=1}^n ||yj||²)^{1/2}.

But by orthonormality of u1, . . . , un, we have ||x|| = (∑_{j=1}^n |αj|²)^{1/2}. Thus ||T|| ≤ (∑_{j=1}^n ||yj||²)^{1/2} < 1. But ||T|| < 1 implies that I − T is invertible (its inverse is given by the Neumann series ∑_{k≥0} Tᵏ). Further, we see that (I − T)(uj) = uj + yj, j = 1, . . . , n. Hence u1 + y1, . . . , un + yn form a basis for Rn.

Problem S10.3. Let S, T be two normal transformations in the complex finite dimensional inner product space V such that ST = TS. Prove that there is a basis for V consisting of vectors which are simultaneous eigenvectors of S and T.

Solution. Suppose that n = dim(V). Let λ1, . . . , λk be the distinct eigenvalues of S. Then by the spectral theorem, the eigenvectors of S can be taken to form an orthonormal basis for V, which means that

Eλ1 ⊕ · · · ⊕ Eλk = V.

Consider, for v ∈ Eλi, we have Sv = λiv. Then

TSv = T(λiv) =⇒ STv = λiTv =⇒ S(Tv) = λi(Tv).

Thus Tv ∈ Eλi. Hence Eλi is T-invariant. Then T|Eλi is a normal operator on Eλi, so by the spectral theorem there is an orthonormal basis {v_1^{(i)}, . . . , v_{ℓi}^{(i)}} for Eλi consisting of eigenvectors of T. Then

∪_{i=1}^k {v_1^{(i)}, . . . , v_{ℓi}^{(i)}}

is a basis of V consisting of simultaneous eigenvectors of both S and T.

Problem S10.4.

(i) Let A be a real symmetric n × n matrix such that x^tAx ≤ 0 for every x ∈ Rn. Prove that trace(A) = 0 implies A = 0.

(ii) Let T be a linear transformation in the complex finite dimensional vector space V with an inner product. Suppose that TT* = 4T − 3I where I is the identity transformation. Prove that T is Hermitian positive definite and find all possible eigenvalues of T.

Solution. I have no idea why these two questions are grouped into one problem. As far as I can see, they have nothing to do with each other, and each has an elementary solution completely independent of the other.

(i) Let ei be the standard basis vectors. Then

e_i^t A e_i ≤ 0 =⇒ aii ≤ 0.

Then if the sum of the diagonal entries is zero, each diagonal entry must be zero; i.e., aii = 0, i = 1, . . . , n. Next,

(e_i + e_j)^t A (e_i + e_j) ≤ 0 =⇒ aii + aij + aji + ajj ≤ 0 =⇒ aij + aji ≤ 0.

But since A is symmetric, this implies aij ≤ 0. Also

(e_i − e_j)^t A (e_i − e_j) = aii − aij − aji + ajj ≤ 0

which gives aij ≥ 0. Thus aij = 0. Since i, j were arbitrary, this gives A = 0.

(ii) We see (TT*)* = (T*)*T* = TT*. Thus

(4T − 3I)* = 4T − 3I =⇒ 4T* − 3I = 4T − 3I =⇒ T* = T

so T is Hermitian. Then

T = (1/4)TT* + (3/4)I = (1/4)T² + (3/4)I.


Then for any x ∈ V,

(Tx, x) = (1/4)(T²x, x) + (3/4)(x, x) = (1/4)(Tx, Tx) + (3/4)(x, x) ≥ 0

since inner products are positive definite. Further, if (Tx, x) = 0 then (Tx, Tx) = 0 and (x, x) = 0. The latter can only happen if x = 0. Thus T is positive definite.

The functional equation for T gives

T² − 4T + 3I = 0 =⇒ (T − I)(T − 3I) = 0.

Since (t − 1)(t − 3) annihilates T, the minimal polynomial of T must divide (t − 1)(t − 3), and every eigenvalue of T is a root of the minimal polynomial. Thus the only possible eigenvalues of T are 1 and 3.

Problem S10.6. Let A = [ 4 −4 ; 1 0 ] (writing 2 × 2 matrices as [ row1 ; row2 ]).

(i) Find a Jordan form J of A and a matrix P such that P⁻¹AP = J.

(ii) Compute A¹⁰⁰ and J¹⁰⁰.

(iii) Find a formula for a_n when a_0 = a, a_1 = b and a_{n+1} = 4a_n − 4a_{n−1}.

Solution.

(i) The characteristic polynomial of A is pA(x) = x(x − 4) + 4 = x² − 4x + 4 = (x − 2)². Thus the sole eigenvalue of A is 2. We see N = A − 2I is not the zero matrix. Thus there is x ≠ 0 so that Nx ≠ 0. We see by inspection that x = (1, 0)^t gives Nx = (2, 1)^t. Put

P = [ 2 1 ; 1 0 ].

Then

P⁻¹ = −[ 0 −1 ; −1 2 ] = [ 0 1 ; 1 −2 ]

and

P⁻¹AP = [ 0 1 ; 1 −2 ][ 4 −4 ; 1 0 ][ 2 1 ; 1 0 ] = [ 1 0 ; 2 −4 ][ 2 1 ; 1 0 ] = [ 2 1 ; 0 2 ] =: J.

(ii) We see

J² = [ 2 1 ; 0 2 ][ 2 1 ; 0 2 ] = [ 4 4 ; 0 4 ]

and

J³ = [ 4 4 ; 0 4 ][ 2 1 ; 0 2 ] = [ 8 12 ; 0 8 ].

From these, it is reasonable to guess that

J^n = [ 2^n  n2^{n−1} ; 0  2^n ].


Indeed, assuming this holds for J^n, we see

J^{n+1} = [ 2^n  n2^{n−1} ; 0  2^n ][ 2 1 ; 0 2 ] = [ 2^{n+1}  2^n + n2^n ; 0  2^{n+1} ] = [ 2^{n+1}  (n+1)2^n ; 0  2^{n+1} ].

Thus the formula holds by induction. Then A = PJP⁻¹ =⇒ A^n = PJ^nP⁻¹. So

A^n = [ 2 1 ; 1 0 ][ 2^n  n2^{n−1} ; 0  2^n ][ 0 1 ; 1 −2 ] = [ 2^{n+1}  (n+1)2^n ; 2^n  n2^{n−1} ][ 0 1 ; 1 −2 ] = [ (n+1)2^n  −n2^{n+1} ; n2^{n−1}  −(n−1)2^n ].

Then

A¹⁰⁰ = [ 101·2¹⁰⁰  −100·2¹⁰¹ ; 100·2⁹⁹  −99·2¹⁰⁰ ]  and  J¹⁰⁰ = [ 2¹⁰⁰  100·2⁹⁹ ; 0  2¹⁰⁰ ].

(iii) The sequence a_n satisfies

A^n (b, a)^t = (a_{n+1}, a_n)^t,

so a_n = n2^{n−1}b − (n − 1)2^n a.

Problem F10.5. Prove or disprove the following two statements. For any two subsets U, W of a vector space V,

(a) Span (U) ∩ Span (W ) = Span (U ∩W )

(b) Span (U) + Span (W ) = Span (U ∪W )

Solution.

(a) The statement is false. The sets

U = { (1, 0)^t, (0, 1)^t, (0, 0)^t }  and  W = { (1, 1)^t, (1, −1)^t, (0, 0)^t }

provide a counterexample since

Span(U) ∩ Span(W) = R² ∩ R² = R² ≠ { (0, 0)^t } = Span((0, 0)^t) = Span(U ∩ W).

(b) The statement is true. Take x + y ∈ Span(U) + Span(W); then x ∈ Span(U) =⇒ x ∈ Span(U ∪ W) and likewise for y. Hence, since Span(U ∪ W) is a vector space, we have x + y ∈ Span(U ∪ W) and so Span(U) + Span(W) ⊂ Span(U ∪ W).

Conversely, Span(U) + Span(W) clearly contains U ∪ W, but Span(U ∪ W) is the smallest vector space containing U ∪ W, so it must be the case that Span(U ∪ W) ⊂ Span(U) + Span(W).


Problem F10.6. Let T be an invertible linear map on a finite dimensional vector space V over a field F. Prove there is a polynomial f ∈ F[x] such that T⁻¹ = f(T).

Solution. By the Cayley-Hamilton Theorem, the linear operator satisfies pT(T) = 0 where pT is the characteristic polynomial. Put

pT(x) = α0 + α1x + · · · + αnx^n.

Since T is invertible, x = 0 is not a root of pT(x) and thus α0 ≠ 0. Then

pT(T) = α0I + α1T + · · · + αnT^n = 0

where I is the identity map on V. Then

−α0I = α1T + · · · + αnT^n =⇒ T⁻¹ = −(α1/α0)I − (α2/α0)T − · · · − (αn/α0)T^{n−1}.

Hence T⁻¹ is a polynomial expression in T.
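
[Note: the construction above is completely effective; here is an illustrative SymPy sketch (not part of the exam solution) on a sample invertible matrix.]

    import sympy as sp

    T = sp.Matrix([[2, 1], [0, 3]])       # invertible, det = 6
    x = sp.symbols('x')
    coeffs = T.charpoly(x).all_coeffs()   # [alpha_n, ..., alpha_1, alpha_0]
    a0 = coeffs[-1]
    n = T.shape[0]
    # T^{-1} = -(alpha_1/alpha_0) I - (alpha_2/alpha_0) T - ... - (alpha_n/alpha_0) T^{n-1}
    Tinv = sp.zeros(n, n)
    for k in range(1, n + 1):
        Tinv -= coeffs[n - k] / a0 * T**(k - 1)
    assert Tinv == T.inv()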

Problem F10.7. Let V, W be inner product spaces over C such that dim(V) ≤ dim(W) < ∞. Prove that there is a linear transformation T : V → W satisfying (T(x), T(y))W = (x, y)V for all x, y ∈ V.

Solution. Suppose n ∈ N and k ∈ N ∪ {0} are such that dim(V) = n, dim(W) = n + k. Let {v1, . . . , vn} be an orthonormal basis for V and {w1, . . . , wn, wn+1, . . . , wn+k} be an orthonormal basis for W. Define T : V → W such that T(vj) = wj for j = 1, . . . , n and T is linear (a linear map is completely determined by how it treats basis elements). For any x, y ∈ V, there are α1, . . . , αn, β1, . . . , βn ∈ C such that

x = α1v1 + · · · + αnvn,  y = β1v1 + · · · + βnvn.

Then

(T(x), T(y))W = (α1w1 + · · · + αnwn, β1w1 + · · · + βnwn)W = ∑_{i=1}^n ∑_{j=1}^n αi β̄j (wi, wj)W = ∑_{i=1}^n αi β̄i

by orthonormality. Also,

(x, y)V = (α1v1 + · · · + αnvn, β1v1 + · · · + βnvn)V = ∑_{i=1}^n ∑_{j=1}^n αi β̄j (vi, vj)V = ∑_{i=1}^n αi β̄i.

Thus (T(x), T(y))W = (x, y)V for all x, y ∈ V.

Problem F10.9. Consider the following iterative method:

x_{k+1} = A⁻¹(Bx_k + c)

where

A = [ 2 0 ; 0 2 ],  B = [ 2 1 ; 1 2 ],  c = (1, 1)^t.


(a) Assume the iteration converges. To what vector x does the iteration converge?

(b) Does the iteration converge for arbitrary initial vectors?

Solution.

(a) Assuming the iteration converges to x = (y, z)^t, we must have

x = A⁻¹(Bx + c) =⇒ y = y + (1/2)z + 1/2  and  z = (1/2)y + z + 1/2.

Thus x = (−1, −1)^t.

(b) Putting x0 = (a0, a0)^t for some a0 ∈ R, we see that xn = (an, an)^t, n ∈ N, where

an = (1/2)(3a_{n−1} + 1) = (3/2)a_{n−1} + 1/2, n ∈ N.

Then

2an = 3a_{n−1} + 1 =⇒ 2an + 2 = 3a_{n−1} + 3, n ∈ N.

Putting bn = an + 1, n ∈ N ∪ {0}, we get that

2bn = 3b_{n−1}, n ∈ N

from which an easy induction yields

bn = (3/2)^n b0, n ∈ N.

Then

an = (3/2)^n (a0 + 1) − 1, n ∈ N.

From here it is clear that the solution blows up as n → ∞ unless a0 = −1. Thus the iteration does not converge for any initial vector of the form x0 = (a0, a0)^t for a0 ∈ R \ {−1}.

[Note: a classmate informs me that the iteration diverges for any initial vector other than x0 = (−1, −1)^t, though I couldn't be bothered to prove this.]

Problem F10.8. Let U, W be subspaces of a finite dimensional inner product space V. Prove that (U ∩ W)⊥ = U⊥ + W⊥.

Solution. Instead we prove that U ∩ W = (U⊥ + W⊥)⊥. This is sufficient because then

(U ∩ W)⊥ = ((U⊥ + W⊥)⊥)⊥ = U⊥ + W⊥

since the spaces are all finite-dimensional.

Let x ∈ U ∩ W. Then x ∈ U and x ∈ W. If y ∈ U⊥ + W⊥, then y = y1 + y2 with y1 ∈ U⊥, y2 ∈ W⊥ and we see

(x, y) = (x, y1) + (x, y2) = 0 + 0 = 0.

Hence x is orthogonal to all vectors in U⊥ + W⊥ and so x ∈ (U⊥ + W⊥)⊥. Thus U ∩ W ⊂ (U⊥ + W⊥)⊥.

Let x ∈ (U⊥ + W⊥)⊥. Take y ∈ U⊥. Then y ∈ U⊥ + W⊥ and so (x, y) = 0. Thus x is orthogonal to every member of U⊥ so x ∈ (U⊥)⊥ = U. Likewise x ∈ (W⊥)⊥ = W. Thus x ∈ U ∩ W so (U⊥ + W⊥)⊥ ⊂ U ∩ W.

Hence U ∩ W = (U⊥ + W⊥)⊥ and the result follows.

Problem S11.2. Show that a positive power of an invertible matrix is diagonalizable if and only if the matrix itself is diagonalizable.

Solution. Suppose that A ∈ Mn(C) is invertible.

If we suppose that A is diagonalizable, then there are diagonal D and invertible P in Mn(C) such that A = PDP⁻¹. But then A^m = PD^mP⁻¹ for all m ∈ N and D^m is still diagonal, so A^m is diagonalizable for all m ∈ N (the assumption that A is invertible is not necessary for this direction).

Now suppose there is m ∈ N such that A^m is diagonalizable. Then the minimal polynomial of A^m has the form

m_{A^m}(x) = (x − λ1) · · · (x − λk)

where λ1, . . . , λk ∈ C are distinct and λi ≠ 0 for i = 1, . . . , k since A (and thus A^m) is invertible. Let

p(x) = m_{A^m}(x^m) = (x^m − λ1) · · · (x^m − λk).

Then p(A) = m_{A^m}(A^m) = 0. Hence, the minimal polynomial of A must divide p(x). Let x0 be a root of p(x). Then x0^m = λi for some i = 1, . . . , k. Suppose that x0^m = λ1 (the proof is identical in the other cases, but the details become more tedious to write down). Then by the product rule,

p′(x0) = m x0^{m−1}(x0^m − λ2) · · · (x0^m − λk)
       + m x0^{m−1}(x0^m − λ1)(x0^m − λ3) · · · (x0^m − λk)
       + · · ·
       + m x0^{m−1}(x0^m − λ1) · · · (x0^m − λ_{k−1})

where there are k summands and the ith summand omits (x0^m − λi). However, all of the summands except the first vanish since x0^m − λ1 = 0. Thus

p′(x0) = m x0^{m−1}(x0^m − λ2) · · · (x0^m − λk).

However, x0^m = λ1 ≠ λ2, . . . , λk and x0 ≠ 0 since λ1 ≠ 0. Thus p′(x0) ≠ 0 and so p(x) has only simple roots. Since mA(x) (the minimal polynomial of A) divides p(x), it must have only simple roots as well. Hence

mA(x) = (x − µ1) · · · (x − µℓ)

where µ1, . . . , µℓ are distinct. This implies that A is diagonalizable.

Problem S11.5. Let A be an n × n matrix with real entries and let b ∈ Rn. Prove that there exists x ∈ Rn such that Ax = b if and only if b is in the orthocomplement of the kernel of the transpose of A.

Solution. "There exists x ∈ Rn such that Ax = b" is an equivalent statement to b ∈ Col(A), and "b is in the orthocomplement of the kernel of the transpose of A" is the same as b ∈ Null(A^t)⊥. So the question is asking us to prove that

Col(A) = Null(A^t)⊥.

We prove this a bit more generally. Let V be an n-dimensional inner product space and let T : V → V be a linear transform with adjoint T*. We prove that

im(T) = ker(T*)⊥.

This clearly subsumes the above equality by letting V = Rn and T(x) = Ax, x ∈ Rn.

Let u ∈ ker(T*) and v ∈ im(T). Then T(x) = v for some x ∈ V and

(v, u) = (T(x), u) = (x, T*(u)) = (x, 0) = 0.

Hence u is orthogonal to every member of im(T), so u ∈ im(T)⊥. Thus ker(T*) ⊂ im(T)⊥.

Let u ∈ im(T)⊥. Then u is orthogonal to every member of im(T). But TT*(u) ∈ im(T), so

0 = (u, TT*(u)) = (T*(u), T*(u)) = ||T*(u)||² =⇒ T*(u) = 0.

Thus u ∈ ker(T*) and so im(T)⊥ ⊂ ker(T*). We conclude that

im(T)⊥ = ker(T*).

Since the two subspaces are equal, their orthocomplements are equal and thus

(im(T)⊥)⊥ = ker(T*)⊥.

But for any subspace U of a finite-dimensional inner product space, we have (U⊥)⊥ = U. Thus

im(T) = ker(T*)⊥,

which completes the proof.

Note, for a finite dimensional vector space V and a linear operator T : V → V, we have

ker(T) = im(T*)⊥,  ker(T*) = im(T)⊥,  im(T) = ker(T*)⊥,  im(T*) = ker(T)⊥.

If V is not finite dimensional, in general we can only say

ker(T) = im(T*)⊥,  ker(T*) = im(T)⊥,  im(T) ⊂ ker(T*)⊥,  im(T*) ⊂ ker(T)⊥.


Problem S11.6. Let V, W be finite dimensional real inner product spaces and let A : V → W be a linear transform. Fix w ∈ W. Show that the elements v ∈ V for which the norm ||Av − w|| is minimal are exactly the solutions to the equation A*Av = A*w.

Solution. Write w = w1 + w2 where w1 ∈ im(A) and w2 ∈ im(A)⊥; that is, w1 is the orthogonal projection of w onto im(A), and we know that w1 is the vector in im(A) closest to w. That is, the minimizers of ||Av − w|| are those v ∈ V such that Av = w1. We must show that these are exactly the solutions of the normal equation.

Suppose that v ∈ V is such that Av = w1. We know that w2 = w − w1 = w − Av is orthogonal to the image of A. Thus for all x ∈ V,

(Ax, Av − w) = 0 =⇒ (x, A*(Av − w)) = 0.

Thus A*(Av − w) is orthogonal to the whole space V, and thus A*(Av − w) = 0 and so A*Av = A*w.

Conversely, suppose A*Av = A*w. It suffices to show that Av = w1 because w1 is the closest member of im(A) to w. Consider

(Av − w1, Av − w1) = (Av − w1, Av − (w − w2))
= (Av − w1, (Av − w) + w2)
= (Av, Av − w) + (Av, w2) − (w1, Av − w) − (w1, w2)
= (v, A*Av − A*w) + (Av, w2) − (w1, Av − w) − (w1, w2).

Further, we know that w2 is orthogonal to the image of A. Hence (Av, w2) = (w1, w2) = 0. Also A*Av − A*w = 0, so

(Av − w1, Av − w1) = −(w1, Av − w).

However, w1 is in im(A). Thus w1 = Ax for some x ∈ V. Hence

(Av − w1, Av − w1) = −(Ax, Av − w) = −(x, A*Av − A*w) = 0.

Thus ||Av − w1|| = 0, so Av = w1 and so v is a minimizer of ||Av − w||.
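
[Note: this is the standard least-squares characterization. An illustrative NumPy sketch (not part of the exam solution) comparing the normal-equation solution with a direct least-squares minimizer:]

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(7, 3))              # full column rank with probability 1
    w = rng.normal(size=7)

    v_normal = np.linalg.solve(A.T @ A, A.T @ w)      # solve A*A v = A*w
    v_lstsq, *_ = np.linalg.lstsq(A, w, rcond=None)   # minimize ||Av - w||
    assert np.allclose(v_normal, v_lstsq)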

Problem F11.8. Assume that a complex matrix A satisfies

ker(A − λI) = ker((A − λI)²)

for all λ ∈ C. Prove from first principles (i.e., without using canonical forms) that A is diagonalizable.

Solution. Let λ be an eigenvalue of A. Suppose λ is a root of the minimal polynomial mA of multiplicity ≥ 2. That is,

mA(x) = (x − λ)²q(x)

for some polynomial q. Then for any vector v,

mA(A)v = 0 =⇒ (A − λI)²q(A)v = 0.

But ker(A − λI) = ker((A − λI)²), so

(A − λI)q(A)v = 0.

Since this holds for all vectors v, this shows that (A − λI)q(A) = 0. But (x − λ)q(x) has degree smaller than mA, which contradicts the minimality of mA. Thus every root of the minimal polynomial of A is simple, so A is diagonalizable.

Problem F11.9. Let V be a finite dimensional complex inner product space and let L : V → V be a self-adjoint linear operator. Suppose µ ∈ C, ε > 0 are given and assume there is a unit vector x ∈ V such that

||L(x)− µx|| ≤ ε.

Prove there is an eigenvalue λ of L such that |λ− µ| ≤ ε.

Solution. By the spectral theorem, there is an orthonormal basis {v1, . . . , vn} of V consisting of eigenvectors of L (here, n = dim(V)). Let λ1, . . . , λn be the corresponding eigenvalues, respectively; they are real since L is self-adjoint. Then x = ∑_{i=1}^n (x, vi)vi and

L(x) = ∑_{i=1}^n (x, vi)L(vi) = ∑_{i=1}^n (x, vi)λi vi.

Expanding ||L(x) − µx||² = ||∑_{i=1}^n (x, vi)(λi − µ)vi||² and using orthonormality, the cross terms vanish and we are left with

||L(x) − µx||² = ∑_{i=1}^n |(x, vi)|² |λi − µ|².

If for every i we had

|λi − µ| > ε

then

||L(x) − µx||² > ε² ∑_{i=1}^n |(x, vi)|² = ε² ||x||² = ε²

since x is a unit vector. However, this contradicts our assumption that ||L(x) − µx|| ≤ ε. Hence there is some i = 1, . . . , n such that |λi − µ| ≤ ε.


Problem F11.10. Let A be a 3 × 3 real matrix with A³ = I. Show that A is similar to a matrix of the form

[ 1 0 0 ]
[ 0 cos θ − sin θ ]
[ 0 sin θ cos θ ]

What values of θ are possible?

Solution. From A³ = I, we get det(A)³ = 1 so det(A) = 1. Also, since the characteristic polynomial of A has degree 3, it has a real root. Thus A has a real eigenvalue λ. We see that λ³ is an eigenvalue of A³ = I and thus λ³ = 1 and so λ = 1. Let v be a corresponding eigenvector.

If all eigenvalues of A are real then they are all 1 by the reasoning above. Since A³ = I, we know A is diagonalizable (its minimal polynomial divides x³ − 1, which has distinct roots) and thus A is similar to I. But this implies A = I. Then A is of the above form with θ = 0 so the claim clearly holds.

Otherwise, since complex eigenvalues of a real matrix come in conjugate pairs, besides 1, A has eigenvalues of the form µ, µ̄ where µ ∈ C. Then 1 = det(A) = λµµ̄ = |µ|², and so µ = e^{iθ} = cos θ + i sin θ for some θ ∈ [0, 2π). The eigenvectors corresponding to µ, µ̄ will also be a conjugate pair. Let the eigenvector of µ be w = w1 + iw2, where w1, w2 ∈ R³. Then

Aw = µw =⇒ Aw1 = cos θ w1 − sin θ w2 and Aw2 = sin θ w1 + cos θ w2.

Put P = [v w1 w2]. Then

AP = [Av | Aw1 | Aw2] = [v | cos θ w1 − sin θ w2 | sin θ w1 + cos θ w2] = P
[ 1 0 0 ]
[ 0 cos θ − sin θ ]
[ 0 sin θ cos θ ]

which shows that A is similar to a matrix of the desired form. Finally, since A³ = I we must have µ³ = e^{3iθ} = 1, so the possible values of θ in [0, 2π) are 0, 2π/3 and 4π/3 (θ = 0 occurring exactly when A = I).

Problem F11.11.

(a) State and prove the rank-nullity theorem.

(b) Suppose U, V, W are finite dimensional vector spaces over R and that T : U → V and S : V → W are linear operators. Suppose that T is injective, S is surjective and S ◦ T = 0. Prove that im(T) ⊂ ker(S) and that dim(V) − dim(U) − dim(W) = dim(ker(S)/im(T)).

(a) Rank-Nullity Theorem. Let U, V be finite dimensional vector spaces and T : U → V be a linear map. Then

dim(im(T )) + dim(ker(T )) = dim(U).

Proof. Assume dim(ker(T)) = n and dim(U) = n + k. Let {x1, . . . , xn} be a basis for ker(T). We can extend this to a basis of U: {x1, . . . , xn, u1, . . . , uk}. Take v ∈ im(T).


Then v = T(u) for some u ∈ U. But we have a basis for U, so there are scalars α1, . . . , αn, β1, . . . , βk such that

u = α1x1 + · · · + αnxn + β1u1 + · · · + βkuk.

Then

v = T(u) = α1T(x1) + · · · + αnT(xn) + β1T(u1) + · · · + βkT(uk) = β1T(u1) + · · · + βkT(uk)

(since xi ∈ ker(T), i = 1, . . . , n). Thus {T(u1), . . . , T(uk)} is a spanning set for im(T). Now assume that γ1, . . . , γk are scalars such that

γ1T(u1) + · · · + γkT(uk) = 0.

Then

T(γ1u1 + · · · + γkuk) = 0 =⇒ γ1u1 + · · · + γkuk ∈ ker(T).

Then there are scalars δ1, . . . , δn such that

γ1u1 + · · · + γkuk = δ1x1 + · · · + δnxn =⇒ γ1u1 + · · · + γkuk − δ1x1 − · · · − δnxn = 0.

But {x1, . . . , xn, u1, . . . , uk} form a basis and are thus linearly independent. Hence γ1 = · · · = γk = 0, so {T(u1), . . . , T(uk)} is a linearly independent set.

Thus {T(u1), . . . , T(uk)} is a basis for im(T), so dim(im(T)) = k and the result is proven.

(b) Take x ∈ im(T). Then x = T(u) for some u ∈ U. But then S(x) = (S ◦ T)(u) = 0 since S ◦ T is the zero transformation. Thus x ∈ ker(S), so im(T) ⊂ ker(S).

Apply the rank-nullity theorem to T and S to see

dim(im(T)) + dim(ker(T)) = dim(U),
dim(im(S)) + dim(ker(S)) = dim(V).

But dim(ker(T)) = 0 since T is injective and dim(im(S)) = dim(W) because S is surjective. Hence

dim(im(T)) = dim(U),
dim(ker(S)) = dim(V) − dim(W).

Then

dim(ker(S)/im(T)) = dim(ker(S)) − dim(im(T)) = dim(V) − dim(W) − dim(U)

which is exactly the conclusion we needed to prove.


Problem S12.7. Let F be a finite field of p elements and V be an n-dimensional vector space over F. Compute the number of invertible linear maps from V → V.

Solution. The matrix form of any invertible map must be invertible and thus must have linearly independent columns. The choice for elements in the first column is arbitrary except they can't all be zero. Thus there are p^n − 1 choices. The next column simply cannot be a scalar multiple of the first. Thus there are p^n − p choices. The next column cannot be a linear combination of the first two, so there are p^n − p · p = p^n − p² choices. Similarly, there are p^n − p^{k−1} choices for the kth column. Multiplying these, we see there are

(p^n − 1)(p^n − p) · · · (p^n − p^{n−1}) = ∏_{k=0}^{n−1} (p^n − p^k)

invertible linear maps on V.
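
[Note: for small p and n the formula can be verified by brute force; an illustrative Python sketch, not part of the exam solution.]

    import itertools
    import numpy as np

    def count_invertible(p, n):
        # Count n x n matrices over F_p whose determinant is nonzero mod p.
        count = 0
        for entries in itertools.product(range(p), repeat=n * n):
            M = np.array(entries).reshape(n, n)
            if round(np.linalg.det(M)) % p != 0:
                count += 1
        return count

    p, n = 3, 2
    assert count_invertible(p, n) == np.prod([p**n - p**k for k in range(n)]) == 48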

Problem S12.9. Let a_1 = 1, a_2 = 4, a_{n+2} = 4a_{n+1} − 3a_n for all n ≥ 1. Find a 2 × 2 matrix A such that

A^n (1, 0)^t = (a_{n+1}, a_n)^t.

Use the eigenvalues of A to determine the limit

lim_{n→∞} (a_n)^{1/n}.

Solution. From n = 1, we see that A has the form

A = [ 4 b ; 1 d ].

Then

A² = [ 16 + b  4b + bd ; 4 + d  b + d² ].

From n = 2,

A² (1, 0)^t = (13, 4)^t =⇒ (16 + b, 4 + d)^t = (13, 4)^t,

so b = −3, d = 0 and

A = [ 4 −3 ; 1 0 ].

We see that det(A − λI) = λ(λ − 4) + 3 = (λ − 1)(λ − 3). Thus λ1 = 1 and λ2 = 3 are eigenvalues of A with corresponding eigenvectors

v1 = (1, 1)^t,  v2 = (3, 1)^t.

Notice that

(1, 0)^t = −(1/2)(1, 1)^t + (1/2)(3, 1)^t =: x1 + x2,

where x1, x2 are still eigenvectors corresponding to λ1, λ2 respectively. Finally, for any n ≥ 1,

(a_{n+1}, a_n)^t = A^n (1, 0)^t = A^n(x1 + x2) = λ1^n x1 + λ2^n x2,

which gives a_n = (1/2)3^n − 1/2. Thus

lim_{n→∞} (a_n)^{1/n} = 3.

Problem S12.11.

(a) Find a polynomial P (x) of degree 2 such that P (A) = 0 for

A = [ 1 3 ; 4 2 ].

(b) Prove that P (x) from part (a) is unique up to scalar multiplication.

Solution.

(a) The characteristic polynomial of A is P(x) = (x − 1)(x − 2) − 12 = x² − 3x − 10. We check

P(A) = A² − 3A − 10I = [ 13 9 ; 12 16 ] − [ 3 9 ; 12 6 ] − [ 10 0 ; 0 10 ] = [ 0 0 ; 0 0 ] = 0.

(We knew to check the characteristic polynomial because the Cayley-Hamilton Theorem tells us that the characteristic polynomial of a matrix always annihilates the matrix.)

(b) It is clear that there is no first degree polynomial Q such that Q(A) = 0 because A is not a scalar multiple of the identity.

Suppose Q(A) = 0. By the above, this implies that Q is a second degree polynomial. Then there is α ∈ R such that αQ is a monic polynomial. Then (P − αQ)(A) = 0 and P − αQ is a first degree polynomial or a constant polynomial. However, as stated above, there is no first degree polynomial which annihilates A. Hence P − αQ is constant, so (P − αQ)(A) = 0 =⇒ P − αQ = 0. Thus P = αQ, so the polynomial is unique up to multiplication by a constant.

Problem F12.7. Let A be an invertible m × m matrix over C and suppose the set of powers A^n of A is bounded for n ∈ Z. Prove that A is diagonalizable.

Solution. Let λ ∈ C be an eigenvalue of A. Then λ ≠ 0 since A is invertible. Further, there is a unit vector v ∈ C^m such that Av = λv. Then A^n v = λ^n v for all n ∈ Z. Then we see

||A^n|| ≥ ||A^n v|| = ||λ^n v|| = |λ|^n.

If |λ| ≠ 1, then |λ|^n is unbounded for n ∈ Z (as n → ∞ if |λ| > 1, or n → −∞ if |λ| < 1) and thus A^n would be unbounded for n ∈ Z; a contradiction. Thus all eigenvalues λ of A satisfy |λ| = 1.


Assume A is not diagonalizable. Then the Jordan canonical form of A has a block of size at least 2. Then there is an eigenvalue λ of A and two vectors v, w such that Av = λv and Aw = v + λw. This implies

A²w = Av + λAw = λv + λ(v + λw) = 2λv + λ²w.

Further,

A³w = 2λAv + λ²Aw = 2λ²v + λ²(v + λw) = 3λ²v + λ³w.

Indeed, proceeding by induction, we have

A^n w = nλ^{n−1}v + λ^n w.

Then, since |λ| = 1,

||A^n|| ≥ ||A^n w|| = ||nλ^{n−1}v + λ^n w|| ≥ |nλ^{n−1}| ||v|| − |λ^n| ||w|| = n ||v|| − ||w||.

Since this holds for all n ∈ N, letting n → ∞ shows that the powers of A are not bounded, a contradiction. Thus A is diagonalizable.

Problem F12.10. Let A be a linear operator on a four dimensional complex vector space V that satisfies the polynomial equation P(A) = A⁴ + 2A³ − 2A − I = 0, where I is the identity operator on V. Suppose that |tr(A)| = 2 and that dim(range(A + I)) = 2. Give a Jordan canonical form of A.

Solution. Let's factor P(x). By inspection, we see that 1 is a root. So P(x) = (x − 1)(x³ + αx² + βx + 1). Expanding, we see α − 1 = 2 and β − α = 0 (from the x³ and x² terms respectively). Then α = β = 3 so P(x) = (x − 1)(x + 1)³. Thus the possible eigenvalues of A are 1, −1. By the rank-nullity theorem, dim(ker(A + I)) = 4 − dim(range(A + I)) = 2, so the geometric multiplicity of −1 is 2 and its algebraic multiplicity is at least 2. If −1 has multiplicity m then tr(A) = (4 − m) − m = 4 − 2m, so |tr(A)| = 2 forces m = 3. Since the geometric multiplicity (2) is less than the algebraic multiplicity (3), A is not diagonalizable. A canonical form has 3 Jordan blocks; two of these are 1 × 1 blocks containing 1 and −1. The third block is 2 × 2 with −1 as the two diagonal elements and 1 on the superdiagonal. That is, a canonical form J of A is

J =
[ 1 0 0 0 ]
[ 0 −1 0 0 ]
[ 0 0 −1 1 ]
[ 0 0 0 −1 ]

Problem F12.12. Let M be an n × m matrix. Prove that the row rank of M equals the column rank of M. Interpret this result as an equality of the dimensions of two vector spaces naturally attached to the map defined by M.

Solution. Let T : Rm → Rn be the associated linear map: T(x) = Mx, x ∈ Rm. Also define T* : Rn → Rm by T*(y) = M^t y, y ∈ Rn.


Then im(T) = col(M) and im(T*) = col(M^t) = row(M). Thus the problem boils down to showing that dim(im(T)) = dim(im(T*)).

Let y1, . . . , yk ∈ Rn be a basis for im(T). Then T*(y1), . . . , T*(yk) ∈ im(T*). Take α1, . . . , αk ∈ R such that

α1T*(y1) + · · · + αkT*(yk) = 0.

Then

T*(α1y1 + · · · + αkyk) = 0

so α1y1 + · · · + αkyk ∈ ker(T*). Recall that ker(T*) = im(T)⊥, so α1y1 + · · · + αkyk ∈ im(T)⊥. However, since y1, . . . , yk ∈ im(T), we know that α1y1 + · · · + αkyk ∈ im(T). Hence

α1y1 + · · · + αkyk ∈ im(T) ∩ im(T)⊥

so

α1y1 + · · · + αkyk = 0.

But y1, . . . , yk form a basis, so α1 = · · · = αk = 0. Hence T*(y1), . . . , T*(yk) are linearly independent, so dim(im(T*)) ≥ k = dim(im(T)).

Using the same argument but starting with a basis of im(T*) shows that dim(im(T*)) ≤ dim(im(T)); thus the values are equal and so the row rank of M is equal to its column rank.

The interpretation as an equality of dimensions of two vector spaces naturally attached to M would be

dim(im(T)) = dim(ker(T)⊥)  or  dim(col(M)) = dim(null(M)⊥)

since T is the left multiplication operator for M.

Problem S13.8. Let V, W be finite dimensional inner product spaces and T : V → W be a linear map.

(a) Define the adjoint map T* : W → V.

(b) Show that if the matrices are written relative to orthonormal bases of V and W then the matrix of T* is the transpose of the matrix of T.

(c) Show that the kernel of T* is the orthogonal complement of the range of T.

(d) Use (b) and (c) to prove that the row rank of a matrix is the same as the column rank of the matrix.

Solution.

(a) The adjoint T ∗ : W → V is the unique linear map such that

(Tv, w)W = (v, T ∗w)V

for all v ∈ V, w ∈ W, where (·, ·)V, (·, ·)W are the inner products on V and W respectively.


(b) Let {v1, . . . , vn} be an orthonormal basis for V and {w1, . . . , wm} be an orthonormal basis for W. For each vj, there are scalars α1j, . . . , αmj such that

T(vj) = ∑_{i=1}^m αij wi.

Then the matrix of T is given by [T] = (αij) where i = 1, . . . , m, j = 1, . . . , n. Similarly, for each wi there are scalars β1i, . . . , βni such that

T*(wi) = ∑_{j=1}^n βji vj.

Then [T*] = (βji). We need to show that βji = αij, and this will imply that [T] = [T*]^t.

We see by orthonormality that αij = (Tvj, wi). Then αij = (vj, T*wi) by definition of the adjoint. But then by orthonormality, αij = βji. Hence [T] = [T*]^t.

(c) Take w ∈ ker(T ∗). Let z ∈ im(T ). Then z = Tv for some v ∈ V . Then

(z, w) = (Tv, w) = (v, T ∗w) = (v, 0) = 0.

Thus w is orthogonal to z. Since z ∈ im(T ) was arbitrary, this shows that w ∈ im(T )⊥.

Take w ∈ im(T)⊥. Then w is orthogonal to every member of the image of T. In particular, T(T*w) ∈ im(T). Thus

0 = (T (T ∗w), w) = (T ∗w, T ∗w) = ||T ∗w||2 =⇒ T ∗w = 0.

Thus w ∈ ker(T ∗).

We conclude that ker(T ∗) = im(T )⊥.

(d) Translating (c) into matrix form using (b), we see that for A ∈ Mn,m(C), we have

null(A^t) = col(A)⊥.

We see that col(A^t) = row(A). So we must show that dim(col(A)) = dim(col(A^t)).

Let {x1, . . . , xk} be a basis for the column space of A. Then A^t x1, . . . , A^t xk are in the column space of A^t. Let α1, . . . , αk ∈ C be such that

α1 A^t x1 + · · · + αk A^t xk = 0.

Then

A^t(α1x1 + · · · + αkxk) = 0

so α1x1 + · · · + αkxk ∈ null(A^t). But by (c), this implies that α1x1 + · · · + αkxk ∈ col(A)⊥. However, α1x1 + · · · + αkxk is also in col(A). Thus it is orthogonal to itself and so it is zero. But x1, . . . , xk are linearly independent, so α1 = · · · = αk = 0. Hence A^t x1, . . . , A^t xk are linearly independent in col(A^t) = row(A). Thus dim(row(A)) ≥ k = dim(col(A)).

Making the same argument but beginning with a basis for the column space of A^t shows that dim(row(A)) ≤ dim(col(A)). Thus dim(row(A)) = dim(col(A)), so the row rank and column rank of A are the same.


Problem S14.1.

(a) Find a real matrix A whose minimal polynomial is equal to

t⁴ + 1.

(b) Show that the real linear map determined by A has no non-trivial invariant subspace.

Solution.

(a) Put

A =
[ 0 1 0 0 ]
[ 0 0 1 0 ]
[ 0 0 0 1 ]
[ −1 0 0 0 ]

Then

A² =
[ 0 0 1 0 ]
[ 0 0 0 1 ]
[ −1 0 0 0 ]
[ 0 −1 0 0 ]

A³ =
[ 0 0 0 1 ]
[ −1 0 0 0 ]
[ 0 −1 0 0 ]
[ 0 0 −1 0 ]

A⁴ =
[ −1 0 0 0 ]
[ 0 −1 0 0 ]
[ 0 0 −1 0 ]
[ 0 0 0 −1 ]

Thus A⁴ + I = 0. Further, A³, A², A, I are linearly independent, so there is no third degree polynomial which annihilates A. Thus t⁴ + 1 is the minimal polynomial of A.

(b) This isn't true. For example, if A is the 8 × 8 block diagonal matrix

A =
[ C 0 ]
[ 0 C ]

where C is the 4 × 4 matrix from part (a), then A still has minimal polynomial t⁴ + 1 but Span(e1, e2, e3, e4) is invariant under A. Even if A has to be 4 × 4, we can only prove that A has no 1 or 3 dimensional invariant subspaces. There are cases where A has a 2 dimensional invariant subspace.

Problem S14.2. Suppose that S, T : V → V are linear where V is a finite dimensional vector space over R. Show that

dim(im(S)) + dim(im(T )) ≤ dim(im(S ◦ T )) + dim(V ).

Solution. Adding dim(ker(S)) and dim(ker(T)) to both sides and using the rank-nullity theorem, we see that the given inequality is equivalent to

dim(V) ≤ dim(im(S ◦ T)) + dim(ker(S)) + dim(ker(T)).


Using rank-nullity once more, dim(V) = dim(im(S ◦ T)) + dim(ker(S ◦ T)), so the original inequality is equivalent to

dim(ker(S ◦ T)) ≤ dim(ker(S)) + dim(ker(T)).

If we can prove this last inequality, we will have proven the first.

Consider, if x ∈ ker(T) then (S ◦ T)(x) = S(T(x)) = S(0) = 0, so x ∈ ker(S ◦ T). Thus ker(T) is a subspace of ker(S ◦ T). Let {x1, . . . , xk} be a basis of ker(T). Extend this to a basis {x1, . . . , xk, y1, . . . , yℓ} of ker(S ◦ T). Then

(S ◦ T)(yi) = S(T(yi)) = 0

for each i, so T(yi) ∈ ker(S). Suppose α1, . . . , αℓ ∈ R are such that

α1T(y1) + · · · + αℓT(yℓ) = 0.

Then

T(α1y1 + · · · + αℓyℓ) = 0

so α1y1 + · · · + αℓyℓ ∈ ker(T). Then there are β1, . . . , βk ∈ R such that

α1y1 + · · · + αℓyℓ = β1x1 + · · · + βkxk =⇒ α1y1 + · · · + αℓyℓ − β1x1 − · · · − βkxk = 0.

But these vectors form a basis for ker(S ◦ T), so they are linearly independent. Hence all coefficients are zero. Thus {T(y1), . . . , T(yℓ)} is a linearly independent set in ker(S), and so dim(ker(S)) ≥ ℓ. Then

dim(ker(S ◦ T)) = ℓ + k = ℓ + dim(ker(T)) ≤ dim(ker(S)) + dim(ker(T)).

The result follows.

Problem S14.3. Suppose that A, B ∈ Mn(C) satisfy AB − BA = A. Show that A is not invertible.

Solution. Suppose that A is invertible. Let λ1, . . . , λℓ ∈ C be the distinct eigenvalues of B, ordered such that

Re(λ1) ≤ Re(λ2) ≤ · · · ≤ Re(λℓ).

Multiplying by A⁻¹ on the right, we see

ABA⁻¹ − B = I =⇒ ABA⁻¹ = B + I.

Thus B is similar to B + I. However, B + I has λℓ + 1 as an eigenvalue and Re(λℓ + 1) > Re(λj) for all j = 1, . . . , ℓ. Thus B and B + I do not have the same eigenvalues and thus are not similar; a contradiction. Hence A is not invertible.

Problem S14.4. Suppose A, B ∈ Mn(C). Show that the characteristic polynomials of AB and BA are equal.


Solution. Suppose B is invertible. Then

AB = (B⁻¹B)AB = B⁻¹(BA)B.

Thus AB and BA are similar, so they have the same characteristic polynomial.

If B is not invertible, let δ > 0 be the smallest absolute value of the non-zero eigenvalues of B (if 0 is the only eigenvalue, any δ > 0 works). Then for 0 < t < δ, −t is not an eigenvalue of B, so Bt = B + tI is invertible. Then ABt has the same characteristic polynomial as BtA. However, the characteristic polynomial of a matrix is a continuous function of the matrix itself [this is because the determinant map is smooth]. Thus taking the limit as t → 0, we see that AB and BA have the same characteristic polynomial.

Problem S14.6. Show that if A ∈Mn(C) is normal then A∗ = P (A) for some P ∈ C[x].

Solution. Since A is normal, by the spectral theorem we can unitarily diagonalize it: A = UDU* where U is unitary and D is diagonal. Then A* = UD*U*, and for any polynomial Q we have Q(A) = UQ(D)U*. Thus we reduce the problem to finding P ∈ C[x] such that

UD*U* = UP(D)U* ⇐⇒ D* = P(D).

However, a polynomial acting on a diagonal matrix acts individually on each diagonal element. Let λ1, . . . , λℓ ∈ C be the distinct eigenvalues of A. Then these are the diagonal elements of D, and their conjugates λ̄1, . . . , λ̄ℓ are the diagonal elements of D*. Thus all we need is a polynomial which satisfies P(λj) = λ̄j for all j = 1, . . . , ℓ. Such a polynomial certainly exists and can be constructed using Lagrange interpolants. Hence A* can be expressed as a polynomial in A.

Problem F14.7. Among all solutions to the system

[ 1 1 1 1 ]       [ 2 ]
[ 2 3 5 7 ] x  =  [ 7 ]
[ −2 1 1 3 ]      [ −1 ]

find the solution with minimal length.

Solution. Row reducing the augmented matrix (R2 → R2 − 2R1, R3 → R3 + 2R1, then R3 → R3 − R2),

[ 1 1 1 1 | 2 ]    [ 1 1 1 1 | 2 ]    [ 1 1 1 1 | 2 ]
[ 2 3 5 7 | 7 ] ∼  [ 0 1 3 5 | 3 ] ∼  [ 0 1 3 5 | 3 ]
[ −2 1 1 3 | −1 ]  [ 0 3 3 5 | 3 ]    [ 0 2 0 0 | 0 ]

so x2 = 0, 3x3 + 5x4 = 3 and x1 = 2 − x2 − x3 − x4. Thus all solutions of the given system are of the form

x = (1, 0, 1, 0)^t + t ((2/3), 0, −(5/3), 1)^t

for some t ∈ R. Thus we need to minimize

f(t) = (1 + (2/3)t)² + (1 − (5/3)t)² + t²

among all t ∈ R. The minimum must annihilate the first derivative, so

(4/3)(1 + (2/3)t) − (10/3)(1 − (5/3)t) + 2t = 0 =⇒ (76/9)t = 2 =⇒ t = 9/38.

So the solution of minimal length is

x = (1/38) (44, 0, 23, 9)^t.
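
[Note: the minimal-length solution is exactly what the Moore–Penrose pseudoinverse produces, which gives an independent check; an illustrative NumPy sketch.]

    import numpy as np

    A = np.array([[1, 1, 1, 1],
                  [2, 3, 5, 7],
                  [-2, 1, 1, 3]], dtype=float)
    b = np.array([2.0, 7.0, -1.0])
    x = np.linalg.pinv(A) @ b
    print(x * 38)        # approximately [44, 0, 23, 9]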

Problem F14.8. Compute the eigenvalues of the n × n matrix

M =
[ k 1 1 · · · 1 ]
[ 1 k 1 · · · 1 ]
[ 1 1 k · · · 1 ]
[ ⋮ ⋮ ⋮ ⋱ ⋮ ]
[ 1 1 1 · · · k ]

Use the eigenvalues to compute det(M).

Solution. We notice that M − (k − 1)I is the all-ones matrix, which has rank 1; hence λ = k − 1 is an eigenvalue whose eigenspace has dimension n − 1, so its algebraic multiplicity is at least n − 1. Hence we only need one more eigenvalue. By inspection, if x = (1, 1, . . . , 1)^t then Mx = (k + (n − 1))x. Thus k + (n − 1) = (k − 1) + n is the remaining eigenvalue.

The determinant is the product of the eigenvalues, so det(M) = (k − 1)^{n−1}((k − 1) + n) = (k − 1)^n + n(k − 1)^{n−1}.

[Note: a less "inspective" approach might use induction, but I couldn't figure out how to do that.]

Problem F14.9. Suppose A ≠ 0 is an n × n complex matrix. Prove that there is a matrix B such that B and A + B have no eigenvalues in common.

Solution. Not sure.

Problem F14.10. What is the largest possible number of 1's an invertible n × n matrix with entries in {0, 1} can have? You must show this number is possible and that no larger number is possible.

Solution. The answer is n² − n + 1.


First we show that no larger number is possible. Suppose an n × n matrix with entries in {0, 1} has at least n² − n + 2 entries that are 1. Then there are at most n − 2 zeros in the matrix. But this means that at least two columns do not contain a zero. These two columns are equal (both all ones), hence linearly dependent, and thus the matrix is not invertible.

Consider the matrix

A =
[ 1 1 1 · · · 1 ]
[ 0 1 1 · · · 1 ]
[ 1 0 1 · · · 1 ]
[ ⋮ ⋱ ⋱ ⋱ ⋮ ]
[ 1 1 · · · 0 1 ]

That is, A is a matrix full of 1's but with zeros on the first subdiagonal. Then there are n² − n + 1 entries that are 1. Consider solving Ax = 0. Subtracting the second row from the first gives x1 = 0. Then subtracting the third row from the second gives x2 = 0. Likewise we find xk = 0, k = 1, . . . , n. Thus Ax = 0 has only the trivial solution, so A is invertible. Thus we can find an invertible matrix with n² − n + 1 entries which are 1.
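
[Note: the construction is easy to test for small n; an illustrative sketch, not part of the exam solution.]

    import numpy as np

    n = 6
    A = np.ones((n, n))
    for i in range(1, n):
        A[i, i - 1] = 0                      # zeros on the first subdiagonal
    assert A.sum() == n * n - n + 1          # number of 1's
    assert abs(np.linalg.det(A)) > 1e-9      # A is invertible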

Problem F14.11. Suppose a 4 × 4 integer matrix A has four distinct real eigenvalues λ1 > λ2 > λ3 > λ4. Prove that λ1² + λ2² + λ3² + λ4² ∈ Z.

Solution. Recall that the trace of a matrix is the sum of its eigenvalues. Since A has integer entries, so does A². Thus the trace of A² is an integer since the diagonal elements of A² are all integers. However, the eigenvalues of A² are λ1², λ2², λ3², λ4², so their sum must equal the trace of A² and hence λ1² + λ2² + λ3² + λ4² ∈ Z.

[Apparently the assumptions that the eigenvalues are real and distinct aren't necessary.]

Problem F14.12. Prove that the matrix A = (aij) given by aij = 1/(i + j − 1), i, j = 1, . . . , n is positive definite.

Solution. The matrix is the Gram matrix for the basis {1, x, . . . , x^{n−1}} of Pn−1[0, 1] with the inner product

(p, q) = ∫₀¹ p(x)q(x) dx, p, q ∈ Pn−1[0, 1],

since ∫₀¹ x^{i−1} x^{j−1} dx = 1/(i + j − 1). All Gram matrices of linearly independent sets are positive definite. To see this, let {v1, . . . , vn} be a linearly independent set in an inner product space and let A be the matrix given by

Aij = (vi, vj).

Then for any x ∈ Rn, x = (x1, . . . , xn)^t, we see

x^t A x = ∑_{i,j=1}^n xi xj (vi, vj) = (∑_{i=1}^n xi vi, ∑_{j=1}^n xj vj) = (v, v) ≥ 0

where v = ∑_{i=1}^n xi vi. Further, there is equality iff v = 0, which happens iff x = 0 since {v1, . . . , vn} is a linearly independent set. Thus A is positive definite.


Problem S15.7. Let

f(x, y, z) = 9x² + 6y² + 6z² + 12xy − 10xz − 2yz.

Does there exist a point (x, y, z) such that f(x, y, z) < 0?

Solution. I assume they mean (x, y, z) ∈ R³. In this case the answer is no. Observe that f is the quadratic form induced by the symmetric matrix

A =
[ 9 6 −5 ]
[ 6 6 −1 ]
[ −5 −1 6 ]

that is, f(x, y, z) = v^t A v where v = (x, y, z)^t. So the problem is really to show that A is positive definite. By Sylvester's criterion, it suffices to check that the leading principal minors of A are positive:

det(9) = 9 > 0,  det [ 9 6 ; 6 6 ] = 54 − 36 = 18 > 0,

and

det(A) = 9(35) − 6(31) − 5(24) = 315 − 186 − 120 = 9 > 0.

Thus A is positive definite, so f(x, y, z) = v^t A v ≥ 0 for all (x, y, z) ∈ R³, with equality only at the origin. Hence there is no point (x, y, z) such that f(x, y, z) < 0.

[An alternative intended approach is probably to factor f directly into a sum of squares.]
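
[Note: a numerical cross-check — the eigenvalues of A are indeed all positive; an illustrative sketch, not part of the exam solution.]

    import numpy as np

    A = np.array([[9.0, 6.0, -5.0],
                  [6.0, 6.0, -1.0],
                  [-5.0, -1.0, 6.0]])
    print(np.linalg.eigvalsh(A))     # three positive eigenvalues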

Problem S15.8. Prove or disprove the following claims:

(a) Matrices with determinant 1 are dense in the set of all 3× 3 real matrices.

(b) Matrices with distinct eigenvalues are dense in the set of 3× 3 complex matrices.

Solution.

(a) Matrices with determinant 1 are not dense in the set of 3 × 3 real matrices. The determinant map is continuous, so if the determinant were 1 on a dense set of matrices, then by continuity every matrix would have determinant 1, which is clearly false.

(b) Matrices with distinct eigenvalues are dense in the set of 3 × 3 complex matrices. We prove this for upper triangular matrices first. Let ε > 0 and let T be an upper triangular 3 × 3 matrix; its eigenvalues are its diagonal entries. We show there is a matrix with distinct eigenvalues within ε of T (in the Frobenius norm). If T already has distinct eigenvalues, we're done. Otherwise, there are two cases to consider.

Case 1: All diagonal entries of T are the same value t. In this case, let Dε = diag(ε/√4, ε/√6, ε/√12). Then T + Dε has distinct eigenvalues and

||T − (T + Dε)|| = ||Dε|| = √(ε²/4 + ε²/6 + ε²/12) = √(ε²/2) = ε/√2 < ε.

Case 2: Two of the diagonal entries of T are the same while the other is different. Call the repeated value a and the other b, and let δ = |b − a|. Set Dε = diag(t, −t, t) where 0 < t < min{δ/3, ε/√5}. Then T + Dε has distinct eigenvalues and

||T − (T + Dε)|| = ||Dε|| = √(3t²) < √(3ε²/5) < ε.

Thus the claim holds for upper triangular matrices. However, every matrix M is unitarily similar to an upper triangular matrix by the Schur decomposition (see W02.11): M = QTQ* with Q unitary. Then M + QDεQ* = Q(T + Dε)Q* has distinct eigenvalues, and since the Frobenius norm is unitarily invariant, ||M − (M + QDεQ*)|| = ||Dε|| < ε. Hence the claim holds for all matrices.

Problem S15.9. Let V = Rn and let U1, U2, W1, W2 be subspaces of V of dimension d such that dim(U1 ∩ W1) = dim(U2 ∩ W2) = ℓ, ℓ ≤ d ≤ n. Prove that there is a linear operator T : V → V such that T(U1) = U2 and T(W1) = W2.

Solution. Not sure.

Problem 15.10. Let

M = [ A B ; C D ]  and  M⁻¹ = [ P Q ; R S ]

where A, B, C, D, P, Q, R, S are all k × k matrices. Show that

det(M) · det(S) = det(A).

Solution. Not sure.

Problem S15.11. Two matrices A, B are called commuting if AB = BA. The order of a matrix A is defined to be the smallest non-negative integer k such that A^k = I; if no such k exists, the matrix is said to have infinite order. Prove that there exist ten distinct real 2 × 2 matrices which are pairwise commuting and have the same finite order.

Solution. Matrices of the form

R(θ) = [ cos θ  − sin θ ; sin θ  cos θ ]

always commute with one another and have the property that

R(θ)^n = [ cos(nθ)  − sin(nθ) ; sin(nθ)  cos(nθ) ].

Choosing θ = 2πk/11, k = 1, 2, . . . , 10 gives ten distinct matrices, all of order 11 (since gcd(k, 11) = 1), which pairwise commute.

Note: the reason you should think of these matrices is because, together with the identity, they form a group which is isomorphic to the subgroup of D11 (the symmetries of the regular 11-gon) consisting of rotations.

Problem S15.12. Let

M = [ 3 5 ; 1 −1 ].

(a) Compute exp(M).

(b) Is there a real matrix A such that M = exp(A)?

Solution.

(a) To compute exp(M) we first find a Jordan form for M. The characteristic polynomial of M is

pM(t) = (3 − t)(−1 − t) − 5 = t² − 2t − 8 = (t − 4)(t + 2).

Thus M has distinct eigenvalues and so it is diagonalizable. An eigenvector corresponding to λ1 = 4 is given by v1 = (5, 1)^t and an eigenvector corresponding to λ2 = −2 is v2 = (1, −1)^t. Putting P = [v1 v2], we see

P⁻¹MP = −(1/6)[ −1 −1 ; −1 5 ][ 3 5 ; 1 −1 ][ 5 1 ; 1 −1 ]
      = −(1/6)[ −1 −1 ; −1 5 ][ 20 −2 ; 4 2 ]
      = −(1/6)[ −24 0 ; 0 12 ]
      = [ 4 0 ; 0 −2 ].

So M = PDP⁻¹ where D = diag(4, −2). Then

exp(M) = P exp(D) P⁻¹ = −(1/6)[ 5 1 ; 1 −1 ][ e⁴ 0 ; 0 e⁻² ][ −1 −1 ; −1 5 ]
       = −(1/6)[ 5e⁴ e⁻² ; e⁴ −e⁻² ][ −1 −1 ; −1 5 ]
       = −(1/6)[ −5e⁴ − e⁻²  −5e⁴ + 5e⁻² ; −e⁴ + e⁻²  −e⁴ − 5e⁻² ]
       = (1/6)[ 5e⁴ + e⁻²  5e⁴ − 5e⁻² ; e⁴ − e⁻²  e⁴ + 5e⁻² ].

(b) No. The eigenvalues of exp(A) are e^{λ1}, e^{λ2} where λ1, λ2 are the eigenvalues of A.


Suppose exp(A) = M. If A has real eigenvalues λ1, λ2, then (wlog) e^{λ1} = 4, e^{λ2} = −2, but this is impossible for λ2 ∈ R.

If A has complex eigenvalues then they must be a conjugate pair λ, λ̄. But then e^λ and e^{λ̄} form a conjugate pair, which is impossible since e^λ = 4, e^{λ̄} = −2 (or vice versa).

Problem F15.7. Let A, B be two 4 × 5 matrices of rank 3. Find all possible values for the rank of C = A^tB. Specifically, you must find examples for any values possible and prove that no other values are possible.

Solution. The possible values for rank(C) are 2 and 3. Putting

A =
[ 1 0 0 0 0 ]
[ 0 1 0 0 0 ]
[ 0 0 1 0 0 ]
[ 0 0 0 0 0 ]

B =
[ 1 0 0 0 0 ]
[ 0 0 0 0 0 ]
[ 0 0 0 1 0 ]
[ 0 0 0 0 1 ]

gives rank(C) = 2. Putting

A = B =
[ 1 0 0 0 0 ]
[ 0 1 0 0 0 ]
[ 0 0 1 0 0 ]
[ 0 0 0 0 0 ]

gives rank(C) = 3.

Let {v1, v2, v3} ⊂ C⁴ be a basis for im(B) (since rank(B) = 3). Take y ∈ im(C). Then there is v ∈ C⁵ such that Cv = y. Then A^t(Bv) = y. But Bv ∈ im(B), so there are α1, α2, α3 ∈ C such that Bv = α1v1 + α2v2 + α3v3. Then

y = α1A^tv1 + α2A^tv2 + α3A^tv3.

Since y ∈ im(C) was arbitrary, this shows that {A^tv1, A^tv2, A^tv3} is a spanning set for im(C). Thus rank(C) ≤ 3 since the rank is less than or equal to the number of elements in any spanning set.

Consider, since rank(A) = 3, we have rank(A^t) = 3. Then by the rank-nullity theorem, dim(ker(A^t)) = 1 since A^t acts on C⁴. Then since v1, v2, v3 are linearly independent, there is at most one i = 1, 2, 3 such that A^tvi = 0. Suppose there is one; wlog, A^tv3 = 0. Then {v3} must form a basis for ker(A^t) since the kernel has dimension 1. Let β1, β2 ∈ C be such that

β1A^tv1 + β2A^tv2 = 0.

Then A^t(β1v1 + β2v2) = 0 so β1v1 + β2v2 ∈ ker(A^t). Then there is β3 ∈ C such that

β1v1 + β2v2 = β3v3.

But these vectors form a basis for im(B), so this implies (in particular) that β1 = β2 = 0. Thus A^tv1 and A^tv2 are linearly independent. However, they are in im(C), so this implies rank(C) ≥ 2 since the rank is greater than or equal to the number of elements in any linearly independent subset.

The case where there is no i = 1, 2, 3 such that A^tvi = 0 is similar.

Problem F15.8. Find M⁻² where

M =
[ 2 3 2 1 ]
[ 3 6 4 2 ]
[ 4 8 6 3 ]
[ 2 4 3 1 ]

Solution. I'm not sure there is a "clever" way to do this. Just perform elementary row operations on M until you have the identity, and then perform those same row operations on the identity to find M⁻¹. Doing this, we find

M⁻¹ =
[ 2 −1 0 0 ]
[ −1 2 −1 0 ]
[ 0 −2 1 1 ]
[ 0 0 1 −2 ]

Then squaring, we get

M⁻² =
[ 5 −4 1 0 ]
[ −4 7 −3 −1 ]
[ 2 −6 4 −1 ]
[ 0 −2 −1 5 ]

Problem F15.9. Let A be an n × n real matrix such that A^t = −A. Prove that det(A) ≥ 0.

Solution. If A is not invertible, then det(A) = 0, so the claim is trivially satisfied.

Suppose A is invertible. Then A does not have zero as an eigenvalue. Let λ ∈ C \ {0} be an eigenvalue of A with corresponding eigenvector 0 ≠ v ∈ Cn. Then

−λ(v, v) = (−λv, v) = (−Av, v) = (A^tv, v) = (v, Av) = (v, λv) = λ̄(v, v).

Then since (v, v) > 0, we have −λ = λ̄. But this yields Re(λ) = 0. Thus all eigenvalues of A are purely imaginary. Also, the non-real eigenvalues of A come in conjugate pairs since A is real. Thus the eigenvalues of A can be listed iµ1, −iµ1, . . . , iµℓ, −iµℓ for some ℓ ∈ N, µ1, . . . , µℓ ∈ R \ {0}. The determinant of A is the product of the eigenvalues, so

det(A) = (iµ1)(−iµ1) · · · (iµℓ)(−iµℓ) = µ1² · · · µℓ² ≥ 0.

[Note: interestingly enough, this actually shows that if n is odd, then 0 must be an eigenvalue of A. Thus if A is an n × n skew-symmetric invertible matrix, then n is even.]

Problem F15.10. Let F, G : Rn → Rn be linear operators. Recall, we define the operator exponential by

exp(F) = ∑_{k=0}^∞ (1/k!) F^k.


(a) Prove that when F and G commute, we have

exp(F +G) = exp(F ) exp(G).

(b) Find two non-commuting linear operators such that this equality fails.

Solution.

(a) If F, G commute, then the binomial theorem holds for F, G. That is,

(F + G)^k = ∑_{ℓ=0}^k (k choose ℓ) F^ℓ G^{k−ℓ}, k = 0, 1, . . .

where by definition F⁰ = I = G⁰, I being the identity operator on Rn. Recall the Cauchy product of two infinite series:

(∑_{k=0}^∞ a_k)(∑_{ℓ=0}^∞ b_ℓ) = ∑_{k=0}^∞ ∑_{ℓ=0}^k a_ℓ b_{k−ℓ}

when all series converge absolutely. Using these we have

exp(F) exp(G) = (∑_{k=0}^∞ (1/k!) F^k)(∑_{ℓ=0}^∞ (1/ℓ!) G^ℓ)
= ∑_{k=0}^∞ ∑_{ℓ=0}^k ((1/ℓ!) F^ℓ)((1/(k − ℓ)!) G^{k−ℓ})
= ∑_{k=0}^∞ (1/k!) ∑_{ℓ=0}^k (k choose ℓ) F^ℓ G^{k−ℓ}
= ∑_{k=0}^∞ (1/k!) (F + G)^k = exp(F + G).

(b) Let

A = [ 1 1 ; 0 1 ],  B = [ 1 0 ; 1 1 ].

We see

exp(A) = exp(I) exp([ 0 1 ; 0 0 ]) = [ e 0 ; 0 e ][ 1 1 ; 0 1 ] = [ e e ; 0 e ]

and likewise

exp(B) = [ e 0 ; e e ]

so

exp(A) exp(B) = [ e e ; 0 e ][ e 0 ; e e ] = [ 2e² e² ; e² e² ].


However,

exp(A + B) = exp([ 2 1 ; 1 2 ]) = exp(2I) exp(J),

where J = [ 0 1 ; 1 0 ] so that J² = I. Then

exp(J) = ∑_{k=0}^∞ (1/(2k)!) I + ∑_{k=0}^∞ (1/(2k+1)!) J = [ cosh(1) sinh(1) ; sinh(1) cosh(1) ]

so

exp(A + B) = [ e² cosh(1)  e² sinh(1) ; e² sinh(1)  e² cosh(1) ] ≠ exp(A) exp(B).

Problem F15.11. Let T : V → V be a linear operator such that T⁶ = 0 and T⁵ ≠ 0. Suppose V ≅ R⁶. Prove there is no linear operator S : V → V such that S² = T. Does the answer change if V ≅ R¹²?

Solution. Suppose that V ≅ R⁶ and that there is a linear operator S : V → V such that T = S². Then 0 = T⁶ = S¹² and 0 ≠ T⁵ = S¹⁰. Let λ ∈ C be an eigenvalue of S. Then λ¹² is an eigenvalue of S¹² = 0, so λ¹² = 0 and so λ = 0. Thus all eigenvalues of S are zero. Then the characteristic polynomial of S is pS(x) = x⁶ since S acts on a 6 dimensional space. However, the Cayley-Hamilton theorem states that pS(S) = 0, so S⁶ = 0 =⇒ S¹⁰ = 0; a contradiction. Thus no such S exists.

Yes, the answer does change if V ≅ R¹². Let V = R¹², let S be the 12 × 12 matrix with ones on the first superdiagonal and zeroes elsewhere (a single nilpotent Jordan block), and let T = S², the matrix with ones on the second superdiagonal and zeroes elsewhere. Then T⁵ = S¹⁰ ≠ 0, T⁶ = S¹² = 0 and S² = T.

Problem F15.12. Prove that the n × n matrix M is positive definite:

M =
[ 2 1 1 · · · 1 ]
[ 1 3 1 · · · 1 ]
[ 1 1 4 · · · 1 ]
[ ⋮ ⋮ ⋮ ⋱ ⋮ ]
[ 1 1 1 · · · n+1 ]


Solution. Let x = (x1, . . . , xn) ∈ Cn. Write M = D + J, where D = diag(1, 2, . . . , n) and J is the n × n matrix of all ones; indeed, the diagonal entries of M are i + 1 = i + 1·1 and all off-diagonal entries are 1. Then

x*Jx = ∑_{i,j=1}^n x̄i xj = |x1 + x2 + · · · + xn|²,

so

x*Mx = x*Dx + x*Jx = ∑_{i=1}^n i |xi|² + |x1 + x2 + · · · + xn|²

from which it is clear that x*Mx ≥ 0, with equality only when x1 = · · · = xn = 0. Hence M is positive definite.

[Note: since M is real, it actually suffices to check x ∈ Rn. If v ∈ Cn, then v = x + iy, x, y ∈ Rn and v*Mv = x^tMx + y^tMy. Thus proving that v*Mv ≥ 0 is reduced to showing that x^tMx, y^tMy ≥ 0.]
